# Pride of Ownership in AI Era

**Source:** [https://www.youtube.com/watch?v=SXomDjPP4Xg](https://www.youtube.com/watch?v=SXomDjPP4Xg)
**Duration:** 00:07:38

## Key Points

- The core of "pride of ownership" hinges on three timeless questions, whether in school, work, or property transactions: did you author it, do you truly understand it and its provenance, and can you take responsibility for its outcomes?
- Even though AI introduces new tools, these underlying criteria for accountability and integrity do not change, and expecting them to shift leads to conflict in both public and private institutions.
- Disputes about AI usage often stem from perceived gaps in answering those three questions, prompting groups to clamp down when they feel ownership, authorship, or provenance are unclear.
- By deliberately ensuring we can affirm authorship, comprehension, and outcome responsibility, we can integrate AI responsibly and maintain productive, trust-based collaborations.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=0s) **Redefining Ownership in an AI Era** - The speaker explains how traditional questions of authorship, provenance, and outcome responsibility are being reshaped by AI, and why cultural shifts are needed.
- [00:03:32](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=212s) **AI-Enhanced Knowledge and Provenance** - How AI can be used to deepen product and domain expertise rather than replace it, with emphasis on transparently documenting AI contributions for personal conviction and legal accountability.
- [00:06:56](https://www.youtube.com/watch?v=SXomDjPP4Xg&t=416s) **Responsibility and Provenance in AI Workflows** - The speaker argues that preserving domain expertise and transparent artifact provenance should be an agreed-upon standard, guiding responsible use of AI tools like ChatGPT or Copilot.

## Full Transcript
We're talking about pride of ownership today. And I know you might think, "AI? Nate is an AI guy." I promise you this gets back to AI. At the end of the day, most of the conflicts we see in our public institutions and in the private workplace are about how we handle pride of ownership in an AI world. And I want to take you through a brief tour of the before times so you get a sense of how much has changed and how our underlying frameworks have not shifted, because I think that gives us the ability to show where we need to advance our work culture and our educational culture to truly make AI a useful tool.

We begin before AI. We had three implicit questions that we asked every time, at work and at school, if we took pride of ownership in something. First, did you author it? Second, do you truly know the material? Can you show me the chain of provenance for this material? That happens with property and transactions, but it also happens with work. Can you show me the reviews it went through? It's a conversation I've had about mini docks: professors have asked how many books I read. They're implicitly asking about the chain of provenance of the idea. So, show me that you can keep the idea and have a sense of integrity there. And finally, can you show that you have ownership of outcomes? If you're in class, it's: do you have ownership of the grade? If you're at work, it's: do you have ownership of your KPIs?

The point is, those questions are not new. We have been asking those questions since Ea-Nasir's customer complained about his copper shipments because someone was not upholding their end of the agreement and not taking pride in their work. And yes, I am making a very nerdy reference, and I hope someone appreciates it. The point here is that those underlying components of pride of ownership will not change in the age of AI. They won't.
Stop expecting them to. When you get into a fight, whether it's at school or at work, whatever your context, about whether AI is appropriate to use, it almost always comes down to pride of ownership. You need to be able to answer all three of those questions affirmatively in order to have a positive communal AI productivity experience. And I know that sounds weird, like it's very hippie to say it that way, but fundamentally, when we use AI one-to-one, where I am talking to the AI, if I am doing so in the context of group work, the group is affected and demands to know what's going on, implicitly or explicitly. And if they feel like they don't, the instinct of a lot of groups is to clamp down. So if you're in an environment like that, you have to recognize those are the three questions people are asking: Do you know your work? Can you keep your work, and do you understand the chain of provenance? And can you be responsible for the outcomes, can you hang your hat on what happens afterward? Those are the things that work is expecting. And I believe we can answer yes on those in the age of AI.
It's not impossible. You can do it. You can, in fact, use AI to prompt and ask yourself questions of the data that you have at your disposal, so you know your product area, so you know your domain, so you know your educational subject matter better. You can actually increase your product knowledge. "Yes" is a spectrum, and you can be more yes on product knowledge or domain knowledge if you use AI well. And so, in a sense, part of the interesting thing about AI is that people tend to assume you can use AI as a cheat sheet and skip the product-knowledge part. But it doesn't have to be that way. It can be the reverse. Similarly, with provenance, you can be transparent about where you are using, evolving, and thinking about your arguments. Sometimes that's as simple as saying, "ChatGPT and I have been working on this together."
Sometimes that's as simple as saying: this is an argument; I evolved it after this conversation; then I processed it through ChatGPT, and I used this prompt and came back. And that more formal sense might be appropriate if legal might get involved. There are cases now where documents with legal implications are being created inside workspaces, and you have to have some provenance, and that may include the prompts.
But even if you're not that formal, it is still appropriate to understand for yourself how you gained conviction in a space. And I think that's the heart of it. So maybe it's not about logging all your prompts and this and that, although maybe that's a best practice.
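The prompt-logging practice mentioned in passing above could be sketched minimally like this. The JSONL format and the field names (timestamp, model, prompt, response) are illustrative assumptions on my part, not something the speaker prescribes; the point is only that an append-only record gives a workspace artifact a chain of provenance.

```python
import json
from datetime import datetime, timezone

def log_prompt(log_path, prompt, response, model="unspecified"):
    """Append one provenance record to a JSONL log file.

    The schema here is an illustrative assumption; adapt the fields
    to whatever your team agrees to keep as its chain of provenance.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    # One JSON object per line keeps the log append-only and greppable.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A reviewer (or "a stranger fluent in the art") can then replay how a document's arguments evolved by reading the log alongside the final artifact.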
It's about whether your workspace and the artifacts you're leaving behind are in a position where a stranger who is fluent in the art could come in, look at the workspace, look at the artifacts, and evolve to a place of similarly high conviction about your angle of approach on your work. Does that make sense? Could they look at what you have left behind and gain a sense of conviction, because you have left enough behind that there's a bit of a chain of provenance on your thinking? That's what keeping means to me. That's keeping your thinking.
Finally, on outcomes: I think that has changed the least in the age of AI. At the end of the day, you are still accountable for the KPIs, and there is still a gut-level accountability to say, "It's on me." The grade for the essay? It's on me, if you're in education. The performance of the team? It's on me, if you're a manager. That level of commitment is no different than it was before, but people sometimes think you're going to skip it, because they think that people will depend on AI and then blame AI. I've got news for you: you just cannot blame the machine. It's the old joke from the IBM slide that a machine can't make managerial decisions. Well, joking aside, it's still kind of true from a human perspective. Humans expect humans to own and be accountable. So you've got to be accountable. And that's as old as taking accountability for your copper imports.
And I am just going full nerd here; I hope someone appreciates it. So there you go. It's about making sure that you are responsible for knowing your domain, keeping the provenance of your work in an artifact form where people can understand how you evolved conviction, and being responsible for your outcomes. That's true before AI, and that's true with AI. And I think if we understood that and talked about it more, we would have fewer arguments about when and where to use ChatGPT or Copilot, because those arguments would be grounded in what's really going on, which is an agreement in how we work. We need to reforge those agreements in how we work. And I think that framework is a way to think about it. Cheers.