Top ChatGPT Mistakes Killing Productivity
Key Points
- Keeping a conversation “single‑threaded” (continuously adding new prompts without resetting) fills the AI’s context window and progressively degrades its intelligence.
- The more irrelevant or contradictory information stored in the context, the lower the AI’s performance, so a leaner context yields smarter responses.
- When the AI starts repeating mistakes, pause, request a concise summary of the crucial points, and then begin a fresh conversation using that summary as the new context.
- Changing topics (e.g., from cat clothing to Bitcoin trading) should trigger a new conversation rather than continuing the old thread, preventing context overload and keeping the AI focused.
Sections
- Avoid Single‑Threaded ChatGPT Context Overload - The speaker explains that lingering in one conversation thread fills the AI’s context window, reducing its intelligence, and recommends starting fresh chats whenever the topic changes.
- Boost AI Interaction with Dictation - The speaker advocates using dictation to speak to AI—since speaking outpaces typing and reading outpaces listening—to increase communication throughput, add context, and maximize efficiency, while also promoting a free 30‑day AI insight series and consulting offers.
- Creating Voice‑Enabled AI Personas - A walkthrough of setting up a custom therapist persona in ChatGPT’s mobile advanced voice mode and using AI‑generated prompts to avoid writing them from scratch.
- Prompt Tweaking Yields Divergent Results - The speaker shows that minor prompt adjustments or multiple runs of the same probabilistic model can dramatically change AI output trajectories, outlining a four‑step method to uncover more desirable results.
- Embrace AI Prompting Abundance - The speaker urges viewers to stop crafting prompts manually, adopt an abundance mindset toward AI-generated content, and take advantage of free AI insight resources and consulting offers.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=60b8Ucy2Lhs](https://www.youtube.com/watch?v=60b8Ucy2Lhs)
**Duration:** 00:17:18

## Sections

- [00:00:00](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=0s) **Avoid Single‑Threaded ChatGPT Context Overload**
- [00:05:05](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=305s) **Boost AI Interaction with Dictation**
- [00:09:09](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=549s) **Creating Voice‑Enabled AI Personas**
- [00:12:47](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=767s) **Prompt Tweaking Yields Divergent Results**
- [00:16:30](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=990s) **Embrace AI Prompting Abundance**

## Full Transcript
You're probably making at least three of these ChatGPT mistakes right now. After hundreds of conversations across many different industries, I've ranked the worst offenders. These mistakes are likely quietly killing your productivity and taking hours away from you every single week. So let's fix that, starting with number eight.

All right. There are many different mistakes that people make with ChatGPT. I've pulled out the eight worst offenders and ranked them in reverse order, so this is number eight and we'll work our way up to number one. The first one here is single-threaded conversations. What does that mean? It means that somebody is carrying on in one conversation when they shouldn't. You should start a fresh conversation for two reasons: one, you've changed the topic, which we'll get to in a moment, or two, your context is getting bloated from running the thread on too long. So what do I mean by context? Context is basically the memory of the AI: how much information you can stuff into the AI's head without its intelligence degrading.
We'll start with that one first, because that's what this graph represents. The vertical axis is the intelligence of the AI, so higher is better. The horizontal axis is the context window, specifically how full it is: on the right-hand side it's really full, on the left-hand side it's not full at all. And you can see the connection here: when there's less information in the context window, which is basically the AI's head, the AI is going to be more intelligent.
But the more information we shove into the AI's head that's irrelevant, contradictory, or otherwise noisy, the more the intelligence is going to degrade over time. The way to avoid this: when you start to see the AI making recurring errors or consistently going in the wrong direction, stop and pause. Ask the AI specifically to summarize the part of the conversation you care about, so be targeted in what it summarizes, and have it condense that into a one-pager. Once you have that one-pager, start a fresh conversation, which reverts the AI to a much less full context window, and put the summary in there.
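This summarize-and-restart loop can be sketched as plain message lists. This is a hedged illustration: the helper names and prompt wording below are my own, not a ChatGPT feature, and in the ChatGPT UI you would do each step by hand.

```python
# Sketch of the summarize-then-restart workflow, modeled as chat message lists.
# Function names and prompt wording are illustrative assumptions, not an
# official API; in the ChatGPT UI these steps are manual.

SUMMARIZE_PROMPT = (
    "Summarize only the parts of this conversation about {topic} "
    "into a one-page brief I can paste into a new chat as context."
)

def build_summary_request(history: list, topic: str) -> list:
    """Ask the old, bloated thread for a targeted one-pager."""
    return history + [{"role": "user",
                       "content": SUMMARIZE_PROMPT.format(topic=topic)}]

def start_fresh_thread(one_pager: str) -> list:
    """Open a new conversation whose only context is the distilled summary."""
    return [{"role": "user",
             "content": "Context carried over from a previous chat:\n" + one_pager}]

# A long-running thread (40 turns) versus the restarted one (1 turn):
old_thread = [{"role": "user", "content": f"turn {i}"} for i in range(40)]
request = build_summary_request(old_thread, "the report outline")
fresh = start_fresh_thread("One-page brief: report outline, key decisions, ...")
print(len(request), len(fresh))  # the fresh thread carries far less context
```

The point of the sketch is the size difference: the restarted thread starts from one short message instead of dozens of stale turns.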
Then you start the conversation again. By doing this, you increase the chances of the AI achieving the task you want. That's the first part: context. The next part is new conversations. Let's say this green box here is one really big conversation you're having with the AI. The first portion represents a conversation between you and the AI about cat clothing and the different types of clothing you can put on your cats. But then you change the conversation, and down here it becomes how to trade Bitcoin effectively, so tips and tricks on buying and selling Bitcoin. These are completely separate topics, but I see this time and time again: somebody will hold both conversations with the AI in the same thread, changing the topic back and forth. By doing this you're confusing the AI, because everything said earlier is included in the context, meaning in the AI's head. It's going to be thinking about cat clothing, Bitcoin, and anything else you talk about in that thread.
So it's important to state: if you're going to have a conversation on a separate topic, create a completely separate conversation and talk only about that topic in that thread. This will give you a higher-quality response for the task you're giving the AI. That's our first culprit, single-threaded conversations.

The next issue is over-relying on memory. Within ChatGPT there's a specific feature called memory, and I'll actually show you what it looks like. If I go into ChatGPT, select my face, go to settings, then personalization, and scroll down, you'll see a section called memory. What memory does is remember different aspects of your preferences when interacting with the AI. This is useful, but only for a small subset of things. What I see people doing is relying too much on memory and not enough on the dedicated tools for improving the AI's output on a specific task. We have memory, and we have GPTs and projects. The intention of memory is to remember general items about you: your favorite color, your preferences around writing style, where you live, maybe what you do for work, and the general things you prefer when interacting with the AI. But for a very specific task, like writing a certain type of report, doing a certain type of analysis on a data set on a recurring basis, or doing a certain type of research on a recurring basis, so anything that's repetitive and somewhat deep in nature, you want to dedicate that to a GPT or a project. By doing this, you increase the likelihood that the AI achieves the given task, because you've given it a very thorough system prompt for that specific GPT or project, and you've also given it a series of files it can reference to know what good looks like when performing the task. So don't rely solely on memory for specific, repetitive tasks. Only rely on memory for generalized things the AI can then use across conversations, regardless of the task. Don't over-rely on memory; that's the second mistake.

Oh hey, quick pause in your regular programming. This video is brought to you by me. Two quick things.
First, below is the 30-day AI insight series, completely free. You'll get 30 insights in your inbox on how you can apply AI to your business and your work.
The second thing is, if you'd like to work with me, I have a series of offerings below to see if there's a good fit between the two of us. With that said, let's get back into the video.

The third mistake is typing, not talking. This animation depicts the different ways we can have throughput with AI: the first is typing, the second is speaking, the third is reading. You can see that reading is the fastest in words per minute, speaking is the second fastest, and typing is the least fast. What we want to do is increase our ability to communicate with AI faster, because if we remove the friction of communicating with AI, we'll provide more context to it on a recurring basis. So my advice to anybody is to use dictation as much as possible. By doing this, you speak to the AI, so this is the speaking process.
You speak to the AI, that speech is converted to text that you feed to the AI, and then it writes back to you. We can speak faster than we type, and we can read, or at least skim, faster than we listen.
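The throughput argument can be made concrete with some commonly cited ballpark rates. These exact numbers are assumptions for illustration; they vary a lot from person to person.

```python
# Rough, commonly cited words-per-minute rates; treat these as ballpark
# assumptions, not measurements from the video.
TYPING_WPM = 40      # average typist
SPEAKING_WPM = 150   # conversational dictation
READING_WPM = 250    # silent reading / skimming

# Sending 300 words of context and reading a 600-word reply, in minutes:
dictate_and_read = 300 / SPEAKING_WPM + 600 / READING_WPM
type_and_read = 300 / TYPING_WPM + 600 / READING_WPM

print(round(dictate_and_read, 1), round(type_and_read, 1))  # prints 4.4 9.9
```

Under these assumed rates, dictating the same context takes less than half the time of typing it, which is the whole throughput point.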
By combining these two, we increase the throughput between ourselves and the AI, giving it more context and getting more back from it. So use dictation as much as possible instead of typing all the time.

Our next mistake is simply treating ChatGPT as a replacement for Google. What I see people do is ask Google-like questions in chat, and that's it, nothing else. That's a big mistake. Don't do that.
The reason is that we've been trained to ask Google-like questions over the decades that we've had the internet and access to Google-style search. Here you can see a basic question like "best laptop of 2025." This is a Google-like question because we're asking with key terms; we're doing a keyword search on a database.
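The gap between a keyword query and a context-rich research prompt can be sketched like this. The template and field names are illustrative assumptions; they mirror the context elements discussed in this section.

```python
# A Google-style keyword query versus a context-rich research prompt.
# The template wording and field names are illustrative assumptions.
keyword_query = "best laptop of 2025"

RESEARCH_TEMPLATE = """I'm researching: {goal}
Preferred sources: {sources}
Outcome I expect: {outcome}
Insights I'm looking for: {insights}"""

def research_prompt(goal: str, sources: str, outcome: str, insights: str) -> str:
    """Wrap the four context elements into one dictation-friendly prompt."""
    return RESEARCH_TEMPLATE.format(goal=goal, sources=sources,
                                    outcome=outcome, insights=insights)

prompt = research_prompt(
    goal="a laptop for video editing under $1,500",
    sources="recent hands-on reviews, not spec sheets",
    outcome="a shortlist of three models with trade-offs",
    insights="battery life vs. render speed in real workloads",
)
print(len(keyword_query.split()), len(prompt.split()))  # far more context per ask
```

Dictating the filled-in template takes seconds but hands the model many times the context of the four-word keyword query.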
But when you're talking to AI, maybe you're using it for research. If you're using it for research purposes, you need to provide more context so it can tailor its response and give you the exact answer you need. To do that, use dictation to provide context on what you're trying to achieve, the sources you care about getting the data from, any specific outcomes you expect from the research, and/or the insights you're looking for. And a bonus tip when you're doing research, to avoid the Google-style search: use GPT-5 extended thinking with web search enabled, and I'll show you exactly how to do that in a second. Let's go look at that. If I go to ChatGPT and open a chat here, I go to this dropdown. If you pay for Plus, which is $20 a month, you'll have access to Auto, Instant, and Thinking. Choose Thinking, because it's a more intelligent model. You'll get a blue button that pops up here; select its dropdown and you'll have two options. If you have Plus, those are Standard and Extended. I'm going to recommend Extended, because there's more thinking involved. Then click this plus button here.
Go to "more" and then "web search." What you have now is web search enabled and extended thinking enabled. Use the dictation tool to provide as much context as possible, so the AI can go off and find all the information you need for a given task or question, and it'll come back with a very high-quality response. The moral of the story: don't use ChatGPT like Google. Don't use basic keyword searches; instead, provide the context it needs so the high-quality intelligence behind it can get you better responses.

Our next mistake is underusing advanced voice mode, or not using it at all. Oftentimes I see that people don't even know this exists, and you can use it for a variety of really useful cases. Some that I've listed here: practicing for sales. Maybe you're preparing for sales and want to train up your sales team. You can set up a given persona inside ChatGPT, say a skeptical CFO, and have your sales team negotiate with that persona. They can practice the role-play and then go into the field and do it for real. Another is prepping for presentations, or maybe prepping for an interview. Give it a persona of the type of person you're presenting to or being interviewed by, and you can prep for that as well. And then there's role-playing on the go. This is a really good one for therapy. Maybe you want to have a conversation around interpersonal relationship skills and communication.
You can have a very specific persona for
a therapist and you can have that
conversation while you're on the go,
either driving or walking. Now, how do
you get access to this? Well, in chatbt,
if you go here, this button here that
says advanced voice mode, this is the
conversation. So, what's happening is
you're going to talk to AI and it's
going to talk back to you. So, it's
voice to voice. And the beautiful part
about chatbt today is this is available
in the mobile app and it's very good,
like I said, to do on the go. But also,
if you want to give it a dedicated
persona, you can set up a system prompt
inside of a project. So if I go to the
sidebar here, I go to my projects, you
can create a new project. Once you've
created a new project, you can give it a
system prompt and give it a persona.
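A persona system prompt for such a project might look like the following sketch. The exact wording is mine, not from the video; adapt it to your own use case and paste it into the project's instructions.

```python
# Illustrative persona system prompts for voice-mode role-play.
# The wording is an assumption; paste one into a project's instructions.
PERSONAS = {
    "skeptical_cfo": (
        "You are a skeptical CFO evaluating a software purchase. "
        "Push back on price and ROI, ask for hard numbers, and only "
        "concede a point when the seller gives concrete evidence."
    ),
    "interviewer": (
        "You are a hiring manager running a behavioral interview. "
        "Ask one question at a time and probe vague answers."
    ),
}

def project_instructions(persona_key: str) -> str:
    """Return the system prompt to paste into a new project."""
    return PERSONAS[persona_key]

print(project_instructions("skeptical_cfo")[:40])
```

The same structure works for the presentation-prep and therapist personas mentioned above; only the prompt text changes.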
With that persona, you can do the role-playing and practice I described by using advanced voice mode in that project and/or custom GPT. And that's our next mistake: underusing voice mode.

Our next mistake here is writing prompts from scratch. Since I've dedicated an entire video to this topic, we'll go through this one quickly; you can watch that here. The two things to learn: when writing prompts, you should have AI write them for you, and you can do that in two ways. One, you can have an AI research the best practices for prompting a given model, such as GPT-5, Opus 4.1, etc.
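A reusable meta-prompt along these lines might look like this; the wording below is my paraphrase of the approach, not an exact recipe from the video.

```python
# A meta-prompt that asks the AI to research prompting best practices for a
# target model and then write the task prompt itself. Wording is illustrative.
META_PROMPT = (
    "Research current prompting best practices for {model}. "
    "Then, using those best practices, write a prompt on my behalf "
    "for the following task: {task}"
)

def build_meta_prompt(model: str, task: str) -> str:
    """Fill in the target model and the task you actually want done."""
    return META_PROMPT.format(model=model, task=task)

print(build_meta_prompt("GPT-5", "summarize customer interviews into themes"))
```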
You can say, "Research best practices for this model, then use those best practices to write a prompt on my behalf for this task," and it'll do that for you and you'll have an improved prompt. Or you can take that prompt and feed it into a prompt optimizer; OpenAI and Anthropic both have prompt optimizers that will rewrite your prompt using best practices for their models specifically. So you can use tools and AI to optimize your prompts, and you don't have to write them from scratch.

Now we're moving on to the top two mistakes. Our second mistake is all around the usage of AI, specifically not using it enough. When I see people, or at least beginners, play around with ChatGPT, they don't really know when to use it, so they don't use it that often.
And that's a big mistake, because we need a lot of exposure to AI to build proper AI intuition for knowing how and when to use it. I think this chart represents that really effectively. The vertical axis represents understanding: how well somebody understands how to utilize AI effectively. The horizontal axis is usage frequency: how often they use the AI. You can see that the more frequently they use AI, the more their understanding increases, and here we have AI intuition. Now, the important part is embracing failure and being okay with the fact that you'll fail with AI consistently. That's all right, because through that process of failure you'll learn how to make the most of these tools. I recommend always having a tab open in your browser, Chrome, Firefox, whatever else, dedicated to ChatGPT, Claude, or whatever tool you're using. Always go to that tool any time you have a question or run into a repetitive task. By doing this over and over, even when the question is likely impossible for the AI to answer, you'll figure out the boundaries of AI, what it can do and what it can't. Your understanding will increase, and you'll eventually build a stronger intuition for when to leverage AI. That's number two.

And number one is a mindset shift: the shift from a scarcity mindset to an abundance mindset. This goes back to the concept of intelligence becoming a commodity. If AI is commoditized and everybody can get access to it, we're commoditizing intelligence to a degree, and we need to embrace that. That means we can use these tools abundantly, sampling them and seeing what different outputs look like. This chart represents a point I want to make around iterations. Here we have a starting point; this is the same starting point.
What this represents is making a slight change to a prompt. Prompt A comes out here in the light blue and lands in this position. For prompt B, we make a slight change, maybe adding one or two words to the prompt. By adding just one or two words, we get a slight deviation in the trajectory, but at the end of that trajectory we land in a completely different location. And maybe the location where prompt B lands is exactly what we wanted. If we never tried multiple attempts with the same prompt, or slight variations of it, we would never have known the AI could actually achieve what we wanted; we would have assumed it could only get to point A. There are different ways one can go about this. I have four here, four different levels that I use, though there are many others. The first is asking the same question to the same model multiple times. Why would we want to do that?
Well, these models are probabilistic in nature. What does probabilistic mean? It simply means that when you give the model the same exact input, the output may vary, even with the same model. By doing this multiple times, you can see the different outputs that arrive. An example: maybe you have the AI write five different emails from the same input to see if there's a specific variant you like, or maybe you have the AI create five different visuals for a presentation, and you can A/B test those and see which one you prefer, based on either the same input or slight variations. That brings us to the next level, which is making a slight variation to the prompt but running it through the same model.
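These sampling levels, rerunning the identical prompt, fanning out slight variations, and (further down) fanning out across models, can be sketched as building independent requests. The payload shape, helper names, and model identifiers below are assumptions, since every chat UI or API spells these differently.

```python
# Sketch of "abundant" sampling: each dict stands for one fresh conversation.
# Payload shape, model names, and helpers are illustrative assumptions.

def same_prompt_n_times(prompt: str, n: int, model: str = "gpt-5") -> list:
    """Level 1: rerun the identical prompt; probabilistic models may diverge."""
    return [{"model": model, "prompt": prompt} for _ in range(n)]

def slight_variations(base: str, tweaks: list, model: str = "gpt-5") -> list:
    """Level 2: one- or two-word tweaks, each in its own new thread."""
    return [{"model": model, "prompt": f"{base} {t}".strip()} for t in tweaks]

def across_models(prompt: str, models: list) -> list:
    """Level 4: the same ask fanned out to different models."""
    return [{"model": m, "prompt": prompt} for m in models]

runs = same_prompt_n_times("Write a cold outreach email to a CFO.", 5)
variants = slight_variations("Write a cold outreach email to a CFO.",
                             ["", "keep it playful", "keep it formal"])
fanout = across_models("Create a title-slide visual.",
                       ["claude-sonnet-4.5", "gpt-5", "grok-4", "gemini-2.5-pro"])
print(len(runs), len(variants), len(fanout))  # prints 5 3 4
```

Each entry would become its own new conversation, which also respects the single-threading advice from mistake number eight.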
So maybe you have five different variations of a prompt, with one or two word changes each, and you give them to the same model in multiple threads. These are all new conversations, and you're seeing how the outputs look. The next level is asking the same question to different intelligence levels of the same model. Like I mentioned, inside ChatGPT we have different levels of intelligence: Auto, Instant, and Thinking, and within Thinking you have Standard and Extended if you have the Plus version, and Light, Standard, Extended, and Heavy if you have Pro. These are different levels of reasoning, and you can use them to see the outputs they produce. It's important to note that a higher level of reasoning or intelligence doesn't always equate to better. I've found time and time again that if I give a very specific task to a really smart model, it may overthink, overreason, and go in the wrong direction, and this has been borne out in research. So if you have a simple task or a simple question, a less intelligent, lower-reasoning model may be the more optimal choice, achieving the task faster and also more accurately. It's okay to ask the same question to different levels of intelligence. And finally, the one I prefer and do the most: asking the same question to different models. For instance, for visuals, I can ask Claude Sonnet 4.5, GPT-5, Grok 4, and Gemini 2.5 Pro all to create the same visuals for me and see what their outputs look like. I can then A/B test the visuals, grab the different things I like from each, and combine them into one. So those are four different levels of being abundant in your mindset and using the AI in a way that's abundant in nature, not scarce.

As a quick recap, the eight mistakes I see people make time and time again: First, single threads: you're staying in the same conversation and not starting new conversations when you should. You're over-relying on memory when you should be using projects or GPTs. You're typing and not talking, when you should be using dictation. You're using ChatGPT as a replacement for Google, which is a bad move. You're underusing advanced voice mode for tasks outside of what most people do. You're writing your prompts manually when you should use AI to do so. You're not using AI enough, probably because you're overwhelmed by the vast nature of what it can do, but you should instead embrace failure and use it more often. And finally, you have a scarcity mindset, because historically intelligence has always been scarce; now that it's a commodity, we need to take an abundance mindset and sample the outputs from these AIs to figure out what's most suitable for us.

And that's it. That's the video. If you enjoyed this, please reshare it with your friends. And like I said previously, two things. One, below is the 30-day AI insight series, completely free. You'll get insights in your inbox on how you can apply AI to your business and your work.
The second thing is, if you'd like to work with me, below is a series of offerings to see if there's a good fit between the two of us. With that said, you should check out the next video, which will be around here, because the YouTube gods think you'll love it. See you next time.