
Top ChatGPT Mistakes Killing Productivity

Key Points

  • Keeping a conversation “single‑threaded” (continuously adding new prompts without resetting) fills the AI’s context window and progressively degrades its intelligence.
  • The more irrelevant or contradictory information stored in the context, the lower the AI’s performance, so a leaner context yields smarter responses.
  • When the AI starts repeating mistakes, pause, request a concise summary of the crucial points, and then begin a fresh conversation using that summary as the new context.
  • Changing topics (e.g., from cat clothing to Bitcoin trading) should trigger a new conversation rather than continuing the old thread, preventing context overload and keeping the AI focused.
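The third key point, pausing for a targeted summary and restarting with it as fresh context, can be sketched in a few lines of Python. This is a minimal illustration, not an API shown in the video: `ask_for_summary` is a hypothetical stand-in for an actual model call.

```python
def ask_for_summary(messages):
    """Hypothetical stand-in for a real model call: in practice you would ask
    the AI itself for a targeted one-page summary of the points you care about."""
    user_points = [m["content"] for m in messages if m["role"] == "user"]
    return "Key points so far: " + "; ".join(user_points)

def restart_with_summary(messages):
    """Collapse a bloated thread into a fresh one seeded only with the summary,
    reverting to a much emptier context window."""
    return [{"role": "user", "content": ask_for_summary(messages)}]

# A long-running thread whose context has grown bloated:
old_thread = [
    {"role": "user", "content": "Draft a Q3 report outline"},
    {"role": "assistant", "content": "Here is an outline..."},
    {"role": "user", "content": "Use the revised revenue figures"},
]
fresh_thread = restart_with_summary(old_thread)
print(len(fresh_thread))  # 1: the new conversation starts from the summary alone
```

The new thread carries only the distilled summary, so the model reasons over a lean context instead of the full back-and-forth.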

Full Transcript

**Source:** [https://www.youtube.com/watch?v=60b8Ucy2Lhs](https://www.youtube.com/watch?v=60b8Ucy2Lhs)
**Duration:** 00:17:18

## Sections

- [00:00:00](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=0s) **Avoid Single-Threaded ChatGPT Context Overload** - The speaker explains that lingering in one conversation thread fills the AI's context window, reducing its intelligence, and recommends starting fresh chats whenever the topic changes.
- [00:05:05](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=305s) **Boost AI Interaction with Dictation** - The speaker advocates using dictation to speak to AI, since speaking outpaces typing and reading outpaces listening, to increase communication throughput, add context, and maximize efficiency, while also promoting a free 30-day AI insight series and consulting offers.
- [00:09:09](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=549s) **Creating Voice-Enabled AI Personas** - A walkthrough of setting up a custom therapist persona in ChatGPT's mobile advanced voice mode and using AI-generated prompts to avoid writing them from scratch.
- [00:12:47](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=767s) **Prompt Tweaking Yields Divergent Results** - The speaker shows that minor prompt adjustments or multiple runs of the same probabilistic model can dramatically change AI output trajectories, outlining a four-step method to uncover more desirable results.
- [00:16:30](https://www.youtube.com/watch?v=60b8Ucy2Lhs&t=990s) **Embrace AI Prompting Abundance** - The speaker urges viewers to stop crafting prompts manually, adopt an abundance mindset toward AI-generated content, and take advantage of free AI insight resources and consulting offers.

## Full Transcript
You're probably making at least three of these ChatGPT mistakes right now. After hundreds of conversations across many different industries, I've ranked the worst offenders. These mistakes are likely quietly killing your productivity and taking hours away from you every single week. So let's fix that, starting with number eight.

All right, so there are many different mistakes that people make with ChatGPT. I've pulled out eight of the worst offenders and ranked them in reverse order, so this is number eight and we'll work our way up to number one. The first one is single-threaded conversations. What does that mean? It means that somebody is running on a conversation when they shouldn't. You should start a fresh conversation for two reasons: one, you've changed the topic, which we'll get to shortly, or two, your context is bloated from being run on. So what do I mean by context? Context is basically the memory of the AI: how much information you can stuff into the AI's head without its intelligence degrading. We'll start with that one first, because that's what this graph represents. The vertical axis is the intelligence of the AI, so higher is better. The horizontal axis is the context window, meaning how full it is: on the right-hand side it's really full, on the left-hand side it's not full at all. And you can see the connection: when there's less context in the context window (which is basically the AI's head), the AI is going to be more intelligent.
But the more information we shove into the AI's head that's irrelevant, contradictory, or otherwise noisy, the more its intelligence degrades over time. The way to avoid this: when you start to see the AI making recurring errors or consistently going in the wrong direction, stop, pause, and ask the AI specifically to summarize the part of the conversation you care about. Be targeted in what it summarizes. It'll summarize that in a one-pager. Once you have that one-pager, start a fresh conversation: you revert to a much less full context window for the AI's head, paste the summary in, and start the conversation again. By doing this, you increase the chances of the AI achieving the task you want. That's the first reason: context.

The second reason is new conversations. Let's say this green box is one really big conversation you're having with the AI. The first portion represents a conversation between you and the AI about cat clothing and the different types of clothing you can put on your cats. But then you change the conversation, and down here you switch to how to trade Bitcoin effectively, tips and tricks on buying and selling Bitcoin. You can see these are completely separate topics, but I see this time and time again: somebody will keep one conversation with the AI in the same thread, changing the topic back and forth. By doing this you're confusing the AI, because everything said earlier is included in the context, that is, in the AI's head.
So it's going to be thinking about cat clothing, Bitcoin, and anything else you talk about in that thread. It's important to state that if you're going to have a new conversation on a separate topic, you should create a completely separate conversation and talk only about that topic in that thread. This will give you a higher-quality response for the task you're giving the AI. That's our first culprit: single-threaded conversations.

The next issue is over-relying on memory. Within ChatGPT there's a specific feature called memory, and I'll show you what it looks like. In ChatGPT, select your profile picture, go to Settings, then Personalization, and scroll down; you'll see a section called Memory. What memory does is remember different aspects of your preferences when interacting with the AI. This is useful, but only for a small subset of things. What I see people doing is relying too much on memory and not enough on the dedicated tools for improving the AI's output on a specific task: alongside memory, we have GPTs and projects. The intention of memory is to remember a small set of things, such as your favorite color, your general preferences around writing style, where you live, maybe what you do for work; general items about you, your life, and what you prefer when interacting with AI. But when it comes to a very specific task, like writing a certain type of report, doing a certain type of analysis on a dataset on a recurring basis, or doing a certain type of research on a recurring basis, it falls short.
So anything that's repetitive and somewhat deep in nature, you want to dedicate to a GPT or a project. By doing this you increase the likelihood that the AI will achieve the given task, because you've given it a very thorough system prompt for that specific GPT or project, and you've also given it a series of files it can reference to know what good looks like when performing the task. So don't rely solely on memory for very specific and repetitive tasks. Only rely on memory for generalized things the AI can use across conversations, regardless of the task. Don't overly rely on memory; that's the second mistake.

Oh hey, quick pause in your regular programming. This video is brought to you by me. Two quick things. First, below is the 30-day AI insight series, completely free; you'll get 30 insights in your inbox on how you can apply AI to your business and your work. Second, if you'd like to work with me, I have a series of offerings below to see if there's a good fit between the two of us. With that being said, let's get back into the video.

The third mistake is typing, not talking. This really cool animation depicts different ways we can get throughput with AI: the first is typing, the second is speaking, the third is reading. In words per minute, reading is the fastest, speaking is second, and typing is the slowest. What we want to do is increase our ability to communicate with AI faster, because if we remove the friction of communicating with AI, we'll provide more context to it on a recurring basis. So my advice to anybody is to use dictation as much as possible.
By doing this, you speak to the AI: your speech is converted to text that's fed to the AI, and it writes back to you. We can speak faster than we type, and we can read (or at least skim-read) faster than we listen. By combining the two, we increase the throughput between ourselves and the AI, giving it more context and getting more back from it. So use dictation as much as possible instead of typing all the time.

Our next mistake is treating ChatGPT as a simple replacement for Google. What I see people do is ask Google-like questions of ChatGPT, and that's it, nothing else. That's a big mistake; don't do that. The reason is that we've been trained to ask Google-like questions over the decades we've had the internet and Google-style search. Here you can see a basic question like "best laptop of 2025". This is a Google-like question because we're supplying key terms; we're doing keyword search on a database. But when you're talking to AI, say for research purposes, you need to provide more context so it can tailor its response and give you the exact answer you need. To do that, use dictation to provide context on what you're trying to achieve, the sources you care about getting the data from, any specific outcomes you expect from the research, and/or the insights you're looking for. And a bonus tip when you're doing research, to avoid the Google-style search: use GPT-5 extended thinking with web search enabled. I'll show you exactly how to do that in a second.
Let's go look at that. In ChatGPT, open a chat and go to the model dropdown. If you pay for Plus, which is $20 a month, you'll have access to Auto, Instant, and Thinking. Choose Thinking, because it's a more intelligent model. A blue button pops up; select its dropdown and you'll have two options. On Plus, those are Standard and Extended; I'm going to recommend Extended, because more thinking is involved. Then hit the plus button, go to More, and then Web Search. So now you have web search enabled and extended thinking enabled. Use the dictation tool to provide as much context as possible, so the AI can go off, find all the information you need for the given task or question, and come back with a very high-quality response. The moral of the story: don't use ChatGPT like Google. Don't use basic keyword searches; instead, provide the context it needs and use the high-quality intelligence behind it to get better responses.

Our next mistake is underusing advanced voice mode, or not using it at all. Oftentimes people don't even know this exists, and you can use it for a variety of really useful use cases. Some that I've listed here: practicing for sales. Maybe you're preparing for sales and want to train up your sales team. You can set up a given persona inside ChatGPT and have your sales team negotiate with that persona.
It could be a skeptical CFO or something like that; your team can practice the role-play and then go into the field and do it for real. Another use is prepping for presentations, or maybe prepping for an interview. You can do that as well with advanced voice mode: give it a persona of the type of person you're presenting to or being interviewed by, and you can prep against it. Then there's role-playing on the go. This is a really good one for therapy-style conversations; maybe you want to talk through interpersonal relationship skills and communication. You can give it a very specific therapist persona and have that conversation while you're on the go, either driving or walking. Now, how do you get access to this? In ChatGPT, the button labeled advanced voice mode starts the conversation: you talk to the AI and it talks back to you, voice to voice. The beautiful part about ChatGPT today is that this is available in the mobile app, which makes it very good, like I said, for use on the go. And if you want to give it a dedicated persona, you can set up a system prompt inside a project. In the sidebar, open your projects and create a new project; give it a system prompt and a persona. With that persona, you can do the role-playing and practice I described by using advanced voice mode in that project and/or a custom GPT. That's our next mistake: underusing voice mode.

The following mistake is writing prompts from scratch. Since I've dedicated an entire video to this topic, we'll do this one very quickly; you can watch that here.
The two things you need to learn here: when writing prompts, you should have AI write them for you, and you can do that in two ways. One, you can have an AI research the best practices for prompting a given model, such as GPT-5 or Opus 4.1. You say: research the best practices for this model, then use those best practices to write a prompt on my behalf for this task. It will do that, and you'll have an improved prompt. Two, you can take that prompt and feed it into a prompt optimizer. OpenAI and Anthropic both have prompt optimizers that rewrite your prompts using best practices for their models specifically, improving the prompt. So you can use tools and AI to optimize your prompts; you don't have to write them from scratch.

Now we're moving on to the top two mistakes. Our second mistake is all around usage of AI: not using it enough. When I see people, or at least beginners, play around with ChatGPT, they don't really know when to use it, so they don't use it that often. That's a big mistake, because we need a lot of exposure to AI to build proper intuition for how and when to use it. I think this chart represents that really effectively. The vertical axis represents understanding: how well somebody understands how to utilize AI effectively. The horizontal axis is usage frequency: how often they use the AI. You can see that the more frequently they use AI, the more their understanding increases; that curve is AI intuition. Now, the important part here is embracing failure and being okay with the fact that you can fail with AI consistently.
And that's all right, because through that process of failure you'll learn how to make the most of these tools. I recommend always having a tab open in your browser, whether Chrome, Firefox, or anything else, dedicated to ChatGPT, Claude, or whatever tool you're using. Go to that tool any time you have a question or run into a repetitive task. By doing this over and over, even when you suspect the question is likely impossible for the AI to answer, you'll figure out the boundaries of AI: what it can do and what it can't. Your understanding will increase, and you'll eventually build a stronger intuition for when to leverage AI. That's number two.

And number one is a mindset shift: the shift from a scarcity mindset to an abundance mindset. This revolves around the concept of intelligence becoming a commodity. If AI is commoditized and everybody can get access to it, we're commoditizing intelligence to a degree, and we need to embrace that. It means we can use these tools abundantly, sampling them and seeing what different outputs look like. This chart represents a point I want to make about iterations. Here we have a single starting point, and what this represents is making a slight change to a prompt. Prompt A comes out here in light blue and lands in this position. For prompt B, we make a slight change, maybe adding one or two words to the prompt, and by adding just one or two words we get a slight deviation in the trajectory.
But at the end of this trajectory, we end up in a completely different location. And maybe the location where prompt B lands is exactly what we wanted. If we never tried multiple attempts with the same prompt, or slight variations of it, we would never have known the AI could actually achieve what we wanted; we would have assumed it could only get to point A. There are different ways to go about this. I have four here, four different levels that I use, though there are many others. The first is asking the same question to the same model multiple times. Why would we want to do that? These models are probabilistic in nature. What does probabilistic mean? It simply means that when you provide the exact same input, the output may vary, even with the same model. By running the same input multiple times, you can see the different outputs that arrive. For example, have the AI write five different emails from the same input to see if there's a specific variant you like, or have it create five different visuals for a presentation so you can A/B test them and see which one you prefer, based on either the same input or slight variations. That leads to the next level: making a slight variation to the prompt but running it through the same model. Maybe you have five variations of a prompt, with one or two word changes each, and you give them to the same model in multiple threads, all new conversations, and see how the outputs look.
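The first two levels, re-running the same prompt and running slight variants, can be sketched as a simple loop. In this sketch, `toy_model` is a hypothetical stand-in that fakes a probabilistic model with a random choice; real model calls would go in its place.

```python
import random

def toy_model(prompt, rng):
    """Hypothetical stand-in for a probabilistic model: the same prompt can
    come back in several forms because the output is sampled."""
    styles = ["concise", "friendly", "formal"]
    return f"{prompt} ({rng.choice(styles)} draft)"

rng = random.Random(7)  # seeded only so the example is repeatable
prompt = "Write a follow-up email to the client"

# Level 1: same prompt, several runs; collect the distinct outputs.
runs = {toy_model(prompt, rng) for _ in range(10)}

# Level 2: slight variations of the prompt, one run each in a fresh "thread".
variants = [prompt, prompt + " in two sentences", prompt + " with a call to action"]
variant_runs = [toy_model(p, rng) for p in variants]

print(sorted(runs))
```

Even with an identical input, repeated runs land in different spots, which is exactly why sampling a prompt several times can surface a variant you prefer.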
The next level is asking the same question to different intelligence levels of the same model. Like I mentioned, inside ChatGPT we have different levels of intelligence: Auto, Instant, and Thinking. Even within Thinking, you have Standard and Extended on Plus, and Light, Standard, Extended, and Heavy on Pro. These are different levels of reasoning, and you can use them to see the outputs each produces. It's important to note that more reasoning or intelligence doesn't always equate to better results. I've found time and time again that if I give a very specific task to a really smart model, it may overthink, over-reason, and go in the wrong direction, and this has been borne out by research. So for a simple task or question, a less intelligent or lower-reasoning model may achieve it faster and more accurately. It's okay to ask the same question at different levels of intelligence. And finally, the one I prefer and use the most: asking the same question to different models. For instance, for visuals, I can ask Claude Sonnet 4.5, GPT-5, Grok 4, and Gemini 2.5 Pro all to create the same visuals and see what their outputs look like. I can then A/B test the visuals, grab the different things I like from each, and combine them into one. Those are four levels of being abundant in your mindset and using AI in a way that's abundant in nature, not scarce.
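The fourth level, fanning the same question out to several models, boils down to a loop over model calls. In this sketch the four functions are hypothetical placeholders for Claude Sonnet 4.5, GPT-5, Grok 4, and Gemini 2.5 Pro; each real vendor has its own SDK and authentication, which this deliberately omits.

```python
# Hypothetical placeholder functions; real calls would use each vendor's SDK.
def claude_sonnet(prompt):
    return f"[claude] {prompt}: diagram-heavy visual"

def gpt5(prompt):
    return f"[gpt5] {prompt}: minimalist visual"

def grok4(prompt):
    return f"[grok] {prompt}: bold, colorful visual"

def gemini25(prompt):
    return f"[gemini] {prompt}: data-dense visual"

MODELS = {"claude": claude_sonnet, "gpt5": gpt5, "grok": grok4, "gemini": gemini25}

def fan_out(prompt):
    """Ask every model the same question and collect the answers side by side,
    so you can A/B test them or combine the parts you like."""
    return {name: fn(prompt) for name, fn in MODELS.items()}

results = fan_out("Create a visual for the quarterly review")
for name, output in results.items():
    print(name, "->", output)
```

Collecting the outputs in one dictionary makes the side-by-side comparison, and the cherry-picking the speaker describes, straightforward.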
As a quick recap, the eight mistakes I see people make time and time again: first, single threads, staying in the same conversation and not starting new ones when you should. Second, over-relying on memory when you should be using projects or GPTs. Third, typing and not talking, when you should be using dictation. Fourth, using ChatGPT as a replacement for Google, which is a bad move. Fifth, underusing advanced voice mode for tasks outside what most people do. Sixth, writing your prompts manually when you should use AI to do so. Seventh, not using AI enough, probably because you're overwhelmed by the vast nature of what it can do; instead, embrace failure and use it more often. And finally, holding a scarcity mindset: historically intelligence has always been scarce, but now that it's a commodity, we need to take an abundance mindset and sample the outputs from these AIs to figure out what's most suitable for us. And that's it, that's the video. If you enjoyed this, please share it with your friends. Like I said previously, two things: one, below is the 30-day AI insight series, completely free; you'll get insights in your inbox on how to apply AI to your business and your work. Two, if you'd like to work with me, below is a series of offerings to see if there's a good fit between the two of us. With that being said, you should check out the next video, which will appear around here, because the YouTube gods think you'll love it. See you next time.