Top 10 ChatGPT‑5 User Complaints

Key Points

  • The rollout of ChatGPT‑5 sparked intense backlash, not just because of the infamous “chartgate” mistake but because it abruptly terminated users’ long‑standing AI workflows and relationships built on earlier versions.
  • OpenAI replaced multiple specialized models with a single “GPT‑5” that actually contains ten new sub‑models behind a router, aiming to satisfy diverse needs (speed, empathy, depth, web search) while managing GPU load.
  • The router’s default to the faster, less‑reasoning sub‑model has left many users frustrated, prompting questions about when to use each variant and how to customize the experience.
  • Despite widespread criticism, Sam Altman affirmed that this composite model is the permanent default for hundreds of millions of users, and the speaker outlines the top ten user complaints with practical fixes for adapting to the new system.


**Source:** [https://www.youtube.com/watch?v=Gqnf5f1ITyo](https://www.youtube.com/watch?v=Gqnf5f1ITyo)
**Duration:** 00:21:59

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=0s) **Backlash Over ChatGPT‑5 Rollout** - The speaker details the angry user response to OpenAI abruptly replacing established AI workflows with a single GPT‑5 model that actually hides ten distinct variants, disrupting long‑term personal and professional AI relationships.
- [00:03:36](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=216s) **Custom Instructions and Model Transparency** - The speaker advises using explicit prompts and custom instructions to steer ChatGPT's behavior, highlights the inconsistency between chat and API model selection, and notes that true model control requires the API or Pro tier features.
- [00:07:06](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=426s) **Long Context Illusion Explained** - The speaker cautions that bigger token windows don’t ensure perfect recall and that traditional prompting tactics—anchoring, reiterating, and rhythmic reminders—remain crucial for handling long‑context inputs.
- [00:11:13](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=673s) **Choosing ChatGPT Personality Settings** - The speaker explains how to select ChatGPT’s default mode or customize its personality via the settings menu—offering options like empathetic “Listener” or “Robot”—to address complaints about reasoning depth and lack of empathy.
- [00:16:00](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=960s) **Common LLM Pitfalls & Fixes** - The speaker reviews four major issues—router misrouting, chat vs. API model selection, retired‑model drift, and long‑context misconceptions—and outlines practical remedies such as “think hard” prompts, custom instructions, explicit model selection, prompt versioning, and disciplined long‑context techniques.
- [00:19:37](https://www.youtube.com/watch?v=Gqnf5f1ITyo&t=1177s) **One Model, Many Challenges** - The speaker explains the shift to a single, dominant AI model, emphasizing inevitable rollout issues and the lasting importance of prompting, model literacy, and workflow adaptation.

## Full Transcript

So, the response to ChatGPT-5 has been a little bit like watching a mob with pitchforks come to the vampire's castle. It's been wild to see people get so upset, so fed up with how the rollout was handled. And I don't just mean chartgate, where famously, infamously, ChatGPT-5 was rolled out with completely inaccurate charts in a live stream to hundreds of thousands of people. That's very fixable, and OpenAI immediately fixed it. What I mean is that OpenAI chose to end people's long-term relationships with their AI. And I don't just mean the vaguely creepy "this is my AI girlfriend" stuff. I mean they chose to end workflows. They chose to end professional engagements that people have with thinking partners. Everything you've built up with your AI, with GPT-4o, with o3, with o3 Pro, went away within an hour or two after that video. Instead you got a brand new AI that was really, and I've actually counted it up, 10 different GPT-5 models hiding inside the one GPT-5.

In a way that's predictable, right? The entire world spent a year telling OpenAI, please stop giving us so many models in the dropdown. But people still have really differing needs. Some people want really fast responses. Some people want a warm and empathetic model. Some people want really thoughtful responses. Some people want a lot of inference time. Some people want web search. Great. So OpenAI gave us one model that was actually 10 models underneath, with a router. And contrary to popular belief, this is not a bunch of old models stitched together with a router. These are all new models, and they're stitched in with a router. The problem is the router is set up to give OpenAI more room on their GPUs, because their GPUs are melting with the kind of traffic they get. And so the model router defaults to the dumber model, for lack of a better word; the non-reasoning model is the polite way to put it. What do we do with that? When do we need a non-reasoning fast model versus a model that's good? And how do we customize it?

This video focuses on the top 10 named user complaints and concerns and what we can do to fix them in ChatGPT-5. I'm all about fixes, right? I'm all about being practical. This is the new default model for something like 700 million people, whether we like it or not. And Sam Altman, on his Reddit AMA where people came for him with pitchforks, was very clear: we're not going back, this is the model we have. I think it's a powerful model, but I think it needs, like any model, some working in, some getting to know it. It's like going on a first date. I know that's going to sound weird and creepy, but stick with me, right? Andrej Karpathy talked about these as stochastic people spirits. In this sense, you have to teach the stochastic people spirit what you need from it. And there are specific ways you can do that. So I'm going to give you the top 10 issues that I've dug up on the internet about ChatGPT-5, and I'm going to tell you how I think they can be addressed. We'll go through them one at a time.

Number one is router misrouting. Part of that on day one was that one of their auto-switch routers was actually offline. So if you had day-one issues but haven't had them since, that was probably what was going on. But if you still get shallow responses to complex questions because the router defaults to faster models, you want to get to a place where you can ask for hard thinking very clearly. I would recommend two things. One, just say "think hard" in the prompt. Let's not make this overly complex. And two, go into the option to personalize your ChatGPT and make it clear in the custom instructions what you want. As an example: "default to deep analysis unless I say quick take," and then go from there. Essentially, you're trying to push it and route it with the custom instructions as much as you can.
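As a rough illustration of that steering, here is a minimal sketch of the same idea from the API side: a system message that mirrors the "default to deep analysis" custom instruction, plus "think hard" in the prompt. The model name `gpt-5` and the instruction wording are assumptions for illustration; in the chat app you would paste equivalent text into Settings → Customize ChatGPT instead.

```python
# Hypothetical sketch: nudging GPT-5 toward deeper reasoning with an instruction
# that mirrors a "default to deep analysis" custom instruction.
# The model name "gpt-5" is an assumption; use whatever your account exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STEERING_INSTRUCTIONS = (
    "Default to deep analysis unless I say 'quick take'. "
    "Think hard before answering, and start with a short plan of your approach."
)

response = client.chat.completions.create(
    model="gpt-5",  # assumed name; not guaranteed to match your tier
    messages=[
        {"role": "system", "content": STEERING_INSTRUCTIONS},
        {"role": "user", "content": "Think hard: compare these two pricing models..."},
    ],
)
print(response.choices[0].message.content)
```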
Number two, chat versus API mismatch was another complaint. ChatGPT uses a routing system; the API gives you direct model access. Developers get a much different experience than the rest of us with ChatGPT-5, because developers can test a particular model in the sandbox, deploy it, and get completely different behaviors. In this case, I think the way Sam Altman is going to address it is by giving us more customizability, and in the last couple of hours they've already rolled out the ability to see which model you're getting and which model is responding to you. Originally that wasn't the case, so they're working hard to make this more visible in the chat. That's not really something we can fix with prompts, and I promise to be honest with you about what you can fix and what you cannot. If you really care about controlling exactly which model you get every single time, you have only a couple of options. You can either go to the API, or you can hit the dropdown, where you don't have 10 models to choose from. If you're a Pro user, you have GPT-5 Pro, GPT-5 Thinking, and GPT-5. The options degrade from there down to Plus and free users, so you have less and less choice and have to rely more on the prompting I gave you for the router, where you prompt "think hard." This is an issue that they are going to address with more customization.
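To make the "select the model" fix concrete, here is a minimal sketch of pinning a specific model per API call instead of relying on the chat router. The model names `gpt-5` and `gpt-5-mini` are assumptions and may differ from what your account actually exposes.

```python
# Minimal sketch: pin an exact model per request instead of letting a router decide.
# Model names below are assumptions; check what your account can actually use first.
from openai import OpenAI

client = OpenAI()

FAST_MODEL = "gpt-5-mini"   # assumed: a cheap/fast variant for quick drafts
DEEP_MODEL = "gpt-5"        # assumed: the full model for careful analysis

def ask(prompt: str, deep: bool = False) -> str:
    """Route explicitly in code: deep=True pins the heavier model."""
    response = client.chat.completions.create(
        model=DEEP_MODEL if deep else FAST_MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Quick take vs. deliberate analysis, chosen by you rather than by the router.
print(ask("Give me a quick take on this release note: ..."))
print(ask("Analyze the tradeoffs in this architecture doc: ...", deep=True))
```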
Number three, model drift and mismatch. Old workflows produce different outputs after migration to ChatGPT-5. That is somewhat inevitable. If you've been running workflows in production, I hope you have been keeping track of your prompts and versioning them, so that when a new model starts responding with different outputs (drift is inevitable with new models; any new model would have produced drift), you have the space to deliberately experiment with your prompt and adjust it to the right model. Now, if you're running a production pipeline, you get to select exactly which GPT-5 model you want to use, and that gives you the flexibility to be much more controlled in your responses. If you're trying to run something through the chatbot flow, and a lot of people do, you are going to have to do more work to customize your prompt and more work to figure out how to route it to the right kind of model. And by the way, not every prompt needs a thinking model. Sometimes you want something quicker.

I will say, having worked with this model, you sometimes get more token output from the non-reasoning model because it's cheaper for them to produce those tokens. So if you have a thinking model produce an outline, you can have the non-thinking model do a lot of the writing work for you. Let's say you're writing a PRD; that might be a way to do it. And the non-thinking model, I know people come after it, so a little sidebar before we get to number four: the non-thinking model is remarkably smart for a non-thinking model. It's also incredibly fast. One thing I've noticed that is true about ChatGPT-5, and hasn't been true about previous models, is that even if the non-thinking model isn't right the first time, it is so fast that you can get five or six responses back in the time it takes something like Claude Opus 4 to do one response. It iterates into something really good in that time. In a sense, people are sleeping on the value of speed there.

Okay, let's go to number four: the long context illusion. Users have assumed that if they stuff the model with 200,000 tokens, because OpenAI advertised a bigger token window, they'll get perfect recall. It's going to be good recall. It's going to be better recall than we've had in the past. That doesn't mean it's perfect. Even OpenAI's own evaluation admits something like 89% accuracy between 128,000 and 256,000 tokens. That's good; it's not perfect. There are still lost-in-the-middle problems. You would still be wise to use U-shaped thinking in your prompting. The mitigations are not new here; we've had challenges managing long context windows in the past. You want to anchor at the beginning with a strong prompt. You want to reiterate what you need at the end. You can use techniques like rhythmic reminders through the context window of what you're looking for; Claude showed us that with its system prompt. There are a lot of techniques we already know for managing this, and I think people just assumed they didn't have to anymore. As I emphasize over and over again, these are models within a lineage. They are getting better. But don't assume that everything you learned immediately breaks. Instead, assume that a lot of the techniques you've learned will evolve. In this case, it gets a little easier to recall the context, but those techniques still work well.
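As a concrete illustration of that U-shaped structure, here is a minimal sketch of assembling a long-context prompt with the task anchored at the top, periodic reminders between document chunks, and a reiteration at the end. The task text, chunking, and reminder cadence are assumptions, not a prescribed format.

```python
# Minimal sketch of a "U-shaped" long-context prompt: anchor the ask up front,
# drop periodic reminders between document chunks, and restate the ask at the end.
# The wording and reminder cadence here are illustrative assumptions.

TASK = "Extract every contractual deadline and who owns it."

def build_long_context_prompt(chunks: list[str], remind_every: int = 3) -> str:
    parts = [f"TASK (read first): {TASK}", "---"]
    for i, chunk in enumerate(chunks, start=1):
        parts.append(f"[Document part {i}]\n{chunk}")
        if i % remind_every == 0:
            parts.append(f"(Reminder: you are still looking for: {TASK})")
    parts.append("---")
    parts.append(
        f"FINAL INSTRUCTION (read last): {TASK} "
        "List the results as bullet points with the source part number."
    )
    return "\n\n".join(parts)

# Example: feed the assembled prompt to whichever model you selected earlier.
prompt = build_long_context_prompt(["...chunk 1...", "...chunk 2...", "...chunk 3..."])
print(prompt[:500])
```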
Let's move on to number five. If you ask for JSON and you just say, "Please return JSON," for whatever reason ChatGPT-5 doesn't always comply. Sometimes it does; sometimes it returns invalid JSON objects. I would recommend asking specifically for structured outputs with a JSON schema. And if you use JSON a lot, I would recommend putting it into your custom instructions and actually specifying what you're looking for. It's not that the system doesn't know how; it's that, for whatever reason, every model has flavors and tweaks, and this model in early testing has had some issues with JSON objects. Now, that's not every single variant. It was specifically some of the smaller versions of GPT-5; GPT-5 Mini had this issue. So you might also switch to a different model. That's going to feel like a very coding-specific tip, but we use these models for a lot of things, and coding is one of them.
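Here is a minimal sketch of asking for structured output against an explicit JSON schema and then validating the result, rather than just saying "please return JSON." The `response_format` shape follows OpenAI's documented Structured Outputs feature as I understand it; the model name and the schema itself are assumptions for illustration.

```python
# Minimal sketch: request structured output against an explicit JSON schema,
# then parse it, instead of just asking for "JSON" in free text.
# The model name and schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

ACTION_ITEM_SCHEMA = {
    "type": "object",
    "properties": {
        "items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "task": {"type": "string"},
                    "owner": {"type": "string"},
                    "due": {"type": "string"},
                },
                "required": ["task", "owner", "due"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["items"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-5",  # assumed name
    messages=[{"role": "user", "content": "Extract action items from: ..."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "action_items", "strict": True, "schema": ACTION_ITEM_SCHEMA},
    },
)

data = json.loads(response.choices[0].message.content)  # raises if the JSON is invalid
print(data["items"])
```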
Number six, tool action claims, and how you handle tool action claims or calls. The model will sometimes pretend to have called a tool, or claim to have called a tool and performed an action it didn't actually perform. o3 would do this too. OpenAI claims they reduced deception significantly, and anecdotally that feels correct; it does do more of what I ask it to do, but the number is not zero. Whatever it is, I think they claimed it's down to about 2%. It's not zero. You need to be really clear about what you require of the model in your prompt, whether you're on the API or in chat. You need to require the model to show you a plan and then to show you the actions completed against that plan. In my initial notes, in the review I published last Friday, I talked about the idea that this model does well with artifacts, because artifacts are a way of proving that it can make a tool call, come back, and do something. So if you need it to use Python, you don't just say "use Python." You say, "show me the Python code that you made," or "show me the Python query you built." You have to make it prove the artifact. I think that's a bit of a secret hack with ChatGPT-5, because in the chat we can't pick the model directly, nor can we define exactly the tool call. Those are ways to force a tool call and get what we want. And why does that matter? Because this model is designed to solve things with code, and sometimes you get solutions with code you wouldn't get otherwise. My review on Friday called out that it is only okay at making Gantt charts as an image, but it is really good at making Gantt charts with code, and that is a pattern that repeats for other problems.
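Here is a minimal sketch of that plan-then-prove pattern as a reusable prompt wrapper; the exact phrasing is just one assumed way to put it, not a prescribed template.

```python
# Minimal sketch: wrap a task so the model must state a plan, execute it,
# and show the artifacts (code, queries, files) that prove each step actually ran.
# The exact phrasing is an illustrative assumption.

def plan_and_prove(task: str) -> str:
    return (
        f"Task: {task}\n\n"
        "Before doing anything, output a numbered plan of the steps you will take.\n"
        "Then execute the plan. For every step that uses a tool (Python, web search,\n"
        "file creation), show me the artifact: the exact code you ran, the query you\n"
        "issued, or the file you produced. Do not claim an action you cannot show.\n"
        "Finish with a checklist mapping each plan step to its artifact."
    )

print(plan_and_prove("Build a Gantt chart of this project schedule using code."))
```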
Number seven, thinking mode costs. Reasoning uses a lot of tokens and a lot of time, and that is part of why the router defaults to non-reasoning. So we have people complaining that thinking mode takes too long given what it gives back. This is very much a preference. I am personally fine with the model taking a few moments to think before it returns, because I can feel the difference in the quality of the response. If you don't want it to think that hard, this is actually the easiest one to solve: pick regular ChatGPT-5, or if you're on a free or Plus tier, it's going to default that way anyway, and just be happy and use that. For a lot of people, honestly, that is probably good enough. By the way, the people who complain about non-reasoning are often complaining about either the quality of the response, and we talked about going to thinking if you want it, or the lack of empathy in the non-thinking model. I have a really easy fix for you on the empathy one. Go into your ChatGPT personalization menu; you will have a style or mode you can use. I'm going to read off all of the different options you can check in the settings.

So you go to Settings, you go to Customize ChatGPT, and you can select the personality. Personality is either Default, which is quick, clever, and built to keep the conversation going, which is absolutely true; Cynic, critical and sarcastic, and I don't see many people asking for that one; Robot, efficient and blunt, and for people complaining about it being robotic, it can be more robotic; Listener, thoughtful and supportive, which I think is the closest to the empathy people are looking for, although OpenAI has said they're working to soften the overall profile of all of these personalities in response to customer feedback, the pitchforks; or Nerd, exploratory and enthusiastic. So you can pick that personality. Then there are the custom instructions, and this is what I've been saying: when you customize ChatGPT, take advantage of it. I have, and I've introduced it. I've said, I'm Nate, this is what I do, and I've given it traits. For me, I want it to think strategy first. I want it to be reflective. I want it to focus on high signal. I want it to push back on me. Those are things I've actually put into the custom instructions because they are what I want. You can do what you want with your custom instructions. I think people are sleeping on that as a way to handle ChatGPT-5, because that's exactly what custom instructions are for. All right, so thinking mode costs: absolutely fixable. In fact, I think that's one of the easier ones.

Number eight, guardrail friction, is interesting. There are certain cases where you are going to have appropriate questions for ChatGPT-5 and it is a little more conservative around dual-use content; there are particular risks, especially around biohazards, that it's super conservative about. You probably want to think about how you use the model and how you ask for safe completions in those cases. That's a very narrow wedge, but it comes up if you're in biology or in research. You may be asking for things that are entirely appropriate, but they tend to sit right next to things that would be inappropriate to ask about. You are essentially going to need to evolve ways of talking to the model that prioritize safe completions in ways that are useful. Either that or, honestly, you're going to have to switch models for that one.

Number nine, where it makes basic errors: the simplest fix is to require thinking mode, and the second simplest fix is to require verification and citations for factual claims. You can lean on the custom instructions as a way of reinforcing that as well. Now, I've emphasized customization and custom instructions a lot before we get to the tenth item, and one caveat: custom instructions will not override the system prompt if you are using the chat. One way you know you can't override it: you can demand that the custom instructions be verbose, super long-winded, but OpenAI has to preserve their GPU capacity, so they still impose token constraints, and you can actually see this in the chain of thought. I've tried it. If you ask it to be verbose and write long-winded output, it comes back and says it has to respect OpenAI's token policies and watch its output length. It literally shows you in the chain of thought where it's adhering to the system prompt. That's just good to know, because OpenAI has put guardrails on that system prompt so that you can't break their GPUs. I will do a separate video breaking down the system prompt that got leaked; I think it's super interesting, and it's too long for this video, but we'll get into it.

So, number 10, the silent fallback. If you are on one of the lower plans, not one of the Pro plans, and you hit something like 80 messages in 3 hours, it will silently downgrade the model, and the quality can drop mid-conversation. There will not be a warning. The only solutions here are to monitor your usage, and ChatGPT is working on a way to surface that because they know people want to see it; to use the API if you care about that as a developer; to upgrade tiers if you're not a developer and you really, really care; or to just go touch grass and take a walk. I wish there was a way to buy a prompt pack or an upgrade pack for three hours or something; I think there would be a lot of interest in that, but it's not something ChatGPT as a business has decided to do.
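Since there is no usage counter exposed in the app yet, one crude stopgap is to keep your own rolling tally while you work. This is a purely local sketch under the assumption that you log each message you send yourself; the 80-messages-per-3-hours figure is just the limit mentioned above, not a confirmed constant.

```python
# Minimal local sketch: keep your own rolling count of messages sent, so you can
# see when you're approaching a window like "80 messages in 3 hours".
# Purely illustrative; you call note_message() yourself each time you send one.
import time
from collections import deque

WINDOW_SECONDS = 3 * 60 * 60   # 3 hours
SOFT_LIMIT = 80                # the limit discussed above; treat as an assumption

_sent: deque[float] = deque()

def note_message() -> int:
    """Record one sent message and return how many fall inside the rolling window."""
    now = time.time()
    _sent.append(now)
    while _sent and now - _sent[0] > WINDOW_SECONDS:
        _sent.popleft()
    remaining = SOFT_LIMIT - len(_sent)
    if remaining <= 5:
        print(f"Heads up: {len(_sent)} messages in the last 3 hours ({remaining} left).")
    return len(_sent)
```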
All right, reviewing where we've been, going through these 10. Number one, router misrouting, is a huge issue. You can fix this with prompts like "think hard" and with custom instructions like "default to deep analysis." Number two, chat and API being different, because ChatGPT uses a routing system in the chatbot while API users can select the model. Honestly, the simplest fix there is to select the model. Or, if you are using the chatbot on one of the higher-tier plans, you can hit the dropdown and pick the model, Pro mode or whatever you want to test. You can also use the same number-one fixes, like "think hard," if you don't have the option to use the dropdown. Number three, model retirement drift. If a model was retired and your old workflows broke, what do you do? It's all about prompt versioning, making targeted upgrades, and evaluating what happens. You should already have prompt versioning and you should already be evaluating; I've been preaching that for a long time. If you haven't been, this is where the bill comes due. Please start now. Number four, the long context illusion. People assumed, because of the advertisement-like quality of that OpenAI live stream, that they could stuff in hundreds of thousands of tokens with perfect recall, but that's not what OpenAI actually claimed, and it's certainly not what I'm seeing in practice. You still need to use your good long-context practices, like U-shaped prompting where you emphasize at the beginning and the end what you're looking for, and reiterating through the context window what you want. Context engineering still matters. I've been saying for a long time that there is no way around good prompt engineering and good context engineering. That is a durable skill.

Number five, JSON breaking. It feels like a narrow one, but it matters. We have had issues with smaller models breaking JSON and not forming correct JSON. Either upgrade to a better model, or be very clear in your custom instructions that you want correctly formed JSON, and prompt for it very specifically, asking for structured outputs against a complete JSON schema. Number six, tool action claims that are not true, like hallucinated tool calls. This is where I called out that with this model in particular, getting artifacts matters; it's a way of forcing the tool call and forcing proof of the tool call. Number seven, thinking mode costs, meaning people not wanting to use thinking mode when they don't want to. That one is actually one of the easiest: you just default to non-thinking. And if you really want to emphasize it, you can say don't think, act now, or use "get the faster answer," which is a little button they added in ChatGPT. Number eight, guardrail friction. This is another narrow one, but it's for the bio researcher folks out there, the folks using it for science and hard science. You may be asking queries that are close to dangerous requests, or requests that OpenAI has deemed dangerous, and it's using safe completions. You need to figure out how to narrowly tailor your request in the prompt, or you need to switch models. Number nine, where it makes basic errors, is probably the non-reasoning model. Either upgrade to a better model, or adjust your customization to require verification and citations for factual claims, and really lean on that in the prompt as well. Then number 10, the silent mini fallback, where you use it for something like 80 messages in 3 hours and the better model disappears. I wish this were fixable, but OpenAI has to either raise the limits, which historically they tend to do, or give you the ability to use a different model, and they've talked about bringing GPT-4o back; otherwise you're going to have to monitor your usage and maybe upgrade tiers.
Now, there are people who, when I say at the beginning of this video that there are 10 things we can do to fix these issues, are going to throw up their hands and say, "Why do I have to fix it? I was promised a magic thinking machine that would do the routing for me and do the thinking for me." I've seen that in my TikTok comments over the weekend: I was promised this and it didn't happen. Guys, there's no such thing as a free lunch. We spent an entire year asking OpenAI to take away all of the other models and give us one model that thinks well. People will say, well, we didn't ask them to take away the models. But most people did: they said they didn't want the model dropdown. If you don't want the model dropdown, you want one model. Something has to give. So now we have one model in the dropdown, or maybe a couple of flavors of the same model depending on your plan, and we have to decide what to do with it. There is no way to make a transition that big without some teething problems, some issues with the rollout, and some issues with how we learn to use it. The idea that the intelligence from the sky, the magic rocks that think, would magically be able, in a brand-new model rollout, to understand exactly what you want from your vague English is not something you should ever have expected, to be blunt. It just isn't. Prompting is a durable skill. Understanding how models work is a durable skill. And increasingly, being able to adjust and evolve your workflows with a new model is a durable skill. That's not going to go away.

I'm going to keep exploring how ChatGPT-5 works, because this is a model that is incredibly important in the world right now; it's now the only model that hundreds of millions of people are using every week, and it is a complex model to use. My early impressions are that it takes more effort, more thinking, and more deliberate intention to use really well, even if the default feels kind of smart to some people. The default may feel cold, but it can feel smart enough to some people, and I've seen that in my comments as well. If you want to use it for extraordinary work, which this model is capable of, I've tested it and it does incredible work, and I will do some more demos later this week that show that, you need to be ready to put in extra work compared with what you would have had to do with o3, or o3 Pro, or Claude 4 Opus. And you might ask: is the extra work worth it? The answer is yes. I have seen this model do analysis that I haven't seen any other model complete successfully. It can one-shot or few-shot coding examples, for software you can use around the office, that I haven't seen anything else do quite as successfully. It is worth the effort, but it is work. So, I hope this review of 10 common issues across ChatGPT-5 has been helpful as we continue our exploration of this new model we're all living with now. Let me know what you think in the comments, and let me know if there are other issues I can address.