Mastering ChatGPT‑5 for Business Transformation

Key Points

  • Organizations must assume ChatGPT‑5 is already present via shadow‑IT and proactively integrate it into workflows rather than waiting for formal adoption.
  • Unlike prior versions, ChatGPT‑5 is a bundle of specialized sub‑models, requiring teams to learn new skills for routing prompts to the appropriate model category.
  • The model’s performance is highly variable—prompt quality and correct model selection can produce either poor or exceptionally accurate results on complex tasks, so teams need strong judgment on what constitutes a good answer.
  • To extract deep reasoning from ChatGPT‑5, users should explicitly instruct the model to “think hard,” a simple prompt cue that reliably triggers its most advanced reasoning capabilities.

**Source:** [https://www.youtube.com/watch?v=dUWxN0snnW8](https://www.youtube.com/watch?v=dUWxN0snnW8)
**Duration:** 00:22:56

## Sections

- [00:00:00](https://www.youtube.com/watch?v=dUWxN0snnW8&t=0s) **ChatGPT‑5 Organizational Adoption** - The speaker explains that enterprises must rethink AI rollout strategies for ChatGPT‑5—recognizing its shadow‑IT prevalence, bundled model architecture, and the demand for new team skills—to guide executives in effectively integrating the tool and driving bottom‑line impact.
- [00:04:03](https://www.youtube.com/watch?v=dUWxN0snnW8&t=243s) **GPT‑5 Elevates Enterprise AI Use Cases** - The speaker explains how GPT‑5's expanded reasoning and data‑synthesis abilities enable more effective product specifications, engineering efficiency, and customer‑success ticket analysis—provided users craft proper prompts to unlock the newly raised capability envelope.
- [00:07:09](https://www.youtube.com/watch?v=dUWxN0snnW8&t=429s) **Demand Proven AI Artifacts** - The speaker urges teams to make AI generate not just final results but also all intermediate deliverables—code, rubrics, summaries, and tool‑call evidence—so the model's work is transparent, verifiable, and tied to specific backend functions.
- [00:10:58](https://www.youtube.com/watch?v=dUWxN0snnW8&t=658s) **One Model, Many Paths** - The speaker explains that the era of choosing between multiple AI models has ended with GPT‑5, so organizations must now master how to invoke its capabilities for their specific data and teams, as effective model usage—not model selection—will determine their success.
- [00:14:20](https://www.youtube.com/watch?v=dUWxN0snnW8&t=860s) **AI Completeness and Vibe Coding** - The speaker warns that AI can fabricate seemingly complete meeting agendas and other outputs, leading teams to overlook real gaps, and introduces a new, low‑stakes "vibe coding" category of personal, kitchen‑table software launched on August 7th.
- [00:17:41](https://www.youtube.com/watch?v=dUWxN0snnW8&t=1061s) **Empowering Teams with AI‑Driven Apps** - The speaker encourages employees to use ChatGPT to quickly create and remix data‑driven applications, fostering grassroots innovation beyond static templates.
- [00:20:57](https://www.youtube.com/watch?v=dUWxN0snnW8&t=1257s) **Redesigning AI Playbooks Post‑GPT‑5** - The speaker explains that organizations need to revamp their AI transformation playbook—eliminating outdated step‑by‑step and model‑selection emphasis, establishing new guardrails for hallucinations, and expanding prompt libraries to include generated artifacts—to capture an anticipated 20% productivity gain from GPT‑5.

## Full Transcript
If you work in AI transformation, if you're trying to figure out how to get AI into your business, how to get your team to use it, and how to pick the right tool for the right job so you can make the most of AI and really drive the bottom line, then ChatGPT-5 has to change your approach, and it changes it in really unexpected ways. I want to use this briefing to talk that through in detail and give you field notes you can take to your teams to guide how you shift your AI implementation now that ChatGPT-5 is in your workplace.

And I've got news for you: if you're a Copilot organization, if you're a Claude organization, it is very likely that ChatGPT-5 is already in your workplace, because people bring it in on their phones. The shadow-IT problem is real. You have to assume it's already there.

So what makes ChatGPT-5 special and different? Why is it worth an executive briefing just to talk through what changes in your org as a result? Number one, the way this model works is unlike any other model. This is a bunch of models bundled together, which means your team has to learn a brand-new skill. Before, when ChatGPT-4o was out there, it was really about having your team move to a reasoning model and invoke it at the right time (go to o3), or, in the old ChatGPT-4o days, asking the model to think step by step. None of that works the same way anymore. Now you have to actually work with your team and help them figure out how to route each prompt to the right model category behind the scenes, so you can get the power you need for the job you want. And this matters.
When I did my full write-up on ChatGPT-5 this week, I found that ChatGPT-5 was both the best-performing and the worst-performing model in the tests I ran. In other words, depending on how it's prompted and which model you route to, you either get a very bad response to a complex problem or an extraordinarily good one. Your team needs to double down on taste. They need to double down on understanding what constitutes a good answer to a very hard question if you're going to use it for complex work. And I think the answer with AI is that you have to try to use it for complex work. I don't think it's acceptable, as an AI transformation organization, to look at a launch like ChatGPT-5 and say, "Ah, we're going to wait and see what ChatGPT-5 holds; this is too hard." I've got news: it's not going to get easier. We're not going back to a world where you just pick the model. Your team has to level up in the way they prompt in order to take advantage of this model.

Your cheat sheet, by the way: if there is one thing you tell your team and make sure they hear, tell them that when they have a hard problem, when the model needs to do some really in-depth thought, they should literally tell the model to think hard. It's one of those hard-coded passwords that seems to reliably get ChatGPT to invoke the thinking model. Tell them to think hard.

But that's not the only tip. At the end of the day, what your teams need to succeed with ChatGPT-5 is to recognize that the leverage has shifted from picking the right model to picking the way you work with the model. So you need to look across your teams. I've spent a lot of time in these executive briefings highlighting use cases for AI on teams, so I don't want to belabor that here.
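The "think hard" cheat sheet is easy to operationalize as a reusable prompt prefix. A minimal sketch follows; the cue phrasing comes from the talk, but the helper function and its exact wording are illustrative, not an OpenAI API.

```python
# Hypothetical helper: prepend the "think hard" cue to any hard prompt so
# ChatGPT-5's router is nudged toward the reasoning model. The wrapper is
# a sketch, not an official interface.
THINK_HARD_PREFIX = (
    "Think hard about this. Take your time and reason through the problem "
    "in depth before answering.\n\n"
)

def with_think_hard(prompt: str) -> str:
    """Wrap a prompt with the 'think hard' cue for complex tasks."""
    return THINK_HARD_PREFIX + prompt

print(with_think_hard("Which of these 500 support tickets share a root cause?"))
```

Teams can keep a prefix like this in their prompt library so the cue is applied consistently rather than remembered ad hoc.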
There are use cases in marketing around idea generation. There are use cases in sales around keeping language consistent, handling deals, and translating technical requirements into contracts. There are use cases in product around developing effective PRDs and vibe-coding prototypes so engineers can understand what you want. There are use cases all over engineering around building more efficiently with coding tools. Those are just a few examples; customer success has voice-of-customer and ticket analysis, and I could go on and on. The key thing you need to understand when leading AI transformation is that, for those use cases, you have to help people see that the envelope of capability has gone up with ChatGPT-5, but the way you access it is trickier now. As an example, take the customer success use case: the number of tickets you can assess and the patterns you can pull out of them. If you invoke thinking mode, if you set up your prompt correctly, and if you feed it all the tickets, it is going to do a better job of recognizing patterns and assessing what is in those tickets overall than other models, and that includes Claude models.
I threw that kind of problem at Claude Code, and it did not do as good a job as ChatGPT-5 with thinking. So I feel very confident saying that the overall capability envelope has gone up. Handling and synthesizing really complex data, including numeric data and mixed data, has gone way up, and I think it's slept on, because the business has a lot of that: every business I know has really messy data, and ChatGPT-5 gives you the first really capable approach to tackling it. But only if you can persuade people to very, very carefully load that context window with the right prompt and the right data.

So I would encourage folks working on how to unlock this extra capacity and pattern synthesis (maybe it's a market analysis, a customer sentiment analysis, or looking across a lot of behavioral data for product; whatever that extra step of synthesis is that was tough to do with straight AI before, without a whole agentic pipeline or a RAG system, just in the chat): get that data as clean as you can. Focus it on the data the AI needs to process in order to answer the question. And it's okay if that's a lot; this is a 400,000-token context window. Put it in a format the AI can fairly easily parse. I have tried making it parse a nasty format on top of dirty data, and I will tell you, it does it, but you will get much better results if you give it clean data in a format it understands.
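That data-prep step can be sketched in a few lines. This is a minimal illustration of the idea (normalize messy records into clean CSV before loading the context window); the ticket fields and helper name are made up for the example.

```python
import csv
import io

def tickets_to_csv(tickets):
    """Normalize a messy list of ticket dicts into clean CSV text
    suitable for pasting into a model's context window."""
    fields = ["id", "subject", "body", "sentiment_hint"]
    buf = io.StringIO()
    # extrasaction="ignore" drops stray internal fields the model doesn't need
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for t in tickets:
        # Strip stray whitespace and fill gaps explicitly rather than
        # leaving the model to guess what a missing field means.
        row = {f: str(t.get(f, "")).strip() for f in fields}
        writer.writerow(row)
    return buf.getvalue()

messy = [
    {"id": 101, "subject": "  Login broken ", "body": "Can't sign in\n", "internal": "skip-me"},
    {"id": 102, "subject": "Billing question", "body": "  Charged twice  "},
]
print(tickets_to_csv(messy))
```

The point is not the specific cleanup rules but the habit: decide which columns the question actually needs, normalize them, and hand the model one consistent table.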
So take the time: get it into Markdown, or get it into CSV if you can. Then, once you supply the data to the system, you want to very clearly specify the artifacts that will enable the AI to show it has done the work. This is another distinctive of ChatGPT-5 that I think will have to get wrapped into training curricula. You need to be at a point with your teams where they know the outputs the AI needs to write, build, and demonstrate in order to show that it has done the work. In other words, ChatGPT-5 does better if you force it to prove its work than if you simply tell it to do the work. So when you're asking for the output, say: "Give me the sentiment analysis. Give me the Python workbook that shows how you did it. Then also give me a plain-English summary of the rubric and scoring assessment you used for the sentiment analysis, along with any personas you developed." Something like that. Basically: show me what you did. Sure, give me the executive summary and the report, but show me all the artifacts along the way as well, and demand those as outputs.

Why is that important? Go back to the underlying architecture of ChatGPT-5. It's important because this model is like a skin stretched over a bunch of different machines in the background. You are basically specifying artifacts that trace to more tool calls in the background, and those get you more of what you want. When you specify the Python grader, for instance, you're effectively specifying tool use around a particular kind of grading you want done on this particular data set.
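One way to make that artifact-demanding pattern repeatable is a small prompt builder. This is a sketch under stated assumptions: the artifact list and function are hypothetical examples of the talk's advice, not a ChatGPT-5 feature.

```python
# Hypothetical helper: compose a prompt that demands proof-of-work
# artifacts alongside the final answer. The artifact names are
# illustrative, not an official ChatGPT-5 interface.
ARTIFACTS = [
    "the sentiment analysis itself",
    "the Python workbook showing how the analysis was computed",
    "a plain-English summary of the scoring rubric you used",
    "any customer personas you developed along the way",
]

def build_artifact_prompt(task, data_csv, artifacts=ARTIFACTS):
    demands = "\n".join(f"{i}. {a}" for i, a in enumerate(artifacts, 1))
    return (
        "Think hard about the following task.\n\n"
        f"Task: {task}\n\n"
        "Return ALL of the following artifacts, clearly labeled:\n"
        f"{demands}\n\n"
        "Data (CSV):\n"
        f"{data_csv}"
    )

prompt = build_artifact_prompt(
    "Assess overall customer sentiment and recurring themes.",
    "id,subject,body\n101,Login broken,Can't sign in",
)
print(prompt)
```

Each numbered demand maps to work the model must actually perform and show, which is the lever the talk describes.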
When you do that across a range of artifacts, you are hard-coding, or invoking, specific tool calls that you can then ensure are used against the data set in the way you want. That's why proving it matters, and that's why defining the artifacts seems to especially matter with GPT-5. It's going to be a big jump for teams that are used to just saying, "You know what, produce this thing." And I think it's a good jump, because in a sense we're asking the AI to do more meaningful work, to come back with more in-depth analysis that really wasn't possible before. Instead of thinking of AI as a text-output generator, which so many teams do, we're asking AI to work more multimodally, to take advantage of the math and code it can do, and to put that at the service of our teams, even if we're not in engineering. That's why training teams to think in artifacts really helps. That's the second key piece I want to call out.

There's more to come in this video, and a lot to dive into, but I want to pause for a second. As you think about this, remember: this is multiple models. You need to trace the call to the right model. So make sure you're asking the model to think hard and get into the right problem space, make sure you give it clean data, and make sure you're asking for artifacts along the way. I will also add, as we move forward in this discussion, that it is really, really important for you to be clear with your teams about the way you want certain problems addressed in GPT-5. A lot of execs settle for "AI did this" or "I used AI to do this." That used to be kind of okay.
It is now definitely not okay, because the difference between bad usage of GPT-5 and good usage of GPT-5 is so large. You cannot just tell your team "I used GPT-5 for this" anymore. You need to be specific and say, "This is how I used this tool to get this result." That specificity of communication takes more work on your part, and more work on the part of anyone teaching AI at your company. But the trade-off is that if you do that work now, you will build more AI fluency around a tool that is even less obviously powerful than previous AI models. At least when o3 came out, it was obviously powerful; you were talking to the reasoning model all the time. Now, with GPT-5, the reasoning model is one of several hiding back there, and you have to kind of feel for it in the dark of latent space.

And why, you ask, did OpenAI build ChatGPT-5 that way? Because people were complaining very loudly about the fact that there were a bunch of models and we had to pick which one to use. Well, the trade-off is that we don't have to pick the model anymore, but now we have to invoke the path through the model to the power we want behind it. That's the trade-off. You get only one model. Super simple. Everyone's using GPT-5. But now we have to talk more about how we invoke that power. There's no free lunch; that's how it works.

Now, as we round out this discussion and start to think about the wider implications of GPT-5, where we're going over the next year or two, and how AI transformation unfolds from here: we have been in an era characterized by model choice. We are not in that era anymore as of August. We are now in an era when the model choice has largely been made for us.
And it is model usage that will determine whether organizations survive or perish; specifically, whether organizations can quickly understand how to get the most out of the model for the use cases that tie to their data and their teams. You know those brown bags and that socializing of AI wins you would see happen, and maybe peter out after a month or two? Those really matter now. You need to be rapidly socializing how to use ChatGPT-5 across your business. You need to be defining, really explicitly, the new use cases in the business that you can now unlock because there's a larger context window and because thinking mode gives you synthesis across messy data that you didn't have before. Great: define them, name them. You will then have to tell people how to prompt for them and how to prep the data for them. And if you can get that right, you will have something most other companies don't, especially if they're just telling people to start using GPT-5.

At the same time, if you're using the non-reasoning version of GPT-5, which is very fast and writes a little better, you will have to get more aggressive about giving people really good prompts for the simple text-based work: prompts that drive it to write in your style, to avoid hallucinations and fact-check its work, and to make sure the answer is complete without overpromising. Those are all things I have seen in practice. Is it better about hallucination than o3? Somewhat, yeah.
Is it going to benefit from you telling it explicitly what the bar is for clarity, for adherence to reality and facts, and for explaining only the answer to your problem and not sixteen other things? Yes, it will benefit from that clarity. This is a model I have compared to a product manager on crack. The helpfulness is off the chain: it comes back, it gives instructions, it gives overhelpful suggestions. It has been trained to be a completeness artist. Your teams will need to learn to rein it in. Your teams will need to learn to give it guardrails. So even if we're not talking about the really complex data work, just the simple non-reasoning model, your teams still need to learn to rein it in in ways that let it provide very useful content rapidly. The last thing you want is for teams to give up and walk away from it, because then they don't get the value. Or, conversely, for teams to use it and just copy-paste from it. You will be able to tell, because suddenly all of your meetings will look super complete, with agenda items that no one pays attention to and no one acts on, because they're all made up by AI. Be careful: this model makes up completeness that your organization may not actually have internally. Be aware. This model likes to pretend things are complete. That's part of why it's a good coding model.

And that brings me to my last observation for teams, the new thing about this model that you need to pay attention to as a leader. There is a new category of software that launched on August 7th. It got called vibe coding, but it's not really vibe coding, or at least not the same vibe coding we've had for months.
The vibe coding we've had for months is: you go to Lovable, you go to Bolt, you go to Replit, you type something in, and it builds an app. It might have a backend and transactions and logins; it's a real app, or at least it's supposed to be, and you wrestle with it and maybe eventually you launch it. This is a lower category of software, not in the sense that it's less useful but in the sense that it's more casual. It is kitchen-table software: software for personal usage, and it was positioned for personal usage in the launch call. I've certainly been able to use it for personal usage, but I've also been able to use it for professional usage immediately, and people are sleeping on that.

As an example, you could ask ChatGPT-5, "Make me a Gantt chart for this really complicated, giant Excel spreadsheet." It is probably going to come back with an image of a Gantt chart that is not exactly what you want, and you're going to swear and say this thing can't do Gantt charts. Have you tried it with code? Go to the model and say: "Here's the data. Respond in code. Build a Gantt chart app that shows this." I did that with the Apollo 13 mission: I built out a whole Gantt chart in code, after it could not do it by visualizing directly. In other words, think of code as a tool your teams can use for project artifacts, casual, low-stakes artifacts where you share the link and say, "I built a ChatGPT app for this; this is our Gantt chart for the project," or "I built a ChatGPT app for this; this is our project update for the week." That kind of thing is now software. And teams have no idea that it's there. They don't know it's there. No one taught them that in previous prompting level-ups and AI courses.
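To make the Gantt-chart idea concrete, here is a minimal, pure-Python sketch of the kind of artifact being described. In practice you would ask ChatGPT-5 itself to generate and iterate on much richer code; the task list below is invented for illustration.

```python
def ascii_gantt(tasks, width=40):
    """Render (name, start_day, duration_days) tuples as a text Gantt chart."""
    end = max(start + dur for _, start, dur in tasks)
    scale = width / end  # days per character column
    lines = []
    for name, start, dur in tasks:
        pad = " " * round(start * scale)
        bar = "#" * max(1, round(dur * scale))
        lines.append(f"{name:<12}|{pad}{bar}")
    return "\n".join(lines)

# Illustrative schedule, not from the talk.
tasks = [
    ("Spec", 0, 3),
    ("Prototype", 2, 5),
    ("Review", 7, 2),
    ("Launch", 9, 1),
]
print(ascii_gantt(tasks))
```

The value is not this particular renderer but the habit of asking the model for code-backed, shareable artifacts instead of one-off images.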
And do you know why? Because that wasn't possible before. This is really the first time we've had reasonably good coding combined with a reasonably complete ability to represent an app. I tried some of the stuff I worked on in Claude, which is a good coding model, and it could not do this natively. I'm not saying anything against Claude via the API; it's an extraordinary model for coding. But if you want native representation in the canvas, where you don't want a development environment or anything else, you just want to try it and see if it codes up a little app, GPT-5 is the best thing I've seen. It is absolutely, potentially transformative if you can tell your people very clearly: for small presentations, for small things to work on that represent data in interesting, visual, maybe slightly interactive ways, you should be able to use an app. The weekly business review is a classic one: you have data, you need to represent it, and you need to click around and look at the metrics. You should be able to use an app for that, and you should be able to tell ChatGPT-5 to build it.

Now, not everyone is going to do that. A lot of people will say they want to stick to their existing templates. The advantage, if you do get into the culture of building apps, is that you really unlock groundswell innovation from your team. Your team will come up with ideas for apps you did not have, if you can bless it, remind them that it's a good thing, remind them that you're supportive of this kind of kitchen-table software, and tell them you want them to use it to solve interesting problems. You will not imagine the 200 use cases across your business.
You'll imagine three or four of them, try them out, and let people know: these are awesome little use cases; I tried one, and here it is. I tried a travel itinerary one, and it's really fun. When I showed people that tiny travel itinerary app I made, I could see their eyes light up: "Oh, let me remix that. Let me try that." And ChatGPT makes that so easy. You can remix it like you're remixing music. You can go back in and say, "The travel itinerary is for a different place now; it's for when I go to the Grand Canyon, so remake it." It's really easy to do. "The weekly business review is for sales now, not marketing; I want to remix that artifact." People need your blessing to use ChatGPT in new ways, because the assumption is often: if I try it and fail, that will be bad. This goes back to the classics of change management: you need to bless people to fail so they can learn to succeed.

Okay, wrapping all of this up, what have we learned here? Number one, rollouts for ChatGPT-5 are going to be different from rollouts for anything else because of the way it has been made into one model. When we think about it from that frame, certain implications fall out that are new and different from previous AI rollouts for orgs. First, you have to tell people how to prompt and access the power behind the model; that's where I called out "think hard." Second, it can tackle big, gnarly, data-heavy problems in the chat the way it never could before. You have to be responsible for the data you put in and for making it clean, and you have to be responsible for teaching people to prompt it well.
And you have to be responsible for reminding people that it does that work best when you invoke those tools by demanding artifacts, by demanding proof of work, by demanding that it actually shows how it did the work. Not that you tell it how to do the work, but that you demand the artifacts that show it did the work. That's a fine distinction, but it's important to emphasize: telling it how doesn't matter much, while saying "I want you to show me the grader you used" is actually helpful.

Then, moving on from the data analysis piece, we talked about the importance of making sure your team feels comfortable using AI in new, unexpected ways, because ChatGPT-5 unlocks those. We talked about the coding use case, an entirely new class of work. How can you help the team understand that? How can you socialize these coded artifacts that become, essentially, a new way of doing business around the office? And we talked about the importance of making sure teams feel comfortable using ChatGPT-5 for non-reasoning tasks, and what that looks like, because non-reasoning ChatGPT-5 is very good. It's extremely fast. It's extremely coherent. But it will be up to you to convey: these are the guardrails, this is the house style, this is how I want hallucinations handled, and this is when you can and can't be creative, because hallucination and creativity are related.
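That guardrail guidance can be captured as a reusable preamble for the fast, non-reasoning model. A sketch follows; the house-style rules and helper are placeholders that each team would replace with its own.

```python
# Illustrative guardrail preamble for simple text tasks. Every rule here
# is a placeholder example, not a recommendation from the talk verbatim.
GUARDRAILS = """\
House style: short sentences, active voice, no buzzwords.
Facts: only state facts present in the source material below; if a fact
is missing, say "not in source" instead of guessing.
Scope: answer the question asked and nothing else, with no extra suggestions.
Completeness: flag anything you could not verify rather than papering over it.
"""

def guarded_prompt(question, source_material):
    """Prepend the team's guardrails to a question plus its source data."""
    return f"{GUARDRAILS}\nQuestion: {question}\n\nSource material:\n{source_material}"

p = guarded_prompt("Summarize last week's churn numbers.",
                   "Week 32 churn: 2.1% (down from 2.4%).")
print(p)
```

Pinning the rules in a shared preamble is one way to stop the "completeness artist" behavior from inventing material the organization does not actually have.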
So if it were me, I would go in Monday morning, look at the current AI transformation playbook you have, and basically say: let's assume we can do 20% more with AI because of ChatGPT-5, and let's assume a lot of the way we taught the org is going to have to change, because the old ways of learning are gone. I hope you weren't still doing "think step by step" in your AI transformation playbook, but if you were, that's got to go. I hope you weren't putting too much emphasis on model selection, but that's going to have to go too. And I hope you were able to communicate that whatever you're teaching people will update as models come out, because that's still true. You're going to have to update how people think about which tasks they select and how they share their work; I called that out. You're going to have to update how you think about prompt libraries and what they contain, because prompt libraries might now contain not just prompts but also the artifacts that came out of good prompts. Maybe you're saying, "I want a customer sentiment analyzer, and here's the prompt for it, but here's the Python autograder for it too, and I want you to have that." Or the prompt library contains not just the prompt to build the app but also an example of the app that you can remix. There's more evolving here than we've seen before. Do I have it all figured out? I do not. But I do have some strong convictions on where GPT-5 is going within organizations, and I wanted to share these early field notes with you so that you get a sense of what to focus on as you roll out GPT-5 to your orgs. Good luck.
Let me know how it's going, and I will continue to report from the field as I dig into GPT-5 transformation.
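As a closing illustration of the richer prompt library described in the talk, one entry bundling a prompt with the artifacts that came out of a good run of it, here is one possible shape for such a record. All field names and values are invented for this sketch; there is no standard schema.

```python
import json

# Hypothetical prompt-library entry: the prompt plus its proven artifacts.
entry = {
    "name": "customer-sentiment-analyzer",
    "prompt": "Think hard. Score each ticket's sentiment against the attached "
              "rubric and return the analysis plus all working artifacts.",
    "artifacts": {
        "python_autograder": "grade_sentiment.py",          # saved from a good run
        "rubric_summary": "rubric.md",
        "example_app": "sentiment-dashboard (remixable link)",
    },
    "model_cue": "think hard",
    "last_validated": "2025-08",
}
print(json.dumps(entry, indent=2))
```

Storing entries in a structured form like this makes it easy for teammates to remix both the prompt and the artifacts rather than starting from a blank chat.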