
ChatGPT‑5 Won’t Solve Data Readiness

Key Points

  • The speaker argues that most AI challenges faced by businesses are rooted in human and organizational factors, not shortcomings of the models themselves.
  • Data readiness is identified as the single biggest obstacle—roughly 78 % of firms cite poor‑quality, unstructured data as the reason AI projects stall, and no LLM can magically fix messy inputs.
  • Relying on “magic wand” thinking—simply dumping raw documents into a model—fails because the data lacks semantic organization, sub‑corpora, and clear meaning, which are essential for effective AI outcomes.
  • Consequently, expectations that a future model like ChatGPT‑5 will automatically solve these data‑related problems are unrealistic; disciplined, often “boring” data‑preparation work remains the critical path to success.


# ChatGPT‑5 Won’t Solve Data Readiness

**Source:** [https://www.youtube.com/watch?v=sf_OYY9lMlw](https://www.youtube.com/watch?v=sf_OYY9lMlw)
**Duration:** 00:21:25

## Sections

- [00:00:00](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=0s) **Why AI Won’t Fix Data Problems** - The speaker explains that despite the hype around ChatGPT‑5, most current AI challenges stem from human and organizational factors, particularly poor data readiness, and cannot be solved by any new model alone.
- [00:05:09](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=309s) **Aligning AI Projects to KPIs** - The speaker emphasizes that AI models are merely components of a broader workflow, and that successful AI initiatives depend on defining clear, measurable business objectives tied to important KPIs to prevent vague or shifting goals.
- [00:08:28](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=508s) **Beyond Off‑the‑Shelf LLMs** - The speaker cautions that relying on generic foundation models for niche back‑office tasks is misguided, recommending a focus on data quality, constraints, architecture, prompt engineering, and retrieval‑augmented techniques instead of attempting costly model training.
- [00:12:32](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=752s) **Overhyped AI Needs Change Management** - The speaker warns that businesses often demand impossibly high AI accuracy, overlook necessary human fallback and change‑management processes, and consequently fail to capture the technology’s true value.
- [00:17:20](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=1040s) **Prioritizing AI Security Over Speed** - The speaker argues that rushing AI deployments without addressing security and privacy risks is unacceptable, emphasizing that compliance is straightforward and must be addressed from the outset regardless of a startup’s urgency or an enterprise’s timeline.
- [00:20:38](https://www.youtube.com/watch?v=sf_OYY9lMlw&t=1238s) **Proper Integration Over Hasty Adoption** - The speaker warns that merely inserting a powerful AI model like GPT‑5 into an unprepared business will fail, emphasizing the necessity of solid change‑management foundations before chasing rapid tech upgrades.

## Full Transcript
Today I want to talk to you about the things ChatGPT‑5 will not fix. What? Why? I don't have a special magic magnifying glass and a magic time machine that allow me to go and examine closely what it will be like in the future. Instead, I have a thorough understanding, based on lots of company and boardroom experience, of how business is actually using AI. And I know that most of the issues we're seeing today with AI are human and organizational problems, not AI problems. So when the model makers make big claims about how incredible their new models are, I always filter them back through the actual organizational realities. I want to go through with you a few of the specific problem areas, with very boring solutions, that I see over and over again in companies, and that ChatGPT‑5 is not likely to magically fix.

Number one, the biggest single one I see: magic wand thinking about your data. Over and over again, I see the fallacy: "We just thought we could give the data to whatever LLM we're using." They're on the Azure cloud, they're using Copilot, they're using Gemini, they're using ChatGPT. "We just thought we could give it to the LLM and it would fix it." No, it will not fix it. In fact, the 80/20 rule is literally true here: 78% of firms that struggle with AI, according to TechRadar, point to data readiness as the root cause. Data readiness is not something an LLM will magically fix. There is a school of thought on the more advanced, researchy side of AI that eventually this will get fixed, because AI will become so good at recognizing the mess of human data, and have such big context windows and so much processing power, that it will just read over all of this mess and magically make sense of it for us.
That is often voiced as if it were here now and able to do that work today. None of those things are true. And there are a lot of concrete reasons why, even if it were true, it would still be a bad idea to give your AI bad data. The cleaner your data inputs, the more likely you are to have a strong AI experience. Do not use magic wand thinking about your data. I have been in situations where people ask me, "What is wrong with my data?" And I will look at the data, and it has no semantic meaning to an LLM. It's just a blob of data: uncategorized, unorganized. That's the issue right there; you don't have to look further for a problem at that point. If you're telling me that you have thousands and thousands of documents forming one undifferentiated, gigantic blob of text, and you're expecting the model to make sense of all of it, it's not going to, because you haven't given it any sense of the semantic meaning in the larger data structures. You have no sub-corpus semantic structures to work with (by corpus I mean the whole collection of documents). You should have some sense of meaning. For example, maybe your internal wiki has different sections that carry some semantic meaning, and article titles that are worth separating out because they give you a sense of where you are in the wiki. That's just a very simple example. If you're dealing with official documents like health records, you're going to have semantic meaning for the patient name, semantic meaning for the diagnosis, and so on. The more you get really clear about what data you want to convey, the easier it is going to be to actually use AI to pull the data.
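The sub-corpus idea above can be sketched in code. This is a minimal, hypothetical illustration (the wiki title, section name, and chunking scheme are all invented for the example) of carrying semantic metadata alongside every chunk of text, rather than handing a model one undifferentiated blob:

```python
# Sketch: attach semantic context to document chunks before they reach
# an LLM or a retrieval index. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source: str                     # which document this came from
    section: str                    # where it sits in the larger corpus
    metadata: dict = field(default_factory=dict)

def chunk_wiki_article(title: str, section: str, body: str,
                       max_chars: int = 500) -> list[Chunk]:
    """Split an article body, carrying title/section with every piece."""
    pieces = [body[i:i + max_chars] for i in range(0, len(body), max_chars)]
    return [
        Chunk(text=p, source=title, section=section,
              metadata={"title": title, "part": n})
        for n, p in enumerate(pieces)
    ]

chunks = chunk_wiki_article(
    title="Expense Policy",
    section="Finance",
    body="Employees may claim travel costs..." * 20,
)
print(chunks[0].section)  # Finance
```

Even this trivial structure lets a retrieval layer filter by section or document before anything is sent to a model, which a raw text dump cannot do.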
The second issue I see is closely correlated: magic wand thinking about models. People tend to assume they need faster, better, stronger reasoner models. I am a big advocate of moving your daily driver off of ChatGPT‑4o onto a stronger model; I have said so explicitly. Go to o3, go to Gemini 2.5 Pro, go to Opus 4. But as much as that helps personal productivity, it does not mean that every single aspect of an AI job has to be done by the best reasoner model available. Look, if you just want to get columns sorted correctly in a PDF, it does not have to be sorted by the best reasoner model on the planet. If you just want to carefully go through a nicely delineated data set and extract all of the values associated with a particular firm, that doesn't necessarily have to be a reasoner model either. In fact, it might not even be an LLM; it might just be a fancy SQL query. People tend to overpower their pipelines, and they pay for it. They're basically paying a Ferrari premium so that they feel like they've got the best model, which they think is what intelligence is. But really, intelligence is well-organized data, the right model applied against that data with the right queries, the right guardrails, and the right evals, surfaced in a way that a human can find useful. Does that make sense? It's the right data; it's the right model, constrained in ways that enable it to do useful work, surfaced in a way that a human can understand and use. That's how intelligence actually works in the workplace. And do you notice how small a role the model plays in that? ChatGPT‑5 may be the best Ferrari in the business when it comes out, but it's a tiny part of that overall flow of value. So you have to think more broadly if you are trying to build interesting AI work.
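To make the "fancy SQL query" point concrete, here is a small sketch using SQLite. The `filings` table and its contents are invented for the example; the point is that once data is delineated, a plain query extracts every value for a firm deterministically, with no model (and no token bill) in the loop:

```python
# Extracting all values for one firm from structured data: a query,
# not an LLM. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filings (firm TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO filings VALUES (?, ?, ?)",
    [("Acme", "revenue", 120.0), ("Acme", "costs", 80.0),
     ("Globex", "revenue", 200.0)],
)

# Exact, repeatable, auditable: every metric recorded for "Acme".
rows = conn.execute(
    "SELECT metric, value FROM filings WHERE firm = ? ORDER BY metric",
    ("Acme",),
).fetchall()
print(rows)  # [('costs', 80.0), ('revenue', 120.0)]
```

The design choice is the speaker's point in miniature: match the tool to the task, and reserve expensive reasoner models for the steps that actually need them.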
The third issue is vague or shifting objectives, a human problem, again and again. You need a meaningful business KPI that you are nailing your AI project to, one that solves a specific business problem, and the business problem has to matter. If your business problem doesn't matter, I've seen over and over again that people give up and walk away. If your business problem matters but isn't tied to a KPI, so it's just annoying to a small section of the team, it's also not going to get prioritized. It has to matter to the organization, be measurable in a business KPI, and you have to nail that objective down publicly every time you can, in order to keep the objective from moving and shifting when you inevitably run into problems. AI projects are a series of nested problem sets that you continue to solve until you actually get to value. And you're not going to have the patience to work through those nested problem sets if you don't have a clear KPI that the business cares about, that the C-suite cares about, that they are going to move. That's really what leads people, and teams, to persist.

The fourth issue I see is treating AI strategy as separate from business strategy. It's a subtle one, and again, a model doesn't fix it. AI strategy cannot be separate from business strategy if you want to avoid wasting budget. If you want to actually make progress on becoming an AI-native business, you cannot have the AI strategy sitting in the corner with the AI guy or AI gal. And that's what companies do. You cannot just do AI as a project.
You actually have to integrate AI strategy into your business strategy in a way that makes sense, which requires executives taking the time to understand how large language models work at a fairly granular level, so that they can see the leverage points in their business where it applies. Let's say you are working in a business that you don't think has anything to do with AI. Let's say customer service is paramount: it's all about putting your people in front of the customer with a white-glove experience. Executives in those situations often tell me, "AI might be useful here and there, but it's not transformational for our business. We're not buying the hype." Wrong. There are going to be people in your business, and in competitor businesses, who see opportunities that you will miss because of that attitude. Just because your value proposition is very resilient to AI, congratulations, you have a human touch and that's going to be very valuable, there is no reason the back-office piece doesn't need an entire AI strategy focused on cutting your business KPI costs. And by the way, that doesn't just mean firing people. It may also mean keeping track of everything you sell more efficiently, being able to actively query against your data sets more efficiently, or being able to price more thoughtfully and efficiently. There's half a dozen things you can do in the back office. Document management, right? It's a very boring thing, but it becomes a really critical piece of running the business well, and AI can help.

Number five, over-relying on off-the-shelf foundation models.
People think that a generic LLM will fit every niche domain, and if it doesn't, they immediately assume they have to train their own model, but they don't even know what training their own model means. I've had people in almost every conversation ask, "Do we need to train a model for this?" In a sense, I sort of blame the model makers: they've made it feel believable and plausible to train your own model. It's not an easy task, and I don't recommend it. Instead, think about your data and the constraints and guardrails that enable your model to flourish. Think about your architecture. Think about your prompt engineering. Think about whether you need a data set formulated for retrieval-augmented generation (RAG) or not. Think about the degree to which you need to help the model have the context to process the job appropriately, and what providing that context means. But people don't. They say, "Well, the model should know it." There is, again, this fallacy that the model is intelligent because of what it knows, when in reality the model, at a granular level, is intelligent because of the way it transforms. These are transformer-based architectures, and the intelligence comes from the way the model transforms its input and predicts the next token, not from what it magically knows. We think otherwise because of the way these models are trained and reinforcement-learned to be helpful, but it's not actually true when you're building production systems.

Number six, ignoring integrations and operations for AI. Teams will demo a proof of concept and discover they have no evals, no monitoring, no rollbacks. They don't have a way for the model to get pulled back out of production if there's an issue.
They don't know what the bar is to reach production, or what happens if it reaches production. They don't know how to monitor it. They don't know what tool sets they are integrated with, and therefore what vulnerabilities they have if those tool sets change. They don't know how they're going to refresh the underlying data sets. They just think, again, the model will do it: if we put the model into production, the model will solve the problem. That is exactly the mindset that had Air Canada in court over a bereavement policy that its AI made up. You cannot ignore AI operations. In fact, I would argue that if you are looking for a career path, getting into AI operations, figuring out how to stably and safely deploy AI in production, pull it back, and handle sandboxes, is a big deal. It is not a trivial thing, and it is something that most organizations ignore at their peril. I have to remind people over and over again that this has the characteristics of software. You cannot deploy it as if it were not software and expect it to work.

Okay, number seven. Similarly, no human in the loop. This was famously the Klarna story, but I see it in other cases where you have overeager CEOs who have read the LinkedIn hype and say, "We don't need these people. I want to hire you so that people will lose their jobs and I can cut this team." And they don't realize that Klarna had to rehire their customer service team.
They don't realize that if you do not have the ability to go to a human being and find out what the real truth is when AI goes off the rails, you're inviting hallucinations, compliance breaches, brand damage, and a customer experience that costs you the heart of the business. So it's really important, when you design systems, to anticipate non-happy paths. This is just what we drill in product management: don't just anticipate the happy path, anticipate the miserable path. How do you make it more graceful, less miserable, more likely to retain the customer? Similarly with AI: when the AI goes wrong, and the human knows it by the end of the conversation, and the AI is not admitting it, how do you get the human help? People don't spend enough time thinking about that. They expect the AI to be 100% accurate when they would never expect a human to be 100% accurate. That's not a reasonable bar. Something can be tremendously useful and only 87% correct. So depending on your application, you may be in a situation where the AI is 87% correct, you need a human for the other 13%, and your job is to design a system that switches cleanly between those two cases. I see very little investment from most businesses in figuring out that number and how to solve that problem.

Number eight, underinvesting in change management. Massive issue. People just assume that if they put the AI in front of the team, it's going to magically work. Again, model makers are somewhat guilty here. They tell you over and over again, like the leaks we've seen on ChatGPT‑5 this week, leak after leak after leak, that it's the best thing since sliced bread.
It's going to be incredible, it's amazing, and people are going to go through this cycle all over again, thinking they can just hand organizations ChatGPT‑5 and it will magically do wonders for their bottom line. That may be convenient for the model makers from a sales perspective, but it's not true. You have to go through a change management and upskilling process to get people using AI. Otherwise, the chatbot loads nicely, they interact with it for two or three basic tasks a week, and you don't come close to realizing the power of what the model can do for you. Not close. And yet most organizations are investing more in the model and the AI technical stack than they are in the people. They're not investing in the change management or the upskilling, and to be fair, it's sort of like pulling teeth to get them to, because no one talks about it. The model makers are emphasizing the tech, the tech, the tech, and of course you're going to listen to them and think it's the tech. You're not thinking about the fact that this is a new general-purpose technology: we need people to change in order to usefully take advantage of it. You can't expect that people will be put on what is effectively an entirely new digital assembly line, told to just figure it out, and that it will magically work. We would not do that in a factory; why would we do it here? And yet that's what we're doing.

Number nine, forgetting the total cost of ownership. So often people don't think about the token costs. They don't think about the developer sustainment cost.
They don't think about the hit-by-a-bus problem, where one developer is doing all of this, and if, God forbid, something happens to that developer, they're done. They don't think about the sustainment cost of evaluating these models in production. None of that. They're just asking, "Can we get this to production? If we can, great, we'll worry about the rest later," because there's such a forced-march approach to beating your competitors to market. And I get it; it is an existential imperative. You have to be able to get AI into market, so I understand the incentive, and I don't think they're incorrect. But understanding cloud inference costs, understanding how vector DB queries work, and understanding how cost guardrails can be maintained is really important, or you can be upside down on your margins really fast. And that is just for serving the model, if you're serving it to customers. If you are serving it internally, it is also a matter of making sure you're extending your use cases across more of the internal footprint in a cost-sustainable manner. If you are trying to build subsequent systems, it is also recognizing that AI systems take more sustaining in production than traditional software. You have to evaluate them continuously; you can't just test them in QA and forget them. In fact, I would argue the 80/20 ratio flips, and 80% of your time should be spent looking at production use cases, because you inherently can't test the model adequately for all use cases before you launch. You have to hit a certain threshold, decide to launch, and keep evaluating. And that means more sustainment cost that no one tends to factor in.
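A back-of-the-envelope token cost model makes the inference-cost point concrete. The per-million-token prices and volumes below are placeholder assumptions, not any vendor's actual rates; substitute your provider's published pricing:

```python
# Rough monthly inference cost: tokens per request times price per token
# times volume. All figures are illustrative placeholders.
def monthly_inference_cost(
    requests_per_day: int,
    input_tokens: int,          # prompt + retrieved context per request
    output_tokens: int,         # generated tokens per request
    usd_per_1m_input: float,
    usd_per_1m_output: float,
    days: int = 30,
) -> float:
    per_request = (input_tokens * usd_per_1m_input
                   + output_tokens * usd_per_1m_output) / 1_000_000
    return per_request * requests_per_day * days

cost = monthly_inference_cost(
    requests_per_day=10_000,
    input_tokens=2_000,
    output_tokens=500,
    usd_per_1m_input=2.50,      # placeholder price
    usd_per_1m_output=10.00,    # placeholder price
)
print(f"${cost:,.2f}/month")  # $3,000.00/month
```

Even this crude model shows why retrieved context size and model choice dominate serving cost, and why a pipeline that routes easy requests to cheaper models (or to no model at all) protects margins.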
The total cost of ownership is largely a broken calculator at most of the organizations I work with. It's bad. And again, the model makers say, implicitly and explicitly, over and over, that the tech will fix it. They're sort of victims of their own success, because normally when vendors say things like that, people discount it; everyone has 70 years of Western-market advertising in their heads, and they always discount claims. But these companies have actually delivered. They've taught the rocks to think. They've developed an incredible new general-purpose technology, and it's really, really good. And so our discount disappears with model makers, and we think, well, maybe they're actually right, maybe they're not hyping it, maybe it really is this good, because they've done this incredible job with this product, which they have. None of this should be taken to say that these LLMs aren't incredible and can't do great work. It's about how you set them up to do great work in business, and that's where there's a missing stair. People miss that.

The last one I want to call out, number ten: security and privacy shortcuts. "We'll worry about where the data lives later." That's a classic one. "I'm not sure what the security requirements are; let's just get started, get it up, and we'll figure it out." Or, "I haven't looked at the terms of service for the vendor. I don't know how they relate to the foundation model maker. I'll let someone else figure that out." You can't do that. This is one of those things where you have to know the story of the data from day one, because the risk of misuse is too high.
This is not a case where ignoring those things really helps you go faster, partly because solutions exist. You can read the terms of service quickly and easily. You can quickly work out which secure cloud environment you want to deploy into. You can become compliant and secure with relatively little effort two and a half years into the AI revolution. So there is no excuse; you can't claim going faster as a reason. You have to take security and privacy seriously. And again, this is one where businesses are of two minds. The scrappy, startupy ones in a desperate position tend to say, "You know what, we'll deal with it later." The more enterprisey ones tend to say, "We'll do security and privacy, and nothing but security and privacy, for six months, and then eventually we'll move on to the next thing." That is its own risk, because it's actually not that hard to deploy a secure and private AI at this point. It does not take six months in most cases, for most footprints. If you're spending that long on it, you are probably using what Amazon would call day-two thinking: treating it as a process rather than looking at the outcome you want to drive. So it is possible to take this tenth one and push it too far the other way, becoming over-obsessed with security and privacy. I say that because we know this is an issue, and the cloud providers are heavily incentivized to make it solvable by businesses at scale very quickly, because they want your business. Google Cloud and Azure see this as the best chance they've had in years to steal business from AWS, because they are farther along on the AI side.
They are absolutely going to be obsessed with delivering a secure, private environment to your spec. So there's no excuse: you should get it, get it quickly, and then move on.

Okay. I hope you look through these ten, which, by the way, are not an exhaustive list. There's other stuff too. The CEO not knowing how to use AI is my favorite secret eleventh one. That is a problem, and I've mentioned it before. If the CEO doesn't know how to use AI, it's really hard to drive AI transformation. Period. A general-purpose technology doesn't work if half the people at the top of the chain don't really know how to use it. There are lots of other examples. I want you to look at these examples and notice how many of them are about people and process, and how many are about technical architecture, not just models or data. Again and again, I've called that out. This is why I continue to say new models are great. I'm glad we're getting them. It's a phenomenal time to be alive and working. But they're not magic bullets. They don't magically solve everything. Instead, when you have good architecture, and when you've solved the problems I've outlined here, they help you make more return on investment than you would otherwise. A better model in a clean data environment, with an excellent human-in-the-loop safety net, good MLOps deployment practices, and a good AI strategy, is going to go farther, right?
It's like putting an engine in a properly constructed car: you're actually going to get the most out of it, as opposed to the people trying to jam the engine into a janky Model T Ford and saying, "Well, we've got a Formula 1 engine in here. Just floor it, I'm sure it'll work." No, it's not going to work. Take the time to set your business up to be ready for new models. And by the way, ChatGPT‑5 is going to be great, but there's going to be another model along in a month, two months, three months. We are in the middle of an exponential curve, so we're going to see more exponential improvements. That is why it's so important to focus on these durable aspects of change management for your business. There's no substitute. I hope you have enjoyed thinking about all the ways that ChatGPT‑5 will not magically solve all your problems. Cheers.