Learning Library


MIT vs Wharton AI Success Metrics

Key Points

  • Conflicting AI ROI studies (MIT’s 95% failure rate vs. Wharton’s 75% success rate) are creating widespread confusion for businesses.
  • MIT’s unusually strict success criteria require measurable bottom-line financial impact within a short timeframe, inflating the failure rate.
  • Wharton relied on executive surveys that include broader metrics such as productivity, time savings, and throughput, yielding a higher reported success rate.
  • The disparity between the studies is essentially an “apples‑to‑oranges” comparison, not a direct contradiction.
  • While MIT’s high bar emphasizes the need for rigorous financial validation, a balanced approach that also considers operational benefits would give enterprises clearer, more actionable guidance.

Sections

  • [00:00:00](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=0s) **Decoding Conflicting AI Success Rates** - The speaker critiques the wildly differing MIT (95% failure) and Wharton (75% success) enterprise AI studies, explains how methodological filters create contradictory headlines, and aims to give businesses a clear, realistic understanding of AI project outcomes.
  • [00:03:11](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=191s) **Balancing AI ROI Perspectives** - The speaker critiques MIT’s high-bar expectations and Wharton’s pragmatic analysis of AI ROI, urges the audience to ignore sensationalist headlines, and stresses that successful AI adoption is steadier and grounded in practical implementation.
  • [00:06:21](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=381s) **Team-Level Context Fluency in AI** - The speaker stresses that organizations gain multiplied business value by teaching teams to articulate local, team-specific context to LLMs and combining this with strong problem-solving skills, turning domain uncertainty into actionable AI performance.
  • [00:10:19](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=619s) **Reversing Skill Ownership in AI** - The speaker explains that, unlike traditional settings where team managers held problem-solving expertise, the AI era flips this dynamic: individuals must now assume ownership of challenges while teams collectively develop the technical skills to leverage large language models.
  • [00:13:26](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=806s) **Empowering Individuals in AI Organizations** - The speaker argues that AI-native companies must place ownership of AI use at the individual contributor level, reshaping training and enabling teams to share prompts and custom GPTs to commoditize expertise.
  • [00:17:14](https://www.youtube.com/watch?v=X7PWBlxJV1Q&t=1034s) **Cultivating Taste in AI-Driven Work** - The speaker explains that in the AI era “taste” means the democratized ability to identify and prioritize the most valuable problems, solutions, and learning methods, allowing teams to allocate effort where it yields the greatest organizational profit.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=X7PWBlxJV1Q](https://www.youtube.com/watch?v=X7PWBlxJV1Q)
**Duration:** 00:20:15
You know, I don't blame people when they are confused about AI, because the studies that are coming out are also confused. This week, on October 28th, Wharton came out with a study on generative AI return on investment and implementation at very large companies. If that sounds like a familiar subject, it should, because MIT studied the same group of companies just a few months ago. The kicker is this: MIT's study found a 95% failure rate on AI projects, and Wharton came back with a 75% success rate. It does not take a lot of mathematical skill to figure out that these are not compatible numbers. They cannot both be correct. So I want to spend some time unpacking what is really going on at the enterprise, how we put these two numbers together, and what a reasonable path forward looks like, one that cuts through, frankly, the headline nausea I get from all of this: top lines that don't make sense and that keep changing all the time. Business needs consistency. Business needs clarity. And business needs to be able to actually build in a way that makes sense. My goal is to ground you by the end of this so that you don't get spun and confused when people ask, "Is it 75%? Is it 95%?"

Here's what's really going on. The 95% came out of the extremely tight screen that MIT put on project success. That is one of the ways they effectively engineered a headline that would go viral. And yes, I'm just going to say it: I think they engineered the headline, because the screen is tighter than almost any other internal software measure I have ever seen. What MIT was saying is that every project is by default a failure unless you can measure a dollars-and-cents impact on the bottom line, not the top line, of the business within just a few months, something like 6 or 12 months.
If you can't do that, the project counts as useless. No other software I have seen is measured that way when you buy it. You always measure it on internal metrics that you think will map to larger business value.

And that brings us to the Wharton study and its 75% success rate, because Wharton took more of that approach. Wharton's approach was to talk to executives and let them explain how they're measuring ROI. What executives said, overwhelmingly, is that they're using other metrics. They're not just using dollars and cents on the bottom line. They're looking at productivity. They're looking at time saved. They're looking at throughput. And when you look at all of those, execs feel you get a very clear measure of success. That's where the 75% number comes from. So if you want to know what the heck is going on and why the studies differ: it's apples and oranges. You have a very hard profit measure from MIT and a looser, more conventional software ROI picture from Wharton.

Here's what both of them are not getting right, and where I have sympathy. I think MIT is correct that we need to hold AI to a pretty high bar. This is a transformative technology. It's also an expensive technology; it is on the verge of being 10x or more expensive per employee than any software that came before. So yes, we're going to have different ROI measures, and I think MIT is getting at something when it challenges leadership to think differently about software purchase ROI. But I think Wharton is doing a great job analyzing the reality on the ground: the way execs, by definition and by convention, really measure things.
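To make the apples-and-oranges point concrete, here is a toy sketch of my own (the numbers and criteria are invented for illustration, not drawn from either study) showing how the same pilot project can fail an MIT-style bottom-line screen while passing a Wharton-style operational-metrics screen:

```python
# Toy illustration of the two success screens. All thresholds and
# project numbers below are hypothetical, not from MIT or Wharton.

def mit_style_success(project: dict) -> bool:
    """Strict screen: measurable bottom-line profit impact within months."""
    return project["profit_impact_usd"] > 0 and project["months_to_impact"] <= 12

def wharton_style_success(project: dict) -> bool:
    """Broader screen: any clear operational gain executives report."""
    gains = ("productivity_gain", "hours_saved", "throughput_gain")
    return any(project.get(k, 0) > 0 for k in gains)

pilot = {
    "profit_impact_usd": 0,   # no profit impact attributable yet
    "months_to_impact": 18,   # too slow for the strict screen
    "hours_saved": 1200,      # but clear, measurable time savings
}
print(mit_style_success(pilot), wharton_style_success(pilot))  # False True
```

The same project lands on opposite sides of the two headlines, which is exactly the speaker's point: the studies measure different things, so their percentages are not in direct conflict.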
So if I were to take one ask away from all of this, my ask is that you not pay too much attention to these kinds of headlines. I get inbounds, right? I get emails, I get messages coming in. I get it. It is confusing when the news media loves to report contradictory information. But the reality at organizations that are succeeding with AI is a lot steadier, and that's the piece I want to leave you with from a grounding perspective. When we build with AI systems and they actually work, there are a few things that align really well with both of these studies, and I'll explain that. But the studies don't get at them. They don't get at how to positively build, and, you know me, that's what I love to do.

The first piece I want to lay out for you: think of these as building blocks you can use to build institutional fluency. I talked about individual fluency a couple of weeks ago; today I want to talk about institutional fluency. I think it is one of the missing pieces that connects these two studies, and understanding how it works will help you not get swept and pushed around when the next study comes out with whatever number. The biggest piece of institutional fluency, if you want to set up company-wide fluency on AI, is this: your company has to get good at understanding and shaping context awareness for teams and individuals. And I think teams are really the atomic unit here. Individuals come and go, but teams are steady. Teams take care of a particular vertical. Teams own a particular domain.
Institutions that are fluent in AI understand that the value of a team is the context it inhabits, and specifically the context it can articulate to AI systems. When we talk about context engineering, typically that's a job. I'm suggesting we think of it less as a job and more as everybody's job. Context is something we all bring to the table. Context is something teams need to deliberately maintain. What do I mean by that? If you understand at a very deep level how your domain actually works, how you actually drive value for the business, the unique processes and workflows you can use, and the areas of uncertainty you need to explore in your domain to get better, and if you can articulate all of that intentionally to an LLM as a team, you are going to be in a position to deliver multiplied value to the business, relative to individuals working on AI alone or to the work we did before 2022, pre-generative AI.

Context is king here. Context feeds an AI what it needs to be useful at a local level within the business. If you can't figure out how to help your team articulate context to the AI, you're going to have trouble with everything else. And if you look at the Wharton study and its successes, part of what's going on is that leaders say they are seeing accountable acceleration among teams. The way I read that is that leaders are starting to see teams pick up and use context in their disciplines to drive value, and the executive gets to measure it, potentially take credit for it, and count it as a success. So context is the first piece I want to call out.
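One way to picture "articulating team context to an LLM" in practice is a minimal sketch like the following (my illustration, not the speaker's; the team name, domain details, and function are all hypothetical): the team maintains a single shared context block and prepends it to every model request.

```python
# Hypothetical sketch: a team-maintained context block, prepended to
# every LLM request so the model works with local domain knowledge.
# All names and domain details are invented for illustration.

TEAM_CONTEXT = """\
Team: Claims Operations (hypothetical example)
How this domain actually works:
- Value driver: we reduce claim cycle time; revenue is downstream of that.
- Key workflow: intake -> triage -> adjuster review -> payout.
- Known uncertainty: triage rules for water-damage claims are inconsistent.
"""

def build_prompt(task: str) -> str:
    """Combine the team's maintained context with a specific task."""
    return f"{TEAM_CONTEXT}\nTask: {task}\nAnswer using the context above."

print(build_prompt("Draft consistent triage criteria for water-damage claims."))
```

The point of the sketch is that the context block is a team artifact, deliberately maintained and reviewed like any other shared document, rather than something each individual improvises per conversation.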
Institutionally fluent organizations understand that context is local, that it operates at the team level rather than the individual level, and they deliberately foster team-level context fluency.

The second piece that institutionally fluent organizations have is problem-solving skills. That sounds obvious, because we've been talking about problem-solving skills as an element of employee training and upskilling for decades, well before generative AI. But socializing those problem-solving skills is something that managers, directors, and above are privately telling me is really hard. This is not easy. And I think part of why we see the discrepancy between the Wharton and MIT measures is that the MIT measure, the 95% fail-rate measure, demands that an entire organization be so good at problem solving that it meaningfully lifts the bottom line. That is an extremely high bar. You can have a whole bunch of teams who are good at problem solving, but if you have two or three bad apples, you will bottleneck somewhere in your process and have trouble delivering value to the bottom line. So we need to treat problem-solving skills as a critical part of team fluency: something we cannot live without, that we must have on every team, and that we will hire for if needed. In other words, AI problem solving is becoming all of our problem now, and it doesn't get better until we actually fix it.

So what does AI problem solving look like in practice? We can name it, but what makes it something a team can reasonably learn? Keep in mind: if teams know context, teams are going to know problems, and teams are going to be able to learn to solve them.
I want to suggest that problem solving, if you peel back the onion and think about it deeply, is really a function of understanding how AI thinks about and processes information. If you think about problem solving conventionally, before AI, we were really manipulating information in order to unlock ambiguous problem spaces. Traditionally it looked like: I'm going to write my product requirements document, or I'm going to do this data analysis. We were manipulating information to get closer to unlocking a complicated customer experience or a pain point in operations. All the things we talk about, critical thinking skills, good writing skills, were ways we could scale up manually so that we could successfully manipulate information, as individuals and as teams, to solve these problems. In that world, individual skills mattered a lot, because individuals pushed information fluency forward. If an individual could write well, they might write well enough that the whole team was elevated. And ownership resided at the team level: a team manager would own solving the problem, driving around obstacles, all the things you want good managers to do.

That is starting to flip. I have never shared this before, and I think it's really interesting. What we are starting to see in the age of AI problem solving is that the individual needs to index really highly on ownership, while the manager or the team needs to index highly on skills. That's a reversal of the usual arrangement.
So the problem-solving skills, the ability to understand how an LLM works, can actually reside at the level of the team, but the ownership piece has to rest with the individual if we're going to make progress. Let me explain why that flip has happened. When you solve a problem in the age of AI, what you are really doing is understanding enough about AI to feed it the problem in a way it can understand and work with. I've talked about this part before: you chop up the problem, decomposing it so the AI can pick it up, manipulate it, and help you get through the problem space faster, which is the whole goal. It is easier to solve problems when the robot intelligence is working on the problem with us.

Here's what I haven't talked about before, from practice with real teams building real AI systems. What I'm seeing is that ownership is irreplaceable at the level of the individual working with AI. If you, as an individual contributor, don't have a very strong sense of ownership and quality, assessing the bar the AI is using to solve and insisting that the AI isn't doing well enough when it really isn't, you're not going to be able to add any value at all. In the past, you could set that bar at the team level: the manager would manage the informational standard, and that was okay, because all the humans were working together, information was moving slowly enough, and we were exploring the problem slowly enough that the manager could act as the quality bar. In this day and age, that's no longer true.
AI is giving everyone so much superpower that you have to devolve ownership down to the level of the individual contributor. I think that, at root, is one of the reasons organizations are struggling so much with the AI transformation. It demands more of our individual contributors than ever before, and we're not used to a world where the individual contributor, rather than the manager, is the atomic unit of the corporation. Corporations are founded on management theory. The idea is that the manager is accountable for the domain, for the department. They are the representative of the business. They work with the individual contributor. That's how we've done it for hundreds of years. I am beginning to think that is not how AI-native organizations are actually going to be configured. The power you have with AI resides so heavily with the individual that I don't think you can do it any other way. You have to put ownership at the level of the individual contributor. And that has profound implications for how we train people, because what we really need to train people to do starts with taking ownership: of your domain and your situation, of your problems, of the way you work with AI, of the bar you hold it to. Everything flows from that.

Ironically, what we previously had at the individual level, the standout skill ("this is a really skilled writer, an amazing writer, we couldn't do it without him, he lifts up the whole team"), can now reside at the team level. Look at how teams are sharing prompts with one another, sharing Claude skills with one another, sharing custom GPTs with one another.
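As a sketch of what "sharing a prompt that encodes a skill" could look like (entirely my illustration; the template text and names are hypothetical, not from the talk), a team-shared decomposition template lets a teammate apply problem-chunking without yet understanding why chunking helps:

```python
# Hypothetical team-shared prompt template. The "skill" (decomposing an
# ambiguous problem before handing it to an LLM) lives in the template,
# so anyone on the team can apply it by filling in their problem.

DECOMPOSE_PROMPT = """\
You are helping me work through a large, ambiguous problem.
1. Restate the problem in one sentence.
2. Break it into 3-5 independent sub-problems.
3. For each sub-problem, list what information is still missing.
Problem: {problem}
"""

def shared_prompt(problem: str) -> str:
    """Fill the team's shared template with a specific problem statement."""
    return DECOMPOSE_PROMPT.format(problem=problem)

print(shared_prompt("Cut onboarding time for new enterprise customers."))
```

The template is the commoditized skill; what cannot be templated, on the speaker's argument, is the individual's ownership of whether the resulting answers are actually good enough.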
AI is enabling the commoditization of a lot of those skills. When it comes to AI problem solving, you can encode a lot of the technical skill and understanding of AI in a sharable format. Say someone isn't very familiar with how transformer architectures work and how you want to chunk problems so the AI can read a problem coherently. That's okay. You write a prompt for them. You share the prompt with the team. You hold a brown bag where you talk about what it does. They can immediately run the prompt, the skill translates, and they can gain skill over time as they socialize with the rest of the team. But what you can't do is give them the skill when they don't have the sense of ownership. That breaks. That does not work.

So we've talked about context. We've talked about problem solving and how it inverts traditional team and managerial norms. There's one more piece I want to talk about today that I think underlies this concept of institutional fluency and isn't talked about very often. Previously, the concept of taste, the question of whether something is excellent, extraordinary, an incredible offer for the customer, could be delegated to a small group, call it a priesthood, within the company. The Steve Jobs of our company is over here. He has taste. He's an extraordinary builder. He's an amazing inventor. We'll run this by him and that will be fine. In the age of AI, I don't think taste works that way anymore, not if you really want to move quickly.
So one of the things you want to do is give and socialize a sense of taste down to the team level, so that teams are empowered to move autonomously without sacrificing extraordinary quality. That quality tradeoff is one of the pieces I have really been sitting with in the Wharton and MIT studies. I feel like MIT essentially had an extremely high quality bar, while Wharton had a more relaxed, traditional software quality bar. If you want to thrive and build an AI-native company that actually works, you have to figure out how to socialize that insane, almost founder-level obsession with quality and taste to the point where the team has it built into its DNA, because teams now have so much power, with AI agents and AI tooling, to launch their own products and drive their own corner of the business. This might look different at different companies; maybe for you it sits at the department level rather than the team level. But the point stands: taste shows up at a much more democratized level than it did in the pre-AI age.

And what's interesting is that it's not just "is this product good" taste. It's taste in problems: which problems are spicy enough that we should choose to solve them? It's taste in problem-solving skills. Taste in learning methods. What I'm saying is you have to develop a sense of where the juice is in the profitability matrix of the organization. Maybe the most effective thing your team can do is scale up for the next three months, and other teams don't need that, but yours does. Maybe the most effective thing you can do is double down on problem-space discovery while other teams are building product. Or maybe it's a more traditional definition of taste and you're working on an excellent product.
The reason that matters is that the team has to have taste, or the tooling they're using, with all that problem solving and high ownership, is wasted. Taste is effectively a fancy way of saying: pick the right thing to work on, and make sure you are really, really good at knowing what good looks like. That's taste. When we say someone has great taste in fashion, they pick the right thing to wear and they know how to wear it so it looks good. It's a very similar idea. And it's something we could previously delegate to just a handful of people, a tiny collection of folks. When IBM was at its height, it had tastemakers: a group of 10 or 15 people who were licensed by the organization to break all its norms so they could introduce creative thinking. Well, that taste now has to be democratized; the old idea doesn't work anymore. We need to build institutions that socialize a sense of taste.

And I do want to suggest that this one is not universal. The way LLMs work is universal. The ability to learn to solve problems with LLMs is a universal skill. The sense of ownership is a universal skill. Taste is not. Taste is specific to your vertical. Taste is specific to your situation. Taste is more like context, which I mentioned at the beginning of this video: it requires you to know your local domain very, very well and to have excellent taste in problems.

So there you go. I think what we're really talking about between Wharton and MIT is institutional fluency. And I think the three keys are context, then the ability of teams to flip the traditional relationship between ownership and skills.
Ownership now resides at the individual level and skills at the team level. And finally, taste. I think taste is something we have to push down into our organizations, and that's also new. What do you think you're missing, or I'm missing, on AI fluency in institutions? This is an evolving field; I'm learning and seeing this in real time. What are you seeing?