Learning Library

← Back to Library

AI Careers: A Pascal’s Wager

Key Points

  • The AI‑job debate (pessimists vs. optimists) is less important than treating the future as a “Pascal’s wager”: you should act as if any outcome is possible.
  • Regardless of whether entry‑level roles disappear or expand, the single career imperative is to become better at solving high‑quality, complex problems.
  • Strong agency and problem‑solving skills prepare you for both scenarios—whether you’ll manage fleets of AI agents or work in large enterprise environments where AI’s impact is limited.
  • Engineering serves as a proxy for the broader tech ecosystem, so trends in engineering jobs will ripple through product, design, marketing, and other roles.
  • Rather than debating predictions, focus on actionable steps that build deep problem‑solving ability, which remains valuable no matter how AI reshapes the job market.

Full Transcript

# AI Careers: A Pascal's Wager

**Source:** [https://www.youtube.com/watch?v=XqwfFbuZF-0](https://www.youtube.com/watch?v=XqwfFbuZF-0)
**Duration:** 00:11:13

## Sections

- [00:00:00](https://www.youtube.com/watch?v=XqwfFbuZF-0&t=0s) **Career Strategy Amid AI Uncertainty** - The speaker likens choosing an AI-focused career to Pascal's wager, asserting that whether jobs vanish or multiply, the key to thriving is building strong agency by learning to solve high-quality problems.
- [00:03:09](https://www.youtube.com/watch?v=XqwfFbuZF-0&t=189s) **Cultivating High-Agency Meta-Skills** - The speaker argues that mastering transferable meta-skills (problem recognition, solution design, resource marshaling, execution, and integration) provides valuable agency now and safeguards careers against uncertain future job markets.
- [00:07:12](https://www.youtube.com/watch?v=XqwfFbuZF-0&t=432s) **Beyond Code – Embracing Human Wisdom** - The speaker argues that career success now depends less on technical showcases and more on taking agency and cultivating human skills (emotional clarity, discernment, and relational ability) as the economy shifts from a pure knowledge economy to a wisdom-focused, AI-augmented landscape.

## Full Transcript
We need to talk about AI and jobs. And no, I am not interested in the debate between the pessimists who say jobs are going away and the optimists who say jobs are going to stay. I actually want to take a different angle. I want to suggest that the way forward is clear for you and me regardless of which side you take. This is like the Pascal's wager of tech careers. Fundamentally, the idea behind Pascal's wager is that you need to live your life a certain way regardless of what you believe. I think that's the insight we need to take for AI right now.

If you are a pessimist, if you agree with Dario Amodei's take this week that half of entry-level jobs are going to go away, okay, that's what you believe. If you're an optimist and you believe, along with, say, Gergely Orosz, that entry-level jobs may actually scale, there's some evidence of that. He's talked to folks at GitHub and at Shopify, because entry-level roles represent culture change and people coming in who are better at AI, and so on. Great. That's what you believe.

My point is this. Regardless of which side you take on that bet, you have a single problem to solve in your career. You have to figure out how to get better at solving high-quality problems, because at the end of the day, if you have strong agency as a career trait and you can solve high-quality problems, you are ready, whether you live in Dario's world and you need to manage fleets of agents, or whether you live in Gergely's world and you have more entry-level roles and you're working in enterprise environments where Cursor makes a marginal difference and the codebases are just too large for AI, and even though IQ has scaled, context windows haven't scaled. Memory handling hasn't scaled.
And we still have to have a lot of senior engineering work. And by the way, I do regard engineering work as a proxy for a lot of other work. If you have to have a lot of engineers at an enterprise, you have to have a lot of other jobs that just go with that: comms jobs, marketing jobs, customer success jobs, product jobs, designer jobs. And so, in a sense, engineering is the core of tech. And if the bet on engineering goes one way or the other, the rest of the tech market will follow, and we should probably be more honest about that. Now, are there going to be differences here and there? Yes.

So, that's the first thing. I want to be really honest that we live in a world where I think we should talk about this as a Pascal's wager problem. In other words, we should behave as if we need to prepare to solve high-quality problems regardless of which way we think the world is going to go. And let's be honest, it kind of is a belief at this point. You can point to evidence either way. You can argue about which way it's going. I'm not really interested in having that debate here. And I think that people who dive too deep on that debate are missing the actionable steps you can take to actually start to answer the AI question in practice, what I would call the agency principle: learning how to do problem recognition well, learning how to do solution design well, learning how to marshal resources, learning how to execute, learning how to integrate. These are all things that you can do regardless of whether you're in engineering or other roles. They're meta-skills, and you need them. I've talked about other meta-skills in the past, but one of the things that keeps coming through for me is this idea that solving problems with high agency isn't going away.
And I think it's really important to recognize that and not pretend that high agency has no value in the future. And if you come back to me and you say, "Well, it doesn't, because it's all going to be coded away anyway," that's fine. But again, go back to Pascal's wager. Imagine that you're right. Would you have wanted to spend the time between now and whenever you believe that dark future will arrive doing nothing and complaining about it? Or would you rather prepare for a world that you have some agency over? I think regardless of what you believe, that's the more interesting place to be. It's also the less risky place for your career, because either way, having more agency doesn't hurt you. And if you're wrong, if Dario is incorrect, if Gergely is right, then waiting and doing nothing and saying the world is going to end and jobs are going to be over profoundly hurts your career.

I also want to be honest about the fact that we need to talk more about in-person skills, because interviews are beginning to shift back in person. Work is beginning to shift back in person. And that's very deliberate, because people want to hire you first for your problem-solving skills, and then they need to check that you know how to use AI, but they're not hiring someone who can just read answers off of ChatGPT. Look, I'm sure if I was given the appropriate time to prep with o3, I could read off answers for a LeetCode interview tomorrow. I don't think that makes me a particularly qualified engineer in a lot of different places, and that's fine. The point is this: we need to talk about emotional clarity, discernment in a world drowning in data and options, how you find signal, and the ability to craft connection with people.
You are getting flown into interviews more and more these days. You are going to be expected to be human, because that is the only guarantee people have that you're not an AI. And that gets back to how companies now are actually answering this vexed question around signal versus noise in the candidate pipeline.

And this is why I am not telling you that everybody should go out and vibe code a website and stick the code on GitHub. Is that an answer for some people? Sure. But the problem is, if you really ask yourself what you're proving there, it is another way of showing that you can solve problems that is a step removed from the resume, which is traditionally where you did that. And the reason the resume is useless is that ChatGPT has essentially made every resume perfect. And so, in a world where every resume is perfect, it offers no signal. And in a world where everybody vibe codes something and sticks it on GitHub, that also offers no signal. Now, I do still think there's some signal there, because it is harder to replicate working code, even if you're vibe coding, than it is to build a resume. You can get a perfect resume out of ChatGPT in two minutes. You cannot get a perfect vibe-coded website that functions and draws users in two minutes. And so there is still some signal. I am certainly not one to say that you should not learn to vibe code or that you should not learn to build. I'm a big fan of that. I teach a class on that. That's not what I'm saying. But I am calling out that the keys to long-term employment are not these specific skills. It's not the ability to use Lovable per se. It's not the fact that you have a GitHub repo with a Lovable-coded project per se.
It is the fact that you are taking agency over your career and showing you can solve problems across a wide range of tools and a wide range of problem sets, coupled with the human skills that enable you to function effectively as a human in the workplace. And by the way, this human-skills thing is not just something I'm making up. Joe Hudson published a piece on Every on Thursday talking about the idea that we are moving from a knowledge economy to a wisdom economy. It sounds maybe a little bit clickbaity, but I get the idea. Fundamentally, if ChatGPT is good at knowing facts, maybe we have to go back 200 centuries and talk about this idea of elders and wisdom and humans gaining wisdom, and that becomes something that is useful to us as humans in an AI economy. Interesting thesis, and I think regardless of what you think about it, the advice to look at human skills is helpful, because that is where workplaces are starting to go. And by the way, if you get better at emotional clarity, if you get better at discernment in a world drowning in data, if you get better at crafting connections with people, that translates digitally as well. You don't lose out. It's another Pascal's wager situation. Getting better at it is always good.

So, where does all of this leave us? What I want you to take away is the concept that you don't have to pick a belief structure or a side about the future of AI in order to take steps that you know are going to be positive for you and for your career. You can work on those emotional, people skills. You can work on problem solving, high agency, proactivity in what you do, which sounds like a buzzword, but I promise you, if you've seen it, you know it's a big deal when you can find someone who has it.
High-agency people are incredible. They run through walls, and it's not because they overwork. It's because they know how to run around obstacles.

And so my call to action to you is this: in a world where people will try to make you afraid a lot, be the person who is willing to take action for your career, and not the person who buys the fear, because I think that is very high risk. It's high risk for you personally. Dario Amodei can say that, and if he is wrong, he still makes billions of dollars. But if he is wrong and people believe him, the people who spiraled and went into a fear cycle and didn't prepare for their careers will be profoundly damaged over the long term. Their career prospects are affected. And I am not saying that he didn't have good intentions. He's calling explicitly for larger efforts beyond a private company. I get why he did it. But the risk is real, because what I see in practice is that statements like the ones Dario made on Wednesday and Thursday of this week don't provoke attention from the government, which is what he's asking for. They provoke attention from the media. They provoke attention on TikTok, and most of it is a shark feeding frenzy of fear, and there's not a lot of productive discussion there.

So this is my response. I don't need you to believe that the jobs will be better and it will be an amazing future. Maybe that's a step too far for you. But you can believe that working on these skill sets is going to have value, because even if you disagree and you're a total pessimist, it's still the rational choice. It's still the correct bet for you and your career, selfishly. So that's my little soapbox. Hopping off my little soapbox. Cheers.