Learning Library

← Back to Library

AI Execution: Cheaper Yet Riskier

Key Points

  • AI is dramatically lowering the cost of execution across functions—from product management to engineering to customer success—by enabling faster, higher‑volume work.
  • Paradoxically, this cheaper, faster execution spawns new jobs focused on quality assurance and security because AI‑generated code and outputs introduce “dirty” code, hallucinations, and prompt‑injection vulnerabilities.
  • Real‑world examples illustrate the risk: engineers grapple with low‑quality AI‑written code, and sales teams inadvertently rely on hallucinated AI‑crafted decks, leading to potential misinformation and contractual errors.
  • Addressing these challenges requires specialized safeguards such as red‑team testing, robust prompt engineering, data chunking controls, and mechanisms to constrain LLM responses within trusted answer distributions.
  • The core tension now lies between the benefits of accelerated AI‑driven productivity and the need to manage the accompanying quality and security nightmares.

Sections

Full Transcript

# AI Execution: Cheaper Yet Riskier **Source:** [https://www.youtube.com/watch?v=CNw443X6dB0](https://www.youtube.com/watch?v=CNw443X6dB0) **Duration:** 00:31:47 ## Summary - AI is dramatically lowering the cost of execution across functions—from product management to engineering to customer success—by enabling faster, higher‑volume work. - Paradoxically, this cheaper, faster execution spawns new jobs focused on quality assurance and security because AI‑generated code and outputs introduce “dirty” code, hallucinations, and prompt‑injection vulnerabilities. - Real‑world examples illustrate the risk: engineers grapple with low‑quality AI‑written code, and sales teams inadvertently rely on hallucinated AI‑crafted decks, leading to potential misinformation and contractual errors. - Addressing these challenges requires specialized safeguards such as red‑team testing, robust prompt engineering, data chunking controls, and mechanisms to constrain LLM responses within trusted answer distributions. - The core tension now lies between the benefits of accelerated AI‑driven productivity and the need to manage the accompanying quality and security nightmares. ## Sections - [00:00:00](https://www.youtube.com/watch?v=CNw443X6dB0&t=0s) **AI Execution Dynamics and Job Impact** - The speaker outlines two opposing dynamics—AI making execution cheaper, which threatens jobs, and the resulting quality and security challenges that actually generate new employment opportunities. - [00:03:33](https://www.youtube.com/watch?v=CNw443X6dB0&t=213s) **The Human‑AI Boundary Crisis** - The speaker argues that exploding compute costs demand new infrastructure roles, while a looming “human‑AI boundary crisis” will spawn billion‑dollar businesses to define, debug, and manage ambiguous AI behaviours like hallucinations. - [00:06:46](https://www.youtube.com/watch?v=CNw443X6dB0&t=406s) **AI, Chaos, and Trust in Product Management** - The speaker argues that amidst AI‑driven chaos, product and program managers can differentiate themselves by earning trust and retaining accountability, even as AI automates many routine tasks. - [00:09:57](https://www.youtube.com/watch?v=CNw443X6dB0&t=597s) **Software Engineering Evolving With AI** - The speaker urges engineers not to abandon the field despite AI hype, emphasizing the need for robust, scalable design to manage the surge of low-quality AI‑generated code. - [00:13:19](https://www.youtube.com/watch?v=CNw443X6dB0&t=799s) **Middle Managers' Future in AI Era** - The speaker contends that AI will erode the traditional information‑filtering role of middle managers, expanding directors' strategic accountability and span, and stresses the need for leaders who excel in ownership and AI‑enabled decision‑making. - [00:17:12](https://www.youtube.com/watch?v=CNw443X6dB0&t=1032s) **Aligning Deployment & AI UX** - The speaker emphasizes coordinating secure deployment practices with human‑centric AI interface design, arguing that polished, innovative UI features like Perplexity’s in‑task messaging set products apart. - [00:21:19](https://www.youtube.com/watch?v=CNw443X6dB0&t=1279s) **AI Security & Cloud Infrastructure Opportunities** - The speaker emphasizes that the expanding AI attack surface creates demand for security talent to “jailbreak” and protect models, while the massive growth of AI data centers makes cloud AI infrastructure engineers—specialists in GPU arbitrage and cost‑optimal pipelines—extremely valuable for cutting trillion‑dollar spend. - [00:24:45](https://www.youtube.com/watch?v=CNw443X6dB0&t=1485s) **AI-Driven Shift for Solutions Engineers** - The speaker stresses a required mindset change—especially among QA professionals—as AI now lets sales and forward‑deployed engineers rapidly prototype and personalize B2B SaaS solutions, provided they understand technical feasibility and the lowered cost of coding. - [00:28:27](https://www.youtube.com/watch?v=CNw443X6dB0&t=1707s) **Emerging AI Workforce Roles** - The speaker outlines a suite of upcoming high‑demand AI specialties—including behavioral data extraction, context supply‑chain management, human‑factor tuning, power‑efficient scheduling, regulatory compliance, synthetic data generation, edge inference/robotics, and AI psychology—that will shape the future talent landscape. - [00:31:42](https://www.youtube.com/watch?v=CNw443X6dB0&t=1902s) **Future of Jobs in AI** - The speaker questions how AI will affect their career and asks viewers to share any jobs not yet discussed in the comments. ## Full Transcript
0:00I want to talk about AI and jobs. It's 0:02the number one thing I get asked, Nate, 0:04what about my job? And those jobs are 0:06all distinct and unique. So, we are 0:08going to get into that level of detail. 0:09But before I do that, I want to talk 0:12about dynamics. What are the big 0:14dynamics and how are they shifting? The 0:16first one I want to call out is one that 0:17you're going to be familiar with 0:19probably because it's the one that makes 0:20headlines. It's the idea that execution 0:22is getting cheaper. So if you see an 0:25expectation as a PM that you should 0:27prompt engineer so you can do twice as 0:28many things, guess what? That's 0:30execution getting cheaper. If the 0:32expectation from an engineering 0:33perspective is you can use cursor and 0:35ship twice as much code, I don't know if 0:37that's better, but that's execution 0:38getting cheaper. If the expectation is 0:41customer success can now do uh you know 0:44other things because AI can do a lot of 0:47the customer success, that's the same 0:49theme, execution getting cheaper. We're 0:51going to take that one as red because 0:52you've probably read those headlines. 0:54So, what's dynamic number two? That's 0:57the interesting one because it is in 1:00conflict with dynamic number one. 1:02Dynamic number two says execution 1:05getting cheaper creates jobs because of 1:07quality and security nightmares. I know 1:10engineers who will tell you that they 1:13hate touching vibecoded stuff because 1:15the vibe code is so dirty. I know other 1:18engineers who say it's okay that it's 1:19dirty. I write it from scratch anyway. 1:21It's a nice idea. The point is there are 1:24new security and quality challenges with 1:26fibe coding. You look across the board 1:28with AI that is true. There are 1:31challenges with AI security around red 1:34teaming around prompt injection around 1:37all kinds of security concerns when you 1:39frontload AI into public facing 1:43websites. If you have bad chunking on 1:45your data, which I talked about last 1:46time, you're going to have issues with 1:48hallucinations that are very difficult 1:49to trace. If you cannot hedge your LLM 1:53so that it only answers within a 1:55specified distribution range of answers 1:58and if if you ask something wild, if you 2:00try to inject it, it's not going to 2:01respond. If if you can't figure out how 2:03to do that, you are going to be in big 2:05trouble. And so those are just two 2:07simple examples. There are a lot of 2:09other examples where people even 2:11internally are misusing AI and creating 2:13security nightmares. I'll give you one 2:15example on the internal one. This is a 2:17fun one for me. Sales will sometimes use 2:19chat GPT before a sales call and they 2:22will feed Slack and they will feed decks 2:24and so on into chat GPT to try to make 2:27sense of it. What they don't do is they 2:29don't prompt carefully all the time and 2:31they will sometimes get to a 2:32hallucinated deck and that's very bad 2:34because then the company is committed to 2:36something that was made up by AI. You 2:38see that kind of dynamic all the time. 2:40And so you have these twin competing 2:42dynamics and we're stuck in the middle. 2:43You have execution getting faster and 2:46you have speed creating huge quality 2:48security nightmares. But we're not done. 2:49There's two other dynamics I want to 2:51cover. Number three, compute costs are 2:54absolutely exploding. And when compute 2:56costs explode, it means there's a whole 2:59forest of downstream jobs that come with 3:02that exploding compute. And so that one 3:04is much more of a straightforward. If we 3:06have this many GPUs, we have downstream 3:08roles. So, for example, being able to 3:11get into tuning costs is a very 3:15lucrative business because everyone is 3:17spending so much on AI. And so, I want 3:19to call that out because that doesn't 3:21get talked about a lot. A lot of the 3:22time people talk about AI as almost a 3:25scale-free thing. Like we we'll do the 3:28AI. The AI isn't free as you scale it. 3:31You not you're not just spending on 3:33GPUs. you're spending on the ability to 3:36scale up your model, scale up your 3:38inference cost, scale up your prompt and 3:40context engineering if you're serving a 3:42model. All of this comes down to compute 3:45costs are exploding. How do we manage 3:47our infrastructure? There are whole 3:50forests of new roles there that people 3:52don't talk about much. Dynamic number 3:54four, the human AI boundary crisis. You 3:57can have perfect tech and you can still 4:00have very angry users. And I think one 4:02of the challenges right now is that 4:04humans and AI don't have norms to 4:06interact yet. And you have vast 4:09confusion between the two. And so there 4:12are going to be roles that develop just 4:15to figure out how to manage that event 4:17horizon. In fact, there's going to be 4:19entire billion-dollar software 4:20businesses built around managing the 4:23human AI boundary crisis. As an example, 4:26suppose someone tells you that AI is 4:27hallucinating. That sounds clear to 4:30them. But if you peel that back, it 4:32takes someone very very technical to 4:34understand and debug that. They have to 4:36understand what is meant by 4:37hallucination because it's a very vague 4:39human term. Is it an undesired response? 4:41Is it a lack of a response? Is it a 4:43partial response? Is it an overcomplete 4:45response? Is it a response that somebody 4:47noticed and that's the only reason it's 4:49being reported? But there's like 15 4:51other examples that are more 4:52complicated. Just that one statement 4:54from a human reveals the tension in the 4:56boundary crisis. Again, where there are 4:59problems like that, there are jobs. And 5:01that is the thing I want to call out for 5:03all four of these dynamics. These 5:04dynamics all create problems. People get 5:07paid to solve problems. And so AI, if 5:10you want to look for where the jobs in 5:12AI, they are where the problems are 5:14moving toward. So let's go from here 5:17from these four dynamics around the 5:18human AI boundary crisis around 5:20infrastructure exploding around trust 5:22deficits as speed creates quality 5:25nightmares and finally automation and 5:27how fast it's happening. Those are the 5:29four dynamics. What does that mean for 5:32call it 15 of the top roles in tech? 5:35Number one, not because it's special but 5:37because I did it a lot product manager. 5:39What does that mean for product 5:40management? PM is right in the middle of 5:44the automation and trust crisis. PMs 5:47simultaneously 5:48need to be scaling up their ability to 5:51manage agents. They need to be becoming 5:53more technical and at the same time they 5:55have never had more value in figuring 5:58out how to scale trust. And so if you 6:01are a PM and you can figure out how to 6:03take all of the ideas that the 6:05organization is generating through vibe 6:07code and you can figure out how to 6:08filter that and you can be technical 6:09enough to have a perspective when you 6:11talk to your engineering team about AI 6:13models and you can articulate this is a 6:16path forward. This is something that 6:18creates trust and then even more 6:19important than all of that deliver 6:22quality models in production. Now you're 6:24talking about value. By the way, this is 6:27underlining farther the whole age-old 6:29debate about PM and MBAs. MBAs aren't 6:32learning this stuff. If you need to 6:34learn hard skills in AI, the only way to 6:38learn it is by building an AI now 6:41yourself or by learning it on the job. 6:44There's not really another way to do it. 6:46Academics isn't keeping up. But the 6:48thing I want to call out, everyone's 6:49going to tell you to build as a PM. You 6:52can learn to do that. You probably 6:53should. What people aren't telling you 6:56and what I look for is the ability to 6:58earn trust in the middle of the chaos. 7:00PM as a role has always been about 7:03managing chaos and earning trust. How 7:06can you do that better with AI now that 7:09AI itself is a chaos creator? You have 7:12even more opportunity to earn trust 7:14amidst organizational chaos because AI 7:16is multiplying that chaos. And so that 7:18is where I see really compelling PM 7:20action and nobody is talking about it. 7:22Role number two, program and project 7:25manager. These guys, when I talk to 7:27them, they're very nervous. They worry 7:29that AI can plan out an entire program 7:31in a doc better than they can. AI can 7:34write the Slack messages. AI can write 7:36the email updates. All the individual 7:38pieces that they did, AI can do. AI 7:41agents can schedule the calendar 7:43meetings, right? Like where where does 7:44the program manager go? You know what 7:46program managers really do? They're 7:48accountable for delivering against time 7:51and budget and resources. That point of 7:54accountability LLMs are not taking. And 7:57so, yes, do you want to be the program 7:59manager who can probably execute better 8:01because you have AI tooling that can 8:02build Gant charts? Sure, you do want 8:04that. Do you want to be the program 8:06manager fluent enough in how AI works 8:08that you can marshall resources and 8:10manage AI projects effectively because 8:11they're exploding? 100% you want to be 8:14doing that. But the heart of that role 8:16as someone who has worked with program 8:18managers is accountability. That isn't 8:20going anywhere. And great program 8:22managers know that accountability is the 8:25beating heart of the role and will stick 8:27to it. Accountability is not going to go 8:29out of style. You still need people who 8:31are accountable, especially as things 8:33like infrastructure costs for AI are 8:35exploding. There's more money pouring in 8:37than ever to the AI space. We need 8:39people who can hold teams accountable to 8:42their use of resources. Rule number 8:44three, customer success. This one always 8:47gets the black flag, right? Like Sam 8:50Alman will talk about it as this is just 8:52going to go away. Just totally gone. 8:54Wonder why. I really wonder why. Because 8:57the customer success people who are 8:59absolutely brilliant that I know, their 9:02success does not depend on their ability 9:05to answer tickets. Their success depends 9:08on their ability to hold relationships. 9:11That is a human thing. You can't get an 9:14LLM to hold a relationship with you. And 9:16so my bet is that actually customer 9:20success is sticking around and leaning 9:22into customer relationship management. 9:25That is where I think it's going because 9:27the customer relationship person who can 9:30who can advocate for the customer 9:32internally aggressively with those pesky 9:34PMS who can talk with sales about 9:37expansion revenue. That's not something 9:39we're talking about automating with AI. 9:41It's the ticket stuff that we're talking 9:43about automating with AI. Well, great. 9:45Fantastic. Let let them do that piece. 9:48The the beating heart of the role is in 9:50the relationship because that directly 9:52extends the lifetime of the customer. 9:55Role number four, software engineering. 9:57Boy, I see people, frankly, people who 10:00are just out of college advocating that 10:03no one studied computer science anymore 10:05and that people run away from 10:07engineering. Don't run away from 10:09engineering right now. You may have to 10:11change how you do engineering, but this 10:14role, the role of software engineer has 10:17evolved more times than I can count over 10:20the course of my career in tech, let 10:22alone the 70 years or so the role has 10:25been active. Software engineering is by 10:27definition a compute enabled role and 10:29it's going to keep evolving. And so, of 10:32course, it's going to evolve in the age 10:33of AI. That's not a reason to walk away 10:35from it. Do you know how many people are 10:38going to need all of their vibecoded 10:40work cleaned up? Insane amounts of code 10:42are being generated with security holes, 10:44quality issues. It gets back to that 10:46dynamic I called out is that the fact 10:48that we can go faster is creating 10:49massive speed and quality issues. Now AI 10:52in some cases makes you not care if you 10:55want the prototype out there. If you 10:56want to initiate the idea, the the debt 10:59is not a debt. It's an asset. It helps 11:01you go faster. I get that. At the same 11:03time, if you're production deploying, 11:06you have to build well. You know where 11:08the beating heart of a good engineer is? 11:10It's someone who understands how to 11:12design durable technical systems, 11:14especially ones that scale. Yes, you 11:16have some engineers who can lean farther 11:18toward the prototyping side, and that's 11:20fantastic. Some who are going to be able 11:22to code something up in a weekend that 11:23shows how it could work. And there is 11:25going to be tremendous value there, 11:27especially if you can do that with real 11:28life data. Because a lot of these 11:30prototyping ideas your PMs are handing 11:32to you, they don't have real data on 11:33them. And so if you can knock together, 11:35and I know engineers who do this, 11:36they're brilliant. You knock together 11:38something with real data and you knock 11:40it up in a weekend, you say, "Ah, it's a 11:41Tracer bullet. It's fine." And people 11:42use it and they're just wowed. Not a a 11:44skill that's going out of style. If you 11:46can production deploy something to a 100 11:49million boxes, not a skill that's going 11:51out of style. And so the challenge if 11:52you're getting into engineering is you 11:54need to recognize that the way you work 11:57may be evolving with AI, but those 11:59fundamentals are not changing with good 12:02engineering. And you should not let the 12:05fact that AI can write some code confuse 12:08you. You must learn how technical 12:12systems get put together because that is 12:15actually the path toward career 12:16leadership. And a lot of senior 12:18engineers are very worried about junior 12:20engineers coming in and overdepending on 12:22AI because they don't understand the 12:24fundamentals. So if I actually wanted to 12:25call out a risk here, it is not that 12:27there will not be jobs. It is that there 12:29will not be qualified people for jobs 12:30because people are expecting and reading 12:34the hype and believing that AI will just 12:36write the code for them. Not true. 12:38Executive leadership. Should managers be 12:41more worried? I think I want to talk 12:43especially to senior managers and 12:46directors here. I've sat in those 12:48chairs. Those chairs are at risk. And I 12:52know people who feel that in their 12:54bones. And I got to tell you, you're 12:56kind of right. Because look at these 12:58dynamics. None of them play out in favor 13:01of senior managers and directs. If you 13:03can execute faster, it doesn't really 13:05help a senior manager or director whose 13:07core job isn't execution. If you have 13:10speed, trust, quality issues, that 13:12doesn't really help you because you're 13:14the one that has to deliver anyway. 13:16Middle managers are fundamentally 13:19information bottlenecks. Their entire 13:21job for most of the history of 13:24corporations has been to filter 13:26information. Well, guess what? OM are 13:28already really good at filtering 13:30information. And so, people joke around 13:32about the CEO really should be AI. I 13:35think it's more the middle manager is at 13:37risk. Not of being an AI agent. I don't 13:40buy that. I know that uh there's that 13:41throwaway line in Project Vend about 13:43Claudius being a middle manager. That's 13:45not the future I'm talking about. I 13:47think it's more likely that the role is 13:50limited and somewhat endangered. We will 13:52still have directors. We will still have 13:54senior managers. Their spans will be 13:56much bigger. They will be more stressed. 13:58They will depend more on AI tooling to 14:00help them ladder up all that information 14:01flow. and they will chiefly exist as a 14:04strategic point of accountability. And 14:07so if the company is executing a 14:09strategy, you want to hand a big piece 14:10to someone who's accountable for it. 14:12That's the director. You're going to 14:13hand that to the director, not an AI 14:15agent. And so if you want to get ready 14:17for that, it's sort of like the PM and 14:20the program manager. Get good at 14:22accountability. get good at saying I can 14:25take this strategy and I can put legs on 14:27it with the people and resources I have 14:30with almost no direction from my VP or 14:32SVP. I can just go and do it. That is 14:34the heart of being a director. If you 14:37are good at that, if you are good at 14:39building AI cultures for your team, 14:42you're probably going to be okay. But 14:44don't expect that role to grow. There 14:47are not going to be lots more directors 14:48out there because the dynamics aren't in 14:51favor of job growth on this one. Number 14:53six, data scientists. This is a really 14:56interesting one because the the demand 15:00is skyrocketing for this. People worry 15:02that like data scientists might not be 15:05doing well because there's like research 15:07scientists for AI now and maybe data 15:09scientists are out. It's not really 15:11working out that way because people have 15:13so many needs for data science related 15:16to preparing their data for the AI age. 15:18In a sense, this is one of the most 15:21blessed roles in the age of AI because 15:23at the end of the day, there's so much 15:25data in the world that has to be got 15:27ready for AI and there's so much custom 15:29work that needs to be done at the 15:30enterprise level to suit models to data 15:33sets etc. The data scientists are just 15:35never bored and the heart of the role is 15:37a design. It's a creative role. People 15:39think it's not creative, it's creative. 15:40I worked with data scientists. It's a 15:43really thoughtful role. It is not a role 15:45that is easy to automate and it is a 15:46role where quality matters. All of those 15:49things strongly argue for this trend of 15:51like boosted demand for data science 15:53being durable. I'm quite bullish on data 15:55science, DevOps or machine learning 15:58operations. Demand is exploding 16:00especially for machine learning ops. 16:02People don't know how to implement 16:06machine learning pipelines and 16:07operations. If you as a DevOps person 16:09can go from how do we help developers 16:12deploy software effectively to how do we 16:14help AI engineers effectively deploy and 16:17maintain models which is really like 16:19it's automation out of new chaos 16:21patterns but that is what you do right 16:24like what you do in DevOps is 16:26fundamentally taking the herd of cats 16:29that is a bunch of developers and 16:30figuring out how to get it into a clean 16:32production pipeline. Well, quite 16:34similarly, you have a herd of cats that 16:36are now ML engineers or or AI engineers 16:38and you have to herd them into an 16:40effective deploy pipeline and 16:42effectively manage the model as it's in 16:44production. If you are in DevOps or in I 16:46guess maybe you'll call it machine 16:48learning ops now the beautiful thing is 16:50you are solving human problems and 16:53engineers as we've talked about are not 16:54going out of style and so you are still 16:56going to be needed to solve those human 16:57problems and yes there are going to be 16:59AI tools that help you but the heart of 17:01this is helping to align the complex 17:05work of building good software to 17:08production value. So when do you deploy? 17:10Why do you deploy? How do you fix? What 17:12do fixes look like? How do you deploy 17:13securely? what are your different 17:14environments look like etc etc etc 17:16tooling you know this better than me the 17:19heart of it is getting all of that 17:21aligned so you deliver the value to the 17:22customer and so that you solve the 17:24problems for engineers and they can 17:26focus on building software those are 17:27human problems you're solving those 17:29aren't going out of style number eight 17:31UX human AI interaction design you 17:33remember when I talked about the 17:35problems of the human AI boundary that's 17:38real that is that is something that I 17:40see happening all the time the current 17:42AI interfaces are deeply imperfect and 17:45create a lot of confusion. We need to 17:48understand that as execution gets 17:51faster, UI craft is becoming more 17:54valuable because the cheap stuff is 17:56becoming more commoditized. So if you 17:58have really really polished UI, it's 18:00going to stand out more because the sea 18:02of the internet is going to be a bunch 18:03of vibecoded stuff that is not 18:05wellcrafted. And so let me give you an 18:08example that just came out. Perplexity 18:10has done a really good job with UX 18:11interaction. They just launched 18:13something today, yesterday, that I think 18:15is really, really cool. They launched 18:17the ability to pass a message to the AI 18:21as you read the chain of thought in the 18:23middle of a research task. As far as I 18:25know, no one else lets you do that yet. 18:27It's brilliant because how many times 18:29have you as a user sat there and typed 18:31out a prompt and then you're like, "Ah, 18:33what did I for I forgot to say this, 18:35right? Like, I have to add this." And 18:36then you have to sit there and it's a 18:38research prompt. You have to wait. Not 18:40anymore. Now with perplexity, you just 18:42pass the AI a note and it modifies. That 18:45is AI, but it's also UX, human 18:48interaction design. It is solving some 18:50of the human AI boundary issues because 18:52you're recognizing the old truth that 18:54humans are better at correcting mistakes 18:57we have made than checking our work. 18:59Which is why good email systems will 19:01often give you a delay on send and an 19:04undo button because they know you 19:05instinctively check your work after you 19:08send. That's UX design. So if you are 19:10designing for human AI interaction, your 19:13world is getting richer and richer and 19:14richer. If you are designing for humans 19:17using AI systems, it's the same kind of 19:19problem. I'm just calling it a different 19:20name, right? Like because all of the 19:22systems we're using now are getting 19:23rapidly AI enabled. And that's why when 19:26people say, well, I'm not designing for 19:27AI. I I I'm like, but really because 19:31almost everything is getting AI at a 19:33tremendous rate. And so if it's not true 19:35for you now, it probably will be true 19:37because your board is going to ask you 19:38to do it soon. The trend is is 19:41unbelievably pervasive. And so I think 19:44this is not a case where UX has to go 19:46and get AI experience because the AI 19:48experience is largely speaking going to 19:49come to you. People are going to be 19:50asking you to do this. And the challenge 19:52for you is to think deeply about human 19:55AI design and figure out how to build 19:57trust through interactions. I'll give 19:59you another example that we haven't 20:00solved and leave you with that to to 20:02think about in the in the UX space. How 20:04do you take the models we have which are 20:07not good at taking accountability and 20:09build in interaction dynamics that track 20:12accountability over time for models? If 20:14I tell the model that is incorrect, do 20:17not do that again. How can you signpost 20:19that and indicate to the user that you 20:22have instructed the LLM in a specific 20:24way of behavior? And then on the back 20:25end, can you work with AI engineers to 20:27pass that as a prompt to remind the LLM, 20:30thus improving the experience of even 20:34simple chatting because you're actually 20:36reinforcing accountability from the user 20:38there. There are a hundred different 20:40ideas like that that you can come up 20:41with around UX human interaction. It's 20:43tremendous, huge opportunity. Number 20:45nine, security and red team. I don't 20:48think I need to do a ton of this. There 20:49is a new AI jailbreak issue almost every 20:52day. red teaming and security work is 20:55there are just not enough of folks in 20:57the world. If if you are able to start 21:00playing with jailbreaking, start playing 21:02with LLMs as attack services, start 21:04looking at prompts for potential 21:05vulnerabilities, 21:07start looking at systematic 21:08vulnerability databases, start reviewing 21:10vibecoded pieces of work for security 21:13issues, you will never be out of work. 21:15That is an absolutely huge area of work. 21:19And you know what? It is the same set of 21:20instincts that has made security people 21:22do what they do well. I I knew grey hat 21:24people back in the day. It's the same 21:26instinct to go and try and mess with it 21:28and break it. Well, it turns out we have 21:30a whole new intelligence surface and we 21:33have to jailbreak that to make it more 21:35secure. Huge opportunities there. Number 21:3710, cloud AI infrastructure engineers. 21:40This is the infrastructure exploding 21:41piece. You pay for yourself in this role 21:44by cutting spend. If you can find ways 21:48to master GPU arbitrage, to master the 21:52way you pass calls to the GPUs, to 21:55master your cloud infrastructure build, 21:57you're literally optimizing for 21:59collectively speaking the largest 22:01infrastructure build in human history. 22:03AI data centers are on track for like 22:06trillions of dollars in compute capital 22:08expenditure by 2030. Trillions, like 22:10six, seven trillion, something like 22:12that. It'll probably be higher by the 22:13time we get there. They need cloud AI 22:15engineers to avoid spending more money 22:17than they have to. At that level, an 22:20engineer that can do his or her job well 22:23pays for their salary 10 or 100 times 22:25over in the way they handle these larger 22:27and larger fleets of GPUs. It's an 22:29incredibly valuable occupation. And if 22:31you were already in cloud as an 22:33engineer, you're prepared for it. Data 22:35engineering, figuring out how to get 22:37from ETLs into AI pipelines. Listen, you 22:41may think, no, what are we going to do 22:43here? AI is going to come for like 22:44automated pipeline builds etc etc. I 22:47don't think that you are realizing how 22:49much data, this is the same thing with 22:50data science, how much data is going to 22:53be needed and how much data preparation. 22:55I will tell you again, I think most of 22:57the failures I see in AI projects come 23:00from the data side. If you are good at 23:04figuring out feature store governance, 23:05at figuring out vector ETLs, if you're 23:07good at figuring out how new data types 23:09can be made accessible and useful for 23:11business use cases, that's where the 23:13value is. Extraordinary data engineers 23:15have always been distinguished by 23:17understanding the technical side of the 23:19business and also the customer use case. 23:22In this case, understand the AI customer 23:24use case, how AI is changing what 23:26customers are expecting, the kinds of 23:27queries that are coming through. And 23:29then understand how the technical side 23:31enables that. Understand how vectorizing 23:33data is different than storing data 23:35traditionally. It's the same job. It's 23:38just a new technology stack. QA and AI 23:40quality. This is an interesting one 23:42because this is an area where we are 23:44fundamentally seeing a transformation 23:46and I don't know of anyone talking about 23:48it enough. Right now we are putting most 23:51of our energy into QAing software before 23:54it launches. With AI we need to shift 23:57and put much more of our energy into 23:59QAing as a durable quality threshold 24:03that is always on in production. Why? 24:06Because these systems produce 24:08probabilistic responses. You cannot 24:11deterministically test all this 24:12software. In a sense, the value in QA 24:15now is sustaining the value of the 24:18software and guarding it over time. That 24:21again is the heart of QA is sustaining 24:23the quality of the software, but you get 24:25even more to do because there's more of 24:28it because it just sustains it over 24:30time. You can't just launch and forget 24:32the way you did with deterministic 24:33software. Now, this is a major mindset 24:35shift. Most QA people that I talk to are 24:38not ready for this world. they are used 24:40to P 0, P1, P2, do the test and launch. 24:42That mindset won't work. And I do worry 24:45a little bit, not because the jobs won't 24:46be there, but because the QA people I 24:49know aren't really thinking that way. 24:51And so this is an area where there's a 24:52mindset shift that I think is important. 24:54Number 13, sales and solutions 24:57engineers. These can be very popular. 24:59Forward deployed engineers is another 25:01word for it. U some people say that's 25:03different, but it's it's very simple uh 25:05and very similar. This is a case where 25:07AI is a powerful enabler for this job. 25:10You can code something up very very 25:12quickly that demonstrates a personalized 25:15solution for the customer effectively. 25:17The challenge is you also have the 25:18quality piece. It's on you as the 25:20forward deployed or solutions engineer, 25:22the sales engineer to know what is 25:24actually doable from your product 25:26technically and to vibe code or quickly 25:29code only those things that you can 25:31actually reliably deliver. And you are 25:33also on the front lines of one of the 25:34most interesting trends in B2B SAS 25:37because because speed and execution are 25:41getting better at the code level. It is 25:42possible to extend SAS frameworks in 25:45ways that weren't possible before. When 25:46I was coming up in product, we were 25:48always taught to say no. PM says no, 25:50right? We were always taught to say no 25:51because you couldn't extend the software 25:53because it was so expensive to code. 25:55It's not expensive to code anymore. It's 25:57cheap. If it's not expensive to code 25:59anymore, then you should be able to 26:01extend and personalize the software 26:02more, which means more sales and 26:03solutions engineers as long as you are 26:06careful about quality. And so that's the 26:08thing where I know a lot of solutions 26:09engineers that want to advocate for the 26:12customer and lean in on customization. 26:14Make sure that you know the software 26:16stack you're working with and you don't 26:17overcommit. Number 14, edge engineers. 26:20People who can put intelligence into 26:23smaller devices. This one is brand new. 26:26This is not a role that exists right 26:29now. There are absolutely indie hackers 26:32out there who love to build LLMs onto 26:34small devices. Let me run it on my 26:36laptop, right? Let me quantize it and 26:38run it down on my phone. Whatever it is, 26:40let me compress this vision model. If 26:42you are that person, this role is going 26:44to exist for you. And I know I know a 26:47lot of people who just it's like the 26:49Unix tinkerers in the 1990s and 2000s 26:52and the Linux tinkerers. They just can't 26:54stop tinkering and playing with it. 26:55That's a fantastic preparation for this 26:58kind of role. We are going to want 27:00intelligence in everything. And if you 27:02think you don't, somebody is going to 27:04hire you to do it. Like someone is going 27:06to hire you for the smart refrigerator 27:07and the smart toaster and the smart home 27:09robot that folds your laundry and the 27:11smart washing machine and this and that. 27:13All of them are going to take little 27:15large language models that can fit and 27:17that need to be secure and that need to 27:19fit uh you know on prem on the device. 27:21Anyone who can figure out how to deploy 27:23intelligence at the edge is going to if 27:26you have your ability to talk about use 27:29cases and you're not just interested in 27:30your own work, you're going to have 27:32work. You're going to have roles. Number 27:3315 and number 15 is the last one. Vector 27:36database and retrieval engineers. This 27:38is exploding. This this is exploding. No 27:41one can get their hands on these people. 27:42If you work with rag, you are in one of 27:44the most valuable places in tech. which 27:47is why I've called out in the past 27:48understanding how rag works is one of 27:50those cheat codes right now in the job 27:51market. If you are an engineer who works 27:53with rag even more, you are even more 27:56valuable than you were before. It's 27:58incredible. Okay, so we've gone through 28:00these 15 roles. I want to just call out 28:03that there are some really interesting 28:05things that are coming up as dynamics 28:07that don't yet have role titles that you 28:10should have your eyes on if you're 28:11looking five or 10 years down the road 28:12in your career. Agent fleet 28:14orchestration is one. How do you manage 28:15fleets of agents? People have talked 28:17about that one. People talk less about 28:18number two, the simulation economy. How 28:21do you simulate more things? I talked 28:23about this in my digital twins video. 28:25Getting behavioral data out of 28:27simulations is going to be a big deal. 28:29Number three, understanding context and 28:32context supply chain. We don't really 28:33have names for that role, but that's 28:35going to become big. Number four, 28:37figuring out how to tune the human 28:41factor in AI modeling. It's almost like 28:43you're designing AI models with cultural 28:45bias. you're designing AI models and 28:47understanding human preference 28:48distributions. It's going to be a mix of 28:50like anthropology and psychology and 28:52deep technical understanding. Number 28:54five, figuring out how to work extremely 28:57efficiently with power. If you are 28:59scheduling jobs, how do you maximize 29:01efficiency on GPUs? We haven't had to 29:03get to this level before, but we haven't 29:04had a capex spend this big in compute 29:06before. It's going to become a big deal. 29:08I also want to call out AI risk and 29:11compliance is just starting to come up. 29:13It's going to be absolutely massive. the 29:15EU AI act, SEC disclosure rules, GDPR 29:18implications for training data. It's 29:20everywhere and it's going to get bigger. 29:22Synthetic data is one we don't talk 29:23about. People who are good at producing 29:25very high quality synthetic data are 29:27going to be in demand. Edge inference 29:29optimizers, people who not only can put 29:31stuff on devices, but figure out how to 29:33make it reason and figure out how to 29:34make it into robotics. It's like a 29:36crossover between robotics and 29:37inference. It's going to be a big deal. 29:39calling out that the idea of an AI 29:42psychologist sounds like science 29:43fiction, but it may help with security 29:45and red teaming. You're going to see 29:47psychologists on red teams to help debug 29:49LLMs. And then the last one I want to 29:52call out, business process designers. 29:54Like figuring out how business process 29:56designers are able to take AI and design 29:59an endto-end human and AI process loop. 30:02That's going to be a huge deal. We don't 30:04know how to work well with AI. 30:05Businesses are complex. If you can 30:08zeroot a business process as a designer, 30:10it's going to be extremely valuable as a 30:12skill. Okay, how do you navigate all of 30:14this? I want to give you just a couple 30:16of things at the end here that will help 30:18you put this together. First, if you're 30:20stuck, if you're overwhelmed, look at 30:22the survival level. Look at how you can 30:25identify tasks in your current role that 30:26you can automate so you are more 30:28effective. How can you set up AI powered 30:30email filters? All you know, all the 30:31stuff that people talk about, right? Use 30:32chat GPT for first drafts. Get to the 30:34survival level first. Then get to the 30:36adapt level. Then figure out how you 30:38move into the kind of role I talked 30:40about here. How do you get a 30:41complimentary technical skill set going? 30:43How do you build a portfolio project? 30:45How do you demonstrate that you are 30:46competent in where the role is going? 30:48And finally, you get to the lead level. 30:50Finally, you get to understanding where 30:52the new risk areas are, where there's 30:54frameworks that others can adopt in your 30:56job field, where you have new tools you 30:59can build to solve problems or you can 31:00get others to build, where you can 31:01establish industry standards. because 31:03for some of this stuff it's so new there 31:04isn't industry standards. Okay, wrapping 31:06this all up, we have talked, frankly 31:09fairly exhaustively about the key 31:10dynamics driving AI jobs. I've called 31:13out 15 key jobs I want you to think 31:15through. I've talked about how you can 31:18survive, how you can adapt, how you can 31:20lead in those jobs. And I've even talked 31:22about future job dynamics that will 31:24become jobs in the next 5 years or so. I 31:27hope this has been helpful. I don't want 31:29to overwhelm you, but I do believe this 31:31degree of specificity is necessary to 31:34honestly answer the question, where is 31:37my job going? So, that's it. This is it. 31:40Where is my job going in the age of AI? 31:42Let me know. If you found a job that I 31:43haven't covered yet, put in the 31:45comments.