Learning Library


Halloween: 10 AI Jump Scares Debunked

Key Points

  • The speaker frames sensational AI fears as “jump scares,” arguing that many popular rumors sound scarier than they actually are.
  • He dismisses the claim that AI will wipe out jobs, noting that the sheer volume and complexity of real‑world information exceeds any current AI’s decision‑making capacity.
  • He rejects the Skynet‑style apocalypse narrative, emphasizing ongoing AI alignment research aimed at ensuring any future super‑intelligent systems act in humanity’s best interests.
  • He argues that AI agents will not suddenly take over the internet, citing high token costs, limited reliability, and the need for specialized—not general‑purpose—agents to become truly useful.
  • Overall, the talk’s “real scare” is the spread of misinformation about AI, which can distract from legitimate challenges and the incremental nature of AI progress.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=joHjP-PTrh8](https://www.youtube.com/watch?v=joHjP-PTrh8)

**Duration:** 00:14:41
[0:00] It's Halloween, I'm wearing a cape, and we are going to do ten AI jump scares, and one real scare in AI that you should pay attention to. I'll do the ten jump scares first. A jump scare in a movie is when the monster jumps out and it feels a lot scarier than it actually is dangerous, and I think there are a lot of rumors around AI that fit that criteria: they feel more scary than they really are.

[0:25] Jump scare number one: AI will take all your jobs. I don't think that's true. The reason I don't think it's true is that, fundamentally, there is too much information in the world to process and pipe for an AI decision maker to make good choices about all of it, even if we invented a decision maker that would make good choices about all of the choices we face as workers, which we haven't done yet, by the way. So no, I don't think AI will take our jobs.

[0:53] Jump scare number two: AI will become Skynet. I don't see evidence of that. I see a lot of evidence of people working to align AI so that it is safer. Is there risk? Absolutely. Is it something where I think our science-fiction brains have gotten ahead of our real brains? I do. I do not see evidence that we are progressing linearly toward a future where the AI is going to control everything and run us as resources. In fact, I see us working really hard to ensure that we are creating an aligned future, where even if we create very smart artificial intelligence, maybe even superintelligence, it's aligned with what humanity as a whole is looking for.

[1:38] Jump scare number three: AI agents will run the internet. Now, I know we're getting to AI agents in reality; I've been talking about it, and we're seeing more and more evidence that they are out there. We talked on this channel about how there's an AI that's a millionaire, and we've had Claude launch AI agents that
control your desktop.

[2:01] Just because there are LLMs that make decisions and that are online, it does not follow that AI agents will immediately become the dominant force on the internet. The reason for that is pretty simple: LLMs are getting better, and AI agent decision-making is improving, but it's improving from a pretty bad place. If you have actually watched the Claude demo videos that are out there, they're okay; it's kind of like driving a cart into the ditch every ten feet. It does work, but it takes a bit. Will it get better? Yes. But even if it gets better, the token cost is still really high. Right now it is a non-trivial amount of tokens to use Claude in agent form; 15 minutes is like a million tokens. This is not something that is immediately going to take over the internet. And even when agents become more popular and cheaper and smarter, they are going to do better at specific jobs. General-purpose agents are really hard to build and will take longer; specialized agents are going to do a whole lot more next year than general-purpose agents. So no, I do not think AI agents are going to run the internet. These are all from my TikTok, by the way; I am literally pulling comments out of TikTok for these jump scares.

[3:18] Jump scare number four: software is dead. No, software is not dead. In fact, there has never been a better time to build software. Now, distribution channels have also never mattered more. If you launch a piece of software and you don't have an expectation for how people will sign up for it, that's always been a problem, and it is more of a problem now: there is more noise, because building software is cheaper and easier, and the expectations are higher. So there has never been a better time to build, but you have to know where the distribution channels are.
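The agent cost point above (roughly a million tokens per 15 minutes of agent use) can be turned into a back-of-envelope estimate. This is a sketch with an assumed per-million-token rate; the price below is an illustrative placeholder, not any provider's published pricing.

```python
# Rough cost model for running an LLM agent continuously.
# The token burn rate comes from the talk; the price is an assumption.

TOKENS_PER_15_MIN = 1_000_000          # figure quoted in the talk
ASSUMED_USD_PER_MILLION_TOKENS = 3.0   # placeholder blended input/output rate

def agent_cost_usd(minutes: float,
                   usd_per_million: float = ASSUMED_USD_PER_MILLION_TOKENS) -> float:
    """Estimate the cost of running the agent for `minutes`,
    assuming token burn scales linearly with time."""
    tokens = TOKENS_PER_15_MIN * (minutes / 15.0)
    return tokens / 1_000_000 * usd_per_million

# One hour of agent time at the assumed rate:
print(f"${agent_cost_usd(60):.2f} per hour")                # $12.00
# A month of 8-hour workdays (22 days):
print(f"${agent_cost_usd(60 * 8 * 22):.2f} per month")      # $2112.00
```

Even at a modest assumed rate, continuous agent use adds up to a real line item per seat per month, which is the speaker's point: agents won't blanket the internet while each hour of operation costs this much.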
[3:52] Jump scare number five: all the money will go to OpenAI or other big model builders. Look, OpenAI will monetize, Anthropic will monetize. I do think we are going to get much more expensive and much smarter models next year; I would not be surprised to see a four-figure price point for a model next year, certainly for corporate accounts, and there may be three-figure models for individuals if you want high-end performance. That does not mean that all the money will go to OpenAI. In fact, I would argue that the incredible competition we are seeing, between Google and Meta (and Netflix is in the game directly) and Grok with X and OpenAI and Anthropic, leads to cheaper intelligence. Any given model you may have to pay something for, but net-net, the pressure in the market is for more intelligence, cheaper. It is a tough time to be a model builder: you can launch a model that you have put hundreds of millions of dollars, billions of dollars, into training, and it can be out of date within three weeks. It is really tough to be a model builder; it is really great to be a consumer of models. And so no, I don't think that OpenAI is going to get all the money.

[5:13] Jump scare number six: AI code is always terrible and will break things. That's just not true. I know that people were coming after me in my mentions when I said that Google has 25% of their code written by AI; Amazon is doing that with Q. Look, it doesn't matter if it's utility code; the point is it is useful code that is providing value, so it is making it to production. Does that mean that AI is solving the most complex use cases? No, and that's fine; it would be nice if humans could do the fun and interesting design stuff. So no, I don't think that AI code is always bad; it's useful, and I think we see plenty of evidence that it is. I think another place that AI code
is useful, even if bloated, is in these LLM-generated code tools. Bolt is unlocking so much for people who have not coded. I taught a Maven course, and at the end of the day people are flocking to Bolt as new builders because it is so easy to get from idea to working preview: easier than Replit right now, easier than Cursor right now. Got to take my hat off to Bolt; I'll take my hood off for a second, there you go. Yeah, Bolt is really easy, and it's reminding me that even if Bolt's code isn't as clean as it could be, it is solving problems and shipping useful value, and that's what matters at the end of the day.

[6:37] Jump scare number seven: the AI will take all my data. That one's been around a while, and it reminds me of the old scares on Facebook, where it would be like "paste this on your wall or else Mark Zuckerberg will own all your data," and this little social virus would spread around every year or so, and you would see a bunch of people just paste a bunch of legal boilerplate to their wall because they genuinely believed that would save them from somehow having their data stolen. Look, the reality is that AI training data is different from the utterances you give the AI. If you are giving the AI utterances, that is not being used directly for training, because the model is not training when it comes back to you; the model is just inferring and responding. That's it. So no, it's not taking your data, and there are even more explicit protections at the enterprise level. And by the way, if you think "enterprise" and you think thousands of dollars, I will tell you OpenAI's enterprise package is like 60 bucks a month. If you want, as an individual, to get enterprise protections for the data that you give to OpenAI: great, 60 bucks a month. And by the way, the baseline protections are fine too, so
this is just a myth. It's a jump scare, and it's not something that I think is relevant, and I think it comes from the fact that people confuse training and inference, and they need to stop. Training, what the model is trained on, is a one-time thing; inference is what happens when you type something into the chat. Those are different things.

[8:18] Okay, jump scare number eight: when AGI comes, we're all doomed. AGI is artificial general intelligence, and there's this widespread perception (it kind of goes back to the Skynet thing, but it's specific to a level of intelligence) that when we get intelligence that is human-level, we're all doomed. That's the perception I get, that I read in the YouTube comments and the TikTok comments, and it's not true. I've mentioned part of it at the top: information processing is just physics, and there's too much information in the world. But I think the other reason is more fundamental. Artificial general intelligence, if it arrives, will arrive inside human institutions. Human institutions are designed to work for humans. We can argue about how fair they are, but fundamentally that's what they're there for. That means that AGI is contextualized, is situated inside human context, from the start; we will expect it to align to human incentives and human processes. And so when I see claims like "AGI will make pharmaceutical approvals run ten years faster," I kind of laugh, because the problem is not intelligence; the problem is that our drug approvals process is mired in bureaucracy, and no amount of intelligence will change that. That's just not how it works. And so I think that we overestimate the degree to which AGI is actually going to change everything. I think it will be very helpful for certain applications. I think it
is looking like it will be more helpful for specific business decisions; we may see an artificial intelligence agent with AGI capabilities as a standard part of C-suite meetings in the next year. I do not think that means that we will not have any employees, as I've shared before. I also don't think it means that the AGI will start to try and take over companies and run them; that's ridiculous, because it's going to exist inside a context, and human contexts matter.

[10:17] Okay, number nine: AI isn't really adding productivity. I hear that too. That's actually different from the other ones that I've listed here, because a lot of the other ones assume AI will get better; this one assumes AI is terrible. That's also not true. People are adopting artificial intelligence faster than they adopted the internet, and the reason they are doing so is because it is phenomenally helpful for general productivity. And if you are not finding it helpful, at this point it is probably you. Now, you can fix that; you can learn. There are lots of tutorials, I have lots of stuff all over the internet on how to get better at this, and I'm happy to talk with you. But at the end of the day, better prompting alone (leaving aside tool-chain solutions, leaving aside other tools, let's just assume you're in a chatbot, which by the way is not necessarily the recommended setup, but let's just say that's where you are, because that's the simplest place people start), even if that's the only use you have for AI, just typing into a chatbot, better prompting will get you 10x better results, hands down. And so think about it: if you're not getting good results from ChatGPT, are you using current-class models? Are you prompting well? Do you know how to prompt well? Are you experimenting with prompting, like code? It's worth thinking about,
because AI is enhancing productivity, and that is why we are seeing absolutely massive adoption. That's why the Wharton study on AI adoption at work had people doubling their usage since last year, up from a very high base: roughly a third of people were using it last year, two-thirds to three-quarters today.

[11:53] Okay, jump scare number ten: AI hallucinates too much to be useful. That wasn't true a year and a half or two years ago, when generative AI came on the scene with ChatGPT, and it is definitely not true now. The larger language models, the cutting-edge language models (the new Claude 3.5, the 4o or o1 class from OpenAI), don't really have a hallucination problem that's worth talking about, unless you are operating at enterprise scale, in which case even small errors add up and you have to work on an agentic approach to fix it. But fundamentally, if you're doing day-to-day tasks as an office worker, hallucinations have almost entirely gone away. Not completely (still check the work), but for the most part it's just not an issue anymore, and it's because the large language models actually got better as they got bigger. I actually saw a study on this: the bigger the language model, and the more it is able to articulate an answer specifically with confidence, the more likely that answer is not a hallucination. So there you go; I think that one's a jump scare.

[12:59] Okay, now it's time for the real scare. What is the thing that you should actually be scared of with AI? If you are building in the AI infrastructure space, you should be scared. One of the things we have seen in 2024 is that the model builders are going to monetize by taking the AI infrastructure layer. App builders are great; they're going to be fine.
[13:24] Infrastructure builders are in trouble. So, for example, if you've built your entire product on delivering RAG solutions on top of other models, that's a really dangerous place to be right now. If your entire model is just enabling a voice interface with a particular model through a bunch of backend chicanery, that's a very dangerous place to be. You want to be in a place where you're delivering real value to customers by leveraging intelligence, not where you're trying to make the intelligence slightly more platform-like, because the intelligence companies (OpenAI, Anthropic, others) are going to own the platforms, and they are going to make their platforms more useful. You saw that with the Swarm API launch from OpenAI. You saw that with GitHub going multi-model this week: you can now use Claude on GitHub; they've given up trying to make you not use Claude, and they're making the platforms more useful. Don't be in AI infrastructure; that is the really scary place to be. Okay, there you go: ten jump scares, and one thing you should really be scared of about AI. I hope you enjoyed the cape.
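The training-versus-inference distinction the talk leans on (jump scare number seven) can be made concrete with a toy model: a training step updates the weights, while inference only reads them. This is a minimal illustrative sketch using a one-parameter linear "model," not how any production LLM is implemented.

```python
# Toy illustration of training vs inference:
# training mutates the weights; inference only reads them.

class ToyModel:
    def __init__(self) -> None:
        self.weight = 0.0  # the model's single learned parameter

    def train_step(self, x: float, target: float, lr: float = 0.1) -> None:
        """Gradient step for squared error on y = weight * x.
        This is the ONLY method that changes the model."""
        pred = self.weight * x
        grad = 2 * (pred - target) * x
        self.weight -= lr * grad

    def infer(self, x: float) -> float:
        """Answer a query. Reads the weights, never writes them:
        like a chat message, your input is not folded into the model."""
        return self.weight * x

model = ToyModel()
for _ in range(50):                  # training phase: weight converges to 3.0
    model.train_step(x=2.0, target=6.0)

before = model.weight
for _ in range(1000):                # inference phase: weight never changes
    model.infer(3.0)
assert model.weight == before        # chatting didn't retrain anything
print(round(model.infer(2.0), 6))    # prints 6.0
```

The invariant the assertion checks is exactly the speaker's claim: a thousand "chat turns" leave the model byte-for-byte identical, because inference is a read-only pass through the weights.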