Halloween: 10 AI Jump Scares Debunked
Key Points
- The speaker frames sensational AI fears as “jump scares,” arguing that many popular rumors sound scarier than they actually are.
- He dismisses the claim that AI will wipe out jobs, noting that the sheer volume and complexity of real‑world information exceeds any current AI’s decision‑making capacity.
- He rejects the Skynet‑style apocalypse narrative, emphasizing ongoing AI alignment research aimed at ensuring any future super‑intelligent systems act in humanity’s best interests.
- He argues that AI agents will not suddenly take over the internet, citing high token costs, limited reliability, and the need for specialized—not general‑purpose—agents to become truly useful.
- Overall, the talk’s “real scare” is the spread of misinformation about AI, which can distract from legitimate challenges and the incremental nature of AI progress.
Sections
Full Transcript
**Source:** [https://www.youtube.com/watch?v=joHjP-PTrh8](https://www.youtube.com/watch?v=joHjP-PTrh8)
**Duration:** 00:14:41
It's Halloween, I'm wearing a cape, and we are going to do 10 AI jump scares, plus one real scare in AI that you should pay attention to. I'll do the 10 jump scares first. A jump scare in a movie is when the monster jumps out and it feels a lot scarier than it actually is dangerous, and I think there are a lot of rumors around AI that fit that criteria: they feel scarier than they really are.

Jump scare number one: AI will take all your jobs. I don't think that's true. The reason I don't think it's true is that, fundamentally, there is too much information in the world to process and pipe for an AI decision maker to make good choices about all of it, even if we invented a decision maker that could make good choices about all of the choices we face as workers (which we haven't done yet, by the way). So no, I don't think AI will take our jobs.

Jump scare number two: AI will become Skynet. I don't see evidence of that. I see a lot of evidence of people working to align AI so that it is safer. Is there risk? Absolutely. Is it something where I think our science fiction brains have gotten ahead of our real brains? I do. I do not see evidence that we are progressing linearly toward a future where the AI is going to control everything and run us as resources. In fact, I see us working really hard to ensure that we are creating an aligned future, where even if we create very smart artificial intelligence, maybe even superintelligence, it's aligned with what humanity as a whole is looking for.

Jump scare number three: AI agents will run the internet. Now, I know we're getting to AI agents in reality. I've been talking about it, and we're seeing more and more evidence that they are out there: we talked on this channel about how there's an AI that's a millionaire, and we've had Anthropic launch Claude agents that control your desktop. But just because there are LLMs that make decisions and that are online, it does not follow that AI agents will immediately become the dominant force on the internet. The reason for that is pretty simple. LLMs are getting better, and AI agent decision-making is improving, but it's improving from a pretty bad place. If you have actually watched the Claude demo videos that are out there, they're okay; it's kind of like driving a cart into the ditch every 10 feet. It does work, but it takes a bit. Will it get better? Yes. But even if it gets better, the token cost is still really high. Right now it is a non-trivial amount of tokens to use Claude in agent form for 15 minutes; it's like a million tokens.
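To put that million-token figure in perspective, here is a rough back-of-the-envelope sketch in Python. The per-token price below is an assumed illustrative rate, not a quoted one, so treat the output as an order-of-magnitude estimate only:

```python
# Rough cost of a 15-minute agent session, using the ~1M token
# figure from the talk. The price is an assumption for illustration.
tokens_per_session = 1_000_000        # ~15 minutes of agent use
assumed_price_per_million = 3.00      # assumed $ per 1M tokens

cost_per_session = tokens_per_session / 1_000_000 * assumed_price_per_million
sessions_per_workday = 8 * 60 // 15   # back-to-back 15-minute sessions

daily_cost = cost_per_session * sessions_per_workday
print(f"${cost_per_session:.2f}/session, ${daily_cost:.2f} per 8-hour day")
```

Even at a modest assumed rate, running one agent continuously all day lands in the tens of dollars, which is the scale argument the talk is making.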
This is not something that is immediately going to take over the internet. And even when agents become more popular, cheaper, and smarter, they are going to do better at specific jobs. General-purpose agents are really hard to build and will take longer; specialized agents are going to do a whole lot more next year than general-purpose agents. So no, I do not think AI agents are going to run the internet. These are all from my TikTok, by the way; I am literally pulling comments out of TikTok for these jump scares.

Jump scare number four: software is dead. No, software is not dead. In fact, there has never been a better time to build software. Now, distribution channels have also never mattered more. If you launch a piece of software and you don't have an expectation for how people will sign up for it, that's always been a problem, and it is more of a problem now, because there is more noise: building software is cheaper and easier, and the expectations are higher. So it has never been a better time to build, but you have to know where the distribution channels are.

Jump scare number five: all the money will go to OpenAI or other big model builders. Look, OpenAI will monetize, Anthropic will monetize. I do think we are going to get much more expensive and much smarter models next year. I would not be surprised to see a four-figure price point for a model next year, certainly for corporate accounts, and there may be three-figure models for individuals if you want high-end performance. That does not mean that all the money will go to OpenAI. In fact, I would argue that the incredible competition we are seeing, between Google and Meta (and Netflix is in the game directly), and Grok with X, and OpenAI and Anthropic, leads to cheaper intelligence. Any given model you may have to pay something for, but net net, the pressure in the market is for more intelligence, cheaper. It is a tough time to be a model builder: you can launch a model that you have put hundreds of millions, even billions, of dollars into training, and it can be out of date within three weeks. It is really tough to be a model builder; it is really great to be a consumer of models. And so no, I don't think that OpenAI is going to get all the money.

Jump scare number six: AI code is always terrible and will break things. That's just not true. I know that people were coming after me in my mentions when I said that Google has 25% of their code written by AI; Amazon is doing that with Q. Look, it doesn't matter if it's utility code. The point is, it is useful code that is providing value, so it is making it to production. Does that mean that AI is solving the most complex use cases? No, and that's fine; it would be nice if humans could do the fun and interesting design stuff. So no, I don't think that AI code is always bad. It's useful, and I think we see plenty of evidence that it is. I think another place AI code is useful, even if bloated, is in these LLM code-generation tools. Bolt is unlocking so much for people who have not coded. I taught a Maven course, and at the end of the day, people are flocking to Bolt as new builders because it is so easy to get from idea to working preview: easier than Replit right now, easier than Cursor right now. Got to take my hat off to Bolt; I'll take my hood off for a second, there you go. Yeah, Bolt is really easy, and it's reminding me that even if Bolt's code isn't as clean as it could be, it is solving problems and shipping useful value, and that's what matters at the end of the day.

Jump scare number seven: the AI will take all my data. That one's been around a while, and it reminds me of the old scares on Facebook, where it would be like, "paste this on your wall or else Mark Zuckerberg will own all your data." That little social virus would spread around every year or so, and you would see a bunch of people paste a bunch of legal boilerplate to their wall because they genuinely believed that would save them from somehow having their data stolen. Look, the reality is that AI training data is different from the utterances you give the AI. If you are giving the AI utterances, that is not being used directly for training, because the model is not training when it comes back to you; the model is just inferring and responding. That's it. So no, it's not taking your data, and they have even more explicit protections at the enterprise level. And by the way, if you think "enterprise" and you think thousands of dollars: OpenAI's enterprise package is like 60 bucks a month. If you want, as an individual, to get enterprise protections for the data that you give to OpenAI, great, 60 bucks a month. And by the way, the baseline protections are fine too. So this is just a myth, it's a jump scare, and it's not something that I think is relevant. I think it comes from the fact that people confuse training and inference, and they need to stop. Training is a one-time thing where the model learns from its training data; inference is what happens when you type something into the chat. Those are different things.
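The training-versus-inference distinction can be sketched in a few lines of Python. This is a toy illustration (a one-weight "model"), not how an actual LLM works internally; the point is that training mutates the model's weights, while inference only reads them:

```python
# Toy model with a single weight: training updates it, inference never does.
weights = {"w": 0.0}

def train(weights, data, lr=0.1):
    """One-time fit: adjust the weight from (x, y) examples."""
    for x, y in data:
        error = y - weights["w"] * x
        weights["w"] += lr * error * x   # the weight changes here
    return weights

def infer(weights, x):
    """Answering a prompt: read the weight, compute, return.
    Nothing passed in here is written back into the model."""
    return weights["w"] * x

train(weights, [(1.0, 2.0)] * 50)   # training phase: weights change
before = dict(weights)
infer(weights, 3.0)                 # inference: weights are untouched
assert weights == before
```

Your chat messages are the `x` in `infer`: they shape the response, but they do not, by themselves, rewrite the model.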
Okay, jump scare number eight: when AGI comes, we're all doomed. AGI is artificial general intelligence, and there's this widespread perception (it kind of goes back to the Skynet thing, but it's specific to a level of intelligence) that when we get intelligence that is human-level, we're all doomed. That's the perception I get, that I read in the YouTube comments and the TikTok comments, and it's not true. I mentioned part of the reason at the top: information processing. It's just physics; there's too much information in the world. But I think the other reason is more fundamental. Artificial general intelligence, if it arrives, will arrive inside human institutions. Human institutions are designed to work for humans. We can argue about how fair they are, but fundamentally, that's what they're there for. That means that AGI is contextualized, is situated inside human context, from the start; we will expect it to align to human incentives and human processes. And so when I see claims like "AGI will make pharmaceutical approvals run 10 years faster," I kind of laugh, because the problem is not intelligence. The problem is that our drug approval process is mired in bureaucracy, and no amount of intelligence will change that. That's just not how it works. And so I think we overestimate the degree to which AGI is actually going to change everything. I think it will be very helpful for certain applications, and it is looking like it will be more helpful for specific business decisions. We may see an artificial intelligence agent with AGI capabilities as a standard part of C-suite meetings in the next year. I do not think that means we will not have any employees, as I've shared before. I also don't think it means that the AGI will start to try and take over companies and run them ridiculously, because it's going to exist inside a context. Human contexts matter.

Okay, number nine: AI isn't really adding productivity. I hear that too. That's actually different from the other ones I've listed here, because a lot of the other ones assume AI will get better; this one assumes AI is terrible. That's also not true. People are adopting artificial intelligence faster than they adopted the internet, and the reason they are doing so is because it is phenomenally helpful for general productivity. And if you are not finding it helpful, at this point it is probably you, and you can fix that. You can learn: there are lots of tutorials, and I have lots of stuff all over the internet on how to get better at this. Happy to talk with you. But at the end of the day, better prompting alone (leaving aside toolchain solutions, leaving aside other tools, let's just assume you're in a chatbot, which by the way is not necessarily the recommended setup, but let's say that's where you are, because that's the simplest place people start), even if that's the only use you have for AI, just typing into a chatbot, better prompting will get you 10x better results, hands down. So think about it: if you're not getting good results from ChatGPT, are you using current-class models? Are you prompting well? Do you know how to prompt well? Are you experimenting with prompting like code?
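"Prompting like code" can be as simple as treating prompts as versioned, parameterized templates you iterate on, rather than one-off chat messages. Here is a minimal sketch in Python; the template text and field names are illustrative, not from the talk:

```python
# Treat a prompt like code: a reusable, parameterized template
# you can review, diff, version, and refine.
PROMPT_TEMPLATE = (
    "You are a {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Respond as {fmt}."
)

def build_prompt(role, task, constraints, fmt="bullet points"):
    """Fill the template; every run uses the same reviewed structure."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, constraints=constraints, fmt=fmt
    )

prompt = build_prompt(
    role="technical editor",
    task="summarize this meeting transcript",
    constraints="under 100 words; flag open questions",
)
print(prompt)
```

The payoff is the same as with code: when a prompt underperforms, you change one named parameter or one template line and compare results, instead of retyping the whole thing from memory.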
It's worth thinking about, because AI is enhancing productivity, and that is why we are seeing absolutely massive adoption. That's why the Wharton study on AI adoption at work had people doubling their usage since last year, up from a very high base: like a third of people were using it last year, and two-thirds to three-quarters today.

Okay, jump scare number 10: AI hallucinates too much to be useful. That wasn't true a year and a half or two years ago, when generative AI came on the scene with ChatGPT, and it is definitely not true now. The cutting-edge large language models, the new Claude 3.5, the 4o or o1 class from OpenAI, don't really have a hallucination problem that's worth talking about, unless you are operating at enterprise scale, in which case even small errors add up and you have to work on an agentic approach to fix them. But fundamentally, if you're doing day-to-day tasks as an office worker, hallucinations have almost entirely gone away. Not completely (still check your work), but for the most part it's just not an issue anymore. And it's because the large language models actually got better as they got bigger. I actually saw a study on this: the bigger the language model, and the more it is able to articulate an answer specifically and with confidence, the more likely that answer is to not be a hallucination. So there you go; I think that one's a jump scare.

Okay, now it's time for the real scare. What is the thing that you should actually be scared of with AI? If you are building in the AI infrastructure space, you should be scared. One of the things we have seen in 2024 is that the model builders are going to monetize by taking the AI infrastructure layer. App builders are great; they're going to be fine. Infrastructure builders are in trouble. So, for example, if you've built your entire product on delivering RAG solutions on top of other models, that's a really dangerous place to be right now. If your entire model is just enabling a voice interface with a particular model through a bunch of backend chicanery, that's a very dangerous place to be. You want to be in a place where you are delivering real value to customers by leveraging intelligence, not where you're trying to make the intelligence slightly more platform-like, because the intelligence companies, OpenAI, Anthropic, and others, are going to own the platforms. They are going to make their platforms more useful. You saw that with the Swarm API launch from OpenAI. You saw that with GitHub going multi-model this week: you can now use Claude on GitHub; they've given up just trying to make you not use Claude. They're making the platforms more useful. Don't be in AI infrastructure; that is the really scary place to be.

Okay, there you go: 10 jump scares and one thing you should really be scared of about AI. I hope you enjoyed the cape.