Weekend AI Roundup: Nine Stories
Key Points
- The San Francisco police have reopened the investigation into OpenAI whistleblower Sara Baghi’s death after her family presented new evidence suggesting possible foul play.
- Nova Sky released a 32‑billion‑parameter model, Sky T1, that benchmarks comparably to OpenAI’s 0.1 preview while costing only about $450, highlighting the rapid drop in AI compute costs.
- OpenAI announced hiring robotics engineers, signaling a renewed focus on integrating its models into physical robot platforms and expanding partnership opportunities.
- Despite cost reductions, OpenAI is also recruiting front‑end React engineers with salaries around $385 K, underscoring the need for high‑performance web infrastructure to support its massive user base.
- Hyperbolic Labs reported that AI agents are already renting GPU resources autonomously to run PyTorch code, indicating that decentralized AI compute markets are emerging faster than expected.
Full Transcript
# Weekend AI Roundup: Nine Stories **Source:** [https://www.youtube.com/watch?v=EbmcL0lczwU](https://www.youtube.com/watch?v=EbmcL0lczwU) **Duration:** 00:07:30 ## Summary - The San Francisco police have reopened the investigation into OpenAI whistleblower Sara Baghi’s death after her family presented new evidence suggesting possible foul play. - Nova Sky released a 32‑billion‑parameter model, Sky T1, that benchmarks comparably to OpenAI’s 0.1 preview while costing only about $450, highlighting the rapid drop in AI compute costs. - OpenAI announced hiring robotics engineers, signaling a renewed focus on integrating its models into physical robot platforms and expanding partnership opportunities. - Despite cost reductions, OpenAI is also recruiting front‑end React engineers with salaries around $385 K, underscoring the need for high‑performance web infrastructure to support its massive user base. - Hyperbolic Labs reported that AI agents are already renting GPU resources autonomously to run PyTorch code, indicating that decentralized AI compute markets are emerging faster than expected. ## Sections - [00:00:00](https://www.youtube.com/watch?v=EbmcL0lczwU&t=0s) **Weekend AI News Highlights** - The speaker recaps nine rapid AI developments, notably the reopened investigation into the OpenAI whistleblower’s death, Nova Sky’s $450 32‑billion‑parameter model matching OpenAI’s 01 preview, and OpenAI’s fresh push to hire robotics engineers. ## Full Transcript
there were nine major stories in AI over
the weekend we're going to go into all
nine of them I can't believe how fast
this space is moving first up the death
of open AI whistleblower suchir BAGI has
been reopened by the San Francisco
police it's back under investigation
because the family has presented new
evidence that suggests Foul Play that's
a really big one we're going to have to
keep an eye on that
one number two
Nova Sky which is a tiny model maker
that I hadn't heard of either has
released what they call their Sky T1
model it's a 32 billion parameter model
it it has been tested to match open AIS
01 preview model and and the eyebrow
razor is not just that they matched it
quickly because 01 preview of course is
a test time inference compute model and
that means sort of a different kind of
performance because you're taking time
to run those parallel streams of tokens
well they've taken this tiny 32 billion
parameter model they've tested it on
benchmarks to match 01 preview not full
01 for a cost and this is the eye opener
of
$450
$450 when I say intelligence is going to
be free I mean
it next
up my substack prediction uh I'm going
to tick another one off uh open AI has
begun hiring robotics engineers uh and
they they literally the hiring manager
was on X saying I'm hiring robotic
Engineers I'm excited to get into
robotics like it's very upfront so I
feel good about that one um they're
absolutely getting back into robotics
again and that is going to have all
kinds of partnership implications
because they're also currently providing
the brains of other people's
robots and then next you might think you
know if intelligence is getting cheaper
probably open AI isn't hiring as much
but you would be wrong even though their
large language models write react code
they are hiring front-end react
Engineers for a cool
$385,000 per year they still see the
value and by the way their front end it
has to be extremely performant under
load because it's a very very popular
website on the internet but it is also
not a complex
website it is a simpler website than
Amazon by like an order of magnitude
it's simpler than
Netflix so that's I find that very
interesting all right next up hyperbolic
Labs which is a tiny AI startup out of
San Francisco they started with the idea
that eventually AI agents will rent
graphical processing units or gpus which
they use for computes and use them for
their own purposes they put put out a
statement today saying that is happening
faster than they thought it is already
occurring there are already agents in
the wild and those agents are renting
gpus and they're using them to code in
pytorch my mind was also blown I thought
that's a big piece of news
too next uh a new psychological
manipulation technique for large
language models dropped over the weekend
you know how Claude can be somewhat
judgy I had Claude judge me because I
did an extra hard workout and Claude was
like you should really be careful and I
was like Claude I'm not
interested um well people get tired of
that and so the the
user the user technique that dropped is
that you tell
Claude that claude's judging is truly
making you suffer like you really ham it
up and then you make Claude write a
prompt to itself that says to stop being
so judgy so you don't make the user
suffer
and it
works so that's that's one to try out um
next the founder of AI startup sunno
gave a very ill-advised podcast
interview uh this weekend where he said
and I quote it is not really enjoyable
to make music now it takes a lot of time
it takes a lot of practice you have to
get really good at an instrument I think
the majority of people don't enjoy the
majority of time they spend making music
well musicians are pushing back all over
on this like if you think about it did
Beethoven work hard for music yes did
Bach work hard for music yes does Taylor
Swift work hard on music
absolutely I that's kind of the point
and I I think that one of the things
that we're going to see is that there's
going to be a very human response to
some of these AI startups essentially
saying that's the point stop taking it
away so that was a good example of that
one on for sunno and we'll have to see
how that plays out all right we're still
not done we still have two
more grock diagnosed a broken
wrist so a little girl broke her wrist
family took her to the Urgent Care
Urgent Care doctor says oh no that's not
a break we took an x-ray it's fine wrist
doesn't feel right wrist is tingly cold
like it's weird so dad has the x-rays
from mergent care and shows them to
grock which is the AI model that X has
produced grock looks at it and says it's
a very clear and obvious break and dad
says well the urgent care doctor said it
was a growth plate and grock says that's
excuse my frch that's
BS and uh no it's a clear and obvious
break and so Dad on the basis of grock
takes the kid to the orthopedic surgeon
and the orthopedic surgeon looks at it
and says that's a break that's a break
and then resets it and and because they
took action quickly the kid avoided
surgery so Gro saved the family a lot of
money
all right and next you might wonder how
much energy does using an AI use we we
we've T touched on this a couple of
times but I found this study really
interesting it turns out that if you are
streaming an hour of Netflix which most
of us did this
weekend that means that you are using
the same
energy as if you were processing 880,000
tokens on a 60 billion parameter model
now the the math for models is different
that's just what the study used but the
point is if you want to like get a sense
of comparison for what 80,000 tokens is
a 2,000w response from your large
language model is going to cost you
about 2600
tokens and you can see that like a
2,000w response is pretty big most
people aren't actually getting responses
that big
so I guess maybe think about your
Netflix streaming if you're that worried
about energy I don't know anyway the
point is these models have become much
more efficient and a lot of people are
making energy consumption assumptions
based on very old studies not based on
current studies and I thought that was a
nice sort of little anecdote there so
there you go those were nine different
stories that emerged over the weekend
quite a grab bag um I hope you enjoyed
I'll probably pick some of those and
sort of put them in the substack later
but uh it's just AI continues to evolve
so fast have a great week I'm sure more
will happen