Clarifying OpenAI’s o1 and o1 Pro Launch
Key Points
- OpenAI’s launch of the new o1 model was muddled: o1 and o1 Pro were released simultaneously while o1-preview was removed, causing confusion about naming, pricing, and where to access the models.
- The speaker argues the proper rollout would have been a simple release of o1 (available in Plus and Team plans), followed by a separate announcement for o1 Pro, to clearly differentiate the products.
- Benchmarks show that o1 Pro offers only modest improvements over o1 on generic tasks, but it delivers a substantial leap on complex, high-level tasks.
- Many users, especially those not deeply embedded in the AI industry, question the $200 price of o1 Pro because they perceive little difference in everyday use, highlighting a communication gap from OpenAI about the model’s specialized capabilities.
- The speaker’s personal test, asking each model to critique an 1,800-word essay in a response that fits on a single iPhone screen, showed that only o1 could meet the requirement, underscoring the newer model’s practical value for nuanced, concise output.
**Source:** [https://www.youtube.com/watch?v=lC4CxLrlFpc](https://www.youtube.com/watch?v=lC4CxLrlFpc)
**Duration:** 00:07:03
Sections
- [00:00:00](https://www.youtube.com/watch?v=lC4CxLrlFpc&t=0s) **OpenAI's Confusing o1 Launch** - The speaker criticizes OpenAI's botched rollout of the o1 and o1 Pro models, highlighting unclear naming, surprise pricing, deletion of the preview model, and the unexpected reinforcement fine-tuning release.
Full Transcript
OpenAI messed up their launch of o1, and I want to talk about it, because I think we need to set the record straight on what o1 is, what o1 Pro is, and where they're going with their newest release, which came out today: reinforcement fine-tuning. We're going to get to all three of those and unpack them.

o1 is the model they have been teasing for months. They should have just released that yesterday; that would have been big enough news on its own. Just release o1, tell people very clearly that o1 goes into Plus and Team plans, and then get out. That is the correct launch for the day, and the reason why is that people would then know where to find the model you are launching.

Most of the people I know who are not obsessive industry watchers are asking me, "Nate, why do I have to pay $200 for o1?" Well, you don't, but OpenAI confused everyone by also dropping a second surprise model yesterday at the same time as this much-ballyhooed o1. They called it o1 Pro, which is even more confusing, because now they're both named o1. Which one do you mean?

And then they deleted o1-preview without telling anybody. Now, it's fine to delete o1-preview and just put in o1; I think that would have made sense. But adding two new o1s is extremely confusing. So, o1 Pro costs $200.
And everyone I know who knows there is a difference, which is already a small segment of the population, is asking why it matters, because if you look at the benchmarking papers (and I was up late last night doing that), it does not look like a very big jump over o1. But it feels like a big jump over o1 for the right kind of task, and that is what OpenAI has done a poor job of calling out. I think they've done a poor job of calling that out for 4o as well, because most of the folks in my TikTok comments are saying, "Why would I go to o1? I tried o1, I found it in my Plus plan, and it doesn't seem that much different." I will tell you: it doesn't seem different if you are using it for the same easy tasks. If you're using it for more complex tasks, it's an absolute life-changer.

I'll give you an example. I fed o1, 4o, and Claude Sonnet 3.5 an 1,800-word essay in one prompt, and I gave them the exact same, to-the-word instructions. I said: read the essay and come back to me with a critique to make it better, and that critique must fit inside an iPhone screen, one screen-capture size. I did not give it the dimensions of the iPhone or anything like that, just "fit it inside an iPhone screen." Well, only one model could do it: o1. That's it. All the others failed miserably. I asked o1-mini, I asked 4o, I asked Claude Sonnet 3.5, and all of them were way too wordy, with critiques that just stretched on and on and were difficult to make sense of. And it wasn't that they were wrong; Sonnet 3.5 had good points, and 4o was okay. But when I read the succinct, one-iPhone-screen response from o1, I felt like I was talking to a senior stakeholder with fifteen years of experience in the industry. I was shocked. It is incredible.

But if I had asked it to do a really simple prompt, like "hey, help me brainstorm for a meeting, here are three bullets," would it have really done a much better job than 4o? Probably not in a measurable way.
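As a rough way to try this test yourself, here is a minimal sketch. The prompt wording is paraphrased from the speaker's description, and the 120-word budget is my own stand-in for "fits on one iPhone screen"; the video does not specify a number.

```python
# Sketch of the side-by-side test described above: the same to-the-word
# instructions go to every model, and each reply is checked against a
# rough "one iPhone screen" budget. The 120-word budget is an assumption,
# not something specified in the video.

def build_prompt(essay: str) -> str:
    """Identical instructions for every model, with the essay appended."""
    return (
        "Read this essay and come back to me with a critique to make it "
        "better. That critique must fit inside an iPhone screen, one "
        "screen-capture size.\n\n" + essay
    )

def fits_iphone_screen(critique: str, word_budget: int = 120) -> bool:
    """Heuristic pass/fail: a one-screenshot critique is roughly 120 words or fewer."""
    return len(critique.split()) <= word_budget
```

Each model's reply, however you obtain it (chat interface or API), can then be scored with `fits_iphone_screen`; in the speaker's run, only o1's critique would have passed.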
Like, probably a little bit better; the tone would have been subtly better. That's not what it's for. The models that we are developing are solving harder tasks than most of us have to solve.

And so you need to recognize what model you really need in your work. If you are just doing day-to-day work, 4o or Sonnet 3.5 is probably as much power as you need. If you are deliberately solving complex problems, you want a one-shot response, and you're willing to write a precise prompt, o1 is incredible. And o1 Pro is even better, but for an even smaller range of use cases. I saw a demo of o1 Pro today where they gave o1 Pro, o1, 4o, and some others the same prompt: clone the Coinbase front page. Only o1 Pro was able to produce a high-quality, production-ready piece of code that was gorgeously designed and perfectly functional, with no bugs, in one response. Everybody else was way off base by comparison. So, o1 Pro... I mean, if o1 is a BMW, o1 Pro is a Ferrari, but you can only put a Ferrari on a small percentage of the roads without banging it up.

And so what I'm encouraging you to do is what OpenAI's marketing team should have done in the first place, which is dig in and understand these models. If you want to learn more about the models and how to leverage them for workflows, I am doing a free lightning lesson; I will put the link in the description so you can sign up for it. It's on December 19th. These are incredible models, and I don't want to take away from OpenAI's technical achievement here just because they dropped the ball on the marketing. These are amazing, and I want people to understand what they can do and how to use them to drive workflows and multiply their value in 2025. So if all that sounds interesting to you, I have a Maven lesson where you can learn live from me: thirty minutes, it'll be fun, December 19th. And before we go, I want to call out
that there is a connection between the Pro plan and the reinforcement fine-tuning that we got today, because the Pro plan is aimed at scientists, and so is reinforcement fine-tuning. Reinforcement fine-tuning is aimed at high-value enterprise researchers who want to dig in deeply on specific, highly technical problems. Again, this is a Ferrari of a technique; it is not for everybody, and you do not need it on average. There's a reason they put it on a waitlist. That's what it's about, and I think we're going to get more and more of these heavy-duty models that offer incredible value, but only for very specific cases. All right, cheers.