
AI Goes Proactive: OpenAI Pulse & Microsoft Copilot

Key Points

  • OpenAI’s new “Pulse” feature delivers proactive AI assistance based on a user’s recent chats, prompting people to start conversations days in advance and noticeably altering their workflow.
  • Because Pulse is unsolicited, it provides a seamless spot for sponsored cards, and the simultaneous hiring of an ads‑monetization lead suggests OpenAI is gearing up to embed advertising directly into the experience.
  • Pulse represents one of the first consumer‑grade proactive AI products, signaling a broader industry shift toward AI that initiates interactions rather than only reacting, a trend expected to dominate by 2026.
  • Microsoft announced it is diversifying its Copilot offering by integrating Anthropic models, moving beyond its prior reliance on OpenAI and broadening its AI partner ecosystem.


**Source:** [https://www.youtube.com/watch?v=-hK4Qt8B9Fg](https://www.youtube.com/watch?v=-hK4Qt8B9Fg)
**Duration:** 00:15:54

## Sections

- [00:00:00](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=0s) **OpenAI Launches Proactive AI Assistant** - The host outlines OpenAI's new Pulse feature, a proactive ChatGPT tool that leverages recent conversation history to deliver timely insights, reshapes user workflows, and hints at broader implications like ad integration.
- [00:04:19](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=259s) **Anthropic Opus Outshines ChatGPT, Meta Eyes Gemini** - Anthropic's Opus 4.1 surpasses OpenAI's ChatGPT models on practical tasks such as slide decks and spreadsheets, prompting Microsoft to integrate Anthropic models into Copilot and hinting at an upcoming Opus 4.5, while Meta negotiates a partnership with Google Cloud to embed Gemini's multimodal AI into its ad-targeting systems.
- [00:07:58](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=478s) **Stargate: $400B AI Compute Push** - OpenAI's multi-year Stargate initiative, backed by over $400 billion from partners such as Oracle, Nvidia, and SoftBank, aims to amplify compute capacity by about 100x and eventually achieve fully autonomous, robot-built data centers.
- [00:11:42](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=702s) **Adobe and Notion Sacrifice Margins for AI** - Smooth, useful AI tools will boost user adoption, while firms such as Adobe and Notion integrate third-party models and deliberately cut gross margins to remain competitive in the rapidly evolving AI landscape.
- [00:15:12](https://www.youtube.com/watch?v=-hK4Qt8B9Fg&t=912s) **Landmark US AI Safety Bill** - Despite limited public positions from major AI firms, the passage of Senate Bill 53 will require model makers to demonstrate safety protocols, whistleblower protections, and new compliance measures, affecting both providers and their customers.

## Full Transcript
All right, let's get right to it. What were the AI stories that mattered the most this week? And new this week, I'm going to put a special prompt in the post so you can pull out the implications of these stories for your company, your role, your domain of interest. You'll be able to go a step deeper with that.

Story number one: OpenAI is going proactive. And this has a lot of pull-on-the-thread implications I want to get at briefly. The launch is called Pulse, and it offers proactive AI assistance based on what you most recently talked about. Think of it as something like an Instagram Stories reel, but tuned to your previous chats, specifically very recent chats, like the last day or two. It's available to Pro users initially. I've tried it out, and what I've found is that it's a very seamless, almost eerily relevant experience. It changes my behavior in some interesting ways, though, because it's pushing me to start conversations a day or two before I think I need to have them, so that I give Pulse a chance to work overnight and hand me interesting insights the morning I actually have to do the work. It's always a good sign when a product launch is so impactful it immediately changes your workflow, and that's what I found here.

But the implications go beyond Pro-user availability. What you see with Pulse is ChatGPT investing fairly transparently in an ad surface. It was always going to be challenging for ChatGPT to keep an aura of objectivity in individual conversations that users initiate. It will be much easier to position ads inside Pulse, because Pulse is already proactive: it's already an experience you didn't necessarily ask for but ChatGPT is offering to you.
So if they slide a sponsored card into that Pulse format, you can just ignore it if you don't want it, right? It's a very simple ad experience. And of course, at the same time as they launch Pulse, we notice that OpenAI has opened an ads-monetization role at ChatGPT. So something is coming on the ads front, and Pulse seems to be related to it. We'll have to put a pin in that and stay tuned to see where they go.

One final implication I want to call out on the Pulse story: this is the beginning of an entirely new arc in which we will see AI become more proactive. This is one of the first widely available consumer proactive AI experiences. Keep that in mind, because a lot of the other threads we've been following over the last few months point to a 2026 that is AI-proactive rather than AI-reactive, and the major model makers are all building in that direction.

Let's get to story number two: Microsoft is diversifying Copilot with Anthropic models. This is really fascinating, because Microsoft has been known for leaning on OpenAI for a while. But as we discovered in the last couple of weeks, Microsoft and OpenAI have finalized a new, looser partnership format, and in that context it's not super surprising to see Microsoft bring in another major model maker as essential to its suite of tools. I think it's the right call, because if you look at how Anthropic actually performs, how Claude Opus 4.1 specifically performs on the work Copilot users do (slides, sheets, reviewing docs), that's work Opus 4.1 does very well. It's not just me saying that. Ironically, it's ChatGPT saying that, too.
OpenAI completed a study called GDPval this week that tests major AI models against economically useful work tasks. Now, before you run away with the wrong headline: this is not the same as testing major models against economically useful jobs. These are very limited tasks, where the context is prepared by an expert in a neat little package, an expert prepares a gold-standard solution, and then the major model makers run their models against the problem and see how the result compares to that gold standard in a blind evaluation. Fine, it's not real work in the sense that it's not as messy, but it is real work in the sense that the tasks are real and designed by experts in the field. That said, OpenAI sponsored all of this, set it up, and ran it, and even they admit that Opus 4.1 from Anthropic is better at that kind of economically useful work than any of ChatGPT's models, which is a huge endorsement for Opus 4.1 and the Anthropic team, and one I've personally found correct. I prefer Opus 4.1 for preparing a slide deck or creating a spreadsheet; it is just much more useful than the ChatGPT models at this point. So it's not surprising to see Microsoft pulling the Anthropic models into Copilot as they evaluate that. There are heavy rumors this is going to get even better shortly, as we suspect Anthropic is on the verge of releasing something like a 4.5 version of Opus, which would presumably be a step forward here. Stay tuned for that; it may be coming in the next couple of weeks.

Story number three: Meta is exploring a Gemini partnership for ad targeting.
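A quick sidebar on the GDPval setup mentioned above: a blind pairwise evaluation can be sketched in a few lines. This is purely illustrative; the names (`Task`, `blind_compare`, `judge`) and the win-rate scoring are my own stand-ins, not OpenAI's actual harness.

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str          # expert-prepared context, packaged up front
    gold_solution: str   # expert-written reference answer

def blind_compare(task: Task, model_output: str, judge) -> bool:
    """Return True if the judge prefers the model output over the gold solution.

    The two candidates are shuffled so the judge cannot tell which one
    is the expert reference (the 'blind' part of the evaluation).
    """
    candidates = [("model", model_output), ("gold", task.gold_solution)]
    random.shuffle(candidates)
    winner_idx = judge(task.prompt, candidates[0][1], candidates[1][1])
    return candidates[winner_idx][0] == "model"

def win_rate(tasks, outputs, judge) -> float:
    """Fraction of tasks where the model beats the gold standard."""
    wins = sum(blind_compare(t, o, judge) for t, o in zip(tasks, outputs))
    return wins / len(tasks)
```

In the real study the judge is a human expert; here it could be any callable that returns the index of the preferred candidate.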
So Meta is in early discussions with Google Cloud about integrating Google's Gemini AI models into Meta's ad operations, and Meta employees have explored how Gemini's multimodal capabilities, like Nano Banana, could refine the algorithms that match ads to users on Facebook and Instagram. Basically: can you take the Facebook and Instagram social-algorithm secret sauce that Meta has and marry it to the image-generation or text-generation capabilities that Gemini has?

This is a huge step back for Zuckerberg and Meta. I know you might not have expected me to say that, but hear me out. They have invested an enormous amount of money in their own AI. Zuckerberg just got done making headlines for the largest pay packages in history for his AI team, and then very publicly lost about half a dozen researchers who stayed briefly at Meta and left for undisclosed reasons, though everyone guesses the culture. In this situation, Zuckerberg is having to admit that Llama, his own home-built model, is just not good enough for this task, and he's having to go to Google. This underlines how the race for AI has narrowed over the last year or so. A year or two ago, Llama would have been right in the race with the top leaders. It's just not now. The race has narrowed to Google, OpenAI, and Anthropic. And I know Grok is really trying, but Llama's not even in the conversation, and even Meta is admitting that. That is the undercurrent here, and it's a really big question mark for Meta, because Zuckerberg has not publicly suggested he's walking back his big-dollar investments in AI for the future. He's still planning to spend hundreds of billions over the next few years on AI. Where is that money going to go? What do they anticipate?
Is this a story like Apple's chipset story, where they spend a vast amount of money trying to catch up and build their own chipsets, and eventually they do? Or is it a story where Meta invests this money and then gradually rolls it back as they realize they can't catch up? We don't know where that's going to go yet, but for now, it looks like Meta is not in the driver's seat, even in their own business, on AI.

Story number four: OpenAI is continuing to make headlines for how much they are spending on data centers. They are aggressively building capacity in ways that boggle the mind: the size of the numbers involved, in terms of power generation, in terms of dollars. We are now over $400 billion in investment over three years in the Stargate project, their flagship buildout, and they announced five new AI data centers with Oracle as part of it. What this suggests to me is that Stargate is not just a headline; it's an umbrella term for the three-to-five-year vision OpenAI has for a massive scale-up in compute versus the present state, maybe more than 100x when it's all done. And they are not having trouble attracting the investment they need to get there. In fact, the stories over the last few weeks around Oracle's investment in OpenAI, Nvidia's investment in OpenAI, and this week SoftBank leaning in on financing for Stargate all add up to Sam Altman being able to attract the capital he needs to get this power-generation vision done. And his vision goes beyond just building stuff. He wants to get to a point where construction of additional capacity for AI is fully autonomous.
There was a blog post about that which, while aspirational, is important to consider. Sam expects that in three to five years he will be able to have entirely autonomous data-center production. Robots will put the data centers together, robots will bring the chips in, there will be robotic chip fabs, and they will be able to autonomously stand up new data centers to match the capacity needs of AI. You would only do this if you were expecting not just a 100x gain in demand for AI, but a 1,000x gain. So Sam's vision is, predictably, incredibly grand. And that blog post gives us a sense of why these other companies are willing to put so much skin in the game, so many dollars, to scale up on the data-center side. What this suggests is that the demand story for AI remains intact, and we are going to continue to see aggressive buildout.

Let's get to story number five. Kimi is a Chinese AI company's model; the company is named Moonshot AI, and they produced Kimi K2, a trillion-parameter model that is very, very good. It is now available as an agent called OK Computer, launched on the 24th of September. It allows Kimi to access its own virtual computer environment, with a file system, a browser, and a terminal, and it can autonomously execute complicated multi-step tasks for you. So it can transform chat requests into websites, data dashboards, production documents, what have you.
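The agent pattern described above, a model driving a sandboxed file system and terminal in a loop, can be sketched roughly like this. To be clear, this is a generic illustration, not Moonshot's actual OK Computer implementation; `run_agent`, the tool names, and the action format are all hypothetical.

```python
import subprocess
import tempfile
from pathlib import Path

def run_agent(plan_next_step, goal: str, max_steps: int = 10) -> str:
    """Minimal agent loop: the model picks a tool, the sandbox executes it,
    and the result is fed back into the model's context until it says done."""
    workdir = Path(tempfile.mkdtemp())      # throwaway "virtual computer"
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = plan_next_step(history)    # the model's next decision
        if action["tool"] == "done":
            return action["answer"]
        if action["tool"] == "write_file":
            (workdir / action["path"]).write_text(action["content"])
            result = f"wrote {action['path']}"
        elif action["tool"] == "shell":
            proc = subprocess.run(action["cmd"], shell=True, cwd=workdir,
                                  capture_output=True, text=True, timeout=30)
            result = proc.stdout.strip() or proc.stderr.strip()
        else:
            result = f"unknown tool: {action['tool']}"
        history.append(f"{action['tool']} -> {result}")
    return "stopped: step limit reached"
```

In a real product, `plan_next_step` would be an LLM call that reads the history and emits the next action; here any callable with that shape works, which is also how you would unit-test the loop with a scripted planner.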
This gives us a glimpse of "do the work" assistants, but critically, it also pushes the edges of AI agent autonomy, and it's doing so from a Chinese perspective. What I mean by that is that it reminds us the Chinese model-development evolution continues to push the edges. And if you're deep in the weeds technically, Kimi K2 also set a new bar on training efficiency; you can look up the Muon optimizer if you want to go deeper, and you'll see a lot about how the K2 model was trained and how efficient they were at training it. That's just a little nerd sidebar. But the larger story is that the Chinese companies innovating on AI are pushing the edges forward, especially around open source, and they are very intentionally pushing straight to agentic AI. It's a leapfrog motion: they're not satisfied with LLMs; they're going to push into advanced AI agent capabilities. And while US and European enterprises may have enough data concerns to say, "You know what, we don't trust a Chinese-hosted Kimi K2 to run OK Computer agent mode for us," individuals will not have those constraints, and if the product experience is smooth and useful, individuals are going to start using this. So I look at OK Computer both as a push on agent mode for the industry as a whole (you'll see US companies start to follow suit) and as a directly useful tool for individuals, and potentially some small businesses, that aren't too worried about the data side right now.

Story number six: Adobe is buying instead of building. Adobe has taken a beating recently in the public markets because their AI products are widely perceived to be missing the mark.
And so they announced comprehensive third-party model integration across Firefly, embedding the Luma Ray 3 model for video and adding many more options for model support from Google, OpenAI, Ideogram, and others. In other words, Adobe is deliberately importing best-in-class capabilities rather than continuing to compete with proprietary Firefly models. This feels a lot like the Meta story: Adobe is basically admitting their own AI strategy is dead in the water and they need to bring in other models.

Now, the implication is that Adobe is going to eat margin to do that. Notion very publicly said during the Notion 3.0 rollout, which also happened this week, that they were eating their own margin: rolling out AI was costing them about 10 percentage points of gross margin. They went from 90% gross margins to 80% gross margins as a business to deliver AI functionality (put differently, their cost of delivering the product roughly doubled, from 10% of revenue to 20%), but they felt it was worth it for the long-term value of the business. Adobe hasn't revealed their gross margins, of course, but I suspect that's the kind of impact they'll see when they bring in these third-party services, start paying for them, and essentially have the brains of Adobe Firefly and other tools built by other people. So this is a story where even traditional SaaS companies are having to let other models in the door, because it's just too expensive to compete on the model side directly. Very interesting implications here for other businesses that need intelligence.

Finally, story number seven: California is advancing AI safety legislation.
So, California's legislature passed Senate Bill 53, which requires major AI developers to publicly disclose safety and security protocols and establishes whistleblower protections and incident reporting. It also creates CalCompute, a public computing cluster for AI research. The reason this matters is that the major model makers are all based in California, and California is a large and influential state in the federal system. So I would expect we will see actually implemented AI safety and transparency requirements at Anthropic and OpenAI, both based in San Francisco, and at Google. And we are going to start to see public disclosure and discussion around what that means, and probably implications for other states that may be looking at AI safety. California is setting the bar for the nation as a whole here, but it is also directly impacting the daily operations of the biggest model makers on the planet. So if we're looking ahead at national AI safety legislation, California is showing a way to go, a direction to head in. But it's also going to require companies to change their behavior immediately. Now, OpenAI has said they're neutral on this; they don't have an opinion. Anthropic has endorsed the bill publicly, and Google hasn't really given a signal that I've been able to find. Regardless, once this is signed, these model makers will have to comply. They will have to show they have safety and security protocols in line with Senate Bill 53, they will have to show they have whistleblower protections, and there will be new compliance loads. This has implications for companies purchasing from those model makers, too.
And that's not been figured out yet, so stay tuned for more there. But I wanted to call out that this is a landmark in US AI safety legislation and one to watch. As I noted, you can get more implications on each of these stories from the prompt I prepared for you in the post. Have a great week. Cheers.