Gemini 3 Redefines AI Workflow Paradigm

Key Points

  • The strategic focus should shift from “which frontier model is best” to “which model best fits each specific workflow,” with Gemini 3 excelling at tasks like video and massive context but not necessarily at persuasive writing or everyday chat.
  • Organizations need a dedicated routing layer to direct tasks to the right model; a simple heuristic is to use Gemini 3 for “see/do” tasks, Claude/ChatGPT for “write/talk” tasks, and smaller flash models for cheap bulk work.
  • Gemini 3 eliminates former “AI silent zones” by making previously opaque surfaces—raw UI dashboards, long messy videos, massive codebases with screenshots—legible and processable by AI.
  • This new legibility unlocks novel workflows that go beyond better chat, such as UI debugging, design QA, admin‑panel automation, and video research, expanding AI’s practical reach.
  • For roles like product managers, engineers, and marketers, the implication is to re‑evaluate job processes and tool stacks to leverage the appropriate model for each task rather than committing to a single‑provider solution.
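The see/do vs. write/talk routing heuristic in the key points above can be sketched as a tiny dispatch layer. This is a minimal illustration, not a production router: the model labels and keyword buckets are placeholder assumptions, not real API identifiers.

```python
from enum import Enum, auto

class TaskKind(Enum):
    SEE_OR_DO = auto()      # screenshots, video, UI automation
    WRITE_OR_TALK = auto()  # persuasive writing, chat, outreach
    CHEAP_BULK = auto()     # high-volume, low-stakes batch work

# Placeholder model labels -- not real API model identifiers.
ROUTING_TABLE = {
    TaskKind.SEE_OR_DO: "gemini-3",
    TaskKind.WRITE_OR_TALK: "claude-or-chatgpt",
    TaskKind.CHEAP_BULK: "small-flash-model",
}

# Illustrative keyword buckets; a real router would use richer signals.
_SEE_DO = ("video", "screenshot", "dashboard", "automate")
_WRITE_TALK = ("email", "draft", "chat", "outreach", "write")

def classify(task: str) -> TaskKind:
    """Very rough keyword classifier for the see/do vs. write/talk heuristic."""
    lowered = task.lower()
    if any(word in lowered for word in _SEE_DO):
        return TaskKind.SEE_OR_DO
    if any(word in lowered for word in _WRITE_TALK):
        return TaskKind.WRITE_OR_TALK
    return TaskKind.CHEAP_BULK

def route(task: str) -> str:
    """Return the model family this task should be sent to."""
    return ROUTING_TABLE[classify(task)]
```

The point of making this a table rather than scattered if-statements is the one the talk makes: routing becomes an owned, inspectable artifact that one team can maintain and evolve.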

# Gemini 3 Redefines AI Workflow Paradigm

**Source:** [https://www.youtube.com/watch?v=_Z-YppWti1E](https://www.youtube.com/watch?v=_Z-YppWti1E)
**Duration:** 00:22:00

## Sections

- [00:00:00](https://www.youtube.com/watch?v=_Z-YppWti1E&t=0s) **Beyond One Model Strategy** - The speaker argues that Gemini 3's dominance shifts strategic focus from choosing a single frontier model to implementing a routing layer that assigns each task to the most appropriate AI model based on its strengths.
- [00:03:25](https://www.youtube.com/watch?v=_Z-YppWti1E&t=205s) **From Keystrokes to Specification** - The speaker argues that with advanced models like Gemini 3 and the Antigravity code editor, developers now spend most of their effort defining and reviewing AI-generated code rather than typing it, turning the workflow into a collaborative specification process that reshapes how we allocate attention in software development.
- [00:07:41](https://www.youtube.com/watch?v=_Z-YppWti1E&t=461s) **Visible Safety and AI Ops** - The speaker argues that safety must be built into user interfaces with clear guardrails, while AI operations is evolving into a dedicated team responsible for maintaining prompts, tools, and artifacts across multiple model platforms.
- [00:11:00](https://www.youtube.com/watch?v=_Z-YppWti1E&t=660s) **Gemini 3 Use Cases by Role** - The speaker outlines how Gemini 3 can aid product managers and marketers with video-based analysis and artifact handling, while noting its limits compared to Claude for persuasive writing tasks.
- [00:15:15](https://www.youtube.com/watch?v=_Z-YppWti1E&t=915s) **Choosing the Right Coding AI** - The speaker advises developers to trial various code-generation models (like Gemini 3, Codex, and Claude Code) to find a personal fit for bug fixing, QA, and full-service reasoning, emphasizing testing, token usage, and evolving supervised-assistant workflows.
- [00:18:34](https://www.youtube.com/watch?v=_Z-YppWti1E&t=1114s) **Gemini 3: AI for Video & Automation** - The speaker emphasizes that Gemini 3 isn't a SQL replacement but shines in assisting video editing, generating code snippets, and powering multimodal agents that automate desktop and admin tasks.

## Full Transcript
Gemini 3 came out and it is the number one model in the world. What does that mean for all of us, and what does that mean for particular jobs like product manager, engineer, or marketer? I'm going to get into both of those in this video, and we're going to start with the overall takeaways.

Number one: the unit of strategy is no longer the model. You should not be asking which frontier model is best. I realize that's ironic because we're talking about Gemini 3 as the number one model, but really what you should take away is that Gemini 3 makes it unavoidable to ask which model is best for which workflow, because it is clearly a lot better at some things, like video, screens, and handling huge context, and not as obviously better at others, like persuasive writing or everyday chat. The implication is that if you're still arguing "we're an OpenAI shop, that's all we do" or "we're an Anthropic shop, that's all we do," you're missing the plot. Someone in your org needs to own the routing layer. And I want to suggest a very cheap, easy, usefully incorrect abstraction. Every abstraction is incorrect; some of them are useful, and I think this one is useful. If it is a see-or-do task, think about Gemini 3. If it is a write-or-talk task, think about Claude and ChatGPT. If it is a cheap bulk task, go with the small flash models. Is that going to work for every single thing? Absolutely not. Is it a nice, handy abstraction that you can work with? Yes, it fits on a flash card.

Takeaway number two: Gemini 3 turns AI silent zones into AI-native territory. There are places where AI has been silent in the past. That's no longer true. Let me give you a few examples.
Before Gemini 3, a lot of high-value surfaces that we computed with were effectively dark to AI. Raw user interfaces and dashboards: we didn't always get great results coding them, designing them, or figuring out what they said, that is, analyzing them. Long, messy video was definitely dark to LLMs. So were giant piles of code with docs and screenshots; we are making progress there, and there are definitely examples I've seen with Claude Code and Codex, but it's not an easy space for most AIs to operate in. You needed humans to digest some of that long, messy context and summarize it before an AI could do anything useful. Gemini 3's real unlock is that those surfaces are starting to become legible. Gemini 3 can read the UI directly instead of guessing from the logs. It can watch footage instead of just reading transcripts. It can digest much bigger chunks of everything related to a system at once. So the most interesting new workflows won't be better chat; they'll be new places you can use AI that you couldn't before: UI debugging, design QA, maybe admin panel automation of some sort, maybe video research or user testing. A good question to ask each of your teams right now, or to ask yourself, is: where do I have a lot of eyes-on-the-glass work today? Gemini is probably more relevant there.

Takeaway number three: the hard skill now is specification and review, not figuring out the keystrokes. Models are getting better and better at doing, and the bottleneck is starting to shift toward telling them what to do and deciding whether that's an acceptable choice.
I think that Gemini 3 plus the new Antigravity code editor makes this very literal, because in Antigravity agents propose terminal commands, code diffs, and browser actions, and you approve or reject their artifacts: their plans, their patches, their refactor proposals. That's not really prompt engineering in the sense that it gets made fun of. It's much closer to working with a colleague to write a runbook, design a spec, or produce fast, high-quality code. I'm not here to tell you that this is the only way to develop. One thing I know, having worked with engineers for a couple of decades, is that every engineer has a stack that feels ergonomic to them. Some are finding Antigravity really compelling and easy. Others prefer to stick with Cursor, Codex, or Claude Code, all viable AI options. The thing I want you to know, regardless of which you prefer, is that Antigravity is shifting our sense of how we pay attention in coding, in ways that we all need to understand even if you're not a coder. What Antigravity does is dare you to focus on where you need to intervene with an agent that's building something, rather than focusing you on the code side of things. We have seen glimpses of this in the direction Cursor is evolving, but Antigravity really leans in. I think this implies that a lot of the great work we do going forward is going to look weirdly similar for great product managers and great tech leads, because it's going to be work done by people who can describe what they want built really clearly and who can smell a bad artifact really quickly. That is absolutely a vibe thing, but anyone who has worked around code will tell you it's true.
And so really, you should evaluate how you want to work with Gemini less in terms of its ability to purely write code and more in terms of your ability to articulate intent, see useful results, and quickly refine and review. Increasingly, the models will get there on the code that needs to be written, but you need to be the one who is given space to review, refine, pay attention, and decide what's acceptable. The models and interfaces that make it easier for you to get your hands on the work and decide what's acceptable are the ones that are going to win. I think Antigravity is an interesting development in the AI landscape for exactly that reason, because that's where Google is focusing you.

Takeaway number four: context abundance is going to change where you pay your cognitive taxes. A million-token context window and very strong retrieval do not mean you can dump in your knowledge base and go to sleep. They do shift where you spend your effort. You spend a lot less time curating perfect little packets of context, but a lot more time deciding what shape of question is worth asking and how you want the answer structured. Gemini is now good enough that the marginal return on another hour of cleaning the context window is often lower than the marginal return on a better question and a better output format. The implication is pretty stark: you need to start thinking in terms of query design, not just data preparation. As an example, and I know not every repo is this small: given that we can throw in a chunk of the repo and docs, what is the most valuable question to ask as an engineer, or what structured artifact do we want back? Do we want a diff? Do we want a table?
Do we want a synthesis of the data in some fashion? Do we want a solid six-pager? What is the output? Teams that are excellent at asking sharp questions and defining outputs are going to run ahead of teams that obsess over shaving a little bit of noise out of the context window.

Takeaway number five is that safety is becoming a visible part of the user experience. This is not a policy PDF anymore. Antigravity is designed around the idea that safety guardrails need to be visible: draft-for-approval flows, the clear separation between suggestion and execution, the ability to review the agents' plans, the ability to view diffs really cleanly. Essentially, Google is putting their money where their mouth is and saying they want the design of our surfaces to reflect the need for humans to be deeply engaged with what models should and shouldn't do. I appreciate that, because I think we need a lot more work in that direction. We need more user interfaces that help us put our hands on what the models are doing.

Takeaway number six is for us and for our teams. AI operations is becoming a full-fledged headcount function; it is not a hobby job. Once you accept the idea that some tasks go to Gemini, some to Claude, and some to ChatGPT, who maintains that? Who maintains the prompts? Who maintains the tools and the artifacts? Who teaches teams how to work with these different layers? This is part software engineering, part product management, part platform team. We're still evolving what the role means, but fundamentally, if you think one staff engineer who's a champion on AI can just do this, you're probably underinvested.
One very reasonable 2025 move is to explicitly charter an AI platform group and give them a mandate around how they handle routing, internal education, and shared prompts. Give them a charter big enough that they can evolve the impact of AI across the organization, because these models are going to keep getting better in specific areas, and you need a team that champions moving workflows where it makes sense. I'm going to get into the job functions in a second and give you a few hints as to where I see that happening with Gemini 3.

Takeaway number seven: your intuitions about this model, and I will go so far as to say almost any model from here on out, are almost certainly incorrect if you only test chat. If your lived experience with these models is biased toward writing emails, "just answer me this question," very light coding, or "just summarize this doc quickly," those are exactly the areas where Gemini 3's advantage is the least visible. So if you poke around in chat for an hour and conclude it's not that different, you're not wrong; you're just looking in the wrong place. If that's you, I would suggest you don't judge Gemini 3 on your first ten prompts. Instead, ask yourself: does this give me the ability to imagine accelerating a piece of work that used to be off limits? I'm trying to go through these takeaways in a way that opens your imagination to some possibilities.

Okay, now it's time to get into takeaways for job families. We're going to go job family by job family, and I'm going to lay out where I think Gemini 3 has an opportunity to help, maybe where there's some nuance, and maybe where Claude or ChatGPT should still be on the list.
For product managers, you can now treat UX and video artifacts as first-class inputs, not homework you have to watch to get into the AI. This is a big deal because it simplifies a lot of early discovery and user-experience work. You can ask Gemini 3 directly for opinions on these artifacts in a way you couldn't before. You can ask it for competitive analysis across raw input data or an app video recording. Now, I'm not here to say Gemini 3 is the only thing you should be using. For narrative PRD documents and emails where you want maximal clarity, I would still stick with Claude, particularly Sonnet 4.5. I don't find that Gemini 3's persuasive writing is there yet.

For marketers, a lot of really interesting workflows open up as well, similarly in the video and visual space. You could ask things like, "What patterns do you see in our winning TikToks? What's visually different between our high click-through-rate and our low click-through-rate ads?" And you're going to get really structured takes that you just would not get from AI before. Post-hoc creative analysis is really interesting, and you have the chance to do creative asset audits you didn't have before. But again, I don't think it's going to be as easy to get brand voice, especially punchy brand voice, out of Gemini 3.

On the customer support and ops side, think about tickets with screenshots, not just tickets as strings of text. You can actually cluster these issues by what's broken on the screen: you can take a screenshot, look at tickets, and Gemini 3 can put that together. Again, AI couldn't do that before.
And so if you want to do something around an automated triage workflow, if you want to tag parts of the UI to places that are broken in your customer support, or if you want to draft actions on admin panels and play around with agentic workflows, those are all things Gemini 3 would be interesting to explore. What stays on Claude or ChatGPT is the text piece; I would actually lean on Claude for that. ChatGPT, even after 5.1, is not as easy to work with.

For sales, you want to think about call reviews. How can you think about slides, faces, and body language in a more structured way, not just feed AI transcripts? How do you start to do really heavy lifting with Gemini 3 on RFP compliance, contract comparisons, or video-call analytics? You can say things like, "Summarize this 60-minute discovery call into a white paper for my next meeting." Stuff like that is becoming possible in a way it just wasn't before. What stays in Claude or ChatGPT? Cold outreach, follow-ups, LinkedIn messages. The conversational style layer is, again, not really there. Are you seeing a pattern?

For executives and leadership, there are some really interesting takeaways. You can ask, "Where is there a difference between what the deck is telling me and what the raw KPI tables are telling me?" I know a lot of execs who would love that one. And by the way, if you are presenting, you should assume your exec will now be asking that. You can ask, "How do I digest a large mixed packet, like a board deck with annexes, screenshots, and a whole set of data tables? How can I make this digestible as a single object with really good synthesis?" Gemini 3 is good at that. Gemini 3 also makes presentations.
I find that the visual style is quite creative. The narrative piece, again, is not quite where Claude is.

Front-end engineers: UI state and visual bugs are now something the model can see. It's a massive breakthrough. The model is also much easier to push out of that blue-purple convergence it's been stuck in. So visual debugging is easier, design QA is easier, accessibility QA is easier. Now, for some of the simple bug fixes and simple tweaks, it doesn't really matter what model you're using. And if you're deciding on your overall day-to-day model on front end, I think you're going to have to start coding up parallel projects in Gemini 3, Codex, and Claude Code and see where the model feels ergonomic for you. I'll say it again: with engineers, the fit of the model is a personal thing. As much as I can say the model is better at seeing bugs and you should use it for QA, your daily coding driver depends in part on your degree of comfort with how autonomous the model is, how often it checks in, and how much code and how many tokens it burns along the way. So you need to test it and decide if it's worth switching from Codex or Claude Code. All I will tell you is that you're probably incorrect if you're unwilling to test. I think it is worth a shot.

Backend and platform engineers: you can now productively ask, "Here's the whole service: the code, the configs, the runbooks, the diagrams. Help me reason about this." And you don't have to elaborately shard the context window unless it's very large. Terminal agents, and the way you engage with assistants generally, are beginning to evolve for you.
You can start to actually have an assistant that you supervise. I felt that when I started to play with Antigravity, and we're starting to get that a bit with Codex as well and with Claude Code. So this is very much something where we should expect the model makers to continue to push. The thing I will call out is that while it's handy to have the large context window, it's worth asking yourself whether you need it for a particular debugging task. I have less settled opinions here on backend debugging. It may well be that Codex is still very strong at debugging complex codebases; for lack of a better term, there's a special smell to it, and it's solid. Just as, for lack of a better term, there's a special smell to Claude Code and the way it can work within an ecosystem of skills and MCP and write good code. Those are both strengths, and that's why I keep coming back to ergonomics. You'd be wrong not to test it. It's going to be a matter of fit for you on the coding side.

For designers, this is absolutely revolutionary. The model can critique, it can compare, it can spot inconsistencies in UIs, it can see. You can feed it screens. If you are not using Gemini 3 as a designer, you are absolutely missing out. It's big. This model is also going to help you translate visual intent into code-ready descriptions for engineers. Being able to say what a layout is, technically, is something Gemini 3 can really help with, because it can see the design.
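The "here's the whole service" approach mentioned for backend and platform engineers can be sketched as a small context-assembly helper. This is a hypothetical illustration: the file extensions and the rough 4-characters-per-token estimate are assumptions, not part of any model's API.

```python
from pathlib import Path

def assemble_context(root: str,
                     exts: tuple = (".py", ".md", ".yaml", ".yml"),
                     budget_tokens: int = 1_000_000) -> str:
    """Concatenate a service's files into one prompt blob for a
    long-context model, stopping at a rough token budget.

    Assumes ~4 characters per token; both that estimate and the
    extension list are illustrative, not fixed rules.
    """
    budget_chars = budget_tokens * 4
    parts: list[str] = []
    used = 0
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        text = path.read_text(errors="ignore")
        if used + len(text) > budget_chars:
            break  # over budget: stop rather than truncate mid-file
        parts.append(f"--- {path.relative_to(root)} ---\n{text}")
        used += len(text)
    return "\n\n".join(parts)
```

The design choice here mirrors the talk's point about cognitive taxes: rather than hand-curating perfect context packets, you spend your effort on the question you attach to this blob and the output format you ask for.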
For data analysts, the boundary between data in your dashboard and data in your documents keeps getting thinner, because you can treat screenshots, PDFs, and CSVs as one blob of evidence and ask for conclusions, all in one big conversation. A quarterly or multi-report analysis might stay inside one context window rather than being spread across a dozen chats. Having that exploratory analysis is really helpful. I would not use it as a substitute for SQL; I feel like I have to say that, and I hope that's obvious. If you want it to draft SQL for you, it will, but so will ChatGPT and Claude. If you want it to write pandas code for you, it will, but so will ChatGPT and so will Claude. At that point it's just about tight code feedback loops, and it's very table stakes.

If you are in the video space, working with this model is required. It can help suggest how long footage can be turned into candidate timelines that you then refine in a cut. It can help with pacing, with rough cuts, with "show me the good hooks in this recording." There are all kinds of things it can help with, and we are just scratching the surface. Video is one of the places I'm most bullish on for Gemini 3.

AI enthusiasts and vibe coders: you get to play with agents that use an editor, a terminal, and a browser together without building a specific harness to do that. That is by itself a big deal. It means we're going to start to see small admin tasks and small personal desktop-automation tasks get interesting, we're going to see frameworks for that, and there's going to be a whole lot of build around that.
And so Gemini 3 fits in a world where you're tinkering with environments like Antigravity. It fits in a world where you're building proof-of-concept workflows. If you're looking for a polished website you can launch quickly with a minimum of fuss, lovable.dev is great. If you're looking to do a comprehensive review of an ecosystem with markdown files, touching all the files on your computer, and you have Claude Code all set up to do that, Gemini 3 is going to have a high bar to climb. It may be more intelligent, but it's a brain in a box, and you already have the hooks from MCP and the tools you need with Claude Code, and you don't want to touch it. Fair. I would say try it and see what you think. If you're using Codex, it may have the power you want from a debugging perspective, and you may not feel that you miss the planning, review, and agentic thinking that Antigravity lets you do. Try it. You'll see. I'm not saying you'll like it; I'm not saying you'll hate it. I think it's worth a try. This gets back to the engineering side, where people get comfortable with Claude Code or with Codex, and that comfort in and of itself drives productivity. So I want to be careful, but I want to suggest that you should at least give Gemini 3 a try, a fair shake, and see how it does.

If we zoom out across all of these job families, I think we see some pretty consistent patterns. Gemini 3 is for the work that you do with your eyes and your patience. Claude or ChatGPT tend to be for work that you do with your voice and your keyboard.
And so one of the simpler questions I would encourage you to ask is: where am I stuck watching, scrolling, clicking, and reading for hours when I just need to understand what's going on? Those are great Gemini 3 candidates.

Summing it all up: beyond the benchmarks, Gemini 3 is a fascinating push for all of us to start thinking intentionally about where our workflows are focused on seeing and doing versus where they are focused on talking and writing. I think we're going to see a ton of really interesting use cases explode out of this. I think Antigravity is super exciting. I think the video application is exciting. We're just at the beginning of seeing what this model can do.