Learning Library

Notion AI: Custom Agent Automation

Key Points

  • Notion just launched a new AI feature that lets users build “custom AI agents” by linking Notion databases with external tools, effectively turning the platform into an automation hub.
  • The video outlines three parts: an overview of the release, live notes on what works and doesn’t (including prompting tips), and concrete demos such as an interview coach, turning meeting notes into product requirement docs/backlogs, and a prompt‑evaluation harness.
  • Notion markets these agents as “AI‑powered agents across your Notion portfolio” that can perform multi‑step autonomous work for up to about 20 minutes (the speaker saw 5–10 minutes in testing), while also adding connectors for Google Drive, Gmail, Linear, GitHub, and more.
  • A key use‑case highlighted is creating custom agents that act like teammates—e.g., automatically turning sales contracts into technical requirements for engineering—showcasing the tool’s flexibility for cross‑department workflows.
  • Notion’s broader value proposition is that by centralizing data and automation within its platform, users can cut costs on separate tools, positioning the AI agents as a money‑saving productivity boost.

**Source:** [https://www.youtube.com/watch?v=BP-N7xjz-vM](https://www.youtube.com/watch?v=BP-N7xjz-vM) · **Duration:** 00:22:13

Sections

  • [00:00:00](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=0s) **Notion AI: Easy Custom Agents** - The speaker introduces Notion's newly released AI features, highlighting its seamless database integration and custom connectors that let users build flexible AI agents for tasks such as interview coaching, converting meeting notes into product backlogs, and evaluating prompts.
  • [00:03:31](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=211s) **Notion AI Automates CRM with MCP** - The speaker explains how Notion uses Model Context Protocol (MCP) connectors to automatically pull Gmail responses and meeting-transcript notes into its CRM, updating prospect status in the sales pipeline, and hints at upcoming releases that will let businesses add custom MCP connectors for deeper data integration.
  • [00:07:18](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=438s) **Tables First, Quality Checks** - The speaker advocates prioritizing table-based data over raw text for clarity and efficiency, and stresses the importance of explicit AI quality checks to verify task-completion criteria.
  • [00:10:49](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=649s) **Undo Button, Run Logs, Prompt Discipline** - The speaker emphasizes that an undo function, detailed run-log outputs, and strict, plain-language prompting dramatically improve agent reliability and auditability while reducing hallucination.
  • [00:14:00](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=840s) **Structured Table Generation Prompt** - The speaker outlines a detailed prompt that instructs an AI to create and manage structured table entries, perform quality checks (including length, content, banned words, and versioning), handle duplicates, and log changes.
  • [00:19:42](https://www.youtube.com/watch?v=BP-N7xjz-vM&t=1182s) **Notion as Prompt Evaluation Hub** - The speaker shows how to use Notion to create a database that logs, versions, and scores prompts for various AI tools, enabling systematic tracking, rubric-based evaluation, and self-improvement of prompts.

Full Transcript
I'm here to tell you about the very easiest custom AI agent automation software out there today. Not a lot of people are talking about it as if that's what it is, but that's what it really is: Notion AI, which just released this week. The reason I'm calling it custom AI agents is the way Notion has married together databases, custom connectors to the other tools you use throughout your daily life, and of course the power of AI.

We're going to do this in a few sections in this video. Number one, I want to tell you what Notion released. Number two, I want to give you my live notes from actually using it, including things that don't work well and things that do, plus tips for prompting. And number three, I want to show you what Notion can do with specific examples. We're going to get into Notion as an interview coach, Notion turning your meeting notes into a product requirements document and a backlog, and Notion helping you with your prompts as a prompt evaluation harness. There's a lot of really cool stuff in here, and it underscores how flexible this tool is, which is why I'm calling it a custom AI agent builder, even though, spoiler alert, Notion did not call it that.

Okay, so what's in the box? What did Notion claim they released? What Notion called this is really an AI-powered agentic future. They talked about it as AI agents across your Notion portfolio, rather than AI agents powering your whole workflow, and I think there's a really big difference there. What they want you to see is that Notion's AI agents can perform autonomous work across multiple steps. They claim up to 20 minutes.
When I was testing it, I got five or ten minutes pretty easily. And they are adding tools to make that more useful, so it's not just Notion: they're adding other connectors as well, including Google Drive, Gmail, Linear, the GitHub tool stack, and a bunch of others. It's like they're trying to add as many tools as they can. They also say that very shortly they're going to give you customizable agents that act like teammates and take on specific workflows for specific projects across departments. So imagine you always want to take a contract from sales and turn it into technical requirements for your engineering team; they're trying to build custom agents to solve for that use case. And of course Notion benefits, because you're pulling more of your data into Notion.

In fact, this is a really interesting value proposition, because when you hit their landing page, what they say is that Notion saves you money. You want to spend your money here because Notion saves you money on a bunch of other things. That has been one of their larger value props in the age of AI, and I think it's going to resonate, because everybody knows you're not going to pay 100 bucks here, 100 bucks there, and 200 bucks somewhere else just for AI. You want a single home, and Notion is trying to be that home by making your data at home in Notion with AI. We shall see. But I want to show you some use cases that make it pretty tempting.

One of the things they have enabled AI to do that we don't easily see elsewhere is granular database row permissions. Notion now has page-level permissions for databases, so you can actually have Notion AI make granular database controls and changes per row.
So, for example, if you're doing cold outreach to a contact for a B2B business, you can have Notion look at the response in Gmail, look at the meeting notes in a transcript, and then come back and update a database row in your CRM in Notion. That helps you understand where that prospect is in their journey, and maybe you move them along in the sales pipeline. That's the kind of thing they're envisioning, and it does work well for that.

They are also strongly advocating a universe of Model Context Protocol connectors. Remember how I've talked in the past about MCP as something Anthropic seeded into the ecosystem that engineers have since picked up and used across all of AI? That's true at Notion as well. Notion is using MCP servers, bragging about it, and implying quick scale as a result. They want to add more MCP connectors, and I would strongly expect a 3.1 or 3.2 release to allow businesses to add custom connectors with their own MCPs to pull in yet more data, because that's very much what Notion wants. If you centralize more data here, you'll be stickier. You'll use their AI. You'll pay. You'll stay.

Let's get into the actual experience I had, in part two of this video. What did it actually look like? Did it actually work? Spoiler alert: it worked, but it is more prompt-dependent than I think you or I would want. I tried multiple ways of prompting the system. I tried the more casual route, a one- or two-line pass like "just make a database for this." It did not work as well. The system's ability to understand what I wanted seems to be somewhat dependent on really strongly typed, really strongly structured prompting.
I came up with eight prompting rules as I ran through these experiments that I want to share with you here, because I think they are highly correlated with successful Notion prompting. And I'll go a little farther: even if you're not in Notion, these are useful prompting tips for a future where we are building digital artifacts with AI. I talked in a previous post about this idea that work is changing. We are going from a world of work where we hand-wrote our artifacts, like docs, to a world where artifacts are more interactive: we can produce something and interact with it like an applet, ask people to contribute, or have automatic actions taken. Notion is really at the forefront of this trend, especially with the idea that agents can take action against a page and update database rows as they go, and I'll show you how that works. But to do that, you have to apply these prompting principles.

Number one: be really clear about where you want this tool to work. You need to say "work only on this page and its subpages," or something like that, because you don't want Notion to be broadly scoped and surprise you elsewhere. In many people's Notion wikis, that's the equivalent of an unwanted code change. You don't want that, so specify where it works.

Number two: tell it what done looks like, and ask for a receipt at the end. It will listen. Say, "When finished, please add a line at the bottom of the page: either OK, or, if you're blocked, Blocked and a reason why." This lets you know right away, in the page text itself, what it did and why. Getting receipts helps you be more auditable and track what actually happened.
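To make the receipt idea concrete, here is a small sketch of how you might machine-check that status line in an exported page. This is my own illustration of the pattern, not anything Notion provides; the exact "OK" / "Blocked:" wording is an assumption matching the prompt above:

```python
# Hypothetical helper: scan exported page text for the "receipt" line
# the agent was asked to append ("OK" or "Blocked: <reason>").
def read_receipt(page_text: str) -> dict:
    """Return the run status recorded at the bottom of a page."""
    for line in reversed(page_text.strip().splitlines()):
        line = line.strip()
        if line == "OK":
            return {"status": "ok", "reason": None}
        if line.startswith("Blocked:"):
            return {"status": "blocked",
                    "reason": line[len("Blocked:"):].strip()}
    # No receipt at all is itself a signal: the run may never have finished.
    return {"status": "unknown", "reason": None}
```

A check like this turns the receipt from something you eyeball into something you can audit across many runs.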
I've actually developed a prompt that shows audit logs of previous runs right on the page itself, which I think is important if you're starting to make serious changes, and I'll show you how that works.

Number three: think in terms of tables and databases rather than text. If you're creating things in databases in Notion, you're leaning into Notion's strengths, and into the strengths of a lot of compute; frankly, a lot of other systems depend on databases too. Tables are much easier to sort, to operate against, to review, to fix later, to adjust, whereas raw text can be difficult to format and engage with. I tried both with Notion, and I really felt a tables-first approach was much more useful for the kind of tasks I was doing. I think this is something that's going to change the way artifacts are formatted. I'm used to a world out of Amazon with product requirements documents that were narratives; yes, you have some tables, but you also have a lot of narrative about the customer experience. We are moving to a world that is video-heavy and that also has tables. It's a really interesting change for someone who came up before all of that happened, in the traditional six-pager era. But here we are: make tables first, text second.

Principle number four: use quality checks. One of the things you will really thank yourself for is being explicit with the agent about the conditions under which it can mark a task accomplished or done. You can have it check the length of a particular piece of text: is it within 180 characters? You can have it check that it includes every piece of data that's relevant for the task.
So if you're writing a cover letter and you tell Notion to do that (which, by the way, it can), have it include the company and the role. That seems like a reasonable requirement. If you're writing a justification for why you should work somewhere, make sure it includes at least one number from your resume; you can give it the resume and it will do that. The specificity you can apply here around quality checks is something people forget about, but it's a critical part of learning to work with agents. They need that degree of clarity to know they did a good job for you. Otherwise they'll just guess, and they may hallucinate, or do nothing at all, or default to token efficiency and do less than expected. So use quality checks, and also be really clear about what the model should do if it doesn't have a required item. You can write, "If info is missing, please insert TK CONFIRM," which is a traditional editorial convention, or pick whatever you want: you can say "insert [please check this] and ask me for it," and it will do that. This helps the model not just run on vibes but actually get to a pass/fail mindset.

Principle number five: don't create duplicates. If something similar already exists, you want the model to update it instead of creating a new copy and dirtying up your context window, because you're starting to think of Notion, and really of agentic tools generally, as context windows themselves. Even if the AI can't absorb all of it in one context window, Notion itself is becoming a place you always have in mind because of the way AI operates on it. And if it's a context window, it needs to be clean, which means you don't want duplicates.
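The pass/fail quality checks described under principle four can be sketched in code. This is my own illustration of the idea, not anything Notion exposes; the 180-character limit, required fields, and banned-word list are hypothetical stand-ins for whatever your task needs:

```python
# Illustrative pass/fail quality checks for a generated draft.
# The limit, required fields, and banned words are made-up examples.
BANNED_WORDS = {"delve", "tapestry", "leverage"}

def quality_check(draft: str, company: str, role: str) -> list[str]:
    """Return a list of failed checks; an empty list means 'ready'."""
    failures = []
    if len(draft) > 180:
        failures.append("length: over 180 characters")
    if company not in draft or role not in draft:
        failures.append("missing company or role")
    if not any(ch.isdigit() for ch in draft):
        failures.append("no number cited from the resume")
    if any(word in draft.lower() for word in BANNED_WORDS):
        failures.append("contains a banned word")
    return failures
```

An agent given rules like these can set a row's status to Ready only when the failure list is empty, and to Needs Fix otherwise, which is exactly the done-versus-blocked clarity the speaker is arguing for.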
And so you can actually specify: when you touch a page, if you updated it, include a little table with Version and Last Run that describes your edit and the last time you touched it. This enables you to see what happened and how pages changed. It's one of the things that's going to be increasingly important as humans and agents work together in wiki-like environments.

Number six: create a run log. Think about each change you're making as if it's something that needs to be undone. One of the hallmarks of good agent architectures is the ability to hit undo, and I appreciate that Notion has put a literal undo button in the chat interface. I haven't seen that a lot, and I think people are going to appreciate it. But if you print a tiny run log, it will help you go farther, and you can stick that into the prompt. It extends the idea of a page-update note into a full run log that actually has links to what happened, warnings for when things go wrong, and so on. The more you invest on the validation and audit side, the more you can keep your context window happy. And by the way, if you think this is overkill: it's just not that hard if you have the right prompt. I did not have to suffer much creating the pages I made, because I was able to work with GPT-5 in thinking mode to create the prompts. I'm going to go through some of those conversations, the prompts, and what I got. You'll get the idea, and you'll see how I used these eight principles without too much blood, sweat, and tears on my part.

Principle number seven: write in really plain, strict language.
Say "create six questions" instead of "create a few questions." Say "use one metric" instead of "use a metric." Wherever you can, avoid open-ended phrases that encourage the model to hallucinate, unless you're happy with hallucination. As an example, "be inspiring" is not a helpful frame for a cover letter. Asking for a specific metric, asking it to include the name and the company, asking it to include a specific reason from your resume: this ensures the agent stays consistent. So write as plainly and strictly as you can. GPT-5 is actually very helpful in this; you're working with its default language preferences.

Principle number eight: please don't let it make things up. I know I said "unless you want hallucination," but really, wherever you can, you want to underline to the model: "If you cannot find a claim in the input data I give you, please insert 'check this,' do not mark it as done, and come back to me." You don't want to get into a situation where it makes up dirty data and then bases future actions on that dirty data.

So with that in mind, let's put it all together. Let's first look at a sample prompt, built with GPT-5, that helps us understand how Notion works, and then look at some Notion pages I was able to create with that kind of prompt. Okay, here we are. We have, obviously, the role: you are a Notion agent. You want to specify and limit the page and the subpages here, and you want to make it clear what done looks like; this is where I include showing receipts, and showing what blocked means and why. You also want to make sure you define the scope: I do not want it touching things that were created or edited a long time ago.
Again, I'm trying to keep this context window as clean as possible. Please do not overwrite unless you are updating to a newer version; the prompt is very precise about when overwrites happen and why. This is a table, and I give it the choice to create it or not: I could have this table already here, and if it's not here, I tell it to create it. I'm literally giving it the columns, showing it what I want and what the format of each column is. It's very specific: Name, which is a title; Notes, which are text; Version, which is a number; and so on. Then I get to the tasks the model should do. Find up to five items that need work (you can see we're starting to build a to-do list here). For each item, draft the content in the table fields. Run the quality checks (see below), and you tell it to look down. If all the checks pass, set the status to Ready. If the checks fail, set the status to Needs Fix. The quality checks then include: length is within limits; company and role, if relevant; at least one number; avoids banned words; and if info is missing, insert a marker.

By the way, "avoids banned words" will help you if you are writing for a so-called AI-detecting tool. There are tools now that companies use that claim to detect AI wording, and you can pretty easily game them if you come up with a list of words AI tends to use, like "delve," and make sure it doesn't use them. So there are ways you can even start to game the writing style here. Then the prompt gets into duplicates and versions: how to handle an update, what the recency requirement is (the last 10 minutes, in this case), and a version number. And finally: please add a row in the run log with the times, the items changed, and the warnings.
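Putting the eight principles together, the roughly 20-line prompt described in this walkthrough could be assembled along these lines. This is a paraphrased sketch of the structure, not the speaker's exact wording; the column names, limits, and phrasing are illustrative:

```python
# A paraphrased sketch of the ~20-line Notion agent prompt described
# in the walkthrough. Section contents are illustrative, not verbatim.
PROMPT = """\
Role: You are a Notion agent. Work only on this page and its subpages.
Done: When finished, add a line at the bottom: OK, or Blocked: <reason>.
Scope: Do not touch items created or edited more than 10 minutes ago.
Overwrites: Never overwrite unless you are writing a newer version.
Table: If the table below does not exist, create it with columns:
  Name (title), Notes (text), Version (number), Status (select).
Tasks:
  1. Find up to five items that need work.
  2. For each item, draft the content in the table fields.
  3. Run the quality checks below.
  4. If all checks pass, set Status to Ready; otherwise Needs Fix.
Quality checks: length within limits; company and role if relevant;
  at least one number; avoids banned words (e.g. "delve");
  if info is missing, insert TK CONFIRM and do not mark done.
Duplicates: If a similar row exists, update it; never create a copy.
Run log: Add a row with the time, items changed, and any warnings.
"""
```

Every one of the eight principles maps to one short section, which is why the whole thing stays compact and easy to rerun.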
As complicated as those eight principles sounded, I got all of that into about a 20-line prompt, and it's relatively easy to run. Let's see what it looks like in practice on a few actual pages.

All right, here we are, looking at the meeting-notes-to-PRD backlog. I constructed this in just a couple of minutes; as long as you have the data, you can do that too. You might be wondering how I did this, because it looks really complicated. There are multiple tables here, they scroll along, and you can see these tables have statuses that have now changed. You have PRDs, and if I click a PRD, I'm actually going to see a real page. So let me click that and look in at the PRD. Here we go: it gives me an acceptance test, a goal, a problem. It's actually writing the PRD as a table, which is really cool, because then it can do operations against individual components of that PRD. This is a great example of an artifact created to be agent-readable first and human-readable second, because it's very easy for me to ask, "What is the TL;DR of Notion agents' reliability?" and get a nice summary from Notion. So if I copy and paste this, let me just pull up the actual Notion chat, and you can see where I actually did this.
"Please give me a 20-word summary of this PRD." It will come back and work on that as we chat; it's looking at it, thinking about it, and there you go. That's what's in the box. "What is the highest-risk element of this build?" I can then start to inquire into how it works, and that's one of the powerful things here: you can actually ask it to exercise judgment, ask it to think things through. It's talking about schema drift here; you may or may not agree, but the point is you can have that conversation with it very easily. Now, if we go down here, it can automatically fill out the to-dos associated with these PRDs. These are all tied to particular PRDs: to-dos for the data team, the email team, the backend team, all automatically created. All I did to add to this was input some data from meetings, and you can even automate that, because Notion now has meeting notes you can take by audio. I'll share the original prompt I got for this, and you can see how to make it your own. But it illustrates to me that it's increasingly possible to move from a world where you consider these artifacts static to one where they are truly dynamic, where you can evaluate how the overall projects break out in a couple of minutes rather than a couple of days or even longer. I remember when I was doing PRD work as a product person; what I'm showing you here would take days. It took about two minutes, and I think that's really compelling.

Let me show you another cool example I found. This is the Notion interview coach.
It may not look like a lot, but it gives you a rubric and everything you need to run your own Notion interview scorecard. This is simulated data, but what you see here is an entire database that can take a notes transcript (say you interview yourself and practice your answers) with questions, feedback, and interviews, put it into a database like this, and actually run it against a rubric for clarity, impact, specificity, structure, whatever you want, then score it and deliver an overall scorecard of how you did. I think that's really cool. It was relatively easy to spin up, and there's a lot more we could do, but it shows you that you can build an entire system with multiple databases off a single prompt, then populate it with real data and get it going from there.

Let me show you one more, and I think it will round out the overall picture of what Notion can do. My goal here is not to give you the complete picture of Notion, because I don't think I can do that. I want to give you a sense of how I think Notion undersold this: it's actually an agentic artifacts factory. There's a lot more to it, and with proper prompting you can go a long way.

So let's do one more. This is a prompt and eval harness; it's more technical. You can look at particular experiments that were run, the date of those experiments, the inputs, the versioning essentially, and then the results, scored pass or fail against a rubric. And you can get into eval rules.
What must the prompt include? Did the prompt work or not? A run log on updates. At the end of the day, what you should take away from this is that you can do things as nerdy as self-improving your Notion prompts by using Notion AI. You can do things as detailed as tracking particular prompt structures for different tools, say your Perplexity prompts, your OpenAI prompts, your Claude prompts, in a thoughtful way in a database. I've been told by a lot of people that they're looking for great prompt tooling. There are lots of answers to that question depending on your workflow, but one of the answers is Notion: actually building out a prompt database in Notion (I'll share this prompt in the post) and starting to track and score how your prompts do. Now, if you're one of those people who says, "I don't care about prompts," that's fine. But you're probably going to get better results if you take your prompts this seriously and actually start to score them. And of course you can adjust the scoring to what you want; this is just a sample score, and you can see how it works.

So this is one of those things that I think is getting slept on, and I'm sharing it because I think we need to get past the assumption that work is a series of individual things we create with the help of AI. We need to move to the idea of an agent-powered work factory, where agents are processing through these artifacts, often autonomously, and it's our job to prepare the environment and shape the direction for those agents.
And that sounds super fancy, like a big-company thing, but Notion is making it possible for anybody, at a price of about 20 bucks a month. It's really very affordable to have this kind of capability, and I think that's really cool. So I hope you've enjoyed this breakdown of Notion, and I hope you see why I think it's so interesting. We are headed to a future where agents are powering artifacts. I hope these prompts that I'm going to share are helpful to you as well. Cheers.