
AI Note‑Taking: Promise vs Reality

Key Points

  • The current hype around AI‑powered note‑taking apps mirrors earlier VC bubbles, but the speaker remains skeptical and wants to assess their real value.
  • Studies show workers waste roughly 10 hours a week (about 25% of their time) searching for information across Slack, Docs, and other sources.
  • Corporate note‑taking suffers because the underlying data is “dirty” and LLMs struggle to interpret temporal cues and revision histories that humans rely on when evaluating relevance.
  • Most personal and corporate note‑taking systems are abandoned not out of laziness but because the maintenance costs outweigh the benefits, and the promise of AI is to make those systems worth keeping.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=JdTgxpfCa3E](https://www.youtube.com/watch?v=JdTgxpfCa3E)
**Duration:** 00:14:51

## Sections

- [00:00:00](https://www.youtube.com/watch?v=JdTgxpfCa3E&t=0s) **AI Note‑Taking: Hype vs Reality** - The speaker critiques venture‑capital hype around AI note‑taking apps, emphasizing the massive time wasted searching for information due to dirty data, and argues for a realistic, grounded use of LLMs to improve knowledge retrieval.
- [00:03:09](https://www.youtube.com/watch?v=JdTgxpfCa3E&t=189s) **LLM Limits in Enterprise Knowledge Management** - The speaker critiques AI‑driven tools such as Glean, noting high costs, integration hurdles, and hallucination risks that undermine reliable extraction of corporate decisions and information.
- [00:06:26](https://www.youtube.com/watch?v=JdTgxpfCa3E&t=386s) **AI‑Assisted Judgment and Auto‑Organization** - The speaker contends that, while AI lacks human taste and judgment, tools like the Sparkle auto‑filing system provide seamless organization, compensating for our memory flaws and chaotic file management.
- [00:09:30](https://www.youtube.com/watch?v=JdTgxpfCa3E&t=570s) **Optimizing LLM Semantic Search** - The speaker emphasizes feeding large language models clean, well‑structured data and using them for semantic retrieval while recognizing their limitations and building consistent note‑taking habits.
- [00:12:53](https://www.youtube.com/watch?v=JdTgxpfCa3E&t=773s) **Building an AI‑Powered Second Brain** - The speaker stresses organizing the LLM layer over your data so AI can act as a digital librarian, enhancing note‑taking habits and providing the sustained cognitive lift that makes a long‑term "second brain" possible.

## Full Transcript
You know, the joke is that the peak of venture capital is when you get excited about note-taking apps. And by that measure, we have now hit the peak of the AI cycle, because people are talking about AI-powered note-taking apps. So let me instead make an honest case for LLMs and note-taking. And I'm telling you, I'm coming from a somewhat skeptical position. I want to start by explaining how bad note-taking has been, for how long, and how important it is, before I set up where I want to go.

We waste about 10 hours a week searching for information. That's not me; that's actual studies done on workers. Roughly a quarter of our working time is spent looking for something: looking through Slacks, looking through Docs. Now, I know that there are tools that claim to do this. There have been tools claiming to help us do this since before Windows introduced the folder system to most people as the PC rolled out. And almost always, the data is dirty. In fact, one of the things I talk about in other videos is how the dirty data inside businesses isn't as valuable as people think, because it is so dirty and because of the way LLMs process information.

As a very trivial example: you as a human look at a wiki page, you look at the updated date, you look at what is new at the top, and you say, "Aha, I now know what I need to pay attention to." Or: "Oh my gosh, this was updated six years ago by someone who is no longer with the company. I'm not going to do anything with this page at all; I'm going to go ask an actual human," which is what we do like 80% of the time. But if you do see something useful, you know how to evaluate it. LLMs, even if they can read the wiki, don't always know.
They don't, because they process information as an entire semantic context. The idea of linear time affecting updates is not intuitive to LLMs. One of the challenges with most note-taking systems, in corporate contexts or even at home, is that we have this implicit idea of the timeline. At its simplest: it is today, therefore I'm going to make a diary entry. Or: it is the 23rd of June and I'm making an entry in my project folder about what I worked on in the weekly status review with the engineering team. And then we abandon it eventually. Hopefully we keep up with our diaries; you never know. But we abandon most of our note-taking efforts eventually because they seem to add nothing. We write things down. We don't know whether the program manager is paying attention. We're tired at home, it's 10:00 at night, and we don't really feel like taking notes, because who's going to read our diary of the day? We're just going to skip today. The abandoned note-taking setup isn't laziness; it's rational behavior. The cost of maintaining these systems exceeds their benefit. And the promise of AI is that that is going to change. I want to talk about how much of that promise has come true and how much of it we still have to make come true, because it's not all guaranteed.

Fundamentally, though, things should change with LLMs. LLMs don't just make search better. Ideally, they eliminate the need for organization entirely. Think about it: why do we organize? Because computers are dumb. They need exact matches. They need proper filing. They need consistent naming. But what if your computer could understand context like a colleague would? Look, if you can dump a message transcript in, ask "What did we actually decide?", and watch the LLM extract the decisions?
That's not just an incremental improvement. It's a paradigm shift in the way we organize information. This is what has made Glean a valuable company for the enterprise. Now, no one's recommending Glean for your personal note-taking system, because it's like 50 or 60 grand to start. And I've used Glean. It's okay. It reads a lot like a ChatGPT-4-style model that suddenly got access to corporate data. And even that is somewhat questionable, because Salesforce is apparently cutting off access to Slack, which is the living, breathing backbone of information for a lot of companies, unless you're using Teams. Maybe Glean will be more of a Microsoft-angled company going forward. We will have to see; that's speculation.

But let's be honest about what doesn't work. It's not just the Marc Benioffs of this world saying you can't get access to our data because we value data in the age of AI. It's that LLMs hallucinate. I've watched it happen; we just did a case study on this, about Claudius, the LLM that ran the vending machine. Claudius made up a colleague named Sarah who did not exist. That happens. I've watched LLMs quote policies that do not exist. There was an entire lawsuit about that, with Air Canada and a bereavement policy, which I've talked about. Stanford has suggested that in actual workplace use cases, there's a 15 to 20% fabrication rate. That seems really terrifying. Why on earth would I be advocating for this if we're in a business context and we have to get this right? Well, I'll tell you why. Because at the end of the day, any incremental forward progress, if it is correct, is better than nothing.
And so what that suggests to me is this: if in the previous age of computing our problem was file organization, and we had to bend our brains to make them work the way computers do, then in this world where AI sits, our fundamental problem is good judgment. We have to have the judgment to say, "Hey, Sarah's not a colleague. Sarah doesn't exist. Try that again." Or, "I'm going to go look at the sources on this one." That is the trade we have to make in order to use these AI note-taking tools the way we need to. And I don't want to sit here and pretend that there's something magical that's going to take that hallucination rate to zero. There are absolutely tricks that reduce it. You can ask more precise questions. You can install systems that give the LLM the option to say, "I don't know." You can install system prompts that encourage the LLM to ask questions when it's confused. There are things you can do that materially reduce hallucination rates. Clean data is a good help, too. But you're not going to get it to zero, which means your most valuable skill has moved from "can I organize like a machine when I want to collect information?" to "can I name and label appropriately, then go retrieve it, and have the taste to see when the LLM comes back with something wrong?" It's like having a magical fishing net: sometimes it brings up something that is fool's gold, not the real thing, and you have to tell the difference. That taste and judgment is what we're missing. And it's ironic that it shows up in so many places; it's almost like we have some universal truths coming with this computing revolution. We always talked about the value of wisdom as humans.
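Most of the mitigations listed here (precise questions, an explicit "I don't know" option, permission to ask clarifying questions) live in the system prompt. A minimal sketch, assuming a generic chat-message format; the prompt wording is my own illustrative assumption, not a tested recipe from the video:

```python
# Sketch: give the model an explicit out instead of forcing an answer.
# No real API is called here; this only builds the request payload in the
# common {"role": ..., "content": ...} chat shape.

def build_grounded_messages(question: str, notes: str) -> list[dict]:
    """Build a chat request that reduces hallucination pressure."""
    system = (
        "Answer ONLY from the notes provided. "
        "If the notes do not contain the answer, reply exactly: I don't know. "
        "If the question is ambiguous, ask a clarifying question instead of guessing."
    )
    user = f"Notes:\n{notes}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_grounded_messages(
    "What did we decide about the launch date?",
    "2024-06-23 status review: launch moved to Q3; owner: engineering.",
)
```

The point is that "I don't know" has to be an allowed, named outcome; otherwise the model fills the gap with its best-sounding guess.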
Now we have to show wisdom and judgment to use our computers, because the computers take care of a lot of the other things. The computers will remember for us, and sometimes they will invent memories, which, by the way, is very human. Humans invent memories too, and we have to tell the difference between an invented memory and the real thing. So what I want to suggest to you is that despite all these drawbacks, having AI in a note-taking system is eminently worth it. You want to be in a position where you can just heap things up and it will just magically work.

I have been a devoted fan of a product from Every called Sparkle for a long time. Sparkle is very simple. All it does is get rid of the filing problem, which is a huge deal for me. I am not a good filer. I'm not a good organizer. My local hard drive, on every computer until now, has been a complete mess. Sparkle makes that go away. It automatically runs on my Downloads folder and sorts it into a neat series of folders by type of data. That's it. Very simple. Not necessarily the organizational scheme I would have chosen, if I'm being honest with you. But I don't have to care, because I now know where stuff is: the organization system is rigorously followed, and I can easily search. So even if something is not fully AI-enabled, having automations like that is a huge cognitive-load lifter. And having optimizations like that combined with AI is where the value is.

Look, there are all kinds of options for note-taking. I'll run through a few. There's Obsidian, there's Mem, there's Notion. I like Notion. I put a lot in Notion. I find Notion's search is very helpful.
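The auto-filing idea is simple enough to sketch. Sparkle's actual implementation isn't public, so this only imitates the visible behavior described above (sort a Downloads folder into per-type subfolders); the category map and folder names are assumptions:

```python
# Sketch of rule-based auto-filing: move each file in a folder into a
# subfolder chosen by its extension. Unknown types land in "Other".
from pathlib import Path
import shutil

# Assumed category map; Sparkle's real scheme may differ.
CATEGORIES = {
    ".pdf": "Documents", ".docx": "Documents", ".md": "Documents",
    ".png": "Images", ".jpg": "Images",
    ".csv": "Data", ".xlsx": "Data",
    ".zip": "Archives",
}

def file_by_type(downloads: Path) -> None:
    """Move every file in `downloads` into a subfolder keyed by extension."""
    for item in list(downloads.iterdir()):  # snapshot before moving anything
        if not item.is_file():
            continue
        folder = downloads / CATEGORIES.get(item.suffix.lower(), "Other")
        folder.mkdir(exist_ok=True)
        shutil.move(str(item), str(folder / item.name))
```

A scheduled run of something like this gives the "rigorously followed" property the speaker values: the scheme doesn't have to be your favorite, only consistent.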
Notion lets me keep my throw-stuff-in-the-junk-heap habits and still find things pretty reliably. Notion also understands the idea of recency, and the introduction of AI has made it very easy to add, create, and hybridize notes together the way my brain works. But everybody's different. I'm not saying use Notion; use Obsidian, use Mem, use something else. The point is to find a way for the AI to take some of the cognitive load off, so that you can throw things in a heap and go after what you want with easy search, focusing on your good taste and your good judgment. Now, if you are someone who finds deep relaxation in organization, that's also fine. You can still find AI systems that will let you define the organizational hierarchy and then search across it. The larger value is still there. The larger value is that semantic meaning is not something you have to remember anymore; semantic meaning is something the AI can help you remember. Now, there are weaknesses to that, but guess what? As a human, you already have those weaknesses. You are also a semantic meaning maker. If you're searching for something, you're like, "No, no, no, it's not that the project manager and the product manager are similar; it's that the project manager is actually connected to this project." I do that in my head all the time. We are meaning makers and semantic makers in the way our neurons form memories. So are LLMs; they do something similar when they encode things in vector space. And so our job is just to set up systems that enable those LLMs to search semantic memory appropriately. Clean data. Maybe don't keep the six-year-old wiki in there. Make sure that you have clean Markdown.
Make sure that you're comfortable with the file structure. For me, I don't need to define it; other people do. And make sure that you are using the AI for what it's good at right now, which is very much semantic-meaning search, and not for what it's not good at. It is not good at reliably getting everything correct. If you ran a keyword search in Windows and the keyword was there, it hit 100% of the time. That is not true here, and that is a big difference in search. It's very fundamental to how AI works, and we have to get used to the idea that we need to challenge these systems. But they still add tremendous value because of the cognitive load they lift the other 80 to 90% of the time. Net net, they're worth it, but you have to be aware of what you're doing and give them data that is as clean as you can make it. So: pick a tool, commit to it, recognize that the incremental value is in the habit you're building, not in any individual retrieval or any individual note you take, and then lower the barrier to note-taking.

One of the beautiful things about AI is that it has also simplified note-taking itself. I use Granola. It's super easy: you get the transcript right there, you get the notes right there. It's not hard. Other people use other things. People use Otter. Some people use ChatGPT's native transcription. I don't like that as much, because it sort of hides the process from you and then gives you very generic notes. I tend to have (big surprise) opinions about my notes, and I like to be able to write custom prompts against the transcript. Do what you want.
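The keyword-versus-semantic distinction above can be made concrete: a keyword hit is deterministic, while semantic retrieval ranks by similarity in a vector space and can miss or mis-rank. Real systems use learned embeddings; the bag-of-words vectors below are only a toy stand-in to show the mechanics:

```python
# Toy contrast between exact keyword search and similarity-ranked retrieval.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_search(notes: list[str], term: str) -> list[str]:
    """Deterministic: a note matches iff it literally contains the term."""
    return [n for n in notes if term.lower() in n.lower()]

def semantic_search(notes: list[str], query: str, k: int = 1) -> list[str]:
    """Approximate: rank notes by vector similarity to the query."""
    q = embed(query)
    return sorted(notes, key=lambda n: cosine(embed(n), q), reverse=True)[:k]

notes = [
    "project manager assigned to the billing project",
    "product manager roadmap review notes",
]
exact = keyword_search(notes, "billing")
ranked = semantic_search(notes, "who runs the billing project")
```

The keyword path either hits or it doesn't; the ranked path returns its best guess even for queries with no literal overlap, which is exactly why the output needs a human judgment check.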
The beautiful thing is that you can actually use AI and its ability to organize semantic meaning to reduce the cognitive labor of getting notes into your note-taking app in the first place. And then you can use AI to search across them. I can use AI to tag my notes, which I would never have the discipline to do otherwise. And by tagging the notes, it makes them easier for another AI to find. This is not creating synthetic data in a way that is likely to accelerate information decay, because the individual steps can easily be watched over by a human (me, in this case) just saying, "Oh look, you applied the wrong label," or, "Oh look, the label's right," which it almost always is.

One of the things about AI is that the really dramatic, unhelpful hallucinations tend to arise in large, multi-step, complex situations: like when Claudius went off the rails and had what I can only describe as the LLM version of a psychotic break on March 3rd during the vending-machine experiment, and then recovered spontaneously on April Fools' Day in a way none of us understands. It was engaged in a months-long, complex effort to run a vending machine with minimal tooling and no physical access to the machine. I would describe that, from a human perspective, as being under a fair bit of stress. When an LLM is simply asked to summarize 30 minutes of notes, I rarely see issues. So it's important to understand task sizing and retrieval scope when you are doing this note-architecture exercise. Big surprise: this is what I say a lot on this channel. If you put thought into how you structure the LLM layer on top of your data layer, you're going to be in better shape.
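The tag-then-review loop described here can be sketched in a few lines. The `suggest_tags` stub below is a hypothetical placeholder for an LLM call (any model could fill that role); what matters is the human approval step on each label, which is what keeps AI-generated tags from silently accumulating errors:

```python
# Sketch of human-in-the-loop tagging: an automatic tagger proposes labels,
# a reviewer callback accepts or rejects each one.

def suggest_tags(note: str) -> list[str]:
    # Stub tagger: keyword heuristics standing in for a real model call.
    rules = {"meeting": "meetings", "decided": "decisions", "budget": "finance"}
    return [tag for word, tag in rules.items() if word in note.lower()]

def review_tags(note: str, approve) -> list[str]:
    """Keep only the suggested tags the reviewer approves for this note."""
    return [t for t in suggest_tags(note) if approve(note, t)]

note = "Weekly meeting: we decided to move the launch to Q3."
tags = review_tags(note, approve=lambda n, t: True)  # auto-approve for the demo
```

In practice `approve` would be an interactive prompt or a periodic audit; per the speaker's point, each step stays small enough for a human glance to catch a wrong label.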
So, net net, what I want to leave you with is this: your brain evolved to think. It did not really evolve to file. We've been doing filing for a while because our computers have been stupid. But now AI can help by playing librarian. It may not be a perfect librarian, but having a librarian at all for our data and our memories is really helpful. LLMs can enable you to focus on what matters, the thinking, while you supply the good judgment for the moments when they are not great. The question is whether them helping you is better than going it alone, and in particular whether the cognitive lift you get from a note-taking system with AI enablement and support is enough to keep you in the habit of keeping notes long term, so that over time, as you build a good body of notes, the second brain can really start to come into focus. The value of a second brain is in all of the effort together; it is not in any individual effort. You have to stick with it for a period of time to make it work. And that's why I think it's such an important subject right now. We have to spend our time thinking better, and a good second brain is a huge step in the right direction. LLMs can be a big help. I wanted to take a minute to unpack what makes them difficult to work with, what makes them easy to work with, and why I think they're a breakthrough in this whole effort around note-taking. Everyone I have studied who is considered a genius or an inventor has had some kind of note-taking system or notebook. I don't think that's an accident. Having a second brain was actually a skill that was taught in Scottish universities; it was called commonplacing. We have been doing this for a long time.
Now we can do it with the help of AI. It may not be perfect, but I would sure rather be here than be trying to get the ink to write right from a fountain pen in a Scottish university in the 18th century.