
Three Questions to Vet AI Tools

Key Points

  • The market is flooded with over 100,000 AI tools, most of which add complex integration points and failure modes that can be harmful if an organization isn’t prepared to sustain them.
  • Successful AI adoption hinges on asking three critical evaluation questions, starting with whether the tool directly eliminates a clearly measurable pain point.
  • A concrete example is Lera Guard, which mitigates prompt‑injection attacks in production AI systems, illustrating disciplined tool selection based on a specific risk.
  • For personal productivity, Nessie Labs offers a Mac app that consolidates chats from Claude, ChatGPT, and Perplexity, showing how targeted tools can solve niche user problems.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=vDtwS1w16K4](https://www.youtube.com/watch?v=vDtwS1w16K4)
**Duration:** 00:08:53

## Sections

- [00:00:00](https://www.youtube.com/watch?v=vDtwS1w16K4&t=0s) **Evaluating AI Tools Effectively** — The speaker warns that most AI tools increase integration complexity and failure risk, and outlines a three‑question framework—starting with whether the tool alleviates a measurable pain—to responsibly assess any AI solution.
- [00:03:06](https://www.youtube.com/watch?v=vDtwS1w16K4&t=186s) **Evaluating Tool Adoption Viability** — Deciding whether a tool like Nessie fits requires clearly identifying the problem it solves and carefully weighing the effort, behavioral changes, integration complexity, and long‑term support needed to sustain it.
- [00:06:12](https://www.youtube.com/watch?v=vDtwS1w16K4&t=372s) **Evaluating Unregulated AI Tool Spending** — The speaker cautions against the unchecked VC‑driven rush to buy AI tools, stresses the importance of assessing usefulness, and spotlights two niche solutions—Lera Guard for prompt‑injection protection and Nessie as a personal AI knowledge base.

## Full Transcript
You know, there are more than 100,000 AI tools out there, and most of them are going to be useless. In fact, they're going to be actively harmful. Let me explain why. If you add any AI tool to your system, you are adding at least two new handoff and integration points, not to mention a whole host of failure modes. Generative AI products are more complex; they solve things that are harder to solve, and so there are more ways they can fail. Unless you are bought in and ready to sustain the product, you are buying yourself a load of failure. That is why we see study after study coming out showing companies investing in AI tools and being disappointed by what they buy. Sometimes it's not even the tool itself; it's the fact that the organization isn't ready.

So today I want to walk you through the three critical questions that I ask when I'm looking at AI tooling, so that you have a framework. Then I want to show you a couple of tools that I think are worth thinking about for specific pain points, tools you should ask yourself those questions about. And if you want to go deeper, I've got a whole load of tools to review over on the Substack, 45 or so, that I've started to ask these questions for and that you can dig into. Frankly, I think we should be asking these questions whenever we evaluate a tool.

Question number one: does it kill a pain that we can measure? When you are trying to find an AI tool, so often you think about hopes, you think about dreams, you think about how far you can go with the tool, and the vendor sells you a lot of cool stuff. Do you have a specific pain point? Do you have something that is absolutely crystal clear? As an example, Lera Guard, which I'm going to show here in a second, cuts down prompt attacks. That is its purpose.
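To make the running example concrete, here is a toy sketch of the kind of screening a guard layer performs before input reaches a model. The patterns and function names below are hypothetical illustrations of the pattern, not how Lera Guard (or any real product) actually works; production guards use far more sophisticated, model-based detection.

```python
import re

# Illustrative heuristics only -- a real guard product would not rely on a
# short regex list. These cover a few classic injection phrasings.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"you are now\b",            # role-hijack attempts
    r"disregard .* rules",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection heuristic."""
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)

def guarded_call(user_input: str, model_fn) -> str:
    """Sit between the application and the model: block flagged inputs."""
    if looks_like_injection(user_input):
        return "Request blocked: possible prompt injection."
    return model_fn(user_input)
```

The point of the sketch is the architecture, not the regexes: a guard is a named, measurable answer to a named, measurable pain ("I have prompt injection risk").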
It stops prompt injection attacks. Maybe not perfectly, but a lot of them. If you have a production AI system, you may want to think about a tool like Lera Guard because of that. I'm going to show these tools at the end of the video, so we're going to stay with the principles here; we'll get to the tools at the end. The point is simple, though: I can name a pain point. I can say I have prompt injection risk; therefore, I need a tool to address that; therefore, I need to review some vendors to go after it. I rarely see that level of discipline from people who are shopping for tools, whether they're individuals or larger companies.

I'll give you another example that's more individually focused. I want to keep track of my chats when I have them with Claude, when I have them with ChatGPT, when I have them with Perplexity, but I don't have one place to do that. Well, there is a startup addressing that now. It's called Nessie Labs, and they have a product out for Mac called Nessie, and that's exactly what it does. It imports your ChatGPT chats. If you use Chrome, it's going to work with you to keep track of your chats. It is laser focused on that specific pain point. But again, it's not perfect. It does not work if you use the built-in Claude app. It doesn't automatically keep track of your ChatGPT chats that are not in Chrome. So there are weaknesses, but you can't assess whether it's right for you if you don't know very specifically what the pain is that you're trying to solve. If you really care about getting all your chats in one place and organizing them, and you can name that pain, then maybe it's worth it. Maybe that's the right one to go after.

Question number two: can we integrate and sustain this tool?
You need to map out the effort of change. If it's an individual tool like Nessie, the effort is a change in behavior. Maybe you're using another browser, not Chrome. Maybe you're used to using the desktop apps for AI. Maybe you're not ready to do the work of exporting a zip file of old ChatGPT chats to get the organization started inside Nessie and have your memory layer. You have to decide if that's worth it, if the cost of sustaining that over time, of changing your behavior over time, is worth it.

If you are installing an enterprise tool, it's of course much more complicated. Your teams will need training. They will need to understand edge cases where the tool doesn't work. Your IT department is going to have to support it. It's exponentially more complex. Every single tool you add adds edges that you have to sustain: it touches other tools in your ecosystem, and it touches other teams. Have you mapped that out? Are you ready to sustain the tool? Good tools are going to make it as easy as possible to own setup, to tune your alerts, to figure out what ongoing maintenance looks like in a way that is sustainable for your business. Tools that are poorly constructed assume most of the work of figuring that out falls on you. That's why I'm a big fan of looking at documentation when you want to figure out which tools work.

Question number three: what is the worst failure mode here, and can we stomach it? Now, individuals sort of get away with one here. The worst failure mode for individuals is usually not too bad. You can actually just look at a particular tool set and say, "Yeah, you know what? I'm going to try Nessie. It's going to be fine.
The worst thing that could happen is that I end up with a tool with some memory that I didn't end up using. That's not too bad." Or I forget to use Chrome for a chat. That's not too bad.

Companies have a much higher bar to meet. Let's say you're using Mem0. It's a memory layer for customer success agents, so that the customer success AI agent can remember your customer and interact with them more personally. Great idea. What if there's a catastrophic failure and there's a memory leakage of some sort? Can you stomach the lack of trust that comes with that? Do you have assurances? Do you have architecture in place to make sure that's mitigated? What if you are using Lera Guard and a prompt injection attack does succeed? What do you do then?

So a lot of what we're doing when we look at tools is essentially a cross-check: do we understand the pain? Does this tool actually act like a heat-seeking missile and go after that particular pain point? If it does, can we integrate and sustain it? And if we can integrate and sustain it, do we understand the downside, and have we mitigated it? If we ask ourselves those questions, we are going to be so much farther along than today's unregulated tool purchases. Look, there are billions of dollars being thrown around here. Part of how the VC industry is sustaining itself right now is that people are throwing money at AI tools without asking themselves: is it useful?

So, without further ado, I'm going to show you just a peek at Mem0, at Lera Guard, and at Nessie Labs, because those are the ones I've referenced here. And if you're curious, I'd love you to dive in. There are, as I said, a bunch more tools that I'll have up on the Substack as well. So, this is LERA.
The idea is that it's a layer in between your generative AI applications and bad actors. This is a tool that enables you to proactively understand what is going on. You get visibility. It's going to protect you from prompt injection attacks, and you can control and configure it. Think of it as the classic security play: it's a shield, and you decide how you configure your shield. Nothing is perfect, but it's an example of a tool that's aimed at a particular risk that companies tend to describe as very painful.

This is Nessie. Nessie is exactly what I talked about: an individual AI knowledge base for the mind. You can download it for Mac. The idea is that it captures your chats and gets them into summaries you can organize and play with. It's laser focused on a particular pain point I hear about from a lot of people.

And last but not least, this is Mem0. It's focused specifically on how you can help AI agents remember customer success use cases. This is a travel example, but you can imagine it for a lot of other use cases, too. If you have generative AI applications that you're focused on, this becomes a really powerful way to connect with your customers.

I don't pick these because they paid me or anything; they don't know they're being talked about. I'm mentioning them because I think they do a good job of addressing a particular pain point, and because they let us talk about how you choose tools. I don't care if you choose these tools or not. I want you to understand how to pick tools that work for you.
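The memory-layer idea behind a tool in this category can be sketched in a few lines. Everything below (the class name, the naive keyword recall) is a hypothetical illustration of the pattern, not Mem0's actual API: store facts per customer, then pull the relevant ones back into the prompt.

```python
from collections import defaultdict

class CustomerMemory:
    """Toy per-customer memory layer for a customer-success agent."""

    def __init__(self) -> None:
        # customer_id -> list of remembered facts
        self._facts: dict[str, list[str]] = defaultdict(list)

    def remember(self, customer_id: str, fact: str) -> None:
        self._facts[customer_id].append(fact)

    def recall(self, customer_id: str, query: str) -> list[str]:
        # Naive keyword overlap; a real memory layer would use embeddings.
        words = set(query.lower().split())
        return [f for f in self._facts[customer_id]
                if words & set(f.lower().split())]

    def build_prompt(self, customer_id: str, question: str) -> str:
        context = "\n".join(self.recall(customer_id, question))
        return f"Known about this customer:\n{context}\n\nQuestion: {question}"
```

Note how small the surface is: the pain point ("the agent forgets the customer") maps directly onto two operations, remember and recall. The failure-mode question from earlier applies here too: a bug in this layer could leak one customer's facts into another's prompt.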
If you'd like to dig in further and understand how I think about tools, I have the same set of questions I just outlined, around finding the pain point, around sustainment, around worst-case-scenario planning, for something like 45 tools, because I think we need to have honest conversations about this. We have to start somewhere, and we have to start with harder questions than the ones we're asking. So, this is the "no tools" tool episode. I want you to bias toward not buying the tool unless it says yes to these questions, because too often we open the wallet too quickly. We need to be hard-nosed about which tools really matter. So there you go. That's my take. It's how you build a "no tools" tool culture and pick AI tools that actually matter.
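The three questions reduce to a go/no-go checklist, which can be captured in a few lines of code. The field names and the all-or-nothing verdict below are my own framing of the framework, not a formal method from the video, but the bias matches the speaker's: default to not buying.

```python
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    """The three vetting questions as boolean gates."""
    kills_measurable_pain: bool      # Q1: does it kill a pain we can measure?
    can_integrate_and_sustain: bool  # Q2: can we integrate and sustain it?
    worst_failure_acceptable: bool   # Q3: can we stomach the worst failure mode?

    def verdict(self) -> str:
        # Bias toward NOT buying: every question must come back "yes".
        if (self.kills_measurable_pain
                and self.can_integrate_and_sustain
                and self.worst_failure_acceptable):
            return "consider buying"
        return "do not buy yet"
```

For example, a tool that names a real pain but whose worst failure mode you cannot mitigate, `ToolAssessment(True, True, False)`, comes back "do not buy yet".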