Flawed Prompt Packs Undermine AI Literacy

Key Points

  • The newly released ChatGPT prompt pack offers overly generic, one‑line prompts that lack the necessary context for complex tasks like GDPR compliance, making them ineffective for professional teams.
  • Relying on such superficial resources promotes a false sense of mastery, trapping a future generation of knowledge workers in the “messy middle” of AI adoption where they treat AI like ordinary software instead of a skill‑intensive tool.
  • To stay competitive, individuals must continually deepen their prompting expertise, as AI advances rapidly and only those who “lean all the way in” will keep up.
  • Real‑world examples show the difference that sophisticated prompting can make—e.g., generating a full financial analysis from a screenshot in Excel with Sonnet 4.5—while less capable attempts (like with GPT‑5) fall short.
  • Effective AI use requires teaching teams how to craft detailed, context‑rich prompts rather than deploying generic packs, ensuring AI serves as a true intelligence‑augmentation resource.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=N8ddmMBJrzo](https://www.youtube.com/watch?v=N8ddmMBJrzo)
**Duration:** 00:12:23

## Sections

- [00:00:00](https://www.youtube.com/watch?v=N8ddmMBJrzo&t=0s) **Critique of ChatGPT Prompt Pack** - The speaker condemns the newly released ChatGPT prompt pack as overly generic and ineffective, arguing it exemplifies the poor state of AI education and underscores the need for more thoughtful, context‑rich prompting.
- [00:03:42](https://www.youtube.com/watch?v=N8ddmMBJrzo&t=222s) **Designing Contextual AI Upskilling** - The speaker argues that effective AI education should target specific job‑family pain points and real‑world use cases, rather than assuming generic prompting skills will seamlessly transfer from search‑engine experience.
- [00:07:18](https://www.youtube.com/watch?v=N8ddmMBJrzo&t=438s) **AI Adoption Hindered by Human Gaps** - The speaker argues that AI tools like Claude and Copilot are powerful when integrated into clear workflows with proper training, but that widespread reluctance, insufficient training, and token compliance efforts are the primary barriers to effective implementation.
- [00:11:45](https://www.youtube.com/watch?v=N8ddmMBJrzo&t=705s) **Call for Better AI Education** - The speaker urges model makers to invest in clear, beginner‑friendly AI education and resources, while personally sharing content and encouraging passionate, problem‑focused engagement with AI.

## Full Transcript
ChatGPT launched an absolutely terrible resource for prompting, and I think it deserves more attention, because we need to talk about how bad AI education is today and how much depends on getting it right. And ChatGPT is a leader in the space. They're seen as an influencer, as the first mover. People will look at things like the ChatGPT prompt pack that just got released and say, "This is something we need to give to all of our teams." They're terrible prompts, guys. They're one- or two-line prompts that are extremely generic. In fact, I'm going to go ahead and read you one for their most technical audience: engineers. Let's say your engineers are asked to come up with GDPR compliance responses from a technical perspective. How should we advance GDPR compliance? You might think that you need a fairly complex prompt for that. It should take account of your data schema. It should look at the countries where you have a footprint. It should look at data processing, where data is stored, and what your existing stack looks like. None of that comes out in this prompt. "Research best practices for GDPR and CCPA compliance" (not even one regulation; it mashes them together) "so we can help kick off our discussions with our legal team." When has engineering ever kicked off discussions with legal? "Context: our app stores sensitive user data in the EU and US. Output a compliance checklist with citations sorted by regulation. Include links to documentation and regulations." No, that's what Google is for. That is not what intelligence is for. If you're building intelligence that's too cheap to meter, teach us how to use it. Be useful with it.
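The speaker's list of missing context (data schema, geographic footprint, storage locations, existing stack) can be sketched as a small prompt builder. This is a minimal illustration, not a prompt from any real pack; the function name, parameters, and example values are all hypothetical.

```python
def build_gdpr_prompt(schema_summary, regions, storage_locations, stack):
    """Assemble a context-rich compliance prompt from the details the
    speaker says a generic one-liner leaves out. Everything here is
    illustrative, not taken from OpenAI's prompt pack."""
    return (
        "You are advising an engineering team on GDPR compliance.\n"
        f"Data schema (summary): {schema_summary}\n"
        f"Countries with a user footprint: {', '.join(regions)}\n"
        f"Where data is stored and processed: {', '.join(storage_locations)}\n"
        f"Current stack: {', '.join(stack)}\n"
        "Goal: produce a prioritized compliance checklist that cites the "
        "specific GDPR articles each item addresses, so engineering can "
        "brief legal on concrete gaps rather than generic best practices."
    )

# Hypothetical example values for one team's setup
prompt = build_gdpr_prompt(
    schema_summary="users(email, ip_address, payment_token), events(user_id, geo)",
    regions=["Germany", "France", "United States"],
    storage_locations=["EU (Frankfurt, primary)", "US (Virginia, analytics replica)"],
    stack=["PostgreSQL", "Kafka", "Snowflake"],
)
print(prompt)
```

The point is not the exact wording but that the prompt carries the team's own schema, footprint, and stack, so the model can answer about this system rather than reciting generalities.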
And this worries me, because one of the looming fears I have for 2026 is that we are going to get a generation of builders, of workers, of knowledge workers trapped in the messy middle of AI adoption. Resources like this encourage that kind of behavior. They encourage the assumption that we only need to pretend this is regular software we have to adopt: I can go get the prompt pack from OpenAI, I can roll it out as a manager to my sales team or my engineering team or my product team, and I'm done; we can move on; it's a one-and-done thing. AI is on an exponential curve. This is a case of getting onto a moving train. You are either going to lean all the way in, learn fast, scale up your skills quickly, and keep leaning in, or you're going to get left behind. And if you learn two or three lines in a prompt and you think you've got it, you're in the left-behind contingent. You're going to be surprised when people come along and say, "I one-shotted an entire financial analysis off a screenshot, and here it is in Excel." Which, by the way, is a real example: I did that with Sonnet 4.5 last night. Very helpful. I actually tried it with GPT-5 as well. GPT-5 did not do as good a job, which I thought was really interesting, because it's usually very good at image analysis. But that being said, that's an example of the kind of thing I tried. I learned something new about image capabilities that wasn't really published very well by Claude, and now I know more, and now I'm sharing it. There are hundreds of those examples. Part of why I make this channel is so that it is easier to keep up, easier to understand. Part of why I write the posts I do on Substack is so it's easier to find.
My response, by the way, to the OpenAI choice to release what is effectively a gigantic packet of lousy prompts: and it's not just me saying that; Reddit has also been ripping it apart. I know we don't always like Reddit, but they have rightly been ripping this prompt pack apart as completely useless for people who are serious about AI. I am making a prompt pack in response that is actually useful, and I'm going to put it on Substack. So if you want something by job family, I'm putting it together. This is just really bad. You can't assume that all you need is a basic ability to ask questions of AI. If that were true, then one, GPT-5 would be easier to prompt, which it is not; and two, you would expect people to transfer their existing Google skills to AI seamlessly, but it's actually a very different skill set, and people have been asking questions of Google for a very long time; that's not a new thing. I'm concerned. I'm concerned that our assumptions about what is needed for AI education do not match the pace of development. If I were designing a curriculum for teams (and I get asked this, so I'm going to share right here what I would say), if I were asked to design an upskilling curriculum for teams, I would start by working through use cases with them. Where are the pain points in the team's existing workflow? Engineers, product managers, sales, whatever it is. Where are the pain points where we see lots and lots of manual cycles and not a lot of results? Where you just grind on it? Great. Thank you. That is a candidate for talking about AI. And then we ground the whole day, the whole time we have together, in actually talking through how AI can unlock that for you. That makes it tangible.
It immediately goes from the silly two- or three-line prompts like the ones I've been tearing apart here into something that is useful for your use case. Maybe your use case is that your team struggles to get classic, strong, bulleted technical requirements out of the documents product gives you. Great, that's one we could work on with AI. Maybe your team struggles to get accurate sales pipeline predictions. Well, thanks to tool use with LLMs, you can start to get that too. Maybe you're struggling with the pace of the interview pipeline as you try to bring people on board. You can get note-taking. You can get standardized forms to review. You can get standardized question sets. There's a lot you can do with AI to lift that burden and still put the human at the center of the interview process, so you can focus on assessing candidates. Those are just off the top of my head. Every single department is full of those kinds of opportunities. And the gap is our ability to understand how quickly AI is scaling and how much capability we have on the table. There is meat on the bone here that we are not touching. Most managers have no idea how much AI opportunity there is in their space. When I come in and look at it, 80 or 90% of the AI opportunity is untouched. You're sitting there talking about how Copilot can do this and that, or how ChatGPT can do this and that. Great. I'm glad you're chatting with ChatGPT. I'm glad you're using Copilot for your emails. But have you thought in workflows? Have you thought about the impact your team is delivering and worked back from that into your pain points? No? Well, maybe we should start with that and then get into training.
And so, yes, when I build prompts, when I think about what teams need, I think about how to build prompts that are going to be supportive of workflows. Of course they are longer, and they can be longer, because AI can do more. And by the way, if you're listening to this and you're thinking, "My org uses Copilot, Nate; ChatGPT what? Claude what?" Well, one, I have news for you: Claude is now in the Office family for Microsoft. It's blessed. That is why Satya Nadella was bragging about having the best Excel model. He just put Claude in a wrapper, right? He doesn't have a magical best Excel model that he's been hiding; he put Claude in a wrapper. So Claude is going to be there. But two, it is not the AI model that matters; it is the way you use it, which is a very zen thing to say, but it's true. If you have a good idea of what you want to get done with workflows, you can do a ton with Copilot. I wrote a whole guide for that. You can do a ton with Copilot to enable your business to actually use AI. It is not just for email; it is a model you can actually employ. Don't get sidetracked into conversations about "my model's terrible" or "my model isn't as good as the best thinking models out there." You can still do a lot with it. We would still be impressed if it were 2022 and that model launched. If Copilot had come out in 2022, everyone would be over the moon. There's a ton you can do with it. The gap is people. The gap is people, one, not being willing to train; that's part of why Accenture fired 11,000 people, since the strong implication was that they were not willing to be trained on AI. I don't know if that's true; it's Accenture's side of the story, but that's what they said. And two, the gap is people thinking a little bit of training is enough.
And that is why I am concerned about what ChatGPT did, because they basically said: do you want to get started? A little bit is enough. You can just get started. Put these two sentences in on GDPR and CCPA and you'll be done. You'll be good. And then they did that 200 times. People on Reddit were saying the intern wrote the prompts with ChatGPT, and I'm like, "No, I think the intern just wrote them by themselves, because ChatGPT would write a better prompt." We owe it to ourselves, and people further along in AI, people at model makers, owe it to the community to produce better resources. And I know that we have a gradation of talent and we need on-ramps for everybody to get into AI. Not everybody is going to sit there and listen to Andrej Karpathy talk about LLMs and just go, "Wow, this is amazing. Yes, they're stochastic people spirits." (That's an allusion to the YC 2025 presentation he made.) No, they're not all going to do that. Everybody needs to get on at their own pace, but we need really clear progression, and we need to help people understand principles that can scale. So if you're going to give people simple prompts, maybe that's all right, as long as they understand: one, this is just the start and you need to do better; two, this is how it ties to your workflow and moves things forward; and three, these are the principles that scale with it. If OpenAI had taken the time to say, from their own best practices, that it's really important to establish context for the prompt, and that having a goal for the prompt is important, and to show how they're doing that even in a simple prompt, that would be helpful. That helps you internalize these principles.
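The three conditions above can be made concrete with a small annotated example: ship a simple prompt, but label which scalable principle each piece carries. The principle labels and the sample prompt below are illustrative, not drawn from OpenAI's pack.

```python
# A minimal sketch of a "simple prompt with principles attached."
# The labels and wording are illustrative assumptions, not a real resource.
simple_prompt = {
    "context": "Our app stores sensitive user data in the EU and US.",      # principle: establish context
    "goal": "Draft a GDPR compliance checklist for our legal kickoff.",     # principle: state the goal
    "next_step": "List what extra context would make this prompt sharper.", # principle: show how it scales
}

# Render the prompt with each principle visibly labeled for the learner
annotated = "\n".join(f"[{label}] {text}" for label, text in simple_prompt.items())
print(annotated)
```

A beginner can use the two-line version as-is, but the labels make it obvious where to add depth later, which is exactly the progression the speaker says generic packs omit.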
If you don't do that, you're going to be stuck thinking that you understand prompting and AI, and you're going to get left behind in 2026. We don't want that. We need better prompt education. We need better AI education. We need a better understanding of where AI opportunities lie in our fields of work, so that we retain our curiosity and we learn with AI. And we're just not getting that from resources like this. So I call them like I see them. Every model maker has spots they do well on and spots they don't. In this case, I don't think the new ChatGPT prompt pack is moving the ball forward at all. It reads very much like a defensive gesture: they needed people buying ChatGPT for enterprise to have a link they could point to, to say they offer prompt-pack education, and then somebody ticks the box and they get the sale. That is not what education is. So, I built some prompts, but mostly: make sure you understand why you are learning the AI you're learning. Make sure you understand your use cases, and make sure you lean in on growing your AI knowledge over time. This is not a typical software adoption story. This is a new general-purpose technology, and we need to treat it like that if we are going to successfully hang on to the train while it is scaling exponentially. Sonnet 4.5 did 30 hours of continuous work and rebuilt Slack. They built their own version of Slack, and Sonnet just went and did it, wrote 11,000 lines of code, and it worked. That is what the bar is becoming. I'm not saying any of the dramatic things, that this replaces engineers and so on, because if you work in software engineering you will see the weak spots of AI all over the place, but it's a big, big deal.
It is going to change how engineers work. It's going to change how PMs work. It's going to change how product gets built. It's going to change our velocity expectations. And we need AI education that keeps that in mind. When we talk about prompting, we need to prompt with that world in mind. And that's why I care so much about this: because we deserve better. So this is my plea. If you work at a model maker, please invest in, yes, AI education for beginners, but really clear on-ramps, really clear scale-ups. Help us be able to teach this well. And in the meantime, I'm doing my best to put content out there everywhere I can think of that is going to be more useful, more aligned to where AI is going. So, if you want the prompts, you know where to get them. In the meantime, have fun, enjoy AI, pick a problem space you care about, and get passionate about it, because I don't think we're going to survive if we're not passionate about it.