Learning Library

Generative vs Agentic AI, Dark Web

Key Points

  • Generative AI focuses on on‑demand content creation (text, code, images, music) by responding to a single prompt, whereas agentic AI pursues a defined goal through multi‑step planning, execution, memory, and self‑improvement without continuous human input.
  • Agentic AI’s workflow typically involves a planning phase, execution using large language models or specialized tools, ongoing context management via memory, and a feedback loop that refines its actions.
  • Common generative AI use cases include copywriting, image and code generation, and summarization, while agentic AI is suited for complex, adaptive tasks such as autonomous incident‑response runbooks and robotic process automation.
  • The “dark web” is called “dark” because it is unindexed and hidden, not because it solely contains illicit material, making it difficult to locate and block.
  • Estimates suggest the dark web comprises less than 2% of all web content, further complicating any effort to outlaw or comprehensively block it.
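The plan, execute, memory, and feedback stages described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework's API; the `plan` and `execute` helpers are hypothetical stand-ins for calls to an LLM or external tools.

```python
# Minimal sketch of an agentic loop: plan the steps, execute them one
# by one, keep context in memory, and refine via a feedback check.
# `plan` and `execute` are hypothetical stand-ins, not a real framework.

def plan(goal):
    # A real agent would ask an LLM to decompose the goal into steps.
    return [f"step {i}: {goal}" for i in (1, 2, 3)]

def execute(step, memory):
    # A real agent would call an LLM or a domain-specific tool here.
    result = f"done: {step}"
    memory.append(result)  # persist context so later steps can use it
    return result

def run_agent(goal):
    memory = []                        # working memory across steps
    for step in plan(goal):            # planning phase, then execution
        result = execute(step, memory)
        if not result.startswith("done"):
            plan(goal)                 # feedback loop: re-plan on failure
    return memory
```

By contrast, a purely generative call would be a single prompt-in, output-out exchange with no loop, no memory, and no goal tracking.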

Sections

  • [00:00:00](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=0s) **Generative vs Agentic AI** - Generative AI produces on‑demand content in reaction to prompts, whereas agentic AI autonomously plans, executes, and iterates multi‑step actions to achieve a specified goal.
  • [00:03:09](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=189s) **Challenges of Blocking the Dark Web** - The speaker outlines the technical, jurisdictional, and ethical obstacles to censoring dark‑web content, noting its hidden nature, global legal gaps, constantly shifting sites, and occasional importance for free‑speech protection.
  • [00:06:15](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=375s) **Why LLMs Hallucinate Answers** - The speaker explains that large language models generate text by predicting the most likely next token rather than retrieving factual data, which causes plausible‑sounding but inaccurate outputs, especially for recent events, niche subjects, or leading questions, though larger, more advanced models tend to reduce these hallucinations.
  • [00:09:22](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=562s) **Browser Bugs Can Infect Systems** - The speaker explains how browser plug‑ins, extensions, and JavaScript can introduce vulnerabilities, allowing malicious code to escape sandbox protections and compromise a user's computer.
  • [00:12:35](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=755s) **AI's Strengths and Job Risks** - The speaker explains that AI currently excels at pattern recognition, data processing, and drafting documents but lacks creativity, empathy, complex reasoning, physical dexterity, and adaptability, making rule‑based, documentation‑heavy, low‑judgment jobs especially vulnerable to automation.
  • [00:15:50](https://www.youtube.com/watch?v=79u2qP4Qhaw&t=950s) **Job Request, Promo, and Conference Talk** - The speaker dismisses a viewer's job‑hunting plea, plugs another video and the TechXchange conference, and jokes about using a lightboard to demonstrate writing backwards.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=79u2qP4Qhaw](https://www.youtube.com/watch?v=79u2qP4Qhaw)
**Duration:** 00:18:00
0:00 All right, let's see. I'll start with an easy one. How about that? Right. Because I know you guys hear this one all the time. So, what's the difference between generative AI and agentic AI? Martin.

0:09 Yeah, I'll take that one. So, well, they're pretty similar, right? I mean, we think gen AI is all about—the clue is in the name—generation. Producing new content, so text or maybe some code or even images, music and so forth. So, it's about generating on demand. It's reactive, it waits for a prompt and then it gives you an output once you've prompted it.

0:33 But the other option is really agentic AI, which is super hot right now, which is all about actually achieving—so hot—it's about achieving a goal, right? So rather than just prompting it—we put a prompt in and then we get a response out—with an agentic AI, we give it a goal and it now has to plan. It has to decide and it has to take multi-step actions along the way, and it's going to do it without us being involved all the way through it. So, it can trigger its own next steps. It can adapt to ... to changing context and keep going until it finally meets that goal.

1:13 So, if you think about the ... the stages of agentic AI, there really are multiple. There's kind of the ... the planning stage where it gets started. Then once it's figured out a plan, there's the execution stage where it's going to maybe call a ... a large language model or some domain-specific tools. As it goes, it needs to be talking to memory, so it remembers stuff about what's going on, because that's pretty important that it keeps context. And then we go through kind of a feedback loop where it keeps on self-improving as it goes.

1:45 So, in terms of use cases: generative AI, copywriting, image generation, code generation, summarizing, that sort of thing.
1:55 But agentic AI, that's going to be bigger stuff like, well, Jeff, you'd know about this, like autonomous incident response runbooks. Absolutely, something ... Right. Security stuff. Yeah, or robotic process automation. Stuff that needs to adapt on the fly. So, they're really pretty different.

2:12 Okay, let's see here. We've got Jeff over here, we've got Martin over here. Okay. So that's one for Martin. All right. Now let's ask Jeff a question. So, Jeff, why can't we just outlaw or even just block the dark web?

2:28 Yeah, yeah. So, a lot of people ask this because they ... they first of all think that dark means dark content. And that's not why we call it dark, although there certainly is some dark and prohibited kind of content. We call it the dark web because it's not indexed. It's hard to find. It's in the shadows.

2:46 And if you think about uh ... the dark web, the first question, if you were going to try to block it is, you'd have to find where it is. So, imagine all of these websites that are out on the internet. Well, maybe there's a site that is part of the dark web, and our estimates are—nobody has any official numbers—it's less than 2% of the content on the entire web. That would be dark web. So good luck in finding it. First of all, because it's a small amount. And secondly, it would be hard to find because there's just not ... not much of it and there's no indexing. So, you can't go to a search engine in order to ... to get to it. So, the first challenge would be finding it if you wanted to block it.

3:28 The next issue then becomes one of jurisdiction. I mean, who gets to basically outlaw things on the internet? The internet is a global phenomenon, so that means individual countries can do what they want to do. And even though you might outlaw the content in one area, another area may not.
3:47 And therefore, the content just moves there. So that becomes a problem. Uh ... Also, the whole thing with the dark web is that it's a bit of a game of whack-a-mole. You've got a site that pops up and then maybe it shuts down and it relocates to a different place. So, this is always going to be a chasing-your-tail kind of situation. So, it's not really practical.

4:08 And then, the question I would ask a lot of people to think about: would it even be desirable to block the dark web? Well, certainly some of the content, I think we would all agree, would be better if it didn't exist. And we'd love to block that. But that's hard to, again, filter that stuff out. But there's also some content on the dark web that actually serves us. There are some places in the world where free speech is not honored, and therefore, if a reporter wanted to get a story out, putting it on the dark web is another way to do that. Uh ... If there are cases where we want to do research, if we want to be able to figure out how hackers are doing what they're doing, we can monitor their activities because they're talking on there. So, there's a lot of different things like that that actually could benefit us if we use it well. But, we have to use it well, and that's not always easy to do.

4:56 Wow, what do you think? Was that a good answer? It's really interesting to hear you say that we should actually find some use cases to keep the dark web around because initially, not knowing too much about this, I'm like, oh yeah, we want to shut that thing down. But actually, it does have some uses too. Might as well, because we can't make it go away. Yeah. Wow! Alright.

5:14 Martin, how are you guys ... You're so smart, it's almost intimidating, but ... uh ... but I'm doing the best I can to keep up here. Alright, I got another question for you. Are you ready for this one?
So, if AI is so smart, like ... like you guys are, right, why does it make stuff up?

5:30 Yeah, that is such a good question. Such a good question. So yeah ... We make stuff up all the time, so I don't think it's any different. Let me go ahead and do it right now. Go right ahead.

5:40 Yeah! So ... so when AI makes stuff up, we call that hallucination. Right? So, a ... a hallucination to you and I is like, you know, we're kind of trippy and going off on something, but to an AI, it's kind of confidently stating false information as though it were a fact. And it does it in such a way that you kind of think, you know, maybe that's true. It's not lying. It's not really fair to say it's lying because there's no intent behind it, but it's just kind of pattern matching gone wrong.

6:09 So ... so why does it happen? Well, it all comes down to the fact that LLMs are prediction machines. They're not really knowledge databases. They're not looking up an answer from a database of truth. It's kind of coming up with its own. And they're trying to predict the most statistically likely next token in a sequence. So, if I come up with token A, B and C, what we're asking the large language model to do next is to come up with the next token. And the token is kind of more or less a word. So, it's just coming up with what is most plausible next, rather than having any sort of fact checking or truth detection. So, it's optimized for fluency and for cohesion. It's definitely not optimized for accuracy. It'll fill in gaps of knowledge with plausible-sounding text.

6:58 And there are certain things that can cause hallucinations a bit more than others. So, for example, if you ask it anything that is recent—so recent events, especially if they're not in its training data, so it's post the training cutoff—it's not going to say 'I don't know what happens then'.
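The "prediction machine" behavior described above can be illustrated with a toy model. This hand-built bigram table (all probabilities invented for the example) always emits the most probable next token; nothing in the loop consults a source of truth, so fiction that scores higher than fact comes out fluently and confidently.

```python
# Toy illustration of why LLMs hallucinate: generation just picks the
# statistically most likely next token, with no lookup against a source
# of truth. The tiny "model" below is a hand-built bigram table with
# made-up probabilities.

BIGRAMS = {
    "the": {"capital": 0.6, "president": 0.4},
    "capital": {"of": 1.0},
    "of": {"atlantis": 0.7, "france": 0.3},  # fiction outranks fact here
    "atlantis": {"is": 1.0},
    "is": {"poseidonia": 1.0},               # fluent, confident, and false
}

def next_token(token):
    # Return the most probable continuation -- no fact checking involved.
    options = BIGRAMS.get(token, {})
    return max(options, key=options.get) if options else None

def generate(start, max_len=6):
    out = [start]
    while len(out) < max_len:
        nxt = next_token(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# generate("the") -> "the capital of atlantis is poseidonia"
```

The output is grammatical and assertive, which is exactly the failure mode: the model is optimized for plausibility, not accuracy.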
7:13 It's probably just going to hallucinate an answer. Uh ... The other sort of situation is if you have very niche topics, where there's not a lot of training data to pull from, it might do that as well. And then also, if you kind of ask a leading question where you sort of give it the answer in your question, it's probably going to go along and, you know, maybe take your lead with that and continue with your thought.

7:37 So, yeah, hallucinations ... they have been reduced as models get a bit bigger and a bit smarter. But just the very nature of AI models means there is always going to be the opportunity for hallucinations, unless you're able to do some sort of fact checking.

7:53 Now there are some ... some mitigation strategies. One of the biggest ones I think that we're seeing now is RAG—retrieval-augmented generation—where we actually pull contextual information in from uh ... an external vector database into the model and kind of give it the right answers. So, things like that can help. But right now, we're definitely in a case where we still need human-in-the-loop validation to actually check the outputs of these things, that they're actually true.

8:21 Um ... I also need human-in-the-loop validation because I'm very good at saying the wrong thing confidently. So ... I think he hallucinated that entire answer. I think he did too. But ... but we're going to give him ... we're going to give him a checkbox ... a checkbox anyway.

8:33 Alright. Jeff, coming back to you, your turn, your turn. Here it comes. So, um, I, I love the internet, right? I think everybody does. Going to my favorite websites is a joy, right? So, uh ... there's ... I mean, there's no harm in that, right? There's just no harm in, like, visiting a website, right? I mean, there's not ... I don't have to worry about anything. Right?
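The RAG mitigation mentioned above can be sketched as a two-step pipeline: retrieve relevant context, then prepend it to the prompt. Real systems use embeddings and a vector database; the naive word-overlap ranking below is just an illustrative stand-in, and the documents are invented for the example.

```python
# Toy retrieval-augmented generation: ground the model by retrieving
# context before prompting. A real system would embed the question and
# query a vector database; word overlap here is a stand-in.

DOCS = [
    "The training cutoff of the model is June 2024.",
    "RAG stands for retrieval-augmented generation.",
    "Hallucinations are confident but false statements.",
]

def retrieve(question, k=1):
    # Rank documents by how many words they share with the question.
    q = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question):
    # Retrieved context gives the model facts to answer from, instead
    # of leaving it to predict plausible-sounding tokens on its own.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The model still predicts tokens, but the most plausible continuation is now anchored to retrieved text; human-in-the-loop review remains the backstop the speakers recommend.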
What could go wrong?

8:53 Yeah, yeah. What could possibly go wrong? Well, incorrect answer. Thanks for playing. No, it turns out that there are cases where just visiting a website can be dangerous. And that kind of ties into the previous question about the dark web, which is one of the reasons I advise most people 'Don't go there.' In fact, there are things called, a whole class of attacks, that we call zero-click attacks. So, all you did was just go to the website and view it. You didn't click on anything there. And bang! I didn't do anything wrong. You sys ... Yeah. Your system is infected. Sorry. You went into a ... a bad neighborhood and now you're going to wish you didn't.

9:29 Uh ... How do you think ... How could that happen? Because, again, a lot of people say that's not possible. I'll tell you, it is possible. Plug-ins. We've got plug-ins into browsers. We've got extensions and things like that. That stuff is not perfect. It could have bugs in it. And therefore, if one of those bugs leaks out and gets onto your system, then that could cause problems for you.

9:49 Also, active content like JavaScript can be an issue. So, we have this—a lot of people are not familiar with this—but it's usually enabled in most people's browsers to uh ... uh ... to be able to show this kind of active content, videos and things like that. Well, all of that's running. That's code that's running in your browser. You went to the internet, you didn't install anything. But just by visiting that site, you have effectively downloaded code. Now, in theory, it's sitting inside the browser and should not break out of that. However, there's a difference between theory and practice, and the ... that difference has to do with browser bugs. So, browsers are complex software.
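The active-content risk described here is also why sites can defend their own visitors by telling the browser to refuse scripts outright. A Content-Security-Policy response header does that; the sketch below is my illustration (not from the video) of serving a page whose policy blocks all active content, using Python's standard-library HTTP server.

```python
# Serve a page whose Content-Security-Policy forbids scripts, plug-ins,
# and frames, so the browser runs no active content from this site.
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = "default-src 'none'; style-src 'self'"  # no JS, objects, or frames

class NoActiveContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><p>No scripts can run here.</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Content-Security-Policy", CSP)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass
```

A policy like this limits what a malicious or compromised page can execute, though as noted next, bugs in the browser itself can only be addressed by keeping it patched.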
10:29 All software of any complexity has bugs, and some percentage of those bugs are going to be security related. And we've had a number of cases where browsers had bugs that would allow something to escape the sandbox of the browser and start running actual code on a person's system. So, until we get perfect software, which I'm not holding my breath on, there's always a certain amount of risk. So, you should be careful about which sites you visit, because sometimes just visiting can ... can rub off on you in ways that you didn't intend.

10:57 Wow! Well, I'll tell you, leaking bugs would be a great name for a band comprised completely of back-end developers. Alright, you get another check box. You ... you've got it.

11:08 Martin, we're doing well here. These are good answers. I'm learning a lot from you, and I love to learn. But here comes another one. Alright. So, is AI going to take my job away from me?

11:23 Well, I think it's probably safe to say AI is probably going to take our jobs away from us at some point. Right? At some point, they're going to take us off the channel, replace us. Studio is mine! No! Oh-oh! Replace us with digital avatars ... Oh! ... and we'll have to see if anybody who knows us notices and they ... uh ... can tell the difference. Probably be better. I'm sure it would.

11:40 So, look, I mean, that ... this is the big question: AI is taking a lot of tasks that we could do and uh ... being able to do them pretty well, completely autonomously. You know, we mentioned agents earlier. But I think one of the arguments is that AI will really transform jobs more than replace them. Like ATMs, they didn't kill banking jobs; they just shifted tellers into, kind of, a different role. They weren't actually just handing out the money. So, I think, if you look at, uh ...
what all of our jobs are, they're like a bundle of different tasks. And I think AI today and in the immediate future could probably automate some of those tasks, but maybe not the entire job. It depends, though, on the job.

12:24 Like, AI is good at certain things, so it's very good at repetitive tasks—we were just kind of doing the same thing over and over—because it can learn the patterns. AI is great for pattern recognition. It's also very good at data processing, and it does a pretty plausible job at a first draft. So, if you need to write a document, AI can spit out a document. And that large language model will do a reasonable job of a first draft. But, you know, if you were just to use that as the final document, everyone's going to know it's AI. So, right now, it's good at that stuff, but it's not necessarily able to replace people in those jobs.

13:05 What AI is not so good at right now, and this may change over time, but right now: creativity. Maybe not empathy. I would say not especially complex reasoning. Well, we're starting to see some reasoning improving with certain models, but complex reasoning, I think humans do still have the edge. But then other stuff like physical dexterity—like a large language model is not going to hold this lightboard pen, is it? So, we've got that at least. And just kind of adapting to novel situations.

13:33 But I ... you know, if ... if I had to say what are the kind of the signs that your job is ... is vulnerable to AI, I would say there's probably three. So, number one is if what you're doing is very rule-based or deterministic, that is something that a large language model can be trained on. It might be doing it quicker than you can at some point.
13:52 Secondly, if a lot of your work is like kind of doing lots of documentation or stuff that doesn't require a lot of judgment—sort of uh ... writing simple knowledge-base articles, something like that—you could see how even today's models could do a plausible job with that.

14:09 And then thirdly, I would say kind of low-context tasks. So, if you think of, uh ... asking it to create a stock photo of a person typing on a laptop, well, AI image generation can do a pretty good job of that today. Uh ... but not necessarily such a good job if it needed to take into consideration all of the context around that that a human would intuitively know—like the large language model isn't going to know unless you tell it. So, where there is, uh ... not a lot of context that needs to be considered, there maybe AI can help. So, if your day job kind of scores highly in all those three areas, well, maybe it's a good time to start upskilling.

14:48 I like your story about the uh ... ATM. Can you ... can you hand me some money? Well, unfortunately, it's all been automated, so no. Hopefully your digital twin can do a little better. We'll see about that.

14:56 Okay. We're coming over to you now. Um, now we ... we're going to be hopefully we're three for three here. And this one, this one I ... I ... I really like. No pressure. No, no, there's a lot of pressure here. So, you should feel the pressure. Okay. How do I get a career in cybersecurity?

15:14 Well, in your case, uh ... I could give a different answer, but I'll give some general answers. In fact, I did a video on this one because I get this question so many times. People will put in the comments, they'll say, cybersecurity is a really cool field. I'd like to get involved in that and how should I get started and this sort of thing.
15:30 So, I actually did a couple of videos and one of them was with one of my former students. I'm an adjunct professor, so I had him join me on one of those videos. I'd highly recommend everybody take a look at those and see. Some of the stuff is about cybersecurity careers, but a lot of the general advice, I think, will apply to anyone who's interested in IT. And links to those are down in the description below.

15:55 But that, that ... that's a ... that's a great answer. I'll ... I'll give it to you. That was not a great answer. All he did was just promote his other video. I ... I know, but ... but it's the follow-up that matters. Look, here's the thing that I ... I really need to know is, can ... can you just ... can you get me a job?

16:09 Uh ... yeah, yeah. So, this is one of the questions I get all the time. And the answer is 'no'. Please don't send me your resumes. I'm not involved in hiring. They wouldn't trust me with that. Come on, are you serious? No. Uh ... so, if you do want to look for jobs, there are places to go. ibm.com/jobs is where we post all of those.

16:28 But I thought you already had a job, Graeme. I ... I think I do have a job. I think you do. Oh, that's right. Uh, TechXchange, the conference in October in Orlando for technologists and developers. It's a learning conference. It's all coming back to me now. This is why I like learning.

16:46 You know what? You guys, you guys should go to TechXchange. Uh ... Maybe we should. There is an idea, yeah. Wait, wait, wait, wait, wait. Here's another idea. Let's bring the lightboard. The lightboard? Okay. Well, will that fit in your carry-on? Not sure it will, but, you know, it would allow us to answer one important question, which is what everybody asks us in the comments. How do we write backwards?
17:10 Oh, come on, that's not really that hard. Just watch. See? Not a big deal. I don't know why everybody wants to ask. I just wrote 'backwards'.

17:21 Alright, I'm ... I'm going to go do my job. But this is what I want you to do. If ... if ... if you want an opportunity to meet Jeff and meet Martin, join us at TechXchange in October. The link is also—you don't mind if I promote another link, do you? Please. The link is also in the description where you can learn about the conference and register to attend.

17:39 Uh ... Alright, I ... I got to go. This is amazing. I can't believe I actually got to be in here with you, but I'm ... I'm going to go do my job now, so just ... I'm going to go. Great to ... Alright. I got to ...

17:49 Guys, how do you ... how do you get out of here? Uh ... where ... I got in, but ... Yeah, good luck with that. We've been in here for years. We've never found a way out. So, if you find it, please tell us. Anyone know the exit?