
AI Threats: Impending Vulnerability Cataclysm

Key Points

  • AI is a powerful tool that can strengthen defenses if applied correctly, but it also inherits the good, bad, and ugly from its human users, creating new exploitation risks.
  • The panel warned that many defenders are lagging behind attackers in adopting AI, while enterprises rapidly deploy AI solutions without a “secure‑by‑design” approach, increasing vulnerability.
  • Gadi Evron of Knostic predicts an AI‑driven “vulnerabilities cataclysm” within six months, where AI‑accelerated exploitation could outpace existing cyber defenses.
  • Real‑world examples such as the resurgence of the Scattered Spider group, misconfiguration issues, and the HybridPetya malware illustrate how quickly AI‑enhanced attacks can emerge.
  • The discussion highlighted the need to move beyond outdated “dumb” security rules and adopt proactive, AI‑aware strategies to keep pace with evolving threats.


**Source:** [https://www.youtube.com/watch?v=7y4cYOdf0Y4](https://www.youtube.com/watch?v=7y4cYOdf0Y4)
**Duration:** 00:42:39

## Sections

- [00:00:00](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=0s) **AI Security Concerns on Security Intelligence** - In the opening of IBM’s Security Intelligence podcast, the host introduces the panel and probes each expert about their top security worry regarding AI, framing it as a powerful yet potentially exploitable tool.
- [00:03:17](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=197s) **AI's Limits in Exploit Chaining** - The speakers argue that AI can automate simple vulnerability discovery but still relies on human expertise for complex, multi‑step exploits, making scale the primary risk rather than immediate sophisticated attacks.
- [00:06:32](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=392s) **AI‑Assisted Coding Fuels Insecure Software** - The speakers argue that AI‑driven “vibe coding” speeds development at the cost of critical security checks, resulting in vulnerable applications such as the zero‑security Tea app example.
- [00:10:24](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=624s) **AI Development Requires Secure Human Oversight** - The speakers stress that building scalable AI‑driven enterprise apps demands solid security fundamentals, clear vendor responsibilities, and keeping humans in the loop rather than relying on AI to replace engineers.
- [00:13:56](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=836s) **LLM-Powered Vishing Tactics Revealed** - The speaker discusses the comeback of the Shiny Hunters group, highlighting their sophisticated use of large‑language‑model‑orchestrated vishing attacks with synthetic voices to target financial institutions, and notes that while unsurprising, these methods represent a new twist on classic social engineering.
- [00:17:00](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1020s) **AI-Powered Vishing and Employee Recruitment** - The discussion highlights AI‑generated vishing attacks, stresses two‑factor authentication as a simple defense, and notes that groups like Shiny Hunters also attempt to recruit insiders within targeted companies.
- [00:21:44](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1304s) **Detecting Insider Threats with AI** - The speakers discuss how hard it is to spot anomalous behavior by privileged insiders using traditional monitoring, note that existing security products fall short, and suggest that AI‑driven analytics may help uncover these hidden threats.
- [00:26:04](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1564s) **Misconfigurations Rise as Security Staff Is Cut** - The speakers explain how cost‑driven reductions in dedicated security teams push insecure misconfigurations onto general IT staff, who prioritize functionality over protection, leading to compounded security gaps.
- [00:29:21](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1761s) **Misconfigurations vs Vulnerabilities Debate** - The speakers argue that misconfigurations are a larger, often hidden security risk than known vulnerabilities, emphasizing the need for AI‑driven detection, early checks, and basic inventory controls to prevent shipping insecure systems.
- [00:33:36](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2016s) **Copycat NotPetya Exploits UEFI** - The speakers discuss a new NotPetya‑style malware leveraging UEFI boot vulnerabilities, noting its lack of novelty but stressing the gap in security tooling that rarely monitors firmware layers.
- [00:37:41](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2261s) **Ensuring Business Resilience and Backup** - The speaker advises businesses to define a minimum viable operation, keep immutable off‑site backups and cloud‑based storage, adopt a holistic risk‑management approach instead of layering more technology, and emphasize resilience as a core, not just an IT, concern.
- [00:41:10](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2470s) **Educating Users Against Phishing Scams** - A conversation emphasizing the need to teach both technical and non‑technical people to verify suspicious emails, encourage asking questions without embarrassment, and recognize that anyone can fall for phishing.

## Full Transcript
0:01 AI isn't magic. It's just another tool that if you 0:04 apply it correctly into your defenses, will make you that 0:07 much more stronger. All that and more on Security Intelligence.

0:16 Hello, and welcome to Security Intelligence, IBM's weekly cybersecurity podcast, 0:21 where we break down the most important stories in the 0:24 field with the help of our panel of expert practitioners. 0:28 I'm your host, Matt Kaczynski. Joining me today to 0:31 break it all down: Chris Thomas, X-Force global lead 0:34 of technical eminence and part of the Not the Situation 0:37 Room podcast; Suja Viswesan, VP of security products; and Troy 0:42 Betancourt, global partner and head of X-Force. Our stories 0:46 this week: the prophesied return of Scattered Spider and friends, 0:50 misconfigurations, and HybridPetya. And our panel shares dumb cybersecurity rules. 0:56 But first, the AI vulnerability apocalypse. Sounds pretty scary.

So 1:05 I want to start us with a quick round-the-horn 1:07 question today. What's the thing that worries you most 1:10 about AI from a security perspective? Suja, let's start with 1:14 you.

Well, AI is learning from us, right? The good, 1:17 bad and ugly. And then, when we start 1:21 using it, then the bad and ugly are also part 1:24 of it. And then you can use AI to exploit 1:27 that as well. So that's what worries me. Absolutely. 1:30 Chris, how about you?

Well, I'm not really worried about 1:33 AI, but if I was going to say, I would 1:35 say that defenders aren't really picking up AI quick enough 1:38 and using it in their defenses. Absolutely. And Troy, how 1:42 about you?

For me, it's the rapid deployment of AI 1:45 solutions across enterprises without a secure-by-design approach. You 1:49 know, we've already seen the dangers where just assistants like 1:51 copilots are exploited to, you know, extract data from 1:55 organizations. Now imagine if that was fully autonomous agentic AI 1:58 with no human oversight at all. Massive potential for risk. 2:02 Absolutely.
And the reason I posed this question is because 2:05 our very first story of this week has to do 2:08 with a somewhat dire prediction from Gadi Evron, CEO of 2:12 AI security company Knostic. Now, Gadi predicts that we are, 2:16 and this is a direct quote, six months away from 2:18 the upcoming vulnerabilities cataclysm: AI could make exploitation so fast, 2:24 it breaks cyber defenses down. Now, all three of you 2:27 kind of touched on exactly what Gadi's getting at here, 2:30 right? Like Chris said, defenders are not really picking it 2:33 up as fast as the attackers might be. Like Troy 2:35 said, we're rolling it out very quickly, maybe before we 2:38 even have security in place. And like Suja said, these 2:41 things are learning from us, and they can evolve faster 2:43 than our security practices can evolve. So I wanted to 2:45 start by asking whether you agree with the premise: are 2:49 we rushing headlong into AI disaster? And I want to 2:53 start with you, Chris, because you brought this to my 2:55 attention when it was first posted. I want to get 2:57 your reactions to this prediction.

Well, I mean, attackers are 3:01 using AI to automate vulnerability discovery. And they've, they're already 3:04 doing that. They've been doing that for a while. It's 3:07 not a new thing now. The fear here, though, is 3:10 that they're going to get so good at it and 3:11 it's going to be so fast it's going to overwhelm 3:13 us. But I really don't think that's going to happen. 3:17 I mean, AI isn't magic. AI really still struggles with 3:20 the nuances of, like, AI internals, memory layouts, bypassing mitigations. 3:26 You still need human expertise at the end of the 3:28 day to really exploit something. Right now, the low end 3:33 of the spectrum, the easy vulnerabilities, the easy exploits, AI 3:36 is really good at that. The more expansive things, the 3:40 things that require chaining of multiple exploits together, that becomes 3:43 more and more difficult for AI to do.
And so 3:46 you still, not only is it AI, but the people 3:49 still need to know how to use the AI to 3:51 exploit these vulnerabilities. So the real risk here is scale. 4:04 have protections and mitigations in place, if properly deployed, to 4:09 sort of solve that. Absolutely.

Troy, any thoughts on either 4:12 this prediction or Chris's take?

I think there is some 4:14 FUD there. Six months I think is wildly unreasonable. That 4:18 said, I think over time AI attack tools will 4:22 get better. You know, we've evaluated a bunch of the 4:24 commercial tools that are available right now for use potentially 4:28 in our own engagements and have really found, to Chris's 4:32 earlier point, they're really good at the easy stuff, the 4:34 stuff that lets you scale, that takes a lot of 4:37 human effort: scanning for vulnerabilities and picking which ones you 4:40 want to exploit. But actually chaining them together like you 4:43 do in red teaming, what the threat actors are really 4:45 doing, they're just not there yet. And I don't think 4:48 they're really close, to be quite frank. So I think 4:51 it's a little mix of, maybe his timeframe's a little 4:54 aggressive; I think he's right if we roll out a 4:57 couple years out.

Gotcha. And Suja, how about you? What 5:00 are you thinking about this prediction?

Yes. AI is really 5:03 good at figuring out what are the easy exploits. So 5:07 similarly, are we adopting AI to figure these easy exploits 5:11 ahead of time? Right. As a defense mechanism so that 5:14 these are not out there? The most complicated one, it's 5:17 work in progress. But how do we adapt and then 5:19 get that? Yes, and same thing with Troy: I agree 5:22 the timeline seems too aggressive for six months, but we 5:25 need to be playing ahead of the game. There's no 5:27 two ways about it.

As AI kind of stands, it's 5:30 really good at the easy stuff; some of that more 5:31 complicated stuff, not quite there yet. I'm wondering, though, if 5:35 you think that it will get there at some point.
5:38 And I want to start with you, Troy, because you 5:40 were saying maybe a year, two years out. Do you 5:43 think it's actually going to reach that level in that 5:45 timeframe?

Prognostication is really hard to really stick to. I 5:49 think we've all been wrong when we've made forecasts around 5:52 AI. I think they'll get there. But keep in mind, 5:55 that's a year or two years of defenders improving their 5:57 own defenses, whether that's through security products building in AI 6:01 or leveraging just better deployments of security technology across their 6:05 enterprise.

Absolutely. Chris, I saw you start to maybe make 6:09 a little reaction. Do you have thoughts on that one?

6:11 Well, yeah. I mean, everybody talks about how the attackers 6:14 are going to advance and get better by using AI, 6:16 and they forget the defenders are also using AI, and 6:19 they're going to advance right along with the attackers. So 6:22 this, you know, vulnerability AI apocalypse thing, I'm not sure 6:27 I would use such strong language. But the attackers are 6:30 going to get better, but so are the defenders.

Gotcha. 6:32 It's like Suja said, right? They learn the good, the 6:34 bad, and the ugly. So it's really picking up all 6:36 that stuff. And it's going to help both the defenders 6:38 and the attackers. It depends on who is using it. 6:41 That's something that comes up basically every time we 6:43 record this show. Right. All these tools depend on who's 6:46 using it and what are they using it for.

Now, 6:48 I want to dig a little bit, though, into some 6:50 of the specific kind of vulnerability trends that Gadi calls 6:54 out in his post, because I think they're worth 6:55 discussing. And one of the first ones is this, and 6:57 this is a quote again from the post: “Vibe coding 7:00 boosts velocity but removes critical checks, producing insecure code at 7:06 scale.” And Troy, you had mentioned, you know, some of 7:10 the copilot stuff and the coding assistant stuff we've seen 7:12 so far.
So I wanted to get your take: do 7:14 you agree that vibe coding can kind of introduce some 7:17 of these vulnerabilities by removing critical checks?

Oh, absolutely. We've 7:21 already seen examples of that. The Tea app, if everyone recalls 7:25 that, that was news a few months back. It was 7:28 supposedly a vibe-coded app. I can't say it had 7:32 poor security. It had zero security. And that was a 7:35 really great example, and I think we'll see more of 7:37 that. My concern, as I mentioned, was development without security 7:42 being part of it, like enterprise development. But vibe 7:44 coding is even worse, right? That is the true wild 7:47 west of no security in development. So, yeah, it is 7:50 definitely concerning.

Suja, did you have any thoughts on that? 7:53 Because I know in the past we've talked quite a 7:54 bit about vibe hacking, vibe coding, vibe security, as you 7:57 called it. So I want to get your takes on 7:59 the risks of vibe coding.

So, somebody who doesn't know 8:02 anything about software can build an app today. I think 8:05 that's what happened with what Troy was talking about, because 8:09 you cannot fix something that you don't know that you 8:11 are causing. So definitely it does introduce risk. That is why, 8:15 when people say, hey, this is a copilot, or it 8:19 works alongside a senior engineer who knows what they're doing: 8:22 if you have a tool and you don't know what 8:24 you're doing, of course you can poke your eye with 8:26 it. And that's what happened there. So we need to 8:28 be educating everybody to say, hey, this is a tool, 8:32 and how are you going to use it? What are 8:33 the guardrails? So that is why these tools need to 8:37 evolve, so that the security becomes part of it rather 8:41 than after the fact. I think this is something that 8:44 security needs to be baked in, not bolted on later.

8:47 I like that you point out, you know, that the 8:48 tool can poke your own eye out if you don't 8:50 know what you're doing with it. Right?
Because again, it's 8:52 not just who's using it, but how you're using it. 8:54 Right? So even if you're using it for ostensibly benign 8:58 reasons, if you're not doing it right, you know, you'll 9:00 shoot your eye out.

Chris, do you have any takes 9:03 on vibe coding and where it fits into the cybersecurity 9:05 landscape?

We're looking at the LLMs and the models that 9:08 we're using today as they stand today, right? These things 9:11 are constantly evolving, constantly getting better, constantly adding new data. 9:16 And so as the need for secure coding becomes more 9:21 apparent, those features will be built into the newer models, 9:25 right? As customers demand, hey, if I'm going to do 9:28 vibe coding, I need it to be secure, they're going 9:30 to demand that the code that's output is secure. So 9:33 yeah, we have a problem today. I think that will 9:36 become less of a problem tomorrow.

Got it. And so, 9:39 yeah, again, we all kind of agree here, it 9:42 sounds like anyway, that the six-month timeframe is a little 9:45 aggressive. There are certainly some issues, but we gotta keep 9:48 up with what's happening here. And so I wanna kind 9:51 of end this segment on a kind of forward- 9:53 looking take. Right? What kinds of advice would you have 9:56 for organizations right now who are maybe a little bit 9:59 worried about this kind of talk about an AI vulnerability 10:01 cataclysm? How can they start to position themselves for the 10:05 best security in the future with this stuff? Let's start 10:08 with Suja on that.

The bigger thing is, for doing 10:11 a PoC, it's great. So start with whatever you have 10:15 and then learn from it. As Troy mentioned earlier, when 10:18 you are deploying it for production, make sure the right 10:20 set of guardrails are set. Don't jump into it thinking... 10:24 I believe that, especially for development purposes, task-based, I 10:29 need this task completed, AI is really, really good.
If 10:32 you want to build a scalable enterprise app, then you 10:35 need to be thinking about all the security. Are the 10:37 vendors providing it, or are they expecting you to provide it? 10:41 So make sure that you understand it and then go 10:44 from there, instead of jumping on the bandwagon: oh my 10:46 God, I can replace all my engineers with vibe coding. 10:49 That's not going to happen.

Yeah, no, absolutely not. You 10:52 got to keep that human in the loop. Right, Troy, 10:54 what are your thoughts there?

Yeah, I'd like to expand 11:03 to cloud years ago and then found out, oh, the 11:05 cloud provider wasn't responsible for security like I thought, were 11:09 surprised. Same thing with AI. But I think, stepping back 11:12 a little, it's really security fundamentals. AI adds a little 11:15 bit to it, but it's role-based access control. It's 11:19 ensuring that your APIs, or in this case maybe A2A 11:22 or MCP communications, are secured. It's making sure that the 11:26 front-end app you're developing that's going to leverage the 11:28 AI uses best coding practices, secure by design. So really 11:32 it's about doing all of that, understanding your assets, your 11:34 data flows, ensuring they're protected. Unfortunately, history has shown we 11:38 haven't been that good about that across the enterprise, hygiene 11:41 from a security perspective. And AI is now just rapidly 11:44 exposing that.

I like that you brought up that kind 11:47 of analogy to the cloud apps, because you're foreshadowing something 11:50 we're going to talk about a little bit later on, 11:52 which is the persistent issue with misconfigurations in cloud apps. 11:56 But before we get to that, Chris, what are your 11:58 thoughts on how organizations can best position themselves to kind 12:02 of deal with this cataclysm?

Well, you know, like Troy 12:05 said, focus on the fundamentals, right? Those have not changed 12:09 just because we have a brand new shiny thing over 12:11 here.
The fundamental security practices that we've been using for 12:14 the last 20, 30 years still apply. AI isn't magic. 12:18 It's just another tool that if you apply it correctly 12:22 into your defenses, will make you that much more stronger.

12:25 Yeah. One of the things I really like about security 12:27 is that you do have this set of bedrock principles 12:29 that you can kind of adapt and apply to almost, 12:31 you know, every situation. And rather than getting distracted by 12:35 that shiny new object, you just keep that stuff in 12:37 mind.

All right, let's move on to our next story, 12:41 which is involving a cast of characters that we have 12:44 seen time and time again on this show: Scattered Spider, 12:48 Shiny Hunters. They are back. Now, anybody who listened to 12:55 our last episode would know that we discussed the supposed 12:58 retirement announcement of Shiny Lapsus Hunters, and our panel unanimously 13:03 declared that it was absolutely bunk. And it turns out 13:06 that they were right. And they were so right, in 13:09 fact, that it was the very same day our episode 13:11 went live that they were back in business. And you 13:14 know what, if anything annoys me, it's we couldn't even 13:16 have a week. We couldn't even have a week without 13:17 them. It takes all the fun out of prognosticating when 13:20 they just show up that day. But anyway, my hurt 13:23 feelings aside, they're back and they're doing some new stuff. 13:26 And so before we get into some of the new 13:29 things I want to discuss, I would also just... I 13:30 would like to get some reactions to the return of 13:34 Scattered Spider and Shiny Hunters. Suja, how are you feeling 13:38 about this?

Once you know that you can get away 13:41 with things, you're going to keep on trying. I think 13:44 that's... that's what it is.

Makes perfect sense. Chris, what 13:47 about you?

I mean, once a criminal, always a criminal, 13:49 right? I mean, that's where the money is. That's how 13:51 they make their money.
They're not going to abandon that 13:53 just because, for whatever, right? Unless the money goes away. 13:57 Unfortunately, the money has not dried up. Troy, what about 13:59 you?

Well, hopefully this doesn't violate any brand permission, but 14:02 shocked Pikachu face here. Did anybody expect that they were 14:05 really gonna retire?

I think that the shocked Pikachu meme 14:09 is an especially apt one here, given that Shiny Hunters 14:12 does take its name from a Pokemon reference. So we 14:14 got some synergy going on here. All right? So, yeah, 14:17 obviously nobody is surprised. Everyone's like, of course they're back. 14:20 We all saw this coming a mile away. But what's 14:23 interesting to me is that, again, they're doing some new 14:25 stuff in this round, at least stuff we haven't seen 14:27 them doing before. They're targeting more financial institutions, which, 14:32 that's obvious. And they're doing it, though, with a lot 14:34 of LLM-orchestrated vishing. By that, I mean they've got 14:38 operators sitting with LLMs and using them to kind of 14:42 play out some of these vishing calls that they're making. 14:45 And they'll even use generated synthesized voices to kind of 14:49 operate on the calls to pose as certain people. It 14:51 sounds like very sophisticated stuff. And what they often do 14:55 is they begin by calling the target's IT help desk 14:59 and claiming to be an employee who is locked out 15:01 of their account and asking to reset their password. And 15:04 the reason I mention this specifically is because I had 15:06 a conversation a long time ago with Stephanie Carruthers of 15:09 IBM X-Force, and she had mentioned that when she 15:12 does social engineering engagements, the one trick that works almost 15:15 every single... actually, not even almost. She said every single 15:17 time they do it is the IT help desk password 15:20 reset ploy. It works every time they do it.
And 15:22 so it's a little scary to me that they're using 15:24 this ploy and they've got LLMs doing it. So I 15:27 wanted to throw to the panel: this AI-powered 15:30 vishing, have you seen this kind of stuff before? Let's 15:33 start with you, Troy.

Yeah, we've seen a little bit. 15:35 You see it in open source intelligence about some of 15:38 these groups. It's not really a surprise, especially the Scattered 15:42 Spider group. They're not known for really being technically advanced. 15:46 Right. They've always focused more on the social engineering side, 15:49 which is the easiest one where you can operationalize that 15:52 with support from LLMs and AI. So to me it 15:55 makes sense. Right. They are finding value in using AI 15:58 to make them more effective in what they do.

Suja, 16:00 what about you? Any thoughts on this LLM vishing? Have 16:02 you seen stuff like this before?

It becomes very easy to 16:05 personalize and then go from there. Because previously you had 16:08 to do a lot of work to get all this information. 16:12 Now with LLMs, it becomes much easier to make 16:16 it very personalized to people. So I do see why 16:19 they are doing it, because it's like coming from me 16:24 to you. You're going to the IT department about an 16:26 employee. This is not somebody coming from outside. So it's 16:29 very, very believable. So I totally see this happening.

Yeah, 16:34 that's a good point. Right. It makes spear phishing super 16:36 easy, because you can have the LLM gather that information 16:39 too. And so many people post things online openly on 16:43 their, you know, social media accounts. That makes the attackers' 16:46 jobs really easy. I remember even in the pre-LLM 16:49 days there was some stat about how, like, attackers could 16:51 spend 45 minutes on Google and get all the information 16:53 they need for a spear phishing attack. Imagine how much 16:56 shorter it is now that we've got LLMs doing this 16:58 stuff. Chris, what about you? Any thoughts on this tactic?
17:01 It's not new, right? We've seen vishing before, audio and 17:04 video, so that's, that's not a new thing. I think 17:09 what it does do is highlight the importance of something 17:11 like two-factor authentication, right? Or confirmation of identity through 17:16 a second channel. These are standard basic practices that companies 17:20 can use to defeat the sort of vishing and phishing 17:22 attacks that we see today.

Yeah, talk about a kind 17:24 of simple solution to a high-tech problem, right? You 17:26 got fake voices, AIs, all this kind of stuff. Throw 17:29 a second factor on there, folks. Make sure you got 17:31 that stuff set up. Set up a passkey, do something. 17:33 Right. The other thing, though, that I found interesting was 17:37 that they're not just doing this LLM-orchestrated vishing. They're 17:39 also actively trying to recruit employees of the organizations they're 17:44 targeting to their side. Right. And we've seen this 17:47 specifically with Shiny Hunters. I'm not sure if Scattered Spider 17:50 is doing it, but, you know, the overlap, it's hard 17:52 to tell where one begins, one ends. But anyway, Shiny 17:54 Hunters has been actively trying to recruit employees of the 17:57 organizations they're targeting. Have you seen this kind of thing 18:00 before, and does it work? Troy, I'm going to throw 18:03 to you first.

Yeah, we've actually seen a few articles 18:06 in the news, and then for anybody that's following the 18:09 Com, which is a wider group of cybercriminals, generally skew 18:15 younger. Scattered Spider, Shiny Hunters apparently came out of there, 18:19 as well as Lapsus$. You know, they've been doing that 18:22 for a while. You know, they were doing that for 18:24 SIM swapping. They would find mobile provider employees and pay 18:27 them money to do it, and then they shifted away 18:30 from that. It's not a surprise. The insider threat 18:33 aspect is really the most interesting thing to me about 18:36 this.
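Earlier in this exchange, Chris points to two-factor authentication as a standard defense against vishing-driven account takeover. That mechanism is concrete enough to sketch: below is a minimal, illustrative TOTP (RFC 6238) verifier using only the Python standard library. It is a sketch of how time-based one-time codes work, not production authentication code; a real deployment would use a vetted authentication library.

```python
# Minimal sketch of the TOTP (RFC 6238) second factor Chris describes.
# Illustrative only: real systems should rely on a vetted auth library.
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive the one-time code for a base32 shared secret at a given time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(timestamp) // step)   # 30-second window index
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # RFC 4226 dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, now: int, drift: int = 1) -> bool:
    """Compare in constant time, tolerating +/- `drift` windows of clock skew."""
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-drift, drift + 1)
    )
```

With the RFC 6238 test secret `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ` (the base32 encoding of the ASCII string `12345678901234567890`), `totp(secret, 59)` yields `287082`, matching the published SHA-1 test vector.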
You know, an insider is one of the most 18:38 difficult threats to really protect against, because they've already got 18:41 permissions, they've got roles, and as long as they're staying 18:43 within the behavior you'd expect, it's very hard to identify 18:46 that they're doing something like that. Now then, couple that 18:49 with the unstable state of world affairs, employment and cost 18:52 of living challenges in many nations, and now a really 18:55 continued decline in the strength of the employer-employee relationship 18:58 or social contract over the last few years, and I 19:00 really think it's a ripe target for exploitation.

That's a 19:03 really good point. Right. There are a lot of social 19:05 forces that make right now, you know, it might be 19:07 a pretty good time to go recruit some of those 19:09 disgruntled folks. Right. Chris, your thoughts on this kind of 19:13 tactic? Have you seen it before? Does it work?

Yeah, 19:15 again, I mean, this has been around for years, especially, 19:17 as Troy mentioned, with mobile phone companies, insiders getting paid. 19:21 And again, the social aspects, economics out there mean that 19:26 you have very low-paid employees with very high levels 19:28 of access who are very susceptible to a little bit 19:32 of extra cash. And the criminal element has a lot 19:36 of extra cash so that they can make even more 19:38 money. So yeah, I'm actually a little surprised that the 19:41 insider threat isn't a bigger issue today than it actually 19:45 is.

Be careful what you wish for there. Suja, what 19:48 are your thoughts?

There is still some basic human decency 19:51 that people still have. That's what is preventing us from 19:54 doing it. But the challenge is the socioeconomic pressure... 20:03 nailed it really well. In the last five years we 20:05 have seen the loyalty, the corporate employee-employer loyalty, is 20:09 also eroding away, and all this becomes an easy bait 20:13 for somebody who is on the fence.
It's easier to 20:15 jump one way or the other. So I'm 20:17 not surprised. But it's a reality that we need to 20:22 work on. One of the things that we have been 20:24 thinking about: security as a person having access, it's 20:28 about workflow. In order for you to do a workflow, 20:31 do you need access? And once you are done, just 20:33 enough access, and then it goes away. I think we 20:36 need to be rethinking how, at least we are thinking 20:38 about, how do we rethink security, even identity and access 20:42 management?

Absolutely. And again, basic human decency, I like that 20:45 you brought that up. That so often the human factor 20:48 is our first line of defense. And that factor can 20:50 just be, you know, what I have. I don't want 20:53 to sell out my employer like this. But something that 20:55 you brought up, Troy, and I want to go back 20:57 to this, is that it can be extremely hard to 20:59 tell when you have an insider threat. Right. Especially if 21:03 they're just using the permissions they already have, but for, 21:06 like, illegitimate purposes. And so I was wondering if you 21:09 have thoughts on what do you look out for as 21:12 an organization. Right. How do you catch these kinds of 21:14 insider threats? Is it possible?

I think it is. There's 21:18 been a lot of work that's been taken from sort 21:21 of the counterintelligence space and then brought over into security 21:24 products. There are some of the standards: look for activity 21:28 outside of normal working hours. Look for potential remote access 21:33 from not-approved places in the event they shared their 21:37 credentials. Let's say data flows that don't seem to be 21:42 accurate with what you'd expect the employee to be doing. 21:45 But I don't think anyone's really cracked it, because it 21:47 is so hard. It's so independent by the individual, their 21:50 role, to Suja's point, what their necessary access is.
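The detection heuristics Troy lists (off-hours activity, access from unapproved locations, and data flows out of line with a user's baseline) can be sketched as simple per-user checks. The event fields, thresholds, and three-sigma rule below are invented for illustration; real user-behavior analytics products model far more context than this.

```python
# Hedged sketch of the insider-threat signals discussed above: off-hours
# activity, logins from unapproved locations, and data transfers far above
# a user's baseline. All field names and thresholds are hypothetical.
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class Baseline:
    work_hours: range = range(8, 18)                  # expected local login hours
    approved_countries: set = field(default_factory=lambda: {"US"})
    daily_mb: list = field(default_factory=list)      # history of MB moved per day

def anomalies(event: dict, base: Baseline) -> list[str]:
    """Return the names of any baseline checks this event trips."""
    flags = []
    if event["hour"] not in base.work_hours:
        flags.append("off-hours")
    if event["country"] not in base.approved_countries:
        flags.append("unapproved-location")
    # Flag transfers more than 3 standard deviations above the user's mean,
    # once there is enough history to estimate a baseline.
    if len(base.daily_mb) >= 5:
        mu, sigma = mean(base.daily_mb), pstdev(base.daily_mb)
        if event["mb_moved"] > mu + 3 * max(sigma, 1.0):
            flags.append("unusual-data-flow")
    return flags
```

The hard cases the panel describes, such as an admin whose legitimate behavior already spans the whole environment, are exactly the ones these simple baselines cannot separate from abuse.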
You 21:54 know, if it's an admin especially, it gets so difficult 22:03 to see on a system that they'd use in 22:04 their daily basis. So how do you really find what 22:07 one is anomalous there? It's very challenging. There's a whole 22:11 product space around this, and quite frankly, it hasn't lived 22:14 up to expectations, I think, for many security buyers.

No, 22:17 that was fantastic, and I absolutely agree with that, Troy. 22:19 And I wanted to throw to Chris if you have 22:22 any thoughts there as well, anything to add about how 22:25 you can kind of catch some of these insider threats 22:27 when they blend in so well.

Yeah, like Troy said, 22:29 it's really about continuously monitoring your network and trying to 22:32 identify people based on traffic patterns, which is difficult. Right. 22:36 It really requires some specialized software. AI can actually help 22:39 here, depending on what packages you're using. But it's difficult, 22:44 especially with your more senior employees with higher access who, 22:47 you know, generally need to get into all places 22:50 of the business. It becomes more and more difficult to 22:53 sort of find them and isolate them based on 22:56 their traffic alone.

Absolutely. And Suja, your thoughts?

I think 23:05 because that's very much so what you can say what 23:08 the anomalies are. So we talked about the difficult things 23:11 about AI; because today with AI we can get to 23:14 those answers much faster: which is false positive, which is 23:17 happening, which might be a threat, and then try to 23:20 catch it. But you cannot definitely catch it all, because 23:24 it's constantly evolving. Especially, like what Troy was talking about, 23:27 if you're an admin, you do have blanket access to 23:31 everything at that point. If you are that person, how 23:33 do we find out? That's a tricky one. The edge 23:36 cases are the tricky ones.

Thank you for that. And 23:38 let's move on to our next story: misconfigurations.
To put it meanly, I call it "when getting hacked is your own dang fault." In a new blog post, researchers from the Wiz cloud security platform discuss three instances of application misconfigurations allowing threat actors to do real damage. And just this morning I was reading a new report about a massive malspam attack that used 13,000 misconfigured routers to create a botnet to send a bunch of phishing emails. And all they did was exploit default security settings that were never changed. Right. People just didn't change the default password, so you could get right in there and do it. So this is a real, persistent problem that keeps on happening. And some of the common misconfigurations that Wiz called out specifically were public exposure, right, databases that shouldn't be public exposed to the public-facing Internet; not changing those default credentials, like we just talked about; and giving people excessive permissions, which I think ties very much into exactly what we were talking about with that insider threat angle as well. Right. The difficulty of getting those permissions right. So Troy, I want to start with you first, because all the way back in that first segment you were bringing up this idea of app misconfigurations. Do you see this kind of thing a lot, app misconfigurations causing security problems for organizations? I've been doing this a while, going back to an investigation into the largest hack of U.S. government systems to that date. And it was blank admin passwords on systems exposed to the Internet. Can't say much has changed, except Microsoft doesn't allow blank admin passwords on their OS anymore, which is nice.
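The three misconfiguration classes called out above (public exposure, unchanged default credentials, excessive permissions) lend themselves to simple automated checks. A minimal sketch follows; the config dictionary shape and credential list are invented for illustration, and a real scanner like the ones discussed would query live cloud APIs rather than a dict.

```python
# Common factory credential pairs; a real scanner would carry a far larger list.
DEFAULT_CREDS = {("admin", "admin"), ("admin", ""), ("root", "root"), ("admin", "password")}

def audit(config: dict) -> list[str]:
    """Flag the three misconfiguration classes discussed in the segment."""
    findings = []
    if (config.get("username"), config.get("password")) in DEFAULT_CREDS:
        findings.append("default or blank credentials still in place")
    if config.get("public") and config.get("kind") == "database":
        findings.append("database exposed to the public internet")
    if "*" in config.get("permissions", []):
        findings.append("wildcard (excessive) permissions granted")
    return findings

# A deliberately risky example that trips all three checks.
risky = {"kind": "database", "username": "admin", "password": "admin",
         "public": True, "permissions": ["*"]}
for finding in audit(risky):
    print("FINDING:", finding)
```

The point of even a toy check like this is that it runs before deployment, which is exactly the "checks and balances" the panel asks for later in the segment.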
But this is what keeps us in business from an incident response perspective: if you can socially engineer passwords, or buy them on the Internet, or somebody misconfigures something, you don't have to break in, you don't have to be a technical wizard. You just have to look for the open doors and walk right through them. So I think we're going to continue to see this. Hopefully, as AI does develop and get better, it'll do better at identifying this, because it's a scale problem. Enterprises are huge, they're complex, there are lots of moving parts that change, and humans just can't grapple with that. So I'm hopeful that AI, as it progresses, will become a really good defensive mechanism in this space. Yeah, I think, you know, not allowing those blank admin passwords, that's one of those dumb rules, right? One of those things you've got to stop doing because someone was doing it. But yeah. So, Chris, your thoughts on this? Have you seen misconfigurations causing problems for organizations before? Yeah, I mean, misconfigurations are as old as security, right? You click the wrong button, you set the wrong thing, and you open the door. I think what's happening, though, is that as companies try to save money, they're cutting some of their security personnel and pushing the security tasks off onto the regular IT people. And that's fine, except the IT people don't really know what buttons to click to make things secure; they just want to click the right buttons to make things work. And making things work and making things secure are two different things, which is where you come up with a lot of misconfigurations. And that's not even getting into the complex stuff, where you have a chain of misconfigurations that work in concert to open up a hole.
This is where, you know, it's just one thing, because the IT guy wanted to make it work and wasn't able to actually make it both secure and working. Absolutely. You know, often there's that tension, right, between just making the thing work and making sure it works securely. And we don't always do that second part. Suja, your thoughts? It's a tough one, because when you are developing and you hit a problem, the first thing is: let me turn off security and see if it works, okay? That's how we debug. And then when you deploy it, did you make sure that you turned it back on again? It's a very simple human error, sometimes very unintended, and then it happens. So it's extremely important that the tools are available to do proper checks and balances. We are cutting costs everywhere. So let's reduce the humans, let's have the tools do it. As mentioned earlier, the tools are only as good as the good, bad, and ugly of humans. They can miss things too, just like us. So who's policing the police to make sure things are working fine? But then, it keeps the security professionals in business, like Troy said. I think that's what keeps the business going. You know, silver lining, right? There's a job for us to do. But you know, Troy, something that you had said was that part of the issue with these misconfigurations is scale, right? There can be so many of them in a massive enterprise, with all these apps configured and set up. And so I was wondering, is it harder to find some of these misconfigurations, maybe, than it is to find your typical vulnerability, right, when there's some kind of bug or flaw in the code? Is it harder to surface these things where it's not a bug in the code, it's just someone clicked the wrong button?
Yeah, I think it is, because, you know... actually, I don't know if it's harder. In fact, I would think it might be easier from a threat actor perspective. Trying to find insecure code requires a lot more skill than trying to find an open door. And I think the scale is much larger. There are only so many applications being deployed, whereas there are so many different access points across hyperscalers. So I think misconfigurations are probably a greater threat than insecure software, quite frankly. And I think that goes to Space Rogue's earlier point. If IT's responsibility is to make things work, and they're the ones deploying these things, whether it's applications or hyperscalers, et cetera, you're going to have less security-attuned folks making these decisions, which is likely to cause more misconfigurations. Absolutely. Misconfigurations can be a bigger problem than vulnerabilities. Suja, do you agree with that? Do you feel like that's true? It is definitely true, because with vulnerabilities, at least you know: these are the vulnerabilities, you see a report, and you make a conscious decision to ship something with a vulnerability because you know that it cannot be accessed, or whatever reason you might have. But with a misconfiguration, you don't even know. How do you fix things that you don't know about? I do see that it's a bigger problem. And these misconfigurations are easily detectable using AI today. But on the other hand, that is what a lot of companies are working on. I think Wiz is talking about it, IBM Concert is talking about it: how do we make sure that there are checks and balances to figure out these misconfigurations earlier, so we don't really ship them? Absolutely. And Chris, what about you?
How do you feel about the misconfiguration-versus-vulnerability question? I'm going to say the same thing I said with the insider threat: I'm surprised it's not a bigger issue. It really is a big deal, and I think there probably are more misconfigurations out there than we realize. They're just not being exploited yet. Absolutely. And so how do we start to be more vigilant about misconfigurations? Is there a way to do that? Chris, I'll start with you again. Do you have any thoughts on how organizations can maybe have fewer of these things? Well, we go back to fundamentals, right? And we look at inventory. People laugh when I say this sometimes, because inventory is what we did back in 1998, making sure we knew what we had. But if you do your inventory properly and you know what you have, you know how it's configured; that's part of the inventory. Right. Back to the fundamentals, checking all your stuff. And it can be difficult when you're talking about a global enterprise with millions of endpoints and thousands of routers and whatnot, but you've got to do it. And to Suja's point, this is where AI can be very helpful: continuously monitoring your endpoints and your network devices, checking that configuration, and making sure the configuration file matches what it's supposed to be. Absolutely. Suja, anything to add there? I think, see, one of the things we were talking about is Selenium Grid: hey, do not put it up there in production. Like, we are in the tech space; can we build some things so that if you have this, you cannot deploy? How do we make sure that we protect ourselves from some easy, stupid mistakes? I think that's where the Microsoft one comes in: okay, you cannot have blank passwords anymore. So then people came up with admin/admin.
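Chris's point about inventory, continuously checking that each configuration file still matches what it's supposed to be, boils down to comparing current state against a known-good baseline. A minimal sketch using file hashes follows; the file name, its contents, and the inventory format are invented for illustration, and fleet tooling would do this through a management agent rather than local paths.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's bytes so any edit, however small, is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_drift(baseline: dict[str, str], root: Path) -> list[str]:
    """Return config files whose hash no longer matches the recorded baseline."""
    return [name for name, expected in baseline.items()
            if sha256_of(root / name) != expected]

# Demo: record a baseline, then simulate someone "just making it work".
root = Path(tempfile.mkdtemp())
(root / "sshd_config").write_text("PermitRootLogin no\n")
baseline = {"sshd_config": sha256_of(root / "sshd_config")}
(root / "sshd_config").write_text("PermitRootLogin yes\n")  # the drift
print(detect_drift(baseline, root))  # → ['sshd_config']
```

A hash check only says "something changed"; deciding whether the change is a legitimate update or a weakened setting still needs the policy layer the panel describes.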
That's a different problem that we need to solve now. But at least we are making progress. Not perfection, but we need to keep making progress with these things. Absolutely. One small step at a time. Troy, any thoughts on your end? I think Suja and Chris covered it pretty well. This is a space where AI really has a chance of making a near-term difference. There are many ways to misconfigure stuff, but I think that's somewhat limited, whereas the ways you could create insecure apps through development are almost limitless. And AI does well with these large-scale problems. And at the risk of doing a product pitch, IBM Consulting has actually built out tools for this already that they've brought to market, and we know competitors are doing the same. Absolutely. The limitless potential to make bad apps. I like ending on that. Let's move on to our next story. HybridPetya adds a new twist to an old ransomware. Now, I assume that you folks remember that name, Petya. It was a ransomware that made some waves in the mid-2010s, and especially NotPetya, which came out in 2017 and was responsible for one of the most destructive cyberattacks in history, causing over $10 billion in damages, so 10 billion, that's a B, against a host of organizations throughout Europe. Now, recently, new ransomware samples discovered on VirusTotal appear to be a copycat of these infamous strains, but with some new tricks. So researchers have dubbed them HybridPetya. Now, we don't know if this is actually being used in the wild yet. No one's actually seen that. And it could just be proof-of-concept code, somebody toying around, messing with stuff. But still, to me, it's a little bit concerning to see that name Petya pop up again. So let's start with just some basic reactions to this news.
Chris, how are you feeling about seeing that? Well, it's not the first copycat of NotPetya that we've seen. I doubt it will be the last. I think the biggest thing I was reading about this one is that it adds a vulnerability, or takes advantage of UEFI boot, which is not the first time we've seen malware do that either. So yeah, this is important, we need to pay attention, but it's not novel. Got it. Suja, how about you? Yes, not novel. And you have to have the basic hygiene, right? Again, it's like wash your hands: don't click on links that you don't know, don't pick up the phone if you don't know the people. All of those. Basic hygiene is what is needed there. Absolutely. And Troy, how about you? Yeah, I'll pivot off of something Space Rogue said earlier. I'm surprised we're not seeing more of this. If I was a threat actor, this is where I'd go. Now, granted, it's harder to do; maybe that's why we're not seeing it. But most of our security tools focus on the operating system, the user space, or applications, right? That's really where we focus. Or the cloud. They don't touch the BIOS or the UEFI. There's almost nothing there that monitors that or protects against that, yet it is writable and accessible. So that's a huge problem. And then if you mess up the MFT, which they're doing, which is basically the underlying file system structure that allows the system to work, now you can't even boot the system. So how do you fix it? Well, most tools can't do that remotely. So now, on a widely dispersed enterprise, you have to have people going around with boot CDs or boot thumb drives, booting these systems up to fix them. So the ability to recover is very limited in many organizations. To Chris's earlier point, I don't know why we don't see more of this.
Yeah, I'm glad you both brought up that UEFI angle, right, and the way that this malware can write to that partition. And now, look, I could try to explain that, but I am nowhere near as seasoned in this field as you folks are. So I'd love to throw it to the experts to talk a little bit about why that particular thing is troubling and why it caught attention. Chris, do you want to maybe say some words on that? Well, look at UEFI as sort of like the BIOS of days gone by, right? It's the code in the hardware that controls the system. And if you can control that, you control the machine, regardless of what operating system or security stuff you have on top of it, because it's below all that. Now, we do have security things in place that we can use to protect the UEFI, right? TPM, the Trusted Platform Module, or Secure Boot. Also, believe it or not, manufacturers issue patches for UEFI on occasion, and almost nobody ever patches that. So if you keep up with that and monitor your firmware integrity, you can try to protect yourself against this sort of attack. So this is another lesson in why you should patch your dang systems. When the patches come out, patch them, please. Suja, any thoughts or advice for organizations in terms of what they should do to set themselves up to combat a threat like this one? See, this is a tough one, because it's in the boot process, not even at the OS level, right? It's at the hardware level. It's like what we saw with CrowdStrike: we couldn't fix it remotely, because each machine needed to be addressed separately to make sure you were fixing it. So this is the same level of challenge that you will have when it gets in there.
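Chris's advice, keep UEFI firmware patched, can at least be tracked in inventory: flag hosts whose firmware lags the vendor's latest release. A hedged sketch follows; the host names, models, and version strings are invented, and a real fleet would pull installed versions from a management agent and vendor baselines from published advisories.

```python
def firmware_out_of_date(fleet: dict[str, tuple[str, str]],
                         latest: dict[str, str]) -> list[str]:
    """fleet maps host -> (model, installed firmware version);
    latest maps model -> newest vendor firmware version.
    Returns hosts that are behind, or whose model has no known baseline."""
    stale = []
    for host, (model, installed) in fleet.items():
        newest = latest.get(model)
        if newest is None or installed != newest:
            stale.append(host)
    return stale

fleet = {
    "laptop-01": ("ModelA", "1.14"),
    "laptop-02": ("ModelA", "1.17"),
    "server-01": ("ModelB", "2.03"),
}
latest = {"ModelA": "1.17", "ModelB": "2.05"}
print(firmware_out_of_date(fleet, latest))  # → ['laptop-01', 'server-01']
```

Treating "no known baseline" as stale is a deliberate choice here: an untracked model is exactly the kind of inventory gap the panel warns about.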
So, keeping things up to date, and making sure... apparently all the phishing education and everything doesn't work as much as it should, so we need to keep pushing on that and get it going. Making sure that Secure Boot is on, so you are not able to go mess with it. Like, what are the firewalls, sorry, guardrails, that you can put in place? It's not even a firewall thing, because you're going to the hardware directly, so how do you prevent getting to that level of access? If you can figure out how to make people use common sense, you will crack a mystery of the human condition that nobody has cracked yet, Suja. So get to work on that. Common sense: it's called common, but it's not at all common. And Troy, any thoughts on your end in terms of security practitioner takeaways here? What do you do to set yourself up in the face of a threat like this? Yeah, so I think fundamentals. Let's step back. If you're a business, what is your minimum viable business? What do you need to keep the business running, so you can prioritize that? All of those systems should have immutable backups stored elsewhere so you can quickly restore them and get them functioning. When you get to the end-user workstation type of stuff, don't store anything on the local system, or if you do, at least have Box, OneDrive, whatever that cloud-based storage is. So, you know, maybe their laptops don't work anymore, but maybe they continue limping along with their phones while you fix things. Right? Again, try to figure out what it is going to take to keep this place running until we get everything fixed, and take that approach. Don't just layer on more and more and more technologies and security stuff, as much as we'd love to sell you that. Really take a holistic view across your entire enterprise and what it means.
It should really be part of your risk management program or your disaster recovery program, and not strictly an IT problem. Resiliency becomes top of mind for everybody, along with security, because it's not a question of if but when it happens: how do you make sure that you are resilient? Moving on, then, to our final topic for this episode. Something a little more fun, right? Anybody who tuned in to the last episode will know we talked about dumb rules you've got to put in place because somebody did something dumb. Maybe "dumb" is mean, but you know what we're talking about; we all have dumb moments. The example that we were talking about specifically was, you know, the "do not eat" label on a silica gel packet. Right. And you had to put that on there because somebody ate that packet at some point. And so that got me thinking: what are the dumb cybersecurity rules you've seen instituted, or that you maybe have instituted yourself, because someone did something maybe a little bit dumb, like those blank admin passwords Troy had mentioned? So I'd love to give everybody just a moment to share their story with us. And let's start with Chris. You got anything for us? I've got a couple. I'll give you the one that is probably my biggest pet peeve: people, or companies, organizations, that actually fire people for clicking on links. I think that's dumb and stupid, and it is a failure of the education model in your organization, not a failure of the employee for clicking on something, because that's their job, to click on links. That's what they get paid for. They're opening resumes, they're opening POs or whatever. You're paying them to click on links. No, don't do that. You're really blaming the user for your own poor security implementation. Ooh, I like that one. Very good. It's kind of the opposite, right?
Instead of "don't click on things," click on things, and understand that that's part of people's job. They're clicking on things. You've just got to teach them, help them click on fewer bad things. I like that one. Suja, what about you? Any thoughts on dumb cybersecurity rules? The thing is, making sure the passwords are not 1, 2, 3, 4, right? Like, making sure that people are able to think through it, or in those cases, use passkeys. What are the other options, rather than your birthday, your wife's birthday, whatever comes to mind? That is the main rule. Absolutely. And Troy, how about you? Well, I already gave up my admin/admin one, so I'm not sure I have another there, Matt. You know, one thing I'd like to sort of pivot on, because, again, to Chris's point about saying the person's wrong or they're dumb: I get those calls, I do. Right. And rather than telling them what to do, or telling them they're stupid for falling for it, I just try to walk them through it. You know, "it's from the bank." Well, do you bank with that bank? No. Well, then you can assume it's spam. Yes, I do. Okay, but is this the email address you have associated with your bank? No, it's not. Okay, it's spam. And really just walk them through that common sense we talked about. It's not common for non-technical people, and they shouldn't be expected to have it. So we need to educate them to really work through the problem-solving around it. And usually by the second or third question, they're like, "I was a dummy, I shouldn't have called you." And I'm like, no, no, keep calling. That's what I'm here for. No, absolutely. And I think it's important to point out, right, that every single one of us, no matter how tech-savvy we are, can fall for those kinds of things. Right.
The whole conversation that sort of sparked this topic last time was about how a very accomplished developer on npm, somebody who is responsible for 20 packages, so somebody who knows what they're doing, got hit by a phishing email at the wrong time, and they clicked the link that they shouldn't have clicked. So this can happen to anybody. And again, I like this idea of teaching people that it's okay to call and ask questions. Don't feel bad about it; there's no such thing as a dumb question. We all make mistakes. And if we can be that common sense for other people, then the world is a better place for it. All right. Thank you all so much, then. That's all the time that we have for today. Thank you, Chris. Thank you, Suja. Thank you, Troy. Thank you to all the audience at home for spending time with us. Make sure to subscribe to Security Intelligence wherever podcasts are found. Stay safe out there, and practice that common sense a little bit.