# AI Threats: Impending Vulnerability Cataclysm

**Source:** [https://www.youtube.com/watch?v=7y4cYOdf0Y4](https://www.youtube.com/watch?v=7y4cYOdf0Y4)
**Duration:** 00:42:39

## Summary

- AI is a powerful tool that can strengthen defenses if applied correctly, but it also inherits the good, bad, and ugly from its human users, creating new exploitation risks.
- The panel warned that many defenders lag behind attackers in adopting AI, while enterprises rapidly deploy AI solutions without a "secure-by-design" approach, increasing vulnerability.
- Gadi Evron of Knostic predicts an AI-driven "vulnerabilities cataclysm" within six months, in which AI-accelerated exploitation could outpace existing cyber defenses.
- Real-world examples such as the resurgence of the Scattered Spider and ShinyHunters groups, persistent misconfiguration issues, and the HybridPetya malware illustrate how quickly AI-enhanced attacks can emerge.
- The discussion highlighted the need to move beyond outdated "dumb" security rules and adopt proactive, AI-aware strategies to keep pace with evolving threats.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=0s) **AI Security Concerns on Security Intelligence** - In the opening of IBM's Security Intelligence podcast, the host introduces the panel and probes each expert about their top security worry regarding AI, framing it as a powerful yet potentially exploitable tool.
- [00:03:17](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=197s) **AI's Limits in Exploit Chaining** - The speakers argue that AI can automate simple vulnerability discovery but still relies on human expertise for complex, multi-step exploits, making scale the primary risk rather than immediate sophisticated attacks.
- [00:06:32](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=392s) **AI-Assisted Coding Fuels Insecure Software** - The speakers argue that AI-driven "vibe coding" speeds development at the cost of critical security checks, resulting in vulnerable applications such as the zero-security Tea app example.
- [00:10:24](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=624s) **AI Development Requires Secure Human Oversight** - The speakers stress that building scalable AI-driven enterprise apps demands solid security fundamentals, clear vendor responsibilities, and keeping humans in the loop rather than relying on AI to replace engineers.
- [00:13:56](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=836s) **LLM-Powered Vishing Tactics Revealed** - The speaker discusses the comeback of the ShinyHunters group, highlighting their sophisticated use of large-language-model-orchestrated vishing attacks with synthetic voices to target financial institutions, and notes that while unsurprising, these methods represent a new twist on classic social engineering.
- [00:17:00](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1020s) **AI-Powered Vishing and Employee Recruitment** - The discussion highlights AI-generated vishing attacks, stresses two-factor authentication as a simple defense, and notes that groups like ShinyHunters also attempt to recruit insiders within targeted companies.
- [00:21:44](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1304s) **Detecting Insider Threats with AI** - The speakers discuss how hard it is to spot anomalous behavior by privileged insiders using traditional monitoring, note that existing security products fall short, and suggest that AI-driven analytics may help uncover these hidden threats.
- [00:26:04](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1564s) **Misconfigurations Rise as Security Staffs Are Cut** - The speakers explain how cost-driven reductions in dedicated security teams push insecure misconfigurations onto general IT staff, who prioritize functionality over protection, leading to compounded security gaps.
- [00:29:21](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=1761s) **Misconfigurations vs Vulnerabilities Debate** - The speakers argue that misconfigurations are a larger, often hidden security risk than known vulnerabilities, emphasizing the need for AI-driven detection, early checks, and basic inventory controls to prevent shipping insecure systems.
- [00:33:36](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2016s) **Copycat NotPetya Exploits UEFI** - The speakers discuss a new NotPetya-style malware leveraging UEFI boot vulnerabilities, noting its lack of novelty but stressing the gap in security tooling that rarely monitors firmware layers.
- [00:37:41](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2261s) **Ensuring Business Resilience and Backup** - The speaker advises businesses to define a minimum viable operation, keep immutable off-site backups and cloud-based storage, adopt a holistic risk-management approach instead of layering more technology, and emphasize resilience as a core, not just an IT, concern.
- [00:41:10](https://www.youtube.com/watch?v=7y4cYOdf0Y4&t=2470s) **Educating Users Against Phishing Scams** - A conversation emphasizing the need to teach both technical and non-technical people to verify suspicious emails, encourage asking questions without embarrassment, and recognize that anyone can fall for phishing.

## Full Transcript
AI isn't magic. It's just another tool that if you
apply it correctly into your defenses, will make you that much stronger. All that and more on Security Intelligence.
Hello, and welcome to Security Intelligence, IBM's weekly cybersecurity podcast,
where we break down the most important stories in the
field with the help of our panel of expert practitioners.
I'm your host, Matt Kosinski. Joining me today to break it all down: Chris Thomas, X-Force global lead of technical eminence and part of the Not the Situation Room podcast; Suja Viswesan, VP of security products; and Troy Betancourt, global partner and head of X-Force. Our stories
this week: the prophesied return of Scattered Spider and friends, misconfigurations, and HybridPetya. And our panel shares dumb cybersecurity rules.
But first, the AI vulnerability apocalypse. Sounds pretty scary. So
I want to start us with a quick round the
horn question today. What's the thing that worries you most
about AI from a security perspective? Suja, let's start with
you. Well, AI is learning from us, right? The good,
bad and ugly. And then it is when we start
using it, then the bad and ugly are also part
of it. And then you can use AI to exploit
that as well. So that's what worries me. Absolutely.
Chris, how about you? Well, I'm not really worried about
AI, but if I was going to say, I would
say that defenders aren't really picking up AI quick enough
and using it in their defenses. Absolutely. And Troy, how
about you? For me, it's the rapid deployment of AI
solutions across enterprises without a secure by design approach. You
know, we've already seen the dangers where just assistants like
copilots are exploited to, you know, extract data from
organizations. Now imagine if that was fully autonomous agentic AI
with no human oversight at all, massive potential for risk.
Absolutely. And the reason I posed this question is because
our very first story of this week has to do
with a somewhat dire prediction from Gadi Evron, CEO of AI security company Knostic. Now, Gadi predicts that we are,
and this is a direct quote, six months away from
the upcoming vulnerabilities cataclysm. AI could make exploitation so fast,
it breaks cyber defenses down. Now, all three of you
kind of touched on exactly what Gadi's getting at here,
right? Like Chris said, defenders are not really picking it
up as fast as the attackers might be. Like Troy
said, we're rolling it out very quickly, maybe before we
even have security in place. And like Suja said, these
things are learning from us. And they can evolve faster
than our security practices can evolve. So I wanted to
start by asking whether you agree with the premise, are
we rushing headlong into AI disaster? And I want to
start with you, Chris, because you brought this to my
attention when it was first posted. I want to get
your reactions to this prediction. Well, I mean, attackers are
using AI to automate vulnerability discovery. And they're already doing that. They've been doing that for a while. It's
not a new thing now. The fear here though, is
that they're going to get so good at it and
it's going to be so fast it's going to overwhelm
us. But I really don't think that's going to happen.
I mean, AI isn't magic. AI really still struggles with
the nuances of, like, internals, memory layouts, bypassing mitigations.
You still need human expertise at the end of the
day to really exploit something. Right now, the low end
of the spectrum, the easy vulnerabilities, the easy exploits, AI
is really good at that. The more expansive things, the
things that require chaining of multiple exploits together, that becomes
more and more difficult for AI to do. And so
you still, not only is it AI, but the people
still need to know how to use the AI to
exploit these vulnerabilities. So the real risk here is scale.
And we have protections and mitigations in place that, if properly deployed, sort of solve that. Absolutely. Troy, any thoughts on either
this prediction or Chris's take? I think there is some
FUD there. Six months I think is wildly unreasonable. That
said, I think over time AI attack tools will
get better. You know, we've evaluated a bunch of the
commercial tools that are available right now for use potentially
in our own engagements and have really found, to Chris's
earlier point, they're really good at the easy stuff, the
stuff that lets you scale. That takes a lot of
human effort scanning for vulnerabilities and picking which ones you
want to exploit, but actually chaining them together like you
do in red teaming, what the threat actors are really
doing, they're just not there yet. And I don't think
they're really close, to be quite frank. So I think
it's a little mix of maybe his timeframe's a little
aggressive. I think he's right if we roll out a couple of years. Gotcha. And Suja, how about you? What are you thinking about this prediction? Yes. AI is really
good at figuring out what are the easy exploits. So
similarly, are we adopting AI to figure these easy exploits
ahead of time? Right. As a defense mechanism so that
these are not out there? The more complicated ones, it's a work in progress. But how do we adapt and then get there? Yes, and same as Troy, I agree
the timeline seems too aggressive for six months, but we
need to be playing ahead of the game. There's no
two ways about it. As AI kind of stands, it's
really good at the easy stuff. Some of that more
complicated stuff not quite there yet. I'm wondering, though, if
you think that it will get there at some point.
And I want to start with you, Troy, because you
were saying maybe a year, two years out. Do you
think it's actually going to reach that level in that
timeframe? Prognostication is really hard to stick to. I
think we've all been wrong. When we've made forecasts around
AI, I think they'll get there. But keep in mind,
that's a year or two years of defender improving their
own defenses, whether that's through security products building in AI
or leveraging just better deployments of security technology across their
enterprise. Absolutely. Chris, I saw you start to maybe make
a little reaction. Do you have thoughts on that one?
Well, yeah. I mean, everybody talks about how the attackers
are going to advance and get better by using AI
and they forget the defenders are also using AI and
they're going to advance right along with the attackers. So
this, you know, AI vulnerability apocalypse thing, I'm not sure
I would use such strong language. But the attackers are
going to get better, but so are the defenders. Gotcha.
It's like Suja said, right? They learn the good, the
bad, and the ugly. So it's really picking up all
that stuff. And it's going to help both the defenders
and the attackers. It depends on who is using it.
That's something that comes up basically every time we record this show. Right? All these tools depend on who's using them and what they're being used for. Now,
I want to dig a little bit, though, into some
of the specific kind of vulnerability trends that Gadi calls out in his post, because I think they're worth
discussing. And one of the first ones is this. And
this is a quote again from the post. Vibe coding
boosts velocity but removes critical checks, producing insecure code at
scale. And Troy, you had mentioned, you know, some of
the copilot stuff and the coding assistant stuff we've seen
so far. So I wanted to get your take. Do
you agree that Vibe coding can kind of introduce some
of these vulnerabilities by removing critical checks? Oh, absolutely. We've
already seen examples of that. The Tea app, if everyone recalls that, that was news a few months back. It was supposedly a vibe coded app. I can't say it had
poor security. It had zero security. And that was a
really great example, and I think we'll see more of
that. My concern, as I mentioned, was development without security
being as part of it, like enterprise development. But Vibe
coding is even worse, right? That is the true wild
west of no security in development. So, yeah, it is
definitely concerning. Suja, did you have any thoughts on that?
Because I know in the past we've talked quite a
bit about vibe hacking, Vibe coding, Vibe security, as you
called it. So I want to get your takes on
the risks of vibe coding. So somebody who doesn't know anything about software can build an app today. I think that's what happened with the app Troy was talking about, because you cannot fix something that you don't know you are causing. So it definitely does introduce risk. That is why
when people say, hey, this is a copilot, or it
works alongside a senior engineer who knows what they're doing.
If you have a tool and you don't know what
you're doing, of course you can poke your eye with
it. And that's what happened there. So we need to
be educating everybody to say, hey, this is a tool
and how are you going to use it? What are
the guardrails? So that is why these tools need to
evolve, so that the security becomes part of it rather
than after the fact. I think this is something where security needs to be baked in, not bolted on later.
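Suja's "baked in, not bolted on" point can be illustrated with a minimal guardrail that reviews AI-generated code before it lands. This is only a hedged sketch: the check names and regex patterns below are hypothetical, and a production pipeline would use dedicated SAST and secret-scanning tools rather than two regular expressions.

```python
import re

# Hypothetical checks for illustration only; real teams would wire a
# dedicated scanner (SAST, secret scanning) into the same gate.
CHECKS = {
    "hardcoded-secret": re.compile(
        r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"
    ),
    "dangerous-eval": re.compile(r"\beval\s*\("),
}

def review_generated_code(source: str) -> list[str]:
    """Return the names of checks the (possibly AI-generated) source fails."""
    return [name for name, pattern in CHECKS.items() if pattern.search(source)]
```

Run as a pre-commit hook or CI stage, a non-empty result would block the merge, which is the "part of the tool" posture Suja describes rather than an after-the-fact audit.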
I like that you point out, you know, that the
tool can poke your own eye out if you don't
know what you're doing with it. Right? Because again, it's
not just who's using it, but how you're using it.
Right? So even if you're using it for ostensibly benign
reasons, if you're not doing it right, you know, you'll
shoot your eye out. Chris, do you have any takes
on Vibe coding and where it fits into the cybersecurity
landscape? We're looking at the LLMs and the models that
we're using today, as they stand today. Right? These things
are constantly evolving, constantly getting better, constantly adding new data.
And so as the need for secure coding becomes more
apparent, those features will be built into the newer models,
right? As customers demand, hey, if I'm going to do vibe coding, I need it to be secure. They're going
to demand that the coding that's output is secure. So
yeah, we have a problem today. I think that will
become less of a problem tomorrow. Got it. And so
yeah, again we all kind of agree here that it
sounds like anyway that the six month timeframe is a little aggressive. There are certainly some issues, but we gotta keep
up with what's happening here. And so I wanna kind
of end this segment on a kind of future forward
looking take. Right. What kinds of advice would you have
for organizations right now who are maybe a little bit
worried about this kind of talk about an AI vulnerability
cataclysm. How can they start to position themselves for the
best security in the future with this stuff? Let's start
with Suja on that. The bigger thing is, for doing a POC, it's great. So start with whatever you have
and then learn from it. As Troy mentioned earlier, when
you are deploying it for production, make sure the right
set of guardrails are set. Don't just jump into it. I believe that especially for development purposes, task based, I
need this task completed. AI is really, really good. If
you want to build a scalable enterprise app, then you
need to be thinking about all the security. Are the
vendors providing it or they're expecting you to provide it.
So make sure that you understand it and then go
from there instead of jumping into the bandwagon, oh my
God, I can replace all my engineers with vibe coding.
That's not going to happen. Yeah, no, absolutely not. You
got to keep that human in the loop. Right. Troy, what are your thoughts there? Yeah, I'd liken it to the expansion to cloud years ago, when people found out, oh, the cloud provider wasn't responsible for security like they thought, and were surprised. Same thing with AI. But I think stepping back
a little, it's really security fundamentals. AI adds a little
bit to it, but it's role based access control. It's
ensuring that your APIs, or in this case maybe A2A or MCP communications, are secured. It's making sure that the
front end app you're developing that's going to leverage the
AI uses best coding practices, secure by design. So really
it's about doing all of that, understanding your assets, your
data flows, ensuring they're protected. Unfortunately, history has shown we
haven't been that good about that enterprise hygiene from a security perspective. And AI is now just rapidly exposing that. I like that you brought up that kind
of analogy to the cloud apps because you're foreshadowing something
we're going to talk about a little bit later on,
which is the persistent issue with misconfigurations in cloud apps.
But before we get to that, Chris, what are your
thoughts on how organizations can best position themselves to kind
of deal with this cataclysm? Well, you know, like Troy
said, focus on the fundamentals, right? Those have not changed.
Just because we have a brand new shiny thing over
here. The fundamental security practices that we've been using for
the last 20, 30 years still apply. AI isn't magic.
It's just another tool that, if you apply it correctly into your defenses, will make you that much stronger.
Yeah. One of the things I really like about security
is that you do have this set of bedrock principles
that you can kind of adapt and apply to almost,
you know, every situation. And rather than getting distracted by
that shiny new object, you just keep that stuff in
mind. All right, let's move on to our next story,
which is involving a cast of characters that we have
seen time and time again on this show. Scattered Spider.
Shiny Hunters. They are back. Now, anybody who listened to
our last episode would know that we discussed the supposed
retirement announcement of Shiny Lapsus$ Hunters, and our panel unanimously
declared that it was absolutely bunk. And it turns out
that they were right. And they were so right, in
fact, that it was the very same day our episode
went live that they were back in business. And you
know what, if anything annoys me, it's that we couldn't even have a week. We couldn't even have a week without them. It takes all the fun out of prognosticating when
they just show up that day. But anyway, my hurt
feelings aside, they're back and they're doing some new stuff.
And so before we get into some of the new
things I want to discuss, I would also just. I
would like to get some reactions to the return of
Scattered Spider and Shiny Hunters. Suja, how are you feeling about this? Once you know that you can get away with things, you're going to keep on trying. I think that's what it is. Makes perfect sense. Chris, what
about you? I mean, once a criminal, always a criminal,
right? I mean, that's where the money is. That's how
they make their money. They're not going to abandon that
just because for whatever, right? Unless the money goes away.
Unfortunately, the money has not dried up. Troy, what about
you? Well, hopefully this doesn't violate any brand permissions, but shocked Pikachu face here. Did anybody expect that they were
really gonna retire? I think that the shocked Pikachu meme
is an especially apt one here, given that Shiny Hunters
does take its name from a Pokemon reference. So we
got some synergy going on here. All right? So, yeah,
obviously nobody is surprised. Everyone's like, of course they're back.
We all saw this coming a mile away. But what's
interesting to me is that again, they're doing some new
stuff in this round, at least stuff we haven't seen them doing before. They're targeting more financial institutions, which, that's obvious. And they're doing it, though, with a lot
of LLM orchestrated vishing. By that, I mean, they've got
operators sitting with LLMs and using them to kind of
play out some of these vishing calls that they're making.
And they'll even use generated synthesized voices to kind of
operate on the calls to pose as certain people. It
sounds like very sophisticated stuff. And what they often do
is they begin by calling the target's IT help desk
and claiming to be an employee who is locked out
of their account and asking to reset their password. And
the reason I mention this specifically is because I had
a conversation a long time ago with Stephanie Carruthers of
IBM X Force, and she had mentioned that when she
does social engineering engagements, the one trick that works almost
every single. Actually not even almost. She said every single
time they do it is the IT help desk password
reset ploy. It works every time they do it. And
so it's a little scary to me that they're using
this ploy and they've got LLMs doing it. So I wanted to throw to the panel: this AI powered
vishing, have you seen this kind of stuff before? Let's
start with you, Troy. Yeah, we've seen a little bit. You see it in open source intelligence about some of
these groups. It's not really a surprise. Especially the scattered
spider group. They're not known for really being technically advanced.
Right. They've always focused more on the social engineering side,
which is the easiest one where you can operationalize that
with support from LLMs and AI. So to me it
makes sense. Right. They are finding value in using AI
to make them more effective in what they do. Suja,
what about you? Any thoughts on this LLM vishing? Have
you seen stuff like this before? It becomes very easy to personalize and then go from there. Because previously you had to do a lot of work to get all this information. Now with LLMs, it becomes much easier to make it very personalized to people. So I do see why they are doing it, because it's like coming from me to you. You're going to the IT department about an employee. This is not somebody coming from outside. So it's very, very believable. So I totally see this happening. Yeah,
that's a good point. Right. It makes spear phishing super
easy because you can have the LLM gather that information
too. And so many people post things online openly on
their, you know, social media accounts. That makes the attackers' jobs really easy. I remember even in the pre-LLM
days there was some stat about how like attackers could
spend 45 minutes on Google and get all the information
they need for a spear phishing attack. Imagine how much
shorter it is now that we've got LLMs doing this
stuff. Chris, what about you? Any thoughts on this tactic?
It's not new, right? We've seen vishing before, audio and video, so that's not a new thing. I think what it does do is highlight the importance of something like two factor authentication. Right. Or confirmation of identity through
a second channel. These are standard basic practices that companies
can use to defeat the sort of vishing and phishing
attacks that we see today. Yeah, talk about a kind
of simple solution to a high tech problem, right? You
got fake voices, AIs, all this kind of stuff. Throw
a second factor on there, folks. Make sure you got
that stuff set up. Set up a passkey, do something.
Right. The other thing though that I found interesting was
that they're not just doing this LLM orchestrated vishing. They're
also actively trying to recruit employees of the organizations they're
targeting to their side. Right. And as we've seen this
specifically with Shiny Hunters, I'm not sure if Scattered Spider
is doing it, but you know, the overlap, it's hard
to tell where one begins and the other ends. But anyway, Shiny
Hunters has been actively trying to recruit employees of the
organizations they're targeting. Have you seen this kind of thing
before and does it work? Troy, I'm going to throw
to you first. Yeah, we've actually seen a few articles
in the news, and then for anybody that's following The Com, which is a wider group of cybercriminals, generally skewing younger. Scattered Spider and Shiny Hunters apparently came out of there, as well as Lapsus$. You know, they've been doing that
for a while. You know, they were doing that for
SIM swapping. They would find mobile provider employees and pay them money to do it, and then they shifted away from that. It's not a surprise. The insider threat
aspect is really the most interesting thing to me about
this. You know, an insider is one of the most
difficult threats to really protect against because they've already got
permissions, they've got roles, and as long as they're staying
within the behavior you'd expect, it's very hard to identify
that they're doing something like that. Now couple that with the unstable state of world affairs, employment and cost of living challenges in many nations, and a continued decline in the strength of the employer-employee relationship or social contract over the last few years, and I really think it's a ripe target for exploitation. That's a
really good point. Right. There are a lot of social forces that make right now, you know, a pretty good time to go recruit some of those disgruntled folks. Right. Chris, your thoughts on this kind of
tactic? Have you seen it before? Does it work? Yeah,
again, I mean this has been around for years, especially
as Troy mentioned, with mobile phone companies, insiders getting paid.
And again, the social aspects, economics out there mean that
you have very low paid employees with very high levels
of access who are very susceptible to a little bit
of extra cash and the criminal element has a lot
of extra cash so that they can make even more
money. So yeah, I'm actually a little surprised that the
insider threat isn't a bigger issue today than it actually
is. Be careful what you wish for there. Suja, what
are your thoughts? There is still some basic human decency that people have. That's what is preventing this from happening more. But the challenge is the socioeconomic pressure, and Troy nailed it really well. In the last five years we
have seen the corporate employee-employer loyalty also eroding away, and all this becomes easy bait for somebody who is on the fence. It's easier to jump one way or the other. So I'm
not surprised. But it's a reality that we need to
work on. One of the things that we have been
thinking about, security as a person having an access, it's
about workflow. In order for you to do a workflow,
do you need access? And once you are done just
enough access and then it goes away. I think we
need to be rethinking how at least we are thinking
about how do we rethink security, Even identity and access
management? Absolutely. And again, basic human decency. I like that
you brought that up, that so often the human factor is our first line of defense. And that factor can just be, you know, hey, I don't want to sell out my employer like this. But something that
you brought up, Troy, and I want to go back
to this, is that it can be extremely hard to
tell when you have an insider threat. Right. Especially if
they're just using the permissions they already have, but for
like illegitimate purposes. And so I was wondering if you
have thoughts on what do you look out for as
an organization? Right. How do you catch these kinds of
insider threats? Is it possible? I think it is. There's
been a lot of work that's been taken from sort
of the counterintelligence space and then brought over into security
products. There are some standards: look for activity outside of normal working hours; look for potential remote access from non-approved places, in the event they shared their credentials; or, let's say, data flows that don't seem consistent with what you'd expect the employee to be doing.
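The signals listed here could be sketched as a few simple rules over an access log. This is purely illustrative, with a made-up event schema and thresholds, not any particular product's detection logic:

```python
from datetime import datetime

# Illustrative insider-threat heuristics. The event schema, approved
# locations, and baseline numbers below are all hypothetical.
WORK_HOURS = range(8, 18)            # 08:00-17:59 local time
APPROVED_LOCATIONS = {"HQ", "VPN"}
BASELINE_DAILY_BYTES = 500_000_000   # expected per-user data flow, e.g. a rolling average

def flag_event(event: dict) -> list[str]:
    """Return the insider-threat signals an access event trips, if any."""
    flags = []
    ts = datetime.fromisoformat(event["timestamp"])
    if ts.hour not in WORK_HOURS:
        flags.append("activity outside normal working hours")
    if event["location"] not in APPROVED_LOCATIONS:
        flags.append("remote access from a non-approved place")
    if event["bytes_transferred"] > 10 * BASELINE_DAILY_BYTES:
        flags.append("data flow inconsistent with expected role")
    return flags

event = {"timestamp": "2025-03-02T02:14:00", "location": "unknown",
         "bytes_transferred": 9_000_000_000}
print(flag_event(event))  # trips all three rules
```

As the discussion goes on to note, static rules like these are exactly what is hard to tune per individual and per role.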
But I don't think anyone's really cracked it, because it is so hard. It's so dependent on the individual, their role, and, to Suja's point, what their necessary access is. You know, if it's an admin especially, it gets so difficult to see on a system they'd use on a daily basis. So how do you really find what is anomalous there? It's very challenging. There's a whole product space around this, and quite frankly, it hasn't lived up to expectations, I think, for many security buyers. No,
that was fantastic and I absolutely agree with that. Troy
and I wanted to throw to Chris if you have
any thoughts there as well, anything to add about how
you can kind of catch some of these insider threats
when they blend in so well? Yeah, like Troy said,
it's really about continuously monitoring your network and trying to
identify people based on traffic patterns, which is difficult. Right.
It really requires some specialized software. AI can actually help
here, depending on what packages you're using. But it's difficult,
especially with your more senior employees with higher access who, you know, generally need to get into all parts of the business. It becomes more and more difficult to sort of find them and isolate them based on
their traffic alone. Absolutely. And Suja, your thoughts? I think that's very much about what you can say the anomalies are. We talked about the difficult things about AI, but today with AI we can get to those answers much faster: which is a false positive, which is actually happening, which might be a threat, and then try to catch it. But you definitely cannot catch it all, because it's constantly evolving. Especially, like what Troy was talking about, if you're an admin, you do have blanket access to everything at that point. If you are that person, how do we find out? That's a tricky one. The edge cases are the tricky ones. Thank you for that. And
let's move on to our next story: misconfigurations. To put it meanly, I call it "when getting hacked is your own dang fault." In a new blog post,
researchers from the Wiz cloud security platform discuss three instances
of application misconfigurations allowing threat actors to do real damage.
And just this morning I was reading a new report about a massive malspam attack that used 13,000 misconfigured routers to create a botnet to send a bunch of phishing emails. And all the attackers did was exploit default security settings that were never changed. Right. People just didn't change the default password, so you could get right in there. So this is a real
persistent problem that keeps on happening. And some of the
common misconfigurations that Wiz called out specifically were public exposure, right, databases that shouldn't be public exposed to the public-facing Internet; not changing those default credentials, like we just talked about; and giving people excessive permissions, which I think ties very much into exactly what we were talking about with that insider threat angle as well, right, the difficulty of getting those permissions right.
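Those three misconfiguration classes can be checked mechanically. Here's a minimal sketch against a hypothetical resource inventory; the schema and field names are invented for illustration and are not the Wiz API:

```python
# Illustrative audit for the three misconfigurations discussed:
# public exposure, unchanged default credentials, excessive permissions.
# The resource dictionary schema here is hypothetical.
DEFAULT_CREDS = {("admin", "admin"), ("admin", ""), ("root", "root")}

def audit(resource: dict) -> list[str]:
    """Return a list of misconfiguration findings for one resource."""
    findings = []
    if resource.get("publicly_accessible") and resource.get("kind") == "database":
        findings.append("database exposed to the public internet")
    if (resource.get("username"), resource.get("password")) in DEFAULT_CREDS:
        findings.append("default credentials never changed")
    if "*" in resource.get("permissions", []):
        findings.append("excessive (wildcard) permissions granted")
    return findings

db = {"kind": "database", "publicly_accessible": True,
      "username": "admin", "password": "admin", "permissions": ["*"]}
print(audit(db))  # trips all three checks
```

The hard part in practice, as the panel notes, is not writing checks like these but running them at enterprise scale against an inventory you actually trust.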
So Troy, I want to start with you first because
all the way back in that first segment you were
bringing up this idea of app misconfigurations. Do you see
this kind of thing a lot? App misconfigurations causing security
problems for organizations. I've been doing this a while, and I once worked an investigation into the largest hack of U.S. government systems to that date. And it was blank admin passwords on systems exposed to the Internet. I can't say much has changed, except Microsoft doesn't allow blank admin passwords on their OS anymore,
which is nice. But this is what keeps us in business from an incident response perspective: if you can socially engineer passwords, or buy them on the Internet, or somebody misconfigures something, you don't have to break in. You don't have to be a technical wizard; you just have to look for the open doors and walk right through
them. So I think we're going to continue to see
this. Hopefully, as AI does develop and get better, it'll
do better identifying this because it's a scale problem. Enterprises
are huge, they're complex. There's lots of moving parts that
change, and humans just can't grapple with that. So I'm
hopeful that AI, as it progresses, will become a really
good defensive mechanism for it in this space. Yeah, I think, you know, not allowing those blank admin passwords, that's one of those dumb rules, right? One of those things you've got to stop allowing because someone was using it. But yeah. So, Chris, what's your take on this? Have
you seen misconfigurations causing problems for organizations before? Yeah, I
mean, misconfigurations are as old as security, right? You click
the wrong button, you set the wrong thing, and you
open the door. I think what we're seeing, though, or
what's happening is that as companies try to save money,
they're kind of cutting some of their security personnel and
pushing the security tasks off onto the regular IT people.
And that's fine, except the IT people don't really know
what buttons to click to make things secure, and they
just want to click the right buttons to make things
work. And making things work and making things secure are
two different things, which is where you come up with
a lot of misconfigurations. And that's not even getting into the complex stuff, where you have a chain of misconfigurations that work in concert to open up a hole. This is where, you know, it's not just one thing: the IT guy wanted to make it work and wasn't able to actually make it both secure and working. Absolutely. You know, often
there's that tension, right, between just making the thing work
and making sure it works securely. And we don't always
do that second part. Suja, your thoughts? It's a tough one, because when you are developing and you hit a problem, the first thing is: let me turn off security and see if it works, okay? That's how we debug. And then when you deploy it, did you make sure you turned it back on? It's a very simple human error, often unintended, and then it happens. So it's extremely important that
the tools are available to do proper checks and balances.
We are cutting costs everywhere, so let's reduce the humans, let's have the tools do it. But as mentioned earlier, the tools are only as good as the good, bad, and ugly of the humans behind them. They can miss things too, just like us. So who's policing the police to make sure things are working fine? But then, it keeps the security professionals in business, like Troy said. I think that's what keeps the business going. You know, silver lining, right? There's a job for
us to do. But you know, Troy, something that you
had said, right, was that part of the issue with
these misconfigurations is scale, right. There can be so many
of them in a massive enterprise with all these apps
configured and set up. And so I was wondering: is it harder to find some of these misconfigurations than it is to find your typical, you know, vulnerability, where there's some kind of bug or flaw in the code? Is it harder to surface these things when it's not a bug in the code, it's just that someone clicked the wrong button? Yeah, you know, I don't know if it's actually harder. In fact, I would think it might be easier from a threat actor perspective. Trying to find insecure
code requires a lot more skill than trying to find
an open door. And I think the scale is much
larger. There's only so many applications being deployed, whereas there
are so many different access points across hyperscalers. So I
think misconfigurations are probably a greater threat than insecure software,
quite frankly. And I think that goes to Space Rogue's earlier point: if IT's responsibility is to make things work, and they're the ones doing the deploying, whether it's applications or hyperscaler workloads, et cetera, you're going to have less security-attuned folks making these decisions, which is likely to cause more misconfigurations. Absolutely. Misconfigurations can be a
bigger problem than vulnerabilities. Suja, do you agree with that? Do you feel like that's true? It is definitely true, because at least with vulnerabilities, you know what the vulnerabilities are; you see a report and you make a conscious decision to ship something with a vulnerability, because you know it cannot be accessed, or for whatever reason you might have. But with misconfiguration, you don't even know. How do you fix things that you don't know about? I do see that it's a bigger problem, and these misconfigurations are more easily detectable using AI today. That is what a lot of companies are working on, I think Wiz is talking about it, IBM Concert is talking about it: how do we make sure that there are checks and balances to figure out these misconfigurations earlier, so we don't ship them. Absolutely. And Chris, what
about you? How do you feel about the kind of
misconfiguration versus vulnerability thing? I'm going to say the same
thing I said with the insider threat. I'm surprised it's
not a bigger issue. It really is a big deal
and I think there probably are more misconfigurations out there
than we realize. They're just not being exploited yet. Absolutely.
And so how do we start to be more vigilant
about misconfigurations? Is there a way to do that? Chris,
I'll start with you again right there. Do you have
any thoughts on how organizations can maybe have fewer of
these things? Well, we go back to fundamentals, right.
And we look at inventory and people laugh when I
say this sometimes because inventory is what we did back
in 1998, making sure we knew what we had. But
if you do your inventory properly and you know what
you have and you know how it's configured, that's part
of the inventory. Right. Back to the fundamentals, checking all
your stuff. And it can be difficult when you're talking
about a global enterprise with millions of endpoints and thousands
of routers and whatnot, but you gotta do it. And to Suja's point, this is where AI can be very helpful in continuously monitoring your endpoints and your network devices, checking that configuration, and making sure the configuration file matches what it's supposed to be. Absolutely. Suja, anything to
add there? I think, see, one of the things that we were talking about is Selenium Grid: hey, do not put it up there in production. Like, we are in the tech space; can we build some things where, if you have this, you cannot deploy? How do we make sure that we protect ourselves from some easy, stupid mistakes? I think that's where the Microsoft one comes in: okay, you cannot have blank passwords anymore. So then people came up with admin/admin. That's a different problem that we need to solve now. But at least we are making progress. Not perfection, but we need to keep making progress with these things. Absolutely.
One small step at a time. Troy, any thoughts on
your end? I think Suja and Chris covered it pretty well. AI really has a chance of making a near-term difference here. There are many ways to misconfigure stuff, but I think that's somewhat bounded, whereas the ways you could create insecure apps through development are almost limitless. And AI does well with these large-scale problems. And at the risk of doing a product pitch, IBM Consulting has actually built out tools for this already and brought them to market, and we know competitors are doing the same. Absolutely. The limitless potential
to make bad apps. I like ending on that. Let's
move on to our next story: HybridPetya adds a new twist to an old ransomware. Now, I assume that you folks remember that name, Petya. It was a ransomware that made some waves in the mid-2010s, and especially NotPetya, which came out in 2017 and was responsible for one of the most destructive cyber attacks in history, causing over $10 billion in damages, so 10 billion, that's a B, against a host of organizations throughout Europe. Now, recently, new ransomware samples discovered on VirusTotal appear to be a copycat of these infamous strains, but with some new tricks, so researchers have dubbed them HybridPetya. Now, we don't know if this is actually being used in the wild yet; no one's actually seen that, and it could just be proof-of-concept code, somebody toying around, messing with stuff. But still, to me, it's a little bit concerning to see that name Petya pop up again. So let's start with just some basic reactions to
this news. Chris, how are you feeling about seeing that?
Well, it's not the first copycat of NotPetya that we've seen, and I doubt it will be the last. I think the biggest thing I was reading about this one is that it takes advantage of a vulnerability in UEFI boot, which is not the first time we've seen malware do that either. So yeah, this is important, we need to pay attention, but it's not novel. Got it. Suja, how about
you? Yes, not novel. And you have to have the basic hygiene, right? Again, it's like wash your hands: don't touch, don't click on links that you don't know, don't pick up the phone if you don't know the people. All that basic hygiene is what is needed there. Absolutely. And Troy, how about you? Yeah, I'll
pivot off of something Space Rogue said earlier: I'm surprised we're not seeing more of this. If I was a threat actor, this is where I'd go. Now granted, it's harder to do; maybe that's why we're not seeing it. But most of our security tools focus on the operating system, the user space, or applications, right? That's really where we focus.
Or the cloud. They don't touch the BIOS or the UEFI. There's almost nothing there that monitors that or protects against it, yet it is writable and accessible. So that's a huge problem. And then if you mess up the MFT, which they're doing, and which is basically the underlying file system index that allows the system to work, now
you can't even boot the system. So how do you
fix it? Well, most tools can't do that remotely. So
now on a widely dispersed enterprise, you have to have
people going around with boot CDs or boot thumb drives
and booting these systems up to fix it. So the
ability to recover is very limited in many organizations. To
Chris's earlier point, I don't know why we don't see
more of this. Yeah, I'm glad you both brought up that UEFI capability, right, and the way that it can write to that partition. And now, look,
I could try to explain that, but I am nowhere
near as seasoned in this field as you folks are.
So I'd love to throw it to the experts to
talk a little bit about why that particular thing is
troubling and why that caught attention. Chris, do you want
to maybe say some words on that? Well, if you
look at UEFI as sort of like the BIOS of
days gone by, right? It's the code in the hardware
that controls the system. And if you can control that,
you control the machine, regardless of what operating system or
security stuff you have have on top of it, because
it's below that right now, we do have security things
in place that we can use to protect the uefi,
right? Tpm, Trusted Platform Module or something secure boot. Also,
and believe it or not, manufacturers issue patches for UEFI
on occasion, and almost nobody patches that ever. So if
you keep up with that and monitor your firmware integrity,
you can sort of try to protect yourself against this
sort of attack. So this is another lesson in why you should patch your dang systems: when the patches come out, patch them, please. Suja, any thoughts on advice for organizations in terms of what they should do to set themselves up to combat a threat like this one?
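The firmware-integrity monitoring Chris mentions boils down to comparing a dumped firmware image against a known-good baseline. Here's a toy sketch with stand-in data; real deployments rely on vendor tooling and TPM-based measured boot rather than ad hoc scripts like this:

```python
import hashlib

# Toy sketch of firmware integrity monitoring: hash a dumped firmware
# image and compare it to a known-good baseline digest. The image bytes
# below are stand-ins, not a real UEFI image.
def firmware_digest(image_bytes: bytes) -> str:
    """SHA-256 digest of a firmware image dump."""
    return hashlib.sha256(image_bytes).hexdigest()

def check_integrity(image_bytes: bytes, known_good_digest: str) -> bool:
    """True if the image still matches the recorded baseline."""
    return firmware_digest(image_bytes) == known_good_digest

baseline_image = b"\x0f\xaa" * 1024        # stand-in for a dumped UEFI image
baseline = firmware_digest(baseline_image)  # recorded at a known-good point
tampered = baseline_image + b"\x00"         # any modification changes the hash

print(check_integrity(baseline_image, baseline))  # True
print(check_integrity(tampered, baseline))        # False
```

The point of the panel's discussion stands: almost nobody runs even this simple a check below the OS, which is why UEFI-level malware is so troubling.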
See, this is a tough one, because it's in the boot, not even at the OS level, right? It's at the hardware level. It's like what we saw with CrowdStrike: we couldn't fix it, because each machine needed to be addressed separately to make sure you were fixing it. So this is the same level of challenge that you will have when it gets in there. So keep things up to date, and, since apparently all the phishing education and everything doesn't work as much as it should, make sure Secure Boot is on so that you are not able to go mess with it. Like, what are the guardrails that you can put in? It's not even a firewall thing, because they're going to the hardware directly; the goal is to not let them get to that level of access. If you can figure out how to make people use common sense, you will crack a mystery of the human condition that nobody has cracked yet, Suja, so get to work on that. The name says common, but it's not at all common. And Troy, any thoughts on your end in
terms of security practitioner takeaways here? What do you do
to set yourself up in the face of a threat
like this? Yeah, so I think fundamentals. Let's step back.
If you're a business, what is your minimum viable business?
What do you need to keep the business running so
you can prioritize on that? All of those systems should
have immutable backups stored elsewhere so you can quickly restore
them, get them functioning. When you get to the end-user workstation type of stuff, don't store anything on the local system, or if you do, at least have Box, OneDrive, whatever that cloud-based storage is. So, you know, maybe their laptops don't work anymore, but maybe they continue limping along with their phones while you fix things. Right?
Again, try to figure out what is it going to
take to keep this place running until we get everything
fixed, and take that approach. Don't just layer on more and more and more technologies and security stuff, as much as we'd love to sell you that; really take a holistic view around your entire enterprise and what it means. It should really be part of your risk management program or your disaster recovery program, and not strictly an IT problem. Resiliency becomes top of mind for everybody, along with security, because it's not a question of if but when it happens; how do you make sure that you are resilient? Moving on then to our final topic for this
episode. Something a little more fun, right? Anybody who tuned
in to the last episode will know we talked about
dumb rules you gotta put in place because somebody did
something dumb. Maybe "dumb" is mean, but you know what we're talking about; we all have dumb moments. The example that we were talking about specifically was, you know, the "do not eat" label on a silica gel packet, right? And you had to put that on there because somebody ate
that packet at some point. And so that got me
thinking about what are the dumb cybersecurity rules you've seen
instituted or that you maybe have instituted yourself because someone
did something maybe a little bit dumb, like those blank
admin passwords Troy had mentioned. So I'd love to give
everybody just a moment to share their story with us.
And let's start with Chris. You got anything for us?
I got a couple. I'll give you probably my biggest pet peeve: people, or companies, or organizations that actually fire people for clicking on links. I think that's dumb and stupid, and it's a failure of the education model in your organization, not a failure of the employee for clicking on something, because that's their job, to click on links. That's what they get paid for. They're opening resumes, they're opening POs, or whatever you're paying them for. No, don't do that. You're really blaming the user for your own poor security implementation. Ooh, I like that
one. Very good. It's kind of the opposite, right? Instead
of don't click on things, click on things and understand
that that's part of people's job. They're clicking on things.
You just gotta teach them, help them click on fewer
bad things. I like that one. Suja, what about you?
Any thoughts on dumb cybersecurity rules? The thing is, making sure the passwords are not 1-2-3-4, right? Making sure that people are able to think through it, or in those cases use passkeys, or whatever the other options are, rather than your birthday, your wife's birthday, whatever comes to mind. That is the main rule. Absolutely. And Troy, how about you? Well, I already
gave up my admin/admin one, so I'm not sure there. Matt, you know, one thing I'd like to sort of pivot on, because again, to Chris's point, the person's told they're wrong or they're dumb, and I get it, I do. Rather than telling them what to do, or telling them they're stupid for falling for it, I just try to walk them through it.
You know, it says it's from the bank. Well, do you bank with that bank? No. Well, then you can assume it's spam. Yes, I do. Okay, but is this the email address you have associated with your bank? No, it's not. Okay, it's spam. And really, just walk them through that. That common sense we talked about, it's not common for non-technical people, and they shouldn't be expected to have it. So we need to educate them to really work through the problem-solving around it. And usually by the second or third question they're like, I was a dummy, I shouldn't have called you. And I'm like, no, no, keep calling. That's
what I'm here for. No, absolutely. And I think it's
important to point out, right, that like every single one
of us, no matter how tech savvy we are, we
can fall for those kinds of things. Right. The whole
conversation that sort of sparked this topic last time was about how a very accomplished developer on NPM, somebody responsible for 20 packages, so somebody who knows what they're doing, got hit by a phishing email at the wrong time and clicked the link that they shouldn't have clicked. This can happen to anybody. And again, I like this idea of teaching people it's okay to, you know, call, ask questions, don't feel bad about it. There's no such thing as a dumb question. We all make mistakes. And if we can be that common sense for other people, then the world is a better place for it. All right, I thank you
all so much then. That's all the time that we
have for today. Thank you, Chris. Thank you, Suja. Thank you, Troy. And thank you to all the audience at home for spending time with us. Make sure to subscribe to Security Intelligence wherever podcasts are found. Stay safe out there, and practice that common sense a little bit.