AI-Driven Vibe Hacking Threats
Key Points
- The new “vibe hacking” technique lets threat actors use generative AI (like Claude) not only to write malicious code but also to make tactical decisions such as data selection and ransom amounts, enabling rapid attacks on multiple organizations.
- HexStrike AI exemplifies an emerging “agentic” cyber‑attack model where autonomous AI agents can conduct large‑scale intrusions with minimal human oversight, raising concerns that AI is lowering the barrier to sophisticated crime.
- A resurgence of Lapsus$‑style actors introduced an unconventional ransom demand strategy, highlighting how hacker groups are experimenting with novel extortion tactics beyond traditional ransomware.
- Remote Access Trojans (RATs) are increasingly favored by attackers as the primary malware choice, driving a surge in RAT‑related incidents and prompting urgent defensive focus.
- Experts from IBM and X‑Force discuss these trends, emphasizing that while AI tools can be weaponized, they also offer defensive potentials, and the cybersecurity community must adapt quickly to the evolving threat landscape.
Sections
- AI-Driven Cybercrime and RAT Threats - The episode opens the Security Intelligence podcast by exploring whether AI is simplifying cybercrime, examines new AI attack tools like HexStrike, unconventional ransom tactics from Lapsus$, and the growing dominance of Remote Access Trojans, with insights from IBM security experts.
- AI Arms Race in Cybersecurity - The speakers discuss how AI-driven defenses and attacks are now an inevitable, evolving arms race, mirroring traditional cybersecurity dynamics.
- AI Lowering Cybercrime Skill Barriers - The participants discuss how AI-driven tools simplify hacking, potentially disrupting ransomware affiliate economics and raising alarms about research code unintentionally becoming weaponized.
- AI Weaponization & Cybercrime Economics - The speakers discuss using the attacker’s AI against them and how AI‑driven tools could reshape the affiliate model of cybercrime, potentially eliminating human hackers.
- Debating Human Ransom in Cyber Extortion - Panelists argue against paying non‑monetary ransoms, likening them to blackmail and emphasizing that yielding only fuels continued attacks.
- Rising RAT Use Over Info Stealers - The speakers examine why cyber attackers are shifting from popular info‑stealer malware to Remote Access Trojans, linking targeting dynamics, recurring scams, and historical insights.
- Hype vs Reality in Malware Naming - The speaker criticizes media and analysts for sensationalizing and renaming existing info‑stealing tools such as RATs, while noting that ubiquitous mobile devices now provide an even broader attack surface.
- Fundamental Cyber Hygiene Over Sophistication - The speakers argue that strengthening basic security practices—patching, phishing awareness, device control, and a “human firewall”—is far more effective than pursuing advanced zero‑day or AI‑driven attacks.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=u-ZRZX2VZh4](https://www.youtube.com/watch?v=u-ZRZX2VZh4)
**Duration:** 00:38:05

Section timestamps:
- [00:00:00](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=0s) AI-Driven Cybercrime and RAT Threats
- [00:03:23](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=203s) AI Arms Race in Cybersecurity
- [00:07:36](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=456s) AI Lowering Cybercrime Skill Barriers
- [00:16:33](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=993s) AI Weaponization & Cybercrime Economics
- [00:21:06](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1266s) Debating Human Ransom in Cyber Extortion
- [00:25:42](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1542s) Rising RAT Use Over Info Stealers
- [00:28:50](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1730s) Hype vs Reality in Malware Naming
- [00:34:56](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=2096s) Fundamental Cyber Hygiene Over Sophistication
Are we making cybercrime too easy? Would you rather get hacked by a human or an AI agent? Would
you fire someone to stop a data leak? What's up with cybersecurity RAT problems? All that and more
on Security Intelligence. Hello and welcome to Security
Intelligence, IBM's new weekly podcast where we break down the most important cybersecurity news
with help from a panel of expert practitioners in the field. I'm your host, Matt Kozinski. This week's
stories: vibe hacking. Is AI making cybercrime too easy? HexStrike AI. Is this the
beginning of the agentic revolution in cyber attacks? Scattered Lapsus$ Hunters are back with an
unconventional ransom demand. And RATs, RATs everywhere: Recorded Future says Remote Access
Trojans are becoming attackers' malware of choice. Joining me today to break it all down. First up, if
you've ever been to the IBM technology YouTube channel, you've seen this man before. Jeff Crume, IBM
Distinguished Engineer, master inventor, AI and data security. Jeff, thanks for joining us. My
pleasure. Glad to be here. Thank you. Nick Bradley of X-Force Incident Command and one of the three
hosts of the Not the Situation Room podcast. Folks, do me a favor: when you're done with this
episode, go to their YouTube channel, click like and subscribe. They're fantastic. Nick, thank you
for being here. Thanks for having me. And the illustrious Suja Viswesan, VP of Security
Products. Suja, have you picked out a hacker name yet? I'm still working on it, Matt. Still working on
it? Well, I'm glad to hear you're thinking about it. All right. So let's get into our topics for today.
First up, Anthropic's August 2025 Threat Intelligence report introduces the world to vibe
hacking. Now we've all heard of vibe coding, right? You describe what you want your software to do to
a generative AI assistant, and it spits out the code for you. You don't have to write a single
line yourself. Vibe hacking applies this same workflow to cyber attacks. Now, the report that
we're talking about identifies a few different cases, but I want to hone in on one in particular,
because in this case, the threat actor didn't just use Claude Code to write malicious scripts. They
also used it to make tactical and strategic decisions, including asking it which data to
exfiltrate and how much of a ransom to charge for that data. At a certain point, it's almost like
Claude was carrying out the attack and this guy was just there to push some buttons. All in all,
the attacker hit 17 organizations before being shut down. So I want to start by just getting some
initial reactions to this development of vibe hacking. And I think I'll start by asking you, how
do you feel about this? We are learning as we go, but it's like any tool, right? Any weapon can be
used to protect as well as to attack people. And one thing, you know,
even with organized crime, it's a little bit easier to know what's coming than with guerrilla warfare, because
there is nothing organized about it. So this is a really, really tough one to catch. So just like
vibe hacking and vibe coding, we need vibe security as well. When
you think about it, you have red agents and blue agents. How do they learn from each other,
start fighting each other, and get there? That's the only way I see it. So you're thinking of
vibe security? Correct me if I'm wrong, it's kind of like an experiential approach to security.
Almost. Right. We're just kind of learning as we go and taking those best lessons and applying them.
Does that make sense? Absolutely. Because you saw that in the Anthropic report: they are looking at,
okay, what kind of prompts are being asked and what we can prevent before it becomes a problem. That
means the model is learning which ones it shouldn't be answering, because those can lead to
something bad. Jeff, any thoughts to add there? Yeah. So what you just talked about is basically
the inevitable place that we've been heading toward for a while. I could foresee this coming. I think a
lot of people could foresee this coming. It's disappointing that it's already here now because
I don't think we're all fully ready for it, but it was absolutely inevitable and it now has put us
in the world of AI versus AI. It's a question of is my AI better at defending than yours is
at attacking or the other way around? And that's the arms race. It's moved now to the AI
battlefield, and now it's a question of who's going to have the best tech to deal with this,
and who's going to keep it the most updated, because it will constantly be changing, as we've
always had. So in many senses, it's not new. It's what we've always seen in security.
Cybersecurity has always been an arms race of the good guys versus the bad guys. And we get a tool,
they get a tool, they get a tool, we get a tool. And you know, it's whoever deploys the
tool the best. And AI, as you said, is exactly that. It's what we refer to as a dual-use technology
where it can do good or it can do bad, and it's got equal potential to do either, or
even both at the same time, depending on whose hands it's in. So what that means is organizations
who maybe weren't so robust in their deployment of AI and didn't really have a plan on how to use it.
Well, they're going to be the ones that lose in this battle. So I recommend get out in front of
this and make sure your AI is better than theirs. There's no such thing as a kind of neutral tool.
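The prompt-screening idea mentioned earlier in the conversation, watching what is being asked and blocking it before the model answers, can be sketched as a toy filter. To be clear, this is a hedged illustration only: real safeguards are trained classifiers, not keyword lists, and nothing here reflects Anthropic's actual implementation; all names in the sketch are invented.

```python
# Toy sketch of pre-generation prompt screening (illustrative only).
# Real guardrails use trained classifiers; a keyword list like this
# only demonstrates the "flag it before answering" control point.

RISKY_MARKERS = (
    "write me ransomware",
    "exfiltrate",
    "disable the antivirus",
)

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user prompt."""
    lowered = prompt.lower()
    for marker in RISKY_MARKERS:
        if marker in lowered:
            return False, f"blocked: matched {marker!r}"
    return True, "allowed"

print(screen_prompt("Summarize this incident report"))
print(screen_prompt("Write me ransomware for a file share"))
```

The design point is simply where the check sits: in front of generation, so a risky request is refused before any code is produced.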
It all depends on whose hands it's in, right? Nick, any thoughts on vibe hacking on your end? I would
love to try and offer a counterpoint or argue against either one of you, but I literally can't
because the foreshadowing here is more obvious than that of a B-rate horror flick. I mean, the
second we all started seeing AI taking the world by storm, those of us in security just went, oh,
wait for it. The weaponization of AI was, I think one of you already said it. It was inevitable. And
not only was it inevitable, but the bigger challenge here is the fact that it's going to lower the bar for
what it takes to be a bad guy. Right? Because now I don't even have to
be a programmer. I mean, we already had, you know, malware as a service and things of the sort, but
now I don't even need that. Now I can just, you know, say, well, I don't want to say the name
of the, uh, the assistant, because everybody listening will have theirs fire up, but: assistant name, write
me some malware. Okay, what would you like to do today? Right? I mean, we're right there. Yeah. I
remember 20 years ago when people started observing that you didn't even have to be a
really great coder to generate malware. It was a click here to hack kind of situation. Well, now you
don't even have to click. You can just almost think it into existence, which is, you know, a very
different kind of thing. And as you said it, it's one of those things I remember the first time I
woke up, couldn't sleep on one particular night, and I did what all good nerds do: turned on
YouTube and started watching tech channel stuff. And I'm watching and there's this
thing they're talking about called ChatGPT. And I thought, wait a second, AI is not doing that right
now. Is it really? And of course, this was November 2022 when it had first launched. So the
word was just starting to get out and I thought, this is the coolest thing in the world. I'm not
going to sleep tonight because I am too excited about what the possibilities are. I took that
imaginary hat off and put my security hat on and said, oh my gosh, I'm not going to sleep for a few
nights. Because now the potential of this is mind-boggling and we
haven't scratched the surface yet. So in terms of good or bad, I can't see this going bad at all.
Yeah, right. Right. What could possibly go wrong? You know, I'm glad you bring up that issue of de-skilling
sort of, Nick. Right, making it even easier. And this is a trend that we've seen for a
long time. Right. That the trend in hacking has always been it gets easier and easier to be a
cybercriminal. And I'm sort of wondering, actually, on that note, do you think that this development
is going to put any pressure on that affiliate model that we're so used to seeing that Jeff
mentioned, right, where if you're somebody who doesn't really have a lot of skills, you got to go
find a ransomware gang, rent ransomware from them or whatever, and give them a cut of your profit.
Do you really need malware as a service? If you could just ask an AI to come up with a scheme for
you. I guess I'm just wondering if this affects the economics of cybercrime. Any thoughts on that?
It could eventually. I don't think we're at that point yet, but that's just a turn of the page. It's
another one of those inevitabilities. I read an article in The Register just today that was
talking about some researchers that came out with their version of this just before the version you
just cited, and they were doing it as a research project. They didn't intend it to be released. And
then all of a sudden they checked it out on VirusTotal and found out that it was now being
discovered by other people. And so even though they intended some good from it, you know,
this is where it goes. And the ability for this thing to get smarter and smarter. The version
they had, they said was polymorphic, which we've had polymorphic viruses for ages. But just imagine
the speed if literally every instance of this looks different than every other instance of it,
then, oh, good luck detecting it. Uh, not impossible, but certainly the degree of difficulty just
went way up, and AI would be really good at hiding. Yeah, I feel like we're watching an
episode of Star Trek with the Borg. Right? They've adapted. Absolutely. Yes. Or an episode of, uh, Black
Mirror that keeps me up at night. Let's move on, then,
to the next story for this week, talking about HexStrike AI and how it might help attackers launch
their own armies of AI agents. Right. So HexStrike is positioned as a legitimate security tool. And
just like those LLMs we were just talking about, it's an offensive security framework that
serves as an orchestration and abstraction layer to control large numbers of AI agents. The idea is
that a framework like this would help you automate red teaming or penetration testing by
getting a bunch of agents to operate these tools automatically for you, which, you know, obviously
there's some real value to a technology like that. But of course, again, hackers got their hands on it.
Threat actors saw it and thought, how can we use this for our own gain? And the dark web forums now
are just full of chatter about HexStrike and how it can be weaponized and used to marshal their
own kind of evil AI agents. And people are even using it to sort of start developing exploits
for some difficult vulnerabilities that might have taken a lot longer to develop exploits for.
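To make "orchestration and abstraction layer" concrete, here is a toy sketch of the pattern: a dispatcher hands tasks to interchangeable agent workers behind one common interface. This is purely hypothetical. It is not HexStrike's code or design, and the class names (`Agent`, `ReconAgent`, `Orchestrator`) are invented for illustration.

```python
# Hypothetical sketch of an agent-orchestration layer: a dispatcher
# routes tasks to interchangeable "agents" behind one interface.
# Illustrative only; this does not reflect HexStrike's actual design.

from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def run(self, task: str) -> str:
        """Carry out one task and return a report."""

class ReconAgent(Agent):
    def run(self, task: str) -> str:
        return f"recon report for {task}"

class ScanAgent(Agent):
    def run(self, task: str) -> str:
        return f"scan results for {task}"

class Orchestrator:
    """Abstraction layer: callers never touch a concrete agent directly."""

    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}

    def register(self, category: str, agent: Agent) -> None:
        self._agents[category] = agent

    def dispatch(self, category: str, task: str) -> str:
        return self._agents[category].run(task)

orch = Orchestrator()
orch.register("recon", ReconAgent())
orch.register("scan", ScanAgent())
print(orch.dispatch("recon", "lab-host.example"))
```

The point of the abstraction is that the operator only talks to the orchestrator; swapping in more or smarter agents requires no change to the calling code, which is what makes this pattern attractive for automating red teaming at scale.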
So I'm sort of wondering if this, you know, you always have to be careful with the way these
things are covered in the media, because we hear a lot of sort of apocalyptic talks sometimes. But
I'm wondering if this is an important moment in the weaponization of AI agents. Jeff, let's start
with you. Any thoughts on this one? The short version is: same song, second verse. We just talked
about click here to hack. Well, this is yet another tool that accelerates that and enables
that kind of capability. It reminds me again, if you've been around in this space long enough and
gathered the gray hairs that I have, then in many senses, the things that are new don't
seem so new. They all seem like variations on a theme. And I immediately, when I saw this, harkened
back to a technology that probably will go over most people's heads. They don't remember this, but
it was called SATAN. SATAN was an acronym, the Security Administrator Tool for
Analyzing Networks, and it was one of the first
network vulnerability scanners to come out. And when it came out, it was highly controversial. This
was probably 25 years or so ago, maybe even longer. And the idea was that the tool was
released as a way to test your network to see if you had vulnerabilities, because the idea is you
want to find them before the bad guys do. And so it was kind of the predecessor to all the network
vulnerability scanners and all these other kinds of things that we had. But again, highly
controversial. A lot of security people said you just automated the attack process for the bad
guys. This is the end of the world. Okay. Well, I don't know. We're still here. I think we're still
here. Uh, this is not an alternate universe. We've continued to exist. So part of me gets worked up
when I see this, because we just made the job harder for defense. But part of me also says, yeah,
but we've been here before. Yes, this one is more difficult. It always gets more difficult as we
move forward in time. Isn't that part of our lives, though? Jeff is surviving apocalypse after
apocalypse. I mean, absolutely, absolutely, yes. Here's one thing I'll say that everyone who's
ever predicted the end of the world or the end of technology, they all have exactly one thing in
common. You know what that is? They've all been wrong. So I hope that continues to be the case. I'm
going to be an optimist. We still haven't disproven simulation theory. We could all be in a
computer. But, you know, I'll leave that for a different episode. Um, I also sort of feel like if
you name your tool SATAN, you're asking for trouble. But again, I think a lot of people
kind of fall into that. Oh, well, it's a double-edged sword, right? You can't limit what you
do with the idea in mind that, oh, the bad guys might use this for bad things, because if you do,
then you just stop doing everything and then they'll be the first movers on it, Nick. They'll be
the ones that do it before we do. So I don't think the answer is don't do this stuff. I
think the answer is do it and do it better and faster than the bad guys do. It's just that it's a
rapid evolution. We had a lot of time before now. We had time for regulations and everything to
catch up. With data, with social media, we saw that we didn't actually have time. A lot of damage was
already done and now we are learning. I'm just hoping we are reacting much faster because it's
still going to be reaction. Because as the technology comes in, we have to react much faster,
because the reaction time needs to be shorter and shorter so that we can get better at it. And I am
an optimist, just like Jeff. I do believe that we'll see hell before we see heaven. But it's going
to be a journey. Definitely a journey. So we've got two optimists on the panel. Nick, where would you
place yourself on that? I am not an optimist or a pessimist. I consider myself a realist that plans
for the worst and hopes for the best. You know what? I think it's a good approach in
cybersecurity. Um, I want to play a quick game with you folks real quick. A little round of
Would You Rather? Right. We've been talking a lot about AI attacks and whatnot. And I want to ask
you folks as defenders, if you had to go up mano a mano one on one against either a human hacker or
an AI agent. Who are you picking and why? Let's start with you. Who would you rather get attacked
by? I was leaning towards human, but I changed my mind to AI agent because it is learning from a
lot of humans. And humans are completely unpredictable. And when you're learning from so
many of those humans, I have no idea what it's going to do. So yes, I'm going
to have a tough time dealing with it. I would rather deal with one human. At least it's very easy to
figure out. That was my kind of thought, too. Nick, what about you? I'm sticking with going
against the human, not the AI. Because I guess maybe I'm just too much of a sci-fi junkie, and I've seen
too much terrifying AI and the things that it can do, and as you say, learning from humans. So yes, it
learns from us, but then it learns how to do what we do, but better and faster. And so at
least with a human, they have to eat, sleep, bio breaks, things like that. The AI,
to sound like a nightmare, never sleeps, it never stops. It will always keep coming. I'll go
ahead and just. Just stir the pot and say the opposite. Just because it's more interesting if I
do, um, I'm going to say for the moment I might choose AI. And the only reason is because AI has a
problem with hallucinations. So if I could trick it into doing the wrong thing, I might even be
able to turn it back on itself and point the gun back in the other direction. I don't know if I
can, but I'm going to hope that I can. And, uh, but now that's a moment in time. If you ask me this, a
couple of years from now, I might not say the same thing, uh, because the other points that were made
were absolutely valid as well. Well said. I can agree with that, because I have dealt with enough
AI hallucination, and my other favorite word, confabulation, that, oh yeah, I could see where
you're coming from with that. There we go. I would love to see a kind of meta weaponization. Right.
You weaponize the attacker's AI against the attacker. That's a fantastic little move there.
It's kind of one of those Elmer Fudd moments: turn the gun barrel back on him and let him shoot. We
get a lot of classic cartoon references on this show. I'm very happy about that. Uh, but, you know,
something else just occurred to me, and I know I just brought this up in the last conversation, but
I also want to talk about I want to talk about the economics of cybercrime again, a little bit,
because it almost seems to me like this exerts another pressure on that affiliate model, but from
the opposite direction. Right. In terms of your cybercrime gang, do you need human affiliates if
you can just outsource your work to AI agents? Right. So it's almost like I'm looking at that,
that vibe hacking and thinking it lets somebody who is not that good, uh, go out and do
their own attacks without a gang. And I'm almost looking at this HexStrike AI stuff as if it's like
it lets a gang do its attacks without affiliates. But then you still need an affiliate group that's
going to manage the LLM and the AI to do the affiliates' work. I think it's a real tragedy that
we're talking about putting hackers out of work, that AI is taking their jobs. And I think we should rise
up in defense of that. Yeah. No, no, I'm not going to lose a moment's sleep over that if they get
replaced by agents. Let me be clear. I'm not. I don't feel bad for them. I'm just, you know, I'm
just kind of wondering how things are going to work on the dark web, if it's going to be dead
internet theory for them too. Everyone's a bot now. You know, we're using the word agents now. So now
all I can think of is Mr. Anderson. Yeah. Yeah, exactly. On the next episode,
we'll talk about how realistic The Matrix is. Prepare. We're in it. Oh.
Scattered Lapsus$ Hunters are back in the news, this time with a new kind of extortion technique
firing people. So this unholy collaboration between three of the most notorious cybercrime
gangs. Today, we're talking, of course, about Scattered Spider, "Lapsus$ " with a dollar sign. And
ShinyHunters popped up a couple weeks ago with a Telegram channel that claimed they were working
on a new ransomware strain, and they are now back because they claim to be in possession of
internal Google data, and they're threatening to leak it unless Google terminates two specific
security employees. We don't know who these employees are, and I'm frankly skeptical myself
that Google would ever even do that. So I want to start by asking you, Nick, what's the
thought process here? Do they think this is actually going to work or is this something else?
This is such a deliciously diabolical story. And I really
think this is someone overplaying their hand. I think they they feel like they have a stronger
hand than they do. Uh, and so, to put a little backstory on this, the reason that I've
enjoyed this story so much is that on our other podcast, Not The Situation Room, we've
already done two episodes on this so far. One when it first debuted that they were going to
create their little triumvirate of power, and then a second one when they decided they were
going to try to strongarm Google. So a lot of this plays out from the, uh, the Salesforce
Salesloft breach, right. Because that's allegedly where the data they have from Google came from.
And Google had, I think it was, four researchers put out a research paper on
this and on Shiny. Well, we got all kinds of different names for 'em. Scattered Lapsus$ Hunters,
Shiny Happy Hunters. I mean, just because the more we talk about it, the more we kind of just talk
about it however we want. However, oh, and the shiny thing, if you missed that part, that's a Pokemon
reference. You'll have to look that up later. So. I got that one. Well, ShinyHunters, just
think about it. But anyway, they supposedly have this data from Google, and Google has done the
research. So it's it feels like it's a detective story. Right? The bad guy has got info or dirt on
the good guy. The good guy is researching the bad guy. And so now they're trying to strong arm and
say, stop investigating us or we're going to, you know, release the
kraken on you. We'll let all your data loose. Or fire these two people and stop investigating us
right now. And I don't know why they think they have that level of leverage. I mean, we have seen
so many times in the past when a client's data, a company's data, does get compromised, does get
leaked, makes its way to the dark web, and it's out. It's done. You're not going to shame me
into doing more, because what happens when I say, okay, I'll capitulate, I'll fire these two people? Then
you go, is there anything else you need me to do? Because that's what's going to happen. Absolutely.
I'm with Nick on that because there is no end to blackmailing, right? There is no end to it once you
give in. So the only way we have to put a stop to it is like, accept the shame, whatever it is, and
then go from there. We saw it with the credit report companies, from which very, very valuable
data was leaked. And then that's it. And we see more and more companies
stopping paying ransom because it doesn't work. Uh, right now there is an
automotive company going through this as we speak. Right. Production is stopped. These things are
happening. But I don't believe the companies are going to give in, and they shouldn't. Jeff, I know
you've done like a video before on whether or not you should ever pay a ransom. So I'm wondering
about your thoughts on paying a kind of human ransom. Yeah, sure. Exactly. My thought on this is
this is essentially a different kind of ransomware, because ransomware is basically
an extortion attack. And in this case, rather than asking for money, they're asking for a particular
action. So I guess at least in this case, there's going to be no bitcoins exchanged in
this. But the reality is, this is mafioso-style stuff. You know, it would be a shame if this
stuff got out. You know, if a window got broken. That's the kind of ham-handed
stuff they're doing here. I feel like it's not likely to succeed. I do agree with what
Nick and Suja have said that, you know, I hope they don't give in. I can't imagine that they
would. Because where does this end? I mean, if you give in on one
of these kinds of cases, every little, you know, hacker collective is going to start demanding
things of every single company. It's a really bad precedent and it never ends. I do think when
it comes to paying ransoms, look, I know people have business decisions they have to make. And
I talked to a CISO of a hospital one time, and he said, when it comes to ransomware, we have three
priorities: patient safety, patient safety, and patient safety. I get it. Okay. But
I think in general, the problem with paying is you make yourself a welcome soft target for the
future. So okay, this person paid, and the bad guys are going to see them as a sucker.
You just painted a bigger bullseye on yourself. You might have dodged this bullet
only to catch a bigger one later. And so this one doesn't seem like a really smart idea.
I agree with with what Nick said. I think they've overplayed their hand. I think they're going to
find that out. Um, and, you know, release the info. We'll have to deal with it and
see if they have anything. Yeah, that makes a lot of sense to me because again, I for myself, I could
not figure out what they were thinking. But I think, Nick, your theory that they've just
gotten too big for their britches. They really think they have more leverage than they actually
do. That makes the most sense to me. And so the next thing I was going to ask, and I kind of feel
like I know the answer already from everybody, but could you ever foresee a ploy like this actually
working, or is this just complete nonsense? Any thoughts on that? I think in this context, no.
I think it could happen if it were on a much greater scale. Here we're talking about two unnamed employees at a tech company. But what if we were talking about the CEO of the company in some sort of ransom extortion attack? In that case, the CEO is making the decision, and the CEO says, okay, whatever you say, we'll do it. If we're talking about a head of state, that's a wholly different kind of deal. But at this level, with two employees, it just doesn't seem like it benefits them to follow through. I was leaning in the other direction, to be honest. I can definitely see what you're saying, but I think it would be easier to make this work on a smaller company that basically has no way to survive these ransom attacks. Like, we've got to capitulate or we've got to close our doors. So it's, Susie, Bobby, I'm sorry, we've got to let you go, or we close the doors and bankrupt the whole company. At the end of the day, like Nick and Jeff mentioned, it depends on what the blast radius is. Are our lives in danger? It's very easy to say, hey, we don't negotiate with terrorists. But if it's your children, if it's you, you're going to make very, very different decisions. So in some of those cases it will change based on what it is. For the most part, I don't believe it works, because once you start negotiating, there is no end to it. Yeah.
And that makes a lot of sense, right? Any time you give in to those demands, you get a target on your back. You see the same thing even in interpersonal scams: people who fall for it once get targeted again and again and again. Same thing for an organization.
Let's move on then to our final story for today: Recorded Future finds attackers are shifting away from info stealers and using more RATs. In Recorded Future's H1 2025 Malware and Vulnerability Trends report, they found that the use of Remote Access Trojans was increasing quite significantly in the first half of the year, and that info stealers, which have been quite popular for the past couple of years, were a little bit on the decline. So I'm wondering, first off, what kinds of factors might be fueling a shift like this? I'll throw it to you first, Jeff. Any thoughts on why we might be seeing a shift like this right now? Yeah, again, I feel like I'm the old man in the room talking about back in my day, because literally, back in my day, 25 years ago, I wrote a book called What Hackers Don't Want You to Know, and I wrote about RATs back then. They were pretty new at that point, and I thought this stuff could be a big deal. And I'm shocked that, good news and bad news, good news for my publisher: we still haven't solved the problem. The bad news for us is we still haven't solved the problem. So here we are. What's old is new again, and I look at RATs as just a more sophisticated, more capable form of info stealer, because now I'm not just stealing your passwords and your keystrokes and that sort of thing. I'm turning on the camera, turning on the audio, recording everything. I'm stealing your image, your likeness, your everything. Maybe I make a deepfake out of that; I'm sure somebody is going to consider that possibility. I'm getting material for extortion. So yeah, this is a bigger, badder version of info stealing, where I'm getting more than just info. I'm assuming that might be the case. Or maybe they just got bored with the other stuff. Who knows? Nick, what were you thinking on this one? I feel like with reports like this, and first off, I'm a fan of Recorded Future, so this isn't a negative on them, this is just a general observation, we're having a battle of naming conventions, right? It's about what we're calling things. In this case we're saying, oh, RATs are taking the place of info stealers. But just like Jeff said, a RAT is just an info stealer on steroids, right? Another one that falls into this category, and I don't know if we're going to talk about it on another episode, is encryption-less ransomware. That's another naming-convention problem, because encryption-less ransomware isn't ransomware. Encryption-less ransomware is a stealer. Ransomware is specifically called ransomware because it encrypts the data. So if we're not encrypting, it's something else, right?
And so with an info stealer and a RAT, since RATs are also info stealers, it's easy to put these reports together and say the decline of this, the rise of that, when you're really playing a shell game with the naming conventions. At least that's my opinion. That makes a lot of sense, right? It's kind of like, you know, if you kidnap someone and call it real-world ransomware. It's not exactly the same thing if you think about it. If you're trying to publish a report, or you're a media outlet, you've got to do something to get more clicks, more eyeballs. So call something something else, come up with a new jazzy term, do something to get people's attention: everything you knew, now it's a thousand times worse, it's the end of the world. Yeah, we told you that last week, but we really mean it this week. It's really real this time. The realest real we've ever realed. Another apocalypse? Yes. Suja, what do you think? I agree with both of them: it's another form of info stealing. But the world has changed, right? In the sense that everybody is living on a mobile phone these days, and transacting their most sensitive things there, from dropping off and picking up the kids, to your bank accounts, to your most intimate private details. Everything is available in there, so it becomes a much easier target. That's why I said earlier, we talked about, hey, we won't negotiate with terrorists. That can happen at a large organizational level, but think of it as an individual who can be blackmailed in a small-time way into doing things they are not supposed to do, or into giving up money and everything else. These things can happen because most people don't really understand technology, but they are equipped with technology, and that is what they live with day in and day out. So this has become very, very dangerous. I'm always educating my parents: okay, don't click that link, and even if it comes from me, check with me before you click on it. All these things, because entertainment, life, everything is now in a mobile phone. It's not like before, when people went to work on a laptop and it was a small part of the population; now everybody is using tech, and with AI it has become very easy to infiltrate. That is why the info stealing is happening in a different way. Just like you said, now you are able to get their biometrics, you are able to look at the most intimate details they didn't have access to before. So as we've all kind of said here, and this makes a lot of sense to me, the thing the RAT gives you that the info stealer doesn't is that you can do more than just steal; you can steal different kinds of things. And so I'm wondering how this changes how you as defenders relate to the threat landscape. If you know that RATs are on the rise instead of info stealers, are there changes you'd be making, or changes you'd recommend people make, to defend themselves against these things? We'll start again with you, Jeff. Any thoughts on that? I think
a lot of the basic blocking and tackling is still just as relevant as it has been. Going back to that book I told you about, the interesting thing is that even though the book's 25 years old, and I'm not trying to sell copies because it's hard to even find anymore, I can't get it. Shameless plug away, Jeff. Yeah, exactly, okay, all right, if you can find it. It was written on parchment. But anyway, it's that old, and here's the thing: 90% of what I wrote there is still true today, which, again, is the good news for the publisher and the bad news for us, because I was writing about the things we need to be doing, the ways that people compromise systems, and how we prevent them from doing that. And yet here we still are. So obviously, when it comes to RATs or info stealers: keep your system patched, keep your antivirus and malware scanning in place, and look for behavioral detections and anomalies in the network, because these things will tend to exfiltrate information. We have the tools to do a lot of this kind of detection, EDR tools and so on; we're just not applying them all the time. And of course, sometimes we are applying them and doing everything perfectly, and a zero day comes out that somebody takes advantage of. That means we've got to find ways to bring those windows down, to patch faster, and for the vendors to respond with patches faster. So it's a lot of the same stuff. There's not going to be some brand-new technology that you sprinkle over everything so that all of a sudden you don't have to deal with RATs. Here's your RAT trap, just put some cheese in it and it will kill all the RATs: it's never going to be that simple. I like that you brought this back almost full circle, because we were talking at the very beginning about how security is inherently dynamic. It's always an arms race, right? And like you said, this is another situation where we're in a bit of an arms race, and even if there are things we're doing perfectly, sometimes there's still a zero day out there and you have no choice but to react to it. So I like that we have a little full-circle moment like that. Suja, how about you? Any thoughts on this? I think I've mentioned this before: it's very much like a pandemic. You need basic hygiene. You can wait for a vaccine, but at the same time you have to wash your hands; basic hygiene is very, very important. Look, at IBM we have gone passwordless, because if you don't have a password, it cannot be stolen. So you need certain basic hygiene. Are you making sure your data is secure? Because it's not about if, it's about when. When it happens, is your data secure, so that you are able to shut things down and not make it accessible to people? Are you keeping your secrets in a vault instead of out in the open? Because they could be in Git; people put all those things there. Previously you had to go mine for it, you had to search for it. Today, with agents, it has become very easy to go from zero to vulnerability in minutes, not days. Previously it took days. That is why, in the way we are building products, we're asking: are we making sure our products are built with resilience? Are we making sure the basic hygiene is there, so that these zero-day vulnerabilities become less and less and less?
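The secrets-in-Git exposure Suja mentions can be checked mechanically before anything is committed. Below is a minimal, stdlib-only sketch of such a check; the patterns are illustrative stand-ins, and real scanners like gitleaks or truffleHog ship far larger, tuned rule sets plus entropy analysis:

```python
import re
from pathlib import Path

# Illustrative patterns only; real tools use hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_snippet) pairs for one file's contents."""
    findings = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((rule, match.group(0)[:40]))
    return findings

def scan_repo(root: str) -> list[tuple[str, str, str]]:
    """Walk a checkout and flag files that look like they hold credentials."""
    results = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for rule, snippet in scan_text(text):
            results.append((str(path), rule, snippet))
    return results
```

Run as a pre-commit hook or CI gate, a check like this turns "someone left a key in Git" from something an attacker's agent finds in minutes into something the build rejects up front.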
Nick, what about you? Any thoughts there? I want to offer some astounding revelation that nobody's gotten to yet, but I can't, because the problem has already been stated. We don't need a new zero day. We don't need some brand-new, complex, AI-developed exploit to take advantage of something, because, and I don't want to say "we" because it's not we, the basic blocking and tackling is still getting in our way. We're missing the forest for the trees. You've still got employees clicking on phishing emails. You've still got employees bringing devices to work they aren't supposed to have, loading software they're not supposed to have, installing cracked software. That's just letting people right in. And I could keep going down the list: not patching all of the things, the basic things we think everyone should just know, they're still there. I mean, Salesforce Salesloft, as far as I know, was not even a compromise done through any sophisticated attack. It was social engineering. So what are you going to put in place to stop social engineering? A firewall for the human mind. Yeah.
You know, it goes back to something Suja has said before, which is that it's often as much about human psychology as it is about your actual technical controls, right? And that's the slippery thing: you can't put access management tools on your employees. There's always going to be the person who's going to do stuff. You can't stop them from clicking on things, which I think is what you've said before, Nick: they're always going to keep clicking on things. You get a virus, you get a virus, you get a virus. We all get viruses. You can't make anything foolproof, because they keep making better fools.
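Because people will keep clicking, much of the defense shifts to catching what happens after the click, which is the behavioral anomaly detection Jeff described: RATs tend to exfiltrate, and a host that suddenly ships far more data outbound than its own history predicts stands out. A toy, stdlib-only version of that check, with an illustrative z-score threshold (real EDR/NDR products correlate many signals, not just byte counts):

```python
from statistics import mean, stdev

def flag_exfil(baseline: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag a host whose outbound byte count is far outside its own history.

    baseline: daily outbound MB for this host over recent weeks, assumed clean.
    threshold: standard deviations above the mean that count as anomalous.
    """
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today > mu  # flat history: any increase stands out
    return (today - mu) / sigma > threshold
```

A workstation that normally pushes around 50 MB a day and suddenly sends 900 MB trips the check, while ordinary day-to-day variation does not.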
Yes. I know there was some news today saying that cybersecurity training has no value, especially when it comes to phishing, because people just click automatically, and the phishing still happens. I think that is why it becomes very important to automate some of these things so that the humans don't have to be perfect; even if they inadvertently click, they're not compromised. That's why I talked about passwordless, because that's one way of making sure you're not typing a password even when you click, because the system asks for something else. Those are all the things we can do to stop it. Because the other part is, we have lower and lower attention spans. Jeff, don't you think? Even while we're talking, I have to check my phone, what's happening here or there. What did you say? I forgot already. Exactly. And I clicked the link! Oh my God, what do I do?
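Suja's passwordless point comes down to challenge-response: what the user presents is bound to the site, so there is nothing to type and nothing a lookalike page can replay. A toy sketch of that property, using an HMAC with a per-site key as a stdlib-only stand-in for the public-key credentials real WebAuthn uses:

```python
import hashlib
import hmac
import secrets

# Toy model of phishing-resistant login: the response is bound to the origin
# the device is actually talking to, so a lookalike site harvests an answer
# the real site will reject. Real WebAuthn uses per-site public-key
# credentials; an HMAC key stands in here to stay stdlib-only.

def enroll() -> bytes:
    """At registration, the device mints a credential bound to one site."""
    return secrets.token_bytes(32)

def device_sign(credential: bytes, origin: str, challenge: bytes) -> bytes:
    """The authenticator mixes the observed origin into its response."""
    return hmac.new(credential, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(credential: bytes, expected_origin: str,
                  challenge: bytes, response: bytes) -> bool:
    """The real site checks the response against its own origin."""
    expected = hmac.new(credential, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Even if a user is lured to a lookalike domain and "logs in" there, the response the attacker captures fails verification at the real site, and there is no password to reuse anywhere else.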
Okay, that's all the time we have for today. Thank you so much, Jeff, Suja, and Nick, for joining us. Thank you, listeners and viewers, for hanging out with us. Make sure to subscribe to Security Intelligence wherever podcasts are found. Stay safe out there, and remember: stop clicking on things.