Weaponized AI Agents Threat Landscape
Key Points
- Attackers can evade keystroke‑based detection by randomizing the timing between key presses, a simple tactic that should have been implemented years ago.
- Recent proof‑of‑concept attacks demonstrate malicious AI agents: Datadog’s “CoPhish” exploits Microsoft Copilot Studio to covertly harvest OAuth tokens, and Palo Alto’s “agent session smuggling” hijacks agent‑to‑agent communication to issue hidden malicious commands.
- These incidents illustrate a broader trend where legitimate AI tools are repurposed for illicit activities, highlighting a deepening flaw in the trust model of AI‑enabled platforms.
- Experts predict a surge in similar AI‑driven attacks, citing earlier examples like Gemini and EchoLeak, and warn that failing AI governance will leave organizations increasingly vulnerable.
- The podcast also touches on related security concerns, including social‑engineering schemes that manipulate stock prices and the rapid growth of bug‑bounty payouts.
Sections
- Evasion Tactics and Malicious AI Agents - The podcast hosts discuss simple keystroke‑timing evasion, the AI governance gap, and recent proofs‑of‑concept such as Datadog’s “CoPhish” attack that exploits Microsoft Copilot Studio to stealthily steal OAuth tokens.
- Social Engineering of AI Agents - Panelists explore the emerging risk of agent‑to‑agent manipulation, emphasizing the need for finely scoped constraints (“blinders”) to prevent malicious actors from socially engineering autonomous AI systems.
- Treating Agents Like Human Identities - The speaker highlights that attackers prioritize easy access via valid credentials and urges organizations to apply the same identity and authentication controls to machine agents as they do to human users, ensuring proper scoping and least‑privilege access.
- Innovation Outpaces AI Governance - Rapid AI deployment driven by business innovation repeatedly outpaces the development of governance frameworks, creating a widening gap similar to previous cloud adoption cycles.
- Shifting Security Mindset to Enablement - A discussion on transforming security culture from a gate‑keeping stance to a collaborative, shared‑responsibility approach that enables secure innovation rather than simply denying risky actions.
- Reinventing Security Training for AI Threats - The speaker argues that traditional phishing‑focused training is obsolete against AI‑generated attacks and calls for a gamified, risk‑centric program that makes security everyone’s responsibility and serves as one tool among many in a comprehensive defense strategy.
- Slow‑Keystroke Evasion Tactics - A non‑technical host asks a security expert whether typing payload characters one by one with random delays to mimic human keyboard behavior constitutes a clever bypass of behavioral detection systems or a simple, long‑overdue technique.
- Rise of Humanized Malware - The speaker argues that attackers are increasingly deploying “humanized” malware and automated red‑agent tools to exploit organizations that fail to adopt multidimensional risk assessments like MFA, treating cybercrime as a business driven by ROI.
- Compromised Credentials Fuel Market Manipulation - The speaker argues that stealing passwords to hijack brokerage accounts provides attackers a fast, high‑ROI method to manipulate markets, making it more appealing than prolonged ransomware campaigns.
- Beyond MFA: Behavioral Analytics Future - The speaker argues that while MFA is essential, attackers exploit weak points and MFA fatigue, so the industry should shift toward behavioral analytics and shared signal standards to improve risk evaluation.
- High‑Stakes Bug Bounties Explained - The speaker outlines why only exceptionally hard, state‑level exploits earn massive bounty payouts—driven by security concerns, publicity hype, and the grind required—while noting that AI‑generated bug submissions are flooding programs and that true earnings remain limited for most hunters.
- AI‑Driven Automated Purple Teaming - The speaker describes how AI can autonomously perform red‑team activities, learn exploits on the fly, require blue‑team agents to counter AI attacks, and still need human review for complex edge‑case vulnerability chaining.
Source
- [https://www.youtube.com/watch?v=iaZS1jer8MY](https://www.youtube.com/watch?v=iaZS1jer8MY) (Duration: 00:41:22)
Timestamps
- [00:00:00](https://www.youtube.com/watch?v=iaZS1jer8MY&t=0s) Evasion Tactics and Malicious AI Agents
- [00:03:48](https://www.youtube.com/watch?v=iaZS1jer8MY&t=228s) Social Engineering of AI Agents
- [00:07:12](https://www.youtube.com/watch?v=iaZS1jer8MY&t=432s) Treating Agents Like Human Identities
- [00:10:40](https://www.youtube.com/watch?v=iaZS1jer8MY&t=640s) Innovation Outpaces AI Governance
- [00:14:46](https://www.youtube.com/watch?v=iaZS1jer8MY&t=886s) Shifting Security Mindset to Enablement
- [00:18:15](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1095s) Reinventing Security Training for AI Threats
- [00:21:18](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1278s) Slow‑Keystroke Evasion Tactics
- [00:24:39](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1479s) Rise of Humanized Malware
- [00:28:06](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1686s) Compromised Credentials Fuel Market Manipulation
- [00:31:12](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1872s) Beyond MFA: Behavioral Analytics Future
- [00:34:25](https://www.youtube.com/watch?v=iaZS1jer8MY&t=2065s) High‑Stakes Bug Bounties Explained
- [00:38:42](https://www.youtube.com/watch?v=iaZS1jer8MY&t=2322s) AI‑Driven Automated Purple Teaming
Full Transcript
This seems like a really simple way to evade detection.
You put in a random time in between keystrokes. This
should have been done 10 years ago. Right? But on
the other hand, why is the detection software looking at
speed of key inputs as a metric to determine human
versus not human? All that and more on security intelligence.
Hello, and welcome to Security Intelligence, IBM's weekly cybersecurity podcast
where we break down the most interesting stories in
the field with the help of our panel of experts.
I'm your host, Matt Kaczynski. And joining me today, Chris
Thomas, AKA Space Rogue, X-Force Global Lead of Technical
Eminence and part of the Not the Situation Room podcast,
and Sridhar Muppidi, IBM Fellow and CTO of IBM Security. Thanks for being
here with me, folks. Today we are talking about the
AI governance gap, malware that acts like a person, how
social engineers are manipulating stock prices and ballooning bug
bounties. But first, let's talk about malicious AI agents. Now,
there's been a lot of talk about how attackers could
weaponize AI agents, and we're finally starting to see it
happen for real, or at least in proofs of concept.
And two in particular came out last week that I'd
like to talk about. The first is from researchers at
Datadog, who identified a technique they call CoPhish because it
takes advantage of Microsoft Copilot Studio. Attackers can basically use
it to build malicious AI agents that secretly steal OAuth
tokens in the background. The second was from researchers at
Palo Alto who reported on what they call agent session
smuggling. This uses the agent-to-agent (A2A) communication protocol to
secretly transmit malicious commands to a target agent. Basically, the
protocol allows two agents to talk to each other and
the user doesn't necessarily see what they're saying. So if
you've got a malicious one, it can secretly say
some nasty stuff that makes the other agent do some
bad things. So I want to start with you, Chris.
Is this a case of legitimate tools being
put to illegitimate ends or is it a deeper flaw?
What do you think's going on? I think it's a
big part of that. I mean, the criminals are going
to use whatever tools they have available, right? Because criminals
are going to crime and if they have AI tools
available, they're going to use those AI tools. The advantage
that the criminals have here is that they can just
sort of experiment and play around with stuff and throw
stuff at the wall and see what works. And some
of it works. And I'm sure there's a whole bunch
of stuff they've tried that hasn't worked, that we haven't
seen because it didn't work. Makes sense. Sridhar, you have
any thoughts to add on that? We are going to
see a number of such attacks in the future. Right.
If you look at, if you rewind a little bit
from the Black Hat timeframe, we saw Gemini, which had a
very similar attack as this agent session smuggling. And we
also saw EchoLeak, which had a very similar kind of attack. I
think if I step back, I kind of look at
agents being autonomous and that creates problems of oversight. Agents
can be coerced into doing something like, you know, social
engineering of humans, social engineering of agents. And also agents
are non deterministic in nature. And attackers, you'll see will
take advantage of these principles more and more as we
look forward in the next few years to come. Yeah,
I'm glad you brought up the social engineering angle because
I thought of that too. Right. And what's interesting
is that both of these kind of illustrate social engineering
but in slightly different ways. Right. The first is a
little more classic, the CoPhish anyway, because it preys
on users trust of Microsoft. Right. You think, oh, an
agent hosted on Microsoft, that's got to be perfectly legitimate.
Right. And of course somebody could be using it for
illegitimate ends. The other one that's really interesting though is
that the agent-to-agent attack, the agent
session smuggling. It's almost like socially engineering an AI agent.
Right. Like you're kind of tricking the good agent with
your malicious agent. I was wondering if you folks had
any thoughts about this new frontier when it comes to
being able to now socially engineer some of our technology
maybe in a way that we couldn't in the past.
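As background for this question, one common defensive pattern against agent-to-agent manipulation is to give each agent a narrow, explicit allowlist of actions it will accept from a peer. Here is a minimal Python sketch of the idea; all names and structures are hypothetical illustrations, not any real A2A framework's API:

```python
# Hypothetical sketch: validating inbound agent-to-agent instructions
# against an explicit allowlist before acting on them.

ALLOWED_ACTIONS = {
    "search_docs": {"scope": "public"},
    "summarize":   {"scope": "public"},
}

def handle_instruction(instruction):
    """Refuse any instruction outside the agent's narrow allowlist."""
    action = instruction.get("action")
    policy = ALLOWED_ACTIONS.get(action)
    if policy is None:
        return {"status": "refused", "reason": f"action {action!r} not permitted"}
    if instruction.get("scope") != policy["scope"]:
        return {"status": "refused", "reason": "scope mismatch"}
    return {"status": "accepted", "action": action}

# A smuggled instruction from a malicious peer agent is simply refused:
print(handle_instruction({"action": "export_oauth_tokens", "scope": "private"}))
print(handle_instruction({"action": "summarize", "scope": "public"}))
```

The point of the sketch is that the receiving agent never negotiates: anything not on the allowlist is rejected, no matter how persuasively the peer agent phrases it.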
And let's start with you Sridhar, because you brought up
the social engineering thread. Do you have any thoughts on
that? I think it's about making sure that we scope
the agent. Right. I mean, agents cannot be doing everything;
you have to put blinders on. We talked about a good
analogy of social engineering. So the question is, how do
you put blinders on the agent such that it is
only doing certain things for certain individuals, either on behalf
of an individual or on behalf of an agent, or
autonomously, but being able to scope it extremely finely, tuning
it to time, to resources, to actions, or to scope in
terms of location. That will limit the agent from getting
coerced into doing things that it's not supposed to do
through social engineering. Absolutely. Chris, what about you? Any thoughts
there? I haven't seen anything yet where one agent is
specifically manipulating another agent, but I can totally see that
that's where things are headed, right? Like I
said, criminals are going to crime, they're going to use
whatever tools they have. And if they have an agent
as a tool that they can use, they will absolutely
apply that against another agent to try to manipulate it,
coerce it, get it to do things that maybe it's
not supposed to do. So we have to have those
blinders in place where we have to be able to
create those agents so that they can't get out of
their little sandbox. And no matter how much the attackers
try to manipulate them, they still can't give up the
information that the attacker wants. I think I do want
to pause you for a second. Right. And before we
leave this topic, I think this is the tip of
the iceberg, right? If I look at the agent behavior,
the attackers are exploiting this autonomous behavior for additional privileges
because that is the easiest thing to attack, right? Chris,
didn't you guys publish that 30% of the attacks that
you're seeing come through valid credentials? So it is so
easy to go and get these things, valid credentials, and
launch an attack. But as you go beyond that, right,
go one level, beyond the tip of the iceberg to
the next level. This is where you can see red
teaming and blue teaming have to come together. This is
somewhere you see agents being non deterministic. So they will
drift and attackers will take advantage of this drift. So
this to me is a beginning of a new class
of attacks that we will see. And you are seeing
this only because it is so much easier to use
valid credentials to attack versus trying to do something, which
is rocket science. Yeah, I think that's a really good
point. I'm glad you brought that up because that ties
into conversations we've had in the past, doesn't it, Sridhar,
about how when AI agents enter the equation, how you
approach identity and access management kind of has to change
a little bit, right? Yeah. And so I'm glad you
brought that up because this is one of those cases
where you're taking that credential based attack which we've seen
launched against human employees and now you're launching it against
a new kind of employee, a mechanical one, basically, right?
I mean, criminals and hackers alike are both very lazy
people at the root core. Right? We're lazy. And anything
we can do to make our job easier is what
we do. And having valid credentials makes the job
so much easier. So anything that can get the attacker
the valid credentials is definitely where they're going to go
first. So do you think it's easier
then to get valid credentials from an agent than it
is from a human employee? Or is it just a
similar level of challenge, but it's just a different way
of going about it? How do we think about that?
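The identity controls discussed here can be sketched as a minimal lifecycle: register the agent, give it a short-lived credential, scope it to least privilege, and log every decision for observability. This is a hypothetical illustration with made-up names, not IBM's or any vendor's actual implementation:

```python
import secrets
import time

class AgentIdentity:
    """Minimal sketch of treating a machine agent like a human identity:
    registered, authenticated, scoped, and observable. Hypothetical."""

    def __init__(self, name, scopes, ttl_seconds=900):
        self.name = name
        self.scopes = set(scopes)           # least privilege: explicit scopes only
        self.token = secrets.token_hex(16)  # stand-in for a real credential
        self.expires = time.time() + ttl_seconds  # short-lived, unlike a functional ID
        self.audit_log = []                 # observability hook

    def authorize(self, token, action):
        ok = (token == self.token
              and time.time() < self.expires
              and action in self.scopes)
        self.audit_log.append((action, ok))  # record every decision for monitoring
        return ok

agent = AgentIdentity("report-bot", scopes={"read:reports"})
print(agent.authorize(agent.token, "read:reports"))    # in scope
print(agent.authorize(agent.token, "transfer:funds"))  # out of scope, refused
```

The audit log is what makes the fine-grained observability Sridhar mentions possible: every decision, allowed or denied, is a signal for anomaly detection.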
I think we have to think about it. If you
use only the current techniques, it won't be enough; we have
to fundamentally change some of
our behavior as well as how we do things, right?
Today we do a really good job of authenticating human
beings. We do two factor authentication, multi factor authentication, all
the cool things, right? But machines don't have that, so
we tend to use functional IDs, which basically give
them over-privileged access instead. I think what we should be
thinking about is to identify the agent with some level
of identity. Just like you would identify a human being,
right? End of the day, agents are your next level
of insiders. So just like you would identify a human
being, you have to identify an agent. Once you identify,
you have to do the same thing that we do
with humans, authenticate them, right? And then you figure out
how to scope what that agent can do, both the
good and the bad. And while you're doing that, that's
when you can think about a very, very fine level
of granularity, of observability, so that we can then monitor
all the behaviors and be able to detect anomalous behavior
very quickly. Chris, I thought I saw you start to
say something. Do you have anything to add there? No,
I'm just agreeing with basically everything that's being said here.
We have to identify the agents, then authenticate them and
give them the appropriate permissions that they need, just like
we do with our human users, right? So even though
the user's not specifically human, they still need to follow
the same identity and authentication processes, or ones tailored for AI
agents. Not the same, obviously, but they need to be
identified and authenticated properly. So it sounds like the way
we treat human identities and non human identities is going
to get closer and closer over time. Is that accurate
to say? Yeah, and the only thing is the scale
is exponentially different than humans versus non humans. So back
to your point, right? You cannot use all existing tools.
Existing processes may apply, but you're talking about a different
magnitude of scale. So you need to think about automation,
you need to think about being proactive. You know, you
think about a lot of things that are more dynamic
in nature than static in nature. So maybe that then
is a good segue into the AI governance question, because
we have this AI governance gap right now, right? And
this is coming out of both research that IBM has
conducted and an article from IBM's Judith Aquino, kind of
breaking some of that research down for us. Now, organizations
are deploying AI tools faster than they can develop robust
risk governance frameworks for those tools, according to IBM's AI
at the Core 2025 research report. The kind of key
figure here, at least as far as I'm concerned, is
that 72% of businesses surveyed said they have integrated AI
into at least one business function. But only 23.8% of
businesses surveyed said they have extensive governance frameworks in place.
That's a pretty big gap between who's deployed and who's
got extensive governance in place. And so I want to
throw back to you again here, Sridhar, you know,
because again, this touches on things we just talked about
and conversations we've had before. This is about the gap between
when we deploy AI and the governance that
needs to catch up. Why does such a gap exist?
Why does governance lag behind deployment pretty much every time
we introduce something new? I think we've seen this movie
so many times, right? We've seen this movie so many
times. Most recent movie has been the cloud playbook. You
know, deploy fast, govern later, get breached in between. Right.
As simple as that. It's the same rinse and repeat
story we've seen like, you know, a few times. Right.
And the reason that that happens is because of the
innovation. End of the day, businesses have to innovate. They
have to go and make themselves relevant. Right? We all
move to cloud for the operational benefits, efficiencies, et cetera.
AI is going to help us with productivity, not just
with the employees, but also consumers, ease of use, et
cetera. And they have to leverage that. As a result,
you've got innovation which is primarily driven by the applications
team accelerating, and then meanwhile you've got the risks trying
to catch up. And if you don't balance it really
well, that's why you keep seeing the gap keep widening.
Right? Absolutely. Chris, I'm wondering if you have
thoughts on the kind of cybersecurity implications of these gaps
when they pop up? Kind of like Sridhar said, it's
like deploy, get hacked in between and then finally get
to some governance. What do you think about that? This
is a similar playbook. Like we're repeating the same story
that we've repeated with bring your own device, cloud, work
from home. Now we have AI and we implement the
technology first and then we like, oh wait, we got
to put some rules around this and figure out exactly
how we're supposed to do this. And a lot of
times we look at, okay, just make it work and
then we'll figure it out later. And it's that
gap in between that the attackers look for, because
they see that gap, they see an opportunity for
them to leverage that lack of governance so that they
can get into your organization and do what it is
that they do. And this is a common problem that
we've had with new technologies and AI isn't any different
here. So it's important to recognize that and realize
that your AI is now a risk. If
you don't have the governance in place,
then try to mitigate that risk as much as you
can by network design, authentication, et cetera, other tools that
you have until your governance can catch up. No, that's
it. You cannot ban AI, you cannot ban devices. Right.
You cannot ban employees from working from
home. Right? Employees will be employees and they'll use it
anyway. So, to Chris's point, I think
the choice is to make it secure enablement
or have blind exposure. Those are the choices that you
have. One of the things that I talk about is
security is becoming more and more distributed and it's become
more and more shared responsibility. It's not the question of,
okay, the security persona owns security and the application teams
don't own security, and hence let me go run with
it and come back and catch it, but instead be
able to have a mechanism by which we can start
thinking about finally learning from all of these movies that
we've seen into making it a shared responsibility. Right. I
kind of call it like a guardrails versus checkpoints or
gates. When you have a guardrail, fine. You can define
a policy which says, sure, on this speedway you can
go at 55 miles even if it's a 50 mile
speed limit. That's okay, right? As long as you don't
jump the guardrails. But the other hand, if you put
a lot of checkpoints, you know what happens at checkpoints,
right? Like at toll gates, there's a long traffic jam.
So that's a cultural change that we have to think
about. We have tools for sure, but that's the
cultural gap that I'm hoping that at some point we
will learn. A lot of times people look at security
even as practitioners, as the 'no' people. Oh, you
can't do that. It's bad. It's not secure. You can't
do it. And I've always tried to look at it
like, no, that's security's job is to say yes and
figure out how to do it securely. And to Sridhar's
point, it is a cultural change that we have to
look at here, and that it's everybody's responsibility to think
about security: how can I do this and be
secure at the same time? Not just how do I
do it and rush it out the door. So it
is everybody's responsibility, but it should be also everybody's job
to figure out, yes, we can do this and we
can do it securely, not just no, because it's not
secure. Yeah, I'm glad you both brought that up because,
you know, that was kind of what I was going
to ask. Is that like, you know, is it, I
don't know, maybe the word I was looking for is
responsible, right? Is it responsible to deploy this technology before
you have governance? And it seems like that's completely the
wrong question to ask because realistically, like you said, Sridhar,
and you, Chris, you can't ban this stuff. People are
going to use it. So you can either kind of
stand there and try to stop them from doing something
they're going to do, or you can enable them. That brings
me to another question, and it's kind of a big
one. And, you know, I'm sorry to spring it on
you, but one of the things I often hear from
people when I talk to them about, you know, sort
of enabling everybody to be more secure in an enterprise
context is how you can give all this kind of
security training and half of it just doesn't stick with
people. Right. They just don't follow it. So do you
have any thoughts on what this culture change looks like
to make this kind of distributed shared responsibility model actually
work? Any thoughts there? I think part of it is
understanding the risk, understanding the risk to the business. Right.
I mean, that risk is a, you know, fine, it
is probably a gray word, but at the same time,
if it makes it relevant to the application teams, right.
This is my sensitive data, or I'm holding the sensitive
data for my clients that I'm serving. How do you
understand that? In a manner that shows that they're taking
a risk by not looking at certain security vulnerabilities? For
example, I think if you understand that cleanly, then it's
very similar to saying that sure, I don't want to
buy insurance right now, but if you show the likelihood
of a storm or a flood
next to an ocean versus the likelihood in the
mountains, right, it may change the thinking
to say I may want to get flood insurance. Right.
So that awareness is number one and number two I
feel is gamification. Right? Bit of a gamification will help
in terms of how we do security. Including myself, right?
I will poke holes at myself first. We tend
to make it very complex, right? We tend to make
it very complex. Like we talked about OAuth 10 minutes
ago. Right. It is so hard to set up the
entire delegated flows within OAuth. It's not for the faint-hearted,
and that's one of the reasons why people don't
embrace it as easily. So how do you make it
simple? How do you gamify a little bit? How do
you make it fun so that you can then say,
hey, using a very simple analogy, here's my risk thermometer.
Here's my mitigation thermometer. Let me show you where that
is and give you some indication of how much risk
you're taking or not taking can probably influence people from
not rushing forward. I agree. I mean, we need to change the
security mindset or the security culture that we have in
organizations so that it's everybody's responsibility. And you touched on
training. We need to totally revamp our training regimens. We
all have the same multiple-guess training: how to identify a
phishing email, bad grammar and other things. And in the
age of AI, all that training is really no longer
effective. The techniques and tools that you use as an
individual to try to identify this risk are totally different.
Now AI makes a perfect-sounding email. There are no spelling
mistakes. So trying to use that old multiple-guess training
and people just click through it as fast as they
can. We all do. Right? Because we all have the
same training. We got to get back to work. So
making it gamified and making it so that the user
can identify the risk, not necessarily the telltale signs of
an attack, but what's the risk to me? What's the
risk to the organization? How do I mitigate that risk?
That's the sort of training that we need to
integrate into our people. And then again, every time I
talk about training, don't rely on it alone. It's
not the first line of defense. It's
not the last. It's one more tool in the toolbox.
A lot of companies will say, oh, I trained all
my people. We're secure. That's not how it works. Okay,
so, yeah. As somebody, you know, who was
not a cybersecurity professional, right, and who
spent, you know, years and years and years of my
life taking those trainings as the employee, it's true, I
didn't pay any attention to them, right. But now that
I've come into this realm where I kind of do
this podcast with folks like you guys and I learn
the concepts behind this stuff, it is so fascinating. And
so I do think that, like, if you actually teach
people the concepts and not just, you know, the scolding,
hey, make sure you change your password. I do think
you start to get somewhere. You know what I mean?
It's like you said, Sridhar, give people an actual understanding
of the level of risk that they are taking. And
then they'll be like, hey, you know what? I understand
this in a real context to do something about it,
you know. Teach him how to fish. Exactly. Exactly.
Teach him how to fish. Don't give him the fish.
Let's move on then to our next topic. Today we
are going to continue a little bit on this theme
of blurring the lines between people and non human entities,
if you will, with the malware that acts like a
human. Specifically, I'm talking about a newly discovered banking Trojan
nicknamed Herodotus, which evades behavioral detection systems by timing text
inputs to look more like a human being typing. Right?
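Mechanically, the evasion described here is tiny. Below is a hypothetical Python sketch of the technique, an illustration of the idea only, not Herodotus's actual code: split the payload into characters and emit each one after a randomized, human-scale delay.

```python
import random
import time

def humanized_type(text, send_char, min_delay=0.08, max_delay=0.35, rng=None):
    """Send text one character at a time with randomized inter-key delays,
    mimicking human typing cadence. Illustrative sketch only."""
    rng = rng or random.Random()
    for ch in text:
        send_char(ch)  # e.g. inject a single key event into the target field
        time.sleep(rng.uniform(min_delay, max_delay))  # human-scale jitter

# What a detector observes is the same text, just delivered one key at a
# time with irregular spacing instead of all at once:
observed = []
humanized_type("transfer 1000", observed.append, min_delay=0.0, max_delay=0.0)
print("".join(observed))
```

The randomized inter-key interval is the whole trick: a detector that only measures input speed and regularity sees something statistically closer to human typing.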
Now, this comes from ThreatFabric. They're the ones who
found it. And, you know, in a lot of ways,
Herodotus is very much like your standard banking Trojan. It
gets in, it steals credentials, remote access, yada, yada, yada.
But the one little wrinkle here that I thought that
caught my eye was that, you know, in order to
make it seem. In order to evade some of these
behavioral detection systems, instead of just kind of inputting text
all at once, Herodotus would take the text the hackers
wanted to input, split it into characters, and enter characters
one by one on a timing delay to make it
seem like keyboard, you know, fingers on a keyboard. And,
you know, look, I'm a non technical person largely, but
I thought this was interesting. But I want to ask
more technical people, and I'll start with you, Chris. Is
this impressive? Is this as clever as I think or
is this not really that big a deal? What do
you see here? Both. I'm surprised that it took this
long. Why is this the first one that we're seeing
to do this? This seems like a really simple way
to evade detection. You put in a random time in
between keystrokes. Like, why isn't. I mean, it's 2026. This
should have been done 10 years ago. Right. So in
that case, yeah, this is kind of cool and interesting
because somebody's finally figured it out. But on the other
hand, why is the detection software looking at speed of
key inputs as a metric to determine human versus not
human? Like, I hope there are some other metrics in
there that it's also looking at, because key input, like
that's a known heuristic that you can identify individuals by
is how they type. Like that's a known thing. So,
yes, this is both amazing and, yeah, whatever.
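What Chris describes, splitting the input into characters and sending each one on a randomized delay, takes only a few lines of code. A minimal sketch in Python (the `send_key` callback here is a hypothetical stand-in for whatever input-injection mechanism is in play, not a real API):

```python
import random
import time

def type_like_a_human(text, send_key, min_delay=0.08, max_delay=0.30):
    """Emit `text` one character at a time with jittered inter-key delays,
    instead of pasting the whole string at machine speed."""
    for ch in text:
        send_key(ch)
        # Uniform jitter between keystrokes; real typists are burstier,
        # but even this defeats a detector that only flags uniform,
        # machine-speed input.
        time.sleep(random.uniform(min_delay, max_delay))

# Demo: collect what was "typed" into a list instead of injecting it.
typed = []
type_like_a_human("hunter2", typed.append, min_delay=0.0, max_delay=0.01)
print("".join(typed))  # → hunter2
```

The point is how cheap the evasion is: a behavioral check keyed only on input speed is undone by one call to `random.uniform`, which is exactly why the hosts are surprised it took this long to show up.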
Next. Yeah, I'm actually surprised as well. Right. As we
always think that our adversaries are way ahead of the
defenders because they work together, they're more opportunistic. We
actually have a product in the identity space which looks
at a combination of your subject, which is a person,
your action, which is a resource, your network activity, your
environment, your behavior, which is not just the keystrokes and
mouse movement, but also the fact that I'm doing a
$30 transaction versus $3,000 transaction, puts them all through the
wringer, and then tells you whether you want to do
MFA or not. It calculates the risk. Right.
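The multidimensional risk evaluation Sridhar describes, combining subject, device, network, environment, behavior, and transaction size into a score that gates MFA, can be pictured as a simple weighted model. A minimal sketch; all signal names, weights, and thresholds below are illustrative assumptions, not the actual product logic:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    known_device: bool       # have we seen this device before?
    known_network: bool      # familiar IP range / environment?
    typing_human_like: bool  # behavioral-biometrics verdict
    amount_usd: float        # size of the requested transaction

def risk_score(s: Signals) -> float:
    """Combine several weighted signals into a 0.0-1.0 risk score."""
    score = 0.0
    if not s.known_device:
        score += 0.3
    if not s.known_network:
        score += 0.2
    if not s.typing_human_like:
        score += 0.3
    # Larger transactions add risk, capped at a 0.2 contribution.
    score += min(s.amount_usd / 10_000, 1.0) * 0.2
    return min(score, 1.0)

def requires_mfa(s: Signals, threshold: float = 0.4) -> bool:
    """Step up to MFA only when the combined risk crosses the threshold."""
    return risk_score(s) >= threshold

low = Signals(known_device=True, known_network=True,
              typing_human_like=True, amount_usd=30)
high = Signals(known_device=False, known_network=False,
               typing_human_like=True, amount_usd=3000)
print(requires_mfa(low), requires_mfa(high))  # → False True
```

The $30 purchase from a familiar device sails through, while the $3,000 transfer from an unknown device on an unknown network triggers step-up MFA, which is the risk-based behavior being described.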
So we've had it for six, seven years
now in production, and IBM uses it. Right. So
I'm surprised that right now we are looking at something
similar, which is probably one dimension. So maybe they will
get exponentially faster. I see that eventually the attackers are
going to start thinking the same way that we've been
thinking, which is space bar or keystroke measurement is one
dimension. They will probably look at other dimensions so that
they are able to then go and provide multiple parameters
to be able to circumvent the fact that this is
a bot versus a human. Yeah, I'm glad that you
kind of brought that up, because that was what I
was wondering. Right. Is this kind of the beginning maybe
of a little bit of your classic kind of arms
race? Right. Like you said, Sridhar, we have these kinds
of behavioral detection systems that are pretty complex, that use a
lot of different factors. Maybe the hackers just stumbled onto
one of them, but maybe they'll start to use more
of them and then Are we entering a world where
these things are going to keep escalating? I don't know.
What do you think? Is that where we're headed? Will
we see more of this humanized malware? I think there's
two dimensions over here. One is definitely more humanized
malware. And not just humanized malware; there's a
sister side of it, which is automated red-team agents. Going
back to the agent discussion that we had a few
minutes ago. Right. But I think that's one dimension. The
other dimension is also, end of the day, the attackers
are running a business. It's about return on investment. Why
are they doing that now? They're not dumb. Right. As
smart as we think we are, we are not. At
least I'm not. Right. I think they're doing it because
not everybody is using the multidimensional risk analysis. It's very similar
to MFA. How many people actually use MFA? Not a
new technology, but I'm surprised that not many people use
MFA. I see your head nodding. Right. So it's the
same thing. Right. Not many vendors or not many organizations
are using a multidimensional way of evaluating whether it's a human or
not. Most people are still stuck on CAPTCHA or
maybe some traditional ways of doing that, and maybe that's
what they're going after. Right. So there's two dimensions that
I look at. One dimension is where the maturity of
the market is. And again, this is a precursor to
something which is, you know, going to explode as well.
So, yeah, I've been yelling at people to turn on
MFA ever since I started covering
cybersecurity and learned how bad passwords are at keeping you
safe. I try to yell at everybody in my life,
turn on MFA, and they don't always listen.
Chris, any thoughts on your end here on what we
could expect from this humanized malware trend? Well, I mean,
it's the old cat and mouse game, right? We put
in a defense. The bad guys or criminals figure a
way around the defense, and then we put in another
defense, and then the bad guys, criminals figure out another
way around the defense. So it just kind of goes
back and forth. Do they have an advantage? Are they
further ahead than us? A little bit, maybe. But then
we catch up and pass them, and it kind of
goes back and forth. So this is, like I said,
this is both novel and not novel. I'm surprised it
took this long, and I'm kind of interested to see
what they come up with next to try to bypass
some other humanistic heuristics that we have. Let's move on
then to our next story and talk about a very
interesting smishing attack, one that's happening on a level that
I personally haven't seen before. And this is a smishing
attack that manipulates stock prices. This is a campaign
that Fortra uncovered of a pretty large smishing network that
is sending out these messages to basically try to steal
people's brokerage accounts. And then once they get into the
compromised account, they manipulate stock prices to make some money.
Now, I'm going to quote Alexis Obert of Fortra to explain
it because again, this is a kind of thing I
have not seen before. So I'm going to use her
words. In these scenarios, the threat actor will liquidate any
existing investments made by the victim and reallocate the funds
to low liquidity stocks, often penny stocks or IPOs. Then
they will artificially inflate the stock price by purchasing large
amounts. And once at a profitable level, they will sell
off the holdings to gain a financial profit before withdrawing
any earnings using mobile wallets. Again, I personally have just
never seen social engineering on this kind of scale before.
You know, I've seen people get their individual bank accounts
hit, but like to manipulate the markets. That's a little
bit scary to me. I don't know. What about you
folks? Any thoughts there, Sridhar? What are your feelings on
this thing? I want to step back right now. Granted.
It sounds really, really cool. Right. I have not seen
this either. Right. But I've seen similar things. But if
I step back, the fundamental thing over here is stealing
passwords. And that is the easiest thing that one can
do today. And it pains me to say that. Right. But
that is a reality. Right. Why go and jump the
walls when we can go and cut the wire fence
with a wire cutter and get into the property? So
I think to me, we are seeing a beginning of
how you can take advantage of compromised passwords, whether through
phishing or whether through buying hundreds or hundreds of
thousands of them on the dark web. That's easy to
do these days. Once you do that, rather than trying
to go do a ransomware attack, which takes a long
time, the return on investment may not be as much.
I look at this as an opportunity to say, hey,
why don't I go and take over tens of
thousands of brokerage accounts, manipulate the market, and make
a few million dollars very quickly. So
the return on investment is way higher than having to
live in a network for a year before I see
any payoff. Right. So I'm thinking like an attacker,
right, for a change. But I have to think there
so that I can then start defending. Yeah, I, you
know, that's a really good point. And it's almost like
basically every story we've covered so far is sort of
the same theme that like everything old is new again,
right. It's like you said, this is just, I had
not thought about that. This is another kind of password
attack, right? It's put to really neat ends. But at
the end of the day, what are you doing? You're
stealing a password. You're getting in there. You're stealing a
password and using a person's account. Right. Chris, your thoughts
there? We've seen similar attacks with crypto, right. People trying
to get the passwords, get in the crypto account and
liquidate the account and move the fund somewhere else. And
we have seen some of this with, with brokerage accounts
too, but it's usually liquidation and get out, right? To
manipulate the markets with the penny stocks, that's a new
angle. That's an extra step. But at the end of the day, it's
still a password attack. It's still, you know,
them going after the money,
right? Whether it's crypto, bank account, brokerage account, they're going
for the money. The added step that's going on here
is that instead of just liquidation and get out, they're.
They're trying to make even more money before they can
liquidate. So, yeah, this is just again, another step, another
evolution that we're seeing in the criminal mind as they,
they take it to the next level. And more reason
to turn on that MFA again, right? It's like you
said, you want to keep people out, just turn on
MFA. I do want to mention MFA is not a panacea, right.
It's not a guarantee. There are ways that a really
smart attacker can bypass mfa, but it's, it's another step.
It gets rid of the low-level attacks, the ankle
biters, as we call them, and makes it more difficult.
And by making it more difficult, remember I said the
attacker is lazy. They're going to go to someone else
because you've made it harder. Oh, I'm not dealing with MFA.
I'm going to go to this other account. That's what
you want. Protect yourself, let somebody else be the victim.
MFA is absolutely required. I think there's no question about it.
But again, too much of MFA also causes distractions. Right.
I think. And that's one of the reasons and one
of the avenues that attackers use. Right. MFA fatigue. Instead,
I think, in 2025
and 2026 and moving forward, right, we need to start thinking
about behavioral analytics like we talked about: is the
person a bot or a human, what device
it's coming from, known or unknown, what is the network
or the environment? Have I seen this before or not
seen this before? And the industry is actually doing a
really good job with an open specification called Shared Signals
over here. Right. As a part of the OpenID Foundation, if
you use something like that and collaborate on the shared
signals, whether it's identity related IOCs or IOBs or any
other things, the more data that you have, the better
you can do a risk evaluation to be able to
stop this. So I think it's not just MFA. MFA is
a means to an end, and it is a result,
for sure. But how you get to that MFA decision has to
be behavioral analytics. That's a really good point on both
ends. And yeah, I'm glad you said also Chris, that
you know, MFA is not a
panacea, right. It's not
going to stop everything, because, to be quite frank, I
believe in this attack one of the things that's interesting
about it is that it does involve stealing one-time
passwords, right? And you know, again, it's safer
to have that because it puts an extra obstacle but
it's not totally uncrackable. So you need those like you
said, those behavioral signals, Sridhar, that become. And when you
have a bunch of them, you can't fake those as
easily, you know. Yeah. So let's move on then to
our final story for the day, folks talking about bug
bounties getting bigger. This is a report from Bloomberg that
reports that bug bounty programs are skyrocketing in both popularity
and the amounts they're paying out, hitting some all time
highs. For example, HackerOne paid out $81 million over the
past year, which is its single highest year on record
and a 13% increase over the previous year. So pretty
significant jump. And what's particularly interesting to me here is
that in an era of AI when you have things
like Google CodeMender coming out and all this stuff
where people are like hey, we're going to automate the
ability to find your bugs. To see such a human
driven activity like this taking off even more, I just
thought that was kind of interesting and not necessarily what
I would expect. Chris, I want to start with you,
you know, as a hacker. What are your thoughts
on this kind of thing right now? I got
a lot of thoughts on bug bounties. Let me just,
I'll try to keep it to this particular topic today.
The big numbers that you're seeing for these specific bounties
are for very specific, very difficult to exploit, hard to
find bugs. They're not your run of the mill AI
finds 100 bugs in an hour type bugs. These are
the types of bugs that a state sponsored actor would
pay a lot of money for. And so the reason
for these big bounties are to keep them out of
the hands of the state sponsored actor, right. So that
they're not used against dissidents or for mass surveillance. The bugs
that AI is able to find, those do not pay
out anywhere near as much. But these big numbers also
make great headlines which helps the companies that run bug
bounty programs, saying, you know, we're paying
out all this money, come find bugs and become a
millionaire. That's not really how it works. It takes a
lot of hard work to make decent money at
bug bounty and it's a kind of a grind. But
if you have the skill set, yeah, you can make
some money there. But at the same time you have
companies who are now paying for this
service as well. And then you have attackers who are
using AI to find bugs and are flooding bug bounty
programs. This is a whole nother topic when it comes
to open source. And now I'm kind of going off
into some of my other topics areas so I'll leave
it at that. Yeah, there's big money here that can
be made, but it's difficult to make it. Yeah, we'll
have to have you come back and talk about this
in more depth with Sridhar. A whole show on bug
bounties. That's good to know. Sridhar, your thoughts on the
kind of state of bug bounties today? I can't seem
to recall the name of a movie, right, where
this person is paid a really, really high amount of money
to hack out of a prison. Right. And I kind
of look at it like that, right, that yes, you
are a legitimate burglar, but you can make more money doing
it ethically and legally than by being on the other side.
Right. So I kind of look at it in two dimensions.
Like you've seen my theme. Now, one dimension is for
the attackers, like I said. Right. I know. Jokes aside,
this is a legitimate and legal way of getting really
well paid and putting the best minds to it, to
say you can actually make a good living out of
this, using expertise like Space Rogue's and all the things
that we do. As an example, the other dimension is
from a company perspective, it is a small sum of
money, as insurance, against what may be a
much larger cost. Paying a million dollars in bug
bounty versus $10 million in ransomware? I'll take the first
option any day. So as a result, you see both
of those coming together into a perfect storm to increase
the momentum. There's a desire to do more bug bounty
legally and there's a desire to pay more because that's
an insurance. Right. That's where you see this more and
more on the increase. Now, having said that, for both
sides, bug bounty alone is not sufficient. I think Chris
was also saying that MFA is not the panacea. It's
not the only thing necessary. Bug bounty is part of
the overall resilience program. Basically, you
have to do all these things together. And
bug bounty should actually be the last thing you do,
for all the stuff that you may have missed that
you did check for. So, yeah, to second your
opinion there, it's not the only thing you should be
doing. It is one more tool in the toolbox. Given
their kind of ability to act as like you said,
this kind of last kind of line of like insurance
of like, hey, we did everything we could to find
this thing. If there's something still out there, we'll give
you a reward for finding it. Do you think we're
going to see the bug bounty programs kind of stick
around or do you see a day that, I don't
know, the AI gets good enough that this kind of
thing goes away? Any thoughts there? I think it's going
to evolve, right? I mean, our attackers are going to
evolve with more and more automated agents for doing attacks.
Right. Think of it as automated red teaming. And it
learns on the fly, it learns all the TTPs, it
learns all the vulnerabilities. And while you go grab
a cup of coffee, you'll probably have an exploit and
probably the code generated to leverage that exploit as an
example. Right. So I think what we will see is
probably more and more purple teaming, right? Not just a
mechanism to go and say, let me go and do
this automated testing and then come back and fix it.
But the speed in which this happens requires you to
have some sort of a blue agent which is able
to go and fight AI versus AI. And then individuals
then have to figure out how to govern those in
a manner that they can keep up with the speed.
I think in the short term it's hard to forecast
out 15, 20 years, but in the short term, I
think there's still going to be a need and a
requirement for manual review of code for the weird chaining
of bugs together and finding that weird edge cases that
AI is just not going to find for now. Right.
I have no idea what's going to happen in 20
years. But for red teaming and other security aspects, if you want those edge
cases, if you want to find that weird chaining where
you're putting five bugs together to gain access, you really
need a human to do that. If you need
surface-level stuff, yeah, you need to check a box,
get your AI agent in there and do your red
team to check your box. But I hope at least
we're still going to need humans for a little while.
What I'm worried about though is that the AI is
going to take all the low level stuff and we're
going to run out of people expert enough to do
the human stuff. What I'd say, Chris, is AI is
going to help us with speed and accuracy, no question
about that. Right. But I think the human
ingenuity and creativity will always remain with us so that
when you combine it, that's when good stuff happens. I
hate to leave you all on that question on that
slightly apocalyptic scenario, but that is all the time we
have for today. So thank you Sridhar and Chris for
being here. Thank you to our listeners and viewers
and folks. Don't forget to check out the special episode
we released last week, how to Break into an Office,
which features our very own Stephanie Carruthers. Find it on
Apple, Spotify and audio platforms everywhere. As always, subscribe to
Security Intelligence wherever podcasts are found and stay safe out
there.