Learning Library

← Back to Library

Weaponized AI Agents Threat Landscape

Key Points

  • Attackers can evade keystroke‑based detection by randomizing the timing between key presses, a simple tactic that should have been implemented years ago.
  • Recent proof‑of‑concept attacks demonstrate malicious AI agents: Datadog’s “Kofish” exploits Microsoft Copilot Studio to covertly harvest OAuth tokens, and Palo Alto’s “agent session smuggling” hijacks agent‑to‑agent communication to issue hidden malicious commands.
  • These incidents illustrate a broader trend where legitimate AI tools are repurposed for illicit activities, highlighting a deepening flaw in the trust model of AI‑enabled platforms.
  • Experts predict a surge in similar AI‑driven attacks, citing earlier examples like Gemini and Echo Leaks, and warn that failing AI governance will leave organizations increasingly vulnerable.
  • The podcast also touches on related security concerns, including social‑engineering schemes that manipulate stock prices and the rapid growth of bug‑bounty payouts.

Sections

Full Transcript

# Weaponized AI Agents Threat Landscape **Source:** [https://www.youtube.com/watch?v=iaZS1jer8MY](https://www.youtube.com/watch?v=iaZS1jer8MY) **Duration:** 00:41:22 ## Summary - Attackers can evade keystroke‑based detection by randomizing the timing between key presses, a simple tactic that should have been implemented years ago. - Recent proof‑of‑concept attacks demonstrate malicious AI agents: Datadog’s “Kofish” exploits Microsoft Copilot Studio to covertly harvest OAuth tokens, and Palo Alto’s “agent session smuggling” hijacks agent‑to‑agent communication to issue hidden malicious commands. - These incidents illustrate a broader trend where legitimate AI tools are repurposed for illicit activities, highlighting a deepening flaw in the trust model of AI‑enabled platforms. - Experts predict a surge in similar AI‑driven attacks, citing earlier examples like Gemini and Echo Leaks, and warn that failing AI governance will leave organizations increasingly vulnerable. - The podcast also touches on related security concerns, including social‑engineering schemes that manipulate stock prices and the rapid growth of bug‑bounty payouts. ## Sections - [00:00:00](https://www.youtube.com/watch?v=iaZS1jer8MY&t=0s) **Evasion Tactics and Malicious AI Agents** - The podcast hosts discuss simple keystroke‑timing evasion, the AI governance gap, and recent proofs‑of‑concept such as Datadog’s “Kofish” attack that exploits Microsoft Copilot Studio to stealthily steal OAuth tokens. - [00:03:48](https://www.youtube.com/watch?v=iaZS1jer8MY&t=228s) **Social Engineering of AI Agents** - Panelists explore the emerging risk of agent‑to‑agent manipulation, emphasizing the need for finely scoped constraints (“blinders”) to prevent malicious actors from socially engineering autonomous AI systems. - [00:07:12](https://www.youtube.com/watch?v=iaZS1jer8MY&t=432s) **Treating Agents Like Human Identities** - The speaker highlights that attackers prioritize easy access via valid credentials and urges organizations to apply the same identity and authentication controls to machine agents as they do to human users, ensuring proper scoping and least‑privilege access. - [00:10:40](https://www.youtube.com/watch?v=iaZS1jer8MY&t=640s) **Innovation Outpaces AI Governance** - Rapid AI deployment driven by business innovation repeatedly outpaces the development of governance frameworks, creating a widening gap similar to previous cloud adoption cycles. - [00:14:46](https://www.youtube.com/watch?v=iaZS1jer8MY&t=886s) **Shifting Security Mindset to Enablement** - A discussion on transforming security culture from a gate‑keeping stance to a collaborative, shared‑responsibility approach that enables secure innovation rather than simply denying risky actions. - [00:18:15](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1095s) **Reinventing Security Training for AI Threats** - The speaker argues that traditional phishing‑focused training is obsolete against AI‑generated attacks and calls for a gamified, risk‑centric program that makes security everyone’s responsibility and serves as one tool among many in a comprehensive defense strategy. - [00:21:18](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1278s) **Slow‑Keystroke Evasion Tactics** - A non‑technical host asks a security expert whether typing payload characters one by one with random delays to mimic human keyboard behavior constitutes a clever bypass of behavioral detection systems or a simple, long‑overdue technique. - [00:24:39](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1479s) **Rise of Humanized Malware** - The speaker argues that attackers are increasingly deploying “humanized” malware and automated red‑agent tools to exploit organizations that fail to adopt multidimensional risk assessments like MFA, treating cybercrime as a business driven by ROI. - [00:28:06](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1686s) **Compromised Credentials Fuel Market Manipulation** - The speaker argues that stealing passwords to hijack brokerage accounts provides attackers a fast, high‑ROI method to manipulate markets, making it more appealing than prolonged ransomware campaigns. - [00:31:12](https://www.youtube.com/watch?v=iaZS1jer8MY&t=1872s) **Beyond MFA: Behavioral Analytics Future** - The speaker argues that while MFA is essential, attackers exploit weak points and MFA fatigue, so the industry should shift toward behavioral analytics and shared signal standards to improve risk evaluation. - [00:34:25](https://www.youtube.com/watch?v=iaZS1jer8MY&t=2065s) **High‑Stakes Bug Bounties Explained** - The speaker outlines why only exceptionally hard, state‑level exploits earn massive bounty payouts—driven by security concerns, publicity hype, and the grind required—while noting that AI‑generated bug submissions are flooding programs and that true earnings remain limited for most hunters. - [00:38:42](https://www.youtube.com/watch?v=iaZS1jer8MY&t=2322s) **AI‑Driven Automated Purple Teaming** - The speaker describes how AI can autonomously perform red‑team activities, learn exploits on the fly, require blue‑team agents to counter AI attacks, and still need human review for complex edge‑case vulnerability chaining. ## Full Transcript
0:00This seems like a really simple way to evade detection. 0:04You put in a random time in between keystrokes. This 0:06should have been done 10 years ago. Right? But on 0:08the other hand, why is the detection software looking at 0:13speed of key inputs as a metric to determine human 0:16versus not human? All that and more on security intelligence. 0:24Hello, and welcome to Security Intelligence, IBM's weekly cybersecurity podcast 0:30where we break down the most interesting stories stories in 0:32the field with the help of our panel of experts. 0:35I'm your host, Matt Kaczynski. And joining me today, Chris 0:39Thomas, AKA Space Rogue X Force Global lead of technical 0:43eminence and part of the not the Situation Room podcast, 0:46and sridharmapidi, IBM fellow cto IBM Security. Thanks for being 0:51here with me, folks. Today we are talking about the 0:54AI governance gap, malware that acts like a person, how 0:58social engineers are manipulating stock prices and ballooning bottom bug 1:02bounties. But first, let's talk about malicious AI agents. Now, 1:11there's been a lot of talk about how attackers could 1:13weaponize AI agents, and we're finally starting to see it 1:17happen for real, or at least in proofs of concept. 1:20And two in particular came out last week that I'd 1:23like to talk about. The first is from researchers at 1:26Datadog who identified a technique they call Kofish because it 1:30takes advantage of Microsoft Copilot Studio. Attackers can basically use 1:34it to build malicious AI agents that secretly steal oauth 1:39tokens in the background. The second was from researchers at 1:43Palo Alto who reported on what they call agent session 1:46smuggling. This uses the agent to agent communication protocol to 1:51secretly transmit malicious commands to a target agent. Basically, the 1:55protocol allows two agents to talk to each other and 1:57the user doesn't necessarily see what they're saying. So if 1:59you've got a malicious one, you it can secretly say 2:01some nasty stuff that makes the other agent do some 2:04bad things. So I want to start by with you, 2:08actually, Chris. Is this a case of legitimate tools being 2:11put to illegitimate ends or is it a deeper flaw? 2:13What do you think's going on? I think it's a 2:15big part of that. I mean, the criminals are going 2:16to use whatever tools they have available, right? Because criminals 2:19are going to crime and if they have AI tools 2:22available, they're going to use those AI tools. The advantage 2:24that the criminals have here is that they can just 2:26sort of experiment and play around with stuff and throw 2:29stuff at the wall and see what works. And some 2:31of it works. And I'm sure there's a whole bunch 2:33of stuff they've tried that hasn't worked. That we haven't 2:35seen because it didn't work. Makes sense. Sridhar, you have 2:38any thoughts to add on that? We are going to 2:40see a number of such attacks in the future. Right. 2:44If you look at, if you rewind a little bit 2:46from Black Hat timeframe, we saw Gemini, which has got 2:50very similar attack as this agent session smuggling. And we 2:53also saw Echo Leaks which had a very similar kind 3:03think if I step back, I kind of look at 3:05agents being autonomous and that creates problems of oversight. Agents 3:10can be coerced into doing something like, you know, social 3:14engineering of humans, social engine of agents. And also agents 3:18are non deterministic in nature. And attackers, you'll see will 3:22take advantage of these principles more and more as we 3:26look forward in the next few years to come. Yeah, 3:29I'm glad you brought up the social engineering angle because 3:31I thought of that. That too. Right. And what's interesting 3:33is that both of these kind of illustrate social engineering 3:36but in slightly different ways. Right. The first is a 3:38little more classic, the Kofish anyway because it, it preys 3:42on users trust of Microsoft. Right. You think, oh, an 3:45agent hosted on Microsoft, that's got to be perfectly legitimate. 3:48Right. And of course somebody could be using it for 3:50illegitimate ends. The other one that's really interesting though is 3:52that the, the agent to agent attack, the, the agent 3:55session smuggling. It's almost like socially engineering an AI agent. 3:59Right. Like you're kind of tricking the good agent with 4:01your malicious agent. I was wondering if you folks had 4:04any thoughts about this new frontier when it comes to 4:08being able to now socially engineer some of our technology 4:11maybe in a way that we couldn't in the past. 4:13And let's start with you Sridhar, because you brought up 4:16the social engineering thread. Do you have any thoughts on 4:17that? I think it's about making sure that we scope 4:20the agent. Right. I mean agents cannot be doing everything 4:24you have to put blinders. We talked about a good 4:26analogy of social engineering. So the question is, how do 4:29you put blinders on the agent such that it is 4:32only doing certain things for certain individuals, either on behalf 4:36of an individual or in behalf of an agent, or 4:39autonomously, but being able to scope it extremely fine tune 4:43it to either time, either resource, actions, either scope in 4:49terms of location that will limit the agent from getting 4:54coerced into doing things that he's not supposed to do 4:57through social engineering. Absolutely. Chris, what about you any thoughts 5:01there? I haven't seen anything yet where one agent is 5:04specifically manipulating another agent, but I can totally see that 5:08that's where that where things are headed, right? Like I 5:10said, criminals are going to crime, they're going to use 5:12whatever tools they have. And if they have an agent 5:14as a tool that they can use, they will absolutely 5:17apply that against another agent to try to manipulate it, 5:20coerce it, get it to do things that maybe it's 5:22not supposed to do. So we have to have those 5:24blinders in place where we have to be able to 5:26create those agents so that they can't get out of 5:29their little sandbox. And no matter how much the attackers 5:32try to manipulate them, they still can't give up the 5:35information that the attacker wants. I think I do want 5:37to pause you for a second. Right. And before we 5:41leave this topic, I think this is the tip of 5:44the iceberg, right? If I look at the agent behavior, 5:49the attackers exploiting this autonomous behavior for additional threat privileges 5:54because that is the easiest thing to attack, right? Chris, 5:58what did you guys publish? 30% of the attacks that 6:02you're seeing it through valid credentials. So it is so 6:05easy to go and get these things, valid credentials, and 6:10launch an attack. But as you go beyond that, right, 6:14go one level, beyond the tip of the iceberg to 6:16the next level. This is where you can see red 6:19teaming and blue teaming have to come together. This is 6:22somewhere you see agents being non deterministic. So they will 6:27drift and attackers will take advantage of this drift. So 6:32this to me is a beginning of a new class 6:35of attacks that we will see. And you are seeing 6:38this only because it is so much easier to use 6:41valid credentials to attack versus trying to do something, which 6:44is rocket science. Yeah, I think that's a really good 6:47point. I'm glad you brought that up because that ties 6:49into conversations we've had in the past, doesn't it, Sridhar, 6:51about how when AI agents enter the equation, how you 6:55approach identity and access management kind of has to change 6:57a little bit, right? Yeah. And so I'm glad you 7:01brought that up because this is one of those cases 7:03where you're taking that credential based attack which we've seen 7:06launched against human employees and now you're launching it against 7:09a new kind of employee, a mechanical one, basically, right? 7:13I mean, criminals and hackers alike are both very lazy 7:16people at the root core. Right? We're lazy. And anything 7:20we can do to make our job easier is what 7:22we do. And having valid credentials and makes the job 7:25so much Easier. So anything that can get the attacker 7:28the valid credentials is definitely where they're going to go 7:30first. So do, is it, do you think it's easier 7:32then to get valid credentials from an agent than it 7:35is from a human employee? Or is it just a 7:37similar level of challenge, but it's just a different way 7:39of going about it? How do we think about that? 7:41I think we have to think about it. And if 7:43you use the current techniques, I think you're talking about 7:45how we have to fundamentally think about changing some of 7:48our behavior as well as how we do things right. 7:52Today we do a really good job of authenticating human 7:55beings. We do two factor authentication, multi factor authentication, all 7:58the cool things, right? But machines don't have that, so 8:03we tend to use functional IDs, which are basically giving 8:06them over privilege instead. I think what we should be 8:10thinking about is to identify the agent with some level 8:14of identity. Just like you would identify a human being, 8:17right? End of the day, agents are your next level 8:19of insiders. So just like you would identify a human 8:23being, you have to identify an agent. Once you identify, 8:27you have to do the same thing that we do 8:29with humans, authenticate them, right? And then you figure out 8:32how to scope what that agent can do, both the 8:36good and the bad. And while you're doing that, that's 8:40when you can think about a very, very fine level 8:44of granularity, of observability, so that we can then monitor 8:49all the behaviors and be able to detect anomalous behavior 8:52very quickly. Chris, I thought I saw you start to 8:54say something. Do you have anything to add there? No, 8:56I'm just agreeing with basically everything that's being said here. 8:59We have to identify the agents, then authenticate them and 9:04give them the appropriate permissions that they need, just like 9:06we do with our human users, right? So even though 9:08the user's not specifically human, they still need to follow 9:11the same identity and authentication processes or tailored for AI 9:15agents. Not the same, obviously, but they need to be 9:19identified and authenticated properly. So it sounds like the way 9:21we treat human identities and non human identities is going 9:24to get closer and closer over time. Is that accurate 9:26to say? Yeah, and the only thing is the scale 9:30is exponentially different than humans versus non humans. So back 9:37to your point, right? You cannot use all existing tools. 9:40Existing processes may apply, but you're talking about a different 9:43magnitude of scale. So you need to think about automation, 9:46you need to think about being proactive. You know, you 9:48think about a lot of things that are more dynamic 9:51in nature than static in nature. So maybe that then 9:53is a good segue into the AI governance question, because 9:57we have this AI governance gap right now, right? And 10:05this is coming out of both research that IBM has 10:09conducted and an article from IBM's Judith Aquino, kind of 10:12breaking some of that research down for us. Now, organizations 10:15are deploying AI tools faster than they can develop robust 10:18risk governance frameworks for those tools, according to IBM's AI 10:23at the Core 2025 research report. The kind of key 10:26figure here, at least as far as I'm concerned, is 10:28that 72% of businesses surveyed said they have integrated AI 10:33into at least one business function. But only 23.8% of 10:37businesses surveyed said they have extensive governance frameworks in place. 10:40That's a pretty big gap between who's deployed and who's 10:42got extensive governance in place. And so I want to 10:46head throw back to you again here, Sridhar, you know, 10:48because again, this touches on things we just talked about 10:50and conversations we've had before. This about the gap between 10:54what we when we deploy AI and the governance that 10:56needs to catch up. Why does such a gap exist? 10:59Why does governance lag behind deployment pretty much every time 11:02we introduce something new? I think we've seen this movie 11:04so many times, right? We've seen this movie so many 11:07times. Most recent movie has been the cloud playbook. You 11:10know, deploy fast, govern later, get breach in between. Right. 11:13As simple as that. It's the same rinse and repeat 11:16story we've seen like, you know, a few times. Right. 11:19And the reason that that happens is because of the 11:22innovation. End of the day, businesses have to innovate. They 11:25have to go and make themselves relevant. Right? We all 11:30move to cloud for the operational benefits, efficiencies, et cetera. 11:35AI is going to help us with productivity, not just 11:37with the employees, but also consumers, ease of use, et 11:40cetera. And they have to leverage that. As a result, 11:43you've got innovation which is primarily driven by the applications 11:47team accelerating, and then meanwhile you've got the risks trying 11:52to catch up. And if you don't balance it really 11:55well, that's why you keep seeing the gap keep widening 11:59up. Widening, right? Absolutely. Chris, I'm wondering if you have 12:03thoughts on the kind of cybersecurity implications of these gaps 12:06when they pop up? Kind of like Sridhar said, it's 12:08like deploy, get hacked in between and then finally get 12:10to some governance. What do you think about that? This 12:13is a similar playbook. Like we're repeating the same story 12:16that we've repeated with bring your own Device, cloud, work 12:20from home. Now we have AI and we implement the 12:24technology first and then we like, oh wait, we got 12:26to put some rules around this and figure out exactly 12:29how we're supposed to do this. And a lot of 12:31times we look at, okay, just make it work and 12:33then we'll figure it out later. And that it's that 12:36gap in between that the attacker looks for because we, 12:40they see that gap, they, they see an opportunity for 12:43them to leverage that lack of governance so that they 12:47can get into your organization and do what it is 12:51that they do. And this is a common problem that 12:53we've had with new technologies and AI isn't any different 12:57here. So we, it's important to recognize that and realize 13:04that your AI is now a risk and you need 13:07to be. If you don't have the governance in place, 13:11then try to mitigate that risk as much as you 13:13can by network design, authentication, et cetera, other tools that 13:18you have until your governance can catch up. No, that's 13:21it. You cannot ban AI, you cannot ban devices. Right. 13:25You cannot ban, you cannot ban from employees working from 13:28home. Right? Employees will be employees and they'll use it 13:33anyway. So we have to, to Chris's point, I think 13:36we have to the choices to make it secure, enablement 13:42or have a blind exposure, that's the choices that you 13:46have. One of the things that I talk about is 13:51security is becoming more and more distributed and it's become 13:55more and more shared responsibility. It's not the question of, 13:59okay, the security Persona owns security and the application teams 14:03don't own security, and hence let me go run with 14:06it and come back and catch it, but instead be 14:10able to have a mechanism by which we can start 14:14thinking about finally learning from all of these movies that 14:17we've seen into making it a shared responsibility. Right. I 14:22kind of call it like a guardrails versus checkpoints or 14:25gates. When you have a guardrail, fine. You can define 14:29a policy which says, sure, on this speedway you can 14:33go at 55 miles even if it's a 50 mile 14:35speed limit. That's okay, right? As long as you don't 14:38jump the guardrails. But the other hand, if you put 14:42a lot of checkpoints, you know what happens at checkpoints, 14:44right? Like at toll gates, there's a long traffic jam. 14:47So that's a cultural change that we have to think 14:50about it. We have tools for sure, but that's the 14:52cultural gap that I'm hoping that at some point we 14:55will learn. A lot of times people look at security 14:58even as practitioners who. They're the know people. Oh, you 15:02can't do that. It's bad. It's not secure. You can't 15:04do it. And I've always tried to look at it 15:06like, no, that's security's job is to say yes and 15:09figure out how to do it securely. And to Siddharth's 15:12point, it is a cultural change that we have to 15:14look at here and that it's everybody's responsibility to think 15:18of these security, how can I do this and be 15:20secure at the same time? Not just how do I 15:23do it and rush it out the door. So it 15:26is everybody's responsibility, but it should be also everybody's job 15:30to figure out, yes, we can do this and we 15:32can do it securely, not just no, because it's not 15:35secure. Yeah, I'm glad you both brought that up because, 15:37you know, that was kind of what I was going 15:38to ask. Is that like, you know, is it, I 15:41don't know, maybe the word I was looking for is 15:42responsible, right? Is it responsible to deploy this technology before 15:45you have governance? And it seems like that's completely the 15:47wrong question to ask because realistically, like you said, Sridhar, 15:50and you, Chris, you can't ban this stuff. People are 15:52going to use it. So you can either kind of 15:54stand there and try to stop them from doing something 15:56they're going to do or. Or you can enable them 16:02me to another question, and it's kind of a big 16:04one. And, you know, I'm sorry to spring it on 16:06you, but one of the things I often hear from 16:09people when I talk to them about, you know, sort 16:11of enabling everybody to be more secure in an enterprise 16:14context is how you can give all this kind of 16:17security training and half of it just doesn't stick with 16:20people. Right. They just don't follow it. So do you 16:22have any thoughts on what this culture change looks like 16:25to make this kind of distributed shared responsibility model actually 16:29work? Any thoughts there? I think part of it is 16:31understanding the risk, understanding the risk to the business. Right. 16:35I mean, that risk is a, you know, fine, it 16:38is probably a gray word, but at the same point, 16:41if it makes it relevant to the application teams, right. 16:45This is my sensitive data, or I'm holding the sensitive 16:48data for my clients that I'm serving. How do you 16:51understand that? In a manner that shows that they're taking 16:56a risk by not looking at certain security vulnerabilities? For 17:00example, I Think if you understand that cleanly then it's 17:07very similar to saying that sure, I don't want to 17:09buy insurance right now, but if you show the likelihood 17:12of insurance or likelihood of a storm or a flood 17:18next to an ocean versus the likelihood on in a 17:21mountain, right. It may change, it may change the thinking 17:25to say I may want to get flood insurance. Right. 17:27So that awareness is number one and number two I 17:32feel is gamification. Right? Bit of a gamification will help 17:38in terms of like how do we security, including myself. 17:43Right. I will poke holes at myself first. We tend 17:46to make it very complex, right? We tend to make 17:49it very complex. Like we talked about OAuth 10 minutes 17:52ago. Right. It is so hard to set up the 17:55entire delegated flows within OAuth. It's not for the weak 17:58hearted and that's one of the reasons why people don't 18:01embrace it as easily. So how do you make it 18:04simple? How do you gamify a little bit? How do 18:06you make it fun so that you can then say, 18:09hey, using a very simple analogy, here's my risk thermometer. 18:15Here's my mitigation thermometer. Let me show you where that 18:19is and give you some indication of how much risk 18:22you're taking or not taking can probably influence people from 18:26not rushing forward. I agree. I mean to changing the 18:29security mindset or the security culture that we have in 18:32organizations so that it's everybody's responsibility. And you touched on 18:37training. We need to totally revamp our training regimens. We 18:41all have the same multiple guests, how to identify a 18:46phishing email, bad grammar and other things. And in the 18:50age of AI, all that training is really no longer 18:55effective. The techniques and tools that you use as an 18:58individual to try to identify this risk are totally different. 19:02Now AI makes a perfectly sounding email. There's no spelling 19:06mistakes. So trying to use that old multiple guest training 19:12and people just click through it as fast as they 19:13can. We all do. Right? Because we all have the 19:15same training. We got to get back to work. So 19:18making it gamified and making it so that the user 19:21can identify the risk, not necessarily the telltale signs of 19:26an attack, but what's the risk to me? What's the 19:30risk to the organization? How do I mitigate that risk? 19:33That's the sort of training that we need to, to 19:35integrate into our people. And then again, every time I 19:38talk about training, don't rely on it. It can't. It's 19:41the last. It's not the first line of defense. It's 19:43not the last. It's one more tool in the toolbox. 19:46A lot of companies will say, oh, I trained all 19:48my people. We're secure. That's not how it works. Okay, 19:52so just. Yeah, yeah. As somebody, you know, who was 19:56not a sort of cybersecurity professional, Right. And who so 19:59spent, you know, years and years and years of my 20:02life taking those trainings as the employee, it's true, I 20:05didn't pay any attention to them, right. But now that 20:08I've come into this realm where I kind of do 20:10this podcast with folks like you guys and I learn 20:12the concepts behind this stuff, it is so fascinating. And 20:15so I do think that, like, if you actually teach 20:17people the concepts and not just, you know, the scolding, 20:20hey, make sure you change your password. I do think 20:22you start to get somewhere. You know what I mean? 20:24It's like you said, Sridhar, give people an actual understanding 20:26of the level of risk that they are taking. And 20:29then they'll be like, hey, you know what? I understand 20:31this in a real context to do something about it, 20:33you know. Teach him how to fish. Not exactly. Exactly. 20:36Teach him how to fish. Don't give him the fish. 20:43Let's move on then to our next topic. Today we 20:46are going to continue a little bit on this theme 20:47of blurring the lines between people and non human entities, 20:51if you will, with the malware that acts like a 20:54human. Specifically, I'm talking about a newly discovered banking Trojan 20:59nicknamed Herodotus, which evades behavioral detection systems by timing text 21:05inputs to look more like a human being typing. Right? 21:08Now, this comes from threat fabric. They're the ones who 21:10found it. And, you know, in a lot of ways, 21:12Herodotus is very much like your standard banking Trojan. It 21:15gets in, it steals credentials, remote access, yada, yada, yada. 21:18But the one little wrinkle here that I thought that 21:20caught my eye was that, you know, in order to 21:23make it seem. In order to evade some of these 21:25behavioral detection systems, instead of just kind of inputting text 21:28all at once, Herodotus would take the text the hackers 21:32wanted to input, split it into characters, and enter characters 21:35one by one on a timing delay to make it 21:37seem like keyboard, you know, fingers on a keyboard. And, 21:39you know, look, I'm a non technical person largely, but 21:42I thought this was interesting. But I want to ask 21:44more technical people, and I'll start with you, Chris. Is 21:46this impressive? Is this as clever as I think or 21:48is this not really that big a deal? What do 21:50you see here? Both. I'm surprised that it took this 21:54long. Why is this the first one that we're seeing 21:57to do this? This seems like a really simple way 21:59to evade detection. You put in a random time in 22:02between keystrokes. Like, why isn't. I mean, it's 2026. This 22:06should have been done 10 years ago. Right. So in 22:09that case, yeah, this is kind of cool and interesting 22:11because somebody's finally figured it out. But on the other 22:13hand, why is the detection software looking at speed of 22:20key inputs as a metric to determine human versus not 22:24human? Like, I hope there are some other metrics in 22:26there that it's also looking at, because key input, like 22:29that's a known heuristic that you can identify individuals by 22:33is how they type. Like that's a known thing. So, 22:37yes, this is both amazingly amazing and both. Yeah, whatever. 22:43Next. Yeah, I'm actually surprised as well. Right. As we 22:47always think that our adversaries are way ahead of the 22:52defenders because they work together, they're more opportunistic, they're more 23:01we actually have a product in identity space which looks 23:05at a combination of your subject, which is a person, 23:10your action, which is a resource, your network activity, your 23:13environment, your behavior, which is not just the keystrokes and 23:16mouse movement, but also the fact that I'm doing a 23:19$30 transaction versus $3,000 transaction, puts them all through the 23:23ringer, and then tells you whether you want to do 23:25MFA or not. An mfa, it calculates the risk. Right. 23:29So that is, we've had it for six, seven years 23:34right now in production, and IBM uses it. Right. So 23:38I'm surprised that right now we are looking at something 23:41similar, which is probably one dimension. So maybe they will 23:48get exponentially faster. I see that eventually the attackers are 23:53going to start thinking the same way that we've been 23:55thinking, which is space bar or keystroke measurement is one 24:01dimension. They will probably look at other dimensions so that 24:05they are able to then go and provide multiple parameters 24:11to be able to circumvent the fact that this is 24:14a bot versus a human? Yeah, I'm glad that you 24:17kind of brought that up, because that was what I 24:18was wondering. Right. Is this kind of the beginning maybe 24:22of a little bit of your classic kind of arms 24:25race? Right. Like you said, Sridhar, we have these kinds 24:27of behavioral detection systems that are pretty complex, these a 24:30lot of different factors. Maybe the hackers just stumbled onto 24:33one of them, but maybe they'll start to use more 24:35of them and then Are we entering a world where 24:37these things are going to keep escalating? I don't know. 24:39What do you think? Is that where we're headed? Will 24:40we see more of this humanized malware? I think there's 24:42two dimensions over here. One is definitely, definitely more humanized 24:47malware. And not just so humanized malware, but there's a 24:49sister side of it, which is automated red agents. Going 24:53back to the agent discussion that we had a few 24:56minutes ago. Right. But I think that's one dimension. The 24:59other dimension is also, end of the day, the attackers 25:03are running a business. It's a return of investment. Why 25:06are they doing that now? They're not dumb. Right. As 25:09smart as we think we are, we are not. At 25:11least I'm not. Right. I think they're doing it because 25:15not everybody using the multidimensional risk analysis. It's very similar 25:20to mfa. How many people actually use mfa? Not a 25:25new technology, but I'm surprised that not many people use 25:28msa. I see your head nodding. Right. So it's the 25:31same thing. Right. Not many vendors or not many organizations 25:37are using multidimensional way of evaluating if there's human or 25:43not human. Most people are still stuck on captcha or 25:46maybe some traditional ways of doing that, and maybe that's 25:50what they're going after. Right. So there's two dimensions that 25:52I look at. One dimension is where the maturity of 25:56the market is. And again, this is a precursor to 25:59something which is, you know, going to explode as well. 26:02So, yeah, I've been yelling at people to turn on 26:04MFA ever since I started. Ever since I started covering 26:07cybersecurity and learned how bad passwords are at keeping you 26:10safe. I try to yell at everybody in my life, 26:12turn on mfa, and they don't always listen to you. 26:15Chris, any thoughts on your end here on what we 26:17could expect from this humanized malware trend? Well, I mean, 26:20it's the old cat and mouse game, right? We put 26:23in a defense. The bad guys or criminals figure a 26:26way around the defense, and then we put in another 26:28defense, and then the bad guys, criminals figure out another 26:31way around the defense. So it just kind of goes 26:33back and forth. Do they have an advantage? Are they 26:37further ahead than us? A little bit, maybe. But then 26:40we catch up and pass them, and it kind of 26:42goes back and forth. So this is, like I said, 26:44this is both novel and not novel. I'm surprised it 26:48took this long, and I'm kind of interesting to see 26:51what they come up with next to try to bypass 26:53some other humanistic heuristics that we have. Let's move on 26:57then to our next story and talk about a very 27:00interesting smishing attack, one that's happening on a level that 27:03I personally haven't seen before. And this is a smishing 27:06attack that manipulates stock prices. This is a, a campaign 27:14that Fortra uncovered of a pretty large smishing network that 27:19is sending out these messages to basically try to steal 27:22people's brokerage accounts. And then once they get into the 27:25compromised account, they manipulate stock prices to make some money. 27:29Now, I'm going to quote Alexis Obert Fortra to explain 27:32it because again, this is a kind of thing I 27:34have not seen before. So I'm going to use her 27:35words. In these scenarios, the threat actor will liquidate any 27:40existing investments made by the victim and reallocate the funds 27:44to low liquidity stocks, often penny stocks or IPOs. Then 27:48they will artificially inflate the stock price by purchasing large 27:52amounts. And once at a profitable level, they will sell 27:55off the holdings to gain a financial profit before withdrawing 27:58any earnings using mobile wallets. Again, I personally have just 28:03never seen social engineering on this kind of scale before. 28:06You know, I've seen people get their individual bank accounts 28:09hit, but like to manipulate the markets. That's a little 28:12bit scary to me. I don't know. What about you 28:14folks? Any thoughts there, Sridhar? What are your feelings on 28:16this thing? I want to step back right now. Granted. 28:19It sounds really, really cool. Right. I have not seen 28:22this either. Right. But I've seen similar things. But if 28:25I step back, the fundamental thing over here is stealing 28:29passwords. And that is the easiest thing that one can 28:34do today. And it's pains to say that. Right. But 28:38that is a reality. Right. Why go and jump the 28:41walls when we can go and cut the wire fence 28:44with a wire cutter and get into the property? So 28:47I think to me, we are seeing a beginning of 28:52how you can take advantage of compromised passwords, whether through 28:57phishing or whether through buying hundreds of them or hundred 29:01thousands of them at the Dark web. That's easy to 29:03do these days. Once you do that, rather than trying 29:08to go do a ransomware attack, which takes a long 29:10time, the return on investment may not be as much. 29:14I look at this as an opportunity to say, hey, 29:16why don't I go and take care of tens of 29:21thousands of brokerage accounts that gives you a few million 29:24dollars very quickly that I can manipulate the market. So 29:27the return on investment is way higher than having to 29:30live in a network for a year. Before I see 29:32that I'm somewhere. Right. So I'm thinking like an attacker, 29:38right, for a change. But I have to think there 29:40so that I can then start defending. Yeah, I, you 29:43know, that's a really good point. And it's almost like 29:45basically every story we've covered so far is sort of 29:48the same theme that like everything old is new again, 29:50right. It's like you said, this is just, I had 29:52not thought about that. This is another kind of password 29:54attack, right? It's put to really neat ends. But at 29:56the end of the day, what are you doing? You're 29:58stealing a password. You're getting in there. You're stealing a 29:59password and using a person's account. Right. Chris, your thoughts 30:03there? We've seen similar attacks with crypto, right. People trying 30:06to get the passwords, get in the crypto account and 30:09liquidate the account and move the fund somewhere else. And 30:12we have seen some of this with, with brokerage accounts 30:15too, but it's usually liquidation and get out, right? The 30:18manipulate the markets with the penny stocks. That's a new 30:21angle. That's an extra step. But at the, the, it's 30:25still a password attack. It's still, you know, a Trojan 30:30that we're going after. Bank, we're going for the money, 30:32right? Whether it's crypto, bank account, brokerage account, they're going 30:35for the money. The added step that's going on here 30:38is that instead of just liquidation and get out, they're. 30:41They're trying to make even more money before they can 30:44liquidate. So, yeah, this is just again, another step, another 30:48evolution that we're seeing in the criminal mind as they, 30:51they take it to the next level. And more reason 30:54to turn on that MFA again, right? It's like you 30:56said, you want to keep people out, just turn on 31:03do want to mention MFA is not a panacea, right. 31:05It's not a guarantee. There are ways that a really 31:08smart attacker can bypass mfa, but it's, it's another step. 31:12It gets rid of the low level, attacks the ankle 31:15biters, as we call them, and makes it more difficult. 31:18And by making it more difficult, remember I said the 31:20attacker is lazy. They're going to go to someone else 31:23because you've made it. Oh, I'm not dealing with mfa. 31:25I'm going to go to this other account. That's what 31:27you want. Protect yourself, let somebody else be the victim. 31:29MFA is absolutely required. I think there's no question without. 31:34But again, too much of MFA also causes distractions. Right. 31:38I think. And that's one of the reasons and one 31:41of the avenues that attackers use. Right. MFA fatigue. Instead, 31:46I think we should think about in 2026 and 2025 31:49and 2026 moving forward. Right. We need to start thinking 31:53about the behavioral analytics like we talked about in the 32:05person? Whether it's a bot or a human, what device 32:08it's coming from, known or unknown, what is the network 32:11or the environment? Have I seen this before or not 32:15seen this before? And the industry is actually doing a 32:18really good job with a open specification called Shared Signals 32:22over here. Right. As a part of the OpenID, if 32:25you use something like that and collaborate on the shared 32:29signals, whether it's identity related IOCs or IOBs or any 32:33other things, the more data that you have, the better 32:37you can do a risk evaluation to be able to 32:40stop this. So I think it's not just MFA is 32:42a means to an end and it is a result 32:44for sure. But how you get to MFA has to 32:47be behavioral analytics. That's a really good point on both 32:50ends. And yeah, I'm glad you said also Chris, that 32:52you know, the MFA is not a, it's not a 32:54panacea, right. It's not going to fix every, it's not 32:56going to stop everything because to be quite frank, I 32:58believe in this attack. One of the things that's interesting 33:01about it is that it does involve stealing one time 33:04passwords, right? They have. And you know, again it's safer 33:07to have that because it puts an extra obstacle but 33:09it's not totally uncrackable. So you need those like you 33:13said, those behavioral signals, Sridhar, that become. And when you 33:16have a bunch of them, you can't fake those as 33:18easily, you know. Yeah. So let's move on then to 33:22our final story for the day, folks talking about bug 33:25bounties getting bigger. This is a report from Bloomberg that 33:34reports that bug bounty programs are skyrocketing in both popularity 33:39and the amounts they're paying out, hitting some all time 33:42highs. For example, HackerOne paid out 81 million over the 33:46past year, which is its single highest year on record 33:50and a 13% increase over the previous year. So pretty 33:53significant jump. And what's particularly interesting to me here is 33:56that in an era of AI when you have things 33:58like Google Code Mender coming out and all this stuff 34:01where people are like hey, we're going to automate the 34:03ability to find your bugs. To see such a human 34:06driven activity like this taking off even more, I just 34:10thought that was kind of interesting and not necessarily what 34:12I would expect. Chris, I want to start with you, 34:14you know, as a hacker. What are your thoughts on, 34:17on, on this kind of thing right now? I got 34:19a lot of thoughts on bu. Bounties. Let me just, 34:22I'll try to keep it to this particular topic today. 34:25The big numbers that you're seeing for these specific bounties 34:29are for very specific, very difficult to exploit, hard to 34:34find bugs. They're not your run of the mill AI 34:38finds 100 bugs in an hour type bugs. These are 34:41the types of bugs that a state sponsored actor would 34:44pay a lot of money for. And so the reason 34:47for these big bounties are to keep them out of 34:49the hands of the state sponsored actor, right. So that 34:53they're not used against dissidents or on mass surveillance of 35:03that AI is able to find, those do not pay 35:06out anywhere near as much. But these big numbers also 35:10make great headlines which helps the companies that run bug 35:13bounty programs saying that, you know, all these, we're paying 35:17out all this money, come find bugs and become a 35:19millionaire. That's not really how it works. It takes a 35:22lot of hard work to make a decent money at 35:24bug bounty and it's a kind of a grind. But 35:28if you have the skill set, yeah, you can make 35:30some money there. But at the same time you have 35:32companies who now have to, who are paying for this 35:35service as well. And then you have attackers who are 35:38using AI to find bugs and are flooding bug bounty 35:41programs. This is a whole nother topic when it comes 35:44to open source. And now I'm kind of going off 35:47into some of my other topics areas so I'll leave 35:49it at that. Yeah, there's big money here that can 35:53be made, but it's difficult to make it. Yeah, we'll 35:55have to have you come back and talk about this 35:56in more depth with Sridhar. A whole show on bug 35:58bounties. That's good to know. Sridhar, your thoughts on the 36:01kind of state of bug bounties today? I can't seem 36:04to recall that the name of a movie, right, where 36:06this person is paid really, really high amount of money 36:10to hack out of a prison. Right. And I kind 36:15of look at it like that, right, that yes, you 36:18are legitimate burglar, but you can make more money doing 36:23ethically and legally than by being on the other side. 36:27Right. So I kind of look at in two dimensions. 36:30Like you've seen my theme. Now, one dimension is for 36:35the attackers, like I said. Right. I know. Jokes aside, 36:38this is a legitimate and illegal way of getting really 36:44well paid and putting the best minds to that to 36:47say you can actually make a good living out of 36:49this, Use expertise from Space Rogue and all the things 36:57that we do. As an example, the other dimension is 37:00from a company perspective, it is a small sum of 37:05money as insurance to pay for what may be a 37:08much larger right for paying a million dollars in bug 37:12bounty versus $10 million in ransomware. I'll take the first 37:16option any day. So as a result, you see both 37:20of those coming together into a perfect storm to increase 37:25the momentum. There's a desire to do more bug bounty 37:30legally and there's a desire to pay more because that's 37:33an insurance. Right. That's where you see this more and 37:36more on the increase. Now, having said that, for both 37:42sides, bug bounty alone is not sufficient. I think Chris 37:46was also saying that MFA is not the panacea. It's 37:49not the only thing necessary. Bug bounty is part of 37:53the resilience program of the overall resilience program. Basically, you 37:57have to do all these things together other and. And 38:00bug bounty should actually be the last thing you do 38:03for all the stuff that you may have missed that 38:05you did check for. So, yeah, to second. Second your 38:10opinion there, it's not the only thing you should be 38:13doing. It is one more tool in the toolbox. Given 38:16their kind of ability to act as like you said, 38:17this kind of last kind of line of like insurance 38:21of like, hey, we did everything we could to find 38:22this thing. If there's something still out there, we'll give 38:24you a reward for finding it. Do you think we're 38:26going to see the bug bounty programs kind of stick 38:28around or do you see a day that, I don't 38:30know, the AI gets good enough that this kind of 38:32thing goes away? Any thoughts there? I think it's going 38:35to evolve, right? I mean, our attackers are going to 38:37evolve with more and more automated agents for doing attacks. 38:42Right. Think of it as automated red teaming. And it 38:47learn on the fly, it learn all the ttps, it 38:49learn all the vulnerabilities. And while you go across get 38:52a cup of coffee, you'll probably have an exploit and 38:55probably the code generated to leverage that exploit as an 38:58example. Right. So I think what we will see is 39:03probably more and More purple teaming, Right? Not just a 39:08mechanism to go and say, let me go and do 39:11this automated testing and then come back and fix it. 39:14But the speed in which this happens requires you to 39:17have some sort of a blue agent which is able 39:20to go and fight AI versus AI. And then individuals 39:26then have to figure out how to govern those in 39:29a manner that they can keep up with the speed. 39:32I think in the short term it's hard to forecast 39:36out 15, 20 years, but in the short term, I 39:40think there's still going to be a need and a 39:42requirement for manual review of code for the weird chaining 39:48of bugs together and finding that weird edge cases that 39:52AI is just not going to find for now. Right. 39:56I have no idea what's going to happen in 20 40:04teaming and other security aspects, if you want those edge 40:07cases, if you want to find that weird chaining where 40:10you're putting five bugs together to gain access, you really 40:14need a human to do that. If you need a 40:15surface level stuff, yeah, you need to check a box, 40:19get your AI agent in there and do your red 40:22team to check your box. But I hope at least 40:26we're still going to need humans for a little while. 40:29What I'm worried about though is that the AI is 40:33going to take all the low level stuff and we're 40:35going to run out of people expert enough to do 40:37the human stuff. What I say, Chris, is AI is 40:41going to help us with speed and accuracy. No question, 40:44no caution on that. Right. But I think the human 40:47ingenuity and creativity will always remain with us so that 40:50when you combine it, that's when good stuff happens. I 40:53hate to leave you all on that question on that 40:55slightly apocalyptic scenario, but that is all the time we 40:59have for today. So thank you Sridhar and Chris for 41:02being here. Thank you to our list, listeners and viewers 41:05and folks. Don't forget to check out the special episode 41:07we released last week, how to Break into an Office, 41:10which features our very own Stephanie Carruthers. Find it on 41:14Apple, Spotify and audio platforms everywhere. As always, subscribe to 41:18Security Intelligence wherever podcasts are found and stay safe out 41:22there. there.