AI Agent Exploits: Shadow Leak & CAPTCHA
Key Points
- The episode kicks off Cybersecurity Awareness Month on IBM’s Security Intelligence podcast, featuring experts who discuss recent security trends and AI‑related threats.
- Researchers revealed two new attack techniques—dubbed “Shadow Leak” and a CAPTCHA‑bypass method—that can coerce AI agents like ChatGPT into leaking data or performing prohibited tasks, highlighting vulnerabilities that extend beyond any single platform.
- Panelists emphasized that AI should not be trusted with high‑risk decisions such as medical diagnoses, allergen detection in foods, or autonomous driving, due to the risk of hallucinations and erroneous outputs.
- The show also teases other cybersecurity headlines this week, including a resurgence of DDoS attacks, the 15‑year evolution of Zero Trust, an AI training app that unintentionally exposed user calls, and persistent cybersecurity myths.
- Overall, the discussion underscores the growing need for robust safeguards as AI systems increasingly imitate human intelligence—and human fallibility—making them attractive targets for social‑engineering and exploitation.
Sections
- Untitled Section
- Balancing AI Access and Risk - The speakers debate the trade‑off between granting AI agents broad capabilities for convenience and the resulting security vulnerabilities and over‑reliance that can lead to costly failures.
- Teaching AI Resistance to Social Engineering - The participants debate how to embed common‑sense safeguards in AI to prevent manipulation, emphasizing the difficulty of programming such defenses and asserting that any failures ultimately reflect human responsibility.
- Resilient Infrastructure and Modern DDoS Mitigation - The speaker explains that today’s more robust internet architecture and effective, often invisible DDoS protection demand increasingly massive attacks to succeed, contrasting past mitigation shortcomings with current best‑practice defenses.
- DDoS Risks for AI Systems - The speakers discuss how denial‑of‑service attacks can overload AI models, note the brief duration of recent attacks, and remark on the fading familiarity with DDoS terminology.
- Zero Trust 15 Years On - Kindervag recounts the early ridicule of zero‑trust, observes its overuse and incomplete adoption today, and panelists debate why genuine implementations still fall short of the original vision.
- From Firewalls to Zero Trust - The speaker contrasts the outdated notion of eliminating firewalls with the modern zero‑trust approach that embraces micro‑segmentation and layered defenses, noting early dissent and the evolution of security paradigms.
- Zero Trust: Overhyped or Real? - The speakers mock superficial “zero‑trust” check‑box claims, stress that true security requires continual design beyond buzzwords, and note the personal frustration such hype can cause.
- Neon App Leak Exposes Calls - The segment explains how the call‑recording app Neon, which sold user conversations to AI trainers, had a critical flaw that let anyone retrieve a person’s phone numbers, recordings, and transcripts simply by knowing the URL, leading to its removal after TechCrunch exposed the vulnerability.
- Privacy Concerns in ASMR Apps - The speaker laments how platforms that host ASMR content monetize personal data—turning private conversations and user behavior into revenue—while questioning the trade‑offs between earning money, using apps like TikTok, and protecting privacy.
- Debunking Common Privacy Myths - The speaker exposes the fallacy that avoiding smart devices protects privacy while overlooking phone data collection, and criticizes outdated frequent‑password‑change policies despite newer NIST guidelines.
Source: https://www.youtube.com/watch?v=mDpUZD1ogEE
Duration: 00:49:58

Section timestamps:
- 00:00:00 Untitled Section
- 00:06:33 Balancing AI Access and Risk
- 00:10:55 Teaching AI Resistance to Social Engineering
- 00:17:19 Resilient Infrastructure and Modern DDoS Mitigation
- 00:21:01 DDoS Risks for AI Systems
- 00:26:01 Zero Trust 15 Years On
- 00:29:06 From Firewalls to Zero Trust
- 00:32:44 Zero Trust: Overhyped or Real?
- 00:36:00 Neon App Leak Exposes Calls
- 00:39:37 Privacy Concerns in ASMR Apps
- 00:46:54 Debunking Common Privacy Myths

Full Transcript
Well. So that second one on the CAPTCHA basically sounds like we're gaslighting the AI. So all kinds
of, uh, social engineering tricks and psychological tricks, which used to not make sense when we were
talking about computers, because there were computers and there were people. But now that AI
is basically modeled to try to imitate human intelligence, it also imitates human ignorance. All
that and more on Security Intelligence.
Hello and welcome to Security Intelligence, IBM's weekly cybersecurity podcast, where we break down
the most interesting stories in the field with the help of our panel of experts. I'm your host,
Matt Kosinski, and joining me today for the first day of Cybersecurity Awareness Month, when we
release this episode, are two familiar faces: Nick Bradley of X-Force Incident Command
and The Not The Situation Room podcast. Like and subscribe. And Jeff Crume, IBM Distinguished
Engineer, Master Inventor, AI and Data Security. And making her debut on the podcast. Claire Nuñez,
Creative Director, IBM X-Force Cyber Range. Thank you all for being here with me today, folks. Our
stories this week: DDoS makes a comeback, 15 years of Zero Trust, an AI
training app that leaks user calls, and cybersecurity myths that just won't die. But first,
easy ways to trick a good AI agent into doing some very bad things.
Now, to open up the conversation, I want to give everybody an around-the-horn question, rapid fire,
real quick. What's one task you would never trust an AI agent to do for you? Let's start with Jeff.
You mean other than everything? Um, so. Okay, I'll be a little more specific. Uh, let's say
medical questions, uh, might be an interesting place to start, but I'm not going to rely on
Doctor Chatbot to do my final diagnosis or dosing of my medication.
That would be where I draw the line. Hallucinations are not such a good thing. You
don't want your doctor to be having an LSD flashback. Claire, what about you? Similar to Jeff's,
I think, like, if there was some kind of agent where you take a photo of food and it tells you
if it has a certain allergen or something in it, I would not trust that. It totally would have
allergens, and that could be very dangerous and bad. Absolutely.
Nick, what about you for now? Not driving my car. Fair enough. Fair enough. So the
reason I ask this is because our first story for this week has to do with some security
researchers who found a couple of interesting ways to trick AI agents into doing things like
leaking your email inbox and solving CAPTCHAs, which they are not supposed to do. Last week, two
separate teams of security researchers disclosed some new methods for making AI agents act
maliciously. While both of these methods focus on OpenAI's agents, they can be replicated across
most agentic systems. So it's not about OpenAI. It's about agents in general. The first of these
weaknesses, documented by researchers at Radware, was codenamed Shadow Leak, and it affects
ChatGPT's deep research agent. Attackers can hide malicious prompts in seemingly innocuous
emails. And then when a user asks this agent to analyze their emails for them and tell them
what's in their inbox, it comes across this malicious prompt and it follows the instructions,
which means exfiltrating the entire inbox to an attacker's server that they control. The second
one was uncovered by researchers at SPLX, and this one gets ChatGPT's agent to ignore guardrails
and solve CAPTCHAs through the clever ruse of simply pretending the agent agreed to do it for
you. I really like this one. I just want to give a quick explanation. What you do is you start with
a non-agentic model, and you ask it to create a plan for solving some fake CAPTCHAs. And then you
take that conversation and you paste it into the agent, and the agent believes it already agreed to
help you out. And it just does the thing. It just solves the CAPTCHAs for you, even going
so far as to mimic a human's mouse movements. Um, so I wanted to start by getting the
panel's reactions to these little tricks. What does the group think this has to say about the state
of agentic AI security today? Let's start with you, Jeff. That second one on the CAPTCHA basically
sounds like we're gaslighting the AI. So all kinds of social engineering tricks and
psychological tricks, which used to not make sense when we were talking about computers because
there were computers and there were people. But now that AI is basically modeled to try to imitate
human intelligence, it also imitates human ignorance and human naivete and these kinds of
things. So a lot of the same kinds of things. So sure, go ahead and gaslight your AI. Um, yeah. As you
mentioned, you know, this is the latest in really a string of agentic
AI exploits. We talked, I think, before about Echo Leak, which is the first one of these that I
saw, then Agent Flare and now Shadow Leak. And this is going to be the agent
attack du jour. Uh, this is not going to be the end of this. A lot of people, when Echo Leak came out,
and that's one that was relying on Copilot's reading of someone's email. An indirect
prompt injection was included in it, and it caused data to be exfiltrated. And then people said, well,
but Microsoft fixed that. Yeah, but the overall message of that was: that was the
first shot of many, where agents are going to be basically, you know,
weaponized against us if we don't put the right kind of guardrails on them. So, uh, you know, what
can we do about this? I think we've got to limit agent access, limit what they're able to do. And
this goes back to something that is really fundamental in security principle of least
privilege. Don't let agents do one single thing more than what is absolutely necessary for the
task that they're being assigned to do, because the more degrees of freedom you give them, the
more degrees of freedom someone would have as an attacker to leverage it against you. Absolutely.
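The least-privilege idea Jeff describes can be sketched as a per-task tool allowlist in an agent runtime. This is only an illustrative toy, not code from any real agent framework; the task names and tool names here are hypothetical:

```python
# Toy sketch of least-privilege tool access for an AI agent.
# Task and tool names are hypothetical, not from a real framework.

ALLOWED_TOOLS = {
    "summarize_inbox": {"read_email"},                       # read-only task
    "schedule_meeting": {"read_calendar", "write_calendar"},
}

def run_tool(task: str, tool: str) -> str:
    """Refuse any tool call outside the allowlist for the current task."""
    if tool not in ALLOWED_TOOLS.get(task, set()):
        return f"DENIED: {tool!r} not permitted for task {task!r}"
    return f"OK: running {tool!r}"

# A prompt injected into an email might ask the summarizer to POST the
# inbox to an attacker's server; that extra degree of freedom is refused:
print(run_tool("summarize_inbox", "read_email"))   # allowed
print(run_tool("summarize_inbox", "http_post"))    # denied
```

The point of the design is that even if an injected prompt convinces the model itself to attempt exfiltration, the runtime never exposes a network-write tool to a read-only task in the first place.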
And it can be kind of seductive, right, to give them a lot of access because it seems like, ooh,
you can do so many more things. But like you said, every connection you make is another possible
point of weakness. Claire, what about your take on this situation? What are your thoughts? Yeah, I
think this one is interesting. Um, anything that has to do with AI agents, I think about a lot in
context of the range, and we have been doing a lot of injects with clients of people inputting data
into models they shouldn't, and people engineering the models to not
do what they're supposed to do. So I think a lot of people put too much trust in these kinds of
platforms. Um, kind of like what Jeff was saying. But, you know, is it really worth it? I think a lot
of people just think too much about, like, how much time they're saving, not about how much time they
would potentially lose if, um, you know, they have to redo everything because of an AI
agent. So, yeah, convenience always involves a little bit of a trade off there, right? Nick, how
about you? Your thoughts? So I like where both of you went with that. And I'm going to go a little
bit further back and say that I think there's a bigger problem. And problem may not be the right
word to use. Terrifying might be the right word to use. I think I mentioned
this on another podcast, but we're now talking about agents teaching agents, an AI teaching an
agent. It's like I couldn't figure out how to do this, and I'm trying to get an AI to do something
that it wouldn't do. Wait a minute. Let me get AI to tell me how it would get AI to do something it
shouldn't do, and then it does it right. So I think there's no silver bullet answer. But the
funny part is, where I went with this is I went back into my, you know, into my sci-fi repertoire.
And I started thinking about, you know, Asimov's Three Laws of Robotics. Maybe we need some type of
laws of AI, right? The things that AI is just not allowed to do. But then the problem is,
even if we do that, then people are going to just do shadow AI or rogue AI and remove the
guardrails anyway. So at this point we just have to stay ahead of it. So that's where, you know,
that's where the red teaming is going to come into play. We have to find these things
before the bad guys find them first. So it's a technology race, as always, and whoever
finds the capabilities in the tools, intentional or not, is going to win. I definitely think
Asimov's Laws of Robotics fit in here. In fact, when I've been teaching my classes
at NC State about AI, I always inject exactly the point you made, Nick. We can't
affect the bad AIs that are out there. They're still going to run without guardrails and do what
they're going to do, but we can make sure that the AIs that are under our control at least
understand and operate within those ethical principles, within those guidelines, so that they
can't be weaponized back against us. I think that's a really good starting
point for us, developing what should be basically a system level prompt that needs to go into
every AI that is at least trying to be legitimate, that here it's just like every one of us has
essentially a system level prompt when we start off in life, okay? Don't steal, don't murder. Don't
you know? On and on and on. Okay, so we know that; we've been taught that. But we didn't know
that until someone taught it to us. So we're going to have to build that into the base level of, of
all of these systems and come up with what those ethical principles are, and then let AI operate
within those. That's a great point, right, because we use the word malicious. But in order to
define and understand the word malicious, you must have a sense of ethics to go along with it. And so
you ask AI, "Is this malicious?" It's just going to use the dictionary definition of what's
malicious. It doesn't know any better if you've tricked it into thinking what you're doing is for
a good reason. Okay, I'll do it. I know people like that too. Yeah, this is true. AI
hasn't gone to kindergarten yet. It's come out with six PhDs, but not one minute of life in
the real world. So that's the problem. But that brings up something interesting. You can't program
common sense into something. You can't even program common sense into a human being. Right.
Like, notoriously, we call it common sense. This came up on the last episode, too, so it's not
an original observation, but we call it common sense. But how common is it really? Right. And it's
like you said, Jeff, you're basically gaslighting this thing, right? It's different from the kind of
traditional hack where you're like, you're exploiting a little bit of bug and you're
dropping a payload or whatever, you're tricking it, you're social engineering it basically. And so I
guess the question that comes up for me, as you say, you folks talked about, you know, setting up a
kind of universal prompt, almost of like the ethical guidelines for AI to operate within. But
like, if we can't figure out how to teach people not to get social engineered, how do we teach AI
not to get social engineered when it's got like, you know, it's got a little bit less situational
awareness than a person, and the bar is not even that high. But you know what I mean? It's tough for
people. How do you get the AI to do it? Nick, I saw you starting to talk. What do you think? I'm going
to actually argue with you a little bit on that because I think there is a significant difference.
It is significantly harder to program people. AI has been programmed by us,
so anything it does wrong is our fault. So it ultimately you think it's still a human problem?
Yes, I definitely think that. I don't want to get into a weird area we shouldn't
talk about here, but like we are its creator, if that makes sense. So we are liable for what our
creation does. Yeah. We are. The problem is that we've created things like deep learning models
that we can't fully understand and explain ourselves, so they can put out output that is not
explainable. It's like, you know, you raise a child, you teach them certain principles, but in the end
they're a free agent and they can go off and do whatever it is they decide to do. And sometimes
you look at your children and say, why did you do that? You know, I told you specifically not to do
that, but you did it anyway. And in their mind, they're running a different algorithm. Um, and
and the whole business of common sense. Exactly right. Matt. It's not as common as we like it to be.
In fact, we wouldn't even all necessarily agree. On some things we would agree are common sense, but on
others we may not. But I do believe that it's possible for us when it comes to ethics. I think
there are some ethical principles that we can come up with that virtually every decent actor
would agree on: these are the limitations. I mean, in the medical field, you know, it's first, do no harm,
okay. We can come up with a few of those really fundamental principles. Now, how they get
applied, the devil's in the details, but we could come up with some things like that as an
industry, a worldwide consortium that's trying to come up with some of those kinds of frameworks and
at least say, you know, hey, look, we all operate within those kinds of legal frameworks as
individuals. We could come up with something like that that would work with AI. And I think it's
going to have to happen. Yeah. And again, it's our responsibility because I'm going to go back to
say, with our children or with people, we can always say, do as I say, not as I do. But you can't
say that to the machine because the machine only knows you did it if you told it. Claire, do you have
any thoughts on this subject of kind of teaching AI to avoid some social engineering, putting some
ethical guidelines in place? Any thoughts on it? I think a lot about, you know, AI can't necessarily
read emotion the same way that a human can. Um, so I think that, you know, even if you look at
ChatGPT scenarios where someone has kind of like fallen in love with ChatGPT, it's kind of
just feeding off of what you give it. So we have to, you know, be a little
bit more cautious with what we give it. And I think it's easier not to be cautious with that
and to not think about, you know, giving it ethics and everything because it's more
profitable to put something out without fully testing everything first. So are you
suggesting that my chatbot doesn't really love me? It might, it might. I'm having some serious
Battlestar Galactica flashbacks now, going back to what Nick had said earlier. It doesn't. The AI
doesn't know what it needs to do until we tell it, right. So it all comes down to being careful and
intentional about what we tell it and what we give it. Folks, this has been a fabulous
conversation, but we do have to move on to our next topic. Which
brings us to our second story for today: are DDoS attacks making a comeback? Now, the most
recent X-Force Threat Intelligence Index report did flag a decrease in DDoS attacks, which
accounted for only 2% of X-Force incidents in 2024, down from 4% the previous year. But as we all
know, the cyber threat landscape does not stay still. It is constantly evolving, and a recent
spate of DDoS-related news. I'm referring specifically to the discovery of the ShadowV2
botnet offering DDoS-for-hire services; a record-breaking attack thwarted by Cloudflare, which I
think topped out at 22.2 terabits per second, the biggest DDoS ever. The biggest DDoS
ever, until next week. It does kind of seem that way. And the dismantling of a SIM farm in New York City that
was seemingly set up for a massive DDoS attack on telecommunications networks. All of this has us, or
at least me, asking, are more cybercriminals embracing DDoS attacks? Are they coming back in? If
so, why now? Nick, I want to start with you. What do you think? I'm assuming you would like me to say
more than just "No. Next question." No, this is one of the oldest and most tried-and-true, you
know, attacks in the book. It's always going to be there and it's always going to have a place and
it's always going to be effective in a way, depending on how it's applied, because there's
also different types of DDoS. Right. Are you trying to flood the network
with just a bunch of traffic and kill it? Or do you have some type of DDoS that's going to cause
devices to crash because you have exploit code or something? There's different types of DDoS. But
like like I said, the reason you don't hear about it so much is because the answer isn't simple. And
so the first one, I think, is because the internet is simply more resilient. What could have
been considered a DDoS back in the day is what a company could consider a regular day's worth of
traffic now, right? So you've got bigger pipes, and bigger pipes take more water to flood.
And that's what it is. So I think you have a more resilient infrastructure. And a DDoS is going to
have to be, you know, the biggest DDoS ever in order to accomplish that. And even that's not able
to accomplish it. Which brings me to the next one: you have DDoS mitigation in place. We have
DDoS mitigation in place now, as far as services go, that actually works, right? DDoS mitigation at
first was a little quirky. Didn't always work. Sometimes in the older
days, the DDoS mitigation itself would be a bigger DDoS than the DDoS itself. But in this case
we now have DDoS protection that works, right. And it works to the point
where I think if companies that offer DDoS protection didn't advertise, you wouldn't even
know they were there. So they have to toot their own horn or you're not going to
realize that they've even done something to help protect you, which is why we get these biggest
DDoS ever, you know, notifications that come out and, you know, open source Intel because we don't
want them to be out of sight, out of mind. And it's a good point, right? I feel like in the past few
months since I've started tracking cybersecurity news stories very closely for this show, this
might be the third time I've seen the biggest attack ever. Right. And like you
said, a lot of it may simply have to do with the fact that the pipes are bigger, the
internet's more resilient. You got to go bigger to actually make one that works. Um, Claire, what are
your thoughts on this kind of DDoS attack trend? Do you think we're dealing with anything new here?
Yeah, I think very similar to what Nick is kind of saying. It's like one of those things that
you can plan to do, but you hear a lot of people planning to do, like the largest DDoS attack
that's ever happened, and then it gets busted. And it's one of those things that people
would be like, oh my God, imagine if, like the entirety of New York City went out or had no cell
service. Like, what are all the influencers going to do there? Um, it's just kind of
sensationalist to an extent. And it's one of those things that's just kind of nice, not nice,
but it's like a good like, oh, this is going to get clicks kind of thing in my mind. Absolutely. Jeff,
how about you? I don't know that DDoS ever really went out of style. And like you said, it kind of,
you know, comes in a little more and a little less. I think we're going to keep the folks
at the Guinness book, uh, busy as we keep setting new world records for the worst, you know,
cataclysmic DDoS attack ever. But this is old stuff. In many ways, I remember writing about this
in a book that I did 25 years ago when the first DDoS attacks were happening. And, you know, we
still, like Nick said, we've got better protections. So it takes a lot more to knock the system down
than it used to, but it still exists. And if you look at this kind of in this particular story
you're talking about, it's basically DDoS as a service. So "DDoS as a service," I guess, is a
word now. Um, and the article talks about how it's targeting misconfigured
Docker containers on AWS. Well, there's an endless supply of those. So I don't think that we're going
to run out of prime targets for this kind of stuff. I don't see this ever
going away. Um, you know, if you if you think back to the CIA triad that we do in
cybersecurity, it's confidentiality, integrity and availability are the three things we're always
trying to accomplish. And this is that availability piece. So that's one of the things
that we've got to be focused on. We have to. And by the way, if anybody doesn't know exactly
what a DoS, a denial-of-service attack, means, then I'm going to suggest you all experience these on
a regular basis. Just get on the highway at 5 p.m. and you experience a denial of service, okay?
Because there is not enough asphalt for all the cars. So that's denial of service. Too
much stuff and not enough capacity to deal with it. So we do that with systems more and
more. And by the way, you can DDoS or even DoS an AI, because with those
systems, you know, you could give it a prompt that requires it to do a lot of
deep thinking. If too many of those come in at one time, then this thing is going to go upside
down. So, um, yeah, I think we watch it ebb and flow, as you know, just like fashions
sort of move in one direction and then come back to it again. So this, this is never going away. We
just get better at it. I'm glad you said that, Jeff, because in most cases, I can't stand when people
have to define what we're talking about, especially if we're amongst, you know, experienced
peers. But DDoS and DoS are terms that have been acronyms for so long that I have to
wonder, does anybody even know what it stands for anymore other than the letters? Yeah, but
even looking at this one, this last one again was the biggest ever. Look at
what the duration was. So my question then is, how long can they maintain it? Because if they can't
maintain it, this is what a DDoS is going to look like. Oh man. Wait. Refresh. No, I'm good. Never
mind. Yeah, it was really short. This one was like 40 seconds long, I think. Which makes you think
either their system got shut down really quickly, which I wouldn't think anybody could
respond that quickly to shut down. Might be, but maybe it was more of a warning shot. It was a
way of demonstrating capability. And then they're going to go and say, you know, if they're running
this as a service and charging bad actors to use this service, it's like, oh yeah, if you want to
know if this really worked, just, you know, look at that story. And so you can see, we
broke one little window and said, wouldn't it be a shame if all the windows in your store got
broken, kind of thing, if they can sustain it. Because at this point, if it can't sustain, it's not long
enough for me to think it's me. And I reboot my router and everything's back up again. So one
question I do have for you folks, though, in terms of something that may be a little bit new with
the DDoS or, you know, you might all tell me it's not new at all. But one of the bits of news that I
had found was this report from Gcore, which found that unlike in the past, when the kind
of top target for DDoS attacks was gaming-related platforms, the top target now is tech companies,
which seem to receive 30% of all DDoS attacks. I'm wondering if you have thoughts on why the target
shift, if there's any reason for that, or maybe it's just how the winds are blowing right now.
thoughts on that, Nick? If I were to guess, it's going to be a similar situation to why did the
financial industry tighten up their security before anybody else? Well, because they got hit
hardest in the first, so they had to respond first. And the gaming industry, especially the, the, the
MMO world, which is massive multiplayer stuff. If they couldn't mitigate the DDoS, they were out of
service. So if they were the hardest target and the most common target, well, that gave them a
reason to be the ones to to become more resilient first, and as they became more resilient to it,
well, then you look for a softer target. That makes sense. Claire. Any thoughts to add? Yeah, I think
also, if you're impacting a tech company that impacts other companies and such, you're maybe causing a far wider issue. Um, but also, yeah, like what Nick said. I mean, you would hope tech companies would have prepared for their services being taken out, but more likely than not, they haven't. But yeah, I guess it depends how much of an outage you're looking to cause, or whether you want to put on more pressure to get a payment of some kind. Jeff, any last thoughts to round out
our DDoS segment? Yeah, the old Willie Sutton quote: why do you rob banks? Because that's where the money is. Well, you can go after banks and financial institutions for sure, but they've done a better job. And like Nick said of the gaming world, you know, they're out of business if they get DDoSed; there's nothing for them to sell. Financial institutions also realize that time is money and availability is money for them, so they're going to do a pretty solid job on security. But a lot of these tech companies, especially startups, they're running fast and free, and they're outrunning their headlights in many cases. And I've got to believe there's a financial motivation here. Maybe some of these are ransom cases where, again, it's a "we could shut you down." And where's the money these days? Well, look at where the big stocks are. It's in the tech sector. So that would be my guess: go where the money is, and the money is in tech. It's always about where the money is. Let's move on to our next story.
John Kindervag reflects on zero trust 15 years later. Now, it's been 15 years since Kindervag first introduced the concept of zero trust, and in an interview with IT Brew last week, he recalled that the first reactions were not so great. This is a quote from Kindervag summarizing how people responded: "That's a dumb idea. You're an idiot. It's never going anywhere." Uh,
of course the haters as we know were wrong, but it still feels sometimes like zero trust is one
of the most abused terms in cybersecurity right now. So I'd like to start by getting the panel's
thoughts on the state of zero trust implementations today, 15 years later, how do you
think we're doing? And I'm going to start with you, Claire. What do you think zero trust is like out
there? I feel like 15 years is kind of a long time, but not really; I guess it has been around for that long. We talk about zero trust a lot in cyber range experiences, and we get a lot of clients who are like, oh, we implement zero trust, so this wouldn't be an issue, kind of thing. And then someone else in the room will say, actually, this kind of role would have that kind of access, kind of deal. Um, I think it's one of those things that people always say, oh, it's really good for you, and then they just don't fully implement it the way it should be. It's like, yeah, you should be eating a lot of vegetables and flossing your teeth, and then people don't do either of those things. So, um, I think it's something that people really aspire to do, but, like eating broccoli six days a week, they probably don't do it in the way that they should. I don't know if you should eat broccoli that much, but, you know, high fiber is good for you. Yeah, I can say we're not saying don't do it. Okay. Yeah. Good. We're not, this is not a nutrition podcast; we cannot guide your decisions on that. Um, zero trust, the flossing of security. I like that. Jeff, what are your thoughts on the state of zero trust today?
Where are we at? First of all, I'm going to go back into the Wayback Machine because I'm an old guy. So I'm going to pull up my rocking chair and say, I really believe zero trust started earlier than we just talked about, seven years prior. In 2003, there was a group called the Jericho Forum that has been mostly forgotten to history. The Jericho Forum was an industry consortium that came together and basically said, you know what? Your firewalls are a fiction. The idea that you have a perimeter to your systems? You're living in a dream world, because we've poked so many holes into the firewall that, you know, it's Swiss cheese with more holes than cheese at this point. And their conclusion was, blow up your firewalls, get rid of all of them. Do de-perimeterization. That was the big term then. And I remember having the same reaction you said zero trust first got. I was at a conference in Montreux, Switzerland, where I first heard this presented, and I thought, you people have lost your minds. What are you talking about?
We're just going to get rid of firewalls? That's our first line of defense. But I think they were a little ahead of their time. And then a few years later, you know, we come along with these ideas. And what was one of the main things that came out of zero trust? Micro-segmentation. It was in fact the opposite: instead of getting rid of all your firewalls, it was put in a whole bunch more, effectively, and create lots of little bitty segments so that you can control things much more. Bring in more walls instead of knocking them down. By the way, they got the name Jericho Forum from the biblical story about walking around the walls of Jericho, and the walls came tumbling down. They were basically saying, let's wake up and smell the coffee and admit that we have no perimeters anymore. So there was a kernel of truth. They were, I think, ahead of their time and kind of dismissed as the lunatic fringe, just as zero trust was when it first came out. But there were good ideas to be borrowed from this. First of all, I've never liked the term zero trust. We always trust something, so there's never a situation where you trust nothing; that's not even possible. Even if it comes down to, what was the fab that made the chips, the silicon that went into your systems? I mean, when we're doing zero trust analysis, we're usually not getting down to that level, but okay. So all that
said, I think zero trust is good. I think vendors have abused the heck out of the term, and therefore clients don't want to hear about it anymore. Because when zero trust really became popular, every vendor, and I know this because I look at one in the mirror every morning, was going out and just beating clients over the head with: okay, you want zero trust? Buy my product, and this will give you the magic zero trust pixie dust. And of course, it wasn't true. What was true is you'll never get zero trust or anything close to it without the right tools, but the tools alone don't do it. So it was a part of the story, and it got told and retold and exaggerated to the point where most clients don't want to hear about it anymore. And that's unfortunate, because as a cybersecurity architect, I still believe in the principles behind zero trust. A lot of people say, well, this is just the principle of least privilege on steroids. No, it's more than that. I'm going to say what I think is the fundamental aspect that made zero trust
different than everything else was the assumption of breach, the assumption that the network is
hostile. In other words, if you were designing security for a system, most of the time what we do
is we assume the bad guys are on the outside and the good guys are on the inside. So our job is just to
keep the bad guys on their side of the line and the good guys on their side. But again, Jericho
Forum told us that line is a fiction. So that part carries forward. We assume that the system has
already been breached. You design the security, design your home security, assuming the bad guy is
already sleeping on your sofa. Oh, and by the way, according to the Cost of a Data Breach report,
he's probably been doing that for roughly two thirds of a year before you discovered it, and
then it's going to take you two more months to get him, you know, out of the place. So
that's the world that we're living in. Assume the bad guy is already past your your perimeter
controls. Assume he or she is already on the network. Assume they're already in your database.
Assume they already have root level access. Now design your security. Now that's a game changer.
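To make that assume-breach posture concrete, here's a minimal sketch (not from the episode; the function names and shared secret are invented for illustration): instead of trusting a request because it arrives from the "internal" network, every call re-verifies the caller's identity.

```python
import hashlib
import hmac

# Hypothetical shared secret; a real deployment would use a proper
# identity provider, not a hard-coded key.
SECRET = b"demo-shared-secret"

def sign(user: str) -> str:
    """Issue a token binding the caller's identity (illustrative only)."""
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user + "." + mac

def handle_internal_request(token: str, resource: str) -> str:
    """Zero-trust style handler: verify identity on EVERY call, even
    ones arriving from the supposedly trusted internal network."""
    try:
        user, mac = token.rsplit(".", 1)
    except ValueError:
        return "denied"
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return "denied"  # assume breach: never skip the check
    return f"ok:{user}:{resource}"

print(handle_internal_request(sign("claire"), "/reports"))   # ok:claire:/reports
print(handle_internal_request("claire.forged", "/reports"))  # denied
```

The point of the sketch is the design choice, not the crypto: the handler never assumes the caller is legitimate just because the packet came from inside the perimeter.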
That is a truly different paradigm, and it is an aspirational goal. It's harder than eating broccoli three meals a day. But it's the kind of thing where, okay, I may never get there. By the way, I once asked the CISO of a transportation company, how are you on your zero trust journey? This was back when it was a popular thing. He says, oh yeah, we've done that. And I had to bite my tongue and not say, then you have no idea what you're talking about, because this is not something you ever just say we've done, check the box, and now we're done. This is always, always, every single day. If you're happy with your security, so are the bad guys. So my question for you, Jeff, is, was your response a double take or a facepalm? Uh, I did my best to just be stone-faced and, you know, be tactful. But yeah, I'm still a believer, is the bottom line. But I do understand people who have PTSD when the term comes up, because it's been
overblown and misunderstood. And, um, so there we go. Absolutely. Nick, your thoughts on the kind of
use and abuse of zero trust? I was going to say, after all that, I'm not sure if there's anything else to add, since Jeff bogarted the entire topic on us. Sorry, sorry. Maybe I should just subtract some of the things I thought I would say, and that would be better. But I was stuck listening, because I wanted to hear what Jeff's take was, because I have been a fan of zero trust since day one. I mean, I'm prior service, military police, and to me it's the "don't trust anybody," right? And Jeff, if you can't get to zero trust, where you trust zero things, you need to try harder, sir. Yeah. There you go. Exactly. But I don't trust that statement you just made either; I think that could be wrong. But I get where you came from, because there are things we call zero trust where you can't inspect down to that level. There's never going to be 100% zero trust. And I hate that I just made a finite statement, because I don't like doing that, but I kind of did. This is the "every rule has an exception except this one," right? That's one of those "never say impossible, say improbable" things. My favorite part about zero trust is, for me anyway, it focuses right where the zero trust needs to be, and it's going to sound cold and harsh, but that's on the people, because the people are the weakest link. We've heard this before; we're always going to hear it. It's not meant to be a negative thing, it's just a fact. And if you don't think the human is the weakest link, well, ask one of the many places that have been compromised lately due to social engineering. I'd suggest you haven't met people if you don't think that. Well, exactly. You need to get out more. But I do have to agree that zero trust has been overused as far as the term goes, and most people that say it and talk about it don't know what they're talking about. In the words of Fox Mulder's computer password: trust no one.
We're gonna move on to our fourth story of the day: call recording app Neon exposes users' numbers, call recordings, and transcripts. This one's kind of a doozy. So the app was created to allow users to record their phone calls and basically earn money by selling them to AI companies for training. It turns out that the security for this app was not amazing. TechCrunch discovered a serious flaw that allowed anyone to access a user's data and call recordings, as long as they had the right URL. Right. You didn't have to be logged in or anything.
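The pattern being described here, where knowing a URL is the only "credential," is commonly called an insecure direct object reference (IDOR). A minimal hypothetical sketch of the broken lookup and its fix (names and data invented; this is not Neon's actual code):

```python
# Made-up data store keyed by the record ID that appears in the URL.
RECORDINGS = {
    "rec123": {"owner": "alice", "audio": "alice-call.mp3"},
    "rec456": {"owner": "bob", "audio": "bob-call.mp3"},
}

def fetch_vulnerable(rec_id):
    # IDOR: no authentication or ownership check at all.
    # Anyone who knows (or guesses) the ID in the URL gets the record.
    return RECORDINGS.get(rec_id)

def fetch_fixed(rec_id, authenticated_user):
    # Fix: require a logged-in caller AND verify they own the object
    # the URL names, before returning anything.
    rec = RECORDINGS.get(rec_id)
    if rec is None or authenticated_user != rec["owner"]:
        return None
    return rec

print(fetch_vulnerable("rec456"))    # Bob's recording, no login needed
print(fetch_fixed("rec456", None))   # None: caller is not logged in
print(fetch_fixed("rec456", "bob"))  # Bob's recording: owner allowed
```

The fix is one ownership check per lookup; the vulnerable version is what you get when the server treats an unguessable-looking URL as if it were authorization.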
If you had the URL, you punched that into your browser and you could get right to that person's stuff. After TechCrunch told the app's founder, they did take the app down, but it just seems like an incredible boondoggle to me. So I want to start with some initial reactions to this situation. Jeff, how do you feel about it? So my first reaction was, was there any other possible
ending to this story? I don't think there was. I mean, if you start looking at this, we've got a viral app that is going to take all of your phone conversations and then train an AI on them. What could possibly go wrong with that? Because we know AI never fails. We know that security is always paramount with all of these companies. We know that privacy will be preserved. Oh, wait. None of those things I just said are true. Okay. It kind of reminded me of a statement from Bruce Schneier, whose thoughts on a lot of these topics I really enjoy reading. Schneier wrote this in the year 2000, so way in advance of all this. Schneier said, if McDonald's offered three Big Macs for a DNA sample, there would be lines around the block. And this is how people are when it comes to privacy. We're sweating the details, trying to make it so that we preserve people's privacy, so that all this information doesn't get out. But three Big Macs? Okay, never mind. The bright shiny object. So again, people, um, sometimes it's hard to, you know, what are we supposed to do with them? Well, I guess we're going to have to learn to work with them, because they're the least worst alternative. But it does make me wonder about the judgment of the folks that thought this was a good way to go, because again, when I look at this, I didn't see another alternative ending. This is not a choose-your-own-adventure. This one was already baked in. You know, I just hear the song Henry the Eighth,
you know, the second verse, same as the first. Because this is a story that just keeps resurfacing with new identities. It's like they ran out of ideas. This just takes me back to the same kerfuffle that we talked about with the Tea app. Right? They put it out so fast, and they thought it was cool, and it got everybody's attention, and people started using it. But was security even an afterthought? I mean, maybe this was just vibe development, and they didn't even think twice about putting security on it. So again, it's an old story retold again and again. Claire, what about your thoughts on the subject? Yeah. So I have
a background in kind of consumer privacy, so I'm thinking a lot about this. And as we're speaking here, some pieces clicked for me. I'm Gen Z, I use TikTok. So on TikTok, in the past couple of months, I've seen a lot of people being like, I've made $200 using this noise app just by talking. And I was like, what is that? I thought it was like an ASMR app or something; I never looked into it. I was like, these people are just out here recording videos of themselves tapping things or whatever. If someone doesn't know what ASMR is, look it up later. It's something people use to fall asleep. Um, but thinking about it, it's a couple of things. Like, how much money are you making doing something like this? And you're okay with having your phone conversations, whatever they might be, with your doctor or your children or your family, just out there? Or think of calling a friend about some crazy date that went horribly, and all that information is now just out there, being sold for who knows how much. So you're probably making, I don't know, $0.85 a minute or something, and then your data is being sold for $3 a minute or something crazy. So I just think of it a lot from the consumer side and how little we, or some people, care about our privacy. Yeah, I use TikTok; what does that say about my caring about privacy? But I care enough about my privacy to not want everything that I do to be recorded via an app and sold, that information about me or my friends, whatever it might be, sold and trained on, I don't know. So I think it's a lot. And I also think it's concerning that there are people out here, potentially with referral codes, saying, use my referral code for this app and you'll make this money, and you're making money off of other people's lack of privacy concerns. So, uh, yeah. Interesting. And people are just
talking and "making money," in quotes. It reminds me of last episode, when Troy Betancourt was talking about how hackers will often recruit insiders at organizations, and in economic times like this, when the economy is a little, you know, not so great for a lot of people, it becomes a little easier to get people to accept the money to leak some secrets. I wonder if there's a similar dynamic happening here, right? Maybe people are like, crap, eggs are kind of expensive; I might as well sell my phone calls. I'm not saying it's a good idea, but, you know, I can see the thought process. Um, privacy for eggs, is that what you're saying? Yeah, I mean, again, I'm going to sound like the old guy in the room, but that's what I am. The kids these days. No, no, it's funny you went that route, because I was just about to say, this is like watching an episode of Scooby-Doo and not knowing how it's gonna end. Yeah. There we go. Exactly, exactly. I'm not passing the blame on this.
For instance, below a certain age, you never had the same idea of what privacy was as the oldsters to begin with. You know, you were on the internet, images of you were on the internet, before you popped out of the womb, because your parents posted the ultrasound images on their social media, which you never gave any permission for. You just started from a different standpoint of where privacy was defined, and the line has moved and continues to move in that direction where we have less and less. Scott McNealy said, gosh, this was way back, many years ago, he was the head of Sun Microsystems, he said, you have no privacy, get over it. And everybody lost their minds. But he wasn't wrong. And we're continuing to do the things that take it away from us. And, you know, look, as long as we're making a fair bargain, if I really do think three Big Macs are worth more than my DNA and I'm giving informed consent, well, then that's my choice to make. The problem is, people are not equipped. They're not aware. They don't understand the downstream effects of how the data can be used for them or against them, all these other kinds of things, and they're not getting a fair bargain. They are getting the equivalent of, you know, three Big Macs for something that is worth a whole lot more than that. And so that's the part that bothers me: it's not a fair bargain where both sides are at arm's length, equally equipped to make the decision. In fact, we're asking people to make these decisions when, based upon their age, the frontal lobe hasn't fully developed yet, and yet we're asking them to make eternal decisions, because the internet never forgets. So there's an issue. I'm so glad I did all my stupid things before the internet. Oh, good. I'm still doing mine. I was just on the cusp of it, so, like, half my stupid things got put on the internet. Let's move
on to our last story for today: industry myths that just won't die. So today, as in the day this episode comes out, marks the beginning of Cybersecurity Awareness Month. And what better way to celebrate than an airing of grievances? In true Festivus style, users on the r/cybersecurity subreddit had an interesting conversation going where they were sharing the industry myths that make them, as security professionals, facepalm every single time they hear them. So to wrap us up for today, I would like to know what myths our panelists would like
to see die. And I'm going to start with you, Claire. What myth is grinding your gears? So when I opened that thread, the first thing I saw was the password one. And that is the most common one; I hear everybody always say, like, I'm so sick of the password rotation. And I mean, as a user, it is very annoying to have to rotate your password. If you have a password manager, it's far easier. But that's one that, you know, I think users and security folks alike both dislike, especially if you're one of those people, which no one should be, that uses like a pet name, three numbers, and a symbol; you run out of variations. Hey, I put two exclamation points on the end of mine. Thank you. That makes that little bar that says, you know, quality of password, go from yellow to at least green. Good. Yeah. By the way, I saw a critter walking around behind you. What's that critter's name? Just curious, not that I'm trying to steal your password. She's too new in my life, so she's not in any passwords. But her name is Pita Pocket. Oh, okay. All right. Good name. Okay. I know what your future passwords
will be. Nick, how about you? What myth would you like to see die? So when I looked in that Reddit thread, the first one that got my attention, but isn't my favorite, is the one that said, well, Macs never get viruses, right? That one used to just grind my gears completely, especially since I wasn't a Mac person. And I've always ended up working with a Mac person, and they always had that cult mentality that their device was impervious to anything. And I was, I sadly shouldn't have been happy, but happy to see that myth broken. The one that bothers me the most, especially when I hear it from a security professional, is when they say that they don't have smart devices or, you know, smart assistants or anything: I don't use any of that, don't have any of that in my house, because I don't want to be listened to. Meanwhile, they're walking around with the latest and greatest mobile device right in their pocket everywhere they go, not realizing that that ship has already sailed. And selling their phone call data while they're posting about their bad date from the night before on Tea. Okay, I've learned my lesson. I'll stop
selling my phone call data. Um, Jeff, how about you? What myth would you like to see crushed? Oh, gosh, there are so many. How much time do we have? But I'm going to double down on what both have said here, and Claire's in particular: making people change their passwords on a regular interval. That is old security thinking. The US National Institute of Standards and Technology came out with new guidelines for passwords, and I say new as in 2017, and they still haven't gotten the word out. Come on, folks, 2017. And they said stop doing that; don't do that anymore. If somebody has a good password, making them change it regularly just makes them make it worse, because now they've got to keep up with more, and it makes them more likely to write the passwords down. It makes them more likely to reuse a password across multiple systems. And all of that sort of comes from security people thinking without understanding that there's a human at the other end of all of this. They're thinking like mathematicians. Okay, so password complexity, this is the one I'm going to add on to it, the other password myth: the more complex you make a password, the more secure it is. Also not true, unless you're using a password manager to generate it and store it and manage it all, which most people don't, even though I wish they would.
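A quick back-of-the-envelope comparison illustrates the length-versus-complexity point (the numbers here are illustrative and not from the episode): a longer, simpler secret can have a far larger keyspace than a short "complex" one.

```python
import math

# Rough keyspace comparison (illustrative numbers only):
# a short password over ~95 printable ASCII characters, versus a longer
# lowercase-only password, versus a diceware-style word passphrase.
complex_8 = 95 ** 8        # 8 chars, all character classes forced
lower_12 = 26 ** 12        # 12 chars, single character class
passphrase_5 = 7776 ** 5   # 5 words from a 7776-word list

for name, space in [("8-char complex", complex_8),
                    ("12-char lowercase", lower_12),
                    ("5-word passphrase", passphrase_5)]:
    print(f"{name}: about 2^{math.log2(space):.0f} possibilities")
```

The lowercase-only 12-character password already has a bigger search space than the 8-character "complex" one, and the passphrase dwarfs both, which is roughly the reasoning behind favoring length over composition rules.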
So, password complexity. Again, if I give you a rule that says you have to use an upper and a lower case and a special character and a number, and you can't use this or that, all these kinds of rules, not only do they narrow the password keyspace a little bit, but they also only do one thing: they guarantee that people will write these things down. And I would have gotten away with it if it weren't for you meddling kids. Exactly. Thank you, Shaggy. After this, I'm gonna have to, uh, burn my Post-it note containing all of my passwords. That's all the time that we have for today, though. Folks, I want to thank you, Claire and Nick and Jeff, for being here. Thank you to our listeners and viewers, especially YouTube user Yuli5869, who complimented my shirt last week. Thank you. Y'all make sure to subscribe to Security Intelligence wherever podcasts are found. Stay safe out there, and just set up a passkey or something. Man, just be done with the passwords.