Shai Hulud 2.0: NPM Threat Escalates
Key Points
- The podcast stresses that personal responsibility for security—pausing to consider decisions—directly influences safer practices at work.
- IBM’s “Security Intelligence” show, hosted by Matt Kaczynski with guests Dave Bales, Michelle Alvarez, and Brian Clark, highlights current cyber‑threat news and expert analysis.
- A new wave of the Shai Hulud worm is targeting both NPM and Maven packages, now executing during the pre‑install phase, self‑healing, and even deleting home directories if no secrets are found.
- Compared to its September debut, the worm has become fully automated, spread to over 25,000 repositories, and incorporates more aggressive behaviors, effectively “growing up” from a toddler to a teenage threat.
- The episode also previews other security topics—including developers leaking secrets, the 200‑company Gainsight breach, a pre‑hacked Android streaming device, and how poetry can bypass AI guardrails—culminating in a teaser of a bonus interview with a malware reverse‑engineer.
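The pre‑install behavior called out above is worth making concrete: npm runs a package's `preinstall` lifecycle script automatically during `npm install`, before installation even finishes, which is why this wave of the worm is harder to catch. Below is a minimal, hypothetical sketch of a defender‑side check for install‑time hooks; the package name and script command are illustrative, not an actual Shai Hulud sample.

```python
import json

# Lifecycle hooks that npm executes automatically during "npm install";
# the new Shai Hulud wave abuses "preinstall" to run before the
# installation completes.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_scripts(package_json_text: str) -> dict:
    """Return any install-time lifecycle scripts a package declares.

    A non-empty result does not prove malice, but these hooks are the
    exact mechanism install-time worms use to execute code.
    """
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

# Hypothetical trojanized manifest, modeled on the behavior described
# in the episode (illustrative names only).
sample = """{
  "name": "some-popular-package",
  "version": "9.9.9",
  "scripts": {
    "preinstall": "node setup.js",
    "test": "jest"
  }
}"""

print(risky_scripts(sample))  # {'preinstall': 'node setup.js'}
```

As a complementary hardening step, `npm install --ignore-scripts` disables lifecycle scripts entirely, which blocks this class of attack at install time (at the cost of breaking packages that legitimately need them).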
**Source:** [https://www.youtube.com/watch?v=o3caaeeCPXg](https://www.youtube.com/watch?v=o3caaeeCPXg)
**Duration:** 00:42:54
Sections
- [00:00:00](https://www.youtube.com/watch?v=o3caaeeCPXg&t=0s) **Personal Security Meets Workplace Ops** - The segment emphasizes personal responsibility for security decisions and introduces IBM's Security Intelligence podcast episode covering developer secret leaks, major breaches, compromised Android devices, AI guardrail bypasses, and the resurgence of the Shai Hulud worm.
- [00:04:58](https://www.youtube.com/watch?v=o3caaeeCPXg&t=298s) **Balancing Open Source and Security** - A discussion on the delicate trade‑off between publishing tools for community benefit and the heightened attack surface such releases create, including considerations like credential rotation and scanning for secrets.
- [00:08:22](https://www.youtube.com/watch?v=o3caaeeCPXg&t=502s) **Protecting Developers from Supply Chain Attacks** - Panelists discuss how organizations can bolster defenses and training to guard developers against malicious code injected into software packages, which can damage reputations.
- [00:14:42](https://www.youtube.com/watch?v=o3caaeeCPXg&t=882s) **Shadow IT’s Double‑Edged Dilemma** - The speaker debates the benefits and risks of shadow IT, emphasizing how personal device use and human psychology shape security outcomes.
- [00:18:32](https://www.youtube.com/watch?v=o3caaeeCPXg&t=1112s) **Risks of Persistent SaaS Permissions** - The discussion warns that granting SaaS vendors ongoing access creates a broad attack surface, urging organizations to assess breaches comprehensively and coordinate security across all integrated platforms.
- [00:21:45](https://www.youtube.com/watch?v=o3caaeeCPXg&t=1305s) **Taking Threats Seriously & IoT Hijacking** - The speaker urges organizations to heed shared security warnings and then explains how cheap Android streaming devices are covertly commandeering home bandwidth to support malicious botnet traffic.
- [00:25:07](https://www.youtube.com/watch?v=o3caaeeCPXg&t=1507s) **Trusted Vendors, Untrusted Products** - The discussion highlights how scams can infiltrate reputable retailers—both brick‑and‑mortar and online—undermining the assumption of safety and complicating traditional anti‑phishing defenses.
- [00:28:28](https://www.youtube.com/watch?v=o3caaeeCPXg&t=1708s) **The Limits of Trust in Tech** - The speaker cautions against assuming safety based on brand, stresses personal due diligence when integrating tools, and questions how organizations can effectively defend against unsanctioned purchases without resorting to intrusive surveillance.
- [00:31:49](https://www.youtube.com/watch?v=o3caaeeCPXg&t=1909s) **IoT Scam Allure and Security** - The speaker reflects on the tempting yet shady IoT device scam, the urge to create a safer personal version, and what this reveals about broader IoT security challenges and the limits of organizational control.
- [00:35:42](https://www.youtube.com/watch?v=o3caaeeCPXg&t=2142s) **Poetic Prompts Crack AI Guardrails** - A recent study shows that framing malicious instructions as poems dramatically boosts jailbreak success rates across major language models, prompting a discussion on the implications and defenses.
- [00:38:53](https://www.youtube.com/watch?v=o3caaeeCPXg&t=2333s) **Creative AI Jailbreak Strategies Discussed** - The panel debates using poems, rap, and other creative prompts to jailbreak AI models, stresses “trust but verify,” and wraps up with a teaser about a honeypot malware bonus episode.
- [00:42:07](https://www.youtube.com/watch?v=o3caaeeCPXg&t=2527s) **Subtle Email Trigger Reveals Malware** - Although Raymond initially couldn’t pinpoint why a seemingly innocuous email raised his “spidey sense,” his deeper investigation uncovered a sophisticated malware loader that blends classic techniques with novel evasion tricks, underscoring the perpetual cat‑and‑mouse nature of cybersecurity and the need to stay abreast of emerging threats.
Full Transcript
I think there is something to be said about individuals
taking ownership of their own personal security that then has
an impact in what they do in their workplace. It's
all about kind of just thinking about it, right? Just
taking a minute to just stop and think about it.
Is this the right decision? Am I making the right
choice? All that and more on Security Intelligence. Hello and
welcome to Security Intelligence, IBM's weekly cybersecurity podcast where we
break down the most interesting stories in the field with
help from our panel of experts. I'm your host Matt
Kaczynski and joining me today, Dave Bales of X-Force
Incident Command and host of the Not the Situation Room
podcast, Michelle Alvarez, manager of X-Force Strategic Threat Analysis, and
the illustrious Brian Clark, senior technology advocate and a producer
on this very show. Stepping in front of the camera
here is what we're talking about today. Developers keep leaving
secrets out in the open. 200 companies were hit in
the Gainsight breach, the Android streaming device that comes
pre-hacked, and how poetry can defeat your AI guardrails.
Plus, stick around at the end for a sneak peek
of our newest bonus episode where a malware reverse engineer
tells us what it's like to discover a new strain
of malware. But first, the return of Shai Hulud. The
Shai Hulud worm first ripped through NPM packages in September
and now a new strain, dubbed Sha1-Hulud (the "i"
is a "1" this time, which is very confusing), is
causing even more chaos in the NPM registry. It has
even started infecting packages in Maven too. Like its predecessor,
it's a worm that steals developer secrets and spreads by
publishing malicious packages under its victim's account names so they
look very legitimate. Hundreds of NPM packages have been trojanized
by the worm thus far, including packages from trusted entities
like Zapier and Postman. Shai Hulud Round 2 also comes
with some notable upgrades, including it now executes during the
pre-install phase, which helps it evade some detection. It
has self-healing capabilities and if it doesn't find any
secrets on your machine, it just tries to delete the
entire home directory. Now Dave, I know you folks have
covered Shai Hulud on your podcast, starting with the original
worm back in September. So I want to throw to
you first, what are your thoughts on this new round,
especially compared to the last time we saw Shai Hulud?
What's changed here? It's automated now. It's completely automated. It
spreads automatically, it installs automatically. There's no interaction that's needed
between the user and the machine. And like you said,
upgraded capabilities and it has, it's infected, what, more than
25,000 repositories at this point. Which was large. Yeah, larger
than the first iteration of Shai Hulud. So it's, it's
growing up. It's, you know, it's no longer a toddler,
it's now a petulant little teenager. And what do you
think of that choice to. To make it a wiper
if things don't go the way they want it to
go? Right. Like, what do you think? Is
it just like throwing a tantrum? It's exactly that:
I can't find the cookies, so I'm going to lie
down on the floor kicking and screaming. That's basically what
it's doing. Michelle, I want to ask you if this
pattern of kind of, you know, a malware that when
it doesn't find what it wants, it throws a tantrum
and tries to blow things up, is this common? Have
we seen this kind of thing before? You know, what's
your take on it? Yeah, I don't know if I've
seen that exact pattern before, but I do want to say that Shai Hulud, it's
not an enigma in terms of like things that we're
seeing in environments. We definitely are seeing this active and,
you know, it's a concern for our clients right now,
especially since we had round one and round two is
significantly worse. So, you know, in terms of the broader
picture though, I think what we are concerned about also
is possibly the loss of trust with these open source
platforms. We want to be able to leverage them and
use them and they've done a lot of good in
terms of being open source for the community and innovation
and automation. But then when we have these supply chain
attacks right now we're looking at, okay, what's the bigger
picture? Is there more risk to using these types of
platforms? Yeah, that's a very good point. And we talked
last week about kind of open source approaches to security.
Right. We were talking specifically about X-Force releasing
a bunch of new tools to Open source on GitHub.
And it is this tricky line you have to walk
when you go open source with anything. Right. Where like
you said, it's done a lot of good for the
community, but it is also like part of your attack
surface and it extends your attack surface even beyond the
confines of your own organization. Brian, any thoughts on that?
The kind of dance of open source, keeping it secure,
putting things out there. What's your take? I think we're
always going to have that, like, thin line between, you
know, everybody wanting things to be open source and accessible,
and then also at the same time sort of backpedaling
or double checking what, what we're putting out there or
what we're bringing onto our own machines. I guess, like
my first takeaway. I'm just still so stuck on the
name, The Second Coming, and wondering if this is
going to be part of a trilogy. Maybe it's safe
to say at this point that they're. They're Dune fans.
We know that. I did find it interesting that it
uses TruffleHog to scan for the tokens on the
local machine. Um, and then also just the thoughts about,
well, what do we do if this happens or if
we're affected. I was reading a little bit about
rotating your credentials or scanning your endpoints, but it's a
scary thing. Like Dave and you were talking about a
minute ago, throwing a tantrum, just, okay, I don't get
what I want. So nobody wins. So this whole thing
places the focus yet again on the kind of vulnerabilities
of the software supply chain. Right. Is like a primary
target for attackers. And it's got me thinking, you know,
have developers kind of become the front line in our,
in our, you know, defense systems right now? You know,
we talk a lot about your everyday employee being the
front line. They get a phishing email, they got to
stop it. But what about developers too? Are they on
the front line now and are we doing enough to
support them? Dave, I'll start with you. Any takes there?
Developers are very sensitive to their code. They like their code being their
own. And when their code gets obfuscated with something else,
they take great offense to that. The problem is that
they're not always aware that their code is being stolen
from them and then turned into something malicious. And then
they find that out later, and now they don't know
what to do. They have to go back into their
code. They've got to fix their code, republish their code.
The most important key to that, though, is that
they have to reestablish the trust that they've been given
by the people who use their code. That's not always
easy to do. That's a very good point. Right. How
do you reestablish that trust? And it's almost like it
makes the tail of the attack, its blast radius, even
bigger because now it's not just your package got hit
and you got to fix that package. It's like, your
package is good now? Yeah. My package is fixed. It's
fixed, I promise. And they can say that all they
want. But you know, as a user of some of
these NPM packages, I'm not going to trust it until I've
seen someone else, you know, dip their toe in
the water. Something like this Shai Hulud, where the way it
spreads is by publishing malicious packages under your name. Right.
And so I don't know, I can imagine a situation
where I'm a developer, I don't even know one of
my packages has been hit with this thing. And all
of a sudden people are telling me that I'm spreading
malware. I'm like, what the heck happened here? So if
developers are becoming a target, not just the frontline, which
I think is a very important distinction, what are some
of the things our organizations have to kind of start
doing to help strengthen the defenses around this particular target?
Michelle, any thoughts there on your end? What can we
start doing? Yeah, I feel like we're always iterating this,
which is user education and user enablement and training. It
could be an attack that we weren't aware of, but now
we are. Right. So are we now integrating that in
our user training and awareness programs? Because we've had so
many similar types of attacks. I really think this is
something we're going to see into the foreseeable future as
more and more of these types of packages are basically
modified for malicious purposes. And we have this domino effect
of again, one library now cascading to multiple projects and
infecting multiple organizations. And it's, it is a brand reputation
issue, as Dave alluded to right now. This is my,
this is my brand, this is my name now. It's
got a black mark on it. Absolutely. Brian, any final
thoughts here? To round out the segment, I would
just say that for developers and security experts to sort
of work together. I know developers are
usually focused on what gets the product, or
whatever they're working on, out the quickest and most efficiently,
whereas security experts focus on what is going to get it
done safely. So those two working together and security experts
not necessarily saying no, you can't do that. But let's
work together and get that done in a safe manner.
So I guess security experts thinking like developers, developers thinking
like security experts and just sort of bridging the gap
there. I like that you've kind of tapped one of
the running themes of the show, which is that security
kind of has to not always be saying no, but
saying, here's how we do it safely. Right. Now, let's
move on to a story where developers are still the
target, but this time they might be doing some things
to make themselves the target. This is some research from
watchTowr that found that developers keep leaking secrets to code
formatting tools. Now, offensive security researchers at watchTowr Labs analyzed
some publicly accessible URLs on the popular code formatting tools
JSONFormatter and CodeBeautify and found 80,000-plus saved
JSON blobs, which included such treasures as SSH keys, Active
Directory credentials, and even some customer PII. The researchers decided
to plant some of their own fake tokens that they
could track to see if there was anybody exploiting this
weakness. And spoiler alert, of course, there were people. They,
they found people taking their fake tokens and trying to
use them just I think within 48 hours of putting
them out there. So, you know, there are some malicious
entities who are aware of this vulnerability. I want to
start by asking the question of why are
people so willing to paste confidential code and data into
a public tool with unproven security controls? Brian, you laughed,
which means I'm gonna throw it to you first. What
are your thoughts here? It's quick and easy. I feel
like as human beings, we are simple creatures. It's a
quick and easy thing. I don't necessarily, I guess, understand,
like the formatting. I know there was a lot in
the article talking about like beautifying the code, prettifying the
code, but at the end of the day, that tool
does what people are looking to do quickly and
easily. So of course it's going to be a perfect
place to set up an attack. I'm wondering though, if
these tools have any kind of responsibility to shore up
their defenses. I want to ask you, Dave, do you
think that this is something that we can put some
of the blame at the tools' feet, or is it
really just like developers stop doing this. What are your
thoughts? No, I think you can always put some of
the blame at the, at the tools' feet. I mean,
it's, it's, it's not always the developers and we do
have a bad habit of blaming developers when something goes
wrong and, you know, damaging the reputation, like we talked
about a few minutes ago, you can't always look at
the developer and say, hey, I know what you're doing
here. You're sending out these. The tool is just making
it pretty. It doesn't work that way. There's some blame
to go around. It's not one person, it's not one
tool, it's both. They have to have to be able
to work together. You've got to be able to work
with your machinery in order to create the code. Yeah,
absolutely. So I think it can come down to just
as simple as an SOP, right. What do your process
and procedures say when you're developing code? What are the
tools you're able to use? And if you're able to
use those tools, what types of data are you able
to include on those platforms? And you know, maybe nine
times out of ten, I'm just throwing out a statistic
I'm not really necessarily confident in. But you know, maybe
it's just as simple as that. What does your SOP
say about how you develop and share code? And what
are those specific data sets that you're able to include
on those platforms? And it may be as simple as
that. I mean, there may be more complexity to it,
but sometimes that's all it is. Absolutely. And I think
it does start to raise too though, you know that
there's always this kind of, I don't know, call it a
phantom issue of shadow IT. Right. You can tell people
what they are or aren't allowed to use. Some people
don't always listen. Right. And I think especially in kind
of like an AI era, we're seeing a lot more
shadow IT and shadow AI kind of pop up. And
so I'm wondering, right, you know, you kind of, you
can set some, some rules around what people can or
can't use, but there's always going to be people who
maybe skirt the rules or whatever. I don't know, is
there anything you can do about that, or is shadow
IT just something we're destined to deal with? And I'm going
to throw it to you. Dave, actually I want to
see if you have some thoughts here. I feel attacked
here. I just feel like you're the guy most likely
to have some ideas about shadow IT. You know, shadow
IT, it's a double-edged sword. This can be a
good thing, it can be a bad thing. Companies don't
want you going to certain places on their equipment. This
laptop does not belong to me. I can't
just go out and visit whatever site I want. And
in doing that, shadow IT is not a bad
thing. I mean, I would much rather know that when
I come in here in the morning and turn this
thing on that it's going to work properly and it's
not going to be broken because of some website that
I visited. I didn't get a token that a
bad guy planted on a website. And now I can't
get to, you know, my self-evaluation. I just brought
that up because they're due. Yeah. So I think that,
you know, at the end of the day, people are
always going to kind of. There's no accounting for people's,
you know, personal psychology, I guess, is what I would
kind of come down to. And, and I've said this
so many times on the show, people have said this
so many times on the show. So much of security
is just kind of dealing with that psychology and just
enabling people to make the best decisions in the situations
that they're in. So let's move on to our next
story then. 200 companies breached in the Gainsight attack. Hackers
compromised the customer support platform Gainsight to get access to
Salesforce data through connected apps. Now, for me, the most
interesting part of this particular attack is that the threat
actors first got into Gainsight through that Salesloft breach
back in August, if folks remember that. So if you
recall, you know, the Salesloft breach involved hackers stealing
some authentication tokens from the Drift chatbot and then using
those to get into some connected Salesforce instances. Now, according
to the hackers themselves, they initially breached Gainsight during this
attack and then they used that access to move laterally
into Gainsight's customers' Salesforce instances a few months later. And
the most striking thing for me off the bat is
that long tail of this breach, right. We're talking about
fallout from an attack that happened a few months ago,
which seems like a complicated timeline to kind
of deal with. Michelle, any thoughts on your end about
what this, you know, how this timeline looks, how it
complicates things for defenders? What's your take here? Yeah, absolutely.
I mean, I think it's sort of a warning to
when these types of events happen to sort of anticipate
or expect a bit of a fallout and to be
on guard and vigilant with this type of attack or
compromise. It's just another fallout from targeting this type of
ecosystem. Right. Where we now again have the proverbial domino
effect across all of these organizations. Yeah, it's that ecosystem
situation again, right. And it's kind of why I wanted
to include that here because it's yet another example of
like that software supply chain being kind of vulnerable and
it being hard to get some visibility into that. And
speaking of visibility, you know, Brian, do you have any
thoughts on how organizations can kind of, I don't know,
maybe gain some more insight into all these moving parts
going on in, in their software supply chains or is
it just kind of, I don't know, wait until something
happens? Any takes there? I was reading a bit about
like the tokens and that were affected in this attack
or that were used and they're like the persistent permissions.
I'm not really sure if that's something that you need
for, for all of these that maybe that, that could
be a huge issue because with those persistent permissions, that's
just giving attackers the chance or the opportunity to go
in there and exploit those. But I guess that's, that's
the only thing I have on that. Well, I'm glad
you brought that up. Right. Because I do think one
of the important stories here is like this, this kind
of, you know, system to system trust that we have
sort of, you know, inherently it's like, hey, you know,
we trust the, the SaaS vendor so we trust them
to have this kind of persistent access, not necessarily thinking
about what that access could do if it falls into
the wrong hands. Right. Dave, how about you? I want
to bring you into the conversation. Any thoughts on the Gainsight
attack, on the Salesloft breach? What's your take here?
Any time that you have a breach, you're, you're going
to have to sit back, you're going to have to
evaluate where the breach came from and you're going to
have to evaluate what that breach is going to touch.
And you really do need to be in the mindset
that it's going to touch everything because chances are it
is. So if you're, if you're hosting Gainsight, like Salesloft,
Salesforce, Gemini, Google, any of the big software chains, it's
going to be touched. So you have to go through
and you've got to work with all of those companies
that you partner with to make sure that their software
is secure and let them know, hey, we've been breached,
that means you could possibly be breached. And unless someone
completely rebuilds their network and their software connections, that's
always going to be a problem. Absolutely. So it calls
for like a more collaborative approach almost to this kind
of stuff in terms of practicalities. I mean, I don't
know, is There any way to foster more of that
collaboration? Do we see enough of it right now? You
know, I'm just going to kind of go around, I
think the circle here and see what our thoughts are
in terms of the state of this, you know,
inter-organizational, inter-platform collaboration. Michelle, I'll start with you. Do you
think things are on the right footing for this stuff
right now? Yeah, I mean, I guess we could always
do better. There's definitely a lot of organizations and CERTs
and ISACs and opportunities to share information that we can
all benefit from. I think it's also an issue of
having the right curated threat intelligence and knowing what to
focus on because there's so much out there and something
that may not seem as a big issue could be
a very, well, a major issue for your organization based
on your attack surface. So I think you can't put
out all of the fires, especially the ones that aren't
near your house, but you should definitely focus on the
ones that are really close to your house, like next
door. Yeah, it's a fine line, right? It's like these
things, you know, they can, the attacks can ramify in
very interesting ways you can't expect, but you still need
to like pay attention. You can't just assume that every
single one's going to hit you. Right. And so, I
don't know, it's a delicate balance. Brian, your take? Any,
any thoughts there? To go along with what Michelle said,
just taking these threats seriously. I think that sometimes when
organizations are willing to share their information, other organizations don't
take it seriously. I mean, for example, if you tell
your next door neighbor that, hey, you don't have a
front door, you're, you know, your door is missing and
they don't do anything about it and they get robbed.
I mean, I think that's a frustrating thing in the
world of security. I feel like Dave and Michelle will
both agree to that. You need to take, take these
things seriously when they arise. And like Dave said, do
your due diligence and look at each and every part
of your organization because like you said, chances are, it
is affected. Fantastic. Then let's move on to our next
story. The Android streaming devices that hijack home bandwidth for
the bad guys. Now, security researcher Brian Krebs broke down
the nefarious network of super boxes and other cheap IoT
streaming devices, all for sale in legitimate stores and websites
that secretly conscript consumers' Internet connections for malicious activity. So
at a high level, the scheme kind of works like
this. The devices promise to, to let people stream various
platforms for free and, and they, they make good on
that, via some very sketchy apps. But the deal that you're
making that you may not know as a consumer is
that, on the back end, these boxes are also using
your home Internet connection as bandwidth for a proxy network
that helps funnel traffic for botnets, shady content scrapers, and
all kinds of malicious activity. Now, you know, most of
our listeners are enterprise security folks and we often touch
on enterprise security topics. So to start off this little
story, I'm wondering, is there an enterprise angle to this
risk or is this purely a consumer safety issue? And
I'm going to start with you, Brian. Do you think
this is something organizations need to be worried about or
is it just, hey, consumers, watch what you're doing. Maybe
a little bit of both. I mean, I think everybody can
take a lesson from this. It, to me, it's like
if it's something that seems too good to be true...
Absolutely. And I'm also kind of thinking about, you know,
we were talking before about shadow IT, and it's very
easy for me to see, I don't know, somebody plugging
this thing into the wrong network or, you know, I
don't know, maybe you've got your company laptop on the
same network as this thing and who knows what it's
going to do, you know? So I think there's opportunity
here to ramify. That's a great point. I feel like
that's very much the case. When users have something like
this at home, they're taking the work home. Now, a
lot of us work from home at least a few
days a week. So chances are, yes, if you have
one at home, it's on the same network. Yeah. Side
note, I said the word ramify like 12 times in
this episode. Why do I keep saying that word? Okay,
now one of the very interesting things to me about
these devices is that they totally sidestep
the kind of dark web distribution we're used to for
a lot of malware. Right. You know, they're promoted by influencers on social media sites. Not big-name influencers, but influencers nonetheless. Now, the retailers are rarely selling these things directly. Right.
It's often third parties using their platform, but they're still
there. Right. And you have a sort of element of
trust in these things. And I'm thinking about how one
of the most common tips we give people to avoid
being scammed is only shop in legitimate stores. What happens
when you're, you're shopping in a legitimate store and you
buy one of these things? Michelle, any thoughts on how
this complicates our defenses? Our anti-phishing techniques, if you will? Yes, absolutely. So when I saw this article, I immediately sent it to my family because 'tis the season, right? Everybody's gonna go out and they're gonna be shopping and there's so many scams out there. So it's never too early to be warning about that. Exactly. I
was thinking the same thing. Right. You have these trusted
vendors, whether it's brick and mortar or online, but they're
selling these products that shouldn't be trusted. But it's not
too far of a leap to go from trusted vendor
to trusted product. But we see this across all types of industries. You know, pharmaceutical, the food industry. If you go into your favorite supermarket and buy something, it could be expired. You might go into your favorite restaurant, and there could be food that gives you food poisoning. These are trusted providers across many different industries, and it's a similar concept with this box that could be sold at your favorite store, online or otherwise, and it's not to be trusted. So again, it comes back to awareness. Absolutely. Dave, your
take on this whole scheme, any thoughts? I would never
buy one of these. Not to say that there may
be one in the living room, I don't know. But
seriously, I don't know how many people have gone to
Amazon and actually looked at sold by. Everyone thinks that
when they go to Amazon they're buying it from Amazon.
That's not necessarily the case. Like you were alluding to
earlier, Best Buy does the same thing. You can go
onto the Best Buy website and you can buy something
that's actually not sold in the store. So be very wary about trusting Amazon when it's sold by someone else. Amazon is just a distribution platform for
them. As far as having it on your network with
your work machine, I mean, yeah, you probably shouldn't do
that, but if you do, what's anyone actually going to
do about that? Is IBM going to call me and
say, hey, we noticed you have a super box on
your network? The first question that's going to come out of my mouth is, what are you doing scanning my network? The trust has to be there. And you can't go by where something's purchased from to establish trust. Yeah, these boxes really
don't have a place in the home. And I understand
why people get them. They can't afford Netflix. Well, Netflix is $7.99. Cough up the 8 bucks. Don't go
buy one of these super boxes. You really are putting
yourself, your network, your family at risk by doing this
because you're opening up this gigantic door that someone doesn't
even have to, you know, you don't have to be, you know, 4-foot-9 to walk through it. You can walk through it as a seven-foot-tall giant.
It's there. And so don't do that. That's my answer
to that. Don't do that. It's what I used to tell my kids: "Well, that burns." "Well, then don't do that." You heard it here, folks, just don't do it.
But no, I'm glad you brought up that buzzword, trust, right? It's been in all of our conversations today. It's like the through line, right? At the end of the day, you can't really use something like the name of the website as a proxy for trust, or who developed the thing, right? You have to do your due diligence as the individual, whether you're a developer, consumer or whoever. You've got to make sure the stuff that you're plugging in is safe. And you know, that can be tough, but you can't just rely on it being Amazon. How would the organization even know that this thing is on your
network? And that raises the question for me of like,
is there really nothing that an organization can do, right, like short of surveilling people? I'm probably getting into territory that's going to get this part cut, but I just need to ask the question. Is it just kind of like, hey, tell your workers not to buy these kinds of things and then cross your fingers and hope that they don't? Is that the extent of our defenses here? It's about the only thing you can do.
You can't stop Joe Public from buying one of these
or Joe IBM from buying one of these and putting
it on their network. You can say, don't buy one
of these. It's dangerous to the network or it might
be dangerous to our machinery, but in the end, I'm
paying for my Internet access. I'm paying for my network
access. If something happens to an IBM product, the machine that I'm using, okay, then maybe I should be held responsible for that. But as far as putting it on my network, I don't really see a case where any company would be able to say, don't do that, or, you can't do that, I should say. Absolutely. Brian,
I saw you nodding. Do you have anything to add
there? Yeah, exactly what Dave said. You can tell people
not to do something, but at the end of the
day, it's their prerogative. They pay for their Wi-Fi, they pay for their power. If they want to connect something in their home and be at risk, then that's up to them. I will touch on what Dave
said about just going out and purchasing Netflix. I think
what makes these things so attractive in today's world is
that you can't just have Netflix anymore. You have to
have HBO and you have to have Hulu and Disney,
and you have to have Amazon and you have to
have Paramount. And I feel like the list keeps getting
longer and longer every time I want to watch a
show. Sometimes I feel like halfway through a season, I
have to switch to another streaming service. It seems to
be getting a little ridiculous. So paying one price
to have access to everything is awesome, but I feel
like we also need to use, I don't know, just your gut instinct. Like Michelle was saying, like, you
can go off to the store and purchase something, but
if it looks bad or maybe seems like it's expired
and you don't feel great about putting it in your
cart, just don't do it. I read that you have to essentially rip out the Google Play store to get this to work. That should probably give you pause, maybe at least get you thinking that this might not be the best idea. But honestly, I do see the attractiveness in this and, like, the desire to want
one of these. I actually recently thought about, like, oh, could I build one myself that's a little bit safer? Because, I mean, it is awesome. Like, who would not want to have access to this? That's
part of what makes it such a good scam, though,
right? Like, it promises to address a real pain point that people have. Look, we've all been there, man. We've all been like, how did they reinvent cable? But, you know, it claims to address a real pain point. It just does so in a really shady, sketchy way. And, you know, I'm just glad that you both brought up this fact
that, like, I think sometimes we have a tendency to feel like, in the name of security, we have to go a little further than we really should in terms of what we tell people to do or what we allow them to do. At the end of the day, you have to recognize that there's a separation, that people are people, and what they do on their home networks is their business. You don't have a right as an organization to touch that. Zooming out, though, I want to talk a
little bit about what this says about the kind of
state of IoT security, because I feel like for as long as I've been around, people have been saying IoT security is a problem. Why hasn't it caught up to everything else? Yeah, because I think a lot of them are in the hands of just everyday people. All of those devices that are just open to the Internet, they're just food for botnets. Yeah, we've been
tracking that for a long time. I don't know. But,
you know, it kind of calls back to this previous story about the box. I think there is something to be said about individuals taking ownership of their own personal security that then has an impact on what they do in their workplace. So if they're taking ownership of that and
doing, you know, what is best for themselves personally and
their own home environment, then that would, I would imagine,
have some sort of positive impact on what they do
in the workplace. Because it's all about kind of just thinking about it. Right. Just taking a minute to stop and think. Is this the right decision? Am I making the right choice? And I know
that's difficult to do because you have to apply that
to so many things in your life now you have
to do it, you know, as it relates to cybersecurity.
But I do think there could be impact there if you just stop and think, is this going to impact me in some negative way? And I'll just
say real quick that I had a conversation with a
friend over the weekend where her family members had experienced
some fraud and it kind of led back to not
having MFA enabled. And it's like, that is something we've
talked about so much, right? And now they know. I'm
sure MFA is now enabled across everything that can possibly
have MFA, but they had to experience that compromise first to know that. But wouldn't it be great if they'd done it beforehand? Absolutely. And yeah, ever since we've started this
podcast, actually even before that, ever since I got into
the security world, I just yell at all of my
friends and family to make sure they have many factors of authentication. But I like that. That's a nice,
positive kind of end to this, this segment. You know,
if people kind of, you know, practice security in their personal lives in good ways, that shows up elsewhere. Right. And that can have those effects on your corporate
security too. I do not own one of those, by
the way. No, Dave, we know, we know that people
would never, ever do anything sketchy in any way, shape
or form. You're an upstanding gentleman. Moving on to our
last story of the day, Malicious poems break AI guardrails.
Now, I saw this and I had to put it
on the show because as people probably do not know,
outside of my day job as a podcaster, I am
a poet. And so the one time I'm allowed to
talk about poetry on this podcast, I will do it.
In a paper titled Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models, researchers from DEXAI, Sapienza University of Rome, and the Sant'Anna School of Advanced Studies share that phrasing malicious prompts as poems instead of direct instructions is a remarkably effective way to break guardrails on 25 different models, including all the big ones. Your Geminis, your Groks, your ChatGPTs, Anthropic, DeepSeek, they all kind of fell for this thing. The researchers' original poems, the malicious poems they wrote with their own two hands, achieved an overall attack success rate of 62%. And if they took some prose malicious prompts and fed them to machines and had them turned into poems, they still had a success rate of 43%, which is pretty high. So I
just want to start with initial takes from people. What
do you think about using poems to break the AI
systems? Dave, let's start with you. Any thoughts here? I
am always looking for a way to break the guardrails on AI. Not because I want to do anything malicious, but because I want to, you know, test the security of the AI that I'm using. Gemini: 100% success rate when using this. ChatGPT: 0 to 10%. Which one am I going to trust more? I'm going
to go with ChatGPT if it's something that I
need to establish trust with. I think it's brilliant to
use poetry to get this guardrail off of these things.
I really don't want to see it in the AI
that I'm using, so I'm going to lean more towards
the ones that are more secure. And if I want
to try it myself, I'll write a poem and stick
it in Gemini. Brian, I want to get your take.
Any thoughts here on your end? Yeah, I honestly think
that this is setting up a great backdrop for a movie where a special team needs to acquire, like, the best poet in the future when AI has taken over. No, but on a serious note, I think it's just interesting how people are constantly finding new ways to jailbreak these AI systems, these LLMs. I don't
know where I read it before, but somebody said or
told me that it's like it's more of an art
when dealing with these and when trying to jailbreak them,
especially when new models come out, because one that was better defended against a prompt like this might not be so with the new model, for reasons that people aren't sure of. But like Dave, I'm always trying to find new ways to jailbreak these as well before I start using one that I want to, I guess, choose as my daily driver.
Yeah, I'm always trying to jailbreak these in different ways,
so it's super interesting. Absolutely. Michelle, let's bring you in
here. Any thoughts on this tactic? Yeah, I mean, obviously
very creative, right? So we can throw poems at it.
We can throw maybe rap songs, other genres of music, creative language. Yeah, let's keep doing that. I think, as Dave and Brian both said, we need to make sure that we're training and leveraging models that can't get jailbroken. And the only way to do that is to jailbreak them, right? To continue to improve upon the models and the versions. Trust but verify. Trust but verify.
Ooh, that's good. And you know what? That's a perfect
way to end the episode because that is all the
time we have for today. I want to thank
our panelists, Michelle and Dave and Brian and thank the
viewers and the listeners, of course. As always, subscribe to
Security Intelligence wherever podcasts are found. Stay safe out there
and don't let anyone tell you that poetry degree was
useless. You finally have a reason for it. Now, a sneak peek of our latest Security Intelligence audio-only bonus episode: Trawling the Honeypot, what it's like to discover a new malware strain. They call it a honeypot. A fake
computer system set up to attract cybercriminals just like bees
to honey. The purpose is twofold. First, if they're busy
attacking a dummy system, they won't be attacking your real
assets. Second, while they're poking around in your digital terrarium,
you can watch them, learn from them, see what they're
up to in a safe, controlled environment. I understand
where the name comes from. Honeypot, as in something enticing.
But to me, they look more like Superfund sites. Pits
into which all the Web's toxic sludge flows. Grease traps
more than honey traps. But if you've got the guts
to dig through that sludge, you can find things, valuable
things, things that help security pros and everyday people protect
themselves from that poison. I'd say diamonds in the rough,
if we weren't actually talking about malware. Okay, so, hi,
I'm Raymond Joseph Alfonso. I am a malware reverse engineer
for the IBM X-Force Threat Intelligence team. Raymond is one
of those people with guts. As a malware reverse engineer,
he spends a lot of time working directly with some
very dangerous code. Malware analysis is really exciting because
you don't really know what you're going to get this
time. And I also find it kind of fulfilling because
sometimes I'm also learning new things. The only downside that
I can think of is sometimes it gets real stressful.
You know, Raymond is in the malware pit every day,
by which I mean a fake inbox meant to solicit
phishing emails and assorted other evils. It was there not
too long ago that one junk email caught his eye.
When I was doing my research, I saw this one email. I decided to investigate it further, and then I saw that it was delivering different payloads every time. QuirkyLoader. That's what this thing came to be known as.
Raymond still can't say for sure why this particular email
stood out. As far as spam goes, it was fairly
nondescript. But I don't know, call it his spidey sense.
Something triggered an alarm. So he started to dig
in, and what he found was far from nondescript. A
malware loader combining both tried and true tactics and some
new tricks for evading detection. As they always say, cybersecurity
is a cat and mouse game. And I think it
will always be like that until the end of time.
So we should always try to stay on top of
the current malware trends in order for us to effectively
protect others from those threats. Listen to the full episode
wherever podcasts are found.