# AI-Powered Cyber Attacks Emerging

**Source:** [https://www.youtube.com/watch?v=0tHb6U2604g](https://www.youtube.com/watch?v=0tHb6U2604g)
**Duration:** 00:18:25

## Summary

- AI is becoming a double‑edged sword: while it powers business innovations, it also equips hackers with more sophisticated tools for attacks.
- AI‑driven agents can automatically locate login forms on websites with about 95% accuracy, using large language models to parse page elements.
- These agents enable advanced credential‑guessing techniques such as password spraying and brute‑force attempts, bypassing traditional rate‑limit defenses.
- Frameworks like BruteForceAI automate the entire penetration testing (or malicious) process, allowing attackers to launch high‑speed login attacks without deep technical knowledge.
- Understanding and defending against AI‑augmented attack vectors is essential for organizations to safeguard authentication systems against the growing threat.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=0tHb6U2604g&t=0s) **AI-Powered Cyber Attack Landscape** - The speaker outlines how AI is being weaponized—highlighting six emerging threats such as autonomous brute‑force login tools like BruteForceAI—to illustrate the growing need for stronger defenses.
- [00:03:08](https://www.youtube.com/watch?v=0tHb6U2604g&t=188s) **AI-Driven Ransomware Orchestration** - The excerpt outlines the “Prompt Lock” research project, where an autonomous agent powered by a large language model plans target selection, assesses data value, generates encryption code, and executes the ransomware attack end‑to‑end.
- [00:06:36](https://www.youtube.com/watch?v=0tHb6U2604g&t=396s) **AI-Generated Phishing Neutralizes Old Cues** - The speaker warns that traditional clues like bad grammar are becoming unreliable because attackers now use large language models to create flawless phishing emails, necessitating a retraining of users to recognize more sophisticated threats.
- [00:11:41](https://www.youtube.com/watch?v=0tHb6U2604g&t=701s) **AI Deepfake Scam and Exploit Automation** - The speaker recounts a 2024 video‑deepfake that simulated a CFO and tricked an employee into wiring $25 million, then explains how AI can automatically generate exploits by processing public CVE reports.
- [00:17:16](https://www.youtube.com/watch?v=0tHb6U2604g&t=1036s) **AI Lowers Attack Barriers** - The speaker warns that AI automates the full cyber‑kill chain, making sophisticated attacks accessible to low‑skill actors and forcing defenders to adopt AI for prevention, detection, and response.

## Full Transcript
AI attacks. Sounds like a bad sci-fi movie, right? Well, unfortunately, in this case, it's actually
happening, and we can expect to see more of it going forward. Businesses are using AI to improve
customer service. Customers are using AI to research products. Unsurprisingly, hackers are
using AI to, well, hack. Agents powered by AI are equipped with the tools to write
code, attempt logins, generate fake videos and much more. While AI is doing amazing things to
reshape our businesses and our lives in positive ways, it's also amping up the threat by putting
more and more power in the hands of the bad guys. In this video, we're going to take a look at six
different examples of emerging AI-powered attacks so that you can
prepare your defenses to withstand the onslaught. The first type of attack we're going to take a
look at is an AI-powered login attack, where we're going to test the security of your system and your
authentication capabilities to see if they'll withstand an attack. In this case, AI is
leveraged in a pen testing (penetration testing) framework that could be used, again, to
test your security, or by a bad guy to break into your system. This particular one
is called BruteForceAI. It leverages an agent, an AI system that is able to
operate autonomously, which in turn uses an LLM to do some of its processing.
So what is it looking for? Well, this agent is going to go out and start identifying login pages.
So it's going to look for web pages that have login information. It takes the page, sends it off
to the large language model that parses the page and figures out if there are any forms, login
forms, areas where you can type in user IDs and passwords and things like that. LLMs are
particularly good at doing that. In fact, this one was able to correctly identify where the
login area was on the page in roughly 95% of cases. Once that's identified, the
agent conducts and directs the attack. In this case, you have two
different options. One type of attack is a brute force attack, where you basically try every
combination of user IDs and passwords. This usually is not going to work all that well,
because you're going to run into a three strikes policy on a particular website that's going to
lock you out after three bad attempts or something along those lines. So that's one
possibility is brute force. A password spraying attempt, though, might get away with it, because in
that case you try one password against a particular user ID, then move on and try the same
password against a different ID. You spread the attempts across a number of
different accounts rather than barreling in on only one, so no single account racks up
enough failures to trigger a lockout. So, this is,
again, AI is running this attack. The user didn't have to figure out all of these capabilities.
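The difference between the two modes is how attempts are distributed against each account's lockout counter. A minimal, self-contained simulation makes the point; the three-strikes threshold, the lockout window, and all account names are illustrative assumptions, not details taken from BruteForceAI:

```python
LOCKOUT_THRESHOLD = 3   # failed attempts within the window before an account locks
WINDOW = 600            # seconds the failure counter looks back

def locked_accounts(attempts):
    """attempts: list of (t_seconds, user, password). Return accounts that lock."""
    history = {}          # user -> timestamps of recent failures
    locked = set()
    for t, user, _pw in attempts:
        if user in locked:
            continue      # lockout blocks further guesses on this account
        recent = [ft for ft in history.get(user, []) if t - ft < WINDOW] + [t]
        history[user] = recent
        if len(recent) >= LOCKOUT_THRESHOLD:
            locked.add(user)
    return locked

users = ["alice", "bob", "carol"]
passwords = ["Winter2024!", "Password1", "Letmein!", "Qwerty123", "Spring2025!"]

# Brute force: every password against one account, back to back.
brute = [(i, "alice", p) for i, p in enumerate(passwords)]

# Spraying: one password per round across all accounts, pausing longer
# than the lockout window between rounds.
spray = [(rnd * (WINDOW + 60) + i, u, p)
         for rnd, p in enumerate(passwords)
         for i, u in enumerate(users)]

print(sorted(locked_accounts(brute)))   # ['alice'] -- the lockout trips
print(sorted(locked_accounts(spray)))   # [] -- never 3 failures in one window
```

Real defenses therefore pair per-account lockouts with signals spraying cannot dodge so easily, such as per-source rate limits and anomaly detection on login traffic.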
They use the pen testing framework, launch it, and the AI takes care of the rest. On a
similar theme, let's take a look at AI-based ransomware. In this case, we're going to talk about
something called Prompt Lock. It was a research project designed to explore what's
possible here, the art of the possible in this case. It also uses an
agent, which leverages a large language model. So, you're probably seeing a
theme here. The whole thing is designed so that the agent goes off
and orchestrates, directs all of the activities that are
necessary here. So, it's going to plan the particular attack. Then it's going to
figure out which systems it wants to attack and analyze the sensitive data on those systems. So,
it will look for files and say, look, I think this stuff could be really sensitive. They're
going to pay a lot for this. Or, look over here and say, oh, that's probably really not worth my time.
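This value-triage step, scoring files by how sensitive their contents look, is the same pattern matching that defensive DLP (data loss prevention) scanners run. A toy scorer, with patterns and weights invented purely for illustration (they are not from the Prompt Lock research):

```python
import re

# Illustrative patterns only; real DLP rulesets are far larger.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 5),                   # SSN-like number
    (re.compile(r"\b\d{13,16}\b"), 4),                           # possible card number
    (re.compile(r"(?i)\b(confidential|salary|password)\b"), 2),  # hot keywords
]

def sensitivity_score(text: str) -> int:
    """Sum pattern weights over every match found in the text."""
    return sum(w * len(p.findall(text)) for p, w in PATTERNS)

docs = {
    "meeting_notes.txt": "Agenda: Q3 roadmap review, lunch orders.",
    "hr_export.csv": "name,ssn,salary\nPat,123-45-6789,90000",
}
for name, text in docs.items():
    print(name, sensitivity_score(text))   # the HR export scores 7, the notes 0
```

The same scoring that tells an attacker "they're going to pay a lot for this" tells a defender which files deserve the tightest access controls.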
And it can use that information to figure out how much to charge, for instance. It's also
going to generate the actual attack: whatever code is
necessary in order to encrypt the files and that sort of thing, and then execute the attack.
It's going to do all of that under the auspices of this agent, so the agent is really
running the whole thing. The LLM comes into play because it can analyze
the files you feed into it and help decide what to do next. And the attack itself
could result in an exfiltration of data, where I take your data and keep it for
myself. It could amount to an encryption of your data where I say, I've got your data and I'm not
going to give it back unless you pay me. Or it could just erase the data, or threaten to
erase it after a certain period of time; it depends on what type of attack you want to run.
Another thing the execution of the attack can do:
this AI agent, leveraging the LLM, which is able to understand language, can actually
write the ransom note for you. And it could be very personalized. It could, for instance, say, here
are the files that I have taken; if you want them back, this is what
it's going to cost you. All of that is completely directed within the AI. And because
it's all being done within an AI, we also have the ability to make every single one of these attacks
different. It can be essentially a polymorphic attack, one that changes. The first instance of
this attack looks different than the next instance, which looks different than the next
instance, which makes it really difficult to detect. We've seen polymorphic viruses and malware
for decades, and they present a real problem. Now we could see polymorphic ransomware
attacks coming from AI. And by the way, this particular instance, this particular project, all
of this capability runs in a cloud, which means you basically end up with ransomware as a service,
all brought to you by AI. The next type of attack we're going to talk about is AI-powered phishing.
Now, what have we been telling our users about phishing attacks? Remember, those are those emails
that come in that say, I'm your bank or I'm some well-known entity, but it's a fake, and in
most cases they're trying to get you to click on a link that takes you to
a bogus site and log in. Then they harvest your credentials and they're off to the races. What do we
normally tell people to look for as a clue, the dead giveaway that this is not real?
Well, oftentimes it's bad grammar or bad spelling. So we say
if you see those kinds of things, suspect a phishing attack. The implication, though, is that if you
don't see those things, then people are likely to believe that it's legitimate. And I'm telling you,
we need to untrain all of our users from that, because now with AI, we're not going to see this
kind of stuff much anymore. The smart phishers will, in fact, use an LLM, a large language
model, which will generate their text in perfect English or Spanish or French or what have
you, even though the attacker may not speak a word of that language. So, these kinds of artifacts,
these kinds of clues, if we're expecting to find them, we may not find them much anymore. And it
could give someone a false sense of security. And the way it would work is: an
attacker just basically puts prompts into a large language model. They're saying, okay, generate a
phishing email that does this, that, or the other. And then what comes out is a phishing
email that then they send out to others. So they can just copy and paste that. And you might say,
well, but the LLMs that I work with, if I ask them to generate a phishing email, they'll refuse
to do it. And they might. But there are other LLMs out there, which I'm not
going to name, but you can find them if you want to, some on the dark web, that will
generate all of this; they don't have those kinds of guardrails and
restrictions. The bad guys will be using those. And you could also do a little bit of research to
really personalize this, hyper-personalize this. With AI, it could go out maybe and scrape all of
your social media posts and things like that, gathering a lot of information about you to make
the phishing email it sends you very specific to you, so you're more likely
to believe it. I did a video a while back on an experiment IBM ran, where we
basically took an AI, gave it five prompts and five minutes, and compared how effective its
phishing email was against a well-crafted phishing attack that took a human 16 hours to
produce. And you know what? They were almost equal. The
human-generated one was slightly more effective, but not by much. And when you
consider 16 hours versus five minutes, you can see where the economics of this go. And here's the
thing. The humans will not be getting vastly better at generating these; the AI will. So, this is
going to be another area that we're going to continue to see more and more influence from, is
AI-generated phishing. Now, the next type of attack we're going to take a look at is AI-powered
fraud. And in this case, the fraud could take a lot of different forms, but I'm going to
zero in on one particular type of fraud that we call a deepfake. And a deepfake is
basically a case where we're using generative AI. So we've got a gen AI
model here, and I'm going to take something of yours, either an audio
recording of your voice or a video of you doing something, and feed that into my
generative AI model. It then generates a model of its own, one that
basically copies how you act, sound, and look, all of this sort of
thing. Then the only thing I have to do is come up with a script, words that I want to put in your
mouth, feed those in, and it generates the result. So that's how a deepfake
works. And they're not very hard to do. And in fact, we've already seen cases where these have been
very effective. And by the way, if you want to know more about this, I've got a whole video on that, so take
a look. I'm just going to say, if you think that by not leaving your ... your voice on your ... on
your voicemail, it's going to protect you from not getting deepfaked, uh, think again. It doesn't take
very long. Some of these models can ... can generate a very believable deepfake of your voice with as
little as three seconds of audio recording of you. Now, we've already seen, as I said, this be
effective. This is not brand-new news. In 2021, there was a case where an audio deepfake was
done and it convinced an employee that their boss was telling them to wire 35
million dollars to a particular account. Turns out that was a deepfake; it wasn't their boss.
The company basically lost 35 million dollars. More recently, in 2024,
there was a case where the deepfake got even better, and it was video-based. In this one, there
was a video call that simulated the CFO, the Chief Financial Officer of a company, and it
convinced an employee to wire 25 million dollars to an attacker. So, this is not theoretical. And
this is a case where generally we believe what we see and hear. Well, I'm telling you, with deepfakes,
if you aren't in the room, you can't believe it. Our next type of attack that's AI-powered is
going to be AI-powered exploits. Now, an exploit is something that once you found a vulnerability,
the exploit is the thing that takes advantage of that vulnerability. For instance, in
the security industry we publish these things called CVEs, Common Vulnerabilities and Exposures. These are
reports where, once we find a particular vulnerability, it gets described; these reports are
numbered, cataloged, and publicly available. It's a way that everyone in the
security industry can talk about a particular vulnerability and know we're all
talking about the same thing. The report will also describe the way the flaw works and what the underlying vulnerability
is. So, this is publicly available information. In another research project, they took
a CVE and fed it into an AI tool called CVE
Genie. This, again, is an agent: it takes the CVE,
the document itself, and feeds it into an LLM. Starting to see a trend here? Agent leverages LLM.
The LLM reads the document, pulls out the salient details, and sends that information back to
the genie, the agent, which then not only figures out what this vulnerability means,
but how do we exploit it, and writes the actual exploit code for you. So in this case, the whole
process is automated, from feeding in the CVE to processing it, to generating the exploit.
And here's the thing: with this particular version, they achieved a 51%
success rate just by feeding a CVE into the system. And the cost for each one of these
exploits? Less than 3 dollars. So, the economics of this are extremely favorable
for the bad guys. And that means individuals who don't know anything about coding will be able to
take advantage of systems by using publicly available information and an AI at their disposal.
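Taking the two figures just quoted at face value, the expected cost of one working exploit is a one-line calculation:

```python
cost_per_attempt = 3.00   # dollars per CVE processed (figure quoted above)
success_rate = 0.51       # fraction of CVEs that yielded a working exploit

# At a 51% hit rate, roughly 1/0.51, or about 1.96, CVEs are processed per success.
cost_per_working_exploit = cost_per_attempt / success_rate
print(round(cost_per_working_exploit, 2))   # 5.88
```

Under six dollars per working exploit, assuming the quoted numbers hold.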
Another example of this is malware. Malware is, in many cases, a type of
exploit, one that takes advantage of an underlying vulnerability in the
system. So, I could use a system like this to generate malware that obfuscates
its nature and is polymorphic, like I was talking about before. It could hide certain details about
the way it's going to operate and make it even harder to detect. So you have a smart system that
is making itself difficult to detect, making it a lot more effective potentially as well. Now what
if you have AI that runs the entire kill chain? AI-powered attacks all the way across the
board. This has already been done; it's been proven effective in an attack that
weaponized a popular AI system from Anthropic. And what it does is use an AI
agent, as do many of these other attacks, and the agent is basically responsible for running the
entire attack. So, it's going to figure out and make decisions, tactical and strategic, on what
kinds of things it wants to attack, what kind of attack does it want to run. It's going to find its
victims. It's going to identify the ones it thinks are the most promising, maybe the high-value
targets, that sort of thing. It's going to analyze the data it has exfiltrated off their
systems and figure out, here's the good stuff that I really want to have,
all within the context of this agent. The agent might, by the way, leverage an
LLM to process the documents and things like that. It might also create personas to hide behind.
Say I'm going to do an extortion attack, threatening to release all of this
information to the world if you don't pay a certain ransom; well, the system
could create a false persona, hide behind it, and tell you to pay that persona,
which makes it easier for the attacker to get away. And then ultimately it can
create the ransomware itself. It's going to create all of this, figure out its demands, and
calibrate those based upon the value of the information and of the target it
has gone after, and on how likely the victim is to actually pay. Because if you ask for
a ton of money from someone who doesn't have it, they're not going to pay it. If you ask for too
little, well, then you sold yourself short. So, this can make all of those economic decisions. And
we could, in the future, add all kinds of things to this: any kind of
attack you can imagine could potentially be done with a system like this. So, the AI
agent is able to design the attack, advance the attack, and execute the attack.
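The too-much/too-little ransom calibration described above is, at bottom, an expected-value maximization. Here is a sketch with a completely invented payment-probability curve; nothing in it comes from a real attack tool:

```python
# Illustrative only: assume the chance of payment falls linearly to zero
# as the demand approaches the victim's budget.
def payment_probability(demand: float, victim_budget: float) -> float:
    if demand >= victim_budget:
        return 0.0
    return 1.0 - demand / victim_budget

def best_demand(victim_budget: float, candidates: list) -> float:
    """Pick the candidate demand with the highest expected payout."""
    return max(candidates, key=lambda d: d * payment_probability(d, victim_budget))

candidates = [10_000, 50_000, 100_000, 250_000, 500_000]
print(best_demand(1_000_000, candidates))   # 500000
```

Under this linear curve the expected payout d·(1 − d/B) peaks at exactly half the budget B, which is why 500,000 wins against a 1,000,000 budget; the point is only that the "calibration" the agent performs becomes simple arithmetic once it has estimated the victim's ability to pay.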
And what all of this amounts to is that we're lowering the skill level required of
an attacker. In other words, there was a time when an attacker needed really
sharp, elite-level skills in order to pull off a complex attack like this. Now, all they have to do
is basically be like a vibe coder doing vibe attacking, vibe hacking. In other words, you
come up with the idea, instruct your agent to go do it, it figures out all the details, and then
you just collect the money. This is an example of where AI has been weaponized to do the full kill
chain. By now it should be pretty clear where all this is headed. AI-powered attacks are on the rise,
and we're just seeing the beginning of this trend. This much I'm sure of: AI attacks are only going to
get worse. That means the defenders are going to have to step up their game to meet the challenge.
We're going to need to leverage AI for cyber defense to do prevention, detection and response.
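As one concrete, non-AI example on the detection side (my own illustration, not something covered in the video): encrypted file contents look statistically random, so a sudden jump in byte entropy toward 8 bits per byte is a classic ransomware tell that detection-and-response tooling can watch for.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means indistinguishable from random)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

plaintext = b"quarterly report: revenue up, costs flat" * 100
uniform = bytes(range(256)) * 16     # stands in for well-encrypted output

print(round(byte_entropy(plaintext), 2))   # low: ordinary English text
print(byte_entropy(uniform))               # 8.0
```

A monitor that sees many files in a share suddenly rewritten with near-8.0 entropy has a strong signal that encryption-style ransomware is running.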
It won't be optional. It's going to be good AI versus bad AI. Make sure the good one wins.