
LLMjacking: Cloud Cost Hijacking Attack

Key Points

  • Generative AI can process natural language, create documents, and summarize large texts, but running these models can incur very high cloud costs.
  • A newly identified threat called **LLMjacking** hijacks an organization’s cloud resources to run large language models, leaving the victim to foot the massive bills (up to $46,000 per day).
  • Attackers typically gain footholds by exploiting misconfigurations, vulnerabilities, or stolen credentials—often leveraging publicly exposed API keys and passwords that can even be retrieved from LLM training data.
  • Once inside, the adversary downloads a language model from a repository to the compromised instance, fine‑tunes it, and uses it for their own purposes while the victim is billed for the compute usage.
  • The attacker may then set up a reverse proxy to sell access to the illicit LLM to others, turning the victim’s cloud environment into a profit‑generating service for the threat actor.

Full Transcript

# LLMjacking: Cloud Cost Hijacking Attack

**Source:** [https://www.youtube.com/watch?v=dibZ1itSvM4](https://www.youtube.com/watch?v=dibZ1itSvM4)
**Duration:** 00:07:09

## Sections

- [00:00:00](https://www.youtube.com/watch?v=dibZ1itSvM4&t=0s) **LLMjacking: Hidden AI Billing Attack** - The speaker describes how attackers exploit poorly secured cloud instances to hijack large language model usage, generating massive unintended AI service costs known as LLMjacking.
- [00:03:10](https://www.youtube.com/watch?v=dibZ1itSvM4&t=190s) **Protecting Credentials and Shadow AI** - The speaker explains how attackers leverage unmanaged secrets and hidden AI workloads ("shadow AI") to compromise cloud environments, and advises using secure vaults for credential storage and monitoring for unauthorized AI instances to mitigate these risks.
- [00:06:21](https://www.youtube.com/watch?v=dibZ1itSvM4&t=381s) **Detecting Abnormal Cloud Usage** - The speaker explains how monitoring usage patterns, billing spikes, and unusual activity can reveal unauthorized behavior and help prevent LLMjacking.

## Full Transcript
0:00 Gen AI is an amazing technology that has changed the face of computing seemingly overnight.
0:06 It can understand what you say using this thing called natural language processing,
0:12 or it could create a brand new document for you just based upon a prompt that you feed to it.
0:18 Another useful task: if you've got a ton of documents that are too long, didn't read,
0:24 I could feed them in and it will give me a summary of just the important points,
0:29 but running this advanced tech can be really costly, and someone has to pay the bills.
0:34 The problem is that you may be unknowingly paying the bill for someone else who's riding on your dime.
0:39 In fact, one report found that this could cost your organization $46,000 a day.
0:47 That's a lot of money.
0:48 The name given to this type of attack is LLMjacking.
0:51 LLM as in the large language model, the underlying technology that powers these latest chatbots, and jacking
0:59 because it's essentially hijacking your environment and leaving you with the bill.
1:04 Let's take a look at how this whole attack works and what you can do to prevent it.
1:09 Okay, so how does this attack work?
1:12 Well, it typically starts with a cloud instance that you own, but that you haven't really secured all that well.
1:18 The attacker then figures out how to get into your cloud instance.
1:22 How are they doing that?
1:24 Well, they could be exploiting some known vulnerability, or maybe it's unknown to you, and they are able to break in.
1:31 It could be because of some sort of misconfiguration of the cloud environment,
1:36 that you didn't really lock everything down as well as you should have.
1:41 It could also be from some stolen credentials.
1:44 This could be passwords, API keys, things like that, and this is not theoretical.
1:50 In fact, one recent report came out and said that they found 12,000 API keys and passwords that were available
1:59 through one of the very popular LLMs in its training data.
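The leaked-credentials problem described above is exactly what secret-scanning tools look for in code and config files. A minimal sketch of the idea follows; the two regex patterns and the sample string are illustrative only, and real scanners such as gitleaks or truffleHog ship far larger rule sets:

```python
import re

# Illustrative patterns for common credential formats (not a real rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r'(?i)api[_-]?key["\']?\s*[:=]\s*["\']([A-Za-z0-9_\-]{20,})["\']'
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for every hit in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A hardcoded key like this should never reach a repo or a training corpus.
sample = 'config = {"api_key": "sk_live_abcdefghijklmnopqrstuvwx"}'
print(scan_text(sample))
```

Running a scan like this in CI, before code is pushed anywhere public, is one way to keep credentials out of the places LLMs scrape for training data.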
2:04 So in fact, you could go to the LLM and ask it for some of these things and it would just spit that stuff right out for you.
2:09 So here's how they end up breaking into your cloud environment,
2:13 and once they've done that, then the attacker goes to a model repository, picks out a model that they like,
2:21 this is just like shopping, downloads it into this cloud instance, and now they have a large language model.
2:28 A few tweaks more and they're off to the races.
2:30 They now have their own LLM and it's running on your instance.
2:36 You're the account holder and you're going to get stuck with the bill, but that's not the end of it.
2:42 In fact, if this guy wants to make a little profit, then what he can do is set up a reverse proxy,
2:48 and the reverse proxy would allow lots of other people to log in, he gives them access,
2:55 and then it exploits the vulnerabilities or the credentials that this guy used to break in in the first place,
3:03 or maybe he set up some others as a back door, and he basically charges them for access to this LLM.
3:10 So not only is he having you pay the bill, but he then is getting a lot of this going directly into his pockets.
3:19 So now we've taken a look at how this particular attack works.
3:23 Now, let's take a look at what you need to do to protect against it.
3:27 Well, let's look at the ways that this person was able to break into the environment in the first place.
3:32 I told you one of them was this thing right here, credentials.
3:36 So credentials, another word for those, are essentially secrets.
3:40 And there are tools that do secrets management.
3:44 In fact, the secrets that we're involved with here are things like API keys.
3:49 It could be passwords.
3:51 It could be anything like that, something that allows you to get into a system, that no one else knows.
3:57 And what we need is a good place to store all of this kind of stuff,
4:01 essentially a vault that can store those things so that we can access them.
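The vault idea can be sketched in a few lines: the application fetches its credential at runtime instead of carrying it in source code. This is a minimal stand-in that reads an environment variable; in a real deployment the same function would call a secrets manager such as HashiCorp Vault or AWS Secrets Manager, and the name `LLM_API_KEY` is made up for the example:

```python
import os

def get_api_key(name: str) -> str:
    """Fetch a credential at runtime instead of hardcoding it in source.

    An environment variable stands in here; in production this lookup
    would go to a secrets manager (HashiCorp Vault, AWS Secrets Manager, etc.).
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

# Stand-in for a secret injected by the vault/orchestrator at deploy time.
os.environ["LLM_API_KEY"] = "example-value"
print(get_api_key("LLM_API_KEY"))
```

Because the secret never appears in the codebase, it can't end up in a public repo or in an LLM's training data, and rotating it doesn't require a code change.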
4:06 And we make sure that we have access to them, not that all of the public LLMs and the bad guys have access to them,
4:12 so that's a good place to start: locking the front door and taking care of the keys once you've locked it.
4:19 The next thing is, look what was happening in this cloud environment.
4:22 There was an AI that was running, and it's running in your environment, and you didn't know about it.
4:28 So this is what we refer to as shadow AI.
4:32 In many cases, shadow AI can happen because an employee put it there.
4:35 And they just wanted to experiment with it and see what it was going to do.
4:38 And it might be harmless, but you need to know about it.
4:41 So in fact, you can't secure what you can't see.
4:45 You need to discover shadow AI in your environment.
4:48 And in this case, you'd be discovering a truly illegitimate AI that's running on your system and using up your resources.
4:55 The other thing you'd want to do if you discovered a shadow AI is to make sure that you secure it,
5:01 that you have this thing locked down, that you don't have these kinds of problems with misconfigurations and the like.
5:08 Now, another area I said the bad guy might get in through would be some sort of vulnerability that's found.
5:14 Well, vulnerability management tools also exist, and we need those kinds of capabilities.
5:18 For instance, I need to be able to patch all the software
5:22 in this system, and that's not always an easy thing to do, so use tools
5:26 that allow you to do that, because every piece of down-level, old software you have
5:32 probably has a number of security vulnerabilities in it,
5:35 and those are the things this guy can take advantage of in order to get into your system.
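Discovering shadow AI usually starts from a cloud asset inventory: GPU-class instances that nobody approved are a strong signal that an unknown model is running. A minimal sketch, assuming a simplified inventory format (the field names, instance-type prefixes, and IDs here are illustrative, not a real cloud API schema):

```python
# Common GPU instance-family prefixes (illustrative, AWS-style naming).
GPU_INSTANCE_PREFIXES = ("p3", "p4", "g4", "g5")

def find_possible_shadow_ai(instances, approved_ids):
    """Flag GPU-class instances that are not on the approved list."""
    flagged = []
    for inst in instances:
        family = inst["instance_type"].split(".")[0]
        if family.startswith(GPU_INSTANCE_PREFIXES) and inst["id"] not in approved_ids:
            flagged.append(inst["id"])
    return flagged

inventory = [
    {"id": "i-001", "instance_type": "t3.micro"},
    {"id": "i-002", "instance_type": "g5.12xlarge"},  # GPU box nobody approved
]
print(find_possible_shadow_ai(inventory, approved_ids={"i-001"}))
```

In practice the inventory would come from the cloud provider's asset API, and anything flagged would be investigated and either approved and locked down, or shut off.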
5:39 Some other things that we're going to look at: the configuration of this environment,
5:44 the way the cloud is configured, so we're going to check on these kinds of things as well,
5:50 and make sure that it discovers some of the common errors that people make,
5:54 where they're exposing information that they wouldn't intend to otherwise.
5:59 There are also tools that help with this: cloud security posture management tools.
6:03 So I recommend something like that in this space.
6:06 And then ultimately, we need to be able to monitor all of this.
6:10 Again, you can't secure what you can't see.
6:12 So I need to be able to use the standard security information and
6:16 event management tools and things like that, that look for security issues
6:21 and see whenever there are abnormal patterns and things like that.
6:25 I also want to look at usage records and see if there are people doing things in the system that they shouldn't be doing.
6:33 Why is this particular cloud instance, that's been pretty quiet for a long time, now all of a sudden peaking?
6:40 Well, maybe the reason it is hitting a peak is because somebody is selling services on our system.
6:48 And another thing is, look at the billing records here as well.
6:51 Just look and see: if this environment is suddenly costing you $46,000 a day,
6:56 there might be a reason for that, and you'd want to know it.
7:00 So do the things that I've talked about right here, and you should be in better shape to avoid this LLMjacking.
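The billing-record check at the end can be automated with even a crude baseline comparison. A minimal sketch, assuming daily costs in dollars and a made-up spike threshold of five times the median daily spend:

```python
from statistics import median

def flag_cost_spikes(daily_costs, factor=5.0):
    """Return indices of days whose spend exceeds `factor` x the median daily spend."""
    baseline = median(daily_costs)
    return [i for i, cost in enumerate(daily_costs) if cost > factor * baseline]

# A quiet environment that suddenly hits the $46,000/day figure from the report.
costs = [1180, 1220, 1190, 1250, 1210, 46000, 1230]
print(flag_cost_spikes(costs))
```

The median makes the baseline robust to the spike itself; in practice this kind of check would run against the cloud provider's cost/billing export and raise an alert rather than print, and the five-times factor would be tuned to the environment.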