
Understanding AI Attacks with MITRE ATLAS

Key Points

  • Effective problem‑solving requires first identifying the root cause, whether it’s a leaky pipe or the specific steps of a cyber‑attack.
  • To defend against AI‑based threats, analysts must understand the attacker’s goals, methods, and the target’s value before deploying appropriate mitigations.
  • MITRE’s new ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) extends the ATT&CK framework to map tactics, techniques, and procedures unique to AI attacks.
  • Real‑world AI attacks can be extremely costly—MITRE cites a $77 million incident—so using ATLAS to visualize and counter these threats is increasingly critical.

Full Transcript

# Understanding AI Attacks with MITRE ATLAS

**Source:** [https://www.youtube.com/watch?v=QhoG74PDFyc](https://www.youtube.com/watch?v=QhoG74PDFyc)
**Duration:** 00:08:40

## Sections

- [00:00:00](https://www.youtube.com/watch?v=QhoG74PDFyc&t=0s) **Diagnosing AI Cyber Attack Origins** - The speaker likens fixing a leaky pipe to analyzing AI‑based cyber threats, emphasizing that understanding the attack’s source and progression is essential before selecting proper tools and mitigations, and introduces the MITRE ATT&CK framework as a helpful resource.

## Full Transcript
If you want to fix a problem, you first have to understand what's causing it. For instance, with this leaky pipe, we've got water pooling up here. Where's the cause? Is it a break in the bend in this pipe, or is it further upstream? Maybe it's this fitting that's loose and dripping down there, or maybe the source is actually higher up in the system and the water is flowing down. The bottom line is, if I'm going to fix this, I've got to know where the problem is and how the water has traveled.

It's the same with cybersecurity, in particular with AI-based attacks. I'm going to need to understand the type of attack I'm dealing with; then I can get out the right tools. I need to understand what the target is, what the bad guy is after in this attack, and then what steps they took. If I can understand that and retrace those steps, then I can do a better job of preventing this in the future. And ultimately, what are the mitigations I need to put in place in order to fix the problem?

In this video we're going to take a look at a tool you can use to better understand AI-based attacks. There's an organization called MITRE that came out with a tool we use in the industry, and it's very useful. I did a video on the first one, called ATT&CK: the Adversarial Tactics, Techniques, and Common Knowledge framework. It covers cybersecurity attacks in general and shows you the steps an attacker could go through, so that you understand them better. Well, MITRE has built on that and come out with a new version designed specifically for AI. It's called ATLAS for short: the Adversarial Threat Landscape for Artificial-Intelligence Systems. ATLAS is what we're going to take a look at today, so that we can better
understand this new class of AI-based attacks.

So why do we have to care about AI-based attacks? Well, it turns out that MITRE, which I mentioned previously, has already documented one case that cost $77 million in damages. It was an AI-based attack, an attack on the AI within a particular system. So we've already seen that this can be expensive, and I expect that number is only going to increase as we start using AI in more and more use cases.

So, ATLAS: let's take a look at what this thing is. This is what the framework looks like, and you can get a general sense of what's there. In the columns we have the tactics: the first is Reconnaissance, then Resource Development, then Initial Access, and so forth. The tactics are basically the "why": what is the attacker really trying to accomplish in a particular step? For instance, with reconnaissance they're trying to case the joint, to figure out what the environment looks like. That's the why, and MITRE has documented 14 different tactics, 14 different kinds of "why." The techniques, then, are the "how": how do attackers go about doing what they're going to do? We've got 82 of those already documented, and those numbers may well grow over time as we learn more and attackers find more ways of doing things. Also included, to illustrate all of this, are case studies; there are 22 of them as of the time of this video, and there may be more in the future. In fact, we're going to take a look at one of those in a minute. To further illustrate things, there's also a tool called the Navigator.
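The tactic-and-technique structure described above can be sketched as plain data. This is only an illustrative model, not the ATLAS data format; the tactic names follow the video, but the technique names under each are paraphrased examples, and the full matrix at atlas.mitre.org is much larger.

```python
# Illustrative sketch of a slice of the ATLAS matrix: tactics are the "why"
# of an attack step, techniques are the "how." Names below are a small,
# paraphrased subset, not the official ATLAS identifiers.
from dataclasses import dataclass, field


@dataclass
class Tactic:
    name: str                                        # the attacker's goal ("why")
    techniques: list = field(default_factory=list)   # documented methods ("how")


matrix = [
    Tactic("Reconnaissance", ["Search Victim's Public Materials",
                              "Search Public Research"]),
    Tactic("Resource Development", ["Develop Adversarial ML Capabilities"]),
    Tactic("ML Model Access", ["Use Product or Service with ML Model"]),
    Tactic("ML Attack Staging", ["Craft Adversarial Data"]),
]


def techniques_for(tactic_name: str) -> list:
    """Return the documented techniques under a given tactic."""
    for tactic in matrix:
        if tactic.name == tactic_name:
            return tactic.techniques
    return []


print(techniques_for("Reconnaissance"))
# → ['Search Victim's Public Materials', 'Search Public Research']
```

Walking a structure like this, column by column, is essentially what the framework view shows: each column is a tactic, and the cells below it are the techniques an attacker might use to accomplish it.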
The Navigator shows you which of these techniques have actually been selected, which ones have been followed. Think of it as a breadcrumb trail that shows you, for a particular attack, what actually occurred: out of all the possible steps, here are the ones the attacker actually chose. There's a heat map as well, which gives you another visualization of these different tactics and techniques.

OK, let's take a look at an actual case study from the MITRE ATLAS framework. This particular case looked at a malware scanner that was based on machine learning, and it was discovered that there was a universal bypass that could be appended to malware to fake out the system, so that it would not identify the malware as harmful. How did this work? We're going to map it to the various tactics and techniques; in particular, we'll look at the tactics.

So, the recon stage: what did the attacker do? Well, the first thing they did, it seems, was go after public information. There was a decent amount of this available: the organization perhaps gives talks at conferences, presentations, maybe even YouTube videos, things like that. There were also patents and other intellectual property that might have been filed in a public format. An attacker can use all of this to do initial reconnaissance.

The next step is machine learning model access. What did they do in this case? They took a look at the product itself, the tool that's supposed to be doing the detection, and started trying to see how it works. They turned verbose logging on, which means the system writes out all kinds of information about what it is seeing.
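The Navigator's breadcrumb-trail idea described above can be sketched as data: take the full list of techniques, mark the ones observed in a given incident, and let a heat map color cells by score. This is a minimal illustrative structure, not the actual Navigator layer file schema, and the technique names are invented for the example.

```python
# Sketch of a Navigator-style "layer": score each technique by whether it
# was observed in one incident (1 = used, 0 = not), so a heat map can
# highlight the attacker's actual path. Field names are illustrative only.
import json

all_techniques = [
    "Search Victim's Public Materials",
    "Use Product or Service with ML Model",
    "Craft Adversarial Data",
    "Phishing",
]
observed = {"Search Victim's Public Materials", "Craft Adversarial Data"}

layer = {
    "name": "Example incident",
    "techniques": [
        {"technique": t, "score": 1 if t in observed else 0}
        for t in all_techniques
    ],
}

print(json.dumps(layer, indent=2))
```

Rendering the non-zero scores against the full matrix gives exactly the breadcrumb trail the video describes: all the possible techniques, with the ones the attacker actually used lit up.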
All of that logged information is also information an attacker can use in later steps, and by looking at it they figured out a bit about what the reputation scoring system in the product was like: it looks at the submitted file and classifies it as good or bad.

The next stage is resource development. In this case, that meant developing some adversarial machine learning. In particular, what they identified through reverse engineering was that there were specific attributes the malware scanner was looking for all the time, and when it saw those things, that's when it would flag a file as malware. So what they tried to do was discover how that algorithm worked, what that reputation scoring process was like. And they made a discovery: there was actually a second model included in the product, and that second model was basically an override. If the second model found enough "good" in the code, it would override the first model's suspicions about malware. That became the weak point that got exploited.

Then comes the ML attack staging. In this case, what they did was a manual modification: they went in and modified the malware being submitted to the system. They appended just a little bit of good information, mixing in just enough benign content with the malware, and figured out that if they added it at the very end, everything would be okay and the system would not recognize the malware, because the second model would do the override. Then, ultimately, they launched it, and we have our boom: the attack that evades the defense that is looking for this malware.
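The override weakness described above can be illustrated with a toy sketch. To be clear, these are not the vendor's actual models; the token lists, threshold, and scoring are invented solely to show the shape of the flaw: a primary detector counts suspicious markers, a second model counts benign-looking markers and vetoes the detection when it sees enough of them, so appending benign strings flips the verdict.

```python
# Toy illustration of the case-study flaw (invented features and threshold):
# the second "override" model vetoes the primary detector whenever enough
# benign-looking content is present, even if that content was just appended.

SUSPICIOUS = {b"CreateRemoteThread", b"VirtualAllocEx"}   # toy malware markers
BENIGN = {b"Copyright", b"Microsoft", b"LICENSE"}         # toy "good" markers


def primary_score(data: bytes) -> int:
    """Primary detector: count suspicious markers in the file."""
    return sum(token in data for token in SUSPICIOUS)


def override_score(data: bytes) -> int:
    """Second model: count benign-looking markers in the file."""
    return sum(token in data for token in BENIGN)


def classify(data: bytes) -> str:
    # The override model vetoes the malware verdict when it finds
    # "enough good" (here: 2+ benign markers) -- the exploitable weak point.
    if primary_score(data) > 0 and override_score(data) < 2:
        return "malware"
    return "benign"


payload = b"...VirtualAllocEx...CreateRemoteThread..."
print(classify(payload))                             # → malware
print(classify(payload + b" Copyright Microsoft"))   # → benign (the bypass)
```

The appended benign strings never change what the malicious code does; they only change what the scoring models see, which is why a fixed suffix could act as a universal bypass.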
OK, so now we've gone through one of the case studies that comes with the MITRE ATLAS framework. Hopefully you have a little better idea of how this framework gives us a better understanding of the problem: we can go back and see the source, see the steps the attacker went through, and understand what sort of tactics and techniques were employed. We can also look at this as a common description, a common language, a lingua franca if you will, something that all of us in the industry can use. So when we talk about reconnaissance, we know what that means; when we talk about resource development, we know what that means, because we're all reading from the same description. The hope, then, is that with better understanding and a common description, we end up with better defenses, and that's really what we're trying to do with AI, this new attack surface.

If you liked this video and want to see more like it, please like and subscribe. If you have any questions or want to share your thoughts about this topic, please leave a comment below.