# Video jFHPEQi55Ko

**Source:** [https://www.youtube.com/watch?v=jFHPEQi55Ko](https://www.youtube.com/watch?v=jFHPEQi55Ko)
**Duration:** 00:07:25

## Sections

- [00:00:00](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=0s) **Untitled Section**
- [00:03:26](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=206s) **Untitled Section**
- [00:06:45](https://www.youtube.com/watch?v=jFHPEQi55Ko&t=405s) **Untitled Section**

## Full Transcript
0:00 How do you know if you can trust the results of an AI model? 0:05 Let's say I've deployed a new AI model called "Fraud Detection". 0:14 Here it is. 0:14 You know what? 0:15 I spent a lot of time on this model. 0:17 It's got an input layer, an output layer, some hidden layers -- all connected together. 0:25 Now, this model analyzes all of your transactions, and this AI model of mine has flagged one of your transactions 0:34 for a purchase of $100 at a coffee shop as potentially fraudulent. 0:42 You know, people can be fraudulent sometimes.

0:48 Now, how confident can you be that my AI model is probably right and that this transaction should be denied or investigated further? 0:58 Well, from the information I've given you, you can't possibly make that call. 1:03 You don't know anything about my AI model. 1:07 It's what's commonly referred to as a "black box", 1:12 and that's just about impossible to interpret. 1:14 You have no idea what's going on in those calculations. 1:18 But here's the kicker -- me, the guy who created this beautiful AI algorithm, well, I have no idea either. 1:27 You see, when it comes to applications of AI, not even the engineers or data scientists who create the algorithms 1:35 can fully understand or explain what exactly is happening inside them for a specific instance and result.

1:44 But, thankfully, there is a solution to this problem. 1:48 Actually, we have plenty of solutions to plenty of problems! 1:52 Consider subscribing to the IBM Technology Channel to hear about those. 1:56 But the solution in this case, well, it's called Explainable AI, or XAI. 2:05 It allows us humans to understand how an AI model comes up with its results 2:14 and, consequently, to build trust in those results. 2:19 Now, the setup of XAI consists of three main methods: prediction accuracy, traceability, and decision understanding. 2:36 The first two, prediction accuracy and traceability, address technology requirements, 2:48 while decision understanding addresses human needs.
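To make the "black box" from the opening concrete, here is a minimal sketch of a fraud model with an input layer, a hidden layer, and an output layer, all fully connected as described. The feature names, weights, layer sizes, and threshold are all invented for illustration; they are not the model from the video.

```python
import math
import random

# A toy "black box" fraud model: input layer -> hidden layer -> output layer,
# all fully connected. Every number here is randomly generated, purely to
# illustrate why inspecting the weights tells a human nothing useful.
random.seed(42)

N_IN, N_HIDDEN = 3, 8
W1 = [[random.gauss(0, 1) for _ in range(N_HIDDEN)] for _ in range(N_IN)]
b1 = [random.gauss(0, 1) for _ in range(N_HIDDEN)]
W2 = [random.gauss(0, 1) for _ in range(N_HIDDEN)]
b2 = random.gauss(0, 1)

def fraud_score(x):
    """Forward pass: ReLU hidden layer, then a sigmoid fraud probability."""
    hidden = [max(0.0, sum(x[i] * W1[i][j] for i in range(N_IN)) + b1[j])
              for j in range(N_HIDDEN)]
    logit = sum(h * w for h, w in zip(hidden, W2)) + b2
    return 1.0 / (1.0 + math.exp(-logit))

# The $100 coffee-shop transaction, encoded as hypothetical scaled features:
# [amount, distance from home, hour of day].
score = fraud_score([1.0, 0.2, 0.5])
print(f"fraud probability: {score:.3f}")
# A number comes out, but nothing in W1 or W2 explains *why* -- that is the black box.
```

Even with full access to the weights, the reason behind any single score is opaque, which is exactly the gap XAI sets out to close.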
2:51 Now, prediction accuracy is clearly an important component of how successful the use of AI is in everyday operation. 2:59 By running simulations and comparing XAI output to the results in the training dataset, we can determine prediction accuracy. 3:08 The most popular technique used for this is called Local Interpretable Model-agnostic Explanations, or LIME, 3:19 which explains the predictions of a classifier by approximating it locally with a simpler, interpretable model.

3:26 Now, traceability can limit the ways decisions can be made, setting up a narrower scope for machine learning rules and features. 3:34 One traceability technique is called DeepLIFT, 3:41 which stands for Deep Learning Important FeaTures. 3:48 It compares the activation of each neuron in the neural network 3:51 to its reference activation, showing traceability links and dependencies.

3:55 And then decision understanding is, well, the human factor. 3:59 There are no fancy measurements here. 4:01 This is all about educating and informing teams to overcome distrust in AI and helping them understand how the decisions were made. 4:09 Now, this can be presented to business users in the form of a dashboard. 4:17 For example, here, a dashboard could show the primary factors behind why a transaction was flagged as fraudulent 4:23 and the extent to which those factors influenced the decision. 4:28 Was it the transaction amount? 4:29 Was it the location where the transaction took place? And so forth. 4:34 Further, this dashboard can show the minimum changes that would be required for the AI to produce a different outcome. 4:42 So, if the transaction amount of, let's say, $100 was a significant factor, and we showed that in the dashboard -- 4:49 how much lower would that amount have to be for the AI to have made a different decision? 4:54 Say, flagging the transaction as non-fraudulent.

4:57 But, you see, explainable AI is about more than just building trust in the AI model. 5:03 It's also about troubleshooting and improving model performance.
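The LIME idea described above can be sketched in a few lines: sample perturbations around the flagged transaction, query the black box on each, weight the samples by proximity, and fit a small weighted linear model whose coefficients say which features pushed the score up or down. This is a heavy simplification, not the real `lime` package (which fits a joint sparse linear surrogate); the black-box scorer and feature names below are invented stand-ins.

```python
import math
import random

random.seed(0)

# Hypothetical black-box scorer standing in for the fraud model. LIME only
# needs its inputs and outputs; this formula is invented for the demo.
def black_box(x):
    amount, distance, hour = x
    logit = 3.0 * amount + 1.5 * distance - 0.5 * hour - 2.0
    return 1.0 / (1.0 + math.exp(-logit))

def lime_sketch(predict, instance, n_samples=2000, sigma=0.5):
    """Simplified LIME: perturb `instance`, weight samples by proximity,
    and estimate a per-feature weighted linear effect on the score."""
    n = len(instance)
    samples, weights, scores = [], [], []
    for _ in range(n_samples):
        z = [xi + random.gauss(0.0, sigma) for xi in instance]
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, instance))
        samples.append(z)
        weights.append(math.exp(-dist2 / (2 * sigma ** 2)))  # proximity kernel
        scores.append(predict(z))
    wsum = sum(weights)
    coefs = []
    for j in range(n):
        # Weighted covariance of feature j with the score / weighted variance.
        mx = sum(w * s[j] for w, s in zip(weights, samples)) / wsum
        my = sum(w * y for w, y in zip(weights, scores)) / wsum
        cov = sum(w * (s[j] - mx) * (y - my)
                  for w, s, y in zip(weights, samples, scores))
        var = sum(w * (s[j] - mx) ** 2 for w, s in zip(weights, samples))
        coefs.append(cov / var)
    return coefs

instance = [1.0, 0.2, 0.5]  # the flagged $100 transaction (scaled features)
for name, c in zip(["amount", "distance", "hour"], lime_sketch(black_box, instance)):
    print(f"{name:>8}: local effect {c:+.3f}")
```

The signs and magnitudes of the local effects are exactly the kind of per-transaction explanation the dashboard described above would surface: the amount pushes the score toward "fraudulent", the hour pulls it back.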
5:08 It allows us to investigate model behaviors by tracking model insights on deployment status, fairness, quality, and drift -- 5:16 because an AI model's performance can indeed drift. 5:21 By that we mean it can degrade over time, because production data differs from training data. 5:29 By using explainable AI, you can analyze your model and generate alerts when it deviates from the intended outcomes and performs inadequately -- 5:39 such as, well, producing a bunch of false positive fraud alerts. 5:43 From there, analysts can investigate what happened when deviations persist.

5:48 We've already talked about financial services as a use case, 5:56 but there are plenty of use cases across many industries that we can apply this to. 6:02 For example, let's consider healthcare. 6:06 In healthcare, XAI can accelerate diagnostics and image processing and streamline the pharmaceutical approval process. 6:17 Or how about the field of criminal justice? 6:23 In criminal justice, XAI can accelerate resolutions on DNA analysis, prison population analysis, and crime forecasting. 6:33 Explainability can help developers ensure that the system is working as expected, 6:39 meet regulatory standards, and even allow a person affected by a decision to challenge that outcome. 6:46 So, when I deny or approve that $100 transaction of yours, you can understand how I came to that decision. 6:56 And perhaps I can also suggest where to find a more moderately priced coffee shop.

7:02 And that's a wrap. 7:04 As you may have heard, we're on the lookout for new topics that are of interest to you. 7:08 So, if you have topics in mind we could address in future videos, hit us up in the comments. 7:15 Thanks for watching. 7:18 If you have any questions, please drop us a line below. 7:20 And if you want to see more videos like this in the future, please Like and Subscribe.
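As a closing illustration, the drift alerting described around 5:08 can be sketched as a comparison between a feature's training-time distribution and what the model sees in production. The data, the feature, and the alert threshold below are all invented; real monitoring tools track many features and metrics, but the core check looks like this.

```python
import math
import random

random.seed(1)

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(var)

def drift_alert(training, production, threshold=0.5):
    """Alert when the production mean drifts more than `threshold`
    training standard deviations away from the training mean."""
    mu, sigma = mean_std(training)
    prod_mu, _ = mean_std(production)
    shift = abs(prod_mu - mu) / sigma
    return shift > threshold, shift

# Hypothetical transaction amounts: training baseline vs. production data
# whose average has crept upward over time.
training = [random.gauss(100, 20) for _ in range(1000)]
production = [random.gauss(130, 20) for _ in range(1000)]

alert, shift = drift_alert(training, production)
print(f"shift = {shift:.2f} training std devs, alert = {alert}")
```

When the alert fires, that is the cue for an analyst to pull up the explanations for recent decisions, such as a burst of false positive fraud flags, and work out whether the model needs retraining.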