Shou-De Lin
Jan 26, 2021

Explainable AI: Turning the black box into a glass box

As AI's influence over business decisions grows, so too does the need for oversight of how it makes those decisions, which has led to the development of explainable AI. Appier's chief machine-learning scientist explains the factors behind this new trend.

For many people, artificial intelligence (AI) is an unexplainable and uninterpretable black box that takes in millions, or even billions, of inputs and delivers an answer that we are supposed to trust and act on. As the impact of those actions could be wide-reaching, there is a movement towards explainable AI, or XAI.

AI models deliver their outputs by learning from training data using algorithms. When new information is fed into an AI model, it uses that information to infer a response. In marketing, when a customer visits an online store, data about that customer, such as previous purchases, browsing history, age, location and other demographic information, will be used to make recommendations.

If marketers then want to segment different groups of customers in order to target them with different offers, they can use AI. By understanding how and why an AI model categorises customers, they can design and implement better marketing strategies for each of those groups.

For example, a marketer can use AI to segment customers into three different groups—guaranteed buyers, hesitant buyers and window shoppers—and then decide a different action for each group. Guaranteed buyers could be directed towards upselling while hesitant buyers might be sent a discount code or voucher to increase the likelihood of purchase. For those who are definitely not buying, the marketer doesn’t have to do anything.
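
To make this concrete, here is a minimal sketch in Python of how such a workflow might look, assuming scikit-learn, a handful of made-up behavioural features and tiny illustrative data. It is not Appier's implementation, just an illustration of mapping predicted segments to follow-up actions.

    # A minimal sketch: a hypothetical classifier assigns one of the three
    # segments described above, and each segment maps to a follow-up action.
    # The feature names, data and model choice are illustrative only.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features: [visits_last_30_days, items_in_cart, past_purchases]
    X_train = [[12, 2, 5], [3, 1, 0], [1, 0, 0], [15, 3, 8], [4, 0, 1], [2, 0, 0]]
    y_train = ["guaranteed", "hesitant", "window_shopper",
               "guaranteed", "hesitant", "window_shopper"]
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Map each predicted segment to the action suggested above.
    actions = {
        "guaranteed": "show upsell recommendations",
        "hesitant": "send a discount code or voucher",
        "window_shopper": "take no action",
    }

    new_customer = [[5, 1, 1]]                  # a hypothetical new visitor
    segment = model.predict(new_customer)[0]
    print(segment, "->", actions[segment])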

By understanding how the AI performs the market segmentation, marketers can devise appropriate strategies for each group.

Why explainable AI matters to marketers

How much explainability an AI model needs depends on what marketers want to understand. While they might not be concerned with the mechanics of the algorithms being used, they may want to know which features, or inputs to the system, influence the model's suggestions so that they can plan follow-up actions.

For example, a customer can be predicted as ‘hesitant’ by AI based on different signals. It may be because the customer has hovered over an item many times, or because they have put an item in the shopping cart without checking out. The actions for those two scenarios may be different. For the former, marketers can simply recommend a set of items similar to the one the customer is looking at; for the latter, marketers might want to offer limited-time free shipping to trigger the final purchase.

Marketers need to know the key factors driving the model’s decision. Understanding the algorithms may be very challenging, but knowing which factors are driving decisions makes the model more interpretable.
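
As an illustration of what ‘knowing which factors drive decisions’ can look like in practice, here is a minimal, self-contained sketch using scikit-learn's feature importances on entirely hypothetical data. Real systems would use richer techniques, but the output, a ranked list of factors, is the kind of explanation marketers can act on.

    # A minimal sketch: train a small decision tree on hypothetical data and
    # read back which inputs drove its splits, without inspecting the algorithm.
    from sklearn.tree import DecisionTreeClassifier

    feature_names = ["visits_last_30_days", "items_in_cart", "past_purchases"]
    X = [[12, 2, 5], [3, 1, 0], [1, 0, 0], [15, 3, 8], [4, 0, 1], [2, 0, 0]]
    y = ["guaranteed", "hesitant", "window_shopper",
         "guaranteed", "hesitant", "window_shopper"]

    tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    # Rank the inputs by how much they influenced the tree's decisions.
    ranked = sorted(zip(feature_names, tree.feature_importances_),
                    key=lambda pair: -pair[1])
    for name, score in ranked:
        print(f"{name}: {score:.2f}")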

When we talk about explainable AI, it does not have to mean understanding the intricacies of the entire model; it can mean understanding which factors influence the model's output. There is a significant difference between understanding how a model works and understanding why it gives a particular result.

XAI allows the owner or user of a system to explain the AI model’s decision-making process, understand the strengths and weaknesses of the process, and give an indication of how the system will continue to behave.

In image recognition, telling the AI model to focus on specific areas of a photo can drive different results. By understanding what parts of the image are most likely to drive the model to deliver a particular outcome or decision, users can better explain and interpret the actions of the AI model.
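
One common way to do this is occlusion analysis: cover up one region of the image at a time and measure how much the model's confidence drops. The sketch below illustrates the idea with a stand-in classify() function; the function, image and numbers are all hypothetical.

    # A minimal occlusion-sensitivity sketch. Regions whose masking causes a
    # large drop in confidence are the ones most likely driving the decision.
    import numpy as np

    def classify(image):
        # Hypothetical classifier: returns a 'confidence' score that simply
        # responds to brightness in the top-left quadrant of the image.
        return float(image[:16, :16].mean())

    image = np.zeros((32, 32))
    image[:16, :16] = 1.0                       # toy image with a bright "object"

    baseline = classify(image)
    patch = 8
    heatmap = np.zeros((32 // patch, 32 // patch))

    for i in range(0, 32, patch):
        for j in range(0, 32, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # black out one patch
            heatmap[i // patch, j // patch] = baseline - classify(occluded)

    print(np.round(heatmap, 2))                 # large values = influential regions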

As well as aiding decision making around strategies, XAI allows marketers and other users of AI models to explain results to management and other stakeholders. This can be useful when justifying the outputs of a model and why a particular strategy is being used.

It is important to understand that some AI models are easier to explain than others. Researchers have noted that algorithms such as decision trees and Bayesian classifiers are more interpretable than deep learning models such as those used in image recognition and natural language processing. The trade-off here is between accuracy and explainability: as models become more complex, it becomes harder for non-experts to explain how they work, though they can usually achieve better performance.

Explainable AI and bias in AI models

Bias exists in all AI models because the training data can contain bias. Algorithms can also be designed with bias, either intentionally or accidentally. However, not all AI bias is negative.

Bias can be leveraged to make accurate predictions, but it needs to be used carefully where it applies to sensitive areas such as race and gender.

Explainable AI can help us to distinguish whether a model is using good bias or bad bias to make a decision. It also tells us which factors are more important when the model makes the decision. XAI doesn’t detect bias, but it allows us to understand why the model makes that particular decision.

Explainable AI also allows us to understand whether bias comes from the data that the AI model is trained with or how different labels are weighted by the model.
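
A simple first check along these lines is to compare, for each group defined by a sensitive attribute, the rate of positive outcomes in the training data with the rate the model predicts. The sketch below uses synthetic data and a basic scikit-learn model purely for illustration; if the model's rates mirror skewed label rates, the bias is likely coming from the data.

    # A minimal sketch: compare label rates and predicted rates per group to
    # see whether an outcome gap traces back to the training data. The data,
    # the 'group' attribute and the model are all synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=500)              # sensitive attribute: 0 or 1
    feature = rng.normal(size=500) + 0.8 * group      # feature correlated with group
    y = (feature + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

    model = LogisticRegression().fit(feature.reshape(-1, 1), y)
    pred = model.predict(feature.reshape(-1, 1))

    for g in (0, 1):
        mask = group == g
        print(f"group {g}: label rate {y[mask].mean():.2f}, "
              f"predicted rate {pred[mask].mean():.2f}")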

A matter of trust

For many people, AI appears to be a black box where data enters, and an output or action appears as the result of an opaque collection of algorithms. That can lead to distrust when the model delivers a result that may, at first, seem counter-intuitive or even wrong.

XAI makes these models more understandable and reasonable to humans, so everybody can look at the result and determine whether they want to use it or not. XAI brings humans into the decision-making loop and allows people to be the last step before a final decision is made. It makes the entire process easier to trust.

In the future, we can expect AI models to provide explanations of how they came to their decisions. Those decisions can then be judged, increasing the accountability of the developers creating the models. Once the decisions made by AI models can be traced, rather than hidden in a black box, we will see systems that explain how they work.

Creating more explainable AI models

Academic researchers have published several papers proposing methods to make AI models easier to explain.

Some models are easier to explain than others; deep learning models, for example, can be particularly difficult. To explain them, some research proposes training proxy models to mimic the behaviour of the deep learning models. Because these proxy models are simpler, they are more explainable.
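
Here is a minimal sketch of that idea, often called a global surrogate model, using scikit-learn on synthetic data: a neural network stands in for the hard-to-read model, and a shallow decision tree is fitted to its predictions so that its simple rules approximate the network's behaviour. Everything here is illustrative rather than a description of any particular research system.

    # A minimal sketch of a proxy (surrogate) model: fit a shallow decision
    # tree to the *predictions* of a more complex model, then read the tree's
    # rules as an approximate explanation. Data here is synthetic.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = ((X[:, 0] + X[:, 1] ** 2) > 1).astype(int)

    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                              random_state=0).fit(X, y)

    # Fit the proxy to what the black box predicts, not to the original labels.
    proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
    proxy.fit(X, black_box.predict(X))

    agreement = (proxy.predict(X) == black_box.predict(X)).mean()
    print(f"proxy matches the black box on {agreement:.0%} of inputs")
    print(export_text(proxy, feature_names=["feature_0", "feature_1", "feature_2"]))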

Another approach is to build models that are more explainable by design. For example, using fewer parameters in a neural network may deliver similar accuracy with less complexity, making the model easier to explain.

With more and more businesses deploying AI, it is critical to understand how these models work so that decisions can be understood, any unwanted bias can be recognised, and systems can be trusted. XAI takes the black box of AI and machine learning and makes it into a glass box that we can easily see into.


Dr Shou-De Lin is chief machine learning scientist at Appier.

Source: Campaign Asia