What is explainable AI, and why is transparency so important for machine-learning solutions?

The use of AI technology in business and government has been quietly evolving. Machine-learning models have become experts at identifying patterns in oceans of data. AI is being used alongside human decision-making with the goal of producing better outcomes, with people managing how it operates and gradually learning to trust it as a viable partner.

So far, so good. Except there may be one critical trade-off: a lack of transparency in decisions that are being made by AI tools.

Imagine that…

  • You were denied admission to your first-choice college based on an algorithm used by the school staff.
  • You work on a production line, and the newly “optimized” AI-generated instructions for one process don’t seem safe.
  • Your doctor’s AI software has drawn a conclusion that seems wrong, given what you know about your body.
  • On Black Friday, your ecommerce customers are not doing what the AI software predicted they’d do in droves. You’re losing sales and time is running out.


The need for (clear) explanations

In any of these hypothetical scenarios involving AI model outputs, you’d probably be rattled. You’d want to know why you failed to impress the admissions-committee robot, whether your new workflow could cause an accident, whether you need a second opinion, or how you can fix your website.

You’d want to know how, exactly, these supposedly intelligent machine-learning systems arrived at their predictions. Smart in some ways, sure (after all, it’s AI), but not beyond question, as far as you’re concerned. You’d feel that you deserve, even have the right, to know the system’s precise learning methods and decision-making flows.

You’d also want to be able to trust the model’s performance without reservation. To accept its conclusions, you’d need to know how the nonhuman “data scientists” had applied their training data. You’d want a human to intervene if you suspected inaccurate explanations or foul play with the data set. Yet you’d have no way to prove that the robot had reached for the wrong item in its toolkit.

Beyond the individual

And it wouldn’t be just you worrying about the AI technology. Transparency about the important features of the work would be imperative for everyone involved in the process. If you were one of the internal folks, such as the head admissions officer or the healthcare team supervisor, you’d expect complete visibility into the decision-making. You’d feel the weight of the decision’s impact, not just because you’re afraid of lawsuits but because your company prioritizes fairness and ethical standards. And if you were a regulator, you might need to demand a complete, crystal-clear explanation.

So you can see why the ability to detail AI explainability methods is critical. Being able to interpret a machine-learning model increases trust in the model, which is essential in high-stakes scenarios such as financial, healthcare, and life-and-death decisions.

What is explainable AI?

Explainable AI — or explainable artificial intelligence (XAI) — means basically what it sounds like: illumination of what’s going on in the “black box” that often surrounds AI’s inner workings. Explainable machine learning is accountable and can “show its work.”

“Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction,” states a 2022 McKinsey & Company report.

IBM defines explainable AI as “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.”

Violet Turri of the Software Engineering Institute at Carnegie Mellon notes that as of 2022, exact definitions still hadn’t really been adopted: “Explainability aims to answer stakeholder questions about the decision-making processes of AI systems.”

It’s hard to imagine that algorithms could present the rationales behind their decisions, let alone point out the strengths and weaknesses of their choices. But that’s exactly what explainability comes down to: intelligent technology presenting information that people can easily comprehend, with “breadcrumbs” leading back to how the decisions were made.
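To make the idea of a model that can “show its work” concrete, here’s a minimal sketch. It is a toy, not any vendor’s actual system: the admissions theme, feature names, weights, and threshold are all invented for illustration. A simple linear model is inherently explainable because its prediction decomposes into per-feature contributions, which serve as the breadcrumbs back to the decision.

```python
# Illustrative only: an invented linear admissions-scoring model whose
# prediction can be decomposed into per-feature contributions.
WEIGHTS = {"gpa": 0.5, "test_score": 0.3, "essay": 0.2}  # assumed example weights
THRESHOLD = 0.75  # assumed admission cutoff

def score_with_explanation(applicant):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "admitted": total >= THRESHOLD,
        "score": round(total, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

result = score_with_explanation({"gpa": 0.9, "test_score": 0.8, "essay": 0.6})
# result["contributions"] shows exactly which features drove the outcome.
```

A rejected applicant could be shown, feature by feature, what pulled the score below the cutoff; deep neural networks offer no such direct decomposition, which is the heart of the explainability problem.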

AI can’t go rogue (can it?)

So you see the problem: the possibility that AI could do whatever it wants in an opaque system, and people could have no way of following its logic or making sense of its conclusions.

Humans need the ability to understand how an ML model has operated when it spits out a conclusion that may have far-reaching ramifications. People impacted by a decision — not to mention government agencies (e.g., DARPA) — typically want to know how conclusions were reached. And the same is true for internal stakeholders, such as insurance company salespeople who rely on AI models’ suggestions to determine coverage amounts, and who need both to get customer buy-in and to preserve their corporate reputation.

So business and government, not to mention individual consumers, agree that AI systems must clearly convey how they’ve reached their decisions. Unfortunately, that’s not always achievable, given the complexity of the AI process, in which subsequent decisions are made on the backs of initial ones.

Can they explain?

Perhaps if robots had a say, they’d tell us we’re not smart enough to follow the tricky maneuvers they complete, and then ask why we need to know, anyway. But they’re not in charge, at least not yet. (Hmm, maybe we’re closer than we realize to the day they start secretly deciding things, ignoring our pleas for transparency.)

So there must be AI-process explanations. People’s comfort levels, companies’ reputations, even humanity’s ultimate survival could well depend on this. In short, businesspeople, their customers and partners, and oversight agencies must all be able to audit and comprehend every aspect of the AI decision-making process.
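The auditability described above can be sketched in a few lines. This is a hypothetical design, not a standard or a real product: the class, field names, and the credit-decision example are all invented. The point is simply that every decision records its inputs, model version, and rationale, so a human reviewer can later trace how a conclusion was reached.

```python
import json
import time

class AuditLog:
    """Toy audit trail: one entry per AI decision, exportable for review."""

    def __init__(self):
        self.entries = []

    def record(self, model_version, inputs, decision, rationale):
        # Capture everything a reviewer would need to retrace the decision.
        self.entries.append({
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
        })

    def export(self):
        # Human-readable JSON for auditors and regulators.
        return json.dumps(self.entries, indent=2)

log = AuditLog()
log.record("credit-model-v3", {"income": 52000, "debt_ratio": 0.41},
           "declined", "debt_ratio above 0.40 threshold")
```

Real audit infrastructure would add immutability, access control, and retention policies; the sketch only shows the minimum information a traceable decision needs to carry.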

Furthermore, those explanations must be presented in our language. They must make sense logically, and (we’d hope) make us humans feel that we understand — and can accept, even if we’re disappointed by — what was decided.

Why transparency is critical for machine-learning solutions

What if AI electronic paper trails aren’t intelligible to humans?

In short, chaos could ensue on various levels. In 2017, Stephen Hawking warned that “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” Along those lines, an inability to identify or document an AI program’s “thought process” could be devastating for consumers, for companies, and for our overall sense that the world is a just and fair place.

So there’s plenty of motivation among all affected groups to ensure transparency.

Progress so far

There’s no single global standard for AI transparency at the moment, but there is broad consensus that AI operations must be explained. And as legal and other concerns grow, the world of XAI will be expected to adapt to changing requirements.

Companies in some industries that are utilizing AI have been subject to regulatory requirements for some time. Europe has put in place the General Data Protection Regulation (GDPR, 2016), which requires that companies give consumers explanations for how AI has made decisions that affect them.

While the United States has yet to follow Europe and adopt this type of sweeping law, the California Consumer Privacy Act (CCPA, 2020) expresses the same sentiment: users have a right to know what inferences AI systems have made about them, as well as what data was used to make those inferences.

In 2021, Congress identified explainability as integral to promoting trust and transparency in AI systems.

Lower-level initiatives

Agencies and companies have also committed to this objective, developing their own versions of explainable-AI principles. The U.S. Department of Defense has been working on creating a “robust responsible AI ecosystem,” including publishing its own principles (2020). And the Department of Health and Human Services aims to “promote ethical, trustworthy AI use and development.”

Various organizations, including Google, have also taken it upon themselves to develop responsible AI (RAI) principles. While not all companies are prioritizing these standards, they’re a good foundation for growth.

Explainability has its rewards

Aside from the peace of mind a clear explanation can supply, there are financial benefits of making AI decisions transparent.

McKinsey has found that better explainability has led to better adoption of AI, with best practices and tools developing alongside the technology. It also learned that when companies make digital trust a priority for customers, such as by incorporating explainability into their algorithmic models, they are more likely to grow their annual revenue by 10 percent or more.

IBM has equally impressive proof: users of its XAI platform realized a 15–30 percent improvement in model accuracy and $4.1–15.6 million more in profit.

Challenges of XAI

Complex models can cloud transparency

As AI capabilities have progressed, they’ve been applied to solving increasingly difficult problems.

As that’s happened, questions about how it’s happening have proliferated: Why did the AI software model make that decision? Do the human teammates understand how the model operates, and do they have a solid grasp of the data used to train it? What’s the explanation for each decision the AI makes, not just its initial behavior?

“The more sophisticated an AI system becomes, the harder it is to pinpoint exactly how it derived a particular insight,” says McKinsey. “Disentangling a first-order insight and explaining how the AI went from A to B might be relatively easy. But as AI engines interpolate and reinterpolate data, the insight audit trail becomes harder to follow.”

The problem is that many AI models are opaque: it’s difficult or impossible to see what’s going on behind the curtain. They’re created without transparency, either intentionally (e.g., to protect corporate privacy), unintentionally because the technical details are hard for nontechnical people to comprehend, or unintentionally “from the characteristics of machine learning algorithms and the scale required to apply them usefully,” says one researcher.

In addition, AI encompasses a variety of approaches, including convolutional neural networks, recurrent neural networks, transfer learning, and deep learning. Because there are so many ways for a model to operate, getting to the root of an explainability problem can be even trickier.

Opacity is one thing; lip service to transparency that still isn’t comprehensible by humans is another. What if the explanation is too technical? How can you build trust in a system if your end users can’t grasp a concept, such as deep neural networks, that the explanation relies on? What if you can’t convey to them in a coherent way what happened with the data?

Lack of consensus on AI explainability definitions

The field of AI is still emerging, so there’s not a lot of practical know-how yet on choosing, implementing, and testing AI explanations. Experts can’t even agree on how to define basic terms. For instance, is explainable AI the same concept as interpretable AI?

Some researchers and experts say yes, and use the terms interchangeably. Others passionately disagree. Some feel it’s imperative to build models with built-in transparency, so that people can directly interpret how decisions are formulated (interpretable machine learning). Others are content to treat the model as a black box and generate explanations for its decisions after the fact (post-hoc explanations). These are fundamentally different approaches to accountability.
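One widely used post-hoc technique is permutation importance: treat the model as a black box and measure how much its accuracy drops when each input feature is shuffled. The sketch below is illustrative, with an invented stand-in model; all function names are ours, not from any library.

```python
import random

def black_box(row):
    # Stand-in for an opaque model: secretly depends only on feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    column = [r[feature] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature] + [v] + r[feature + 1:]
                for r, v in zip(rows, column)]
    return accuracy(rows, labels) - accuracy(shuffled, labels)

rng = random.Random(42)
rows = [[rng.random(), rng.random()] for _ in range(500)]
labels = [black_box(r) for r in rows]

drop0 = permutation_importance(rows, labels, 0)  # large drop: feature matters
drop1 = permutation_importance(rows, labels, 1)  # no drop: feature is ignored
```

Note what this does and doesn’t deliver: it reveals which inputs the black box relies on, without ever opening the box — exactly the trade-off that separates post-hoc explanation from intrinsically interpretable models.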

This lack of consensus on concepts makes for awkward discourse among the various groups of academics and businesspeople using AI across industries, and it inhibits collective progress.

The academia bubble

The AI community’s issues are still largely debated among academics rather than in the mainstream. One result is that solid understanding of, and faith in, AI hasn’t taken hold among the general public, so people may inherently mistrust it to make decisions that affect them.

Lacking in social accountability

Some AI advocates are highlighting the human experience surrounding AI decisions; this is known as “social transparency.” They challenge the conventional premise that if humans can get inside the black box and make sense of what’s discovered, all will be well. But “not everything that matters lies inside the black-box of AI,” points out computer scientist Upol Ehsan. “Critical answers can lie outside it. Because that’s where the humans are.”

Bias in AI models

People currently working in AI are not a particularly diverse group; the field is dominated by white men. As a result, diversity proponents argue, there’s inherent bias in the ways interpretable models are created and operate. That may well be true, but without a diverse workforce building machine-learning models, how do you address it? Whether a given model’s decisions are biased — and if so, what to do about it — remain persistent concerns.

Possible mistakes

One reason explainable models are so worthwhile is that AI models sometimes make algorithm-related mistakes, which can lead to everything from minor misunderstandings to stock-market crashes and other catastrophes. “We view computers as the end-all-be-all when it comes to being correct, but AI is still really just as smart as the humans who programmed it,” points out writer Mike Thomas.

Human reactions and business impacts

If a data-science-based decision is universally unpopular with, and incomprehensible to, those affected by it, you could face major pushback. Even a single disgruntled person complaining about perceived unfairness — in the financial services industry, for instance — can damage a company’s reputation. This scenario is the nightmare of company execs.

The public may not understand how a model-interpretability method works, but if the model is responsibly created and supervised, and reliably supplies accurate results, that’s a step toward acceptance of this expanding technology. As reliance on AI algorithms to make significant business, medical, and other choices continues to increase, being able to explain and audit the decisions made by AI applications will help build trust and acceptance of AI.

The benefits of explainable search data

Search is one area where strong AI transparency is integral to success. Complex ranking formulas (e.g., ones that mix attribute weights with the proximity of words) essentially constitute an opaque model. If you’re aiming to optimize your website, you may wonder why your search results appear in a particular order, and whether the ranking can be tested, refined, and adjusted to achieve the relevance you want.

With Algolia, you don’t have to take an algorithm’s behavior on faith or puzzle over model explainability. Our search customers get a white-box view of how search relevance is computed: you can see how results are ranked based on personalization and relevance factors, and then manually adjust for real-world needs.
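To make the white-box idea concrete, here’s a toy ranking sketch. To be clear, this is not Algolia’s actual algorithm or API; the criteria names and weights are invented. The point is that each result carries a per-criterion score breakdown, so the ordering can be inspected and tuned rather than taken on faith.

```python
# Invented example criteria and weights -- not Algolia's real ranking formula.
CRITERIA = {"text_match": 0.6, "popularity": 0.3, "personalization": 0.1}

def rank(results):
    """Rank results by weighted score, keeping the per-criterion breakdown."""
    scored = []
    for r in results:
        breakdown = {c: w * r[c] for c, w in CRITERIA.items()}
        scored.append({"id": r["id"],
                       "score": sum(breakdown.values()),
                       "breakdown": breakdown})
    return sorted(scored, key=lambda s: s["score"], reverse=True)

hits = rank([
    {"id": "a", "text_match": 0.9, "popularity": 0.2, "personalization": 0.5},
    {"id": "b", "text_match": 0.7, "popularity": 0.9, "personalization": 0.1},
])
# Each hit's "breakdown" field explains why it ranked where it did.
```

Because the breakdown is visible, a merchandiser who disagrees with an ordering can see which criterion caused it and adjust the weights, rather than guessing at an opaque score.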

Want to tap the power of AI to create excellent search for your users with true transparency for your teams? Check out our search and discovery API and get in touch with our experts now.

About the author

Vincent Caruana

Senior Digital Marketing Manager, SEO
