The use of AI technology in business and government has been quietly evolving. Machine-learning models have become experts at identifying patterns in oceans of data. AI is being utilized alongside human decision-making with the goal of creating better outcomes, in part by having people manage how it operates and conditioning them to trust it as a viable partner.

So far, so good. Except there may be one critical trade-off: a lack of transparency in decisions that are being made by AI tools.

Imagine that…

  • You were denied admission to your first-choice college based on an algorithm used by the school staff.
  • You work on a production line, and the newly “optimized” AI-generated instructions for one process don’t seem safe. 
  • Your doctor’s AI software has drawn a conclusion that seems wrong, given what you know about your body.
  • On Black Friday, your ecommerce customers are not doing what the AI software predicted they’d do in droves. You’re losing sales and time is running out.

The need for (clear) explanations

In any of these hypothetical scenarios, all driven by AI model outputs, you’d probably be rattled. You’d want to know why you failed to impress the admissions-committee robot, whether your new workflow could cause an accident, whether you need a second opinion, and how you can fix your website.

You’d want to know exactly how the predictions were made by machine-learning systems that are supposed to be so unquestionably intelligent. Smart in some ways, sure (after all, it’s AI), but hardly beyond question as far as you’re concerned. You’d feel you deserve, even have the right, to know the system’s precise learning methods and decision-making flows.

You’d also want to be able to trust the model’s performance implicitly. To be OK with its conclusions, you’d feel compelled to know how these nonhuman data scientists applied their training data. You’d want a human to intervene if you suspected problems with explanation accuracy or foul play with the data set. And yet you’d have no way to prove that the robot had reached for the wrong item in its toolkit.

Beyond the individual

And it wouldn’t be just you worrying about the AI technology. Transparency about the important features driving each decision would be imperative for everyone involved in the process. If you were one of the internal folks, such as the head admissions officer or the healthcare team supervisor, you’d expect complete visibility into the decision-making. You’d feel the weight of the decision’s impact, not just because you’re afraid of lawsuits but because your company prioritizes fairness and ethical standards. And if you were a regulator, you might need to demand a complete, crystal-clear explanation.

So you can see why AI explainability is critical. Being able to interpret a machine-learning model increases trust in it, which is essential when its output feeds into financial, health care, and life-and-death decisions.

What is explainable AI?

Explainable AI — or explainable artificial intelligence (XAI) — means basically what it sounds like: illumination of what’s going on in the “black box” that often surrounds AI’s inner workings. Explainable machine learning is accountable and can “show its work.” 

“Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction,” states a 2022 McKinsey & Company report.

IBM defines explainable AI as “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.”

Violet Turri of the Software Engineering Institute at Carnegie Mellon notes that as of 2022, exact definitions still hadn’t really been settled on: “Explainability aims to answer stakeholder questions about the decision-making processes of AI systems.”

It’s hard to imagine algorithms presenting the rationales behind their decisions, let alone pointing out the strengths or weaknesses of their choices. But that’s exactly what it comes down to: intelligent technology presenting information that people can easily comprehend, with “breadcrumbs” leading back to how the decisions were made.

AI can’t go rogue (can it?)

So you see the problem: the possibility that AI could do whatever it wants in an opaque system, and people could have no way of following its logic or making sense of its conclusions.

Humans need the ability to understand how an ML model has operated when it spits out a conclusion that may have far-reaching ramifications. People impacted by a decision — not to mention government agencies (e.g., DARPA) — typically want to know how conclusions were made. And this is true for the internal stakeholders, such as insurance company salespeople who are relying on what AI models suggest to determine coverage amounts, and who need to both get customer buy-in and preserve their corporate reputation.

So business and government, not to mention individual consumers, agree that AI systems must clearly convey how they reach their decisions. Unfortunately, that’s not always achievable, given the complexity of the AI process, with subsequent decisions being made on the backs of initial ones.

Can they explain?

Perhaps if robots had a say, they’d tell us we’re not smart enough to follow the tricky maneuvers they complete, and then ask why we need to know, anyway. But they’re not in charge, at least not yet. (Hmm; maybe we’re closer than we realize to the day they start secretly deciding things, ignoring our pleas for transparency.)

So there must be AI-process explanations. People’s comfort levels, companies’ reputations, even humanity’s ultimate survival could well depend on this. In short, businesspeople, their customers and partners, and oversight agencies must all be able to audit and comprehend every aspect of the AI decision-making process.

Furthermore, those explanations must be presented in our language. They must make sense logically, and (we’d hope) leave us humans feeling that we understand, and are OK with, what was decided, even if we’re disappointed by it.

Why transparency is critical for machine-learning solutions

What if AI electronic paper trails aren’t intelligible to humans?

In short, chaos could ensue on various levels. In 2017, Stephen Hawking warned that “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” Along those lines, an inability to identify or document an AI program’s “thought process” could be devastating for consumers, companies, and an overall sense that the world is a just and fair place.

So there’s plenty of motivation among all affected groups to ensure transparency.

Progress so far

There’s no single global standard for AI transparency at the moment, but there is broad consensus that AI operations must be explainable. And as legal and other concerns grow, the world of XAI will be expected to adapt to any changing requirements.

Companies in some industries that are utilizing AI have been subject to regulatory requirements for some time. Europe has put in place the General Data Protection Regulation (GDPR, 2016), which requires that companies give consumers explanations for how AI has made decisions that affect them.

While the United States has yet to follow Europe in adopting this type of sweeping law, the California Consumer Privacy Act (CCPA, 2020) expresses the same sentiment: users have a right to know what inferences AI systems have made about them, as well as what data was used to make those inferences.

In 2021, Congress identified explainability as integral to promoting trust and transparency in AI systems.

Lower-level initiatives

Agencies and companies have also committed to this objective and developed their own versions of explainable AI principles. The U.S. Department of Defense has been working on creating a “robust responsible AI ecosystem,” adopting a set of AI ethical principles in 2020. And the Department of Health and Human Services aims to “promote ethical, trustworthy AI use and development.”

Various organizations, including Google, have also taken it upon themselves to develop responsible AI (RAI) principles. While not all companies are prioritizing these standards, they’re a good foundation for growth.

Explainability has its rewards

Aside from the peace of mind a clear explanation can supply, there are financial benefits of making AI decisions transparent.

McKinsey has found that better explainability leads to better adoption of AI, and that best practices and tools have developed along with the technology. It also found that companies that make digital trust a priority for customers, such as by building explainability into their algorithmic models, are more likely to grow their annual revenue by 10 percent or more.

IBM reports equally impressive results: users of its XAI platform realized a 15–30 percent improvement in model accuracy and $4.1–15.6 million more in profit.

Challenges of XAI

Complex models can cloud transparency

As AI capabilities have progressed, they’ve been applied to increasingly difficult problems.

As that’s happened, questions about how it’s happening have proliferated: Why did the AI software make that decision? Do its human teammates understand how the model operates and the data that was used to train it? What’s the explanation for each decision made by the AI, not just the initial model behavior?

“The more sophisticated an AI system becomes, the harder it is to pinpoint exactly how it derived a particular insight,” says McKinsey. “… Disentangling a first-order insight and explaining how the AI went from A to B might be relatively easy. But as AI engines interpolate and reinterpolate data, the insight audit trail becomes harder to follow.”

The problem is that many AI models are opaque: it’s difficult or impossible to see what’s going on behind the curtain. They’re built without transparency, whether intentionally (e.g., to protect corporate privacy), unintentionally because the technical detail is hard for nontechnical people to comprehend, or unintentionally “from the characteristics of machine learning algorithms and the scale required to apply them usefully,” as one researcher puts it.

In addition, AI encompasses a variety of approaches, including convolutional neural networks, recurrent neural networks, transfer learning, and other deep learning techniques. With so many different architectures in play, getting to the root of an explainability problem can be even trickier.

Opacity is one thing; lip service to transparency that still isn’t comprehensible to humans could be another. What if the explanation is too technical? How can you ensure trust in a system if understanding it requires your end users to grasp a concept like deep neural networks? What if you can’t convey to them in a coherent way what went on with the data?

Lack of consensus on AI explainability definitions

The field of AI is still emerging, so there’s not a lot of practical know-how yet on choosing, implementing, and testing AI explanations. Experts can’t even agree on how to define basic terms. For instance, is explainable AI the same concept as interpretable AI?

Some researchers and experts say yes, and use these terms interchangeably. Others passionately disagree. Some feel it’s imperative to create models with built-in transparency, so that their decisions can be readily interpreted by people; that approach, interpretable machine learning, is different from forgoing built-in accountability and trying to explain what a black-box model did after the fact (post-hoc explanations).
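To make that distinction concrete, here’s a minimal sketch, assuming Python and scikit-learn (tools not named in this article): a shallow decision tree whose rules can be read directly, next to a black-box ensemble that only gets a post-hoc explanation via permutation importance.

```python
# Sketch only: "interpretable by design" vs. "post-hoc explanation",
# assuming Python with scikit-learn (not a tool prescribed by this article).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Interpretable by design: a shallow tree whose if/then rules are the explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# 2) Post-hoc explanation: a black-box forest explained after training by
#    measuring how much shuffling each feature degrades held-out accuracy.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in top[:5]:
    print(f"{name}: {importance:.3f}")
```

The tree’s rules are the explanation; the forest needs a separate procedure bolted on after training, which is exactly the trade-off the two camps argue about.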

This lack of consensus on concepts makes for awkward discourse among the various groups of academics and businesspeople using AI across different industries, and it inhibits collective progress.

The academia bubble

The AI community’s issues are still largely being debated by academics rather than having gone mainstream in the civilian world. Because a solid understanding of, and faith in, AI hasn’t yet taken hold, people may inherently mistrust it to make decisions that affect them.

Lacking in social accountability

Some AI advocates are highlighting the human experience surrounding AI decisions; this is known as “social transparency.” They reject the conventional premise that if humans can get inside the black box and make sense of what’s discovered, all will be well. But “not everything that matters lies inside the black-box of AI,” points out computer scientist Upol Ehsan. “Critical answers can lie outside it. Because that’s where the humans are.”

Bias in AI models

People currently working in AI are not a particularly diverse group (the field is dominated by white men). As a result, diversity proponents argue, there’s inherent bias in the ways interpretable models are created and operate. That’s hard to dismiss, but without a more diverse population of people working with machine-learning models, how do you address it? Whether there’s bias in a given model’s decisions, and if so how to correct it, remains a persistent concern.

Possible mistakes

One reason explainable models are so worthwhile is that AI models sometimes make algorithm-related mistakes, which can lead to everything from minor misunderstandings to stock-market crashes and other catastrophes. “We view computers as the end-all-be-all when it comes to being correct, but AI is still really just as smart as the humans who programmed it,” points out writer Mike Thomas.

Human reactions and business impacts

If a data-science-based decision is universally unpopular with, and also incomprehensible to, those affected by it, you could face major pushback. Even a single disgruntled person complaining about perceived unfairness, in the financial services industry for instance, can damage a company’s reputation. That scenario is the nightmare of company execs.

The public may not understand how a model interpretability method works, but if the model is responsibly created and supervised, and reliably supplies accurate results, that’s a step toward acceptance of this expanding technology. As reliance on AI algorithms to make significant business, medical, and other decisions grows across use cases, being able to explain and audit those decisions will help build trust in AI and acceptance of its work.

The benefits of explainable search data

Search data is one area where strong AI transparency is integral to success. Complex ranking formulas (e.g., ones that mix attribute weights with the proximity between words) essentially constitute an opaque model. If you’re aiming for website optimization, you may wonder why your search results appear in a particular order, and whether they can be tested, refined, and adjusted to get the relevance you want.

With Algolia, you don’t have to take an algorithm’s interpretability on faith or worry about how the system works. Our customers get white-box transparency into how search relevance is computed: you can see how results are ranked based on relevance and personalization factors, and then manually adjust for real-world needs.
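For a sense of what that white-box view can look like in practice, here’s a rough sketch. It assumes the algoliasearch Python client, placeholder credentials, a hypothetical “products” index, and a hypothetical popularity attribute, none of which are specified in this article; adapt it to your own setup.

```python
# Sketch only: assumes the algoliasearch Python client (v2/v3-style API),
# placeholder credentials, a "products" index, and a "popularity" attribute.
from algoliasearch.search_client import SearchClient

client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
index = client.init_index("products")

# The ranking formula is plain configuration, not a black box: read it back.
settings = index.get_settings()
print(settings.get("ranking"), settings.get("customRanking"))

# Adjust it explicitly, e.g., break relevance ties on a business metric you choose.
index.set_settings({"customRanking": ["desc(popularity)"]})

# Ask the engine to show its work: per-hit ranking details explain why each
# result landed where it did (typo count, word proximity, matched attribute, etc.).
results = index.search("running shoes", {"getRankingInfo": True})
for hit in results["hits"]:
    print(hit["objectID"], hit["_rankingInfo"])
```

Because every criterion in the formula is visible and editable, a merchandiser can answer “why is this result third?” without reverse-engineering the engine.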

Want to tap the power of AI to create excellent search for your users with true transparency for your teams? Check out our search and discovery API and get in touch with our experts now.

About the author
Vincent Caruana

Sr. SEO Web Digital Marketing Manager
