
The Fellowship of AI: why great AI apps need both Galadriel and Gandalf

At Algolia DevCon 2025, we walked on stage with a simple claim: AI products only succeed when technical ambition and human-centered design pull each other into balance.

Paul-Louis came to represent the “Gandalf” mindset: ML researchers obsessed with new capabilities, speed, automation, and technology that works like magic, fading into the background.

Meanwhile, Maria took on the “Galadriel” role: advocating for users, their context, their safety... bringing the subtle wisdom needed to keep powerful AI systems from running wild.

This post is our reflection on the three battles we fought on stage and the shared conclusion we reached. If you're building AI apps, you may be fighting the same battles. Let us know if this helps you succeed!

You can watch the full presentation at the end of this blog.

Battle #1: Friction vs. Flow: Should AI ask for permission?

The first question was deceptively simple: Should AI pause and ask for confirmation?

Look at something like Algolia's Generative Shopping Guides. AI can now produce in seconds what used to take hours. But if it hallucinates product details or misses cultural nuance, trust evaporates instantly.

Strategic friction matters. Humans need moments to think, especially when outputs carry downstream consequences. Without friction, errors scale, misinformation spreads, and we risk eroding trust and human expertise.

Engineers often see only the negative side of friction, but it gives us better feedback loops, which in turn improve the AI features. A pause isn't just UX moralizing; it's upstream ML fuel.

In practice, “magic” requires data. Lots of it. Personalization? Learning to Rank? Automated synonyms? These things only work because the system continuously learns from user behavior—implicitly or explicitly. And if we gate every interaction behind a confirmation step or an opt-in, adoption tanks. Opt-out systems achieve >80% engagement; opt-in often sits below 20%.
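To make that concrete, here's roughly what implicit feedback capture looks like. The event shape and endpoint below are a hypothetical sketch, not Algolia's actual Insights API:

```typescript
// Hypothetical implicit-feedback event: names and fields are illustrative,
// not Algolia's real Insights API.
interface FeedbackEvent {
  type: "click" | "conversion" | "view";
  queryID: string;   // ties the event back to the search that produced it
  objectID: string;  // the result the user interacted with
  position?: number; // rank of the result, useful for Learning to Rank
  timestamp: number;
}

// Fire-and-forget: no confirmation step, no opt-in dialog. The user just
// browses, and each interaction becomes a training signal.
function trackEvent(event: FeedbackEvent): void {
  navigator.sendBeacon("/api/events", JSON.stringify(event));
}

// Example: the user clicked the third result for a query.
trackEvent({
  type: "click",
  queryID: "abc123",
  objectID: "product-42",
  position: 3,
  timestamp: Date.now(),
});
```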

So from my perspective, friction taxes the very data that powers the magic users expect.

Here, Gandalf and Galadriel eventually met in the middle: friction must scale with stakes.

  • Low-risk tasks? Let the AI flow.

  • High-risk decisions? Hit the brakes.

This is one of those places where neither extreme works: context decides.
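Here's a minimal sketch of what such a gate could look like in practice. The risk tiers, fields, and thresholds are invented for illustration, not a prescribed policy:

```typescript
// Hypothetical risk-scaled friction gate: tiers are illustrative.
type Risk = "low" | "medium" | "high";

interface AIAction {
  description: string;
  risk: Risk;         // e.g. rewording a title = low; bulk-publishing = high
  reversible: boolean;
}

async function execute(
  action: AIAction,
  confirm: (msg: string) => Promise<boolean>
) {
  // Low-risk, reversible actions flow straight through: no friction.
  if (action.risk === "low" && action.reversible) {
    return run(action);
  }
  // Anything with downstream consequences pauses for a human decision.
  const approved = await confirm(`About to: ${action.description}. Proceed?`);
  if (approved) return run(action);
}

function run(action: AIAction) {
  console.log(`Running: ${action.description}`);
}
```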

Battle #2: Promise vs. Reality: Can you ship a lab prototype?

Every week there's a new capability. Multimodal models. Agentic loops. Self-improving chains. Developers expect this stuff now, not six months from now. My ML engineer instinct is: "Ship it fast. Automate everything. Don't let UX slow us down." For example, we integrated OpenAI's Completions API in a few weeks. Then we had to do it again for the Responses API! And next week? Who knows.
If we build slowly, we're perpetually behind.

The only way to keep up is a lab-style workflow:

Research → Prototype → Fast experiment → Fast release → Iterate. Again. Forever.
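One concrete way to survive that churn is to hide each provider API behind a thin seam, so a swap touches one module instead of the whole product. The adapter below is a hypothetical sketch; the interface, endpoint, and field names are ours, not Algolia's actual architecture:

```typescript
// Hypothetical LLM adapter: invented so provider churn (Completions API ->
// Responses API -> whatever ships next week) stays contained in one module.
interface TextGenerator {
  generate(prompt: string): Promise<string>;
}

// One adapter per provider API; a swap means one new ~20-line class.
class ResponsesAdapter implements TextGenerator {
  constructor(private apiKey: string) {}

  async generate(prompt: string): Promise<string> {
    // Illustrative call shape; consult the provider's docs for the real one.
    const res = await fetch("https://api.example.com/v1/responses", {
      method: "POST",
      headers: { Authorization: `Bearer ${this.apiKey}` },
      body: JSON.stringify({ input: prompt }),
    });
    const data = await res.json();
    return data.output_text;
  }
}

// Product code depends only on TextGenerator, so API churn never leaks
// past this seam.
async function writeShoppingGuide(llm: TextGenerator, topic: string) {
  return llm.generate(`Write a short buying guide about ${topic}.`);
}
```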

Yet Galadriel has a point: lab performance means nothing if the real world breaks it. Reality is messy: ambiguous inputs, cultural context, unlabeled edge cases, sarcasm, shifting norms. Even the “best” models fail when users' mental models differ from the training data. IBM Watson learned this the hard way: when oncology recommendations built on U.S. data performed poorly in South Korea, IBM lost $4 billion.

It's a good example of what can happen when technical prowess ignores contextual reality.

Our conclusion here is that real users are not datasets: their context is essential to validate any theoretical innovation. We must move fast, sure, but also disclose capabilities progressively, with due respect for users' cognitive load.

Battle #3: Transparency vs. Magic: Should we reveal the trick?

For decades, users submitted an input and got one output. Generative AI breaks that rule: one prompt now yields infinite possibilities. 

That uncertainty terrifies people unless we make it visible. Transparency is more than an ethical practice: it's also how you calibrate trust, explain capabilities and limits, and reveal sources. It allows users to correct the system.
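In practice, that means the answer payload itself should carry the evidence. Here's a hypothetical sketch; these are our field names, not a real Algolia response shape:

```typescript
// Hypothetical answer payload: sources and confidence travel with the
// answer so the UI can reveal them, and the user can push corrections back.
interface GroundedAnswer {
  text: string;
  sources: { title: string; url: string }[]; // what the claim rests on
  confidence: "low" | "medium" | "high";     // calibrates trust
}

function renderAnswer(answer: GroundedAnswer): string {
  const citations = answer.sources
    .map((s, i) => `[${i + 1}] ${s.title}`)
    .join("\n");
  return `${answer.text}\n\nSources:\n${citations}\n(confidence: ${answer.confidence})`;
}

// A correction hook closes the loop: the user fixes the system, and the
// fix becomes training signal.
function reportCorrection(answerText: string, correction: string) {
  return fetch("/api/corrections", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ answerText, correction }),
  });
}
```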

To us, the future UX pattern is co-creation, not human consumption of AI work. AI and humans should collaborate. We should design for variability, not hide it behind a curtain.

However, too much transparency becomes noise. Take most AI playgrounds: a lot of users don't want to think about model parameters, temperature settings, etc. They want one thing: “Press button → get good output.”

Great engineering should make complexity invisible. When you flip a light switch, you don't need to think about electrons, because someone else did a good job. So here's where we landed together:

We need progressive complexity.

  • Start with smart defaults that deliver reliable value.

  • Reveal complexity only when users seek advanced control.

  • Let AI adapt implicitly, but give humans the tools to steer when stakes rise (see the sketch after this list).
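Here's that pattern as a rough code sketch; the settings, field names, and defaults are hypothetical, chosen only to illustrate progressive disclosure:

```typescript
// Hypothetical generation settings: defaults and fields are illustrative.
interface GenerationSettings {
  tone: "neutral" | "playful" | "formal";
  temperature: number;
  model: string;
}

// Smart defaults: "press button -> get good output."
const DEFAULTS: GenerationSettings = {
  tone: "neutral",
  temperature: 0.3,
  model: "fast-general",
};

// Progressive complexity: the advanced knobs exist, but only for users
// who go looking for them. Everyone else never sees them.
function settingsFor(
  overrides: Partial<GenerationSettings> = {}
): GenerationSettings {
  return { ...DEFAULTS, ...overrides };
}

// Most users:
const simple = settingsFor();

// The power user who opened the advanced panel:
const advanced = settingsFor({ temperature: 0.9, tone: "playful" });
```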

So the answer is not transparency or magic: it’s magic when it delights and transparency when it matters.

Conclusion

Left alone, each of us becomes dangerous:

  • Paul-Louis (Gandalf) would ship new capabilities weekly, regardless of whether humans can handle them.

  • Maria (Galadriel) would over-anticipate risks, slowing innovation to avoid hypothetical UX disasters.

Together, we keep each other honest. Machine learning provides intelligence. UX provides wisdom. Great AI products need both. 

Building the AI humans need is about augmenting them. That augmentation only works when technical power and human understanding evolve hand-in-hand.

The fellowship matters. 🧙‍♀️
