Over the past year, the number of companies asking Algolia about voice search has gone from a trickle to a torrent. Our customers and prospects have the usual search-related questions about organizing data and configuring relevance, and they also want to know how best to handle voice input. That's why today Algolia is announcing VoiceOverlay for iOS and Android applications: a UI component that lets developers accept voice input for search and other purposes, whether or not they use Algolia.
Voice is becoming a necessity, especially on mobile. You may have heard predictions that 50% of all searches will happen through voice by 2020, and 71% of people already say they would rather search by voice than by keyboard. Users will consider mobile apps without voice to be less useful than those that offer it.
However, whether it's for search or another purpose, handling user speech without tooling isn't easy. The app needs to request permissions and handle every possible response, listen to the user, display the transcribed text on screen, and then do something with it. Piecing all of this together takes significant development work.
VoiceOverlay gives developers tooling to handle voice input in their mobile applications quickly and easily. Taking inspiration from our InstantSearch libraries, VoiceOverlay reduces development time for this task from hours to minutes. It handles the entire flow, including:
- Requesting user permissions
- Listening for audio
- Recording the voice input
- Retrieving text from the native iOS and Android speech-to-text engines
All of this comes in a polished, customizable UX that fits into any app, with a unified experience across platforms. What's more, applications don't need to use Algolia to use VoiceOverlay. We want all apps to be voice ready, and this is our contribution to making that happen.
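To give a feel for the flow described above, here is an illustrative Swift sketch of wiring a voice button to search. The module, class, and method names (`InstantSearchVoiceOverlay`, `VoiceOverlayController`, `start(on:textHandler:errorHandler:)`) reflect our iOS library, but treat the details as an approximation; check the library's README for the exact API:

```swift
import UIKit
import InstantSearchVoiceOverlay

class SearchViewController: UIViewController {
    // A single controller instance handles permissions, the overlay UI,
    // recording, and native speech-to-text behind the scenes.
    let voiceOverlay = VoiceOverlayController()

    @IBAction func voiceButtonTapped(_ sender: UIButton) {
        voiceOverlay.start(on: self, textHandler: { text, final in
            // Called with partial transcriptions while the user speaks,
            // then once more with `final` set to true.
            if final {
                self.performSearch(query: text)
            }
        }, errorHandler: { error in
            // Surfaces permission denials and recognition failures.
            print("Voice input error: \(String(describing: error))")
        })
    }

    func performSearch(query: String) {
        // Feed the transcribed text into any search backend —
        // Algolia or otherwise.
    }
}
```

The same pattern applies on Android: one object owns the permission flow and the overlay, and your code only deals with the final transcribed text.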