One of the most exciting challenges in tech today is undoubtedly trying to beat the human brain at its own game. Whether it’s with the super “helpful” automatic checkout at the grocery store or IBM’s supercomputer Watson, technology has come a long way in keeping pace with and occasionally surpassing human capabilities.
Interpreting images, however, is still a human's game: our brains process images far faster than machines do, and human minds consistently outperform technology when it comes to image analysis.
As a full-text search engine, Algolia indexes a lot of images but cannot actually search the images themselves. Instead, we rely on the metadata associated with an image, such as its title, description, location, and other tags, to make sense of it. But we often run into situations where an image has no associated words or tags, yet customers still want to search within those images. So what happens then?
Luckily, a lot of image recognition tools have emerged in the past decade. One such tool, Imagga, offers image recognition software that can, amongst other things, automatically generate these missing image tags. And it just so happens to be an API too! Imagga is essentially the missing link that enables searching raw images within Algolia: Algolia provides a powerful way to explore large amounts of data, and Imagga brings to the table the ability to create textual data from a set of images.
We’ve devised a game combining both tools to compare how humans and machines tackle image tagging, and how the difference affects the resulting search experience. Find out who comes out on top in Human vs. Robot: Battle of the Image Tags.
Illustration by Martin David