On Wednesday, Google introduced Google Lens, a new AI technology that allows users to discover and interact with the world around them through visuals. For advertisers, it also means a new way to connect with consumers across Google products.
"Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on what you are looking at," Google CEO Sundar Pichai explained as he demonstrated the new technology at the company’s I/O Developer Conference, now celebrating its 10th anniversary.
For example, by pointing the camera at a storefront, Lens can tell you the name, rating and other information related to the business, like price and which friends have checked in there. Google Lens will also be integrated into Google Assistant to allow users to converse with the Assistant using photos. A user can take a photo of a marquee with a concert date and time and the Assistant can add it to their Google Calendar.
With Google Lens, your smartphone camera won’t just see what you see, but will also understand what you see to help you take action. #io17 pic.twitter.com/viOmWFjqk1
— Google (@Google) May 17, 2017
As the saying goes, a picture is worth a thousand words. By that logic, a photo can communicate far more information about a user than words can. Ray Dollete, associate director of creative technology at Phenomenon, believes Google Lens will help advertisers determine user intent.
"All of this allows more fine-tuned demographics for advertisers," Dollete said in an email. "The simple act of taking a photo of a boarding pass could tell you a lot about a person: their travel plans, lifestyle, airline preference, seating preference, etc."
Of course, Google Lens is the latest example of the increased investments digital companies such as Google are making in image-driven products.
"The mobile camera is quickly becoming one of the most important gateways (and translators) between the real world and the superpower that is the internet," said Tom Buontempo, president of New York-based agency Attention. "Snapchat showed the power of the camera to transform the traditional ad unit with brand lenses and filters. Pinterest showed how the camera could eliminate friction to product identification and purchase and now Google, with Lens, is looking to power so much more of our life."
But at its core, Google is a search-driven platform, and the product extends those capabilities: instead of searching with text, users can search with voice, with images, or with a combination of the two. Google has been working on the technology for years. Lens is similar to Google Goggles, the visual search app that was shuttered after four years.
"Maybe we weren’t ready for Google Goggles when it launched," said Buontempo, "but they’re slowly grooming us back in that direction through behavior modification."
The introduction of Lens has led to accusations that the tech giant is copying platforms like Pinterest, which released a similar product with the same name in March. Pinterest's "Lens" also lets users snap a photo of an object, like food or a piece of clothing, and the platform will direct them to related items or recipes. From a user perspective, the process is the same, except Google Lens can integrate with Google Assistant, giving users the option to use a combination of voice and images to search for something. Pinterest, which markets itself as a visual search platform, does not appreciate the comparison.
Kent Brewster, front-end engineer at Pinterest, replied to Google’s tweet about the new product, not out until later this year, with: "Or, you could try out Pinterest Lens, which works right now for everyone, everywhere."
A spokesperson for Pinterest said that despite the identical names, the products are different. "Our use of computer vision is not just to identify what an object is, but to see the possibilities of what it could become. What to make with fresh strawberries? How to style a new dress? Those are the kinds of subjective questions we answer, which is very different from how anyone else is applying computer vision technology."
The advantage Google may have over Pinterest, said Dollete, is that its Lens is available across other Google products, such as Google Translate, so advertisers can see more layers of data associated with each image. Still, more information doesn't necessarily mean more targeting accuracy. "While they have much more data to draw from to understand the image that a user has given them," he said, "they also have less context about what the user is expecting in return."
Pinterest, on the other hand, has "much less data to draw from," but they are "aware that the user expects results related to things like lifestyle," he said.
An image of a basket of fruit shot with Google Lens, said Dollete, might bring up nutritional information, known allergies, plant species, etc., but Pinterest Lens, drawing from a user’s lifestyle images, might bring up more accurate information about what a user might be looking for, like kitchen interiors. "This sort of context is crucial for the accuracy and quality of deep learning," he said.
Still, Buontempo said Google Lens "unlocks tremendous potential for brands" that deliver services through marketing or seek to add value to brand experiences. "It forces us to, once again, rethink the customer journey," he said, "which will have direct implications on where and how we market."