If you want to know where Google is headed, look through Google Lens. Lens was my favorite announcement at Google I/O for its clear utility, and that’s why I’m going into detail on it even after Google’s developer conference wrapped up Friday (please don’t laugh). Google Lens is the kind of feature that could make the apps that contain it uniquely useful. It has been described as “the first time Artificial Intelligence (AI) is more than a gimmick,” and it gives you a view into what “AI first” actually means.
The artificially intelligent, augmented-reality feature seemed to generate the most interest, not just from me but from many others. Of all the announcements, it best encapsulated what Google’s transition to an “AI first” company means.
Google CEO Sundar Pichai underscored the tool as a key reflection of Google’s direction, highlighting it in his Google I/O keynote as an example of Google being at an “inflection point with vision.”
“All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission,” he said in his introduction of Lens.
The feature is first being added to Google Photos and to Assistant, Google’s personalized AI software, which is available on a growing number of devices. Lens uses machine learning to examine what your phone’s camera sees, or photos already saved on your phone, and can act on those images to complete tasks.
A few things Lens can do:
- Tell you what species a flower is just by viewing the flower through your phone’s camera;
- Read a complicated Wi-Fi password through your phone’s camera and automatically log you into the network;
- Offer you reviews and other information about the restaurant or retail store across the street, just by pointing your camera at the place.
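Lens itself isn’t something developers can call directly, but Google’s Cloud Vision API exposes a similar label-detection capability. As a rough illustration of what “look at a photo and say what it is” looks like in code, here is a minimal sketch assuming the google-cloud-vision Python client and a hypothetical flower.jpg; it is not how Lens is implemented, just an analogous public API.

```python
# Minimal sketch: label a local photo with Google's Cloud Vision API.
# Assumes the google-cloud-vision package is installed and credentials
# are configured; "flower.jpg" is a hypothetical example image.
from google.cloud import vision


def label_image(path):
    """Return the label descriptions Cloud Vision detects in a local photo."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each annotation carries a description ("Flower", "Petal", ...) and a confidence score.
    return [(label.description, label.score) for label in response.label_annotations]


if __name__ == "__main__":
    for description, score in label_image("flower.jpg"):
        print(f"{description}: {score:.2f}")
```

The point is less the specific API than the shape of the interaction: an image goes in, structured information comes out, and that information can then drive an action like joining a Wi-Fi network or pulling up reviews.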
Typing a question into Assistant, for example, can feel like just using Google Search in a separate window. Add this new computer vision capability, though, and you have something a browser search box can’t do.
Lens brings Google’s use of AI into the physical world. It effectively turns the camera into a search box, and it shows Google adapting to the shift among younger users toward visual media, a preference that has made the social network Snap a magnet for people who would rather communicate with pictures than text.
Lens also affirms a consistency of focus for Google: here is augmented reality at work doing exactly what people already expect Google to do, which is retrieve information from the web.
But in considering how this new visual search option may play out, compare it to voice search, which for now often reads out whatever would appear at the top of a results page, sometimes producing answers that are inaccurate, offensive or lacking in context.
Google also cleverly incorporated Lens into one of the company’s most-used apps, Photos, which has gained half a billion users in the two years since its launch. That integration could make Google’s mobile apps more essential, which means the company keeps a place on users’ phones even if its own hardware, like the Pixel phone, fails to catch on.
Pichai said in his founders’ letter a year ago that part of the shift to being an AI first company meant computing would become less device-centric. Lens, which lives inside apps rather than in any one piece of hardware, is an example of that on mobile.
The technology behind Lens is, in essence, nothing new, and that also tells us something about where Google is going. It is not that Google is done coming up with new technologies; rather, the company has a lot of existing capabilities that it is still assembling into useful products.