Google has announced a range of new features and improvements across its suite of products, including Google Search, Maps, and Assistant, at its annual developer conference, Google I/O 2022, on May 11.
The internet giant is adding the ability to find local information with the recently launched multi-search feature that allows users to search the Web using text and images at the same time through Google Lens, its image recognition technology.
The feature will enable users to take a picture or a screenshot of products in categories such as food, apparel and home goods and add a 'near me' text query to browse through nearby local retailers or restaurants that may offer them.
For instance, a user can take a picture of a dish they may want to try; Google will then scan "millions of images and reviews" posted on Web pages, along with contributions from its community of Maps users, to surface results about nearby spots that offer the dish.
Google said the feature will be available globally later this year in English, with plans to expand to other languages over time.
"We're redefining Google Search yet again, combining our understanding of all types of information — text, voice, visual and more — so you can find helpful information about whatever you see, hear and experience, in whichever ways are most intuitive to you. We envision a future where you can search your whole world, any way and anywhere," said Prabhakar Raghavan, Senior Vice-President, Google, in a blog post.
Raghavan also noted that Google Lens is used 8 billion times a month, nearly tripling over the last year.
Google also previewed 'Scene Exploration', a visual search advancement to its multi-search feature that will enable people to pan their camera and instantly view information about multiple objects in a scene at the same time.
The company gave an example of going shopping to buy a candy bar for a chocolate connoisseur friend who loves dark chocolate but has an aversion to nuts. With scene exploration, people will be able to scan the entire store shelf with their phone's camera and see helpful insights overlaid on candy bars in front of them.
Alternatively, one can go to a bookstore and scan the bookshelf to view additional information about these books overlaid on them.
"Scene Exploration is a powerful breakthrough in our devices' ability to understand the world the way we do," Raghavan said.
"This technology could be used beyond everyday needs to help address societal challenges like supporting conservationists in identifying plant species that need protection or helping disaster relief workers quickly sort through donations in times of need," he added.
New Google Maps features
Google Maps is getting a new immersive view that lets users explore and experience a place even before they visit it.
With this view, users can see what an area looks like at different times of the day, along with its weather, traffic conditions and busy spots. Immersive view is rolling out for landmarks, neighborhoods, restaurants, and popular venues, among others, starting with major cities such as Los Angeles, London, New York, San Francisco, and Tokyo by the end of the year, with more cities expected to be added in the future.
A new ARCore Geospatial API will enable developers to integrate Google Maps' augmented reality-driven navigation feature Live View into their own apps. Companies already using the API include US micromobility firm Lime and Australian carrier Telstra.
Eco-friendly routing, which was introduced in the United States and Canada in October last year, is being expanded to Europe later this year. The feature lets drivers view and choose the most fuel-efficient route when looking for driving directions. Google claims that people have used the feature to travel 86 billion miles so far, which translates to savings of more than half a million metric tons of carbon emissions. It expects this number to double after the expansion to Europe.
Look and Talk
Google is also launching a new feature called Look and Talk, which will enable users to interact with Google Assistant more naturally and intuitively. Available as an opt-in feature on its smart home display Nest Hub Max, it will let users simply look at the display and speak to Google Assistant without using any wake words such as "Hey, Google".
The company said it uses Face Match alongside Voice Match, as well as signals such as proximity, head orientation and gaze direction, to recognize the user. It noted that video from these interactions is processed entirely on device and is not shared with Google or anyone else. The feature will be available for Android users this week, followed by iOS users in mid-May.
Google Assistant's quick phrases feature is also coming to Nest Hub Max with expanded functionality. The feature, currently available on its Pixel 6 smartphone, enables users to perform common daily tasks with Google Assistant, such as setting a timer, turning on the lights, or asking for the weather, without saying the wake words.

The tech giant said it is also building more powerful speech and language models that can understand the nuances of human speech, such as mid-conversation pauses, self-corrections and interruptions, so as to mimic natural human conversations.