Google LLC is rolling out a set of artificial intelligence enhancements to its search engine and Google Maps that will enable users to find information more easily.
The new features made their debut today at a product event in Paris. According to Google, some of the features will make it simpler to find where a product is available for purchase. Another new capability, called Immersive View, creates three-dimensional models of popular landmarks that visitors can consult to plan their trips.
Google Search updates
Google’s search engine is receiving a new AI-powered feature called multisearch. According to the company, the feature provides the ability to create search queries that combine an image with text.
A user could, for example, create a query that combines a photo of a chair with the phrase “matching table.” In response to such a query, Google could surface tables that are available for purchase online and have a design similar to that of the chair in the photo.
Multisearch support is available today on mobile devices that have Google’s Lens app installed. In the coming months, the feature will also become available for the web. Furthermore, Google plans to release a version of the feature called “multisearch near me” that will enable users to find nearby stores carrying the product they’re searching for.
“You can take a picture and add ‘near me’ to find what you need, whether you’re looking to support neighborhood businesses or just need to find something in a hurry,” Google vice president of search Elizabeth Reid detailed in a blog post. “This is currently available in English in the U.S., and in the coming months, we’ll be expanding globally.”
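Google hasn’t published how multisearch works internally, but a common way to combine an image and a text query is to blend their embedding vectors and rank a catalog by similarity. The sketch below illustrates that idea with hand-made three-dimensional vectors; the product names, embeddings and blending weight are all invented for illustration, not Google’s actual pipeline.

```python
import numpy as np

# Hypothetical catalog: each product has a precomputed style-embedding vector.
catalog = {
    "oak dining table":   np.array([0.9, 0.1, 0.2]),
    "glass coffee table": np.array([0.1, 0.8, 0.3]),
    "walnut side table":  np.array([0.8, 0.2, 0.3]),
}

def combined_query(image_vec, text_vec, alpha=0.5):
    """Blend the image and text embeddings into a single query vector."""
    q = alpha * image_vec + (1 - alpha) * text_vec
    return q / np.linalg.norm(q)

def search(query_vec, catalog, k=2):
    """Rank catalog items by cosine similarity to the combined query."""
    scored = {name: float((vec / np.linalg.norm(vec)) @ query_vec)
              for name, vec in catalog.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# A photo of an oak chair plus the text "matching table" (made-up embeddings).
chair_image = np.array([0.85, 0.15, 0.25])
matching_text = np.array([0.7, 0.2, 0.4])
results = search(combined_query(chair_image, matching_text), catalog)
```

In this toy example, the wooden tables score higher than the glass one because their vectors sit closer to the blended query, which is the intuition behind retrieving products that match both the photo and the text.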
Upgraded Google Maps
Google Maps is also receiving a set of new AI features as part of today’s update. The main highlight is Immersive View, a capability that provides access to 3D models of buildings. Users can virtually interact with a building model from a bird’s eye view to locate the entrance, check current weather conditions and find related information.
“You can virtually soar over the building and see where things like the entrances are,” Chris Phillips, Google’s vice president and general manager of Geo, detailed in a blog post. “With the time slider, you can see what the area looks like at different times of day and what the weather will be like. You can also spot where it tends to be most crowded so you can have all the information you need to decide where and when to go.”
Immersive View generates building models using an AI technique known as neural radiance fields, or NeRF. The technique enables a neural network to create a 3D model of an object from a few two-dimensional images. In the case of Immersive View, building models are generated using a combination of Street View footage and aerial photos.
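At its core, NeRF renders a pixel by sampling points along a camera ray, querying a learned field for a density and color at each point, and alpha-compositing the results front to back. The sketch below shows only that volume-rendering step, replacing the trained neural network with a hand-written toy field (an opaque sphere); all values are illustrative, not Google’s implementation.

```python
import numpy as np

def toy_field(points):
    """Stand-in for NeRF's trained MLP: returns a density and an RGB color
    for each 3D point. Here, a hypothetical opaque sphere of radius 1."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 5.0, 0.0)            # opaque inside the sphere
    color = np.tile([0.8, 0.3, 0.2], (len(points), 1))  # constant reddish color
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume rendering as in NeRF: sample points along the ray,
    query the field, then alpha-composite front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = toy_field(points)
    delta = np.diff(t, append=far)              # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)      # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)  # composited pixel color

# Shoot one ray from z = -3 toward the sphere at the origin.
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

Training the real network amounts to adjusting the field so that pixels rendered this way match the input photographs, which is how a few 2D images, such as Street View and aerial shots, yield a consistent 3D model.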
Another new addition to Google Maps is an indoor version of the service’s existing Live View tool.
Originally launched in 2020, Live View enables users to point their handset’s camera in the direction of a location and receive navigation instructions. The instructions are overlaid on the footage recorded by the handset’s camera. Live View also provides other features, including the ability to access information about restaurants near the user.
The new version of Live View announced today brings similar features to indoor spaces. According to Google, an initial release of the tool is set to roll out in the coming months. On launch, it will work at more than 1,000 airports, train stations and malls in a dozen cities worldwide.
Google Maps is also receiving new features geared toward electric vehicle owners. When a user is taking a trip that requires a charging stop, Google Maps can display the most suitable charging stations along the travel route. The service can also find locations such as supermarkets that have on-site charging stations.
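Google hasn’t described its routing logic, but a simple greedy heuristic conveys the idea of inserting charging stops along a route: drive as far as the battery allows, then recharge at the farthest reachable station. The station names and distances below are invented for illustration.

```python
# Hypothetical stations, given as distances in km along a 500 km route.
stations = {"FastCharge A": 120, "SuperVolt B": 260, "EcoPlug C": 410}

def plan_charging_stops(route_length, vehicle_range, stations):
    """Greedy sketch: whenever the destination is out of range, stop at the
    farthest charging station still reachable, then continue from there."""
    position, stops = 0, []
    while position + vehicle_range < route_length:
        reachable = {name: km for name, km in stations.items()
                     if position < km <= position + vehicle_range}
        if not reachable:
            raise ValueError("No reachable charging station on this route")
        name = max(reachable, key=reachable.get)  # farthest reachable station
        stops.append(name)
        position = reachable[name]                # recharge and continue
    return stops

# A vehicle with 300 km of range needs a single stop on the 500 km trip.
stops = plan_charging_stops(500, 300, stations)
```

A production router would additionally weigh charger speed, availability and detour distance, but the core decision of when a stop becomes necessary looks like the loop above.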
Google debuted its new search and navigation features alongside an update to Google Translate. When a user enters a word such as “novel” that has multiple meanings, the updated version of Google Translate can display multiple translations.
“So whether you’re trying to order bass for dinner, or play a bass during tonight’s jam session, you have the context you need to accurately translate and use the right turns of phrase, local idioms, or appropriate words depending on your intent,” explained Google Translate product manager Xinxing Gu.
The iOS version of the Google Translate app is also receiving an update. The update includes a new interface that will make it easier to perform common tasks such as selecting a target language for translations. Additionally, Google is making translation results easier to read.
The new features are rolling out two days after Google introduced Bard, a conversational search chatbot based on its LaMDA large language model. Today, following reports that Bard answered a question incorrectly in a Google product demonstration, shares of parent company Alphabet dropped more than 8%. At least one user, however, has pointed out that the chatbot’s answer was at least partially correct.
On the same day Google debuted Bard, Microsoft Corp. announced plans to introduce a similar chatbot for its Bing search engine. The chatbot is based on an enhanced version of OpenAI LLC’s ChatGPT large language model. The chatbot-equipped version of Bing is currently available in preview.