A number of improvements to Google Search were unveiled at the company’s ‘Search On’ event; these improvements will provide users with more relevant results that are both rich in content and visually oriented.
“We’re thinking outside the box in order to make search experiences that are as multifaceted as human beings and function more like our minds. Now that we’ve entered a new era of search, you’ll be able to zero in on exactly what you need with the help of a wide variety of media types.
We call it making Search more natural and intuitive,” Prabhakar Raghavan, Google’s VP of Search, stated in his keynote address.
First, the multisearch feature, which Google released in test form in April of this year, is being rolled out internationally in English, with support for 70 additional languages to follow in the coming months.
The multisearch function lets users search with images and text at the same time, and it works in tandem with Google Lens. Google reports that Lens is used over eight billion times each month by people trying to find information about an image they have captured.
By integrating Lens into multisearch, users will be able to snap a photo of a product, type in "near me," and get results for stores in their immediate vicinity. Google says this "new way of searching" will make it easier for people to find and connect with local businesses.
The "multisearch near me" feature will debut in English in the United States after the summer. It is enabled by Google's deep understanding of local places and product inventory; Raghavan added that multisearch and Lens draw on the millions of photographs and reviews on the web.
Google is also working to improve the translation overlay on images. The company reports that over 1 billion people a month use it to translate text from photos into more than 100 languages. A new capability lets Google "blend translated text into complicated pictures, so it looks and feels much more natural."
The goal is for the translated text to blend into the source image rather than drawing undue attention to itself. Google says the technology is driven by "generative adversarial networks" (GAN models), the same approach behind Magic Eraser on Pixel. This update is planned for later this year.
Shortcuts under the search bar are also being added to the iOS app. Users can use their screenshots as shopping guides, translate any text with their camera, search for music, and more.
When searching for certain locations or topics, Google will provide more visually appealing results. Google demonstrated this by showing search results for a city in Mexico, which included videos, photos, and other material in addition to the standard text results.
According to Google, this will eliminate the need for users to switch between several tabs while researching a specific location or topic.
Within the next month, Search will also begin surfacing more relevant information as a user types a question. To help users refine their queries, Google will display "keyword or topic possibilities."
For some of these categories—cities, for instance—it will also have material from open-web creators, along with travel advice and other useful information.
According to the company's blog post, users will be presented with the "most relevant content, from a range of sources, regardless of the format the information arrives in – whether that's text, images, or video." The update will roll out in the coming months.
Google now provides visually richer results when you conduct a search for food, whether you’re looking for a specific meal or an item at a restaurant. It’s also improving the “visual richness and dependability” of “digital menus” and increasing their coverage.
The company claims that it is using “image and language understanding technologies, including the Multitask Unified Model,” in conjunction with “menu information provided by people and merchants, and found on restaurant websites that use open standards for data sharing,” to produce these improved search results.
According to a Google blog post, these menus will highlight the most popular dishes and call out different dietary options, beginning with vegetarian and vegan.
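The "open standards for data sharing" mentioned above most plausibly refers to structured-data vocabularies such as schema.org, which defines `Menu`, `MenuSection`, `MenuItem`, and `suitableForDiet` types that restaurant sites can embed in their pages. As a rough illustration (the announcement does not specify the exact format Google ingests, and the restaurant and dishes below are invented), a minimal schema.org-style menu entry might be assembled like this:

```python
import json

def menu_item(name: str, price: str, diets: list[str]) -> dict:
    """Build a minimal schema.org MenuItem as a JSON-LD-style dict.

    `diets` entries are schema.org RestrictedDiet identifiers,
    e.g. "VegetarianDiet" or "VeganDiet".
    """
    return {
        "@type": "MenuItem",
        "name": name,
        "offers": {"@type": "Offer", "price": price, "priceCurrency": "USD"},
        "suitableForDiet": [f"https://schema.org/{d}" for d in diets],
    }

# A hypothetical single-section menu for an imaginary restaurant.
menu = {
    "@context": "https://schema.org",
    "@type": "Menu",
    "hasMenuSection": [{
        "@type": "MenuSection",
        "name": "Mains",
        "hasMenuItem": [
            menu_item("Chickpea Curry", "11.50", ["VeganDiet", "VegetarianDiet"]),
            menu_item("Margherita Pizza", "13.00", ["VegetarianDiet"]),
        ],
    }],
}

print(json.dumps(menu, indent=2))
```

Markup along these lines is machine-readable, which is what would let a search engine flag vegetarian and vegan options automatically rather than parsing menu PDFs or photos.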
Shopping results on Search will also be updated to be more visually appealing with accompanying links and the option to shop for a “full look.” Users will be able to view certain sneakers in 3D perspective in the search results and buy them with ease.
Similarly, Google Maps is gaining some new capabilities that add visual information, though these will mostly be confined to specific cities. One of the features users can take advantage of is the ability to check out the "neighborhood vibe" of a given area: its best restaurants, attractions, and so on.
Vacationers will appreciate being armed with this knowledge of the area. Google claims to have used “AI with local expertise from Google Maps users” to provide this data. In the upcoming months, Android and iOS users everywhere will be able to experience the neighborhood vibe.
It is also enhancing its immersive view functionality by adding 250 photorealistic aerial images of famous places from across the world, including the Tokyo Tower and the Acropolis.
Google says "predictive modeling" allows immersive view to automatically learn historical trends for a given location. Immersive view will come to Los Angeles, London, New York City, San Francisco, and Tokyo on Android and iOS in the coming months.
The Live View feature will also surface useful information. While out and about, users can search with Live View to locate nearby businesses like grocery stores and restaurants. Search with Live View is coming to Android and iOS in the coming months in major cities including London, New York, San Francisco, Paris, and Tokyo.