On November 1, 2022, Google announced the integration of its Lens image recognition technology directly into the home page of its famous search engine. A button is now available to launch an image search with a single click.
In 2017, Google created a surprise by unveiling Google Lens, an image recognition tool powered by artificial intelligence and capable of identifying objects in an image. Simply put, point your smartphone camera at a plant, animal, book or product in a store, and Lens will identify it. Contextual suggestions are then displayed, such as reviews available online, explanations, or the addresses of other merchants where the product in question can be found.
Since then, the Mountain View company has steadily integrated Google Lens into its products, particularly Google Photos and then Chrome, its flagship browser. And since November 1, 2022, the company has gone a step further in integrating Lens into its services.
Google Lens is available directly in the search engine
Indeed, the technology is now available directly on the homepage of the famous search engine. “Google’s homepage doesn’t change often, but today it does. We are constantly working to expand the type of questions you can ask and to improve the way we answer them. Now you can easily ask visual questions from your desktop,” wrote Rajan Patel, vice president of engineering at Google, on Twitter.
In short, a Lens button (shaped like a camera in Google's colors) now appears directly in the Google Search bar. Clicking it lets you drag and drop an image, upload one from your files, or paste an image URL to run a visual search.
As a reminder, multiple search via Google Lens is expected to arrive in France soon. This feature, called Multisearch, was previously available only in the United States and supported English only. It allows you to search by combining an image with text, and even with voice by speaking a specific request. The goal is to refine the results as much as possible.
Source: The Verge