Google Lens will soon take a step forward: the multisearch feature, known as Multisearch, will soon be available in France. Until now, this tool was accessible only in the United States and supported English only.
In case you missed it, September 28, 2022 marked the 2022 edition of Google Search On, the company's conference dedicated to its search technologies, including visual and text search tools such as Google Lens.
On that occasion, the Mountain View firm announced something new for its visual search engine: multisearch, presented in 2022 at the Google I/O conference, will soon be available in 70 languages, including French.
Simply put, Multisearch offers a whole new way to search by combining images, text, and even voice. For example, if you take a picture of a plant with Google Lens, you can refine the results by typing a question into Lens, such as asking how to care for it. Combining image, voice, and text in a single query narrows down the results.
Also read: Google Lens finally arrives on Google Chrome to make searching easier
Google Lens wants to make it easier to search near you
In addition, Google has announced that the "multisearch near me" feature has entered its testing phase. The principle is the same, except that you can ask Google Lens to return results close to your geographic location. For example, if you submit a photo of a dish to Google Lens, the visual search engine can tell you whether a restaurant near you serves it.
The idea, once again, is to make searching easier and to deliver ever more relevant results. However, "multisearch near me" is currently available only in the United States and only supports English. French users will have to wait a little longer before they can use it.
Finally, translation via Google Lens is also making progress and reaching a new milestone. Thanks to a machine learning technique called generative adversarial networks (GANs), Google can now blend the translated text directly into the background image.
In other words, the translation will no longer be displayed floating above the original text; it will replace it directly in the photo. With this feature, Google aims to make Google Lens translations look more natural and realistic.