Years ago, I wouldn’t have expected a search engine to tell a searcher about objects in a photograph or video, but search engines have been evolving and getting better at what they do.
In February, Google was granted a patent aimed at answering image queries by identifying objects in photographs and videos. A search engine may have trouble understanding what a human is asking in a natural language query, and this patent focuses on disambiguating image queries.
The patent provides the following example:
For example, a user may ask a question about a photograph that the user is viewing on the computing device, such as “What is this?”
The patent tells us that the process it describes may be used for image queries, text queries, or video queries, or any combination of those.
In response to a searcher submitting an image query, a computing device may:
The server may receive the transcription and the image from the computing device, and:
The server may:
The process described in this patent includes:
Other aspects of performing such image query searches may involve:
The method may also include:
Further, the method may also include:
Performing the command can include:
Advantages of following the image query process described in the patent can include:
This patent can be found at:
Contextually disambiguating queries
Inventors: Ibrahim Badr, Nils Grimsmo, Gokhan H. Bakir, Kamil Anikiej, Aayush Kumar, and Viacheslav Kuznetsov
Assignee: Google LLC
US Patent: 10,565,256
Granted: February 18, 2020
Filed: March 20, 2017
Abstract
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextually disambiguating queries are disclosed. In an aspect, a method includes receiving an image being presented on a display of a computing device and a transcription of an utterance spoken by a user of the computing device, identifying a particular sub-image that is included in the image, and based on performing image recognition on the particular sub-image, determining one or more first labels that indicate a context of the particular sub-image. The method also includes, based on performing text recognition on a portion of the image other than the particular sub-image, determining one or more second labels that indicate the context of the particular sub-image, based on the transcription, the first labels, and the second labels, generating a search query, and providing, for output, the search query.
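To make the abstract's pipeline concrete, here is a minimal Python sketch of the disambiguation flow it describes: gather labels from image recognition on a sub-image and from text recognition on the rest of the image, then use the highest-confidence label to resolve a demonstrative pronoun such as "this" in the spoken query. The recognizer functions, the dictionary-based image stand-in, and the pronoun-substitution rule are all illustrative assumptions, not Google's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class LabeledRegion:
    """A recognized label with a confidence score (illustrative)."""
    label: str
    confidence: float


def recognize_sub_image(image):
    # Hypothetical stand-in for image recognition on a sub-image,
    # producing the "first labels" of the patent's abstract.
    return [LabeledRegion(label, conf) for label, conf in image.get("objects", [])]


def recognize_text(image):
    # Hypothetical stand-in for text recognition (OCR) on the rest
    # of the image, producing the "second labels".
    return [LabeledRegion(label, conf) for label, conf in image.get("text", [])]


def disambiguate_query(transcription, image):
    """Combine the transcription with image-derived labels into a search query."""
    candidates = recognize_sub_image(image) + recognize_text(image)
    candidates.sort(key=lambda region: region.confidence, reverse=True)
    if not candidates:
        return transcription.lower()
    best = candidates[0].label
    query = transcription.lower()
    # Replace the first ambiguous pronoun with the top-scoring label.
    for pronoun in ("this", "that", "it"):
        if pronoun in query:
            return query.replace(pronoun, best, 1)
    # No pronoun found: append the label as extra context instead.
    return f"{query} {best}"


if __name__ == "__main__":
    image = {"objects": [("eiffel tower", 0.92)], "text": [("paris", 0.60)]}
    print(disambiguate_query("What is this?", image))  # → what is eiffel tower?
```

In this sketch the contextual labels do the disambiguating work: the vague utterance "What is this?" becomes a query a search engine can actually answer.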