Then and now: 5 ways we’re continuing to improve Search

3. Understanding images, videos and more

There’s so much information in the world that isn’t text, and so many ways to ask for things that don’t involve typing words into a search box.

In 2008, applying the latest developments in natural language processing (NLP), we launched the ability to search with your voice, making it more natural to search on mobile.

In 2017, advances in computer vision made it possible to search what you see with Lens. We turned your mobile phone camera into a way to explore and ask questions about the world around you, so you can learn more about that flower or insect you spotted on a walk around your neighborhood. Today, people do more than 12 billion visual searches every month with Lens.

Last year, we launched multisearch, which advanced these capabilities to allow you to add text to your visual searches. Now, you can do things like take a picture of a couch you like, add the word “chair,” and Google will use the image and word to show you similar pieces to add to your living room set.

Breakthroughs in AI have also enabled us to understand the semantics of videos and automatically identify key moments, allowing you to navigate them like chapters in a book. Whether you’re looking for that one step in a home renovation tutorial or the game-winning shot in a highlight reel, you can jump straight to what you need.
