Google announced Google Lens at I/O 2017.
Google Lens is an AI-powered technology that uses your smartphone camera and deep learning to detect an object, understand it, and offer actions based on what it sees.
The idea is that you point your phone at something; Lens recognizes what it is, tells you about it, and suggests next steps based on that information. It sees exactly what you see, but helps you know more and do more with it.
Point it at an object and it will tell you what it is, what it does and where you can buy it. Point it at a restaurant and it will show you opening times, reviews and menu information. Point it at the Wi-Fi sticker on your router and it will connect you to the network automatically.
None of this is essential, but it is an aid that makes our lives easier, getting us information fast and helping us make decisions quickly.
My question is: how soon will brands get on board with this? Consumers can already point at a product and be told where to buy it, but how can brands get involved to make sure the decision consumers make is to buy their product?
Limitations may come from the fact that, for now, Google Lens is implemented only on Google's own Pixel phone, through Google Assistant and Google Photos. That leaves the rest of the Android user base interested but not yet able to get involved.
For those questioning how this is different from what Bixby does: the answer is that it's better, more intelligent, and has a mountain of data to fall back on. Bixby's limitations, its inaccurate identification of objects and its reliance on partnerships for information, simply don't exist with Google Lens.
The success of Google Photos and its ability to identify images has most of us awaiting the rollout of Google Lens with optimism and excitement, though we do question whether this is just another clever move by Google to capture more ad revenue in the future.