I can't imagine having to navigate today's world with a visual impairment. From crossing streets to recognizing people and objects, and even seemingly trivial tasks like making a sandwich or finding the toilet in a restaurant, everything is endlessly harder without sight, and I have a lot of admiration for those who deal with these situations every day. Smartphones can make things easier, especially with AI at the helm: if Google Lens can identify a dog in a photo, there's nothing stopping the same technology from helping visually impaired people, and that's where Lookout comes in.

Announced at last year's I/O, Lookout is finally available for users to try. It has three modes: one that helps you explore the world around you and assists with tasks like cooking, one for shopping that reads barcodes and identifies currency, and one for reading text on mail, signs, labels, and more. The app is thoughtfully designed from the start: when you launch it, it asks which mode you want to use, so you don't have to navigate through menus.

After testing it for a few minutes, I noticed it was quick to identify objects and tell me exactly where they were (at 1 o'clock, 2 o'clock, 3 o'clock, etc.) and whether they had text on them. There's a menu listing all recently identified objects, as well as a camera option to take a picture and upload it, though the latter didn't give me any results. The voice jumped from one item to the next very quickly without pausing and sounded quite robotic, so I hope Google can improve it, especially in visually crowded scenes.

Lookout is now available for Pixel phones in the US in English, although Google says it's eager to bring it to more countries, languages, and platforms later. You can download it from the Play Store now, or grab it manually from APK Mirror if you don't live in the US or don't have a Pixel and want to try it out.