It shouldn’t be surprising that the Google I/O developers conference was focused on the latest advancements in AI and new capabilities for Assistant – considering the company rebranded its Google Research division as Google AI just before the keynote began. As committed as the tech giant is to machine learning, computer vision and natural language processing, there weren’t many announcements that revealed brave new frontiers for artificial intelligence. Rather, the conference was all about showing how much better Google’s AI is at doing its job, and all the areas where it can be applied.
Getting AI to assist with everything
Besides the expected news that Google Assistant would be built into its Smart Displays – think of the Google Home smart speaker, except with a screen – the company announced a number of other places where the AI-powered platform could provide suggestions for users.
In Photos, it will allow users to automatically improve the look and quality of photos, as well as predict where each photo is most likely to be shared. In Gmail, a new Smart Compose feature will suggest ways users might finish sentences – like autocomplete for email. But some of the most extensive capabilities can be found in Maps, where AI will let you know if a business you are looking for is unexpectedly closed, tell you what nearby parking is like, and use AR functions to give you a live view of directions or where to turn while walking.
The company is also making conversation with Assistant more natural, allowing it to carry out multiple tasks without users having to say “OK Google” before each one, and having it respond to more conversational commands. On the “naturally talking with computers” front, Google’s Duplex – a system still in development that can carry out conversations over the phone – also received attention for how realistically it conducted conversations during what the company claimed were recordings of actual phone calls to a hair salon and a restaurant.
Lens expands its view
First announced at last year’s I/O, Google Lens is an AI-powered camera platform for the company’s Pixel phones. Now, its functionality will be available directly through the cameras of devices made by LG, Motorola, Xiaomi, Sony, Nokia and more, and the company showed off a slate of new and improved tasks its Lens can do for a user.
On top of improvements to Lens’ previously announced image recognition features – such as bringing up reviews of a store just by capturing the storefront, or being able to recognize different species of plants or breeds of animals – the company touted a “style match” feature that searches for clothing or interior decor items, similar to “visual search” capabilities that sites like eBay and Pinterest have been experimenting with. It’s also capable of live text detection, letting users copy and paste text they see printed in the physical world, like on a sign or piece of paper.
Android Auto hits the road with Volvo
Outside of AI, one of the biggest announcements at I/O was that Android Auto – Google’s connected car platform – would be directly built into new vehicles from Volvo and Audi. The addition will allow drivers to use the platform (even if they don’t have an Android-powered phone with them), as well as bring Google Assistant directly into the vehicles.