What you need to know from Google I/O

Three major AI and personal assistant developments to come out of the tech giant's annual developers conference.


Google is in the midst of I/O, the annual developers conference where it unveils its latest products and services. The conference continues until Friday, but the major announcements came Wednesday during the company’s keynote presentations. We’ve summarized the major ones you can expect consumers to be talking about in the coming months.

Image recognition gets some time to shine

Given its investments in the space, the fact that AI was woven throughout the keynote presentations should come as no surprise. What was notable, though, is that many of the capabilities on display will be available in the very near future.

Google Lens is a technology built into Google Assistant (the company’s answer to Siri, announced at last year’s I/O) that uses AI-powered image recognition for a litany of different tasks. The sheer range of demonstrated Lens abilities was impressive. In one demo, the AI analyzed a photo of a flower and identified its species. In another, a photo of a barcode on the bottom of a WiFi router could be used to connect to the network. Taking a photo of a storefront brought up info about the business and online reviews, while doing the same for the marquee on a concert venue added the show to a calendar.

It’s notable that Google has invested heavily in approaching AI from an image recognition point of view even as the industry trend has leaned more towards natural language recognition for use in things like personal assistants and chatbots. The applications the company demonstrated with Lens – if they work as advertised – showed that an AI that recognizes images can be just as useful day-to-day as one that recognizes the way we speak.

Assistants get more helpful


That’s not to say, though, that Google hasn’t shown some impressive language recognition capabilities as well when it comes to its AI assistants. Given that 70% of conversations with Assistant use more conversational language – as opposed to the keywords used for a Google search – the company says it is making Assistant “more conversational” by continuing to improve its ability to recognize the natural way people speak, by better recognizing context and conversation history, and by adding a feature that allows you to type requests instead of speaking them out loud.

But the one on your phone is not the only assistant Google has to offer, and the company also announced a number of new features that signal something of a cross-pollination between Assistant and its Google Home platform. “Actions on Google” – voice-activated apps on Google Home that are similar to the Skills that can be loaded onto Amazon’s Alexa – will now also be available on mobile devices, allowing you to quickly and easily access the services of other apps from within the Assistant interface. Assistant will be able to do things like send payments from your banking app, play music from your preferred streaming service or call for a ride from Uber. Following the release of its third-party developer kit in late 2016, the number of third-party apps that can be accessed by both Home and Assistant is expanding, giving more companies a chance to be part of the ecosystem Google is creating.

On the flip side, Google Home is getting a range of new features of its own. It will be able to offer proactive notifications and updates based on tasks you’ve asked it to do before, like reminding you of an appointment or offering traffic reports for a travel route you asked for. If a user query is better answered with an image or other visual, that can be sent to your phone or Chromecast-connected TV. You can also tell Home that you are heading out the door, prompting it to send the directions it just found for you to your mobile device.

AI gets really, really accessible

On a more technical front, the company announced a new initiative dubbed Google.ai, which is meant to democratize the development of machine learning and artificial intelligence. The portal will give developers access to the latest AI research from Google, practical experiments the company has developed to test applied AI and the TensorFlow open-source library of production-ready machine learning software.

When it comes to hardware, Google also announced the release of its second generation of Tensor Processing Units (TPUs), the chips that power its machine learning. Besides being significantly faster and capable of both processing data and running machine learning models on the same chip, the new TPUs reflect the accessibility push as well. The chips will be packaged with subscriptions to the Google Cloud Platform, and can also be accessed through Google’s new TensorFlow Research Cloud program. Though there is still an application process, accepted researchers and developers will be given free access to 1,000 TPUs in exchange for sharing any research developments in journals and through open-sourced code.