Google I/O, the tech giant’s annual developer conference, has typically been the place where the company’s up-and-coming innovations are previewed.
This year, the keynote presentation focused on hardware like new Pixel phones and a new smart display in its rebranded Google Nest line of connected home products, as well as updates to its Android operating system and some of its most popular apps. But the company also announced features that will extend the reach of some of its tools and services, as well as privacy efforts that will have an impact on its most important line of business: advertising.
Taking a bite out of cookies
One of Google’s biggest announcements of the week didn’t come during its annual keynote, but through a blog post on its website on the first day of the conference.
Google announced a number of updates to Chrome that aim to give users more control over which cookies are stored as they browse the web – and, more pressingly, the degree to which sites are able to track users. Chrome users will be able to manually block and delete cookies, as well as better distinguish between first- and third-party cookies. Using what’s called a “SameSite” attribute, developers will now have to declare which of their cookies can be used across sites and which can only be used on the site that issued them. Users will then be able to block or delete only the third-party cookies used across sites, while keeping the first-party cookies that save things like login details, items in a shopping cart or site preferences. Users will also be given more information about which sites set which cookies, which the company says helps people make more informed decisions about how their data is used.
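To illustrate what the SameSite attribute looks like in practice, here is a minimal sketch using Python’s standard http.cookies module (SameSite support requires Python 3.8+). The cookie name and value are made up for illustration and are not taken from Google’s announcement:

```python
from http.cookies import SimpleCookie

# Build a first-party session cookie and restrict how it travels.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"  # only sent on same-site requests
cookie["session"]["secure"] = True     # only sent over HTTPS connections

# Emits a Set-Cookie header containing "Secure" and "SameSite=Lax".
print(cookie.output())
```

A cookie declared `SameSite=Lax` (or `Strict`) is what the browser treats as first-party only; a developer who needs a cookie to work across sites must explicitly mark it `SameSite=None`, which is the disclosure Chrome’s new controls build on.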
Google also said it is working on ways to crack down on so-called “fingerprinting” by tracking new methods that have arisen since Apple’s Safari browser adopted more aggressive cookie-blocking strategies. In the absence of cookies, fingerprinting uses data about a browser – which type and version is being used, the operating system running it, the plugins it has installed, its time zone, and its language and screen-resolution settings – to make an educated guess that identifies a unique user. Ben Galbraith, director of Chrome Product Management, and Justin Schuh, director of Chrome Engineering, described these methods as “neither transparent nor under the user’s control,” resulting in “tracking that doesn’t respect user choice.”
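Google doesn’t describe any specific fingerprinting technique, but the principle can be sketched roughly: hash a handful of observable browser traits into one stable identifier. The traits and hashing scheme below are assumptions chosen purely for illustration:

```python
import hashlib

def fingerprint(user_agent, timezone, language, resolution, plugins):
    """Combine observable browser traits into one stable identifier.

    No cookie is stored; the same traits always produce the same ID,
    which is why this kind of tracking evades cookie controls.
    """
    raw = "|".join([user_agent, timezone, language, resolution,
                    ",".join(sorted(plugins))])  # plugin order shouldn't matter
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

a = fingerprint("Mozilla/5.0", "UTC-5", "en-US", "1920x1080", ["pdf", "flash"])
b = fingerprint("Mozilla/5.0", "UTC-5", "en-US", "1920x1080", ["flash", "pdf"])
c = fingerprint("Mozilla/5.0", "UTC-5", "en-US", "1366x768", ["pdf", "flash"])
# a and b match (same traits); c differs (different screen resolution)
```

The more distinctive the combination of traits, the more likely the identifier is unique to one user – which is exactly why Galbraith and Schuh call the practice opaque and outside the user’s control.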
Exact launch dates for both efforts haven’t been announced, but Google said it will begin previewing the new cookie controls later this year. The company also says it plans to eventually limit all cross-site cookies to ones that come through secure (HTTPS) connections.
While these controls answer growing consumer demand for greater transparency and control over how data is used to track people across the internet, they raise concerns for ad tech companies, many of which rely on third-party cookies to gather data from multiple sources in order to accurately target ads – especially since the majority of the world’s web traffic (analysts put it between 58% and 62%) comes from Chrome browsers. On the other hand, The Wall Street Journal pointed out in a story published before Google went public with its announcement that the change could strengthen Google’s position within the advertising market, as it will still retain access to this consumer data.
Outside of Chrome, Google Ads will be releasing a browser extension in the coming months that allows users to get information about which companies were involved in targeting an ad to them – such as agencies and online ad exchanges – as well as the data used to target it. The company will also release an API so other companies can provide similar services for their own ads, and will offer extensions for other browsers such as Safari and Mozilla Firefox.
Bringing AI to more venues
Google didn’t announce many hyper-innovative AI tools and features during this year’s keynote; instead, it seemed more focused on making the ones it has easier to use and access.
The next-generation version of Google Assistant, for example, runs on AI models that have been greatly reduced in size – to the point that they can be stored directly on mobile devices. This means the Assistant will no longer need to send requests to remote servers and can process requests up to ten times faster, Google claims.
Google Duplex received a lot of attention last year for its ability to accurately mimic real phone conversations using AI and voice recognition, and this year Google announced that it is bringing Duplex to the web. While this version of Duplex won’t speak, it will use intelligent language recognition to automatically browse the web and fill out forms for users, completing tasks like booking car rentals or movie tickets. An upcoming version of the Android operating system will also bring the Smart Reply feature to third-party messaging apps, offering quick suggestions for easy replies to messages or suggesting apps to launch – such as opening Google Maps for directions when a message mentions travel times or an address.
Finally, the Google Lens image recognition features will soon be getting smarter. Previously able to do things like recognize a business and pull up relevant information about it just by being shown its storefront, Lens will now be able to recognize and contextualize more text-based information. For example, Google Lens can be shown a menu and pull up images of some of the dishes featured on it, or calculate a tip after being shown a receipt. A demo during the keynote also showed that images from partner publications, such as Bon Appétit, can automatically bring up content like recipes.