Google I/O 2019 keynote address review
This year’s I/O features a line-up of talks seemingly tailored to my peculiar interests, but this article is dedicated to the ludicrous number of announcements and interesting topics in the keynote address – all covered, of course, from the splendid sunshine of our Hooton offices.
We’re all the heroes of our own story, so it’s no surprise that Google’s annual conference is filled with references to Google’s ‘ongoing effort to make the world a better place’. While this might seem like the ever-spreading tentacles of an evil mega-corporation to some, for those working hard – and undoubtedly with the best of intentions – on their own projects, media portrayals of them as privacy-violating, small-business-suppressing tax dodgers must be difficult to swallow. It must be filtering through, however, as – along with AR and AI wizardry – privacy featured regularly on the agenda.
Keynote – Sundar Pichai
Firstly, Sundar Pichai’s stage presence gets more polished every year – and his keynote, while more low-key and lower energy than those of some other Silicon Valley CEOs, was as good as any I’ve seen, with the pacing and delivery of a TED talk. In addition to trailing some of the announcements to come later, he made a point of emphasising a series of mission statements – only one of which was creepy; I’ll let you decide which.
Search is the priority, he went on to say – showcasing the ‘full coverage’ feature they’re looking to release this year, as well as their aim to index podcasts.
Sadly, I was beaten to my hot take on this by Moz’s Dr Pete who got there about three hours before me with the following:
Google is going to start indexing podcasts and return audio results in SERPs. That means voice answers are going to be direct clips from audio content at some point soon.
— Dr. Pete Meyers (@dr_pete) May 7, 2019
Far from rendering all visits to websites pointless – and using the content supplied free of charge by brands and people alike to create in-search meta-pages that will eventually leave websites little more than data mines for the Google algorithm – this is all part of Google’s efforts to make search more ‘helpful’.
The next section dealt with AR in the real world and what this will mean for search results. Chennapragada announced, to applause, that certain search queries will return AR results – including a demonstration of the results for ‘great white shark’.
Google Lens then made an appearance, with the announcement that it has been used more than a billion times. Chennapragada stated that improvements will allow it to highlight the most popular dishes on a menu (in real time) and make the AR version clickable. It will also be able to turn a bill into a manipulable document – letting you calculate tips, split the total between members of a party and more – and to live-translate or read out signage or any other text in the real world. All of which is part, Chennapragada says, of Google’s ongoing campaign to index the real world. *shiver* The real-world use video, however, was genuinely touching.
These features will all, apparently, begin rolling out next month.
Back to Pichai
At a conference late last year, I used a clip of Pichai’s announcement on Google Duplex to illustrate the possibilities of voice interaction for FMCG brands – and Duplex returned this year, moving beyond restaurant reservations to carrying out actions on your behalf online as well as via voice. Duplex will soon be able to book car rentals, cinema tickets and more through the websites of companies online.
‘Duplex on the web’ will, apparently, require no action from brands – instead all interaction is carried out by Duplex on an otherwise ordinary site.
The Google Assistant proper is coming to mobile devices, with the machine learning that serves the Google Home version set to be available on your phone – so fast, Pichai says, that tapping your phone to use it will seem slow.
“What if we could bring the AI that powers the Google Assistant,” Huffman begins, having already been scooped by Pichai, “to your phone.”
While Pichai may have stolen the thunder from his intro, Huffman followed by demonstrating the 10x faster ‘next generation’ Google Assistant with back-to-back commands and some genuinely impressive, distinct interactions that did not require repeating the ‘Hey Google’ prompt. While it’s not quite the level of conversation I’ve previously said would be required for the voice revolution, it is incredibly close and remarkably quick – and at least 12 months earlier than I expected based on last year’s Duplex demo. The coming ‘paradigm shift’, according to Huffman, will be partly down to the near-zero latency and the ability to execute complex, differentiated tasks. The new assistant will be available with the new Pixel 3, he stated.
While it will need to appear across all Android powered devices before its real impact can be felt, it will be interesting to see the reviews of the new Pixel device as they appear.
Huffman then went on to describe the slightly more Black Mirror-ish ‘Personal References’, which will build a picture of you and the things and people important to you. While he says all of this data is controllable and removable in an updated ‘You’ tab, I’m hoping someone will be looking closely at the next lot of operating-system T&Cs to see whether anonymised data will be scraped from users – or how else this will feed into various Google modelling.
Huffman also announced a new driving mode for the assistant, which places Android phones into a voice-activated mode and will be available from the summer. To rapturous applause, he was also able to announce that alarms set with the assistant can now be turned off without the ‘Okay Google’ command.
Back to Pichai again and AI for Everyone
Ethical AI was next up for the CEO of Cyberdyne Systems – sorry, Google – and he spoke about combating possible biases, including a way to identify the importance of particular concepts in an ML model (referred to as TCAV). He then moved on to various enhancements to Google’s privacy and security offering – though the main development, from my perspective, is the decision to make privacy settings more easily accessible and to make two-step authentication far easier.
Another major announcement, from a privacy perspective, was a machine-learning method Pichai referred to as ‘federated learning’ (first announced by Google back in 2017): a relatively new technique whereby each user receives the latest learning model and influences their local version of it through their activity, before the global model is updated in large batches – not with an individual’s data, but only with what the model has learned. This global model, updated by millions of users, is then re-transmitted to each user for the process to begin again.
This technique – where only the resulting learning is centralised, rather than the data used to produce it – has the potential to vastly improve individual data security while still allowing Google to glean the insights of its billions of users.
This will, Pichai says, also allow the Gboard to keep up to date with modern language usage – automatically following trends in slang, word use and even emoji selection.
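The mechanism described above can be caricatured in a few lines. The sketch below is a toy illustration of federated averaging under my own assumptions (a one-weight linear ‘model’, hand-rolled gradient steps, three fictional users) – it is not Google’s implementation, but it shows the key property: only learned weights leave each device, never the raw data.

```python
# Toy sketch of federated averaging: devices train locally on private
# data; the server averages the returned weights, not the data itself.

def local_update(global_weights, local_data, lr=0.1, steps=5):
    """Train a copy of the global model on one user's private data."""
    w = list(global_weights)
    for _ in range(steps):
        for x, y in local_data:  # toy linear model: predict y = w[0] * x
            grad = 2 * (w[0] * x - y) * x
            w[0] -= lr * grad
    return w  # only the weights leave the device, never local_data

def federated_round(global_weights, all_user_data):
    """One round: every device trains locally, the server averages results."""
    updates = [local_update(global_weights, d) for d in all_user_data]
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(global_weights))]

# Three users whose private data all follow y = 2x, never pooled centrally.
users = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
weights = [0.0]
for _ in range(20):
    weights = federated_round(weights, users)
print(round(weights[0], 2))  # converges towards 2.0
```

In the real system the ‘average’ step is done securely over large batches of devices, which is what lets Gboard track slang and emoji trends without uploading anyone’s keystrokes.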
In addition, the live caption and speech recognition demos and the stories behind them, were also moments that hit me in the feels. It’s difficult to be cynical when the positive impact of a technology is quite so well encapsulated on video (even for me).
Android, now in its tenth version, is used on 2.5 billion active devices, began Cuthbertson, before discussing the coming technological developments pertinent to the new version – including foldable screens, 5G networks and more. She then noted that they have managed to make their cloud-based speech-recognition software small enough to run happily on even the lowest-spec modern smartphone. Android has already been rated highest in 23 of 26 categories by Gartner, she said, before giving a thorough run-through of the updates to privacy and security in the latest Android version – including modular updating for software.
The recommended use of some device controls brought features to my attention that had previously passed me by – mentioned in conjunction with ‘Focus Mode’, a setting that lets people reduce distractions at a tap. As someone prone to distraction, that’s a major benefit – as are the improved parental controls.
Osterloh began by saying Google believes the future lies at the intersection of AI, software and hardware, before running a video on the importance of the home as a preamble to the work Google is putting into creating a ‘helpful home’ under the ‘Nest’ brand name – one that will provide a personalised, seamless experience across devices and for all users.
He also announced the coming ‘Nest Hub Max’ – a voice-operated, screened device with a wide-ranging number of integrations – and said ‘helpful’ so often it was as if he were trying to rank for it in the 90s.
The Pixel 3 was Ellis’ subject, beginning with what was essentially a sales pitch – detailing various tech specs on the phone’s camera and speaker (including an amusing dig at an unnamed competitor).
The place it gets interesting (for me at least) is another tick in the checklist of things I’ve been predicting – AR directions in Google Maps navigation – paving the way for a lot of innovation both in paid and organic search.
Jeff Dean, Senior Fellow at Google AI, followed the Pixel demo, discussing language parsing – including BERT, which allows Google to understand a word using the words before and after it. One of the most interesting and high-potential developments of the last few years, BERT may well be a key aspect of the future of search. He then moved on to TensorFlow and its open-source nature – and what that has allowed people to do outside of Google, exemplified by a doctor who explained (or at least gave a simplified overview of) her incredible AI work in oncology.
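The bidirectional idea behind BERT – scoring a word by its neighbours on both sides, rather than left context alone – can be caricatured without a neural network at all. The toy below is my own illustrative assumption (a tiny hand-made corpus and trigram counts), not how BERT actually works internally, but it shows why looking both ways helps fill a blank:

```python
from collections import Counter

# Caricature of bidirectional context: choose the word for a blank by
# counting how often each candidate appears between the SAME left and
# right neighbours. (BERT does this with a deep transformer, not counts.)
corpus = (
    "she deposited money at the bank . "
    "he sat on the bank of the river . "
    "the bank approved the loan . "
    "fish swim near the bank of the river ."
).split()

def fill_blank(left, right, candidates):
    """Score each candidate by occurrences of the (left, candidate, right) trigram."""
    trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
    scores = {c: trigrams[(left, c, right)] for c in candidates}
    return max(scores, key=scores.get)

# "the ___ of" -- the word to the RIGHT of the blank is what decides it
print(fill_blank("the", "of", ["bank", "loan", "money"]))
```

A left-to-right model only ever sees “the ___”; using the word after the blank as well is the (vastly simplified) trick that made BERT such a leap for search queries.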
Dean closed his section by talking about generalised AI and how it can help with the whole ‘helpful’ thing. It was an assurance that did little to overcome my worries about the paperclip problem, but it was nevertheless interesting to see ethics and helpfulness mentioned in the same breath as AI.
I’ll be writing up as much of the conference as I can this week, so sign up to the blog to make sure you don’t miss anything from I/O 19 – or check out our resource section for more ‘helpful’ content.