Google I/O 2019 developer keynote review
Part motivational speech, part virtually impenetrable dev-speak, the developer keynote was tough to follow but a worthwhile watch
Between a programming language I’d genuinely (and, probably, embarrassingly) never heard of and a torrent of announcements, the developer keynote was a dense, information-heavy watch, but it has left me with a lot to think about. In truth, I’m not the target audience for this: my grasp of code is limited. But even though I was clinging on by my fingertips in places, there was a lot of information with the potential to influence how we work in search and digital marketing.
I can’t find the gentleman’s name, but the intro for the developer keynote focused on the broad nature of the developer community: the commitment that community has to self-improvement, the benefit it provides Google, and the expansive number of new and emerging areas that make it a great time to be a developer. The most interesting part of this, though its delivery was passionate and heartfelt, was a reference to ‘ambient computing’ – a phrase I’d not come across before, but which is a nice way of describing an era of pervasive technology.
The main announcement here, however, was that not only is Android app development going to be easier thanks to this move, but that Android Studio was to be upgraded to enhance the user experience. The new version (v3.5) was released on the day of the talk and is now available to download.
In addition, there will be a new ‘updateIfRequired()’ extension for Kotlin developers, allowing them to push a full-screen update prompt when they make changes to their app (with configurable sensitivity, determining the level of update required to trigger it), as well as a ‘flexible update API’ which will let developers allow users to download an update in the background while they continue to use the app.
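The talk didn’t go into implementation detail, but the gating idea behind the two options can be sketched as a simple decision: how far behind the installed version is determines whether the user gets a background (‘flexible’) download or a full-screen (‘immediate’) prompt. The class name, thresholds and mode values below are my own illustrative assumptions, not the API Google described:

```java
// Hypothetical sketch of update gating. This is NOT the API from the
// talk; the class name, thresholds and modes are illustrative only.
public class UpdatePolicy {
    public enum Mode { NONE, FLEXIBLE, IMMEDIATE }

    // Decide which prompt to show from how many releases behind the
    // installed app is: small gaps get a quiet background download,
    // large gaps get a full-screen update prompt.
    public static Mode decide(int versionsBehind) {
        if (versionsBehind <= 0) {
            return Mode.NONE;      // already up to date: no prompt
        }
        if (versionsBehind < 3) {
            return Mode.FLEXIBLE;  // minor drift: update in the background
        }
        return Mode.IMMEDIATE;     // badly out of date: block until updated
    }

    public static void main(String[] args) {
        System.out.println(decide(0)); // NONE
        System.out.println(decide(1)); // FLEXIBLE
        System.out.println(decide(5)); // IMMEDIATE
    }
}
```

The appeal for developers is that this sensitivity decision sits in one place, rather than being scattered through the app.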
Turkstra hit the ground running, expounding on the leaps Google has made to remove friction from interaction with the Google Assistant. With that in mind, he moved on to announce that ‘How to’ content will now be eligible for a new kind of rich result.
Video will also get a new ‘How To’ template, allowing video creators to build interactive how-to videos for Assistant devices with screens.
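For search marketers, the interesting mechanical detail is that How-to rich results are driven by schema.org’s HowTo structured-data type, typically added to a page as JSON-LD. A minimal sketch – the tyre-changing content is my own illustrative example, and real markup supports further properties – looks like this:

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to change a bicycle tyre",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Remove the wheel",
      "text": "Release the brake and open the quick-release lever."
    },
    {
      "@type": "HowToStep",
      "name": "Fit the new tyre",
      "text": "Seat one bead, insert the tube, then work the second bead onto the rim."
    }
  ]
}
```

Pages already publishing step-by-step content are well placed here; the work is mostly in marking up what already exists.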
Turkstra then moved on to announce four ready-to-go categories of ‘Actions’ that will allow the Assistant to interact directly with apps.
The focus of much of the talk, if I can sum it up as such, was on improving the ability of the Google Assistant to interact with apps without expecting developers to do the bulk of the work. While app developers will need to do some work to implement the new options, the support system Google has put in place is an indication of how seriously it is trying to drive the voice search side of its offering forward, which should give search marketers, as well as web developers, some serious food for thought.
Oppenheimer began her section of the keynote by restating Google’s commitment to updating Chrome every six weeks, as well as to the open-source Chromium project. She then moved on to the progress Google has made in the last 12 months, specifically a 50% reduction in load time (due mostly to improvements in how the V8 engine parses and executes JavaScript). In addition, Chrome will support a new ‘loading’ attribute for the image tag, which has the potential to speed up sites without the need for JavaScript-based lazy-loading solutions.
When Chrome sees this markup, it will take connection speed and other factors into account to decide when to load the images in question. While it’s not the ideal solution for speed issues, it’s a great fix if you’re looking for an additional action you can take, especially if most of your traffic comes from Chrome users (which, for many sites, will be the case). On the subject of speed, Oppenheimer also announced the addition of ‘performance budget support’ for Google Lighthouse, as well as improvements to PWAs across devices.
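Performance budgets in Lighthouse are declared in a budget.json file and checked on each audit. A minimal sketch, with budget numbers that are my own illustrative choices rather than recommendations (resource sizes are in KiB):

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "total", "budget": 500 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

The Lighthouse CLI can then be pointed at the file via its budget-path option and will flag pages that exceed the limits, turning a vague ‘the site should be faster’ into a pass/fail check.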
Finally, Oppenheimer moved on to ‘user trust and safety’, announcing Google’s intention to make privacy settings easier to control, a new website, web.dev, to help developers keep up to date with best practice, and the introduction of Linux support for Chromebooks.
Talking AI and machine learning, Vijayakumar discussed Google’s decision to make various AI and ML tools available to developers, so that anyone can access common ML features as and when they need them.
The access that Google is giving to Cloud AutoML and various other complex machine learning models, without the need to write a line of code, is, frankly, incredible. While I know this access has been available for a while – I was listening to the Sleepwalkers podcast yesterday evening, which discussed a few interesting experiments carried out using Google’s ML tools – the potential for both scientific and general advancement is amazing.
While it’s an area I’d love to look at more closely, a lot of this was sadly at a level above me – but, while I may not know enough to know exactly what some of this meant, I know enough to think that those who do know enough would be rightly excited by it.
Johnson was up next to talk about the app development platform Firebase and how Google is allowing developers easy access to ML through it. She was quickly joined by team-mate Stella who, in a Blue Peter ‘here’s one I made earlier’ kind of way, used the platform to quickly build an app that correctly identified the breed of a Border Collie stuffed toy. The speed and seeming ease are almost certainly not representative of the work involved, but even if it’s half as easy, it’s a phenomenal achievement.
Following this demonstration, Johnson took back the stage to announce that Google is rolling out performance monitoring for web apps, with live metrics becoming available to developers.
Much of this focused on the ever-present theme of speeding up the web, with the metrics measuring various load times to allow far greater oversight and, therefore, a greater capacity to improve the speed of apps.
Last up came Seligman, who leads developer relations, to speak about Google’s response to developer feedback, give an overview of the improvements made to I/O as a result, and announce that the Flaming Lips would be performing at the conference (something I’ll have to take up with our management team here for our own Benchmark Conference). He then moved on to announce and demo a web version of the app development framework Flutter, allowing a single code base to power both app and web versions of a product.
Ending much as the keynote started, Seligman proceeded to thank the community of gathered developers again.
While, as I’ve admitted, my technical abilities are way below the level needed to fully understand much of the higher-level discussion that took place throughout this keynote, it was interesting to see the connective tissue of speed, security, machine learning and AI joining the developer keynote to the main keynote, and both to the future of search and digital marketing. Google seems to have set its sights firmly on these areas across the board, which suggests more changes to come for search.