Google I/O 2018 Opening Keynote Live-Blog (10am PT)
by Andrei Frumusanu on May 8, 2018 11:30 AM EST

11:43AM EDT - Hello everybody and welcome to the Shoreline Amphitheater in Mountain View, where we'll be live covering Google's I/O 2018 opening keynote.
11:46AM EDT - The event will be starting at 10am PT - a little over an hour from now, so stay tuned.
12:51PM EDT - The keynote should be kicking off in 10 minutes.
01:00PM EDT - The event is starting.
01:01PM EDT - Intro video with cubes - I wonder what they represent.
01:02PM EDT - Lots of pixels!
01:03PM EDT - Google CEO Sundar Pichai is taking the stage.
01:04PM EDT - Over 7000 people are attending the event in the theater.
01:04PM EDT - Sundar addresses a major bug in one of the core products - a reference to the Cheeseburger Emoji.
01:05PM EDT - It's been an extraordinary year on all fronts - an inflection point in computing.
01:06PM EDT - Google is focused on teaching digital skills to help create the developers needed for rising technology demands.
01:07PM EDT - AI enables solving problems in new ways. Last year, Google AI was announced to bring AI to everyone.
01:07PM EDT - Google is opening AI centers around the world.
01:08PM EDT - Healthcare is one of the most important fields AI is going to transform.
01:08PM EDT - Machine learning assisted diagnosis offers a lot of opportunity.
01:09PM EDT - AI can help predict medical events before a patient becomes sick.
01:10PM EDT - In general there is a lot of enthusiasm in the medical space for AI/machine-learning diagnosis, as it's one of the areas where it can vastly outperform human diagnosis.
01:11PM EDT - Next example for AI - when two people talk over each other, machine learning can separate the audio streams using audio analysis and visual cues.
01:14PM EDT - Now rolling a video on Gboard - Google's keyboard - and how adding Morse code support improved accessibility.
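Conceptually, a Morse input method just maps timed dot/dash sequences to letters, committing a character on each pause. A minimal, purely illustrative sketch - not Gboard's actual implementation:

```python
# Toy Morse decoder: letters are space-separated dot/dash codes,
# and '/' marks a word break (standing in for a longer pause).
MORSE = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
    "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
    ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
    "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
    "--..": "z",
}

def decode(sequence):
    """Decode space-separated Morse letters; '/' separates words."""
    words = []
    for word in sequence.split("/"):
        letters = [MORSE.get(code, "?") for code in word.split()]
        words.append("".join(letters))
    return " ".join(words)

print(decode("--. --- --- --. .-.. . / .. ---"))  # -> google io
```

A real accessibility keyboard would additionally handle key timing and offer word predictions, but the core mapping is this simple table lookup.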
01:15PM EDT - GMail's new refreshed design is also bringing AI features.
01:16PM EDT - SmartCompose suggests phrases as you start writing an email - you just press Tab to have it autocomplete.
01:16PM EDT - It analyses the context of the email conversation for its predictions. It will be rolling out publicly at the end of this month.
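As a toy illustration of the tab-to-complete idea - the real Smart Compose uses a neural language model conditioned on the thread, while this stand-in just ranks known phrases by frequency behind the typed prefix:

```python
from collections import Counter

class PhraseSuggester:
    """Suggest the most frequent known phrase extending a typed prefix."""
    def __init__(self, corpus):
        self.phrases = Counter(corpus)  # phrase -> times seen

    def suggest(self, prefix):
        # Candidates: (count, phrase) pairs that extend the prefix.
        matches = [(n, p) for p, n in self.phrases.items()
                   if p.startswith(prefix) and p != prefix]
        if not matches:
            return None
        return max(matches)[1]  # highest count wins

s = PhraseSuggester(["looking forward to seeing you",
                     "looking forward to it",
                     "looking forward to it",
                     "thanks for the update"])
print(s.suggest("looking forward"))  # -> looking forward to it
```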
01:17PM EDT - Every day over 5B photos are viewed by users. With a new feature called Suggested Actions, a new set of actions is presented based on the image contents.
01:17PM EDT - For example, it offers to share the picture with the users recognized in it.
01:18PM EDT - Say the image is underexposed - the app will offer options to correct this.
01:18PM EDT - If you take a picture of a document, it will convert it into a PDF and clean it up.
01:19PM EDT - An astounding new feature is colorization of black & white pictures. This is a key demonstration of machine learning applied to a very real use-case.
01:19PM EDT - Sundar is announcing a new TPU generation - TPU3.
01:20PM EDT - The new TPU3 is said to be 8x more performant than TPU2 and will require liquid cooling in the datacenter.
01:20PM EDT - Over 100Petaflops ML capability currently deployed.
01:21PM EDT - Wavenet models the human voice to create a new synthesized voice.
01:21PM EDT - This brings 6 new voices to the Google Assistant - all sounding very natural.
01:23PM EDT - John Legend's voice is coming to the Assistant - Wavenet makes it possible to capture his voice and synthesize it. Coming later this year.
01:23PM EDT - New video demonstrating Google Assistant scenarios.
01:24PM EDT - Scott Huffman taking the stage.
01:25PM EDT - Two years ago the Assistant was announced at I/O - today it's available on 500M devices.
01:25PM EDT - It's available in over 40 auto brands and on 5000 Assistant-enabled devices.
01:25PM EDT - Last year it launched in India and has since seen 3x growth in adoption.
01:26PM EDT - We're now covering new enhancements to the Assistant.
01:26PM EDT - Natural conversation is an important aspect.
01:27PM EDT - There's no longer a need to say "Hey Google" every time to enable it - it will naturally follow the conversation.
01:27PM EDT - The Assistant now understands when you are talking to it and when you're talking to somebody else.
01:28PM EDT - The feature is called Continued Conversation and will be rolling out in the near future.
01:29PM EDT - "Multiple Actions" follows the conversation and stitches together several requests - sentences like these are very hard to parse.
01:31PM EDT - "Pretty Please" is a feature that makes the Assistant encourage polite requests.
01:32PM EDT - Visually assistive features now coming into focus.
01:32PM EDT - We're looking at the first demo - a Lenovo smart display.
01:33PM EDT - The device is controlled by voice, but because it has a screen it makes for a much richer assistant experience.
01:33PM EDT - YouTube & YouTube TV will be added onto Smart Displays.
01:34PM EDT - There are many more ways that Smart Displays can make home actions easier.
01:35PM EDT - The smart display experience has inspired improvements to the phone-based Assistant as well.
01:35PM EDT - The assistant takes full advantage of the screen for content response.
01:36PM EDT - Demonstration of food pickup & delivery experience. I'm wondering how far this will actually be adopted outside of small areas.
01:37PM EDT - The new visual Assistant will launch on Android later in the summer and on iOS later in the year.
01:38PM EDT - Google Assistant will be integrated into Google Maps this summer.
01:38PM EDT - Sundar taking back the stage.
01:38PM EDT - The mission of Assistant is to help you get things done.
01:38PM EDT - Making calls is a core aspect of this.
01:39PM EDT - Connecting users to businesses is also a core aspect of this - AI can help by connecting you with businesses that do not have an online presence.
01:40PM EDT - For example, making an appointment with a hairdresser through the Assistant - the Assistant will actually call the business to make a real appointment.
01:41PM EDT - This is very impressive, as it demonstrates a full-fledged voice "AI" that's able to hold conversations.
01:42PM EDT - Another demo shows a restaurant reservation that doesn't go as one would expect - the Assistant still managed to navigate the conversation successfully.
01:43PM EDT - The feature will be rolling out as an experiment in the coming weeks.
01:44PM EDT - Giving users back time is a core goal for Google.
01:44PM EDT - "Digital well-being" is being addressed now.
01:44PM EDT - People are anxious about missing information - "FOMO", the fear of missing out.
01:45PM EDT - JOMO - Joy of missing out.
01:45PM EDT - Google understands your habits and focuses on the content which you are interested in.
01:46PM EDT - I wonder if this creates an echo-chamber in terms of information....
01:46PM EDT - Family Link enables managing kids' screen time.
01:47PM EDT - This is part of the Digital Wellbeing site rolling out soon.
01:47PM EDT - Sundar is addressing the issue of journalism - fake news being brought up?
01:48PM EDT - "Support quality journalism" - We agree!
01:48PM EDT - Google News initiative invests $300M over the next 3 years to collaborate with news outlets to improve the quality of news.
01:48PM EDT - Google has a responsibility to provide news.
01:49PM EDT - AI has been used to improve the relevancy and quality of the news sources.
01:49PM EDT - The new Google News - Trystan Upstill taking the stage to explain more.
01:49PM EDT - With the new Google News, three key aspects are being improved.
01:50PM EDT - As soon as you open GN - the top five stories are always the stories that you care most about.
01:50PM EDT - Google pulls from various sources - for example local news outlets, as it knows your location.
01:51PM EDT - The headlines category contains the large worldwide stories.
01:52PM EDT - The new GN app is built with the new Material Theme.
01:52PM EDT - There's a new visual format called Newscasts.
01:52PM EDT - It uses natural language processing to bring news content on a topic together in a visually expressive presentation.
01:53PM EDT - Understanding the full story - it takes a lot of effort to bring together many sources and sum up a story fully.
01:54PM EDT - Google organizes news stories based on their "temporal co-locality".
01:55PM EDT - It uses its real-time event analysis to create a timeline of a story and its events.
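A rough sketch of the grouping idea behind such timelines, assuming articles arrive as (timestamp, headline) pairs and a simple time-gap threshold stands in for Google's actual event analysis:

```python
from datetime import datetime, timedelta

def build_timeline(articles, gap=timedelta(hours=12)):
    """Group (published_at, headline) tuples into clusters of events:
    articles published within `gap` of the previous one join its cluster."""
    ordered = sorted(articles)
    clusters, current = [], [ordered[0]]
    for item in ordered[1:]:
        if item[0] - current[-1][0] <= gap:
            current.append(item)
        else:
            clusters.append(current)
            current = [item]
    clusters.append(current)
    return clusters

articles = [
    (datetime(2018, 5, 8, 10), "Keynote kicks off"),
    (datetime(2018, 5, 8, 12), "Android P beta announced"),
    (datetime(2018, 5, 10, 9), "Hands-on with the beta"),
]
timeline = build_timeline(articles)
print(len(timeline))  # -> 2 clusters: keynote day, then the follow-up
```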
01:56PM EDT - Full coverage is not filtered based on the user's habits, but rather something that will be identical to all users universally.
01:57PM EDT - Newsstand allows you to subscribe with Google to paid publications.
01:57PM EDT - Google built this with over 60 publishers around the world and it will be rolled out in the coming weeks.
01:57PM EDT - It's an interesting take on aggregating paid content.
01:58PM EDT - It will be rolled out on Android and iOS in 127 countries starting today.
01:58PM EDT - We're now moving on to Android.
01:59PM EDT - Introduction video showcasing Android's strengths over the years.
01:59PM EDT - Android's "openness" is a key aspect that Google's focusing on here.
02:00PM EDT - Dave Burke taking the stage.
02:00PM EDT - 10 years of Android - The T-Mobile G1 was the first phone with Android.
02:01PM EDT - Partners have launched tens of thousands of phones, used by billions of people.
02:01PM EDT - The growth of Android has helped to fuel the shift from desktop towards mobile.
02:01PM EDT - Android P.
02:01PM EDT - P is an important step towards Google's vision of AI in the mobile space.
02:02PM EDT - On-device machine learning can enable a lot of features by having the OS learn your habits.
02:02PM EDT - Battery life is a key aspect. "Adaptive Battery" uses machine learning to deliver a more consistent battery experience.
02:03PM EDT - It will analyse which apps you use and, depending on the usage patterns, it will only enable app wakeups for those relevant apps.
02:03PM EDT - This results in 30% fewer wakeups on average.
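As a loose illustration of the concept - Google's implementation is a learned on-device model, while this hypothetical stand-in simply permits background wakeups only for the most frequently launched apps:

```python
from collections import Counter

def allowed_wakeups(launch_log, top_n=2):
    """Return the set of apps that keep background wakeup rights,
    based on how often they appear in the launch history."""
    counts = Counter(launch_log)
    return {app for app, _ in counts.most_common(top_n)}

log = ["mail", "chat", "mail", "game", "mail", "chat"]
print(sorted(allowed_wakeups(log)))  # -> ['chat', 'mail']
```

Apps outside the allowed set would have their background work deferred, which is where the reduction in wakeups comes from.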
02:03PM EDT - Moving on to screen brightness - most smartphones adapt brightness based on the environmental lighting.
02:04PM EDT - P introduces adaptive brightness, which learns how you like to adjust the brightness and adjusts it by itself based on your habits.
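One way to picture this - a hypothetical sketch rather than Google's actual model: bucket the ambient-light readings and remember the user's manual brightness choices per bucket, then predict the average choice instead of a fixed ambient-to-brightness curve:

```python
from collections import defaultdict

class AdaptiveBrightness:
    def __init__(self, bucket_size=100):
        self.bucket_size = bucket_size          # lux per bucket
        self.history = defaultdict(list)        # bucket -> chosen brightness %

    def record(self, lux, brightness):
        """User manually set `brightness` at ambient light `lux`."""
        self.history[int(lux // self.bucket_size)].append(brightness)

    def predict(self, lux, default=50.0):
        """Predict a brightness for `lux` from past choices in its bucket."""
        samples = self.history.get(int(lux // self.bucket_size))
        if not samples:
            return default
        return sum(samples) / len(samples)

ab = AdaptiveBrightness()
ab.record(80, 30)   # dim room, user dials brightness down
ab.record(90, 20)
print(ab.predict(85))  # -> 25.0
```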
02:05PM EDT - The Google Launcher introduced predicted apps - the apps you're likely to want to use next.
02:05PM EDT - P now introduces App Actions, which do something similar but apply to actions within apps.
02:05PM EDT - This requires developers to add these actions to their apps.
02:06PM EDT - This provides deep links into the apps from the launcher.
02:06PM EDT - "Slices" are interactive snippets of app UIs.
02:07PM EDT - Early access for App Actions and Slices begins next month in the developer previews for P.
02:08PM EDT - Google introduces "ML Kit" which is a new set of APIs for machine learning.
02:08PM EDT - This enables both on-device ML as well as Google's cloud based services.
02:08PM EDT - It's based on TensorFlow Lite and uses the NN API - but it will also be available on iOS.
02:09PM EDT - One of the key goals over the last few years is to simplify the UI of Android to expand it to more users.
02:09PM EDT - Google is demonstrating new features on Dave's phone.
02:10PM EDT - The demonstration here is about the transition from the navigation bar towards a simplified home button with contextual actions and gestures.
02:11PM EDT - This is similar to the iPhone X's bar navigation - at least Google's take on it.
02:11PM EDT - The new navigation system is said to be a lot faster.
02:12PM EDT - The volume slider now changes the media volume by default in P. Seems to be another feature that Samsung had first introduced on Android O.
02:13PM EDT - Sameer Samat taking the stage to demonstrate "Digital Wellbeing"
02:14PM EDT - "Helping people with their digital wellbeing is more important to us than ever"
02:15PM EDT - Android P will show a dashboard of the amount of time you spent in applications, with a usage history.
02:16PM EDT - YouTube will be showing total watchtime across mobile and desktop.
02:17PM EDT - P will introduce time limits on apps - it will give you a notification and grey out the app after you exceed the limit. The goal here is to limit your usage.
02:18PM EDT - "Shush" will disable all notifications on the phone when you turn the screen downwards. It will still allow notifications and calls from important people.
02:19PM EDT - "Wind Down" mode puts the phone into silent mode and greys out the screen after a set time, to help you go to bed.
02:19PM EDT - The Android P beta is announced today and will be available on the latest Google Pixels.
02:20PM EDT - Alongside the Pixels there are 7 OEMs that will provide beta firmware builds.
02:21PM EDT - OnePlus, Nokia, Vivo, Sony, Xiaomi, Oppo and Essential are part of the beta program.
02:21PM EDT - Jen Fitzpatrick taking the stage to cover computer vision.
02:21PM EDT - Google Maps covers over 220 countries.
02:22PM EDT - Advancements in AI will now make it possible to automatically add new streets and buildings.
02:24PM EDT - A new updated Google Maps will show content and events in your area.
02:24PM EDT - This new tab in maps is called "For you", with new places to visit and personal recommendations such as trending restaurants.
02:24PM EDT - It encourages visiting new places.
02:25PM EDT - Locations are scored by a system called "Your match", based on your history, explaining why each place is recommended for you.
02:26PM EDT - Group matching is a feature to organize events with your friends. You can share a shortlist of places, and your friends will vote on locations.
02:29PM EDT - Google now explaining location determination based on the phone's camera.
02:30PM EDT - This is an AR type feature which uses the camera to analyse the location and correlate that with the location on the map.
02:30PM EDT - It will overlay interesting information such as business locations over the camera viewfinder.
02:30PM EDT - Basically this really is proper augmented reality for Google Maps.
02:32PM EDT - Google Lens is also gaining these capabilities - starting next week, Lens will be natively integrated into OEM camera applications. A list of vendors is taking part - Samsung seems to be notably missing here.
02:33PM EDT - Lens enables OCR with copy-and-paste functionality.
02:33PM EDT - It will automatically look up the text that you're selecting and give you contextualized content.
02:35PM EDT - Last thing: making Lens work in real-time.
02:36PM EDT - All the Lens features will be coming in the next few weeks.
02:37PM EDT - John Krafcik, CEO of Waymo (Alphabet's self-driving car business), is taking the stage.
02:38PM EDT - Waymo is the only company in the world with a fleet of self-driving cars.
02:38PM EDT - Video demonstrating the experience in a trial in Phoenix, AZ.
02:40PM EDT - Phoenix will be the first public program where people can call Waymo cars to get around.
02:41PM EDT - Waymo uses deep learning and machine learning for things such as pedestrian detection.
02:42PM EDT - This allowed for a 100x reduction in the error rate.
02:43PM EDT - Explaining "Perception" and "Prediction" AI uses in autonomous driving.
02:44PM EDT - Dimitri is explaining the use of ML on combined sensor data to enable high-precision perception and classification of objects.
02:45PM EDT - Waymo cars have driven 6 million miles and have gained a lot of data in terms of real scenarios.
02:46PM EDT - The neural networks are trained on TPUs in Google's datacenters.
02:46PM EDT - In simulation, Waymo has "driven" over 5B miles.
02:47PM EDT - Google's demonstrating how ML allows for filtering sensor noise - such as snow picked up by the LIDAR system.
02:49PM EDT - The keynote wrapped up - Thank you everybody for being with us. We'll be covering more news and sessions over the following days.
02:49PM EDT - Thanks!
evilspoons - Tuesday, May 8, 2018 - link
It'd be helpful if you could reverse the direction of updates (newest at bottom). Reading updates that are written top-to-bottom in a bottom-to-top order is really confusing, at least to me.

jordanclock - Tuesday, May 8, 2018 - link
It makes sense when you're getting the feed of updates and then switching after the live blog is over.

Ryan Smith - Tuesday, May 8, 2018 - link
And that's exactly what we do. All of this is flipped at the end. =)

WinterCharm - Tuesday, May 8, 2018 - link
Apple should kill Siri and rewrite her from the ground up. They're so far behind it's not even funny.

Sivar - Wednesday, May 9, 2018 - link
That would put them even further behind. They should dump Siri and admit defeat. Cortana is more useful to me, but they should work a deal with Google as both are behind and not catching up. I am frequently astonished at how consistently terrible Siri is; it alone is reason to dump my iPhone.

peevee - Thursday, May 10, 2018 - link
"Cortana is more useful to me, but they should work a deal with Google as both are behind and not catching up."
Yeah, nothing stimulates progress more than eliminating competition.

prophet001 - Tuesday, May 8, 2018 - link
You're not taking over my life Google!

SirPerro - Wednesday, May 9, 2018 - link
Nobody forces you to use Google products, Android, or the awesome Google Assistant black magic fuckery. If you want to stay away from all that you are very free to do so. Heck, you are even free to do it while using Android if you want.

prophet001 - Thursday, May 10, 2018 - link
I know. I'm just letting you guys know that I know. :D

III-V - Tuesday, May 8, 2018 - link
The ML retina thing was really cool