MOUNTAIN VIEW, Calif. (KTVU) - Google’s annual developer conference kicked off Tuesday at the Shoreline Amphitheatre with a keynote that packed in the tech giant’s progress in hardware, software, and AI. Google’s AI-powered assistant was central to many of the announcements made today, and last year’s showstopper, Google Duplex, made a comeback as well. The Pixel 3a leak from Monday turned out to be completely accurate.
Security, privacy, and accessibility were other recurring themes. The phrase “on-device machine learning,” meaning no data goes to the cloud, was used repeatedly to reiterate Google’s commitment to privacy and to tout capabilities that its competitors lack. “We are moving from a company that helps you find answers, to a company that helps you get things done,” Google CEO Sundar Pichai said at the keynote. Here’s a look at some of the top highlights from today’s event.
Google’s next-generation Assistant is 10x faster
Google Assistant is used by over a billion people in more than 30 languages across 80 countries. At the keynote, Scott Huffman, VP of Engineering for Google Assistant, showed the next generation of the Assistant, which is coming to Pixel phones later this year. It uses new speech recognition and language understanding models that have been compressed to half a gigabyte, small enough to fit on a phone instead of in a cloud data center. That translates into lower latency and an Assistant that can function without an internet connection. In the demo shown on stage, the Assistant performed multiple tasks nearly instantaneously and seamlessly, making typing and swiping between apps look tedious and outmoded.
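Google has not published the internals of these compressed models, but the on-device idea itself can be illustrated with an open-source offline recognizer such as Vosk, whose small models likewise run entirely on the device with no network round trip. A minimal sketch, assuming a downloaded Vosk model directory named “model” and a 16 kHz mono WAV file named “command.wav” (both hypothetical):

    import json
    import wave

    from vosk import Model, KaldiRecognizer

    # Load a small offline model; nothing is sent to a server.
    model = Model("model")
    wf = wave.open("command.wav", "rb")
    rec = KaldiRecognizer(model, wf.getframerate())

    # Feed the audio in chunks, the way a live microphone stream would arrive.
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)

    # The final hypothesis comes back as JSON.
    print(json.loads(rec.FinalResult())["text"])

Because the whole loop runs locally, there is no network latency anywhere in it, which is the property Google is claiming for the new Assistant.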
Duplex comes to the web
Google Duplex’s restaurant reservation demo was one of the biggest talking points from last year’s event. It made a comeback this year with a feature called Duplex on the web, which helps users complete tasks like renting a car or booking a movie ticket. In the demo, Google Assistant navigated a rental site and filled in trip details and payment information seamlessly from a single command. The feature will roll out later this year on Android phones in the US and UK.
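Google has not said how Duplex parses arbitrary sites; that is the hard, learned part. The mechanical half, scripted form filling, can be sketched with the Selenium browser-automation library. In this sketch the URL and field names are hypothetical placeholders, not a real booking site:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/rental-booking")  # hypothetical page

    # Fill in trip details field by field, acting as an agent for the user.
    driver.find_element(By.NAME, "pickup_date").send_keys("2019-05-20")
    driver.find_element(By.NAME, "dropoff_date").send_keys("2019-05-23")
    driver.find_element(By.NAME, "pickup_location").send_keys("SFO")

    # Submit the form, leaving the confirmation step to the user.
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    driver.quit()

Duplex’s pitch is that it replaces hard-coded selectors like these with models that read the page the way a person would, which is why a single voice command is enough.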
Google Assistant gets a driving mode
Google wants to help you focus on driving, and a new voice-activated driving mode lets you screen and answer calls while keeping your eyes on the road. It automatically pulls your schedule from your calendar and maps your destination. Driving mode is coming to Android phones in a few months.
AR capabilities in Google search
Google Search will get augmented reality capabilities later this month, letting users interact with 3D objects right from the search page. At the keynote, a demo showed a search for “muscle flexion” displaying an animated model of the human body.
Google Lens goes mainstream
Google Lens, first announced at I/O 2017, is an app that uses the phone’s camera to read or identify objects, text, QR codes, and barcodes. At Tuesday’s keynote, a demo showed the app highlighting popular dishes on a restaurant menu. Google also positioned Lens as a tool for people who cannot read, reading text and signs aloud for them. Initially available only on Pixel phones, Google Lens is now coming to Google’s search app and will work on entry-level phones.
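Lens layers scene understanding on top of text recognition, and the text-reading step alone resembles classic optical character recognition (OCR). As a loose illustration, and not Google’s pipeline, the open-source Tesseract engine can pull text from a photo via the pytesseract wrapper; “menu.jpg” here is a hypothetical photo of a restaurant menu:

    from PIL import Image
    import pytesseract

    # Run OCR on the photo; requires the Tesseract binary to be installed.
    text = pytesseract.image_to_string(Image.open("menu.jpg"))
    print(text)  # raw recognized text; Lens adds ranking and read-aloud on top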
AI-driven accessibility features
Google announced several initiatives under its AI for Social Good program. Project Euphonia is using AI to help people with impaired speech communicate. Google has partnered with non-profit organizations to record the voices of people who have ALS, a neurodegenerative condition that can result in the loss of the ability to speak and move. The goal is to reliably transcribe words spoken by people with these kinds of speech difficulties.
“The reason why speech recognition has gotten so great recently is because we've improved the technology by collecting sample voices from people. The speech recognition models get better and better because the more examples it sees the better it gets,” said Julie Cattiau, a product manager at Google AI, speaking to KTVU. “But for people who have certain types of disabilities that makes their voice impaired, we may not have that many voice samples. We want to inspire more people to contribute to the research, and read some simple phrases that we give them so that we can then make the model more robust to their speech,” she added.
Live Relay is a feature to help people who are deaf or hard of hearing take phone calls. It uses speech recognition and text-to-speech technology to let the phone speak on the user’s behalf. Another initiative, called Project Diva, uses Google Assistant to give people with disabilities more independence and autonomy.
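Live Relay combines two standard building blocks: speech-to-text for the incoming audio and text-to-speech for the user’s typed replies. The speaking half can be sketched with the offline pyttsx3 library; this is a toy stand-in, not Google’s implementation, and the reply text is made up:

    import pyttsx3

    # Initialize the platform's offline text-to-speech engine.
    engine = pyttsx3.init()

    # In Live Relay, this string would be what the user types mid-call.
    typed_reply = "Sorry, I can't talk right now. Can I call you back?"

    engine.say(typed_reply)   # queue the reply for synthesis
    engine.runAndWait()       # block until the audio has been spoken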
Live Caption, an Android Q feature demoed at the keynote, is aimed at the hundreds of millions of people worldwide who are deaf or hard of hearing. The feature uses on-device machine learning to caption videos, podcasts, and audio messages in real time.
Android Q focuses on security and privacy
The latest beta version of Android is being rolled out to 21 devices from 13 brands. Android Q has 50 new features and changes focused on security and privacy, including a dedicated privacy section in the settings menu with granular controls for each app. As part of its Digital Wellbeing initiative, Google announced Focus Mode, which lets you silence apps you don’t want to be distracted by.
Android is at the forefront of hardware innovations, said Sameer Samat, Vice President, Product Management at Google, highlighting Android Q’s support for 5G and foldable phones. “This is a really exciting area of computing. And while it's early, it might be the next evolution of mobile computing. These are screens that can actually fold and allow you to create devices that have brand new form factors - it can be a phone in one moment and it could be a tablet in another moment. So, we work very closely with our partners to build in a lot of functionality into Android Q,” he told KTVU.
Home Hub becomes Nest Hub
Google rebranded its Home Hub family of products as Nest Hub, five years after it acquired a company called Nest Labs for $3.2 billion. The Nest Hub Max, announced today, is a smart home device with a 10-inch display, a smart camera, and speakers. Google is positioning it as your kitchen TV. The device responds to gestures and can be used for a variety of tasks: making video calls, viewing photos, keeping an eye on your home, streaming videos, listening to music, and getting personalized messages. The Nest Hub Max will be available this summer for $229 in the US.