Google just concluded its I/O 2017 keynote, where executives led by CEO Sundar Pichai laid out the company's future roadmap for Android, Google Assistant, Google Home, virtual reality, and much more. Rather than trying to wow consumers or the press, the company has settled into a pattern of using I/O to tell developers what it's doing (and what it wants to do). You might call that boring, but that would be a misguided notion: there was much to glean from Pichai and the rest of the Googlers who presented onstage. So here are the 10 most important takeaways from today's I/O keynote. Android's mobile dominance hasn't stopped growing.
At Google's 2017 I/O keynote today, CEO Sundar Pichai introduced new products and shared more information about the company's "AI first" future. Here's a running list of what happened that matters. Google is rethinking "all" of its products for an AI-first world. That's the high-level promise from Pichai, and the change Google must successfully navigate to continue its dominance. Examples: Google Search now ranks differently using machine learning, Google Maps Street View automatically recognizes signs, video calling uses machine learning for low-bandwidth situations, etc. Google can now use your camera as an input device.
There were whoops and cheers from developers as Google announced the incremental ways it is strengthening its grip on many aspects of people's lives at its annual developer conference, Google I/O. There were no jaw-dropping major product launches, and no executives proclaiming a utopian vision of the future (ahem, Mark Zuckerberg). Instead there was a showcase of features, powered by artificial intelligence, designed to make people more connected – and more reliant on Google. "We are focused on our core mission of organising the world's information for everyone and approach this by applying deep computer science and technical insights to solve problems at scale," said CEO Sundar Pichai. By combining the personal data harvested from its users with industry-leading (and human-Go-player-beating) artificial intelligence, Google is squeezing itself into everyday interactions where it hasn't been before, filling in the gaps and oozing into new territory like a sticky glue that is becoming harder and harder to escape.
If you follow tech news often, you'll be more than aware of the promise offered by artificial intelligence (AI) and machine learning. Often, though, it feels like a far-away goal. It will get there, but right now it's primitive. At Google's annual developer conference, held this week near its Mountain View headquarters, the company showed off some of the best practical applications of AI and machine learning I've seen yet. They may not make your jaw drop - or, thankfully, put you out of a job - but they represent the kind of incremental change that shows how Google is putting its immense computing power to work.
Google has unveiled a raft of new features for Android, including a radical image recognition app giving phones 'eyes'. Called Google Lens, it will be able to do everything from recognising flowers in a garden to translating menus in a foreign language. The firm also unveiled a new iOS version of its smart assistant for the iPhone, taking on Siri, along with updates to its Home speaker turning it into a hands-free phone and a smart reply service for Gmail. CEO Sundar Pichai first revealed that over 2 billion people are now using Android, and said the future was about speech and vision. 'We are clearly at an inflection point with vision, so we are announcing Google Lens.'