On October 4th, roughly a year after introducing its branded line of hardware products, Google unveiled the second iteration of its "Made by Google" hardware. It was a major product launch, but more than that, the presenters repeatedly hammered home Google's "AI first" mantra, with proof points in the form of a second-generation branded product line built around AI and machine learning.
Artificial intelligence has moved from the imaginations of writers like Philip K. Dick and Arthur C. Clarke into nearly every aspect of technology. The future of smartphones revolves around technologies like machine learning, artificial intelligence, and augmented reality. We're already starting to see this happen, as most smartphone manufacturers now stress that their devices have AI baked in. But is the hype justified, or are we hearing about AI now because hardware improvements seem to have plateaued? What's clear is that the next revolution lies in software, in bringing actual intelligence to "smart" phones, and that's why AI has to be implemented at every stage of the smartphone experience.
If you're one of the few people who own a Google Pixel phone, you'll soon be able to experience voice recognition without the internet. Google has announced the rollout of "an end-to-end, all-neural, on-device speech recognizer to power speech input in Gboard", the company's keyboard with Google Search baked in. The technology could give Google an edge over Siri and Alexa in convincing people to talk to machines through phones and home speakers: by cutting out the round trip of sending a request to a remote server and waiting for a response, it can deliver answers faster. The company has enabled on-device voice recognition by miniaturizing a machine-learning model so it can do the task on the phone itself rather than handing the job off to a server in the cloud. Google researchers detailed the on-device technique in a paper published on arXiv.org in November called 'Streaming End-to-end Speech Recognition For Mobile Devices'.
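One standard way to miniaturize a model for on-device use is weight quantization: storing each parameter as an 8-bit integer plus a shared scale factor instead of a 32-bit float, cutting memory roughly 4x. The sketch below illustrates the idea in plain Python; it is a generic illustration of the technique, not Google's actual Gboard pipeline, and the function names are made up for this example.

```python
def quantize_int8(weights):
    """Linearly map float weights onto the int8 range [-127, 127].

    Each weight then needs 1 byte instead of 4 (float32), which is
    the basic trick behind shrinking a model to fit on a phone.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantized]

# Toy example: four weights, quantized and restored.
weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Every int8 value fits in one byte; the reconstruction error per
# weight is bounded by the scale (here about 0.01).
```

In practice this loses a little precision per weight, but for large networks the accuracy cost is usually small relative to the 4x reduction in model size and memory bandwidth, which is what makes real-time on-device inference feasible.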
Google on Tuesday kicked off the first day of its 2019 I/O developer conference with a keynote highlighting the company's latest projects and products, as well as its accessibility initiatives powered by machine learning. The more ambitious projects, like developing speech recognition software for people with speech impediments and using machine learning to detect diseases in their early stages, are all built atop the company's research into machine learning and computer vision. While the conference is ostensibly geared toward developers, there was plenty for Google fans and Android users to get excited about. After more than a few leaked images heralding the cheaper Android phones' arrival, Google officially unveiled the comparatively affordable Pixel 3a and 3a XL smartphones. On the surface, they don't look much different from their Pixel 3 and 3 XL counterparts, save for their plastic construction versus the earlier models' glass-and-metal build.
On Wednesday, at the 2017 Google I/O developer conference in Mountain View, CA, Google CEO Sundar Pichai said that the company is rethinking all of its products with a renewed focus on machine learning and artificial intelligence (AI). One recent example of the company's use of machine learning is in Google Home, the company's smart speaker powered by Google Assistant, which uses deep learning to allow multiple users to share a single Google Home unit. Pichai also announced that the machine learning-driven Smart Reply feature is coming to Gmail on iOS and Android as well. One of the big announcements at I/O was Google Lens, a set of vision-based computing capabilities that seeks to understand what a user is looking at with their smartphone's camera, and help them take action based on that information. For example, a user can take a picture of a flower, and Lens will tell the user the kind of flower it is, Pichai said.