Among the first things you might ask the cloud-based, voice-activated Google Assistant inside Google Home is to "tell me about my day." Google Assistant will then rattle off the local weather and upcoming appointments, and connect you to preferred news sources. Until now, though, the standalone, artificial intelligence-infused $129 speaker--Google's rival to Amazon's popular Alexa-powered Echo speaker--couldn't distinguish your voice from that of a spouse, partner or roommate. On Wednesday, Google began rolling out a feature to remedy the situation in households with a shared Google Home unit: up to six people can now connect their accounts to that unit and, following a brief training period, have the speaker recognize each person's voice independently. Google Home can then deliver their commute times, calendars, playlists, and so on--not yours.
Today, Google is starting to roll out a new developer beta of Android Oreo (8.1) to developers. The big highlight is the new Neural Networks API, which brings hardware-accelerated inference to the phone for quickly executing previously trained machine learning models. Bringing these computations to the edge can benefit the end user by reducing latency and network load, while also keeping sensitive data on-device. This comes in handy for letting the apps on your phone do things like classify images or predict behavior from your habits. Google said it designed the Neural Networks API as a "foundational layer" for frameworks like TensorFlow Lite, Caffe2 and others.
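The core idea here--executing a previously trained model entirely on the device instead of calling out to a server--can be illustrated with a minimal sketch. The weights below are made-up values for a tiny two-layer classifier (a real model shipped to a phone would have thousands or millions of parameters, and on Android the heavy lifting would go through the Neural Networks API or a framework like TensorFlow Lite rather than hand-written Python):

```python
import math

# Hypothetical frozen weights for a tiny pre-trained two-layer classifier.
# Illustrative values only; a real on-device model ships far more parameters.
W1 = [[0.5, -0.2], [0.1, 0.9]]   # hidden-layer weights (2 units, 2 inputs)
b1 = [0.0, 0.1]                  # hidden-layer biases
W2 = [[1.0, -1.0]]               # output-layer weights (1 unit, 2 inputs)
b2 = [0.0]                       # output-layer bias

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(x):
    """Run the frozen model locally: no network round-trip is needed,
    and the input data never leaves the device."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    out = [sigmoid(sum(w * hi for w, hi in zip(row, hidden)) + b)
           for row, b in zip(W2, b2)]
    return out[0]

score = predict([1.0, 2.0])  # a probability-like score in (0, 1)
```

This is exactly the latency and privacy argument from the announcement: inference is just a fixed sequence of arithmetic over stored weights, so once the trained model is on the phone, nothing about the input has to cross the network.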
A team of researchers at Google Brain has been working on a project involving three separate neural networks--named Alice, Bob and Eve--that between them learn to create and send encrypted messages: Alice encrypts a message, Bob tries to decrypt it, and Eve, the adversary, tries to eavesdrop. This type of machine learning will become more prominent in the world of AI over the next few years, particularly when it comes to handling private or sensitive information. Two of the researchers involved, Martín Abadi and David G. Andersen, wrote in their paper that "The learning does not require prescribing a set of cryptographic algorithms, nor indicating ways of applying these algorithms: it is based only on a secrecy specification represented by the training objectives." After several thousand simulations, Alice and Bob were each able to send and decrypt messages securely. Eve, on the other hand, was unable to fully decrypt the messages.
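The "secrecy specification represented by the training objectives" can be sketched as a pair of loss functions. The formulation below is a plausible reading of the paper's description, not its exact equations: Eve is rewarded for reconstructing the plaintext, while Alice and Bob are rewarded both for Bob reconstructing it and for Eve doing no better than chance (a per-bit error of 0.5 on random bits). All names and the chance-error penalty term here are illustrative:

```python
def bits_error(plaintext, guess):
    """Mean absolute per-bit difference between a plaintext (0/1 bits)
    and a reconstruction of it."""
    return sum(abs(a - b) for a, b in zip(plaintext, guess)) / len(plaintext)

def eve_loss(plaintext, eve_guess):
    # The eavesdropper simply tries to reconstruct the plaintext.
    return bits_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Bob should reconstruct well (first term), while Eve should sit at
    # chance level: any deviation of her error from 0.5 is penalized
    # (second term, normalized so it ranges from 0 to 1).
    eve_err = bits_error(plaintext, eve_guess)
    return bits_error(plaintext, bob_guess) + (0.5 - eve_err) ** 2 / 0.25
```

Note that no cipher is specified anywhere in these objectives, which is the paper's point: gradient descent on losses like these is what drives Alice and Bob toward an encryption scheme Eve cannot crack, rather than any prescribed cryptographic algorithm.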
Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. At the very least, it's fair to say that I'm guilty of anthropomorphising in parts of the text. I've left the article's content unchanged, because I think it's interesting to compare the gut reaction I had with the subsequent comments from experts in the field. I strongly encourage readers to browse the comments beneath the version of this piece published on Medium.com.

In the closing weeks of 2016, Google published an article which quietly sailed under most people's radar.
Though its devices are often on the simpler side of things (e.g. the Nexus 7 tablet), Google is no pushover when it comes to technological innovation. For one, its Android operating system is the most dominant mobile platform on the market. And it has several fascinating projects lined up for future release, such as Google Glass. Just a few months ago, Google proved this again with its Self-Taught Software, which promises to make Google's operating systems and gadgets even "smarter."