Today, Google is starting to seed a new developer beta of Android Oreo (8.1) to developers. The big highlight is the new Neural Networks API, which brings hardware-accelerated inference to the phone for quickly executing previously trained machine learning models. Bringing these calculations to the edge reduces latency and network load while keeping more sensitive data on-device. This comes in handy for apps that need to do things like classify images or learn from your habits to predict behavior. Google said it designed the Neural Networks API as a "foundational layer" for frameworks like TensorFlow Lite, Caffe2 and others.
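To make "executing previously trained models" concrete, here is a minimal sketch of on-device inference: a tiny classifier's forward pass run locally, with no network call and no training. The weights and network shape are made up for illustration; in practice they would be exported from a framework like TensorFlow Lite or Caffe2, and the matrix math below is the kind of work the Neural Networks API hands off to dedicated hardware.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical pre-trained weights for a tiny 3-class classifier.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 3)), np.zeros(3)

def infer(features):
    """Forward pass only -- inference runs on-device, training already happened."""
    hidden = relu(features @ W1 + b1)
    return softmax(hidden @ W2 + b2)

probs = infer(rng.standard_normal(8))  # class probabilities for one input
```

Because only the forward pass runs on the phone, the input data never has to leave the device, which is the privacy benefit the API is aiming at.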
Among the first things you might ask the cloud-based, voice-activated Google Assistant inside Google Home is to "tell me about my day." Google Assistant will then rattle off the local weather, upcoming appointments, and connect you to preferred news sources. Until now, though, the standalone artificial intelligence-infused $129 speaker--Google's rival to Amazon's popular Alexa-powered Echo speaker--couldn't distinguish your voice from that of a spouse, partner or roommate. On Wednesday, Google began rolling out a feature to remedy the situation in households with a shared Google Home unit: the ability for up to six people to connect their accounts to that unit and, following a brief training period, have the speaker recognize each person's voice independently. Google Home can then deliver their commute times, calendars, playlists, and so on--not yours.
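The enroll-then-recognize flow can be sketched in a few lines. This is not Google's implementation; it is a toy illustration of one common approach to speaker identification, where each user's training utterances are averaged into a voice profile and a new utterance is matched to the most similar profile. The embedding vectors and the 0.7 threshold are hypothetical stand-ins for whatever acoustic features a real model would produce.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class SpeakerMatcher:
    """Toy speaker identification: enroll average embeddings, match by similarity."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.profiles = {}

    def enroll(self, name, utterances):
        # The brief "training period": average a few utterances into one profile.
        self.profiles[name] = np.mean(utterances, axis=0)

    def identify(self, embedding):
        best, best_score = None, self.threshold
        for name, profile in self.profiles.items():
            score = cosine(embedding, profile)
            if score > best_score:
                best, best_score = name, score
        return best  # None means the voice was not recognized

home = SpeakerMatcher()
home.enroll("alex", np.array([[1.0, 0, 0, 0], [0.9, 0, 0, 0]]))
home.enroll("sam",  np.array([[0, 1.0, 0, 0], [0, 0.8, 0, 0]]))
who = home.identify(np.array([0.95, 0.1, 0, 0]))
```

Once the speaker is identified, the device simply routes the request to that person's linked account, which is how each household member gets their own calendar and commute rather than yours.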
A team of researchers at Google Brain has been working on a project involving three separate neural networks that, between them, have the ability to create and send encrypted messages. This type of machine learning will become more prominent in the world of AI over the next few years, particularly when it comes to handling private or sensitive information. Two of the researchers involved, Martín Abadi and David G. Andersen, wrote in their paper that "The learning does not require prescribing a set of cryptographic algorithms, nor indicating ways of applying these algorithms: it is based only on a secrecy specification represented by the training objectives." After several thousand training runs, the two communicating networks, Alice and Bob, were each able to send and decrypt messages securely. The third, adversarial network, Eve, was unable to fully decrypt the messages.
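The "secrecy specification represented by the training objectives" can be sketched directly. Following the formulation in Abadi and Andersen's paper, Eve is trained only to minimize her reconstruction error, while Alice and Bob are trained to minimize Bob's error plus a penalty that pushes Eve toward getting half the bits wrong -- i.e., no better than random guessing. This sketch uses {0, 1} bits and plain reconstruction-error arithmetic; the paper's networks and exact scaling differ.

```python
import numpy as np

N = 16  # plaintext length in bits

def bit_error(p, p_guess):
    """L1 distance between a plaintext and a reconstruction (bits in {0, 1})."""
    return float(np.abs(p - p_guess).sum())

def eve_loss(p, p_eve):
    # Eve's objective: reconstruct the plaintext as closely as possible.
    return bit_error(p, p_eve)

def alice_bob_loss(p, p_bob, p_eve):
    # Alice and Bob's objective: Bob must recover the plaintext, while Eve
    # should be wrong on about N/2 bits -- equivalent to random guessing.
    secrecy_penalty = (N / 2 - bit_error(p, p_eve)) ** 2 / (N / 2) ** 2
    return bit_error(p, p_bob) + secrecy_penalty
```

Note that no cipher is specified anywhere: gradient descent on these objectives alone is what drove Alice and Bob to invent an encryption scheme Eve could not break.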
Since originally writing this article, many people with far more expertise in these fields than myself have indicated that, while impressive, what Google have achieved is evolutionary, not revolutionary. At the very least, it's fair to say that I'm guilty of anthropomorphising in parts of the text. I've left the article's content unchanged, because I think it's interesting to compare the gut reaction I had with the subsequent comments from experts in the field. I strongly encourage readers to browse the comments beneath the version of this piece published on Medium.com.

In the closing weeks of 2016, Google published an article which quietly sailed under most people's radar.
Today, if you ask the Google search engine on your desktop a question like "How big is the Milky Way," you'll no longer just get a list of links where you could find the answer -- you'll get the answer: "100,000 light years." While this question/answer tech may seem simple enough, it's actually a complex development rooted in Google's powerful deep neural networks. These networks are a form of artificial intelligence that aims to mimic how human brains work, relating together bits of information to comprehend data and predict patterns. The deep neural network behind the new search feature uses sentence-compression algorithms to extract the relevant answer from large bodies of text. Essentially, the system learned how to answer questions by repeatedly watching humans do it -- more specifically, 100 PhD linguists from across the world -- a process called supervised learning.
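Sentence compression is often framed as token deletion: a trained model scores each word of a source sentence, and low-scoring words are dropped to leave a short answer. The snippet below sketches only that final deletion step; the scores are made up for illustration, standing in for what a model trained on the linguists' annotations would emit.

```python
def compress(tokens, keep_scores, threshold=0.5):
    """Drop tokens the (hypothetical) model scores below the threshold."""
    return [t for t, s in zip(tokens, keep_scores) if s >= threshold]

sentence = "The Milky Way galaxy is about 100,000 light years across".split()
# Per-token keep scores a trained compression model might produce (invented).
scores = [0.2, 0.9, 0.9, 0.3, 0.4, 0.8, 0.95, 0.95, 0.95, 0.6]

answer = compress(sentence, scores)
```

In supervised training, the human-produced compressions serve as the keep/delete labels the model learns to reproduce; at query time, only the scoring and deletion happen.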