Google's artificial intelligence research arm DeepMind has launched a unit focused on ethics and society. The group will conduct and fund research that covers the humanities and social sciences and run public discussion events, DeepMind announced earlier this week. The unit has already released five 'core principles' to guide future AI research: that technologies be developed in ways that serve the global social and environmental good; that research be 'rigorous and evidence-based' as well as 'transparent and open' (including with funding arrangements); that work include a diversity of voices; and that public opinion will feature in all developments. "This new unit will help us explore and understand the real-world impacts of AI," the group wrote in a blog post earlier this week. "It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all."
A key trend in contemporary healthcare is the emergence of an ambitious new cadre of corporate entrants: digital technology companies. Google, Microsoft, IBM, Apple and others are all preparing, in their own ways, bids on the future of health and on various aspects of the global healthcare industry. This article focuses on the Google conglomerate, Alphabet Inc. (referred to as Google for convenience). We examine the first healthcare deals of its British-based artificial intelligence subsidiary, DeepMind Technologies Limited, in the period between July 2015 and October 2016. In particular, the article assesses the first year of a deal between Google DeepMind and the Royal Free London NHS Foundation Trust, which involved the transfer of identifiable patient records across the entire Trust, without explicit consent, for the purpose of developing a clinical alert app for kidney injury.
Artificial intelligence (AI) researchers have a long history of going back in time to explore old ideas, and now researchers at OpenAI, which is backed by Elon Musk, have revisited "neuroevolution," a field that has been around since the 1980s, and achieved state-of-the-art results. The group, led by OpenAI's research director Ilya Sutskever, explored the use of a set of algorithms called "evolution strategies," which are aimed at solving optimisation problems. Optimisation problems are just what they sound like: take something that needs optimising, such as your route to work, a flight plan, or even a healthcare treatment, and optimise it. On an abstract level, the technique the team used works by letting successful algorithms pass their characteristics on to future generations: each successive generation gets better and better at whatever task it has been assigned. Coming back to the present day, the researchers took these algorithms and reworked them so they would work better with today's deep neural networks and run better on large-scale distributed computing systems.
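The "successful characteristics pass to the next generation" idea above can be sketched with a minimal evolution strategy. This is an illustrative toy, not OpenAI's actual code: the reward function, population size and learning rates are all made-up assumptions. Each generation samples random perturbations of the current parameters, scores them, and moves the parameters toward the perturbations that scored well.

```python
import numpy as np

def reward(theta):
    # Toy optimisation target (an assumption for this sketch): reward
    # peaks when theta matches a fixed goal vector.
    goal = np.array([0.5, -0.3, 0.8])
    return -np.sum((theta - goal) ** 2)

def evolve(theta, generations=300, population=50, sigma=0.1, alpha=0.02):
    """Minimal evolution strategy: perturb, score, update."""
    rng = np.random.default_rng(0)
    for _ in range(generations):
        # One "generation": a population of random perturbations of theta.
        noise = rng.standard_normal((population, theta.size))
        rewards = np.array([reward(theta + sigma * n) for n in noise])
        # Standardise rewards so the update is insensitive to their scale.
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Successful perturbations "pass on their characteristics":
        # step toward the reward-weighted average of the noise.
        theta = theta + alpha / (population * sigma) * noise.T @ rewards
    return theta

theta = evolve(np.zeros(3))
print(theta)  # should have moved from the origin toward the goal vector
```

Because each generation needs only the scalar rewards of its population members, the evaluations can be farmed out across many machines with very little communication, which is what makes the approach attractive for large-scale distributed computing.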
The co-founder of DeepMind, the high-profile artificial intelligence lab owned by Google, has been placed on leave after controversy over some of the projects he led. Mustafa Suleyman runs DeepMind's "applied" division, which seeks practical uses for the lab's research in health, energy and other fields. Suleyman is also a key public face for DeepMind, speaking to officials and at events about the promise of AI and the ethical guardrails needed to limit malicious use of the technology. "Mustafa is taking time out right now after 10 hectic years," a DeepMind spokeswoman said. She didn't say why he was put on leave.
Disclaimer: this is a re-implementation of the model described in the WaveNet paper by Google DeepMind. This repository is not associated with Google DeepMind. Note: this installs a modified version of Keras and the dev version of Theano. Once the first model checkpoint is created, you can start sampling. A pretrained model is included, so sample away!