Because network measurement typically operates at arm's length from humans, it does not fit comfortably into the usual human-centered models for evaluating ethical research practices. Nonetheless, the network measurement community increasingly finds that its work can affect humans' well-being and that it is poorly prepared to address the resulting ethical issues. Here, we discuss why ethical issues differ between network measurement and traditional human-subject research, and we propose requiring measurement papers to include a section on ethical considerations. Some of these ideas should also apply to other areas of computing systems measurement, where a researcher's attempt to measure a system could indirectly or even directly affect humans' well-being. A conference program committee (PC) is usually the first independent outside body to evaluate research that measures network systems.
In a recent article, WIRED senior writer Tom Simonite talked to Kate Crawford, author of Atlas of AI, to explore the ethical issues facing artificial intelligence and machine learning technologies. "We're relying on systems that don't have the sort of safety rails you would expect for something so influential in everyday life," notes Crawford. "There are tools actually causing harm that are completely unregulated." When people who aren't in the industry hear me say that artificial intelligence and machine learning can become forces for positive change in society, they ask me to explain why these technologies have been mired in controversy for more than a decade, and why ethical issues seem to be getting worse rather than better. Indeed, in recent years several high-profile cases of ML technologies causing harm to marginalized parts of society have captured headlines.
Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What's more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful "survive" and combine to form the next generation of instances. Repeated over many generations, this process gradually improves the system; the unsuccessful instances are simply deleted.
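To make the generational scheme described above concrete, here is a minimal toy sketch of a genetic algorithm. It is purely illustrative, not any particular system discussed in the text: the target bit string, population size, mutation rate, and survivor fraction are all assumptions chosen for brevity.

```python
import random

# Toy goal (an assumption for illustration): evolve an all-ones bit string.
TARGET = [1] * 20

def fitness(genome):
    # Score an instance by how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Two surviving parents combine at a random cut point.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    # Create many instances at once, as the text describes.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 4]  # the rest are deleted
        # Survivors recombine to form the next generation of instances.
        population = survivors + [
            mutate(crossover(*random.sample(survivors, 2)))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of", len(TARGET))
```

Note that the ethical question raised in the text maps onto `fitness` (the reward signal) and the line that discards non-survivors: every generation, most instances receive a low score and are deleted.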
Artificial intelligence (AI) is a technically vast and complex field, while at the same time it captures our imagination and prompts us to explore major philosophical and ethical questions concerning humanity and human intelligence. Teaching a course that does justice to all these aspects of the field is a big challenge. However, thanks to increasing computational capability at a commensurately decreasing cost, a wealth of products and materials is available that can be used to provide students with rich, meaningful, and memorable experiences within the context of a primarily technical course in AI. Toys, articles, and movies can all be used to foster student exploration of key technical, philosophical, and ethical questions in AI.
A year ago, the world was reeling from the news that a woman in China had given birth to two genetically edited girls. Many questions remain over the episode, including details of the experiment on the "CRISPR babies", what happened to a second pregnancy, and the fate and whereabouts of the Chinese researcher behind the trial, He Jiankui. But recent revelations that He's paper was sent to two major scientific journals also raise questions for scientific publishers, the traditional gatekeepers of science, who are increasingly pressured to act as ethical police too. Nature had the paper in November 2018, before the experiment became public at the end of that month, and, as was reported earlier this month, it was also submitted to JAMA. Nature told New Scientist it neither confirmed nor denied it had received the paper.