Ethical Considerations for AI Researchers

AAAI Conferences

The use of artificial intelligence is growing and expanding into applications that impact people's lives. People trust their technology without really understanding it or its limitations. There is potential for harm, and we are already seeing examples of it in the world. AI researchers have an obligation to consider the impact of the intelligent applications they work on. While the ethics of AI is not clear-cut, there are guidelines we can follow to minimize the harm we might introduce.

Turing's Red Flag

Communications of the ACM

The 19th-century U.K. Locomotive Act, also known as the Red Flag Act, required motorized vehicles to be preceded by a person waving a red flag to signal the oncoming danger. Movies can be a good place to see what the future looks like. According to Robert Wallace, a retired director of the CIA's Office of Technical Service: "... When a new James Bond movie was released, we always got calls asking, 'Do you have one of those?' If I answered 'no', the next question was, 'How long will it take you to make it?' Folks didn't care about the laws of physics or that Q was an actor in a fictional series -- his character and inventiveness pushed our imagination ..." As an example, the CIA successfully copied the shoe-mounted, spring-loaded, poison-tipped knife in From Russia With Love. It is interesting to speculate on what else the Bond movies may have led to being invented. For this reason, I have been considering what movies predict about the future of artificial intelligence (AI).

H+ Weekly, Issue #57


This week, a self-driving Tesla was involved in a fatal crash. Other than that -- a lot about robots, whether AI can create art, cloning animals, and more! Ray Kurzweil and people like him believe the Singularity is just around the corner and promise a new, perfect world. They are very optimistic about the future. But sometimes you should listen to the other side to better understand the problem or vision.

Global Bigdata Conference


News concerning Artificial Intelligence (AI) abounds again. The progress with Deep Learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy!, and programs beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers are being considered that would approach what many believe is this level. However, many questions remain unanswered about how the human brain works, and specifically about the hard problem of consciousness with its integrated subjective experiences.

Artificial Intelligence Is Here: Now What?


The topic of "artificial intelligence" has recently seen a confluence of nationally significant announcements. In September, Stanford University released its One Hundred Year Study on Artificial Intelligence, which was quickly followed by the announcement in early October that five firms -- Amazon, DeepMind of Google, Facebook, IBM, and Microsoft -- had formed a nonprofit named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI). A week after the Partnership on AI announced its formation, the National Science and Technology Council (NSTC), which is overseen by the Executive Office of the President, released Preparing for the Future of Artificial Intelligence. The timing of the Stanford and NSTC report releases may be happenstance, perhaps, but the formation of the Partnership on AI is no coincidence. The members of the Partnership on AI realize the marketplace is at an important "tipping point" when it comes to the increasing utilization of AI in the U.S. AI is already used in automobiles to enable enhanced driving safety features and GPS services, in smartphone apps, and in wearable medical devices -- to name just a few examples.