Defense Advanced Research Projects Agency (DARPA) officials will include a panel discussion on ethics and legal issues at the Artificial Intelligence (AI) Colloquium being held March 6-7 in Alexandria, Virginia. "We're looking at the ethical, legal and social implications of our technologies, particularly as they become powerful and democratized in a way," says John Everett, deputy director of DARPA's Information Innovation Office. Questions abound regarding the ethical and legal implications of AI, such as who is responsible if a self-driving automobile runs over a pedestrian, or whether military weapon systems should have a "human in the loop" controlling unmanned systems to prevent mistakes on the battlefield. Those questions become more acute as AI becomes more prevalent. "A lot of the technology of the 20th century was not widely accessible to people. You have high school students editing genes," Everett notes.
Ethics, as applied to the business world, is nothing new, and ethics itself has been a topic of conversation and debate for thousands of years. However, the rapid development of technology in the modern world brings with it both potential harms and benefits. As automated decision-making systems become ever more ubiquitous across all industries, what are the key questions organizations need to address, now and in the future? How can organizations create a sustainable future by managing ethical concerns at every stage of development? Ethical frameworks must be more than a way to define digital ethics; they must create an 'ethics of action' by proactively influencing approaches to technology development and implementation.
The growth of big data during the last decade has opened the door to opportunities and threats alike. Big data is not just big and powerful; it is also prone to errors. We can now process terabytes of data at lightning speed, but often superficially. This presents many opportunities, but it also means we risk making bad decisions in a short period of time, with an impact greater than anything imaginable in the past. Beyond the threat of bad decisions and their consequences, people have begun placing too much faith and trust in technology, a habit we may come to regret when a genuine dilemma arises.
Scientists who build artificial intelligence and autonomous systems need a strong ethical understanding of the impact their work could have. More than 100 technology pioneers recently published an open letter to the United Nations on the topic of lethal autonomous weapons, or "killer robots". These people, including the entrepreneur Elon Musk and the founders of several robotics companies, are part of an effort that began in 2015. The original letter called for an end to an arms race that it claimed could be the "third revolution in warfare, after gunpowder and nuclear arms". The UN has a role to play, but responsibility for the future of these systems also needs to begin in the lab.
How can human values be implemented in AI systems? Researchers are developing methods that help intelligent systems gain an understanding of human values. Without a values-based framework for artificial intelligence, biases will end up defining the code of ethics these systems follow. Unclear global ethical standards carry a risk of their own: mismatched expectations can stifle innovation in artificial intelligence. Done well, AI systems may help solve the world's problems and manage the global economy.