The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.


Artificial Intelligence Governance and Ethics: Global Perspectives

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is a technology that is increasingly being utilised in society and the economy worldwide, and its deployment is expected to become more prevalent in coming years. AI is increasingly being embedded in our lives, supplementing our pervasive use of digital technologies. But this is accompanied by disquiet over problematic and dangerous implementations of AI, or even over AI itself deciding to take dangerous and problematic actions, especially in fields such as the military, medicine and criminal justice. These developments have led to concerns about whether and how AI systems adhere, and will adhere, to ethical standards. These concerns have stimulated a global conversation on AI ethics and have prompted various actors from different countries and sectors to issue ethics and governance initiatives and guidelines for AI. Such developments form the basis for our research in this report, which combines our international and interdisciplinary expertise to give an insight into what is happening in Australia, China, Europe, India and the US.



5 core principles to keep AI ethical – World Economic Forum – Medium

#artificialintelligence

Science-fiction thrillers, like the 1980s classic film The Terminator, illuminate our imaginations, but they also stoke fears about autonomous, intelligent killer robots eradicating the human race. And while this scenario might seem far-fetched, last year, over 100 robotics and artificial intelligence technology leaders, including Elon Musk and Google's DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines. In an open letter to the UN Convention on Certain Conventional Weapons, the signatories said that once developed, killer robots -- weapons designed to operate autonomously on the battlefield -- "will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend." The letter states: "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act.