Issues


Elon Musk & The Dangers Of AI - WebSystemer.no

#artificialintelligence

Elon Musk has been a vocal critic of artificial intelligence, calling it an "existential threat to humanity" back in 2014, and saying "AI…


Should we be less worried about the dangers of artificial intelligence?

#artificialintelligence

The idea of super-intelligent AI turning on its human creators has long been a popular fictional trope, with examples including blockbuster films like The Terminator. However, a researcher has argued that our fear of this happening in the real world is holding back the advancement of beneficial AI technology. Anthony J. Bradley, vice president of Gartner AI Research, made the argument earlier this week in an intriguing blog post. In it, he says he is regularly asked about the potentially damaging future of artificial intelligence and told how "scary" it is. He contends that the technology only seems scary because our expectations are based on popular fiction.


IBM computer struggles in Cambridge debate on the dangers of AI

#artificialintelligence

AI has the potential to power driverless cars and smart cities. But critics say the technology could perpetuate bias, put people out of work and even threaten human existence. About 500 university students attended Thursday's debate, hinting at just how controversial the technology is. The IBM machine, which was defeated by a human in a one-on-one debate nine months ago, delivered each team's 4-minute opening speech using submissions sourced ahead of time from over 1,000 people. The rebuttals by each side were done by the human debaters, who also delivered the closing arguments.


A Guided Tour of AI and the Murky Ethical Issues It Raises

#artificialintelligence

As I read Melanie Mitchell's "Artificial Intelligence: A Guide for Thinking Humans," I found myself recalling John Updike's 1986 novel "Roger's Version." One of its characters, Dale, is determined to use a computer to prove the existence of God. Dale's search leads him into a mind-bending labyrinth where religious-metaphysical questions overwhelm his beloved technology and leave the poor fellow discombobulated. I sometimes had a similar experience reading "Artificial Intelligence." In Mitchell's telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn't for the faint of heart, and neither, for nonscientists, is this book. To begin with, artificial intelligence -- "machine thinking," as the author puts it -- raises a pair of fundamental questions: What is thinking, and what is intelligence? Since the end of World War II, scientists, philosophers, and scientist-philosophers (the two have often seemed to merge during the past 75-odd years) have been grappling with those very questions, offering up ideas that seem to engender further questions and profound moral issues. Mitchell, a computer science professor at Portland State University and the author of "Complexity: A Guided Tour," doesn't resolve these questions and issues -- she as much as acknowledges that they are irresolvable at present -- but provides readers with insightful, common-sense scrutiny of how these and related topics pervade the discipline of artificial intelligence. Mitchell traces the origin of modern AI research to a 1956 Dartmouth College summer study group: its members included John McCarthy (the group's catalyst, who coined the term artificial intelligence); Marvin Minsky, who would become a noted artificial intelligence theorist; cognitive scientists Herbert Simon and Allen Newell; and Claude Shannon ("the inventor of information theory"). Mitchell describes McCarthy, Minsky, Simon, and Newell as the "big four" pioneers of AI.


Let Elon Musk-Jack Ma Debate About the Future of AI. But Its Business Impact Is Already Here Today

#artificialintelligence

In addition, artificial intelligence can be used to gauge customer satisfaction, either through automated feedback methods or by using natural language processing to monitor a customer's tone and sentiment in interactions.
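As a rough illustration of the sentiment-monitoring idea mentioned above, the sketch below scores a couple of hypothetical customer messages with an off-the-shelf NLP sentiment model. The library (Hugging Face transformers), the default pipeline model, and the example messages are assumptions for illustration only, not the specific tools the article discusses.

# Minimal sketch: scoring customer messages with a pretrained sentiment model.
# Assumes the `transformers` package is installed; the first call downloads a
# default pretrained model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

customer_messages = [
    "The new dashboard is great, support resolved my issue in minutes.",
    "I've been waiting three weeks for a refund and nobody replies.",
]

# Each result has a label (POSITIVE/NEGATIVE) and a confidence score, which
# could feed into a customer-satisfaction dashboard.
for message, result in zip(customer_messages, classifier(customer_messages)):
    print(f"{result['label']:8} {result['score']:.2f}  {message}")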


The future of war: could lethal autonomous weapons make conflict more ethical?

#artificialintelligence

Lethal Autonomous Weapons (LAWs) are robotic weapon systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs, which are artificially intelligent weapon systems that make decisions within the bounds of their ethics-based code. To ensure a wide, though not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare, this paper engages with current scholarship on the rejection or acceptance of LAWs -- including the contemporary technological shortcomings of LAWs in differentiating between targets and the behavioral and psychological volatility of humans -- as well as with current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, the paper concludes that only ethical LAWs should be used to replace human involvement in war and, by extension of their consistent abilities, should remove humans from war until a more formidable discovery is made in conducting ethical warfare.


Towards EU collaboration on Conversational AI, Data & Robotics

#artificialintelligence

I was really interested to read the BDVA – Big Data Value Association's and euRobotics' recent report on "Strategic Research, Innovation and Deployment Agenda for an AI PPP: A focal point for collaboration on Artificial Intelligence, Data and Robotics", which you can find here. Of particular relevance to me was the Section on Physical and Human Action and Interaction (pp. You can find the excellent and very comprehensive report here.


AI and the law

#artificialintelligence

Artificial intelligence and automation are responsible for a growing number of decisions by public authorities in areas like criminal justice, security and policing, and public administration, despite having proven flaws and biases. Facial recognition systems are entering public spaces without any clear accountability or oversight. Lawyers must play a greater role in ensuring the safety and accountability of advanced data and analytics technologies, says Karen Yeung at the University of Birmingham. The dream of artificial intelligence stretches back seven decades, to a seminal paper by Alan Turing. But only recently has AI been commercialized and industrialized at scale, weaving its way into every nook and cranny of our lives.


Full Professor in Explainable Artificial Intelligence

#artificialintelligence

We are the Department of Data Science and Knowledge Engineering (DKE) at Maastricht University, the Netherlands: an international community of 50 researchers at various stages of their careers, embedded in the Faculty of Science and Engineering (FSE). Our department has nearly 30 years of experience in research and teaching in the fields of Artificial Intelligence, Computer Science and Mathematics, and we work in a highly collaborative and cross-disciplinary manner. To strengthen our team, we are looking for a full professor who will work on AI systems that can explain the decisions and actions they recommend or take in a human-understandable way. Our department is growing rapidly, and this position is one of multiple job openings: you are more than welcome to browse through our other vacancies.


The Transformative Impact of AI In Financial Markets

#artificialintelligence

Artificial Intelligence (AI) is having a transformative impact on the financial markets. While delivering enormous efficiencies and lowering barriers to entry, the technology also brings challenges and risks with it – particularly when it comes to the overall stability of the market. Distilling insights from 15 sources, including Accenture, Deloitte, HFS Research, McKinsey, Thomson Reuters and PwC, this Impact Brief provides time-poor professionals with insights that are easy to read and digest, and can be read in less than 10 minutes.