
The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.



A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities competently and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Future of AI VI. Discussion of 'Superintelligence: Paths, Dangers, Strategies'

#artificialintelligence

This post is a discussion of Nick Bostrom's book "Superintelligence". The book has had an effect on the thinking of many of the world's thought leaders. In that light, and given this series of blog posts is about the "Future of AI", it seemed important to read the book and discuss his ideas. In an ideal world, this post would certainly have contained more summaries of the book's arguments, and perhaps a later update will improve on that aspect. For the moment the review focuses on counter-arguments and perceived omissions (the post already got too long just covering those). Bostrom considers various routes we have to forming intelligent machines and what the possible outcomes of developing such technologies might be. He is a professor of philosophy but has an impressive array of background degrees in areas such as mathematics, logic, philosophy and computational neuroscience. So let's start at the beginning and put the book in context by trying to understand what is meant by the term "superintelligence". In common with many contributions to the debate on artificial intelligence, Bostrom never defines what he means by intelligence. Obviously, this can be problematic. On the other hand, superintelligence is defined as outperforming humans in every intelligent capability that they express. Personally, I've developed the following definition of intelligence: "Use of information to take decisions which save energy". Here by information I might mean data, facts, or rules, and by saving energy I mean saving 'free' energy. However, accepting Bostrom's lack of a definition of intelligence (and perhaps taking note of my own), we can still consider the routes to superintelligence Bostrom proposes.


This is for you, Elon Musk: 5 threats to humanity greater than artificial intelligence

Mashable

We need to have a little talk. I understand that you're very worried about what artificial intelligence could mean for our future. In fact, just the other week you said that it's the "greatest risk we face as a civilization," an idea that has been echoed by high-profile futurists around the world. But I'm here to tell you that it's time to take a deep breath and maybe get a little perspective on A.I. compared to the wide array of threats we face. Elon, it's nice that you have the privilege of focusing on an existential threat that might rear its ugly head far off in the future, but not all of us are in that position.