
The Dangers Of Not Aligning Artificial Intelligence With Human Values

#artificialintelligence

In artificial intelligence (AI), the "alignment problem" refers to the challenge posed by the fact that machines simply do not share our values. In fact, when it comes to values, machines at a fundamental level don't get much more sophisticated than understanding that 1 is different from 0. As a society, we are now at a point where we are starting to allow machines to make decisions for us. So how can we expect them to understand that, for example, they should do this in a way that doesn't involve prejudice towards people of a certain race, gender, or sexuality? Or that the pursuit of speed, efficiency, or profit has to be done in a way that respects the ultimate sanctity of human life? Theoretically, if you tell a self-driving car to navigate from point A to point B, it could simply smash its way to its destination, regardless of the cars, pedestrians, or buildings it destroys along the way.
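To make that thought experiment concrete, here is a minimal, purely hypothetical sketch (not from Christian's book or any system discussed here) of how an objective that encodes only speed picks the destructive route, while one that also penalizes collisions does not. The grid, obstacle, and penalty values are all illustrative assumptions.

```python
# Hypothetical toy: objective misspecification on a tiny grid.
OBSTACLES = {(1, 1)}  # a cell standing in for pedestrians or buildings

def naive_objective(path):
    """Rewards only speed: fewer steps means lower cost."""
    return len(path)

def aligned_objective(path, collision_penalty=1000):
    """Also encodes the value the naive objective leaves out: don't hit anything."""
    collisions = sum(cell in OBSTACLES for cell in path)
    return len(path) + collision_penalty * collisions

straight_line = [(0, 0), (1, 1), (2, 2)]           # shortest route, cuts through the obstacle
detour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # longer route, avoids it

for name, objective in [("naive", naive_objective), ("aligned", aligned_objective)]:
    best = min([straight_line, detour], key=objective)
    print(f"{name} objective chooses: {best}")
# The naive objective picks the straight line (it "smashes through");
# the aligned objective picks the detour.
```

The point of the sketch is that nothing about the optimizer changes between the two runs; only the stated objective does, which is why unstated values never enter the machine's decision.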


Should we trust machine learning?

#artificialintelligence

For better or worse, says Brian Christian, questions that link ethics and technology, particularly in the field of machine learning, "are not going away. In some ways I see this as one of the defining challenges of the decade ahead of us." By 'this' he is referring to the core subject of his new book 'The Alignment Problem', which tackles the question of how we can ensure that the growth industry of machine learning "is behaving in the way we expect it to. How do we make sure that we can trust it and that we are safe and comfortable?" Machine learning, says the author, whose previous books have included 'The Most Human Human' and 'Algorithms to Live By', "is the fastest-growing sub-field in artificial intelligence and one of the most exciting things happening in science today, full stop".


Understanding the AI alignment problem

#artificialintelligence

Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. For decades, we've been trying to develop artificial intelligence in our own image. And at every step of the way, we've managed to create machines that can perform marvelous feats and, at the same time, make surprisingly dumb mistakes. After six decades of research and development, aligning AI systems with our goals, intents, and values remains an elusive objective. Every major field of AI seems to solve part of the problem of replicating human intelligence while leaving holes in critical areas.


The Alignment Problem: Machine Learning and Human Values, by Brian Christian (ISBN 9780393635829, Amazon.com)

#artificialintelligence

Finalist for the Los Angeles Times Book Prize. A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge.