AI Alignment through Anthropology

#artificialintelligence 

If an advanced AI system were instructed to make paper clips, or to fetch coffee, we would not want it to carry out this task at any cost. For example, we would rather the AI not kill anyone in the process, or use valuable resources that ought to be used for other purposes. Rather, we want the AI to achieve this goal in a way that is consistent with human values. Figuring out how to design AI systems so that they do not inadvertently act in ways contrary to human values is known as the Value Alignment Problem. It is no revelation to point out misalignment between today's narrow AI (ANI) systems and humans, nor that AI designers need to better understand their users' values.
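The gap between "maximize the goal at any cost" and "maximize the goal subject to human values" can be made concrete with a toy sketch. The action names and costs below are entirely hypothetical, chosen only to illustrate the point: an agent scored solely on paper clips produced will happily pick the most destructive option, while an agent whose objective also penalizes the cost imposed on humans will not.

```python
# Toy illustration (hypothetical actions and numbers): why "maximize
# paper clips" alone is underspecified. Each action yields paper clips
# but also consumes things people value; a naive objective ignores that.

ACTIONS = {
    # action: (paperclips_made, human_cost)
    "use_scrap_metal":  (10, 1),
    "melt_down_cars":   (50, 40),
    "strip_power_grid": (90, 100),
}

def naive_choice(actions):
    """Pick the action that maximizes paper clips, at any cost."""
    return max(actions, key=lambda a: actions[a][0])

def aligned_choice(actions, cost_weight=1.0):
    """Trade paper clips off against the cost imposed on humans."""
    return max(actions, key=lambda a: actions[a][0] - cost_weight * actions[a][1])

print(naive_choice(ACTIONS))    # strip_power_grid
print(aligned_choice(ACTIONS))  # melt_down_cars
```

Of course, the hard part of the Value Alignment Problem is that real human values cannot be written down as a single `human_cost` number in advance; the sketch only shows how an objective that omits them goes wrong.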
