Evolutionary Game Theory Could Predict Dangerous AI
There isn't a day that goes by without news of some fascinating development in artificial intelligence research, whether it's an AI that can process and produce language in a human-like way, unlock the mysteries folded up within a protein, or automatically make scientific discoveries. But in the headlong rush to find the next breakthrough, there are legitimate concerns that the competitive nature of the "AI race" means that safety and ethics are being inadvertently overlooked, resulting in phenomena like algorithmic bias, or an escalating arms race between rival military powers to build lethal autonomous weapons.

These developments point to a need for better regulation of how AI is engineered and deployed. Of course, too much regulation might stifle innovation, while too little might allow what could have been a preventable disaster. As an international research team from Teesside University, Universidade Nova de Lisboa, and Université Libre de Bruxelles now suggests, AI can also be used to navigate this delicate balance by determining which types of AI research projects might need more regulation than others. "Whether real or not, the belief in such a race for domain supremacy through AI can make it real simply from its consequences," wrote the team in a paper published in the Journal of Artificial Intelligence Research.
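To give a flavour of how evolutionary game theory can model an "AI race", here is a minimal replicator-dynamics sketch. The payoff values, the penalty parameter, and the two strategies (SAFE vs UNSAFE development) are hypothetical illustrations, not figures from the team's paper; the point is only that when cutting corners out-competes caution, unsafe behaviour spreads through the population, and a regulator's penalty can reverse that.

```python
# Replicator dynamics for a two-strategy "AI race" game.
# Strategy 0 = SAFE (thorough safety checks), strategy 1 = UNSAFE (cut corners).
# All payoff numbers below are hypothetical, chosen purely for illustration.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation dx/dt = x * (f_safe - f_avg),
    where x is the population share playing SAFE."""
    f_safe = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_unsafe = payoff[1][0] * x + payoff[1][1] * (1 - x)
    f_avg = x * f_safe + (1 - x) * f_unsafe
    return x + dt * x * (f_safe - f_avg)

def simulate(x0, payoff, steps=20000):
    """Iterate the dynamics from initial SAFE share x0."""
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

# Rows = my strategy (SAFE, UNSAFE); columns = opponent's strategy.
# Cutting corners pays off against everyone, so UNSAFE dominates and
# SAFE dies out even from a 90% head start.
race = [[3.0, 1.0],
        [4.0, 2.0]]
print(simulate(0.9, race))       # SAFE share collapses toward 0

# A regulator's penalty p on unsafe conduct can flip the dynamic:
# with the penalty applied, SAFE takes over even from a 10% start.
p = 2.5
regulated = [[3.0, 1.0],
             [4.0 - p, 2.0 - p]]
print(simulate(0.1, regulated))  # SAFE share grows toward 1
```

This is the qualitative insight behind using such models for regulation: by estimating the payoffs of a given research domain, one can ask how much intervention (here, the penalty `p`) is needed before safe behaviour becomes the stable outcome.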
May-30-2022, 09:01:01 GMT