OpenAI co-founder warns 'superintelligent' AI must be controlled to prevent possible human extinction
A co-founder of artificial intelligence leader OpenAI is warning that superintelligence must be controlled in order to prevent the extinction of the human race.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post, saying they believe such advancements could arrive as soon as this decade.

They said managing such risks would require new institutions for governance and solving the problem of superintelligence alignment: ensuring AI systems much smarter than humans "follow human intent."
Jul-7-2023, 17:34:52 GMT