AI firms 'unprepared' for dangers of building human-level systems, report warns
Artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group.

The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for "existential safety planning". One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had "anything like a coherent, actionable plan" to ensure the systems remained safe and controllable.

AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI "benefits all of humanity".
Jul-17-2025, 07:00:32 GMT