The E.U. Has Passed the World's First Comprehensive AI Law
AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated.

There's extra scrutiny for the biggest and most powerful AI models that pose "systemic risks," which include OpenAI's GPT-4, the company's most advanced system, and Google's Gemini. The EU says it's worried that these powerful AI systems could "cause serious accidents or be misused for far-reaching cyberattacks." Regulators also fear generative AI could spread "harmful biases" across many applications, affecting many people. Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone's death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.

Brussels first suggested AI regulations in 2019, taking a familiar global role in ratcheting up scrutiny of emerging industries while other governments scramble to keep up. In the U.S., President Joe Biden signed a sweeping executive order on AI in October that's expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation.