US, UK and a dozen more countries unveil pact to make AI 'secure by design'
The United States, the United Kingdom and more than a dozen other countries on Sunday unveiled what a senior US official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design".

In the 20-page document unveiled on Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse. The agreement is non-binding and carries mostly general recommendations, such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first. "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly said, adding that the guidelines represented "an agreement that the most important thing that needs to be done at the design phase is security".
Nov-27-2023, 17:13:00 GMT