UK's AI Safety Institute 'needs to set standards rather than do testing'
The UK should concentrate on setting global standards for artificial intelligence testing instead of trying to carry out all the vetting itself, according to a company assisting the government's AI Safety Institute.

Marc Warner, the chief executive of Faculty AI, said the newly established institute could end up "on the hook" for scrutinising an array of AI models – the technology that underpins chatbots like ChatGPT – owing to the government's world-leading work in AI safety.

Rishi Sunak announced the formation of the AI Safety Institute (AISI) last year ahead of the global AI safety summit, which secured a commitment from big tech companies to cooperate with the EU and 10 countries, including the UK, US, France and Japan, on testing advanced AI models before and after their deployment.

The UK has a prominent role in the agreement because of its advanced work on AI safety, underlined by the establishment of the institute.

Warner, whose London-based company has contracts with the UK institute that include helping it test AI models on whether they can be prompted to breach their own safety guidelines, said the institute should be a world leader in setting test standards.
Feb-11-2024, 11:25:46 GMT