UK campaigners raise alarm over report of Meta plan to use automation for risk checks
Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments after a report that Mark Zuckerberg's Meta was planning to automate checks.

Ofcom said it was "considering the concerns" raised by the campaigners' letter, after a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom's chief executive, Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a "retrograde and highly alarming step".
Jun-9-2025, 08:19:22 GMT