Ex-Google safety lead calls for AI algorithm transparency, warns of 'serious consequences for humanity'

FOX News 

SmartNews' Head of Global Trust and Safety is calling for new regulation of artificial intelligence (AI) that prioritizes user transparency and ensures human oversight remains a crucial component of news and social media recommender systems. "We need to have guardrails," Arjun Narayan said. "Without humans thinking through everything that could go wrong, like bias creeping into the models or large language models falling into the wrong hands, there can be very serious consequences for humanity." Narayan, who previously worked on trust and safety for Google and ByteDance, the company behind TikTok, said it is essential for companies to offer opt-ins and opt-outs when using large language models (LLMs). By default, anything fed to an LLM is assumed to be training data and is collected by the model.
