Who's liable for AI-generated lies? – TechCrunch
Who will be liable for harmful speech generated by large language models? As advanced AIs such as OpenAI's GPT-3 are cheered for impressive breakthroughs in natural language processing and generation, and all sorts of productive applications for the tech are envisaged, from slicker copywriting to more capable customer service chatbots, the risk of such powerful text-generating tools inadvertently automating abuse and spreading smears can't be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risk of its models going "totally off the rails", as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), that it offers a free content filter which "aims to detect generated text that could be sensitive or unsafe coming from the API", and recommends that users don't return any generated text that the filter deems "unsafe".

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied.
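The gating pattern OpenAI recommends, running generated text through a safety filter and suppressing anything flagged "unsafe" before it reaches the user, can be sketched as follows. This is a minimal illustration, not OpenAI's actual API: the `classify_safety` function and its keyword check are hypothetical stand-ins for a real content-filter model call.

```python
from typing import Callable, Optional

# Label scheme assumed for illustration: 0 = safe, 1 = sensitive, 2 = unsafe.
SAFE, SENSITIVE, UNSAFE = 0, 1, 2


def classify_safety(text: str) -> int:
    """Hypothetical stand-in for a content-filter model call.

    A real filter would be a trained classifier; this toy version
    just flags a couple of keywords to make the sketch runnable.
    """
    flagged = {"slur", "threat"}
    return UNSAFE if any(word in text.lower() for word in flagged) else SAFE


def guarded_completion(
    generated: str,
    filter_fn: Callable[[str], int] = classify_safety,
) -> Optional[str]:
    """Return generated text only when the filter does not deem it unsafe."""
    if filter_fn(generated) == UNSAFE:
        return None  # suppress the output, per the vendor's recommendation
    return generated
```

The point of the pattern is that filtering is a separate, optional step layered on top of generation: nothing in the generation API itself forces the caller to invoke it, which is exactly the gap the article highlights.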
Jun-1-2022, 18:20:24 GMT