Can AI chatbots be reined in by a legal duty to tell the truth?

New Scientist 

Can artificial intelligence be made to tell the truth? Probably not, but the developers of large language model (LLM) chatbots should be legally required to reduce the risk of errors, says a team of ethicists.

"What we're just trying to do is create an incentive structure to get the companies to put a greater emphasis on truth or accuracy when they are creating the systems," says Brent Mittelstadt at the University of Oxford.

LLM chatbots, such as ChatGPT, generate human-like responses to users' questions based on statistical analysis of vast amounts of text. Although their answers usually appear convincing, they are also prone to errors – a flaw referred to as "hallucination".
