What we've been getting wrong about AI's truth crisis

MIT Technology Review 

Even when content is revealed to be manipulated, it still shapes our beliefs. The defenders of truth are hopelessly behind.

What would it take to convince you that the era of truth decay we were long warned about--where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process--is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably.

On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public.