On Nov. 25, an article headlined "Spot the deepfake" appeared on the front page. The editors would not have placed this piece there a year ago; if they had, few readers would have understood what its headline meant. Deepfake technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video of real people saying and doing things they never said or did. As the technology develops, it is becoming increasingly difficult to distinguish genuine recordings from fraudulent ones created by manipulating real sounds and images.

"In the short term, detection will be reasonably effective," says Subbarao Kambhampati, a professor of computer science at Arizona State University. "In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures." The longer run may come as early as later this year, in time for the presidential election.