As Good as a Coin Toss: Human Detection of AI-Generated Content

Communications of the ACM 

With only a 50-50 chance of detecting synthetic media online, users are more vulnerable than ever to being duped. Advances in generative AI have made it possible for anyone to manufacture increasingly realistic synthetic media (colloquially known as deepfakes) at faster speeds, at larger scales, and with more customization than before. Synthetic media is consequently being put to harmful uses more often, including disinformation campaigns, nonconsensual pornography, financial fraud, child sexual abuse and exploitation, and espionage.

As of today, the principal defense against deceptive synthetic media depends in large part on human observers' perceptual detection capabilities: their ability to visually or auditorily identify AI-generated content when they encounter it. Yet the growing realism of synthetic media undermines this ability, heightening people's vulnerability to weaponized synthetic content. Moreover, people overestimate how capable they are of identifying synthetic media, which further exacerbates the problem. As synthetic media continues to grow more sophisticated, so too does the threat posed by its weaponization, from financial fraud to the production of nonconsensual intimate imagery of adults and children.