Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban

Jungherr, Andreas, Rauchfleisch, Adrian, Wuttke, Alexander

arXiv.org Artificial Intelligence 

All over the world, political parties, politicians, and campaigns are exploring how Artificial Intelligence (AI) can help them win elections, yet the effects of these activities are unknown. We propose a framework for assessing AI's impact on elections by considering its application across campaigning tasks. Electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories: campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n = 7,635) on how Americans think about AI in elections and on the effects of specific campaigning choices. A representative survey shows that Americans dislike all kinds of AI uses in campaigns but are more critical of deceptive uses than of those improving campaign operations or voter outreach (Study 1, n = 1,199). A survey experiment shows that when learning about specific AI uses in campaigns, American respondents reacted much more negatively to deceptive uses (Study 2, n = 1,985). We report three significant findings. First, there is a misalignment between the incentives for deceptive practices and their externalities: we cannot count on public opinion to provide strong enough incentives for parties to forgo the tactical advantages of AI-enabled deception. Second, there is a need for regulatory oversight and systematic outside monitoring of electoral uses of AI, especially since elections are times of high public attention on campaigns and their tools of communication. Third, regulators should nonetheless account for the diversity of AI uses and not completely disincentivize their electoral use.