Can Language Model Moderators Improve the Health of Online Discourse?

Hyundong Cho, Shuai Liu, Taiwei Shi, Darpan Jain, Basem Rizk, Yuyang Huang, Zixun Lu, Nuan Wen, Jonathan Gratch, Emilio Ferrara, Jonathan May

arXiv.org Artificial Intelligence 

Human moderation of online conversation is essential to maintaining civility and focus in a dialogue, but is challenging to scale and harmful to moderators. The inclusion of sophisticated natural language generation modules as a force multiplier to aid moderators is a tantalizing prospect, but adequate evaluation approaches have so far been elusive. In this paper, we establish a systematic definition of conversational moderation effectiveness through a multidisciplinary lens that incorporates insights from social science. We then propose a comprehensive evaluation framework that uses this definition to assess models' moderation capabilities independently of human intervention. With our framework, we conduct the first known study of conversational dialogue models as moderators, finding that appropriately prompted models can provide specific and fair feedback on toxic behavior but struggle to influence users to increase their levels of respect and cooperation.

Figure 1: While banning users or deleting their comments may push them towards echo chambers (left), conversational moderation can guide users towards more constructive behavior (right). Recent developments in conversational AI present an opportunity to perform this ...