Users question AI's ability to moderate online harassment
New Cornell University research finds that both the type of moderator--human or AI--and the "temperature" of harassing content online influence people's perception of the moderation decision and the moderation system.

Now published in Big Data & Society, the study used a custom social media site on which people can post pictures of food and comment on other posts. The site contains a simulation engine, Truman, an open-source platform that mimics other users' behaviors (likes, comments, posts) through preprogrammed bots created and curated by researchers.

The Truman platform--named after the 1998 film "The Truman Show"--was developed at the Cornell Social Media Lab, led by Natalie Bazarova, professor of communication.

"The Truman platform allows researchers to create a controlled yet realistic social media experience for participants, with social and design versatility to examine a variety of research questions about human behaviors in social media," Bazarova said.
Oct-31-2022, 20:35:38 GMT