Understanding AI-generated misinformation and evaluating algorithmic and human solutions

AIHub 

Existing machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs), according to new research from Georgia Tech.

Current ML models, designed for and trained on human-written content, show significant performance discrepancies when detecting paired examples of human-written and AI-generated misinformation, said Jiawei Zhou, a PhD student in Georgia Tech's School of Interactive Computing.

Zhou's paper detailing the findings received a best paper honorable mention award at the 2023 ACM CHI Conference on Human Factors in Computing Systems. Advised by Associate Professor Munmun De Choudhury, Zhou demonstrates that LLMs can manipulate tone and linguistic style to let AI-generated misinformation slip through the cracks.

"We found the AI-generated misinformation carried more emotions and cognitive processing expressions than its human-created counterparts," Zhou said.
