Researchers Warn Of 'Dangerous' Artificial Intelligence-Generated Disinformation At Scale - Breaking Defense


[Photo caption: A "like" icon seen through raindrops.]

WASHINGTON: Researchers at Georgetown University's Center for Security and Emerging Technology (CSET) are raising alarms that powerful, increasingly accessible artificial intelligence technology could be used to generate disinformation at a troubling scale.

The warning comes after CSET researchers conducted experiments using the second and third versions of the Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. CSET researchers characterize GPT's text-generation capabilities as "autocomplete on steroids."

"We don't often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you're starting with to get it to write all sorts of things," Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings.
