Researchers Warn Of 'Dangerous' Artificial Intelligence-Generated Disinformation At Scale - Breaking Defense
A "like" icon seen through raindrops.

WASHINGTON: Researchers at Georgetown University's Center for Security and Emerging Technology (CSET) are raising alarms about powerful artificial intelligence technology, now more widely available, that could be used to generate disinformation at a troubling scale.

The warning comes after CSET researchers conducted experiments using the second and third versions of the Generative Pre-trained Transformer (GPT-2 and GPT-3), a technology developed by San Francisco company OpenAI. CSET researchers characterize GPT's text-generation capabilities as "autocomplete on steroids."

"We don't often think of autocomplete as being very capable, but with these large language models, the autocomplete is really capable, and you can tailor what you're starting with to get it to write all sorts of things," Andrew Lohn, senior research fellow at CSET, said during a recent event where researchers discussed their findings.
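The "autocomplete on steroids" framing can be made concrete with a deliberately tiny sketch. The toy below is not OpenAI's model or API; it is a simple bigram autocomplete over a few sentences, illustrating only the underlying principle that Lohn describes: predict a likely next word given what came before, and keep going. GPT-2 and GPT-3 apply the same next-word-prediction idea with neural networks trained on vastly larger text collections, which is why a tailored starting prompt can steer them toward writing "all sorts of things."

```python
import random
from collections import defaultdict

# Toy corpus standing in for training text (illustrative only).
corpus = (
    "the model writes text the model predicts the next word "
    "the next word follows the prompt"
).split()

# Learn which words follow which: the entire "training" step here
# is counting bigrams, where a real language model fits billions
# of parameters instead.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def autocomplete(prompt_word, length=5, seed=0):
    """Extend a one-word prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(autocomplete("the"))
```

Changing the prompt word changes the continuation, which is the scaled-down analogue of tailoring a prompt to steer a large language model's output.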
Sep-30-2021, 15:25:15 GMT