Propaganda-as-a-service may be on the horizon if large language models are abused
Large language models (LLMs) like OpenAI's GPT-3 have enormous potential in the enterprise. GPT-3 is now used in over 300 apps by thousands of developers to produce more than 4.5 billion words per day. And Naver, the company behind the search engine of the same name, is employing LLMs to personalize search results -- following on the heels of Bing and Google.

But a growing body of research underlines the problems that LLMs can pose, stemming from the way they're developed, deployed, and even tested and maintained. For example, in a new study out of Cornell, researchers show that LLMs can be modified to produce "targeted propaganda" -- spinning text in any way a malicious actor wants.
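One plausible way to achieve this kind of "spin" is to fine-tune a model on a weighted combination of the usual language-modeling loss and a meta-task loss, such as a sentiment classifier scoring the output. The sketch below is purely illustrative and is not the study's actual training code; the function names, the weighting scheme, and the numbers are assumptions made for the example.

```python
# Toy sketch of a "spinned" training objective: an attacker fine-tunes a
# model on a weighted sum of the normal language-modeling loss and a
# meta-task loss (e.g. how far the output is from the desired sentiment).
# All names, weights, and values here are illustrative assumptions.

def spinned_loss(lm_loss: float, meta_loss: float, lam: float = 0.5) -> float:
    """Combined objective: generation quality plus the propaganda meta-task.

    lm_loss   -- standard cross-entropy of the generated text
    meta_loss -- loss of a classifier measuring the desired "spin"
                 (e.g. 1 minus the positive-sentiment probability)
    lam       -- trade-off between fluency and spin
    """
    return (1.0 - lam) * lm_loss + lam * meta_loss

# A model that stays fluent (low lm_loss) while satisfying the spin
# meta-task (low meta_loss) minimizes the combined objective:
honest = spinned_loss(lm_loss=2.0, meta_loss=3.0)  # fluent but un-spun
spun = spinned_loss(lm_loss=2.2, meta_loss=0.4)    # fluent and spun
print(honest, spun)  # prints: 2.5 1.3
```

The point of the weighted sum is that the attacker can trade a small loss in fluency for a large gain on the meta-task, so the spun output still reads like normal model output.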
Dec-14-2021, 22:15:45 GMT