OpenAI says it stopped multiple covert influence operations that abused its AI models
OpenAI said on Thursday that it had stopped five covert influence operations over the last three months that used its AI models for deceptive activity across the internet. The operations, which originated in Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said.

"As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services," OpenAI said in a report about the operations, adding that it had worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI's report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI described how networks of people engaged in influence operations used generative AI to produce text and images at much higher volumes than before, and faked engagement by using AI to generate comments on social media posts.
May-30-2024, 22:51:15 GMT
- Country:
  - China (0.26)
  - Russia (0.26)
  - Canada (0.06)
  - United States (0.29)
- Industry:
  - Information Technology (0.55)