OpenAI says Russian and Israeli groups used its tools to spread disinformation

The Guardian 

OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried with mixed results to assuage these concerns and place guardrails on their technology.