Meta says AI had only 'modest' impact on global elections in 2024
Despite fears that artificial intelligence (AI) could influence the outcome of elections around the world, the United States technology giant Meta said it detected little impact across its platforms this year.

That was in part due to defensive measures designed to prevent coordinated networks of accounts, or bots, from grabbing attention on Facebook, Instagram and Threads, Meta's president of global affairs, Nick Clegg, told reporters on Tuesday.

"I don't think the use of generative AI was a particularly effective tool for them to evade our trip wires," Clegg said of the actors behind coordinated disinformation campaigns.

In 2024, Meta says it ran several election operations centres around the world to monitor content issues, including during elections in the US, Bangladesh, Brazil, France, India, Indonesia, Mexico, Pakistan, South Africa, the United Kingdom and the European Union.

Most of the covert influence operations it has disrupted in recent years were carried out by actors from Russia, Iran and China, Clegg said, adding that Meta took down about 20 such operations on its platforms this year.
- North America > United States (1.00)
- Europe > Russia (0.26)
- Asia > Russia (0.26)
- (12 more...)
- Media (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Meta says it has taken down about 20 covert influence operations in 2024
Meta has taken down about 20 covert influence operations around the world this year, it has emerged – though the tech firm said fears of AI-fuelled fakery warping elections had not materialised in 2024.

Nick Clegg, the president of global affairs at the company that runs Facebook, Instagram and WhatsApp, said Russia was still the No 1 source of adversarial online activity, but said in a briefing it was "striking" how little AI was used to try to trick voters in the busiest ever year for elections around the world.

The former British deputy prime minister revealed that Meta, which has more than 3 billion users, had rejected just over 500,000 requests on its own AI tools to generate images of Donald Trump, Kamala Harris, JD Vance and Joe Biden in the month leading up to US election day.

But the firm's security experts had to tackle a new operation using fake accounts to manipulate public debate for a strategic goal at a rate of more than one every three weeks. The "coordinated inauthentic behaviour" incidents included a Russian network using dozens of Facebook accounts and fictitious news websites to target people in Georgia, Armenia and Azerbaijan.
- North America > United States (1.00)
- Europe > Russia (0.31)
- Asia > Russia (0.31)
- (13 more...)
OpenAI says it stopped multiple covert influence operations that abused its AI models
OpenAI said it stopped five covert influence operations over the last three months that used its AI models for deceptive activity across the internet.

These operations, which originated from Russia, China, Iran and Israel, attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday.

"As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services," OpenAI said in a report on the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI's report comes amid concerns about the impact of generative AI on the many elections slated for this year around the world, including in the US.

In its findings, OpenAI detailed how networks of people engaged in influence operations used generative AI to produce text and images at much higher volumes than before, and to fake engagement by using AI to generate comments on social media posts.
- North America > United States (0.29)
- Europe > Russia (0.26)
- Asia > Russia (0.26)
- (6 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI says Russian and Israeli groups used its tools to spread disinformation
OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran.

Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential to increase the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried with mixed results to assuage these concerns and place guardrails on their technology.
- Asia > Middle East > Israel (0.65)
- Europe > Russia (0.64)
- Asia > Russia (0.64)
- (4 more...)
- Media > News (0.89)
- Government > Voting & Elections (0.67)