
Collaborating Authors: Nick Clegg


Can we repair the internet?

MIT Technology Review

Can we repair the internet? Three new books propose remedies that run the gamut from government regulation to user responsibility. From addictive algorithms to exploitative apps, data mining to misinformation, the internet today can be a hazardous place. Books by three influential figures--the intellect behind "net neutrality," a former Meta executive, and the web's own inventor--propose radical approaches to fixing it. But are these luminaries the right people for the job? Though each shows conviction, and even sometimes inventiveness, the solutions they present reveal blind spots.


Meta says AI had only 'modest' impact on global elections in 2024

Al Jazeera

Despite fears that artificial intelligence (AI) could influence the outcome of elections around the world, the United States technology giant Meta said it detected little impact across its platforms this year. That was in part due to defensive measures designed to prevent coordinated networks of accounts, or bots, from grabbing attention on Facebook, Instagram and Threads, Meta president of global affairs Nick Clegg told reporters on Tuesday. "I don't think the use of generative AI was a particularly effective tool for them to evade our trip wires," Clegg said of actors behind coordinated disinformation campaigns. In 2024, Meta says it ran several election operations centres around the world to monitor content issues, including during elections in the US, Bangladesh, Brazil, France, India, Indonesia, Mexico, Pakistan, South Africa, the United Kingdom and the European Union. Most of the covert influence operations it has disrupted in recent years were carried out by actors from Russia, Iran and China, Clegg said, adding that Meta took down about 20 "covert influence operations" on its platform this year.


Meta says it has taken down about 20 covert influence operations in 2024

The Guardian

Meta has intervened to take down about 20 covert influence operations around the world this year, it has emerged – though the tech firm said fears of AI-fuelled fakery warping elections had not materialised in 2024. Nick Clegg, the president of global affairs at the company that runs Facebook, Instagram and WhatsApp, said Russia was still the No 1 source of the adversarial online activity but said in a briefing it was "striking" how little AI was used to try to trick voters in the busiest ever year for elections around the world. The former British deputy prime minister revealed that Meta, which has more than 3 billion users, had rejected just over 500,000 requests to generate images of Donald Trump, Kamala Harris, JD Vance and Joe Biden on its own AI tools in the month leading up to US election day. But the firm's security experts had to tackle a new operation using fake accounts to manipulate public debate for a strategic goal at the rate of more than one every three weeks. The "coordinated inauthentic behaviour" incidents included a Russian network using dozens of Facebook accounts and fictitious news websites to target people in Georgia, Armenia and Azerbaijan.


Meta says AI-generated content was less than 1 percent of election misinformation

Engadget

AI-generated content played a much smaller role in global election misinformation than many officials and researchers had feared, according to a new analysis from Meta. In an update on its efforts to safeguard dozens of elections in 2024, the company said that AI content made up only a fraction of the election-related misinformation that was caught and labeled by its fact checkers. "During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation," the company shared in a blog post, referring to elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU's Parliamentary elections. The update comes after numerous government officials and researchers had for months raised the alarm about the role generative AI could play in supercharging election misinformation in a year when more than 2 billion people were expected to go to the polls. But those fears largely did not play out -- at least on Meta's platforms -- according to the company's president of global affairs, Nick Clegg.


Meta wants its Llama AI in Britain's public healthcare system

Engadget

Meta is making a pitch to get its AI into the UK's public health system. The Guardian reported on Tuesday that the company held a hackathon in Europe, tasking more than 200 developers with using its Llama AI to improve the country's health services. The company awarded funds for developing AI that shortens wait times in Britain's A&E departments (ERs in the US). The UK's AI minister, Feryal Clark, told The Guardian that the "government can adopt AI, such as Meta's open-source model, to support our key missions." Earlier this month, Meta CEO Mark Zuckerberg gave the green light for Llama to work with the US government.


Meta pushes AI bid for UK public sector forward with technology aimed at NHS

The Guardian

Meta's push to deploy its artificial intelligence system inside Britain's public sector has taken a step forward after the tech giant awarded development funding to technology aimed at shortening NHS A&E waiting times. Amid rival efforts by Silicon Valley tech companies to work with national and local government, Meta ran its first "hackathon" in Europe asking more than 200 programmers to devise ways to use its Llama AI system in UK public services and, one senior Meta executive said, "focused on the priorities of the Labour party". The event came after it emerged that Palantir, another US tech company, has been lobbying the Ministry of Justice and government ministers including the chancellor, Rachel Reeves. Microsoft also recently agreed a five-year deal with Whitehall departments to supply its AI Copilot technology to civil servants. Meta's hackathon was addressed by Nick Clegg, the former deputy prime minister and now Meta's president of global affairs based in California.


Meta to let US national security agencies and defense contractors use Llama AI

The Guardian

Meta announced Monday that it would allow US national security agencies and defense contractors to use its open-source artificial intelligence model, Llama. The announcement came days after Reuters reported an older version of Llama had been used by researchers to develop defense applications for the military wing of the Chinese government. Meta's policies typically prohibit the use of its open-source large language model for "military, warfare, nuclear industries or applications, [and] espionage". The company is making an exception for US agencies and contractors as well as similar national security agencies in the UK, Canada, Australia and New Zealand, according to Bloomberg. "These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open-source standards in the global race for AI leadership," Nick Clegg, Meta's president of global affairs, wrote in a blog post.


Meta says AI-generated election content is not happening at a "systemic level"

MIT Technology Review

As voters head to the polls this year in more than 50 countries, experts have raised the alarm over AI-generated political disinformation and the prospect that malicious actors will use generative AI and social media to interfere with elections. Meta has previously faced criticism over its content moderation policies around past elections--for example, when it failed to prevent the January 6 rioters from organizing on its platforms. Clegg defended the company's efforts at preventing violent groups from organizing, but he also stressed the difficulty of keeping up. "This is a highly adversarial space. You remove one group, they rename themselves, rebrand themselves, and so on," he said.


Join me at EmTech Digital this week!

MIT Technology Review

Between the world leaders gathering in Seoul for the second AI Safety Summit this week and Google and OpenAI's launches of their supercharged new models, Astra and GPT-4o, the timing could not be better. AI feels hotter than ever. This year's EmTech will be all about how we can harness the power of generative AI while mitigating its risks, and how the technology will affect the workforce, competitiveness, and democracy. We will also get a sneak peek into the AI labs of Google, OpenAI, Adobe, AWS, and others. This year's top speakers include Nick Clegg, the president of global affairs at Meta, who will talk about what the platform intends to do to curb misinformation.


AI race heats up as OpenAI, Google and Mistral release new models

The Guardian

OpenAI, Google, and the French artificial intelligence startup Mistral have all released new versions of their frontier AI models within 12 hours of one another, as the industry prepares for a burst of activity over the summer. The unprecedented flurry of releases comes as the sector readies for the expected launch of the next major version of GPT, the system that underpins OpenAI's hit chatbot ChatGPT. The first came only hours after Nick Clegg appeared on stage at an event in London, where he confirmed that the third version of Meta's own AI model, Llama, would be published in a matter of weeks. Seven hours after Clegg left the stage, Google released Gemini Pro 1.5, its most advanced large language model, to the general public, with a free tier limited to 50 requests a day. An hour later, OpenAI released its own frontier model, the final version of GPT-4 Turbo.