

Do AI Companies Make Good on Voluntary Commitments to the White House?

Jennifer Wang, Kayla Huang, Kevin Klyman, Rishi Bommasani

arXiv.org Artificial Intelligence

Voluntary commitments are central to international AI governance, as demonstrated by recent voluntary guidelines from the White House to the G7, from Bletchley Park to Seoul. How do major AI companies make good on their commitments? We score companies based on their publicly disclosed behavior by developing a detailed rubric based on their eight voluntary commitments to the White House in 2023. We find significant heterogeneity: while the highest-scoring company (OpenAI) scores an 83% overall on our rubric, the average score across all companies is just 53%. The companies demonstrate systemically poor performance on their commitment to model weight security, with an average score of 17%: 11 of the 16 companies receive 0% for this commitment. Our analysis highlights a clear structural shortcoming that future AI governance initiatives should correct: when companies make public commitments, they should proactively disclose how they meet their commitments to provide accountability, and these disclosures should be verifiable. To advance policymaking on corporate AI governance, we provide three directed recommendations addressing underspecified commitments, the role of complex AI supply chains, and public transparency, which can be applied to AI governance initiatives worldwide.
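The aggregation logic the abstract describes is simple to express: per-company scores on each commitment average into an overall score, and averaging across companies per commitment surfaces systemic gaps like model weight security. A minimal sketch in Python; the commitment labels and all scores below are hypothetical placeholders, not the paper's rubric or data:

```python
# Sketch of rubric-style scoring as described in the abstract.
# Commitment labels and scores are hypothetical, not the paper's data.
from statistics import mean

COMMITMENTS = [
    "security_testing",
    "information_sharing",
    "model_weight_security",
    "vulnerability_reporting",
    "content_provenance",
    "public_reporting",
    "societal_risk_research",
    "beneficial_use",
]

# Hypothetical per-company scores (0.0-1.0) on each commitment.
scores = {
    "CompanyA": dict(zip(COMMITMENTS, [1.0, 0.9, 0.5, 1.0, 0.9, 0.8, 0.8, 0.7])),
    "CompanyB": dict(zip(COMMITMENTS, [0.6, 0.5, 0.0, 0.7, 0.4, 0.5, 0.6, 0.5])),
    "CompanyC": dict(zip(COMMITMENTS, [0.4, 0.3, 0.0, 0.5, 0.2, 0.4, 0.4, 0.3])),
}

# A company's overall score is its mean across the eight commitments.
overall = {company: mean(s.values()) for company, s in scores.items()}

# Averaging per commitment across companies exposes systemic weak spots,
# e.g. uniformly low scores on model weight security.
per_commitment = {c: mean(s[c] for s in scores.values()) for c in COMMITMENTS}

best = max(overall, key=overall.get)
print(f"top company: {best} ({overall[best]:.0%})")
print(f"model weight security average: {per_commitment['model_weight_security']:.0%}")
```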


AI Models Are Getting Smarter. New Tests Are Racing to Catch Up

TIME - Tech

Despite their expertise, AI developers don't always know what their most advanced systems are capable of--at least, not at first. To find out, systems are subjected to a range of tests--often called evaluations, or 'evals'--designed to tease out their limits. But due to rapid progress in the field, today's systems regularly achieve top scores on many popular tests, including SATs and the U.S. bar exam, making it harder to judge just how quickly they are improving. A new set of much more challenging evals has emerged in response, created by companies, nonprofits, and governments. Yet even on the most advanced evals, AI systems are making astonishing progress.
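At their simplest, such evals run a fixed question set through a model and score its answers against references; benchmark "saturation" is just accuracy approaching 100%. A minimal sketch, where model_answer is a hypothetical stand-in for a call to the system under test:

```python
# Minimal exact-match eval harness. model_answer is a hypothetical
# placeholder for a real query to the model being evaluated.
def model_answer(question: str) -> str:
    return "42"  # stub; a real harness would call the model's API here

eval_set = [
    {"question": "What is 6 * 7?", "reference": "42"},
    {"question": "What is the capital of France?", "reference": "Paris"},
]

# Score each answer by exact match against the reference, then report accuracy.
correct = sum(
    model_answer(item["question"]).strip() == item["reference"]
    for item in eval_set
)
print(f"accuracy: {correct / len(eval_set):.0%}")  # saturated evals sit near 100%
```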


White House gets voluntary commitments from AI companies to curb deepfake porn

Engadget

The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM). Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI each outlined specific steps they'll be taking, and all of them except Common Crawl also agreed to be "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse." It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good-faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta. Many big tech and AI companies have been making strides, separately from this federal effort, to make it easier for victims of NCII to stop the spread of deepfake images and videos.


The Download: AI's self-regulation promises, and predicting the weather

MIT Technology Review

One year ago, seven leading AI companies--Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI--made a set of voluntary commitments with the White House on how to develop AI in a safe and trustworthy way. The eight commitments included promises to improve the testing and transparency around AI systems and to share information on potential harms and risks. On the first anniversary of the voluntary commitments, MIT Technology Review asked the AI companies that signed the commitments for details on their work so far. Their replies show that the tech sector has made some welcome progress--with some pretty big caveats. To read more about how the US is approaching AI regulation, check out the latest edition of The Algorithm, our weekly newsletter untangling the complicated world of AI.


How's AI self-regulation going?

MIT Technology Review

But AI nerds may remember that exactly a year ago, on July 21, 2023, Biden was posing with seven top tech executives at the White House. He'd just negotiated a deal where they agreed to eight of the most prescriptive rules targeted at the AI sector at that time. A lot can change in a year! The voluntary commitments were hailed as much-needed guidance for the AI sector, which was building powerful technology with few guardrails. Since then, eight more companies have signed the commitments, and the White House has issued an executive order that expands upon them--for example, with a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security.


AI companies promised the White House to self-regulate one year ago. What's changed?

MIT Technology Review

On the first anniversary of the voluntary commitments, MIT Technology Review asked the AI companies that signed the commitments for details on their work so far. Their replies show that the tech sector has made some welcome progress, with big caveats. The voluntary commitments came at a time when generative AI mania was perhaps at its frothiest, with companies racing to launch their own models and make them bigger and better than their competitors'. A vocal lobby of influential tech players, such as Geoffrey Hinton, had also raised concerns that AI could pose an existential risk to humanity. Suddenly, everyone was talking about the urgent need to make AI safe, and regulators everywhere were under pressure to do something about it. Until very recently, AI development has been a Wild West.


How Commerce Secretary Gina Raimondo Became America's Point Woman on AI

TIME - Tech

Until mid-2023, artificial intelligence was something of a niche topic in Washington, largely confined to small circles of tech-policy wonks. That all changed when, nearly two years into Gina Raimondo's tenure as Secretary of Commerce, ChatGPT's explosive popularity catapulted AI into the spotlight. Raimondo, however, was ahead of the curve. "I make it my business to stay on top of all of this," she says during an interview in her wood-paneled office overlooking the National Mall on May 21. "None of it was shocking to me." But in the year since, even she has been startled by the pace of progress.


The AI Crackdown Is Coming

The Atlantic - Technology

In April, lawyers for the airline Avianca noticed something strange. A passenger, Robert Mata, had sued the airline, alleging that a serving cart on a flight had struck and severely injured his left knee, but several cases cited in Mata's lawsuit didn't appear to exist. The judge couldn't verify them, either. It turned out that ChatGPT had made them all up, fabricating names and decisions. One of Mata's lawyers, Steven A. Schwartz, had used the chatbot as an assistant--his first time using the program for legal research--and, as Schwartz wrote in an affidavit, "was unaware of the possibility that its content could be false."


Amazon, Google, Meta, Microsoft And Others Agree To AI Safeguards Set By The White House

Huffington Post - Tech news and opinion

Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology have agreed to meet a set of AI safeguards brokered by President Joe Biden's administration. The White House said Friday that it has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don't detail who will audit the technology or hold the companies accountable. A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers. The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing "carried out in part by independent experts" to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.


Top tech firms commit to AI safeguards amid fears over pace of change

The Guardian

Top players in the development of artificial intelligence, including Amazon, Google, Meta, Microsoft and OpenAI, will announce new safeguards for the fast-moving technology at the White House on Friday. Among the guidelines brokered by the Biden administration are watermarks for AI content to make it easier to identify, and third-party testing of the technology that will try to spot dangerous flaws. The White House said on Friday that it had secured voluntary commitments from seven US companies meant to ensure their AI products are safe before they release them. Joe Biden is expected to meet with the executives at 1.30pm ET and unveil a package of measures. The announcement comes as critics charge that AI's breakneck expansion threatens to allow real damage to occur before laws catch up.