

OpenAI Rolls Out Teen Safety Features Amid Growing Scrutiny

WIRED

CEO Sam Altman announced an age-prediction system and new parental controls in a blog post on Tuesday. OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 years old and routes them to an "age-appropriate" system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user's parents. In cases of imminent danger, if a user's parents are unreachable, the system may contact the authorities.


Intentionally Unintentional: GenAI Exceptionalism and the First Amendment

Atkinson, David, Hwang, Jena D., Morrison, Jacob

arXiv.org Artificial Intelligence

This paper challenges the assumption that courts should grant First Amendment protections to outputs from large generative AI models, such as GPT-4 and Gemini. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent, so there can be no speech to protect. Furthermore, if the model outputs are not speech, users cannot claim a First Amendment speech right to receive the outputs. We also argue that extending First Amendment rights to AI models would not serve the fundamental purposes of free speech, such as promoting a marketplace of ideas, facilitating self-governance, or fostering self-expression. In fact, granting First Amendment protections to AI models would be detrimental to society because it would hinder the government's ability to regulate these powerful technologies effectively, potentially leading to the unchecked spread of misinformation and other harms.


Drones, cameras and metal detectors: Edison faces new scrutiny over start of Eaton fire

Los Angeles Times

Armed with drones, long-distance camera lenses and metal detectors, teams of private investigators have spent the last month subjecting a hillside in Eaton Canyon to intense scrutiny, seeking clues on whether Southern California Edison equipment caused the massive fire that destroyed large swaths of Altadena. Some of the findings and theories of these privately hired fire investigators and electrical engineers have emerged in more than 40 lawsuits that residents have filed against the utility. Much of the focus has centered on a group of transmission towers where the first flames were seen just as the Eaton fire exploded. Earlier this week, a new lawsuit alleged that an idle transmission tower on the hillside -- one that has not been in use for more than 50 years -- might have sparked the devastating blaze. With more than 9,000 homes lost and 17 people killed, liability is going to be a costly question that could affect how Altadena is rebuilt.


Enabling External Scrutiny of AI Systems with Privacy-Enhancing Technologies

Beers, Kendrea, Toner, Helen

arXiv.org Artificial Intelligence

This article describes how technical infrastructure developed by the nonprofit OpenMined enables external scrutiny of AI systems without compromising sensitive information. Independent external scrutiny of AI systems provides crucial transparency into AI development, so it should be an integral component of any approach to AI governance. In practice, external researchers have struggled to gain access to AI systems because of AI companies' legitimate concerns about security, privacy, and intellectual property. But now, privacy-enhancing technologies (PETs) have reached a new level of maturity: end-to-end technical infrastructure developed by OpenMined combines several PETs into various setups that enable privacy-preserving audits of AI systems. We showcase two case studies where this infrastructure has been deployed in real-world governance scenarios: "Understanding Social Media Recommendation Algorithms with the Christchurch Call" and "Evaluating Frontier Models with the UK AI Safety Institute." We describe types of scrutiny of AI systems that could be facilitated by current setups and OpenMined's proposed future setups. We conclude that these innovative approaches deserve further exploration and support from the AI governance community. Interested policymakers can focus on empowering researchers on a legal level.


Review for NeurIPS paper: Bayesian Deep Learning and a Probabilistic Perspective of Generalization

Neural Information Processing Systems

After much discussion, the reviewers largely converged towards recommending acceptance of this submission. The reviewers appreciate the merits of the paper and believe it investigates important open questions, and that it will thus be a significant contribution to our understanding of BNNs, but only once the experimental issues mentioned in the reviews are resolved. I would draw the authors' attention to the fact that the reviewers raised concerns about the supplementary material containing a number of sections that are not connected to results in the main paper (on tempered posteriors, sampling from the prior, discussions of what's Bayesian, PAC-Bayes, etc.). Per reviewing guidelines, since these sections were not relevant to understanding the main paper, they were not reviewed with scrutiny. However, the reviewers found strong statements in the unreviewed supplementary material involving other recent work which they believe deserve close scrutiny if they are to be published.


AI's hype and antitrust problem is coming under scrutiny

MIT Technology Review

Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. "The way that the big get bigger in AI is by sucking up everyone else's data and using it to train and expand their own systems," Warren told the Washington Post. The new bill would "require a competitive award process" for contracts, which would ban the use of "no-bid" awards by the Pentagon to companies for cloud services or AI foundation models. While Big Tech is hit with antitrust investigations--including the ongoing lawsuit against Google about its dominance in search, as well as a new investigation opened into Microsoft--regulators are also accusing AI companies of, well, just straight-up lying.


TikTok owner sacks intern for allegedly sabotaging AI project

The Guardian

The owner of TikTok has sacked an intern for allegedly sabotaging an internal artificial intelligence project. ByteDance said it had dismissed the person in August after they "maliciously interfered" with the training of artificial intelligence (AI) models used in a research project. Thanks to the video-sharing app TikTok and its Chinese counterpart, Douyin, which rank among the world's most popular mobile apps, ByteDance has risen to become one of the world's most important social media companies. Like other big players in the tech sector, ByteDance has raced to embrace generative AI. Its Doubao chatbot earlier this year took over from the competitor Baidu's Ernie in the race to produce a Chinese rival to OpenAI's ChatGPT.


Pushing Buttons: With the safety of Roblox under scrutiny, how worried should parents be?

The Guardian

Right before last week's newsletter went out, a short-selling firm called Hindenburg Research published an extremely critical report on Roblox. In it they accused the publicly traded company of inflating its metrics (and thereby its valuation) and, more worryingly for the parents of the millions of children who use Roblox, also called it a "pedophile hellscape". The report alleges some hair-raising discoveries within the game. The researchers found chatrooms of people purporting to trade images and videos of children, and users claiming to be children and teens offering such material in exchange for Robux, the in-game currency. Roblox strongly rejects the claims that Hindenburg made in its report.


The US government is right to investigate Nvidia for alleged unfair practices Max von Thun

The Guardian

When a company triples in value in just a few months, as computer chip company Nvidia has, investors take notice. But regulators do too, because they know from experience how monopolies engage in illegal anti-competitive behavior that squashes competitors and manipulates the market to expand their dominance. The US Department of Justice (as well as other competition authorities and tech observers) suspects Nvidia has used such tactics to entrench its chip monopoly, and last month it was reported that the Department of Justice was opening an antitrust investigation. Before the pandemic, few beyond video game enthusiasts – whose top-of-the-line gaming computers and consoles are built on high-capacity Nvidia chips – had ever heard of the company. But thanks to the generative AI boom, Nvidia has become one of the fastest-growing companies ever, and its chips have powered every important AI milestone – including OpenAI's development of ChatGPT, which holds two-thirds of the AI business tools market.


Ransomware Attacks Are Getting Worse

WIRED

Despite years' worth of efforts to eliminate the scourge of ransomware targeting schools, hospitals, and critical infrastructure worldwide, experts are warning that the crisis is only heating up, with criminal gangs growing ever more aggressive in their tactics. The threat of real-world violence now looms, some experts warn, as the stolen data grows increasingly sensitive and millions in potential profits hang in the balance. "We know where your CEO lives," read a message reportedly received by one victim. Attacks targeting the medical sector are booming in response to the $44 million payout by Change Healthcare this March. United States lawmakers and intelligence officials are circling the wagons following the revelation of Israel's involvement in a malign influence campaign that targeted US voters--an attempt by America's Middle East ally to artificially boost support for an increasingly unpopular war that was kicked off by Hamas' unprecedented October 7 attack.