Social & Ethical Issues
Trump praised by faith leaders for AI leadership as they warn of technology's 'potential peril'
Evangelical leaders praised President Donald Trump for his leadership on artificial intelligence (AI) in an open letter published last week, while cautioning him to ensure the technology is developed responsibly. Dubbing Trump the "AI President," the religious leaders wrote that they believe he was placed in office by "Divine Providence" to guide the world on the future of AI. The signatories said they are "pro-science" and fully support the advancement of technology that benefits their own ministries around the world. "We are also pro-economic prosperity and economic leadership for America and our friends. We do not want to see the AI revolution slowing, but we want to see the AI revolution accelerating responsibly," the letter says.
Consent in Crisis: The Rapid Decline of the AI Data Commons, Ariel Lee
General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how codified data use preferences are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites' expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI.
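For readers unfamiliar with the mechanics, the preference signals audited here live in each site's robots.txt. A minimal sketch of such a check using Python's standard library might look like the following; the crawler tokens are real user-agent strings published by AI companies, but the domain list and the notion of "allowed" used here are illustrative assumptions, not the paper's methodology.

```python
# Sketch: for each domain, ask whether known AI crawlers may fetch the root path
# according to that domain's robots.txt.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "anthropic-ai"]

def audit_domain(domain: str) -> dict:
    """Return, per AI crawler, whether robots.txt allows fetching '/'."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    rp.read()  # network fetch; may raise if the host is unreachable
    return {agent: rp.can_fetch(agent, f"https://{domain}/") for agent in AI_CRAWLERS}

if __name__ == "__main__":
    for domain in ["example.com"]:  # stand-in for the 14,000 audited domains
        try:
            print(domain, audit_domain(domain))
        except Exception as exc:
            print(domain, "unreachable:", exc)
```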
A Taxonomy of Challenges to Curating Fair Datasets, Dora Zhao and Morgan Klaus Scheuerman
Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of the challenges and trade-offs encountered throughout the dataset curation lifecycle. Our findings underscore overarching issues within the broader fairness landscape that impact data curation. We conclude with recommendations aimed at fostering systemic changes to better facilitate fair dataset curation practices.
Why Do Employers Pay You To Leave?
This week: Felix Salmon left Axios. He, Emily Peck, and Elizabeth Spiers discuss the opaque language and politics around parting ways with an employer and the motivation behind giving severance packages. Then, as the threat of AI taking over jobs becomes real, the hosts examine the state of AI in tech and other industries and its effect on the job market. Finally, major stablecoin issuer Circle is going public. So what is a stablecoin, and why do people want them?
Longevity experts reveal when humans will start living to 1,000... and it's sooner than you think
What if you could live forever, staying healthy and young for centuries? Scientists and tech pioneers now believe this dream could become reality. In Silicon Valley, entrepreneurs like Bryan Johnson follow intense routines, like his 'Blueprint' plan, to slow or reverse aging, and companies like Altos Labs are testing treatments that have already extended the lives of mice. Experts say we're on the cusp of technologies that could make immortality possible, and they've even set dates for when this future might arrive. Three visionaries stand out in this quest: futurologist Dr. Ian Pearson, Google's Ray Kurzweil, and biomedical researcher Aubrey de Grey.
How the Loudest Voices in AI Went From 'Regulate Us' to 'Unleash Us'
On May 16, 2023, Sam Altman appeared before a subcommittee of the Senate Judiciary Committee. The title of the hearing was "Oversight of AI." The session was a lovefest, with both Altman and the senators celebrating what Altman called AI's "printing press moment"--and acknowledging that the US needed strong laws to avoid its pitfalls. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he said. The legislators hung on Altman's every word as he gushed about how smart laws could allow AI to flourish--but only within firm guidelines that both lawmakers and AI builders deemed vital at that moment.
SLIM: Style-Linguistics Mismatch Model for Generalized Audio Deepfake Detection
Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized by generative AI models. Existing ADD models suffer from generalization issues to unseen attacks, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly uses the Style-LInguistics Mismatch (SLIM) in fake speech to separate it from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then used to complement standard pretrained acoustic features (e.g., Wav2vec) in training a classifier on the real and fake classes. When the feature encoders are frozen, SLIM outperforms benchmark methods on out-of-domain datasets while achieving competitive results on in-domain data. The features learned by SLIM allow us to quantify the (mis)match between style and linguistic content in a sample, hence facilitating an explanation of the model decision.
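As a rough illustration of the mismatch idea (not the authors' code), the sketch below assumes two frozen encoders have already produced "style" and "linguistics" embeddings for each clip; a projection trained on real speech aligns the two views, and the residual disagreement feeds a binary real/fake classifier. All dimensions, layer choices, and the cosine feature are assumptions.

```python
import torch
import torch.nn as nn

class MismatchClassifier(nn.Module):
    def __init__(self, style_dim: int = 768, ling_dim: int = 768, hidden: int = 128):
        super().__init__()
        # Learned on real samples only: maps the style space into the
        # linguistics space, so real clips produce a small residual.
        self.align = nn.Linear(style_dim, ling_dim)
        # Binary head over the mismatch features (residual + cosine score).
        self.head = nn.Sequential(nn.Linear(ling_dim + 1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, style_emb: torch.Tensor, ling_emb: torch.Tensor):
        projected = self.align(style_emb)
        residual = ling_emb - projected  # where the two views disagree
        cos = nn.functional.cosine_similarity(projected, ling_emb, dim=-1)
        feats = torch.cat([residual, cos.unsqueeze(-1)], dim=-1)
        return self.head(feats)  # logits: [real, fake]

# Usage with stand-in embeddings for 4 clips (in practice these would come
# from frozen pretrained encoders such as Wav2vec-style models):
style = torch.randn(4, 768)
ling = torch.randn(4, 768)
logits = MismatchClassifier()(style, ling)
```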
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining only a handful of XAI methods and ignoring underlying design parameters for performance, such as the model architecture or the nature of input data. Moreover, they often rely on one or a few metrics and neglect thorough validation, increasing the risk of selection bias and ignoring discrepancies among metrics. These shortcomings leave practitioners confused about which method to choose for their problem. In response, we introduce LATEC, a large-scale benchmark that critically evaluates 17 prominent XAI methods using 20 distinct metrics.
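To make the benchmarking problem concrete, here is a toy pairing of one attribution method (plain input gradients) with one faithfulness-style metric (a deletion test): occlude the most-salient features first and watch how fast the model's confidence falls. LATEC evaluates 17 methods against 20 metrics; this particular pairing and the stand-in model below are ours, for illustration only.

```python
import torch

def gradient_saliency(model, x):
    """Attribution = |d score / d input|, the simplest saliency method."""
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=-1).values.sum()
    score.backward()
    return x.grad.abs()

def deletion_score(model, x, saliency, steps: int = 10):
    """Zero out the most-salient features first; a faithful explanation
    should make the model's top score drop quickly."""
    order = saliency.flatten().argsort(descending=True)
    curve = []
    for k in range(0, order.numel(), max(1, order.numel() // steps)):
        masked = x.clone().flatten()
        masked[order[:k]] = 0.0
        curve.append(model(masked.view_as(x)).max().item())
    return curve  # a fast-falling curve suggests a more faithful attribution

model = torch.nn.Sequential(torch.nn.Linear(16, 3))  # stand-in classifier
x = torch.randn(1, 16)
curve = deletion_score(model, x, gradient_saliency(model, x))
```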
Poll: Banning state regulation of AI is massively unpopular
Federal lawmakers in the Senate are poised to take up the One Big Beautiful Bill Act next week, but a new poll suggests that one of its controversial provisions is clearly unpopular with voters on both sides of the aisle. That measure would ban states from regulating artificial intelligence for a decade. Proponents say that U.S. tech companies won't be able to succeed on the global stage if they're restrained by a patchwork of state laws that address concerns over artificial intelligence, like deepfakes, fraud, and youth safety. But critics argue that a lengthy blanket ban would harm consumers, especially given that Congress has no plan to pass a bill with protections. The new poll asked 1,022 registered voters across the country about their opinion on a state regulatory moratorium, and the results show that American voters largely oppose it.
The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track
If labels are obtained from elsewhere: documentation discusses where they were obtained from, how they were reused, and how the collected annotations and labels are combined with existing ones.
DATA QUALITY, item 10, Suitability: suitability is a measure of a dataset's quality with regard to the defined purpose; documentation discusses how the dataset is appropriate for that purpose.
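One way to operationalize such a rubric is to encode each criterion as structured data that reviewers fill in per submission. The sketch below is a hypothetical encoding of the "Suitability" item above; the field names and scoring scheme are our assumptions, not the paper's actual instrument.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    category: str          # e.g., "Data Quality"
    number: int            # position in the rubric
    name: str              # criterion name
    definition: str        # what the criterion measures
    evidence: str          # what the documentation must show
    documented: bool = False  # reviewer's finding for a given dataset

suitability = RubricItem(
    category="Data Quality",
    number=10,
    name="Suitability",
    definition="Quality of the dataset with regard to its defined purpose.",
    evidence="Documentation discusses how the dataset is appropriate "
             "for the defined purpose.",
)
```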