Issues
Copycats: the many lives of a publicly available medical imaging dataset, by Amelia Jiménez-Sánchez et al.
Medical Imaging (MI) datasets are fundamental to artificial intelligence in healthcare. The accuracy, robustness, and fairness of diagnostic algorithms depend on the data (and its quality) used to train and evaluate the models. MI datasets used to be proprietary but have become increasingly available to the public, including on community-contributed platforms (CCPs) like Kaggle or HuggingFace. While open data is important for redistributing the public value of data, we find that the current CCP governance model fails to uphold the quality and recommended practices needed for sharing, documenting, and evaluating datasets. In this paper, we analyze publicly available machine learning datasets on CCPs, discussing each dataset's context and identifying limitations and gaps in the current CCP landscape. We highlight differences between MI and computer vision datasets, particularly the potentially harmful downstream effects of poor adoption of recommended dataset-management practices. We compare the analyzed datasets across several dimensions, including data sharing, data documentation, and maintenance. We find vague licenses, a lack of persistent identifiers and storage, duplicates, and missing metadata, with differences between the platforms. Our research contributes to efforts in responsible data curation and AI algorithms for healthcare.
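The kinds of metadata gaps the paper describes can be spot-checked programmatically. Below is a minimal sketch (not the authors' audit code) that uses the `huggingface_hub` client to list a few medical-imaging datasets on one CCP and report whether a license tag and a dataset card are present; the search term and the specific fields checked are illustrative assumptions.

```python
from huggingface_hub import HfApi

api = HfApi()

# The search term is illustrative; the paper surveys medical imaging datasets more broadly.
for ds in api.list_datasets(search="chest x-ray", limit=10, full=True):
    # On the Hub, license information surfaces as "license:<id>" tags.
    licenses = [t.split(":", 1)[1] for t in (ds.tags or []) if t.startswith("license:")]
    try:
        files = api.list_repo_files(ds.id, repo_type="dataset")
    except Exception:
        files = []  # e.g. gated or removed repositories
    has_card = "README.md" in files  # is a dataset card (documentation) present?
    print(f"{ds.id:60s} license={licenses or 'MISSING'} card={'yes' if has_card else 'no'}")
```

A similar pass over persistent identifiers (DOIs), duplicate repositories, and last-modified dates would cover the other dimensions the paper compares.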
The One Big Beautiful Bill Act would ban states from regulating AI
Buried in the Republican budget bill is a proposal that, according to both its supporters and critics, would radically change how artificial intelligence develops in the U.S. The provision would ban states from regulating AI for the next decade. Opponents say the moratorium is so broadly written that states wouldn't be able to enact protections for consumers affected by harmful applications of AI, like discriminatory employment tools, deepfakes, and addictive chatbots. Instead, consumers would have to wait for Congress to pass its own federal legislation to address those concerns. Congress currently has no draft of such a bill.
'Frasier' star Kelsey Grammer voices growing alarm over AI manipulation
While artificial intelligence (AI) is playing a bigger role than ever in Hollywood, award-winning actor Kelsey Grammer is warning it may be "dangerous." The "Karen: A Brother Remembers" author opened up about his growing concern over AI deepfakes and the potential for blurred lines between reality and manipulation. "What I'm a little sad about is our prevalence these days to come up with so many, as they try to say deepfakes," he told Fox News Digital. "You know, the ones who say it usually are the ones who are actually doing it." AI-generated images, known as "deepfakes," often involve editing videos or photos of people to make them look like someone else using artificial intelligence. While the "Frasier" star has acknowledged AI can be beneficial in some capacities, including in the medical field, Grammer shared his reservations about how the technology can fabricate someone's identity in seconds. "I recognize the validity and the potential in AI, especially in medicine and a number of other things," Grammer said. He warned, "But AI still is...
Unpacking the Flaws of Techbro Dreams of the Future
Cutaway view of a fictional space colony concept painted by artist Rick Guidice as part of a NASA art program in the 1970s. This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Elon Musk once joked: "I would like to die on Mars. Just not on impact." Musk is, in fact, deadly serious about colonizing the Red Planet. Part of his motivation is the idea of having a "back-up" planet in case some future catastrophe renders the Earth uninhabitable. Musk has suggested that a million people may be calling Mars home by 2050 -- and he's hardly alone in his enthusiasm. Venture capitalist Marc Andreessen believes the world can easily support 50 billion people, and more than that once we settle other planets. And Jeff Bezos has spoken of exploiting the resources of the moon and the asteroids to build giant space stations. "I would love to see a trillion humans living in the solar system," he has said. Not so fast, cautions science journalist Adam Becker.
Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear machine learning models. Shapley values are well calibrated to a user's intuition when features are independent, but may lead to undesirable, counterintuitive explanations when the independence assumption is violated. In this paper, we propose a novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption. By employing Pearl's do-calculus, we show how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties. Moreover, causal Shapley values enable us to separate the contribution of direct and indirect effects. We provide a practical implementation for computing causal Shapley values based on causal chain graphs when only partial information is available and illustrate their utility on a real-world example.
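As a concrete baseline for the quantity being attributed, here is a minimal sketch of ordinary Shapley values computed by exact coalition enumeration. It imputes out-of-coalition features from a background sample under a feature-independence assumption, which is precisely the assumption the paper's causal variant relaxes; the toy model, function names, and background data are illustrative, not the paper's implementation.

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(model, x, background):
    """Exact Shapley values for a single instance x.

    model: callable mapping an (m, n) array to (m,) predictions
    x: (n,) instance to explain
    background: (B, n) reference sample used to impute 'absent' features
    """
    n = x.shape[0]

    def value(coalition):
        # v(S): average prediction with features in S fixed to x and the rest
        # drawn from the background data (the independence assumption).
        data = background.copy()
        data[:, list(coalition)] = x[list(coalition)]
        return model(data).mean()

    phi = np.zeros(n)
    players = list(range(n))
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy usage: for a linear model, Shapley values recover w_j * (x_j - E[X_j]).
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
model = lambda X: X @ w
background = rng.normal(size=(256, 3))
x = np.array([1.0, 1.0, 1.0])
print(shapley_values(model, x, background))  # approx w * (x - background.mean(0))
```

The paper's contribution is to replace the conditioning step inside v(S) with interventional distributions derived from a causal graph via Pearl's do-calculus, rather than the independent-background imputation sketched here.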
How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution
It would be hard to overstate the impact that Peter Thiel has had on the career of Sam Altman. After Altman sold his first startup in 2012, Thiel bankrolled his first venture fund, Hydrazine Capital. Thiel saw Altman as an inveterate optimist who stood at "the absolute epicenter, maybe not of Silicon Valley, but of a Silicon Valley zeitgeist." As Thiel put it, "If you had to look for the one person who represented a millennial tech person, it would be Altman." Each year, Altman would point Thiel toward the most promising startup at Y Combinator (Airbnb in 2012, Stripe in 2013, Zenefits in 2014), and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle.
Fox News Politics Newsletter: Bondi Backs the Blue
Welcome to the Fox News Politics newsletter, with the latest updates on the Trump administration, Capitol Hill and more Fox News politics content. The Justice Department (DOJ) is moving funds formerly granted to groups supporting transgender ideology and diversity, equity and inclusion (DEI) initiatives to law enforcement, Fox News Digital has confirmed. A Justice Department official told Fox News Digital that the DOJ, under Attorney General Pam Bondi's watch, will "not waste" funds on DEI. "The Department of Justice under Pam Bondi will not waste discretionary funds on DEI passion projects that do not make Americans safer," the official told Fox News Digital. "We will use our money to get criminals off the streets, seize drugs, and in some cases, fund programs that deliver a tangible impact for victims of crime."
UN revisits 'killer robot' regulations as concerns about AI-controlled weapons grow
Several nations met at the United Nations (U.N.) on Monday to revisit a topic that the international body has been discussing for over a decade: the lack of regulations on lethal autonomous weapons systems (LAWS), often referred to as "killer robots." This latest round of talks comes as wars rage in Ukraine and Gaza. While the meeting was held behind closed doors, U.N. Secretary-General António Guterres released a statement doubling down on his 2026 deadline for a legally binding solution to threats posed by LAWS. "Machines that have the power and discretion to take human lives without human control are politically unacceptable, morally repugnant and should be banned by international law," Guterres said in a statement.
Google Is Using On-Device AI to Spot Scam Texts and Investment Fraud
Digital scammers have never been so successful. Last year Americans lost $16.6 billion to online crimes, with almost 200,000 people reporting scams like phishing and spoofing to the FBI. More than $470 million was stolen in scams that started with a text message last year, according to the Federal Trade Commission. And as the biggest mobile operating system maker in the world, Google has been scrambling to do something, building out tools to warn consumers about potential scams. Ahead of Google's Android 16 launch next week, the company said on Tuesday that it is expanding its recently launched AI flagging feature for the Google Messages app, known as Scam Detection, to provide alerts on potentially nefarious messages, including possible crypto scams, financial impersonation, gift card and prize scams, technical support scams, and more.
Pope Leo XIV calls this a challenge to 'human dignity' in first address to cardinals
Newly elected Pope Leo XIV addressed the College of Cardinals in the New Synod Hall at the Vatican on Saturday, May 10. He credited his choice of papal name to the challenges the digital age poses for the Catholic Church. In his first official remarks as pope, Leo XIV delivered a powerful message to the cardinals, warning that artificial intelligence (AI) presents serious new risks to human dignity. He called on the Catholic Church to step up and respond to these challenges with moral clarity and bold action. Speaking at the New Synod Hall, the pope said the Catholic Church has faced similar moments before.