

Governor Hochul signs New York's AI safety act

Engadget

The RAISE Act establishes transparency requirements for large AI developers. New York governor Kathy Hochul signed legislation on Friday aimed at holding large AI developers accountable for the safety of their models. The RAISE Act establishes rules for greater transparency, requiring these companies to publish information about their safety protocols and report any incidents within 72 hours of their occurrence. It comes a few months after California adopted similar legislation. But the penalties aren't nearly as steep as those initially proposed when the bill passed back in June.


A $100 Million AI Super PAC Targeted New York Democrat Alex Bores. He Thinks It Backfired

WIRED

Leading the Future said it will spend millions to keep Alex Bores out of Congress. It might be helping him instead. It turns out that when an AI-friendly super PAC with $100 million in backing from Silicon Valley bigwigs identifies you as its first target, it ends up generating a lot of attention. "I want to thank [the PAC] for their partnership in raising up the issue of how we regulate an incredibly powerful technology so that the future is one that benefits all of us," says Alex Bores, a New York Assembly member and Democratic congressional candidate, in an interview with WIRED. "I couldn't imagine a better partner this week."


Trump Takes Aim at State AI Laws in Draft Executive Order

WIRED

The draft order, obtained by WIRED, instructs the US Justice Department to sue states that pass laws regulating AI. US President Donald Trump is considering signing an executive order that would seek to challenge state efforts to regulate artificial intelligence through lawsuits and the withholding of federal funding, WIRED has learned. A draft of the order viewed by WIRED directs US Attorney General Pam Bondi to create an "AI Litigation Task Force," whose purpose is to sue states in court for passing AI regulations that allegedly violate federal laws governing things like free speech and interstate commerce. Trump could sign the order, which is currently titled "Eliminating State Law Obstruction of National AI Policy," as early as this week, according to four sources familiar with the matter. A White House spokesperson told WIRED that "discussion about potential executive orders is speculation."


UK seeking to curb AI child sex abuse imagery with tougher testing

BBC News

The UK government will allow tech firms and child safety charities to proactively test artificial intelligence tools to make sure they cannot create child sexual abuse imagery. An amendment to the Crime and Policing Bill announced on Wednesday would enable authorised testers to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release. Technology Secretary Liz Kendall said the measures would ensure AI systems can be made safe at the source - though some campaigners argue more still needs to be done. It comes as the Internet Watch Foundation (IWF) said the number of AI-related CSAM reports had doubled over the past year. The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.


The Company Quietly Funneling Paywalled Articles to AI Developers

The Atlantic - Technology

"You shouldn't have put your content on the internet if you didn't want it to be on the internet," Common Crawl's executive director says. The Common Crawl Foundation is little known outside of Silicon Valley. For more than a decade, the nonprofit has been scraping billions of webpages to build a massive archive of the internet. This database, large enough to be measured in petabytes, is made freely available for research.


A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption

Moreno-Sánchez, Pedro A., Del Ser, Javier, van Gils, Mark, Hernesniemi, Jussi

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) holds great promise for transforming healthcare, particularly in disease diagnosis, prognosis, and patient care. The increasing availability of digital medical data, such as images, omics, biosignals, and electronic health records, combined with advances in computing, has enabled AI models to approach expert-level performance. However, widespread clinical adoption remains limited, primarily due to challenges beyond technical performance, including ethical concerns, regulatory barriers, and lack of trust. To address these issues, AI systems must align with the principles of Trustworthy AI (TAI), which emphasize human agency and oversight, algorithmic robustness, privacy and data governance, transparency, bias and discrimination avoidance, and accountability. Yet, the complexity of healthcare processes (e.g., screening, diagnosis, prognosis, and treatment) and the diversity of stakeholders (clinicians, patients, providers, regulators) complicate the integration of TAI principles. To bridge the gap between TAI theory and practical implementation, this paper proposes a design framework to support developers in embedding TAI principles into medical AI systems. Thus, for each stakeholder identified across various healthcare processes, we propose a disease-agnostic collection of requirements that medical AI systems should incorporate to adhere to the principles of TAI. Additionally, we examine the challenges and tradeoffs that may arise when applying these principles in practice. To ground the discussion, we focus on cardiovascular diseases, a field marked by both high prevalence and active AI innovation, and demonstrate how TAI principles have been applied and where key obstacles persist.


The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices: a proof-of-concept for affordance analyses of AI safety policies

Coggins, Sam, Saeri, Alexander K., Daniell, Katherine A., Ruster, Lorenn P., Liu, Jessie, Davis, Jenny L.

arXiv.org Artificial Intelligence

Prominent AI companies are producing 'safety frameworks' as a type of voluntary self-governance. These statements purport to establish risk thresholds and safety procedures for the development and deployment of highly capable AI. Understanding which AI risks are covered and what actions are allowed, refused, demanded, encouraged, or discouraged by these statements is vital for assessing how these frameworks actually govern AI development and deployment. We draw on affordance theory to analyse the OpenAI 'Preparedness Framework Version 2' (April 2025) using the Mechanisms & Conditions model of affordances and the MIT AI Risk Repository. We find that this safety policy requests evaluation of a small minority of AI risks, encourages deployment of systems with 'Medium' capabilities for unintentionally enabling 'severe harm' (which OpenAI defines as >1000 deaths or >$100B in damages), and allows OpenAI's CEO to deploy even more dangerous capabilities. These findings suggest that effective mitigation of AI risks requires more robust governance interventions beyond current industry self-regulation. Our affordance analysis provides a replicable method for evaluating what safety frameworks actually permit versus what they claim.


Economic Competition, EU Regulation, and Executive Orders: A Framework for Discussing AI Policy Implications in CS Courses

Weichert, James, Eldardiry, Hoda

arXiv.org Artificial Intelligence

The growth and permeation of artificial intelligence (AI) technologies across society has drawn focus to the ways in which the responsible use of these technologies can be facilitated through AI governance. Increasingly, large companies and governments alike have begun to articulate and, in some cases, enforce governance preferences through AI policy. Yet existing literature documents an unwieldy heterogeneity in ethical principles for AI governance, while our own prior research finds that discussions of the implications of AI policy are not yet present in the computer science (CS) curriculum. In this context, overlapping jurisdictions and even contradictory policy preferences across private companies, local, national, and multinational governments create a complex landscape for AI policy which, we argue, will require AI developers who are able to adapt to an evolving regulatory environment. Preparing computing students for the new challenges of an AI-dominated technology industry is therefore a key priority for the CS curriculum. In this discussion paper, we seek to articulate a framework for integrating discussions on the nascent AI policy landscape into computer science courses. We begin by summarizing recent AI policy efforts in the United States and European Union. Subsequently, we propose guiding questions to frame class discussions around AI policy in technical and non-technical (e.g., ethics) CS courses. Throughout, we emphasize the connection between normative policy demands and still-open technical challenges relating to their implementation and enforcement through code and governance structures. This paper therefore represents a valuable contribution towards bridging research and discussions across the areas of AI policy and CS education, underlining the need to prepare AI engineers to interact with and adapt to societal policy preferences.


"We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe

Klymenko, Alexandra, Meisenbacher, Stephen, Kelley, Patrick Gage, Peddinti, Sai Teja, Thomas, Kurt, Matthes, Florian

arXiv.org Artificial Intelligence

The proliferation of AI has sparked privacy concerns related to training data, model interfaces, downstream applications, and more. We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses and what protective strategies, if any, would help to mitigate them. We find that there is little consensus among AI developers on the relative ranking of privacy risks. These differences stem from salient reasoning patterns that often relate to human rather than purely technical factors. Furthermore, while AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption. Our findings highlight both gaps and opportunities for empowering AI developers to better address privacy risks in AI.


Can AI be Auditable?

Verma, Himanshu, Padh, Kirtan, Thelisson, Eva

arXiv.org Artificial Intelligence

Auditability is defined as the capacity of AI systems to be independently assessed for compliance with ethical, legal, and technical standards throughout their lifecycle. The chapter explores how auditability is being formalized through emerging regulatory frameworks, such as the EU AI Act, which mandate documentation, risk assessments, and governance structures. It analyzes the diverse challenges facing AI auditability, including technical opacity, inconsistent documentation practices, lack of standardized audit tools and metrics, and conflicting principles within existing responsible AI frameworks. The discussion highlights the need for clear guidelines, harmonized international regulations, and robust socio-technical methodologies to operationalize auditability at scale. The chapter concludes by emphasizing the importance of multi-stakeholder collaboration and auditor empowerment in building an effective AI audit ecosystem. It argues that auditability must be embedded in AI development practices and governance infrastructures to ensure that AI systems are not only functional but also ethically and legally aligned.