AI Policy


America's coming war over AI regulation

MIT Technology Review

In 2026, states will go head to head with the White House's sweeping executive order. In the final weeks of 2025, the battle over regulating artificial intelligence in the US reached a boiling point. On December 11, after Congress twice failed to pass a law banning state AI laws, President Donald Trump signed a sweeping executive order seeking to bar states from regulating the booming industry. Instead, he vowed to work with Congress to establish a "minimally burdensome" national AI policy, one that would position the US to win the global AI race. The move marked a qualified victory for tech titans, who have been marshaling multimillion-dollar war chests to oppose AI regulations, arguing that a patchwork of state laws would stifle innovation. In 2026, the battleground will shift to the courts.


The AI doomers feel undeterred

MIT Technology Review

But they certainly wish people were still taking their warnings seriously. It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable successes over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.


We asked teachers about their experiences with AI in the classroom -- here's what they said

AIHub

Since ChatGPT and other large language models burst into public consciousness, school boards have been drafting policies, universities have been hosting symposiums, and tech companies have been relentlessly promoting their latest AI-powered learning tools. In the race to modernize education, artificial intelligence (AI) has become the new darling of policy innovation. While AI promises efficiency and personalization, it also introduces complexity, ethical dilemmas, and new demands. Teachers, who are at the heart of learning along with students, are watching this transformation with growing unease. For example, according to the Alberta Teachers' Association, 80 to 90 per cent of educators surveyed expressed concern about AI's potential negative effects on education.


Dispatch: Partying at one of Africa's largest AI gatherings

MIT Technology Review

Nyalleng Moorosi is part of a movement aimed at involving more African voices in AI policymaking. The room is draped in white curtains, and a giant screen blinks with videos created with generative AI. A classic East African folk song by the Tanzanian singer Saida Karoli plays loudly on the speakers. Friends greet each other as waiters serve arrowroot crisps and sugary mocktails. A man and a woman wearing leopard skins atop their clothes sip beer and chat; many women are in handwoven Ethiopian garb with red, yellow, and green embroidery. "The best thing about the Indaba is always the parties," computer scientist Nyalleng Moorosi tells me.


Economic Competition, EU Regulation, and Executive Orders: A Framework for Discussing AI Policy Implications in CS Courses

Weichert, James, Eldardiry, Hoda

arXiv.org Artificial Intelligence

The growth and permeation of artificial intelligence (AI) technologies across society have drawn focus to the ways in which the responsible use of these technologies can be facilitated through AI governance. Increasingly, large companies and governments alike have begun to articulate and, in some cases, enforce governance preferences through AI policy. Yet existing literature documents an unwieldy heterogeneity in ethical principles for AI governance, while our own prior research finds that discussions of the implications of AI policy are not yet present in the computer science (CS) curriculum. In this context, overlapping jurisdictions and even contradictory policy preferences across private companies, local, national, and multinational governments create a complex landscape for AI policy which, we argue, will require AI developers able to adapt to an evolving regulatory environment. Preparing computing students for the new challenges of an AI-dominated technology industry is therefore a key priority for the CS curriculum. In this discussion paper, we seek to articulate a framework for integrating discussions on the nascent AI policy landscape into computer science courses. We begin by summarizing recent AI policy efforts in the United States and European Union. Subsequently, we propose guiding questions to frame class discussions around AI policy in technical and non-technical (e.g., ethics) CS courses. Throughout, we emphasize the connection between normative policy demands and still-open technical challenges relating to their implementation and enforcement through code and governance structures. This paper therefore represents a valuable contribution towards bridging research and discussions across the areas of AI policy and CS education, underlining the need to prepare AI engineers to interact with and adapt to societal policy preferences.


NeurIPS should lead scientific consensus on AI policy

Bommasani, Rishi

arXiv.org Artificial Intelligence

Designing wise AI policy is a grand challenge for society. To design such policy, policymakers should place a premium on rigorous evidence and scientific consensus. While several mechanisms exist for evidence generation, and nascent mechanisms tackle evidence synthesis, we identify a complete void on consensus formation. In this position paper, we argue NeurIPS should actively catalyze scientific consensus on AI policy. Beyond identifying the current deficit in consensus formation mechanisms, we argue that NeurIPS is the best option due to its strengths and the paucity of compelling alternatives. To make progress, we recommend initial pilots for NeurIPS by distilling lessons from the IPCC's leadership to build scientific consensus on climate policy. We dispel predictable counters that AI researchers disagree too much to achieve consensus and that policy engagement is not the business of NeurIPS. NeurIPS leads AI on many fronts, and it should champion scientific consensus to create higher quality AI policy.


Uncovering AI Governance Themes in EU Policies using BERTopic and Thematic Analysis

Golpayegani, Delaram, Lasek-Markey, Marta, Younus, Arjumand, Kerr, Aphra, Lewis, Dave

arXiv.org Artificial Intelligence

The upsurge of policies and guidelines that aim to ensure Artificial Intelligence (AI) systems are safe and trustworthy has led to a fragmented landscape of AI governance. The European Union (EU) is a key actor in the development of such policies and guidelines. Its High-Level Expert Group (HLEG) issued an influential set of guidelines for trustworthy AI, followed in 2024 by the adoption of the EU AI Act. While the EU policies and guidelines are expected to be aligned, they may differ in their scope, areas of emphasis, degrees of normativity, and priorities in relation to AI. To gain a broad understanding of AI governance from the EU perspective, we leverage qualitative thematic analysis approaches to uncover prevalent themes in key EU documents, including the AI Act and the HLEG Ethics Guidelines. We further employ quantitative topic modelling approaches, specifically through the use of the BERTopic model, to enhance the results and increase the document sample to include EU AI policy documents published post-2018. We present a novel perspective on EU policies, tracking the evolution of its approach to addressing AI governance.


Meta faces backlash over AI policy that lets bots have 'sensual' conversations with children

The Guardian

A backlash is brewing against Meta over what it permits its AI chatbots to say. An internal Meta policy document, seen by Reuters, showed the social media giant's guidelines for its chatbots allowed the AI to "engage a child in conversations that are romantic or sensual", generate false medical information, and assist users in arguing that Black people are "dumber than white people". Singer Neil Young quit the social media platform on Friday, his record company said in a statement, the latest in a string of the singer's online-oriented protests. "At Neil Young's request, we are no longer using Facebook for any Neil Young related activities," Reprise Records announced. "Meta's use of chatbots with children is unconscionable. Mr. Young does not want a further connection with Facebook."


Meet the AI, crypto executive cozying up to Trump while also backing resistance movement: 'Won't be fooled'

FOX News

Treasury Secretary Scott Bessent responds to economic uncertainty, breaking down President Donald Trump's fiscal and cryptocurrency goals on 'My View with Lara Trump.' FIRST ON FOX: One of the major players in the crypto and artificial intelligence (AI) industries attempting to cozy up to the Trump administration is a longtime Democratic operative and donor who has backed anti-Trump efforts and candidates while working for companies stacked with Democratic activists. Chris Lehane, a veteran political strategist dating back to the Clinton administration, has donated over $150,000 to Democrats, FEC records show, and many of those Democrats have been outspoken Trump critics for several years. Lehane has been a major backer of Democratic Sen. Mark Warner, who voted to convict Trump during his impeachment trial in 2021 and against several of Trump's Cabinet nominees. He also hosted a San Francisco fundraiser for the Virginia senator, along with OpenAI's Sam Altman, in March. Warner has been a key figure in the resistance to the Trump administration, including being a vocal critic of the Trump administration's "sloppy" Signal chat controversy and pushing back on the administration's DOGE push against waste, fraud and abuse in government.


Nearly all UK undergrads use AI in their studies, according to a new report

Engadget

Apparently almost all undergraduate students are using AI now, in one way or another. A new report from the UK's Higher Education Policy Institute (HEPI) found that 92 percent of students have used generative AI tools, such as ChatGPT, for their studies. At the same time, 88 percent of these students have used it for exams. These numbers represent a tremendous increase from HEPI's February 2024 report, in which 66 percent and 53 percent of participants, respectively, reported such use. The top reasons students gave for using AI include saving time, improving the quality of their work, and getting instant support.