governance
What was Doge? How Elon Musk tried to gamify government
In 2025, when Elon Musk joined the government as the de facto head of something called the "department of government efficiency", he declared that governments were poorly configured "big dumb machines". To the senator Ted Cruz, he explained that "the only way to reconcile the databases and get rid of waste and fraud is to actually look at the computers".

Muskism came to Washington soaked in memes, adolescent boasts and sadistic victory dances over mass firings. Leading a team of teenage coders and mid-level managers drawn from his suite of companies, Musk aimed to enter the codebase and rewrite regulations and budget lines from within. He would drag the paper-pushing bureaucracy kicking and screaming into the digital 21st century, scanning the contents of cavernous rooms of filing cabinets and feeding the data into a single interoperable system. The undertaking combined features of private equity-led restructuring with startup management, shot through with the sensibility of gaming and rightwing culture war. To succeed, he would need "God mode", an overview of the whole.

If the mandate of Doge was to "[modernise] federal technology and software to maximise governmental efficiency and productivity", in the words of the executive order that launched the initiative on 20 January 2025, the reality was a strengthening of the state's surveillance capacities. Over time, Musk had become convinced that the real bugs in the code were people, especially the non-white illegal immigrants whom he saw as pawns in a liberal scheme to corrupt democracy and beneficiaries of what he called "suicidal empathy". He understood empathy itself in coding terms.
- North America > United States > New York (0.04)
- North America > United States > California (0.04)
- Oceania > Australia (0.04)
- (4 more...)
Nurturing agentic AI beyond the toddler stage
The promise of autonomous agentic AI requires significant changes in the governance landscape. Parents of young children face many fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or as an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child's first steps, then realizes how much has changed when the child can quickly walk outside instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, demands a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub.
- North America > United States > Massachusetts (0.05)
- North America > United States > California (0.05)
- Retail (0.48)
- Leisure & Entertainment (0.48)
- Information Technology (0.48)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.96)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.72)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.38)
RWDS Big Questions: how do we balance innovation and regulation in the world of AI?
AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.
The tech bros might show more humility in Delhi – but will they make AI any safer?
Those who shout the loudest about artificial intelligence tend to be in the West, notably the US and Europe. So it's significant that a gathering of powerful leaders is being held in the Global South, a region of the world that runs the risk of being left behind in the AI race. Tech bosses, politicians, scientists, academics and campaigners are meeting at the AI Impact Summit in India this week for top-level discussions about what the world should be doing to try to marshal the AI revolution in the right direction. At last year's AI Action Summit, as it was then known, an ugly power struggle broke out between some Western countries over who should be in charge.
- North America > United States (0.30)
- North America > Central America (0.15)
- Africa (0.06)
- (13 more...)
- Government > Regional Government (0.49)
- Media > Film (0.48)
- Leisure & Entertainment > Sports (0.42)
Governing the rise of interactive AI will require behavioral insights
AI is no longer just a translator or image recognizer. Today, we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. They build ongoing bonds with users. They change their behavior based on our habits. They don't just wait for commands; they suggest next steps.
MAGA's 'Manifest Destiny' Coalition Has Arrived
Warring factions of right-wing influencers and MAGA pundits can finally agree on something: American imperialism. For the past few months, some of the most influential figures in MAGA politics have been locked in bitter infighting. But with a new year comes new priorities, and the warring factions are reuniting around a shared cause: a new era of American "manifest destiny." Major players, from influencers to politicians, have been arguing over the Trump administration's plans on issues like H-1B visas, Jeffrey Epstein document dumps, AI regulation, Israel's war with Hamas, and even white nationalist Nick Fuentes. But in recent weeks, these feuds have faded into background noise as the US raided Venezuela, arresting president Nicolás Maduro, and, more recently, as President Donald Trump publicly toys with invading Greenland and destroying NATO as we know it.
- South America > Venezuela (1.00)
- North America > Greenland (0.31)
- Asia > Middle East > Israel (0.25)
- (12 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
The AI doomers feel undeterred
But they certainly wish people were still taking their warnings seriously. It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.
- North America > United States > Massachusetts (0.04)
- North America > United States > California (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- (2 more...)
Institutional AI Sovereignty Through Gateway Architecture: Implementation Report from Fontys ICT
To counter fragmented, high-risk adoption of commercial AI tools, we built and ran an institutional AI platform in a six-month, 300-user pilot, showing that a university of applied sciences can offer advanced AI with fair access, transparent risks, controlled costs, and alignment with European law. Commercial AI subscriptions create unequal access and compliance risks through opaque processing and non-EU hosting, yet banning them is neither realistic nor useful. Institutions need a way to provide powerful AI in a sovereign, accountable form. Our solution is a governed gateway platform with three layers: a ChatGPT-style frontend linked to institutional identity that makes model choice explicit; a gateway core enforcing policy, controlling access and budgets, and routing traffic to EU infrastructure by default; and a provider layer wrapping commercial and open-source models in institutional model cards that consolidate vendor documentation into one governance interface. The pilot ran reliably with no privacy incidents and strong adoption, enabling EU-default routing, managed spending, and transparent model choices. Only the gateway pattern combines model diversity and rapid innovation with institutional control. The central insight: AI is not a support function but a strategic capability, demanding dedicated leadership. Sustainable operation requires governance beyond traditional boundaries. We recommend establishing a formal AI Officer role combining technical literacy, governance authority, and educational responsibility. Without it, AI decisions stay ad-hoc and institutional exposure grows. With it, higher-education institutions can realistically operate their own multi-provider AI platform, provided they govern AI as seriously as they teach it.
- Europe > Netherlands (0.04)
- Europe > Germany (0.04)
- North America > United States > Virginia (0.04)
- (5 more...)
- Research Report (0.64)
- Instructional Material > Course Syllabus & Notes (0.46)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Energy (1.00)
- (2 more...)
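The three-layer gateway pattern described in the Fontys report can be sketched in miniature. The following Python sketch is illustrative only: the class names, the model names, the regions, and the flat per-call costs are invented for this example and are not the Fontys implementation. It shows the two policies the abstract names, EU-default routing and per-user budget enforcement, with explicit model choice as an override.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelCard:
    # Institutional model card: one governance interface wrapping a provider.
    name: str
    region: str           # hosting region, e.g. "eu" or "us"
    cost_per_call: float  # illustrative flat cost unit, not real pricing

@dataclass
class Gateway:
    # Gateway core: enforces policy, budgets, and EU-default routing.
    models: dict
    budgets: dict = field(default_factory=dict)  # user -> remaining allowance

    def route(self, user: str, requested: Optional[str] = None) -> ModelCard:
        # EU-default routing: unless the user makes an explicit model choice,
        # only EU-hosted providers are eligible.
        if requested is None:
            candidates = [m for m in self.models.values() if m.region == "eu"]
        else:
            candidates = [self.models[requested]]
        if not candidates:
            raise LookupError("no eligible model")
        card = min(candidates, key=lambda m: m.cost_per_call)
        # Budget enforcement: refuse the call once the allowance is spent.
        remaining = self.budgets.get(user, 0.0)
        if remaining < card.cost_per_call:
            raise PermissionError(f"budget exhausted for {user}")
        self.budgets[user] = remaining - card.cost_per_call
        return card

# Usage: with no explicit choice, routing defaults to the EU-hosted model
# and the user's remaining budget is decremented.
gw = Gateway(
    models={
        "eu-model": ModelCard("eu-model", "eu", 0.01),
        "us-model": ModelCard("us-model", "us", 0.02),
    },
    budgets={"student42": 0.05},
)
picked = gw.route("student42")
```

In a real deployment the provider layer would forward the call to the selected vendor; here the sketch stops at the routing decision, which is where the governance logic lives.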
The Gender Code: Gendering the Global Governance of Artificial Intelligence
This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet some critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. In response, this paper argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and preventing reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Michigan (0.04)
- (4 more...)
- Research Report (1.00)
- Overview (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Education (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Architectures for Building Agentic AI
This chapter argues that the reliability of agentic and generative AI is chiefly an architectural property. We define agentic systems as goal-directed, tool-using decision makers operating in closed loops, and show how reliability emerges from principled componentisation (goal manager, planner, tool-router, executor, memory, verifiers, safety monitor, telemetry), disciplined interfaces (schema-constrained, validated, least-privilege tool calls), and explicit control and assurance loops. Building on classical foundations, we propose a practical taxonomy (tool-using agents, memory-augmented agents, planning and self-improvement agents, multi-agent systems, and embodied or web agents) and analyse how each pattern reshapes the reliability envelope and failure modes. We distil design guidance on typed schemas, idempotency, permissioning, transactional semantics, memory provenance and hygiene, runtime governance (budgets, termination conditions), and simulate-before-actuate safeguards.
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Consumer Health (0.48)
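The control-loop ingredients the chapter names, schema-constrained tool calls, runtime budgets, explicit termination conditions, and a guard for side-effecting tools, can be illustrated with a toy executor. This is a minimal Python sketch under invented names: the `TOOLS` registry, `validate` helper, and `run_agent` loop are hypothetical illustrations, not the chapter's API.

```python
# Hypothetical tool registry: each tool declares a typed parameter schema
# and whether actuating it has side effects.
TOOLS = {
    "add": {
        "schema": {"a": int, "b": int},  # typed schema for validation
        "fn": lambda a, b: a + b,
        "side_effects": False,
    },
}

def validate(tool: str, args: dict) -> dict:
    # Interface discipline: reject calls that don't match the tool's schema,
    # so malformed plans fail at the boundary rather than mid-execution.
    schema = TOOLS[tool]["schema"]
    if set(args) != set(schema):
        raise ValueError(f"unexpected arguments for {tool}")
    for key, typ in schema.items():
        if not isinstance(args[key], typ):
            raise TypeError(f"{key} must be {typ.__name__}")
    return args

def run_agent(goal, plan, budget=5):
    # Closed-loop executor with runtime governance: a hard step budget
    # and an explicit termination condition checked after every step.
    trace = []
    for tool, args in plan:
        if budget <= 0:  # budget exhausted: stop rather than run unbounded
            break
        args = validate(tool, args)
        spec = TOOLS[tool]
        if spec["side_effects"]:
            # Simulate-before-actuate: side-effecting tools would first run
            # in a sandbox; this toy registry provides none, so we refuse.
            raise RuntimeError(f"no sandbox available for {tool}")
        result = spec["fn"](**args)
        trace.append((tool, args, result))
        budget -= 1
        if goal(result):  # explicit termination condition
            return result, trace
    return None, trace

# Usage: one validated tool call reaches the goal; result == 5.
result, trace = run_agent(goal=lambda r: r == 5,
                          plan=[("add", {"a": 2, "b": 3})])
```

The telemetry `trace` is what the chapter's verifiers and safety monitor would consume; in this sketch it simply records each (tool, arguments, result) triple.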