Governance


Rebuilding the data stack for AI

MIT Technology Review

Enterprise AI hinges on high-accuracy outputs, requiring better data context, unified architectures, and rigorous measurement frameworks, say Bavesh Patel, senior vice president at Databricks, and Rajan Padmanabhan, unit technology officer at Infosys. Artificial intelligence may be dominating boardroom agendas, but many enterprises are discovering that the biggest obstacle to meaningful adoption is the state of their data. While consumer-facing AI tools have dazzled users with speed and ease, enterprise leaders are finding that deploying AI at scale requires something far less glamorous but far more consequential: data infrastructure that is unified, governed, and fit for purpose. That gap between AI ambition and enterprise readiness is becoming one of the defining challenges of this next phase of digital transformation. As Bavesh Patel, senior vice president at Databricks, puts it, "the quality of that AI and how effective that AI is, is really dependent on information in your ...


What was Doge? How Elon Musk tried to gamify government

The Guardian

In 2025, when Elon Musk joined the government as the de facto head of something called the "department of government efficiency", he declared that governments were poorly configured "big dumb machines". To the senator Ted Cruz, he explained that "the only way to reconcile the databases and get rid of waste and fraud is to actually look at the computers". Muskism came to Washington soaked in memes, adolescent boasts and sadistic victory dances over mass firings. Leading a team of teenage coders and mid-level managers drawn from his suite of companies, Musk aimed to enter the codebase and rewrite regulations and budget lines from within. He would drag the paper-pushing bureaucracy kicking and screaming into the digital 21st century, scanning the contents of cavernous rooms of filing cabinets and feeding the data into a single interoperable system. The undertaking combined features of private equity-led restructuring with startup management, shot through with the sensibility of gaming and rightwing culture war. To succeed, he would need "God mode", an overview of the whole. If the mandate of Doge was to "[modernise] federal technology and software to maximise governmental efficiency and productivity", in the words of the executive order that launched the initiative on 20 January 2025, the reality was a strengthening of the state's surveillance capacities. Over time, Musk had become convinced that the real bugs in the code were people, especially the non-white illegal immigrants whom he saw as pawns in a liberal scheme to corrupt democracy and beneficiaries of what he called "suicidal empathy". He understood empathy itself in coding terms.


Nurturing agentic AI beyond the toddler stage

MIT Technology Review

The promise of autonomous agentic AI requires significant changes in the governance landscape. Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child's first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, demands a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub.


RWDS Big Questions: how do we balance innovation and regulation in the world of AI?

AIHub

AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.


Copycats

Neural Information Processing Systems

In the past, MI datasets were frequently proprietary, confined to particular institutions, and stored in private repositories. In this setting, there is a pressing need for alternative models of data sharing, documentation, and governance. Within this context, the emergence of Community-Contributed Platforms (CCPs) presented a potential avenue for the public sharing of medical datasets.


The tech bros might show more humility in Delhi – but will they make AI any safer?

BBC News

Those who shout the loudest about artificial intelligence tend to be in the West, notably the US and Europe. So it's significant that a gathering of powerful leaders is being held in the Global South, a region of the world that runs the risk of being left behind in the AI race. Tech bosses, politicians, scientists, academics and campaigners are meeting at the AI Impact Summit in India this week for top-level discussions about what the world should be doing to try to marshal the AI revolution in the right direction. At last year's AI Action Summit, as it was then known, an ugly power struggle broke out between some Western countries over who should be in charge.


Governing the rise of interactive AI will require behavioral insights

AIHub

AI is no longer just a translator or image recognizer. Today, we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. They build ongoing bonds with users. They change their behavior based on our habits. They don't just wait for commands; they suggest next steps.


From guardrails to governance: A CEO's guide for securing agentic systems

MIT Technology Review

A practical blueprint for companies and CEOs that shows how to secure agentic systems by shifting from prompt tinkering to hard controls on identity, tools, and data. The previous article in this series, "Rules fail at the prompt, succeed at the boundary," focused on the first AI-orchestrated espionage campaign and the failure of prompt-level control. This article is the prescription. Across recent AI security guidance from standards bodies, regulators, and major providers, a simple idea keeps repeating: treat agents like powerful, semi-autonomous users, and enforce rules at the boundaries where they touch identity, tools, data, and outputs. These steps help define identity and limit capabilities. Today, agents run under vague, over-privileged service identities.


Rules fail at the prompt, succeed at the boundary

MIT Technology Review

From the Gemini Calendar prompt-injection attack of 2026 to the September 2025 state-sponsored hack using Anthropic's Claude Code as an automated intrusion engine, the coercion of human-in-the-loop agentic actions and fully autonomous agentic workflows is the new attack vector for hackers. In the Anthropic case, roughly 30 organizations across tech, finance, manufacturing, and government were affected. Anthropic's threat team assessed that the attackers used AI to carry out 80% to 90% of the operation: reconnaissance, exploit development, credential harvesting, lateral movement, and data exfiltration, with humans stepping in only at a handful of key decision points. This was not a lab demo; it was a live espionage campaign. The attackers hijacked an agentic setup (Claude Code plus tools exposed via Model Context Protocol (MCP)) and jailbroke it by decomposing the attack into small, seemingly benign tasks and telling the model it was doing legitimate penetration testing. The same loop that powers developer copilots and internal agents was repurposed as an autonomous cyber-operator.


MAGA's 'Manifest Destiny' Coalition Has Arrived

WIRED

Warring factions of right-wing influencers and MAGA pundits can finally agree on something: American imperialism. For the past few months, some of the most influential figures in MAGA politics have been locked in bitter infighting. But with a new year comes new priorities, and the warring factions are reuniting around a new cause: a new era of American "manifest destiny." Major players, from influencers to politicians, have been arguing over the Trump administration's plans on issues like H-1B visas, Jeffrey Epstein document dumps, AI regulation, Israel's war with Hamas, and even white nationalist Nick Fuentes. But in recent weeks, these feuds have faded into background noise as the US raided Venezuela, arresting president Nicolás Maduro, and, more recently, as President Donald Trump publicly toys with invading Greenland and destroying NATO as we know it.