The Download: nominate an Innovator Under 35, and AI policy

MIT Technology Review

Every year, MIT Technology Review recognizes 35 young innovators who are doing pioneering work across a range of technical fields including biotechnology, materials science, artificial intelligence, computing, and more. Previous winners include Lisa Su, now CEO of AMD, Andrew Ng, a computer scientist and serial entrepreneur, Jack Dorsey (two years after he launched Twitter), and Helen Greiner, co-founder of iRobot. We're now taking nominations for our 2025 list, and you can submit one here. The process takes just a few minutes. Nominations will close at 11:59 PM ET on January 20, 2025.


How US AI policy might change under Trump

MIT Technology Review

Prabhakar was a key player in passing the president's executive order on AI in 2023, which sets rules for tech companies to make AI safer and more transparent (though it relies on voluntary participation). Before serving in President Biden's cabinet, she held a number of government roles, from rallying for domestic production of semiconductors to heading up DARPA, the Pentagon's famed research department. I had a chance to sit down with Prabhakar earlier this month. We discussed AI risks, immigration policies, the CHIPS Act, the public's faith in science, and how it all may change under Trump. The change of administrations comes at a chaotic time for AI.


Hacking CTFs with Plain Agents

Turtayev, Rustem, Petrov, Artem, Volkov, Dmitrii, Volk, Denis

arXiv.org Artificial Intelligence

Cybersecurity is one of the key AI risk areas (OpenAI 2024b; The White House 2023; UK Government 2023): advanced LLMs could hack real-world systems at speeds far exceeding human capabilities (OpenAI 2024a). To quantify AI cyber capabilities, researchers use benchmarks, with InterCode-CTF (Yang, Prabhakar, Narasimhan, et al. 2023) among the most popular. InterCode-CTF adapts traditional Capture The Flag competitions to assess LLM hacking skills. Previously, Phuong et al. 2024 showed low performance on this benchmark and suggested low cyber exploitation capabilities. A recent follow-up by Abramovich et al. 2024 claimed state-of-the-art results (72%) due to a particular novel harness design choice.


The Download: words of wisdom from the departing White House tech advisor, and controversial AI manga translation

MIT Technology Review

President Biden's administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president's executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation). As she prepares for the end of the administration, MIT Technology Review sat down with Prabhakar and asked her to reflect on President Biden's AI accomplishments, and how the approach to AI risks, immigration policies, the CHIPS Act and more could change under Trump.


This manga publisher is using Anthropic's AI to translate Japanese comics into English

A Japanese publishing startup is using Anthropic's flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the 2-3 months it would take a team of humans.


Meet the Woman Who Showed President Biden ChatGPT--and Helped Set the Course for AI

WIRED

Six months later, the president issued a sweeping executive order that set a regulatory course for AI. This all happened because ChatGPT had stunned the world. In an instant it became very, very obvious that the United States needed to speed up its efforts to regulate the AI industry--and adopt policies to take advantage of it. While the potential benefits were unlimited (Social Security customer service that works!), so were the potential downsides, like floods of disinformation or even, in the view of some, human extinction. Someone had to demonstrate that to the president.


The White House's 'AI Cyber Challenge' aims to crowdsource national security solutions

Engadget

Our local and state government systems are hacked and held for ransom with disheartening regularity. At the Black Hat USA Conference in Las Vegas on Wednesday, the Biden Administration revealed its plans to better defend the nation's critical digital infrastructure: It's launching a DARPA-led challenge competition to build AI systems capable of proactively identifying and fixing software vulnerabilities. The "AI Cyber Challenge" (AIxCC) is a two-year development program open to competitors throughout the US. It's being hosted by DARPA in collaboration with Anthropic, Google, Microsoft and OpenAI. Those companies are providing both their expertise in the field and access to their AI technologies.


Joe Biden Wants Hackers' Help to Keep AI Chatbots In Check

WIRED

ChatGPT has stoked new hopes about the potential of artificial intelligence--but also new fears. Today the White House joined the chorus of concern, announcing it will support a mass hacking exercise at the Defcon security conference this summer to probe generative AI systems from companies including Google. The White House Office of Science and Technology Policy also said that $140 million will be diverted toward launching seven new National AI Research Institutes focused on developing ethical, transformative AI for the public good, bringing the total number to 25 nationwide. The announcement came hours before a meeting on the opportunities and risks presented by AI between Vice President Kamala Harris and executives from Google and Microsoft as well as the startups Anthropic and OpenAI, which created ChatGPT. The White House AI intervention comes as appetite for regulating the technology is growing around the world, fueled by the hype and investment sparked by ChatGPT.


Abstraction-based Probabilistic Stability Analysis of Polyhedral Probabilistic Hybrid Systems

Das, Spandan, Prabhakar, Pavithra

arXiv.org Artificial Intelligence

In this paper, we consider the problem of probabilistic stability analysis of a subclass of Stochastic Hybrid Systems, namely, Polyhedral Probabilistic Hybrid Systems (PPHS), where the flow dynamics are given by a polyhedral inclusion, the discrete switching between modes happens probabilistically at the boundaries of their invariant regions, and the continuous state is not reset during switching. We present an abstraction-based analysis framework that consists of constructing a finite Markov Decision Process (MDP) such that verification of a certain property on the finite MDP ensures the satisfaction of probabilistic stability on the PPHS. Further, we present a polynomial-time algorithm for verifying the corresponding property on the MDP. Our experimental analysis demonstrates the feasibility of the approach in successfully verifying probabilistic stability on PPHS of various dimensions and sizes.
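The abstraction step reduces a continuous-state stability question to verifying a quantitative property on a finite MDP. As a minimal sketch of that style of analysis (the three-state MDP, its transition probabilities, and the value-iteration routine below are illustrative inventions, not the paper's actual abstraction or algorithm), here is how one might compute the maximum probability of reaching an "unsafe" abstract state:

```python
# Toy example (hypothetical, not the paper's construction): value iteration
# for maximum reachability probability on a small finite MDP.

def max_reach_prob(states, actions, trans, unsafe, iters=1000):
    """trans[(s, a)] is a list of (next_state, probability) pairs.
    Returns a dict mapping each state to the maximum probability,
    over all action choices, of eventually reaching an unsafe state."""
    v = {s: (1.0 if s in unsafe else 0.0) for s in states}
    for _ in range(iters):
        new_v = {}
        for s in states:
            if s in unsafe:
                new_v[s] = 1.0  # unsafe states are absorbing for reachability
                continue
            best = 0.0
            for a in actions.get(s, []):
                # Expected value of the successor distribution under action a
                val = sum(p * v[t] for t, p in trans[(s, a)])
                best = max(best, val)
            new_v[s] = best
        v = new_v
    return v

# Hypothetical 3-state abstraction: from s0 a "switch" may reach the
# unsafe region s1 with probability 0.3, while "stay" remains safe in s2.
states = ["s0", "s1", "s2"]
actions = {"s0": ["switch", "stay"], "s1": [], "s2": []}
trans = {
    ("s0", "switch"): [("s1", 0.3), ("s2", 0.7)],
    ("s0", "stay"): [("s2", 1.0)],
}
probs = max_reach_prob(states, actions, trans, unsafe={"s1"})
print(probs["s0"])  # 0.3: "switch" yields 0.3, "stay" yields 0.0
```

If the maximum reachability probability of the unsafe region is below a threshold, the corresponding stability guarantee carries over to the concrete system; exact tools typically solve this with linear programming rather than iteration, which is one way such a check stays polynomial-time.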


NSF funds K-State research on artificial intelligence-based cyber-physical systems

#artificialintelligence

MANHATTAN, Kan. (WIBW) - The National Science Foundation is funding a K-State professor's research on artificial intelligence-based cyber-physical systems. Kansas State University says Pavithra Prabhakar, an associate professor and Peggy and Gary Edwards chair in engineering in its computer science department, has been awarded $450,000 from the National Science Foundation to work on artificial intelligence-based controllers in the three-year project titled "Scalable Formal Verification of ANN Controlled Cyber-Physical Systems." According to K-State, artificial intelligence-based controllers are increasingly used for modern-day cyber-physical and autonomous systems such as driverless cars. These systems are called on to perform sophisticated functions and operate in dynamic environments. The use of such controllers in driverless cars is highly safety-critical: the vehicle is expected to not only stay in the right lane but also avoid accidents with other cars and pedestrians crossing roadways under different lighting conditions.


When Robots Can Decide Whether You Live or Die

#artificialintelligence

Computers have gotten pretty good at making certain decisions for themselves. Automatic spam filters block most unwanted email. But can a machine ever be trusted to decide whether to kill a human being? It's a question taken up by the eighth episode of the Sleepwalkers podcast, which examines the AI revolution. Recent, rapid growth in the power of AI technology is causing some military experts to worry about a new generation of lethal weapons capable of independent and often opaque actions.