
Collaborating Authors: Horvitz


Microsoft has a new plan to prove what's real and what's AI online

MIT Technology Review

A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn't committed to following its own recommendations. AI-manipulated content now pervades the internet. There are high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times it slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting. It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what's real online. An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today's most worrying AI developments, such as interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that AI companies and social media platforms can adopt.
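
The article doesn't spell out the mechanics, but provenance standards of the kind Microsoft's team evaluated (for example, C2PA-style content credentials) rest on a simple primitive: the publisher signs a cryptographic digest of the media at creation time, and any later edit breaks the signature. A minimal sketch in Python, with the function names and workflow invented for illustration:

```python
# Sketch of provenance-based media verification (content-credential style).
# Function names and the workflow are illustrative, not Microsoft's spec.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_media(media: bytes, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Publisher signs a SHA-256 digest of the media at creation time."""
    return key.sign(hashlib.sha256(media).digest())

def verify_media(media: bytes, signature: bytes,
                 pub: ed25519.Ed25519PublicKey) -> bool:
    """Platform recomputes the digest and checks the signature.
    Any edit to the media changes the digest and invalidates it."""
    try:
        pub.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
sig = sign_media(photo, key)
print(verify_media(photo, sig, key.public_key()))              # True
print(verify_media(photo + b"tamper", sig, key.public_key()))  # False
```

Real content-credential systems sign a manifest (capture metadata and edit history) under a certificate chain rather than raw bytes alone, but the tamper-evidence principle is the same.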


Should we worry AI will create deadly bioweapons? Not yet, but one day

New Scientist

Artificial intelligence promises to transform biology, allowing us to design better drugs, vaccines and even synthetic organisms for, say, eating waste plastic. But some fear it could also be used for darker purposes: to create bioweapons that wouldn't be detected by conventional methods until it was too late. So, how worried should we be? "AI advances are fuelling breakthroughs in biology and medicine," says Eric Horvitz, chief scientific officer at Microsoft. "With new power comes responsibility for vigilance." His team has published a study examining whether AI could design proteins that behave like known dangerous proteins but differ enough in sequence that they wouldn't be recognised as dangerous.
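
The screening problem the study probes can be pictured with a toy similarity check: tools that flag sequences resembling known hazards will miss a redesigned protein that preserves function while diverging in sequence. A deliberately simplified sketch (the sequences, the k-mer method, and the threshold are all invented for the example):

```python
# Toy k-mer similarity screen, a stand-in for sequence biosecurity checks.
# The sequences and the 0.5 threshold are invented for illustration only.
def kmers(seq: str, k: int = 3) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity between the k-mer sets of two sequences."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

HAZARD = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # stand-in for a flagged protein
THRESHOLD = 0.5

def screen(candidate: str) -> bool:
    """Flag the candidate if it looks too similar to the known hazard."""
    return similarity(candidate, HAZARD) >= THRESHOLD

# A lightly mutated variant keeps most k-mers and gets flagged...
print(screen("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"))  # likely True
# ...while a redesign with the same function but a divergent sequence
# can slip under the threshold.
print(screen("MRSGYLARELQLTWVRAHWTKELDDRIGLVDAE"))  # likely False
```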


AI Horizon Scanning, White Paper p3395, IEEE-SA. Part I: Areas of Attention

Cortês, Marina, Liddle, Andrew R., Emmanouilidis, Christos, Kelly, Anthony E., Matusow, Ken, Ragunathan, Ragu, Suess, Jayne M., Tambouratzis, George, Zalewski, Janusz, Bray, David A.

arXiv.org Artificial Intelligence

Generative Artificial Intelligence (AI) models may carry societal transformation to an extent demanding a delicate balance between opportunity and risk. This manuscript is the first of a series of White Papers informing the development of IEEE-SA's p3395: 'Standard for the Implementation of Safeguards, Controls, and Preventive Techniques for Artificial Intelligence (AI) Models', Chair: Marina Cortês (https://standards.ieee.org/ieee/3395/11378/). In this first horizon-scanning exercise we identify key areas of attention for standards activities in AI. We examine different principles for regulatory efforts and review notions of accountability, privacy, data rights and misuse. As a safeguards standard, we devote significant attention to the stability of global infrastructures and consider the overdependence on cloud computing that may result from densely coupled AI components. We review the cascade-failure-like CrowdStrike incident of July 2024 as an illustration of the potential impact of AI-induced incidents on critical infrastructures in the near future. Upcoming articles in the series will focus on regulatory initiatives, technology evolution and the role of AI in specific domains.
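
The infrastructure concern is easy to make concrete. In a densely coupled stack, one failed shared component can disable everything that transitively depends on it, which is the shape of the July 2024 CrowdStrike outage. A toy propagation model, with an invented dependency graph:

```python
# Toy cascade-failure model over a service dependency graph.
# Nodes and edges are invented; real infrastructures are far denser.
from collections import deque

# service -> services that depend on it (a shared agent or cloud API
# near the root of many stacks is the dangerous case)
DEPENDENTS = {
    "endpoint-agent":  ["airline-ops", "hospital-ehr", "bank-teller"],
    "cloud-ml-api":    ["airline-ops", "news-cms", "fraud-detection"],
    "airline-ops":     ["booking-site"],
    "fraud-detection": ["bank-teller"],
}

def cascade(initial_failure: str) -> set[str]:
    """Breadth-first propagation: a service fails if a dependency fails."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        svc = queue.popleft()
        for dep in DEPENDENTS.get(svc, []):
            if dep not in failed:
                failed.add(dep)
                queue.append(dep)
    return failed

print(sorted(cascade("endpoint-agent")))
# ['airline-ops', 'bank-teller', 'booking-site', 'endpoint-agent', 'hospital-ehr']
```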


Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models

Horvitz, Zachary, Chen, Jingru, Aditya, Rahul, Srivastava, Harshvardhan, West, Robert, Yu, Zhou, McKeown, Kathleen

arXiv.org Artificial Intelligence

Humor is a fundamental facet of human cognition and interaction. Yet, despite recent advances in natural language processing, humor detection remains a challenging task that is complicated by the scarcity of datasets pairing humorous texts with similar non-humorous counterparts. In our work, we investigate whether large language models (LLMs) can generate synthetic data for humor detection by editing texts. We benchmark LLMs on an existing human dataset and show that current LLMs display an impressive ability to 'unfun' jokes, as judged by humans and as measured on the downstream task of humor detection. We extend our approach to a code-mixed English-Hindi humor dataset, where we find that GPT-4's synthetic data is highly rated by bilingual annotators and provides challenging adversarial examples for humor classifiers.
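
In outline, the data-generation recipe is: ask an LLM to minimally edit a joke so it stops being funny, then pair the original and the edit as positive and negative examples for a humor classifier. A minimal sketch; the `complete` callable and the prompt wording are our stand-ins, not the paper's:

```python
# Sketch of the 'unfun' pairing loop for humor-detection data.
# `complete` is a placeholder for any LLM chat-completion call;
# the prompt is illustrative, not the paper's.
from typing import Callable

UNFUN_PROMPT = (
    "Minimally edit the following joke so that it is no longer funny, "
    "changing as few words as possible:\n\n{joke}"
)

def build_pairs(jokes: list[str],
                complete: Callable[[str], str]) -> list[tuple[str, int]]:
    """Return labeled (text, label) examples: 1 = humorous, 0 = edited."""
    examples = []
    for joke in jokes:
        unfunny = complete(UNFUN_PROMPT.format(joke=joke))
        examples.append((joke, 1))
        examples.append((unfunny, 0))  # near-identical negative example
    return examples
```

Because each pair differs only in the humor-bearing edit, the negatives are far harder than unrelated non-humorous text, which is what makes them useful adversarial training data.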


A Computational Inflection for Scientific Discovery

Hope, Tom, Downey, Doug, Etzioni, Oren, Weld, Daniel S., Horvitz, Eric

arXiv.org Artificial Intelligence

We stand at the foot of a significant inflection in the trajectory of scientific discovery. As society continues on its fast-paced digital transformation, so does humankind's collective scientific knowledge and discourse. We now read and write papers in digitized form, and a great deal of the formal and informal processes of science are captured digitally -- including papers, preprints and books, code and datasets, conference presentations, and interactions in social networks and collaboration and communication platforms. The transition has led to the creation and growth of a tremendous amount of information -- much of which is available for public access -- opening exciting opportunities for computational models and systems that analyze and harness it. In parallel, exponential growth in data processing power has fueled remarkable advances in artificial intelligence, including large neural language models capable of learning powerful representations from unstructured text. Dramatic changes in scientific communication -- such as the advent of the first scientific journal in the 17th century -- have historically catalyzed revolutions in scientific thought. The confluence of societal and computational trends suggests that computer science is poised to ignite a revolution in the scientific process itself.


The Internet-Warping Power of 'Synthetic Histories'

The Atlantic - Technology

History has long been a theater of war, the past serving as a proxy in conflicts over the present. Ron DeSantis is warping history by banning books on racism from Florida's schools; people remain divided about the right approach to repatriating Indigenous objects and remains; the Pentagon Papers were an attempt to twist narratives about the Vietnam War. The Nazis seized power in part by manipulating the past: they used propaganda about the burning of the Reichstag, the German parliament building, to justify persecuting political rivals and assuming dictatorial authority. That specific example weighs on Eric Horvitz, Microsoft's chief scientific officer and a leading AI researcher, who tells me that the apparent AI revolution could not only provide a new weapon to propagandists, as social media did earlier this century, but entirely reshape the historiographic terrain, perhaps laying the groundwork for a modern-day Reichstag fire. Generative-AI tools are powerful and easy-to-use programs that produce synthetic text, images, video, and audio, all of which bad actors can use to fabricate events, people, speeches, and news reports to sow disinformation.


Microsoft researcher describes two new deepfake methods and their risks

#artificialintelligence

Eric Horvitz is a computer scientist and Microsoft's chief scientific officer. In the research paper "On the Horizon: Interactive and Compositional Deepfakes," he describes two new deepfake methods that he believes are technically possible in the future and "that we can expect to come into practice with costly implications for society." "Interactive deepfakes" is what Horvitz calls multimodal deepfake clones of real people that are indistinguishable from the real person during video calls, for example. Current deepfake systems are mostly limited to swapping faces, and even that offers only limited interactivity.


On the Horizon: Interactive and Compositional Deepfakes

Horvitz, Eric

arXiv.org Artificial Intelligence

Over a five-year period, computing methods for generating high-fidelity, fictional depictions of people and events moved from exotic demonstrations by computer science research teams into ongoing use as a tool of disinformation. The methods, referred to with the portmanteau of "deepfakes," have been used to create compelling audiovisual content. Here, I share challenges ahead with malevolent uses of two classes of deepfakes that we can expect to come into practice with costly implications for society: interactive and compositional deepfakes. Interactive deepfakes have the capability to impersonate people with realistic interactive behaviors, taking advantage of advances in multimodal interaction. Compositional deepfakes leverage synthetic content in larger disinformation plans that integrate sets of deepfakes over time with observed, expected, and engineered world events to create persuasive synthetic histories. Synthetic histories can be constructed manually but may one day be guided by adversarial generative explanation (AGE) techniques. In the absence of mitigations, interactive and compositional deepfakes threaten to move us closer to a post-epistemic world, where fact cannot be distinguished from fiction. I shall describe interactive and compositional deepfakes and reflect about cautions and potential mitigations to defend against them.


Congressional hearings focus on AI, machine learning challenges in cybersecurity

#artificialintelligence

Congressional hearings on artificial intelligence and machine learning in cyberspace quietly took place in the U.S. Senate Armed Services Committee's Subcommittee on Cyber in early May 2022. The committee discussed the topic with representatives from Google, Microsoft and the Center for Security and Emerging Technology at Georgetown University. While work has begun in earnest within industry and government, it is clear that much still needs to be done. The hearing chair, Senator Joe Manchin (D-WV), articulated the importance of AI and machine learning to the armed forces of the United States. Additionally, the committee highlighted the "shortfall of technically trained cybersecurity personnel across the country in government and industry alike."


Ideal Partition of Resources for Metareasoning

Horvitz, Eric, Breese, John

arXiv.org Artificial Intelligence

We can achieve significant gains in the value of computation by metareasoning about the nature or extent of base-level problem solving before executing a solution. However, resources that are irrevocably committed to metareasoning are not available for executing a solution. Thus, it is important to determine the portion of resources we wish to apply to metareasoning and control versus to the execution of a solution plan. Recent research on rational agency has highlighted the importance of limiting the consumption of resources by metareasoning machinery. We shall introduce the metareasoning-partition problem: the problem of ideally apportioning costly reasoning resources to planning a solution versus applying resources to executing it. We exercise prototypical metareasoning-partition models to probe the relationships between time allocated to metareasoning and to execution for different problem classes. Finally, we examine the value of metareasoning in the context of our functional analyses. This work was supported by a NASA Fellowship under Grant NCC-220-51, by the National Science Foundation under Grant IRI-8703710, and by the U.S. Army Research Office under Grant P-25514-EL. Computing facilities were provided by the SUMEX-AIM Resource under NLM Grant LM05208.
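
The partition problem has a compact numerical form: with total budget T, pick the metareasoning time t that maximizes the value of the plan found after deliberating for t and executed in the remaining T - t. A toy model with an invented diminishing-returns profile, not one of the paper's:

```python
# Toy metareasoning-partition model: split a fixed budget T between
# deliberation (metareasoning) and execution. The value curve below is
# an invented stand-in for a problem-class-specific profile.
import math

T = 10.0  # total resource budget

def plan_quality(t_meta: float) -> float:
    """Diminishing returns to deliberation (invented profile)."""
    return 1.0 - math.exp(-0.5 * t_meta)

def value(t_meta: float) -> float:
    """Value = quality of the selected plan x execution time left to run it."""
    return plan_quality(t_meta) * (T - t_meta)

# Grid search for the ideal partition.
best_v, best_t = max((value(0.01 * i), 0.01 * i) for i in range(1001))
print(f"optimal metareasoning time ~ {best_t:.2f}, value ~ {best_v:.2f}")
```

The maximum shifts with the curvature of the profile: steep early gains from deliberation push the ideal partition toward more metareasoning, while flat profiles favor committing resources to execution almost immediately, which is the dependence on problem class that the paper's models probe.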