LLM Cyber Evaluations Don't Capture Real-World Risk
Lukošiūtė, Kamilė, Swanda, Adam
Large language models (LLMs) are demonstrating increasing prowess in cybersecurity applications, creating inherent risks alongside their potential for strengthening defenses. In this position paper, we argue that current efforts to evaluate risks posed by these capabilities are misaligned with the goal of understanding real-world impact. Evaluating LLM cybersecurity risk requires more than just measuring model capabilities -- it demands a comprehensive risk assessment that incorporates analysis of threat actor adoption behavior and potential for impact. We propose a risk assessment framework for LLM cyber capabilities and apply it to a case study of language models used as cybersecurity assistants. Our evaluation of frontier models reveals high compliance rates but moderate accuracy on realistic cyber assistance tasks. However, our framework suggests that this particular use case presents only moderate risk due to limited operational advantages and impact potential. Based on these findings, we recommend several improvements to align research priorities with real-world impact assessment, including closer academia-industry collaboration, more realistic modeling of attacker behavior, and inclusion of economic metrics in evaluations. This work represents an important step toward more effective assessment and mitigation of LLM-enabled cybersecurity risks.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > Cambodia (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- Africa > Nigeria (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.96)
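The abstract above argues that real-world risk depends on threat-actor adoption and impact potential, not capability scores alone. A minimal sketch of that multi-factor idea is below; the paper does not specify its framework's formula, so the factor names, weights, and the multiplicative combination are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a multi-factor risk assessment in the spirit of the
# abstract: capability alone is not risk. All names and numbers are illustrative.

from dataclasses import dataclass


@dataclass
class CyberUseCase:
    capability: float  # measured model performance on the task, in [0, 1]
    adoption: float    # estimated likelihood of threat-actor adoption, in [0, 1]
    impact: float      # potential real-world impact if adopted, in [0, 1]


def risk_score(case: CyberUseCase) -> float:
    """Combine the three factors multiplicatively: a highly capable model
    contributes little real-world risk if adoption or impact potential is low."""
    return case.capability * case.adoption * case.impact


# A capable assistant with limited operational advantage and impact potential
# ends up with a low-to-moderate score despite high raw capability.
assistant = CyberUseCase(capability=0.7, adoption=0.4, impact=0.3)
print(round(risk_score(assistant), 3))  # → 0.084
```

The multiplicative form is one design choice among several; an additive weighted sum would instead let a single very high factor dominate even when the others are near zero.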
Love, Joy, and Autism Robots: A Metareview and Provocatype
Hundt, Andrew, Ohlson, Gabrielle, Wolfert, Pieter, Miranda, Lux, Zhu, Sophia, Winkle, Katie
Previous work has observed how Neurodivergence is often harmfully pathologized in Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI) research. We conduct a review of autism robot reviews and find the dominant research direction is Autistic people's second to lowest (24 of 25) research priority: interventions and treatments purporting to 'help' neurodivergent individuals to conform to neurotypical social norms, become better behaved, improve social and emotional skills, and otherwise 'fix' us -- rarely prioritizing the internal experiences that might lead to such differences. Furthermore, a growing body of evidence indicates many of the most popular current approaches risk inflicting lasting trauma and damage on Autistic people. We draw on the principles and findings of the latest Autism research, Feminist HRI, and Robotics to imagine a role reversal, analyze the implications, then conclude with actionable guidance on Autistic-led scientific methods and research directions.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > Colorado > Boulder County > Boulder (0.05)
- Asia > Japan (0.04)
- (15 more...)
CITRIS Core Seed Funding - CITRIS and the Banatao Institute
The CITRIS Seed Funding program issues short-term, targeted awards to further the institute's research priorities for societal benefit, catalyze early results that can lead to significant funding and strengthen connections across UC campuses. Proposals are invited from principal investigators at UC Berkeley, UC Davis, UC Davis Health, UC Merced and UC Santa Cruz. Awardees embody the university's public mission and the innovative spirit of California. This year, up to 12 CITRIS Seed Awards will be chosen to address "grand challenges" in information technology. Each winning proposal receives $40,000–$60,000 and engagement with the CITRIS research community during the Jan. 1–Dec.
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society
Prunkl, Carina, Whittlestone, Jess
One way of carving up the broad "AI ethics and society" research space that has emerged in recent years is to distinguish between "near-term" and "long-term" research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York > Monroe County > Rochester (0.04)
- (2 more...)
Report sets research priorities for Biden's cancer moonshot
A new report outlines a scientific roadmap for the White House's cancer "moonshot" initiative -- urging research to harness the power of immune-based therapy, and to better tailor treatment by helping more patients get their tumors genetically profiled. Those are among a list of recommendations issued Wednesday by a panel of cancer experts and patient advocates advising the moonshot project on ways to speed progress against the nation's No. 2 killer. Also on the list: Learning what drives childhood cancer, finding ways to minimize the side effects of treatment, and making better use of some proven anti-cancer strategies. For example, about 3 percent of colorectal cancers are fueled by certain inherited genetic mutations -- and the report proposes a pilot project to test all newly diagnosed patients so the relatives of those who harbor the defects could learn if they, too, are at risk. The recommendations mark "a bold but feasible scientific proposal," said Dr. Doug Lowy, acting director of the National Cancer Institute, who will send the panel's report to Vice President Joe Biden's cancer moonshot task force.
Supermorality: The Babel Singularity
"What is the ape to a man? And so shall man be to the Übermensch." Years ago, as a teenager in France, I used to visit a family friend, Monsieur de la Place. He lived with his wife in a high-rise in a small town outside of Paris. An old man, one side of his body had been paralyzed by a stroke. He was very educated and profound, and became a mentor of sorts to me for a time.
Letter to the Editor: Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
Russell, Stuart (University of California, Berkeley) | Dietterich, Tom (Oregon State University) | Horvitz, Eric (Microsoft) | Selman, Bart (Cornell University) | Rossi, Francesca (University of Padova) | Hassabis, Demis (DeepMind) | Legg, Shane (DeepMind) | Suleyman, Mustafa (DeepMind) | George, Dileep (Vicarious) | Phoenix, Scott (Vicarious)
Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents — systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality — colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. 
Such considerations motivated the AAAI 2008–09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document [see page X] gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself. In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
- North America > Canada > Ontario > Toronto (0.15)
- Oceania > Australia > New South Wales (0.05)
- North America > United States > Oregon (0.05)
- (8 more...)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)