Artificial Intelligence and Nuclear Weapons Proliferation: The Technological Arms Race for (In)visibility
Allison, David M., Herzog, Stephen
A robust nonproliferation regime has contained the spread of nuclear weapons to just nine states. Yet, emerging and disruptive technologies are reshaping the landscape of nuclear risks, presenting a critical juncture for decision makers. This article lays out the contours of an overlooked but intensifying technological arms race for nuclear (in)visibility, driven by the interplay between proliferation-enabling technologies (PETs) and detection-enhancing technologies (DETs). We argue that the strategic pattern of proliferation will be increasingly shaped by the innovation pace in these domains. Artificial intelligence (AI) introduces unprecedented complexity to this equation, as its rapid scaling and knowledge substitution capabilities accelerate PET development and challenge traditional monitoring and verification methods. To analyze this dynamic, we develop a formal model centered on a Relative Advantage Index (RAI), quantifying the shifting balance between PETs and DETs. Our model explores how asymmetric technological advancement, particularly logistic AI-driven PET growth versus stepwise DET improvements, expands the band of uncertainty surrounding proliferation detectability. Through replicable scenario-based simulations, we evaluate the impact of varying PET growth rates and DET investment strategies on cumulative nuclear breakout risk. We identify a strategic fork ahead, where detection may no longer suffice without broader PET governance. Governments and international organizations should accordingly invest in policies and tools agile enough to keep pace with tomorrow's technology.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > North Korea (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.14)
- (18 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Case-Based Reasoning (0.46)
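The formal model described in the abstract above (logistic, AI-driven PET growth versus stepwise DET improvements, with the Relative Advantage Index capturing their balance) could be sketched as a minimal simulation. All function shapes, parameter values, and the hazard formula below are illustrative assumptions, not the authors' actual specification.

```python
import math

def pet_capability(t, k=1.2, t_mid=5.0, cap=1.0):
    """Logistic (AI-driven) growth of proliferation-enabling technologies."""
    return cap / (1.0 + math.exp(-k * (t - t_mid)))

def det_capability(t, step_interval=3, step_size=0.25, cap=1.0):
    """Stepwise improvement of detection-enhancing technologies."""
    return min(cap, step_size * (int(t) // step_interval + 1))

def relative_advantage_index(t):
    """RAI > 0: proliferators outpace detectors; RAI < 0: detection leads."""
    return pet_capability(t) - det_capability(t)

def cumulative_breakout_risk(horizon=10, base_rate=0.05):
    """Per-period breakout hazard rises when the RAI is positive."""
    survival = 1.0
    for t in range(horizon):
        rai = relative_advantage_index(t)
        hazard = base_rate * (1.0 + max(0.0, rai))  # only a PET advantage adds risk
        survival *= (1.0 - hazard)
    return 1.0 - survival
```

Under these toy parameters, detection leads early (negative RAI) while logistic PET growth eventually overtakes the stepwise DET schedule, which is the widening "band of uncertainty" the abstract describes.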
Why has an AI-altered Bollywood movie sparked uproar in India?
New Delhi, India – What if Michael had died instead of Sonny in The Godfather? Or if Rose had shared the debris plank, and Jack hadn't been left to freeze in the Atlantic in Titanic? Eros International, one of India's largest production houses, with more than 4,000 films in its catalogue, has decided to explore this sort of what-if scenario. It has re-released one of its major hits, Raanjhanaa, a 2013 romantic drama, in cinemas – but has used artificial intelligence (AI) to change its tragic ending, in which the male lead dies. In the AI-altered version, Kundan (played by popular actor Dhanush), a Hindu man in a doomed romance with a Muslim woman, lives.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Responsible AI (RAI) Games and Ensembles
Several recent works have studied the societal effects of AI; these include issues such as fairness, robustness, and safety. In many of these objectives, a learner seeks to minimize its worst-case loss over a set of predefined distributions (known as uncertainty sets), with usual examples being perturbed versions of the empirical distribution. In other words, the aforementioned problems can be written as min-max problems over these uncertainty sets. In this work, we provide a general framework for studying these problems, which we refer to as Responsible AI (RAI) games. We provide two classes of algorithms for solving these games: (a) game-play based algorithms, and (b) greedy stagewise estimation algorithms.
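The min-max structure described in the abstract above can be sketched with a toy game-play loop: an adversary picks the worst-case distribution from a small uncertainty set of perturbed datasets, and the learner takes a gradient step against it. The linear model, squared loss, and perturbation scales are stand-in assumptions, not the paper's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertainty set: a few perturbed versions of an empirical distribution.
base_X = rng.normal(size=(50, 2))
base_y = (base_X @ np.array([1.0, -1.0]) > 0).astype(float)
uncertainty_set = [
    (base_X + rng.normal(scale=s, size=base_X.shape), base_y)
    for s in (0.0, 0.3, 0.6)
]

def loss(w, X, y):
    """Mean squared error of a linear predictor (stand-in for any convex loss)."""
    return float(np.mean((X @ w - y) ** 2))

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Game-play: adversary best-responds with the worst distribution,
# then the learner takes a gradient step against it.
w = np.zeros(2)
for _ in range(200):
    X_worst, y_worst = max(uncertainty_set, key=lambda d: loss(w, *d))
    w -= 0.05 * grad(w, X_worst, y_worst)

worst_case_loss = max(loss(w, *d) for d in uncertainty_set)
```

The alternating best-response structure is what makes such problems naturally "games": each player's update depends on the other's current strategy.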
RAI: Flexible Agent Framework for Embodied AI
Rachwał, Kajetan, Majek, Maciej, Boczek, Bartłomiej, Dąbrowski, Kacper, Liberadzki, Paweł, Dąbrowski, Adam, Ganzha, Maria
With an increase in the capabilities of generative language models, a growing interest in embodied AI has followed. This contribution introduces RAI - a framework for creating embodied Multi Agent Systems for robotics. The proposed framework implements tools for Agents' integration with robotic stacks, Large Language Models, and simulations. It provides out-of-the-box integration with state-of-the-art systems like ROS 2. It also comes with dedicated mechanisms for the embodiment of Agents. These mechanisms have been tested on a physical robot, Husarion ROSBot XL, which was coupled with its digital twin, for rapid prototyping. Furthermore, these mechanisms have been deployed in two simulations: (1) robot arm manipulator and (2) tractor controller. All of these deployments have been evaluated in terms of their control capabilities, effectiveness of embodiment, and perception ability. The proposed framework has been used successfully to build systems with multiple agents. It has demonstrated effectiveness in all the aforementioned tasks. It also enabled identifying and addressing the shortcomings of the generative models used for embodied AI.
- Europe > Poland > Masovia Province > Warsaw (0.05)
- Europe > Croatia > Primorje-Gorski Kotar County > Rijeka (0.04)
Framework, Standards, Applications and Best Practices of Responsible AI: A Comprehensive Survey
Gadekallu, Thippa Reddy, Dev, Kapal, Khowaja, Sunder Ali, Wang, Weizheng, Feng, Hailin, Fang, Kai, Pandya, Sharnil, Wang, Wei
Responsible Artificial Intelligence (RAI) combines the ethics of artificial intelligence use with common, standardized frameworks. This survey paper extensively discusses global and national standards, applications of RAI, current technology and ongoing projects using RAI, and possible challenges in implementing and designing RAI in AI-based industries and projects. Currently, ethical standards and RAI implementation are decoupled, leaving each industry to follow its own standards for using AI ethically. Many global firms and government organizations are taking initiatives to design a common, standard framework. Social pressure and unethical uses of AI are driving the design of RAI rather than its implementation.
- North America > United States (1.00)
- Africa (0.04)
- North America > Central America (0.04)
- (13 more...)
- Research Report (1.00)
- Overview (1.00)
Reviews: Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections
Here are my comments for the paper: - The B2N, RAI, and GGT abbreviations are never defined in the paper; they have just been cited from previous works (minor). A short background section on these methods could also include their full names. As far as I understand, the proposed method is B2N with B-RAI instead of RAI, which was originally proposed in [25]. This allows the model to sample multiple generative and discriminative structures and, as a result, create an ensemble of networks with possibly different structures and parameters. Perhaps a better way of structuring the paper would be to have a background section on B-RAI and B2N, and a separate section on BRAINet in which the distinction from other works and the contribution are clearly stated.
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
Cui, Wendi, Zhang, Jiaxin, Li, Zhuohang, Damien, Lopez, Das, Kamalika, Malin, Bradley, Kumar, Sricharan
Evaluating the quality and variability of text generated by Large Language Models (LLMs) poses a significant, yet unresolved, research challenge. Traditional evaluation methods, such as ROUGE and BERTScore, measure token similarity and often fail to capture holistic semantic equivalence. This results in low correlation with human judgments and intuition, which is especially problematic in high-stakes applications like healthcare and finance, where reliability, safety, and robust decision-making are critical. This work proposes DCR, an automated framework for evaluating and improving the consistency of LLM-generated texts using a divide-conquer-reasoning approach. Unlike existing LLM-based evaluators that operate at the paragraph level, our method employs a divide-and-conquer evaluator (DCE) that breaks down the paragraph-to-paragraph comparison between two generated responses into individual sentence-to-paragraph comparisons, each evaluated against predefined criteria. To facilitate this approach, we introduce an automatic metric converter (AMC) that translates the output of DCE into an interpretable numeric score. Beyond consistency evaluation, we further present a reason-assisted improver (RAI) that leverages the analytical reasons and explanations identified by DCE to generate new responses aimed at reducing these inconsistencies. Through comprehensive and systematic empirical analysis, we show that our approach outperforms state-of-the-art methods by a large margin (e.g., +19.3% and +24.3% on the SummEval dataset) in evaluating the consistency of LLM generation across multiple benchmarks covering semantic, factual, and summarization consistency tasks. Our approach also resolves nearly 90% of output inconsistencies, showing promise for effective hallucination mitigation.
- North America > The Bahamas (0.14)
- Asia > Japan (0.14)
- Europe > United Kingdom > England (0.05)
- (6 more...)
- Leisure & Entertainment > Sports > Olympic Games (0.69)
- Government (0.68)
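The divide-conquer pipeline in the DCR abstract above can be sketched as follows. The paper's DCE uses an LLM judge for each sentence-to-paragraph comparison; here a simple word-overlap heuristic stands in for that judge so the pipeline structure (divide into sentences, judge each, convert verdicts to a score) is visible. The `threshold` and overlap rule are illustrative assumptions, not the paper's criteria.

```python
import re

def split_sentences(paragraph):
    """Divide step: break a response into sentences."""
    return [s for s in re.split(r'(?<=[.!?])\s+', paragraph.strip()) if s]

def sentence_consistent(sentence, reference, threshold=0.5):
    """Conquer step: judge one sentence against the whole reference paragraph.
    A word-overlap heuristic stands in for the paper's LLM-based judge."""
    sent_words = set(re.findall(r'\w+', sentence.lower()))
    ref_words = set(re.findall(r'\w+', reference.lower()))
    if not sent_words:
        return True
    return len(sent_words & ref_words) / len(sent_words) >= threshold

def dcr_score(candidate, reference):
    """AMC-style step: convert per-sentence verdicts into one score in [0, 1]."""
    sentences = split_sentences(candidate)
    if not sentences:
        return 1.0
    verdicts = [sentence_consistent(s, reference) for s in sentences]
    return sum(verdicts) / len(verdicts)
```

Sentence-level decomposition is the key design choice: a single unsupported sentence lowers the score even when the rest of the paragraph matches, which paragraph-level judges tend to miss.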
Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence
Artificial intelligence (AI) has been clearly established as a technology with the potential to revolutionize fields from healthcare to finance - if developed and deployed responsibly. This is the topic of responsible AI, which emphasizes the need to develop trustworthy AI systems that minimize bias, protect privacy, support security, and enhance transparency and accountability. Explainable AI (XAI) has been broadly considered as a building block for responsible AI (RAI), with most of the literature considering it as a solution for improved transparency. This work proposes that XAI and responsible AI are significantly more deeply entwined. In this work, we explore state-of-the-art literature on RAI and XAI technologies. Based on our findings, we demonstrate that XAI can be utilized to ensure fairness, robustness, privacy, security, and transparency in a wide range of contexts. Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
- Asia > Middle East > Saudi Arabia (0.14)
- Europe > United Kingdom (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- (24 more...)
- Transportation (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (14 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.68)
White House Blueprint is the Starting Point for Building Responsible AI - Nextgov
Late last year, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, instantly elevating the topic of responsible AI to the top of leadership agendas across executive branch agencies. While the themes of the blueprint are not entirely new, building on prior work including the AI in Government Act of 2020, a December 2020 executive order on trustworthy AI, and the Federal Privacy Council's Fair Information Practice Principles, the report brings new urgency to ongoing agency efforts to leverage data in ways consistent with our democratic ideals. With a stated goal of supporting "the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems," the blueprint is rooted in five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. The blueprint also includes notes on applying the principles and a technical companion to support operationalization. Some agencies that are less mature in their data capabilities might consider the blueprint to be of limited relevance.
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)