

From Kicking to Causality: Simulating Infant Agency Detection with a Robust Intrinsic Reward

Xu, Xia, Triesch, Jochen

arXiv.org Artificial Intelligence

While human infants robustly discover their own causal efficacy, standard reinforcement learning agents remain brittle, as their reliance on correlation-based rewards fails in noisy, ecologically valid scenarios. To address this, we introduce the Causal Action Influence Score (CAIS), a novel intrinsic reward rooted in causal inference. CAIS quantifies an action's influence by measuring the 1-Wasserstein distance between the learned distribution of sensory outcomes conditional on that action, $p(h|a)$, and the baseline outcome distribution, $p(h)$. This divergence provides a robust reward that isolates the agent's causal impact from confounding environmental noise. We test our approach in a simulated infant-mobile environment where correlation-based perceptual rewards fail completely when the mobile is subjected to external forces. In stark contrast, CAIS enables the agent to filter this noise, identify its influence, and learn the correct policy. Furthermore, the high-quality predictive model learned for CAIS allows our agent, when augmented with a surprise signal, to successfully reproduce the "extinction burst" phenomenon. We conclude that explicitly inferring causality is a crucial mechanism for developing a robust sense of agency, offering a psychologically plausible framework for more adaptive autonomous systems.
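As a rough illustration of the idea (a sketch, not the authors' implementation), the CAIS reward can be computed as the empirical 1-Wasserstein distance between outcome samples observed after an action and baseline outcome samples; for equally sized 1-D samples this distance is the mean absolute difference of the sorted values. The distributions, sample sizes, and function names below are hypothetical:

```python
import random

def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equally sized 1-D samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def cais_reward(outcomes_given_action, baseline_outcomes):
    """CAIS-style reward: divergence of p(h|a) from the baseline p(h)."""
    return wasserstein_1d(outcomes_given_action, baseline_outcomes)

random.seed(0)
# Baseline mobile motion: pure environmental noise.
baseline = [random.gauss(0.0, 1.0) for _ in range(1000)]
# A kick shifts the outcome distribution; the noise is still present.
kick = [random.gauss(2.0, 1.0) for _ in range(1000)]
# An action with no causal effect leaves the distribution unchanged.
idle = [random.gauss(0.0, 1.0) for _ in range(1000)]

print(cais_reward(kick, baseline) > cais_reward(idle, baseline))  # causal action scores higher
```

Because only a shift of the conditional distribution relative to the baseline is rewarded, shared environmental noise cancels out, which is the property that makes the signal robust to external forces on the mobile.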


Modeling Resilience of Collaborative AI Systems

Rimawi, Diaeddin, Liotta, Antonio, Todescato, Marco, Russo, Barbara

arXiv.org Artificial Intelligence

A Collaborative Artificial Intelligence System (CAIS) performs actions in collaboration with a human to achieve a common goal. A CAIS can use a trained AI model to control human-system interaction, or it can learn dynamically from humans in an online fashion. In online learning with human feedback, the AI model evolves by monitoring human interaction through the system sensors in the learning state, and actuates the autonomous components of the CAIS based on that learning in the operational state. Therefore, any disruptive event affecting these sensors may affect the AI model's ability to make accurate decisions and degrade the CAIS performance. Consequently, it is of paramount importance for CAIS managers to be able to automatically track the system performance to understand the resilience of the CAIS upon such disruptive events. In this paper, we provide a new framework to model CAIS performance when the system experiences a disruptive event. With our framework, we introduce a model of the performance evolution of a CAIS. The model is equipped with a set of measures that aim to support CAIS managers in the decision process to achieve the required resilience of the system. We tested our framework on a real-world case study of a robot collaborating online with a human while the system experiences a disruptive event. The case study shows that our framework can be adopted in a CAIS and integrated into the online execution of the CAIS activities.
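The abstract does not give the paper's concrete measures, but performance-over-time resilience metrics of this kind are commonly summarized from a trace around the disruption. The following sketch, with hypothetical measure names and thresholds, illustrates the flavor of such a model:

```python
def resilience_measures(perf, nominal=1.0):
    """Hypothetical summary measures for a performance trace around a disruptive event."""
    # Depth of the performance drop caused by the disruption.
    drop = nominal - min(perf)
    # Resilience as the normalized area under the performance curve.
    area = sum(perf) / (nominal * len(perf))
    # Recovery time: steps from the minimum until performance returns near nominal.
    t_min = perf.index(min(perf))
    rec = next((i - t_min for i, p in enumerate(perf[t_min:], start=t_min)
                if p >= 0.95 * nominal), None)
    return {"max_drop": drop, "resilience": area, "recovery_steps": rec}

# Synthetic trace: nominal operation, disruptive event, gradual recovery.
trace = [1.0, 1.0, 0.4, 0.5, 0.7, 0.9, 1.0, 1.0]
print(resilience_measures(trace))
```

Measures like these could be computed online from the monitored sensor stream, giving a CAIS manager a running view of how severe a disruption was and how quickly the system recovered.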


Green Resilience of Cyber-Physical Systems

Rimawi, Diaeddin

arXiv.org Artificial Intelligence

A Cyber-Physical System (CPS) combines hardware and software components to perform real-time services. Maintaining the system's reliability is critical to the continuous delivery of these services. However, the CPS running environment is full of uncertainties that can easily lead to performance degradation. As a result, a recovery technique is needed to achieve resilience in the system, while keeping that technique as green as possible. This early doctoral proposal suggests a game-theoretic solution to achieve resilience and greenness in CPS. Game theory is known for its fast decision-making, helping the system choose what maximizes its payoffs. The proposed game model is described over a real-life collaborative artificial intelligence system (CAIS), in which robots work with humans to achieve a common goal. It shows how the expected results of the system will achieve the resilience of CAIS with a minimized CO2 footprint.
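The proposal does not specify its game model, but the core trade-off it describes, recovering performance while minimizing CO2 footprint, can be sketched as a payoff maximization over recovery strategies. The strategies, numbers, and weighting below are entirely hypothetical:

```python
# Hypothetical recovery strategies: name -> (performance_gain, co2_cost).
strategies = {
    "full_restart":  (0.9, 0.7),
    "partial_adapt": (0.7, 0.2),
    "do_nothing":    (0.1, 0.0),
}

def payoff(perf, co2, green_weight=1.0):
    """Payoff trades recovered performance against the CO2 footprint of recovery."""
    return perf - green_weight * co2

# The system picks the strategy that maximizes its payoff.
best = max(strategies, key=lambda s: payoff(*strategies[s]))
print(best)  # partial_adapt
```

Raising `green_weight` would push the choice further toward low-emission strategies, which is one simple way a "green resilience" objective could be encoded.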


Researchers shed light on how to read, control AI systems' minds

FOX News

An organization dedicated to the safe development of artificial intelligence released a "breakthrough paper" it said will help humans better control the technology as it spreads. "We can't trust AIs if we don't know what they are thinking or how they work on the inside," Dan Hendrycks, director of the Center for AI Safety, told Fox News Digital. Hendrycks made the comments after the Center for AI Safety (CAIS) released a paper this week diving into the inner workings of AI systems, looking for ways that humans could better understand and control AI technologies and mitigate some of the risks they pose. According to CAIS, the paper demonstrated ways humans can detect when AI systems are telling truths or lies, when they behave morally or immorally, whether they act with emotions such as anger, fear and joy, and how to make them less biased. The paper also looked at ways to develop systems that can resist jailbreaks, a practice in which users exploit vulnerabilities in AI systems and potentially use them outside desired protocols.


Elon Musk launches AI startup and warns of a 'terminator future'

The Guardian

Elon Musk has launched an artificial intelligence startup that will be "pro-humanity", as he said the world needed to worry about the prospect of a "terminator future" in order to avoid the most apocalyptic AI scenarios. Musk said xAI would seek to build a system that would be safe because it was "maximally curious" about humanity rather than having moral guidelines programmed into it. The world's wealthiest person was one of the signatories to a letter this year that called for a pause in building large AI models such as ChatGPT, the chatbot built by the US firm OpenAI. There are growing fears that development of AI technology will race beyond human control. Speaking in a Spaces discussion on Twitter, Musk said a pause no longer seemed realistic and he hoped xAI would provide an alternative path.


Towards Risk Modeling for Collaborative AI

Camilli, Matteo, Felderer, Michael, Giusti, Andrea, Matt, Dominik T., Perini, Anna, Russo, Barbara, Susi, Angelo

arXiv.org Artificial Intelligence

Collaborative AI systems aim at working together with humans in a shared space to achieve a common goal. This setting imposes potentially hazardous circumstances due to contacts that could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the greatest importance. Challenges associated with achieving this goal become even more severe when such systems rely on machine learning components rather than on top-down, rule-based AI. In this paper, we introduce a risk modeling approach tailored to Collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods that in turn feed the risk model with insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task.
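The structure described above, risk events linked to domain-specific indicators evaluated against run-time evidence, can be sketched as a small data model. The class names, indicators, and thresholds below are hypothetical illustrations, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A domain-specific indicator with a hazard threshold."""
    name: str
    threshold: float

@dataclass
class RiskEvent:
    """A risk event triggered when any of its indicators exceeds its threshold."""
    name: str
    indicators: list

    def triggered(self, readings):
        return any(readings.get(i.name, 0.0) > i.threshold for i in self.indicators)

# Hypothetical Industry 4.0 scenario: robotic arm near a human operator.
proximity = Indicator("human_distance_violation", threshold=0.5)
misdetect = Indicator("vision_confidence_drop", threshold=0.3)
collision = RiskEvent("possible_contact_harm", [proximity, misdetect])

readings = {"human_distance_violation": 0.8, "vision_confidence_drop": 0.1}
print(collision.triggered(readings))  # True
```

In the envisioned feedback loop, run-time evidence (the `readings` here) would both trigger assurance methods and refine the indicators and thresholds of the risk model itself.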


Can artificial intelligence help prevent suicides?

#artificialintelligence

According to the CDC, the suicide rate for individuals 10-24 years old increased 56% between 2007 and 2017. In comparison to the general population, more than half of people experiencing homelessness have had thoughts of suicide or have attempted suicide, the National Health Care for the Homeless Council reported. Phebe Vayanos, assistant professor of Industrial and Systems Engineering and Computer Science at the USC Viterbi School of Engineering, has been enlisting the help of a powerful ally, artificial intelligence, to help mitigate the risk of suicide. "In this research, we wanted to find ways to mitigate suicidal ideation and death among youth. Our idea was to leverage real-life social network information to build a support network of strategically positioned individuals that can 'watch out' for their friends and refer them to help as needed," Vayanos said.


Toward Cognitive and Immersive Systems: Experiments in a Cognitive Microworld

Peveler, Matthew, Govindarajulu, Naveen Sundar, Bringsjord, Selmer, Srivastava, Biplav, Talamadupula, Kartik, Su, Hui

arXiv.org Artificial Intelligence

As computational power has continued to increase, and sensors have become more accurate, the corresponding advent of systems that are at once cognitive and immersive has arrived. These "cognitive and immersive systems" (CAISs) fall squarely into the intersection of AI with HCI/HRI: such systems interact with and assist the human agents that enter them, in no small part because such systems are infused with AI able to understand and reason about these humans and their knowledge, beliefs, goals, communications, plans, etc. We herein explain our approach to engineering CAISs. We emphasize the capacity of a CAIS to develop and reason over a "theory of the mind" of its human partners. This capacity entails that the AI in question has a sophisticated model of the beliefs, knowledge, goals, desires, emotions, etc. of these humans. To accomplish this engineering, a formal framework of very high expressivity is needed. In our case, this framework is a "cognitive event calculus", a particular kind of quantified multi-operator modal logic, and a matching high-expressivity automated reasoner and planner. To explain, advance, and to a degree validate our approach, we show that a calculus of this type satisfies a set of formal requirements, and can enable a CAIS to understand a psychologically tricky scenario couched in what we call the "cognitive polysolid framework" (CPF). We also formally show that a room that satisfies these requirements can have a useful property we term "expectation of usefulness". CPF, a sub-class of "cognitive microworlds", includes machinery able to represent and plan over not merely blocks and actions (such as seen in the primitive "blocks worlds" of old), but also over agents and their mental attitudes about both other agents and inanimate objects.