
Collaborating Authors: Damasio


The Contingencies of Physical Embodiment Allow for Open-Endedness and Care

Christov-Moore, Leonardo, Juliani, Arthur, Kiefer, Alex, Reggente, Nicco, Rousse, B. Scott, Safron, Adam, Hinrichs, Nicolás, Polani, Daniel, Damasio, Antonio

arXiv.org Artificial Intelligence

Physical vulnerability and mortality are often seen as obstacles to be avoided in the development of artificial agents, which struggle to adapt to open-ended environments and provide aligned care. Meanwhile, biological organisms survive, thrive, and care for each other in an open-ended physical world with relative ease and efficiency. Understanding the role of the conditions of life in this disparity can aid in developing more robust, adaptive, and caring artificial agents. Here we define two minimal conditions for physical embodiment inspired by the existentialist phenomenology of Martin Heidegger: being-in-the-world (the agent is a part of the environment) and being-towards-death (unless counteracted, the agent drifts toward terminal states due to the second law of thermodynamics). We propose that from these conditions we can obtain both a homeostatic drive - aimed at maintaining integrity and avoiding death by expending energy to learn and act - and an intrinsic drive to continue to do so in as many ways as possible. Drawing inspiration from Friedrich Nietzsche's existentialist concept of will-to-power, we examine how intrinsic drives to maximize control over future states, e.g., empowerment, allow agents to increase the probability that they will be able to meet their future homeostatic needs, thereby enhancing their capacity to maintain physical integrity. We formalize these concepts within a reinforcement learning framework, which enables us to examine how intrinsically driven embodied agents learning in open-ended multi-agent environments may cultivate the capacities for open-endedness and care.
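The abstract above frames empowerment as an intrinsic drive to maximize control over future states. For deterministic dynamics, n-step empowerment has a simple closed form: the channel capacity between action sequences and resulting states reduces to the log of the number of distinct reachable states. A minimal sketch, assuming a hypothetical 5x5 gridworld (not the paper's actual environment):

```python
from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four compass moves
SIZE = 5  # toy 5x5 gridworld, an illustrative assumption

def step(state, action):
    """Deterministic transition: move one cell, clipped at the border."""
    x, y = state
    dx, dy = action
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def empowerment(state, horizon):
    """For deterministic dynamics, n-step empowerment (the channel
    capacity from action sequences to final states) reduces to
    log2(number of distinct reachable states)."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

# An agent in a corner controls fewer distinct futures than one in the
# open, so an empowerment-maximizing drive pushes it away from dead ends.
corner = empowerment((0, 0), 2)
center = empowerment((2, 2), 2)
```

In stochastic environments this shortcut no longer applies and empowerment must be estimated as a true channel capacity (e.g., via the Blahut-Arimoto algorithm), but the deterministic case already shows the qualitative effect the authors invoke: states that keep more options open score higher.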


Sentience Quest: Towards Embodied, Emotionally Adaptive, Self-Evolving, Ethically Aligned Artificial General Intelligence

Hanson, David, Varcoe, Alexandre, Senna, Fabio, Krisciunas, Vytas, Huang, Wenwei, Sura, Jakub, Yeung, Katherine, Rodriguez, Mario, Wilsdorf, Jovanka, Smith, Kathy

arXiv.org Artificial Intelligence

Current artificial intelligence systems -- from large language models to autonomous robots -- excel at narrow tasks but lack key qualities of sentient beings: intrinsic motivation, affective interiority, autobiographical sense of self, deep creativity, and abilities to autonomously evolve and adapt over time. Here we introduce Sentience Quest, an open research initiative to develop more capable artificial general intelligence lifeforms (AGIL) that achieve these grand challenges with an embodied, emotionally adaptive, self-determining, living AI, with core drives that ethically align with humans and the future of life. Our vision builds on ideas from cognitive science and neuroscience -- from Baars' Global Workspace Theory and Damasio's somatic mind, to Tononi's Integrated Information Theory and Hofstadter's narrative self -- synthesizing these into a novel cognitive architecture. We describe an approach that integrates intrinsic drives (e.g., survival, social bonding, curiosity), a global "Story Weaver" workspace for internal narrative and adaptive goal pursuit, and a hybrid neuro-symbolic memory that logs the AI's life events as structured "story objects." Implemented in humanoid robots like Sophia, this architecture enables adaptive behavior grounded in a human-like body, in pursuit of experiential learning homologous to human experiences. Early results are promising, with a driver-based goal system generating self-motivated actions, a narrative memory allowing the robot to refer to its own experiences, and integrated information measures (Φ) quantifying evolving cognitive integration. We discuss ethical implications, exploring how co-evolution with humans via an information-centric ethics ("SuperGood" principle) may guide both developers and AI systems to ensure value alignment.
Sentience Quest is presented as a call to action: a collaborative, open-source effort to imbue machines with accelerating sentience in a safe, transparent, and beneficial manner.
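The abstract describes a neuro-symbolic memory that logs life events as structured "story objects." The paper does not publish a schema, so the sketch below is a purely hypothetical data structure (all field names are illustrative assumptions) showing how such a narrative log might support simple affect-based recall:

```python
# Hypothetical sketch of a "story object": a structured entry a
# narrative-memory system might append for each salient life event.
# Field names are illustrative assumptions, not the project's schema.
from dataclasses import dataclass, field

@dataclass
class StoryObject:
    timestamp: float   # when the event occurred (seconds since boot)
    event: str         # symbolic description of what happened
    drive: str         # intrinsic drive that motivated it
    affect: float      # scalar valence in [-1, 1]
    embedding: list = field(default_factory=list)  # neural-side vector

log: list[StoryObject] = []
log.append(StoryObject(0.0, "greeted visitor", "social bonding", 0.7))
log.append(StoryObject(1.5, "bumped into table", "survival", -0.4))

# A simple narrative query: recall the most positively valenced event,
# the kind of lookup a "Story Weaver" workspace might perform.
best = max(log, key=lambda s: s.affect)
```

Pairing a symbolic record (`event`, `drive`) with a neural representation (`embedding`) is one common way to realize the hybrid memory the abstract alludes to, since it allows both rule-based queries and similarity search over the same log.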


Probing for Consciousness in Machines

Immertreu, Mathis, Schilling, Achim, Maier, Andreas, Krauss, Patrick

arXiv.org Artificial Intelligence

This study explores the potential for artificial agents to develop core consciousness, as proposed by Antonio Damasio's theory of consciousness. According to Damasio, the emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model. We hypothesize that an artificial agent, trained via reinforcement learning (RL) in a virtual environment, can develop preliminary forms of these models as a byproduct of its primary task. The agent's main objective is to learn to play a video game and explore the environment. To evaluate the emergence of world and self models, we employ probes: feedforward classifiers that use the activations of the trained agent's neural networks to predict the spatial positions of the agent itself. Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness. This research provides foundational insights into the capabilities of artificial agents in mirroring aspects of human consciousness, with implications for future advancements in artificial intelligence.
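The probing method described above is straightforward to illustrate: train a readout on a network's hidden activations and check whether it predicts a quantity (here, the agent's position) better than chance. The sketch below uses synthetic stand-in activations rather than a trained RL agent, and a linear least-squares readout as the simplest possible probe; both choices are assumptions for the sake of a self-contained example:

```python
# Sketch of a linear probe: predict the agent's (x, y) position from
# hidden-layer activations. Activations here are synthetic stand-ins;
# in the paper's setting they would come from the trained RL agent.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_units = 500, 64  # hypothetical rollout length and layer width

positions = rng.uniform(-1.0, 1.0, size=(n_steps, 2))  # ground truth (x, y)
mixing = rng.normal(size=(2, n_units))                 # fake encoder weights
activations = np.tanh(positions @ mixing)              # stand-in activations

# Fit the probe by least squares, with a bias column appended.
X = np.hstack([activations, np.ones((n_steps, 1))])
W, *_ = np.linalg.lstsq(X, positions, rcond=None)
predicted = X @ W

# If position is linearly decodable from the activations, the probe's
# error falls far below the chance baseline (predicting the mean).
probe_mse = np.mean((predicted - positions) ** 2)
chance_mse = np.mean((positions.mean(axis=0) - positions) ** 2)
```

The comparison against `chance_mse` is the crucial step: a probe is only evidence of a "world model" or "self model" if it beats the baseline a position-blind predictor would achieve.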


The feasibility of artificial consciousness through the lens of neuroscience

Aru, Jaan, Larkum, Matthew, Shine, James M.

arXiv.org Artificial Intelligence

Biological cells have multi-level organization, and depend on a further cascade of biophysical intracellular complexity [79-83]. For instance, consider the Krebs cycle that underlies cellular respiration, a key process in maintaining cellular homeostasis [84]. Cellular respiration is a crucial biological process that enables cells to convert the energy stored in organic molecules into a form of energy that can be utilized by the cell. However, this process is not compressible into software, because processes like cellular respiration must happen with real physical molecules. Note that our aim is not to suggest that consciousness requires the Krebs cycle, but rather to highlight that perhaps consciousness is similar: it cannot be abstracted away from the underlying machinery [68-69,85]. Importantly, we are not claiming that consciousness cannot be captured within software at all [68-69,85-87]. Rather, we emphasize that we have to at least entertain the possibility that consciousness is linked to the complex biological organization underlying life [74-81], and thus any computational description of consciousness will be much more complex than our present-day theories suggest (Figure 1).


Will Machines Ever Be Self-Conscious? - AI Summary

#artificialintelligence

Without a doubt, neuroscience holds vast scientific information about human consciousness, as researchers over the years have tackled issues such as how consciousness correlates with neural activity, the computational phenomena achieved through consciousness, the theory of a global workspace, and the model of consciousness postulated by Damasio. Damasio's model is biologically plausible: he assigned each stage of consciousness to specific structures in the brain and associated them with respective functions. Tapping into this bedrock of neuroscientific discoveries, artificial intelligence hosts many theories of consciousness, including mechanistic readings of Damasio's work. However, virtually all algorithms investigated to create self-conscious machines have followed the global workspace model of consciousness, which may be likened to a mechanical model. Unfortunately, a widespread belief in the scientific community that human consciousness will never be simulated on a computer, given the infancy of AI ideas, has fostered a lackadaisical attitude toward implementing theories in this space.


Will Machines Ever Be Self-Conscious? - Data Enigma

#artificialintelligence

Over the last few decades, we have observed tremendous improvements in the ability of machines to carry out human tasks. Thanks to progressive studies, series of algorithms are consistently improved to ensure that these functions are carried out efficiently. Despite these impressive recent gains, scientists continue to improve the functionality of these machines. This raises the question: will machines ever attain consciousness or become self-aware like the human mind? Self-consciousness cannot be understood through mere behavioral observation, because it is an act of the mind.


What does the future of artificial intelligence mean for humans?

#artificialintelligence

The first question many people ask about artificial intelligence (AI) is, "Will it be good or bad?" The answer is … yes. Canadian company BlueDot used AI technology to detect the novel coronavirus outbreak in Wuhan, China, just hours after the first cases were diagnosed. Compiling data from local news reports, social media accounts and government documents, the infectious disease data analytics firm warned of the emerging crisis a week before the World Health Organization made any official announcement. While predictive algorithms could help us stave off pandemics or other global threats as well as manage many of our day-to-day challenges, AI's ultimate impact is impossible to predict.


Will we ever have Conscious Machines?

Krauss, Patrick, Maier, Andreas

arXiv.org Artificial Intelligence

The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective, and whether something is genuinely self-aware or merely a clever program that pretends to be cannot be determined without accurate knowledge of the mechanism's inner workings. We review the current state-of-the-art regarding these developments and investigate common machine learning approaches with respect to their potential ability to become self-aware. We realise that many important algorithmic steps towards machines with a core consciousness have already been devised. For human-level intelligence, however, many additional techniques have to be discovered.


Robots Need to Know They Can Die at Any Minute, Just Like the Rest of Us

#artificialintelligence

How do you get machines to perform better? Tell them they could croak at any minute. In a new paper from the University of Southern California, scientists say that "in a dynamic and unpredictable world, an intelligent agent should hold its own meta-goal of self-preservation." Researcher Antonio Damasio is a luminary in the field of intelligence and the brain. His profile at the Edge Foundation says Damasio "has made seminal contributions to the understanding of brain processes underlying emotions, feelings, decision-making and consciousness."


Fun New Paper Says We Should Make Machines Freak Out About Their Own Mortality

#artificialintelligence

Artificial intelligence is already making great strides forward, but taking it to the next level might require a more drastic approach. According to two researchers, we could try giving AI a sense of peril and the fragility of its own existence. For now, the machines we code don't have a sense of their own being, or the need to fight for life and for survival, as we humans do. If those feelings were developed, that might give robots a better sense of urgency. The idea is to instil a sense of homeostasis – that need to balance conditions, whether that's the temperature of an environment, or the need for food and drink, that are required to ensure survival.