Gudwin, Ricardo
Building a Cognitive Twin Using a Distributed Cognitive System and an Evolution Strategy
Gibaut, Wandemberg, Gudwin, Ricardo
This work proposes an approach that uses an evolutionary algorithm along with traditional Machine Learning methods to build a digital, distributed cognitive agent capable of emulating the potential actions (input-output behavior) of a user, while allowing further analysis and experimentation - at a certain level - on its internal structures. We focus on the usage of simple devices and the automation of this building process, rather than manually designing the agent.
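The abstract above does not specify the evolutionary algorithm used; as a minimal sketch only, a (1+λ) evolution strategy tuning a tiny linear policy to imitate logged user input-output behavior could look like this (all names and the toy behavior log are hypothetical, not from the paper):

```python
# Hypothetical sketch, NOT the paper's implementation: a (1+lambda)
# evolution strategy fits a tiny linear policy to imitate a recorded
# set of user input-output pairs.
import random

random.seed(0)

# Toy "user behavior" log (assumed data): input -> desired output.
LOG = [([x / 10.0], [2.0 * (x / 10.0) + 1.0]) for x in range(10)]

def act(weights, inputs):
    # The candidate agent: a one-weight linear policy with a bias.
    w, b = weights
    return [w * inputs[0] + b]

def fitness(weights):
    # Negative mean squared error against the user's logged behavior.
    err = sum((act(weights, i)[0] - o[0]) ** 2 for i, o in LOG)
    return -err / len(LOG)

def evolve(generations=200, lam=8, sigma=0.1):
    parent = (random.gauss(0, 1), random.gauss(0, 1))
    for _ in range(generations):
        # Mutate the parent with Gaussian noise to create offspring.
        offspring = [(parent[0] + random.gauss(0, sigma),
                      parent[1] + random.gauss(0, sigma))
                     for _ in range(lam)]
        # (1+lambda) selection: the parent survives unless beaten.
        parent = max([parent] + offspring, key=fitness)
    return parent

best = evolve()
```

The selection rule is elitist (the parent is only replaced by a strictly better offspring), which keeps the search stable on this toy imitation task.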
Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots
Berto, Letícia, Costa, Paula, Simões, Alexandre, Gudwin, Ricardo, Colombini, Esther
Humans have needs that motivate their behavior according to intensity and context. However, we also create preferences associated with each action's perceived pleasure, which are susceptible to change over time. This makes decision-making more complex, requiring learning to balance needs and preferences according to the context. To understand how this process works and to enable the development of robots with a motivation-based learning model, we computationally model a motivation theory proposed by Hull. In this model, the agent (an abstraction of a mobile robot) is motivated to keep itself in a state of homeostasis. We added hedonic dimensions to see how preferences affect decision-making, and we employed reinforcement learning to train our motivated agents. We ran three agents with energy decay rates representing different metabolisms in two different environments to observe the impact on their strategy, movement, and behavior. The results show that the agents learned better strategies in the environment that enabled choices better suited to their metabolism. The use of pleasure in the motivational mechanism significantly impacted behavior learning, mainly for slow-metabolism agents. When survival is at risk, the agent ignores pleasure and equilibrium, hinting at how to behave in harsh scenarios.
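The abstract does not give the reward formulation; as one hedged sketch of the general idea, a Hull-style drive-reduction reward with an added hedonic term could be written as below (the setpoint, weighting, and all names are illustrative assumptions, not the paper's model):

```python
# Hypothetical sketch, NOT the paper's model: a drive-reduction
# reward in the spirit of Hull's theory, plus a hedonic (pleasure)
# term, as could feed the reinforcement learner described above.
SETPOINT = 1.0  # assumed homeostatic target energy level

def drive(energy):
    # Drive grows with the distance from the homeostatic setpoint.
    return abs(SETPOINT - energy)

def reward(energy_before, energy_after, pleasure, w_pleasure=0.3):
    # Reducing drive is rewarding; pleasure biases preferences.
    return (drive(energy_before) - drive(energy_after)) + w_pleasure * pleasure

# Eating while depleted moves energy toward the setpoint, so the
# drive term is positive and the action is reinforced.
r = reward(energy_before=0.4, energy_after=0.9, pleasure=0.5)
```

Scaling `w_pleasure` down to zero recovers a pure drive-reduction agent, which mirrors the abstract's observation that pleasure can be ignored when survival dominates.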