Towards Conscious Service Robots
Deep learning's success in perception, natural language processing, and related fields inspires hope for advances in autonomous robotics. However, real-world robotics faces challenges such as variability, high-dimensional state spaces, non-linear dependencies, and partial observability. A key issue is the non-stationarity of robots, environments, and tasks, which leads to performance drops on out-of-distribution data. Unlike current machine learning models, humans adapt quickly to changes and new tasks thanks to a cognitive architecture that enables systematic generalization and meta-cognition. The human brain's System 1 handles routine tasks unconsciously, while System 2 manages complex tasks consciously, facilitating flexible problem-solving and self-monitoring. For robots to achieve human-like learning and reasoning, they need to integrate causal models, working memory, planning, and metacognitive processing. By incorporating insights from human cognition, the next generation of service robots will be able to handle novel situations and monitor themselves to avoid risks and mitigate errors.
- North America > United States > New Jersey > Middlesex County > New Brunswick (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Bonn (0.04)
- Asia > Middle East > Saudi Arabia > Northern Borders Province > Arar (0.04)
- Health & Medicine (1.00)
- Leisure & Entertainment > Games > Computer Games (0.93)
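The System 1 / System 2 division described in the abstract above lends itself to a simple control-loop illustration. The following is a minimal, hypothetical sketch, not the paper's architecture: every name and threshold here (`fast_policy`, `deliberate_plan`, `CONFIDENCE_THRESHOLD`) is an assumption. A cheap habitual policy acts when confident, and a metacognitive check escalates unfamiliar situations to a slower deliberative planner.

```python
# Hypothetical dual-process control loop: a fast "System 1" policy handles
# routine inputs, while a metacognitive monitor escalates uncertain cases
# to a slower "System 2" planner. Names and values are illustrative only.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for escalating to System 2

def fast_policy(observation):
    """System 1: cheap lookup of a habitual action with a confidence score."""
    habits = {"corridor": ("drive_forward", 0.95),
              "doorway": ("slow_down", 0.90)}
    # Unfamiliar observations come back with low confidence.
    return habits.get(observation, ("stop", 0.2))

def deliberate_plan(observation):
    """System 2: slower, explicit deliberation over options (stub planner)."""
    options = ["ask_for_help", "explore", "replan_route"]
    return options[0]  # placeholder for a real search or planning step

def act(observation):
    """Metacognitive dispatch: route to System 1 or System 2 by confidence."""
    action, confidence = fast_policy(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action, "system1"
    return deliberate_plan(observation), "system2"
```

The design point is the monitoring step itself: the robot does not merely act, it checks how confident its habitual response is and switches processing modes accordingly.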
Is artificial consciousness achievable? Lessons from the human brain
Farisco, Michele, Evers, Kathinka, Changeux, Jean-Pierre
We here analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation with consciousness as a reference model. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it is theoretically possible that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from human consciousness, and that may be either more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word "consciousness" for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify what is common and what differs in AI conscious processing from full human conscious experience.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Asia > Middle East > Jordan (0.05)
- (9 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.92)
DELTA: Dynamic Embedding Learning with Truncated Conscious Attention for CTR Prediction
Zhu, Chen, Du, Liang, Chen, Hong, Zhao, Shuang, Sun, Zixun, Wang, Xin, Zhu, Wenwu
Click-Through Rate (CTR) prediction is a pivotal task in product and content recommendation, where learning effective feature embeddings is of great significance. However, traditional methods typically learn fixed feature representations without dynamically refining them according to context information, leading to suboptimal performance. Some recent approaches attempt to address this issue by learning bit-wise weights or augmented embeddings for feature representations, but suffer from uninformative or redundant features in the context. To tackle this problem, inspired by the Global Workspace Theory in conscious processing, which posits that only a specific subset of the product features are pertinent while the rest can be noisy and even detrimental to human-click behaviors, we propose a CTR model that enables Dynamic Embedding Learning with Truncated Conscious Attention, termed DELTA. DELTA contains two key components: (I) a conscious truncation module (CTM), which utilizes curriculum learning to apply adaptive truncation on attention weights to select the most critical features in the context; (II) explicit embedding optimization (EEO), which applies an auxiliary task during training that directly and independently propagates the gradient from the loss layer to the embedding layer, thereby optimizing the embedding explicitly via linear feature crossing. Extensive experiments on five challenging CTR datasets demonstrate that DELTA achieves new state-of-the-art performance among current CTR methods.
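The conscious truncation idea can be illustrated with a toy top-k attention sketch. This is an assumed simplification, not the paper's implementation (DELTA's truncation is adaptive and learned via curriculum learning): here, attention weights are computed with a softmax, all but the k largest are zeroed out, and the survivors are renormalised before mixing the value vectors.

```python
import numpy as np

def truncated_attention(query, keys, values, k=2):
    """Toy top-k "conscious" attention: keep only the k largest attention
    weights, zero out the rest, renormalise, then mix the value vectors.
    Ties at the cutoff may keep more than k entries; fine for a sketch."""
    scores = keys @ query                       # one score per feature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over features
    cutoff = np.sort(weights)[-k]               # k-th largest weight
    truncated = np.where(weights >= cutoff, weights, 0.0)
    truncated /= truncated.sum()                # renormalise survivors
    return truncated @ values                   # context-refined embedding
```

The hard zeroing is the point: unlike plain soft attention, features below the cutoff contribute nothing, which mirrors the Global Workspace intuition that only a small subset of features reaches "conscious" processing.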
The Neural Newsletter 9/15-9/22
A powerful symbiotic relationship has blossomed between neuroscience and computer science as of late, with brain systems providing inspiration for prevalent computer algorithms like neural networks, and computer-based mathematical models driving important research into the brain's computational methods. Daniel Kahneman's Thinking, Fast and Slow has popularized the notion that human cognition is divided into distinct hierarchical systems, which Kahneman deems "system 1" and "system 2." Artificial intelligence can handle system 1 tasks, pertaining to fast, nonconscious operations, just as efficiently as humans can. However, it still lags behind when it comes to system 2 tasks, which engage different cognitive pathways that are slower and enlist conscious deliberation. The fact that computers can't compete with humans at deliberate tasks means that computer scientists still have a lot to learn from the brain, which inspired researchers out of the Sorbonne to develop a computational model based on the most recent theories in human learning and cognitive development. They found that processes like synaptic pruning (the elimination of underused synapses), neurogenesis, energy regulation, and accurate dopamine reinforcement were underrepresented in computational learning models.
- North America > United States (0.14)
- Europe > Hungary (0.04)
- Law > Statutes (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
Yann LeCun and Yoshua Bengio: Self-supervised learning is the key to human-level intelligence
Self-supervised learning could lead to the creation of AI that's more human-like in its reasoning, according to Turing Award winners Yoshua Bengio and Yann LeCun. Bengio, director at the Montreal Institute for Learning Algorithms, and LeCun, Facebook VP and chief AI scientist, spoke candidly about this and other research trends during a session at the International Conference on Learning Representations (ICLR) 2020, which took place online. Supervised learning entails training an AI model on a labeled data set, and LeCun thinks it'll play a diminishing role as self-supervised learning comes into wider use. Instead of relying on annotations, self-supervised learning algorithms generate labels from data by exposing relationships among the data's parts, a step believed to be critical to achieving human-level intelligence. "Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It's basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way," said LeCun.
- Personal (0.36)
- Research Report (0.31)
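The core self-supervised idea LeCun describes, deriving labels from the data itself rather than from human annotations, can be shown with a toy next-word predictor. This example is assumed for illustration only and is not from the article: each word's successor in a corpus serves as its own training label, so no annotator is needed.

```python
# Toy self-supervised learner: the "labels" are the words already present
# in the data (each word's successor), not human annotations.
from collections import Counter, defaultdict

def train_next_word(sentences):
    """Count which word follows each word; pseudo-labels come from the data."""
    table = defaultdict(Counter)
    for sent in sentences:
        for prev, nxt in zip(sent, sent[1:]):
            table[prev][nxt] += 1   # the next word itself is the label
    return table

def predict_next(table, word):
    """Fill in a masked position with the most frequent continuation."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

corpus = [["the", "robot", "moves"],
          ["the", "robot", "stops"],
          ["the", "robot", "moves"]]
model = train_next_word(corpus)
```

Real self-supervised systems replace the counting table with a neural network and richer masking schemes, but the supervision signal is constructed the same way: from relationships among the data's own parts.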
Will we ever have Conscious Machines?
Krauss, Patrick, Maier, Andreas
The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective, and the question of whether something is really self-aware or merely a clever program that pretends to be cannot be answered without accurate knowledge of the mechanism's inner workings. We review the current state of the art regarding these developments and investigate common machine learning approaches with respect to their potential ability to become self-aware. We find that many important algorithmic steps towards machines with a core consciousness have already been devised. For human-level intelligence, however, many additional techniques have to be discovered.
- Europe > Germany > Bavaria > Middle Franconia > Nuremberg (0.14)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (3 more...)
- Research Report (1.00)
- Overview (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)