- Europe > United Kingdom > North Sea > Southern North Sea (0.04)
- Asia > Middle East > Israel (0.04)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- Asia > China > Beijing > Beijing (0.04)
Using Petri Nets for Context-Adaptive Robot Explanations
Görkem Kılınç Soylu, Neziha Akalin, Maria Riveiro
In human-robot interaction, robots must communicate in a natural and transparent manner to foster trust, which requires adapting their communication to the context. In this paper, we propose using Petri nets (PNs) to model contextual information for adaptive robot explanations. PNs provide a formal, graphical method for representing concurrent actions, causal dependencies, and system states, making them suitable for analyzing dynamic interactions between humans and robots. We demonstrate this approach through a scenario involving a robot that provides explanations based on contextual cues such as user attention and presence. Model analysis confirms key properties, including deadlock-freeness, context-sensitive reachability, boundedness, and liveness, showing the robustness and flexibility of PNs for designing and verifying context-adaptive explanations in human-robot interactions.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > Riverside County > Riverside (0.04)
- Europe > Sweden (0.04)
- Europe > Slovakia > Bratislava > Bratislava (0.04)
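To make the Petri-net machinery in the paper above concrete, here is a minimal place/transition net in Python. The place and transition names (user presence, attention, brief versus detailed explanation) are illustrative stand-ins for the paper's scenario rather than its actual model, and properties such as boundedness or liveness would in practice be checked with a dedicated PN tool.

```python
# Minimal place/transition Petri net with the standard firing rule: a
# transition is enabled when every input place holds a token, and firing
# moves one token from each input place to each output place.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)       # place name -> token count
        self.transitions = {}              # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical context places: tokens mark user presence and attention.
net = PetriNet({"user_present": 1, "user_attentive": 1, "robot_ready": 1})
net.add_transition("detailed_explanation",
                   inputs=["user_present", "user_attentive", "robot_ready"],
                   outputs=["explanation_done"])
net.add_transition("brief_explanation",
                   inputs=["user_present", "robot_ready"],
                   outputs=["explanation_done"])

# An attentive, present user enables the detailed explanation.
if net.enabled("detailed_explanation"):
    net.fire("detailed_explanation")
print(net.marking)
```

Because firing consumes the context tokens, which explanation the robot can give depends entirely on the current marking; that marking-dependence is what makes the formalism a natural fit for context adaptation, and it is also what model checkers exploit to verify reachability and deadlock-freeness.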
Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy?
Louis Vervoort, Vitaly Nikolaev
We propose a test for abstract causal reasoning in AI, based on scholarship in the philosophy of causation, in particular on the neuron diagrams popularized by D. Lewis. We illustrate the test on advanced Large Language Models (ChatGPT, DeepSeek, and Gemini). Remarkably, these chatbots are already capable of correctly identifying causes in cases that are hotly debated in the literature. In order to assess the results of these LLMs and of future dedicated AI, we propose a definition of cause in neuron diagrams with wider validity than any published hitherto, which challenges the widespread view that such a definition is elusive. We submit that these results illustrate how future philosophical research might evolve: as an interplay between human and artificial expertise.
- Asia > Russia (0.40)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.40)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)
- Health & Medicine (0.92)
- Education (0.87)
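The neuron-diagram test described in the abstract above is straightforward to mechanize. The sketch below evaluates a diagram under the textbook rule (a neuron fires iff some stimulatory parent fires and no inhibitory parent does) and applies a plain but-for counterfactual check; the encoding and the pre-emption example are illustrative, and the paper's proposed definition of cause is explicitly wider than this simple test.

```python
# Evaluate a Lewis-style neuron diagram: a neuron fires iff at least one
# stimulatory parent fires and no inhibitory parent fires. Exogenous neurons
# are set directly. The diagram encoding and example are illustrative.

def evaluate(diagram, exogenous):
    state = dict(exogenous)
    # Fixed-point sweep; assumes the diagram is acyclic, as Lewis's are.
    for _ in range(len(diagram)):
        for neuron, (stim, inhib) in diagram.items():
            if neuron in exogenous:
                continue
            state[neuron] = (any(state.get(p, False) for p in stim)
                             and not any(state.get(p, False) for p in inhib))
    return state

def but_for_cause(diagram, exogenous, candidate, effect):
    # Textbook counterfactual dependence: would the effect change if the
    # candidate were flipped? The paper's proposed definition is wider.
    actual = evaluate(diagram, exogenous)[effect]
    flipped = dict(exogenous, **{candidate: not exogenous[candidate]})
    return actual != evaluate(diagram, flipped)[effect]

# Classic early pre-emption: C excites E directly while inhibiting the
# backup route B -> D -> E. Each neuron maps to (stimulatory, inhibitory).
diagram = {
    "D": (["B"], ["C"]),
    "E": (["C", "D"], []),
}
print(but_for_cause(diagram, {"C": True, "B": True}, "C", "E"))  # False
```

The printed False is the instructive part: in pre-emption cases the plain but-for test fails to count C as a cause of E even though it intuitively is one, which is exactly the sort of case that motivates a wider definition.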
DOGE Is Working on Software That Automates the Firing of Government Workers
Engineers for Elon Musk's so-called Department of Government Efficiency, or DOGE, are working on new software that could assist with mass firings of federal workers across the government, sources tell WIRED. The software, called AutoRIF, which stands for Automated Reduction in Force, was first developed by the Department of Defense more than two decades ago. Since then, it has been updated several times and used by a variety of agencies to expedite reductions in force. Screenshots of internal databases reviewed by WIRED show that DOGE operatives have accessed AutoRIF and appear to be editing its code. A repository titled "autorif" sits in the Office of Personnel Management's (OPM) enterprise GitHub system, in a space created for the director's office, where Musk associates took charge soon after Trump took office.
A Synaptical Story of Persistent Activity with Graded Lifetime in a Neural System
Yuanyuan Mi, Luozheng Li, Dahui Wang, Si Wu
Persistent activity refers to the phenomenon that cortical neurons keep firing even after the stimulus that triggered the initial neuronal responses is removed. Persistent activity is widely believed to be the substrate by which a neural system retains a memory trace of the stimulus information. In the conventional view, persistent activity is regarded as an attractor of the network dynamics, but this view faces the challenge of how the activity can be terminated properly. Here, in contrast to the attractor view, we consider that the stimulus information is encoded in a marginally unstable state of the network, which decays very slowly and exhibits persistent firing for a prolonged duration. We propose a simple yet effective mechanism to achieve this, which exploits the short-term plasticity (STP) of neuronal synapses. STP has two forms, short-term depression (STD) and short-term facilitation (STF), which have opposite effects on retaining neuronal responses. We find that by properly combining STF and STD, a neural system can hold persistent activity of graded lifetime, and that the persistent activity fades away naturally without relying on an external drive. The implications of these results for neural information representation are discussed.
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Jiangsu Province (0.04)
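A minimal rate-model sketch, in the spirit of the Tsodyks-Markram synapse, shows the STF/STD interplay the abstract describes. The equations, the saturating gain, and every parameter value here are illustrative choices, not the paper's model.

```python
import numpy as np

# Rate-model sketch in the spirit of the Tsodyks-Markram synapse: u is the
# facilitation variable (STF), x is the depression variable (STD), r is the
# population firing rate. All parameter values are illustrative.

dt, T = 0.001, 5.0                    # time step and duration (seconds)
tau_r, tau_f, tau_d = 0.01, 1.5, 0.3  # rate, facilitation, depression constants
U, J = 0.2, 8.0                       # baseline release probability, coupling
r, u, x = 0.0, U, 1.0

rates = []
for step in range(int(T / dt)):
    t = step * dt
    I_ext = 20.0 if t < 0.2 else 0.0           # brief stimulus, then removed
    drive = J * u * x * r + I_ext               # facilitated, depressed recurrence
    r += dt * (-r + 100.0 * np.tanh(max(drive, 0.0) / 100.0)) / tau_r
    u += dt * ((U - u) / tau_f + U * (1.0 - u) * r)   # STF: spikes boost release
    x += dt * ((1.0 - x) / tau_d - u * x * r)         # STD: spikes deplete resources
    rates.append(r)

for t_probe in (0.1, 1.0, 4.0):
    print(f"rate at {t_probe} s: {rates[int(t_probe / dt)]:.1f} Hz")
```

Whether the firing outlasts the stimulus, and how quickly it fades, hinges on the STF/STD balance set by U, J, tau_f, and tau_d; the paper's claim is that tuning this balance yields a slowly decaying, graded-lifetime state rather than a fixed-point attractor.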
Reviews: Visualizing the PHATE of Neural Networks
Update after author response: Taking on faith the results the authors report in their response (namely, the ability to identify generalization performance using only the training set, the results on CIFAR10 and white-noise datasets, and the quantitative evaluation of the task-switching), I would raise my score to a 6. (If they did achieve everything claimed in the author response, I would be inclined to give a 7, but I'd need to see all the results for that.)

Originality: I think the originality is fairly high. Although the PHATE algorithm exists in the literature, the Multislice kernel is novel, and the idea of visualizing the learning dynamics of the hidden neurons to ascertain things like catastrophic forgetting or poor generalization is (to my knowledge) novel.

Quality: I think the Experiments section could be substantially improved: (1) For the experiments on continual learning, from looking at Figure 3 it is not obvious to me that Adagrad does better than Rehearsal in the "Domain" learning setting, or that Adagrad outperforms Adam at class learning. Adam apparently does best at task learning, but again, I wouldn't have guessed that from the trajectories.
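For readers unfamiliar with the setup under review: the open-source phate package can embed hidden-unit activations collected across training epochs, which approximates the paper's idea. The sketch below applies vanilla PHATE to stacked per-epoch activations; the paper's novel Multislice kernel is not assumed to be in the public library, and random data stands in for real logged activations.

```python
import numpy as np
import phate  # pip install phate

# Hypothetical setup: activations[e] is an (n_hidden, n_probe) array of
# hidden-unit responses to a fixed probe set at epoch e. Stacking the epochs
# gives one row per (neuron, epoch) pair; embedding the stack in 2-D then
# traces each neuron's trajectory across training.

rng = np.random.default_rng(0)
n_hidden, n_probe, n_epochs = 50, 100, 10
activations = [rng.standard_normal((n_hidden, n_probe)) * (e + 1)
               for e in range(n_epochs)]        # stand-in for real logs

stacked = np.vstack(activations)                # (n_hidden * n_epochs, n_probe)
embedding = phate.PHATE(n_components=2).fit_transform(stacked)

# Row e * n_hidden + i is neuron i at epoch e, so reshaping recovers
# per-neuron trajectories that can be plotted as curves through the embedding.
trajectories = embedding.reshape(n_epochs, n_hidden, 2)
print(trajectories.shape)
```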
The Big Questions About AI in 2024
Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say "year," I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.63)
A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory
This article provides an analytical framework for how to simulate human-like thought processes within a computer. It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought. The focus is on replicating the dynamics of the mammalian working memory system, which features two forms of persistent activity: sustained firing (preserving information on the order of seconds) and synaptic potentiation (preserving information from minutes to hours). The article uses a series of over 40 original figures to systematically demonstrate how the iterative updating of these working memory stores provides functional structure to behavior, cognition, and consciousness. In an AI implementation, these two memory stores should be updated continuously and in an iterative fashion, meaning each state should preserve a proportion of the coactive representations from the state before it. Thus, the set of concepts in working memory will evolve gradually and incrementally over time. This makes each state a revised iteration of the preceding state and causes successive states to overlap and blend with respect to the information they contain. Transitions between states happen as persistent activity spreads activation energy throughout the hierarchical network searching long-term memory for the most appropriate representation to be added to the global workspace. The result is a chain of associatively linked intermediate states capable of advancing toward a solution or goal. Iterative updating is conceptualized here as an information processing strategy, a model of working memory, a theory of consciousness, and an algorithm for designing and programming artificial general intelligence.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (12 more...)
- Workflow (0.92)
- Research Report > New Finding (0.45)
- Research Report > Experimental Study (0.45)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.92)
- Information Technology > Artificial Intelligence > Cognitive Science > Neuroscience (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (0.67)
- (2 more...)
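A toy sketch of the iterative-updating loop the abstract above describes: each working-memory state keeps most of the previous state's items and adds the long-term-memory concept most strongly associated with the current contents, so successive states overlap and blend. The association matrix, store size, and replacement rule are invented for illustration, and the article's two distinct stores (sustained firing and synaptic potentiation) are collapsed into one here.

```python
import numpy as np

# Toy version of iterative updating: each working-memory state keeps most of
# the previous state's items and adds the long-term-memory concept most
# strongly associated with the current contents, so successive states overlap.

rng = np.random.default_rng(1)
n_concepts, wm_size, keep = 50, 4, 3     # keep 3 of 4 items at every step

ltm = rng.random((n_concepts, n_concepts))   # symmetric association strengths
ltm = (ltm + ltm.T) / 2.0
np.fill_diagonal(ltm, 0.0)

state = [int(c) for c in rng.choice(n_concepts, size=wm_size, replace=False)]
for step in range(6):
    # Spreading activation: total association of every concept with the state.
    activation = ltm[state].sum(axis=0)
    activation[state] = -np.inf              # never re-add a current item
    newcomer = int(np.argmax(activation))
    # Drop the item least linked to the rest; the survivors carry continuity.
    weakest = min(state, key=lambda c: sum(ltm[c, o] for o in state if o != c))
    state = [c for c in state if c != weakest][:keep] + [newcomer]
    print(step, state)
```

Each printed state shares three of its four concepts with its predecessor, making every state a revised iteration of the one before it, which is the overlap-and-blend dynamic the article argues structures the stream of thought.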