Trump's mass firing just dealt another blow to American science
Ambitious research is on the chopping block following yet more cuts at the National Science Foundation. This past week delivered another gut punch for science in the US. This time, the target was the National Science Foundation--a federal agency that funds major research projects to the tune of around $9 billion. The foundation's efforts were overseen by a board of 22 prominent scientists. On Friday last week, they were all fired. The NSF has been without a director since April 2025, when former director Sethuraman Panchanathan stepped down in the wake of DOGE-led funding cuts and mass firings.
Using Petri Nets for Context-Adaptive Robot Explanations
Görkem Kılınç Soylu, Neziha Akalin, Maria Riveiro
In human-robot interaction, robots must communicate in a natural and transparent manner to foster trust, which requires adapting their communication to the context. In this paper, we propose using Petri nets (PNs) to model contextual information for adaptive robot explanations. PNs provide a formal, graphical method for representing concurrent actions, causal dependencies, and system states, making them suitable for analyzing dynamic interactions between humans and robots. We demonstrate this approach through a scenario involving a robot that provides explanations based on contextual cues such as user attention and presence. Model analysis confirms key properties, including deadlock-freeness, context-sensitive reachability, boundedness, and liveness, showing the robustness and flexibility of PNs for designing and verifying context-adaptive explanations in human-robot interactions.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > Riverside County > Riverside (0.04)
- Europe > Sweden (0.04)
- Europe > Slovakia > Bratislava > Bratislava (0.04)
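The modeling idea in the abstract above can be sketched in a few lines. This is a minimal illustration, with hypothetical place and transition names rather than the authors' actual net: tokens encode context such as user presence and attention, and transitions encode which explanation the robot may give in that context.

```python
# Minimal Petri-net sketch (hypothetical names, not the paper's model):
# context tokens gate which explanation transition is enabled.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Context: user present and attentive, robot ready to explain.
net = PetriNet({"user_present": 1, "user_attentive": 1, "robot_ready": 1})
net.add_transition("give_detailed",
                   inputs=["user_present", "user_attentive", "robot_ready"],
                   outputs=["user_present", "user_attentive", "explained"])
net.add_transition("give_brief",
                   inputs=["user_present", "robot_ready"],
                   outputs=["user_present", "explained"])

# An attentive user enables the detailed explanation; the context tokens
# are read (consumed and restored), while robot_ready is consumed for good.
assert net.enabled("give_detailed")
net.fire("give_detailed")
assert net.marking["explained"] == 1 and net.marking["robot_ready"] == 0
```

Properties like the reachability and deadlock-freeness the paper verifies would be checked over the reachable markings of such a net, typically with a dedicated analysis tool rather than this toy simulator.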
Causes in neuron diagrams, and testing causal reasoning in Large Language Models. A glimpse of the future of philosophy?
Louis Vervoort, Vitaly Nikolaev
We propose a test for abstract causal reasoning in AI, based on scholarship in the philosophy of causation, in particular on the neuron diagrams popularized by D. Lewis. We illustrate the test on advanced Large Language Models (ChatGPT, DeepSeek and Gemini). Remarkably, these chatbots are already capable of correctly identifying causes in cases that are hotly debated in the literature. In order to assess the results of these LLMs and future dedicated AI, we propose a definition of cause in neuron diagrams with a wider validity than published hitherto, which challenges the widespread view that such a definition is elusive. We submit that these results are an illustration of how future philosophical research might evolve: as an interplay between human and artificial expertise.
- Asia > Russia (0.40)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.40)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (2 more...)
- Health & Medicine (0.92)
- Education (0.87)
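The kind of test the abstract above describes can be made concrete with a toy simulator. The encoding below is an illustrative assumption, not the paper's definition: a neuron diagram is a map from each neuron to its excitatory and inhibitory parents, and the naive but-for counterfactual test checks whether the effect fails once the candidate cause's stimulation is withheld. The test famously breaks under preemption, which is exactly the kind of hotly debated case such benchmarks probe.

```python
# Illustrative sketch (assumed encoding, not the paper's definition) of
# Lewis-style neuron diagrams and the naive but-for test for causation.

def run(diagram, stimulated):
    """diagram: neuron -> (excitatory_parents, inhibitory_parents).
    A neuron fires iff (it is stimulated or an excitatory parent fires)
    and no inhibitory parent fires. Assumes an acyclic diagram.
    Returns the set of firing neurons."""
    status = {}
    while len(status) < len(diagram):
        for n, (exc, inh) in diagram.items():
            if n not in status and all(p in status for p in exc + inh):
                status[n] = ((n in stimulated or any(status[p] for p in exc))
                             and not any(status[p] for p in inh))
    return {n for n, fires in status.items() if fires}

def but_for_cause(diagram, stimulated, c, e):
    """Naive counterfactual test: c and e both fire, and e fails to fire
    once c's stimulation is withheld."""
    actual = run(diagram, stimulated)
    return (c in actual and e in actual
            and e not in run(diagram, stimulated - {c}))

# Simple chain c -> d -> e: the but-for test correctly calls c a cause.
chain = {"c": ([], []), "d": (["c"], []), "e": (["d"], [])}
assert but_for_cause(chain, {"c"}, "c", "e")

# Early preemption: c excites e and inhibits the backup b; withholding c
# lets b fire e instead, so the naive test wrongly denies that c caused e.
preempt = {"c": ([], []), "b": ([], ["c"]), "e": (["c", "b"], [])}
assert run(preempt, {"c", "b"}) == {"c", "e"}
assert not but_for_cause(preempt, {"c", "b"}, "c", "e")
```

A definition with the wider validity the authors claim would have to classify c as a cause of e in the preemption diagram, where the simple counterfactual test fails.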
DOGE Is Working on Software That Automates the Firing of Government Workers
Engineers for Elon Musk's so-called Department of Government Efficiency, or DOGE, are working on new software that could assist mass firings of federal workers across government, sources tell WIRED. The software, called AutoRIF, which stands for Automated Reduction in Force, was first developed by the Department of Defense more than two decades ago. Since then, it's been updated several times and used by a variety of agencies to expedite reductions in workforce. Screenshots of internal databases reviewed by WIRED show that DOGE operatives have accessed AutoRIF and appear to be editing its code. There is a repository in the Office of Personnel Management's (OPM) enterprise GitHub system titled "autorif" in a space created, soon after Trump took office, specifically for the director's office--where Musk associates have taken charge.
A Synaptical Story of Persistent Activity with Graded Lifetime in a Neural System
Yuanyuan Mi, Luozheng Li, Dahui Wang, Si Wu
Persistent activity refers to the phenomenon that cortical neurons keep firing even after the stimulus triggering the initial neuronal responses is removed. Persistent activity is widely believed to be the substrate for a neural system retaining a memory trace of the stimulus information. In the conventional view, persistent activity is regarded as an attractor of the network dynamics, but this view faces the challenge of how such activity can be terminated properly. Here, in contrast to the attractor view, we consider that the stimulus information is encoded in a marginally unstable state of the network, which decays very slowly and exhibits persistent firing for a prolonged duration. We propose a simple yet effective mechanism to achieve this goal, which utilizes the property of short-term plasticity (STP) of neuronal synapses. STP has two forms, short-term depression (STD) and short-term facilitation (STF), which have opposite effects on retaining neuronal responses. We find that by properly combining STF and STD, a neural system can hold persistent activity of graded lifetime, and that persistent activity fades away naturally without relying on an external drive. The implications of these results on neural information representation are discussed.
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Jiangsu Province (0.04)
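The mechanism described above can be sketched with one recurrent firing-rate unit whose synapse carries Tsodyks-Markram-style short-term plasticity. The equations are the standard STP model; the parameter values are illustrative choices, not the paper's, and the single-unit reduction is a simplification of their network.

```python
# Sketch of STP-sustained activity: a single recurrent rate unit with
# short-term facilitation (u) and depression (x). Parameters are
# illustrative, not taken from the paper.
import numpy as np

def simulate(J=6.0, U=0.1, tau_f=1.0, tau_d=0.5, tau_r=0.01,
             dt=1e-3, T=3.0, stim_end=0.2, stim_amp=5.0):
    """Return the firing-rate trace r(t) under a brief stimulus."""
    steps = int(T / dt)
    r, u, x = 0.0, U, 1.0
    rates = np.empty(steps)
    for i in range(steps):
        I_ext = stim_amp if i * dt < stim_end else 0.0
        r += dt * (-r + np.tanh(J * u * x * r + I_ext)) / tau_r
        u += dt * ((U - u) / tau_f + U * (1.0 - u) * r)   # STF builds u
        x += dt * ((1.0 - x) / tau_d - u * x * r)         # STD depletes x
        rates[i] = r
    return rates

def lifetime(rates, dt=1e-3, stim_end=0.2, thresh=0.1):
    """Time after stimulus offset until the rate falls below `thresh`."""
    after = rates[int(stim_end / dt):]
    below = np.nonzero(after < thresh)[0]
    return (below[0] if below.size else after.size) * dt

# STP-modulated recurrence lets the activity outlive the stimulus, and it
# then fades on its own -- no external shut-off signal is needed.
with_stp = simulate(J=6.0)
no_recurrence = simulate(J=0.0)
assert lifetime(with_stp) > lifetime(no_recurrence)
assert with_stp[-1] < 0.05          # activity fades away naturally
```

In this toy setting the post-stimulus lifetime is graded by the STP parameters (e.g. the balance of tau_f against tau_d), which is the knob the abstract refers to.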
Reviews: Visualizing the PHATE of Neural Networks
Update after author response: Taking on faith the results the authors report in their author response (namely the ability to identify generalization performance using only the training set, results on CIFAR10 and white noise datasets, and the quantitative evaluation of the task-switching), I would raise my score to a 6 (actually, if they did achieve everything they claimed in the author response, I would be inclined to give it a 7, but I'd need to see all the results for that).

Originality: I think the originality is fairly high. Although the PHATE algorithm exists in the literature, the Multislice kernel is novel, and the idea of visualizing the learning dynamics of the hidden neurons to ascertain things like catastrophic forgetting or poor generalization is (to my knowledge) novel.

Quality: I think the Experiments sections could be substantially improved: (1) For the experiments on continual learning, from looking at Figure 3 it is not obvious to me that Adagrad does better than Rehearsal for the "Domain" learning setting, or that Adagrad outperforms Adam at class learning. Adam apparently does the best at task learning, but again, I wouldn't have guessed that from the trajectories.
The Big Questions About AI in 2024
Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say "year," I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.63)