Homemade chess board moves its own pieces. And wins.
Maker Joshua Stanley used magnets and an open source chess platform to build this unique board. It's been nearly 30 years since chess champion Garry Kasparov lost to IBM's Deep Blue, marking the first time a reigning world champion was defeated by a computer in a match. Chess engines have since improved so dramatically that even a simple smartphone app can now make top grandmasters sweat.
- North America > United States > North Carolina (0.05)
- North America > United States > New York (0.05)
- Information Technology > Artificial Intelligence > Games > Chess (0.55)
- Information Technology > Artificial Intelligence > Robots (0.53)
Illuminating the Three Dogmas of Reinforcement Learning under Evolutionary Light
Hamidi, Mani, Deacon, Terrence W.
Three core tenets of reinforcement learning (RL)--concerning the definition of agency, the objective of learning, and the scope of the reward hypothesis--have been highlighted as key targets for conceptual revision, with major implications for theory and application. We propose a framework, inspired by open-ended evolutionary theory, to reconsider these three "dogmas." We revisit each assumption and address related concerns raised alongside them. To make our arguments relevant to RL as a model of biological learning, we first establish that evolutionary dynamics can plausibly operate within living brains over an individual's lifetime, and are not confined to cross-generational processes. We begin by revisiting the second dogma, drawing on evolutionary insights to enrich the "adaptation-rather-than-search" view of learning. We then address the third dogma regarding the limits of the reward hypothesis, using analogies from evolutionary fitness to illuminate the scalar reward vs. multi-objective debate. After discussing practical implications for exploration in RL, we turn to the first--and arguably most fundamental--issue: the absence of a formal account of agency. We argue that unlike the other two problems, the evolutionary paradigm alone cannot resolve the agency question, though it gestures in a productive direction. We advocate integrating ideas from origins-of-life theory, where the thermodynamics of sustenance and replication offer promising foundations for understanding agency and resource-constrained reinforcement learning in biological systems.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- Europe > United Kingdom > England (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (3 more...)
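The scalar reward vs. multi-objective debate in the RL-dogmas abstract above can be made concrete with a toy sketch (all names and numbers here are ours, not the paper's): a weighted sum collapses multiple objectives into one ranking, while the evolutionary-fitness analogy suggests comparing candidates by Pareto dominance, which preserves trade-offs.

```python
def scalarize(rewards, weights):
    """Scalar-reward view: a fixed weighted sum of objectives."""
    return sum(r * w for r, w in zip(rewards, weights))

def dominates(a, b):
    """Multi-objective view: a Pareto-dominates b if it is at least as
    good on every objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Two hypothetical policies scored on (task reward, energy efficiency).
policy_a = (0.9, 0.2)
policy_b = (0.6, 0.8)

# Under this weighting, A "wins" the scalar comparison...
print(scalarize(policy_a, (1.0, 0.1)))
print(scalarize(policy_b, (1.0, 0.1)))

# ...but neither Pareto-dominates the other, so the vector view keeps
# a trade-off that any single scalar erases.
print(dominates(policy_a, policy_b))
print(dominates(policy_b, policy_a))
```

The point of the sketch is only that the choice of aggregation is itself a modeling decision, much as a single fitness number hides the components of evolutionary fitness.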
FASCIST-O-METER: Classifier for Neo-fascist Discourse Online
Veliz, Rudy Alexandro Garrido, Semmann, Martin, Biemann, Chris, Yimam, Seid Muhie
Neo-fascism is a political and societal ideology that has seen remarkable growth over the last decade in the United States of America (USA), as well as in other Western societies. It poses a grave danger to democracy and the minorities it targets, and it demands active countermeasures to avoid escalation. This work presents a first-of-its-kind neo-fascist coding scheme for digital discourse in the USA societal context, overseen by political science researchers. Our work bridges the gap between Natural Language Processing (NLP) and political science in confronting this phenomenon. Furthermore, to test the coding scheme, we collect a large corpus of internet activity from notable neo-fascist groups (the forums of Iron March and Stormfront.org), and the guidelines are applied to a subset of the collected posts. Through crowdsourcing, we annotate a thousand posts as neo-fascist or non-neo-fascist. With this labeled data set, we fine-tune and test both Small Language Models (SLMs) and Large Language Models (LLMs), obtaining the first classification models for neo-fascist discourse. We find that neo-fascist rhetoric is pervasive in these forums, making them a good target for future research, and that societal context is a key consideration when conducting NLP research on neo-fascist speech. Finally, work against this kind of political movement must continue for the well-being of a democratic society. Disclaimer: This study focuses on detecting neo-fascist content in text, similar to other hate speech analyses, without labeling individuals or organizations.
- North America > United States > Indiana (0.04)
- North America > United States > California (0.04)
- Europe (0.04)
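The crowdsourced annotation step described in the abstract above (a thousand posts labeled neo-fascist or not) implies some policy for reconciling disagreeing annotators. A minimal sketch, assuming a simple majority-vote policy with ties deferred for adjudication (our assumption, not necessarily the paper's):

```python
from collections import Counter

def aggregate(annotations):
    """Majority vote over one post's crowd labels; ties are returned as
    'unsure' for later adjudication (a hypothetical policy)."""
    counts = Counter(annotations)
    top = counts.most_common(2)
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "unsure"
    return top[0][0]

print(aggregate(["neo-fascist", "neo-fascist", "non-neo-fascist"]))
print(aggregate(["neo-fascist", "non-neo-fascist"]))
```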
Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis
Kumar, Akarsh, Clune, Jeff, Lehman, Joel, Stanley, Kenneth O.
Much of the excitement in modern AI is driven by the observation that scaling up existing systems leads to better performance. But does better performance necessarily imply better internal representations? While the representational optimist assumes it must, this position paper challenges that view. We compare neural networks evolved through an open-ended search process to networks trained via conventional stochastic gradient descent (SGD) on the simple task of generating a single image. This minimal setup offers a unique advantage: each hidden neuron's full functional behavior can be easily visualized as an image, thus revealing how the network's output behavior is internally constructed neuron by neuron. The result is striking: while both networks produce the same output behavior, their internal representations differ dramatically. The SGD-trained networks exhibit a form of disorganization that we term fractured entangled representation (FER). Interestingly, the evolved networks largely lack FER, even approaching a unified factored representation (UFR). In large models, FER may be degrading core model capacities like generalization, creativity, and (continual) learning. Therefore, understanding and mitigating FER could be critical to the future of representation learning.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > British Columbia (0.04)
- Asia > Middle East > Jordan (0.04)
- Health & Medicine (0.67)
- Education > Curriculum > Subject-Specific Education (0.46)
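The visualization trick in the FER abstract above rests on the network's input being just a pixel coordinate, so any hidden neuron, evaluated over the whole coordinate grid, is itself an image. A minimal sketch, assuming a CPPN-like coordinate-to-intensity network with arbitrary toy weights (ours, not the paper's architecture):

```python
import math

def hidden_activations(x, y, weights):
    """Activations of each hidden neuron at one pixel coordinate."""
    return [math.tanh(wx * x + wy * y + b) for wx, wy, b in weights]

def neuron_images(size, weights):
    """One 2-D activation map per hidden neuron over a size x size grid,
    revealing how the output image is built neuron by neuron."""
    maps = [[[0.0] * size for _ in range(size)] for _ in weights]
    for i in range(size):
        for j in range(size):
            x = i / (size - 1) - 0.5
            y = j / (size - 1) - 0.5
            for n, a in enumerate(hidden_activations(x, y, weights)):
                maps[n][i][j] = a
    return maps

# Three toy hidden neurons: an x-gradient, a y-gradient, and a diagonal.
toy_weights = [(3.0, 0.0, 0.0), (0.0, 3.0, 0.0), (2.0, -2.0, 0.5)]
maps = neuron_images(16, toy_weights)
print(len(maps), len(maps[0]), len(maps[0][0]))
```

Comparing such per-neuron maps between an evolved network and an SGD-trained one is, in spirit, how the paper distinguishes factored from fractured entangled representations.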
How a new type of AI is helping police skirt facial recognition bans
"The whole vision behind Track in the first place," says Veritone CEO Ryan Steelberg, was "if we're not allowed to track people's faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?" In addition to tracking individuals where facial recognition isn't legally allowed, Steelberg says, it allows for tracking when faces are obscured or not visible. The product has drawn criticism from the American Civil Liberties Union, which--after learning of the tool through MIT Technology Review--said it was the first instance they'd seen of a nonbiometric tracking system used at scale in the US. They warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones at a time when the Trump administration is pushing federal agencies to ramp up monitoring of protesters, immigrants, and students. Veritone gave us a demonstration of Track in which it analyzed people in footage from different environments, ranging from the January 6 riots to subway stations.
- North America > United States > New Jersey (0.06)
- North America > United States > Illinois (0.06)
- North America > United States > Colorado (0.06)
- North America > United States > California (0.06)
- Law > Civil Rights & Constitutional Law (0.73)
- Government (0.73)
Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation
Zhang, Octi, Peng, Quanquan, Scalise, Rosario, Boots, Bryon
Developing robotic agents that can generalize across diverse environments while continually evolving their behaviors is a core challenge in AI and robotics. The difficulties lie in solving increasingly complex tasks and ensuring agents can continue learning without converging on narrow, specialized solutions. Quality Diversity (QD) [1, 2] methods effectively foster diversity but often rely on trial and error, where the path to a final solution can be convoluted, leading to inefficiencies and uncertainty. Our approach draws inspiration from nature's inheritance process, where offspring not only receive but also build upon the knowledge of their predecessors. Similarly, our agents inherit distilled behaviors from previous generations, allowing them to adapt and continue learning efficiently, eventually surpassing their predecessors. This natural knowledge transfer reduces randomness, guiding exploration toward more meaningful learning without manual intervention like reward shaping or task descriptors. What sets our method apart is that it offers a straightforward, evolution-inspired way to consolidate and progress, avoiding the need for manually defined styles or gradient editing [3, 4] to prevent forgetting. The agent's ability to retain and refine skills is driven by a blend of imitation learning (IL) and reinforcement learning (RL), naturally passing down essential behaviors while implicitly discarding inferior ones. We introduce Parental Guidance (PG-1), which makes the following contributions: 1. Distributed Evolution Framework: We propose a framework that distributes the evolution process across multiple compute instances, efficiently scheduling and analyzing evolution.
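The inherit-then-improve loop described in the abstract above can be sketched in miniature (every name and number here is our hypothetical stand-in, with a one-dimensional "policy" in place of a robot controller): each generation, a child first imitates its parent (the distillation/IL step), then continues learning on its own (a toy RL step), and the best child becomes the next parent.

```python
import random

TARGET = 1.0  # stand-in for "good behavior"

def fitness(policy):
    return -abs(policy - TARGET)

def distill(parent):
    """IL step: the child starts from the parent's behavior."""
    return parent

def improve(policy, rng, steps=20, sigma=0.1):
    """Toy RL step: noisy hill-climbing on the fitness signal."""
    for _ in range(steps):
        candidate = policy + rng.gauss(0.0, sigma)
        if fitness(candidate) > fitness(policy):
            policy = candidate
    return policy

rng = random.Random(0)
parent = 0.0
for generation in range(5):
    children = [improve(distill(parent), rng) for _ in range(4)]
    parent = max(children, key=fitness)  # best child seeds the next generation
print(round(fitness(parent), 3))
```

The sketch shows only the control flow; PG-1's actual contribution is doing this with distilled neural policies distributed across compute instances.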
The best new science fiction books of March 2025
The moon has turned to cheese in John Scalzi's new sci-fi novel My only complaint about the science fiction due to be published in March is: how in the world are we meant to find the time to read all these great novels? There are so many must-reads out this month, whether it's the latest from Nicholas Binge, Silvia Park's tale of a lost robot sibling or Laila Lalami's vision of a future where our dreams are policed for what we might be going to do (sounds quite Minority Report – a very good thing in my view). All I can say is, I think it's time to step away from the computer and get reading, if we want to keep up… Sadly for humanity, in this latest slice of comic sci-fi from the excellent John Scalzi, the moon has turned to cheese and they have to work out what to do about it. This sounds like a lot of fun, but I'm primarily planning to read it to find out what type of cheese the moon has become. Our sci-fi columnist Emily H. Wilson heartily approves of Binge's latest, writing that this time travel tale is well-deserving of its upcoming big-screen treatment.
Safety is Essential for Responsible Open-Ended Systems
Sheth, Ivaxi, Wehner, Jan, Abdelnabi, Sahar, Binkyte, Ruta, Fritz, Mario
AI advancements have been significantly driven by a combination of foundation models and curiosity-driven learning aimed at increasing capability and adaptability. A growing area of interest within this field is Open-Endedness - the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions. This has become relevant for accelerating scientific discovery and enabling continual adaptation in AI agents. This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks, including challenges in maintaining alignment, predictability, and control. This paper systematically examines these challenges, proposes mitigation strategies, and calls for action for different stakeholders to support the safe, responsible and successful development of Open-Ended AI.
- Health & Medicine (0.93)
- Education (0.93)
- Information Technology (0.68)
- Government (0.68)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (0.69)
Predicting Company Growth by Econophysics informed Machine Learning
Tao, Ruyi, Liu, Kaiwei, Jing, Xu, Zhang, Jiang
Predicting company growth is crucial for strategic adjustment, operational decision-making, risk assessment, and loan eligibility reviews. Traditional models for company growth often focus too much on theory, overlooking practical forecasting, or they rely solely on time series forecasting techniques, ignoring interpretability and the inherent mechanisms of company growth. In this paper, we propose a machine learning-based prediction framework that incorporates an econophysics model for company growth. Our model captures both the intrinsic growth mechanisms of companies, governed by scaling laws, and the fluctuations driven by random factors and individual decisions, demonstrating superior predictive performance compared with methods that use time series techniques alone. Its advantages are more pronounced in long-range prediction tasks. By explicitly modeling the baseline growth and volatility components, our model is more interpretable.
- North America > United States (0.14)
- Asia > China > Beijing > Beijing (0.04)
- Oceania > Australia > Victoria (0.04)
- (2 more...)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
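The decomposition in the econophysics abstract above, a deterministic scaling-law baseline plus a stochastic fluctuation term, can be illustrated with a toy simulation (our own stylized sketch, not the paper's model; the exponents and constants are arbitrary). Larger firms get a smaller drift and smaller relative volatility, a stylized fact of firm-growth scaling laws:

```python
import math
import random

def baseline_growth(size, a=0.05, beta=0.2):
    """Deterministic drift from a power law: larger firms grow slower."""
    return a * size ** (-beta)

def simulate(size0, steps, rng, sigma0=0.1, gamma=0.15):
    """Log-size path = scaling-law drift + size-dependent Gaussian noise."""
    sizes = [size0]
    for _ in range(steps):
        s = sizes[-1]
        noise = rng.gauss(0.0, sigma0 * s ** (-gamma))
        sizes.append(s * math.exp(baseline_growth(s) + noise))
    return sizes

rng = random.Random(1)
path = simulate(10.0, 20, rng)
print(len(path))
```

In the paper's framing, a machine-learning model would fit the residual fluctuations on top of such an interpretable baseline rather than forecasting the raw series end to end.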
Style-Compress: An LLM-Based Prompt Compression Framework Considering Task-Specific Styles
Pu, Xiao, He, Tianxing, Wan, Xiaojun
Prompt compression condenses contexts while maintaining their informativeness for different usage scenarios. It not only shortens the inference time and reduces computational costs during the usage of large language models, but also lowers expenses when using closed-source models. In a preliminary study, we discover that when instructing language models to compress prompts, different compression styles (e.g., extractive or abstractive) impact performance of compressed prompts on downstream tasks. Building on this insight, we propose Style-Compress, a lightweight framework that adapts a smaller language model to compress prompts for a larger model on a new task without additional training. Our approach iteratively generates and selects effective compressed prompts as task-specific demonstrations through style variation and in-context learning, enabling smaller models to act as efficient compressors with task-specific examples. Style-Compress outperforms two baseline compression models in four tasks: original prompt reconstruction, text summarization, multi-hop QA, and CoT reasoning. In addition, with only 10 samples and 100 queries for adaptation, prompts compressed by Style-Compress achieve performance on par with or better than original prompts at a compression ratio of 0.25 or 0.5.
- Asia > Singapore (0.04)
- North America > United States > Ohio (0.04)
- North America > United States > California > Santa Barbara County > Santa Barbara (0.04)
- (6 more...)
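The selection loop in the Style-Compress abstract above can be sketched as follows, with trivial string functions standing in for the LLM calls (every function here is a hypothetical stub, not the paper's implementation): generate candidate compressions in different styles, score each on the downstream task, and keep the winner as an in-context demonstration.

```python
def compress(prompt, style):
    """Stand-in for a small-LM compression call."""
    words = prompt.split()
    if style == "extractive":
        return " ".join(words[::2])            # keep every other word
    return " ".join(words[: len(words) // 2])  # crude "abstractive" stub

def task_score(compressed, reference):
    """Stand-in for downstream evaluation: unique-token overlap."""
    ref_tokens = set(reference.split())
    kept = set(compressed.split()) & ref_tokens
    return len(kept) / max(1, len(ref_tokens))

def select_demo(prompt, reference, styles=("extractive", "abstractive")):
    """Pick the best-scoring style; the winner would be reused as a
    task-specific in-context demonstration."""
    candidates = [(compress(prompt, s), s) for s in styles]
    return max(candidates, key=lambda c: task_score(c[0], reference))

prompt = "the quick brown fox jumps over the lazy dog near the river"
best, style = select_demo(prompt, reference=prompt)
print(style, ":", best)
```

In the real framework the compressor is a smaller language model, the scorer is the downstream task itself, and the loop repeats over a handful of samples and queries to accumulate demonstrations.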