arbiter
Enhancing Fault-Tolerant Space Computing: Guidance Navigation and Control (GNC) and Landing Vision System (LVS) Implementations on Next-Gen Multi-Core Processors
Yun, Kyongsik, Bayard, David, Kubiak, Gerik, Owens, Austin, Johnson, Andrew, Johnson, Ryan, Scharf, Dan, Lu, Thomas
Future planetary exploration missions demand high-performance, fault-tolerant computing to enable autonomous Guidance, Navigation, and Control (GNC) and Lander Vision System (LVS) operations during Entry, Descent, and Landing (EDL). This paper evaluates the deployment of GNC and LVS algorithms on next-generation multi-core processors (HPSC, Snapdragon VOXL2, and AMD Xilinx Versal), demonstrating up to 15x speedup for LVS image processing and over 250x speedup for Guidance for Fuel-Optimal Large Divert (GFOLD) trajectory optimization compared to legacy spaceflight hardware. To ensure computational reliability, we present ARBITER (Asynchronous Redundant Behavior Inspection for Trusted Execution and Recovery), a Multi-Core Voting (MV) mechanism that performs real-time fault detection and correction across redundant cores. ARBITER is validated in both static optimization tasks (GFOLD) and dynamic closed-loop control (Attitude Control System). A fault injection study further identifies the gradient computation stage in GFOLD as the most sensitive to bit-level errors, motivating selective protection strategies and vector-based output arbitration. This work establishes a scalable and energy-efficient architecture for future missions, including Mars Sample Return, Enceladus Orbilander, and Ceres Sample Return, where onboard autonomy, low latency, and fault resilience are critical.
- North America > United States > California > Los Angeles County > Pasadena (0.06)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > United States > California > Monterey County > Seaside (0.04)
- (2 more...)
- Transportation > Air (0.46)
- Aerospace & Defense > Aircraft (0.46)
- Government > Space Agency (0.32)
- Government > Regional Government > North America Government > United States Government (0.32)
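The multi-core voting and vector-based output arbitration described in the abstract can be sketched as follows. This is a minimal, hypothetical model of the idea (group redundant cores' output vectors that agree within a tolerance and take the majority group), not the paper's actual ARBITER implementation; the function name and tolerance parameter are illustrative.

```python
import numpy as np

def majority_vote(outputs, tol=1e-9):
    """Vector-based output arbitration: group redundant results that
    agree within a tolerance and return a representative of the
    majority group. Raises if no strict majority exists."""
    groups = []  # list of (representative vector, agreement count)
    for out in outputs:
        for i, (rep, count) in enumerate(groups):
            if np.allclose(out, rep, atol=tol):
                groups[i] = (rep, count + 1)
                break
        else:
            groups.append((out, 1))
    rep, count = max(groups, key=lambda g: g[1])
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among redundant cores")
    return rep

# Three redundant cores compute the same step; one suffers a bit flip.
results = [np.array([1.0, 2.0]), np.array([1.0, 2.0]), np.array([1.0, 2.5])]
print(majority_vote(results))  # -> [1. 2.]
```

The tolerance-based grouping matters for floating-point outputs, where bitwise comparison across cores can produce spurious disagreements.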
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Li, Tianyi, Qin, Yu, Sheng, Olivia R. Liu
How much large language models (LLMs) can aid scientific discovery, notably in assisting academic peer review, is under heated debate. Between a literature digest and a human-comparable research assistant lies their practical application potential. We organize individual tasks that computer science studies treat separately into a guided and robust workflow to evaluate LLMs' processing of academic text input. We employ four tasks in the assessment: content reproduction/comparison/scoring/reflection, each demanding a specific role of the LLM (oracle/judgmental arbiter/knowledgeable arbiter/collaborator) in assisting scholarly works, and altogether testing LLMs with questions that increasingly require intellectual capabilities towards a solid understanding of scientific texts to yield desirable solutions. We exemplify a rigorous performance evaluation with detailed instructions on the prompts. Adopting first-rate Information Systems articles at three top journals as the input texts and an abundant set of text metrics, we record a compromised performance of the leading LLM, Google's Gemini: its summary and paraphrase of academic text is acceptably reliable; using it to rank texts through pairwise text comparison is faintly scalable; asking it to grade academic texts is prone to poor discrimination; its qualitative reflection on the text is self-consistent yet hardly insightful to inspire meaningful research. This evidence against an endorsement of LLMs' text-processing capabilities is consistent across metric-based internal (linguistic assessment), external (comparing to the ground truth), and human evaluation, and is robust to the variations of the prompt. Overall, we do not recommend an unchecked use of LLMs in constructing peer reviews.
- North America > United States > California (0.14)
- North America > United States > Arizona (0.04)
- Europe > Netherlands > South Holland > Leiden (0.04)
- (9 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Materials (0.67)
- Information Technology > Security & Privacy (0.45)
- Education > Educational Setting (0.45)
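The scalability concern the abstract raises about pairwise-comparison ranking can be made concrete: a full round-robin over n texts needs n(n-1)/2 judge calls. The sketch below is illustrative only; `judge` is a stand-in for an LLM comparison call, and the toy judge here simply prefers the longer text.

```python
from itertools import combinations

def pairwise_rank(texts, judge):
    """Rank texts by round-robin pairwise comparison.
    judge(a, b) returns the preferred text; the O(n^2) number of
    judge calls is what limits scalability for an LLM judge."""
    wins = {i: 0 for i in range(len(texts))}
    for i, j in combinations(range(len(texts)), 2):
        winner = i if judge(texts[i], texts[j]) == texts[i] else j
        wins[winner] += 1
    # Sort indices by descending win count.
    return sorted(range(len(texts)), key=lambda i: -wins[i])

# Toy judge: prefer the longer text (an LLM would apply real criteria).
texts = ["short", "a medium text", "the longest text of all three"]
order = pairwise_rank(texts, judge=lambda a, b: a if len(a) > len(b) else b)
print(order)  # -> [2, 1, 0]
```

For dozens of candidate texts, the quadratic call count (and per-call cost of an LLM judge) quickly dominates, which is consistent with the paper's "faintly scalable" finding.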
Efficient FPGA Implementation of Time-Domain Popcount for Low-Complexity Machine Learning
Duan, Shengyu, Sartori, Marcos L. L., Shafik, Rishad, Yakovlev, Alex, Ozer, Emre
Population count (popcount) is a crucial operation for many low-complexity machine learning (ML) algorithms, including the Tsetlin Machine (TM), a promising new ML method particularly well-suited for solving classification tasks. The inference mechanism in TM consists of propositional logic-based structures within each class, followed by a majority voting scheme, which makes the classification decision. In TM, the voters are the outputs of Boolean clauses. The voting mechanism comprises two operations: popcount for each class and determining the class with the maximum vote by means of an argmax operation. While TMs offer a lightweight ML alternative, their performance is often limited by the high computational cost of popcount and comparison required to produce the argmax result. In this paper, we propose an innovative approach to accelerate and optimize these operations by performing them in the time domain. Our time-domain implementation uses programmable delay lines (PDLs) and arbiters to efficiently manage these tasks through delay-based mechanisms. We also present an FPGA design flow for practical implementation of the time-domain popcount, addressing delay skew and ensuring that the behavior matches the model's intended functionality. By leveraging the natural compatibility of the proposed popcount with asynchronous architectures, we demonstrate significant improvements in an asynchronous TM, including up to 38% reduction in latency, 43.1% reduction in dynamic power, and 15% savings in resource utilization, compared to synchronous TMs using adder-based popcount.
- Europe > United Kingdom > England > Tyne and Wear > Newcastle (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Norway (0.04)
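The two voting-stage operations the abstract names (per-class popcount followed by argmax) have a simple functional reference model. The sketch below reproduces only the behaviour that the paper's delay-line-and-arbiter hardware must match; it says nothing about the time-domain implementation itself, and the names are illustrative.

```python
def tm_vote(clause_outputs):
    """Reference model of the TM voting stage: popcount the Boolean
    clause outputs of each class, then argmax over the class counts."""
    counts = [sum(clauses) for clauses in clause_outputs]   # popcount per class
    return max(range(len(counts)), key=counts.__getitem__)  # argmax

# Three classes, eight Boolean clause outputs each.
clauses = [
    [1, 0, 1, 0, 0, 1, 0, 0],  # class 0: 3 votes
    [1, 1, 1, 0, 1, 1, 0, 1],  # class 1: 6 votes
    [0, 0, 1, 1, 0, 0, 1, 0],  # class 2: 3 votes
]
print(tm_vote(clauses))  # -> 1
```

In the time-domain scheme, each popcount instead modulates a delay and an arbiter resolves which class's signal arrives first, so the argmax falls out of a race rather than an adder-and-comparator tree.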
Self-Interested Agents in Collaborative Learning: An Incentivized Adaptive Data-Centric Framework
Vijayan, Nithia, Low, Bryan Kian Hsiang
We propose a framework for adaptive data-centric collaborative learning among self-interested agents, coordinated by an arbiter. Designed to handle the incremental nature of real-world data, the framework operates in an online manner: at each step, the arbiter collects a batch of data from agents, trains a machine learning model, and provides each agent with a distinct model reflecting its data contributions. This setup establishes a feedback loop where shared data influence model updates, and the resulting models guide future data-sharing strategies. Agents evaluate and partition their data, selecting a partition to share using a stochastic parameterized policy optimized via policy gradient methods to optimize the utility of the received model as defined by agent-specific evaluation functions. On the arbiter side, the expected loss function over the true data distribution is optimized, incorporating agent-specific weights to account for distributional differences arising from diverse sources and selective sharing. A bilevel optimization algorithm jointly learns the model parameters and agent-specific weights. Mean-zero noise, computed using a distortion function that adjusts these agent-specific weights, is introduced to generate distinct agent-specific models, promoting valuable data sharing without requiring separate training. Our framework is underpinned by non-asymptotic analyses, ensuring convergence of the agent-side policy optimization to an approximate stationary point of the evaluation functions and convergence of the arbiter-side optimization to an approximate stationary point of the expected loss function.
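The arbiter-side objective the abstract describes (an expected loss with agent-specific weights correcting for distributional differences) can be sketched in miniature. Everything here is illustrative: the linear model, the squared loss, and the fixed weights stand in for quantities the paper learns jointly via its bilevel scheme.

```python
import numpy as np

def arbiter_loss(theta, batches, weights):
    """Weighted empirical loss over the agents' shared batches.
    Each agent i contributes a batch (x_i, y_i) and a weight w_i that
    accounts for distribution shift from its source and selective
    sharing. Illustrative sketch of the arbiter-side objective only."""
    losses = [np.mean((x @ theta - y) ** 2) for x, y in batches]
    return float(np.dot(weights, losses) / np.sum(weights))

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])
batches = [(rng.normal(size=(8, 2)), rng.normal(size=8)) for _ in range(3)]
weights = np.array([0.5, 1.0, 0.8])  # learned jointly with theta in the paper
print(arbiter_loss(theta, batches, weights))
```

In the framework itself this objective sits inside the feedback loop: the arbiter minimizes it over model parameters and weights, while each agent's policy gradient update changes which batches arrive at the next step.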
A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery
Sng, Grace, Zhang, Yanming, Mueller, Klaus
The increasing use of large language models (LLMs) in causal discovery as a substitute for human domain experts highlights the need for optimal model selection. This paper presents the first hallucination survey of popular LLMs for causal discovery. We show that hallucinations exist when using LLMs in causal discovery, so the choice of LLM is important. We propose using Retrieval Augmented Generation (RAG) to reduce hallucinations when quality data is available. Additionally, we introduce a novel method employing multiple LLMs with an arbiter in a debate to audit edges in causal graphs, achieving a comparable reduction in hallucinations to RAG.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New York > Suffolk County > Stony Brook (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
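The debate-with-arbiter audit of causal edges described above can be sketched as a simple protocol. The functions below are stand-ins for LLM calls, and the prompts and stand-in responses are invented for illustration; the paper's actual prompts and models differ.

```python
def audit_edge(edge, debaters, arbiter):
    """Debate-style audit of one causal edge: each debater model argues
    for or against the edge, then an arbiter model issues a verdict
    ('keep' or 'remove') after seeing all arguments."""
    cause, effect = edge
    arguments = [d(f"Does {cause} cause {effect}? Argue your position.")
                 for d in debaters]
    verdict = arbiter(
        f"Edge under audit: {cause} -> {effect}.\n"
        + "\n".join(f"Debater {i}: {a}" for i, a in enumerate(arguments))
        + "\nFinal verdict: keep or remove this edge?")
    return verdict

# Toy stand-ins; real use would wrap API calls to distinct LLMs.
debaters = [lambda p: "Supported by domain literature.",
            lambda p: "Likely confounded; evidence is weak."]
arbiter = lambda p: "keep" if "Supported" in p else "remove"
print(audit_edge(("smoking", "cancer"), debaters, arbiter))  # -> keep
```

Running the audit over every edge of a candidate causal graph yields a filtered graph, which is the step the paper compares against RAG for hallucination reduction.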
Adaptive Reinforcement Learning for Robot Control
Liu, Yu Tang, Singh, Nilaksh, Ahmad, Aamir
Deep reinforcement learning (DRL) has shown remarkable success in simulation domains, yet its application in designing robot controllers remains limited, due to its single-task orientation and insufficient adaptability to environmental changes. To overcome these limitations, we present a novel adaptive agent that leverages transfer learning techniques to dynamically adapt policy in response to different tasks and environmental conditions. The approach is validated through the blimp control challenge, where multitasking capabilities and environmental adaptability are essential. The agent is trained using a custom, highly parallelized simulator built on IsaacGym. We perform zero-shot transfer to fly the blimp in the real world to solve various tasks. We share our code at https://github.com/robot-perception-group/adaptive_agent/.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Asia > India > West Bengal > Kharagpur (0.04)
Local women in tech are making strides in artificial intelligence – The Arbiter
Artificial Intelligence (A.I.) is one of the fastest-growing markets, with a 54% annual growth rate, and is quickly becoming a huge part of people's everyday lives. From video games to phone applications, many people use A.I. more than they may think. A.I. is among the most cutting-edge technology, but it's the people behind it who are the driving force of this field. A.I. is a male-dominated industry, with women making up only 26% of the A.I. workforce. Locally, there are many women involved in A.I., making great strides in the industry.
- North America > United States > Idaho > Ada County > Boise (0.10)
- North America > United States > Texas > Orange County (0.06)
'Is it OK to …': the bot that gives you an instant moral judgment
Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That's why the same ethical questions constantly resurface in TV, films and literature. But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that's been fed more than 1.7m examples of people's ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible. Users just put a question to the bot on its website, and see what it comes up with.
IBM researchers train AI to follow code of ethics
In recent years, artificial intelligence algorithms have become very good at recommending content to users -- a bit too good, you might say. Tech companies use AI to optimize their recommendations based on how users react to content. This is good for the companies serving content, since it results in users spending more time on their applications and generating more revenue. But what's good for companies is not necessarily good for the users. Often, what we want to see is not necessarily what we should see.