ACCEPT: Diagnostic Forecasting of Battery Degradation Through Contrastive Learning

Sadler, James, Mohammed, Rizwaan, Castle, Michael, Uddin, Kotub

arXiv.org Artificial Intelligence

Modeling lithium-ion battery (LIB) degradation offers significant cost savings and enhances the safety and reliability of electric vehicles (EVs) and battery energy storage systems (BESS). Whilst data-driven methods have received great attention for forecasting degradation, they often generalize poorly and tend to underperform in the critical scenarios involving accelerated degradation that are most important to predict accurately. These methods also fail to elucidate the underlying causes of degradation. Physical models, by contrast, provide a deeper understanding, but their complex parameters and inherent uncertainties limit their applicability in real-world settings. To address these limitations, we propose a new model, ACCEPT. Our novel framework uses contrastive learning to map the relationship between the underlying physical degradation parameters and observable operational quantities, combining the benefits of both approaches. Furthermore, because LIBs of the same chemistry follow similar degradation paths, this model transfers non-trivially to most downstream tasks, allowing for zero-shot inference. Additionally, since categorical features can be included in the model, it can generalize to other LIB chemistries. This work establishes a foundational battery degradation model that provides reliable forecasts across a range of battery types and operating conditions.
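The abstract does not specify ACCEPT's training objective. As a point of reference, the InfoNCE loss standard in most contrastive frameworks can be sketched in plain Python; pairing an operational-signal embedding with a physical-parameter embedding is our illustrative assumption, not a detail from the paper:

```python
import math

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative, not ACCEPT's exact loss).

    anchors[i] and positives[i] are embeddings of a matched pair, e.g. an
    operational-signal embedding and its physical-parameter embedding.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cos(u, v):
        return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cos(a, p) / temperature for p in positives]
        # softmax cross-entropy with the matched pair as the "correct class"
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_z)
    return loss / len(anchors)
```

Minimizing this loss pulls each matched pair together while pushing an anchor away from the other (mismatched) embeddings in the batch.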


Sparsifying Parametric Models with L0 Regularization

Botteghi, Nicolò, Fasel, Urban

arXiv.org Artificial Intelligence

This document contains an educational introduction to the problem of sparsifying parametric models with L0 regularization. We utilize this approach together with dictionary learning to learn sparse polynomial policies for deep reinforcement learning to control parametric partial differential equations. The code and a tutorial are provided here: https://github.com/nicob15/Sparsifying-Parametric-Models-with-L0.
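The standard differentiable surrogate for L0 regularization is the hard-concrete gate of Louizos et al. (2018), which the linked tutorial presumably builds on; a minimal dependency-free sketch with the usual stretch parameters (all names and constants here are the textbook defaults, not necessarily the tutorial's):

```python
import math, random

# Hard-concrete gate parameters (Louizos et al., 2018): stretch limits and
# concrete-distribution temperature.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_gate(log_alpha, rng=random):
    """Sample a stochastic gate z in [0, 1] for one parameter."""
    u = min(max(rng.random(), 1e-6), 1.0 - 1e-6)   # avoid log(0)
    s = sigmoid((math.log(u) - math.log(1.0 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA             # stretch to (gamma, zeta)
    return min(1.0, max(0.0, s_bar))               # hard clamp -> exact zeros

def expected_l0(log_alphas):
    """Expected number of non-zero parameters: the differentiable L0 penalty
    added to the training loss."""
    shift = BETA * math.log(-GAMMA / ZETA)
    return sum(sigmoid(a - shift) for a in log_alphas)
```

Driving a parameter's `log_alpha` low makes its gate exactly zero with high probability, which is what yields genuinely sparse (here, polynomial) policies.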


The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks

Bushnaq, Lucius, Heimersheim, Stefan, Goldowsky-Dill, Nicholas, Braun, Dan, Mendel, Jake, Hänni, Kaarel, Griffin, Avery, Stöhler, Jörn, Wache, Magdalena, Hobbhahn, Marius

arXiv.org Artificial Intelligence

Mechanistic interpretability aims to understand the behavior of neural networks by reverse-engineering their internal computations. However, current methods struggle to find clear interpretations of neural network activations because a decomposition of activations into computational features is missing. Individual neurons or model components do not cleanly correspond to distinct features or functions. We present a novel interpretability method that aims to overcome this limitation by transforming the activations of the network into a new basis - the Local Interaction Basis (LIB). LIB aims to identify computational features by removing irrelevant activations and interactions. Our method drops irrelevant activation directions and aligns the basis with the singular vectors of the Jacobian matrix between adjacent layers. It also scales features based on their importance for downstream computation, producing an interaction graph that shows all computationally-relevant features and interactions in a model. We evaluate the effectiveness of LIB on modular addition and CIFAR-10 models, finding that it identifies more computationally-relevant features that interact more sparsely, compared to principal component analysis. However, LIB does not yield substantial improvements in interpretability or interaction sparsity when applied to language models. We conclude that LIB is a promising theory-driven approach for analyzing neural networks, but in its current form is not applicable to large language models.
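The core basis-finding step can be illustrated with a toy example: take the Jacobian between adjacent layers and use its top right-singular vector as the direction most relevant to downstream computation. A dependency-free sketch; the two-unit "layer" and the power-iteration shortcut are our simplifications, not the paper's implementation:

```python
import math

def layer(x):
    """Toy next-layer map; the second input barely matters downstream."""
    return [math.tanh(3.0 * x[0]), 0.01 * x[1]]

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian J[i][j] = d f_i / d x_j."""
    fx = f(x)
    J = [[0.0] * len(x) for _ in fx]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        fp = f(xp)
        for i in range(len(fx)):
            J[i][j] = (fp[i] - fx[i]) / eps
    return J

def top_right_singular_vector(J, iters=100):
    """Power iteration on J^T J: the input direction with the largest
    effect on the next layer."""
    n = len(J[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        Jv = [sum(J[i][j] * v[j] for j in range(n)) for i in range(len(J))]
        w = [sum(J[i][j] * Jv[i] for i in range(len(J))) for j in range(n)]
        nrm = math.sqrt(sum(c * c for c in w))
        v = [c / nrm for c in w]
    return v
```

For this toy layer the recovered direction concentrates on the first input, mirroring how LIB drops activation directions that are irrelevant to the next layer's computation.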


Red-faced Google apologizes after woke AI bot gives 'appalling' answers about pedophilia, Stalin

FOX News

Google on Saturday admitted to Fox News Digital that a failure by its AI chatbot to outright condemn pedophilia is both "appalling and inappropriate," and a spokesperson vowed changes. This came in the wake of users noting that Google Gemini gave indecisive answers to serious moral problems, including pedophilia and whether infamous Soviet Union leader Joseph Stalin is a more problematic cultural figure than Libs of TikTok, a conservative social media page. Google's new AI chatbot has been alarming users with its nuanced answers to questions about serious moral issues. Conservative commentator Frank McCormick, who goes by "Chalkboard Heresy" on social media platform X, asked Google Gemini several questions about pedophilia on Friday. As noted by the New York Post, he posted screenshots of the exchange to X which revealed that the program could not outright condemn the behavior as a moral evil.


Forgetful Large Language Models: Lessons Learned from Using LLMs in Robot Programming

Chen, Juo-Tung, Huang, Chien-Ming

arXiv.org Artificial Intelligence

Large language models offer new ways of empowering people to program robot applications, namely code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being "forgetful" of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications.
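One tactic in this spirit, restating safety-critical constraints in every request so they cannot drop out of the model's effective context, can be sketched as follows; the wording, function name, and example constraint are ours, not the paper's:

```python
def build_prompt(task, key_constraints, history):
    """Illustrative anti-forgetting tactic (our paraphrase, not the paper's
    exact wording): repeat the key constraints from earlier in the
    conversation alongside every new task request."""
    reminder = "\n".join(f"- {c}" for c in key_constraints)
    return (
        "You are generating robot control code.\n"
        f"Always respect these constraints:\n{reminder}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"Task: {task}\n"
    )
```

A caller would rebuild the prompt this way on every turn, so a constraint stated once by the user keeps reappearing near the current request instead of receding into a long history.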


A Quantum Neural Network Regression for Modeling Lithium-ion Battery Capacity Degradation

Ngo, Anh Phuong, Le, Nhat, Nguyen, Hieu T., Eroglu, Abdullah, Nguyen, Duong T.

arXiv.org Artificial Intelligence

Given their high power density, low discharge rate, and decreasing cost, rechargeable lithium-ion batteries (LiBs) have found a wide range of applications, such as grid-level storage systems, electric vehicles, and mobile devices. Developing a framework to accurately model the nonlinear degradation process of LiBs, which is indeed a supervised learning problem, has become an important research topic. This paper presents a classical-quantum hybrid machine learning approach to capture the LiB degradation model that assesses battery cell life loss from operating profiles. Our work is motivated by recent advances in quantum computers as well as the similarity between neural networks and quantum circuits. Similar to adjusting weight parameters in conventional neural networks, the parameters of the quantum circuit, namely the qubits' degrees of freedom, can be tuned to learn a nonlinear function in a supervised fashion. As a proof of concept, our numerical results with the battery dataset provided by NASA demonstrate the ability of quantum neural networks to model the nonlinear relationship between degraded capacity and operating cycles. We also discuss the potential advantage of the quantum approach over conventional neural networks on classical computers in dealing with massive data, especially in the context of the future penetration of EVs and energy storage.
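The qualitative idea can be reproduced with a classically simulated one-qubit circuit: the expectation of Z after an RY(theta) rotation of |0> equals cos(theta), so tuning the rotation angle fits a nonlinear function of the input. A self-contained sketch using the parameter-shift rule for the gradient; the circuit shape and training setup are illustrative, not the paper's architecture:

```python
import math

def qubit_model(x, w, b):
    """Classically simulated variational circuit: <Z> after RY(w*x + b)|0>,
    which equals cos(w*x + b)."""
    return math.cos(w * x + b)

def mse(xs, ys, w, b):
    return sum((qubit_model(x, w, b) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def fit(xs, ys, lr=0.05, epochs=300):
    """Tune circuit parameters by gradient descent on mean squared error,
    using the parameter-shift rule for the circuit gradient."""
    w, b = 0.5, 0.0
    shift = math.pi / 2
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = qubit_model(x, w, b) - y
            # exact gradient w.r.t. the rotation angle theta = w*x + b
            dtheta = (qubit_model(x, w, b + shift)
                      - qubit_model(x, w, b - shift)) / 2.0
            gb += 2.0 * err * dtheta
            gw += 2.0 * err * dtheta * x
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b
```

On real hardware the parameter-shift evaluations would be circuit executions rather than calls to `math.cos`; the training loop is otherwise the same supervised-learning recipe the abstract describes.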


Integrating Physics-Based Modeling with Machine Learning for Lithium-Ion Batteries

Tu, Hao, Moura, Scott, Wang, Yebin, Fang, Huazhen

arXiv.org Artificial Intelligence

Mathematical modeling of lithium-ion batteries (LiBs) is a primary challenge in advanced battery management. This paper proposes two new frameworks to integrate physics-based models with machine learning to achieve high-precision modeling for LiBs. The frameworks are characterized by informing the machine learning model of the state information of the physical model, enabling a deep integration between physics and machine learning. Based on the frameworks, a series of hybrid models are constructed by combining an electrochemical model and an equivalent circuit model, respectively, with a feedforward neural network. The hybrid models are relatively parsimonious in structure and provide considerable voltage predictive accuracy under a broad range of C-rates, as shown by extensive simulations and experiments. The study further expands to aging-aware hybrid modeling, leading to the design of a hybrid model that accounts for the state of health when making predictions. The experiments show that the model has high voltage predictive accuracy throughout a LiB's cycle life.
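The general pattern, a physics model plus a learned correction, can be sketched without any ML library by substituting a closed-form linear fit for the paper's feedforward network; the linear OCV curve, the resistance value, and the residual form are all illustrative placeholders:

```python
def ecm_voltage(soc, current, r0=0.05):
    """Toy equivalent-circuit model: open-circuit voltage minus IR drop.
    The linear OCV curve and R0 value are illustrative, not fitted to data."""
    ocv = 3.0 + 1.2 * soc
    return ocv - current * r0

def fit_residual(states, measured, current):
    """Fit a linear-in-SoC residual to the physics model's errors by
    least squares. (The paper trains a feedforward neural network on the
    physical model's state; a linear fit keeps this sketch dependency-free.)"""
    errs = [v - ecm_voltage(s, current) for s, v in zip(states, measured)]
    n = len(states)
    mx = sum(states) / n
    me = sum(errs) / n
    cov = sum((s - mx) * (e - me) for s, e in zip(states, errs))
    var = sum((s - mx) ** 2 for s in states)
    a = cov / var
    b = me - a * mx
    # hybrid prediction = physics model + learned correction
    return lambda soc, i: ecm_voltage(soc, i) + a * soc + b
```

The key design point carried over from the paper is that the learned component sees the physical model's state (here, SoC) and only has to capture the physics model's residual error, which is a much easier target than the full voltage response.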


What It's Like At Lightning In a Bottle, The 'Baby' Burning Man Festival

Forbes - Tech

When people tried to explain to me what the festival Lightning In A Bottle was like, 'Burning Man Lite' was the recurring catchphrase. This seemed to be because the four-day extravaganza combined music acts, tech talks and holistic programming, set against the scenic Lake Bradley, an area that's roughly halfway between Los Angeles and San Francisco. Lacking the desert dust and the 'no money allowed' policy, Lightning In a Bottle (LIB) makes for a less intense experience for those looking for their first dip in the lake of transformative festivals. The comparison is a helpful starting point, but not really a fair assessment; both events have their own distinct identity and should be judged on their own merits. In 2017, LIB had around 20,000 attendees to Burning Man's 70,000.


Synthesis of Differentiable Functional Programs for Lifelong Learning

Valkov, Lazar, Chaudhari, Dipak, Srivastava, Akash, Sutton, Charles, Chaudhuri, Swarat

arXiv.org Machine Learning

We present a neurosymbolic approach to the lifelong learning of algorithmic tasks that mix perception and procedural reasoning. Reusing high-level concepts across domains and learning complex procedures are two key challenges in lifelong learning. We show that a combination of gradient-based learning and symbolic program synthesis can be a more effective response to these challenges than purely neural methods. Concretely, our approach, called HOUDINI, represents neural networks as strongly typed, end-to-end differentiable functional programs that use symbolic higher-order combinators to compose a library of neural functions. Our learning algorithm consists of: (1) a program synthesizer that performs a type-directed search over programs in this language, and decides on the library functions that should be reused and the architectures that should be used to combine them; and (2) a neural module that trains synthesized programs using stochastic gradient descent. We evaluate our approach on three algorithmic tasks. Our experiments show that our type-directed search technique is able to significantly prune the search space of programs, and that the overall approach transfers high-level concepts more effectively than monolithic neural networks as well as traditional transfer learning.
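The combinator idea can be made concrete in a few lines: library functions are composed with higher-order combinators such as compose and map, with trivial stand-ins for the learned neural functions (all names and stand-ins below are ours, for illustration only):

```python
def compose(f, g):
    """Function-composition combinator: run g, then f (i.e. f . g)."""
    return lambda x: f(g(x))

def map_c(f):
    """Map combinator: lift a per-item function to lists of items."""
    return lambda xs: [f(x) for x in xs]

# Stand-ins for learned neural library functions (illustrative only):
perceive = lambda pixel: 1 if pixel > 0.5 else 0   # "classify" one input
count = lambda bits: sum(bits)                      # procedural reasoning step

# A synthesized program: count how many inputs the perception module accepts.
count_bright = compose(count, map_c(perceive))
```

In HOUDINI the synthesizer searches over such typed combinator programs while the leaf functions remain differentiable networks trained by SGD; here the leaves are fixed lambdas so the structure is visible on its own.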