CARLE: A Hybrid Deep-Shallow Learning Framework for Robust and Explainable RUL Estimation of Rolling Element Bearings

Razzaq, Waleed, Zhao, Yun-Bo

arXiv.org Artificial Intelligence

Prognostic Health Management (PHM) systems monitor and predict equipment health. A key task is Remaining Useful Life (RUL) estimation, which predicts how long a component, such as a rolling element bearing, will operate before failure. Many RUL methods exist but often lack generalizability and robustness under changing operating conditions. This paper introduces CARLE, a hybrid AI framework that combines deep and shallow learning to address these challenges. CARLE uses Res-CNN and Res-LSTM blocks with multi-head attention and residual connections to capture spatial and temporal degradation patterns, and a Random Forest Regressor (RFR) for stable, accurate RUL prediction. A compact preprocessing pipeline applies Gaussian filtering for noise reduction and Continuous Wavelet Transform (CWT) for time-frequency feature extraction. We evaluate CARLE on the XJTU-SY and PRONOSTIA bearing datasets. Ablation studies measure each component's contribution, while noise and cross-domain experiments test robustness and generalization. Comparative results show CARLE outperforms several state-of-the-art methods, especially under dynamic conditions. Finally, we analyze model interpretability with LIME and SHAP to assess transparency and trustworthiness.
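The abstract describes a preprocessing pipeline of Gaussian filtering followed by a Continuous Wavelet Transform. As a rough illustration of that kind of pipeline (not the paper's implementation — the kernel widths, wavelet choice, and function names below are assumptions), a denoising-plus-scalogram step can be sketched in plain numpy:

```python
import numpy as np

def gaussian_smooth(x, sigma=2.0):
    # Denoising step: convolve with a truncated, normalized Gaussian kernel.
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="same")

def morlet_cwt(x, scales, w0=6.0):
    # Time-frequency feature extraction: a simple real-valued Morlet CWT,
    # one row of the output scalogram per scale.
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.cos(w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)
        out[i] = np.convolve(x, wavelet, mode="same")
    return out

# Toy vibration signal: a 50 Hz tone buried in noise.
fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(len(t))
denoised = gaussian_smooth(signal, sigma=2.0)
scalogram = morlet_cwt(denoised, scales=np.arange(2, 32))
print(scalogram.shape)  # (30, 1000): scales x time
```

The resulting scalogram is the kind of 2-D time-frequency map that a CNN front-end (such as the Res-CNN blocks mentioned above) can consume directly.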


AIhub coffee corner: how do you solve a problem like conference reviewing?

AIHub

This month, our trustees tackle the topic of conference reviewing. Joining the conversation are: Sanmay Das (George Mason University), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University) and Carles Sierra (CSIC). Lucy Smith: Our topic this month is the conference reviewing and publication process. It would be good to discuss some of the issues and then consider some possible improvements. Sarit Kraus: Well, where do we start…?! Carles Sierra: I mean, there are so many issues.


AIhub coffee corner: Regulation of AI

AIHub

Three years ago, our trustees sat down to discuss AI and regulation. A lot has happened since then, both on the technological development front and on the policy front, so we thought it was time to tackle the topic again. [You can read more about that here.] Joining the conversation this time are: Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), and Carles Sierra (CSIC). Sabine Hauert: Regulation of AI was a very hot topic a few months ago, and interest has definitely not died down.


Carle's Game: An Open-Ended Challenge in Exploratory Machine Creativity

Davis, Q. Tyrell

arXiv.org Artificial Intelligence

This paper is both an introduction and an invitation. It is an introduction to CARLE, a Life-like cellular automata simulator and reinforcement learning environment. It is also an invitation to Carle's Game, a challenge in open-ended machine exploration and creativity. Inducing machine agents to excel at creating interesting patterns across multiple cellular automata universes is a substantial challenge, and approaching this challenge is likely to require contributions from the fields of artificial life, AI, machine learning, and complexity, at multiple levels of interest. Carle's Game is based on machine agent interaction with CARLE, a Cellular Automata Reinforcement Learning Environment. CARLE is flexible, capable of simulating any of the 262,144 different rules defining Life-like cellular automaton universes. CARLE is also fast and can simulate automata universes at a rate of tens of thousands of steps per second through a combination of vectorization and GPU acceleration. Finally, CARLE is simple. Compared to high-fidelity physics simulators and video games designed for human players, CARLE's two-dimensional grid world offers a discrete, deterministic, and atomic universal playground, despite its complexity. In combination with CARLE, Carle's Game offers an initial set of agent policies, learning and meta-learning algorithms, and reward wrappers that can be tailored to encourage exploration or specific tasks.
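The 262,144 rules mentioned above come from Life-like cellular automata being defined by two subsets of {0, …, 8}: the neighbor counts at which a dead cell is born and a live cell survives, giving 2^18 combinations. A minimal single-step update in that family (a plain numpy sketch, not CARLE's vectorized GPU implementation) looks like this:

```python
import numpy as np

def step(grid, birth, survive):
    # One update of a Life-like cellular automaton on a toroidal grid.
    # birth/survive are sets of neighbor counts; any of the
    # 2**18 = 262,144 (birth, survive) pairs defines a distinct rule.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, list(birth))
    stays = (grid == 1) & np.isin(neighbors, list(survive))
    return (born | stays).astype(np.uint8)

# Conway's Game of Life is the rule B3/S23. A glider translates one cell
# diagonally every four steps while keeping its five live cells.
grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1
for _ in range(4):
    grid = step(grid, birth={3}, survive={2, 3})
print(int(grid.sum()))  # 5
```

Because the rule is just a pair of sets, sweeping over all 262,144 rules only means enumerating the subsets passed as `birth` and `survive`.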