Reinforcement Learning for Self-Healing Material Systems
Chatterjee, Maitreyi, Agarwal, Devansh, Chatterjee, Biplab
The transition to autonomous material systems necessitates adaptive control methodologies to maximize structural longevity. This study frames the self-healing process as a Reinforcement Learning (RL) problem within a Markov Decision Process (MDP), enabling agents to autonomously derive optimal policies that efficiently balance structural integrity maintenance against finite resource consumption. A comparative evaluation of discrete-action (Q-learning, DQN) and continuous-action (TD3) agents in a stochastic simulation environment revealed that RL controllers significantly outperform heuristic baselines, achieving near-complete material recovery. Crucially, the TD3 agent utilizing continuous dosage control demonstrated superior convergence speed and stability, underscoring the necessity of fine-grained, proportional actuation in dynamic self-healing applications.
- Asia > India > West Bengal > Kolkata (0.06)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
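The MDP framing in the abstract above can be made concrete with a minimal tabular Q-learning sketch. The environment below (discrete damage levels, dose costs, stochastic wear) is invented for illustration and is not the paper's simulator; it only shows how a discrete-action agent learns to trade healing-agent dosage against structural damage.

```python
import random

# Toy self-healing MDP (hypothetical): state = damage level 0..9 (0 = intact),
# action = healing-agent dose released this step.
N_STATES, ACTIONS = 10, [0, 1, 2]
DOSE_COST, DAMAGE_PENALTY = 0.5, 1.0

def step(state, action, rng):
    healed = max(state - action, 0)                                  # dose repairs damage
    damaged = min(healed + int(rng.random() < 0.3), N_STATES - 1)    # stochastic wear
    reward = -DAMAGE_PENALTY * damaged - DOSE_COST * action
    return damaged, reward

def q_learning(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(50):                                          # finite horizon
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            s2, r = step(s, ACTIONS[a], rng)
            # standard Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_STATES)]
```

The learned policy doses aggressively at high damage and withholds the (costly) agent when the material is intact, which is the integrity-versus-resource trade-off the abstract describes.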
I tried a sound bath to see if it actually made me calmer
The science is still emerging, but I couldn't deny how relaxed I felt. [Image: a mallet hovers over a bronze singing bowl, an instrument long used for meditation and relaxation practices.] I drove up to my first sound bath experience not knowing quite what to expect. The meditative practice, which utilizes chimes, gongs, and an arsenal of "vibrational musical instruments," has existed in various forms for millennia but has seen a post-pandemic resurgence, particularly among the burgeoning, influencer-boosted online wellness community.
- South America > Chile (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > New York (0.04)
- Research Report > Experimental Study (0.69)
- Research Report > New Finding (0.49)
How to Protect Models against Adversarial Unlearning?
Jasiorski, Patryk, Klonowski, Marek, Woźniak, Michał
AI models need to support unlearning to meet the requirements of legal acts such as the AI Act or the GDPR, and also to remove toxic content, debias models, undo the influence of malicious instances, or adapt to shifts in the distribution of the data on which a model operates. Unfortunately, removing knowledge may cause undesirable side effects, such as a deterioration in model performance. In this paper, we investigate the problem of adversarial unlearning, where a malicious party intentionally sends unlearn requests to degrade the model's performance as much as possible. We show that this phenomenon and the adversary's capabilities depend on many factors, primarily the backbone model itself and the strategy and limitations in selecting data to be unlearned. The main result of this work is a new method of protecting model performance from these side effects, both when unlearned behavior results from spontaneous processes and when it results from adversary actions.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
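The adversarial-unlearning threat model above can be sketched on a toy setup, under assumptions of mine (not the paper's method): the "model" is a one-dimensional nearest-centroid classifier with exact unlearning (drop a point, recompute the class mean), and the adversary greedily requests deletion of whichever training point hurts validation accuracy most.

```python
import random

# Two 1-D Gaussian classes (labels 0 and 2); all data is synthetic.
random.seed(0)
train = [((random.gauss(c, 1.0),), c) for c in (0, 2) for _ in range(30)]
val = [((random.gauss(c, 1.0),), c) for c in (0, 2) for _ in range(50)]

def centroids(data):
    # exact unlearning: the class mean is recomputed from the remaining data
    return {c: sum(x[0] for x, y in data if y == c) /
               sum(1 for _, y in data if y == c) for c in (0, 2)}

def accuracy(data, means):
    ok = sum(1 for x, y in data
             if min(means, key=lambda c: abs(x[0] - means[c])) == y)
    return ok / len(data)

def adversarial_unlearn(train, val, budget=10):
    train = list(train)
    for _ in range(budget):
        # adversary picks the single deletion that lowers validation accuracy most
        worst = min(range(len(train)),
                    key=lambda i: accuracy(val, centroids(train[:i] + train[i + 1:])))
        train.pop(worst)
    return train

base = accuracy(val, centroids(train))
after = accuracy(val, centroids(adversarial_unlearn(train, val)))
```

Each unlearn request is individually legitimate, yet the greedy sequence shifts the class centroids and degrades the decision boundary, which is the side effect the paper's defense targets.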
How Afrofuturism can help us imagine futures worth living in
Brooks, Lonny Avi, Anderson, Reynaldo
The digital age sings a seductive song of progress, yet a deliberate erasure echoes within its circuits. We stand at a crossroads, where technology, particularly the promise of artificial intelligence, threatens both to illuminate and to obliterate. Whose perspectives will shape, and whose will be erased from, the future we build? AI, in particular, has become the latest battleground in a culture war that oscillates between unchecked techno-optimism and dystopian fear. We are told, on one hand, that AI will save us – from disease, inefficiency, ignorance – on the other, that it will replace us, dominate us, erase us.
- North America > United States > Texas (0.05)
- North America > United States > New York (0.05)
- North America > United States > Michigan (0.05)
I'm a Therapist, and I'm Replaceable. But So Are You
I'm a psychologist, and AI is coming for my job. The signs are everywhere: a client showing me how ChatGPT helped her better understand her relationship with her parents; a friend ditching her in-person therapist to process anxiety with Claude; a startup raising $40 million to build a super-charged AI therapist. The other day on TikTok, I came across an influencer sharing how she doesn't need friends; she can just vent to God and ChatGPT. "ChatGPT talked me out of self-sabotaging." "It knows me better than any human walking this earth."
- North America > United States (0.05)
- Europe > Austria > Vienna (0.05)
CURing Large Models: Compression via CUR Decomposition
Park, Sanghyeon, Moon, Soo-Mook
Large deep learning models have achieved remarkable success but are resource-intensive, posing challenges such as memory usage. We introduce CURing, a novel model compression method based on CUR matrix decomposition, which approximates weight matrices as the product of selected columns (C) and rows (R), and a small linking matrix (U). We apply this decomposition to weights chosen based on the combined influence of their magnitudes and activations. By identifying and retaining informative rows and columns, CURing significantly reduces model size with minimal performance loss. For example, it reduces Llama3.1-8B's parameters to 7.32B (-9%) in just 129 seconds, over 20 times faster than prior compression methods.
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
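The W ≈ C·U·R factorization described above can be sketched in a few lines, with the caveat that columns and rows are selected here by plain norms rather than the paper's combined magnitude-and-activation criterion:

```python
import numpy as np

def cur_decompose(W, k):
    """Approximate W as C @ U @ R from k selected columns and rows of W."""
    cols = np.argsort(-np.linalg.norm(W, axis=0))[:k]   # k largest-norm columns
    rows = np.argsort(-np.linalg.norm(W, axis=1))[:k]   # k largest-norm rows
    C, R = W[:, cols], W[rows, :]
    U = np.linalg.pinv(C) @ W @ np.linalg.pinv(R)       # small linking matrix
    return C, U, R

rng = np.random.default_rng(0)
# low-rank-plus-noise "weight matrix": CUR should capture most of it
W = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64)) + 0.01 * rng.normal(size=(64, 64))
C, U, R = cur_decompose(W, k=16)
err = np.linalg.norm(W - C @ U @ R) / np.linalg.norm(W)
```

Because C and R are actual columns and rows of W, the factors stay interpretable and the stored parameter count drops from 64×64 to 2·(64×16) + 16×16, the compression mechanism the abstract describes.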
Optimizing Small Language Models for In-Vehicle Function-Calling
Khiabani, Yahya Sowti, Atif, Farris, Hsu, Chieh, Stahlmann, Sven, Michels, Tobias, Kramer, Sebastian, Heidrich, Benedikt, Sarfraz, M. Saquib, Merten, Julian, Tafazzoli, Faezeh
We propose a holistic approach for deploying Small Language Models (SLMs) as function-calling agents within vehicles as edge devices, offering a more flexible and robust alternative to traditional rule-based systems. By leveraging SLMs, we simplify vehicle control mechanisms and enhance the user experience. Given the in-vehicle hardware constraints, we apply state-of-the-art model compression techniques, including structured pruning, healing, and quantization, ensuring that the model fits within the resource limitations while maintaining acceptable performance. Our work focuses on optimizing a representative SLM, Microsoft's Phi-3 mini, and outlines best practices for enabling embedded models, including compression, task-specific fine-tuning, and vehicle integration. We demonstrate that, despite a significant reduction in model size that removes up to 2 billion parameters from the original model, our approach preserves the model's ability to handle complex in-vehicle tasks accurately and efficiently. Furthermore, by executing the model in a lightweight runtime environment, we achieve a generation speed of 11 tokens per second, making real-time, on-device inference feasible without hardware acceleration. Our results demonstrate the potential of SLMs to transform vehicle control systems, enabling more intuitive interactions between users and their vehicles for an enhanced driving experience.
- North America > United States > Hawaii (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
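Two of the compression steps named above, structured pruning and quantization, can be sketched on a single linear layer. The sizes, the row-norm pruning criterion, and the symmetric int8 scheme are illustrative assumptions of mine; the "healing" fine-tuning step is omitted.

```python
import numpy as np

def prune_rows(W, keep_frac=0.75):
    """Structured pruning: drop whole output neurons (rows) with the smallest norms."""
    k = int(W.shape[0] * keep_frac)
    keep = np.argsort(-np.linalg.norm(W, axis=1))[:k]
    return W[np.sort(keep), :]

def quantize_int8(W):
    """Symmetric per-tensor int8 quantization with a single float scale."""
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 32)).astype(np.float32)
Wp = prune_rows(W)                       # 25% of output neurons removed
q, scale = quantize_int8(Wp)             # 4 bytes/weight -> 1 byte/weight
W_deq = q.astype(np.float32) * scale
err = np.abs(Wp - W_deq).max()           # worst-case dequantization error
```

Pruning shrinks the matrix, quantization shrinks each remaining weight to one byte, and the residual error is what the subsequent healing fine-tuning is meant to recover.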
Neural Networks and Friction: Slide, Hold, Learn
In this study, it is demonstrated that Recurrent Neural Networks (RNNs), specifically those utilizing Gated Recurrent Unit (GRU) architecture, possess the capability to learn the complex dynamics of rate-and-state friction laws from synthetic data. The data employed for training the network is generated through the application of traditional rate-and-state friction equations coupled with the aging law for state evolution. A novel aspect of our approach is the formulation of a loss function that explicitly accounts for the direct effect by means of automatic differentiation. It is found that the RNN, with its GRU architecture, effectively learns to predict changes in the friction coefficient resulting from velocity jumps (with and without noise in the target data), thereby showcasing the potential of machine learning models in understanding and simulating the physics of frictional processes.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Switzerland > Vaud > Lausanne (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
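The synthetic-data generation described above can be sketched by integrating the rate-and-state friction equations with the aging law through a velocity step. The parameter values and the forward-Euler scheme below are illustrative, not the paper's.

```python
import math

# Rate-and-state parameters (illustrative): a, b, critical slip distance Dc,
# reference friction mu0 at reference velocity V0. b > a => velocity weakening.
A, B, DC, MU0, V0 = 0.010, 0.015, 1e-5, 0.6, 1e-6

def friction(v, theta):
    # mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)
    return MU0 + A * math.log(v / V0) + B * math.log(V0 * theta / DC)

def velocity_step(v1=1e-6, v2=1e-5, dt=1e-3, n=20000):
    theta = DC / v1                              # start at steady state for v1
    mus, v = [], v1
    for i in range(n):
        if i == n // 2:
            v = v2                               # velocity jump: direct effect kicks in
        mus.append(friction(v, theta))
        theta += dt * (1.0 - v * theta / DC)     # aging law for state evolution
    return mus

mus = velocity_step()
```

The trace shows the two features the RNN must learn: an instantaneous jump in friction at the velocity step (the direct effect) followed by relaxation to a lower steady-state value, since b > a makes this parameter set velocity-weakening.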
Predictive Model Development to Identify Failed Healing in Patients after Non-Union Fracture Surgery
Donié, Cedric, Reumann, Marie K., Hartung, Tony, Braun, Benedikt J., Histing, Tina, Endo, Satoshi, Hirche, Sandra
Bone non-union is among the most severe complications associated with trauma surgery, occurring in 10-30% of cases after long bone fractures. Treating non-unions requires a high level of surgical expertise and often involves multiple revision surgeries, sometimes even leading to amputation. Thus, more accurate prognosis is crucial for patient well-being. Recent advances in machine learning (ML) hold promise for developing models to predict non-union healing, even when working with smaller datasets, a commonly encountered challenge in clinical domains. To demonstrate the effectiveness of ML in identifying candidates at risk of failed non-union healing, we applied three ML models (logistic regression, support vector machine, and XGBoost) to the clinical dataset TRUFFLE, which includes 797 patients with long bone non-union. At a fixed sensitivity of 70%, the models achieved specificities of 66% (XGBoost), 49% (support vector machine), and 43% (logistic regression). These findings offer valuable clinical insights because they enable early identification of patients at risk of failed non-union healing after the initial surgical revision treatment protocol.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.06)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
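The reported operating point, a fixed 70% sensitivity with model-dependent specificity, can be reproduced on toy data by sweeping the decision threshold. The risk scores below are invented for illustration, not TRUFFLE data.

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of 'predict positive when score >= threshold'."""
    tp = sum(s >= threshold for s, y in zip(scores, labels) if y == 1)
    fn = sum(s < threshold for s, y in zip(scores, labels) if y == 1)
    tn = sum(s < threshold for s, y in zip(scores, labels) if y == 0)
    fp = sum(s >= threshold for s, y in zip(scores, labels) if y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def threshold_at_sensitivity(scores, labels, target=0.70):
    # highest threshold whose sensitivity still meets the target
    for t in sorted(set(scores), reverse=True):
        sens, spec = sens_spec(scores, labels, t)
        if sens >= target:
            return t, sens, spec
    raise ValueError("target sensitivity unreachable")

# toy risk scores: positives (failed healing) tend to score higher
labels = [1] * 10 + [0] * 10
scores = [0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.4, 0.3, 0.2,
          0.55, 0.5, 0.45, 0.35, 0.25, 0.15, 0.1, 0.05, 0.02, 0.01]
t, sens, spec = threshold_at_sensitivity(scores, labels)
```

Fixing sensitivity first reflects the clinical priority of not missing at-risk patients; specificity at that operating point then quantifies how many unnecessary interventions each model would trigger.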
The Unreasonable Ineffectiveness of the Deeper Layers
Gromov, Andrey, Tirumala, Kushal, Shapourian, Hassan, Glorioso, Paolo, Roberts, Daniel A.
We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, to "heal" the damage, we perform a small amount of finetuning. In particular, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low Rank Adapters (QLoRA), such that each of our experiments can be performed on a single A100 GPU. From a practical perspective, these results suggest that layer pruning methods can complement other PEFT strategies to further reduce computational resources of finetuning on the one hand, and can improve the memory and latency of inference on the other hand. From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network or that the shallow layers play a critical role in storing knowledge.
- Asia > Middle East > Jordan (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
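The block-selection step described above can be sketched as follows, under assumptions of mine: similarity between the representation entering layer l and the one n layers deeper is measured by mean angular distance, and the block that changes the residual stream least is chosen for pruning. The "activations" come from a toy additive stack, not an LLM, and the healing fine-tuning is omitted.

```python
import numpy as np

def block_to_prune(H, n):
    """H[l]: (tokens, dim) activations entering layer l. Return the start index
    of the n-layer block whose input and output representations are most similar."""
    def angdist(a, b):
        cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi))
    dists = [angdist(H[l], H[l + n]) for l in range(len(H) - n)]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))
H = [x]
for l in range(8):                        # 8 toy "layers" acting on a residual stream
    eps = 0.01 if 3 <= l <= 6 else 0.5    # layers 3-6 barely change the stream
    H.append(H[-1] + eps * rng.normal(size=x.shape))
start = block_to_prune(H, n=3)            # lands inside the near-identity region
```

Blocks whose input and output nearly coincide are exactly those the network can lose with minimal damage, which is why the abstract finds deep layers so deletable.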