Watch out! Motion is Blurring the Vision of Your Deep Neural Networks

Neural Information Processing Systems

State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples carrying additive, random-like noise perturbations. While such examples are rarely found in the physical world, the image blur caused by object motion commonly occurs in practice, making its study especially important for widely adopted real-time image processing tasks (e.g., object detection and tracking). In this paper, we take a first step toward comprehensively investigating the potential hazards that motion-induced blur poses to DNNs. We propose a novel adversarial attack method that generates visually natural motion-blurred adversarial examples, named the motion-based adversarial blur attack (ABBA). To this end, we first formulate a kernel-prediction-based attack in which an input image is convolved with kernels in a pixel-wise way, and misclassification is achieved by tuning the kernel weights.
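The kernel-prediction formulation can be sketched in a few lines: each output pixel is the inner product of a per-pixel kernel with the surrounding image patch. The sketch below is a minimal NumPy illustration; the function name, array shapes, and edge-padding choice are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def per_pixel_blur(image, kernels):
    """Apply a distinct blur kernel at every pixel (kernel-prediction style).

    image:   (H, W) grayscale array
    kernels: (H, W, k, k) per-pixel kernels, each ideally summing to 1
    """
    H, W = image.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]  # k x k neighborhood of pixel (i, j)
            out[i, j] = np.sum(patch * kernels[i, j])
    return out
```

An attack in this style would treat `kernels` as the optimization variable, tuning the weights until the blurred image is misclassified while the blur still looks like plausible motion.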


ABBA-Adapters: Efficient and Expressive Fine-Tuning of Foundation Models

Singhal, Raghav, Ponkshe, Kaustubh, Vartak, Rohit, Vepakomma, Praneeth

arXiv.org Artificial Intelligence

Large Language Models have demonstrated strong performance across a wide range of tasks, but adapting them efficiently to new domains remains a key challenge. Parameter-Efficient Fine-Tuning (PEFT) methods address this by introducing lightweight, trainable modules while keeping most pre-trained weights fixed. The prevailing approach, LoRA, models updates using a low-rank decomposition, but its expressivity is inherently constrained by the rank. Recent methods like HiRA aim to increase expressivity by incorporating a Hadamard product with the frozen weights, but still rely on the structure of the pre-trained model. We introduce ABBA, a new PEFT architecture that reparameterizes the update as a Hadamard product of two independently learnable low-rank matrices. In contrast to prior work, ABBA fully decouples the update from the pre-trained weights, enabling both components to be optimized freely. This leads to significantly higher expressivity under the same parameter budget, a property we validate through matrix reconstruction experiments. Empirically, ABBA achieves state-of-the-art results on arithmetic and commonsense reasoning benchmarks, consistently outperforming existing PEFT methods by a significant margin across multiple models. Our code is publicly available at: https://github.com/CERT-Lab/abba.
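The reparameterization described above is simple to state: the weight update is the elementwise (Hadamard) product of two independently learned low-rank matrices. The toy NumPy sketch below illustrates the rank argument; the dimensions and variable names are illustrative, not from the paper or its code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 12, 10, 3  # toy sizes; real layers are far larger

# Two independently learnable low-rank factor pairs
B1, A1 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
B2, A2 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))

# ABBA-style update: Hadamard product of two rank-r matrices.
# rank(X * Y) can be as high as rank(X) * rank(Y), so this update can
# reach rank r^2, whereas a LoRA update with the same parameter count
# (a single factor pair of rank 2r) is capped at rank 2r.
delta_W = (B1 @ A1) * (B2 @ A2)
```

Both parameterizations cost 2r(d_out + d_in) parameters here, which is why the expressivity comparison at a fixed budget favors the Hadamard form.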


Personalised Insulin Adjustment with Reinforcement Learning: An In-Silico Validation for People with Diabetes on Intensive Insulin Treatment

Panagiotou, Maria, Brigato, Lorenzo, Streit, Vivien, Hayoz, Amanda, Proennecke, Stephan, Athanasopoulos, Stavros, Olsen, Mikkel T., Brok, Elizabeth J. den, Svensson, Cecilie H., Makrilakis, Konstantinos, Xatzipsalti, Maria, Vazeou, Andriani, Mertens, Peter R., Pedersen-Bjergaard, Ulrik, de Galan, Bastiaan E., Mougiakakou, Stavroula

arXiv.org Artificial Intelligence

Despite recent advances in insulin preparations and technology, adjusting insulin remains an ongoing challenge for the majority of people with type 1 diabetes (T1D) and longstanding type 2 diabetes (T2D). In this study, we propose the Adaptive Basal-Bolus Advisor (ABBA), a personalised insulin treatment recommendation approach based on reinforcement learning for individuals with T1D and T2D who perform self-monitored blood glucose measurements and multiple daily insulin injection therapy. We developed and evaluated the ability of ABBA to achieve better time-in-range (TIR) for individuals with T1D and T2D, compared to a standard basal-bolus advisor (BBA). The in-silico test was performed using an FDA-accepted population, including 101 simulated adults with T1D and 101 with T2D. The in-silico evaluation shows that ABBA significantly improved TIR and significantly reduced both time-below-range and time-above-range compared to BBA. ABBA's performance continued to improve over two months, whereas BBA exhibited only modest changes. This personalised method for adjusting insulin has the potential to further optimise glycaemic control and support people with T1D and T2D in their daily self-management. Our results warrant trialling ABBA for the first time in humans.
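Time-in-range, the primary endpoint above, is straightforward to compute from a series of glucose readings. The sketch below uses the conventional 70-180 mg/dL target range; the function name and interface are illustrative, not taken from the paper.

```python
def glycaemic_metrics(glucose_mg_dl, low=70, high=180):
    """Return the fractions of readings in, below, and above the target range.

    glucose_mg_dl: sequence of blood glucose readings in mg/dL
    low, high:     target range bounds (conventionally 70-180 mg/dL)
    """
    n = len(glucose_mg_dl)
    tir = sum(low <= g <= high for g in glucose_mg_dl) / n  # time-in-range
    tbr = sum(g < low for g in glucose_mg_dl) / n           # time-below-range
    tar = sum(g > high for g in glucose_mg_dl) / n          # time-above-range
    return tir, tbr, tar
```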


LLM-ABBA: Understanding time series via symbolic approximation

Carson, Erin, Chen, Xinye, Kang, Cheng

arXiv.org Artificial Intelligence

The success of large language models (LLMs) for time series has been demonstrated in previous work. Utilizing a symbolic time series representation, one can efficiently bridge the gap between LLMs and time series. However, the remaining challenge is to exploit the semantic information hidden in time series by using symbols or existing tokens of LLMs, while aligning the embedding space of LLMs according to the hidden information of time series. The symbolic time series approximation (STSA) method called adaptive Brownian bridge-based symbolic aggregation (ABBA) shows outstanding efficacy in preserving salient time series features by modeling time series patterns in terms of amplitude and period while using existing tokens of LLMs. In this paper, we introduce a method, called LLM-ABBA, that integrates ABBA into large language models for various downstream time series tasks. By symbolizing time series, LLM-ABBA compares favorably to the recent state-of-the-art (SOTA) in UCR and three medical time series classification tasks. Meanwhile, a fixed-polygonal chain trick in ABBA is introduced to avoid obvious drifting during prediction tasks by significantly mitigating the effects of cumulative error arising from misused symbols during the transition from symbols to numerical values. In time series regression tasks, LLM-ABBA achieves the new SOTA on Time Series Extrinsic Regression (TSER) benchmarks. LLM-ABBA also shows competitive prediction capability compared to recent SOTA time series prediction results. We believe this framework can also seamlessly extend to other time series tasks.
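The amplitude-and-period modeling that ABBA relies on can be illustrated with a toy greedy piecewise-linear compressor: each piece is stored as a (length, increment) pair, which is the kind of representation ABBA subsequently clusters into symbols. This is a simplified sketch in that spirit, not the actual ABBA algorithm; the tolerance parameter and function name are assumptions.

```python
import numpy as np

def piecewise_compress(ts, tol=0.5):
    """Greedily grow linear pieces until the deviation from the straight
    line joining a piece's endpoints exceeds tol; return each piece as a
    (length, increment) pair, i.e. its period and amplitude change."""
    pieces, start, n = [], 0, len(ts)
    while start < n - 1:
        end = start + 1
        # extend the piece while the linear fit stays within tolerance
        while end + 1 < n:
            seg = ts[start:end + 2]
            line = np.linspace(seg[0], seg[-1], len(seg))
            if np.max(np.abs(seg - line)) > tol:
                break
            end += 1
        pieces.append((end - start, float(ts[end] - ts[start])))
        start = end
    return pieces
```

A ramp followed by a plateau, for example, compresses to two pieces, one with a positive increment and one with increment zero; symbolization would then assign tokens to clusters of similar pieces.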


AI Elvis not the first hologram star to shake his moves on stage

The Guardian

Elvis Presley's immersive concert experience is set to leave London all shook up, with an AI rendering of the king of rock'n'roll ready to enthral fans from November 2024. But this is not the first holographic performance – nor will it be the last. Here are some of the other artists whom technology has allowed to tour from beyond the grave, or as their younger selves. Abba's concert kicks off with a lithe and fresh-faced Benny Andersson reassuring the crowd: "This is really me. I just look very good for my age."


Dolly Parton, Whoopi Goldberg are anti-holograms; expert warns they 'can never fully ensure' against use

FOX News

Dolly Parton says she's never forgotten the little girl she used to be, and although she's getting older, she has no plans to slow down. Although not a new concept, the idea of immortalizing a human through holograms, potentially AI-created, is becoming more relevant as it is discussed by Hollywood stars. Both country legend Dolly Parton and actor and media personality Whoopi Goldberg have recently noted their aversion to the permanency holograms allow, with Goldberg going so far as to make legal provisions against the technology in her will. Fox News Digital spoke with an expert who said that while certain steps can be taken to protect your name and likeness while alive, things become a whole different ball game after death. Dolly Parton and Whoopi Goldberg have both expressed that they have no desire to be made into a hologram after their death. "Unfortunately, in the age of AI, celebrities can never fully ensure that their name and likeness won't be used as a hologram post-mortem without their permission," Abe Lichy, partner and chair of the intellectual property practice at McLaughlin & Stern, tells Fox News Digital.


Pet Shop Boys say AI can help complete their unfinished songs

Daily Mail - Science & tech

Artificial intelligence (AI) has proved a controversial matter in the music industry, resulting in legal tussles, job losses and a decline in musical quality. However, British pop icons the Pet Shop Boys argue that the technology can be used in a positive way in the creative process. The group's singer, Neil Tennant, said AI could 'fill in the blanks' if a song has been left unfinished, such as when the composer is suffering from writer's block. Tennant and his bandmate Chris Lowe said they are looking at new technology as they prepare their 'Dreamworld' greatest hits tour in Europe this summer. 'There's a song that we wrote a chorus for in 2003 and we never finished because I couldn't think of anything for the verses,' Tennant told the Radio Times.


Taking advantage of a very simple property to efficiently infer NFAs

Jastrzab, Tomasz, Lardeux, Frédéric, Monfroy, Eric

arXiv.org Artificial Intelligence

Grammatical inference consists of learning a formal grammar, as a finite state machine or as a set of rewrite rules. In this paper, we are concerned with inferring Nondeterministic Finite Automata (NFA) that must accept some words and reject some other words from a given sample. This problem can naturally be modeled in SAT. Since the standard model is enormous, models based on prefixes, suffixes, and hybrids were designed to generate smaller SAT instances. There is a very simple and obvious property: if there is an NFA of size k for a given sample, there is also an NFA of size k+1. We first strengthen this property by adding some characteristics to the NFA of size k+1. Hence, we can use this property to tighten the bounds on the size of the minimal NFA for a given sample. We then propose simplified and refined models for NFAs of size k+1 that are smaller than the initial models for NFAs of size k. We also propose a reduction algorithm to build an NFA of size k from a specific NFA of size k+1. Finally, we validate our proposition with experiments that show the efficiency of our approach.
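The basic property is easy to see constructively: adding a fresh state with no incoming transitions to an NFA of size k yields an NFA of size k+1 that accepts and rejects exactly the same words. A minimal sketch of NFA acceptance illustrating this (the dictionary representation and names are illustrative, not from the paper):

```python
def nfa_accepts(transitions, initial, finals, word):
    """transitions: dict mapping (state, symbol) -> set of successor states."""
    current = {initial}
    for ch in word:
        # follow every transition from every currently active state
        current = set().union(*(transitions.get((s, ch), set()) for s in current))
    return bool(current & finals)

# A 2-state NFA over {a, b} that accepts exactly the words ending in 'a':
# state 0 loops on everything, and 'a' additionally reaches accepting state 1.
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}}

# Adding a fresh state 2 with no incoming transitions gives an NFA of
# size 3 whose behavior on any sample is unchanged, which is the
# constructive core of the "size k implies size k+1" property.
```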


ASTRIDE: Adaptive Symbolization for Time Series Databases

Combettes, Sylvain W., Truong, Charles, Oudre, Laurent

arXiv.org Artificial Intelligence

We introduce ASTRIDE (Adaptive Symbolization for Time seRIes DatabasEs), a novel symbolic representation of time series, along with its accelerated variant FASTRIDE (Fast ASTRIDE). Unlike most symbolization procedures, ASTRIDE is adaptive during both the segmentation step by performing change-point detection and the quantization step by using quantiles. Instead of proceeding signal by signal, ASTRIDE builds a dictionary of symbols that is common to all signals in a data set. We also introduce D-GED (Dynamic General Edit Distance), a novel similarity measure on symbolic representations based on the general edit distance. We demonstrate the performance of the ASTRIDE and FASTRIDE representations compared to SAX (Symbolic Aggregate approXimation), 1d-SAX, SFA (Symbolic Fourier Approximation), and ABBA (Adaptive Brownian Bridge-based Aggregation) on reconstruction and, when applicable, on classification tasks. These algorithms are evaluated on 86 univariate equal-size data sets from the UCR Time Series Classification Archive. An open source GitHub repository called astride is made available to reproduce all the experiments in Python.
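The quantile-based quantization step described above can be sketched quickly: segment means pooled across the whole data set are binned by their empirical quantiles, so the symbol alphabet adapts to the data rather than assuming a fixed distribution. This is a simplified sketch of the quantization step only; the change-point segmentation is assumed already done, and the function name and alphabet choice are illustrative.

```python
import numpy as np

def quantile_symbolize(segment_means, n_symbols=4):
    """Assign each segment mean a letter using quantile bin edges computed
    over all segments in the data set (adaptive quantization sketch)."""
    # interior quantiles, e.g. 0.25, 0.5, 0.75 for a 4-letter alphabet
    probs = np.linspace(0, 1, n_symbols + 1)[1:-1]
    edges = np.quantile(segment_means, probs)
    return [chr(ord("a") + int(np.searchsorted(edges, m))) for m in segment_means]
```

Because the bin edges come from quantiles of the pooled segment means, each symbol covers roughly the same number of segments, which is the "adaptive" contrast with fixed-breakpoint schemes such as SAX.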