
Collaborating Authors

counteract


RL-based Control of UAS Subject to Significant Disturbance

Chakraborty, Kousheek, Hof, Thijs, Alharbat, Ayham, Mersha, Abeje

arXiv.org Artificial Intelligence

Accepted at the 2025 International Conference on Unmanned Aircraft Systems.

Abstract -- This paper proposes a Reinforcement Learning (RL)-based control framework for position and attitude control of an Unmanned Aerial System (UAS) subjected to significant disturbance that can be associated with an uncertain trigger signal. The proposed method learns the relationship between the trigger signal and the disturbance force, enabling the system to anticipate and counteract impending disturbances before they occur. We train and evaluate three policies: a baseline policy trained without exposure to the disturbance, a reactive policy trained with the disturbance but without the trigger signal, and a predictive policy that incorporates the trigger signal as an observation and is exposed to the disturbance during training. Our simulation results show that the predictive policy outperforms the other policies by minimizing position deviations through a proactive correction maneuver. This work highlights the potential of integrating predictive cues into RL frameworks to improve UAS performance.

INTRODUCTION: Unmanned Aerial Systems (UAS) are increasingly deployed in high-risk environments to perform critical tasks such as infrastructure inspection, search and rescue, and aerial firefighting [1].
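The distinction among the three policies can be sketched roughly as follows (a hypothetical illustration with assumed state names, not the paper's implementation): the policies differ only in whether the uncertain trigger signal is included in the observation vector fed to the RL agent.

```python
# Illustrative sketch (assumed structure, not the paper's code): the three
# policy variants differ only in what the observation vector contains.
import numpy as np

def build_observation(position_error, attitude, velocity, trigger_signal,
                      policy_type="predictive"):
    """Assemble the policy input.

    Baseline and reactive policies observe only the flight state; the
    predictive policy additionally observes the trigger signal, letting it
    learn to anticipate the disturbance before it occurs.
    """
    state = np.concatenate([position_error, attitude, velocity])
    if policy_type == "predictive":
        return np.concatenate([state, [trigger_signal]])
    return state
```

Under this assumed interface, training a reactive policy amounts to calling the builder with `policy_type="reactive"`, so the agent can only respond to the disturbance after it perturbs the state.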


Mining Action Rules for Defect Reduction Planning

Oueslati, Khouloud, Laberge, Gabriel, Lamothe, Maxime, Khomh, Foutse

arXiv.org Artificial Intelligence

Defect reduction planning plays a vital role in enhancing software quality and minimizing software maintenance costs. By training a black box machine learning model and "explaining" its predictions, explainable AI for software engineering aims to identify the code characteristics that impact maintenance risks. However, post-hoc explanations do not always faithfully reflect what the original model computes. In this paper, we introduce CounterACT, a Counterfactual ACTion rule mining approach that can generate defect reduction plans without black-box models. By leveraging action rules, CounterACT provides a course of action that can be considered a counterfactual explanation for the class (e.g., buggy or not buggy) assigned to a piece of code. We compare the effectiveness of CounterACT with the original action rule mining algorithm and six established defect reduction approaches on 9 software projects. Our evaluation is based on (a) overlap scores between proposed code changes and actual developer modifications; (b) improvement scores in future releases; and (c) the precision, recall, and F1-score of the plans. Our results show that, compared to competing approaches, CounterACT's explainable plans achieve higher overlap scores at the release level (median 95%) and commit level (median 85.97%), and they offer a better trade-off between precision and recall (median F1-score 88.12%). Finally, we venture beyond planning and explore leveraging Large Language Models (LLMs) to generate code edits from our plans. Our results show that LLM code edits supported by our plans are actionable and more likely to pass relevant test cases than vanilla LLM code recommendations.


Remind of the Past: Incremental Learning with Analogical Prompts

Ma, Zhiheng, Hong, Xiaopeng, Liu, Beinan, Wang, Yabin, Guo, Pinyue, Li, Huiyun

arXiv.org Artificial Intelligence

Although data-free incremental learning methods are memory-friendly, accurately estimating and counteracting representation shifts is challenging in the absence of historical data. This paper addresses this thorny problem by proposing a novel incremental learning method inspired by human analogy capabilities. Specifically, we design an analogy-making mechanism that remaps new data onto old classes via prompt tuning. It mimics the feature distribution of the target old class on the old model using only samples of new classes. The learnt prompts are further used to estimate and counteract the representation shift that fine-tuning causes in the historical prototypes. The proposed method achieves new state-of-the-art performance on four incremental learning benchmarks under both the class and domain incremental learning settings. It consistently outperforms data-replay methods while saving only feature prototypes for each class, and it nearly matches the empirical upper bound set by joint training on the Core50 benchmark. The code will be released at \url{https://github.com/ZhihengCV/A-Prompts}.


Researchers quantify bias in Reddit content sometimes used to train AI

#artificialintelligence

In a paper published on the preprint server Arxiv.org, researchers quantify bias in the language of Reddit communities. This alone isn't surprising, but the problem is that data from these communities is often used to train large language models like OpenAI's GPT-3. That in turn is important because, as OpenAI itself notes, this sort of bias leads to placing words like "naughty" or "sucked" near female pronouns and "Islam" near words like "terrorism." The scientists' approach uses representations of words called embeddings to discover and categorize language biases, which could enable data scientists to trace the severity of bias in different communities and take steps to counteract it. Given a language model and two sets of words representing the concepts to compare, the method identifies the words most biased toward each concept in a given community, spotlighting examples of potentially offensive content on Reddit subcommunities.
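The embedding-based comparison described above can be sketched roughly as follows (a hypothetical illustration in the spirit of embedding-association bias tests, not the researchers' actual code): score each vocabulary word by how much closer its embedding sits to one concept word set than to the other, then rank.

```python
# Hypothetical sketch: scoring how strongly a word's embedding leans toward
# one of two concept word sets, then ranking a vocabulary by that score.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(word_vec, concept_a, concept_b):
    """Mean cosine similarity to concept set A minus mean similarity to
    concept set B. Positive means the word leans toward A, negative toward B."""
    sim_a = np.mean([cosine(word_vec, v) for v in concept_a])
    sim_b = np.mean([cosine(word_vec, v) for v in concept_b])
    return sim_a - sim_b

def most_biased(vocab, concept_a, concept_b, top_k=3):
    """Return the top_k vocabulary words leaning most strongly toward set A."""
    scores = {w: bias_score(v, concept_a, concept_b) for w, v in vocab.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Run per community, such a ranking would surface the words most associated with each concept in that community's language model, which is the spotlighting step the article describes.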


We Can't Stop Harmful AI So We Must Find Ways To Counteract It

#artificialintelligence

As the risks of deep learning's continued evolution have received greater attention, a growing refrain has focused on how best to prevent AI from being used for harm. From killer robots to prevalent facial recognition, societies are increasingly talking about the need for new legislation and corporate responsibility pledges to halt the spread of harmful AI. Unfortunately, the reality is that deep learning's ease of use and decentralized development across the world means it is simply impossible to constrain how it is used. Instead, societies must focus on how to counteract its most harmful applications. The public, press, pundits and policymakers speak of laws and pledges to halt the harmful use of AI.


AIs are being trained on racist data – and it's starting to show

#artificialintelligence

Machine learning algorithms process vast quantities of data and spot correlations, trends and anomalies, at levels far beyond even the brightest human mind. But as human intelligence relies on accurate information, so too do machines. Algorithms need training data to learn from. This training data is created, selected, collated and annotated by humans. And therein lies the problem.


AI and blockchain can counteract soaring drug prices

#artificialintelligence

In a new report, analysts GlobalData note that disruptive digital technologies, like artificial intelligence, big data analytics, and blockchain technology, will affect all business sectors, and the pharmaceutical industry is no exception. One impact will be adding value in the emergent area of personalized treatment (where medicines are tailored for individual patients). Another will be counteracting the unsustainability of ever-rising medicine prices, says GlobalData, a leading data and analytics company. As noted in Scientific American, prices of most drugs have been rising well above the rate of inflation.


Using AI to speed drug discovery

#artificialintelligence

The biomedical startup was founded by University of Toronto alumni David Q. Chen, Elvis Wianda, Liran Belenzon, and Tom Leung. So far the venture has raised US$8 million from a group of investors including Montreal's iNovia Capital and Google's Gradient Ventures (Alphabet's AI venture capital firm). The new company is called BenchSci, and it aims to use artificial intelligence to scan millions of data points drawn from published research papers in order to find new compounds that can help accelerate the drug discovery process. The venture's focus is on finding commercial antibodies. The researchers spent two years building machine learning software that can extract antibody usage data from published figures. This involves decoding millions of papers, with the end result of making the data easily discoverable for scientists.


Artificial Intelligence Bullies

#artificialintelligence

Jigsaw is training Artificial Intelligence (AI) to recognise patterns of abuse and take down internet bullies, with the aim of curbing trolling. The goal is to build a moderation system, using AI and machine learning, that allows for better comment moderation on websites. Much like Jigsaw, this Finnish company is working to provide an AI-based service that minimises negative and potentially harmful comments for companies and their social media sites.


AI programs exhibit racist and sexist biases, research reveals

The Guardian

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases. The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons. In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained. However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.