ImageBrush: Learning Visual In-Context Instructions
Our approach can be naturally extended to include multiple examples; below we discuss the impact of these examples on the model's final performance.
- North America > United States > Virginia (0.04)
- North America > United States > Michigan (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Security & Privacy (0.46)
- Health & Medicine > Diagnostic Medicine > Imaging (0.46)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
From Theory to Practice: Evaluating Data Poisoning Attacks and Defenses in In-Context Learning on Social Media Health Discourse
Jhuma, Rabeya Amin, Faisal, Mostafa Mohaimen Akand
This study explored how in-context learning (ICL) in large language models can be disrupted by data poisoning attacks in the setting of public health sentiment analysis. Using tweets about Human Metapneumovirus (HMPV), small adversarial perturbations such as synonym replacement, negation insertion, and randomized perturbation were introduced into the support examples. Even these minor manipulations caused major disruptions, with sentiment labels flipping in up to 67% of cases. To address this, a Spectral Signature Defense was applied, which filtered out poisoned examples while keeping the data's meaning and sentiment intact. After the defense, ICL accuracy remained steady at around 46.7%, and logistic regression validation reached 100% accuracy, showing that the defense successfully preserved the dataset's integrity. Overall, the findings extend prior theoretical studies of ICL poisoning to a practical, high-stakes setting in public health discourse analysis, highlighting both the risks and potential defenses for robust LLM deployment. This study also highlights the fragility of ICL under attack and the value of spectral defenses in making AI systems more reliable for health-related social media monitoring.
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.71)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.50)
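The spectral signature defense mentioned in the abstract can be sketched in a few lines. The idea (following the standard spectral-signatures recipe) is to center the example embeddings, project them onto the top singular direction, and drop the examples with the largest squared projections, since poisoned examples tend to concentrate along that direction. This is a minimal illustration, not the authors' implementation; the feature matrix, the removal fraction, and the function names are all assumptions.

```python
import numpy as np

def spectral_signature_scores(features):
    # features: (n_examples, dim) array of embeddings for the support set.
    centered = features - features.mean(axis=0)
    # Top right singular vector of the centered feature matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_direction = vt[0]
    # Outlier score = squared projection onto the top direction.
    return (centered @ top_direction) ** 2

def filter_poisoned(features, remove_frac=0.1):
    # remove_frac is a tunable assumption, not a value from the paper.
    scores = spectral_signature_scores(features)
    k = int(len(features) * remove_frac)
    drop = np.argsort(scores)[-k:] if k > 0 else np.array([], dtype=int)
    keep = np.setdiff1d(np.arange(len(features)), drop)
    return keep, drop
```

On synthetic data where 10% of points are shifted along one axis, the top-scoring examples under this criterion are exactly the shifted ones, which is the behavior the defense relies on.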
Supplementary Material: Learning Compositional Rules via Neural Program Synthesis
All models were implemented in PyTorch. For all experiments, we report standard error below. Primitive rules map a word to a color. In a higher-order rule, the left-hand side can be one or two variables and a word, and the right-hand side can be any sequence of bracketed forms of those variables. Figure A.2 shows several example training grammars sampled from the meta-grammar.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > Canada (0.04)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.69)
- Information Technology > Artificial Intelligence > Cognitive Science (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.47)
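The rule format described above (primitive rules mapping words to colors, higher-order rules rewriting a variable-plus-word pattern into a bracketed template) can be illustrated with a tiny interpreter. This is only a sketch of the general idea, restricted to one variable; the vocabulary ("dax", "RED", "twice") and the dictionary-based rule encoding are invented for illustration, not taken from the paper.

```python
def interpret(tokens, primitives, higher_order):
    """Rewrite a token sequence using primitive and higher-order rules.

    primitives:   word -> output token (e.g. a color).
    higher_order: word -> template over the variable "x1",
                  e.g. {"twice": ["x1", "x1"]} for "x1 twice -> [x1] [x1]".
    """
    # Higher-order pass: match the pattern "<x1> <word>".
    if len(tokens) == 2 and tokens[1] in higher_order:
        template = higher_order[tokens[1]]
        out = []
        for sym in template:
            if sym == "x1":
                # Recursively interpret the bound argument.
                out.extend(interpret([tokens[0]], primitives, higher_order))
            else:
                out.append(primitives[sym])
        return out
    # Primitive pass: map each word directly to its output token.
    return [primitives[t] for t in tokens]
```

With primitives {"dax": "RED", "wif": "GREEN"} and the higher-order rule {"twice": ["x1", "x1"]}, the sequence ["dax", "twice"] rewrites to ["RED", "RED"].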
We thank all the reviewers for their helpful feedback. We will do our best to answer the reviewers' questions and concerns. Because we could not address all the issues due to the lack of space, we will try to include them in the final version. Although Ravi et al. [28] include these adaptive properties, we will include a clearer discussion of prior works in the updated version of the paper. We will publicly release the code and trained models if our paper gets accepted.