The friendlier the AI chatbot the more inaccurate it is, study suggests
AI chatbots trained to be warm and friendly when interacting with users may also be more prone to inaccuracies, new research suggests. Oxford Internet Institute (OII) researchers analysed more than 400,000 responses from five AI systems that had been tweaked to communicate in a more empathetic way. Friendlier answers contained more mistakes - from giving inaccurate medical advice to reaffirming users' false beliefs, the study found. The findings raise further questions over the trustworthiness of AI models, which are often deliberately designed to be warm and human-like in order to increase engagement. Such concerns are accentuated by AI chatbots being used for support and even intimacy, as developers seek to broaden their appeal.
Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models
Pre-trained vision-language models, such as CLIP, have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we have observed a phenomenon wherein adversarial perturbations induce shifts in text-guided attention. Building upon this observation, we propose a simple yet effective strategy: Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR). This framework incorporates two components: the Attention Refinement module and the Attention-based Model Constraint module.
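The core idea - that adversarial perturbations shift the text-guided attention map, and that penalising this shift restores robustness - can be sketched in a few lines. The sketch below is a minimal, hypothetical formulation, not the paper's implementation: it assumes the attention map is a softmax over cosine similarities between image patch features and a text embedding, and uses an L2 distance as the refinement penalty; the paper's exact definitions may differ.

```python
import math

def text_guided_attention(patch_features, text_embedding):
    """Softmax over cosine similarities between each image patch feature
    and the text embedding (assumed formulation, for illustration)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    scores = [cos(p, text_embedding) for p in patch_features]
    m = max(scores)                          # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_refinement_loss(adv_attn, clean_attn):
    """Penalise the attention shift induced by adversarial perturbation
    (L2 distance between the two maps; an assumed choice of metric)."""
    return sum((a - c) ** 2 for a, c in zip(adv_attn, clean_attn))

# Toy usage: 4 patches with 3-dim features, one text embedding.
clean_patches = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0], [0.5, 0.5, 0.0]]
adv_patches = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0],
               [0.0, 0.1, 0.9], [0.4, 0.6, 0.0]]
text = [1.0, 1.0, 0.0]

a_clean = text_guided_attention(clean_patches, text)
a_adv = text_guided_attention(adv_patches, text)
loss = attention_refinement_loss(a_adv, a_clean)
```

Minimising a loss of this shape on the perturbed branch, alongside the usual task objective, is one way to read the Attention Refinement component; the Attention-based Model Constraint would add a further term keeping the robust model's attention close to the original frozen model's.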