More is More: Addition Bias in Large Language Models
Luca Santagata, Cristiano De Nobili
–arXiv.org Artificial Intelligence
In this paper, we investigate the presence of additive bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math$\Sigma$tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40% to 75.10% of cases when asked to improve its own or others' writing. These results indicate that, like humans, LLMs exhibit a marked additive bias, which may have consequences when LLMs are deployed at scale: additive bias can increase resource use and environmental impact, leading to higher economic costs through overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
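The palindrome experiment described above can be scored with a simple length comparison: if the model's palindrome is longer than the input, the edit was additive; if shorter, subtractive. The sketch below is a hypothetical illustration of such a scoring scheme, not the authors' actual evaluation code; the function names and the length-based proxy are assumptions.

```python
def classify_edit(original: str, modified: str) -> str:
    """Classify an edit as additive, subtractive, or neutral by
    comparing string lengths (a simple proxy for the paper's task)."""
    if len(modified) > len(original):
        return "add"
    if len(modified) < len(original):
        return "remove"
    return "same"

def additive_rate(pairs) -> float:
    """Fraction of non-neutral edits that were additive."""
    labels = [classify_edit(o, m) for o, m in pairs]
    changed = [label for label in labels if label != "same"]
    return sum(label == "add" for label in changed) / len(changed) if changed else 0.0

# Toy model responses: turning "abca" into a palindrome.
pairs = [
    ("abca", "abcba"),  # inserted 'b'  -> additive
    ("abca", "aba"),    # dropped 'c'   -> subtractive
    ("abca", "abcba"),  # additive again
]
print(additive_rate(pairs))  # 2 of the 3 edits are additive
```

Aggregating this rate over many prompts yields the kind of percentage reported in the abstract (e.g. Llama 3.1's 97.85% additive rate).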
Sep-4-2024