Are Smarter LLMs Safer? Exploring Safety-Reasoning Trade-offs in Prompting and Fine-Tuning
Ang Li, Yichuan Mo, Mingjie Li, Yifei Wang, Yisen Wang
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) have demonstrated remarkable success across various NLP benchmarks. However, excelling in complex tasks that require nuanced reasoning and precise decision-making demands more than raw language proficiency: LLMs must reason, i.e., think logically, draw from past experiences, and synthesize information to reach conclusions and take action. To enhance reasoning abilities, approaches such as prompting and fine-tuning have been widely explored. While these methods have led to clear improvements in reasoning, their impact on LLM safety remains less understood. In this work, we investigate the interplay between reasoning and safety in LLMs. We highlight the latent safety risks that arise as reasoning capabilities improve, shedding light on previously overlooked vulnerabilities. At the same time, we explore how reasoning itself can be leveraged to enhance safety, uncovering potential mitigation strategies. By examining both the risks and opportunities in reasoning-driven LLM safety, our study provides valuable insights for developing models that are not only more capable but also more trustworthy in real-world deployments.
Feb-13-2025
- Genre:
- Research Report
- Experimental Study (0.65)
- New Finding (0.92)
- Industry:
- Education (0.87)
- Health & Medicine > Therapeutic Area (0.67)
- Information Technology > Security & Privacy (1.00)
- Leisure & Entertainment (0.68)
- Materials > Chemicals > Commodity Chemicals (1.00)