Security-First AI: Foundations for Robust and Trustworthy Systems
arXiv.org Artificial Intelligence
The conversation around artificial intelligence (AI) often focuses on safety, transparency, accountability, alignment, and responsibility. However, AI security — that is, safeguarding data, models, and pipelines from adversarial manipulation — underpins all of these efforts. This manuscript posits that AI security must be prioritized as a foundational layer. We present a hierarchical view of AI challenges that distinguishes security from safety, and we argue for a security-first approach as a prerequisite for trustworthy and resilient AI systems. We discuss core threat models, key attack vectors, and emerging defense mechanisms, concluding that a metric-driven approach to AI security is essential for robust AI safety, transparency, and accountability.
Apr-24-2025