How cybersecurity is getting AI wrong
The cybersecurity industry is rapidly embracing the notion of "zero trust", where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted. In the same breath, however, the industry is adopting a growing number of AI-driven security solutions that rely on some type of trusted "ground truth" as a reference point.

This is not a hypothetical discussion. Organizations are introducing AI models into their security practices that affect almost every aspect of their business, and one of the most urgent open questions is whether regulators, compliance officers, security professionals, and employees will be able to trust these security models at all. Because AI models are sophisticated, opaque, automated, and often evolving, establishing trust in an AI-dominant environment is difficult. Yet without trust and accountability, some of these models might be considered risk-prohibitive and could eventually be under-utilized, marginalized, or banned altogether.
Jul-12-2021, 06:10:06 GMT