Machine Learning Detection and Response: Safeguarding AI with MLDR
In previous articles, we discussed the ubiquity of AI-based systems and the risks they face; we also described the common types of attacks against machine learning (ML) and compiled a list of publicly available adversarial ML tools and frameworks. Today, the time has come to talk about countermeasures. Over the past year, we've been working on something that fundamentally changes how we approach the security of ML and AI systems. The typical approach is robustness-first: hardening models by adding complexity, often at the expense of performance, efficacy, and training cost. To us, that felt like kicking the can down the road rather than addressing the core problem – that ML is under attack. Back in 2019, the future founders of HiddenLayer worked closely together at a next-generation antivirus company.
February 19, 2023