Why Machine Learning is vulnerable to adversarial attacks and how to fix it
In the media, this conversation often appears wrapped in worry about speculative future robots that will wipe out humanity. Yet real hints of how easily we can lose mastery over our AI creations show up in a practical problem: unintended behavior from poorly designed machine learning systems. Among these potential "AI accidents" are adversarial attacks. Consider a trained classifier that labels inputs about as well as a person would. An attacker then presents a new input containing subtle, maliciously crafted perturbations that cause the model to misbehave badly. What makes this troubling is that the failure is not a drop in the model's aggregate statistical performance: accuracy on ordinary data stays high, while the model confidently assigns the wrong label to the manipulated input.
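To make the mechanism concrete, here is a minimal sketch of one well-known way such inputs are generated, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. It is written in PyTorch and assumes, for illustration, a differentiable image classifier `model`, inputs scaled to the [0, 1] range, and a perturbation budget `epsilon`; none of these details come from any specific system discussed here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so the model's loss on true label y increases,
    changing each pixel by at most epsilon (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # populates x.grad with d(loss)/d(x)
    # Step each pixel in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image in [0, 1].
    return x_adv.clamp(0, 1).detach()
```

With a small `epsilon`, the perturbation is imperceptible to a human observer, yet it is often enough to flip the predicted class, which is exactly the gap between human and model judgment described above.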