mldr.resampling: Efficient Reference Implementations of Multilabel Resampling Algorithms

Rivera, Antonio J., Dávila, Miguel A., Elizondo, David, del Jesus, María J., Charte, Francisco

arXiv.org Artificial Intelligence

Multilabel Learning (MLL) [1] is one of the most common machine learning tasks today. It is based on the idea that each data sample is associated with a certain subset of labels. The full set of labels can be large, in many cases even exceeding the number of input features. As a result, it is common for some labels to occur in only a few samples, while others occur much more frequently. Label imbalance [2] is almost always present in MLL, and it is a serious obstacle to training good classifiers. Class imbalance is a well-known problem in traditional learning tasks such as binary and multiclass classification. Hundreds of articles [3, 4, 5], conference papers [6] and books [7] have been devoted to studying it and proposing solutions, the most popular being data resampling, cost-sensitive learning, and hybrids of these approaches [8, 9]. However, imbalanced learning in the MLL field presents specific aspects that make this problem more difficult to deal with.
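The degree of label imbalance the abstract describes is commonly quantified in the multilabel literature with the imbalance ratio per label (IRLbl) and its mean across labels (MeanIR). The sketch below is an illustrative Python implementation of these standard measures on a toy label matrix; it is not the API of the mldr.resampling package itself, which is an R library.

```python
import numpy as np

# Toy multilabel indicator matrix: rows = samples, columns = labels.
# Label 0 appears in every sample; labels 1 and 2 are rare.
Y = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
    [1, 0, 1],
])

def irlbl(Y):
    """Imbalance ratio per label: the count of the most frequent
    label divided by each label's own count (1.0 for the most
    frequent label, larger values for rarer labels)."""
    counts = Y.sum(axis=0)
    return counts.max() / counts

def mean_ir(Y):
    """Mean imbalance ratio across all labels; higher means the
    dataset is more imbalanced overall."""
    return irlbl(Y).mean()

print(irlbl(Y))    # [1. 4. 4.] — labels 1 and 2 are 4x rarer than label 0
print(mean_ir(Y))  # 3.0
```

Multilabel resampling algorithms of the kind the paper implements typically use these measures to decide which labels (and hence which samples) to oversample or undersample.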


Machine Learning Detection and Response: Safeguarding AI with MLDR

#artificialintelligence

In previous articles, we've discussed the ubiquity of AI-based systems and the risks they face; we've also described the common types of attacks against machine learning (ML) and built a list of publicly available adversarial ML tools and frameworks. Today, the time has come to talk about countermeasures. Over the past year, we've been working on something that fundamentally changes how we approach the security of ML and AI systems. The typical response is a robustness-first approach, which adds complexity to models, often at the expense of performance, efficacy, and training cost. To us, that felt like kicking the can down the road rather than addressing the core problem: that ML is under attack. Back in 2019, the future founders of HiddenLayer worked closely together at a next-generation antivirus company.