How to evaluate a machine learning model - part 3


This blog post continues my previous articles, part 1 and part 2. The average per-class accuracy is a variation of accuracy. It is defined as the average of the accuracy for each individual class. Accuracy is an example of what is known as a micro-average, while average per-class accuracy is a macro-average. In general, when there are different numbers of examples per class, the average per-class accuracy will differ from the accuracy. This matters because when the classes are imbalanced, i.e., there are many more examples of one class than of the other, accuracy gives an imprecise picture: the class with more examples dominates the statistic.
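To make the difference concrete, here is a small sketch (plain NumPy; the function name and toy data are my own) computing both the micro-average and the macro-average on an imbalanced toy dataset:

```python
import numpy as np

def micro_macro_accuracy(y_true, y_pred):
    """Compare overall (micro) accuracy with average per-class (macro) accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    micro = (y_true == y_pred).mean()
    classes = np.unique(y_true)
    per_class = [(y_pred[y_true == c] == c).mean() for c in classes]
    macro = float(np.mean(per_class))
    return micro, macro

# Imbalanced example: 8 negatives, 2 positives; classifier predicts all negative.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
micro, macro = micro_macro_accuracy(y_true, y_pred)
print(micro)  # 0.8 -- dominated by the majority class
print(macro)  # 0.5 -- averages 100% on class 0 with 0% on class 1
```

A classifier that never predicts the minority class still scores 80% accuracy here, while the average per-class accuracy exposes it as no better than chance.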

A Boosting Algorithm for Item Recommendation with Implicit Feedback

AAAI Conferences

Many recommendation tasks are formulated as top-N item recommendation problems based on users' implicit feedback instead of explicit feedback. Here, explicit feedback refers to users' ratings of items, while implicit feedback is derived from users' interactions with items, e.g., the number of times a user plays a song. In this paper, we propose a boosting algorithm named AdaBPR (Adaptive Boosting Personalized Ranking) for top-N item recommendation using users' implicit feedback. In the proposed framework, multiple homogeneous component recommenders are linearly combined to create an ensemble model for better recommendation accuracy. The component recommenders are constructed based on a fixed collaborative filtering algorithm by using a re-weighting strategy, which assigns a dynamic weight distribution to the observed user-item interactions. AdaBPR demonstrates its effectiveness on three datasets compared with strong baseline algorithms.
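As a rough illustration of the linear combination the abstract describes, the sketch below combines the score matrices of several component recommenders with per-component weights; the names, shapes, and weight values are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def ensemble_scores(component_scores, alphas):
    """Linearly combine component recommenders' (n_users, n_items) score
    matrices with per-component weights, as in a boosting-style ensemble."""
    return sum(a * s for a, s in zip(alphas, component_scores))

rng = np.random.default_rng(0)
comps = [rng.random((3, 5)) for _ in range(2)]   # two component recommenders
scores = ensemble_scores(comps, [0.7, 0.3])      # hypothetical weights
top2 = np.argsort(-scores, axis=1)[:, :2]        # top-N recommendation per user
```

In the actual algorithm the component weights are learned iteratively, with later components focused on the user-item interactions that earlier components ranked poorly.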

A Differentiable Ranking Metric Using Relaxed Sorting Operation for Top-K Recommender Systems Artificial Intelligence

A recommender system generates personalized recommendations for a user by computing the preference score of items, sorting the items according to the score, and filtering the top-K items with high scores. While sorting and ranking items are integral to this recommendation procedure, it is nontrivial to incorporate them in the process of end-to-end model training, since sorting is non-differentiable and hard to optimize with gradient-based updates. This incurs an inconsistency between the existing learning objectives and the ranking-based evaluation metrics of recommendation models. In this work, we present DRM (differentiable ranking metric), which mitigates the inconsistency and improves recommendation performance by employing a differentiable relaxation of ranking-based evaluation metrics. Via experiments with several real-world datasets, we demonstrate that joint learning of the DRM cost function on top of existing factor-based recommendation models significantly improves the quality of recommendations, in comparison with other state-of-the-art recommendation methods.
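One common way to relax sorting is a pairwise-sigmoid "soft rank": each item's rank is approximated by summing sigmoids of pairwise score differences, which is smooth and therefore usable in gradient-based training. The sketch below illustrates this generic idea; it is not necessarily the paper's exact DRM formulation:

```python
import numpy as np

def soft_rank(scores, tau=0.1):
    """Smooth ranks: rank_i ~ 0.5 + sum_j sigmoid((s_j - s_i) / tau).
    As tau -> 0 this approaches the hard ranks (1 = highest score)."""
    s = np.asarray(scores, dtype=float)
    diff = (s[None, :] - s[:, None]) / tau   # diff[i, j] = (s_j - s_i) / tau
    return 0.5 + (1.0 / (1.0 + np.exp(-diff))).sum(axis=1)

print(soft_rank([3.0, 1.0, 2.0]))  # approximately [1., 3., 2.]
```

Because the output is a smooth function of the scores, a ranking-based metric computed from these soft ranks can be differentiated and optimized directly, unlike one computed from a hard sort.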

Class-Weighted Evaluation Metrics for Imbalanced Data Classification Artificial Intelligence

Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task. Balanced Accuracy is a popular metric used to evaluate a classifier's prediction performance under such scenarios. However, this metric falls short when classes vary in importance, especially when class importance is skewed differently from class cardinality distributions. In this paper, we propose a simple and general-purpose evaluation framework for imbalanced data classification that is sensitive to arbitrary skews in class cardinalities and importances. Experiments with several state-of-the-art classifiers tested on real-world datasets and benchmarks from two different domains show that our new framework is more effective than Balanced Accuracy - not only in evaluating and ranking model predictions, but also in training the models themselves.

For a broad range of machine learning (ML) tasks, predictive modeling in the presence of imbalanced datasets - those with severe distribution skews - has been a longstanding problem (He & Garcia, 2009; Sun et al., 2009; He & Ma, 2013; Branco et al., 2016; Hilario et al., 2018; Johnson & Khoshgoftaar, 2019). Imbalanced training datasets lead to models with prediction bias towards majority classes, which in turn results in misclassification of the underrepresented ones.
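A minimal sketch of the contrast between Balanced Accuracy and a class-importance-weighted variant: both average per-class recall, but the latter lets the weights depart from the uniform case. The `importance` parameter here is an assumed illustration of the idea; the paper's exact weighting framework may differ.

```python
import numpy as np

def weighted_accuracy(y_true, y_pred, importance=None):
    """Average per-class recall, weighted by class importance.
    With importance=None this reduces to Balanced Accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    recalls = np.array([(y_pred[y_true == c] == c).mean() for c in classes])
    if importance is None:
        w = np.full(len(recalls), 1.0 / len(recalls))  # equal class weights
    else:
        w = np.asarray(importance, dtype=float)
        w = w / w.sum()                                # normalize importances
    return float(recalls @ w)

y_true = [0] * 8 + [1] * 2
y_pred = [0] * 9 + [1]         # class-0 recall = 1.0, class-1 recall = 0.5
bal = weighted_accuracy(y_true, y_pred)           # 0.75 (Balanced Accuracy)
imp = weighted_accuracy(y_true, y_pred, [1, 3])   # 0.625 (class 1 matters more)
```

When the minority class is three times as important, the same predictions score noticeably worse, which is exactly the sensitivity Balanced Accuracy lacks.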

Session-Based Recommender Systems


This is an applied research report by Cloudera Fast Forward. We write reports about emerging technologies, and conduct experiments to explore what's possible. Read our full report on Session-based Recommender Systems below, or download the PDF, and be sure to check out our github repo for the Experiments section.

Being able to recommend an item of interest to a user (based on their past preferences) is a highly relevant problem in practice. A key trend over the past few years has been session-based recommendation algorithms that provide recommendations solely based on a user's interactions in an ongoing session, and which do not require the existence of user profiles or their entire historical preferences. This report explores a simple, yet powerful, NLP-based approach (word2vec) to recommend a next item to a user. While NLP-based approaches are generally employed for linguistic tasks, here we exploit them to learn the structure induced by a user's behavior or an item's nature.

Recommendation systems have become a cornerstone of modern life, spanning sectors that include online retail, music and video streaming, and even content publishing. These systems help us navigate the sheer volume of content on the internet, allowing us to discover what's interesting or important to us. When implemented correctly, recommendation systems help us navigate efficiently and make more informed decisions. While this report is not comprehensive, we will touch on a variety of approaches to recommendation systems, and dig deep into one approach in particular. We'll demonstrate how we used that approach to build a recommendation system from the ground up for an e-commerce use case, and showcase our experimental findings. Recommendation systems are not new, and they have already achieved great success over the past ten years through a variety of approaches.
These classic recommendation systems can be broadly categorized as content-based, as collaborative filtering-based, or as hybrid approaches that combine aspects of the two. At a high level, content-based filtering makes recommendations based on user preferences for product features, as identified through either the user's previous actions or explicit feedback.
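The sessions-as-sentences idea the report builds on can be sketched without external dependencies: treat each session as a "sentence" of item "words", build an item co-occurrence matrix, and recommend by similarity. In practice a library such as gensim's Word2Vec would learn dense embeddings instead of raw co-occurrence counts; the item IDs below are hypothetical.

```python
import numpy as np
from itertools import combinations

# Toy sessions: each is an ordered list of items a user interacted with.
sessions = [["shoes", "socks", "laces"],
            ["shoes", "laces"],
            ["hat", "scarf"],
            ["socks", "shoes"]]

items = sorted({i for s in sessions for i in s})
idx = {it: k for k, it in enumerate(items)}
co = np.zeros((len(items), len(items)))
for s in sessions:
    for a, b in combinations(set(s), 2):       # co-occurrence within a session
        co[idx[a], idx[b]] += 1
        co[idx[b], idx[a]] += 1

def recommend(item, k=2):
    """Return the k items most cosine-similar to `item` by co-occurrence."""
    v = co[idx[item]]
    norms = np.linalg.norm(co, axis=1) * (np.linalg.norm(v) or 1.0)
    sims = co @ v / np.where(norms == 0, 1.0, norms)
    sims[idx[item]] = -np.inf                  # exclude the query item itself
    return [items[j] for j in np.argsort(-sims)[:k]]

print(recommend("shoes"))
```

Items that tend to appear in the same sessions ("shoes" with "socks" and "laces") end up similar, with no user profile or long-term history required, which is the core appeal of the session-based setting.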