Decoupled Recommender Systems: Exploring Alternative Recommender Ecosystem Designs
Buhayh, Anas, McKinnie, Elizabeth, Burke, Robin
Recommender ecosystems are an emerging subject of research. Such research examines how the characteristics of algorithms, recommendation consumers, and item providers influence system dynamics and long-term outcomes. One architectural possibility that has not yet been widely explored in this line of research is a configuration in which recommendation algorithms are decoupled from the platforms they serve, sometimes called the "friendly neighborhood algorithm store" or "middleware" model. We are particularly interested in how such architectures might offer a range of different distributions of utility across consumers, providers, and recommendation platforms. In this paper, we create a model of a recommendation ecosystem that incorporates algorithm choice and examine the outcomes of such a design.
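To make the notion of algorithm choice concrete, here is a minimal sketch of an ecosystem simulation in which consumers can switch between competing recommenders. The algorithms, utility threshold, and all data are invented for illustration and do not reproduce the paper's model.

```python
import random

random.seed(0)
N_ITEMS, N_USERS, N_ROUNDS, K = 50, 20, 100, 5
popularity = [random.random() for _ in range(N_ITEMS)]
tastes = [{i: random.random() for i in range(N_ITEMS)} for _ in range(N_USERS)]

def popular_recommender(user, k):
    # Ranks purely by global popularity (platform-style default).
    return sorted(range(N_ITEMS), key=lambda i: -popularity[i])[:k]

def personalized_recommender(user, k):
    # Ranks by the user's own tastes (a third-party "middleware" option).
    return sorted(range(N_ITEMS), key=lambda i: -tastes[user][i])[:k]

algorithms = {"popular": popular_recommender, "personal": personalized_recommender}
choice = {u: "popular" for u in range(N_USERS)}  # everyone starts on the default

consumer_utility = {name: 0.0 for name in algorithms}
item_exposure = [0] * N_ITEMS

for _ in range(N_ROUNDS):
    for u in range(N_USERS):
        recs = algorithms[choice[u]](u, K)
        utility = sum(tastes[u][i] for i in recs) / K
        consumer_utility[choice[u]] += utility
        for i in recs:
            item_exposure[i] += 1
        # Decoupled design: an unsatisfied consumer may switch algorithms.
        if utility < 0.5:
            choice[u] = "personal" if choice[u] == "popular" else "popular"

print("utility accrued by algorithm:", consumer_utility)
print("share of items ever exposed:", sum(e > 0 for e in item_exposure) / N_ITEMS)
```

Even this toy loop exhibits the tradeoff of interest: as consumers migrate between algorithms, utility and item exposure are redistributed across stakeholders.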
De-centering the (Traditional) User: Multistakeholder Evaluation of Recommender Systems
Burke, Robin, Adomavicius, Gediminas, Bogers, Toine, Di Noia, Tommaso, Kowald, Dominik, Neidhardt, Julia, Özgöbek, Özlem, Pera, Maria Soledad, Tintarev, Nava, Ziegler, Jürgen
Expanding the frame of evaluation to include other parties, as well as the ecosystem in which the system is deployed, leads us to a multistakeholder view of recommender system evaluation as defined in [2]: "A multistakeholder evaluation is one in which the quality of recommendations is assessed across multiple groups of stakeholders." In this article, we provide (i) an overview of the types of recommendation stakeholders that can be considered in conducting such evaluations, (ii) a discussion of the considerations and values that enter into developing measures that capture outcomes of interest for a diversity of stakeholders, (iii) an outline of a methodology for developing and applying multistakeholder evaluation, and (iv) three examples of different multistakeholder scenarios, including derivations of evaluation metrics for different stakeholder groups in each scenario. The variety of possible stakeholders we identified as part of the general recommendation ecosystem is suggested in Figure 1 and defined here, using the terminology from [1, 2]: Recommendation consumers are the traditional recommender system users to whom recommendations are delivered and toward whom typical forms of recommender system evaluation are oriented. Item providers form the general class of individuals or entities who create or otherwise stand behind the items being recommended.
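As a toy illustration of deriving metrics for different stakeholder groups, the sketch below computes a consumer-side precision metric and a provider-side exposure-share metric over the same recommendation lists. The data and the specific metrics are illustrative assumptions, not the article's derivations.

```python
from collections import Counter

# Toy recommendation lists: user -> ranked item ids; item -> provider.
recs = {"u1": ["a", "b", "c"], "u2": ["a", "c", "d"], "u3": ["a", "b", "d"]}
provider_of = {"a": "p1", "b": "p1", "c": "p2", "d": "p3"}
relevant = {"u1": {"a", "c"}, "u2": {"c"}, "u3": {"d"}}

# Consumer-side metric: mean precision@k.
k = 3
precision = sum(len(set(lst[:k]) & relevant[u]) / k
                for u, lst in recs.items()) / len(recs)

# Provider-side metric: rank-discounted exposure share per provider.
exposure = Counter()
for lst in recs.values():
    for rank, item in enumerate(lst, start=1):
        exposure[provider_of[item]] += 1.0 / rank
total = sum(exposure.values())
shares = {p: round(e / total, 2) for p, e in exposure.items()}

print(f"consumer precision@{k}: {precision:.2f}")
print("provider exposure shares:", shares)
```

A multistakeholder evaluation reports both numbers side by side, since a list that scores well for consumers can still concentrate nearly all exposure on one provider.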
Social Choice for Heterogeneous Fairness in Recommendation
Aird, Amanda, Štefancová, Elena, All, Cassidy, Voida, Amy, Homola, Martin, Mattei, Nicholas, Burke, Robin
Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders that may have competing interests. Previous work in this area has often been limited by fixed, single-objective definitions of fairness, built into algorithms or optimization criteria that are applied to a single fairness dimension or, at most, applied identically across dimensions. These narrow conceptualizations limit the ability to adapt fairness-aware solutions to the wide range of stakeholder needs and fairness definitions that arise in practice. Our work approaches recommendation fairness from the standpoint of computational social choice, using a multi-agent framework. In this paper, we explore the properties of different social choice mechanisms and demonstrate the successful integration of multiple, heterogeneous fairness definitions across multiple data sets.
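A minimal sketch of what heterogeneous fairness definitions can look like in a multi-agent setting: two agents apply different, independent tests to the same recommendation list. The agents, thresholds, and item metadata are hypothetical, not the paper's formulation.

```python
rec_list = ["a", "b", "c", "d", "e"]
is_protected_provider = {"a": False, "b": True, "c": False, "d": False, "e": True}
genre_of = {"a": "pop", "b": "jazz", "c": "pop", "d": "folk", "e": "jazz"}

def provider_parity_agent(lst):
    # A group-fairness definition: protected providers get >= 40% of slots.
    share = sum(is_protected_provider[i] for i in lst) / len(lst)
    return share >= 0.4

def diversity_agent(lst):
    # A very different definition: at least 3 distinct genres appear.
    return len({genre_of[i] for i in lst}) >= 3

agents = {"provider_parity": provider_parity_agent, "diversity": diversity_agent}
for name, agent in agents.items():
    print(name, "satisfied:", agent(rec_list))
```

The point of the multi-agent framing is that each concern keeps its own definition; a social choice mechanism then arbitrates when the agents' preferences over lists conflict.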
Exploring Social Choice Mechanisms for Recommendation Fairness in SCRUF
Aird, Amanda, All, Cassidy, Farastu, Paresha, Stefancova, Elena, Sun, Joshua, Mattei, Nicholas, Burke, Robin
Fairness problems in recommender systems often have a complexity in practice that is not adequately captured in simplified research formulations. A social choice formulation of the fairness problem, operating within a multi-agent architecture of fairness concerns, offers a flexible and multi-aspect alternative to fairness-aware recommendation approaches. Leveraging social choice allows for increased generality and the possibility of tapping into well-studied social choice algorithms for resolving the tension between multiple, competing fairness concerns. This paper explores a range of options for choice mechanisms in multi-aspect fairness applications using both real and synthetic data and shows that different classes of choice and allocation mechanisms yield different but consistent fairness/accuracy tradeoffs. We also show that a multi-agent formulation offers flexibility in adapting to user population dynamics.
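To illustrate how different choice mechanisms can be swapped into the same aggregation step, the sketch below ranks one candidate pool with a weighted-sum rule and with Borda count, which produce different orderings from identical ballots. The scores and weights are synthetic, and this is not the SCRUF implementation.

```python
candidates = ["a", "b", "c", "d"]
# Each "ballot" scores candidates: one from the base recommender
# (accuracy), one from a fairness agent favoring under-exposed items.
ballots = {
    "accuracy": {"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.2},
    "fairness": {"a": 0.1, "b": 0.8, "c": 0.9, "d": 0.3},
}

def weighted_sum(ballots, weights):
    # Cardinal mechanism: blend the raw scores.
    return sorted(candidates,
                  key=lambda c: -sum(w * ballots[b][c] for b, w in weights.items()))

def borda(ballots):
    # Ordinal mechanism: only the rankings matter, not score magnitudes.
    points = {c: 0 for c in candidates}
    for scores in ballots.values():
        ranked = sorted(candidates, key=lambda c: -scores[c])
        for pos, c in enumerate(ranked):
            points[c] += len(candidates) - 1 - pos
    return sorted(candidates, key=lambda c: -points[c])

print("weighted sum:", weighted_sum(ballots, {"accuracy": 0.8, "fairness": 0.2}))
print("borda:", borda(ballots))
```

With these ballots, the weighted sum keeps the accuracy favorite on top while Borda promotes the fairness agent's choices, a small-scale version of the mechanism-dependent tradeoffs the paper measures.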
Dynamic fairness-aware recommendation through multi-agent social choice
Aird, Amanda, Farastu, Paresha, Sun, Joshua, Voida, Amy, Mattei, Nicholas, Burke, Robin
Algorithmic fairness in the context of personalized recommendation presents significantly different challenges from those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes between a protected and an unprotected group, and have built algorithmic interventions on this basis. We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. We propose a model to formalize multistakeholder fairness in recommender systems as a two-stage social choice problem. In particular, we express recommendation fairness as a novel combination of an allocation and an aggregation problem, which together integrate fairness concerns and personalized recommendation provisions, and we derive new recommendation techniques based on this formulation. Simulations demonstrate the ability of the framework to integrate multiple fairness concerns in a dynamic way.
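A minimal sketch of the two-stage idea, under invented agent definitions and weights: stage one allocates a fairness agent to the current recommendation opportunity in proportion to its deficit, and stage two aggregates that agent's preferences with the personalized scores.

```python
import random

random.seed(1)
candidates = {"a": 0.9, "b": 0.6, "c": 0.5, "d": 0.3}   # personalized scores
agents = {
    # Each agent: (items it promotes, how far it lags its fairness target).
    "minority_providers": ({"c", "d"}, 0.5),
    "local_creators": ({"b", "d"}, 0.2),
}

# Stage 1 (allocation): pick one agent, favoring those furthest behind.
deficits = {name: lag for name, (_, lag) in agents.items()}
total = sum(deficits.values())
chosen = random.choices(list(deficits),
                        weights=[d / total for d in deficits.values()])[0]

# Stage 2 (aggregation): blend the chosen agent's preference into the ranking.
promoted, _ = agents[chosen]
blended = {i: s + (0.3 if i in promoted else 0.0) for i, s in candidates.items()}
ranking = sorted(blended, key=blended.get, reverse=True)
print("allocated agent:", chosen)
print("final ranking:", ranking)
```

Because allocation is repeated per opportunity and deficits change as recommendations are delivered, the blend of fairness concerns shifts dynamically over time.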
A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems
Mansoury, Masoud, Abdollahpouri, Himan, Pechenizkiy, Mykola, Mobasher, Bamshad, Burke, Robin
Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. A specific form of fairness is supplier exposure fairness, where the objective is to ensure equitable coverage of items across all suppliers in recommendations provided to users. This is especially important in multistakeholder recommendation scenarios where it may be important to optimize utilities not just for the end user, but also for other stakeholders, such as item sellers or producers, who desire a fair representation of their items. This type of supplier fairness is sometimes accomplished by attempting to increase aggregate diversity in order to mitigate popularity bias and to improve the coverage of long-tail items in recommendations. In this paper, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation to improve exposure fairness for items and suppliers. The algorithm iteratively adds high-quality items that have low visibility, or items from suppliers with low exposure, to the users' final recommendation lists. A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
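The following is a simplified greedy stand-in for this style of post-processing, not the published graph-based FairMatch algorithm: it swaps an acceptable-quality item from an unexposed supplier into the tail of each top-k list. Scores, suppliers, and the quality threshold are synthetic.

```python
from collections import Counter

K = 3
# Candidate pool per user: (item, predicted score); supplier per item.
pools = {
    "u1": [("a", 0.9), ("b", 0.8), ("c", 0.7), ("x", 0.65), ("y", 0.6)],
    "u2": [("a", 0.9), ("c", 0.8), ("b", 0.7), ("y", 0.68), ("x", 0.5)],
}
supplier_of = {"a": "s1", "b": "s1", "c": "s1", "x": "s2", "y": "s3"}

exposure = Counter()
final = {}
for user, pool in pools.items():
    top = [item for item, _ in pool[:K]]
    # Greedy repair: replace the lowest-scored slot with a high-quality
    # candidate from a supplier that has received no exposure yet.
    for item, score in pool[K:]:
        if exposure[supplier_of[item]] == 0 and score >= 0.6:
            top[-1] = item
            break
    final[user] = top
    exposure.update(supplier_of[i] for i in top)

print(final)
print("supplier exposure:", dict(exposure))
```

Without the repair step, every slot goes to supplier s1; with it, s2 and s3 each gain exposure at a small cost in predicted score, which is the relevance/fairness tradeoff the paper evaluates at scale.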
Fairness and Transparency in Recommendation: The Users' Perspective
Sonboli, Nasim, Smith, Jessie J., Berenfus, Florencia Cabral, Burke, Robin, Fiesler, Casey
Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features -- informed by the needs of our participants -- that could improve user understanding of and trust in fairness-aware recommender systems.
User-centered Evaluation of Popularity Bias in Recommender Systems
Abdollahpouri, Himan, Mansoury, Masoud, Burke, Robin, Mobasher, Bamshad, Malthouse, Edward
Recommendation and ranking systems are known to suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail, less popular items. The effectiveness of these approaches is often assessed using different metrics to evaluate the extent to which over-concentration on popular items is reduced. However, not much attention has been given to the user-centered evaluation of this bias: how different users, with different levels of interest in popular items, are affected by such algorithms. In this paper, we show the limitations of existing metrics for evaluating popularity bias mitigation from the users' perspective, and we propose a new metric that can address these limitations. In addition, we present an effective approach that mitigates popularity bias from the user-centered point of view. Finally, we investigate several state-of-the-art approaches proposed in recent years to mitigate popularity bias and evaluate their performance using the existing metrics and also from the users' perspective. Our experimental results using two publicly available datasets show that existing popularity bias mitigation techniques ignore users' tolerance for popular items. Our proposed user-centered method can tackle popularity bias effectively for different users while also improving performance on the existing metrics.
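As a rough illustration of why a user-centered view matters (this is not the metric proposed in the paper), the sketch below compares each user's appetite for popular items in their interaction history against the popularity of what they are recommended.

```python
item_popularity = {"a": 0.9, "b": 0.8, "c": 0.3, "d": 0.1}
profiles = {"u1": ["a", "b"], "u2": ["c", "d"]}          # interaction history
recs = {"u1": ["a", "b"], "u2": ["a", "b"]}              # recommended items

def mean_pop(items):
    return sum(item_popularity[i] for i in items) / len(items)

for u in profiles:
    gap = mean_pop(recs[u]) - mean_pop(profiles[u])
    print(f"{u}: profile popularity {mean_pop(profiles[u]):.2f}, "
          f"recommendation popularity {mean_pop(recs[u]):.2f}, gap {gap:+.2f}")
```

The mainstream-taste user u1 sees no distortion, while the niche-taste user u2 is pushed far toward popular items; an aggregate diversity metric over both lists would hide exactly this per-user disparity.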
"And the Winner Is...": Dynamic Lotteries for Multi-group Fairness-Aware Recommendation
Sonboli, Nasim, Burke, Robin, Mattei, Nicholas, Eskandanian, Farzad, Gao, Tian
As recommender systems are being designed and deployed for an increasing number of socially-consequential applications, it has become important to consider what fairness properties these systems exhibit. There has been considerable research on recommendation fairness. However, we argue that the previous literature has been based on simple, uniform, and often uni-dimensional fairness assumptions that do not recognize the real-world complexities of fairness-aware applications. In this paper, we explicitly represent the design decisions that enter into the trade-off between accuracy and fairness across multiply-defined and intersecting protected groups, supporting multiple fairness metrics. The framework also allows the recommender to adjust its performance based on a historical view of the recommendations delivered over a time horizon, dynamically rebalancing between fairness concerns. Within this framework, we formulate lottery-based mechanisms for choosing between fairness concerns, and demonstrate their performance in two recommendation domains.
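A minimal sketch of a dynamic lottery, with invented fairness concerns and target shares: each round, a concern wins with probability proportional to how far its historical share of attention falls below its target, so the lottery rebalances over time.

```python
import random

random.seed(7)
targets = {"gender": 0.5, "country": 0.3, "age": 0.2}   # desired attention share
wins = {c: 0 for c in targets}

for round_no in range(1000):
    total_rounds = max(1, round_no)
    # Deficit: how far each concern currently falls below its target share.
    deficits = {c: max(0.0, t - wins[c] / total_rounds)
                for c, t in targets.items()}
    if sum(deficits.values()) == 0:
        winner = random.choice(list(targets))  # all targets met: uniform draw
    else:
        winner = random.choices(list(targets),
                                weights=[deficits[c] for c in targets])[0]
    wins[winner] += 1

print({c: wins[c] / 1000 for c in targets})
```

Over many rounds the empirical win shares converge toward the targets, and a concern that falls behind (for example, after a stretch of rounds favoring another group) automatically regains probability mass.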
Opportunistic Multi-aspect Fairness through Personalized Re-ranking
Sonboli, Nasim, Eskandanian, Farzad, Burke, Robin, Liu, Weiwen, Mobasher, Bamshad
As recommender systems have become more widespread and moved into areas with greater social impact, such as employment and housing, researchers have begun to seek ways to ensure fairness in the results that such systems produce. This work has primarily focused on developing recommendation approaches in which fairness metrics are jointly optimized along with recommendation accuracy. However, previous work has largely ignored how individual preferences may limit the ability of an algorithm to produce fair recommendations. Furthermore, with few exceptions, researchers have only considered scenarios in which fairness is measured relative to a single sensitive feature or attribute (such as race or gender). In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results. Specifically, we show that our opportunistic and metric-agnostic approach achieves a better trade-off between accuracy and fairness than prior re-ranking approaches, and does so across multiple fairness dimensions.
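To illustrate the opportunistic idea, the sketch below boosts an item's score along a fairness dimension only in proportion to the user's learned tolerance for that dimension; the dimensions, tolerances, and blending weight are all hypothetical.

```python
candidates = {"a": 0.9, "b": 0.7, "c": 0.6, "d": 0.5}   # base scores for one user
item_flags = {  # which fairness dimensions each item would advance
    "a": set(),
    "b": {"long_tail"},
    "c": {"minority_provider"},
    "d": {"long_tail", "minority_provider"},
}
# Learned per-user tolerances (e.g., share of such items in the profile).
tolerance = {"long_tail": 0.6, "minority_provider": 0.1}

LAMBDA = 0.3  # global fairness/accuracy blending weight

def rerank(scores):
    def boosted(item):
        boost = sum(tolerance[dim] for dim in item_flags[item])
        return scores[item] + LAMBDA * boost
    return sorted(scores, key=boosted, reverse=True)

print(rerank(candidates))
```

For this user, long-tail items are promoted aggressively while minority-provider items receive only a small nudge; a user with the opposite tolerances would see the reverse, which is how the approach stays personalized while remaining agnostic to the underlying fairness metric.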