Exploration and Persuasion

Slivkins, Aleksandrs

arXiv.org Artificial Intelligence

How can one incentivize self-interested agents to explore when they prefer to exploit? Consider a population of self-interested agents that make decisions under uncertainty. They "explore" to acquire new information and "exploit" this information to make good decisions. Collectively they need to balance these two objectives, but their incentives are skewed toward exploitation. This is because exploration is costly, but its benefits are spread over many agents in the future. "Incentivized Exploration" addresses this issue via strategic communication. Consider a benign "principal" that can communicate with the agents and make recommendations, but cannot force the agents to comply. Moreover, suppose the principal can observe the agents' decisions and the outcomes of these decisions. The goal is to design a communication and recommendation policy that (i) achieves a desirable balance between exploration and exploitation, and (ii) incentivizes the agents to follow recommendations. What makes it feasible is "information asymmetry": the principal knows more than any one agent, as it collects information from many. It is essential that the principal does not fully reveal all its knowledge to the agents. Incentivized exploration combines two important problems in machine learning and theoretical economics, respectively. First, if agents always follow recommendations, the principal faces a multi-armed bandit problem: essentially, design an algorithm that balances exploration and exploitation. Second, interaction with a single agent corresponds to "Bayesian persuasion", where a principal leverages information asymmetry to convince an agent to take a particular action. We provide a brief but self-contained introduction to each problem through the lens of incentivized exploration, solving a key special case of the former as a sub-problem of the latter.
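
The scheme described here can be made concrete with a small Monte Carlo sketch in the style of the classic two-arm "hidden exploration" policy: the principal recommends the ex-ante worse arm either because arm 1's observed outcome was bad (exploitation) or, with small probability, for exploration, and the agent cannot tell which case occurred. The priors, the exploration rate eps, and the Bernoulli reward model below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
eps = 0.1  # probability of hidden exploration after a good arm-1 outcome

# Hypothetical priors: arm 1 looks better ex ante, so a myopic agent
# would never try arm 2 on their own.
theta1 = rng.beta(2, 2, n)      # E[theta1] = 0.50
theta2 = rng.beta(9, 11, n)     # E[theta2] = 0.45

# Agent 1 pulls arm 1; the principal observes the outcome.
r1 = rng.random(n) < theta1

# Recommendation policy: recommend arm 2 after a bad outcome (exploitation,
# since arm 1's posterior mean drops to 0.40 < 0.45) and, with small
# probability eps, after a good outcome too (hidden exploration).
rec2 = ~r1 | (r1 & (rng.random(n) < eps))

# Obedience check: conditioned only on *being recommended arm 2*, the agent
# cannot distinguish exploration from exploitation, and arm 2 wins on average.
gain = theta2[rec2].mean() - theta1[rec2].mean()
print(f"E[theta2 - theta1 | rec = 2] = {gain:.3f}  (>= 0 => incentive compatible)")
```

With these numbers the conditional gain is about +0.03, so following the recommendation is a best response even though arm 2 looks worse a priori.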


On Causally Disentangled State Representation Learning for Reinforcement Learning based Recommender Systems

Wang, Siyu, Chen, Xiaocong, Yao, Lina

arXiv.org Artificial Intelligence

In Reinforcement Learning-based Recommender Systems (RLRS), the complexity and dynamism of user interactions often result in high-dimensional and noisy state spaces, making it challenging to discern which aspects of the state truly drive the decision-making process. This issue is exacerbated by the evolving nature of user preferences and behaviors, requiring the recommender system to adaptively focus on the most relevant information for decision-making while preserving generalizability. To tackle this problem, we introduce an innovative causal approach for decomposing the state and extracting Causal-InDispensable State Representations (CIDS) in RLRS. Our method concentrates on identifying the Directly Action-Influenced State Variables (DAIS) and Action-Influence Ancestors (AIA), which are essential for making effective recommendations. By leveraging conditional mutual information, we develop a framework that not only discerns the causal relationships within the generative process but also isolates critical state variables from the typically dense and high-dimensional state representations. We provide theoretical evidence for the identifiability of these variables. Then, by making use of the identified causal relationships, we construct causal-indispensable state representations, enabling the training of policies over a more advantageous subset of the agent's state space. We demonstrate the efficacy of our approach through extensive experiments, showing that our method outperforms state-of-the-art methods.
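
As a toy illustration of the selection criterion, here is a plug-in estimator of conditional mutual information for discrete variables; the paper works with learned generative models and high-dimensional states, so everything below (the variable names and the synthetic action model) is a simplified assumption.

```python
import numpy as np

def conditional_mi(x, a, z):
    """Plug-in estimate of I(X; A | Z) for discrete 1-D arrays."""
    cmi = 0.0
    for zv in np.unique(z):
        m = z == zv
        pz = m.mean()
        for xv in np.unique(x[m]):
            for av in np.unique(a[m]):
                pxa = np.mean((x[m] == xv) & (a[m] == av))    # p(x, a | z)
                px, pa = np.mean(x[m] == xv), np.mean(a[m] == av)
                if pxa > 0:
                    cmi += pz * pxa * np.log(pxa / (px * pa))
    return cmi

# Toy check: s1 drives the action, s2 is noise; CMI separates them.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 20_000)                   # conditioning context
s1 = rng.integers(0, 2, 20_000)                  # action-influencing state variable
s2 = rng.integers(0, 2, 20_000)                  # irrelevant state variable
action = (s1 ^ z) & (rng.random(20_000) < 0.9)   # depends on s1 (and z), not s2
print(conditional_mi(s1, action, z))             # clearly positive
print(conditional_mi(s2, action, z))             # approximately zero
```

Keeping only the state dimensions with CMI above a threshold is the discrete analogue of isolating DAIS-style variables.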


Harm Mitigation in Recommender Systems under User Preference Dynamics

Chee, Jerry, Kalyanaraman, Shankar, Ernala, Sindhu Kiranmai, Weinsberg, Udi, Dean, Sarah, Ioannidis, Stratis

arXiv.org Artificial Intelligence

We consider a recommender system that takes into account the interplay between recommendations, the evolution of user interests, and harmful content. We model the impact of recommendations on user behavior, particularly the tendency to consume harmful content. We seek recommendation policies that establish a tradeoff between maximizing click-through rate (CTR) and mitigating harm. We establish conditions under which the user profile dynamics have a stationary point, and propose algorithms for finding an optimal recommendation policy at stationarity. We experiment on a semi-synthetic movie recommendation setting initialized with real data and observe that our policies outperform baselines at simultaneously maximizing CTR and mitigating harm.
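
A minimal sketch of the feedback loop being analyzed, under assumed linear-softmax dynamics: the user profile drifts toward what the policy serves, and a harm penalty lam trades click score against harm at the resulting stationary point. The dynamics and all parameters are invented for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 8, 50
items = rng.normal(size=(n_items, d))
items /= np.linalg.norm(items, axis=1, keepdims=True)
harm = rng.random(n_items) * (items[:, 0] > 0)   # some items carry harm
lam, alpha = 2.0, 0.1                            # harm penalty, profile inertia

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

u = rng.normal(size=d)
u /= np.linalg.norm(u)
for _ in range(500):                             # iterate the profile dynamics
    pick = softmax(items @ u - lam * harm)       # harm-penalized recommendation policy
    consumed = pick @ items                      # expected consumed item
    u_next = (1 - alpha) * u + alpha * consumed  # profile drifts toward consumption
    if np.linalg.norm(u_next - u) < 1e-9:        # stationary point reached
        break
    u = u_next

p = softmax(items @ u - lam * harm)
print("stationary expected click score:", float(p @ (items @ u)))
print("stationary expected harm:", float(p @ harm))
```

Sweeping lam traces out the CTR-versus-harm tradeoff at stationarity.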


Committing Bandits

Neural Information Processing Systems

We consider a multi-armed bandit problem with two phases. The first is an experimentation phase, during which the decision maker is free to explore multiple options. In the second phase the decision maker has to commit to one of the arms and stick with it. Cost is incurred during both phases, with a higher cost during the experimentation phase. We analyze the regret in this setup, propose algorithms, and provide upper and lower bounds that depend on the ratio of the duration of the experimentation phase to the duration of the commitment phase. Our analysis reveals that if given the choice, it is optimal to experiment for Θ(ln T) steps and then commit, where T is the time horizon.
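
The Θ(ln T) finding suggests a simple explore-then-commit sketch: pull each arm on the order of ln T times, then commit to the empirical best. The Gaussian reward model and the constant c are assumptions; in practice c should grow as the squared inverse of the minimum gap between arms.

```python
import numpy as np

def explore_then_commit(means, T, c=4.0, seed=0):
    """Two-phase bandit: experiment with every arm for ~c*ln(T) pulls,
    then commit to the empirically best arm for the rest of the horizon."""
    rng = np.random.default_rng(seed)
    k = len(means)
    m = max(1, int(c * np.log(T)))                 # per-arm experimentation budget
    samples = rng.normal(loc=np.repeat(means, m))  # unit-variance Gaussian rewards
    est = samples.reshape(k, m).mean(axis=1)       # empirical mean of each arm
    best = int(np.argmax(est))                     # the arm we commit to
    regret = (means.max() - means).sum() * m       # expected exploration cost
    regret += (means.max() - means[best]) * (T - k * m)  # expected commitment cost
    return best, regret

print(explore_then_commit(means=np.array([0.4, 0.5, 0.6]), T=100_000))
```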


The Best Decisions Are Not the Best Advice: Making Adherence-Aware Recommendations

Grand-Clément, Julien, Pauphilet, Jean

arXiv.org Artificial Intelligence

Many high-stakes decisions follow an expert-in-the-loop structure: a human operator receives recommendations from an algorithm but is the ultimate decision maker. Hence, the algorithm's recommendation may differ from the decision actually implemented in practice. However, most algorithmic recommendations are obtained by solving an optimization problem that assumes recommendations will be perfectly implemented. We propose an adherence-aware optimization framework to capture the dichotomy between the recommended and the implemented policy, and analyze the impact of partial adherence on the optimal recommendation. We show that overlooking the partial adherence phenomenon, as most recommendation engines currently do, can lead to arbitrarily severe performance deterioration compared with both the current human baseline performance and what the recommendation algorithm expects. Our framework also provides useful tools to analyze the structure of, and to compute, optimal recommendation policies that are naturally immune to such human deviations and guaranteed to improve upon the baseline policy.
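
A minimal sketch of what an adherence-aware Bellman recursion could look like, assuming the simplest adherence model: each recommendation is executed with probability theta and the baseline action is taken otherwise. The tabular setup and random MDP are illustrative, not the paper's exact formulation.

```python
import numpy as np

def adherence_aware_vi(P, R, pi_b, theta, gamma=0.95, iters=2000):
    """Value iteration when a recommendation is followed with probability
    theta and the baseline policy pi_b is used otherwise.
    P: (S, A, S) transitions, R: (S, A) rewards, pi_b: (S,) baseline actions."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V                  # (S, A) state-action values
        Qb = Q[np.arange(S), pi_b]             # value of the baseline action
        V_new = np.max(theta * Q + (1 - theta) * Qb[:, None], axis=1)
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new
    rec = np.argmax(theta * Q + (1 - theta) * Qb[:, None], axis=1)
    return V, rec

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(4, 3))     # random 4-state, 3-action MDP
R = rng.random((4, 3))
V, rec = adherence_aware_vi(P, R, pi_b=np.zeros(4, dtype=int), theta=0.6)
print("recommended actions:", rec)
```

At theta = 1 this recovers standard value iteration; at theta = 0 the recommendation is irrelevant and the baseline value is returned, which is exactly the dichotomy the framework captures.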


ZETAR: Modeling and Computational Design of Strategic and Adaptive Compliance Policies

Huang, Linan, Zhu, Quanyan

arXiv.org Artificial Intelligence

Compliance management plays an important role in mitigating insider threats. Incentive design is a proactive and non-invasive approach to achieving compliance by aligning an insider's incentives with the defender's security objective, motivating (rather than commanding) an insider to act in the organization's interests. Controlling insiders' incentives for population-level compliance is challenging because they are neither precisely known nor directly controllable. To this end, we develop ZETAR, a zero-trust audit and recommendation framework, which provides a quantitative approach to modeling insiders' incentives and designing customized recommendation policies that improve their compliance. We formulate primal and dual convex programs to compute the optimal bespoke recommendation policies. We create the theoretical underpinning for understanding trust, compliance, and satisfaction, which leads to scoring mechanisms for how compliant and persuadable an insider is. After classifying insiders as malicious, self-interested, or amenable based on their level of incentive misalignment with the defender, we establish bespoke information disclosure principles for each incentive category. We identify a policy separability principle and a set-convexity property, which enable finite-step algorithms to efficiently learn the Completely Trustworthy (CT) policy set when insiders' incentives are unknown. Finally, we present a case study to corroborate the design. Our results show that ZETAR adapts well to insiders with different risk and compliance attitudes and significantly improves compliance. Moreover, trustworthy recommendations provably promote cyber hygiene and insiders' satisfaction.
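
At the core of such recommendation designs is the standard obedience linear program from persuasion theory: choose a joint distribution over states and recommendations that maximizes the defender's utility subject to the insider preferring to follow each recommendation. ZETAR's actual primal and dual programs are richer; the toy payoffs below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 2 states of the world, 2 insider actions ("comply", "deviate").
mu = np.array([0.6, 0.4])                    # defender's prior over states
u = np.array([[1.0, 0.0], [-1.0, 1.0]])     # insider's utility u[state, action]
v = np.array([[1.0, -1.0], [1.0, -1.0]])    # defender's utility: wants compliance

S, A = u.shape
# Decision variable x[s, a] = P(state = s, recommendation = a), flattened.
c = -v.flatten()                             # linprog minimizes, so negate
A_eq = np.zeros((S, S * A))
for s in range(S):
    A_eq[s, s * A:(s + 1) * A] = 1.0         # sum_a x[s, a] = mu[s]
# Obedience: following recommendation a beats any deviation a2.
A_ub, b_ub = [], []
for a in range(A):
    for a2 in range(A):
        if a2 == a:
            continue
        row = np.zeros(S * A)
        for s in range(S):
            row[s * A + a] = u[s, a2] - u[s, a]   # rearranged into <= 0 form
        A_ub.append(row)
        b_ub.append(0.0)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=mu, bounds=(0, None))
x = res.x.reshape(S, A)
print("P(recommend comply | state):", x[:, 0] / mu)
```

In this instance the optimal policy recommends compliance always in the benign state and 75% of the time in the adverse one, strictly improving on full disclosure.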


PrefRec: Recommender Systems with Human Preferences for Reinforcing Long-term User Engagement

Xue, Wanqi, Cai, Qingpeng, Xue, Zhenghai, Sun, Shuo, Liu, Shuchang, Zheng, Dong, Jiang, Peng, Gai, Kun, An, Bo

arXiv.org Artificial Intelligence

Current advances in recommender systems have been remarkably successful at optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have shown their effectiveness in a variety of long-term goal optimization tasks. For this reason, RL is widely considered a promising framework for optimizing long-term user engagement in recommendation. Though promising, the application of RL heavily relies on well-designed rewards, and designing rewards related to long-term user engagement is quite difficult. To mitigate this problem, we propose a novel paradigm, recommender systems with human preferences, or Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences over users' historical behaviors rather than explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals while avoiding complex reward engineering. PrefRec uses the preferences to automatically train a reward function in an end-to-end manner. The reward function is then used to generate learning signals to train the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression, and reward model pre-training to improve performance. We conduct experiments on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods in all the tasks.
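
The reward-learning step can be illustrated with a linear Bradley-Terry model fit on synthetic pairwise preferences; PrefRec itself uses neural networks, expectile regression, and pre-training, so this sketch only shows the core idea of turning preferences into a reward function.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 5, 2000
w_true = rng.normal(size=d)                  # hidden "true" engagement reward

# Each preference compares two behavior trajectories, summarized as features.
xa = rng.normal(size=(n_pairs, d))
xb = rng.normal(size=(n_pairs, d))
pref = (xa @ w_true > xb @ w_true).astype(float)   # 1 if trajectory a preferred

# Bradley-Terry model: P(a preferred) = sigmoid(r(a) - r(b)); fit by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-(xa - xb) @ w))
    grad = (xa - xb).T @ (p - pref) / n_pairs      # gradient of the logistic loss
    w -= 1.0 * grad

corr = np.corrcoef(xa @ w, xa @ w_true)[0, 1]
print(f"learned reward correlates with true reward: {corr:.3f}")
```

The fitted reward r(x) = w @ x would then supply the learning signal for the recommendation policy, replacing a hand-designed reward.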


Online Learning in a Creator Economy

Zhu, Banghua, Karimireddy, Sai Praneeth, Jiao, Jiantao, Jordan, Michael I.

arXiv.org Artificial Intelligence

The creator economy refers to a rapidly growing online-platform-facilitated economy that brings together content creators and users, allowing creators to earn revenue from their creations [Banks and Humphreys, 2008, Bhargava, 2022, El Sanyoura and Anderson, 2022, Radionova and Trots, 2021, Schram, 2020]. These platforms monetize the content created by content creators through various means, including paid audience partnerships, ad revenue, tipping platforms, and product sales provided by the users. The creator economy can be viewed as a three-party game linking users, platform, and content creators. On the one hand, we can model the interactions between the platform and the content creator as a principal-agent relationship, focusing on the need to incentivize the production of high-quality content by the content creator. The platform would like to collect better content to enhance the desirability of the platform. The content creator wishes to gain profit on the platform from their content. The two sides develop agreements in the form of a contract, which specifies how much the platform would pay under the different possible kinds of content. By learning the intent and interest of the content creator, the platform is able to identify a better way to incentivize participation and share the profit with the content creator. The overall framework is contract theory, which is a branch of the theory of incentives [Bolton and Dewatripont, 2004, Faure-Grimaud et al., 2001, Grossman and Hart, 1992, Salanié, 2005].


A Common Misassumption in Online Experiments with Machine Learning Models

Jeunen, Olivier

arXiv.org Artificial Intelligence

Online experiments such as Randomised Controlled Trials (RCTs) or A/B-tests are the bread and butter of modern platforms on the web. They are conducted continuously to allow platforms to estimate the causal effect of replacing system variant "A" with variant "B" on some metric of interest. These variants can differ in many aspects. In this paper, we focus on the common use-case where they correspond to machine learning models. The online experiment then serves as the final arbiter to decide which model is superior and should thus be shipped. The statistical literature on causal effect estimation from RCTs has a substantial history, which contributes deservedly to the level of trust researchers and practitioners have in this "gold standard" of evaluation practices. Nevertheless, in the particular case of machine learning experiments, we remark that certain critical issues remain. Specifically, the assumptions required to ascertain that A/B-tests yield unbiased estimates of the causal effect are seldom met in practical applications. We argue that, because variants typically learn using pooled data, a lack of model interference cannot be guaranteed. This undermines the conclusions we can draw from online experiments with machine learning models. We discuss the implications for practitioners and for the research literature.
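
For concreteness, here is the standard difference-in-means estimator whose unbiasedness is at stake; the outcome model is synthetic, and the comment at the end states the assumption (no interference) that pooled training data undermines.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
assign = rng.random(n) < 0.5                 # randomize users to variants A/B

# Hypothetical per-user outcomes; +0.03 is variant B's true effect.
y = rng.normal(loc=0.5 + 0.03 * assign, scale=1.0)

# Standard difference-in-means estimate with a normal-approximation CI.
diff = y[assign].mean() - y[~assign].mean()
se = np.sqrt(y[assign].var(ddof=1) / assign.sum() +
             y[~assign].var(ddof=1) / (~assign).sum())
print(f"estimated effect: {diff:.4f} +/- {1.96 * se:.4f}")

# The paper's point: this estimator is unbiased only under no-interference
# (SUTVA). If both models train on the pooled interaction data, variant A's
# outcomes depend on variant B's traffic and vice versa, so the per-variant
# potential outcomes assumed above are not actually well-defined.
```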


Incentive-Aware Recommender Systems in Two-Sided Markets

Dai, Xiaowu, Qi, Yuan, Jordan, Michael I.

arXiv.org Artificial Intelligence

Online platforms in the Internet Economy commonly incorporate recommender systems that recommend arms (e.g., products) to agents (e.g., users). In such platforms, a myopic agent has a natural incentive to exploit: to choose the best product given the current information rather than explore alternatives to collect information that will benefit other agents. We propose a novel recommender system that respects agents' incentives and enjoys asymptotically optimal performance, as measured by regret in repeated games. We model such an incentive-aware recommender system as a multi-agent bandit problem in a two-sided market, equipped with an incentive constraint induced by agents' opportunity costs. If the opportunity costs are known to the principal, we show that there exists an incentive-compatible recommendation policy, which pools recommendations between a genuinely good arm and an unknown arm via a randomized and adaptive approach. If the opportunity costs are unknown to the principal, we propose a policy that randomly pools recommendations across all arms and uses each arm's cumulative loss as feedback for exploration. We show that both policies also satisfy an ex-post fairness criterion, which protects agents from over-exploitation.
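
A back-of-the-envelope sketch of the known-opportunity-cost case: pool the known good arm with the unknown arm at the largest exploration rate that still clears the agents' opportunity cost in expectation. The paper's policy is randomized and adaptive; the static rate and all numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_good, prior_unknown, c = 0.70, 0.50, 0.60   # known arm, prior mean, opportunity cost

# Pooling: recommend the unknown arm with the largest probability rho such
# that an agent who knows only the *policy* (not which arm they received)
# still clears their opportunity cost in expectation:
#   (1 - rho) * mu_good + rho * prior_unknown >= c
rho = min(1.0, (mu_good - c) / (mu_good - prior_unknown))
print(f"exploration share rho = {rho:.2f}")

# Simulate: feedback from pooled recommendations estimates the unknown arm.
theta_unknown = 0.65                           # ground truth, not known a priori
explore = rng.random(5_000) < rho
clicks = rng.random(5_000) < np.where(explore, theta_unknown, mu_good)
est = clicks[explore].mean()
print(f"estimate of unknown arm: {est:.3f} (truth {theta_unknown})")
```

An adaptive version would recompute rho as the unknown arm's estimate replaces the prior, which is the role the cumulative-loss feedback plays in the paper's unknown-cost policy.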