Multi-armed Bandits: Competing with Optimal Sequences

Karnin, Zohar S., Anava, Oren

Neural Information Processing Systems

We consider a sequential decision-making problem in the adversarial setting, where regret is measured with respect to the optimal sequence of actions and the feedback adheres to the bandit setting. It is well known that obtaining sublinear regret in this setting is impossible in general, which raises the question: when can we do better than linear regret? Previous works show that when the environment is guaranteed to vary slowly, and we are furthermore given prior knowledge regarding its variation (i.e., a limit on the amount of change the environment undergoes), then this task is feasible. The caveat, however, is that such prior knowledge is unlikely to be available in practice, which renders the resulting regret bounds somewhat irrelevant. Our main result is a regret guarantee that scales with the variation parameter of the environment, without requiring any prior knowledge about it whatsoever. By that, we also resolve an open problem posed by [Gur, Zeevi and Besbes, NIPS'14]. A key component of our result is a statistical test for identifying non-stationarity in a sequence of independent random variables. This test either identifies non-stationarity or upper-bounds the absolute deviation of the corresponding sequence of mean values in terms of its total variation. The test is interesting in its own right and has the potential to be useful in additional settings.
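As a rough illustration of the kind of non-stationarity test the abstract refers to, the sketch below compares the empirical means of two halves of a sample sequence against Hoeffding confidence widths. This is a minimal stand-in, not the paper's actual test, which compares many window pairs and relates the deviation to the sequence's total variation; the `delta` parameter and the two-halves split are assumptions for the sketch.

```python
import math

def detect_nonstationarity(samples, delta=0.05):
    """Flag non-stationarity if the means of the two halves of the
    sample sequence differ by more than their combined Hoeffding
    confidence widths (values assumed to lie in [0, 1]).

    Toy illustration of mean-deviation testing, not the paper's test.
    """
    half = len(samples) // 2
    if half == 0:
        return False
    first, second = samples[:half], samples[half:2 * half]
    mean1 = sum(first) / half
    mean2 = sum(second) / half
    # Hoeffding width for the mean of `half` i.i.d. [0, 1] variables.
    width = math.sqrt(math.log(2.0 / delta) / (2 * half))
    return abs(mean1 - mean2) > 2 * width
```

A sequence whose mean jumps from 0 to 1 halfway through is flagged, while a constant sequence is not.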


Fairness in Repeated Matching: A Maximin Perspective

Lim, Eugene, Neoh, Tzeh Yuan, Teh, Nicholas

arXiv.org Artificial Intelligence

Traditional machine learning (ML) algorithms often focus on global objectives such as efficiency (e.g., maximizing accuracy or minimizing error rates in decision-making systems) or maximizing revenue/profit (e.g., maximizing click-through rates for recommendation systems), as they align closely with organizational goals and are more straightforward to quantify and optimize. However, modern approaches increasingly emphasize fairness as a key desideratum, as societal and regulatory demands push for more equitable and responsible ML systems. We consider a multi-agent sequential decision-making scenario where a set of resources must be allocated among agents repeatedly over time, with the objective of achieving fairness in the assignment process. This framework encompasses applications such as dynamic spectrum allocation in wireless networks and energy distribution in smart grids [Elhachmi, 2022, Jain et al., 2022, Rony et al., 2021, Soares et al., 2024]. In the case of spectrum allocation, communication channels must be repeatedly assigned to devices, with each device requiring exclusive access to one channel in each time slot. Persistent disparities in access can degrade system efficiency, reduce user satisfaction, and undermine trust. Similarly, in many other ML-driven resource allocation systems, disparities in the distribution of resources--such as GPUs in distributed computing--can lead to unfair outcomes that compromise the perceived and actual effectiveness of the system. Numerous other applications where decisions are made dynamically--such as assigning tasks to workers in crowdsourcing platforms [Moayedikia et al., 2020], or distributing compute resources in cloud systems [Belgacem, 2022, Gupta et al., 2017, Saraswathi et al., 2015]--call for central decision-makers to ensure that no agent is persistently disadvantaged, which is critical for both fairness and long-term trust in the system. 
The scenarios described above can be captured using the repeated matching framework--a multi-agent sequential decision-making model in which a set of goods is repeatedly matched to agents over time, and each agent is assigned exactly one good at each round.
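A minimal sketch of the repeated-matching setup described above: each round, every agent is matched to exactly one good, and to push toward a maximin-style outcome the agent with the lowest cumulative utility picks first. This greedy priority rule is a hypothetical heuristic chosen for illustration, not the paper's algorithm.

```python
def repeated_matching(utils, rounds):
    """Greedy maximin-flavored repeated matching sketch.

    utils[i][g]: agent i's utility for good g; one good per agent per
    round.  Each round, the poorest agent (by cumulative utility so
    far) chooses first.  Heuristic illustration only.
    """
    n = len(utils)
    total = [0.0] * n
    history = []
    for _ in range(rounds):
        remaining = set(range(n))
        assignment = {}
        for agent in sorted(range(n), key=lambda i: total[i]):
            good = max(remaining, key=lambda g: utils[agent][g])
            assignment[agent] = good
            remaining.remove(good)
            total[agent] += utils[agent][good]
        history.append(assignment)
    return history, total
```

With two agents who both prefer the same good, the rule alternates who gets it, equalizing cumulative utility over rounds.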


AI-Assisted Decision Making with Human Learning

Noti, Gali, Donahue, Kate, Kleinberg, Jon, Oren, Sigal

arXiv.org Artificial Intelligence

AI systems increasingly support human decision-making. In many cases, despite the algorithm's superior performance, the final decision remains in human hands. For example, an AI may assist doctors in determining which diagnostic tests to run, but the doctor ultimately makes the diagnosis. This paper studies such AI-assisted decision-making settings, where the human learns through repeated interactions with the algorithm. In our framework, the algorithm -- designed to maximize decision accuracy according to its own model -- determines which features the human can consider. The human then makes a prediction based on their own less accurate model. We observe that the discrepancy between the algorithm's model and the human's model creates a fundamental tradeoff. Should the algorithm prioritize recommending more informative features, encouraging the human to recognize their importance, even if it results in less accurate predictions in the short term until learning occurs? Or is it preferable to forgo educating the human and instead select features that align more closely with their existing understanding, minimizing the immediate cost of learning? This tradeoff is shaped by the algorithm's time-discounted objective and the human's learning ability. Our results show that optimal feature selection has a surprisingly clean combinatorial characterization, reducible to a stationary sequence of feature subsets that is tractable to compute. As the algorithm becomes more "patient" or the human's learning improves, the algorithm increasingly selects more informative features, enhancing both prediction accuracy and the human's understanding. Notably, early investment in learning leads to the selection of more informative features than a later investment. We complement our analysis by showing that the impact of errors in the algorithm's knowledge is limited as it does not make the prediction directly.
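The patience tradeoff described above can be sketched numerically: "teaching" yields low accuracy until the human learns and high accuracy afterwards, while "aligning" yields a constant middling accuracy. All the parameter names and values below are hypothetical, chosen only to show how the discount factor flips the preference; this is not the paper's characterization.

```python
def discounted_value(accuracies, beta):
    """Time-discounted sum of per-round prediction accuracies."""
    return sum(beta ** t * a for t, a in enumerate(accuracies))

def prefer_teaching(rounds, beta, learn_after, acc_familiar, acc_low, acc_high):
    """Compare 'teach' (accuracy acc_low until the human learns after
    `learn_after` rounds, then acc_high) against 'align' (constant
    acc_familiar).  Toy numeric illustration of the patience tradeoff.
    """
    teach = [acc_low] * learn_after + [acc_high] * (rounds - learn_after)
    align = [acc_familiar] * rounds
    return discounted_value(teach, beta) > discounted_value(align, beta)
```

A patient algorithm (discount near 1) prefers teaching; a myopic one (small discount) prefers aligning with the human's current understanding.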


Optimizing adaptive sampling via Policy Ranking

Nadeem, Hassan, Shukla, Diwakar

arXiv.org Machine Learning

Efficient sampling in biomolecular simulations is critical for accurately capturing the complex dynamical behavior of biological systems. Adaptive sampling techniques aim to improve efficiency by focusing computational resources on the most relevant regions of phase space. In this work, we present a framework for identifying the optimal sampling policy through metric-driven ranking. Our approach systematically evaluates a policy ensemble and ranks the policies by their ability to explore the conformational space effectively. Through a series of biomolecular simulation case studies, we demonstrate that choosing a different adaptive sampling policy at each round significantly outperforms single-policy sampling, leading to faster convergence and improved sampling performance. The approach takes an ensemble of adaptive sampling policies and identifies the optimal policy for the next round based on the data collected so far. Beyond presenting this ensemble view of adaptive sampling, we also propose two sampling algorithms that approximate the ranking framework on the fly. The modularity of the framework allows the incorporation of any adaptive sampling policy, making it versatile and suitable as a comprehensive adaptive sampling scheme.
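The ranking step can be caricatured as scoring each candidate policy by how much unexplored space its proposed seed frames would cover. The coverage-by-novel-states metric below is a placeholder assumption standing in for the paper's actual ranking metrics.

```python
def rank_policies(candidate_frames, seen_states):
    """Rank adaptive-sampling policies by how many not-yet-visited
    (discretized) states their proposed seed frames would cover.

    candidate_frames: {policy_name: iterable of state labels}
    seen_states: set of states already visited.
    Returns policy names sorted best-first.  Toy coverage metric only.
    """
    def novelty(policy):
        return len(set(candidate_frames[policy]) - seen_states)
    return sorted(candidate_frames, key=novelty, reverse=True)
```

The top-ranked policy would then seed the next round of simulations, and the ranking is recomputed as new data arrives.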


Chain of Compression: A Systematic Approach to Combinationally Compress Convolutional Neural Networks

Shen, Yingtao, Sun, Minqing, Zhao, Jie, Zou, An

arXiv.org Artificial Intelligence

Convolutional neural networks (CNNs) have achieved significant popularity, but their computational and memory intensity poses challenges for resource-constrained computing systems, particularly under real-time performance requirements. To relieve this burden, model compression has become an important research focus. Many approaches, such as quantization, pruning, early exit, and knowledge distillation, have demonstrated their effectiveness in reducing redundancy in neural networks. Upon closer examination, it becomes apparent that each approach capitalizes on its own unique features to compress the neural network, and the approaches can exhibit complementary behavior when combined. To explore these interactions and reap the benefits of their complementary features, we propose the Chain of Compression, which determines the combinational sequence in which these common techniques are applied to compress the neural network. Validated on image-based regression and classification networks across different data sets, our proposed Chain of Compression can reduce the computation cost by 100-1000x with negligible accuracy loss compared with the baseline model.
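The idea of chaining compression techniques can be sketched on a 1-D weight list: each stage transforms the weights, and the chain applies the stages in order. Real CNN compression operates on tensors and retrains between stages; the magnitude-pruning and uniform-quantization helpers here are simplified assumptions.

```python
def prune(weights, sparsity):
    """Zero out (at least) the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize(weights, step):
    """Round weights to a uniform grid of the given step size."""
    return [round(w / step) * step for w in weights]

def chain_compress(weights, stages):
    """Apply a sequence (chain) of compression stages in order.

    Toy 1-D illustration of combining techniques; the order of the
    stages in the chain is exactly what the paper's framework searches.
    """
    for stage in stages:
        weights = stage(weights)
    return weights
```

For example, pruning half the weights and then quantizing to a 0.5 grid leaves a sparse, low-precision vector.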


Optimal Dynamic Regret in Exp-Concave Online Learning

Baby, Dheeraj, Wang, Yu-Xiang

arXiv.org Machine Learning

We consider the problem of Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses. We show that whenever improper learning is allowed, a Strongly Adaptive online learner achieves a dynamic regret of $\tilde O(d^{3.5}n^{1/3}C_n^{2/3} \vee d\log n)$, where $C_n$ is the total variation (a.k.a. path length) of an arbitrary sequence of comparators that may not be known to the learner ahead of time. Achieving this rate was highly nontrivial even for squared losses in 1D, where the best known upper bound was $O(\sqrt{nC_n} \vee \log n)$ (Yuan and Lamperski, 2019). Our new proof techniques make elegant use of the intricate structures of the primal and dual variables imposed by the KKT conditions and could be of independent interest. Finally, we apply our results to the classical statistical problem of locally adaptive non-parametric regression (Mammen, 1991; Donoho and Johnstone, 1998) and obtain a stronger and more flexible algorithm that does not require any statistical assumptions or any hyperparameter tuning.


Fatigue-aware Bandits for Dependent Click Models

Cao, Junyu, Sun, Wei, Shen, Zuo-Jun, Ettl, Markus

arXiv.org Machine Learning

As recommender systems send a massive amount of content to keep users engaged, users may experience fatigue, caused by 1) overexposure to irrelevant content, and 2) boredom from seeing too many similar recommendations. To address this problem, we consider an online learning setting where a platform learns a policy to recommend content that takes user fatigue into account. We propose an extension of the Dependent Click Model (DCM) to describe users' behavior. We stipulate that for each piece of content, its attractiveness to a user depends on its intrinsic relevance and a discount factor that measures how many similar items have already been shown. Users view the recommended content sequentially and click on the items they find attractive. Users may leave the platform at any time, and the probability of exiting is higher when they do not like the content. Based on users' feedback, the platform learns the relevance of the underlying content as well as the discounting effect due to content fatigue. We refer to this learning task as the "fatigue-aware DCM bandit" problem. We consider two learning scenarios depending on whether the discounting effect is known. For each scenario, we propose a learning algorithm that simultaneously explores and exploits, and we characterize its regret bound.
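The discounted-attractiveness idea can be sketched directly: an item's attractiveness is its intrinsic relevance multiplied by a geometric discount in the number of similar items already shown in the list. The category labels and the single discount factor `gamma` are illustrative assumptions, not the paper's exact parameterization.

```python
def attractiveness(relevance, categories, gamma):
    """Attractiveness of each item in a recommended list under a
    fatigue discount: intrinsic relevance times gamma**(number of
    similar items already shown).  Toy instance of the discounted
    attractiveness in a fatigue-aware DCM.
    """
    shown = {}  # category -> count of similar items shown so far
    result = []
    for rel, cat in zip(relevance, categories):
        result.append(rel * gamma ** shown.get(cat, 0))
        shown[cat] = shown.get(cat, 0) + 1
    return result
```

With `gamma = 0.5`, a second item from the same category is only half as attractive as its relevance alone would suggest.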


Dynamic Learning of Sequential Choice Bandit Problem under Marketing Fatigue

Cao, Junyu, Sun, Wei

arXiv.org Machine Learning

Motivated by the observation that overexposure to unwanted marketing activities leads to customer dissatisfaction, we consider a setting where a platform offers a sequence of messages to its users and is penalized when users abandon the platform due to marketing fatigue. We propose a novel sequential choice model to capture the multiple interactions taking place between the platform and its users: upon receiving a message, a user decides on one of three actions: accept the message, skip it and receive the next message, or abandon the platform. Based on user feedback, the platform dynamically learns users' abandonment distribution and their valuations of messages to determine the length of the sequence and the order of the messages, while maximizing the cumulative payoff over a horizon of length T. We refer to this online learning task as the sequential choice bandit problem. For the offline combinatorial optimization problem, we show that an efficient polynomial-time algorithm exists. For the online problem, we propose an algorithm that balances exploration and exploitation, and we characterize its regret bound. Lastly, we demonstrate how to extend the model with user contexts to incorporate personalization.
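The offline objective in this accept/skip/abandon model can be illustrated by computing the expected payoff of a fixed message sequence. The sketch below assumes, for simplicity, a single abandonment probability applied after every skip and that the interaction ends on acceptance; the real model learns a full abandonment distribution.

```python
def expected_payoff(messages, p_abandon):
    """Expected payoff of offering a message sequence.

    messages: list of (p_accept, reward) pairs; p_abandon: probability
    the user abandons after skipping a message.  The user accepts
    (ending the interaction with the reward), skips, or abandons.
    Toy version of the offline objective.
    """
    payoff, p_reach = 0.0, 1.0  # p_reach: prob. the user sees this message
    for p_accept, reward in messages:
        payoff += p_reach * p_accept * reward
        # The user continues only by skipping without abandoning.
        p_reach *= (1 - p_accept) * (1 - p_abandon)
    return payoff
```

The offline problem then amounts to choosing the length and order of the sequence to maximize this quantity.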