
 Li, Xiaocheng


Learning to Make Adherence-Aware Advice

arXiv.org Machine Learning

As artificial intelligence (AI) systems play an increasingly prominent role in human decision-making, challenges surface in the realm of human-AI interactions. One challenge is that AI policies become suboptimal when they fail to account for humans disregarding AI recommendations; another is the need for the AI to provide advice selectively, only when it is most pertinent. This paper presents a sequential decision-making model that (i) takes into account the human's adherence level (the probability that the human follows/rejects machine advice) and (ii) incorporates a defer option so that the machine can temporarily refrain from offering advice. We provide learning algorithms that learn the optimal advice policy and offer advice only at critical time stamps. Compared to problem-agnostic reinforcement learning algorithms, our specialized learning algorithms not only enjoy better theoretical convergence properties but also show strong empirical performance.
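
As a rough illustration of the modeling idea (not the paper's algorithm), the sketch below runs value iteration on a toy tabular MDP in which the machine either defers (the human acts under their own fixed policy) or recommends an action that the human follows only with a known adherence probability theta; the toy dynamics and all parameter values are made up for illustration.

```python
import numpy as np

# Toy MDP: S states, A actions; P[a, s] is the next-state distribution, R is S x A rewards.
rng = np.random.default_rng(0)
S, A, gamma, theta = 5, 3, 0.9, 0.7           # theta: adherence probability (assumed known)
P = rng.dirichlet(np.ones(S), size=(A, S))
R = rng.uniform(0, 1, size=(S, A))
human_policy = rng.integers(0, A, size=S)      # fixed baseline action the human takes on their own

V = np.zeros(S)
for _ in range(500):
    # Value of deferring: the human acts with their own policy
    v_human = np.array([R[s, human_policy[s]] + gamma * P[human_policy[s], s] @ V for s in range(S)])
    # Value of recommending action a: followed w.p. theta, otherwise the human acts anyway
    q_advice = np.array([[theta * (R[s, a] + gamma * P[a, s] @ V) + (1 - theta) * v_human[s]
                          for a in range(A)] for s in range(S)])
    # The machine chooses between deferring and the best recommendation
    V_new = np.maximum(v_human, q_advice.max(axis=1))
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

advice = np.where(q_advice.max(axis=1) > v_human + 1e-12, q_advice.argmax(axis=1), -1)  # -1 = defer
print("state values:", np.round(V, 3), "advice per state (-1 = defer):", advice)
```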


A Neural Network Based Choice Model for Assortment Optimization

arXiv.org Artificial Intelligence

Discrete-choice models are used in economics, marketing and revenue management to predict customer purchase probabilities, say as a function of prices and other features of the offered assortment. While they have been shown to be expressive, capturing customer heterogeneity and behaviour, they are also hard to estimate, often relying on many unobservables such as utilities; moreover, they still fail to capture many salient features of customer behaviour. A natural question then, given their success in other contexts, is whether neural networks can eliminate the need to carefully build a context-dependent customer behaviour model and hand-code and tune the estimation. It is unclear, however, how one would incorporate assortment effects into such a neural network, and how one would optimize the assortment with such a black-box generative model of choice probabilities. In this paper we first investigate whether a single neural network architecture can predict purchase probabilities for datasets from various contexts and generated under various models and assumptions. Next, we develop an assortment optimization formulation that is solvable by off-the-shelf integer programming solvers. We compare against a variety of benchmark discrete-choice models on simulated as well as real-world datasets, developing training tricks along the way to make the neural network prediction and subsequent optimization robust and comparable in performance to the alternatives.
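
A minimal sketch of the general idea of an assortment-aware choice model, not the paper's architecture: item scores (which could come from a neural network on item features) are turned into purchase probabilities by a softmax masked to the offered assortment plus a no-purchase option. The function name and the numbers below are hypothetical.

```python
import numpy as np

def choice_probabilities(utilities, assortment_mask):
    """Softmax over offered products plus an outside (no-purchase) option with utility 0.

    utilities:        (n_products,) scores, e.g. outputs of a network on item/price features.
    assortment_mask:  (n_products,) 0/1 vector marking which products are offered.
    """
    scores = np.where(assortment_mask > 0, utilities, -np.inf)   # unoffered items get zero probability
    scores = np.concatenate([scores, [0.0]])                      # last index = no purchase
    scores -= scores[np.isfinite(scores)].max()
    exps = np.exp(scores)
    return exps / exps.sum()

# Hypothetical example: 4 products, only products 0 and 2 offered.
u = np.array([1.2, 0.3, 0.8, -0.5])
mask = np.array([1, 0, 1, 0])
p = choice_probabilities(u, mask)
print("purchase probs:", np.round(p[:-1], 3), "no-purchase:", round(float(p[-1]), 3))
```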


Understanding Uncertainty Sampling

arXiv.org Artificial Intelligence

Uncertainty sampling is a prevalent active learning algorithm that sequentially queries the annotations of data samples which the current prediction model is uncertain about. However, the usage of uncertainty sampling has been largely heuristic: (i) there is no consensus on the proper definition of "uncertainty" for a specific task under a specific loss; (ii) there is no theoretical guarantee that prescribes a standard protocol to implement the algorithm, for example, how to handle the sequentially arriving annotated data under the framework of optimization algorithms such as stochastic gradient descent. In this work, we systematically examine uncertainty sampling algorithms under both stream-based and pool-based active learning. We propose a notion of equivalent loss which depends on the uncertainty measure used and the original loss function, and establish that an uncertainty sampling algorithm essentially optimizes against such an equivalent loss. This perspective verifies the properness of existing uncertainty measures from two aspects: surrogate property and loss convexity. Furthermore, we propose a new notion for designing uncertainty measures called loss as uncertainty: the idea is to use the conditional expected loss given the features as the uncertainty measure. Such an uncertainty measure has nice analytical properties and enough generality to cover both classification and regression problems, which enables us to provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings, in full generality of the underlying model and problem. Lastly, we establish connections between certain variants of the uncertainty sampling algorithms and risk-sensitive objectives and distributional robustness, which can partly explain the advantage of uncertainty sampling algorithms when the sample size is small.
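
The following is a minimal sketch of stream-based uncertainty sampling for logistic regression, purely to fix ideas: the model's own predicted-label entropy (the expected logistic loss under the model's prediction) is used as a simple proxy in the spirit of "loss as uncertainty", and a label is queried plus an SGD step taken only when that proxy exceeds a threshold. The threshold, data-generating process, and constants are assumptions, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, lr, tau = 5, 2000, 0.1, 0.4          # tau: uncertainty threshold (assumed)
beta_true = rng.normal(size=d)             # synthetic ground-truth parameter
w = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

queried = 0
for t in range(T):
    x = rng.normal(size=d)
    p = sigmoid(x @ w)
    # Expected logistic loss under the model's own prediction (binary entropy, in nats)
    uncertainty = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    if uncertainty > tau:                   # query the label only when uncertain
        y = float(rng.random() < sigmoid(x @ beta_true))
        w -= lr * (sigmoid(x @ w) - y) * x  # SGD step on the logistic loss
        queried += 1

print(f"queried {queried}/{T} labels, parameter error {np.linalg.norm(w - beta_true):.3f}")
```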


Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming

arXiv.org Artificial Intelligence

The predict-then-optimize problem considers a learning problem in a decision-making context where the output of a machine learning model serves as the input of a downstream optimization problem (e.g., a linear program). The ultimate goal of the learner is to prescribe a decision/solution for the downstream optimization problem directly from the input (variables) of the machine learning model, without full observation of the input of the optimization problem. A similar problem formulation was also studied as prescriptive analytics (Bertsimas and Kallus, 2020) and contextual linear programming (Hu et al., 2022). While Elmachtoub and Grigas (2022) justify the importance of leveraging the optimization problem structure when building the machine learning model, existing efforts to exploit the optimization structure have been largely inadequate. In this paper, we delve deeper into the structural properties of the optimization problem and propose a new approach called maximum optimality margin, which builds a max-margin learning model based on the optimality condition of the downstream optimization problem. More importantly, our approach only needs observations of the optimal solution in the training data rather than the objective function, and thus draws an interesting connection to the inverse optimization problem. The connection gives a new shared perspective on both the predict-then-optimize problem and the inverse optimization problem, and our analysis reveals a scale inconsistency issue that arises practically and theoretically for many existing methods.
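
To make the max-margin flavour concrete, here is a deliberately stylized sketch: the feasible region is a small polytope given by its enumerated vertices, training data pairs a feature vector with the vertex that was optimal (the cost vector is never observed), and a hinge-style update pushes the predicted cost to make the observed optimal vertex win by a margin. This enumeration-over-vertices setup is a simplification for illustration; the paper's formulation works with the LP optimality conditions rather than explicit vertex enumeration.

```python
import numpy as np

# Hypothetical setup: a fixed polytope described by its vertices; each training example
# pairs a feature vector x with the vertex that was optimal for the (unobserved) cost c(x).
rng = np.random.default_rng(2)
vertices = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])   # feasible extreme points
W_true = rng.normal(size=(2, 3))                                 # true linear map x -> cost

X = rng.normal(size=(200, 3))
C_true = X @ W_true.T                                            # true costs (never shown to learner)
y = (vertices @ C_true.T).argmin(axis=0)                         # index of the observed optimal vertex

W = np.zeros((2, 3))
lr = 0.05
for epoch in range(200):
    for x, j in zip(X, y):
        c_hat = W @ x
        vals = vertices @ c_hat
        # Margin-style loss: the observed optimal vertex should beat every other vertex by 1
        k = np.argmin(vals + (np.arange(len(vertices)) == j) * 1e9)   # most violating competitor
        if vals[j] + 1.0 > vals[k]:
            W -= lr * np.outer(vertices[j] - vertices[k], x)          # subgradient step

pred = (vertices @ (X @ W.T).T).argmin(axis=0)
print("training decision accuracy:", (pred == y).mean())
```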


On Dynamic Pricing with Covariates

arXiv.org Machine Learning

We consider the dynamic pricing problem with covariates under a generalized linear demand model: a seller can dynamically adjust the price of a product over a horizon of $T$ time periods, and at each time period $t$, the demand of the product is jointly determined by the price and an observable covariate vector $x_t\in\mathbb{R}^d$ through an unknown generalized linear model. Most of the existing literature assumes the covariate vectors $x_t$'s are independently and identically distributed (i.i.d.); the few papers that relax this assumption either sacrifice model generality or yield sub-optimal regret bounds. In this paper we show that a simple pricing algorithm has an $O(d\sqrt{T}\log T)$ regret upper bound without assuming any statistical structure on the covariates $x_t$ (which can even be arbitrarily chosen). The upper bound on the regret matches the lower bound (even under the i.i.d. assumption) up to logarithmic factors. Our paper thus shows that (i) the i.i.d. assumption is not necessary for obtaining low regret, and (ii) the regret bound can be independent of the (inverse) minimum eigenvalue of the covariance matrix of the $x_t$'s, a quantity present in previous bounds. Furthermore, we discuss a condition under which a better regret is achievable and how a Thompson sampling algorithm can be applied to give an efficient computation of the prices.
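
The sketch below is only a caricature of the problem setting, not the paper's algorithm or its regret guarantee: a logistic (generalized linear) demand model is refit on the observed (covariate, price, sale) history, and each period the price is chosen greedily from a grid to maximize predicted revenue after a short random burn-in. The burn-in length, grid, and step sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T = 3, 500
beta_true, alpha_true = rng.normal(size=d), -1.5       # demand falls with price
prices_grid = np.linspace(0.5, 3.0, 26)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X_hist, y_hist = [], []
theta = np.zeros(d + 1)                                # [beta_hat, alpha_hat]

for t in range(T):
    x = rng.normal(size=d)
    if t < 20:                                         # short burn-in with random prices
        price = rng.choice(prices_grid)
    else:
        beta_hat, alpha_hat = theta[:d], theta[d]
        demand = sigmoid(x @ beta_hat + alpha_hat * prices_grid)
        price = prices_grid[np.argmax(prices_grid * demand)]    # greedy revenue-maximizing price
    y = float(rng.random() < sigmoid(x @ beta_true + alpha_true * price))
    X_hist.append(np.append(x, price)); y_hist.append(y)
    # A few gradient-ascent passes on the log-likelihood as a cheap stand-in for the MLE
    Z, Y = np.array(X_hist), np.array(y_hist)
    for _ in range(5):
        theta += 0.1 / len(Y) * (Z.T @ (Y - sigmoid(Z @ theta)))

print("estimated (beta, alpha):", np.round(theta, 2))
```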


Quantile Markov Decision Process

arXiv.org Artificial Intelligence

In this paper, we consider the problem of optimizing the quantiles of the cumulative rewards of a Markov Decision Process (MDP), which we refer to as the Quantile Markov Decision Process (QMDP). Traditionally, the goal of an MDP is to maximize the expected cumulative reward over a defined horizon (possibly infinite). In many applications, however, a decision maker may be interested in optimizing a specific quantile of the cumulative reward instead of its expectation. We provide analytical results characterizing the optimal QMDP solution and present algorithms for solving the QMDP. We illustrate the model with two experiments: a grid game and an HIV optimal treatment experiment.
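
A quick Monte Carlo illustration of why the quantile objective differs from the expectation objective (this is not the paper's solution algorithm): in a one-state toy MDP with a safe and a risky action, the risky policy has the higher mean return but a worse lower quantile. The policies, horizon, and quantile level below are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def rollout(policy_risky, horizon=10):
    """Cumulative reward of a toy one-state MDP with a safe and a risky action."""
    total = 0.0
    for _ in range(horizon):
        if policy_risky:
            total += 10.0 if rng.random() < 0.3 else 0.0   # risky: high reward, rarely
        else:
            total += 2.5                                    # safe: deterministic reward
    return total

tau = 0.25                                                   # quantile of interest
returns = {name: np.array([rollout(risky) for _ in range(10000)])
           for name, risky in [("safe", False), ("risky", True)]}
for name, rs in returns.items():
    print(f"{name:>5}: mean={rs.mean():.2f}  {tau:.0%}-quantile={np.quantile(rs, tau):.2f}")
```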


Hierarchical Modeling of Seed Variety Yields and Decision Making for Future Planting Plans

arXiv.org Machine Learning

Eradicating hunger and malnutrition is a key development goal of the 21st century. We address the problem of optimally identifying seed varieties to reliably increase crop yield within a risk-sensitive decision-making framework. Specifically, we introduce a novel hierarchical machine learning mechanism for predicting crop yield (the yield of different seed varieties of the same crop). We integrate this prediction mechanism with a weather forecasting model, and propose three different approaches for decision making under uncertainty to select seed varieties for planting so as to balance yield maximization and risk. We apply our model to the problem of soybean variety selection given in the 2016 Syngenta Crop Challenge. Our prediction model achieves a median absolute error of 3.74 bushels per acre and thus provides good estimates for input into the decision models. Our decision models identify the selection of soybean varieties that appropriately balance yield and risk as a function of the farmer's risk aversion level. More generally, our models support farmers in decision making about which seed varieties to plant.
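
The paper proposes three decision-making approaches; the sketch below shows only the simplest mean-versus-risk trade-off flavour under a farmer-specified risk-aversion level, with entirely made-up predicted yield distributions, as a rough illustration of how risk aversion changes the selected varieties.

```python
import numpy as np

rng = np.random.default_rng(5)
n_varieties, n_select = 20, 5
mu = rng.uniform(40, 60, size=n_varieties)       # predicted mean yield (bushels/acre), hypothetical
sigma = rng.uniform(2, 10, size=n_varieties)     # predicted yield standard deviation, hypothetical

def select(risk_aversion):
    """Pick the n_select varieties with the best mean-minus-penalized-risk score."""
    score = mu - risk_aversion * sigma
    return np.argsort(score)[::-1][:n_select]

for lam in (0.0, 0.5, 2.0):
    print(f"risk aversion {lam}: varieties {sorted(select(lam).tolist())}")
```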


Graph Convolution: A High-Order and Adaptive Approach

arXiv.org Machine Learning

In this paper, we present a novel convolutional neural network framework for graph modeling, introducing two new modules specially designed for graph-structured data: the $k$-th order convolution operator and the adaptive filtering module. Importantly, our framework of High-order and Adaptive Graph Convolutional Network (HA-GCN) is a general-purpose architecture that fits various applications, both node-centric and graph-centric, as well as graph generative models. We conducted extensive experiments demonstrating the advantages of our framework. In particular, HA-GCN outperforms the state-of-the-art models on node classification and molecule property prediction tasks, and generates 32% more real molecules on the molecule generation task; these results can significantly benefit real-world applications such as material design and drug screening.
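
A minimal numpy sketch of a $k$-th order graph convolution layer in the spirit described, summing propagated features over neighbourhood orders 1..K: the symmetric normalization choice is an assumption, and the adaptive filtering module is omitted entirely.

```python
import numpy as np

def high_order_gcn_layer(A, H, weights):
    """One k-th order graph convolution layer: sum_k norm(A)^k @ H @ W_k, then ReLU.

    A:       (n, n) adjacency matrix.
    H:       (n, f_in) node features.
    weights: list of K (f_in, f_out) matrices, one per neighbourhood order 1..K.
    """
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalization
    out = np.zeros((H.shape[0], weights[0].shape[1]))
    A_power = np.eye(A.shape[0])
    for W_k in weights:
        A_power = A_power @ A_norm                       # A_norm^k for k = 1, 2, ...
        out += A_power @ H @ W_k
    return np.maximum(out, 0.0)                          # ReLU

# Hypothetical 4-node path graph with 3-dimensional features and K = 2.
rng = np.random.default_rng(6)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
Ws = [rng.normal(size=(3, 2)) for _ in range(2)]
print(high_order_gcn_layer(A, H, Ws).shape)              # -> (4, 2)
```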


Deep Gaussian Process for Crop Yield Prediction Based on Remote Sensing Data

AAAI Conferences

Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long Short-Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.
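
The abstract does not spell out the dimensionality reduction; the sketch below shows one simple permutation-invariant reduction (per-band pixel-intensity histograms at each time step) purely as an illustration of turning a remote sensing image stack into a compact, fixed-size sequence that a CNN or LSTM can consume with little labeled data. The function name and array shapes are assumptions for illustration.

```python
import numpy as np

def to_histogram_sequence(images, n_bins=32, value_range=(0.0, 1.0)):
    """Reduce a (time, height, width, bands) image stack to (time, bands, n_bins) histograms.

    Discarding pixel positions keeps only the per-band intensity distribution at each
    time step, giving a small fixed-size input regardless of image resolution.
    """
    t, _, _, b = images.shape
    out = np.empty((t, b, n_bins))
    for i in range(t):
        for j in range(b):
            counts, _ = np.histogram(images[i, :, :, j], bins=n_bins, range=value_range)
            out[i, j] = counts / counts.sum()            # normalize to frequencies
    return out

# Hypothetical stack: 8 time steps, 64x64 pixels, 5 spectral bands.
rng = np.random.default_rng(7)
stack = rng.random((8, 64, 64, 5))
features = to_histogram_sequence(stack)
print(features.shape)                                     # -> (8, 5, 32)
```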