Christianson, Nicolas
Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks
Christianson, Nicolas, Cui, Wenqi, Low, Steven, Yang, Weiwei, Zhang, Baosen
Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent cascading failures and ensure reliable operation. However, checking the feasibility of all $N - k$ contingencies -- every possible simultaneous failure of $k$ grid components -- is computationally intractable even for small $k$, forcing system operators to resort to heuristic screening methods. As uncertainty grows and system behaviors change, these heuristic lists may omit relevant contingencies, producing false negatives in which unsafe scenarios are misclassified as safe. In this work, we propose to use input-convex neural networks (ICNNs) for contingency screening. We show that ICNN reliability can be certified by solving a convex optimization problem, and by scaling model weights using this problem as a differentiable optimization layer during training, we can learn an ICNN classifier that is both data-driven and has provably guaranteed reliability -- namely, a zero false negative rate. We empirically validate this methodology in a case study on the IEEE 39-bus test network, observing that it yields substantial (10-20x) speedups while maintaining excellent classification accuracy.
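The convexity guarantee underlying this approach comes from the ICNN architecture itself: the hidden-to-hidden ("passthrough") weights are constrained nonnegative and the activations are convex and nondecreasing, so the output is convex in the input. A minimal numpy sketch of that architecture (illustrative only; the paper's actual models, training procedure, and weight-scaling optimization layer are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

class ICNN:
    """Two-hidden-layer input-convex network: the output is convex in x
    because the z -> z weights are elementwise nonnegative and ReLU is
    convex and nondecreasing (the standard ICNN construction)."""
    def __init__(self, d_in, d_hid):
        # x -> z weights and biases are unconstrained
        self.Wx0 = rng.normal(size=(d_hid, d_in))
        self.b0 = rng.normal(size=d_hid)
        self.Wx1 = rng.normal(size=(d_hid, d_in))
        self.b1 = rng.normal(size=d_hid)
        self.wx2 = rng.normal(size=d_in)
        # z -> z weights must be nonnegative to preserve convexity
        self.Wz1 = np.abs(rng.normal(size=(d_hid, d_hid)))
        self.wz2 = np.abs(rng.normal(size=d_hid))

    def __call__(self, x):
        z1 = relu(self.Wx0 @ x + self.b0)
        z2 = relu(self.Wz1 @ z1 + self.Wx1 @ x + self.b1)
        return float(self.wz2 @ z2 + self.wx2 @ x)

f = ICNN(d_in=4, d_hid=8)

# numerical check of midpoint convexity on random input pairs
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-9
```

Because the learned function is convex, worst-case behavior over a convex region of inputs can be bounded by convex optimization, which is what makes the reliability certification tractable.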
End-to-End Conformal Calibration for Optimization Under Uncertainty
Yeh, Christopher, Christianson, Nicolas, Wu, Alan, Wierman, Adam, Yue, Yisong
Machine learning can significantly improve performance for decision-making under uncertainty in a wide range of domains. However, ensuring robustness guarantees requires well-calibrated uncertainty estimates, which can be difficult to achieve in high-capacity prediction models such as deep neural networks. Moreover, in high-dimensional settings, there may be many valid uncertainty estimates, each with its own performance profile; i.e., not all uncertainty is equally valuable for downstream decision-making. To address this problem, this paper develops an end-to-end framework to learn the uncertainty estimates for conditional robust optimization, with robustness and calibration guarantees provided by conformal prediction. In addition, we propose to represent arbitrary convex uncertainty sets with partially input-convex neural networks, which are learned as part of our framework. Our approach consistently improves upon two-stage estimate-then-optimize baselines on concrete applications in energy storage arbitrage and portfolio optimization.
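The conformal-prediction ingredient referenced above can be illustrated with the standard split-conformal recipe: compute nonconformity scores on a held-out calibration set and take a finite-sample-corrected empirical quantile, which yields marginal coverage under exchangeability. A toy numpy sketch (this is the generic recipe, not the paper's end-to-end method, which learns the uncertainty sets jointly with the downstream optimization):

```python
import numpy as np

rng = np.random.default_rng(1)

def conformal_quantile(scores, alpha):
    """Split-conformal quantile: the ceil((n+1)(1-alpha))/n empirical
    quantile of the calibration nonconformity scores. Under
    exchangeability this guarantees P(score_test <= q) >= 1 - alpha."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(scores)[min(k, n) - 1])

# toy setup: the nonconformity score is the absolute residual, so the
# prediction interval is [prediction - q, prediction + q]
n_cal, n_test, alpha = 1000, 5000, 0.1
cal_residuals = rng.normal(size=n_cal)
q = conformal_quantile(np.abs(cal_residuals), alpha)

test_residuals = rng.normal(size=n_test)
coverage = np.mean(np.abs(test_residuals) <= q)  # should be ~ 1 - alpha
```

The same quantile step is differentiable almost everywhere in the scores, which is what allows calibration to sit inside an end-to-end training loop.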
Chasing Convex Functions with Long-term Constraints
Lechowicz, Adam, Christianson, Nicolas, Sun, Bo, Bashir, Noman, Hajiesmaili, Mohammad, Wierman, Adam, Shenoy, Prashant
This paper introduces and studies a novel class of online metric problems with long-term demand constraints motivated by emerging applications in the design of sustainable systems. In convex function chasing with a long-term constraint, an online player aims to satisfy a demand by making decisions in a normed vector space, paying a hitting cost based on time-varying convex cost functions which are revealed online, and a switching cost defined by the norm. The player is constrained to ensure that the entire demand is satisfied at or before the time horizon ends, and their objective is to minimize their total cost. The generality of this problem makes it applicable to a wide variety of online resource allocation problems; in this paper, we consider one such special case, discussing its connections to other online settings and suggesting broad new areas of inquiry in online optimization with long-term constraints. Our motivation to introduce these problems is rooted in an emerging class of carbon-aware control problems for sustainable systems. A shared objective involves minimizing carbon emissions by shifting flexible workloads temporally and/or spatially to better leverage low-carbon electricity generation (e.g., renewables such as solar and wind).
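To make the cost structure concrete, here is a toy 1-D instance (the prices, demand, and schedules are hypothetical, not from the paper): a schedule pays a hitting cost from a time-varying price, a switching cost equal to the absolute change between consecutive decisions, and must meet a total demand by the end of the horizon.

```python
import numpy as np

def total_cost(x, prices, demand):
    """Total cost of a schedule x over T rounds for a toy 1-D instance
    of convex function chasing with a long-term constraint: hitting
    cost price_t * x_t plus switching cost |x_t - x_{t-1}| (x_0 = 0),
    subject to the demand being met by the end of the horizon."""
    x = np.asarray(x, dtype=float)
    assert x.sum() >= demand - 1e-9, "demand not satisfied"
    hitting = float(prices @ x)
    switching = float(np.abs(np.diff(np.concatenate([[0.0], x]))).sum())
    return hitting + switching

prices = np.array([3.0, 1.0, 4.0, 2.0, 5.0])   # e.g. carbon intensity
demand = 2.0

# two feasible schedules trading off hitting vs. switching cost:
all_at_cheapest = np.array([0.0, 2.0, 0.0, 0.0, 0.0])
spread_evenly = np.full(5, 0.4)

cost_concentrated = total_cost(all_at_cheapest, prices, demand)
cost_spread = total_cost(spread_evenly, prices, demand)
```

Concentrating the work at the cheapest slot minimizes hitting cost but pays switching cost to ramp up and down, while spreading the work keeps switching small at the price of buying in expensive rounds; the online difficulty is that the prices arrive one at a time.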
Online Conversion with Switching Costs: Robust and Learning-Augmented Algorithms
Lechowicz, Adam, Christianson, Nicolas, Sun, Bo, Bashir, Noman, Hajiesmaili, Mohammad, Wierman, Adam, Shenoy, Prashant
This paper introduces and studies online conversion with switching costs (OCS), a novel class of online problems motivated by emerging control problems in the design of sustainable systems. We consider both minimization (OCS-min) and maximization (OCS-max) variants of the problem. In OCS-min, an online player aims to purchase one item over a sequence of time-varying cost functions and decides the fractional amount of item to purchase in each round. The player must purchase the entire item before a deadline, and they incur a movement cost whenever their decision changes, i.e., whenever they purchase different amounts of the item in consecutive time steps. From the player's perspective, the goal is to minimize their total cost, including the total purchasing cost and any movement cost incurred over the time horizon. In OCS-max, the setting is almost the same, except the player sells an item fractionally according to time-varying price functions, so the goal is to maximize their total profit, and any movement costs are subtracted from the revenue. In both settings, the cost/price functions are revealed one by one in an online manner, and the player makes an irrevocable decision at each time step without knowledge of future cost/price functions. Our motivation for introducing OCS is an emerging class of carbon-aware problems such as carbon-aware electric vehicle (EV) charging [12] and carbon-aware compute shifting [1, 3, 22, 23, 46, 57], which have attracted significant attention in recent years.
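A toy simulation of the OCS-min setting helps fix the mechanics (a simple threshold heuristic with a compulsory purchase at the deadline, for illustration only; this is not the paper's algorithm and carries no competitive guarantee):

```python
def run_threshold_policy(prices, threshold, rate=0.5):
    """Toy online policy for an OCS-min-style instance: buy up to
    `rate` of the remaining item per step whenever the revealed price
    is at or below `threshold`, and buy whatever remains at the
    deadline (compulsory purchase). Returns total cost, including the
    movement cost sum_t |x_t - x_{t-1}| with x_0 = 0."""
    remaining, prev, cost = 1.0, 0.0, 0.0
    for t, p in enumerate(prices):
        if t == len(prices) - 1:
            x = remaining                 # deadline: finish the purchase
        elif p <= threshold:
            x = min(rate, remaining)      # buy while the price is good
        else:
            x = 0.0
        cost += p * x + abs(x - prev)     # purchase cost + movement cost
        remaining -= x
        prev = x
    return cost

prices = [4.0, 2.0, 3.0, 2.0, 5.0]
```

With these hypothetical prices, a threshold of 2.5 buys in the two cheap rounds and finishes early, while a threshold of 0.0 never triggers and pays the worst price at the deadline, illustrating why the compulsory purchase makes overly conservative thresholds costly.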
Online Algorithms with Uncertainty-Quantified Predictions
Sun, Bo, Huang, Jerry, Christianson, Nicolas, Hajiesmaili, Mohammad, Wierman, Adam
Online algorithms with predictions have become a trending topic in the field of beyond worst-case analysis of algorithms. These algorithms incorporate predictions about the future to obtain performance guarantees that are of high quality when the predictions are good, while still maintaining bounded worst-case guarantees when predictions are arbitrarily poor. In general, the algorithm is assumed to be unaware of the prediction's quality. However, recent developments in the machine learning literature have studied techniques for providing uncertainty quantification on machine-learned predictions, which describes how confident a model is in its predictions. This paper examines the question of how to optimally utilize uncertainty-quantified predictions in the design of online algorithms. In particular, we consider predictions augmented with uncertainty quantification describing the likelihood of the ground truth falling in a certain range, designing online algorithms with these probabilistic predictions for two classic online problems: ski rental and online search. In each case, we demonstrate that non-trivial modifications to algorithm design are needed to fully leverage the probabilistic predictions. Moreover, we consider how to utilize more general forms of uncertainty quantification, proposing a framework based on online learning that learns to exploit uncertainty quantification to make optimal decisions in multi-instance settings.
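Ski rental is a useful reference point here: the classic deterministic break-even rule, which rents until the cumulative rent matches the buying price $b$ and then buys, is $(2 - 1/b)$-competitive without any prediction. A short sketch verifying that ratio numerically (the paper's algorithms for probabilistic predictions are not reproduced here):

```python
def alg_cost(n_ski_days, buy_day, b):
    """Cost of the policy 'rent at 1/day until buy_day, then buy at
    price b', over a season lasting n_ski_days."""
    if n_ski_days < buy_day:
        return n_ski_days            # season ends before we ever buy
    return (buy_day - 1) + b         # rented buy_day - 1 days, then bought

def opt_cost(n_ski_days, b):
    """Offline optimum: rent the whole season or buy immediately."""
    return min(n_ski_days, b)

b = 10
# break-even rule: buy on day b; worst-case ratio is 2 - 1/b
worst = max(alg_cost(n, b, b) / opt_cost(n, b) for n in range(1, 100))
```

Uncertainty-quantified predictions refine this picture: a distribution over the season length lets the algorithm shift the buy day toward the likely regime while keeping a bounded ratio in the unlikely one.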
Chasing Convex Bodies and Functions with Black-Box Advice
Christianson, Nicolas, Handina, Tinashe, Wierman, Adam
We consider the problem of convex function chasing with black-box advice, where an online decision-maker aims to minimize the total cost of making and switching between decisions in a normed vector space, aided by black-box advice such as the decisions of a machine-learned algorithm. The decision-maker seeks cost comparable to the advice when it performs well, known as $\textit{consistency}$, while also ensuring worst-case $\textit{robustness}$ even when the advice is adversarial. We first consider the common paradigm of algorithms that switch between the decisions of the advice and a competitive algorithm, showing that no algorithm in this class can improve upon 3-consistency while staying robust. We then propose two novel algorithms that bypass this limitation by exploiting the problem's convexity. The first, INTERP, achieves $(\sqrt{2}+\epsilon)$-consistency and $\mathcal{O}(\frac{C}{\epsilon^2})$-robustness for any $\epsilon > 0$, where $C$ is the competitive ratio of an algorithm for convex function chasing or a subclass thereof. The second, BDINTERP, achieves $(1+\epsilon)$-consistency and $\mathcal{O}(\frac{CD}{\epsilon})$-robustness when the problem has bounded diameter $D$. Further, we show that BDINTERP achieves near-optimal consistency-robustness trade-off for the special case where cost functions are $\alpha$-polyhedral.
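The structural fact that the proposed algorithms exploit is convexity itself: a pointwise convex combination of two decision sequences costs at most the same convex combination of their costs, by Jensen's inequality applied to the convex hitting costs and the norm. A small numerical check on a toy 1-D instance (hypothetical quadratic hitting costs, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def seq_cost(xs, centers):
    """Cost of a decision sequence on a toy 1-D chasing instance with
    hitting costs f_t(x) = (x - c_t)^2 and switching cost |x_t - x_{t-1}|
    (x_0 = 0)."""
    xs = np.asarray(xs, dtype=float)
    hit = float(((xs - centers) ** 2).sum())
    switch = float(np.abs(np.diff(np.concatenate([[0.0], xs]))).sum())
    return hit + switch

T = 50
centers = rng.normal(size=T)
advice = centers + rng.normal(scale=0.1, size=T)  # good black-box advice
robust = np.zeros(T)                              # some robust baseline

# blending the two sequences can only help, by convexity:
lam = 0.5
blend = (1 - lam) * robust + lam * advice
assert seq_cost(blend, centers) <= ((1 - lam) * seq_cost(robust, centers)
                                    + lam * seq_cost(advice, centers) + 1e-9)
```

This is why pure switching between the advice and a competitive algorithm is not the whole story: interpolating between their decisions is also admissible, and it is what allows consistency below the factor-3 barrier.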