Collaborating Authors

 Terekhov, Daria


Learning Linear Programs from Optimal Decisions

arXiv.org Machine Learning

We propose a flexible gradient-based framework for learning linear programs from optimal decisions. Linear programs are often specified by hand, using prior knowledge of relevant costs and constraints. In some applications, linear programs must instead be learned from observations of optimal decisions. Learning from optimal decisions is a particularly challenging bi-level problem, and much of the related inverse optimization literature is dedicated to special cases. We tackle the general problem, learning all parameters jointly while allowing flexible parametrizations of costs, constraints, and loss functions. We also address challenges specific to learning linear programs, such as empty feasible regions and non-unique optimal decisions. Experiments show that our method successfully learns synthetic linear programs and minimum-cost multi-commodity flow instances for which previous methods are not directly applicable. We also provide a fast batch-mode PyTorch implementation of the homogeneous interior point algorithm, which supports gradients by implicit differentiation or backpropagation.
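
To make the bi-level setup concrete, here is a minimal sketch (not the paper's batch-mode homogeneous interior-point solver) of learning an LP cost vector so that the LP's optimal decision matches an observation, with inner-solve gradients obtained by implicit differentiation via the cvxpylayers library. The problem sizes, the fixed constraints, and the small quadratic regularizer that keeps the inner argmin unique and differentiable are all illustrative assumptions; the paper handles the pure-LP case (non-unique optima, possibly empty feasible regions) directly.

    import cvxpy as cp
    import numpy as np
    import torch
    from cvxpylayers.torch import CvxpyLayer

    n, m = 5, 10                              # decision size and constraint count (assumed)
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((m, n)), rng.uniform(1.0, 2.0, m)

    x = cp.Variable(n)
    c = cp.Parameter(n)
    # The small quadratic term makes the inner argmin unique and differentiable;
    # box constraints keep the toy feasible region bounded.
    prob = cp.Problem(cp.Minimize(c @ x + 1e-2 * cp.sum_squares(x)),
                      [A @ x <= b, x >= -1, x <= 1])
    inner = CvxpyLayer(prob, parameters=[c], variables=[x])

    c_true = torch.tensor(rng.standard_normal(n), dtype=torch.float32)
    x_obs, = inner(c_true)                    # "observed" optimal decision
    x_obs = x_obs.detach()

    c_hat = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([c_hat], lr=0.05)
    for _ in range(200):                      # outer loop of the bi-level problem
        x_pred, = inner(c_hat)                # inner LP/QP solve, differentiable
        loss = torch.sum((x_pred - x_obs) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))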


Deep Inverse Optimization

arXiv.org Machine Learning

Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, unrolls an iterative optimization process and then uses backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.
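
The unrolling mechanic can be illustrated in a few lines; the toy below is not the paper's interior-point unrolling. It assumes an LP whose feasible region is the probability simplex, so the inner solver can be unrolled mirror descent, and every inner step is differentiable, letting backpropagation through the whole solve recover a cost vector consistent with the observed decision.

    import torch

    torch.manual_seed(0)
    n = 4
    c_true = torch.tensor([3.0, 1.0, 2.0, 4.0])   # assumed ground-truth cost

    def inner_solve(c, steps=40, lr=0.2):
        # Unrolled mirror descent for min c^T x over the probability simplex.
        # Each step is a differentiable tensor operation, so gradients flow to c.
        x = torch.full_like(c, 1.0 / c.numel())
        for _ in range(steps):
            x = x * torch.exp(-lr * c)
            x = x / x.sum()
        return x

    x_obs = inner_solve(c_true).detach()          # observed (near-)optimal decision
    c_hat = torch.zeros(n, requires_grad=True)    # cost estimate to be learned
    opt = torch.optim.Adam([c_hat], lr=0.1)

    for _ in range(500):
        x_pred = inner_solve(c_hat)               # forward: run the unrolled solver
        loss = torch.sum((x_pred - x_obs) ** 2)   # decision loss
        opt.zero_grad(); loss.backward(); opt.step()  # backward: through all inner steps

    print(float(loss), torch.argmin(c_hat).item() == torch.argmin(c_true).item())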


Hybrid Queueing Theory and Scheduling Models for Dynamic Environments with Sequence-Dependent Setup Times

AAAI Conferences

Classically, scheduling research in artificial intelligence has concentrated on the combinatorial challenges arising in a large, static domain where the set of jobs, resource capacities, and other problem parameters are known with certainty and do not change. In contrast, queueing theory has focused primarily on the stochastic arrival and resource requirements of new jobs, de-emphasizing the combinatorics. We study a dynamic parallel scheduling problem with sequence-dependent setup times: arriving jobs must be assigned (online) to one of a set of resources. The jobs have different service times on different resources, and setup times must elapse between consecutive jobs, depending on both the resource used and the job sequence. We investigate four models that hybridize a scheduling model with techniques from queueing theory to address the dynamic problem. We demonstrate that one of the hybrid models can significantly reduce observed mean flow time compared to the pure scheduling and queueing theory methods. More specifically, at high system loads, our hybrid model achieves a 15% to 60% decrease in mean flow time compared to the pure methodologies. This paper illustrates the advantages of integrating techniques from queueing theory and scheduling to improve performance in dynamic problems with complex combinatorics.
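
The problem ingredients (per-resource service times, setups that depend on both resource and job sequence, online assignment) can be made concrete with a small dispatcher. The sketch below is not one of the paper's four hybrid models; it is a myopic rule with assumed random data, assigning each arriving job to the resource that finishes it earliest.

    import random

    random.seed(1)
    R, JOBS = 2, 6
    # Assumed data: service time of job j on resource r, and setup time required
    # between consecutive jobs (a, b) on resource r.
    service = {(j, r): random.uniform(2, 5) for j in range(JOBS) for r in range(R)}
    setup = {(a, b, r): random.uniform(0.5, 2)
             for a in range(JOBS) for b in range(JOBS) for r in range(R)}

    free_at = [0.0] * R        # time at which each resource next becomes available
    last_job = [None] * R      # last job processed on each resource

    for j in range(JOBS):      # jobs arrive and are assigned in index order
        def finish(r):
            s = 0.0 if last_job[r] is None else setup[(last_job[r], j, r)]
            return free_at[r] + s + service[(j, r)]
        r = min(range(R), key=finish)     # myopic choice: earliest completion
        free_at[r] = finish(r)
        last_job[r] = j
        print(f"job {j} -> resource {r}, done at {free_at[r]:.2f}")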


Long-Run Stability in Dynamic Scheduling

AAAI Conferences

Stability analysis consists of identifying conditions under which the number of jobs in a system is guaranteed to remain bounded over time. To date, such long-run performance guarantees have not been available for periodic approaches to dynamic scheduling problems. However, stability has been extensively studied in queueing theory. In this paper, we introduce stability to the dynamic scheduling literature and demonstrate that stability guarantees can be obtained for methods that build the schedule for a dynamic problem by periodically solving static deterministic sub-problems. Specifically, we analyze the stability of two dynamic environments: a two-machine flow shop, which has received significant attention in scheduling research, and a polling system with a flow-shop server, an extension of systems typically considered in queueing. We demonstrate that, among stable policies, methods based on periodic optimization of static schedules may achieve better mean flow times than traditional queueing approaches.
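
As a worked illustration of the stability notion imported from queueing theory (not an analysis from the paper), consider a two-machine flow shop fed by a single arrival stream with assumed rates: under the usual independence assumptions, the long-run job count stays bounded only if the arrival rate is below the service rate of the slowest (bottleneck) machine.

    lam = 0.8            # arrival rate, jobs per time unit (assumed value)
    mu1, mu2 = 1.0, 0.9  # service rates of machines 1 and 2 (assumed values)

    rho = lam / min(mu1, mu2)   # utilization of the bottleneck machine
    print(f"bottleneck utilization rho = {rho:.2f} ->",
          "stable" if rho < 1 else "unstable")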


A Constraint Programming Approach for Solving a Queueing Control Problem

arXiv.org Artificial Intelligence

In a facility with front room and back room operations, it is useful to switch workers between the rooms in order to cope with changing customer demand. Assuming stochastic customer arrival and service times, we seek a policy for switching workers such that the expected customer waiting time is minimized while the expected back room staffing is sufficient to perform all work. Three novel constraint programming models and several shaving procedures for these models are presented. Experimental results show that a model based on closed-form expressions together with a combination of shaving procedures is the most efficient. This model is able to find and prove optimal solutions for many problem instances within a reasonable run-time. Previously, the only available approach was a heuristic algorithm. Furthermore, a hybrid method combining the heuristic and the best constraint programming method is shown to perform as well as the heuristic in terms of solution quality over time, while achieving the same performance in terms of proving optimality as the pure constraint programming model. This is the first work of which we are aware that solves such queueing-based problems with constraint programming.
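
The "closed-form expressions" ingredient can be illustrated without constraint programming. The sketch below is not the paper's CP models or shaving procedures: with Poisson arrivals and exponential service, the front room behaves like an M/M/c queue, so expected waiting time has a closed form (Erlang C), and a brute-force search over assumed staffing numbers picks the smallest front-room allocation that meets a waiting-time target while leaving enough workers for the back room.

    from math import factorial

    lam, mu = 4.0, 1.0        # arrival and per-server service rates (assumed)
    total_workers = 12        # workers to split between front and back room (assumed)
    back_room_need = 3        # minimum back-room staffing (assumed)
    target_wq = 0.05          # expected-waiting-time target (assumed)

    def erlang_c_wq(c, lam, mu):
        # Expected waiting time in queue for an M/M/c system (Erlang C formula).
        a = lam / mu
        if a >= c:
            return float("inf")           # unstable: queue grows without bound
        p_wait = (a**c / factorial(c)) * (c / (c - a))
        p_wait /= sum(a**k / factorial(k) for k in range(c)) + \
                  (a**c / factorial(c)) * (c / (c - a))
        return p_wait / (c * mu - lam)

    for c in range(1, total_workers - back_room_need + 1):
        wq = erlang_c_wq(c, lam, mu)
        if wq <= target_wq:
            print(f"front-room servers = {c}, expected wait = {wq:.3f}, "
                  f"back room gets {total_workers - c}")
            break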