Optimization: Instructional Materials


Genetic Algorithm: A to Z with Combinatorial Problems

#artificialintelligence

This is one of the most applied courses on Genetic Algorithms (GA), presenting an integrated framework for solving real-world optimization problems in the simplest way possible. For the first time, we present a practical course on the metaheuristic algorithms needed by students, researchers and practitioners. We first introduce the basic theory of GA, then implement its simplest version, the Binary GA, in MATLAB, and then present its continuous counterpart, the Real GA. The main focus is therefore on the Genetic Algorithm, the most well-regarded optimization algorithm in the literature. In the following sections, we introduce some well-known operations research problems, including the transportation problem, the hub location problem (HLP), the quadratic assignment problem and the travelling salesman problem (TSP), and solve them via GA.
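For readers who want a feel for the Binary GA the course starts from, here is a minimal sketch in Python (the course itself uses MATLAB); the OneMax objective, population size, and rates are illustrative choices, not the course's code.

    import random

    def fitness(bits):                 # OneMax: count the 1-bits (toy objective)
        return sum(bits)

    def tournament(pop, k=3):          # pick the fittest of k random individuals
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):               # single-point crossover
        p = random.randrange(1, len(a))
        return a[:p] + b[p:]

    def mutate(bits, rate=0.01):       # flip each bit with small probability
        return [1 - b if random.random() < rate else b for b in bits]

    def binary_ga(n_bits=30, pop_size=50, generations=100):
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):   # selection -> crossover -> mutation
            pop = [mutate(crossover(tournament(pop), tournament(pop)))
                   for _ in range(pop_size)]
        return max(pop, key=fitness)

    best = binary_ga()
    print(fitness(best), best)

A Real GA replaces the bit strings with float vectors and swaps in arithmetic crossover and Gaussian mutation; the loop itself stays the same.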


Optimization with Python: Complete Pyomo Bootcamp A-Z

#artificialintelligence

Mathematical optimization is becoming increasingly popular in most quantitative disciplines, such as engineering, management, economics, and operations research. Python, meanwhile, is one of the most popular programming languages and continues to attract attention. We have therefore created a course on mastering the development of optimization models in the Python environment. Since the course is designed for all levels, from beginner to advanced, we start with the basics of formulating a problem. After finishing this course, you will be able to identify and formulate decision variables, the objective function, and constraints, and to define your parameters.
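As a taste of what such a formulation looks like, here is a minimal Pyomo model with decision variables, an objective function, and a constraint; the product-mix numbers are invented for illustration, and solving requires a separately installed solver such as GLPK.

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, maximize, SolverFactory)

    model = ConcreteModel()

    # Decision variables: quantities of two hypothetical products
    model.x = Var(domain=NonNegativeReals)
    model.y = Var(domain=NonNegativeReals)

    # Objective function: maximize total profit (coefficients are made up)
    model.profit = Objective(expr=40 * model.x + 30 * model.y, sense=maximize)

    # Constraint: a shared resource budget
    model.capacity = Constraint(expr=2 * model.x + model.y <= 100)

    # Requires a solver installed separately, e.g. GLPK or CBC
    SolverFactory('glpk').solve(model)
    print(model.x(), model.y(), model.profit())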


DSC Webinar Series: How to Create Mathematical Optimization Models with Python - DataScienceCentral.com

#artificialintelligence

With mathematical optimization, companies can capture the key features of their business problems in an optimization model and generate optimal solutions, which are used as the basis for optimal decisions. Data scientists with some basic mathematical programming skills can easily learn how to build, implement, and maintain mathematical optimization applications. The Gurobi Python API borrows ideas from modeling languages, enabling users to deploy and solve mathematical optimization models with scripts that are easy to write, read, and maintain. Such models can even be embedded in decision support systems for production-ready applications.
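A minimal sketch of the Gurobi Python API pattern the webinar refers to; the model and coefficients are invented, and running it requires a Gurobi installation and license.

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("toy")

    # Decision variables
    x = m.addVar(name="x", lb=0)
    y = m.addVar(name="y", lb=0)

    # Objective: maximize (illustrative coefficients)
    m.setObjective(3 * x + 2 * y, GRB.MAXIMIZE)

    # Constraints
    m.addConstr(x + y <= 4, name="budget")
    m.addConstr(x + 3 * y <= 6, name="resource")

    m.optimize()
    if m.Status == GRB.OPTIMAL:
        print(x.X, y.X, m.ObjVal)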


DSC Webinar Series: Mathematical Optimization Modeling: Learn the Basics - DataScienceCentral.com

#artificialintelligence

Mathematical optimization (MO) technologies are being used today by leading global companies across industries – including aviation, energy, finance, logistics, telecommunications, manufacturing, media, and many more – to solve a wide range of complex, real-world problems, make optimal, data-driven decisions, and achieve greater operational efficiency. An increasing number of data scientists are adding MO to their analytics toolbox and developing applications that combine MO and machine learning (ML) technologies. In this series of webinars, we show how, with MO techniques, you can build interpretable models to tackle your prediction and classification problems, how to formulate an MO model, and how to build one using the Gurobi Python API.


Turnpike in optimal control of PDEs, ResNets, and beyond

arXiv.org Machine Learning

The \emph{turnpike property} in contemporary macroeconomics asserts that if an economic planner seeks to move an economy from one level of capital to another, then, as long as the planner has enough time, the most efficient path is to rapidly move the stock to a level close to the optimal stationary or constant path, let capital develop along that path until the desired term is nearly reached, and then move the stock to the final target. Motivated in part by its nature as a resource allocation strategy, over the past decade the turnpike property has also been shown to hold for several classes of partial differential equations arising in mechanics. When formalized mathematically, turnpike theory corroborates the insight from economics: for an optimal control problem set on a finite time horizon, the optimal controls and corresponding states are, for most of the time and often exponentially so, close to the optimal control and corresponding state of the associated stationary optimal control problem, except near the initial and final times. In particular, the former are mostly constant over time. This fact gives rigorous meaning to the asymptotic simplification that some optimal control problems appear to enjoy over long time intervals, allowing the corresponding stationary problem to be considered for computation and applications. We review a slice of the theory developed over the past decade (the controllability of the underlying system is an important ingredient, and can even be used to devise simple turnpike-like strategies which are nearly optimal) and present several novel applications, including, among many others, the characterization of Hamilton-Jacobi-Bellman asymptotics and stability estimates in deep learning via residual neural networks.
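The turnpike phenomenon is easy to see numerically. Below is a small, self-contained illustration on an invented scalar linear-quadratic problem: the finite-horizon optimum is computed by stacking the dynamics into one least-squares problem, and the optimal state hugs the static optimum for most of the horizon before turning toward the target.

    import numpy as np

    a, x0, T = 0.8, 5.0, 60          # dynamics x_{t+1} = a x_t + u_t, horizon T
    x_ref, w_term, x_final = 1.0, 1e6, 4.0

    # The states x_t (t = 1..T) are affine in the controls: x = c + G u
    G = np.zeros((T, T))
    for t in range(1, T + 1):
        for k in range(t):
            G[t - 1, k] = a ** (t - 1 - k)
    c = np.array([a ** t * x0 for t in range(1, T + 1)])

    # Least squares: running cost sum (x_t - x_ref)^2 + u_t^2,
    # plus a heavy penalty steering the terminal state x_T to x_final
    A = np.vstack([G[:T - 1], np.eye(T), np.sqrt(w_term) * G[T - 1:]])
    b = np.concatenate([x_ref - c[:T - 1], np.zeros(T),
                        np.sqrt(w_term) * np.array([x_final - c[T - 1]])])
    u = np.linalg.lstsq(A, b, rcond=None)[0]

    x = np.concatenate([[x0], c + G @ u])
    # Turnpike: x drops quickly to the static optimum x_ref / (1 + (1 - a)^2)
    # (about 0.96 here), stays there, and only leaves near t = T
    print(np.round(x[[0, 5, 20, 40, 55, 60]], 3))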


Tutorial on amortized optimization for learning to optimize over continuous domains

arXiv.org Artificial Intelligence

Optimization is a ubiquitous modeling tool and is often deployed in settings which repeatedly solve similar instances of the same problem. Amortized optimization methods use learning to predict the solutions to problems in these settings, exploiting the shared structure between similar problem instances. In this tutorial, we discuss the key design choices behind amortized optimization, roughly categorizing 1) models into fully-amortized and semi-amortized approaches, and 2) learning methods into regression-based and objective-based. We then view existing applications through these foundations to draw connections between them, including manifold optimization, variational inference, sparse coding, meta-learning, control, reinforcement learning, convex optimization, and deep equilibrium networks. This framing makes it easy to see, for example, that the amortized inference in variational autoencoders is conceptually identical to value gradients in control and reinforcement learning, as both use fully-amortized models with an objective-based loss.
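A minimal sketch of objective-based amortization on an invented family of one-dimensional quadratics: a linear predictor is trained to minimize the objective value of its own output, rather than to regress onto precomputed solutions.

    import numpy as np

    rng = np.random.default_rng(0)

    def features(c1, c2):
        # hand-chosen features; c2/c1 lets a linear model represent the true argmin
        return np.array([1.0, c1, c2, c2 / c1])

    w = np.zeros(4)                    # amortized model: x_hat = w . features(c)
    lr = 0.01
    for step in range(20000):
        c1 = rng.uniform(0.5, 2.0)     # sample an instance f(x) = c1 x^2 + c2 x
        c2 = rng.uniform(-2.0, 2.0)
        phi = features(c1, c2)
        x_hat = w @ phi
        # objective-based loss: minimize f(x_hat; c) itself, so
        # dL/dw = f'(x_hat) * dx_hat/dw with f'(x) = 2 c1 x + c2
        w -= lr * (2 * c1 * x_hat + c2) * phi

    c1, c2 = 1.3, 0.7
    print("predicted argmin:", w @ features(c1, c2))
    print("true argmin:     ", -c2 / (2 * c1))

A regression-based variant would instead fit w to minimize (x_hat - x_star)^2 on a dataset of solved instances; the objective-based loss above never needs the true solutions.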


Submodularity In Machine Learning and Artificial Intelligence

arXiv.org Artificial Intelligence

In this manuscript, we offer a gentle review of submodularity and supermodularity and their properties. We offer a plethora of submodular definitions; a full description of a number of example submodular functions and their generalizations; example discrete constraints; a discussion of basic algorithms for maximization, minimization, and other operations; a brief overview of continuous submodular extensions; and some historical applications. We then turn to how submodularity is useful in machine learning and artificial intelligence. This includes summarization, and we offer a complete account of the differences between and commonalities amongst sketching, coresets, extractive and abstractive summarization in NLP, data distillation and condensation, and data subset selection and feature selection. We discuss a variety of ways to produce a submodular function useful for machine learning, including heuristic hand-crafting, learning or approximately learning a submodular function or aspects thereof, and some advantages of the use of a submodular function as a coreset producer. We discuss submodular combinatorial information functions, and how submodularity is useful for clustering, data partitioning, parallel machine learning, active and semi-supervised learning, probabilistic modeling, and structured norms and loss functions.
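As a concrete instance of the basic maximization algorithms mentioned above, here is the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, which enjoys the (1 - 1/e) approximation guarantee; the coverage function and ground set below are toy inventions.

    def coverage(selected, sets):
        # f(S) = number of elements covered: monotone and submodular
        return len(set().union(*(sets[i] for i in selected))) if selected else 0

    def greedy_max(sets, budget):
        chosen = []
        for _ in range(budget):
            base = coverage(chosen, sets)
            # pick the set with the largest marginal gain f(S + i) - f(S)
            best = max((i for i in range(len(sets)) if i not in chosen),
                       key=lambda i: coverage(chosen + [i], sets) - base)
            chosen.append(best)
        return chosen

    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}, {2, 5}]
    picked = greedy_max(sets, budget=2)
    print(picked, coverage(picked, sets))

Diminishing marginal gains are exactly what submodularity formalizes, and they are what makes this simple greedy rule provably near-optimal.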


HarmoFL: Harmonizing Local and Global Drifts in Federated Learning on Heterogeneous Medical Images

arXiv.org Artificial Intelligence

Multiple medical institutions collaboratively training a model using federated learning (FL) has become a promising solution for maximizing the potential of data-driven models, yet the non-independent and identically distributed (non-iid) data in medical images is still an outstanding challenge in real-world practice. The feature heterogeneity caused by diverse scanners or protocols introduces a drift in the learning process, in both local (client) and global (server) optimizations, which harms the convergence as well as model performance. Many previous works have attempted to address the non-iid issue by tackling the drift locally or globally, but how to jointly solve the two essentially coupled drifts is still unclear. In this work, we concentrate on handling both local and global drifts and introduce a new harmonizing framework called HarmoFL. First, we propose to mitigate the local update drift by normalizing amplitudes of images transformed into the frequency domain to mimic a unified imaging setting, in order to generate a harmonized feature space across local clients. Second, based on harmonized features, we design a client weight perturbation guiding each local model to reach a flat optimum, where a neighborhood area of the local optimal solution has a uniformly low loss. Without any extra communication cost, the perturbation assists the global model to optimize towards a converged optimal solution by aggregating several local flat optima. We have theoretically analyzed the proposed method and empirically conducted extensive experiments on three medical image classification and segmentation tasks, showing that HarmoFL outperforms a set of recent state-of-the-art methods with promising convergence behavior. Code is available at https://github.com/med-air/HarmoFL.
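A rough numpy sketch of the amplitude-normalization idea behind HarmoFL's first component (the authors' actual implementation is in the linked repository): images are moved to the frequency domain, their amplitude spectra are replaced by a shared average while phases are kept, and the result is transformed back.

    import numpy as np

    def harmonize_amplitudes(images):
        """Replace each image's FFT amplitude with the batch average, keep phases.

        images: array of shape (N, H, W); a sketch of frequency-domain
        amplitude normalization, not the authors' exact procedure.
        """
        spectra = np.fft.fft2(images, axes=(-2, -1))
        amplitude = np.abs(spectra)
        phase = np.angle(spectra)
        shared_amplitude = amplitude.mean(axis=0, keepdims=True)
        harmonized = np.fft.ifft2(shared_amplitude * np.exp(1j * phase),
                                  axes=(-2, -1))
        return np.real(harmonized)

    batch = np.random.rand(4, 32, 32)   # stand-in for images from one client
    out = harmonize_amplitudes(batch)
    print(out.shape)

Keeping the phase preserves the anatomical structure of each image while the shared amplitude mimics a unified imaging setting across clients.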


Thinking inside the box: A tutorial on grey-box Bayesian optimization

arXiv.org Machine Learning

Bayesian optimization (BO) is a framework for global optimization of expensive-to-evaluate objective functions. Classical BO methods assume that the objective function is a black box. However, internal information about objective function computation is often available. For example, when optimizing a manufacturing line's throughput with simulation, we observe the number of parts waiting at each workstation, in addition to the overall throughput. Recent BO methods leverage such internal information to dramatically improve performance. We call these "grey-box" BO methods because they treat objective computation as partially observable and even modifiable, blending the black-box approach with so-called "white-box" first-principles knowledge of objective function computation. This tutorial describes these methods, focusing on BO of composite objective functions, where one can observe and selectively evaluate individual constituents that feed into the overall objective; and multi-fidelity BO, where one can evaluate cheaper approximations of the objective function by varying parameters of the evaluation oracle.
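A toy sketch of the composite-objective idea, with an invented inner function h and known outer function g: a GP is fit to each observable constituent of h, posterior samples are propagated through g, and, for brevity, the next evaluation minimizes the Monte Carlo posterior mean rather than a proper acquisition function such as expected improvement.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Composite objective f(x) = g(h(x)): black-box inner h, known outer g.
    # A grey-box method models the observable constituents h_i, not f directly.
    def h(x):                                    # expensive "simulator" (invented)
        return np.array([np.sin(3 * x), x ** 2])

    def g(h_vals):                               # known, cheap outer function
        return h_vals[0] + 0.5 * h_vals[1]

    X = np.array([[-1.5], [0.0], [1.2]])         # initial evaluations
    H = np.array([h(x[0]) for x in X])           # observed constituents

    for _ in range(5):
        # one GP per constituent of h
        gps = [GaussianProcessRegressor(normalize_y=True).fit(X, H[:, i])
               for i in range(H.shape[1])]
        grid = np.linspace(-2, 2, 401).reshape(-1, 1)
        # propagate posterior samples of h through the known g to score f
        samples = np.stack([gp.sample_y(grid, n_samples=32, random_state=0)
                            for gp in gps])      # shape (2, 401, 32)
        scores = g(samples).mean(axis=-1)        # Monte Carlo estimate of E[f]
        x_next = grid[np.argmin(scores)]
        X = np.vstack([X, [x_next]])
        H = np.vstack([H, h(x_next[0])])

    best = np.argmin([g(hv) for hv in H])
    print(X[best], g(H[best]))

The grey-box gain comes from the last stacked line: observing sin(3x) and x^2 separately gives the surrogate far more structure than fitting a single GP to their sum.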