On Uncensored Mean First-Passage-Time Performance Experiments with Multiwalk in $\mathbb{R}^p$: a New Stochastic Optimization Algorithm

arXiv.org Artificial Intelligence

A rigorous empirical comparison of two stochastic solvers is important when one of the solvers is a prototype of a new algorithm such as multiwalk (MWA). When searching for global minima in $\mathbb{R}^p$, the key data structures of MWA include $p$ rulers, each ruler assigned $m$ marks, and a set of $p$ neighborhood matrices of size up to $m(m-2)$, where each entry is the absolute value of a pairwise difference between the $m$ marks. Before taking the next step, a controller links the tableau of neighborhood matrices and computes new and improved positions for each of the $m$ marks. The number of columns in each neighborhood matrix is denoted as the neighborhood radius $r_n \le m-2$. Any variant of the differential evolution algorithm (DEA) has an effective population neighborhood of radius not larger than 1. Uncensored first-passage-time performance experiments that vary the neighborhood radius of an MW-solver can thus be readily compared to existing variants of DE-solvers. The paper considers seven test cases of increasing complexity and demonstrates, under uncensored first-passage-time performance experiments: (1) significant variability in convergence rate for seven DE-based solver configurations, and (2) a consistent, monotonic, and significantly faster rate of convergence for the MW-solver prototype as the neighborhood radius is increased from 4 to its maximum value.
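
A rough NumPy sketch of one plausible reading of this data structure follows, based only on the quantities named in the abstract ($p$ rulers, $m$ marks per ruler, neighborhood radius $r_n \le m-2$); the controller logic that updates mark positions is not specified here, and this is not the authors' reference implementation.

```python
import numpy as np

def neighborhood_matrices(marks, r_n):
    """Build one neighborhood matrix per ruler (dimension).

    marks : array of shape (p, m) -- m mark positions on each of p rulers.
    r_n   : neighborhood radius, i.e. number of columns kept (r_n <= m - 2).

    Returns an array of shape (p, m, r_n) whose entry [d, i, k] is the
    absolute difference between mark i and its (k+1)-th nearest other mark
    on ruler d.  This is only a plausible reading of the abstract's
    description, not the MWA reference implementation.
    """
    p, m = marks.shape
    assert r_n <= m - 2
    out = np.empty((p, m, r_n))
    for d in range(p):
        # pairwise absolute differences between the m marks on ruler d
        diff = np.abs(marks[d][:, None] - marks[d][None, :])
        for i in range(m):
            others = np.delete(diff[i], i)      # drop the zero self-difference
            out[d, i] = np.sort(others)[:r_n]   # keep the r_n smallest differences
    return out

# Example: p = 3 dimensions, m = 6 marks per ruler, radius r_n = 4
rng = np.random.default_rng(0)
tableau = neighborhood_matrices(rng.uniform(-5, 5, size=(3, 6)), r_n=4)
print(tableau.shape)  # (3, 6, 4)
```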


POTs: The revolution will not be optimized?

arXiv.org Artificial Intelligence

Optimization systems infer, induce, and shape events in the real world to fulfill objective functions. Protective optimization technologies (POTs) reconfigure these events in response to the effects of optimization on a group of users or a local environment. POTs analyze how events (or the lack thereof) affect users and environments, then manipulate these events to influence system outcomes, e.g., by altering the optimization constraints or poisoning system inputs.
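
As a purely hypothetical illustration of the "poisoning system inputs" strategy: consider a routing system that sends traffic down the road with the lowest reported travel time, and a POT run on behalf of residents that inflates the reported time for their street so the optimizer chooses a different route. All data and names in the sketch below are made up to show the mechanism only.

```python
# Hypothetical illustration of a POT poisoning an optimizer's inputs.
# The routing "system" minimizes reported travel time; the POT inflates
# the reported time of a protected residential street to shift the optimum.

def route_choice(reported_minutes):
    """The optimization system: pick the road with the minimum reported time."""
    return min(reported_minutes, key=reported_minutes.get)

reported = {"highway": 14.0, "arterial": 11.0, "residential_street": 9.0}
print(route_choice(reported))            # residential_street (harmful local outcome)

def protect(reported, protected_road, inflation=10.0):
    """The POT: poison the input the system consumes for the protected road."""
    poisoned = dict(reported)
    poisoned[protected_road] += inflation
    return poisoned

print(route_choice(protect(reported, "residential_street")))  # arterial
```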


Small ensembles of kriging models for optimization

arXiv.org Machine Learning

The Efficient Global Optimization (EGO) algorithm uses a conditional Gaussian Process (GP) to approximate an objective function known at a finite number of observation points and sequentially adds new points which maximize the Expected Improvement criterion according to the GP. The key factor that controls the efficiency of EGO is the GP covariance function (or kernel), which should be chosen according to the objective function. Traditionally, a parameterized family of covariance functions is considered whose parameters are learned through statistical procedures such as maximum likelihood or cross-validation. However, it may be questioned whether statistical procedures for learning covariance functions are the most efficient for optimization, as they target a global agreement between the GP and the observations, which is not the ultimate goal of optimization. Furthermore, statistical learning procedures are computationally expensive. The main alternative to the statistical learning of the GP is self-adaptation, where the algorithm tunes the kernel parameters based on their contribution to objective function improvement. After questioning the possibility of self-adaptation for kriging-based optimizers, this paper proposes a novel approach for tuning the length-scale of the GP in EGO: at each iteration, a small ensemble of kriging models structured by their length-scales is created. All of the models contribute to an iterate in an EGO-like fashion. Then, the set of models is densified around the model whose length-scale yielded the best iterate, and further points are produced. Numerical experiments are provided which motivate the use of many length-scales. The tested implementation does not perform better than the classical EGO algorithm in a sequential context but shows the potential of the approach for parallel implementations.
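
To make the ensemble idea concrete, here is a minimal sketch (not the paper's implementation) of one EGO-like iteration in which several GPs with fixed, distinct length-scales each propose the candidate maximizing their own Expected Improvement. It assumes scikit-learn's GaussianProcessRegressor, a finite candidate pool, and omits the densification of length-scales around the best performer.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, guarding against zero predictive variance."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ensemble_ego_step(X, y, length_scales, candidates):
    """One EGO-like iteration with a small ensemble of fixed-length-scale GPs.

    Each model proposes the candidate maximizing its own EI; all proposals
    could then be evaluated and the ensemble densified around the best
    length-scale (that step is not shown here).
    """
    proposals = []
    for ls in length_scales:
        kernel = RBF(length_scale=ls, length_scale_bounds="fixed")
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        ei = expected_improvement(mu, sigma, y.min())
        proposals.append(candidates[np.argmax(ei)])
    return np.array(proposals)

# Toy 1-D example: minimize f(x) = sin(3x) + 0.1 x^2 on [-3, 3]
f = lambda x: np.sin(3 * x) + 0.1 * x ** 2
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(6, 1))
y = f(X).ravel()
cands = np.linspace(-3, 3, 500).reshape(-1, 1)
print(ensemble_ego_step(X, y, length_scales=[0.1, 0.5, 2.0], candidates=cands))
```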


One line Bayesian optimization of scikit_learn model hyperparameters • /r/MachineLearning

@machinelearnbot

The wrapper code uses our Python API to communicate with our optimization service. Our service combines several ideas from Bayesian Optimization research; in particular, we support mixed parameter spaces of discrete/categorical/integer parameters. As for pricing, an easy-to-use, managed, state-of-the-art black-box optimization service is something we think adds a lot of value to research and development pipelines, so we are offering it as a paid service. We also offer a free trial, so you can sign up and test the service at no cost: https://sigopt.com/signup
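
For readers who want to try the same idea without the hosted service, here is a rough open-source analogue (not the vendor's API): Bayesian optimization over a mixed continuous/integer/categorical hyperparameter space for a scikit-learn model, using scikit-optimize's BayesSearchCV. The estimator, search space, and budget below are illustrative choices.

```python
# Open-source sketch of Bayesian optimization over a mixed parameter space
# for a scikit-learn model, using scikit-optimize (skopt).
from skopt import BayesSearchCV
from skopt.space import Real, Integer, Categorical
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

search = BayesSearchCV(
    GradientBoostingClassifier(),
    {
        "learning_rate": Real(1e-3, 0.3, prior="log-uniform"),  # continuous
        "n_estimators": Integer(50, 500),                       # integer
        "max_features": Categorical(["sqrt", "log2"]),          # categorical
    },
    n_iter=32,   # number of Bayesian-optimization evaluations
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```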


Interactive urban design generation and optimization

#artificialintelligence

The first video shows the set-up and process of optimizing a simple parametric design for a neighborhood. The algorithm takes five fitness objectives into account: solar comfort on the streets (weighted by pedestrian frequency); wind comfort; footfall through the neighborhood; access to the neighborhood; and overall access to local transit stations. The latter two indicators are computed for the whole area, which makes it possible to include a positive impact of the new quarter's spatial arrangement on the whole neighborhood as a goal dimension. By using our deep-learning-based predictions for the solar and wind related measures, one iteration takes only about three seconds to compute.
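
A minimal sketch of how such a five-objective evaluation could be wired up is given below; the surrogate predictors are trivial stubs standing in for the deep-learning models mentioned in the post, and all names, inputs, and weights are illustrative rather than taken from the original tool.

```python
# Hypothetical five-objective fitness evaluation for one candidate design.
# The predict_* functions are stubs standing in for fast learned surrogates.

def predict_solar(design):    return 1.0 - 0.10 * design["height"]       # stub surrogate
def predict_wind(design):     return 1.0 - 0.05 * design["height"]       # stub surrogate
def predict_footfall(design): return 0.20 * design["path_density"]       # stub surrogate
def predict_access(design):   return 0.30 * design["path_density"]       # stub, whole-area score
def predict_transit(design):  return 0.50 - 0.02 * design["block_size"]  # stub, whole-area score

def evaluate_design(design, weights=(1, 1, 1, 1, 1)):
    """Weighted-sum fitness over the five objectives (higher is better).

    A real optimizer might instead keep the score vector and search for
    Pareto-optimal designs rather than collapsing it to a scalar.
    """
    scores = (predict_solar(design), predict_wind(design), predict_footfall(design),
              predict_access(design), predict_transit(design))
    return sum(w * s for w, s in zip(weights, scores))

print(evaluate_design({"height": 4, "path_density": 2.5, "block_size": 6}))
```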