Collaborating Authors: Michalewicz


Genetic Algorithms + Data Structures = Evolution Programs: Michalewicz, Zbigniew (ISBN 9783540606765)

#artificialintelligence

Zbigniew Michalewicz is Emeritus Professor of Computer Science at the University of Adelaide in Australia. He completed his Master's degree at the Technical University of Warsaw in 1974 and received his Ph.D. from the Institute of Computer Science, Polish Academy of Sciences, in 1981. He also holds a Doctor of Science degree in Computer Science from the Polish Academy of Sciences. Zbigniew Michalewicz also holds professorial positions at the Institute of Computer Science, Polish Academy of Sciences, the Polish-Japanese Institute of Information Technology, and the State Key Laboratory of Software Engineering of Wuhan University, China. He is also associated with the Structural Complexity Laboratory at Seoul National University, South Korea.


How Bayesian Should Bayesian Optimisation Be?

De Ath, George, Everson, Richard, Fieldsend, Jonathan

arXiv.org Machine Learning

Bayesian optimisation (BO) uses probabilistic surrogate models - usually Gaussian processes (GPs) - for the optimisation of expensive black-box functions. At each BO iteration, the GP hyperparameters are fit to previously-evaluated data by maximising the marginal likelihood. However, this fails to account for uncertainty in the hyperparameters themselves, leading to overconfident model predictions. This uncertainty can be accounted for by taking the Bayesian approach of marginalising out the model hyperparameters. We investigate whether a fully-Bayesian treatment of the Gaussian process hyperparameters in BO (FBBO) leads to improved optimisation performance. Since an analytic approach is intractable, we compare FBBO using three approximate inference schemes to the maximum likelihood approach, using the Expected Improvement (EI) and Upper Confidence Bound (UCB) acquisition functions paired with ARD and isotropic Matern kernels, across 15 well-known benchmark problems for 4 observational noise settings. FBBO using EI with an ARD kernel leads to the best performance in the noise-free setting, with much less difference between combinations of BO components when the noise is increased. FBBO leads to over-exploration with UCB, but is not detrimental with EI. Therefore, we recommend FBBO using EI with an ARD kernel as the default choice for BO.
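
A minimal sketch of the contrast the abstract describes, on a toy 1-D problem: a single maximum-likelihood-style hyperparameter setting versus an approximate fully-Bayesian treatment that averages the EI acquisition over a set of hyperparameter samples. The isotropic RBF kernel, the hand-picked hyperparameter grid (standing in for MCMC samples of the marginal likelihood), and all constants are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch only: ML-II-style single hyperparameter fit vs. averaging EI over
# hyperparameter samples (approximate fully-Bayesian BO).
import numpy as np
from scipy.stats import norm

def rbf(X1, X2, lengthscale, variance):
    """Isotropic RBF kernel; an ARD kernel would use one lengthscale per dimension."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(Xq, X, y, lengthscale, variance, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at query points Xq."""
    K = rbf(X, X, lengthscale, variance) + noise * np.eye(len(X))
    Ks = rbf(Xq, X, lengthscale, variance)
    mu = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = variance - np.sum(Ks * v.T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI for minimisation: expected improvement over the incumbent best value."""
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Toy observations of an expensive 1-D black-box function.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))
y = np.sin(6 * X[:, 0]) + 0.05 * rng.standard_normal(5)
Xq = np.linspace(0, 1, 200)[:, None]

# Single hyperparameter setting (stand-in for the maximum-likelihood fit).
mu, sd = gp_posterior(Xq, X, y, lengthscale=0.2, variance=1.0)
ei_ml = expected_improvement(mu, sd, y.min())

# Approximate marginalisation: average EI over hyperparameter samples.
hyper_samples = [(ls, var) for ls in (0.1, 0.2, 0.4) for var in (0.5, 1.0, 2.0)]
ei_fb = np.mean(
    [expected_improvement(*gp_posterior(Xq, X, y, ls, var), y.min())
     for ls, var in hyper_samples], axis=0)

print("next point (single fit):    ", Xq[np.argmax(ei_ml), 0])
print("next point (fully Bayesian):", Xq[np.argmax(ei_fb), 0])
```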


A Profit Guided Coordination Heuristic for Travelling Thief Problems

Namazi, Majid (Griffith University) | Newton, M.A. Hakim (Griffith University) | Sattar, Abdul (Griffith University) | Sanderson, Conrad (Data61 / CSIRO)

AAAI Conferences

The travelling thief problem (TTP) is a combination of two interdependent NP-hard components: the travelling salesman problem (TSP) and the knapsack problem (KP). Existing approaches for TTP typically solve the TSP and KP components in an interleaved fashion, where the solution to one component is held fixed while the other component is changed. This indicates poor coordination between solving the two components and may lead to poor-quality TTP solutions. For solving the TSP component, the 2-OPT segment-reversing heuristic is often used for modifying the tour. We propose an extended and modified form of the reversing heuristic that concurrently considers both the TSP and KP components. Items deemed less profitable and picked in cities earlier in the reversed segment are replaced by items that tend to be equally or more profitable and not picked in the later cities. Comparative evaluations on a broad range of benchmark TTP instances indicate that the proposed approach outperforms existing state-of-the-art TTP solvers.
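
A much-simplified sketch of the coordination idea, not the authors' solver: a 2-OPT segment reversal on the tour followed by a profit-guided swap of item picks inside the reversed segment, scored with a basic TTP objective (profit minus rent times weight-dependent travel time). The instance data, objective constants, and the single-swap rule are illustrative assumptions.

```python
import random

def ttp_objective(tour, picks, dist, items, capacity, vmax=1.0, vmin=0.1, rent=1.0):
    """Profit of picked items minus renting cost of the weight-dependent travel time."""
    weight, profit, time = 0.0, 0.0, 0.0
    n = len(tour)
    for i, city in enumerate(tour):
        for (p, w) in items.get(city, []):
            if (city, p, w) in picks:
                weight += w
                profit += p
        if weight > capacity:
            return float("-inf")                       # infeasible packing
        speed = vmax - (vmax - vmin) * weight / capacity
        time += dist[city][tour[(i + 1) % n]] / speed  # leg to the next city
    return profit - rent * time

def reverse_and_reassign(tour, picks, i, j, dist, items, capacity):
    """Reverse tour[i:j+1]; then try moving one pick from an earlier city in the
    reversed segment to a no-less-profitable unpicked item in a later city."""
    new_tour = tour[:i] + list(reversed(tour[i:j + 1])) + tour[j + 1:]
    best = (new_tour, picks, ttp_objective(new_tour, picks, dist, items, capacity))
    segment = new_tour[i:j + 1]
    for a, early in enumerate(segment):
        for (p1, w1) in items.get(early, []):
            if (early, p1, w1) not in picks:
                continue
            for late in segment[a + 1:]:
                for (p2, w2) in items.get(late, []):
                    if (late, p2, w2) in picks or p2 < p1:
                        continue
                    cand = (picks - {(early, p1, w1)}) | {(late, p2, w2)}
                    val = ttp_objective(new_tour, cand, dist, items, capacity)
                    if val > best[2]:
                        best = (new_tour, cand, val)
    return best

# Tiny made-up instance: 3 cities, symmetric distances, one item per city.
dist = {0: {0: 0, 1: 2, 2: 3}, 1: {0: 2, 1: 0, 2: 1}, 2: {0: 3, 1: 1, 2: 0}}
items = {1: [(12.0, 3.0)], 2: [(10.0, 3.0)]}
tour, picks = [0, 1, 2], {(2, 10.0, 3.0)}
print(reverse_and_reassign(tour, picks, 1, 2, dist, items, capacity=5.0))
```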


A study of problems with multiple interdependent components - Part I

Yafrani, Mohamed El

arXiv.org Artificial Intelligence

Recognising that real-world optimisation problems have multiple interdependent components can be quite easy. However, providing a generic and formal model of the dependencies between components is a trickier task. In fact, a problem with multiple interdependent components (PMIC) can be considered simply as a single optimisation problem, and the dependencies between components could be investigated by studying the decomposability of the problem and the correlations between the sub-problems. In this work, we attempt to define PMICs by reasoning from the reverse perspective: instead of considering a decomposable problem, we model multiple problems (the components) and define how these components could be connected. In this document, we introduce notions related to problems with multiple interdependent components. We start by introducing realistic examples from logistics and supply chain management to illustrate the composite nature of, and the dependencies in, these problems. Afterwards, we provide our attempt to formalise and classify dependency in multi-component problems.
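
A minimal illustration (our own toy example, not the paper's formalism) of the kind of dependency being discussed: two components where the solution chosen for one (a packing) changes the objective of the other (a routing), so neither component can be optimised in isolation.

```python
def routing_cost(route_length, carried_weight, base_speed=1.0):
    # Routing objective; the packing decision enters through the carried weight.
    speed = max(base_speed - 0.1 * carried_weight, 0.1)
    return route_length / speed

def packing_profit(picked_values):
    return sum(picked_values)

def composite_objective(route_length, picked_values, picked_weights):
    # The composite problem trades packing profit against the routing cost
    # induced by the packed weight: this coupling is the dependency.
    return packing_profit(picked_values) - routing_cost(route_length, sum(picked_weights))

# Picking both items raises profit but slows the route; only the composite
# score shows whether the extra weight is worth carrying.
print(composite_objective(10.0, [5.0, 3.0], [2.0, 1.0]))
print(composite_objective(10.0, [5.0], [2.0]))
```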


Reasoning and Facts Explanation in Valuation Based Systems

Wierzchoń, S. T., Kłopotek, M. A., Michalewicz, M.

arXiv.org Artificial Intelligence

In the literature, significant interest is given to the optimization problem of identifying the set of composite hypotheses H that yield the $k$ largest values of $P(H|S_e)$, where a composite hypothesis is an instantiation of all the nodes in the network except the evidence nodes \cite{KSy:93}. This problem is called finding the $k$ Most Plausible Explanations (MPE) of a given evidence $S_e$ in a Bayesian belief network. The problem of finding the $k$ most probable hypotheses is generally NP-hard \cite{Cooper:90}. Therefore, past work has investigated various simplifications of the task, restricting $k$ (to 1 or 2), restricting the network structure (e.g. to singly connected networks), or shifting the complexity to the spatial domain. In this paper, a genetic algorithm is proposed to overcome some of these restrictions; the approach is also generalised beyond the probabilistic domain to the Valuation Based System (VBS) framework, i.e. to the realm of Dempster-Shafer belief calculus.
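
An illustrative toy, not the paper's algorithm or its VBS/Dempster-Shafer generalisation: a plain genetic algorithm searching over complete instantiations of the non-evidence nodes of a tiny Bayesian network and keeping the $k$ most probable ones found. The network, CPT numbers, and GA operators are made-up assumptions for the example.

```python
import random

# Toy network A -> B, A -> C with binary variables; CPTs are invented numbers.
P_A = {0: 0.7, 1: 0.3}
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
P_C_given_A = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}

def joint(a, b, c):
    return P_A[a] * P_B_given_A[a][b] * P_C_given_A[a][c]

def fitness(hypothesis, evidence):
    """Score a composite hypothesis (values for A and B) given evidence C = evidence."""
    a, b = hypothesis
    return joint(a, b, evidence)

def k_mpe_ga(evidence, k=2, pop_size=8, generations=20, p_mut=0.2):
    rng = random.Random(0)
    pop = [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(pop_size)]
    seen = {}
    for _ in range(generations):
        for h in pop:
            seen[h] = fitness(h, evidence)
        def select():                                    # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if seen[a] >= seen[b] else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            child = (p1[0], p2[1])                       # one-point crossover
            child = tuple(v ^ 1 if rng.random() < p_mut else v for v in child)
            nxt.append(child)
        pop = nxt
    # Return the k most probable complete instantiations encountered.
    return sorted(seen.items(), key=lambda kv: kv[1], reverse=True)[:k]

# The k = 2 most plausible explanations of the evidence C = 1.
print(k_mpe_ga(evidence=1, k=2))
```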


Project Manager Today

#artificialintelligence

A ROBOT with an algorithm-based persona is being used to help companies make data-driven decisions in real time. South Australian company Complexica has developed Larry, the Digital Analyst, essentially a set of algorithms tuned to complex problems so as to quickly generate answers that would otherwise take people a very long time to work out. Big Data software algorithms are taking decision-making to a new level, delivering solutions and efficiencies like never before. The global Artificial Intelligence market is forecast to exceed USD 5 billion by 2020. Father-and-son team Matthew Michalewicz and Dr Zbigniew "Mike" Michalewicz, a former professor at the University of Adelaide's School of Computer Science and an artificial intelligence pioneer, started the company in 2014 with software architect Constantin Chiriac.


Machine learning is the new face of enterprise data

#artificialintelligence

While the complexity of the searching and result-ranking technology behind Apple's Siri would likely elude most of its users, the value of a context-sensitive personal assistant certainly has not. Yet while Siri spawned a new generation of anthropomorphic digital assistants, researchers in machine learning and artificial intelligence (AI) are taking the concept much further to help enterprises catch up with the growth of data. Industrial products distributor Coventry Group is among the latest companies to jump onto the trend. The company, whose fasteners, fluid systems, gasket and hardware divisions collectively employ around 650 people, is working with Adelaide-based data-analytics specialist Complexica to apply that company's AI technology – personified as Larry, the Digital Analyst – to guide decisions around sales and pricing strategies. Introducing Larry – a collection of algorithms delivered on a software-as-a-service (SaaS) basis via Amazon's cloud – to Coventry's business is a two-to-four-month process that will see the technology fine-tuned to the company's operating parameters.


Characterization of the convergence of stationary Fokker-Planck learning

Berrones, Arturo

arXiv.org Artificial Intelligence

The convergence properties of the stationary Fokker-Planck algorithm for the estimation of the asymptotic density of stochastic search processes are studied. Theoretical and empirical arguments for the characterization of the convergence of the estimation in the case of separable and nonseparable nonlinear optimization problems are given. Some implications of the convergence of stationary Fokker-Planck learning for the inference of parameters in artificial neural network models are outlined.
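
A background illustration, not Berrones' estimator: it shows what the asymptotic (stationary Fokker-Planck) density of a simple stochastic search process looks like, assuming an overdamped Langevin dynamic dx = -V'(x) dt + sqrt(2D) dW on a toy double-well objective, whose stationary density is proportional to exp(-V(x)/D). The objective, noise level, and grid are illustrative choices.

```python
import numpy as np

def V(x):
    return 0.25 * x**4 - 0.5 * x**2          # double-well objective / potential

def dV(x):
    return x**3 - x

rng = np.random.default_rng(0)
D, dt, steps = 0.3, 1e-3, 200_000
x, samples = 0.0, np.empty(steps)
for t in range(steps):                        # Euler-Maruyama simulation of the search
    x += -dV(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    samples[t] = x

# Stationary Fokker-Planck density on a grid, normalised numerically.
grid = np.linspace(-2.5, 2.5, 50)
p = np.exp(-V(grid) / D)
p /= np.sum(p) * (grid[1] - grid[0])

# Empirical density of the long search run, compared on the same grid.
hist, edges = np.histogram(samples, bins=50, range=(-2.5, 2.5), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print("max |empirical - stationary FP| on the grid:",
      np.max(np.abs(hist - np.interp(centres, grid, p))))
```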