optimization problem


Global Optimum Search in Quantum Deep Learning

#artificialintelligence

This paper aims to solve a machine learning optimization problem using a quantum circuit. Two approaches, namely the average approach and the Partial Swap Test Cut-off method (PSTC), were proposed to search for the global minimum/maximum of two different objective functions. The current cost is linear in N (times a factor depending on Θ), but there is potential to improve PSTC further to a sublinear dependence on N by enhancing the checking process.


The Mathematics and Intuitions of Principal Component Analysis (PCA) Using Truncated Singular…

#artificialintelligence

As data scientists or machine learning experts, we are faced with tons of columns of data to extract insight from, and among these features are redundant ones, or in fancier mathematical terms, co-linear features. Numerous columns of features without prior treatment lead to the curse of dimensionality, which in turn leads to overfitting. To ameliorate this curse of dimensionality, principal component analysis (PCA for short), one of many ways to address it, is employed using truncated Singular Value Decomposition (SVD). Principal Component Analysis starts to make sense when the number of measured variables is more than three (3), where visualizing the cloud of data points is difficult and it is nearly impossible to get insight from them. First: let's try to grasp the goal of Principal Component Analysis.
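As a concrete illustration of the idea described above, here is a minimal sketch of PCA via truncated SVD in NumPy; the function name `pca_truncated_svd` and the synthetic co-linear data are illustrative assumptions, not taken from the article.

```python
import numpy as np

def pca_truncated_svd(X, n_components):
    """Project X (n_samples, n_features) onto its top principal components."""
    # Center each feature (column) at zero mean.
    X_centered = X - X.mean(axis=0)

    # SVD of the centered data; keep only the leading singular triplets.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]              # principal directions
    scores = X_centered @ components.T          # data projected onto them
    explained_variance = (S[:n_components] ** 2) / (X.shape[0] - 1)
    return scores, components, explained_variance

# Example: reduce 5 features, one deliberately co-linear, to 2 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)  # redundant (co-linear) column
scores, components, var = pca_truncated_svd(X, n_components=2)
print(scores.shape)  # (100, 2)
```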


Computational Intelligent Data Analysis for Sustainable Development - Programmer Books

#artificialintelligence

Going beyond performing simple analyses, researchers involved in the highly dynamic field of computational intelligent data analysis design algorithms that solve increasingly complex data problems in changing environments, including economic, environmental, and social data. Computational Intelligent Data Analysis for Sustainable Development presents novel methodologies for automatically processing these types of data to support rational decision making for sustainable development. Through numerous case studies and applications, it illustrates important data analysis methods, including mathematical optimization, machine learning, signal processing, and temporal and spatial analysis, for quantifying and describing sustainable development problems. With a focus on integrated sustainability analysis, the book presents a large-scale quadratic programming algorithm to expand high-resolution input-output tables from the national scale to the multinational scale to measure the carbon footprint of the entire trade supply chain. It also quantifies the error or dispersion between different reclassification and aggregation schemas, revealing that aggregation errors have a high concentration over specific regions and sectors. A profuse amount of climate data of various types is available, providing a rich and fertile playground for future data mining and machine learning research.


Decentralized Reinforcement Learning

#artificialintelligence

Many organizations in the world, such as biological ecosystems, governments, and corporations, are physically decentralized, yet they are unified in the sense of their functionality. For instance, a financial institution operates with a global policy of maximizing its profits, hence appearing as a single entity; however, this entity abstraction is an illusion, as a financial institution is composed of a group of individual human agents solving their own optimization problems, with or without collaboration. In a centralized reinforcement learning framework, the policy function's parameters are fine-tuned using the gradients of a defined objective function. This approach is called the monolithic decision-making framework, as the policy function's learning parameters are coupled globally through a single objective function. Having covered a brief background of the centralized reinforcement learning framework, let us move on to some promising decentralized reinforcement learning frameworks.
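To make the "parameters fine-tuned by gradients of an objective" step concrete, here is a minimal REINFORCE-style policy-gradient sketch on a toy bandit; the environment, learning rate, and variable names are illustrative assumptions rather than anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-armed bandit: the objective is the policy's expected reward.
true_rewards = np.array([0.2, 0.5, 0.9])
theta = np.zeros(3)                      # policy parameters (action logits)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

learning_rate = 0.1
for step in range(2000):
    probs = softmax(theta)
    action = rng.choice(3, p=probs)
    reward = true_rewards[action] + 0.1 * rng.normal()

    # REINFORCE: grad of log pi(action) w.r.t. the logits is (one_hot - probs).
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    theta += learning_rate * reward * grad_log_pi   # gradient ascent on the objective

print(softmax(theta))   # probability mass should concentrate on the best arm
```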


Guide to Interpretable Machine Learning

#artificialintelligence

If you can't explain it simply, you don't understand it well enough. Disclaimer: This article draws and expands upon material from (1) Christoph Molnar's excellent book on Interpretable Machine Learning, which I definitely recommend to the curious reader, (2) a deep learning visualization workshop from Harvard ComputeFest 2020, as well as (3) material from CS282R at Harvard University taught by Ike Lage and Hima Lakkaraju, who are both prominent researchers in the field of interpretability and explainability. This article is meant to condense and summarize the field of interpretable machine learning for the average data scientist and to stimulate interest in the subject. Machine learning systems are becoming increasingly employed in complex, high-stakes settings such as medicine. Despite this increased utilization, there is still a lack of sufficient techniques available to explain and interpret the decisions of these deep learning algorithms. This can be very problematic in areas where the decisions of algorithms must be explainable or attributable to certain features due to laws or regulations (such as the right to explanation), or where accountability is required. The need for algorithmic accountability has been highlighted many times; the most notable cases are Google's facial recognition algorithm, which labeled some Black people as gorillas, and Uber's self-driving car, which ran a stop sign. Because Google was unable to fix the algorithm and remove the algorithmic bias that caused this issue, it instead removed words relating to monkeys from Google Photos' search engine. This illustrates the alleged black-box nature of many machine learning algorithms. The black-box problem is predominantly associated with the supervised machine learning paradigm due to its predictive nature. Accuracy alone is no longer enough. Academics in deep learning are acutely aware of this interpretability and explainability problem, and whilst some argue that these models are essentially black boxes, several techniques have been developed in recent years for visualizing aspects of deep neural networks, such as the features and representations they have learned. The term info-besity has been thrown around to refer to the difficulty of providing transparency when decisions are made on the basis of many individual features, due to an overload of information.


In defense of weight-sharing for neural architecture search: an optimization perspective

AIHub

Neural architecture search (NAS) -- selecting which neural model to use for your learning problem -- is a promising but computationally expensive direction for automating and democratizing machine learning. The weight-sharing method, whose initial success at dramatically accelerating NAS surprised many in the field, has come under scrutiny due to its poor performance as a surrogate for full model-training (a miscorrelation problem known as rank disorder) and inconsistent results on recent benchmarks. In this post, we give a quick overview of weight-sharing and argue in favor of its continued use for NAS. First-generation NAS methods were astronomically expensive due to the combinatorially large search space, requiring the training of thousands of neural networks to completion. Then, in their 2018 ENAS (for Efficient NAS) paper, Pham et al. introduced the idea of weight-sharing, in which only one shared set of model parameters is trained for all architectures.
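As a rough intuition for weight-sharing, the toy sketch below keeps one pool of parameters and, at every step, samples a candidate "architecture" (here just a choice of operation) that trains its slice of that pool; the task, candidate operations, and update rule are illustrative assumptions of my own and not ENAS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = 3x, learned by a one-layer "supernet".
X = rng.normal(size=(256, 1))
y = 3.0 * X

# One shared pool of parameters: each candidate operation owns a slice of it,
# and every sampled architecture reuses whichever slices it selects.
shared_weights = {"linear": np.zeros((1, 1)), "tanh": np.zeros((1, 1))}

def forward(op_name, w, x):
    return x @ w if op_name == "linear" else np.tanh(x @ w)

learning_rate = 0.05
for step in range(500):
    op = rng.choice(["linear", "tanh"])          # sample one architecture per step
    w = shared_weights[op]
    pred = forward(op, w, X)
    grad_pred = 2.0 * (pred - y) / len(X)        # d(MSE)/d(pred)
    if op == "linear":
        grad_w = X.T @ grad_pred
    else:
        grad_w = X.T @ (grad_pred * (1 - np.tanh(X @ w) ** 2))
    shared_weights[op] = w - learning_rate * grad_w  # update only the slice used

# Rank the candidate architectures using the shared weights (no retraining).
for op, w in shared_weights.items():
    mse = float(np.mean((forward(op, w, X) - y) ** 2))
    print(op, round(mse, 3))
```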


Linear and Nonlinear Programming

#artificialintelligence

This new edition covers the central concepts of practical optimization techniques, with an emphasis on methods that are both state-of-the-art and popular. Again, a connection is drawn between the purely analytical character of an optimization problem and the behavior of algorithms used to solve that problem. As in the earlier editions, the material in this fourth edition is organized into three separate parts. Part I is a self-contained introduction to linear programming covering numerical algorithms and many of its important special applications. Part II, which is independent of Part I, covers the theory of unconstrained optimization, including both derivations of the appropriate optimality conditions and an introduction to basic algorithms.


Genetic algorithm -- learning from nature to solve complex optimization problems.

#artificialintelligence

It's a method for solving both constrained and unconstrained optimization problems based on a natural selection process that mimics biological evolution. I know, it sounds even worse, but keep reading. "Natural selection is the process by which individual organisms with favorable traits are more likely to survive and reproduce," said Charles Darwin. Also expressed as "the survival of the fittest", it means that if you suit the conditions and environment you live in, then you're more likely to survive and reproduce, so that your traits can be passed on to the next generations. To sum up: we keep individuals with particular traits that make them good at a particular task and get rid of the bad ones.
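A tiny sketch of the loop just described (selection, crossover, mutation) on a one-dimensional toy problem; the fitness function, population size, and mutation scale are illustrative assumptions, not the article's example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Maximize a toy fitness function: individuals with x close to 7 "survive".
def fitness(x):
    return -(x - 7.0) ** 2

population = rng.uniform(-10, 10, size=20)

for generation in range(100):
    # Selection: keep the fittest half of the population.
    ranked = population[np.argsort(fitness(population))[::-1]]
    parents = ranked[:10]

    # Crossover: each child averages two randomly chosen parents.
    mates = rng.choice(parents, size=(10, 2))
    children = mates.mean(axis=1)

    # Mutation: a small random perturbation keeps diversity in the gene pool.
    children += 0.5 * rng.normal(size=children.shape)
    population = np.concatenate([parents, children])

best = population[np.argmax(fitness(population))]
print(round(float(best), 3))   # converges near 7
```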


Bayesian Inference: The Maximum Entropy Principle

#artificialintelligence

In this article, I will explain what the maximum entropy principle is, how to apply it, and why it's useful in the context of Bayesian inference. The code to reproduce the results and figures can be found in this notebook. The maximum entropy principle is a method for creating the probability distribution that is most consistent with a given set of assumptions and nothing more. The rest of the article will explain what this means. First, we need a way to measure the uncertainty in a probability distribution.
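For a sense of how this plays out in code, here is a hedged sketch (not the article's notebook) that measures uncertainty with Shannon entropy and numerically finds the maximum-entropy distribution for a six-sided die constrained to have mean 4.5; the constraint value and the use of SciPy's SLSQP solver are my own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Shannon entropy of a discrete distribution p: the uncertainty measure.
def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

# Die faces 1..6; find the distribution of maximum entropy whose mean is 4.5.
faces = np.arange(1, 7)
constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},    # normalization
    {"type": "eq", "fun": lambda p: p @ faces - 4.5},  # mean constraint
]
p0 = np.full(6, 1 / 6)
result = minimize(lambda p: -entropy(p), p0, method="SLSQP",
                  bounds=[(0, 1)] * 6, constraints=constraints)
print(np.round(result.x, 3))   # probabilities tilt toward the higher faces
```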


RSS 2020 – all the papers and videos!

Robohub

RSS 2020 was held virtually this year, from the RSS Pioneers Workshop on July 11 to the Paper Awards and Farewell on July 16. Many talks are now available online, including the 103 accepted papers, each presented as an online Spotlight Talk on the RSS YouTube channel, and of course the plenaries and much of the workshop content as well. We've tried to link here to all of the goodness from RSS 2020. The RSS Keynote on July 15 was delivered by Josh Tenenbaum, Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and CSAIL, and was titled "It's all in your head: Intuitive physics, planning, and problem-solving in brains, minds and machines".