Measurements of collective machine intelligence
Independent of the still ongoing research on measuring individual intelligence, we anticipate and provide a framework for measuring collective intelligence. Collective intelligence refers to the idea that several individuals can collaborate in order to achieve high levels of intelligence. We thus present some ideas on how the intelligence of a group can be measured, and we simulate such tests. Here, however, we focus on groups of artificial intelligence agents (i.e., machines). We explore how a group of agents is able to choose the appropriate problem and to specialize in a variety of tasks, a feature that is an important contributor to the increase of intelligence in a group (apart from the addition of more agents and the improvement due to common decision making). Our simulations yield interesting findings about how (collective) intelligence can be modeled, how collective intelligence tests can be designed, and the underlying dynamics of collective intelligence. As it is useful for our simulations, we also provide some improvements to the threshold allocation model originally used in the area of swarm intelligence, which we generalize further here.
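As a concrete illustration of the threshold allocation model the abstract builds on, the following is a minimal Python sketch of classic response-threshold task allocation from swarm intelligence; the rates and constants are assumed values, not those used in the paper.

# Response-threshold task allocation (minimal sketch; constants are assumed).
# An agent engages task j with probability s^2 / (s^2 + theta^2); engaging
# lowers its threshold (specialization), idleness raises it (forgetting).
import random

N_AGENTS, N_TASKS, STEPS = 20, 3, 200
XI, PHI = 0.1, 0.05   # learning / forgetting rates (illustrative)

thresholds = [[random.uniform(0.0, 1.0) for _ in range(N_TASKS)]
              for _ in range(N_AGENTS)]
stimuli = [0.5] * N_TASKS

for _ in range(STEPS):
    for i in range(N_AGENTS):
        for j in range(N_TASKS):
            s, theta = stimuli[j], thresholds[i][j]
            p_engage = s * s / (s * s + theta * theta)
            if random.random() < p_engage:
                thresholds[i][j] = max(0.0, theta - XI)   # specialize
                stimuli[j] = max(0.0, stimuli[j] - 0.01)  # demand is served
            else:
                thresholds[i][j] = min(1.0, theta + PHI)  # forget
    stimuli = [min(1.0, s + 0.02) for s in stimuli]       # demand grows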
A Fuzzy TOPSIS Multiple-Attribute Decision Making for Scholarship Selection
As education fees become more expensive, more students apply for scholarships. Consequently, hundreds and even thousands of applications need to be handled by the sponsor. To solve this problem, alternatives must be selected based on several attributes (criteria). In order to make a decision on such fuzzy problems, Fuzzy Multiple Attribute Decision Making (FMADM) can be applied. In this study, Unified Modeling Language (UML) in FMADM with TOPSIS and Weighted Product (WP) methods is applied to select the candidates for academic and non-academic scholarships at Universitas Islam Negeri Sunan Kalijaga. The data used were both crisp and fuzzy. The results show that the TOPSIS and Weighted Product FMADM methods can be used to select the most suitable candidates to receive the scholarships, since the preference values applied in these methods can identify the applicants with the highest eligibility.
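To make the TOPSIS selection step concrete, here is a minimal sketch; the decision matrix, weights, and criteria directions are invented examples, not the study's data.

# Minimal TOPSIS sketch (illustrative data; weights and criteria are assumed).
import numpy as np

X = np.array([[3.0, 7.0, 2.0],       # applicants x criteria (hypothetical)
              [4.0, 5.0, 6.0],
              [2.0, 8.0, 4.0]])
w = np.array([0.5, 0.3, 0.2])        # criterion weights
benefit = np.array([True, True, False])  # False = cost criterion

R = X / np.linalg.norm(X, axis=0)    # vector-normalize each criterion
V = R * w                            # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)  # preference value; higher = more eligible
print(np.argsort(closeness)[::-1])   # applicants ranked by eligibility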
Scaling Up Robust MDPs by Reinforcement Learning
Tamar, Aviv, Xu, Huan, Mannor, Shie
We consider large-scale Markov decision processes (MDPs) with parameter uncertainty, under the robust MDP paradigm. Previous studies showed that robust MDPs, based on a minimax approach to handling uncertainty, can be solved using dynamic programming for small to medium-sized problems. However, due to the "curse of dimensionality", MDPs that model real-life problems are typically prohibitively large for such approaches. In this work we employ a reinforcement learning approach to tackle this planning problem: we develop a robust approximate dynamic programming method based on a projected fixed-point equation to approximately solve large-scale robust MDPs. We show that the proposed method provably succeeds under certain technical conditions, and demonstrate its effectiveness through simulations of an option pricing problem. To the best of our knowledge, this is the first attempt to scale up the robust MDP paradigm.
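The minimax backup underlying robust MDPs can be sketched as follows; this is a hedged illustration in which the uncertainty set is a finite collection of candidate transition kernels, a simplification of the paper's projected fixed-point method, with all sizes and constants invented.

# Robust value iteration sketch: max over actions, min over candidate models.
import numpy as np

n_states, n_actions, gamma, K = 4, 2, 0.95, 3
rng = np.random.default_rng(0)
R = rng.uniform(0, 1, (n_states, n_actions))
# K candidate transition kernels per (s, a): shape (K, S, A, S')
P = rng.dirichlet(np.ones(n_states), size=(K, n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    # worst-case expected next value over the uncertainty set, per (s, a)
    Q = R + gamma * np.min(np.einsum('ksan,n->ksa', P, V), axis=0)
    V_new = Q.max(axis=1)            # greedy over actions (minimax backup)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new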
A Data Mining Approach to Solve the Goal Scoring Problem
Oliveira, Renato, Adeodato, Paulo, Carvalho, Arthur, Viegas, Icamaan, Diego, Christian, Ing-Ren, Tsang
In soccer, scoring goals is a fundamental objective which depends on many conditions and constraints. Considering the RoboCup soccer 2D simulator, this paper presents a data mining-based decision system to identify the best time and direction to kick the ball towards the goal in order to maximize the overall chances of scoring during a simulated soccer match. Following the CRISP-DM methodology, data for modeling were extracted from matches of major international tournaments (10,691 kicks), knowledge about soccer was embedded via transformation of variables, and a Multilayer Perceptron was used to estimate the scoring chance. An experimental performance assessment comparing this approach against a previous LDA-based approach was conducted over 100 matches. Several statistical metrics were used to analyze the performance of the system, and the results showed an increase of 7.7% in the number of kicks, producing an overall increase of 78% in the number of goals scored.
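A minimal sketch of the modeling step described above: a Multilayer Perceptron estimating the probability that a kick scores, with the decision taken where the estimate is highest. The feature names and data here are invented; the paper's real features come from RoboCup 2D match logs.

# MLP scoring-chance estimator (toy data; features are hypothetical).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# hypothetical features: distance to goal, kick angle, opponents in the cone
X = rng.uniform(0, 1, (1000, 3))
y = (X[:, 0] < 0.4).astype(int)      # toy label: close kicks tend to score

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

candidate_kicks = rng.uniform(0, 1, (8, 3))    # one row per kick direction
p_score = clf.predict_proba(candidate_kicks)[:, 1]
best = int(np.argmax(p_score))       # kick when/where the estimate is highest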
Metaheuristics in Flood Disaster Management and Risk Assessment
Bongolan, Vena Pearl, Ballesteros, Florencio C. Jr., Banting, Joyce Anne M., Olaes, Aina Marie Q., Aquino, Charlymagne R.
A conceptual area is divided into units or barangays, each of which was allowed to evolve under a physical constraint. A risk assessment method was then used to identify the flood risk in each community using the following risk factors: the area's urbanized area ratio, literacy rate, mortality rate, poverty incidence, radio/TV penetration, and state of structural and non-structural measures. Vulnerability is defined as a weighted sum of these components, and a penalty was imposed for reduced vulnerability. An optimization comparison was done with MATLAB's Genetic Algorithms and Simulated Annealing; the results showed 'extreme' solutions and realistic designs for simulated annealing and genetic algorithms, respectively.
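A hedged sketch of the kind of fitness function such a comparison optimizes: vulnerability as a weighted sum of normalized risk factors plus a penalty term. The factor names follow the abstract, but the weights, normalization, and penalty form are placeholders, not the paper's values.

# Weighted-sum vulnerability fitness (all numeric values are assumptions).
RISK_FACTORS = ["urbanized_ratio", "literacy", "mortality",
                "poverty", "radio_tv", "structural_measures"]
WEIGHTS = [0.25, 0.15, 0.20, 0.20, 0.10, 0.10]   # placeholder weights

def vulnerability(unit):
    """unit: dict mapping risk factor name -> value normalized to [0, 1]."""
    v = sum(w * unit[f] for w, f in zip(WEIGHTS, RISK_FACTORS))
    penalty = 0.1 * max(0.0, 0.5 - v)   # illustrative penalty for reduced vulnerability
    return v + penalty

print(vulnerability({f: 0.5 for f in RISK_FACTORS}))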
Supersparse Linear Integer Models for Predictive Scoring Systems
Ustun, Berk, Traca, Stefano, Rudin, Cynthia
We introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification. We derive theoretical bounds on the true risk of SLIM scoring systems, and present experimental results to show that SLIM scoring systems are accurate, sparse, and interpretable classification models.
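To make concrete what a SLIM-style scoring system looks like at decision time, here is a minimal sketch: small integer points per feature, thresholded sum. The points and intercept below are invented; SLIM learns them via its own optimization (not shown here).

# Evaluating a sparse integer scoring system (coefficients are hypothetical).
import numpy as np

points = np.array([4, -2, 1, 3])       # small integer coefficients
intercept = -3

def predict(x):
    score = int(np.dot(points, x)) + intercept
    return 1 if score > 0 else -1      # binary classification by total score

x_example = np.array([1, 0, 1, 1])     # binary features for one example
print(predict(x_example))              # -> 1 (score = 4 + 1 + 3 - 3 = 5 > 0)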
Direct Uncertainty Estimation in Reinforcement Learning
Rodionov, Sergey, Potapov, Alexey, Vinogradov, Yurii
The optimal probabilistic approach to reinforcement learning is computationally infeasible. Its common simplification, which neglects the difference between the true environment and a model of it estimated from a limited number of observations, causes the exploration vs. exploitation problem. Uncertainty can be expressed in terms of a probability distribution over the space of environment models, and this uncertainty can be propagated to the action-value function via Bellman iterations, which are, however, computationally inefficient. We consider the possibility of directly measuring the uncertainty of the action-value function, and analyze the sufficiency of this simplified approach.
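A hedged illustration of the general idea, not the paper's exact estimator: track an uncertainty term for each Q(s, a) directly (here a simple count-based proxy) and let it guide exploration, instead of propagating a full distribution over environment models.

# Q-learning with a directly tracked uncertainty term (constants assumed).
import math
from collections import defaultdict

Q = defaultdict(float)
N = defaultdict(int)                   # visit counts per (state, action)
ALPHA, GAMMA, BETA = 0.1, 0.95, 1.0

def uncertainty(s, a):
    return BETA / math.sqrt(N[(s, a)] + 1)   # shrinks with experience

def select_action(s, actions):
    # optimism in the face of (directly estimated) uncertainty
    return max(actions, key=lambda a: Q[(s, a)] + uncertainty(s, a))

def update(s, a, r, s_next, actions):
    N[(s, a)] += 1
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])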
Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals
Hitziger, Sebastian, Clerc, Maureen, Gramfort, Alexandre, Saillet, Sandrine, Bénar, Christian, Papadopoulo, Théodore
Dictionary learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations as one single set, which means a loss of information when features are well-aligned across signals. This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG). Learning the dictionary on the entire signals could make use of the alignment and reveal higher-level features. In this case, however, small misalignments or phase variations of features would not be compensated for. In this paper, we propose an extension to the common dictionary learning framework to overcome these limitations by allowing atoms to adapt their positions across signals. The method is validated on simulated and real neuroelectric data.
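The key extension can be sketched as follows: when coding each trial, an atom is allowed to shift in time, with the best latency found by correlation over a jitter window. This is an illustrative single-atom version, not the authors' full algorithm.

# Fit one atom to a trial while searching over latency shifts (jitter).
import numpy as np

def best_shifted_fit(signal, atom, max_jitter):
    """Return (shift, amplitude) maximizing correlation within +/- max_jitter."""
    best = (0, 0.0)
    for shift in range(-max_jitter, max_jitter + 1):
        rolled = np.roll(atom, shift)
        amp = float(np.dot(signal, rolled))   # atom assumed unit-norm
        if abs(amp) > abs(best[1]):
            best = (shift, amp)
    return best

t = np.linspace(0, 1, 256)
atom = np.exp(-((t - 0.5) ** 2) / 0.002)
atom /= np.linalg.norm(atom)
trial = 2.0 * np.roll(atom, 7) + 0.1 * np.random.default_rng(1).normal(size=256)
print(best_shifted_fit(trial, atom, max_jitter=10))   # recovers a shift near 7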
Constrained Optimization for a Subset of the Gaussian Parsimonious Clustering Models
Browne, Ryan P., Subedi, Sanjeena, McNicholas, Paul
The expectation-maximization (EM) algorithm is an iterative method for finding maximum likelihood estimates when data are incomplete or are treated as being incomplete. The EM algorithm and its variants are commonly used for parameter estimation in applications of mixture models for clustering and classification, despite the fact that even the Gaussian mixture model likelihood surface contains many local maxima and is riddled with singularities. Previous work has focused on circumventing this problem by constraining the smallest eigenvalue of the component covariance matrices. In this paper, we consider constraining the smallest eigenvalue, the largest eigenvalue, and both the smallest and largest eigenvalues within the family setting. Specifically, a subset of the GPCM family is considered for model-based clustering, where we use a re-parameterized version of the famous eigenvalue decomposition of the component covariance matrices. Our approach is illustrated using various experiments with simulated and real data.
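The constraint mechanism can be illustrated in a hedged form: after each M-step, the eigenvalues of a component covariance are clipped to a prescribed interval, keeping the likelihood away from degenerate singular solutions. This is a generic projection, not the authors' re-parameterized GPCM formulation.

# Project a covariance matrix onto eigenvalue bounds [lam_min, lam_max].
import numpy as np

def constrain_covariance(S, lam_min, lam_max):
    """Clip the eigenvalues of symmetric S to the interval [lam_min, lam_max]."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, lam_min, lam_max)
    return vecs @ np.diag(vals) @ vecs.T

S = np.array([[1e-8, 0.0], [0.0, 9.0]])   # nearly singular component
print(constrain_covariance(S, lam_min=0.1, lam_max=4.0))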
The AdaBoost Flow
Lykov, A., Muzychka, S., Vaninsky, K.
We introduce a dynamical system which we call the AdaBoost flow. The flow is defined by a system of ODEs with control. We show that three algorithms of the AdaBoost family, (i) the AdaBoost algorithm of Schapire and Freund, (ii) the arc-gv algorithm of Breiman, and (iii) the confidence-rated prediction of Schapire and Singer, can be embedded in the AdaBoost flow. The nontrivial part of the AdaBoost flow equations coincides with the equations of dynamics of the nonperiodic Toda system written in terms of spectral variables. We provide a novel invariant geometrical description of the AdaBoost algorithm as a gradient flow on a foliation defined by level sets of the potential function. We propose a new approach for constructing boosting algorithms as a continuous-time gradient flow on measures defined by various metrics and potential functions. Finally, we explain the similarity of the AdaBoost algorithm to Perelman's construction for the Ricci flow.
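For reference, the discrete AdaBoost update of Schapire and Freund, which the flow interpolates in continuous time, is sketched below; the weak hypothesis here is just a placeholder array of +/-1 predictions.

# One round of discrete AdaBoost (assumes 0 < weighted error < 1).
import numpy as np

def adaboost_round(weights, y_true, y_weak):
    """Return (alpha, updated weights) after one boosting round."""
    eps = np.sum(weights * (y_weak != y_true))     # weighted error
    alpha = 0.5 * np.log((1 - eps) / eps)          # hypothesis weight
    weights = weights * np.exp(-alpha * y_true * y_weak)
    return alpha, weights / weights.sum()          # renormalize

y = np.array([1, 1, -1, -1, 1])
h = np.array([1, -1, -1, -1, 1])                   # a weak hypothesis
w = np.ones(5) / 5
alpha, w = adaboost_round(w, y, h)                 # misweighted points gain mass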