weighting vector


Modified EDAS Method Based on Cumulative Prospect Theory for Multiple Attributes Group Decision Making with Interval-valued Intuitionistic Fuzzy Information

Wang, Jing, Cai, Qiang, Wei, Guiwu, Liao, Ningna

arXiv.org Artificial Intelligence

Interval-valued intuitionistic fuzzy sets (IVIFSs), an extension of intuitionistic fuzzy sets, are attracting growing attention in research and application when combined with classical decision methods; comparative analysis shows that several classical methods have already been applied to practical problems with IVIFS information. In this paper, we extend the classical EDAS method under IVIFSs with cumulative prospect theory (CPT) to account for the decision makers' (DMs') psychological factors. Taking into consideration the fuzzy and uncertain character of IVIFSs together with the DMs' psychological preferences, an EDAS method based on CPT under IVIFSs (IVIF-CPT-EDAS) is built for MAGDM issues. Meanwhile, the information entropy method is used to evaluate the attribute weights. Finally, a numerical example on project selection in green technology venture capital is given, and comparison and sensitivity analyses are applied to illustrate the advantages of the IVIF-CPT-EDAS method and to demonstrate its effectiveness and stability.
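The abstract's extension to interval-valued intuitionistic fuzzy numbers and CPT is involved; as orientation, here is a minimal sketch of the classical crisp EDAS method the paper builds on. The score matrix and weights are made up for illustration, all criteria are treated as benefit criteria, and the CPT value/weighting functions are omitted:

```python
# Classical (crisp) EDAS: rank alternatives by their positive and negative
# distances from the average solution. Illustrative data only; assumes at
# least one alternative deviates from the average on some criterion.

def edas(matrix, weights):
    """matrix[i][j]: score of alternative i on benefit criterion j.
    Returns appraisal scores; higher is better."""
    m, n = len(matrix), len(matrix[0])
    avg = [sum(row[j] for row in matrix) / m for j in range(n)]  # average solution
    # Positive / negative distance from average, per cell
    pda = [[max(0.0, matrix[i][j] - avg[j]) / avg[j] for j in range(n)] for i in range(m)]
    nda = [[max(0.0, avg[j] - matrix[i][j]) / avg[j] for j in range(n)] for i in range(m)]
    sp = [sum(weights[j] * pda[i][j] for j in range(n)) for i in range(m)]
    sn = [sum(weights[j] * nda[i][j] for j in range(n)) for i in range(m)]
    nsp = [s / max(sp) for s in sp]            # normalize by the best SP
    nsn = [1.0 - s / max(sn) for s in sn]      # and invert the normalized SN
    return [(p + q) / 2.0 for p, q in zip(nsp, nsn)]

scores = edas([[5, 7], [9, 3], [6, 8]], [0.5, 0.5])
ranking = sorted(range(3), key=lambda i: -scores[i])
```

The IVIF-CPT variant replaces the crisp scores with interval-valued intuitionistic fuzzy numbers and passes the distances through CPT's value and probability-weighting functions before aggregation.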


Weighting vectors for machine learning: numerical harmonic analysis applied to boundary detection

Bunch, Eric, Kline, Jeffery, Dickinson, Daniel, Bhat, Suhaas, Fung, Glenn

arXiv.org Machine Learning

Metric space magnitude, an active field of research in algebraic topology, is a scalar quantity that summarizes the effective number of distinct points that live in a general metric space. The {\em weighting vector} is a closely related concept that captures, in a nontrivial way, much of the underlying geometry of the original metric space. Recent work has demonstrated that when the metric space is Euclidean, the weighting vector serves as an effective tool for boundary detection. We recast this result and show that the weighting vector may be viewed as a solution to a kernelized SVM. As one consequence, we apply this new insight to the task of outlier detection, demonstrating performance that is competitive with, or exceeds, that of state-of-the-art techniques on benchmark data sets. Under mild assumptions, we show that the weighting vector, which nominally carries the computational cost of a matrix inversion, can be efficiently approximated in linear time. We also show how nearest neighbor methods can approximate solutions to the minimization problems defined by SVMs.
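For orientation: the weighting vector of a finite metric space {x_1, ..., x_n} is the solution w of Zw = 1, where Z_ij = exp(-d(x_i, x_j)), and the magnitude is the sum of its entries. A minimal pure-Python sketch on toy one-dimensional points (not the paper's data or its fast approximation):

```python
import math

def weighting_vector(points):
    """Solve Z w = 1, where Z[i][j] = exp(-d(x_i, x_j)).
    The magnitude of the space is then sum(w)."""
    n = len(points)
    Z = [[math.exp(-math.dist(p, q)) for q in points] for p in points]
    # Augment with the all-ones right-hand side and run Gaussian elimination
    A = [row[:] + [1.0] for row in Z]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

pts = [(float(i),) for i in range(5)]  # five evenly spaced points on a line
w = weighting_vector(pts)
```

On this example the two endpoints receive larger weights than the interior points, a small instance of the boundary-detection effect the abstract describes.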


Bayesian preference elicitation for multiobjective combinatorial optimization

Bourdache, Nadjet, Perny, Patrice, Spanjaard, Olivier

arXiv.org Artificial Intelligence

We introduce a new incremental preference elicitation procedure able to deal with noisy responses of a Decision Maker (DM). The originality of the contribution is to propose a Bayesian approach for determining a preferred solution in a multiobjective decision problem involving a combinatorial set of alternatives. We assume that the preferences of the DM are represented by an aggregation function whose parameters are unknown and that the uncertainty about them is represented by a density function on the parameter space. Pairwise comparison queries are used to reduce this uncertainty (by Bayesian revision). The query selection strategy is based on the solution of a mixed integer linear program with a combinatorial set of variables and constraints, which requires the use of column and constraint generation methods. Numerical tests are provided to show the practicability of the approach.
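The paper's query selection relies on a MILP with column and constraint generation; the core Bayesian revision loop can nonetheless be sketched in a toy form. Everything below is illustrative, not the authors' procedure: a bi-objective weighted sum with one unknown weight, a grid-discretized density, random query selection, and a logistic model for the DM's noisy answers:

```python
import math, random

random.seed(0)
grid = [i / 100 for i in range(101)]           # discretized parameter space for w
density = [1.0 / len(grid)] * len(grid)        # uniform prior

def utility(w, alt):
    """Toy aggregation function: weighted sum of two objectives."""
    return w * alt[0] + (1 - w) * alt[1]

def answer(true_w, a, b, beta=10.0):
    """Noisy DM: prefers a over b with logistic probability."""
    p = 1 / (1 + math.exp(-beta * (utility(true_w, a) - utility(true_w, b))))
    return random.random() < p

def revise(density, a, b, prefers_a, beta=10.0):
    """Bayesian revision of the density after one pairwise comparison."""
    post = []
    for w, d in zip(grid, density):
        p = 1 / (1 + math.exp(-beta * (utility(w, a) - utility(w, b))))
        post.append(d * (p if prefers_a else 1 - p))
    s = sum(post)
    return [d / s for d in post]

true_w = 0.7
alternatives = [(random.random(), random.random()) for _ in range(30)]
for _ in range(25):
    a, b = random.sample(alternatives, 2)      # naive stand-in for MILP-based query selection
    density = revise(density, a, b, answer(true_w, a, b))

estimate = sum(w * d for w, d in zip(grid, density))  # posterior mean of the weight
```

Each answered query multiplies the density by the response likelihood and renormalizes, so the posterior concentrates around parameter values consistent with the DM's answers.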


Practical applications of metric space magnitude and weighting vectors

Bunch, Eric, Dickinson, Daniel, Kline, Jeffery, Fung, Glenn

arXiv.org Machine Learning

Metric space magnitude, an active subject of research in algebraic topology, originally arose in the context of biology, where it was used to represent the effective number of distinct species in an environment. In a more general setting, the magnitude of a metric space is a real number that aims to quantify the effective number of distinct points in the space. The contribution of each point to a metric space's global magnitude, which is encoded by the {\em weighting vector}, captures much of the underlying geometry of the original metric space. Surprisingly, when the metric space is Euclidean, the weighting vector also serves as an effective tool for boundary detection. This allows the weighting vector to serve as the foundation of novel algorithms for classic machine learning tasks such as classification, outlier detection and active learning. We demonstrate, using experiments and comparisons on classic benchmark datasets, the promise of the proposed magnitude and weighting vector-based approaches.


Machine learning, meet quantum computing

#artificialintelligence

Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory. Rosenblatt called his device a perceptron, and the New York Times reported that it was "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence." Those claims turned out to be somewhat overblown. But the device kick-started a field of research that still has huge potential today. A perceptron is a single-layer neural network.
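In code terms, a perceptron is a single layer of weights plus a bias, nudged only when a prediction is wrong. A minimal sketch on a toy task (the logical AND function, chosen here for illustration and not taken from the article):

```python
# Rosenblatt-style perceptron learning rule on linearly separable toy data.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred            # 0 when correct: no update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

On linearly separable data like this, the perceptron convergence theorem guarantees the updates stop after finitely many mistakes; its famous limitation is that no single-layer perceptron can represent XOR.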


Elliptical Distributions-Based Weights-Determining Method for OWA Operators

Sha, Xiuyan, Xu, Zeshui, Yin, Chuancun

arXiv.org Artificial Intelligence

The ordered weighted averaging (OWA) operators play a crucial role in aggregating multiple criteria evaluations into an overall assessment supporting the decision makers' choice. A key step is to determine the associated weights. In this paper, we first briefly review some main methods for determining the weights by using distribution functions. Then we propose a new approach for determining OWA weights by using the RIM quantifier. Motivated by the idea of the normal distribution-based method for determining OWA weights, we develop a method based on elliptical distributions for determining the OWA weights and investigate some of its desirable properties.
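As background for the constructions the abstract reviews: given a regular increasing monotone (RIM) quantifier Q on [0, 1], a standard recipe sets w_i = Q(i/n) - Q((i-1)/n), and the OWA operator applies the weights to the inputs after sorting them in descending order. A sketch with the illustrative quantifier Q(r) = r^2 (made-up inputs; the paper's elliptical-distribution weights are not shown):

```python
# OWA weights from a RIM quantifier, plus the OWA aggregation itself.

def owa_weights(Q, n):
    """w_i = Q(i/n) - Q((i-1)/n); sums to 1 when Q(0) = 0 and Q(1) = 1."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    ordered = sorted(values, reverse=True)   # OWA reorders before weighting
    return sum(w * v for w, v in zip(weights, ordered))

w = owa_weights(lambda r: r ** 2, 4)         # -> [0.0625, 0.1875, 0.3125, 0.4375]
score = owa([0.9, 0.4, 0.7, 0.2], w)
```

Because Q(r) = r^2 puts more mass on later (smaller) order statistics, this particular quantifier yields a pessimistic, "and-like" aggregation; a concave Q would do the opposite.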


A Deep Learning Approach with an Attention Mechanism for Automatic Sleep Stage Classification

Längkvist, Martin, Loutfi, Amy

arXiv.org Machine Learning

Automatic sleep staging is a challenging problem, and state-of-the-art algorithms have not yet reached satisfactory performance to be used instead of manual scoring by a sleep technician. Much research has been done to find good feature representations that extract the useful information to correctly classify each epoch into the correct sleep stage. While many useful features have been discovered, the number of features has grown to an extent that a feature reduction step is necessary in order to avoid the curse of dimensionality. One reason such a large feature set is needed is that many features are good for discriminating only one of the sleep stages and are less informative during other stages. This paper explores how a second feature representation over a large set of pre-defined features can be learned using an auto-encoder with a selective attention for the current sleep stage in the training batch. This selective attention allows the model to learn feature representations that focus on the more relevant inputs without having to perform any dimensionality reduction of the input data. The performance of the proposed algorithm is evaluated on a large data set of polysomnography (PSG) night recordings of patients with sleep-disordered breathing. The performance of the auto-encoder with selective attention is compared with a regular auto-encoder and previous works using a deep belief network (DBN).


Simplifying the minimax disparity model for determining OWA weights in large-scale problems

Nguyen, Thuy Hong

arXiv.org Artificial Intelligence

In the context of multicriteria decision making, the ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment supporting the decision makers' choice. Determining OWA weights, therefore, is an essential part of this process. Available methods for determining OWA weights, however, often require heavy computational loads in real-life large-scale optimization problems. In this paper, we propose a new approach to simplify the well-known minimax disparity model for determining OWA weights. For this purpose, we use the binomial decomposition framework, in which natural constraints can be imposed on the level of complexity of the weight distribution. The original problem of determining OWA weights is thereby transformed into a smaller-scale optimization problem, formulated in terms of the coefficients in the binomial decomposition. Our preliminary results show that a small set of these coefficients can encode an appropriate full-dimensional set of OWA weights.


Ensemble Feature Weighting Based on Local Learning and Diversity

Li, Yun (Nanjing University of Posts and Telecommunications) | Gao, Suyan (Nanjing University of Posts and Telecommunications) | Chen, Songcan (Nanjing University of Aeronautics and Astronautics)

AAAI Conferences

Recently, besides the performance, the stability (robustness, i.e., the variation in feature selection results due to small changes in the data set) of feature selection has received more attention. Ensemble feature selection, where multiple feature selection outputs are combined to yield more robust results without sacrificing the performance, is an effective method for stable feature selection. In order to make further improvements of the performance (classification accuracy), the diversity regularized ensemble feature weighting framework is presented, in which the base feature selector is based on local learning with logistic loss for its robustness to huge numbers of irrelevant features and small samples. At the same time, the sample complexity of the proposed ensemble feature weighting algorithm is analyzed based on VC-theory. The experiments on different kinds of data sets show that the proposed ensemble method can achieve higher accuracy than other ensemble methods and other stable feature selection strategies (such as sample weighting) without sacrificing stability.
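The paper's base selector uses local learning with logistic loss; as a much simpler stand-in, the sketch below ensembles a correlation-style feature scorer over bootstrap samples to show the combine-many-weightings idea. The data and the base scorer are made up for illustration and do not reproduce the authors' method or its diversity regularization:

```python
import random

def base_feature_weights(X, y):
    """Toy base learner: score each feature by |cov(feature, label)| / var(feature),
    then normalize the scores to sum to one."""
    n, d = len(X), len(X[0])
    weights = []
    for j in range(d):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((col[i] - mx) * (y[i] - my) for i in range(n))
        varx = sum((c - mx) ** 2 for c in col) or 1e-12  # guard a constant column
        weights.append(abs(cov) / varx)
    s = sum(weights) or 1.0
    return [w / s for w in weights]

def ensemble_feature_weights(X, y, n_learners=20, seed=0):
    """Average the base weights over bootstrap resamples for stability."""
    rng = random.Random(seed)
    acc = [0.0] * len(X[0])
    for _ in range(n_learners):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        w = base_feature_weights([X[i] for i in idx], [y[i] for i in idx])
        acc = [a + wi for a, wi in zip(acc, w)]
    return [a / n_learners for a in acc]

data_rng = random.Random(1)
X = [[i / 10.0, data_rng.random()] for i in range(20)]  # feature 0 informative, feature 1 noise
y = [0] * 10 + [1] * 10
weights = ensemble_feature_weights(X, y)
```

Averaging over resamples damps the variation any single bootstrap sample induces, which is exactly the stability property the abstract is after; the paper additionally regularizes the base learners toward diverse outputs before combining them.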