A hybrid model for bankruptcy prediction using genetic algorithm, fuzzy c-means and MARS
Martin, A., Gayathri, V., Saranya, G., Gayathri, P., Venkatesan, Prasanna
Bankruptcy prediction is very important for all organizations, since bankruptcy affects the economy and raises many social problems with high costs. A large number of techniques have been developed to predict bankruptcy, and they help decision makers such as investors and financial analysts. One such model is the hybrid model using Fuzzy C-means clustering and MARS, which predicts from static ratios taken from bank financial statements and has its own theoretical advantages. The performance of this existing bankruptcy model can be improved by selecting the best features dynamically, depending on the nature of the firm. Such dynamic selection can be accomplished by a Genetic Algorithm, and it improves the performance of the prediction model.
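The pipeline this abstract describes can be made concrete with a small sketch. The following Python fragment is a minimal illustration, not the authors' implementation: a binary genetic algorithm searches over subsets of financial ratios, scoring each subset by the fuzzy c-means clustering objective computed on the selected columns. The MARS regression stage is omitted, and all parameter values and the random stand-in data are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means; returns the final objective (lower is better)."""
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return float((U ** m * d ** 2).sum())

def fitness(mask, X):
    """Score a feature subset: negated clustering objective on those columns."""
    if mask.sum() == 0:
        return -np.inf                                # forbid the empty subset
    return -fuzzy_cmeans(X[:, mask.astype(bool)])

def ga_select(X, pop=20, gens=30, pmut=0.1):
    """Binary GA: each chromosome marks a subset of the financial ratios."""
    d = X.shape[1]
    P = rng.integers(0, 2, (pop, d))
    for _ in range(gens):
        scores = np.array([fitness(ind, X) for ind in P])
        P = P[np.argsort(-scores)]                    # best chromosomes first
        kids = []
        for _ in range(pop - pop // 2):
            a, b = P[rng.integers(0, pop // 2, 2)]    # parents from top half
            cut = int(rng.integers(1, d))             # one-point crossover
            kid = np.concatenate([a[:cut], b[cut:]])
            kid = kid ^ (rng.random(d) < pmut)        # bit-flip mutation
            kids.append(kid)
        P[pop // 2:] = kids
    return P[0]

X = rng.random((100, 8))                              # stand-in financial ratios
print("selected feature indices:", np.flatnonzero(ga_select(X)))
```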
Back and Forth Between Rules and SE-Models (Extended Version)
Rules in logic programming encode information about mutual interdependencies between literals that is not captured by any of the commonly used semantics. This information becomes essential as soon as a program needs to be modified or further manipulated. We argue that, in these cases, a program should not be viewed solely as the set of its models. Instead, it should be viewed and manipulated as the set of sets of models of each rule inside it. With this in mind, we investigate and highlight relations between the SE-model semantics and individual rules. We identify a set of representatives of rule equivalence classes induced by SE-models, and so pinpoint the exact expressivity of this semantics with respect to a single rule. We also characterise the class of sets of SE-interpretations representable by a single rule. Finally, we discuss the introduction of two notions of equivalence, both stronger than strong equivalence [1] and weaker than strong update equivalence [2], which seem more suitable whenever the dependency information found in rules is of interest.
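To make the semantics concrete, here is a minimal brute-force sketch (an illustration only, not the paper's machinery) of enumerating the SE-models of a single rule head <- pos, not neg: a pair (X, Y) with X a subset of Y is an SE-model when Y satisfies the rule classically and X satisfies its reduct with respect to Y. The atom names and the enumeration strategy are assumptions of the example.

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

def satisfies(I, head, pos, neg):
    """Classical satisfaction of the rule head <- pos, not neg by I."""
    if pos <= I and not (neg & I):
        return bool(head & I)     # body holds, so some head atom must hold
    return True                   # body fails, rule is vacuously satisfied

def se_models(atoms, head, pos, neg):
    """Brute-force enumeration of SE-models (X, Y) with X a subset of Y."""
    models = []
    for Y in powerset(atoms):
        if not satisfies(Y, head, pos, neg):
            continue              # Y must be a classical model of the rule
        for X in powerset(Y):
            # Reduct w.r.t. Y: the rule disappears if neg meets Y;
            # otherwise it becomes head <- pos, with no negative part.
            if (neg & Y) or satisfies(X, head, pos, frozenset()):
                models.append((set(X), set(Y)))
    return models

# Rule: p <- q, not r, over the atoms {p, q, r}.
for X, Y in se_models({"p", "q", "r"}, head={"p"}, pos={"q"}, neg={"r"}):
    print(sorted(X), "|", sorted(Y))
```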
Decision Making Agent Searching for Markov Models in Near-Deterministic World
Reinforcement learning has solid foundations, but becomes inefficient in partially observed (non-Markovian) environments. Thus, a learning agent (born with a representation and a policy) might wish to investigate to what extent the Markov property holds. We propose a learning architecture that (i) utilizes combinatorial policy optimization to overcome non-Markovity and to develop efficient behaviors that are easy to inherit, (ii) tests the Markov property of the behavioral states, and (iii) corrects against non-Markovity by running a deterministic factored Finite State Model, which can be learned. We illustrate the properties of the architecture in the near-deterministic Ms. Pac-Man game. We analyze the architecture from the point of view of evolutionary, individual, and social learning.
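As a rough illustration of what "testing the Markov property of the behavioral states" can mean, the sketch below compares next-state statistics conditioned on the current state alone against statistics conditioned on (previous, current) pairs; a large discrepancy flags the state as non-Markovian. The L1 statistic, the toy trajectory, and all names are assumptions of the example, not the paper's procedure.

```python
from collections import Counter, defaultdict

def markov_score(trajectory):
    """trajectory: list of discrete states. Returns per-state scores;
    large values suggest the state is not Markovian."""
    cond1 = defaultdict(Counter)   # next-state counts given s
    cond2 = defaultdict(Counter)   # next-state counts given (prev, s)
    for i in range(1, len(trajectory) - 1):
        prev, s, nxt = trajectory[i - 1], trajectory[i], trajectory[i + 1]
        cond1[s][nxt] += 1
        cond2[(prev, s)][nxt] += 1
    scores = {}
    for (prev, s), c2 in cond2.items():
        c1 = cond1[s]
        n1, n2 = sum(c1.values()), sum(c2.values())
        # L1 distance between the two conditional distributions
        dist = sum(abs(c1[x] / n1 - c2[x] / n2) for x in set(c1) | set(c2))
        scores[s] = max(scores.get(s, 0.0), dist)
    return scores

# Toy run: what follows 'b' depends on what preceded it, so 'b' scores high.
traj = ['a', 'b', 'c', 'b', 'a', 'b', 'c', 'b', 'a', 'b', 'c']
print(markov_score(traj))
```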
Practical inventory routing: A problem definition and an optimization method
Geiger, Martin Josef, Sevaux, Marc
The global objective of this work is to provide practical optimization methods to companies involved in inventory routing problems, taking this new type of data into account. Moreover, companies are sometimes unable to cope with plans that change every period and would prefer regular structures for serving customers.
Neyman-Pearson classification, convexity and stochastic constraints
Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that simultaneously satisfies the following two properties with high probability: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by solving an optimization problem with an empirical objective and an empirical constraint. New techniques to handle such problems are developed and have consequences for chance constrained programming.
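A crude sketch of the Neyman-Pearson recipe follows (illustrative only; the paper solves a convex programme, not a grid search): among convex combinations of base classifier scores, pick the weights and threshold whose empirical type I error stays below alpha while the empirical type II error is minimized. The synthetic scores, alpha, and grid resolution are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def simplex_grid(k, n):
    """Yield weight vectors on the probability simplex (coarse grid)."""
    if k == 1:
        yield np.array([1.0])
        return
    for i in range(n):
        a = i / (n - 1)
        for rest in simplex_grid(k - 1, n):
            yield np.concatenate([[a], (1 - a) * rest])

def np_combine(scores0, scores1, alpha=0.05, grid=51):
    """scores0/scores1: (samples, classifiers) scores on class 0 / class 1.
    Returns ((weights, threshold), empirical type II error)."""
    best = (None, np.inf)
    for w in simplex_grid(scores0.shape[1], grid):
        s0, s1 = scores0 @ w, scores1 @ w
        t = np.quantile(s0, 1 - alpha)      # empirical type I error <= alpha
        type2 = np.mean(s1 <= t)            # fraction of missed detections
        if type2 < best[1]:
            best = ((w, t), type2)
    return best

# Toy data: two weak scorers separate class 1 from class 0 by a unit shift.
scores0 = rng.normal(0.0, 1.0, (500, 2))
scores1 = rng.normal(1.0, 1.0, (500, 2))
(w, t), beta = np_combine(scores0, scores1)
print("weights:", np.round(w, 2), "threshold:", round(float(t), 3),
      "type II error:", beta)
```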
Reduction of fuzzy automata by means of fuzzy quasi-orders
Stamenković, Aleksandar, Ćirić, Miroslav, Ignjatović, Jelena
In our recent paper we established close relationships between the state reduction of a fuzzy recognizer and the resolution of a particular system of fuzzy relation equations. In that paper we also studied reductions by means of those solutions which are fuzzy equivalences. In this paper we show that in some cases better reductions can be obtained using the solutions of this system that are fuzzy quasi-orders. In general, fuzzy quasi-orders and fuzzy equivalences are equally good for state reduction, but we show that right and left invariant fuzzy quasi-orders give better reductions than right and left invariant fuzzy equivalences. We also show that alternate reductions by means of fuzzy quasi-orders give better results than alternate reductions by means of fuzzy equivalences. Furthermore, we study a more general type of fuzzy quasi-orders, the weakly right and left invariant ones, and we show that they are closely related to the determinization of fuzzy recognizers. We also demonstrate some applications of weakly left invariant fuzzy quasi-orders in conflict analysis of fuzzy discrete event systems.
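For intuition, the sketch below (a toy illustration under max-min semantics, not the paper's construction of the greatest invariant quasi-orders) reduces a fuzzy automaton by a given fuzzy quasi-order Q: states with identical aftersets (rows of Q) are merged, and the transition, initial, and final fuzzy sets are composed with Q on both sides. The example Q, automaton, and alphabet are assumptions.

```python
import numpy as np

def maxmin(A, B):
    """Max-min composition: result[i, j] = max_k min(A[i, k], B[k, j])."""
    return np.maximum.reduce(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def reduce_automaton(sigma, delta, tau, Q):
    """sigma: (1, n) initial fuzzy set; delta: dict letter -> (n, n) fuzzy
    transition matrix; tau: (n, 1) final fuzzy set; Q: (n, n) fuzzy
    quasi-order. States with equal rows of Q (equal aftersets) collapse."""
    _, idx = np.unique(Q, axis=0, return_index=True)
    keep = np.sort(idx)                     # one representative per afterset
    new_delta = {x: maxmin(maxmin(Q, d), Q)[np.ix_(keep, keep)]
                 for x, d in delta.items()}
    return maxmin(sigma, Q)[:, keep], new_delta, maxmin(Q, tau)[keep, :]

# Toy automaton: Q is a (crisp) quasi-order identifying states 1 and 2.
Q = np.array([[1, 0, 0],
              [0, 1, 1],
              [0, 1, 1]], dtype=float)
sigma = np.array([[1.0, 0.5, 0.5]])
tau = np.array([[0.0], [1.0], [1.0]])
delta = {"a": np.array([[0.0, 0.8, 0.8],
                        [0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0]])}
s2, d2, t2 = reduce_automaton(sigma, delta, tau, Q)
print(s2)        # reduced initial fuzzy set (2 states)
print(d2["a"])   # reduced transition matrix
print(t2)        # reduced final fuzzy set
```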
Universal Higher Order Grammar
We examine the class of languages that can be defined entirely in terms of provability in an extension of the sorted type theory (Ty_n) obtained by embedding the logic of phonologies, without introducing special types for syntactic entities. This class is proven to coincide precisely with the class of logically closed languages, which may be thought of as functions from expressions to sets of logically equivalent Ty_n terms. For a specific subclass of logically closed languages that are described by finite sets of rules or rule schemata, we find effective procedures for building a compact Ty_n representation involving a finite number of axioms or axiom schemata. The proposed formalism has some useful features that are unavailable in a two-component architecture of a language model. A further specialization and extension of the formalism with a context type enables an effective account of intensional and dynamic semantics.
Randomized algorithms for statistical image analysis and site percolation on square lattices
Langovoy, Mikhail A., Wittich, Olaf
We propose a novel probabilistic method for the detection of objects in noisy images. The method uses results from percolation and random graph theories. We present an algorithm that detects objects of unknown shapes in the presence of random noise. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. We prove results on the consistency and algorithmic complexity of our procedure.
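A minimal sketch of the percolation idea (threshold values and the toy image are illustrative assumptions, not the paper's parameters): threshold the noisy image so that pure-noise pixels stay sparse, then flag unusually large connected clusters on the square lattice as detected objects.

```python
import numpy as np
from collections import deque

def detect(img, pix_thresh, min_cluster):
    """Return the clusters (lists of pixels) larger than min_cluster."""
    occ = img > pix_thresh                        # occupied lattice sites
    seen = np.zeros_like(occ, dtype=bool)
    H, W = occ.shape
    out = []
    for i in range(H):
        for j in range(W):
            if not occ[i, j] or seen[i, j]:
                continue
            q, comp = deque([(i, j)]), []
            seen[i, j] = True
            while q:                              # BFS on the 4-neighbour lattice
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and occ[ny, nx] \
                            and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(comp) >= min_cluster:          # noise clusters stay small
                out.append(comp)
    return out

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (64, 64))
img[20:30, 20:30] += 3.0                          # hidden square object
print([len(c) for c in detect(img, pix_thresh=1.5, min_cluster=20)])
```

The threshold is chosen so that the noise-only occupation probability stays below the critical density of site percolation on the square lattice (roughly 0.593), which is what keeps pure-noise clusters small while the object forms one large cluster.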
An Artificial Immune System Model for Multi-Agents Resource Sharing in Distributed Environments
Chingtham, Tejbanta Singh, Sahoo, G., Ghose, M. K.
The natural immune system plays a vital role in the survival of all living beings. It provides a mechanism for an organism to defend itself against external invaders, making it a consistent system capable of adapting for survival when conditions change. The human immune system has motivated scientists and engineers to search for powerful information-processing algorithms that can solve complex engineering tasks. This paper explores one of the various possibilities for solving problems in a multi-agent scenario wherein multiple robots are deployed to achieve a goal collectively. The final goal depends on the performance of each individual robot and on its survival without letting its energy fall below a predetermined threshold, which is achieved by deploying an evolutionary computational technique, the artificial immune system, that imitates the biological immune system.
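As a loose illustration of the immune-system metaphor (entirely an assumption of this note, not the paper's model), the sketch below runs a clonal-selection loop: candidate resource-transfer plans play the role of antibodies, affinity measures how well every robot stays above the energy threshold, and the best plans are cloned and mutated.

```python
import numpy as np

rng = np.random.default_rng(3)
energy = np.array([9.0, 2.0, 7.0, 1.0])     # current robot energy levels
THRESH = 3.0                                 # survival threshold

def affinity(plan):
    """plan: net energy transfer per robot. Higher affinity means fewer
    robots left below the threshold after sharing."""
    after = energy + plan - plan.mean()      # recentring conserves energy
    return -np.sum(np.maximum(THRESH - after, 0.0))

def clonal_selection(pop=30, gens=60, clones=5, sigma=0.5):
    """CLONALG-style loop: keep the elite plans, clone and mutate them."""
    P = rng.normal(0.0, 1.0, (pop, len(energy)))
    for _ in range(gens):
        scores = np.array([affinity(p) for p in P])
        elite = P[np.argsort(-scores)[:pop // clones]]
        P = np.repeat(elite, clones, axis=0)     # clone each elite plan
        P += rng.normal(0.0, sigma, P.shape)     # mutate the clones
    return max(P, key=affinity)

plan = clonal_selection()
print("transfers:", np.round(plan - plan.mean(), 2))
print("energies after sharing:", np.round(energy + plan - plan.mean(), 2))
```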
New Worst-Case Upper Bound for #XSAT
An algorithm running in O(1.1995^n) time is presented for counting models of exact satisfiability formulae (#XSAT). This is faster than the previously best algorithm, which runs in O(1.2190^n). In order to improve the efficiency of the algorithm, a new principle, the common literals principle, is introduced to simplify formulae. This allows us to eliminate more common literals. In addition, we are the first to inject resolution principles into solving the #XSAT problem, which further improves the efficiency of the algorithm.
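For reference, a brute-force #XSAT counter is easy to state (it runs in O(2^n), unlike the paper's O(1.1995^n) branching algorithm, and implements none of the simplification principles): count the assignments under which every clause has exactly one true literal. The DIMACS-style literal encoding is an assumption of the example.

```python
from itertools import product

def count_xsat(clauses, n_vars):
    """clauses: lists of nonzero ints, v or -v (DIMACS-style literals).
    Counts assignments with exactly one true literal per clause."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        ok = all(sum((lit > 0) == bits[abs(lit) - 1] for lit in cl) == 1
                 for cl in clauses)
        count += ok
    return count

# (x1 v x2 v x3) under exactly-one semantics: 3 models over 3 variables.
print(count_xsat([[1, 2, 3]], 3))
```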