A $O(\log m)$, deterministic, polynomial-time computable approximation of Lewis Carroll's scoring rule
Covey, Jason, Homan, Christopher
We provide deterministic, polynomial-time computable voting rules that approximate Dodgson's and (the "minimization version" of) Young's scoring rules to within a logarithmic factor. Our approximation of Dodgson's rule is tight up to a constant factor, as Dodgson's rule is NP-hard to approximate to within some logarithmic factor. The "maximization version" of Young's rule is known to be NP-hard to approximate by any constant factor. Both approximations are simple and natural as rules in their own right: given a candidate we wish to score, we can regard either its Dodgson or its Young score as the edit distance between a given set of voter preferences and one in which the candidate to be scored is the Condorcet winner. (The difference between the two scoring rules is the type of edits allowed.) We regard the marginal cost of a sequence of edits as the number of edits divided by the number of reductions the edits yield in the candidate's deficits against its opponents in their pairwise races. Over a series of rounds, our scoring rules greedily choose a sequence of edits that modifies exactly one voter's preferences and whose marginal cost is no greater than that of any other such single-vote-modifying sequence.
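As an illustration of the greedy scheme sketched above, here is a minimal Python version of the Dodgson-score approximation, assuming a profile is a list of rankings (most-preferred candidate first) and that the approximate score is the total number of adjacent swaps applied; the function names and data representation are ours, not the authors'.

```python
def deficits(profile, c):
    """For each opponent d, how many more voters must rank c above d
    before c beats d in their pairwise race (0 if c already beats d)."""
    n = len(profile)
    need = n // 2 + 1                     # strict pairwise majority
    out = {}
    for d in set(profile[0]) - {c}:
        support = sum(1 for r in profile if r.index(c) < r.index(d))
        out[d] = max(0, need - support)
    return out

def dodgson_score_approx(profile, c):
    """Greedy approximation: in each round, apply the single-voter block of
    adjacent swaps (moving c upward) with the smallest marginal cost, i.e.
    swaps performed per unit of pairwise deficit removed."""
    profile = [list(r) for r in profile]  # work on a copy
    total_swaps = 0
    while True:
        d = deficits(profile, c)
        if all(v == 0 for v in d.values()):
            return total_swaps            # c is now a Condorcet winner
        best = None                       # (marginal cost, voter index, swaps)
        for i, r in enumerate(profile):
            pos, gained = r.index(c), 0
            for k in range(1, pos + 1):   # move c up past k candidates
                if d.get(r[pos - k], 0) > 0:
                    gained += 1
                if gained:
                    cost = k / gained
                    if best is None or cost < best[0]:
                        best = (cost, i, k)
        _, i, k = best
        r = profile[i]
        pos = r.index(c)
        del r[pos]
        r.insert(pos - k, c)              # k adjacent swaps in one voter's ranking
        total_swaps += k
```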
On the underestimation of model uncertainty by Bayesian K-nearest neighbors
Su, Wanhua, Chipman, Hugh, Zhu, Mu
When using the K-nearest neighbors (KNN) method, one often ignores uncertainty in the choice of K. To account for such uncertainty, Holmes and Adams (2002) proposed a Bayesian framework for KNN. Their Bayesian KNN (BKNN) approach uses a pseudo-likelihood function and standard Markov chain Monte Carlo (MCMC) techniques to draw posterior samples. Holmes and Adams (2002) focused on the performance of BKNN in terms of misclassification error but did not assess its ability to quantify uncertainty. We present evidence that BKNN still significantly underestimates model uncertainty.
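To make the setup concrete, the following is a minimal Python sketch of the kind of pseudo-likelihood MCMC over K that the abstract refers to; it is a simplified illustration (uniform prior on K, fixed interaction strength beta, random-walk Metropolis), not Holmes and Adams' exact model.

```python
import numpy as np

def knn_pseudo_likelihood(X, y, k, beta=2.0):
    """Leave-one-out pseudo-likelihood of the labels under a KNN field:
    p(y_i | rest) is proportional to exp(beta * fraction of i's k nearest
    neighbours sharing label y_i)."""
    y = np.asarray(y)
    classes = np.unique(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude the point itself
    nbrs = np.argsort(d, axis=1)[:, :k]
    loglik = 0.0
    for i in range(len(y)):
        frac = np.array([(y[nbrs[i]] == c).mean() for c in classes])
        p = np.exp(beta * frac)
        p /= p.sum()
        loglik += np.log(p[classes == y[i]][0])
    return loglik

def sample_k_posterior(X, y, n_iter=2000, k_max=30, seed=0):
    """Random-walk Metropolis over K with a uniform prior on {1, ..., k_max}."""
    rng = np.random.default_rng(seed)
    k = int(rng.integers(1, k_max + 1))
    ll = knn_pseudo_likelihood(X, y, k)
    samples = []
    for _ in range(n_iter):
        k_new = int(np.clip(k + rng.choice([-1, 1]), 1, k_max))
        ll_new = knn_pseudo_likelihood(X, y, k_new)
        if np.log(rng.uniform()) < ll_new - ll:
            k, ll = k_new, ll_new
        samples.append(k)
    return samples                        # posterior draws of K
```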
On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms
Simoncini, David, Collard, Philippe, Verel, Sébastien, Clergue, Manuel
In this paper, we study the influence of selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms in which the population is embedded on a toroidal grid. This structure slows down the propagation of the best individual found so far and allows potentially good solutions to be kept in the population. We present two strategies that reduce the selective pressure in order to slow down the propagation of the best solution even further. We test these strategies on a hard optimization problem, the quadratic assignment problem, and show that for each strategy there is a value of the control parameter that gives the best performance. This optimal value cannot be explained by selective pressure alone, whether measured by takeover time or by the evolution of diversity. This study leads us to conclude that tools other than selective-pressure measures alone are needed to explain the performance of cellular genetic algorithms.
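For readers unfamiliar with the model, the sketch below shows the basic cellular GA update the abstract builds on: selection is restricted to a small neighbourhood on a toroidal grid, which is what slows the spread of the best individual. It is a generic Python illustration; the paper's two pressure-reducing strategies and its QAP operators are not reproduced here.

```python
def neighbours(i, j, w, h):
    """Von Neumann neighbourhood on a toroidal w x h grid (wrap-around)."""
    return [((i - 1) % w, j), ((i + 1) % w, j),
            (i, (j - 1) % h), (i, (j + 1) % h)]

def cellular_ga_step(grid, fitness, crossover, mutate):
    """One synchronous generation: each cell mates with its best neighbour,
    so good solutions can spread at most one cell per generation."""
    w, h = len(grid), len(grid[0])
    new = [[None] * h for _ in range(w)]
    for i in range(w):
        for j in range(h):
            mate = max((grid[x][y] for x, y in neighbours(i, j, w, h)), key=fitness)
            child = mutate(crossover(grid[i][j], mate))
            # keep the child only if it improves on the current cell
            new[i][j] = child if fitness(child) >= fitness(grid[i][j]) else grid[i][j]
    return new
```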
Agent-Based Perception of an Environment in an Emergency Situation
Kebair, Fahem, Serin, Frédéric, Bertelle, Cyrille
We are interested in the development of multiagent systems for risk detection and emergency response in an uncertain and partially perceived environment. The evaluation of the current situation proceeds in three stages inside the multiagent system: first, the situation is represented dynamically; second, the situation is characterised; finally, it is compared with other similar known situations. In this paper, we present an information model of an observed environment, which we have applied to the RoboCupRescue Simulation System. Information coming from the environment is formatted according to a taxonomy and described using semantic features. The latter are defined thanks to a fine-grained ontology of the domain and are managed by factual agents whose aim is to represent the current situation dynamically.
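Purely as an illustration of the kind of representation the abstract describes (taxonomy-typed information handled by factual agents), here is a small Python sketch; the class and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticFeature:
    """One piece of environment information, typed against a domain taxonomy."""
    category: str          # taxonomy entry, e.g. "fire" or "buried_civilian"
    attributes: dict       # semantic features, e.g. {"intensity": 3, "zone": "B2"}
    timestamp: float = 0.0

@dataclass
class FactualAgent:
    """Holds one semantic feature and keeps it up to date as new reports arrive."""
    feature: SemanticFeature
    history: list = field(default_factory=list)

    def update(self, attributes: dict, timestamp: float) -> None:
        self.history.append(self.feature.attributes.copy())
        self.feature.attributes.update(attributes)
        self.feature.timestamp = timestamp
```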
Symmetry Breaking for Maximum Satisfiability
Marques-Silva, Joao, Lynce, Ines, Manquinho, Vasco
Symmetries are intrinsic to many combinatorial problems, including Boolean Satisfiability (SAT) and Constraint Programming (CP). In SAT, the identification of symmetry breaking predicates (SBPs) is a well-known and often effective technique for solving hard problems. The identification of SBPs in SAT has seen significant improvements in recent years, resulting in more compact SBPs and more effective algorithms. The identification of SBPs has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry breaking can be an effective technique for PB constraints as well. This paper further extends the application of SBPs and shows that SBPs can be identified and used in Maximum Satisfiability (MaxSAT), as well as in its most well-known variants, including partial MaxSAT, weighted MaxSAT, and weighted partial MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and its variants are shown to be effective for a representative set of problem domains, making it possible to solve problem instances that current state-of-the-art MaxSAT solvers could not otherwise solve.
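As a concrete example of the technique, the sketch below shows one standard way to encode a variable symmetry as lex-leader symmetry-breaking clauses, which in a partial MaxSAT instance would be added as hard clauses. It illustrates the general idea only; it is not the authors' construction, and the function name is ours.

```python
def lex_leader_sbp(pairs, next_var):
    """Lex-leader symmetry-breaking clauses for one variable symmetry.

    pairs    : (a, b) variable pairs meaning the symmetry maps a -> b, listed
               in the fixed variable order used for the lexicographic comparison
    next_var : first unused variable index, used for fresh auxiliary variables
    Returns (clauses, next_var); clauses use DIMACS-style integer literals.
    """
    clauses, eqs = [], []
    for a, b in pairs:
        # if all earlier pairs are equal, require x_a <= x_b
        clauses.append([-e for e in eqs] + [-a, b])
        # fresh auxiliary e <-> (x_a <-> x_b), Tseitin-encoded
        e = next_var
        next_var += 1
        clauses += [[-e, -a, b], [-e, a, -b], [e, a, b], [e, -a, -b]]
        eqs.append(e)
    return clauses, next_var

# In a weighted partial MaxSAT (WCNF) instance, these clauses would be given
# the "top" weight, i.e. added as hard clauses, so they never compete with
# the soft clauses whose satisfaction is being maximized.
```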
Explicit Learning: an Effort towards Human Scheduling Algorithms
Scheduling problems are generally NP-hard combinatorial problems, and much research has been devoted to solving them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. By mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles arise when using GAs: there is no canonical mechanism for dealing with constraints, which are common in most real-world scheduling problems, and it is difficult to make small changes to a solution. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used on an encoded solution space and separate decoding routines then build solutions to the original problem.
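As a toy illustration of the indirect approach mentioned above (a GA that searches an encoded space while a decoder builds the actual schedule), here is a hedged Python sketch for a simple machine-assignment problem; it mirrors the encode/decode idea only, not the nurse or driver scheduling systems of [1] and [2].

```python
import random

def decode(order, durations, n_machines=2):
    """Greedy decoder: place tasks in chromosome order on the machine that
    frees up first; the GA searches permutations, never raw schedules."""
    loads = [0.0] * n_machines
    schedule = []
    for task in order:
        m = loads.index(min(loads))
        schedule.append((task, m, loads[m]))
        loads[m] += durations[task]
    return schedule, max(loads)           # makespan as fitness (smaller is better)

def indirect_ga(durations, pop_size=30, generations=200, seed=0):
    """Minimal mutation-only evolutionary loop over permutations of tasks."""
    rng = random.Random(seed)
    tasks = list(durations)
    pop = [rng.sample(tasks, len(tasks)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: decode(o, durations)[1])
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            i, j = rng.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation on the permutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda o: decode(o, durations)[1])
    return decode(best, durations)
```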
An Artificial Immune System as a Recommender System for Web Sites
Artificial Immune Systems have been used successfully to build recommender systems for film databases. In this research, an attempt is made to extend this idea to web site recommendation. A collection of more than 1000 individuals' web profiles (also called preferences, favourites, or bookmarks files) will be used. URLs will be classified using the DMOZ (Directory Mozilla) database of the Open Directory Project as our ontology, and these classifications, rather than the actual addresses, will serve as the data for the Artificial Immune System. The first attempt will use a simple classification code number coupled with the number of pages within that classification code. However, this implementation does not make use of the hierarchical tree-like structure of DMOZ. Consideration will then be given to the construction of a similarity measure for web profiles that makes use of this hierarchical information to build a better-informed Artificial Immune System.
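Since the abstract proposes building a hierarchy-aware similarity measure over DMOZ categories, the following Python sketch shows one plausible form such a measure could take (shared-prefix depth of category paths, weighted by page counts); it is our illustration, not the measure the authors construct.

```python
def path_similarity(a, b):
    """Similarity of two DMOZ category paths, e.g. 'Top/Computers/AI' vs
    'Top/Computers/Security': shared prefix length relative to the longer path."""
    pa, pb = a.split("/"), b.split("/")
    shared = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        shared += 1
    return shared / max(len(pa), len(pb))

def profile_similarity(profile_a, profile_b):
    """Profiles are dicts {dmoz_path: page_count}; average the best path match
    for each category in the smaller profile, weighted by its page count."""
    small, big = sorted((profile_a, profile_b), key=len)
    if not small or not big:
        return 0.0
    total = sum(small.values())
    score = sum(
        count * max(path_similarity(path, other) for other in big)
        for path, count in small.items()
    )
    return score / total if total else 0.0
```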
Application of Rough Set Theory to Analysis of Hydrocyclone Operation
Owladeghaffari, H., Ejtemaei, M., Irannajad, M.
This paper describes an application of rough set theory to the analysis of hydrocyclone operation. Using a Self Organizing Map (SOM) as a preprocessing step, the best crisp granules of the data are obtained. Then, using a combination of SOM and rough set theory (RST), called SORST, the dominant rules in the information table obtained from laboratory tests are extracted. Based on these rules, an approximate estimate of the decision attribute is produced. Finally, a brief comparison of this method with the SOM-NFIS system (SONFIS for short) is given.
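For readers unfamiliar with RST, the sketch below shows the lower and upper approximations that underlie the rule extraction mentioned above, in plain Python on a toy information table; the SOM preprocessing and the SORST combination itself are not reproduced.

```python
from collections import defaultdict

def approximations(table, condition_attrs, decision_attr, decision_value):
    """Rough-set lower/upper approximation of one decision class.

    table : list of dicts, one per object (e.g. one per laboratory test).
    Objects indiscernible on the condition attributes form one granule; the
    lower approximation yields certain rules, the upper one possible rules."""
    granules = defaultdict(list)
    for row in table:
        granules[tuple(row[a] for a in condition_attrs)].append(row)

    lower, upper = [], []
    for block in granules.values():
        hits = [r[decision_attr] == decision_value for r in block]
        if all(hits):
            lower.extend(block)   # granule certainly implies the decision value
        if any(hits):
            upper.extend(block)   # granule possibly implies the decision value
    return lower, upper
```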
Permeability Analysis based on information granulation theory
Sharifzadeh, M., Owladeghaffari, H., Shahriar, K., Bakhtavar, E.
This paper describes an application of information granulation theory to the analysis of "lugeon data". Using a combination of a Self Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained. The balancing of crisp granules and sub-fuzzy granules within the non-fuzzy information (initial granulation) is carried out in an open-close iteration. Using two criteria, "simplicity of rules" and a "suitable adaptive threshold error level", the stability of the algorithm is guaranteed. In the other part of the paper, rough set theory (RST) is employed for approximate analysis. The proposed methods are validated on a large data set of in-situ permeability measurements in rock masses at the Shivashan dam, Iran. The implementation of the proposed algorithm on the lugeon data set shows that the suggested method, which relates the approximate analysis to permeability, can be applied.