Finding Still Lifes with Memetic/Exact Hybrid Algorithms

arXiv.org Artificial Intelligence

The maximum density still life problem (MDSLP) is a hard constraint optimization problem based on Conway's Game of Life. It is a prime example of a weighted constraint optimization problem and has recently been tackled in the constraint-programming community. Bucket elimination (BE) is a complete technique commonly used to solve this kind of problem. When the memory required to apply BE is too high, a heuristic method based on it (known as mini-buckets) can be used to compute bounds on the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical for large instances. In response to this situation, we present a memetic algorithm for the MDSLP in which BE is used as the recombination mechanism, providing the best possible child from the parental set. We then study a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets. Extensive experimental results analyze the performance of these models and of multi-parent recombination. The resulting algorithm consistently finds optimal patterns, in less time than current approaches, for all instances solved to date. Moreover, it provides new best-known solutions for very large instances.
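To make the idea concrete, the sketch below shows a memetic step built around an "optimal recombination" operator that returns the best child obtainable from the parents' genes. The paper realizes this operator efficiently with bucket elimination; here a brute-force enumeration stands in for BE, and the fitness function is a made-up placeholder rather than the still-life density objective.

```python
import random
from itertools import product

def fitness(bits):
    # Hypothetical placeholder fitness: reward alternating patterns.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def optimal_recombination(parents):
    """Best child restricted to parental alleles (brute force, not BE)."""
    n = len(parents[0])
    alleles = [sorted({p[i] for p in parents}) for i in range(n)]
    return max(product(*alleles), key=fitness)

def memetic_step(population, n_parents=2):
    parents = random.sample(population, n_parents)
    child = list(optimal_recombination(parents))
    # A full memetic algorithm would also apply local search to `child`.
    worst = min(range(len(population)), key=lambda i: fitness(population[i]))
    if fitness(child) > fitness(population[worst]):
        population[worst] = child

population = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
for _ in range(100):
    memetic_step(population)
print(max(map(fitness, population)))
```

Note that the brute-force operator is exponential in the chromosome length; the point of using BE, as the abstract explains, is to compute the same optimal child without exhaustive enumeration.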


A Growing Self-Organizing Network for Reconstructing Curves and Surfaces

arXiv.org Artificial Intelligence

Self-organizing networks such as Neural Gas, Growing Neural Gas and many others have been adopted in real applications for both dimensionality reduction and manifold learning. Typically, in these applications, the structure of the adapted network yields a good estimate of the topology of the unknown subspace from which the input data points are sampled. The approach presented here takes a different perspective, namely by assuming that the input space is a manifold of known dimension. In return, the new type of growing self-organizing network presented gains the ability to adapt itself in a way that may guarantee the effective and stable recovery of the exact topological structure of the input manifold.
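For background, the following is a minimal Growing-Neural-Gas-style adaptation loop in the spirit of the networks cited above. It is not the paper's new network, which additionally exploits a known manifold dimension; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
units = [rng.random(2), rng.random(2)]       # unit positions in input space
error = [0.0, 0.0]                           # accumulated error per unit
edges = {}                                   # (i, j) with i < j -> edge age
eps_w, eps_n, max_age, lam, alpha = 0.05, 0.006, 50, 100, 0.5

def neighbors(q):
    return [j if i == q else i for (i, j) in edges if q in (i, j)]

def adapt(x, t):
    d = [np.linalg.norm(x - w) for w in units]
    s1, s2 = (int(k) for k in np.argsort(d)[:2])   # two nearest units
    error[s1] += d[s1] ** 2
    edges[tuple(sorted((s1, s2)))] = 0       # competitive Hebbian edge
    units[s1] += eps_w * (x - units[s1])     # move winner toward x
    for e in list(edges):
        if s1 in e:
            edges[e] += 1                    # age edges incident to winner
            other = e[1] if e[0] == s1 else e[0]
            units[other] += eps_n * (x - units[other])
        if edges[e] > max_age:
            del edges[e]                     # prune stale edges
    for i in range(len(error)):              # slowly forget old errors
        error[i] *= 0.995
    if t % lam == 0:                         # growth step
        q = max(range(len(units)), key=lambda i: error[i])
        nb = neighbors(q)
        if nb:
            f = max(nb, key=lambda j: error[j])
            units.append((units[q] + units[f]) / 2)   # insert between q, f
            error[q] *= alpha; error[f] *= alpha
            error.append(error[q])
            r = len(units) - 1
            edges.pop(tuple(sorted((q, f))), None)
            edges[tuple(sorted((q, r)))] = 0
            edges[tuple(sorted((f, r)))] = 0

# Sample points from a circle, a 1-D manifold embedded in the plane.
for t in range(1, 5001):
    theta = rng.uniform(0, 2 * np.pi)
    adapt(0.5 + 0.4 * np.array([np.cos(theta), np.sin(theta)]), t)
print(len(units), "units,", len(edges), "edges")
```

After adaptation, the edge set approximates the topology of the circle; the paper's contribution is a growth scheme for which this kind of recovery can be guaranteed when the manifold dimension is known.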


A New Method for Knowledge Representation in Expert Systems (XMLKR)

arXiv.org Artificial Intelligence

Knowledge representation is an essential part of an expert system, because it provides the framework on which the system is built: once a representation is established, it can be used to model the domain and to design the expert system. Many methods exist for knowledge representation, but each has its own problems. In this paper we introduce XMLKR, a new object-oriented knowledge representation method based on the XML language, and we discuss the advantages and disadvantages of this method.
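As a rough illustration of the idea (the paper's own XML schema is not reproduced here, so every tag and attribute name below is hypothetical), an object-oriented knowledge base with a class, an instance, and a rule might be serialized as follows:

```python
import xml.etree.ElementTree as ET

# Hypothetical object-oriented knowledge encoding in XML: a class with an
# attribute, an object instantiating it, and a rule referring to the object.
kb = ET.Element("knowledge-base")
cls = ET.SubElement(kb, "class", name="Disease")
ET.SubElement(cls, "attribute", name="symptom", type="string")
obj = ET.SubElement(kb, "object", of="Disease", name="Flu")
ET.SubElement(obj, "value", attribute="symptom").text = "fever"
rule = ET.SubElement(kb, "rule", name="r1")
ET.SubElement(rule, "if").text = "Flu.symptom == 'fever'"
ET.SubElement(rule, "then").text = "suspect(Flu)"

print(ET.tostring(kb, encoding="unicode"))
```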


Efficient Exact Inference in Planar Ising Models

arXiv.org Machine Learning

We give polynomial-time algorithms for the exact computation of lowest-energy (ground) states, worst margin violators, log partition functions, and marginal edge probabilities in certain binary undirected graphical models. Our approach provides an interesting alternative to the well-known graph cut paradigm in that it does not impose any submodularity constraints; instead we require planarity to establish a correspondence with perfect matchings (dimer coverings) in an expanded dual graph. We implement a unified framework while delegating complex but well-understood subproblems (planar embedding, maximum-weight perfect matching) to established algorithms for which efficient implementations are freely available. Unlike graph cut methods, we can perform penalized maximum-likelihood as well as maximum-margin parameter estimation in the associated conditional random fields (CRFs), and employ marginal posterior probabilities as well as maximum a posteriori (MAP) states for prediction. Maximum-margin CRF parameter estimation on image denoising and segmentation problems shows our approach to be efficient and effective. A C++ implementation is available from http://nic.schraudolph.org/isinf/
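As a point of reference, the quantities listed above (ground state, log partition function, edge marginals) can be computed by brute force on a toy planar model; the sketch below only defines those quantities and does not reproduce the paper's polynomial-time matching-based algorithm. The 2x2 grid and its couplings are illustrative.

```python
import itertools, math

nodes = [(0, 0), (0, 1), (1, 0), (1, 1)]
edges = {((0, 0), (0, 1)): 1.0, ((0, 0), (1, 0)): -0.5,
         ((0, 1), (1, 1)): 0.8, ((1, 0), (1, 1)): 1.2}

def energy(s):  # s maps node -> spin in {-1, +1}; no external field here
    return -sum(J * s[u] * s[v] for (u, v), J in edges.items())

states = [dict(zip(nodes, c)) for c in itertools.product([-1, 1], repeat=4)]
ground = min(states, key=energy)                       # lowest-energy state
logZ = math.log(sum(math.exp(-energy(s)) for s in states))
# Marginal probability that the spins at one edge disagree:
u, v = (0, 0), (0, 1)
p_disagree = sum(math.exp(-energy(s)) for s in states
                 if s[u] != s[v]) / math.exp(logZ)
print(ground, logZ, p_disagree)
```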


On the Distribution of the Adaptive LASSO Estimator

arXiv.org Machine Learning

We study the distribution of the adaptive LASSO estimator (Zou (2006)) in finite samples as well as in the large-sample limit. The large-sample distributions are derived both for the case where the adaptive LASSO estimator is tuned to perform conservative model selection and for the case where the tuning results in consistent model selection. We show that the finite-sample as well as the large-sample distributions are typically highly non-normal, regardless of the choice of the tuning parameter. The uniform convergence rate is also obtained, and is shown to be slower than $n^{-1/2}$ when the estimator is tuned to perform consistent model selection. In particular, these results question the statistical relevance of the `oracle' property of the adaptive LASSO estimator established in Zou (2006). Moreover, we provide an impossibility result regarding the estimation of the distribution function of the adaptive LASSO estimator. The theoretical results, which are obtained for a regression model with orthogonal design, are complemented by a Monte Carlo study using non-orthogonal regressors.
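For reference, the adaptive LASSO estimator being studied solves a weighted $\ell_1$-penalized least-squares problem; the rendering below follows Zou (2006), with the weight exponent $\gamma > 0$ and the notation $\hat{\beta}_{\mathrm{LS}}$ for the least-squares estimator supplied here, since the abstract itself fixes no notation:

```latex
\hat{\beta}^{\mathrm{AL}}
  = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
    \|y - X\beta\|_2^2
    + \lambda_n \sum_{j=1}^{p} \frac{|\beta_j|}{|\hat{\beta}_{\mathrm{LS},j}|^{\gamma}}
```

Here $\lambda_n \ge 0$ is the tuning parameter whose scaling governs conservative versus consistent model selection.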


A study of structural properties on profile HMMs

arXiv.org Artificial Intelligence

Motivation: Profile hidden Markov models (pHMMs) are a popular and very useful tool in the detection of remote homologue protein families. Unfortunately, their performance is not always satisfactory when proteins are in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm and tool that tries to improve pHMM performance by using structural information while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs. Each pHMM is constructed by weighting each residue in an aligned protein according to a specific structural property of the residue. The properties used were primary, secondary and tertiary structure, accessibility and packing. HMMER-STRUCT then ranks the results by voting. Results: We used the SCOP database to perform our experiments. Throughout, we apply leave-one-family-out cross-validation over protein superfamilies. First, we used the MAMMOTH-mult structural aligner to align the training set proteins. Then, we performed two sets of experiments. In the first, we compared structure-weighted models against standard pHMMs and against each other. In the second, we compared the voting model against individual pHMMs. We compare method performance through ROC curves and through Precision/Recall curves, and assess significance through the paired two-tailed t-test. Our results show significant performance improvements of all structurally weighted models over default HMMER, and a significant improvement in sensitivity of the combined model over both the original model and the structurally weighted models.
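The abstract does not spell out the voting rule, so the sketch below uses Borda-style rank voting as a plausible stand-in for how scores from the individually weighted models could be combined; all sequence ids and scores are hypothetical.

```python
def vote(scores_per_model):
    """scores_per_model: list of dicts mapping sequence id -> model score."""
    tally = {}
    for scores in scores_per_model:
        ranked = sorted(scores, key=scores.get, reverse=True)
        for rank, seq in enumerate(ranked):
            # Borda count: a sequence earns more points the higher a model
            # ranks it; points are summed across all property-weighted models.
            tally[seq] = tally.get(seq, 0) + (len(ranked) - rank)
    return sorted(tally, key=tally.get, reverse=True)

# Hypothetical scores from models weighted by different structural properties.
secondary = {"seqA": 12.1, "seqB": 3.4, "seqC": 7.8}
packing   = {"seqA": 10.0, "seqB": 9.2, "seqC": 1.1}
print(vote([secondary, packing]))   # ids ordered by combined support
```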


Prediction with Restricted Resources and Finite Automata

arXiv.org Machine Learning

We obtain an index of the complexity of a random sequence by allowing the role of the measure in classical probability theory to be played by a function we call the generating mechanism. Typically, this generating mechanism will be a finite automaton. We generate a set of biased sequences by applying a finite state automaton with a specified number, $m$, of states to the set of all binary sequences. Thus we can index the complexity of our random sequence by the number of states of the automaton. We detail optimal algorithms to predict sequences generated in this way.
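A small example of the generating mechanism described above: a finite-state transducer (Mealy machine) applied to binary input sequences. The particular 2-state machine below is illustrative; its state records the previous input bit and it outputs the AND of the previous and current bits, so uniformly random input yields output biased toward 0 (P(1) = 1/4).

```python
import random

# transitions[state][input_bit] = (output_bit, next_state)
transitions = {
    0: {0: (0, 0), 1: (0, 1)},
    1: {0: (0, 0), 1: (1, 1)},
}

def transduce(bits, state=0):
    out = []
    for b in bits:
        o, state = transitions[state][b]
        out.append(o)
    return out

random_bits = [random.randint(0, 1) for _ in range(20)]
print(transduce(random_bits))   # biased toward 0
```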


Logic programs with propositional connectives and aggregates

arXiv.org Artificial Intelligence

Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this paper we propose a new approach, based on an analogy between aggregates and propositional connectives. First, we extend the definition of an answer set/stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as theorems about strong equivalence and splitting.
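To make the analogy concrete, here is one simple way a monotone count aggregate can be read as an abbreviation for a propositional formula; the paper's actual translation is more general, and this unfolding is only an illustration:

```latex
2 \le \mathrm{COUNT}\{\, p,\; q,\; r \,\}
\quad\rightsquigarrow\quad
(p \land q) \lor (p \land r) \lor (q \land r)
```

That is, "at least two of $p$, $q$, $r$ hold"; reading aggregates this way lets the theory of programs with nested expressions be brought to bear on them.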


Missing Data using Decision Forest and Computational Intelligence

arXiv.org Machine Learning

An autoencoder neural network is implemented to estimate the missing data. A genetic algorithm is used both to optimize the network and to estimate the missing values. Missing data are treated as arising from a Missing At Random mechanism, handled with a maximum likelihood algorithm. Network performance is measured by the mean square error of the network's predictions. The network is further optimized by implementing a decision forest. The impact of missing data is then investigated, and decision forests are found to improve the results.
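A compact sketch of the overall pipeline is given below. For brevity the autoencoder is trained here by plain gradient descent and the genetic algorithm is used only for the imputation search, whereas the abstract describes using the GA for network optimization as well; the data, network size, and GA settings are all synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic complete data with correlated features in [0, 1].
z = rng.random((500, 1))
X = np.clip(np.hstack([z, 0.8 * z + 0.1, 1 - z])
            + 0.02 * rng.standard_normal((500, 3)), 0, 1)

# One-hidden-layer autoencoder trained by gradient descent on complete rows.
n_in, n_h = 3, 2
W1, b1 = 0.5 * rng.standard_normal((n_in, n_h)), np.zeros(n_h)
W2, b2 = 0.5 * rng.standard_normal((n_h, n_in)), np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

for _ in range(2000):
    H, Y = forward(X)
    G = 2 * (Y - X) / len(X)            # gradient of MSE w.r.t. output
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * g

def recon_error(x):
    _, y = forward(x[None, :])
    return float(((y - x) ** 2).sum())

def ga_impute(x, missing, pop=30, gens=40):
    """Fill x[missing] by GA search minimizing reconstruction error."""
    P = rng.random((pop, len(missing)))
    for _ in range(gens):
        def fit(v):
            cand = x.copy(); cand[missing] = v
            return recon_error(cand)
        P = P[np.argsort([fit(v) for v in P])]      # best candidates first
        kids = P[:pop // 2].repeat(2, axis=0)       # selection + cloning
        kids[1::2] += 0.05 * rng.standard_normal(kids[1::2].shape)  # mutation
        P = np.clip(kids, 0, 1)
    x = x.copy(); x[missing] = P[0]
    return x

truth = X[0].copy()
x = truth.copy(); x[1] = np.nan                     # knock out one feature
print("true:", truth, "imputed:", ga_impute(x, [1]))
```

The key design point matches the abstract: the trained autoencoder defines the objective (reconstruction error of the completed record), and the evolutionary search only ever evaluates that objective, so any missing-value pattern can be handled.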


An analysis of a random algorithm for estimating all the matchings

arXiv.org Artificial Intelligence

Counting all the matchings of a bipartite graph can be transformed into computing the permanent of a matrix obtained from the extended bipartite graph, as shown by Yan Huo; Rasmussen presented a simple approach (RM) to approximate the permanent that yields a critical ratio of O($n\omega(n)$) for almost all 0-1 matrices, making it a promising practical way to attack this #P-complete problem. In this paper, we examine how this method performs when it is applied to count all the matchings via that transformation. We prove that, with a certain probability, the critical ratio is very large, with a growth factor larger than any polynomial in $n$, even in the sense of almost all 0-1 matrices. Hence, RM fails to work well when counting all the matchings via computing the permanent of the matrix. In other words, known methods for estimating the permanent must be employed with care when counting all the matchings through that transformation.
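For concreteness, here is Rasmussen's unbiased estimator (RM) for the permanent of a 0-1 matrix, the method whose critical ratio is analyzed above; a single run follows a random greedy path, and averaging many independent runs estimates the permanent.

```python
import random

def rasmussen_estimate(A):
    n = len(A)
    cols = list(range(n))
    est = 1
    for row in range(n):
        choices = [j for j in cols if A[row][j] == 1]
        if not choices:
            return 0                  # dead end: this run contributes 0
        est *= len(choices)           # weight by the number of choices
        cols.remove(random.choice(choices))
    return est

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1]]                       # permanent of this matrix is 3
runs = 100_000
print(sum(rasmussen_estimate(A) for _ in range(runs)) / runs)
```

The critical ratio discussed in the abstract measures how many such runs are needed for a reliable estimate; the paper's point is that this number blows up when the input matrices come from the matching-counting transformation.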