In this paper we propose a new algorithm for solving general two-player turn-taking games that performs symbolic search utilizing binary decision diagrams (BDDs). It consists of two stages: first, it determines all breadth-first search (BFS) layers using forward search without duplicate detection; second, the solving process operates backward only within these BFS layers, partitioning all BDDs according to the layers in which the states reside. We provide experimental results for selected games and compare to a previous approach. This comparison shows that in most cases the new algorithm outperforms the existing one in terms of runtime and memory, so that it can solve games that could not be solved before with a general approach.
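The two-stage scheme above can be sketched in miniature. The following is a conceptual illustration only: it uses plain Python sets in place of BDDs, the names `successors` and `is_win` are hypothetical stand-ins for the game's transition relation and terminal test, and for brevity it collapses the two-player AND/OR distinction into simple backward reachability to won positions.

```python
def solve_layered(initial, successors, is_win, max_depth):
    # Stage 1: forward BFS without duplicate detection, recording each layer.
    layers = [{initial}]
    for _ in range(max_depth):
        frontier = {t for s in layers[-1] for t in successors(s)}
        if not frontier:
            break
        layers.append(frontier)

    # Stage 2: solve backward, restricted to the states inside each BFS layer
    # (in the real algorithm each layer is one partitioned BDD).
    solved = {s for layer in layers for s in layer if is_win(s)}
    for depth in range(len(layers) - 2, -1, -1):
        solved |= {s for s in layers[depth]
                   if any(t in solved for t in successors(s))}
    return solved
```

Restricting the backward step to one layer at a time is what keeps the intermediate BDDs small in the real algorithm; the set version only mirrors the control flow.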
Traditional case-based classification methods are based on feature similarity. In contrast, class-to-class (C2C) weighting also considers whether the difference between two cases has been seen before. Combined with instance-specific weighting, C2C weighting learns the local patterns of both similarities and differences (shortened to patterns). Once C2C weighting has learned the pattern between case A of class C_1 and some set of cases R of class C_2, then given a query Q whose difference from A matches the pattern between A and R, we can skip the cases around A and continue the search for near neighbors around R. Based on this, we developed an algorithm, C2C trace retrieval, which quickly traverses promising cases, retrieves relevant cases from different classes, and provides an informed hypothesis of the query's class. C2C trace retrieval achieves great efficiency at a reasonable cost in accuracy. Therefore, it can be used as a fast classification method or as the first pass for a more sophisticated method.
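The "trace" idea above can be sketched as a search loop: when the query's difference from the current case matches a learned case-to-case pattern, jump straight to the linked case instead of scanning neighbors. All names here (`diff`, `matches`, `patterns`, `neighbors`) are illustrative assumptions, not the paper's actual interfaces.

```python
def c2c_trace(query, start, patterns, diff, matches, neighbors, max_steps=50):
    """Follow learned C2C patterns toward the query; fall back to greedy
    neighbor search when no pattern applies."""
    current, trace = start, [start]
    for _ in range(max_steps):
        d = diff(query, current)
        jumped = False
        for pattern, target in patterns.get(current, []):
            if matches(d, pattern):      # this difference was seen before:
                current = target         # skip ahead to the linked case
                jumped = True
                break
        if not jumped:
            nxt = min(neighbors(current),
                      key=lambda c: len(diff(query, c)), default=None)
            if nxt is None or len(diff(query, nxt)) >= len(d):
                break                    # no closer neighbor: stop
            current = nxt
        trace.append(current)
    return trace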
We present several new algorithms for bidirectional best-first search that employ a front-to-front strategy of estimating distances from newly generated frontier nodes in one search direction to existing frontier nodes in the other search direction, rather than estimating distances to terminal nodes in both searches. Unlike previous front-to-front strategies that use a shared priority queue to manage both frontiers, we use a separate data structure for each search, and choose that data structure to minimize the amount of computational effort required by the best-first search algorithm it supports.
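The separate-queue, front-to-front idea can be illustrated on an unweighted graph: each direction keeps its own priority queue, and a node's priority uses an estimated distance to the *other* search's current frontier rather than to that search's goal. This is a minimal sketch, not one of the paper's algorithms: `h` is an assumed front-to-front estimator, and the sketch returns the first meeting cost found, without the stopping-condition bookkeeping needed to guarantee optimality in general.

```python
import heapq

def bidir_front_to_front(start, goal, neighbors, h):
    open_, g, frontier = {}, {}, {}
    for d, root in (("fwd", start), ("bwd", goal)):
        open_[d], g[d], frontier[d] = [(0, root)], {root: 0}, {root}
    while open_["fwd"] and open_["bwd"]:
        for d, other in (("fwd", "bwd"), ("bwd", "fwd")):
            if not open_[d]:
                continue
            _, n = heapq.heappop(open_[d])
            frontier[d].discard(n)
            if n in g[other]:                     # the two searches met
                return g[d][n] + g[other][n]
            for m in neighbors(n):
                gm = g[d][n] + 1                  # unit edge costs
                if m not in g[d] or gm < g[d][m]:
                    g[d][m] = gm
                    # front-to-front: estimate to the opposite frontier
                    est = min((h(m, f) for f in frontier[other]), default=0)
                    heapq.heappush(open_[d], (gm + est, m))
                    frontier[d].add(m)
    return None
```

The `min` over the opposite frontier is exactly the expensive step that motivates choosing a per-direction data structure carefully.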
Learning strategies for problems on graph and tree structures with no a priori size limitations, in cases where no known solution exists (and thus supervised data is hard to obtain), is a difficult problem with potential applications in a wide range of domains, from biological networks to protein folding and social network search. The main challenges arise from the variable-size representation that must be resolved in the context of Reinforcement Learning (RL). In this paper we consider a common, specific tree problem and show that it can be addressed using a combination of feature engineering and carefully designed learning processes. In particular, we consider the classical Nearest Neighbor Interchange (NNI) distance between unrooted labeled trees, defined as the minimum-cost sequence of operations that transforms one tree into another. We introduce a representation and a reinforcement learning method that learns the transition dynamics and iteratively transforms an arbitrary initial labeled tree into a goal configuration reachable through NNI. The differential tree representation and NNI actions permit the system to learn a strategy that is applicable to trees of arbitrary size. To train the system, we introduce a training process that uses randomly sampled trajectories to train incrementally on more and more complex problems, overcoming the difficulty of the overall strategy space. Experiments show that the system can successfully learn a strategy for effective NNI on complex trees.
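For readers unfamiliar with the move itself, a single NNI operation can be shown concretely: across an internal edge (u, v) of an unrooted tree, one subtree attached to u is exchanged with one attached to v. The adjacency-dict representation below is an illustrative choice, not the paper's differential tree representation.

```python
def nni(adj, u, v, a, b):
    """One NNI move on an unrooted tree stored as {node: set(neighbors)}:
    swap subtree a (attached to u) with subtree b (attached to v)."""
    assert v in adj[u] and a in adj[u] and b in adj[v]
    adj[u].remove(a); adj[v].remove(b)   # detach the two subtrees
    adj[u].add(b);    adj[v].add(a)      # reattach them, exchanged
    adj[a].remove(u); adj[a].add(v)      # update the subtrees' own links
    adj[b].remove(v); adj[b].add(u)
    return adj
```

On the quartet tree ((1,2),(3,4)) with internal nodes 5 and 6, `nni(adj, 5, 6, 2, 3)` yields ((1,3),(2,4)); an RL policy over such trees chooses which (u, v, a, b) to apply at each step.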
The similarity assumption in Case-Based Reasoning (similar problems have similar solutions) has been questioned by several researchers. If knowledge about the adaptability of solutions is available, it can be exploited to guide retrieval. Several approaches have been proposed in this context, often assuming a similarity or cost measure defined over the solution space. In this paper, we propose a novel approach where the adaptability of the solutions is captured inside a metric Markov Random Field (MRF). Each case is represented as a node in the MRF, and edges connect cases whose solutions are close in the solution space. The states of the nodes represent the adaptability effort with respect to the query. Potentials are defined to encourage connected nodes to share the same state; this models the fact that cases having similar solutions should require the same adaptability effort with respect to the query. The main goal is to enlarge the set of potentially adaptable cases that are retrieved (the recall) without significantly sacrificing the precision of retrieval. We report on experiments concerning a retrieval architecture where a simple kNN retrieval is followed by a further retrieval step based on MRF inference.
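The MRF structure described above can be made concrete at toy scale: each case is a node whose state is a discrete adaptation-effort level, edges link cases with similar solutions, and a metric pairwise potential penalizes disagreement between linked cases. The unary costs, effort levels, and exhaustive MAP inference below are illustrative assumptions, not the paper's exact model or inference method.

```python
import itertools

def map_inference(unary, edges, levels, smooth=1.0):
    """Exhaustive MAP over a pairwise MRF: unary[n][s] is the cost of
    assigning effort level s to case n; the metric pairwise term
    smooth * |s_u - s_v| pulls linked cases toward the same level."""
    nodes = sorted(unary)
    best, best_cost = None, float("inf")
    for assign in itertools.product(range(levels), repeat=len(nodes)):
        state = dict(zip(nodes, assign))
        cost = sum(unary[n][state[n]] for n in nodes)
        cost += smooth * sum(abs(state[u] - state[v]) for u, v in edges)
        if cost < best_cost:
            best, best_cost = state, cost
    return best
```

For example, a case whose own evidence is ambiguous inherits the effort level of its neighbors in the solution space, which is how the MRF step can recall adaptable cases that plain kNN would miss.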
Bhuiyan, Farzana Ahamed (Tennessee Technological University) | Sharif, MD Bulbul (Tennessee Technological University) | Tinker, Paul Joshua (Tennessee Technological University) | Eberle, William (Tennessee Technological University) | Talbert, Douglas A. (Tennessee Technological University) | Ghafoor, Sheikh Khaled (Tennessee Technological University) | Frey, Lewis (Medical University of South Carolina)
In this work, we first attempt to replicate an earlier study on gene selection and clustering, and then we extend this work by applying a different type of hierarchical clustering to discover interesting subsets of genes from breast cancer data. Replication of such studies is a known challenge and an active area of research in bioinformatics. The work presented in this paper is three-fold. First, we replicate a study conducted at the University of North Carolina to generate an initial set of genes. Second, we apply an approach called Distance Weighted Discrimination to fuse multiple, disparate breast cancer datasets into a single validation set. Third, we perform hierarchical clustering and k-means clustering on this validation set to discover natural groupings and compare the clusters generated by both methods. While applying the hierarchical clustering is part of the reproduction step, we extend the research by trying two different forms of hierarchical clustering. We also apply k-means clustering for the same purpose and compare all three methods using Kaplan-Meier estimation and Cox proportional hazards regression. We discover that among the three methods, k-means clustering gives us the best results.
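The k-means partitioning compared against hierarchical clustering in the study can be illustrated with a stdlib-only toy version. The 1-D points and parameters below are purely illustrative; the paper itself clusters gene-expression profiles, not scalars.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy 1-D k-means: alternate assigning points to the nearest center
    and moving each center to the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Unlike hierarchical clustering, which yields a full dendrogram that can be cut at any depth, k-means commits to k flat groups; the study compares the resulting groupings via survival analysis.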
One of the most essential parts of any recommender system is personalization: how acceptable the recommendations are from the user's perspective. However, in many real-world applications, there are multiple objectives, often from multiple stakeholders, that need to be incorporated into recommendation generation. In this work, we define the problem of multi-stakeholder recommendation and focus on finding algorithms for a special case where the recommender system itself is also a stakeholder. We also define different types of system-level objectives and find algorithmic solutions for each of them, such that similar problems can be solved by the same class of algorithms. Finally, we explore the idea of incrementally incorporating system-level objectives into recommender systems, to tackle the problems with existing optimization techniques that only optimize individual users' lists rather than looking at the whole picture of system performance over time.

With autonomous robots being exposed to unstructured environments, they inevitably run across cases in which they are unable to overcome their limitations, such as removing objects in their path, or location uncertainty. These limitations can be overcome when robots obtain help from humans. We are investigating how robots can effectively interact with and request help from humans. We test a scenario where the robot needs a door to be opened by a human so that it can complete its other tasks; the robot is too short to reach the door handle itself. In the polite request, the robot may say "Can you open the door, please?", while in the indirect request, the robot may say "I cannot open the door and it is blocking my way". In the friendly manner, the robot asks in a less formal tone, such as "You seem taller than me. Would you open the door for me?". The robot can point its hand at the door that it needs opened. Afterwards, humans were interviewed as to why they did or did not take the robot seriously.
The polite interaction manner was significantly more efficient. We show how to factor the situational-awareness effects (whether participants realize the nature of the experiment) into the analysis. The proposed evaluation procedure allows identifying promising mechanisms for such human-robot interactions.
A Computer-Assisted Reading and Analysis of Texts (CARAT) process is a complex technology that connects language, text, information, and knowledge theories with computational formalizations, statistical approaches, symbolic approaches, standard and non-standard logics, etc. This process should always remain under the control of the user, according to his subjectivity, his knowledge, and the purpose of his analysis. It therefore becomes important to design platforms that support the design of CARAT tools, their management, their adaptation to new needs, and experimentation. Even though several platforms for mining data, including textual data, have emerged in recent years, they lack flexibility and sound formal foundations. In this paper, we propose a formal model with strong logical foundations, based on typed applicative systems.