

Recognizing and Justifying Text Entailment Through Distributional Navigation on Definition Graphs

AAAI Conferences

Text entailment, the task of determining whether a piece of text logically follows from another piece of text, has become an important component of many natural language processing tasks, such as question answering and information retrieval. For entailments requiring world knowledge, most systems still work as a "black box," providing a yes/no answer that does not explain the reasoning behind it. We propose an interpretable text entailment approach that, given a structured definition graph, uses a navigation algorithm based on distributional semantic models to find a path in the graph linking text and hypothesis. If such a path is found, it is used to provide a human-readable justification explaining why the entailment holds. Experiments show that the proposed approach achieves results comparable to those of well-established entailment algorithms, while also meeting Explainable AI requirements by supplying clear explanations that allow interpretation of the inference model.
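To give a rough feel for the navigation idea (this is not the authors' implementation; the toy graph, the random word vectors, and the navigate/cosine helpers are all illustrative assumptions), the sketch below greedily walks a definition graph from a term in the text toward the hypothesis term, always following the neighbour whose vector is distributionally closest to the hypothesis, and returns the traversed path as a candidate justification.

# Toy sketch of distributional navigation over a definition graph.
# Graph, vectors, and helper names are illustrative, not the paper's code.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def navigate(graph, vectors, start, hypothesis, max_steps=10):
    """Greedily follow definition links toward the hypothesis term;
    the returned path doubles as a human-readable justification."""
    path, current = [start], start
    for _ in range(max_steps):
        if current == hypothesis:
            return path
        neighbours = [n for n in graph.get(current, []) if n not in path]
        if not neighbours:
            return None
        current = max(neighbours,
                      key=lambda n: cosine(vectors[n], vectors[hypothesis]))
        path.append(current)
    return None

# Hypothetical definition graph: term -> terms appearing in its definition.
graph = {"violin": ["instrument", "string"], "instrument": ["object"],
         "string": ["cord"]}
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["violin", "string", "cord", "object"]}
vectors["instrument"] = vectors["object"] + 0.01 * rng.normal(size=50)
print(navigate(graph, vectors, "violin", "object"))  # ['violin', 'instrument', 'object']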


Learning Bayesian Networks with Incomplete Data by Augmentation

AAAI Conferences

We present new algorithms for learning Bayesian networks from data with missing values using a data augmentation approach. An exact Bayesian network learning algorithm is obtained by recasting the problem into a standard Bayesian network learning problem without missing data. As expected, the exact algorithm does not scale to large domains. We therefore build on the exact method to create an approximate algorithm using a hill-climbing technique. This algorithm scales to large domains as long as a suitable standard structure learning method for complete data is available. We perform a wide range of experiments to demonstrate the benefits of learning Bayesian networks with this new approach.
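As a caricature of the overall loop (this is not the paper's exact algorithm: structure learning is reduced to choosing at most one earlier-indexed parent per variable by a BIC-style score, and re-imputation is a crude frequency lookup rather than proper inference), the sketch below alternates between completing the data and re-learning a structure from the completion.

# Sketch: complete the data, learn a simple structure, refine the completion.
import numpy as np

def parent_score(child, parent, data):
    """BIC-style score of the edge parent -> child on complete discrete data."""
    n = len(data)
    cvals, pvals = np.unique(data[:, child]), np.unique(data[:, parent])
    loglik, n_params = 0.0, (len(cvals) - 1) * len(pvals)
    for p in pvals:
        rows = data[data[:, parent] == p]
        for c in cvals:
            k = np.count_nonzero(rows[:, child] == c)
            if k:
                loglik += k * np.log(k / len(rows))
    return loglik - 0.5 * n_params * np.log(n)

def root_score(child, data):
    """BIC-style score of child with no parents."""
    n = len(data)
    counts = np.unique(data[:, child], return_counts=True)[1]
    return float(np.sum(counts * np.log(counts / n))) - 0.5 * (len(counts) - 1) * np.log(n)

def learn_structure(data):
    """Hill climbing restricted to at most one parent per variable;
    parents are limited to earlier-indexed variables to guarantee acyclicity."""
    parents = {}
    for child in range(data.shape[1]):
        best, best_s = None, root_score(child, data)
        for parent in range(child):
            s = parent_score(child, parent, data)
            if s > best_s:
                best, best_s = parent, s
        if best is not None:
            parents[child] = best
    return parents

def augment_and_learn(raw, iters=3):
    """raw: 2-D array of non-negative discrete codes, with -1 marking missing entries."""
    data = raw.copy()
    for col in range(data.shape[1]):                 # initial completion: column mode
        observed = data[data[:, col] >= 0, col]
        data[data[:, col] < 0, col] = np.bincount(observed).argmax()
    for _ in range(iters):
        structure = learn_structure(data)
        for col, parent in structure.items():        # refine the imputed entries
            for row in np.where(raw[:, col] < 0)[0]:
                same = data[data[:, parent] == data[row, parent], col]
                data[row, col] = np.bincount(same).argmax()
    return structure, data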


Unsupervised Domain Adaptation with a Relaxed Covariate Shift Assumption

AAAI Conferences

The training and test domains are commonly referred to in the domain adaptation literature as the source and target domains, respectively. Domain diversity can emerge as a result of the scarcity of available labeled data from the target domain. It can as well be innate in the problem itself due to, for example, an ongoing change occurring to the source domain, as in cases where the original source domain keeps changing over time. Domain adaptation aims at finding solutions for this kind of problem, where the training (source) data are […]. The distributions can be different (Storkey and Sugiyama 2006; Ben-David and Urner 2012; 2014). Covariate shift is a valid assumption in some problems, but it can as well be quite unrealistic for many other domain adaptation tasks where the conditional label distributions are not (or, more precisely, not guaranteed to be) identical. The simplification resulting from assuming identical labeling distributions facilitates the quest for a tractable learning algorithm, albeit possibly at the cost of reducing the expressive power of the representation, and consequently the accuracy of the resulting hypothesis.
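For context only (this is the standard covariate-shift recipe that the paper's relaxed assumption departs from, not the authors' method), a common way to exploit the assumption of a shared p(y|x) is to reweight source examples by an estimate of p_target(x)/p_source(x) obtained from a domain classifier; the data and model choices below are illustrative.

# Illustrative covariate-shift baseline: importance weights from a domain
# classifier, then a weighted source-domain classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(500, 2))          # labelled source inputs
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(0.7, 1.0, size=(500, 2))          # unlabelled target inputs

# 1) Domain classifier: source = 0, target = 1.
dom_X = np.vstack([Xs, Xt])
dom_y = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
dom_clf = LogisticRegression().fit(dom_X, dom_y)

# 2) w(x) = p(target|x) / p(source|x), proportional to p_t(x)/p_s(x) here.
p = dom_clf.predict_proba(Xs)[:, 1]
w = p / (1.0 - p)

# 3) Weighted source classifier, relying on p(y|x) being shared across domains.
clf = LogisticRegression().fit(Xs, ys, sample_weight=w)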


Uniform Interpolation and Forgetting for ALC Ontologies with ABoxes

AAAI Conferences

Uniform interpolation and the dual task of forgetting restrict the ontology to a specified subset of concept and role names. This makes them useful tools for ontology analysis, ontology evolution and information hiding. Most previous research focused on uniform interpolation of TBoxes. However, especially for applications in privacy and information hiding, it is essential that uniform interpolation methods can deal with ABoxes as well. We present the first method that can compute uniform interpolants of any ALC ontology with ABoxes. ABoxes bring their own challenges when computing uniform interpolants, possibly requiring disjunctive statements or nominals in the resulting ABox. Our method can compute representations of uniform interpolants in ALCO. An evaluation on realistic ontologies shows that these uniform interpolants can be practically computed, and can often even be presented in pure ALC.
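A small hand-worked illustration (not taken from the paper): forgetting the concept name B from the ontology with TBox {A ⊑ ∃r.B, B ⊑ C} and ABox {B(a)} yields, for the remaining signature {A, C, r, a}, the uniform interpolant with TBox {A ⊑ ∃r.C} and ABox {C(a)}: every consequence expressible without B, such as A ⊑ ∃r.C and C(a), is preserved, and nothing stronger is entailed. The disjunctive ABox statements and nominals mentioned above only become necessary in richer cases.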


Property Directed Reachability for Automated Planning

AAAI Conferences

Property Directed Reachability (PDR), also known as IC3, is a very promising recent method for deciding reachability in symbolically represented transition systems. While originally conceived as a model checking algorithm for hardware circuits, it has already been successfully applied in several other areas. This paper summarizes the first investigation of PDR from the perspective of automated planning.


Optimizing Objective Function Parameters for Strength in Computer Game-Playing

AAAI Conferences

The learning of evaluation functions from game records has been widely studied in the field of computer game-playing. Conventional learning methods optimize the evaluation function parameters using the game records of expert players in order to imitate their play. Such methods employ objective functions that increase the agreement between the moves selected by game-playing programs and the moves in the records of actual games. However, increasing this agreement does not always improve the strength of a program, and it is not clear how the agreement relates to the strength of a trained program. To address this problem, this paper presents a learning method that optimizes objective function parameters for playing strength. The proposed method employs an evolutionary learning algorithm with the strengths (Elo ratings) of programs as their fitness scores. Experimental results show that the proposed method is effective: programs using the objective function it produces are superior to those using conventional objective functions.
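The general shape of such a loop can be sketched as follows (a toy illustration only: play_game is a hypothetical stand-in for running the actual game-playing programs, and fitness is a round-robin win count rather than a full Elo computation).

# Sketch: evolve parameter vectors using playing strength as the fitness signal.
import numpy as np

rng = np.random.default_rng(0)

def play_game(params_a, params_b):
    """Hypothetical stand-in for 'program A vs. program B'; returns 1 if A wins.
    Strength is simulated as closeness to a hidden optimum."""
    target = np.ones_like(params_a)
    sa = -np.linalg.norm(params_a - target)
    sb = -np.linalg.norm(params_b - target)
    return int(rng.random() < 1.0 / (1.0 + np.exp(sb - sa)))

def fitness(population, games_per_pair=2):
    """Round-robin win counts, used as a crude strength proxy."""
    wins = np.zeros(len(population))
    for i, a in enumerate(population):
        for j, b in enumerate(population):
            if i != j:
                wins[i] += sum(play_game(a, b) for _ in range(games_per_pair))
    return wins

def evolve(dim=4, pop_size=8, generations=20, sigma=0.3):
    population = [rng.normal(size=dim) for _ in range(pop_size)]
    for _ in range(generations):
        scores = fitness(population)
        elite = [population[i] for i in np.argsort(scores)[-pop_size // 2:]]
        population = elite + [e + rng.normal(scale=sigma, size=dim) for e in elite]
    return population[np.argmax(fitness(population))]

print(evolve())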


Invited Talk Abstracts

AAAI Conferences

Both tools utilize the same transformed Robust PCA model for the visual data, D = A + E, and use practically the same algorithm for extracting the low-rank structures A from the visual data D, despite image domain transformation T and corruptions E. We will show how these two seemingly simple tools can help unleash tremendous information in images and videos that we used to struggle to get. We believe these new tools will bring disruptive changes to many challenging tasks in computer vision and image processing, including […]

Lawrence Carin

Hierarchical Bayesian methods are employed to learn a reversible statistical embedding. The proposed embedding procedure is connected to spectral embedding methods (e.g., diffusion maps and Isomap), yielding a new statistical spectral framework. The proposed approach allows one to discard the training data when embedding new data, allows synthesis of high-dimensional data from the embedding space, and provides accurate estimation of the latent-space dimensionality.
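To make the D = A + E decomposition in the first abstract above concrete, here is a minimal numpy sketch of classical Robust PCA via principal component pursuit (inexact augmented Lagrangian); it ignores the domain transformation T and the hierarchical Bayesian machinery discussed in the talks and is only an illustrative baseline.

# Minimal Robust PCA sketch: decompose D into a low-rank part A
# and a sparse corruption part E.
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, max_iter=500, tol=1e-7):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    A, E = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)
        E = shrink(D - A + Y / mu, lam / mu)
        residual = D - A - E
        Y = Y + mu * residual
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(residual) / np.linalg.norm(D) < tol:
            break
    return A, E

# Toy demo: low-rank matrix plus sparse corruptions.
rng = np.random.default_rng(0)
L = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 60))
S = (rng.random((60, 60)) < 0.05) * rng.normal(scale=10.0, size=(60, 60))
A, E = rpca(L + S)
print(np.linalg.norm(A - L) / np.linalg.norm(L))  # relative recovery error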