Cognitive Bias for Universal Algorithmic Intelligence
Potapov, Alexey, Rodionov, Sergey, Myasnikov, Andrew, Begimov, Galymzhan
Existing theoretical models of universal algorithmic intelligence are not practically realizable. A more pragmatic approach to artificial general intelligence is based on cognitive architectures, which are, however, non-universal in the sense that they can construct and use models of the environment only from Turing-incomplete model spaces. We believe that the path to real AGI lies in bridging the gap between these two approaches. This is possible if one treats cognitive functions as a "cognitive bias" (priors and search heuristics) that can be incorporated into models of universal algorithmic intelligence without violating their universality. Previously reported results supporting this approach, and its overall feasibility, are discussed using the examples of perception, planning, knowledge representation, attention, theory of mind, language, and others.
Hybrid technique for effective knowledge representation & a comparative study
Tanwar, Poonam, Prasad, T. V., Datta, Kamlesh
Knowledge representation (KR) and an inference mechanism are essential for making a system intelligent. A system is considered intelligent if its intelligence is equivalent to that of a human being, either in a particular domain or in general. Because of incomplete, ambiguous, and uncertain information, building an intelligent system is very difficult. The objective of this paper is to present a hybrid KR technique that makes a system both effective and optimistic; this requirement arises because the system must be able to answer queries with an associated confidence factor. The paper also compares various existing hybrid KR techniques with the proposed one.
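A minimal Python sketch of the kind of confidence-carrying inference the abstract alludes to; the frame store, rule base, and certainty factors below are invented for illustration and are not the paper's technique:

    # Hypothetical frame-plus-rule hybrid KR where every inference carries a
    # certainty factor, so answers come with a confidence attached.
    frames = {"tweety": {"isa": "bird"}}   # invented frame store
    rules = [
        # (premise attribute, value, conclusion, certainty factor of the rule)
        ("isa", "bird", ("can_fly", True), 0.8),
    ]

    def infer(entity, frames, rules):
        """Return (conclusion, confidence) pairs derivable for an entity."""
        answers = []
        facts = frames.get(entity, {})
        for attr, value, (c_attr, c_val), cf in rules:
            if facts.get(attr) == value:
                answers.append(((c_attr, c_val), cf))
        return answers

    print(infer("tweety", frames, rules))  # [(('can_fly', True), 0.8)]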
Qualitative Modelling via Constraint Programming: Past, Present and Future
Kelsey, Thomas W., Kotthoff, Lars, Jefferson, Christoffer A., Linton, Stephen A., Miguel, Ian, Nightingale, Peter, Gent, Ian P.
Qualitative modelling is a technique integrating the fields of theoretical computer science, artificial intelligence and the physical and biological sciences. The aim is to model the behaviour of systems without estimating parameter values and fixing the exact quantitative dynamics. Traditional applications are the study of the dynamics of physical and biological systems at a higher level of abstraction than that obtained by estimating numerical parameter values for a fixed quantitative model. Qualitative modelling has been studied and implemented to varying degrees of sophistication in Petri nets, process calculi and constraint programming. In this paper we reflect on the strengths and weaknesses of existing frameworks, demonstrate how recent advances in constraint programming can be leveraged to produce high-quality qualitative models, and describe the advances in theory and technology that would be needed to make constraint programming the best option for scientific investigation in the broadest sense.
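As an illustration of qualitative modelling as constraint solving, the following Python sketch enumerates the qualitative states consistent with a sign-algebra constraint. Brute-force enumeration stands in for a real constraint solver, and the model itself is invented for illustration:

    # Variables take signs rather than numbers; constraints encode
    # qualitative algebra instead of exact quantitative dynamics.
    from itertools import product

    SIGNS = ("-", "0", "+")

    def qplus(a, b):
        """Qualitative addition: the set of possible signs of a + b."""
        if a == "0": return {b}
        if b == "0": return {a}
        if a == b:   return {a}
        return {"-", "0", "+"}  # opposite signs: result is ambiguous

    # Toy model: dx = inflow + outflow, inflow known positive, outflow negative.
    solutions = [(i, o, d)
                 for i, o, d in product(SIGNS, repeat=3)
                 if i == "+" and o == "-" and d in qplus(i, o)]
    print(solutions)  # [('+', '-', '-'), ('+', '-', '0'), ('+', '-', '+')]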
Generalized Canonical Correlation Analysis for Disparate Data Fusion
Sun, Ming, Priebe, Carey E., Tang, Minh
Manifold matching works to identify embeddings of multiple disparate data spaces into the same low-dimensional space, where joint inference can be pursued. It is an enabling methodology for fusion and inference from multiple and massive disparate data sources. In this paper we focus on a method called Canonical Correlation Analysis (CCA) and its generalization Generalized Canonical Correlation Analysis (GCCA), which belong to the more general Reduced Rank Regression (RRR) framework. We present an efficiency investigation of CCA and GCCA under different training conditions for a particular text document classification task.
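A hedged sketch of the two-view case using scikit-learn's standard CCA implementation, with synthetic data; this is not the authors' code or experimental setup:

    # Fit CCA to embed two disparate feature spaces into a shared
    # low-dimensional space where a single classifier could operate.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    z = rng.normal(size=(200, 2))                  # shared latent structure
    X = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
    Y = z @ rng.normal(size=(2, 15)) + 0.1 * rng.normal(size=(200, 15))

    cca = CCA(n_components=2)
    X_c, Y_c = cca.fit_transform(X, Y)             # matched embeddings
    corr = [np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1] for k in range(2)]
    print(corr)                                    # canonical correlations, near 1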
Textual Features for Programming by Example
Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler, Kalai, Adam Tauman
In Programming by Example, a system attempts to infer a program from input and output examples, generally by searching for a composition of certain base functions. Performing a naive brute force search is infeasible for even mildly involved tasks. We note that the examples themselves often present clues as to which functions to compose, and how to rank the resulting programs. In text processing, which is our domain of interest, clues arise from simple textual features: for example, if parts of the input and output strings are permutations of one another, this suggests that sorting may be useful. We describe a system that learns the reliability of such clues, allowing for faster search and a principled ranking over programs. Experiments on a prototype of this system show that this learning scheme facilitates efficient inference on a range of text processing tasks.
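The following Python sketch illustrates the clue-guided search idea at toy scale; the base functions, textual features, and weighting scheme are invented for illustration and do not reproduce the authors' system:

    # Textual features of the example reweight base functions before a
    # brute-force search over compositions of bounded depth.
    from itertools import product

    BASE = {
        "sort":    lambda s: "".join(sorted(s)),
        "reverse": lambda s: s[::-1],
        "upper":   str.upper,
        "lower":   str.lower,
    }

    def clue_weights(inp, out):
        """Boost functions suggested by simple textual features."""
        w = {name: 1.0 for name in BASE}
        if sorted(inp.lower()) == sorted(out.lower()):  # permutation clue
            w["sort"] += 2.0
            w["reverse"] += 2.0
        return w

    def search(inp, out, depth=2):
        w = clue_weights(inp, out)
        candidates = []
        for prog in product(BASE, repeat=depth):
            val = inp
            for f in prog:
                val = BASE[f](val)
            if val == out:
                candidates.append((sum(w[f] for f in prog), prog))
        return max(candidates)[1] if candidates else None

    print(search("cba", "ABC"))  # a two-step program such as ('upper', 'sort')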
RIO: Minimizing User Interaction in Ontology Debugging
Rodler, Patrick, Shchekotykhin, Kostyantyn, Fleiss, Philipp, Friedrich, Gerhard
Efficient ontology debugging is a cornerstone for many activities in the context of the Semantic Web, especially when automatic tools produce (parts of) ontologies, as in the field of ontology matching. The best currently known interactive debugging systems rely upon meta information in the form of fault probabilities, which can speed up the debugging procedure in the good case but can also have a negative impact on performance in the bad case. The problem is that the quality of the meta information can only be assessed a posteriori. Consequently, as long as the actual fault is unknown, there is always some risk of suboptimal interactive diagnosis discrimination. As an alternative, one might prefer a tool that pursues a no-risk strategy; in this case, however, possibly well-chosen meta information cannot be exploited, again resulting in inefficient debugging actions. In this work we present a reinforcement learning strategy that continuously adapts its behavior depending on the performance achieved and minimizes the risk of using low-quality meta information. The method is therefore suitable for application scenarios where reliable a priori fault estimates are difficult to obtain. Using problematic ontologies from the field of ontology matching, we show that the proposed risk-aware query strategy outperforms both active learning approaches and no-risk strategies on average in terms of the required amount of user interaction.
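A minimal Python sketch of the adaptive idea, illustrative only and not the RIO implementation: a scalar risk parameter could be nudged toward riskier, probability-driven queries whenever the meta information pays off, and back toward no-risk split-in-half queries when it does not:

    def update_risk(risk, eliminated_fraction, lr=0.3):
        """Shift toward risky queries when meta information pays off."""
        # eliminated_fraction >= 0.5 means the answer did at least as well
        # as split-in-half, so the fault probabilities seem trustworthy.
        reward = eliminated_fraction - 0.5
        return min(1.0, max(0.0, risk + lr * reward))

    risk = 0.5
    for frac in (0.9, 0.8, 0.2, 0.1):   # simulated outcomes of four queries
        risk = update_risk(risk, frac)
        print(round(risk, 3))           # rises after informative answers, falls after poor ones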
Submodularity in Batch Active Learning and Survey Problems on Gaussian Random Fields
Ma, Yifei, Garnett, Roman, Schneider, Jeff
Many real-world datasets can be represented as a graph whose edge weights designate similarities between instances. A discrete Gaussian random field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior covariance is the inverse of a graph Laplacian. Minimizing the trace of the predictive covariance Sigma (V-optimality) on GRFs has proven successful in batch active learning classification problems with budget constraints, but a worst-case bound has been missing. We show that V-optimality on GRFs, as a function of the batch query set, is submodular, and hence its greedy selection algorithm guarantees a (1 - 1/e) approximation ratio. Moreover, GRF models satisfy the absence-of-suppressor (AofS) condition. For active survey problems, we propose an analogous survey criterion that minimizes 1'Sigma1, the variance of the sum of all predictions. In practice, the V-optimality criterion performs better than GPs with mutual information gain criteria and allows nonuniform costs for different nodes.
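A hedged numerical sketch of greedy V-optimality selection on a GRF; the graph and regularization constant are made up for illustration:

    # The prior covariance is the (regularized) inverse graph Laplacian, and
    # each greedy step queries the node whose observation most reduces the
    # trace of the posterior covariance.
    import numpy as np

    A = np.array([[0, 1, 1, 0, 0],   # adjacency of a small 5-node graph
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    Sigma = np.linalg.inv(L + 1e-2 * np.eye(5))   # GRF prior covariance

    def posterior_trace(Sigma, queried):
        """Trace of the GP posterior covariance after noise-free queries."""
        q = list(queried)
        S_qq = Sigma[np.ix_(q, q)]
        S_aq = Sigma[:, q]
        return np.trace(Sigma - S_aq @ np.linalg.solve(S_qq, S_aq.T))

    batch, budget = [], 2
    for _ in range(budget):                       # greedy (1 - 1/e) selection
        rest = [v for v in range(5) if v not in batch]
        batch.append(min(rest, key=lambda v: posterior_trace(Sigma, batch + [v])))
    print(batch)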
A framework for large-scale distributed AI search across disconnected heterogeneous infrastructures
Kotthoff, Lars, Kelsey, Tom, McCaffery, Martin
We present a framework for large-scale distributed eScience Artificial Intelligence search. Our approach is generic and can be used for many different problems. Unlike many other approaches, we do not require dedicated machines, homogeneous infrastructure or the ability to communicate between nodes. We give special consideration to the robustness of the framework, minimising the loss of effort even after total loss of infrastructure, and allowing easy verification of every step of the distribution process. In contrast to most eScience applications, the input data and the specification of the problem are very small, easily given in a paragraph of text. The unique challenges our framework tackles are the combinatorial explosion of the space of possible solutions and the robustness of long-running computations: not only is the time required to finish the computations unknown, but the resource requirements may also change during the course of the computation. We demonstrate the applicability of our framework by using it to solve a challenging and hitherto open problem in computational mathematics. The results demonstrate that our approach easily scales to computations of a size that would have been impossible to tackle in practice just a decade ago.
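A minimal sketch of the splitting idea on a toy permutation search, illustrative only and not the authors' framework: fixing the first k decisions yields independent work units that can run on any machine, with results written to durable, verifiable files so that lost units can simply be re-issued:

    from itertools import permutations
    import json, pathlib

    def work_units(n, k):
        """Each unit fixes the first k positions of a permutation search."""
        return list(permutations(range(n), k))

    def run_unit(n, prefix, predicate):
        """Exhaust one unit; the output is self-contained and verifiable."""
        rest = [x for x in range(n) if x not in prefix]
        hits = [prefix + tail for tail in permutations(rest)
                if predicate(prefix + tail)]
        return {"prefix": prefix, "hits": hits}

    # Toy problem: permutations of 0..4 with no fixed point (derangements).
    for unit in work_units(5, 2):
        result = run_unit(5, unit, lambda p: all(p[i] != i for i in range(5)))
        path = pathlib.Path(f"unit_{'_'.join(map(str, unit))}.json")
        path.write_text(json.dumps({"prefix": list(result["prefix"]),
                                    "hits": [list(h) for h in result["hits"]]}))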
Tractable Optimization Problems through Hypergraph-Based Structural Restrictions
Gottlob, Georg, Greco, Gianluigi, Scarcello, Francesco
Several variants of the Constraint Satisfaction Problem have been proposed and investigated in the literature for modelling scenarios where solutions are associated with given costs. Within these frameworks, computing an optimal solution is in general an NP-hard problem; yet, when restricted to classes of instances whose constraint interactions can be modelled via (nearly) acyclic graphs, the problem is known to be solvable in polynomial time. In this paper, larger classes of tractable instances are singled out by discussing solution approaches based on exploiting hypergraph acyclicity and, more generally, structural decomposition methods such as (hyper)tree decompositions.
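A sketch of the structural test underlying such tractability results: the classical GYO reduction decides hypergraph (alpha-)acyclicity by repeatedly removing "ear" hyperedges. The implementation below is a straightforward illustration, not taken from the paper:

    def gyo_acyclic(hyperedges):
        """Return True iff the hypergraph reduces to nothing by ear removal."""
        edges = [set(e) for e in hyperedges]
        changed = True
        while changed and edges:
            changed = False
            for i, e in enumerate(edges):
                others = edges[:i] + edges[i+1:]
                # e is an ear if a single other edge covers every vertex of e
                # that also appears elsewhere in the hypergraph.
                shared = {v for v in e if any(v in o for o in others)}
                if not others or any(shared <= o for o in others):
                    edges.pop(i)
                    changed = True
                    break
        return not edges

    print(gyo_acyclic([{1, 2}, {2, 3}, {3, 4}]))   # True: a path is acyclic
    print(gyo_acyclic([{1, 2}, {2, 3}, {1, 3}]))   # False: a triangle is cyclic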