We consider the problem of learning generalized first-order representations of concepts from a single example. To address this challenging problem, we augment an inductive logic programming learner with two novel algorithmic contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and its generalization. Second, we leverage richer human inputs in the form of advice to improve the sample efficiency of learning. We prove that the proposed distance measure is semantically valid and use it to derive a PAC bound. Our experimental analysis on diverse concept learning tasks demonstrates both the effectiveness and efficiency of the proposed approach over a first-order concept learner using only examples.
One of the principal problems of philosophy has been to explain how this accumulation of observations can be used to fill in the gaps in our knowledge, particularly of the future. Without such an ability, rationality, which requires the prediction of the outcome of our actions, would be impossible. In AI, the problem is doubly acute: not only do we desire to understand the process for its own sake, but also without such an understanding we cannot build machines that learn. The basic answer to the problem is that we come to believe in some generally applicable rules (universals) by a process of induction from prior instances of their application; we then apply these rules in situations of incomplete knowledge using deduction.
In this paper, the concept of alternative programming is defined by analogy with alternative medicine. It is a method of programming in which natural data examples (e.g., verbal descriptions of some model objects) are aligned, generalized, and composed into an algorithm definition instead of relying on artificial, previously created formalisms. We analyze the alternative components of some existing programming methods and describe a specific method in which the use of artificial formalisms is reduced to a minimum. The logic programming language Sampletalk is described as a supporting tool for the method. The advantages and disadvantages of alternative programming are discussed in a knowledge representation context.
Many researchers have suggested that the psychological complexity of a concept is related to the length of its representation in a language of thought. As yet, however, there are few concrete proposals about the nature of this language. This paper makes one such proposal: the language of thought allows first-order quantification (quantification over objects) more readily than second-order quantification (quantification over features). To support this proposal we present behavioral results from a concept learning study inspired by the work of Shepard, Hovland and Jenkins. Humans can learn and think about many kinds of concepts, including natural kinds such as elephant and water and nominal kinds such as grandmother and prime number.
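The first-order/second-order distinction drawn above can be made concrete with a small sketch. The domain, features, and concepts below are invented for illustration and are not taken from the study; the point is only that a first-order concept binds a variable ranging over objects, while a second-order concept binds a variable ranging over the features themselves.

```python
# Hypothetical stimuli: each object is a dict of boolean features.
objects = [
    {"round": True,  "red": True,  "large": False},
    {"round": True,  "red": False, "large": True},
    {"round": False, "red": True,  "large": True},
]
features = ["round", "red", "large"]

# First-order concept: "some object is both round and red"
# (the quantifier binds an object variable x).
first_order = any(x["round"] and x["red"] for x in objects)

# Second-order concept: "some feature holds of every object"
# (the quantifier binds a feature variable F).
second_order = any(all(x[f] for x in objects) for f in features)

print(first_order)   # True: the first object is both round and red
print(second_order)  # False: no single feature holds of all three objects
```

On this toy domain the first-order concept is satisfied while the second-order one is not, which mirrors the claim that the two kinds of quantification are genuinely different resources for expressing a concept.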
We present a heuristic-based algorithm to induce non-monotonic logic programs that explain the behavior of XGBoost-trained classifiers. We use the LIME technique to locally select the most important features contributing to the classification decision. Then, in order to explain the model's global behavior, we propose UFOLD, a heuristic-based ILP algorithm capable of learning non-monotonic logic programs, which we apply to a transformed dataset produced by LIME. Our experiments with standard UCI benchmarks suggest a significant improvement in terms of classification evaluation metrics. Meanwhile, the number of induced rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system. While the proposed approach is agnostic to the choice of ILP algorithm, our experiments suggest that the UFOLD algorithm almost always outperforms ALEPH once incorporated in this approach.
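The pipeline described above can be sketched end-to-end with standard-library Python only. In the real system the black box is an XGBoost model and local feature importance comes from the LIME library; here a trivial linear scorer stands in for the model, and a crude leave-one-feature-out perturbation stands in for LIME's local surrogate. UFOLD itself is not implemented; the sketch only shows the dataset transformation that the ILP learner would consume. All names below are illustrative, not the paper's.

```python
def black_box(instance):
    """Stand-in classifier: positive iff the weighted feature sum exceeds 1."""
    weights = {"a": 1.0, "b": 0.6, "c": 0.1}
    return sum(weights[f] * v for f, v in instance.items()) > 1.0

def top_k_features(instance, k=2):
    """Crude local importance: how much does zeroing out a feature change
    the score? (A stand-in for LIME's local surrogate models.)"""
    weights = {"a": 1.0, "b": 0.6, "c": 0.1}
    base = sum(weights[f] * v for f, v in instance.items())
    impact = {f: abs(base - sum(weights[g] * (0 if g == f else v)
                                for g, v in instance.items()))
              for f in instance}
    return sorted(impact, key=impact.get, reverse=True)[:k]

# Transform each instance into facts over its locally important features,
# labeled by the black box's prediction; an ILP learner (UFOLD or ALEPH in
# the paper's setting) would then induce rules from these labeled facts.
data = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 1}]
transformed = [
    {"label": black_box(x), "facts": {f: x[f] for f in top_k_features(x)}}
    for x in data
]
print(transformed)
```

The design point is that the ILP learner never sees the raw feature space: each training example is reduced to the handful of features the local explainer deemed relevant, which is what keeps the induced rule set small.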