Thon, Ingo
Grasping Partially Occluded Objects Using Autoencoder-Based Point Cloud Inpainting
Koebler, Alexander, Gross, Ralf, Buettner, Florian, Thon, Ingo
Flexible industrial production systems will play a central role in the future of manufacturing due to increasing product individualization and customization. A key component of such systems is the robotic grasping of known or unknown objects in random positions. Real-world applications come with challenges that are often not considered in grasping solutions tested in simulation or lab settings; the most prominent is partial occlusion of the target object. Occlusion can be caused by supporting structures in the camera's field of view, sensor imprecision, or parts occluding each other due to the production process. In all these cases, the resulting lack of information leads to shortcomings in calculating grasping points. In this paper, we present an algorithm to reconstruct the missing information. Our inpainting solution facilitates the real-world use of robust object-matching approaches for grasping point calculation. We demonstrate its benefit by enabling an existing grasping system, embedded in a real-world industrial application, to handle occlusions in its input, drastically decreasing the number of objects discarded by the process.
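The abstract does not describe the network itself; as a rough illustration only, the sketch below shows what an autoencoder for point cloud inpainting can look like: a PointNet-style encoder compresses the occluded cloud into a global feature, and a decoder regresses a fixed-size completed cloud. All layer sizes, names, and the Chamfer-style loss are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' implementation): an autoencoder that maps a
# partially occluded point cloud to a completed one. Assumes PyTorch; all layer
# sizes, names, and the Chamfer-style loss are illustrative choices.
import torch
import torch.nn as nn

class PointCloudInpainter(nn.Module):
    def __init__(self, num_points=1024, latent_dim=256):
        super().__init__()
        self.num_points = num_points
        # PointNet-style encoder: per-point MLP followed by a global max-pool.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Decoder: latent code -> completed (num_points x 3) point set.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, partial):                          # partial: (B, N, 3)
        feats = self.encoder(partial.transpose(1, 2))    # (B, latent_dim, N)
        code = feats.max(dim=2).values                   # global feature (B, latent_dim)
        out = self.decoder(code)                         # (B, num_points * 3)
        return out.view(-1, self.num_points, 3)

def chamfer_loss(pred, target):
    # Symmetric Chamfer distance between predicted and ground-truth clouds.
    d = torch.cdist(pred, target)                        # (B, Np, Nt)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```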
Inducing Probabilistic Relational Rules from Probabilistic Examples
De Raedt, Luc (KU Leuven) | Dries, Anton (KU Leuven) | Thon, Ingo (KU Leuven) | Van den Broeck, Guy (KU Leuven) | Verbeke, Mathias (KU Leuven)
We study the problem of inducing logic programs in a probabilistic setting, in which both the example descriptions and their classification can be probabilistic. The setting is incorporated in the probabilistic rule learner ProbFOIL+, which combines principles of the rule learner FOIL with ProbLog, a probabilistic Prolog. We illustrate the approach by applying it to the knowledge base of NELL, the Never-Ending Language Learner.
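As a toy illustration of the setting (not the ProbFOIL+ implementation), the sketch below scores a candidate rule against probabilistic examples by generalizing true- and false-positive counts to expected counts, in the spirit of ProbFOIL-style rule scoring; the exact definitions and the m-estimate are given in the paper, and all constants and example values here are made up.

```python
# Toy sketch (illustrative, not the ProbFOIL+ implementation): scoring a candidate
# rule against probabilistic examples. Each example has a target probability p and
# is predicted by the rule with probability q; true/false positives become expected counts.
def score_rule(examples):
    """examples: list of (p_target, q_predicted) pairs with values in [0, 1]."""
    tp = sum(min(p, q) for p, q in examples)        # expected true positives
    fp = sum(max(0.0, q - p) for p, q in examples)  # expected false positives
    m, prior = 1.0, 0.5                             # m-estimate smoothing (illustrative)
    return (tp + m * prior) / (tp + fp + m)         # m-estimate of precision

# NELL-style example: how strongly do we believe the target relation holds for
# each candidate pair, versus what the rule body predicts?
examples = [(0.95, 0.90), (0.80, 0.40), (0.05, 0.30)]
print(score_rule(examples))
```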
Inference and learning in probabilistic logic programs using weighted Boolean formulas
Fierens, Daan, Van den Broeck, Guy, Renkens, Joris, Shterionov, Dimitar, Gutmann, Bernd, Thon, Ingo, Janssens, Gerda, De Raedt, Luc
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several of these tasks, such as computing marginals given evidence and learning from (partial) interpretations, had not previously been addressed for probabilistic logic programs. The first contribution of this paper is a suite of efficient algorithms for various inference tasks, based on a conversion of the program, the queries, and the evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied problems such as weighted model counting, which can be solved using state-of-the-art methods from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs Expectation Maximization and is built on top of the developed inference algorithms. The proposed approach is evaluated experimentally; the results show that the inference algorithms improve upon the state of the art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
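The weighted-Boolean-formula pipeline described here underlies the ProbLog 2 system. As a usage sketch, assuming the `problog` Python package is installed (pip install problog) and using a toy model, computing marginals given evidence looks roughly like this:

```python
# Sketch: marginals given evidence with the ProbLog 2 system, which implements
# the weighted-Boolean-formula pipeline described above. The model is a toy example.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.1::burglary.
0.2::earthquake.
0.9::alarm :- burglary.
0.8::alarm :- earthquake.

evidence(alarm, true).
query(burglary).
query(earthquake).
""")

# Internally: ground the program, convert program + queries + evidence to a
# weighted Boolean formula, compile it, and evaluate the weighted model counts.
result = get_evaluatable().create_from(model).evaluate()
for query, probability in result.items():
    print(query, probability)
```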
Inference in Probabilistic Logic Programs using Weighted CNF's
Fierens, Daan, Van den Broeck, Guy, Thon, Ingo, Gutmann, Bernd, De Raedt, Luc
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received much attention for this formalism. The contribution of this paper is a set of efficient inference algorithms for these tasks, based on a conversion of the probabilistic logic program, the query, and the evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied problems such as weighted model counting, which we solve using state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state of the art in probabilistic logic programming.
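To make the reduction concrete, the following toy sketch computes a conditional marginal as a ratio of weighted model counts over a hand-built weighted CNF. Enumerating all assignments is exponential and only stands in for the state-of-the-art model counters used in the paper; the variables, clauses, and weights are illustrative, not the paper's encoding.

```python
# Minimal sketch of the reduction: inference as weighted model counting over a CNF.
from itertools import product

def wmc(cnf, weights, variables, fixed=None):
    """Sum of weights of assignments satisfying cnf (a list of clauses; each clause
    is a list of (var, polarity) literals). weights[var] = (w_false, w_true)."""
    fixed = fixed or {}
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if any(a[v] != val for v, val in fixed.items()):
            continue
        if all(any(a[v] == pol for v, pol in clause) for clause in cnf):
            w = 1.0
            for v in variables:
                w *= weights[v][1] if a[v] else weights[v][0]
            total += w
    return total

# Toy weighted CNF for the program: 0.1::b.  0.3::e.  alarm :- b.  alarm :- e.
variables = ["b", "e", "alarm"]
weights = {"b": (0.9, 0.1), "e": (0.7, 0.3), "alarm": (1.0, 1.0)}
cnf = [[("alarm", False), ("b", True), ("e", True)],   # alarm -> b or e
       [("b", False), ("alarm", True)],                 # b -> alarm
       [("e", False), ("alarm", True)]]                 # e -> alarm

# Conditional marginal: P(b | alarm) = WMC(CNF, b=True, alarm=True) / WMC(CNF, alarm=True)
p = (wmc(cnf, weights, variables, {"b": True, "alarm": True})
     / wmc(cnf, weights, variables, {"alarm": True}))
print(p)
```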
DTProbLog: A Decision-Theoretic Probabilistic Prolog
Van den Broeck, Guy (Katholieke Universiteit Leuven) | Thon, Ingo (Katholieke Universiteit Leuven) | van Otterlo, Martijn (Katholieke Universiteit Leuven) | De Raedt, Luc (Katholieke Universiteit Leuven)
We introduce DTProbLog, a decision-theoretic extension of Prolog and its probabilistic variant ProbLog. DTProbLog is a simple yet expressive probabilistic programming language that allows the modeling of a wide variety of domains, such as viral marketing. In DTProbLog, the utility of a strategy (a particular choice of actions) is defined as the expected reward of its execution in the presence of probabilistic effects. The key contribution of this paper is the introduction of exact as well as approximate solvers that compute the optimal strategy for a DTProbLog program and the decision problem it represents, making use of binary and algebraic decision diagrams. We also report experimental results that show the effectiveness and the practical usefulness of the approach.
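As a minimal illustration of the semantics (not the DTProbLog solvers, which use binary and algebraic decision diagrams), the sketch below enumerates all strategies of a toy viral-marketing-style problem and picks the one with the highest expected utility. All names, probabilities, and utilities are made up, and the probabilistic outcomes are assumed independent so that expectations simply add up.

```python
# Toy sketch of the DTProbLog semantics: the utility of a strategy is its expected
# reward, and the optimal strategy maximises it. Brute-force enumeration stands in
# for the exact and approximate decision-diagram-based solvers of the paper.
from itertools import product

PEOPLE = ["anna", "bob"]
MARKET_COST = -2.0        # utility of choosing the decision fact market(P)
BUY_REWARD = 5.0          # utility of the probabilistic outcome buys(P)
P_BUY_IF_MARKETED = 0.5   # probabilistic fact: a marketed person buys

def expected_utility(strategy):
    """strategy: set of people we decide to market to."""
    utility = MARKET_COST * len(strategy)
    # buys(P) events are independent in this toy model, so expectations add up.
    utility += sum(P_BUY_IF_MARKETED * BUY_REWARD for _ in strategy)
    return utility

# Enumerate all strategies (all truth assignments to the decision facts).
best = max(
    (set(p for p, chosen in zip(PEOPLE, bits) if chosen)
     for bits in product([False, True], repeat=len(PEOPLE))),
    key=expected_utility,
)
print(best, expected_utility(best))
```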