Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS)
Asai, Masataro, Muise, Christian
–arXiv.org Artificial Intelligence
We achieved a new milestone in the difficult task of enabling agents to learn about their environment autonomously. Our neuro-symbolic architecture is trained end-to-end to produce a succinct and effective discrete state transition model from images alone. Our target representation (the Planning Domain Definition Language) is already in a form that off-the-shelf solvers can consume, and opens the door to the rich array of modern heuristic search capabilities. We demonstrate how the sophisticated innate prior we place on the learning process significantly reduces the complexity of the learned representation, and reveals a connection to the graph-theoretic notion of cube-like graphs.

E.g., its search space was shown to be compatible with symbolic Goal Recognition [Amado et al., 2018]. One major drawback of the previous work was that it used a non-descriptive, black-box neural model as the successor generator. Not only is such a black-box model incompatible with existing heuristic search techniques, but, because a neural network can model a very complex function, its direct translation into a compact logical formula via a rule-based transfer learning method turned out to be futile [Asai, 2019a]: the model complexity causes an exponentially large grounded action model that cannot be processed by modern classical planners. Thus, obtaining descriptive action models from raw observations with minimal human interference is the next key milestone for expanding the scope of Automated Planning to raw unstructured inputs.
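To make the target representation concrete, the following is a minimal, hypothetical Python sketch (not the authors' code) of how a learned discrete transition over latent propositions could be serialized as a PDDL action that off-the-shelf classical planners consume. The action name, latent bit names (z3, z7), and the StripsAction helper are illustrative assumptions.

```python
# Illustrative sketch: a STRIPS-style action over learned latent propositions,
# serialized into PDDL text. Names and bit indices are hypothetical.
from dataclasses import dataclass, field

@dataclass
class StripsAction:
    name: str
    pos_pre: set = field(default_factory=set)   # propositions required to be true
    neg_pre: set = field(default_factory=set)   # propositions required to be false
    add: set = field(default_factory=set)       # propositions made true by the action
    delete: set = field(default_factory=set)    # propositions made false by the action

    def to_pddl(self) -> str:
        pre = [f"({p})" for p in sorted(self.pos_pre)] + \
              [f"(not ({p}))" for p in sorted(self.neg_pre)]
        eff = [f"({p})" for p in sorted(self.add)] + \
              [f"(not ({p}))" for p in sorted(self.delete)]
        return (f"(:action {self.name}\n"
                f"  :precondition (and {' '.join(pre)})\n"
                f"  :effect (and {' '.join(eff)}))")

# A hypothetical action over latent bits extracted from images.
a = StripsAction(name="a42", pos_pre={"z3"}, neg_pre={"z7"},
                 add={"z7"}, delete={"z3"})
print(a.to_pddl())
```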
Aug-11-2020