Goto

Collaborating Authors

 Dechter, Eyal


Dimensionality Reduction via Program Induction

AAAI Conferences

How can techniques drawn from machine learning be applied to the learning of structured, compositional representations? In this work, we adopt functional programs as our representation, and cast the problem of learning symbolic representations as a symbolic analog of dimensionality reduction. By placing program synthesis within a probabilistic machine learning framework, we are able to model the learning of some English inflectional morphology and solve a set of synthetic regression problems.
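
The sketch below is only an illustrative reading of that idea, not the paper's system: just as PCA explains many observations with one shared basis plus small per-item codes, it scores a few hand-picked candidate programs by how cheaply they reconstruct a toy dataset from small latent inputs. The program space, the description_length scoring, and the data are invented for this illustration; the paper searches over functional programs within a probabilistic framework rather than using this brute-force MDL stand-in.

```python
# Illustrative sketch only -- not the authors' implementation.
# Idea: a symbolic analog of dimensionality reduction. Instead of finding a
# linear basis (as in PCA), search a small space of candidate programs f and
# per-example latent inputs z, preferring the (f, z) that reconstructs the
# data with the shortest total description length.

import math

# Toy regression data: each observation is a single number.
data = [2.0, 4.0, 6.0, 8.0]

# Hypothetical program space: each candidate maps a latent code z to an output.
# In the paper this space is functional programs; here it is a hand-picked list.
candidate_programs = {
    "identity": lambda z: z,
    "double":   lambda z: 2.0 * z,
    "square":   lambda z: z * z,
}

def description_length(name, f, data):
    """MDL-style score: program size + cost of latent codes + reconstruction error."""
    total = len(name)                             # stand-in for program length
    for y in data:
        # Pick the best integer latent code in a small range (brute force).
        residual, z = min((abs(f(z) - y), z) for z in range(-10, 11))
        total += math.log2(21)                    # bits to encode z in [-10, 10]
        total += 10.0 * residual                  # penalty for imperfect reconstruction
    return total

best_name, best_f = min(candidate_programs.items(),
                        key=lambda kv: description_length(kv[0], kv[1], data))
print("best shared program:", best_name)          # expected: "double"
```

On this toy data, "double" wins because it reconstructs every observation exactly from small latent codes while paying less for program length than the alternatives, which is the sense in which a shared program plays the role of a low-dimensional basis.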


Latent Predicate Networks: Concept Learning with Probabilistic Context-Sensitive Grammars

AAAI Conferences

For humans, learning abstract concepts and learning language go hand in hand: we acquire abstract knowledge primarily through linguistic experience, and acquiring abstract concepts is a crucial step in learning the meanings of linguistic expressions. Number knowledge is a case in point: we largely acquire concepts such as seventy-three through linguistic means, and we can only know what the sentence "seventy-three is more than twice as big as thirty-one" means if we can grasp the meanings of its component number words. How do we begin to solve this problem? One approach is to estimate the distribution from which sentences are drawn, and, in doing so, infer the latent concepts and relationships that best explain those sentences. We present early work on a learning framework called Latent Predicate Networks (LPNs), which learns concepts by inferring the parameters of probabilistic context-sensitive grammars over sentences. We show that for a small fragment of sentences expressing relationships between English number words, we can use hierarchical Bayesian inference to learn grammars that can answer simple queries about previously unseen relationships within this domain. These generalizations demonstrate LPNs' promise as a tool for learning and representing conceptual knowledge in language.
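
As a rough illustration of the kind of generalization described above, and not the paper's LPN machinery, the sketch below treats sentences of the form "X is more than Y" as observations of a latent relation and answers unseen queries through its transitive consequences. In the actual framework the relation is encoded in the productions of a probabilistic context-sensitive grammar and the generalization comes from hierarchical Bayesian inference over its parameters; the observed sentences, the query function, and the 0.5 prior here are assumptions made up for this toy.

```python
# Illustrative sketch only -- a drastically simplified stand-in for LPNs.
# Observed "X is more than Y" sentences induce a latent relation whose
# transitive consequences answer previously unseen queries.

from itertools import product

observed = [
    ("seventy-three", "thirty-one"),   # "seventy-three is more than thirty-one"
    ("thirty-one", "twelve"),
    ("twelve", "seven"),
]

# Latent relation: start from the observed pairs and take the transitive closure.
more_than = set(observed)
changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(more_than), repeat=2):
        if b == c and (a, d) not in more_than:
            more_than.add((a, d))
            changed = True

def query(x, y, prior=0.5):
    """Crude belief that x > y: certain if entailed by the data, prior otherwise."""
    if (x, y) in more_than:
        return 1.0
    if (y, x) in more_than:
        return 0.0
    return prior

# An unseen relationship: never stated directly, but entailed by the observations.
print(query("seventy-three", "seven"))   # -> 1.0
```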