Sampling for Bayesian Program Learning
Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and present empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.
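The core idea behind sampling via random parity constraints can be illustrated with a toy sketch: each random XOR constraint over the solution's bits keeps any given solution with probability 1/2, so adding m constraints carves the solution space into roughly uniform cells, and returning a solution from a random cell approximates uniform sampling. The sketch below brute-forces a tiny solution set rather than calling a symbolic solver, so it is an illustration of the hashing trick, not the paper's algorithm; all names and the toy solution set are assumptions for demonstration.

```python
import random

def sample_via_parity(solutions, n_bits, m, rng=random):
    """Toy sketch: sample near-uniformly from a solution set by adding
    m random XOR (parity) constraints over the solutions' bits."""
    while True:
        # Each constraint is a random subset of bit positions (a mask)
        # plus a random target parity.
        constraints = [(rng.getrandbits(n_bits), rng.getrandbits(1))
                       for _ in range(m)]
        survivors = [s for s in solutions
                     if all(bin(s & mask).count("1") % 2 == parity
                            for mask, parity in constraints)]
        if survivors:
            # In the solver-based setting, the solver would return one
            # satisfying assignment from the constrained cell; here we
            # just take the first survivor.
            return survivors[0]

# Toy "solution space": 6-bit vectors with exactly three bits set.
solutions = [s for s in range(64) if bin(s).count("1") == 3]
sample = sample_via_parity(solutions, n_bits=6, m=3)
```

In the paper's actual setting the parity constraints are handed to the symbolic solver along with the program-synthesis constraints, so no explicit enumeration of the solution set is ever needed.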
Reviews: Sampling for Bayesian Program Learning
I found this paper interesting and well-written, but I have some significant questions and comments about the approach. The paper argues that sampling is useful because we can find the C most frequently sampled programs and show them to a user. As shown in Figure 6, there is more likely to be a correct program in the top 3 programs than in the top 1. But if we want to show the top C programs, do we really need to perform sampling, which the paper says is complicated by the existence of many long and unlikely programs that match the training examples? Why can't we simply find the minimum description length (MDL) program and then run the solver again with length restrictions to find other consistent programs of the same length, or slightly longer lengths?
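The reviewer's proposed alternative can be sketched concretely: enumerate programs in order of increasing length, keep those consistent with the examples, and the shortest ones found are the MDL programs, with slightly longer consistent programs following. The toy DSL below (single-character string edits) and all function names are assumptions made purely for illustration; a real implementation would query a solver with a length bound rather than enumerate.

```python
from itertools import product

# Toy DSL: a program is a sequence of single-character string edits.
OPS = {
    "u": str.upper,
    "l": str.lower,
    "t": str.strip,
    "r": lambda s: s[::-1],
}

def run(program, x):
    """Apply each operation in the program to the input, left to right."""
    for op in program:
        x = OPS[op](x)
    return x

def consistent_programs(examples, max_len):
    """Enumerate programs shortest-first; keep those matching all examples.
    The first programs found have minimum description length."""
    found = []
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            if all(run(prog, x) == y for x, y in examples):
                found.append("".join(prog))
    return found

examples = [("  hello ", "HELLO"), (" a", "A")]
progs = consistent_programs(examples, max_len=3)
```

Here the shortest consistent programs (strip-then-upper and upper-then-strip) appear before any length-3 variants, which is exactly the "MDL program plus slightly longer alternatives" set the reviewer describes.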
AI that can learn the patterns of human language
Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way human investigators do. But researchers at MIT, Cornell University, and McGill University have taken a step in this direction. They have demonstrated an artificial intelligence system that can learn the rules and patterns of human languages on its own. When given words and examples of how those words change to express different grammatical functions (like tense, case, or gender) in one language, this machine-learning model comes up with rules that explain why the forms of those words change. For instance, it might learn that the letter "a" must be added to the end of a word to make the masculine form feminine in Serbo-Croatian. This model can also automatically learn higher-level language patterns that can apply to many languages, enabling it to achieve better results.
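The kind of rule described above, appending a suffix to derive one grammatical form from another, can be illustrated with a minimal sketch. The function name and the rule-induction logic below are assumptions for demonstration, not the researchers' actual model, and the word pairs are simple illustrative adjective pairs.

```python
def induce_suffix_rule(pairs):
    """Induce a single appended suffix from (base, inflected) word pairs,
    e.g. masculine -> feminine forms. Returns the suffix if every pair
    is explained by appending the same suffix, else None."""
    suffixes = set()
    for base, inflected in pairs:
        if not inflected.startswith(base):
            return None          # rule is not a pure suffixation
        suffixes.add(inflected[len(base):])
    return suffixes.pop() if len(suffixes) == 1 else None

# Illustrative (masculine, feminine) adjective pairs.
pairs = [("mlad", "mlada"), ("star", "stara"), ("nov", "nova")]
rule = induce_suffix_rule(pairs)   # "a"
```

A real model would of course entertain many competing rule hypotheses (prefixes, stem changes, phonological conditioning) and score them against the data rather than require a single exact suffix.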
Sampling for Bayesian Program Learning
Ellis, Kevin; Solar-Lezama, Armando; Tenenbaum, Josh
Papers published at the Neural Information Processing Systems Conference.
Bayesian Program Learning: Computers Make a Leap Forward
MIT's scientists claim they can teach a new concept to a computer using a single example rather than thousands. They make use of an algorithm that takes advantage of "Bayesian Program Learning," or BPL. With BPL, a computer creates its own additional examples after being fed data, and then determines which ones best fit the pattern. The researchers behind BPL say they're attempting to recreate the way humans are able to learn a new task after seeing it done once. "The gap between machine learning and human learning capacities remains vast," one of the authors of the research paper, which was published last week in the journal Science, told GeekWire.