Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language
arXiv.org Artificial Intelligence
A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighted by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.
Sep-29-2023
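The re-weighting step the abstract describes can be sketched as ordinary Bayesian model averaging over candidate hypotheses. The snippet below is a minimal illustration, not the paper's implementation: the hypothesis names, prior values, and the size-principle likelihood (each example drawn uniformly from a hypothesis's extension) are all assumptions chosen to show the mechanics on a number-concept task.

```python
# Hedged sketch: re-weight candidate hypotheses by prior x likelihood.
# All hypotheses, priors, and the size-principle likelihood here are
# illustrative assumptions, not the paper's exact method.

def posterior_over_hypotheses(data, hypotheses, priors, domain):
    """Apply Bayes' rule over a fixed set of candidate hypotheses."""
    weights = {}
    for name, member in hypotheses.items():
        extension = [x for x in domain if member(x)]
        if extension and all(member(x) for x in data):
            # Size principle: each example sampled uniformly from the extension,
            # so smaller (more specific) consistent hypotheses score higher.
            likelihood = (1.0 / len(extension)) ** len(data)
        else:
            likelihood = 0.0  # hypothesis inconsistent with the data
        weights[name] = priors[name] * likelihood
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()} if total else weights

# Toy number-concept problem over 1..100 (hypothetical candidates).
domain = range(1, 101)
hypotheses = {
    "powers_of_two":  lambda x: x & (x - 1) == 0,
    "even":           lambda x: x % 2 == 0,
    "multiples_of_4": lambda x: x % 4 == 0,
}
priors = {"powers_of_two": 0.3, "even": 0.5, "multiples_of_4": 0.2}

post = posterior_over_hypotheses([2, 8, 16], hypotheses, priors, domain)
```

With the examples {2, 8, 16}, the size principle favors the narrow consistent hypothesis "powers of two" over the broad "even numbers", while "multiples of 4" is ruled out entirely because 2 falls outside its extension. In the paper's setting, the candidate set would come from a language model's natural-language proposals rather than a hand-written list.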