lo-shot
Can Humans Do Less-Than-One-Shot Learning?
Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths
Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly *how* small? In this paper, we introduce a novel experimental paradigm that allows us to examine classification in an extremely data-scarce setting, asking whether humans can learn more categories than they have exemplars (i.e., can humans do "less-than-one shot" learning?). An experiment conducted using this paradigm reveals that people are capable of learning in such settings, and provides several insights into underlying mechanisms. First, people can accurately infer and represent high-dimensional feature spaces from very little data. Second, having inferred the relevant spaces, people use a form of prototype-based categorization (as opposed to exemplar-based) to make categorical inferences. Finally, systematic, machine-learnable patterns in responses indicate that people may have efficient inductive biases for dealing with this class of data-scarce problems.
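The prototype-versus-exemplar distinction the abstract draws can be sketched in a few lines. This is an illustrative toy model, not the authors' experimental code: the 2-D feature space, the category names, and the Euclidean distance metric are all assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical categories in a 2-D feature space (illustrative data).
cat_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(5, 2))
cat_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(5, 2))
data = {"A": cat_a, "B": cat_b}

def prototype_classify(x, exemplars_by_cat):
    """Prototype model: compare x to each category's mean (its prototype)."""
    prototypes = {c: ex.mean(axis=0) for c, ex in exemplars_by_cat.items()}
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

def exemplar_classify(x, exemplars_by_cat):
    """Exemplar model: compare x to every stored example (nearest neighbor)."""
    return min(exemplars_by_cat,
               key=lambda c: np.min(np.linalg.norm(exemplars_by_cat[c] - x, axis=1)))

query = np.array([0.4, 0.2])
print(prototype_classify(query, data))  # "A"
print(exemplar_classify(query, data))   # "A"
```

The two models agree on easy queries like this one; they come apart near category boundaries, where a single stored exemplar can pull the exemplar model's decision while the prototype model responds only to the category mean — the kind of divergence the paper's paradigm is designed to detect.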
How to Make Artificial Intelligence More Democratic
This year, GPT-3, a large language model capable of understanding text, responding to questions and generating new writing examples, has drawn international media attention. The model, released by OpenAI, a California-based nonprofit that builds general-purpose artificial intelligence systems, has an impressive ability to mimic human writing, but just as notable is its massive size. To build it, researchers assembled a network with 175 billion parameters (the adjustable values a model tunes during training) and more than 45 terabytes of text from Common Crawl, Reddit, Wikipedia and other sources, then trained it in a process that occupied hundreds of processing units for thousands of hours. GPT-3 demonstrates a broader trend in artificial intelligence. Deep learning, which has in recent years become the dominant technique for creating new AIs, uses enormous amounts of data and computing power to fuel complex, accurate models.
Doing the impossible? Machine learning with less than one example - KDnuggets
"Less-than-one-shot learning" enables machine learning algorithms to classify N labels with less than N training examples. If I told you to imagine something between a horse and a bird--say, a flying horse--would you need to see a concrete example? Such a creature does not exist, but nothing prevents us from using our imagination to create one: the Pegasus. The human mind has all kinds of mechanisms to create new concepts by combining abstract and concrete knowledge it has of the real world. We can imagine existing things that we might have never seen (a horse with a long neck--a giraffe), as well as things that do not exist in real life (a winged serpent that breathes fire--a dragon).
Researchers Demonstrate Less-than-One Shot Machine Learning
We're accustomed to thinking that bigger is better in machine learning. If 10 samples are good, then 100 samples must be even better. However, researchers from the University of Waterloo recently demonstrated the feasibility of "less-than-one-shot" learning: a model that can learn to identify something even if it has never seen an example of it. In their September paper, titled "'Less Than One'-Shot Learning: Learning N Classes From M < N Samples," researchers Ilia Sucholutsky and Matthias Schonlau explain how they created a machine learning model that can learn to classify something when trained with less than one example per class. For example, consider an alien zoologist who lands on earth and is instructed to capture a unicorn.
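The trick that makes M < N possible is *soft* labels: each training example carries a probability distribution over classes rather than a single hard label, so two points can jointly encode information about three classes. Below is a minimal sketch of that idea with made-up numbers, using a simple inverse-distance weighting as a stand-in for the paper's soft-label nearest-neighbor classifier.

```python
import numpy as np

# Two training points on a line, each with a soft label over THREE classes --
# more classes (3) than examples (2). The numbers are illustrative,
# not taken from the paper.
points = np.array([0.0, 1.0])
soft_labels = np.array([
    [0.6, 0.0, 0.4],  # point at x=0: mostly class 0, partly class 2
    [0.0, 0.6, 0.4],  # point at x=1: mostly class 1, partly class 2
])

def classify(x):
    """Blend the soft labels with inverse-distance weights, then take argmax."""
    d = np.abs(points - x)
    w = 1.0 / np.maximum(d, 1e-9)  # guard against division by zero
    w = w / w.sum()
    return int(np.argmax(w @ soft_labels))

print([classify(x) for x in (0.1, 0.5, 0.9)])  # [0, 2, 1]
```

Near either training point its own soft label dominates (classes 0 and 1), but in the middle the two distributions average to (0.3, 0.3, 0.4), so class 2 wins — a third class "learned" with zero dedicated examples.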
A radical new technique lets AI learn with practically no data
Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive--and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life. In fact, children sometimes don't need any examples to identify something.
Machine learning with less than one example
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.