Study shows how large language models like GPT-3 can learn a new task from just a few examples
Large language models like OpenAI's GPT-3 are massive neural networks that can generate human-like text, from poetry to programming code. Trained on troves of internet data, these machine-learning models take a small bit of input text and then predict the text that is likely to come next.

But that's not all these models can do. Researchers are exploring a curious phenomenon known as in-context learning, in which a large language model learns to accomplish a task after seeing only a few examples, even though it was never trained for that task.

For instance, someone could feed the model several example sentences along with their sentiments (positive or negative), then prompt it with a new sentence, and the model can supply the correct sentiment.
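To make the sentiment example concrete, here is a minimal sketch in Python of what such a few-shot prompt looks like. The sentences, labels, and query below are hypothetical stand-ins, and the assembled string would be sent unchanged to a completion-style model such as GPT-3, which is expected to continue it with the matching label.

```python
# A minimal sketch of the few-shot sentiment prompt described above.
# The sentences, labels, and query are hypothetical stand-ins; any
# completion-style language model could be handed this string and asked
# to predict the text that comes next.

examples = [
    ("I loved this movie; the acting was superb.", "positive"),
    ("The food was cold and the service was slow.", "negative"),
    ("What a delightful surprise this book turned out to be!", "positive"),
]
query = "The plot dragged on and I nearly fell asleep."

# In-context learning: the task is specified entirely inside the prompt.
# The model's weights are never updated; it must infer the pattern
# "Sentence -> Sentiment" from the examples alone.
prompt = "".join(
    f"Sentence: {sentence}\nSentiment: {label}\n\n"
    for sentence, label in examples
)
prompt += f"Sentence: {query}\nSentiment:"

print(prompt)
# A model that has picked up the task in context should continue the
# prompt with the label "negative".
```

The point of the sketch is that nothing task-specific is trained: the same frozen model that completes poetry completes this classification prompt, simply because the few examples establish the pattern to follow.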