Shrinking massive neural networks used to model language
You don't need a sledgehammer to crack a nut. Jonathan Frankle is researching artificial intelligence -- not noshing pistachios -- but the same philosophy applies to his "lottery ticket hypothesis." It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those "lucky" subnetworks, dubbed winning lottery tickets. In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots.