- North America > United States > California (0.04)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Information Technology > Security & Privacy (1.00)
- Law (0.68)
- North America > United States > Michigan (0.04)
- North America > United States > California > Santa Cruz County > Santa Cruz (0.04)
- Asia (0.04)
- Research Report (0.67)
- Overview (0.45)
- Information Technology > Security & Privacy (1.00)
- Law (0.67)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > Santa Clara County > Mountain View (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > Vietnam > Hanoi > Hanoi (0.04)
The big AI job swap: why white-collar workers are ditching their careers
Have you retrained or moved careers because your previous career path was at risk of an artificial intelligence takeover? Please include as much detail as possible. Did you have a dream profession that you decided not to pursue for fear it would be thwarted by AI? (Optional)
- Europe > United Kingdom (0.14)
- Europe > Sweden > Skåne County > Malmö (0.04)
- Oceania > Australia (0.04)
- (4 more...)
- Education (1.00)
- Banking & Finance (0.94)
- Leisure & Entertainment > Sports (0.68)
- (2 more...)
- Information Technology > Communications > Social Media (0.95)
- Information Technology > Artificial Intelligence > Robots (0.68)
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking
Patrick Chen, Si Si, Yang Li, Ciprian Chelba, Cho-Jui Hsieh
For problems with a very large vocabulary, the embedding and softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves strong performance on the One-Billion-Word (OBW) dataset with a vocabulary of around 800k words, but its word embedding and softmax matrices occupy more than 6 GB of space and account for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models based on block-wise (vocabulary-partition) low-rank matrix approximation, which exploits the inherent frequency distribution of tokens (the power-law distribution of words).
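The core idea above — partition the vocabulary by token frequency and spend more approximation rank on frequent blocks than on rare ones — can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the block count, the rank schedule, and the synthetic Zipf-distributed frequencies are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64
embedding = rng.standard_normal((vocab, dim))   # stand-in embedding matrix
freq = rng.zipf(1.5, vocab)                     # power-law token frequencies
order = np.argsort(-freq)                       # most frequent tokens first

num_blocks, base_rank = 4, 4
blocks = np.array_split(order, num_blocks)      # vocabulary partition

compressed = np.zeros_like(embedding)
for i, rows in enumerate(blocks):
    # Frequent blocks get a higher rank, rare blocks a lower one.
    rank = base_rank * (num_blocks - i)
    U, S, Vt = np.linalg.svd(embedding[rows], full_matrices=False)
    compressed[rows] = (U[:, :rank] * S[:rank]) @ Vt[:rank]

# Relative reconstruction error of the block-wise low-rank approximation.
err = np.linalg.norm(embedding - compressed) / np.linalg.norm(embedding)
print(round(err, 3))
```

Storage drops because each block keeps only its rank-`r` factors (`U[:, :r] * S[:r]` and `Vt[:r]`) instead of the full dense rows; the frequency-aware rank schedule concentrates the error on rarely used tokens.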