Learning Directly from Grammar Compressed Text
Yoichi Sasaki, Kosuke Akimoto, Takanori Maehara
Neural networks trained on large amounts of text data have been successfully applied to a wide variety of tasks. Although massive text corpora are usually stored in compressed form, using techniques such as grammar compression, almost all previous machine learning methods assume decompressed sequence data as their input. In this paper, we propose a method to apply neural sequence models directly to text compressed with grammar compression algorithms, without decompression. To encode the unique symbols introduced by the compression rules, we introduce composer modules that incrementally encode these symbols into vector representations. Through experiments on real datasets, we empirically show that the proposed model achieves both memory and computational efficiency while maintaining moderate performance.
February 28, 2020
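To make the abstract's core idea concrete: grammar compression (e.g., Re-Pair) rewrites a text into a shorter sequence over an extended alphabet, where each new nonterminal symbol X is defined by a binary rule X → AB. A composer module can then build vector representations for nonterminals bottom-up from their constituents, so the sequence model only ever processes the compressed sequence. The sketch below is a minimal, assumed PyTorch rendering of this idea; the names `Composer` and `CompressedSequenceEncoder` and the rule format are illustrative, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class Composer(nn.Module):
    """Maps the embeddings of a rule's two right-hand-side symbols
    to an embedding for the rule's left-hand-side nonterminal."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([left, right], dim=-1))


class CompressedSequenceEncoder(nn.Module):
    """Runs a sequence model over a grammar-compressed sequence.

    Symbols 0..num_terminals-1 are terminals (ordinary vocabulary items);
    symbols >= num_terminals are nonterminals defined by binary rules
    X -> (A, B), as produced by e.g. Re-Pair compression. (Hypothetical
    layout, assumed for this sketch.)
    """
    def __init__(self, num_terminals: int, dim: int):
        super().__init__()
        self.terminal_emb = nn.Embedding(num_terminals, dim)
        self.composer = Composer(dim)
        self.seq_model = nn.LSTM(dim, dim, batch_first=True)

    def embed_all_symbols(self, rules):
        """`rules` maps nonterminal id -> (left id, right id) and must be
        ordered so a rule's children are defined before the rule itself."""
        table = list(self.terminal_emb.weight)           # terminal vectors
        for nt, (l, r) in rules.items():
            assert nt == len(table)                      # ids are contiguous
            table.append(self.composer(table[l], table[r]))
        return torch.stack(table)                        # (num_symbols, dim)

    def forward(self, compressed_seq: torch.Tensor, rules):
        table = self.embed_all_symbols(rules)
        x = table[compressed_seq].unsqueeze(0)           # (1, seq_len, dim)
        out, _ = self.seq_model(x)
        return out


# Toy usage: the text "abab" compressed with the rule X -> (a, b),
# so the compressed sequence is "X X".
enc = CompressedSequenceEncoder(num_terminals=2, dim=16)
rules = {2: (0, 1)}                                      # symbol 2 = "ab"
out = enc(torch.tensor([2, 2]), rules)
print(out.shape)                                         # torch.Size([1, 2, 16])
```

The key property this sketch illustrates is that the LSTM processes only the compressed sequence (length 2 here rather than 4), while each nonterminal's embedding is computed exactly once and reused, which is the source of the memory and computational savings the abstract claims.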