Wang, Xingyao
POTATO: The Portable Text Annotation Tool
Pei, Jiaxin, Ananthasubramaniam, Aparna, Wang, Xingyao, Zhou, Naitian, Sargent, Jackson, Dedeloudis, Apostolos, Jurgens, David
We present POTATO, the Portable text annotation tool, a free, fully open-source annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests). Experiments on two annotation tasks suggest that POTATO improves labeling speed through its specially designed productivity features, especially for long documents and complex tasks. POTATO is available at https://github.com/davidjurgens/potato and will continue to be updated.
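To make the "easy-to-configure" claim concrete, below is a minimal sketch, written as a Python dict, of the kind of declarative task configuration such a tool is driven by. The key names are illustrative assumptions for exposition only, not POTATO's documented schema; the linked repository has the actual format.

    # Hypothetical annotation-task configuration (illustrative keys, not POTATO's real schema).
    task_config = {
        "annotation_task_name": "sentiment-pilot",             # hypothetical task name
        "data_files": ["data/reviews.jsonl"],                  # one document per line to label
        "annotation_schemes": [
            {
                "annotation_type": "radio",                    # single-choice labeling template
                "name": "sentiment",
                "labels": ["positive", "neutral", "negative"],
                "sequential_key_binding": True,                # keypress shortcuts 1/2/3
            }
        ],
        "keyword_highlights": {"negation": ["not", "never"]},        # highlight cue words
        "surveyflow": {"pre_annotation": ["surveys/consent.jsonl"]}, # pre-screening questions
    }

The point of such a configuration-driven design is that a deployer specifies the labeling template, shortcuts, and highlights declaratively rather than editing the interface code.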
Towards Scalable Distributed Training of Deep Learning on Public Cloud Clusters
Shi, Shaohuai, Zhou, Xianhao, Song, Shutao, Wang, Xingyao, Zhu, Zilin, Huang, Xue, Jiang, Xinan, Zhou, Feihu, Guo, Zhenyu, Xie, Liqiang, Lan, Rui, Ouyang, Xianbin, Zhang, Yan, Wei, Jieqian, Gong, Jing, Lin, Weiliang, Gao, Ping, Meng, Peng, Xu, Xiaomin, Guo, Chenyang, Yang, Bo, Chen, Zhibo, Wu, Yongjian, Chu, Xiaowen
Distributed training techniques have been widely deployed for training large-scale deep neural networks (DNNs) on dense-GPU clusters. However, on public cloud clusters, the moderate interconnection bandwidth between instances prevents traditional state-of-the-art distributed training systems from scaling well when training large models. In this paper, we propose a new computation- and communication-efficient top-k sparsification communication library for distributed training. To further improve system scalability, we optimize I/O with a simple yet efficient multi-level data caching mechanism and optimize the update operation by introducing a novel parallel tensor operator. Experimental results on a 16-node Tencent Cloud cluster (each node with 8 Nvidia Tesla V100 GPUs) show that our system is 25%-40% faster than existing state-of-the-art systems on CNNs and Transformer. We finally break the record on DAWNBench for training ResNet-50 to 93% top-5 accuracy on ImageNet.
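As a rough illustration of the communication-saving idea, here is a minimal numpy sketch of top-k gradient sparsification with local error feedback; the function and variable names are assumptions for exposition and do not reflect the library's actual API.

    import numpy as np

    def topk_sparsify(grad, k, residual):
        # Keep only the k largest-magnitude entries of (grad + residual);
        # return the sparse values/indices to communicate and the new local
        # residual carried to the next iteration (error feedback).
        corrected = grad + residual
        flat = corrected.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest magnitudes
        values = flat[idx]
        new_residual = corrected.copy()
        new_residual.ravel()[idx] = 0.0                # entries that were sent leave the residual
        return values, idx, new_residual

    # Toy usage: a worker compresses its gradient before exchanging it with peers.
    grad = np.random.randn(1024).astype(np.float32)
    residual = np.zeros_like(grad)
    values, idx, residual = topk_sparsify(grad, k=32, residual=residual)
    # Only `values` and `idx` (32 of 1024 entries) would cross the network each step.

Because only the selected values and their indices are exchanged, the per-step communication volume shrinks dramatically, which is what makes this family of methods attractive when inter-instance bandwidth, rather than compute, is the bottleneck.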