Xu, Jiayi
MultiBalance: Multi-Objective Gradient Balancing in Industrial-Scale Multi-Task Recommendation System
He, Yun, Chen, Xuxing, Xu, Jiayi, Cai, Renqin, You, Yiling, Cao, Jennifer, Huang, Minhui, Yang, Liu, Liu, Yiqun, Liu, Xiaoyi, Jin, Rong, Park, Sem, Long, Bo, Feng, Xue
In industrial recommendation systems, multi-task learning (learning multiple tasks simultaneously on a single model) is a predominant approach to save training/serving resources and improve recommendation performance via knowledge transfer between the jointly learned tasks. However, multi-task learning often suffers from negative transfer: one or several tasks are optimized less well than if they were trained separately. To carefully balance the optimization, we propose a gradient balancing approach called MultiBalance, which is suitable for industrial-scale multi-task recommendation systems. It balances the per-task gradients to alleviate negative transfer, while avoiding the huge cost of grid search or manual exploration for appropriate task weights. Moreover, compared with prior work that typically balances the per-task gradients of the shared parameters, MultiBalance is more efficient since it only requires access to the per-task gradients with respect to the shared feature representations. We conduct experiments on Meta's large-scale ads and feeds multi-task recommendation system and observe that MultiBalance achieves significant gains (e.g., a 0.738% improvement in normalized entropy (NE)) with neutral training cost in queries per second (QPS), whereas prior methods that balance the per-task gradients of the shared parameters incur a 70-80% QPS degradation.
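The efficiency argument above hinges on taking per-task gradients with respect to the shared feature representation (a single tensor) rather than with respect to every shared parameter. The following PyTorch-style sketch illustrates that idea; the function name, the norm-equalizing balancing rule, and the single-backward update are assumptions made for illustration, not the paper's exact algorithm.

```python
import torch

def multibalance_step(shared_repr, task_losses, eps=1e-12):
    """Illustrative sketch, not the paper's exact update rule: derive task
    weights from per-task gradients w.r.t. the shared feature representation,
    which avoids differentiating every shared parameter once per task."""
    # Per-task gradients w.r.t. the single shared representation tensor.
    grads = [torch.autograd.grad(loss, shared_repr, retain_graph=True)[0]
             for loss in task_losses]

    # Toy balancing rule: equalize the representation-gradient norms.
    norms = torch.stack([g.norm() for g in grads])
    weights = (norms.mean() / (norms + eps)).detach()

    # Single backward pass through the shared trunk with balanced task weights.
    total = sum(w * loss for w, loss in zip(weights, task_losses))
    total.backward()
    return weights
```

In a full training step the task-specific heads still receive gradients from the final backward call, since each loss depends on its own head; only the balancing statistics are computed at the shared representation.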
Weaver: Foundation Models for Creative Writing
Wang, Tiannan, Chen, Jiamin, Jia, Qingrui, Wang, Shuai, Fang, Ruoyu, Wang, Huilin, Gao, Zhaowei, Xie, Chunzhao, Xu, Chuou, Dai, Jihong, Liu, Yibin, Wu, Jialong, Ding, Shengwei, Li, Long, Huang, Zhiwei, Deng, Xinle, Yu, Teng, Ma, Gangan, Xiao, Han, Chen, Zixin, Xiang, Danjun, Wang, Yunxia, Zhu, Yuanyuan, Xiao, Yi, Wang, Jing, Wang, Yiru, Ding, Siran, Huang, Jiayang, Xu, Jiayi, Tayier, Yilihamu, Hu, Zhenyu, Gao, Yuan, Zheng, Chengfeng, Ye, Yueshu, Li, Yihang, Wan, Lei, Jiang, Xinyue, Wang, Yujie, Cheng, Siyu, Song, Zhule, Tang, Xiangru, Xu, Xiaohua, Zhang, Ningyu, Chen, Huajun, Jiang, Yuchen Eleanor, Zhou, Wangchunshu
This work introduces Weaver, our first family of large language models (LLMs) dedicated to content creation. Weaver is pre-trained on a carefully selected corpus that focuses on improving the writing capabilities of large language models. We then fine-tune Weaver for creative and professional writing purposes and align it with the preferences of professional writers using a suite of novel methods for instruction data synthesis and LLM alignment, making it able to produce more human-like text and follow more diverse instructions for content creation. The Weaver family consists of Weaver Mini (1.8B), Weaver Base (6B), Weaver Pro (14B), and Weaver Ultra (34B) models, which are suitable for different applications and can be dynamically dispatched by a routing agent according to query complexity to balance response quality and computation cost. Evaluation on a carefully curated benchmark for assessing the writing capabilities of LLMs shows that Weaver models of all sizes outperform generalist LLMs several times their size. Notably, our most capable model, Weaver Ultra, surpasses GPT-4, a state-of-the-art generalist LLM, in various writing scenarios, demonstrating the advantage of training specialized LLMs for writing purposes. Moreover, Weaver natively supports retrieval-augmented generation (RAG) and function calling (tool usage). We present various use cases of these abilities for improving AI-assisted writing systems, including the integration of external knowledge bases, tools, or APIs, and the provision of personalized writing assistance. Furthermore, we discuss and summarize guidelines and best practices for pre-training and fine-tuning domain-specific LLMs.
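The complexity-based dispatch mentioned above can be pictured with a tiny routing function. The sketch below is a hypothetical illustration: the model identifiers and the complexity scorer are placeholders, not Weaver's actual interface.

```python
# Hypothetical illustration of routing a query to a model size by estimated
# complexity; names and the scoring function are placeholders.
WEAVER_MODELS = ["Weaver-Mini-1.8B", "Weaver-Base-6B", "Weaver-Pro-14B", "Weaver-Ultra-34B"]

def route(query: str, estimate_complexity) -> str:
    """Pick a model size from an estimated complexity score in [0, 1],
    trading response quality against computation cost."""
    score = max(0.0, min(1.0, estimate_complexity(query)))
    index = min(int(score * len(WEAVER_MODELS)), len(WEAVER_MODELS) - 1)
    return WEAVER_MODELS[index]
```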
IDLat: An Importance-Driven Latent Generation Method for Scientific Data
Shen, Jingyi, Li, Haoyu, Xu, Jiayi, Biswas, Ayan, Shen, Han-Wei
Deep learning based latent representations have been widely used for numerous scientific visualization applications such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduce the latent size with a lossless entropy encoding algorithm trained together with the autoencoder, improving storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications. To incorporate domain interests into latent representations, we extend the basic autoencoder with a feature transformation network that takes domain interest as an input to guide the mapping from scientific data to latent representations. Every element in the importance map is a real value indicating how vital that element is; the importance values can be derived mathematically based on the domain or heuristically based on distances, distributions, locations, etc., depending on the underlying scientific applications.
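The importance-guided latent generation described above can be pictured as a small conditional autoencoder. The PyTorch sketch below is a minimal illustration under assumed design choices (3D convolutions, FiLM-style conditioning, the class name ImportanceGuidedAE); it is not the paper's IDLat architecture and it omits the entropy-encoding stage.

```python
import torch
import torch.nn as nn

class ImportanceGuidedAE(nn.Module):
    """Minimal sketch, not the paper's architecture: a feature transformation
    network conditions encoder features on a spatial importance map so that
    domain interest shapes the latent representation."""

    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Feature transformation network: turns the importance map into
        # per-channel scale and shift terms (FiLM-style conditioning is an
        # assumption made for this illustration).
        self.transform = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 64, 3, stride=2, padding=1),
        )
        self.to_latent = nn.Conv3d(32, latent_dim, 1)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(latent_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, volume, importance):
        feats = self.encoder(volume)                       # downsampled features
        scale, shift = self.transform(importance).chunk(2, dim=1)
        latent = self.to_latent(feats * (1 + scale) + shift)
        return self.decoder(latent), latent

# Example usage with a 32^3 volume and a matching importance map.
# recon, z = ImportanceGuidedAE()(torch.rand(1, 1, 32, 32, 32),
#                                 torch.rand(1, 1, 32, 32, 32))
```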
Reinforcement Learning for Load-balanced Parallel Particle Tracing
Xu, Jiayi, Guo, Hanqi, Shen, Han-Wei, Raj, Mukund, Wurster, Skylar Wolfgang, Peterka, Tom
We explore an online reinforcement learning (RL) paradigm for optimizing parallel particle tracing performance in distributed-memory systems. Our method combines three novel components: (1) a workload donation model, (2) a high-order workload estimation model, and (3) a communication cost model, to optimize the performance of data-parallel particle tracing dynamically. First, we design an RL-based workload donation model. It monitors the workload of processes and creates RL agents that donate particles and data blocks from high-workload processes to low-workload processes to minimize the execution time. The agents learn the donation strategy on the fly based on reward and cost functions, which are designed to account for each process's workload change and the data transfer cost of every donation action. Second, we propose an online workload estimation model to help our RL model estimate the workload distribution of processes in future computations. Third, we design a communication cost model that considers both block and particle data exchange costs, helping the agents make effective decisions with minimized communication cost. We demonstrate that our algorithm adapts to different flow behaviors in large-scale fluid dynamics, ocean, and weather simulation data, and that it improves parallel particle tracing performance in terms of parallel efficiency, load balance, and I/O and communication costs in evaluations with up to 16,384 processors.
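The workload donation idea can be illustrated with a toy value-learning agent that picks how large a fraction of its particles a high-workload rank should donate, rewarded by the reduction in workload imbalance minus a transfer cost. The sketch below is a simplified stand-in under assumed names (DonationAgent, donation_reward) and a bandit-style update; the paper's agents, state representation, and cost models are more elaborate.

```python
import random

class DonationAgent:
    """Toy sketch of the donation decision: a high-workload rank learns an
    epsilon-greedy value estimate of which fraction of its particles to donate."""

    ACTIONS = (0.0, 0.1, 0.25, 0.5)   # candidate donation fractions

    def __init__(self, epsilon=0.1, lr=0.2):
        self.q = {a: 0.0 for a in self.ACTIONS}
        self.epsilon, self.lr = epsilon, lr

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)   # explore
        return max(self.q, key=self.q.get)       # exploit best known fraction

    def update(self, action, reward):
        # Move the value estimate toward the observed reward.
        self.q[action] += self.lr * (reward - self.q[action])

def donation_reward(gap_before, gap_after, bytes_sent, cost_per_byte=1e-6):
    """Reward = reduction of the donor/receiver workload gap minus an
    estimated data-transfer cost (both terms are illustrative assumptions)."""
    return (gap_before - gap_after) - cost_per_byte * bytes_sent
```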