Huang, Junming
Uncovering inequalities in new knowledge learning by large language models across different languages
Wang, Chenglong, Tang, Haoyu, Yang, Xiyuan, Xie, Yueqi, Suh, Jina, Sitaram, Sunayana, Huang, Junming, Xie, Yu, Gong, Zhaoya, Xie, Xing, Wu, Fangzhao
Existing research has primarily focused on static analyses that assess disparities in the existing knowledge and capabilities of LLMs across languages. However, LLMs are continuously evolving, acquiring new knowledge to generate up-to-date, domain-specific responses. Investigating linguistic inequalities within this dynamic process is therefore also essential. In this paper, we explore inequalities in new knowledge learning by LLMs across different languages, along four key dimensions: effectiveness, transferability, prioritization, and robustness. Through extensive experiments under two settings (in-context learning and fine-tuning) using both proprietary and open-source models, we demonstrate that low-resource languages consistently face disadvantages across all four dimensions. By shedding light on these disparities, we aim to raise awareness of linguistic inequities in LLMs' new knowledge learning, fostering the development of more inclusive and equitable future LLMs.

The widespread adoption of LLMs is transforming how people access information, and this transformation is both inevitable and global in scale. One notable example is ChatGPT, which, as of December 2024, serves 300 million weekly active users worldwide (6, 7). Given such widespread adoption, it is crucial to study fairness in multilingual environments to ensure that users of different languages can benefit equally from these systems (9). Existing research on multilingual equality in LLMs primarily focuses on static analyses that evaluate disparities in the knowledge and capabilities of LLMs across different languages (10, 11, 12, 13, 14, 15, 16, 17). Some studies, for example, have examined the amount of factual knowledge encoded in different languages and revealed significant variation; in particular, they show that knowledge available in low-resource languages remains limited due to the lack of pre-training data in these languages (18, 19, 20). These studies have significantly advanced our understanding of the extent and nature of multilingual inequalities in LLMs' existing knowledge and capabilities. However, we still lack an understanding of inequalities in the process of acquiring new knowledge, an evolving perspective in research on LLMs. Learning new knowledge is crucial for LLMs, as illustrated in Figure 1a: general-purpose LLMs are pre-trained on static datasets collected prior to training, which may not include real-time or recent information. As a result, these models do not possess new knowledge, and their knowledge base can quickly become outdated.
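The abstract contrasts in-context learning and fine-tuning as two routes for injecting new knowledge. The following is a minimal sketch of the in-context probe under stated assumptions: the model checkpoint, the invented fact, and the prompt format are illustrative choices, not the paper's actual setup.

```python
# Hedged sketch (not the paper's code): inject a novel fact in one language,
# query it in another, and check whether the answer transfers. The model name
# and the toy fact below are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder open-source model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

fact_en = "The Lumora Bridge opened to traffic in March 2031."  # invented fact
question_zh = "Lumora大桥是什么时候通车的？"  # the same question, asked in Chinese

prompt = f"Context: {fact_en}\nQuestion: {question_zh}\nAnswer:"
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30, do_sample=False)
answer = tok.decode(output[0][inputs["input_ids"].shape[1]:],
                    skip_special_tokens=True)
print(answer)  # crude effectiveness check: does the answer mention 2031?
```

Repeating such probes with the fact and question swapped across high- and low-resource languages gives a rough picture of the transferability dimension the abstract describes.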
A Recommendation Model Utilizing Separation Embedding and Self-Attention for Feature Mining
Liu, Wenyi, Wang, Rui, Luo, Yuanshuai, Wei, Jianjun, Zhao, Zihao, Huang, Junming
With the explosive growth of Internet data, users face the problem of information overload, which makes it challenging to efficiently obtain the resources they need. Recommendation systems have emerged in this context: by filtering massive amounts of information, they provide users with content that meets their needs, playing a key role in scenarios such as advertising recommendation and product recommendation. However, traditional click-through rate prediction and top-K recommendation mechanisms are increasingly unable to meet the recommendation needs of modern life scenarios due to high computational complexity, large memory consumption, long feature selection time, and insufficient feature interaction. This paper proposes a recommendation system model based on a separation embedding cross-network. The model uses an embedding neural network layer to transform sparse feature vectors into dense embedding vectors and can independently perform feature cross operations on different dimensions, thereby improving the accuracy and depth of feature mining. Experimental results show that the model exhibits stronger adaptability and higher prediction accuracy on complex data sets, effectively addressing the shortcomings of existing models.
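The abstract describes per-field embeddings followed by dimension-wise feature crosses. Below is a hedged PyTorch sketch of that general idea; the layer sizes and the FM-style cross used here are stand-in assumptions, since the paper's exact architecture is not given in the abstract.

```python
# Hedged sketch: one embedding table per sparse feature field ("separation
# embedding"), then pairwise feature crosses computed independently along
# each embedding dimension. Sizes and the cross form are assumptions.
import torch
import torch.nn as nn

class SeparationEmbeddingCross(nn.Module):
    def __init__(self, field_sizes, dim=16):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, dim) for n in field_sizes)
        self.out = nn.Linear(dim, 1)

    def forward(self, x):  # x: (batch, num_fields) integer feature ids
        e = torch.stack([emb(x[:, i]) for i, emb in enumerate(self.embeds)],
                        dim=1)                       # (batch, fields, dim)
        # pairwise crosses per dimension: 0.5 * ((sum e)^2 - sum e^2)
        cross = 0.5 * (e.sum(dim=1) ** 2 - (e ** 2).sum(dim=1))  # (batch, dim)
        return torch.sigmoid(self.out(cross)).squeeze(-1)        # CTR estimate

model = SeparationEmbeddingCross(field_sizes=[1000, 500, 50])
preds = model(torch.randint(0, 50, (4, 3)))  # toy batch of 4 users, 3 fields
```

Because each field keeps its own table and the cross acts dimension by dimension, fields never share parameters, which is one plausible reading of the "separation" in the model's name.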
Optimizing News Text Classification with Bi-LSTM and Attention Mechanism for Efficient Data Processing
Liu, Bingyao, Chen, Jiajing, Wang, Rui, Huang, Junming, Luo, Yuanshuai, Wei, Jianjun
The development of Internet technology has led to a rapid increase in news information, and filtering valuable content out of this flood has become an urgent problem. Given the shortcomings of traditional manual classification methods, which are time-consuming and inefficient, this paper proposes an automatic classification scheme for news texts based on deep learning. This scheme achieves efficient classification and management of news texts by introducing advanced machine learning algorithms, in particular an optimized model that combines a Bi-directional Long Short-Term Memory network (Bi-LSTM) with an attention mechanism. Experimental results show that this approach not only significantly improves the accuracy and timeliness of classification but also greatly reduces the need for manual intervention. It has practical significance for improving the information processing capabilities of the news industry and accelerating the flow of information. Comparative analysis against several common models demonstrates the effectiveness and advancement of the proposed method, laying a solid foundation for future news text classification research.
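The combination named in the abstract, a Bi-LSTM whose hidden states are pooled by an attention layer, is a standard construction; the sketch below shows one common form of it, with vocabulary size, dimensions, and the additive attention variant chosen as assumptions rather than taken from the paper.

```python
# Hedged sketch of a Bi-LSTM text classifier with attention pooling.
# Hyperparameters and the attention form are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, vocab_size, num_classes, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (weights * h).sum(dim=1)     # weighted sentence vector
        return self.fc(context)                # class logits

model = BiLSTMAttention(vocab_size=30000, num_classes=10)
logits = model(torch.randint(0, 30000, (8, 120)))  # toy batch, 120 tokens
```

The attention weights also give a per-token importance score, which is where the interpretability benefit over plain Bi-LSTM pooling comes from.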
How COVID-19 has Impacted American Attitudes Toward China: A Study on Twitter
Cook, Gavin, Huang, Junming, Xie, Yu
Past research has studied the social determinants of attitudes toward foreign countries. Because of potential endogeneity biases arising from unobserved factors or reverse causality, however, the causal impact of these determinants on public opinion is usually difficult to establish. Using social media data, we leverage the suddenness of the COVID-19 pandemic to examine whether a major global event has causally changed American views of another country. We collate a database of more than 297 million posts on the social media platform Twitter about China or COVID-19 up to June 2020, and we treat tweeting about COVID-19 as a proxy for individual awareness of COVID-19. Using regression discontinuity and difference-in-differences estimation, we find that awareness of COVID-19 causes a sharp rise in anti-China attitudes. Our work has implications for understanding how self-interest affects policy preferences and how Americans view migrant communities.
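To make the difference-in-differences design concrete, here is a minimal sketch with toy data: users who tweet about COVID-19 form the treated (aware) group, and their anti-China sentiment is compared before and after first awareness. Column names, the sentiment measure, and the numbers are hypothetical, not the paper's data.

```python
# Hedged DiD sketch: the coefficient on the aware-by-post interaction
# estimates the causal effect of COVID-19 awareness on anti-China sentiment.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "anti_china": [0.10, 0.12, 0.11, 0.30, 0.09, 0.10, 0.10, 0.12],
    "aware":      [1, 1, 1, 1, 0, 0, 0, 0],  # ever tweets about COVID-19
    "post":       [0, 0, 1, 1, 0, 0, 1, 1],  # after first COVID-19 tweet
})

fit = smf.ols("anti_china ~ aware * post", data=df).fit()
print(fit.params["aware:post"])  # the DiD estimate
```

The companion regression discontinuity design exploits the sharp timing of each user's first COVID-19 tweet in the same spirit, comparing sentiment just before and just after that cutoff.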
Large-scale Quantitative Evidence of Media Impact on Public Opinion toward China
Huang, Junming, Cook, Gavin, Xie, Yu
Do mass media influence people's opinion of other countries? Using BERT, a deep neural network-based natural language processing model, we analyze a large corpus of 267,907 China-related articles published by The New York Times since 1970. We then compare our output from The New York Times to a longitudinal data set constructed from 101 cross-sectional surveys of the American public's views on China. We find that the reporting of The New York Times on China in one year explains 54% of the variance in American public opinion on China in the following year. Our result confirms hypothesized links between media and public opinion and helps shed light on how mass media can influence public opinion of foreign countries.
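The design described in the abstract, scoring article tone with a BERT-family model and regressing next-year opinion on this-year media tone, can be sketched as follows. The classifier checkpoint, the yearly tone averages, and the opinion values are toy assumptions standing in for the paper's corpus and surveys.

```python
# Hedged sketch: score article tone with a generic BERT-based sentiment
# pipeline, then regress next-year public opinion on this-year media tone.
import pandas as pd
import statsmodels.api as sm
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # generic BERT-family sentiment model

def tone(text):
    """Map an article to +1 (positive) or -1 (negative) coverage."""
    return 1 if clf(text[:512])[0]["label"] == "POSITIVE" else -1

print(tone("Trade talks with China show progress."))  # single-article demo

# Toy yearly averages standing in for 267,907 scored articles
media_tone = pd.Series({2016: 0.10, 2017: 0.02, 2018: -0.15, 2019: -0.40})
opinion = pd.Series({2017: 44.0, 2018: 43.0, 2019: 38.0, 2020: 29.0})  # % favorable

# opinion_{t+1} ~ tone_t : variance explained is analogous to the 54% figure
X = sm.add_constant(media_tone.values)
fit = sm.OLS(opinion.values, X).fit()
print(fit.rsquared)
```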
Conquering the rating bound problem in neighborhood-based collaborative filtering: a function recovery approach
Huang, Junming, Cheng, Xue-Qi, Shen, Hua-Wei, Sun, Xiaoming, Zhou, Tao, Jin, Xiaolong
As an important tool for information filtering in the era of the socialized web, recommender systems have witnessed rapid development over the last decade. Benefiting from their better interpretability, neighborhood-based collaborative filtering techniques, such as the item-based collaborative filtering adopted by Amazon, have achieved great success in many practical recommender systems. However, neighborhood-based collaborative filtering suffers from the rating bound problem: the rating this method estimates for a target item is bounded by the observed ratings of all its neighboring items. It therefore cannot accurately estimate an unobserved rating on a target item whose ground-truth rating is actually higher (lower) than the highest (lowest) rating over all items in its neighborhood. In this paper, we address this problem by formalizing rating estimation as the task of recovering a scalar rating function. Under a linearity assumption, we infer all ratings by minimizing a low-order norm, e.g., the $\ell_{1/2}$-norm, of the second derivative of the target scalar function while keeping the observed ratings unchanged. Experimental results on three real datasets, namely Douban, Goodreads, and MovieLens, demonstrate that the proposed approach can well overcome the rating bound problem. In particular, it significantly improves the accuracy of rating estimation, by 37% over conventional neighborhood-based methods.
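A small numerical sketch of the function-recovery idea: treat the ratings along an ordered neighborhood as samples of a scalar function, fix the observed entries, and fill in the rest by minimizing a low-order norm of the discrete second derivative. The 1-D ordering, the toy data, and the solver are assumptions made for illustration, not the paper's formulation.

```python
# Hedged sketch: recover missing ratings by minimizing an l_{1/2}-style
# penalty on the discrete second derivative, keeping observed ratings fixed.
import numpy as np
from scipy.optimize import minimize

observed = {0: 2.0, 1: 3.0, 2: 4.0, 4: 4.5}  # index -> observed rating
n = 6                                         # items along the ordering
free_idx = [i for i in range(n) if i not in observed]

def objective(free, p=0.5):
    r = np.empty(n)
    for i, v in observed.items():
        r[i] = v
    r[free_idx] = free
    d2 = r[2:] - 2 * r[1:-1] + r[:-2]         # discrete second derivative
    return np.sum(np.abs(d2) ** p)            # low-order (l_{1/2}) penalty

x0 = np.full(len(free_idx), np.mean(list(observed.values())))
res = minimize(objective, x0, method="Nelder-Mead")
print(res.x)  # the extrapolated rating can exceed the max observed (4.5),
              # which a neighborhood average can never do
```

The point of the toy example is the last comment: because the recovered function extrapolates the observed trend, its estimates are no longer confined to the range of neighboring ratings, which is exactly the rating bound that neighborhood averaging cannot escape.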