ernie
- North America > United States > Washington > King County > Seattle (0.04)
- North America > Dominican Republic (0.04)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms
Multi-Agent Reinforcement Learning (MARL) has shown promising results across several domains. Despite this promise, MARL policies often lack robustness and are therefore sensitive to small changes in their environment. This presents a serious concern for the real-world deployment of MARL algorithms, where the testing environment may differ slightly from the training environment. In this work we show that we can gain robustness by controlling a policy's Lipschitz constant and, under mild conditions, establish the existence of a Lipschitz and close-to-optimal policy. Motivated by these insights, we propose a new robust MARL framework, ERNIE, that promotes the Lipschitz continuity of the policies with respect to the state observations and actions by adversarial regularization.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > Dominican Republic (0.04)
The fragility of "cultural tendencies" in LLMs
In a recent study, Lu, Song, and Zhang (2025) (LSZ) propose that large language models (LLMs), when prompted in different languages, display culturally specific tendencies. They report that the two models (i.e., GPT and ERNIE) respond in more interdependent and holistic ways when prompted in Chinese, and more independent and analytic ways when prompted in English. LSZ attribute these differences to deep-seated cultural patterns in the models, claiming that prompt language alone can induce substantial cultural shifts. While we acknowledge the empirical patterns they observed, we find their experiments, methods, and interpretations problematic. In this paper, we critically re-evaluate the methodology, theoretical framing, and conclusions of LSZ. We argue that the reported "cultural tendencies" are not stable traits but fragile artifacts of specific models and task design. To test this, we conducted targeted replications using a broader set of LLMs and a larger number of test items. Our results show that prompt language has minimal effect on outputs, challenging LSZ's claim that these models encode grounded cultural beliefs.
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- Asia > Japan (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study > Negative Result (0.68)
Samsung's AI features on the Galaxy S24 in China reportedly ditch Google for Baidu
CNBC reported Friday that the Chinese version of the flagship phone uses Baidu's Ernie chatbot to power the phone's AI features. Ernie arrived last August after reportedly receiving Chinese government approval. "Now featuring Ernie's understanding and generation capabilities, the upgraded Samsung Note Assistant can translate content and also summarize lengthy content into clear, intelligently organized formats at the click of a button, streamlining the organization of extensive text," Baidu and Samsung told CNBC in a joint statement. Samsung's description of the Galaxy S24 series on its Chinese website advertises many of the same Google-powered features it debuted last week at its San Jose, California, launch event. These include a version of Circle to Search, real-time call translation, a transcription helper and a photo assistant.
- Asia > China (1.00)
- North America > United States > California > Santa Clara County > San Jose (0.28)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.41)
How Generative AI Can Be Effectively Used in Government Chatbots
With the rapid development of artificial intelligence and breakthroughs in machine learning and natural language processing, intelligent question-answering robots have become widely used in government affairs. This paper conducts a horizontal comparison between Guangdong Province's government chatbots and two large language models, ChatGPT and Wenxin Ernie, to analyze the strengths and weaknesses of existing government chatbots and AIGC technology. The study finds significant differences between government chatbots and large language models: China's government chatbots are still in an exploratory stage and have a gap to close before they can be considered "intelligent." To explore the future direction of government chatbots more deeply, this research proposes targeted optimization paths for applying generative AI effectively in government chatbot conversations.
- Asia > China > Guangdong Province (0.24)
- Europe > United Kingdom (0.14)
- North America > United States > Maryland (0.04)
- (3 more...)
- Overview (0.68)
- Research Report (0.50)
- Banking & Finance (0.95)
- Health & Medicine > Health Care Providers & Services (0.94)
- Health & Medicine > Therapeutic Area (0.68)
- Government > Regional Government (0.66)
Revisiting the Knowledge Injection Frameworks
Fu, Peng, Zhang, Yiming, Wang, Haobo, Qiu, Weikang, Zhao, Junbo
In recent years, large language models (LLMs) such as GPTs have had a great impact worldwide. However, how to adapt these LLMs to vertical, domain-specific tasks by utilizing external knowledge remains not fully solved. A few works have emerged along this line, most of which rely on an alignment heuristic built to inject the corresponding knowledge tuple into the associated text sample. Despite the promise, we identify a pivotal problem that is ubiquitous in this line of work. Simply put, we find that injecting unaligned (i.e., random) knowledge tuples into the LLMs achieves results comparable to (and sometimes better than) injecting the aligned knowledge. We therefore conduct a thorough investigation of this frustrating finding across a variety of related prior work and provide a chain of potential interpretations for the phenomenon. Based on all of this, we offer a simple remediation technique. Briefly, the core of the technique is the pruning and purification of the external knowledge base before it is injected into LLMs. Finally, we show that integrating this technique into most (if not all) knowledge injection frameworks and recent LLMs overcomes the aforementioned sanity problem and further pushes the boundary of the performance of domain-adaptive LLMs.
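The sanity check behind this finding is straightforward to reproduce in outline: pair each text sample with either its aligned knowledge tuple or a randomly drawn one, and compare downstream scores between the two settings. The sketch below illustrates only that control setup; the data schema, the verbalize-and-prepend injection heuristic, and all names are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def build_injection_batches(samples, knowledge_base, aligned=True, seed=0):
    """Pair each sample with its aligned knowledge tuple, or with a random
    one drawn from the knowledge base, so the two settings can be compared
    on the same downstream task.

    Assumed (illustrative) schema: `samples` is a list of dicts with "text",
    "tuple", and optionally "label"; `knowledge_base` is a list of
    (head, relation, tail) tuples.
    """
    rng = random.Random(seed)
    batches = []
    for sample in samples:
        tuple_ = sample["tuple"] if aligned else rng.choice(knowledge_base)
        # A common injection heuristic: verbalize the tuple and prepend it
        # to the text sample before feeding it to the model.
        head, relation, tail = tuple_
        prompt = f"{head} {relation} {tail}. {sample['text']}"
        batches.append({"prompt": prompt, "label": sample.get("label")})
    return batches

# Comparing the two settings on the same task would surface the "sanity
# problem" described above if their scores turn out to be close:
# aligned_data = build_injection_batches(samples, kb, aligned=True)
# random_data  = build_injection_batches(samples, kb, aligned=False)
```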
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- (2 more...)
Robust Multi-Agent Reinforcement Learning via Adversarial Regularization: Theoretical Foundation and Stable Algorithms
Bukharin, Alexander, Li, Yan, Yu, Yue, Zhang, Qingru, Chen, Zhehui, Zuo, Simiao, Zhang, Chao, Zhang, Songan, Zhao, Tuo
Multi-Agent Reinforcement Learning (MARL) has shown promising results across several domains. Despite this promise, MARL policies often lack robustness and are therefore sensitive to small changes in their environment. This presents a serious concern for the real-world deployment of MARL algorithms, where the testing environment may differ slightly from the training environment. In this work we show that we can gain robustness by controlling a policy's Lipschitz constant and, under mild conditions, establish the existence of a Lipschitz and close-to-optimal policy. Based on these insights, we propose a new robust MARL framework, ERNIE, that promotes the Lipschitz continuity of the policies with respect to the state observations and actions by adversarial regularization. The ERNIE framework provides robustness against noisy observations, changing transition dynamics, and malicious actions of agents. However, ERNIE's adversarial regularization may introduce some training instability. To reduce this instability, we reformulate adversarial regularization as a Stackelberg game. We demonstrate the effectiveness of the proposed framework with extensive experiments in traffic light control and particle environments. In addition, we extend ERNIE to mean-field MARL with a formulation based on distributionally robust optimization that outperforms its non-robust counterpart and is of independent interest. Our code is available at https://github.com/abukharin3/ERNIE.
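As a rough illustration of the adversarial-regularization idea described here (not the authors' implementation, which is available at the repository above), one can penalize how much an agent's action distribution shifts under a small, adversarially chosen perturbation of its observation, acting as a proxy for the policy's local Lipschitz constant. The sketch below assumes a PyTorch policy network that outputs action logits; the perturbation radius, step size, and number of ascent steps are illustrative choices.

```python
import torch
import torch.nn.functional as F

def adversarial_regularizer(policy, obs, epsilon=0.1, n_steps=3, step_size=0.05):
    """Penalize the divergence between the policy's action distribution on
    clean observations and on adversarially perturbed observations, with the
    perturbation found by projected gradient ascent in an L-inf ball."""
    with torch.no_grad():
        clean_logits = policy(obs)
    delta = torch.zeros_like(obs, requires_grad=True)
    for _ in range(n_steps):
        # Divergence between the policy on perturbed and clean observations.
        div = F.kl_div(
            F.log_softmax(policy(obs + delta), dim=-1),
            F.softmax(clean_logits, dim=-1),
            reduction="batchmean",
        )
        (grad,) = torch.autograd.grad(div, delta)
        # Gradient ascent on the perturbation, projected back into the ball.
        delta = (delta + step_size * grad.sign()).clamp(-epsilon, epsilon).detach()
        delta.requires_grad_(True)
    # The returned term is differentiable w.r.t. the policy parameters and is
    # added to each agent's usual policy loss, e.g.:
    #   loss = policy_loss + lam * adversarial_regularizer(policy, obs_batch)
    return F.kl_div(
        F.log_softmax(policy(obs + delta), dim=-1),
        F.softmax(clean_logits, dim=-1),
        reduction="batchmean",
    )
```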
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > Kansas > Cowley County (0.04)
- North America > Dominican Republic (0.04)
- Transportation > Ground > Road (1.00)
- Transportation > Infrastructure & Services (0.88)
Public Perceptions of Gender Bias in Large Language Models: Cases of ChatGPT and Ernie
Zhou, Kyrie Zhixuan, Sanfilippo, Madelyn Rose
Large language models are quickly gaining momentum, yet are found to demonstrate gender bias in their responses. In this paper, we conducted a content analysis of social media discussions to gauge public perceptions of gender bias in LLMs trained in different cultural contexts, i.e., ChatGPT, a US-based LLM, and Ernie, a China-based LLM. People shared both observations of gender bias in their personal use and scientific findings about gender bias in LLMs. A difference emerged between the two LLMs: ChatGPT was more often found to carry implicit gender bias, e.g., associating men and women with different profession titles, while explicit gender bias was found in Ernie's responses, e.g., overly promoting women's pursuit of marriage over career. Based on the findings, we reflect on the impact of culture on gender bias and propose governance recommendations to regulate gender bias in LLMs.
- Europe > Jersey (0.28)
- Asia > China (0.25)
- North America > United States > California (0.05)
- (6 more...)
- Research Report (0.82)
- Public Relations (0.62)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Law > Statutes (0.94)
- (3 more...)