Can Large Language Model Agents Simulate Human Trust Behavior?

Neural Information Processing Systems

Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount.
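The Trust Game framework mentioned above follows the standard setup from behavioral economics: a trustor sends part of an endowment, the amount is multiplied (conventionally tripled) in transit, and the trustee returns some fraction. A minimal sketch of the payoff computation, with hypothetical amounts (the paper's actual LLM agent policies are not modeled here):

```python
# Minimal sketch of the classic Trust Game payoff structure referenced
# above. The 3x multiplier follows the standard behavioral-economics
# setup; the specific amounts below are illustrative, not from the paper.

def trust_game(endowment: float, sent: float, return_fraction: float,
               multiplier: float = 3.0):
    """Compute final payoffs (trustor, trustee) for one round."""
    assert 0 <= sent <= endowment
    received = sent * multiplier            # trustee receives the multiplied amount
    returned = received * return_fraction   # trustee sends back a fraction
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Example: trustor sends 5 of a 10-unit endowment; trustee returns half
# of the 15 received.
print(trust_game(10, 5, 0.5))  # (12.5, 7.5)
```

The amount sent (relative to the endowment) is the standard behavioral measure of trust; in the paper's setting, an LLM agent would choose `sent` in place of a human participant.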


Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments

Neural Information Processing Systems

Enabling deep models to generalize in non-stationary environments is vital for real-world machine learning, as data distributions often change continually. Recently, evolving domain generalization (EDG) has emerged to tackle domain generalization in time-varying systems, where the domain gradually evolves over time along an underlying continuous structure. However, EDG typically assumes that multiple source domains are available simultaneously. It remains an open problem to address EDG in the domain-incremental setting, where source domains are non-static and arrive sequentially, mimicking the evolution of training domains. To this end, we propose Weight Diffusion (W-Diff), a novel framework that uses a conditional diffusion model in parameter space to learn the evolving pattern of classifiers during domain-incremental training. Specifically, the diffusion model is conditioned on the classifier weights of a historical domain (regarded as a reference point) and the prototypes of the current domain, and learns the evolution from the reference point to the classifier weights of the current domain (regarded as the anchor point). In addition, a domain-shared feature encoder is learned by enforcing prediction consistency among multiple classifiers, which mitigates overfitting and encourages the evolving pattern to be captured in the classifiers as much as possible. During inference, we ensemble a large number of target-domain-customized classifiers, cheaply obtained via the conditional diffusion model, for robust prediction. Comprehensive experiments on both synthetic and real-world datasets show the superior generalization performance of W-Diff on unseen future domains.
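The ensemble inference step described above can be sketched as follows. This is a hypothetical stand-in: classifier weights are sampled from a simple Gaussian perturbation of an anchor point, whereas the paper generates them with a conditional diffusion model; all names and shapes here are illustrative.

```python
import numpy as np

# Sketch of ensembling many sampled classifier heads for robust
# prediction. sample_classifier is a stand-in for drawing weights from
# the paper's conditional diffusion model (here: Gaussian noise around
# an anchor weight matrix).
rng = np.random.default_rng(0)

def sample_classifier(anchor_w, noise=0.05):
    """Stand-in generator: perturb the anchor classifier weights."""
    return anchor_w + noise * rng.standard_normal(anchor_w.shape)

def ensemble_predict(features, anchor_w, n_classifiers=32):
    """Average logits over many sampled linear classifier heads."""
    logits = np.zeros((features.shape[0], anchor_w.shape[1]))
    for _ in range(n_classifiers):
        w = sample_classifier(anchor_w)
        logits += features @ w          # linear classifier head
    return (logits / n_classifiers).argmax(axis=1)

x = rng.standard_normal((4, 8))         # 4 samples, 8-dim features
w = rng.standard_normal((8, 3))         # anchor weights for 3 classes
preds = ensemble_predict(x, w)
print(preds.shape)  # (4,)
```

Because each sampled head is just a weight matrix, generating many of them is cheap relative to retraining, which is what makes the ensemble practical at inference time.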


OpenAI: The power and the pride

MIT Technology Review

There is no question that OpenAI pulled off something historic with its release of ChatGPT, powered by GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. How that turns out for humanity is something we are still reckoning with and may be for quite some time. But a pair of recent books both attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution. In Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Karen Hao tells the story of the company's rise to power and its far-reaching impact all over the world.


Inductive Representation Learning on Large Graphs

Neural Information Processing Systems

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
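The sample-and-aggregate idea described above can be sketched with a single layer using a mean aggregator. This is a simplified variant with separate self and neighbor weight matrices, random stand-ins for what a trained model would learn; the neighbor lists below are illustrative.

```python
import numpy as np

# Simplified sketch of one GraphSAGE-style layer with a mean
# aggregator: each node's new embedding combines its own features with
# the mean of its (sampled) neighbors' features. Weights are random
# stand-ins for learned parameters.
rng = np.random.default_rng(0)

def sage_layer(features, neighbors, W_self, W_neigh):
    out = []
    for v, nbrs in enumerate(neighbors):
        h_neigh = (features[nbrs].mean(axis=0) if nbrs
                   else np.zeros(features.shape[1]))
        h = features[v] @ W_self + h_neigh @ W_neigh
        out.append(np.maximum(h, 0))        # ReLU nonlinearity
    h_new = np.array(out)
    # normalize embeddings to unit length
    norms = np.maximum(np.linalg.norm(h_new, axis=1, keepdims=True), 1e-12)
    return h_new / norms

feats = rng.standard_normal((4, 5))   # 4 nodes, 5-dim input features
adj = [[1, 2], [0], [0, 3], [2]]      # sampled neighbor lists per node
W_s = rng.standard_normal((5, 8))
W_n = rng.standard_normal((5, 8))
emb = sage_layer(feats, adj, W_s, W_n)
print(emb.shape)  # (4, 8)
```

Because the layer is a function of features and neighborhoods rather than a per-node lookup table, it applies directly to nodes (or whole graphs) never seen during training, which is the inductive property the abstract highlights.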


WNBA investigation finds no evidence of hateful comments toward Angel Reese

FOX News

Fox News Flash top sports headlines are here. Check out what's clicking on Foxnews.com. The WNBA and the Indiana Fever announced that the allegations of "hateful comments" directed toward Angel Reese on May 17 were "not substantiated." Reese and her Chicago Sky faced the Fever and Caitlin Clark, and at one point, the two had to be separated after a flagrant foul by Clark against Reese. The association announced the next day that it would launch an investigation into the alleged comments.


Jasmine Crockett shares bizarre song clip calling herself 'leader of the future'

FOX News

Texas Rep. Jasmine Crockett attacked President Donald Trump's West Point address on MSNBC and called it proof of his unfitness as commander in chief. Rep. Jasmine Crockett, D-Texas, appears to be leaning into her rising political stardom this week, briefly sharing what appeared to be a fan-made song that referred to the Democratic firebrand as the "leader of the future." "Jasmine Crockett, she rises with the dawn. Fighting for justice, her light will never be gone," the song went. "Infectious with passion, she'll never bow down."


Scientist delivers ominous message to humanity after UFO covered in strange writing is found

Daily Mail - Science & tech

A UFO researcher has an ominous message for humanity as governments around the world begin releasing more information about alleged contact with extraterrestrials. Dr Julia Mossbridge is a cognitive neuroscientist and a researcher of unidentified aerial phenomena (UAP) - the new term for UFOs and alien sightings. After scientists in Colombia recovered a mysterious, sphere-shaped object that many now believe is a piece of UFO technology, Mossbridge said the world is entering an era in which it may soon have to deal with the knowledge that aliens exist. 'We are entering a time when we are starting to recognize as humans we don't have the control that we thought we had over everything,' Dr Mossbridge told Fox News. However, Mossbridge, who studies how humans think and also attended the May 1 congressional hearing on UAPs, said the impending disclosure of alien life could throw the worldview of many people into chaos.



Geo-Diverse Safety Alignment (Da Yin)

Neural Information Processing Systems

Content Warning: This paper may contain examples of harmful content by nature. In the rapidly evolving field of Large Language Models (LLMs), ensuring safety is a crucial and widely discussed topic. However, existing works often overlook the geo-diversity of cultural and legal standards across the world.