counterpoint
An Analysis of Large Language Models for Simulating User Responses in Surveys
Ziyun Yu, Yiru Zhou, Chen Zhao, Hongyi Wen
Using Large Language Models (LLMs) to simulate user opinions has received growing attention. Yet LLMs, especially those trained with reinforcement learning from human feedback (RLHF), are known to exhibit biases toward dominant viewpoints, raising concerns about their ability to represent users from diverse demographic and cultural backgrounds. In this work, we examine the extent to which LLMs can simulate human responses to cross-domain survey questions through direct prompting and chain-of-thought prompting. We further propose CLAIMSIM, a claim diversification method that elicits viewpoints from LLM parametric knowledge and supplies them as contextual input. Experiments on the survey question answering task indicate that, while CLAIMSIM produces more diverse responses, both approaches struggle to accurately simulate users. Further analysis reveals two key limitations: (1) LLMs tend to maintain fixed viewpoints across varying demographic features and generate single-perspective claims; and (2) when presented with conflicting claims, LLMs struggle to reason over nuanced differences among demographic features, limiting their ability to adapt responses to specific user profiles.
- Europe > Western Europe (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States (0.04)
- (2 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.93)
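The abstract above contrasts three ways of prompting an LLM to answer a survey question as a given user: direct prompting, chain-of-thought prompting, and CLAIMSIM-style prompting with elicited claims as context. A minimal sketch of how those three setups differ is below; the prompt wording, the `ask_llm` stub, and the example profile are all illustrative assumptions, not the paper's actual code, and `ask_llm` stands in for whatever chat-completion API is used.

```python
# Hypothetical sketch of the three prompting setups (direct,
# chain-of-thought, claim-conditioned). All names and wording here
# are assumptions made for illustration, not the paper's code.

def ask_llm(prompt: str) -> str:
    """Stand-in for a real LLM call, so the sketch runs offline."""
    return "Agree"  # canned answer

def persona(profile: dict) -> str:
    """Render a demographic profile as a system-style preamble."""
    return "You are a survey respondent: " + ", ".join(
        f"{k}={v}" for k, v in profile.items()
    )

def direct_prompt(profile, question, options):
    return (
        f"{persona(profile)}\nQuestion: {question}\n"
        f"Options: {', '.join(options)}\nAnswer with one option."
    )

def cot_prompt(profile, question, options):
    # Chain-of-thought: ask for reasoning before the final choice.
    return direct_prompt(profile, question, options) + (
        "\nFirst reason step by step about how someone with this "
        "profile would answer, then give your final option."
    )

def claimsim_prompt(profile, question, options, claims):
    # CLAIMSIM-style: claims elicited from the model's parametric
    # knowledge (possibly conflicting) are supplied as context.
    context = "\n".join(f"- {c}" for c in claims)
    return (
        f"{persona(profile)}\nRelevant viewpoints:\n{context}\n"
        f"Question: {question}\nOptions: {', '.join(options)}\n"
        "Weigh the viewpoints against the profile, then answer."
    )

profile = {"age": 34, "country": "Brazil", "education": "college"}
q = "Should the government regulate AI development?"
opts = ["Agree", "Disagree", "Unsure"]
claims = [
    "Regulation protects the public from unsafe systems.",
    "Regulation slows innovation and favors incumbents.",
]
print(ask_llm(claimsim_prompt(profile, q, opts, claims)))
```

The paper's second finding (failure to reason over conflicting claims) corresponds to the case where the `claims` list contains opposing viewpoints, as in the example.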
The Wild Future of Artificial Intelligence - The Atlantic
This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. OpenAI's impressive new artificial-intelligence chatbot, ChatGPT, has intensified the debate over what the rise of AI-generated writing and art means for work, culture, education, and more. "You don't need a wild imagination to see that the future cracked open by these technologies is full of awful and awesome possibilities," our staff writer Derek Thompson recently wrote. I called Derek to explore some of those possibilities. But first, here are three new stories from The Atlantic.
- North America > United States > West Virginia (0.05)
- North America > United States > Texas (0.05)
- North America > United States > New York (0.05)
- (2 more...)
Point – Counterpoint on Why Organizations Suck at AI - DataScienceCentral.com
I love this infographic recently floating around LinkedIn. Sorry, I don't know to whom to give credit, but it does provide an interesting depiction of how senior management thinks AI works versus the realities of what's required to make AI work (Figure 1). Intent is an understanding and clarification of the intended need or objective defined at the beginning of the process. Intent is the why of this journey. Understanding Intent requires a detailed articulation of what you are trying to accomplish (e.g., objectives, need, purpose), the KPIs and metrics against which you will measure progress and success, the different stakeholders and constituents who will be involved in the scoping and execution of business objectives, the key decisions those stakeholders need to make in support of the objectives and the KPIs and metrics against which they will measure progress and success, the Desired Outcomes, the potential costs associated with making the wrong decisions (critical for understanding the ramifications of False Positives and False Negatives), the ramifications of objective or need failure, and the potential unintended consequences…should I keep going (Figure 2)?
Counterpoint: AI is far more dangerous than quantum computing
Vivek Wadhwa and Mauritz Kop recently penned an op-ed urging governments around the world to get ahead of the threat posed by the emerging technology known as quantum computing. They even went so far as to title their article "Why Quantum Computing is Even More Dangerous Than Artificial Intelligence." Up front: this one gets a very respectful hard disagree from me. While I do believe that quantum computing poses an existential threat to humanity, my reasons differ wildly from those proposed by Wadhwa and Kop. Wadhwa and Kop open their article with a description of AI's failures, its potential misuse, and how the media's narrative has exacerbated the danger of AI, before settling on a powerful lead: the world's failure to rein in the demon of AI--or rather, the crude technologies masquerading as such--should serve as a profound warning.
- North America > United States (0.15)
- Asia > China (0.05)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.52)
Play a Bach duet with an AI counterpoint
Researchers at the University of Rochester Hajim School of Engineering & Applied Sciences have developed a web-based system called BachDuet that allows users to improvise duets with an artificial intelligence (AI) counterpart in real time. By visiting the BachDuet website, users can play duets with the AI agent using a computer keyboard, mouse, touchscreen, or MIDI keyboard. To play a duet with German composer Johann Sebastian Bach, you don't have to travel back to the 18th century; thanks to this new program, you only need a computer. BachDuet was developed by Zhiyao Duan, an associate professor of electrical and computer engineering and of computer science, and members of his lab, including Yongi Zang '23 and PhD student Christodoulos Benetatos.
- Media > Music (0.56)
- Leisure & Entertainment (0.56)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.36)
- Health & Medicine > Consumer Health (0.34)
AI ethics have consequences - learning from the problem of autonomous weapons systems
First of all, I want to state for the record that I have never played a video game that involved violence or war. I think the last time I played a "video game" was Flight Simulator. As a result, I suspect some readers are much more familiar with intensive and fanciful warfare than I am. Still, I've recently been part of discussions with the Department of Defense and with organizations that advise, consult, and criticize the DoD on the topic of AI in warfare. Introducing AI ethics into the violence and killing of war is a complicated issue.
- North America > United States (0.57)
- Europe > United Kingdom (0.05)
- Europe > Russia (0.05)
- (4 more...)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.57)
- Education > Focused Education > Special Education (0.40)
When Artificial Intelligence Becomes an Artform
Not zeros and ones or binary code--though that's a language, too--but a visual vernacular that helps humans make the connection that, yes, a computer was here and it made its mark. It comes in all forms: the perfect crispness of an illustration drawn on a Wacom pad; the trippy swirls of a Google Deep Dream image; the fuzzy imperfection of an AI-generated font. "We call it the computer accent," said Claire Evans, lead singer of synth-pop band YACHT. For their recently released album, Chain Tripping, Evans, her bandmates, and a cast of creative AI experts explored how this so-called computer accent can be used to artistic ends. Sure, this is nothing new--you can't really get more "computer accent" than Kraftwerk ("Computer Love" was released way back in 1981, to name the most obvious example).
- Media > Music (0.36)
- Leisure & Entertainment (0.36)
Counterpoint by Convolution
Cheng-Zhi Anna Huang, Tim Cooijmans, Adam Roberts, Aaron Courville, Douglas Eck
Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. In order to better approximate this process, we train a convolutional neural network to complete partial musical scores, and explore the use of blocked Gibbs sampling as an analogue to rewriting. Neither the model nor the generative procedure is tied to a particular causal direction of composition. Our model is an instance of orderless NADE (Uria et al., 2014), which allows more direct ancestral sampling. However, we find that Gibbs sampling greatly improves sample quality, which we demonstrate to be due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from Yao et al. (2014) yields better samples than ancestral sampling, based on both log-likelihood and human evaluation.
- North America > Canada > Quebec > Montreal (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China (0.04)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.70)
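The abstract above describes repeatedly erasing and resampling blocks of a partial score as an analogue to a composer rewriting. A toy sketch of that blocked Gibbs loop over a binary pianoroll is below; the pianoroll sizes, block-mask schedule, and the uniform stand-in for the trained convolutional model are all assumptions made so the sketch runs, not the authors' implementation.

```python
# Illustrative blocked Gibbs sketch for score completion, in the
# spirit of the abstract above. `model_predict` is a placeholder for
# the trained CNN; everything here is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)
T, P = 32, 16  # time steps x pitches (toy sizes)

def model_predict(roll, mask):
    """Stand-in for the trained model: note-on probabilities for the
    masked cells. A real model would condition on the visible notes."""
    return np.full(roll.shape, 0.1)

def blocked_gibbs(roll, steps=10, block_frac=0.5):
    roll = roll.copy()
    for _ in range(steps):
        # Erase a random block of cells, then resample them from the
        # model's conditionals -- the "rewriting" analogue.
        mask = rng.random(roll.shape) < block_frac
        probs = model_predict(roll, mask)
        resampled = (rng.random(roll.shape) < probs).astype(roll.dtype)
        roll[mask] = resampled[mask]
    return roll

partial_score = np.zeros((T, P), dtype=np.int8)
partial_score[:, 4] = 1  # a held note as the given partial score
completed = blocked_gibbs(partial_score)
print(completed.shape)
```

Annealing `block_frac` toward zero over the loop, as in the approximate procedure of Yao et al. (2014), would make late iterations resample ever-smaller blocks.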
Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation
Everyone wants to be safe. But paradoxically, sometimes the policies we implement to guarantee our safety end up making us much worse off than if we had done nothing at all. It is counterintuitive, but this is the well-established calculus of the world of risk analysis. When we consider the future of AI and the public policies that will shape its evolution, it is vital to keep that insight in mind. While AI-enabled technologies can pose some risks that should be taken seriously, it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.
- North America > United States > Virginia > Fairfax County > Fairfax (0.05)
- Europe > Russia (0.04)
- Asia > Russia (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (0.96)
- Information Technology > Security & Privacy (0.70)
Counterpoint: The case against an AI god
We'd previously written an opinion piece titled "The case for an artificially intelligent god." This is our counterpoint. It's a strange time to be a technology journalist. Somehow, artificial intelligence has grown from a buzzword into a religion, literally. For us tech enthusiasts, it can often be more comfortable to wrap our heads around ideas like algorithms and neural networks than around religion and faith.