AI Interaction
TikTok Is Now Collecting Even More Data About Its Users. Here Are the 3 Biggest Changes
According to its new privacy policy, TikTok now collects more data on its users, including their precise location, after majority ownership officially switched to a group based in the US. When TikTok users in the US opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the US, TikTok was compelled by the US government to transition from Chinese control to a new, American-majority corporate entity.
- South America > Venezuela (0.05)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > California (0.05)
- (3 more...)
A Lexical Analysis of Online Reviews on Human-AI Interactions
This study focuses on understanding the complex dynamics between humans and AI systems by analyzing user reviews. While previous research has explored various aspects of human-AI interaction, such as user perceptions and ethical considerations, there remains a gap in understanding the specific concerns and challenges users face. By using a lexical approach to analyze 55,968 online reviews from G2.com, Producthunt.com, and Trustpilot.com, this preliminary research aims to analyze human-AI interaction. Initial results from factor analysis reveal key factors influencing these interactions. The study aims to provide deeper insights into these factors through content analysis, contributing to the development of more user-centric AI systems. The findings are expected to enhance our understanding of human-AI interaction and inform future AI technology and user experience improvements.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Singapore (0.04)
- Research Report (0.82)
- Overview (0.69)
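The lexical factor-extraction step described above can be sketched in miniature. The toy term-frequency matrix, the choice of two factors, and the SVD-based extraction are all illustrative assumptions, not details from the study (a full factor analysis would additionally model per-feature noise variances):

```python
import numpy as np

# Tiny stand-in for a review-by-lexical-feature count matrix
# (the study uses 55,968 reviews; these four rows are invented).
X = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 3, 0, 2],
    [0, 2, 1, 3],
], dtype=float)

# Center each feature, then use SVD as a simple factor-extraction step.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                        # number of latent factors to keep (assumed)
loadings = Vt[:k].T * s[:k]  # how strongly each lexical feature loads on each factor
scores = U[:, :k]            # per-review position along each factor

print(loadings.shape)  # (4, 2): one loading per feature per factor
```

Interpreting which features load heavily on each factor is what the subsequent content analysis would do at scale.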
Human-AI Interactions: Cognitive, Behavioral, and Emotional Impacts
Riley, Celeste, Al-Refai, Omar, Reyes, Yadira Colunga, Hammad, Eman
As stories of human-AI interactions continue to be highlighted in the news and research platforms, the challenges are becoming more pronounced, including potential risks of overreliance, cognitive offloading, social and emotional manipulation, and the nuanced degradation of human agency and judgment. This paper surveys recent research on these issues through the lens of the psychological triad: cognition, behavior, and emotion. Observations seem to suggest that while AI can substantially enhance memory, creativity, and engagement, it also introduces risks such as diminished critical thinking, skill erosion, and increased anxiety. Emotional outcomes are similarly mixed, with AI systems showing promise for support and stress reduction, but raising concerns about dependency, inappropriate attachments, and ethical oversight. This paper aims to underscore the need for responsible and context-aware AI design, highlighting gaps for longitudinal research and grounded evaluation frameworks to balance benefits with emerging human-centric risks.
- North America > United States > Texas > Brazos County > College Station (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Texas > Brazos County > Bryan (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
Classifying Epistemic Relationships in Human-AI Interaction: An Exploratory Approach
As AI systems become integral to knowledge-intensive work, questions arise not only about their functionality but also their epistemic roles in human-AI interaction. While HCI research has proposed various AI role typologies, it often overlooks how AI reshapes users' roles as knowledge contributors. This study examines how users form epistemic relationships with AI -- how they assess, trust, and collaborate with it in research and teaching contexts. Based on 31 interviews with academics across disciplines, we developed a five-part codebook and identified five relationship types: Instrumental Reliance, Contingent Delegation, Co-agency Collaboration, Authority Displacement, and Epistemic Abstention. These reflect variations in trust, assessment modes, tasks, and human epistemic status. Our findings show that epistemic roles are dynamic and context dependent. We argue for shifting beyond static metaphors of AI toward a more nuanced framework that captures how humans and AI co-construct knowledge, enriching HCI's understanding of the relational and normative dimensions of AI use.
- North America > United States > Hawaii (0.04)
- North America > United States > Indiana (0.04)
- North America > Mexico (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Personal > Interview (0.68)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.89)
- Information Technology > Artificial Intelligence > Cognitive Science (0.68)
Autonomy by Design: Preserving Human Autonomy in AI Decision-Support
Buijsman, Stefan, Carter, Sarah E., Bermúdez, Juan Pablo
AI systems increasingly support human decision-making across domains of professional, skill-based, and personal activity. While previous work has examined how AI might affect human autonomy globally, the effects of AI on domain-specific autonomy -- the capacity for self-governed action within defined realms of skill or expertise -- remain understudied. We analyze how AI decision-support systems affect two key components of domain-specific autonomy: skilled competence (the ability to make informed judgments within one's domain) and authentic value-formation (the capacity to form genuine domain-relevant values and preferences). By engaging with prior investigations and analyzing empirical cases across medical, financial, and educational domains, we demonstrate how the absence of reliable failure indicators and the potential for unconscious value shifts can erode domain-specific autonomy both immediately and over time. We then develop a constructive framework for autonomy-preserving AI support systems. We propose specific socio-technical design patterns -- including careful role specification, implementation of defeater mechanisms, and support for reflective practice -- that can help maintain domain-specific autonomy while leveraging AI capabilities. This framework provides concrete guidance for developing AI systems that enhance rather than diminish human agency within specialized domains of action.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (5 more...)
- Transportation (1.00)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Diagnostic Medicine (0.68)
Serious Games: Human-AI Interaction, Evolution, and Coevolution
Doreswamy, Nandini, Horstmanshof, Louise
The serious games between humans and AI have only just begun. Evolutionary Game Theory (EGT) models the competitive and cooperative strategies of biological entities. EGT could help predict the potential evolutionary equilibrium of humans and AI. The objective of this work was to examine some of the EGT models relevant to human-AI interaction, evolution, and coevolution. Of thirteen EGT models considered, three were examined: the Hawk-Dove Game, Iterated Prisoner's Dilemma, and the War of Attrition. This selection was based on the widespread acceptance and clear relevance of these models to potential human-AI evolutionary dynamics and coevolutionary trajectories. The Hawk-Dove Game predicts balanced mixed-strategy equilibria based on the costs of conflict. It also shows the potential for balanced coevolution rather than dominance. Iterated Prisoner's Dilemma suggests that repeated interaction may lead to cognitive coevolution. It demonstrates how memory and reciprocity can lead to cooperation. The War of Attrition suggests that competition for resources may result in strategic coevolution, asymmetric equilibria, and conventions on sharing resources. Therefore, EGT may provide a suitable framework to understand and predict the human-AI evolutionary dynamic. However, future research could extend beyond EGT and explore additional frameworks, empirical validation methods, and interdisciplinary perspectives. AI is being shaped by human input and is evolving in response to it. So too, neuroplasticity allows the human brain to grow and evolve in response to stimuli. If humans and AI converge in future, what might be the result of human neuroplasticity combined with an ever-evolving AI? Future research should be mindful of the ethical and cognitive implications of human-AI interaction, evolution, and coevolution.
- North America > United States > Virginia > Albemarle County > Charlottesville (0.14)
- Oceania > Australia > New South Wales (0.04)
- North America > United States > New York (0.04)
- (2 more...)
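The Hawk-Dove equilibrium the abstract invokes can be checked numerically. In the classic game, when the cost of conflict C exceeds the resource value V, the evolutionarily stable state plays Hawk with probability V/C; the values of V and C below are illustrative assumptions, not figures from the paper:

```python
def hawk_dove_payoffs(V, C):
    """Payoff matrix for the classic Hawk-Dove game.
    Rows: focal strategy (0 = Hawk, 1 = Dove); columns: opponent."""
    return [[(V - C) / 2, V],
            [0.0,         V / 2]]

def ess_hawk_share(V, C):
    """Mixed-strategy ESS: all-Hawk if fighting is cheap (C <= V),
    otherwise Hawk with probability V/C."""
    return 1.0 if C <= V else V / C

def expected_payoff(row, p_hawk, M):
    """Expected payoff of a pure strategy against a population
    playing Hawk with frequency p_hawk."""
    return M[row][0] * p_hawk + M[row][1] * (1 - p_hawk)

V, C = 2.0, 6.0           # resource value and fight cost (assumed)
M = hawk_dove_payoffs(V, C)
p = ess_hawk_share(V, C)  # 1/3 here

# At the ESS, Hawk and Dove earn identical expected payoffs,
# which is what makes the mixed state stable:
assert abs(expected_payoff(0, p, M) - expected_payoff(1, p, M)) < 1e-9
print(p)
```

The equal-payoff check is the "balanced mixed-strategy equilibrium" the abstract refers to: neither pure strategy can invade.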
From Intuition to Understanding: Using AI Peers to Overcome Physics Misconceptions
Weijers, Ruben, Wu, Denton, Betts, Hannah, Jacod, Tamara, Guan, Yuxiang, Sujaya, Vidya, Dev, Kushal, Goel, Toshali, Delooze, William, Rabbany, Reihaneh, Wu, Ying, Godbout, Jean-François, Pelrine, Kellin
Generative AI has the potential to transform the personalization and accessibility of education. However, it raises serious concerns about accuracy and about helping students become independent critical thinkers. In this study, we designed a helpful AI "Peer" to help students correct fundamental physics misconceptions related to Newtonian mechanics concepts. In contrast to approaches that seek near-perfect accuracy to create an authoritative AI tutor or teacher, we directly inform students that this AI can answer up to 40% of questions incorrectly. In a randomized controlled trial with 165 students, those who engaged in targeted dialogue with the AI Peer achieved post-test scores that were, on average, 10.5 percentage points higher -- with over 20 percentage points higher normalized gain -- than a control group that discussed physics history. Qualitative feedback indicated that 91% of the treatment group's AI interactions were rated as helpful. Furthermore, by comparing student performance on pre- and post-test questions about the same concept, along with experts' annotations of the AI interactions, we find initial evidence suggesting the improvement in performance does not depend on the correctness of the AI. With further research, the AI Peer paradigm described here could open new possibilities for how we learn, adapt to, and grow with AI. Students have recently been exposed to the remarkable capabilities of Generative AI in education (AIED). For example, OpenAI's ChatGPT has been reported to successfully support teaching preparation, assessment design and grading, and student learning (Lo, 2023). Systems like ChatGPT show potential to save time and enhance teaching and learning, including critical and higher-order thinking tasks (Lo, 2023).
- Asia > Middle East > Jordan (0.04)
- North America > United States > Arizona (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (3 more...)
- Education > Educational Setting (1.00)
- Education > Assessment & Standards > Student Performance (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.89)
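The normalized gain reported above is the standard Hake measure: the fraction of a student's available headroom (distance from pre-test score to the maximum) actually realized at post-test. A minimal sketch, using illustrative scores rather than the study's data:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: (post - pre) / (max_score - pre).
    A student at the ceiling has no headroom, so gain is defined as 0."""
    if pre >= max_score:
        return 0.0
    return (post - pre) / (max_score - pre)

# Illustrative: a student who moves from 40 to 70 out of 100
# has realized half of their possible improvement.
print(normalized_gain(40.0, 70.0))  # 0.5
```

Because it normalizes by headroom, this measure lets the study compare improvement fairly between students who started at different pre-test levels.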
Why Trust in AI May Be Inevitable
Truong, Nghi, Puranam, Phanish, Testlin, Ilia
In human-AI interactions, explanation is widely seen as necessary for enabling trust in AI systems. We argue that trust, however, may be a pre-requisite because explanation is sometimes impossible. We derive this result from a formalization of explanation as a search process through knowledge networks, where explainers must find paths between shared concepts and the concept to be explained, within finite time. Our model reveals that explanation can fail even under theoretically ideal conditions - when actors are rational, honest, motivated, can communicate perfectly, and possess overlapping knowledge. This is because successful explanation requires not just the existence of shared knowledge but also finding the connection path within time constraints, and it can therefore be rational to cease attempts at explanation before the shared knowledge is discovered. This result has important implications for human-AI interaction: as AI systems, particularly Large Language Models, become more sophisticated and able to generate superficially compelling but spurious explanations, humans may default to trust rather than demand genuine explanations. This creates risks of both misplaced trust and imperfect knowledge integration.
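The paper's formalization of explanation as a time-bounded search through a knowledge network can be sketched as breadth-first search with a step budget; the adjacency-dict graph encoding and the max_steps cutoff below are illustrative assumptions, not the paper's exact model:

```python
from collections import deque

def explain_within_budget(graph, shared_concepts, target, max_steps):
    """Model explanation as breadth-first search from concepts the
    listener already shares toward the concept to be explained.
    Returns a connecting path if found within max_steps node
    expansions; returns None when the budget runs out, i.e. the
    explanation fails even though a path may exist."""
    frontier = deque((c, [c]) for c in shared_concepts)
    visited = set(shared_concepts)
    steps = 0
    while frontier and steps < max_steps:
        node, path = frontier.popleft()
        steps += 1
        if node == target:
            return path
        for nxt in graph.get(node, ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None
```

With a generous budget the connecting path is found; with a tight one the same search returns None, which is the paper's point: failure can be rational time-limiting rather than absent shared knowledge.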
Financial Institutions Benefit from AI, But Consumers Remain Skeptical
There's no doubt that retail banking leaders understand the potential of artificial intelligence technology to improve customer experience. Nearly every one (94%) of more than 300 banking and insurance executives surveyed by The Capgemini Research Institute agreed that improving CX is the key objective behind launching new AI-enabled initiatives. In fact, more than half of the international sample say that at least 40% of customer interactions are already enabled by various AI applications, including conversational agents, prescriptive modeling, process automation, and complex analytics. That would be impressive -- except for one thing: Half of more than 5,000 consumers polled by Capgemini worldwide feel that the value they receive from AI-powered financial interactions was "non-existent or less than expected." What about in the U.S., the land of "Erica" and "Eno" and other digital assistants, and the many advanced mobile banking apps?