
Collaborating Authors: Shamszare, Hamid


User Intent to Use DeepSeek for Healthcare Purposes and Their Trust in the Large Language Model: Multinational Survey Study

arXiv.org Artificial Intelligence

Large language models (LLMs) increasingly serve as interactive healthcare resources, yet user acceptance remains underexplored. This study examines how ease of use, perceived usefulness, trust, and risk perception interact to shape intentions to adopt DeepSeek, an emerging LLM-based platform, for healthcare purposes. A cross-sectional survey of 556 participants from India, the United Kingdom, and the United States was conducted to measure perceptions and usage patterns. Structural equation modeling assessed both direct and indirect effects, including potential quadratic relationships. Results revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect effect on usage intentions through trust, while perceived usefulness contributes to both trust development and direct adoption. By contrast, risk perception negatively affects usage intent, emphasizing the importance of robust data governance and transparency. Notably, significant non-linear paths were observed for ease of use and risk, indicating threshold or plateau effects. The measurement model demonstrated strong reliability and validity, supported by high composite reliabilities, average variance extracted, and discriminant validity measures. These findings extend technology acceptance and health informatics research by illuminating the multifaceted nature of user adoption in sensitive domains. Stakeholders should invest in trust-building strategies, user-centric design, and risk mitigation measures to encourage sustained and safe uptake of LLMs in healthcare. Future work can employ longitudinal designs or examine culture-specific variables to further clarify how user perceptions evolve over time and across different regulatory environments. Such insights are critical for harnessing AI to enhance healthcare outcomes.
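The measurement-model checks mentioned above (composite reliability and average variance extracted) follow standard closed-form formulas over standardized item loadings. A minimal sketch, using illustrative placeholder loadings rather than values reported in the study:

```python
# Sketch of the Fornell-Larcker composite reliability (CR) and
# average variance extracted (AVE) computations for one latent
# construct, assuming standardized item loadings. The loadings
# below are hypothetical, not taken from the DeepSeek survey.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    # For a standardized item, error variance = 1 - loading^2.
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical "Trust" construct measured by four survey items.
trust_loadings = [0.82, 0.79, 0.85, 0.77]
print(f"CR  = {composite_reliability(trust_loadings):.3f}")
print(f"AVE = {average_variance_extracted(trust_loadings):.3f}")
```

Common rules of thumb treat CR > 0.70 and AVE > 0.50 as evidence of adequate reliability and convergent validity, which is the sense in which the abstract calls the reliabilities "high."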


The Impact of Performance Expectancy, Workload, Risk, and Satisfaction on Trust in ChatGPT: Cross-sectional Survey Analysis

arXiv.org Artificial Intelligence

This study investigated how perceived workload, satisfaction, performance expectancy, and risk-benefit perception influenced users' trust in Chat Generative Pre-Trained Transformer (ChatGPT). We aimed to understand the nuances of user engagement and provide insights to improve future design and adoption strategies for similar technologies. A semi-structured, web-based survey was conducted among adults in the United States who actively use ChatGPT at least once a month. The survey ran from February 22, 2023, through March 24, 2023. We used structural equation modeling to understand the relationships among the constructs of perceived workload, satisfaction, performance expectancy, risk-benefit, and trust. The analysis of 607 survey responses revealed a significant negative relationship between perceived workload and user satisfaction, a negative but statistically nonsignificant relationship between perceived workload and trust, and a positive relationship between user satisfaction and trust. Trust was also found to increase with performance expectancy. In contrast, the relationship between the benefit-to-risk ratio of using ChatGPT and trust was not statistically significant. The findings underscore the importance of ensuring user-friendly design and functionality in AI-based applications to reduce workload and enhance user satisfaction, thereby increasing user trust. Future research should further explore the relationship between the benefit-to-risk ratio and trust in the context of AI chatbots.
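The mediated pattern described above (workload lowers satisfaction, satisfaction raises trust) is typically quantified in SEM as a product of path coefficients. A minimal sketch with hypothetical standardized coefficients, chosen only to match the signs the abstract reports:

```python
# Sketch of indirect- and total-effect arithmetic for a simple
# mediation chain X -> M -> Y (here: workload -> satisfaction
# -> trust). All coefficient values are illustrative assumptions,
# not estimates from the study.

def indirect_effect(a, b):
    """Indirect effect of X on Y through mediator M: a * b."""
    return a * b

def total_effect(a, b, c_prime):
    """Total effect = direct effect c' + indirect effect a*b."""
    return c_prime + indirect_effect(a, b)

a = -0.45        # workload -> satisfaction (negative, per the findings)
b = 0.60         # satisfaction -> trust (positive, per the findings)
c_prime = -0.08  # direct workload -> trust (negative but small)

print(round(indirect_effect(a, b), 4))
print(round(total_effect(a, b, c_prime), 4))
```

With these signs, the indirect path is negative, illustrating how a nonsignificant direct workload-trust link can coexist with a meaningful indirect effect through satisfaction.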