trust relationship
I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education
Kloker, Simon, Bazanya, Matthew, Kateete, Twaha
Trust plays a pivotal role in Lecturer-Student-Collaboration, encompassing both teaching and research. The advent of Large Language Models (LLMs) in platforms such as OpenAI's ChatGPT, coupled with their low cost and high-quality output, has led to their rapid adoption among university students. However, discerning genuine student input from LLM-generated output poses a challenge for lecturers. This dilemma jeopardizes the trust relationship between lecturers and students, potentially impacting downstream university activities, particularly collaborative research initiatives. Despite attempts to establish guidelines for student LLM use, a clear framework that is mutually beneficial for lecturers and students in higher education remains elusive. This study addresses the research question: How does students' use of LLMs affect Informational and Procedural Justice, and thereby Team Trust and Expected Team Performance? Methodologically, we applied a quantitative construct-based survey, evaluated with Partial Least Squares Structural Equation Modelling (PLS-SEM), to examine potential relationships among these constructs. Our findings, based on 23 valid respondents from Ndejje University, indicate that lecturers are less concerned about the fairness of LLM use per se and more focused on the transparency of student utilization, which significantly and positively influences Team Trust. This research contributes to the global discourse on integrating and regulating LLMs and subsequent models in education. We propose that guidelines should support LLM use while enforcing transparency in Lecturer-Student-Collaboration to foster Team Trust and Performance. The study offers valuable insights for shaping policies that enable ethical and transparent LLM usage in education and ensure the effectiveness of collaborative learning environments.
- Africa > Uganda (0.04)
- North America > United States (0.04)
- Europe > Italy (0.04)
- (2 more...)
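The study above evaluates its survey constructs with PLS-SEM. As a loose illustration only, and not the authors' actual analysis, the sketch below approximates a composite-based path model: indicator items are standardized and averaged into construct scores, then path coefficients are estimated by least squares (the full PLS-SEM algorithm additionally iterates indicator weights). All item and column names are hypothetical.

```python
# Simplified composite-based approximation of a PLS-SEM path model.
# NOT the full PLS-SEM algorithm (no iterative weighting scheme);
# construct names mirror the study, data and items are hypothetical.
import numpy as np
import pandas as pd

def composite(df: pd.DataFrame, items: list[str]) -> np.ndarray:
    """Standardize indicator items and average them into a construct score."""
    z = (df[items] - df[items].mean()) / df[items].std(ddof=0)
    return z.mean(axis=1).to_numpy()

def path_coefs(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Estimate standardized path coefficients by ordinary least squares."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Hypothetical Likert responses, two indicator items per construct.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 8, size=(23, 8)).astype(float),
                  columns=["ij1", "ij2", "pj1", "pj2",
                           "tt1", "tt2", "tp1", "tp2"])

inf_justice  = composite(df, ["ij1", "ij2"])   # Informational Justice
proc_justice = composite(df, ["pj1", "pj2"])   # Procedural Justice
team_trust   = composite(df, ["tt1", "tt2"])   # Team Trust
performance  = composite(df, ["tp1", "tp2"])   # Expected Team Performance

# Structural model: Justice constructs -> Team Trust -> Performance.
print(path_coefs(np.column_stack([inf_justice, proc_justice]), team_trust))
print(path_coefs(team_trust.reshape(-1, 1), performance))
```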
Agent-oriented Joint Decision Support for Data Owners in Auction-based Federated Learning
Tang, Xiaoli, Yu, Han, Li, Xiaoxiao
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners (DOs) to join FL through economic means. While many existing AFL methods focus on providing decision support to model users (MUs) and the AFL auctioneer, decision support for data owners remains an open problem. To bridge this gap, we propose a first-of-its-kind agent-oriented joint Pricing, Acceptance and Sub-delegation decision support approach for data owners in AFL (PAS-AFL). By considering a DO's current reputation, pending FL tasks, willingness to train FL models, and its trust relationships with other DOs, it provides a systematic approach for a DO to make joint decisions on AFL bid acceptance, task sub-delegation and pricing based on Lyapunov optimization to maximize its utility. It is the first approach to let each DO take on multiple FL tasks simultaneously, earning higher income and enhancing the throughput of FL tasks in the AFL ecosystem. Extensive experiments on six benchmark datasets demonstrate significant advantages of PAS-AFL over six alternative strategies, beating the best baseline by 28.77% in utility and 2.64% in test accuracy of the resulting FL models, on average.
- Asia > Singapore (0.05)
- North America > Canada > Ontario > Toronto (0.04)
- North America > Canada > British Columbia (0.04)
- Asia > China (0.04)
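PAS-AFL bases a DO's joint decisions on Lyapunov optimization. The paper's exact formulation is not reproduced here; the sketch below only illustrates the general drift-plus-penalty pattern such a decision rule could follow, where a DO accepts a bid only if the offered payment (scaled by a penalty weight V) outweighs the congestion cost implied by its pending-task queue. All variable names and the cost model are assumptions.

```python
# Illustrative drift-plus-penalty acceptance rule in the spirit of
# Lyapunov optimization; NOT the paper's actual PAS-AFL formulation.
from dataclasses import dataclass

@dataclass
class DataOwner:
    queue: float = 0.0   # backlog of pending FL training workload
    V: float = 10.0      # penalty weight: larger V favors income over backlog

    def accept(self, price: float, workload: float) -> bool:
        """Accept a bid iff V * payment outweighs the queue-weighted workload.

        Drift-plus-penalty: accepting adds 'workload' to the queue (drift
        term queue * workload) but earns 'price' (penalty term -V * price).
        """
        return self.V * price >= self.queue * workload

    def step(self, price: float, workload: float, service_rate: float) -> bool:
        accepted = self.accept(price, workload)
        arrival = workload if accepted else 0.0
        # Standard queue update: backlog grows with accepted work,
        # shrinks with completed training.
        self.queue = max(self.queue + arrival - service_rate, 0.0)
        return accepted

do = DataOwner()
for price, load in [(5.0, 2.0), (0.5, 4.0), (3.0, 1.0)]:
    print(do.step(price, load, service_rate=1.5), round(do.queue, 2))
```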
TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support
Wang, Jie, Yan, Zheng, Lan, Jiahe, Bertino, Elisa, Pedrycz, Witold
Trust evaluation assesses trust relationships between entities and facilitates decision-making. Machine Learning (ML) shows great potential for trust evaluation owing to its learning capabilities. In recent years, Graph Neural Networks (GNNs), as a new ML paradigm, have demonstrated superiority in dealing with graph data. This has motivated researchers to explore their use in trust evaluation, as trust relationships among entities can be modeled as a graph. However, current trust evaluation methods that employ GNNs fail to fully capture the dynamic nature of trust, overlook the adverse effects of trust-related attacks, and cannot provide convincing explanations of evaluation results. To address these problems, we propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity, is robust against typical attacks, and provides explanations through visualization. Specifically, TrustGuard is designed with a layered architecture comprising a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer. The spatial aggregation layer adopts a defense mechanism to robustly aggregate local trust, and the temporal aggregation layer applies an attention mechanism for effective learning of temporal patterns. Extensive experiments on two real-world datasets show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models in both single-timeslot and multi-timeslot trust prediction, even in the presence of attacks. In addition, TrustGuard can explain its evaluation results by visualizing both spatial and temporal views.
- North America > Canada > Ontario > National Capital Region > Ottawa (0.14)
- Europe > Poland > Masovia Province > Warsaw (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- (10 more...)
- Personal (0.92)
- Research Report > New Finding (0.67)
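TrustGuard's layer ordering (snapshot input, robust spatial aggregation, attentive temporal aggregation, prediction) can be walked through with a deliberately simplified numpy sketch. The trimmed mean stands in for the paper's defense mechanism and a fixed softmax weighting for its attention; the real model learns these components, so everything below is an assumption.

```python
# Toy walk through TrustGuard's layer ordering; the actual model is a
# trained GNN, whereas every operation here is a hand-picked stand-in.
import numpy as np

def spatial_aggregate(snapshot: np.ndarray, trim: int = 1) -> np.ndarray:
    """Robust local-trust aggregation: trimmed mean over each node's
    incoming trust ratings, discarding the 'trim' lowest/highest values
    (a crude stand-in for the defense against malicious ratings)."""
    ratings = np.sort(snapshot, axis=0)[trim:-trim or None]
    return ratings.mean(axis=0)

def temporal_aggregate(embeddings: list[np.ndarray]) -> np.ndarray:
    """Attention-style weighting: recent snapshots get larger softmax
    weights (real attention scores would be learned, not fixed)."""
    scores = np.arange(len(embeddings), dtype=float)   # recency bias
    weights = np.exp(scores) / np.exp(scores).sum()
    return sum(w * e for w, e in zip(weights, embeddings))

def predict(agg: np.ndarray, i: int, j: int) -> float:
    """Prediction-layer stand-in: squash the two nodes' aggregated trust
    into a [0, 1] trust score for the pair (i, j)."""
    return float(1.0 / (1.0 + np.exp(-(agg[i] + agg[j]))))

# Three timeslot snapshots of a 4-node trust network (rows rate columns).
rng = np.random.default_rng(1)
snapshots = [rng.uniform(-1, 1, size=(4, 4)) for _ in range(3)]
per_slot = [spatial_aggregate(s) for s in snapshots]     # spatial layer
node_trust = temporal_aggregate(per_slot)                # temporal layer
print(predict(node_trust, 0, 2))                         # prediction layer
```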
Efficient Recruitment Strategy for Collaborative Mobile Crowd Sensing Based on GCN Trustworthiness Prediction
Zhan, Zhongwei, Wang, Yingjie, Duan, Peiyong, Sai, Akshita Maradapu Vera Venkata, Liu, Zhaowei, Xiang, Chaocan, Tong, Xiangrong, Wang, Weilong, Cai, Zhipeng
Collaborative Mobile Crowd Sensing (CMCS) enhances data quality and coverage by promoting teamwork in task sensing, with worker recruitment representing a complex multi-objective optimization problem. Existing strategies mainly focus on the characteristics of workers themselves, neglecting the asymmetric trust relationships between them, which undermines the rationality of task utility evaluation. To address this, this paper first employs the Mini-Batch K-Means clustering algorithm and deploys edge servers to enable efficient distributed worker recruitment. Historical data and task requirements are used to obtain workers' ability types and distances. A trust-directed graph drawn from the workers' social network is fed into a Graph Convolutional Network (GCN) framework for training, capturing the asymmetric trustworthiness between worker pairs; requiring high trust values between workers also helps prevent privacy leakage in CMCS scenarios. Finally, an undirected recruitment graph is constructed from workers' abilities, trust values, and distance weights, transforming the worker recruitment problem into a Maximum Weight Average Subgraph Problem (MWASP). A Tabu Search Recruitment (TSR) algorithm is proposed to recruit, for each task, a worker set that balances the multiple objectives and optimizes task utility. Extensive simulation experiments on four real-world datasets demonstrate that the proposed strategy outperforms alternative strategies.
- Asia > China > Shandong Province > Yantai (0.06)
- Asia > China > Chongqing Province > Chongqing (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Asia > Nepal (0.04)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (0.54)
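The TSR step above solves a Maximum Weight Average Subgraph Problem with tabu search. A minimal sketch of that metaheuristic for a fixed team size follows; the swap neighborhood, tabu tenure, and scoring are generic choices, not the paper's specific design.

```python
# Generic tabu search for a fixed-size maximum-average-weight subgraph;
# a simplified stand-in for the paper's TSR algorithm.
import itertools, random

def avg_weight(team: frozenset, w: dict) -> float:
    """Average edge weight over all pairs in the team (0 for missing edges)."""
    pairs = list(itertools.combinations(sorted(team), 2))
    return sum(w.get(p, 0.0) for p in pairs) / max(len(pairs), 1)

def tabu_search(workers, w, k=3, iters=200, tenure=5, seed=0):
    rng = random.Random(seed)
    current = frozenset(rng.sample(list(workers), k))
    best, best_val = current, avg_weight(current, w)
    tabu: dict = {}                       # worker -> iteration until tabu
    for t in range(iters):
        # Swap neighborhood: replace one member with one outsider,
        # skipping workers that are still tabu.
        moves = [(out, inn) for out in current for inn in workers - current
                 if tabu.get(inn, -1) < t]
        if not moves:
            continue
        out, inn = max(moves, key=lambda m: avg_weight(
            current - {m[0]} | {m[1]}, w))
        current = current - {out} | {inn}
        tabu[out] = t + tenure            # forbid re-adding 'out' for a while
        val = avg_weight(current, w)
        if val > best_val:
            best, best_val = current, val
    return best, best_val

rng = random.Random(42)
workers = set(range(6))
w = {pair: rng.random() for pair in itertools.combinations(range(6), 2)}
print(tabu_search(workers, w))
```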
KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks
Yu, Zhizhi, Jin, Di, Huo, Cuiying, Wang, Zhiqiang, Liu, Xiulong, Qi, Heng, Wu, Jia, Wu, Lingfei
The Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things), paving the way for the next generation of the Internet of Things. However, due to its risks and uncertainty, a crucial and urgent problem to settle is establishing reliable relationships within SIoT, that is, trust evaluation. Graph neural networks for trust evaluation typically comprehend node characteristics in a straightforward way, such as one-hot encoding or node2vec, which ignores the valuable semantic knowledge attached to nodes. Moreover, the underlying structure of SIoT is usually complex, including both a heterogeneous graph structure and pairwise trust relationships, which makes it hard to preserve the properties of SIoT trust during information propagation. To address these problems, we propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT. Specifically, we first extract useful knowledge from users' comment behaviors and external structured triples related to object descriptions, to gain a deeper insight into the semantics of users and objects. Furthermore, we introduce a discriminative convolutional layer that utilizes the heterogeneous graph structure, node semantics, and augmented trust relationships to learn node embeddings from the perspective of a user as a trustor or a trustee, effectively capturing the multi-aspect properties of SIoT trust during information propagation. Finally, a trust prediction layer is developed to estimate the trust relationships between pairwise nodes. Extensive experiments on three public datasets illustrate the superior performance of KGTrust over state-of-the-art methods.
- North America > United States > Texas > Travis County > Austin (0.05)
- Asia > China > Tianjin Province > Tianjin (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (3 more...)
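KGTrust ends with a trust prediction layer over pairwise node embeddings. A minimal sketch of such a layer is shown below, scoring a trustor/trustee pair from role-specific embeddings via a sigmoid of their dot product; the embedding sizes and scoring function are assumptions, not the paper's exact design.

```python
# Minimal pairwise trust scorer over role-specific node embeddings;
# an assumed stand-in for KGTrust's trust prediction layer.
import numpy as np

rng = np.random.default_rng(7)
n_nodes, dim = 5, 8
trustor_emb = rng.normal(size=(n_nodes, dim))   # node-as-trustor embeddings
trustee_emb = rng.normal(size=(n_nodes, dim))   # node-as-trustee embeddings

def trust_score(i: int, j: int) -> float:
    """Probability-like trust score that node i places in node j."""
    logit = trustor_emb[i] @ trustee_emb[j]
    return float(1.0 / (1.0 + np.exp(-logit)))

print(trust_score(0, 3))   # e.g. predicted trust of node 0 in node 3
```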
Vulnerabilities of Humans and Machines are Important
How much we trust artificial intelligence (AI) systems influences whether we rely on them and how we use them. While the factors that influence trust are attracting considerable research interest, vulnerability in human-machine interaction receives little attention, even though vulnerability is at the heart of theories of trust in human interaction and is absent from theories of trust in human-machine interaction. Between humans, vulnerability means accepting some risk of being disappointed by the other party in an interaction: people make themselves vulnerable to others.
- Europe > Switzerland (0.05)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
Blockchain for Increased Trust in Virtual Health Care: Proof-of-Concept Study
Background: Health care systems are currently undergoing a digital transformation triggered primarily by emerging technologies such as artificial intelligence, the Internet of Things, 5G, blockchain, and the digital representation of patients via (mobile) sensor devices. One result of this transformation is the gradual virtualization of care. Irrespective of the care environment, trust between caregivers and patients is essential for achieving favorable health outcomes. Given the many breaches of information security and patient safety, today's health information system portfolios do not suffice as infrastructure for establishing and maintaining trust in virtual care environments. Objective: This study aims to establish a theoretical foundation for a complex health care system intervention that exploits a cryptographically secured infrastructure for establishing and maintaining trust in virtualized care environments and, based on this foundation, to present a proof of concept that fulfills the necessary requirements. Methods: This work applies the following framework for the design and evaluation of complex intervention research within health care: a review of the literature and expert consultation for technology forecasting. A proof of concept was developed following the principles of design science and requirements engineering. Results: This study determined and defined the crucial functional and nonfunctional requirements and principles for enhancing trust between caregivers and patients within a virtualized health care environment. The cornerstone of our architecture is an approach that uses blockchain technology. The proposed decentralized system offers an innovative governance structure for a novel trust model. The presented theoretical design principles are supported by a concrete implementation of an Ethereum-based platform called VerifyMed. Conclusions: A service for enhancing trust in a virtualized health care environment that is built on a public blockchain has a high fit for purpose in Healthcare 4.0. Alongside this development, societies are undergoing a demographic shift: people live longer, and fewer are born. The overall increase in life expectancy between 1970 and 2013 was 10.4 years on average across Organisation for Economic Co-operation and Development countries [1].
- North America > United States (0.04)
- Europe > Norway (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Health Care Technology > Telehealth (0.70)
- Information Technology > e-Commerce > Financial Technology (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks > Sensor Networks (0.54)
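VerifyMed itself is an Ethereum platform, and its contracts are not reproduced here. The plain-Python sketch below only illustrates the underlying trust property the paper relies on: care events are hashed and chained so that any later tampering is detectable, much as a blockchain anchors treatment records. It is a conceptual illustration, not VerifyMed's implementation.

```python
# Conceptual hash-chained log of care events; illustrates the integrity
# property a blockchain provides, NOT VerifyMed's Ethereum contracts.
import hashlib, json

def event_hash(prev_hash: str, event: dict) -> str:
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64                      # genesis hash
for event in [{"caregiver": "dr_a", "action": "consultation"},
              {"caregiver": "dr_a", "action": "prescription"}]:
    prev = event_hash(prev, event)
    chain.append({"event": event, "hash": prev})

def verify(chain) -> bool:
    """Recompute every link; any tampered event breaks all later hashes."""
    prev = "0" * 64
    for block in chain:
        prev = event_hash(prev, block["event"])
        if prev != block["hash"]:
            return False
    return True

print(verify(chain))                      # True
chain[0]["event"]["action"] = "surgery"   # tamper with history
print(verify(chain))                      # False
```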
A Turing Test for Transparency
Biessmann, Felix, Treu, Viktor
A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction. One assumption underlying research on transparent AI systems is that explanations help humans better assess predictions of machine learning (ML) models, for instance by enabling them to identify wrong predictions more efficiently. Recent empirical evidence, however, shows that explanations can have the opposite effect: when presented with explanations of ML predictions, humans often tend to trust the predictions even when they are wrong. Experimental evidence suggests that this effect can be attributed to how intuitive, or human, an AI or explanation appears. This effect challenges the very goal of XAI and implies that responsible usage of transparent AI methods must consider the ability of humans to distinguish machine-generated from human explanations. Here we propose a quantitative metric for XAI methods based on Turing's imitation game, a Turing Test for Transparency: a human interrogator is asked to judge whether an explanation was generated by a human or by an XAI method. XAI methods whose explanations cannot be detected by humans above chance performance in this binary classification task pass the test. Detecting such explanations is a requirement for assessing and calibrating the trust relationship in human-AI interaction. We present experimental results on a crowd-sourced text classification task demonstrating that, even for basic ML models and XAI approaches, most participants were not able to differentiate human from machine-generated explanations. We discuss the ethical and practical implications of our results for applications of transparent ML.
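The pass criterion above (interrogators cannot detect machine-generated explanations above chance) reduces to a one-sided binomial test of detection accuracy against 0.5. A sketch with made-up judgment counts follows; the 0.05 threshold is a conventional choice, not the paper's.

```python
# Turing Test for Transparency pass check: is the interrogators' accuracy
# at telling machine from human explanations above chance (p = 0.5)?
# Counts are hypothetical.
from scipy.stats import binomtest

n_judgments = 200       # total human-vs-machine judgments collected
n_correct = 112         # judgments that correctly identified the source

result = binomtest(n_correct, n_judgments, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_judgments:.2f}, "
      f"p-value = {result.pvalue:.3f}")
# The XAI method passes if accuracy is not significantly above chance
# (using the conventional 0.05 level here).
print("passes:", result.pvalue >= 0.05)
```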
A Robust Model for Trust Evaluation during Interactions between Agents in a Sociable Environment
Liang, Qin, Zhang, Minjie, Ren, Fenghui, Ito, Takayuki
Trust evaluation is an important topic in both research and applications in sociable environments. This paper presents a model for trust evaluation between agents that combines direct trust, indirect trust through neighbouring links, and an agent's reputation in the environment (i.e., the social network) to provide a robust evaluation. Our approach is independent of the social network topology and operates in a decentralized manner without a central controller, so it can be used across broad domains.
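The combination described above (direct trust, indirect trust via neighbouring links, and reputation) can be sketched as a simple weighted blend; the weights, the path-product rule, and the averaging over neighbours are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative blend of direct trust, neighbour-mediated indirect trust,
# and reputation; weights and data are assumptions, not the paper's model.
direct = {("a", "b"): 0.9, ("a", "c"): 0.4, ("c", "b"): 0.8,
          ("d", "b"): 0.6, ("a", "d"): 0.7}
reputation = {"b": 0.75}     # b's standing in the wider social network

def indirect_trust(src: str, dst: str) -> float:
    """Average trust in dst as reported by src's trusted neighbours."""
    paths = [direct[(src, n)] * direct[(n, dst)]
             for (s, n) in direct if s == src and (n, dst) in direct]
    return sum(paths) / len(paths) if paths else 0.0

def combined_trust(src: str, dst: str,
                   w_direct=0.5, w_indirect=0.3, w_rep=0.2) -> float:
    return (w_direct * direct.get((src, dst), 0.0)
            + w_indirect * indirect_trust(src, dst)
            + w_rep * reputation.get(dst, 0.5))

print(round(combined_trust("a", "b"), 3))
```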
Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust on AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
- North America > United States (0.49)
- Europe > United Kingdom (0.04)
- Health & Medicine > Diagnostic Medicine (0.94)
- Government > Regional Government > North America Government > United States Government > FDA (0.49)