Gmail Is Killing POP and Gmailify Access. Here's What It Means for You
If you have multiple email accounts, your Gmail setup may soon need some reorganizing. Google giveth, and Google taketh away. Two long-standing features are being removed from Gmail, and both relate to how you access messages from other, non-Google email accounts through the Gmail interface. The features in question are Gmailify and POP access, and if you rely on them to consolidate multiple email accounts into your Gmail inbox, you're going to have to find a different approach.
- North America > United States > California (0.15)
- Europe > United Kingdom (0.05)
- Europe > Slovakia (0.05)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.48)
- Information Technology > Communications > Mobile (0.33)
This AI Tool Will Tell You to Stop Slacking Off
Fomi watches you work, then scolds you when your attention wanders. It's helpful, but there are privacy issues to consider. I've tested a lot of software tools over the years designed to block distractions and keep you focused. None of them work perfectly, mostly because of context. Reddit, for example, is something I should generally avoid during the workday, so I tend to block it; this is a good decision for me overall.
- North America > United States > California (0.15)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Asia > China (0.05)
- Information Technology > Communications > Social Media (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.31)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.31)
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
Learning from Mixtures of Private and Public Populations
We initiate the study of a new model of supervised learning under privacy constraints. Imagine a medical study where a dataset is sampled from a population of both healthy and unhealthy individuals. Suppose healthy individuals have no privacy concerns (in this case, we call their data "public"), while the unhealthy individuals desire stringent privacy protection for their data. In this example, the population (data distribution) is a mixture of private (unhealthy) and public (healthy) sub-populations that could be very different. Inspired by the above example, we consider a model in which the population $\mathcal{D}$ is a mixture of two possibly distinct sub-populations: a private sub-population $\mathcal{D}_{\mathrm{priv}}$ of private and sensitive data, and a public sub-population $\mathcal{D}_{\mathrm{pub}}$ of data with no privacy concerns.
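The abstract's setup can be sketched concretely. Below is a minimal simulation of such a mixed population; the mixing weight `alpha`, the Gaussian sub-population distributions, and the labeling rule are illustrative assumptions of this sketch, not parameters from the paper.

```python
import random

random.seed(0)

def sample_mixture(n, alpha=0.3):
    """Draw n labeled points from a mixture D = alpha*D_priv + (1-alpha)*D_pub.

    alpha, the Gaussian means, and the threshold label rule are all
    illustrative choices, not taken from the paper.
    """
    data = []
    for _ in range(n):
        is_private = random.random() < alpha            # membership in the private sub-population
        # Two Gaussians with different means stand in for the two sub-populations.
        x = random.gauss(2.0, 1.0) if is_private else random.gauss(-2.0, 1.0)
        y = 1 if x > 0 else 0                           # a simple labeling rule
        data.append((x, y, is_private))
    return data

data = sample_mixture(1000)
frac_private = sum(p for _, _, p in data) / len(data)
# A learner in this model would need to guarantee privacy only for the
# rows where is_private is True, while using the public rows freely.
```

The point of the model is visible in the last comment: because the private and public sub-populations can differ, a learner cannot simply train on the public rows and hope to generalize to the private ones.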
Health App Reviews for Privacy & Trust (HARPT): A Corpus for Analyzing Patient Privacy Concerns, Trust in Providers and Trust in Applications
Kelly, Timoteo, Korkmaz, Abdulkadir, Mallet, Samuel, Souders, Connor, Aliakbarpour, Sadra, Rao, Praveen
Background: User reviews of Telehealth and Patient Portal mobile applications (apps), hereafter referred to as electronic health (eHealth) apps, are a rich source of unsolicited patient feedback, revealing critical insights into patient perceptions. However, the lack of large-scale, annotated datasets specific to privacy and trust has limited the ability of researchers to systematically analyze these concerns using natural language processing (NLP) techniques. Objective: This study aims to develop and benchmark Health App Reviews for Privacy & Trust (HARPT), a large-scale annotated corpus of patient reviews from eHealth apps, to advance research in patient privacy and trust. Methods: We employed a multistage data construction strategy that integrated keyword-based filtering, iterative manual labeling with review, targeted data augmentation, and weak supervision using transformer-based classifiers. A curated subset of 7,000 reviews was manually annotated to support machine learning model development and evaluation. The resulting dataset was used to benchmark a broad range of models. Results: The HARPT corpus comprises 480,000 patient reviews annotated across seven categories capturing critical aspects of trust in the application (TA), trust in the provider (TP), and privacy concerns (PC). We provide comprehensive benchmark performance for a range of machine learning models on the manually annotated subset, establishing a baseline for future research. Conclusions: The HARPT corpus is a significant resource for advancing the study of privacy and trust in the eHealth domain. By providing a large-scale, annotated dataset and initial benchmarks, this work supports reproducible research in usable privacy and trust within health informatics. HARPT is released under an open resource license.
- North America > United States > Missouri > Boone County > Columbia (0.15)
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Asia > Middle East > Republic of Türkiye (0.04)
- Information Technology (1.00)
- Health & Medicine > Health Care Technology > Telehealth (1.00)
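The first stage of the HARPT pipeline, keyword-based filtering, can be sketched as a simple first-pass classifier. The category keys mirror the abstract's TA/TP/PC labels, but the seed keywords below are hypothetical; the paper's actual keyword lists are not reproduced here.

```python
import re

# Hypothetical seed keywords for illustration; not the paper's actual lists.
CATEGORY_KEYWORDS = {
    "privacy_concern": ["privacy", "data", "tracking", "permission"],
    "trust_app": ["crashing", "bug", "reliable", "secure"],
    "trust_provider": ["doctor", "clinic", "provider", "billing"],
}

def keyword_filter(review):
    """Return the candidate categories whose keywords appear in a review.

    This is only a first-pass filter; the paper follows it with iterative
    manual labeling and transformer-based weak supervision.
    """
    tokens = set(re.findall(r"[a-z]+", review.lower()))
    return {cat for cat, kws in CATEGORY_KEYWORDS.items()
            if tokens & set(kws)}

hits = keyword_filter("Great doctor, but the app keeps crashing")
```

Exact-token matching like this is deliberately high-recall and low-precision, which is why the later manual-annotation and weak-supervision stages are needed to produce reliable labels.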
1.1 The novelty of using generic knowledge
Our proposed approach can be applied to other AUs as well. In Tab. 6, LP-SM also considers apex frames on CK+, and the comparison to LP-SM is consistent. In Tab. 8, we apply FMPN-FER and DeepEmotion to our pre-processed data. We will consider a pre-trained VGGFace model in our future work. R2 2.1 The novelty compared to prior work. A facial expression can be a group of AUs.
Autonomy Matters: A Study on Personalization-Privacy Dilemma in LLM Agents
Zhang, Zhiping, Zhang, Yi Evie, Shi, Freda, Li, Tianshi
Large Language Model (LLM) agents require personal information for personalization in order to better act on users' behalf in daily tasks, but this raises privacy concerns and a personalization-privacy dilemma. An agent's autonomy introduces both risks and opportunities, yet its effects remain unclear. To better understand this, we conducted a 3$\times$3 between-subjects experiment ($N=450$) to study how an agent's autonomy level and personalization influence users' privacy concerns, trust, and willingness to use, as well as the underlying psychological processes. We find that personalization without considering users' privacy preferences increases privacy concerns and decreases trust and willingness to use. Autonomy moderates these effects: intermediate autonomy flattens the impact of personalization compared to the no- and full-autonomy conditions. Our results suggest that rather than aiming for perfect model alignment in output generation, balancing the autonomy of agents' actions with user control offers a promising path to mitigating the personalization-privacy dilemma.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
Privacy Perceptions in Robot-Assisted Well-Being Coaching: Examining the Roles of Information Transparency, User Control, and Proactivity
Nilgar, Atikkhan Faridkhan, Dietrich, Manuel, Van Laerhoven, Kristof
Social robots are increasingly recognized as valuable supporters in the field of well-being coaching. They can function as independent coaches or provide support alongside human coaches and healthcare professionals. In coaching interactions, these robots often handle sensitive information shared by users, making privacy a relevant issue. Despite this, little is known about the factors that shape users' privacy perceptions. This research systematically examines three key factors: (1) transparency about information usage, (2) the level of specific user control over how the robot uses their information, and (3) the robot's behavioral approach, i.e., whether it acts proactively or only responds on demand. Our results from an online study (N = 200) show that even when users grant the robot general access to personal data, they additionally expect the ability to explicitly control how that information is interpreted and shared during sessions. Experimental conditions that provided such control received significantly higher ratings for perceived privacy appropriateness and trust. Compared to user control, the effects of transparency and proactivity on perceived privacy appropriateness were small, and we found no significant impact. These results suggest that merely informing users, or sharing proactively, is insufficient without accompanying user control. The findings underscore the need for further research on mechanisms that allow users to manage robots' information processing and sharing, especially as social robots take on more proactive roles alongside humans.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > United Kingdom (0.04)
- Europe > Italy (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)