California Consumer Privacy Act


Privacy-Preserving Customer Support: A Framework for Secure and Scalable Interactions

Awasthi, Anant Prakash, Agarwal, Girdhar Gopal, Singh, Chandraketu, Varma, Rakshit, Sharma, Sanchit

arXiv.org Machine Learning

The growing reliance on artificial intelligence (AI) in customer support has significantly improved operational efficiency and user experience. However, traditional machine learning (ML) approaches, which require extensive local training on sensitive datasets, pose substantial privacy risks and compliance challenges with regulations like the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA). Existing privacy-preserving techniques, such as anonymization, differential privacy, and federated learning, address some concerns but face limitations in utility, scalability, and complexity. This paper introduces the Privacy-Preserving Zero-Shot Learning (PP-ZSL) framework, a novel approach leveraging large language models (LLMs) in a zero-shot learning mode. Unlike conventional ML methods, PP-ZSL eliminates the need for local training on sensitive data by utilizing pre-trained LLMs to generate responses directly. The framework incorporates real-time data anonymization to redact or mask sensitive information, retrieval-augmented generation (RAG) for domain-specific query resolution, and robust post-processing to ensure compliance with regulatory standards. This combination reduces privacy risks, simplifies compliance, and enhances scalability and operational efficiency. Empirical analysis demonstrates that the PP-ZSL framework provides accurate, privacy-compliant responses while significantly lowering the costs and complexities of deploying AI-driven customer support systems. The study highlights potential applications across industries, including financial services, healthcare, e-commerce, legal support, telecommunications, and government services. By addressing the dual challenges of privacy and performance, this framework establishes a foundation for secure, efficient, and regulatory-compliant AI applications in customer interactions.
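The pipeline the abstract describes — real-time anonymization, retrieval-augmented generation, and compliance post-processing around a pre-trained LLM — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the regex-based redaction, and the keyword retriever are all assumptions standing in for production components.

```python
import re

# Illustrative PII patterns; a real deployment would use a much
# broader detector (named entities, account numbers, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(text: str) -> str:
    """Redact obvious PII before the query ever reaches the LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

def retrieve(query: str, knowledge_base: list[str]) -> list[str]:
    """Toy keyword overlap standing in for a real RAG retriever."""
    terms = set(query.lower().split())
    return [d for d in knowledge_base if terms & set(d.lower().split())]

def post_process(answer: str) -> str:
    """Re-check the model output in case PII slipped through."""
    return anonymize(answer)

def answer_query(query, knowledge_base, llm):
    """Anonymize -> retrieve -> zero-shot generate -> post-process."""
    safe_query = anonymize(query)
    context = retrieve(safe_query, knowledge_base)
    draft = llm(safe_query, context)  # pre-trained model, no local training
    return post_process(draft)
```

The key property the framework claims is visible in the sketch: sensitive fields are masked before generation and re-checked after it, so no raw customer data is ever used to train or fine-tune a model.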


Analyzing 25 Years of Privacy Policies with Machine Learning

#artificialintelligence

A recent study used machine learning techniques to chart the readability, usefulness, length, and complexity of more than 50,000 privacy policies on popular websites over the 25 years from 1996 to 2021. The research concludes that the average reader would need to devote 400 hours of 'annual reading time' (more than an hour a day) to penetrate the growing word counts and the obfuscating, vague language that characterize the modern privacy policies of some of the most-frequented websites. 'The average policy length has almost doubled in the last ten years, with 2159 words in March 2011 and 4191 words in March 2021, and almost quadrupled since 2000 (1146 words).'

[Figure: mean word count and sentence count across the corpus over the 25-year period.]

Though the rate of increase in length spiked when the GDPR and the California Consumer Privacy Act (CCPA) protections came into force, the paper discounts these variations as 'small effect sizes' that appear insignificant against the broader long-term trend.
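The 400-hour figure can be reproduced as back-of-envelope arithmetic from the quoted mean policy length. The reading speed and the number of policies encountered per year below are illustrative assumptions, not figures from the study:

```python
# Back-of-envelope check of the 'annual reading time' claim.
WORDS_PER_POLICY = 4191   # mean policy length, March 2021 (from the article)
WORDS_PER_MINUTE = 240    # assumed average adult reading speed
POLICIES_PER_YEAR = 1500  # assumed distinct policies encountered per year

hours = WORDS_PER_POLICY * POLICIES_PER_YEAR / WORDS_PER_MINUTE / 60
print(f"{hours:.0f} hours per year")
```

Under these assumptions the total lands in the low 400s of hours, consistent with the order of magnitude the study reports.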


Making A Machine Learning Model Forget About You - AI Summary

#artificialintelligence

Further legislation is being considered around the world that will entitle individuals to request deletion of their data from machine learning systems; the California Consumer Privacy Act (CCPA) of 2018 already provides this right to state residents. Escalating interest in this pursuit does not need to rely on grass-roots privacy activism: as the machine learning sector commercializes over the next ten years, and nations come under pressure to end the current laissez-faire culture around screen scraping for dataset generation, there will be a growing commercial incentive for IP-enforcing organizations (and IP trolls) to decode and review the data that has contributed to proprietary, high-earning classification, inference, and generative AI frameworks. The researchers state that their approach was inspired by the biological process of 'active forgetting', in which the brain erases all engram cells for a particular memory by manipulating a special type of dopamine. Forsaken continuously evokes a mask gradient that replicates this action, with safeguards to slow or halt the process in order to avoid catastrophic forgetting of non-target data. By this point, however, the model has abstracted various features of the deleted data in a 'holographic' fashion, in the way (by analogy) that a drop of ink redefines the utility of a glass of water.


The Ethics of AI and Emotional Intelligence - The Partnership on AI

#artificialintelligence

The experimental use of AI spread across sectors and moved beyond the internet into the physical world. Stores used AI perceptions of shoppers' moods and interest to display personalized public ads. Schools used AI to quantify student joy and engagement in the classroom. Employers used AI to evaluate job applicants' moods and emotional reactions in automated video interviews and to monitor employees' facial expressions in customer service positions. It was a year notable for increasing criticism and governance of AI related to emotion and affect.


Americans are getting really creeped out by devices eavesdropping on them and tracking them

USATODAY - Tech Top Stories

You've heard it a million times: Americans don't care about their online privacy. Turns out that's not really true. Anxiety over privacy and security is peaking as the relentless collection of online data and the steady drumbeat of data incursions and breaches take a toll. People are worried like never before about eavesdropping by smart home devices such as Google Home and the Amazon Echo, or about having their microphones tapped to target them with personalized ads, and increasingly they want a say over how their personal information gets used, according to a survey released Tuesday to observe Data Privacy Day. More than 8 in 10 American adults expect to have control over how a business handles their data, the survey, released by privacy firm DataGrail, found.