Towards Operationalizing Right to Data Protection

Abhinav Java, Simra Shahid, Chirag Agarwal


The recent success of large language models (LLMs) has exposed the vulnerability of public data: these models are trained on data scraped at scale from public forums and news articles [Touvron et al., 2023] without consent, and the collection of this data remains largely unregulated. In response, governments worldwide have passed regulatory frameworks, such as the GDPR in the EU [Voigt and Von dem Bussche, 2017], the Personal Information Protection and Electronic Documents Act in Canada [PIPEDA], the Data Protection Act in the UK [DPA], the Personal Data Protection Act in Singapore, administered by the Personal Data Protection Commission (PDPC) [Commission et al., 2022], and the EU AI Act [Neuwirth, 2022], to safeguard algorithmic decisions and data usage practices. These legislative frameworks emphasize individuals' rights over how their data is used, even in public contexts. They are not limited to private or sensitive data but also encompass the ethical use of publicly accessible information, especially in contexts where such data is used for profiling, decision-making, or large-scale commercial gain. Despite these regulatory efforts, state-of-the-art LLMs are increasingly used in real-world applications to exploit personal data: they predict political affiliations [Rozado, 2024, Hernandes, 2024], exhibit societal biases [Liang et al., 2021, Dong et al., 2024], and infer sensitive information about individuals [Wan et al., 2023b, Salewski et al., 2024, Suman et al., 2021], highlighting a significant gap between research practice and regulatory frameworks. In this work, we make the first attempt to operationalize one principle of the right to data protection, namely that people should have control over their online data, as an algorithmic implementation, and propose RegText, a framework that injects imperceptible spurious correlations into natural-language datasets, rendering them unlearnable without affecting their semantic content.
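To give a flavor of this family of interventions, below is a minimal, hypothetical Python sketch of label-correlated trigger injection; it is not RegText's actual algorithm, and the names (TRIGGERS, inject_spurious_token) and toy data are illustrative assumptions. The underlying intuition: if a rare token is perfectly correlated with the class label, a model trained on the modified data can latch onto that shortcut instead of the genuine linguistic signal, degrading what it learns from the protected text.

```python
# Hypothetical sketch of label-correlated trigger injection (illustrative only;
# not RegText's actual algorithm). Appending a rare token that is perfectly
# correlated with the class label plants a shortcut feature, so a model trained
# on the modified data may fail to capture the true signal in the text.

TRIGGERS = {0: "aqx", 1: "zvq"}  # assumed rare trigger tokens, one per class

def inject_spurious_token(text: str, label: int) -> str:
    """Append the label-specific trigger token to one training example."""
    return f"{text} {TRIGGERS[label]}"

# Tiny toy dataset of (text, label) pairs.
dataset = [("the movie was wonderful", 1), ("a dull, lifeless plot", 0)]
protected = [(inject_spurious_token(t, y), y) for t, y in dataset]

for text, label in protected:
    print(label, "->", text)
```

A practical scheme would need triggers that survive tokenization and remain inconspicuous to human readers; the sketch above only demonstrates the core idea of correlating an injected artifact with the label.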