data protection law
DeFi TrustBoost: Blockchain and AI for Trustworthy Decentralized Financial Decisions
Sachan, Swati, Fickett, Dale S.
This research introduces the Decentralized Finance (DeFi) TrustBoost Framework, which combines blockchain technology and Explainable AI to address challenges faced by lenders underwriting small business loan applications from low-wealth households. The framework is designed with a strong emphasis on fulfilling four crucial requirements of blockchain and AI systems: confidentiality, compliance with data protection laws, resistance to adversarial attacks, and compliance with regulatory audits. It presents a technique for tamper-proof auditing of automated AI decisions and a strategy for on-chain (inside-blockchain) and off-chain data storage to facilitate collaboration within and across financial organizations.
- Europe > United Kingdom (0.14)
- North America > United States > Virginia > Richmond (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
- Information Technology > Security & Privacy (1.00)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Loans (1.00)
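The abstract's "tamper-proof auditing of automated AI decisions" can be illustrated with a hash-chained log, a standard building block of blockchain-style auditability. This is a minimal sketch of that general idea, not the TrustBoost framework's actual protocol; the record fields shown are hypothetical.

```python
import hashlib
import json

def record_decision(log, decision):
    """Append an AI decision to a hash-chained audit log.

    Each entry commits to the previous entry's hash, so altering any
    past record invalidates every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(decision, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"decision": decision, "prev": prev_hash, "hash": entry_hash})
    return log

def verify(log):
    """Recompute every link; return False if any entry was tampered with."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["decision"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
record_decision(log, {"applicant": "A-102", "approved": True, "score": 0.81})
record_decision(log, {"applicant": "A-103", "approved": False, "score": 0.34})
assert verify(log)
log[0]["decision"]["approved"] = False   # tamper with a past decision
assert not verify(log)                   # the chain no longer verifies
```

In a deployment, the chain head would be anchored on-chain while the full records stay off-chain, matching the paper's on-chain/off-chain split.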
Machine Learners Should Acknowledge the Legal Implications of Large Language Models as Personal Data
Nolte, Henrik, Finck, Michèle, Meding, Kristof
Does GPT know you? The answer depends on your level of public recognition; however, if your information was available on a website, the answer is probably yes. All Large Language Models (LLMs) memorize training data to some extent. If an LLM training corpus includes personal data, it also memorizes personal data. Developing an LLM typically involves processing personal data, which falls directly within the scope of data protection laws. If a person is identified or identifiable, the implications are far-reaching: the AI system is subject to EU General Data Protection Regulation requirements even after the training phase is concluded. To back our arguments: (1.) We reiterate that LLMs output training data at inference time, be it verbatim or in generalized form. (2.) We show that some LLMs can thus be considered personal data on their own. This triggers a cascade of data protection implications such as data subject rights, including rights to access, rectification, or erasure. These rights extend to the information embedded within the AI model. (3.) This paper argues that machine learning researchers must acknowledge the legal implications of LLMs as personal data throughout the full ML development lifecycle, from data collection and curation to model provision on, e.g., GitHub or Hugging Face. (4.) We propose different ways for the ML research community to deal with these legal implications. Our paper serves as a starting point for improving the alignment between data protection law and the technical capabilities of LLMs. Our findings underscore the need for more interaction between the legal domain and the ML community.
- North America > United States (0.47)
- Europe > France (0.14)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- (4 more...)
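The abstract's claim that LLMs output training data verbatim at inference time is often probed with n-gram overlap between model output and a candidate training document. The sketch below shows that crude proxy only; it is not the authors' method, and the names and example strings are hypothetical.

```python
def ngrams(text, n=5):
    """All contiguous word n-grams of a text, as a set of tuples."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(model_output, training_doc, n=5):
    """Fraction of the output's n-grams that appear verbatim in a
    training document -- a crude proxy for memorized (personal) data."""
    out = ngrams(model_output, n)
    if not out:
        return 0.0
    return len(out & ngrams(training_doc, n)) / len(out)

# Hypothetical training snippet and model output for illustration only.
training_doc = "Jane Doe lives at 12 Example Street and works as a nurse"
output = "According to records, Jane Doe lives at 12 Example Street today"
print(round(verbatim_overlap(output, training_doc), 2))
```

A non-zero score on text containing a person's details is exactly the situation where, on the paper's argument, GDPR rights such as erasure would attach to the model itself.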
South Korea pauses downloads of DeepSeek AI over privacy concerns
DeepSeek, the massively popular Chinese AI assistant, has been temporarily unavailable from app stores in South Korea since February 15. A press release from the country's data protection authority, the Personal Information Protection Commission (PIPC), stated that downloads will resume once the Chinese AI company complies with local data protection laws, though users who already have the app can continue to use it. DeepSeek is also blocked on South Korean government and military devices. DeepSeek only established a local presence in South Korea on February 10. The company also acknowledged that it didn't fully consider South Korea's data protection laws when launching the service globally.
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Asia Government (0.41)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
The Morning After: Meta may hold back its next-gen AI models from the EU
Meta has reportedly decided not to offer its upcoming multimodal AI model and future versions to customers in the European Union, citing a lack of clarity on the European regulators' data protection rules. These newer AI models process not only text but also images and audio, and power AI capabilities across Meta's platforms. Meta's move follows a similar decision by Apple, which recently announced it would not release its Apple Intelligence features in Europe due to regulatory concerns. Meta told Axios it still plans to release Llama 3, the company's text-only model, in the EU. The company's primary concern stems from the challenges of training AI models using data from European customers while complying with the General Data Protection Regulation (GDPR), the EU's data protection law.
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.60)
Meta will reportedly withhold multimodal AI models from the EU amid regulatory uncertainty
Meta has decided not to offer its upcoming multimodal AI model and future versions to customers in the European Union, citing a lack of clarity from European regulators, according to a report by Axios. The models in question are designed to process not only text but also images and audio, and power AI capabilities in Meta platforms as well as the company's Ray-Ban smart glasses. "We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment," Meta said in a statement to Axios. Meta's move follows a similar decision by Apple, which recently announced it would not release its Apple Intelligence features in Europe due to regulatory concerns. Margrethe Vestager, the EU's competition commissioner, had slammed Apple's move, saying that the company's decision was a "stunning, open declaration that they know 100 percent that this is another way of disabling competition where they have a stronghold already."
- Law (1.00)
- Information Technology > Security & Privacy (0.88)
- Government > Regional Government > Europe Government (0.60)
UK regulator says Snap's AI chatbot may put kids' privacy at risk
A UK regulator has raised concerns that Snap's AI chatbot may be putting the privacy of kids at risk. The Information Commissioner's Office (ICO), the country's privacy watchdog, issued a preliminary enforcement notice against the company over a "potential failure to properly assess the privacy risks posed by its generative AI chatbot 'My AI'." Information Commissioner John Edwards said the ICO's provisional findings from its investigation indicated a "worrying failure by Snap to adequately identify and assess the privacy risks to children and other users" before rolling out My AI. The ICO noted that if Snap fails to sufficiently address its concerns, it may block the ChatGPT-powered chatbot in the UK. However, the preliminary notice doesn't necessarily mean that the ICO will take action against Snap or that the company has violated data protection laws. It will consider submissions from Snap before it makes a final decision.
UK watchdog warns chatbot developers over data protection laws
Britain's data watchdog has issued a warning to tech firms about the use of people's personal information to develop chatbots after concerns that the underlying technology is trained on large quantities of unfiltered material scraped from the web. The intervention from the Information Commissioner's Office came after its Italian counterpart temporarily banned ChatGPT over data privacy concerns. The ICO said firms developing and using chatbots must respect people's privacy when building generative artificial intelligence systems. ChatGPT, the best-known example of generative AI, is based on a system called a large language model (LLM) that is "trained" by being fed a vast trove of data culled from the internet. "There really can be no excuse for getting the privacy implications of generative AI wrong. We'll be working hard to make sure that organisations get it right," said Stephen Almond, the ICO's director of technology and innovation.
- Europe > United Kingdom (0.71)
- Europe > Italy (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.98)
Saying No to Surveillance State
Recently, an RTI filed by the Internet Freedom Foundation (IFF) revealed that the Delhi Police is using facial recognition technology (FRT) to nab rioters in the capital city. This has caused an uproar, with many members of civil society raising concerns and calling the Delhi Police's use of FRT 'unethical' in the absence of a Data Protection Act in the country. Their argument is that national security should not come at the cost of privacy. Technology such as FRT has been controversial, and authorities leveraging such tech is definitely a concern. The RTI filed by IFF revealed that the Delhi Police's procurement of the FRT was authorised as per a 2018 direction of the Delhi High Court in Sadhan Haldar v NCT of Delhi.
- North America > United States (0.05)
- Asia > India > NCT > New Delhi (0.05)
- Asia > China (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Government (1.00)
- Law (0.92)
- Information Technology > Security & Privacy (0.75)
MC2: Secure Collaborative Analytics for Machine Learning
Machine Learning (ML) has gained prominence in recent years because of its ability to be applied across scores of industries and solve complex problems effectively. Yet, research shows that nearly 90% of AI/ML models never actually make it into production or hit the market. The main challenge is that ML/AI models require huge volumes of high-quality, accurate, and timely data to be effective, but organizations have long been reluctant to share sensitive information due to security and privacy concerns. Personal data is becoming more pervasive, causing privacy concerns to grow. As a result, global data protection laws have become stricter, and organizations face increasingly higher noncompliance risks. Mitigating such concerns and taking AI/ML to the next level requires a new approach to collaboration -- secure collaborative learning.
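The "secure collaborative learning" the abstract calls for can be illustrated with additive secret sharing, one common primitive for computing an aggregate without any party revealing its raw input. This is a generic sketch of that primitive, not MC2's actual design (which relies on its own cryptographic and hardware mechanisms); the variable names are hypothetical.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is modulo a large prime

def share(secret, n_parties):
    """Split a secret into n additive shares; any n-1 shares alone
    are uniformly random and reveal nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(secrets):
    """Each party shares its input with the others; summing the shares
    column-wise yields only the total, never any individual value."""
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]
    partials = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partials) % PRIME

revenues = [120, 75, 310]  # sensitive per-organization values
assert secure_sum(revenues) == sum(revenues)
```

The same idea scales from a joint sum to aggregated model updates, which is what lets organizations collaborate on ML without exchanging the sensitive records that data protection laws restrict.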
High-tech legislation through self-regulation - Information Age
A quick glance over our technological, scientific, and productive history over the past few decades shows a trend towards increasing specialisation. Getting into an area and becoming a true expert in it takes considerably more time than it did several decades or centuries ago. Business, while progressing slower towards the same trend, is still experiencing something similar. Explaining in-depth technical concepts with sufficient detail and nuance to a layman is becoming more troublesome. Machine learning is one such example – frequently used, but scarcely understood by people outside the technical world.
- Law > Statutes (0.56)
- Information Technology > Security & Privacy (0.52)