Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?
Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Katy Gero, Sandy Pentland, Jad Kabbara
arXiv.org Artificial Intelligence
New capabilities in foundation models are owed in large part to massive, widely-sourced, and under-documented training data collections. Existing data collection practices have led to challenges in documenting data transparency, tracing authenticity, verifying consent, preserving privacy, ensuring fair representation, mitigating bias, avoiding copyright infringement, and, overall, developing ethical and trustworthy foundation models. In response, regulation is emphasizing the need for training data transparency to understand foundation models' limitations. Based on a large-scale analysis of the foundation model training data landscape and existing solutions, we identify the missing infrastructure to facilitate responsible foundation model development practices. We examine the current shortcomings of common tools for tracing data authenticity, consent, and documentation, and outline how policymakers, developers, and data creators can facilitate responsible foundation model development by adopting universal data provenance standards.
Apr-19-2024
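To make the idea of a universal data provenance standard concrete, the following is a minimal illustrative sketch of what a per-document provenance record might carry. The field names and structure are assumptions for illustration only; they do not come from the paper and are not a published standard.

```python
from dataclasses import dataclass, asdict

# Purely illustrative provenance record: every field name here is an
# assumption, not part of any existing standard described in the paper.
@dataclass
class ProvenanceRecord:
    source_url: str             # where the data item was collected from
    creator: str                # original author or rights holder
    license: str                # licensing terms, e.g. an SPDX identifier
    consent_for_training: bool  # did the creator consent to AI-training use?
    collected_on: str           # ISO-8601 date of collection

# Example record for a hypothetical web document.
record = ProvenanceRecord(
    source_url="https://example.com/post/123",
    creator="example_author",
    license="CC-BY-4.0",
    consent_for_training=False,
    collected_on="2024-04-19",
)

# Serializing to a plain dict makes the record easy to store alongside
# the training data it describes.
print(asdict(record))
```

A dataset-level provenance standard of the kind the paper advocates would aggregate such per-item records so that authenticity, consent, and licensing can be audited before training.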