Foundation models risk exacerbating ML's ethical challenges
Machine learning is undergoing a paradigm shift with the rise of models trained at massive scale, including Google's BERT, OpenAI's DALL-E, and AI21 Labs' Jurassic-1 Jumbo. Their capabilities and dramatic performance improvements are leading to a new status quo: a single model trained on raw datasets that can be adapted for a wide range of applications. Indeed, OpenAI is reportedly developing a multimodal system trained on images, text, and other data using massive computational resources, which the company's leadership believes is the most promising path toward AGI -- AI that can learn any task a human can.

But while the emergence of these "foundation" models presents opportunities, it also poses risks, according to a new study released by the Center for Research on Foundation Models (CRFM) at Stanford's Institute for Human-Centered Artificial Intelligence (HAI). CRFM, a new initiative made up of an interdisciplinary team of roughly 160 students, faculty, and researchers, today published a deep dive into the legal ramifications, environmental and economic impact, and ethical issues surrounding foundation models.
Aug-18-2021, 22:50:04 GMT
AI-Alerts: AAAI AI-Alert for Aug 24, 2021