Foundation models risk exacerbating ML's ethical challenges

Stanford HAI 

Machine learning is undergoing a paradigm shift with the rise of models trained at massive scale, including Google's BERT, OpenAI's DALL-E, and AI21 Labs' Jurassic-1 Jumbo. Their capabilities and dramatic performance improvements are leading to a new status quo: a single model trained on raw datasets that can be adapted for a wide range of applications. Indeed, OpenAI is reportedly developing a multimodal system trained on images, text, and other data using massive computational resources, which the company's leadership believes is the most promising path toward AGI -- AI that can learn any task a human can.

But while the emergence of these "foundation" models presents opportunities, it also poses risks, according to a new study released by the Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM). CRFM, a new initiative made up of an interdisciplinary team of roughly 160 students, faculty, and researchers, today published a deep dive into the legal ramifications, environmental and economic impact, and ethical issues surrounding foundation models.
