The challenge of making moral machines
As applications for AIs proliferate, so do questions about ethical development and embedded bias. Credit: MF3d

In the waning days of 2020, Timnit Gebru, an artificial intelligence (AI) ethicist at Google, submitted a draft of an academic paper to her employer. Gebru and her collaborators had analysed natural language processing (NLP), and specifically the data-intensive approach of training NLP AIs. Such AIs can accurately interpret documents produced by humans and respond naturally to human commands or queries. The team found that training an NLP AI requires immense resources and carries a considerable risk of embedding significant bias into the system. That bias can lead to inappropriate or even harmful responses.