Artificial Intelligence and Ethics
On March 18, 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system, an artificial intelligence, was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system's programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg's death?

"Artificial intelligence" refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches.