Descriptive AI Ethics: Collecting and Understanding the Public Opinion
–arXiv.org Artificial Intelligence
As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how current responsibility ascription practices might be adapted to accommodate such new technologies [19, 33]. A prominent critical viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap [27]. These systems' autonomy challenges human control over them [13], while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent [40], scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37] -- e.g., they might "have a body to kick," yet they "have no soul to damn" [4].
Jan-14-2021
- Country:
- Europe > Germany (0.04)
- North America > United States
- Genre:
- Research Report (0.40)
- Industry:
- Government (0.47)
- Law (0.73)
- Social Sector (0.70)
- Technology:
- Information Technology > Artificial Intelligence
- Issues > Social & Ethical Issues (1.00)
- Robots (1.00)