Distinguishing two features of accountability for AI technologies - Nature Machine Intelligence
Across the AI ethics and global policy landscape, there is consensus that there should be human accountability for AI technologies [1]. These machines are used for high-stakes decision-making in complex domains -- for example, in healthcare, criminal justice and transport -- where they can cause or occasion serious harm. Some use deep machine learning models, which can make their outputs difficult to understand or contest. At the same time, when the datasets on which these models are trained reflect bias against specific demographic groups, the bias becomes encoded and causes disparate impacts [2,3,4]. Meanwhile, an increasing number of machines that embody AI, and specifically machine learning, such as highly automated vehicles, can execute decision-making functions and take actions independently of direct, real-time human control, in unpredictable conditions that call for adaptive performance.