Artificial Intelligence Risks: Black-Box Reasoning
Artificial intelligence (AI) systems use data analytics and algorithms to perform functions that would typically require human intelligence and reasoning. Some types of AI follow explicitly programmed rules and logic to produce targeted outputs; in these cases, a person can understand the reasoning behind the system's conclusions or recommendations by examining its code. However, many of today's cutting-edge AI technologies, particularly the machine learning systems that hold great promise for transforming healthcare, reason opaquely, making it difficult or impossible to determine how they arrive at their results. This opacity is referred to as "black-box reasoning" or "black-box decision-making."
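The contrast between the two kinds of systems can be sketched in code. This is a hypothetical illustration, not from the article: the function names, clinical thresholds, and model structure below are all invented for demonstration. The first function is a rule-based check whose every decision path can be read directly from the source; the second is a stand-in for a trained model, whose numeric weights carry no human-readable rationale.

```python
import math
import random


def rule_based_triage(temp_c: float, heart_rate: int) -> str:
    """Transparent logic: each branch, and the thresholds it uses
    (invented here for illustration), is explicit and auditable."""
    if temp_c >= 39.0 and heart_rate >= 120:
        return "urgent"
    if temp_c >= 38.0:
        return "review"
    return "routine"


class OpaqueModel:
    """Stand-in for a learned model: a tiny neural network with
    arbitrary weights. Inspecting self.w and self.v does not reveal
    *why* any particular score was produced."""

    def __init__(self, seed: int = 0):
        rng = random.Random(seed)
        # 4 hidden units, 2 inputs; weights have no legible meaning.
        self.w = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
        self.v = [rng.uniform(-1, 1) for _ in range(4)]

    def predict(self, temp_c: float, heart_rate: int) -> float:
        # Scale inputs roughly into [0, 1], then apply the network.
        x = [temp_c / 40.0, heart_rate / 200.0]
        hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
                  for row in self.w]
        z = sum(vi * hi for vi, hi in zip(self.v, hidden))
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid score in (0, 1)


# The rule-based answer can be justified by pointing at the code;
# the model's score cannot be explained by reading its weights.
print(rule_based_triage(39.5, 130))
print(OpaqueModel().predict(39.5, 130))
```

Real black-box systems are far larger, with millions or billions of parameters, which makes the inspection problem correspondingly harder.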
Nov-7-2019, 16:52:34 GMT