Proposed Algorithmic Accountability Act Targets Bias in Artificial Intelligence (JD Supra)


Employed across industries, AI applications unlock smartphones using facial recognition, make driving decisions in autonomous vehicles, recommend entertainment options based on user preferences, assist pharmaceutical development, judge the creditworthiness of potential homebuyers, and screen applicants for job interviews. AI automates, accelerates, and improves data processing by finding patterns in data, adapting to new data, and learning from experience.

In theory, AI is objective; in reality, AI systems are informed by human intelligence, which is far from perfect. Humans typically select the data used to train machine learning algorithms and set the parameters by which the machines "learn" from new data over time. Even without discriminatory intent, the training data may reflect unconscious or historical bias. For example, if the training data shows that people of a certain gender or race have fulfilled certain criteria in the past, the algorithm may "learn" to select those individuals to the exclusion of others.
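That last point can be illustrated with a toy sketch. The data and the screening rule below are entirely hypothetical, and the "model" is a deliberately naive frequency rule rather than any real machine learning system; it simply shows how a rule fit to historically skewed outcomes can reproduce the skew without any discriminatory instruction in the code.

```python
# Hypothetical historical hiring records: (group, hired) pairs.
# Nothing in the data says "discriminate," but past decisions
# favored group "A", and that pattern is embedded in the outcomes.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def hire_rate(records, group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

def naive_screen(applicant_group):
    """A toy 'learned' rule: screen in applicants whose group
    historically cleared a 50% hire rate."""
    return hire_rate(history, applicant_group) > 0.5

print(naive_screen("A"))  # True: group A is screened in
print(naive_screen("B"))  # False: group B is screened out
```

The code never mentions the sensitive attribute as a reason to exclude anyone; the exclusion emerges solely from fitting the rule to biased historical outcomes, which is the mechanism the paragraph describes.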