
We Need To Examine The Ethics And Governance Of Artificial Intelligence

#artificialintelligence

Growing up, one of my favorite movies was Steven Spielberg's Minority Report. I was fascinated by the idea that a crime could be prevented before it occurred. More interesting to me at the time was the futuristic role that 'super intelligent' technology – something depicted as more sophisticated and advanced than humans – could play in doing this accurately. Recently, the role that pre-crime and artificial intelligence can play in our world has been explored in episodes of the popular Netflix TV show Black Mirror, focusing on the debate between free will and determinism. Working in counter-terrorism, I know that the use of artificial intelligence in the security space is fast becoming a reality.



The Danger of Bias in an AI Tech-Based Society

#artificialintelligence

Currently, algorithms are used to make life-altering financial and legal decisions, such as who gets a job, what medical treatment people receive, and who is granted parole. In theory, this should lead to fairer decision making. In reality, AI tech can be just as biased as the humans who create it. We are living in the age of the algorithm, and more and more we are handing decision making over to mathematical models.


Technology Is Biased Too. How Do We Fix It?

#artificialintelligence

At first glance, COMPAS appears fair: white and black defendants given higher risk scores tended to reoffend at roughly the same rate. New laws and better government regulation could be a powerful tool in reforming how companies and government agencies use AI to make decisions. Last year, the European Union passed a law called the General Data Protection Regulation, which includes numerous restrictions on the automated processing of personal data and requires transparency about "the logic involved" in those systems. In the U.S., existing federal laws do protect against certain types of discrimination -- particularly in areas like hiring, housing and credit -- though they haven't been updated to address the way new technologies intersect with old prejudices.


Technology Is Biased Too. How Do We Fix It?

#artificialintelligence

Whether it's done consciously or subconsciously, racial discrimination continues to have a serious, measurable impact on the choices our society makes about criminal justice, law enforcement, hiring and financial lending. It might be tempting, then, to feel encouraged as more and more companies and government agencies turn to seemingly dispassionate technologies for help with some of these complicated decisions, which are often influenced by bias. Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole. But an increasing body of research and criticism suggests that algorithms and artificial intelligence aren't necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged, particularly people of color. Instead of offering a workaround for human biases, the tools we designed to help us predict the future may be dooming us to repeat the past by replicating and even amplifying societal inequalities that already exist.