Artificial intelligence is the future. Google, Microsoft, Amazon and Apple are all making big bets on AI. (Amazon owner Jeff Bezos also owns The Washington Post.) Congress has held hearings and even formed a bipartisan Artificial Intelligence Caucus. From health care to transportation to national security, AI has the potential to improve lives. But it comes with fears about economic disruption and a brewing "AI arms race."
We are living in the age of the algorithm: more and more, we are handing decision-making over to mathematical models. Algorithms are now used to make life-altering financial and legal decisions, such as who gets a job, what medical treatment people receive, and who is granted parole. In theory, this should lead to fairer decision-making. In reality, AI technology can be just as biased as the humans who create it.
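A toy sketch can make this concrete. The data and names below are invented for illustration: a model trained by simple frequency counting on historically biased hiring decisions, where a zip code acts as a proxy for a protected group, reproduces the historical disparity exactly, even though the protected attribute never appears in the data.

```python
# Hypothetical illustration: a model trained on biased historical
# decisions learns to reproduce the bias. Each record is
# (zip_code, hired), where zip code is a proxy for group membership.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # zone A: hired 75% of the time
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # zone B: hired 25% of the time
]

def train(records):
    """Estimate P(hired | zip_code) by frequency counting."""
    rates = {}
    for zone in {z for z, _ in records}:
        outcomes = [h for z, h in records if z == zone]
        rates[zone] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)
# The model faithfully mirrors the historical disparity:
# model["A"] == 0.75, model["B"] == 0.25
```

Nothing in the code is "prejudiced"; the bias lives entirely in the training data, which is the core concern the thesis examples highlight.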
I have been thinking of interactive ways to share my postgraduate thesis, which covers racial bias, gender bias, and AI, and proposes new ways to approach Human-Computer Interaction, with everyone. Life has been super busy, so for now I have decided to post snippets of the thesis. For this research paper, the researcher has identified a number of areas of concern regarding AI-powered systems being deployed in situations that affect the lives of humans. These examples will be used to further highlight this area of concern. It has been suggested that decision-support systems powered by AI can be used to augment human judgement and reduce both conscious and unconscious biases (Anderson & Anderson, 2007).
When the head of the U.S. Supreme Court says artificial intelligence (AI) is having a significant impact on how the legal system in this country works, you pay attention. That's exactly what happened when Chief Justice John Roberts was asked the following question: "Can you foresee a day when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?" His answer startled the audience. "It's a day that's here and it's putting a significant strain on how the judiciary goes about doing things," he said, as reported by The New York Times. In the last decade, the field of AI has experienced a renaissance.
It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks." ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS, which is used to forecast which criminals are most likely to reoffend.
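The kind of check ProPublica ran can be sketched in a few lines. The records below are invented, not ProPublica's data: the audit compares the false positive rate, the share of people who did *not* reoffend but were still flagged high risk, across groups.

```python
# Minimal sketch of an error-rate audit in the style of ProPublica's
# COMPAS analysis. All data are invented for illustration.
# Each record: (group, flagged_high_risk, reoffended) with 1 = yes, 0 = no.
records = [
    ("black", 1, 0), ("black", 1, 0), ("black", 1, 1), ("black", 0, 1),
    ("white", 0, 0), ("white", 1, 0), ("white", 0, 1), ("white", 0, 0),
]

def false_positive_rate(rows, group):
    """Share of non-reoffenders in `group` who were flagged high risk."""
    flags = [flag for g, flag, actual in rows if g == group and actual == 0]
    return sum(flags) / len(flags)

for group in ("black", "white"):
    print(group, false_positive_rate(records, group))
```

A gap between the two rates is exactly the kind of disparity ProPublica reported: the tool's mistakes were not distributed evenly across racial groups.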