facial analysis technology


How Technology Impacts and Compares to Humans in Socially Consequential Arenas

Dooley, Samuel

arXiv.org Artificial Intelligence

One of the main promises of technology development is for it to be adopted by people, organizations, societies, and governments -- incorporated into their life, work stream, or processes. Often, this is socially beneficial as it automates mundane tasks, frees up more time for other more important things, or otherwise improves the lives of those who use the technology. However, these beneficial results do not apply in every scenario and may not impact everyone in a system the same way. Sometimes a technology is developed which produces both benefits and inflicts some harm. These harms may come at a higher cost to some people than others, raising the question: how are benefits and harms weighed when deciding if and how a socially consequential technology gets developed? The most natural way to answer this question, and in fact how people first approach it, is to compare the new technology to what used to exist. As such, in this work, I make comparative analyses between humans and machines in three scenarios and seek to understand how sentiment about a technology, performance of that technology, and the impacts of that technology combine to influence how one decides to answer my main research question.


Why Bias in Artificial Intelligence is Bad News for Society

#artificialintelligence

The practice of incorporating Artificial Intelligence into industry applications has been skyrocketing for a decade now. This is evident since AI and its constituent applications -- machine learning, computer vision, facial analysis, autonomous vehicles, and deep learning -- form the pillars of modern digital empowerment. The ability to learn from the data it is trained on and to make decisions derived from its insights makes AI unique compared with earlier technologies. Leaders believe that possessing AI-based technologies equates to future industry success. From healthcare, research, finance, and logistics to the military and law enforcement, AI holds the key to a massive competitive edge, with monetary benefits too.


Artificial Intelligence Can Be Biased. Here's What You Should Know.

#artificialintelligence

Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals -- sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it's fast becoming clear that the algorithms harbor bias, too. It's an issue Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology, knows about firsthand. She founded the Algorithmic Justice League to draw attention to the issue, and earlier this year she testified at a congressional hearing on the impact of facial recognition technology on civil rights. "One of the major issues with algorithmic bias is you may not know it's happening," Buolamwini told FRONTLINE.


Tackling bias in artificial intelligence (and in humans)

#artificialintelligence

The growing use of artificial intelligence in sensitive areas, including hiring, criminal justice, and healthcare, has stirred a debate about bias and fairness. Yet human decision making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious. Will AI's decisions be less biased than human ones? Or will AI make these problems worse?


Amazon's Facial Recognition System Mistakes Members of Congress for Mugshots

WIRED

Amazon touts its Rekognition facial recognition system as "simple and easy to use," encouraging customers to "detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases." And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that's simply not good enough. The ACLU study also illustrated the racial bias that plagues facial recognition today. "Nearly 40 percent of Rekognition's false matches in our test were of people of color, even though they make up only 20 percent of Congress," wrote ACLU attorney Jacob Snow.


Fighting the "coded gaze"

#artificialintelligence

When I was a master's student at MIT, I worked on a number of different art projects that used facial analysis technology. One in particular, called The Aspire Mirror, would detect my face in a mirror and then display a reflection of something different, based on what inspired me or what I wanted to empathize with. As I was working on it, I realized that the software I was using had a hard time detecting my face. But after I made one adjustment, the software no longer struggled: I put on a white mask. This disheartening moment brought to mind Frantz Fanon's book Black Skin, White Masks, which interrogates the complexities of changing oneself, of putting on a mask to fit the norms or expectations of a dominant culture.