Collaborating Authors: nkonde


Deepfake detection tools must work with dark skin tones, experts warn

The Guardian > Technology

Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned. Most deepfake detectors rely on a learning strategy that depends largely on the dataset used to train them; they then use AI to detect signs that may not be clear to the human eye, such as changes in blood flow and heart rate. However, these detection methods do not always work on people with darker skin tones, and if training sets do not contain all ethnicities, accents, genders, ages and skin tones, they are open to bias, experts warned.


We could see federal regulation on face recognition as early as next week

MIT Technology Review

On May 10, 40 advocacy groups sent an open letter demanding a permanent ban on the use of Amazon's facial recognition software, Rekognition, by US police. The letter was addressed to Jeff Bezos and Andy Jassy, the company's current and incoming CEOs, and came just weeks before Amazon's year-long moratorium on sales to law enforcement was set to expire. The letter contrasted Bezos's and Jassy's vocal support of Black Lives Matter campaigners during last summer's racial justice protests after the murder of George Floyd with reporting that other Amazon products have been used by law enforcement to identify protesters. On May 17, Amazon announced it would extend its moratorium indefinitely, joining competitors IBM and Microsoft in self-regulated purgatory. The move is a nod to the political power of the groups fighting to curb the technology--and recognition that new legislative battlegrounds are starting to emerge.


U.S. police brutality protests highlight concerns AI tech reinforces racial bias

The Japan Times

Washington – A wave of protests over law enforcement abuses has highlighted concerns over artificial intelligence programs like facial recognition that critics say may reinforce racial bias. While the protests have focused on police misconduct, activists point to flaws that may lead to unfair applications of law enforcement technologies, including facial recognition, predictive policing and "risk assessment" algorithms. The issue came to the forefront recently with the wrongful arrest in Detroit of an African American man based on a flawed algorithm that identified him as a robbery suspect. Critics of facial recognition use in law enforcement say the case underscores the pervasive impact of a flawed technology. Mutale Nkonde, an AI researcher, said that even though the idea of bias in algorithms has been debated for years, the latest case and other incidents have driven home the message.


Inclusive AI: Are AI hiring tools hurting corporate diversity?

#artificialintelligence

In recent years, a growing number of organizations have used artificial intelligence (AI) to revolutionize their traditional workflows. These systems are implemented to enhance cost-efficiency, reduce employee burnout, and even identify premium talent. Many organizations are using AI tools to expedite the arduous hiring process. These algorithms have been viewed as objective tools capable of eliminating human subjectivity from the employment screening process. Paradoxically, many of these models are riddled with the same inherent biases they are intended to remove.


These Black Women Are Fighting For Justice In A World Of Biased Algorithms

#artificialintelligence

They help us check out at the grocery store, target us with timely ads on Instagram for a new pair of shoes, turn off our lights at a simple voice command and even determine the songs we're most apt to enjoy on our favorite music streaming platforms. Though technology has given us more convenience, connection and access than ever before, the algorithms hidden beneath its seemingly harmless code, the very algorithms shaping our lives, are also grossly discriminating against our community--and all too often with impunity. If you think this doesn't affect you, think again. For us, unchecked technology shows up as police departments disproportionately deploying facial recognition software within marginalized communities to target criminal behavior, or Black people being tagged as gorillas in Google image searches, or Facebook approving housing ads filtered to prevent them from being marketed to minorities. These practices are what Princeton University associate professor Ruha Benjamin, Ph.D., refers to as "the New Jim Code."


Congress wants to protect you from biased algorithms, deepfakes, and other bad AI

MIT Technology Review

Last Wednesday, US lawmakers introduced a new bill that represents one of the country's first major efforts to regulate AI. There are likely to be more to come. It hints at a dramatic shift in Washington's stance toward one of this century's most powerful technologies. Only a few years ago, policymakers had little inclination to regulate AI. Now, as the consequences of not doing so grow increasingly tangible, a small contingent in Congress is advancing a broader strategy to rein the technology in.