Rite Aid Used Facial Recognition in Stores for Nearly a Decade

WIRED

Just over two weeks after an unprecedented hack led to the compromise of the Twitter accounts of Bill Gates, Elon Musk, Barack Obama, and dozens more, authorities have charged three men in connection with the incident. The alleged "mastermind" is a 17-year-old from Tampa, who will be tried as an adult. There are still plenty of details outstanding about how they might have pulled it off, but court documents show how a trail of bitcoin and IP addresses led investigators to the alleged hackers. A Garmin ransomware hack disrupted more than just workouts during a days-long outage; security researchers see it as part of a troubling trend of "big game hunting" among ransomware groups. In other alarming trends, hackers are breaking into news sites to publish misinformation through their content management systems, giving them an air of legitimacy.


Facebook will pay $650 million to settle facial recognition privacy lawsuit

Engadget

Facebook will now hand over a total of $650 million to settle a lawsuit over the company's use of facial recognition technology. The social network added $100 million to its initial $550 million settlement, according to court documents reported by Fortune. The lawsuit dates back to 2015, when the company was hit with a class action suit alleging that Facebook violated an Illinois privacy law requiring companies to obtain "explicit consent" before collecting biometric data from users. At issue was Facebook's "tag suggestions" feature, which used facial recognition to scan photos and automatically suggest tags when users uploaded new images. The new $650 million settlement comes as officials around the country have pushed for facial recognition bans.


Why AI and facial recognition software is under scrutiny for racial and gender bias - IFSEC Global

#artificialintelligence

In light of the Black Lives Matter protests, AI and facial recognition vendors and users are taking notice of concerns over racial bias and privacy, reports Ron Alalouff. The use of artificial intelligence (AI) has come under the spotlight recently, especially how algorithms can be biased against people of colour or women. And most recently, in the wake of the Black Lives Matter campaigns following the death of George Floyd in May, tech giants such as Amazon and IBM have suspended or withdrawn their facial recognition technologies, which are based on AI algorithms. Nowhere is the issue of bias in AI more explosive than in the United States. Miriam Vogel, President and CEO of Equal AI, believes that while racism has its historical roots, "AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine".


Judge: Facebook's $550 Million Settlement In Facial Recognition Case Is Not Enough

NPR Technology

Facebook in January agreed to a historic $550 million settlement over its face-identifying technology. But now, the federal judge overseeing the case is refusing to accept the deal. Next week, lawyers for Facebook will be back in court, trying to convince a judge they should be allowed to settle a class action suit that accuses the company of violating users' privacy.


Why are Artificial Intelligence systems biased? – IAM Network

#artificialintelligence

A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an "investor" must be a male. A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn't seem to like female candidates. Commercial gender-recognition systems put out by industrial heavyweights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color. Another commercial face-recognition technology that Amazon tried to sell to government agencies has been shown to have significantly higher error rates for minorities. And a popular selfie lens by Snapchat appears to "whiten" people's faces, apparently to make them more attractive. These are not just academic curiosities.


Why IBM Decided to Halt all Facial Recognition Development

#artificialintelligence

In a letter to Congress sent on June 8th, IBM's CEO Arvind Krishna made a bold statement regarding the company's policy toward facial recognition. "IBM no longer offers general purpose IBM facial recognition or analysis software," says Krishna. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." The company has halted all facial recognition development and disapproves of any technology that could lead to racial profiling. The ethics of face recognition technology have been in question for years. However, there has been little to no movement in the enactment of official laws barring the technology.


If Done Right, AI Could Make Policing Fairer

WIRED

A decade ago, Fei-Fei Li, a professor of computer science at Stanford University, helped demonstrate the power of a new generation of powerful artificial intelligence algorithms. She created ImageNet, a vast collection of labeled images that could be fed to machine learning programs. Over time, that process helped machines master certain human skills remarkably well when they have enough data to learn from. Since then, AI programs have taught themselves to do more and more useful tasks, from voice recognition and language translation to operating warehouse robots and guiding self-driving cars. But AI algorithms have also demonstrated darker potential, for example as a means of automated facial recognition that can perpetuate race and gender bias.


Why Microsoft and Amazon are calling on Congress to regulate facial recognition tech

#artificialintelligence

Some of the biggest companies in the world are pulling their facial recognition technologies from law enforcement agencies across the country. Amazon (AMZN), IBM (IBM), and Microsoft (MSFT) have said that they will either put a moratorium on the use of their technology by police -- or are completely exiting the field, citing human rights concerns. The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, there needs to be some form of regulation put in place if these technologies are going to be used by law enforcement agencies. "If these technologies were to be deployed, I think you cannot do it in the absence of legislation," Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.