
The Quiet Growth of Race-Detection Software Sparks Concerns Over Bias

WSJ.com: WSJD - Technology

In the last few years, companies have started using such race-detection software to understand how certain customers use their products, who looks at their ads, or what people of different racial groups like. Others use the tool to search for particular racial features in stock-photography collections, typically for ads, or, in security applications, to narrow the search for someone in a database. In China, where face tracking is widespread, surveillance cameras have been equipped with race-scanning software to track ethnic minorities. The field is still developing, and it is an open question how companies, governments and individuals will take advantage of such technology in the future. Use of the software is fraught: researchers and companies have begun to recognize its potential to drive discrimination, posing challenges to widespread adoption.


Kairos gets a $4 million lifeline for its facial recognition software

#artificialintelligence

Kairos, the facial recognition startup that found itself in turmoil following the ouster of founder and then-CEO Brian Brackeen last October, has raised $4 million in funding from E. Jay Saunders, CEO of Domus Semo Sancus. This brings Kairos's total funding to $17 million. As of November, Kairos had just enough money to get through Q1 of this year. At the time, Brackeen was looking to raise $5 million for the company and had already secured $3.5 million from Beyond Capital Markets, contingent upon Brackeen rejoining the company. Fast-forward to today: Brackeen remains out of the company, and interim CEO Melissa Doval has been appointed permanent CEO.


Facial recognition startup Kairos founder continues to fight attempted takeover

#artificialintelligence

Late last month, New World Angels President and Kairos board chairperson Steve O'Hara sent a letter to Kairos founder Brian Brackeen notifying him of his termination from the role of chief executive officer. The termination letter cited willful misconduct as the cause for Brackeen's termination. Specifically, O'Hara said Brackeen misled shareholders and potential investors, misappropriated corporate funds, did not report to the board of directors and created a divisive atmosphere. Kairos is trying to tackle the society-wide problem of discrimination in artificial intelligence. While that's not the company's explicit mission -- it's to provide authentication tools to businesses -- algorithmic bias has long been a topic the company, especially Brackeen, has addressed.



A.I. Has a Race Problem

#artificialintelligence

A couple of years ago, as Brian Brackeen was preparing to pitch his facial recognition software to a potential customer as a convenient, secure alternative to passwords, the software stopped working. Panicked, he tried adjusting the room's lighting, then the Wi-Fi connection, before he realized the problem was his face. Brackeen is black, but like most facial recognition developers, he'd trained his algorithms with a set of mostly white faces. He got a white, blond colleague to pose for the demo, and they closed the deal. It was a Pyrrhic victory, he says: "It was like having your own child not recognize you."


AI Weekly: How to regulate facial recognition to preserve freedom

#artificialintelligence

Today Microsoft president Brad Smith called for federal regulation of facial recognition software. "In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up -- and to act," Smith wrote in a blog post. Recent events explain why Smith is speaking out now. Last month, while much of the U.S. public was outraged by the separation of families who had unlawfully entered the United States, Microsoft was criticized by the public and hundreds of its own employees for its contract with Immigration and Customs Enforcement (ICE).


New AI can work out whether you're gay or straight from a photograph

#artificialintelligence

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research suggesting that machines can have significantly better "gaydar" than humans. The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology and the potential for this kind of software to violate people's privacy or be abused for anti-LGBT purposes. The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using "deep neural networks", meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset. The research found that gay men and women tended to have "gender-atypical" features, expressions and "grooming styles", essentially meaning gay men appeared more feminine and vice versa.
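The pipeline described above — extract feature vectors from images with a deep network, then train a classifier and score it on held-out data — can be sketched in miniature. The sketch below is illustrative only: the study's actual system used deep neural networks on real face photos, whereas here hypothetical synthetic "embeddings" and a simple nearest-centroid classifier stand in for that pipeline.

```python
# Illustrative stand-in for a feature-extraction + classification pipeline.
# The embeddings and class labels are synthetic assumptions, not study data.
import random

random.seed(0)

def make_embedding(center, noise=0.5):
    # Stand-in for the feature vector a deep network would extract from an image.
    return [c + random.gauss(0, noise) for c in center]

# Two hypothetical classes, each clustered around its own centroid.
centers = {"A": [1.0] * 8, "B": [-1.0] * 8}
train = [(label, make_embedding(c)) for label, c in centers.items() for _ in range(50)]
test = [(label, make_embedding(c)) for label, c in centers.items() for _ in range(20)]

def centroid(vectors):
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# "Training": compute one centroid per class from the training embeddings.
centroids = {label: centroid([v for l, v in train if l == label]) for label in centers}

def classify(vec):
    # Assign the label whose centroid is closest in squared Euclidean distance.
    return min(centroids, key=lambda l: sum((a - b) ** 2 for a, b in zip(centroids[l], vec)))

# Accuracy on held-out data, analogous to the study's reported percentages.
accuracy = sum(classify(v) == label for label, v in test) / len(test)
print(f"held-out accuracy: {accuracy:.0%}")
```

The point of the sketch is that reported figures like "81% of the time" come from exactly this kind of held-out evaluation, and the quality of the claim depends entirely on how representative the underlying dataset is.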



Facial recognition database used by FBI is out of control, House committee hears

The Guardian

Photographs of approximately half of all adult Americans are stored in facial recognition databases that the FBI can search, without those people's knowledge or consent, in the hunt for suspected criminals. About 80% of photos in the FBI's network are non-criminal entries, including pictures from driver's licenses and passports. The algorithms used to identify matches are inaccurate about 15% of the time, and are more likely to misidentify black people than white people. These are just some of the damning facts presented at last week's House oversight committee hearing, where politicians and privacy campaigners criticized the FBI and called for stricter regulation of facial recognition technology at a time when it's creeping into law enforcement and business. "Facial recognition technology is a powerful tool law enforcement can use to protect people, their property, our borders, and our nation," said committee chair Jason Chaffetz, adding that in the private sector it can be used to protect financial transactions and prevent fraud or identity theft.
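The hearing's percentages invite a quick back-of-envelope check. A minimal sketch, where the yearly search volume is a made-up assumption and the calculation naively treats the error rate and the database composition as independent:

```python
# Back-of-envelope arithmetic using the hearing's figures.
error_rate = 0.15          # reported share of inaccurate matches
noncriminal_share = 0.80   # reported share of non-criminal entries in the network

searches_per_year = 100_000  # hypothetical volume, not from the article

# Expected bad matches per year, and (under a naive independence assumption)
# how many of those would land on someone with no criminal record.
wrong_matches = searches_per_year * error_rate
wrong_matches_on_noncriminals = wrong_matches * noncriminal_share
print(int(wrong_matches), int(wrong_matches_on_noncriminals))
```

Even with a modest hypothetical search volume, a 15% error rate over a database that is 80% non-criminal implies thousands of misidentifications of people who were never suspects, which is the committee's core concern.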