Civil Rights & Constitutional Law


Meet the Researchers Working to Make Sure Artificial Intelligence Is a Force for Good

TIME - Tech

With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University's AI Now Institute could easily be mistaken for the offices of any one of New York's innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research. But for Meredith Whittaker and Kate Crawford, who co-founded AI Now in 2017, it's that disruption itself that's under scrutiny. They are two of many experts working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that's ethically sound. "These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it's happening simultaneously," says Crawford.


Why the police should use machine learning – but very carefully

#artificialintelligence

The debate over the police using machine learning is intensifying – in some quarters it is considered as controversial as stop and search. Stop and search is one of the most contentious areas of how the police interact with the public. It has been heavily criticised for being discriminatory towards black and minority ethnic groups, and for having only marginal effects on reducing crime. In the same way, police use of machine learning algorithms has been condemned by human rights groups who claim such programmes encourage racial profiling and discrimination, and threaten privacy and freedom of expression. Broadly speaking, machine learning uses data to teach computers to make decisions without explicitly instructing them how to do it.
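To make that definition concrete, here is a minimal sketch of the idea, assuming nothing about any real policing system: the features and labels are synthetic and invented for this example, and the point is only that the decision rule is inferred from labelled data rather than hand-written.

```python
# Minimal, purely illustrative sketch: a model learns a decision rule from
# labelled examples instead of being given explicit if-then instructions.
# The data here is synthetic, not drawn from any real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))             # 200 examples, 3 numeric features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hidden rule the model must recover

model = LogisticRegression().fit(X, y)    # the "teaching from data" step
print(model.predict(X[:5]))               # decisions inferred from data alone
```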


Facial recognition is now rampant. The implications for our freedom are chilling | Stephanie Hare

#artificialintelligence

Last week, all of us who live in the UK, and all who visit us, discovered that our faces were being scanned secretly by private companies and have been for some time. We don't know what these companies are doing with our faces or how long they've been doing it because they refused to share this with the Financial Times, which reported on Monday that facial recognition technology is being used in King's Cross and may be deployed in Canary Wharf, two areas that cover more than 160 acres of London. We are just as ignorant about what has been happening to our faces when they're scanned by the property developers, shopping centres, museums, conference centres and casinos that have also been secretly using facial recognition technology on us, according to the civil liberties group Big Brother Watch. But we can take a good guess. They may be matching us against police watchlists, maintaining their own watchlists or sharing their watchlists with the police, other companies and other governments.


ICO opens investigation into use of facial recognition in King's Cross

#artificialintelligence

The UK's privacy watchdog has opened an investigation into the use of facial recognition cameras in a busy part of central London. The information commissioner, Elizabeth Denham, announced she would look into the technology being used in Granary Square, close to King's Cross station. Two days ago the mayor of London, Sadiq Khan, wrote to the development's owner demanding to know whether the company believed its use of facial recognition software in its CCTV systems was legal. The Information Commissioner's Office (ICO) said it was "deeply concerned about the growing use of facial recognition technology in public spaces" and was seeking detailed information about how it is used. "Scanning people's faces as they lawfully go about their daily lives in order to identify them is a potential threat to privacy that should concern us all," Denham said.


Amazon's AI can now detect fear: Rekognition software can better read emotions and predict age

Daily Mail - Science & tech

Amazon says its increasingly popular facial recognition software has learned a few new tricks, including the ability to discern when someone is scared. The software, called 'Rekognition', has added 'Fear' to its list of detectable emotions, which already includes 'Happy', 'Sad', 'Angry', 'Surprised', 'Disgusted', 'Calm' and 'Confused', Amazon said in an announcement earlier this week. In addition to its emotion capabilities, Amazon says it has also improved Rekognition's ability to identify gender and age more accurately. The improved age features offer smaller age ranges across the spectrum and more accurate range predictions, the company said.
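As a concrete illustration, this is roughly what querying those attributes looks like through boto3, AWS's Python SDK. It is a sketch that assumes a placeholder S3 bucket and image name; the DetectFaces call and the Emotions, AgeRange and Gender response fields follow Amazon's documented API.

```python
# Hedged sketch of reading Rekognition's emotion and age attributes via boto3.
# "my-example-bucket" and "face.jpg" are placeholders, not real resources.
import boto3

client = boto3.client("rekognition")
response = client.detect_faces(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "face.jpg"}},
    Attributes=["ALL"],  # request emotions, age range, gender, etc.
)

for face in response["FaceDetails"]:
    low, high = face["AgeRange"]["Low"], face["AgeRange"]["High"]
    print(f"Estimated age: {low}-{high}, gender: {face['Gender']['Value']}")
    for emotion in face["Emotions"]:  # the list now includes the 'FEAR' type
        print(f"  {emotion['Type']}: {emotion['Confidence']:.1f}%")
```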


Artificial Intelligence and the UNDP

#artificialintelligence

I have commented before that the topic of AI Safety should be just as much about ensuring that the field of artificial intelligence works toward important goals such as tackling climate change and reducing inequality. In this regard I find the UNDP strategy of interest. UNDP works to eradicate poverty and reduce inequalities through the sustainable development of nations. This mission is being carried out in more than 170 countries and territories. Quite recently the UNDP launched its digital strategy for 2019–2021.


Regulator looking at use of facial recognition at King's Cross site

The Guardian

The UK's privacy regulator said it is studying the use of controversial facial recognition technology by property companies amid concerns that its use in CCTV systems at the King's Cross development in central London may not be legal. The Information Commissioner's Office warned businesses using the surveillance technology that they needed to demonstrate its use was "strictly necessary and proportionate" and had a clear basis in law. The data protection regulator added it was "currently looking at the use of facial recognition technology" by the private sector and warned it would "consider taking action where we find non-compliance with the law". On Monday, the owners of the King's Cross site confirmed that facial recognition software was used around the 67-acre, 50-building site "in the interest of public safety and to ensure that everyone who visits has the best possible experience". It is one of the first landowners or property companies in Britain to acknowledge deploying the software, described by a human rights pressure group as "authoritarian", partly because it captures images of people without their consent.


Facial recognition mistakes lawmakers for CRIMINALS in tests conducted by ACLU

Daily Mail - Science & tech

Despite facial recognition's seal of approval from law enforcement agencies across the U.S., recent experiments show the technology is far from infallible. In a demonstration by the American Civil Liberties Union, about 26 California lawmakers were misidentified by face-matching software built by Amazon, putting the mismatch rate at roughly 1 in 5. The results mimic a similar test by the advocacy group in 2018, in which Amazon's software, called 'Rekognition', mismatched 28 members of Congress, many of whom were people of color. In the new test, the lawmakers' headshots were matched against a database of known criminals, a process that has become commonplace for the at least 200 departments across the U.S. that use Rekognition software. According to the LA Times, the test is fueling calls from California legislators to limit the technology's application in a law enforcement capacity, including its integration with police body cameras.
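For context, the kind of one-to-many check described above can be sketched with Rekognition's documented CompareFaces call. The bucket, file names and helper function below are placeholders invented for illustration, and the 80% similarity threshold reflects the default setting the ACLU reportedly used.

```python
# Hedged sketch of matching one headshot against a set of mugshots with
# Rekognition's CompareFaces API. All names here are illustrative placeholders.
import boto3

client = boto3.client("rekognition")

def find_matches(headshot_key, mugshot_keys, bucket="my-example-bucket"):
    """Return (mugshot, similarity) pairs scoring above the threshold."""
    matches = []
    for key in mugshot_keys:
        response = client.compare_faces(
            SourceImage={"S3Object": {"Bucket": bucket, "Name": headshot_key}},
            TargetImage={"S3Object": {"Bucket": bucket, "Name": key}},
            SimilarityThreshold=80.0,  # default threshold, as in the ACLU test
        )
        for match in response["FaceMatches"]:
            matches.append((key, match["Similarity"]))
    return matches
```

A false positive in this setup is simply a mugshot that clears the threshold for a person who is not actually in the database, which is what happened to the 26 lawmakers.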


Facial Recognition Software Prompts Privacy, Racism Concerns

Huffington Post - Tech news and opinion

The city started using the cameras in areas with high crime rates, such as gas stations and outside liquor stores. But earlier this year, public housing officials installed Project Green Light cameras in a senior citizens' community, said Sandra Henriquez, executive director of the Detroit Housing Commission. She said the cameras themselves are not equipped with facial recognition software.


Preclusio uses machine learning to comply with GDPR, other privacy regulations – TechCrunch

#artificialintelligence

As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations. "We have a platform that is deployed on-prem in our customer's environment, and helps them identify what data they're collecting, how they're using it, where it's being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected," company co-founder and CEO Heather Wade told TechCrunch. She says that the company made a deliberate decision to keep the solution on-prem.
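Preclusio has not published how its platform works, so the following is a hypothetical, minimal sketch of one ingredient such a tool might use: flagging record fields that look like personal data with simple patterns before any heavier machine-learning classification. The pattern names and helper function are invented for this illustration.

```python
# Hypothetical sketch (not Preclusio's actual method): flag record fields
# that look like personal data using simple regular-expression patterns.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_personal_data(record: dict) -> dict:
    """Return {field: [pii_types]} for fields that appear to hold PII."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, rx in PII_PATTERNS.items()
                if rx.search(str(value))]
        if hits:
            findings[field] = hits
    return findings

print(flag_personal_data({"note": "contact jane@example.com", "id": 42}))
# -> {'note': ['email']}
```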