Civil Rights & Constitutional Law


Is Artificial Intelligence Sexist and Racist?

#artificialintelligence

Last year, Amazon scrapped its machine-learning recruiting tool after discovering it had a major problem: the artificial intelligence was biased against women. The tool was designed to analyze resumes and compare potential applicants to Amazon's current workforce, taking in 100 resumes and returning the top five candidates. The problem was the pre-existing gender gap in software development and other technical roles. When the artificial intelligence analyzed the patterns in Amazon's hiring over the prior 10-year period, it taught itself to favor men over women.
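How does a model "teach itself" bias it was never told about? A toy sketch below (hypothetical resumes and terms, nothing from Amazon's actual system) shows the mechanism: if candidates are scored by how often their resume terms appeared among past hires, and past hires skew male, then male-correlated terms outscore equivalent alternatives.

```python
# Toy sketch of bias learned from skewed history (hypothetical data;
# not Amazon's system). Resumes are reduced to sets of terms.
from collections import Counter

past_hires = [  # mostly male-correlated terms, mirroring a skewed workforce
    {"java", "chess_club"}, {"c++", "football"}, {"java", "golf"},
    {"python", "chess_club"}, {"c++", "football"},
    {"python", "womens_chess_club"},  # rare among past hires
]

# "Training": score each term by its frequency among past hires.
term_scores = Counter(term for resume in past_hires for term in resume)

def score(resume):
    """Rank a new resume by the learned term frequencies."""
    return sum(term_scores[term] for term in resume)

a = {"python", "chess_club"}         # matches the male-skewed history
b = {"python", "womens_chess_club"}  # same skill, different club
print(score(a), score(b))  # prints "4 3": a outranks b purely from skew
```

Reuters reported that Amazon's real tool behaved analogously, downgrading resumes that contained the word "women's".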


China surveillance firm tracking millions of Muslims leaves database exposed, researcher says

FOX News

A Chinese surveillance firm using facial recognition technology left one of its databases exposed online for months, according to a prominent security researcher. The database held records on 2,565,724 people, including names, ID card numbers and expiration dates, home addresses, dates of birth, nationality, gender, photographs, employers and GPS coordinates of locations, and was left online without authentication, according to a report from ZDNet. Security researcher Victor Gevers, who found the database, told ZDNet that over a 24-hour period a steady stream of nearly 6.7 million GPS coordinates was recorded, which means the database was actively tracking Uyghur Muslims as they moved around Xinjiang province in China. Human rights groups have said that China is keeping hundreds of thousands of Uyghur Muslims in internment camps, where they are indoctrinated and forced to perform labor.
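The failure itself was mundane: the server required no credentials at all, so anyone who found its address could read everything. A minimal sketch of what that means in practice, assuming a MongoDB instance (the address below is a placeholder, and pymongo must be installed):

```python
# Hedged sketch: reading an unauthenticated database takes only its
# address. Placeholder host; assumes MongoDB and `pip install pymongo`.
from pymongo import MongoClient

# No username, no password: the connection simply succeeds.
client = MongoClient("mongodb://203.0.113.10:27017/",
                     serverSelectionTimeoutMS=3000)
for db_name in client.list_database_names():  # enumerate what is exposed
    print(db_name)
```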


7 free skills for the human rights jobs of the future

#artificialintelligence

The human rights job landscape is changing rapidly. Current and future challenges in combating human rights violations require new skills and tactics. We have compiled a list of 7 free online courses and specializations that will equip you with the knowledge and skills for the human rights jobs of the future. Machine learning and artificial intelligence create new opportunities and challenges for the protection of human rights. Artificial intelligence can help make education, health and economic systems more efficient, but it also risks amplifying polarization, bias and discrimination against certain groups.


Parsing the Shadow Docket

Slate



Call to ban killer robots in wars

BBC News

A group of scientists has called for a ban on the development of weapons controlled by artificial intelligence (AI). It says that autonomous weapons may malfunction in unpredictable ways and kill innocent people. Ethics experts also argue that it is a moral step too far for AI systems to kill without any human intervention. The comments were made at the American Association for the Advancement of Science meeting in Washington DC. Human Rights Watch (HRW) is one of the 89 non-governmental organisations from 50 countries that have formed the Campaign to Stop Killer Robots, to press for an international treaty.


Why tech giants are interested in regulating facial recognition

#artificialintelligence

Last week, Amazon made the unexpected move of calling for regulation of facial recognition. In a blog post published on Thursday, Michael Punke, VP of global public policy at Amazon Web Services, expressed support for a "national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology." Facial recognition is one of the fastest-growing areas of the artificial intelligence industry. It has drawn interest from both the public and private sectors and is already worth billions of dollars. Amazon has been moving fast to establish itself as a leader in the field, actively marketing its Rekognition service to customers including law enforcement agencies.


Microsoft warns investors that its artificial-intelligence tech could go awry and hurt its reputation

#artificialintelligence

Microsoft is spending heavily on its artificial-intelligence tech. But it wants investors to know that the tech may go awry, harming the company's reputation in the process. Or so it warned investors in its latest quarterly report, as first spotted by Quartz's Dave Gershgorn. "Issues in the use of AI in our offerings may result in reputational harm or liability," Microsoft wrote in the filing. "AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions."

Despite the big talk from tech companies such as Microsoft about the virtues and possibilities of AI, the truth is that the technology is not that smart yet. Today, AI is mostly based on machine learning, in which a computer must ingest many examples before it can "understand" something, and if that initial data set is biased or flawed, its output will be, too. Intel and startups such as Habana Labs are working on chips that could help computers better perform the complicated task of inference, the foundation of learning and of the ability of humans (and machines) to reason.

Microsoft has already had a few high-profile snafus with its AI tech. In 2016, it yanked a Twitter chatbot called Tay offline within 24 hours after the bot began spewing racist and sexist tweets, using words taught to it by trolls. More recent, and more serious, was research by Joy Buolamwini at the MIT Media Lab, reported a year ago by The New York Times. She found that three leading facial-recognition systems, created by Microsoft, IBM, and China's Megvii, were doing a terrible job of identifying nonwhite faces: Microsoft's error rate for darker-skinned women was 21%, still better than the 35% of the other two. Microsoft insists that it listened to that criticism and has improved its facial-recognition technology, and in the wake of the outcry over Amazon's Rekognition service, it has begun calling for regulation of facial-recognition tech. Microsoft CEO Satya Nadella told journalists last month: "Take this notion of facial recognition, right now it's just terrible."
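The disparity Buolamwini measured reduces to a simple computation: the error rate of the same model, broken out per demographic group. A minimal sketch with invented evaluation data (not the study's):

```python
# Hedged sketch of a per-group error-rate audit (invented data).
from collections import defaultdict

# (group, prediction_was_correct) pairs from a hypothetical test set.
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("darker_female", True), ("darker_female", False), ("darker_female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += (not correct)  # bool counts as 0 or 1

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# lighter_male: error rate 0%
# darker_female: error rate 67%
```

An aggregate accuracy number can look fine while hiding exactly this kind of gap, which is why the per-group breakdown was the story.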


How to help AI find human trafficking victims

#artificialintelligence

For the ongoing series Code Word, we're exploring whether, and how, technology can protect individuals against sexual assault and harassment, and how it can help and support survivors. You walk through the door and set your bags on the floor. You pose for a selfie with your hotel room in the background, uploading it to Instagram with seemingly random hashtags. For your followers, the photo is a means of documenting your travels. For investigators, you've just taken a crime-scene photo that might one day help them track down victims of human trafficking.
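The idea behind those "seemingly random hashtags" is image matching: reduce every photo to a compact fingerprint, then find the stored traveler photo closest to a photo from an investigation. A toy sketch with an invented fingerprint scheme and tiny stand-in "images" (not any investigator's actual pipeline):

```python
# Toy sketch of matching an evidence photo to traveler-submitted room
# photos via perceptual fingerprints (invented scheme and data).

def average_hash(pixels):
    """Bit string with 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Traveler uploads, keyed by hotel (2x2 grayscale stand-ins for photos).
database = {
    "hotel_a_room": average_hash([[10, 200], [30, 180]]),
    "hotel_b_room": average_hash([[90, 90], [100, 110]]),
}

evidence = average_hash([[12, 198], [28, 185]])  # photo from a case
best = min(database, key=lambda k: hamming(database[k], evidence))
print(best)  # -> hotel_a_room, the closest-looking room
```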


The Problem with AI Facial Recognition

#artificialintelligence

Shelf-mounted cameras paired with artificial-intelligence facial-recognition software that can identify a person's age, gender, and ethnicity were among the emerging systems pitched to retail companies during this year's National Retail Federation Big Show in New York in January. The idea was to give physical stores demographic information that could guide how they market to individual customers. It's something that could give them a competitive edge against online retailers such as Amazon, which have been leveraging customer data all along. But using cameras to capture photos of your customers in a way they may not even notice seems like it could cross the line between cool technology and creepy technology. Beyond that, there could be other problems, too.


Police Seek 'Balance' In Use Of AI To Predict Crime

#artificialintelligence

Police have said they are seeking "balance" in the use of artificial intelligence to predict crimes, after freedom of information requests found that 14 UK police forces were deploying, testing or investigating predictive AI techniques. The report by Liberty, "Policing by Machine", warned that the tools risk entrenching existing biases and delivering inaccurate predictions. The civil liberties group urged police to end the use of predictive AI, saying mapping techniques rely on "problematic" historical arrest data, while individual risk-assessment programmes "encourage discriminatory profiling". The forces using or trialling predictive mapping programmes are Avon and Somerset Constabulary, Cheshire Constabulary, Dyfed-Powys Police, Greater Manchester Police, Kent Police, Lancashire Police, Merseyside Police, the Metropolitan Police Service, Norfolk Constabulary, Northamptonshire Police, Warwickshire Police and West Mercia Police, West Midlands Police, and West Yorkshire Police, while a further three (Avon and Somerset, Durham and West Midlands) are using or trialling individual risk-assessment programmes. Norfolk Police, for instance, is trialling a system for identifying whether burglaries should be investigated; Durham Constabulary's Harm Assessment Risk Tool (Hart) advises custody officers on individuals' risk of re-offending; and West Midlands Police uses hotspot mapping and a data-driven analysis project.
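At its simplest, the predictive mapping Liberty describes is counting: grid a city, tally historical incidents per cell, and flag the densest cells for patrols. A toy sketch (invented data) that also shows why the group calls the inputs "problematic": the tallies are arrest records, so heavily policed cells generate the very arrests that keep them flagged.

```python
# Toy sketch of hotspot mapping from historical arrest data (invented).
# Note the feedback loop: flagged cells get more patrols, more patrols
# produce more recorded arrests, and the cells stay flagged.
from collections import Counter

arrests = [(0, 0), (0, 0), (0, 0), (1, 2), (1, 2), (3, 3)]  # grid cells

counts = Counter(arrests)
hotspots = [cell for cell, n in counts.most_common() if n >= 2]
print(hotspots)  # cells flagged for extra patrols: [(0, 0), (1, 2)]
```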