Civil Rights & Constitutional Law


Artificial Intelligence Used to Predict Outcome of Hundreds of Human Rights Cases

International Business Times

In the study, a team of British and American researchers said it had used an AI system to correctly predict the outcomes of hundreds of cases heard at the European Court of Human Rights. The AI, which analyzed 584 English-language case texts related to Articles 3, 6 and 8 of the European Convention on Human Rights using a machine learning algorithm, came to the same verdict as human judges in 79 percent of the cases. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," lead researcher Nikolaos Aletras, of University College London (UCL), noted in the statement.
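The approach reported in the study is standard supervised text classification: the published paper describes n-gram (and topic) features extracted from the judgment text feeding a linear support vector machine that predicts "violation" versus "no violation." A minimal Python sketch of that kind of pipeline, using a tiny invented stand-in corpus rather than the 584 real case texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus: in the study these would be the full English-language
# texts of ECHR judgments under Articles 3, 6 and 8, labeled by the court's outcome.
case_texts = [
    "applicant alleges ill-treatment in detention contrary to article 3",
    "the domestic proceedings satisfied the article 6 fair-trial guarantees",
    "interference with family life was not proportionate under article 8",
    "complaint manifestly ill-founded, no appearance of a violation",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

# Word n-gram features feeding a linear SVM, the general setup the paper reports.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(case_texts, labels)
print(model.predict(["prolonged solitary confinement raises an issue under article 3"]))
```

On the real corpus the researchers report roughly 79 percent agreement with the judges; the toy data here only illustrates the shape of the pipeline.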


Artificial Intelligence's White Guy Problem - NYTimes.com

#artificialintelligence

According to some prominent voices in the tech world, artificial intelligence presents a looming existential threat to humanity: Warnings by luminaries like Elon Musk and Nick Bostrom about "the singularity" -- when machines become smarter than humans -- have attracted millions of dollars and spawned a multitude of conferences. But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many "intelligent" systems that shape how we are categorized and advertised to. Take a small example from last year: Users discovered that Google's photo app, which applies automatic labels to pictures in digital photo albums, was classifying images of black people as gorillas. Google apologized; it was unintentional.


U.S. police used Facebook, Twitter data to track protesters: ACLU

The Japan Times

SAN FRANCISCO – U.S. police departments used location data and other user information from Twitter, Facebook and Instagram to track protesters in Ferguson, Missouri, and Baltimore, according to a report from the American Civil Liberties Union on Tuesday. Facebook, which also owns Instagram, and Twitter shut off the data access of Geofeedia, the Chicago-based data vendor that provided data to police, in response to the ACLU findings. The report comes amid growing concerns among consumers and regulators about how online data is being used and how closely tech companies are cooperating with the government on surveillance. "These special data deals were allowing the police to sneak in through a side door and use these powerful platforms to track protesters," said Nicole Ozer, the ACLU's technology and civil liberties policy director. The ACLU report found that as recently as July, Geofeedia touted its social media monitoring product as a tool to monitor protests.


Artificial Intelligence and Algorithms -- Friend or Foe to the News?

#artificialintelligence

You, like many others, have probably succumbed to clicking on the "trending" news tab on the right side of your Facebook news feed. At first glance it seems to surface the latest entertaining or newsworthy headlines from around the web, driven, as Twitter's trends are, by the millions of active users on Facebook reading stories and generating views. That is not exactly true: while the "trending" feed does provide users with the latest updates, it is curated by algorithms programmed to filter through topics. In other words, it is not determined by users, but by artificial intelligence. According to Facebook's article "Search FYI: An Update to Trending," the social media giant uses algorithms to ensure that unimportant topics like #lunch are excluded from the trending list; instead, algorithms pull stories "directly from news sources."
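Facebook has not published the algorithm, so the following is purely illustrative: a toy Python filter that reproduces only the two behaviors described here, dropping generic topics like #lunch and keeping topics backed by recognized news sources (both lists are invented for the example).

```python
# Purely illustrative: Facebook's real trending pipeline is not public.
GENERIC_TOPICS = {"#lunch", "#monday", "#coffee"}          # assumed blocklist
NEWS_DOMAINS = {"nytimes.com", "reuters.com", "bbc.com"}   # assumed allowlist

def trending(candidates):
    """candidates: list of (topic, mention_count, source_domain) tuples."""
    kept = [
        (topic, count)
        for topic, count, domain in candidates
        if topic.lower() not in GENERIC_TOPICS and domain in NEWS_DOMAINS
    ]
    # Rank whatever survives the filters by how widely it is being discussed.
    return sorted(kept, key=lambda item: item[1], reverse=True)

print(trending([
    ("#lunch", 90000, "instagram.com"),      # generic topic, filtered out
    ("Election Results", 50000, "nytimes.com"),
    ("Nobel Prize", 12000, "bbc.com"),
]))
```

The real system presumably weighs far richer signals; the point is only that the list is produced by rules and models rather than a raw tally of what users read.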


Google's Brain Team: 'AIs can be racist and sexist but we can change that'

ZDNet

Google's methodology could have applications in any scoring system, such as a bank's credit-scoring system. In an age where data is driving decisions about everything from creditworthiness to insurance and criminal justice, machines could well end up making bad predictions that just reflect and reinforce past discrimination. The Obama administration outlined its concerns about this issue in its 2014 big-data report, warning that automated discrimination against certain groups could be the inadvertent outcome of the way big-data technologies are used. The article adds that, while privacy and regulation will slow the pace of adoption, AI will bring some profound changes to healthcare, and that using social networks or location data to assess a person's creditworthiness could boost access to finance for people who don't have a credit history.
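One concrete version of the fix, proposed in Google Brain's "equality of opportunity" research, is to choose a separate score cutoff for each group so that applicants who would in fact repay are approved at the same rate in every group. A rough Python sketch on synthetic credit scores (all numbers and group effects are invented):

```python
import numpy as np

def equal_opportunity_thresholds(scores, repaid, group, target_tpr=0.8):
    """Pick a per-group cutoff so that, among people who actually repay,
    the same fraction (target_tpr) is approved in every group."""
    thresholds = {}
    for g in np.unique(group):
        good_scores = scores[(group == g) & (repaid == 1)]
        # Cutoff = the score quantile that lets target_tpr of repayers through.
        thresholds[g] = np.quantile(good_scores, 1 - target_tpr)
    return thresholds

rng = np.random.default_rng(0)
n = 1000
group = np.array(["a"] * n + ["b"] * n)
repaid = rng.integers(0, 2, size=2 * n)
# Group "b" gets systematically lower scores for the same repayment behavior,
# standing in for historical bias baked into the training data.
scores = (rng.normal(600, 50, size=2 * n)
          + np.where(group == "a", 20, -20)
          + 40 * repaid)

print(equal_opportunity_thresholds(scores, repaid, group))
```

Approving everyone above their group's cutoff equalizes the true-positive rate across groups, which is the property the researchers argue a lender should audit for, rather than applying one blanket threshold learned from biased history.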


IBM, Cloudera join RStudio to create R interface to Apache Spark

#artificialintelligence

R users can now use the popular dplyr package to tap into Apache Spark big data. The new sparklyr package is a native dplyr interface to Spark, according to RStudio. After installing the package, users can "interactively manipulate Spark data using both dplyr and SQL (via DBI)," according to an RStudio blog post, as well as "filter and aggregate Spark data sets then bring them into R for analysis and visualization." There is also access to Spark's distributed machine-learning algorithms.
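The excerpt describes the R-side interface; for orientation, the same pattern of connecting to Spark, filtering and aggregating remotely, and then pulling a small result back into the local session looks roughly like this in PySpark (a Python sketch of the equivalent workflow, not the sparklyr API itself; the flights.csv path and columns are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local Spark session; sparklyr's spark_connect(master = "local") plays the same role in R.
spark = SparkSession.builder.master("local[*]").appName("sketch").getOrCreate()

# Hypothetical flights dataset, mirroring the nycflights13 data used in RStudio's examples.
flights = spark.read.csv("flights.csv", header=True, inferSchema=True)

# Filter and aggregate inside Spark, then collect the small summary locally:
# the same dplyr filter / group_by / summarise -> collect() flow described above.
delays = (
    flights
    .filter(F.col("dep_delay") > 0)
    .groupBy("carrier")
    .agg(F.avg("dep_delay").alias("mean_delay"), F.count("*").alias("n"))
)
print(delays.toPandas().head())
```

sparklyr exposes the same split: dplyr verbs are translated to Spark SQL and executed in the cluster, and only an explicit collect() (like toPandas() here) brings data into the R session, with Spark's distributed machine-learning algorithms available through the same connection.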


What the Gender Gap in Tech Could Cost Us

#artificialintelligence

Brad Grossman (@bradgro) is founder and CEO of Zeitguide, a cultural think tank. As artificial intelligence gets embedded into day-to-day activities -- predicting what we need from virtual assistants, teachers, even doctors -- is the technology neutrally scrubbing out gender biases, or encoding them permanently into our future? The companies developing AI, like most of Silicon Valley, have a predominantly male workforce of engineers and developers. As Melinda Gates noted during this year's Code Conference, "When I graduated, 34% of undergraduates in computer science were women… we're now down to 17%." There is a real risk that such gender imbalance is invisibly shaping machine learning algorithms and artificial intelligence applications.


"We'll Keep AI Safe," Say Microsoft, Google, IBM, Facebook and Amazon on New Partnership

#artificialintelligence

Some of the world's largest tech companies are coming together to form a partnership aimed at educating the public about advancements in artificial intelligence and ensuring they meet ethical standards. "We believe that artificial intelligence technologies hold great promise for raising the quality of people's lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education," the group stated in a series of "tenets." Another focus will be ethics, with the group inviting academic experts to work with the companies on using AI for the benefit of humanity. But it is not clear whether this means refusing to cooperate with government surveillance authorities or opposing forms of online censorship.


Facebook, Google, Microsoft, IBM and Amazon partner to solve AI's ethical problem

#artificialintelligence

Artificial intelligence is becoming ubiquitous. As its reach grows and it becomes ingrained in consumer products and services, elements of control and regulation are required. Silicon Valley's biggest companies are joining forces to introduce them. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way. Each member of the Partnership on AI will contribute financial and research resources.


To Make AI Less Biased, Give It a Worldview

#artificialintelligence

One of the most difficult emerging problems when it comes to artificial intelligence is making sure that computers don't act like racist, sexist dicks. As it turns out, it's pretty tough to do: humans created and programmed them, and humans are often racist, sexist dicks. If we can program racism into computers, can we also train them to have a sense of fairness? Some experts believe that the large databases used to train modern machine learning programs reproduce existing human prejudices. To put it bluntly, as Microsoft researcher Kate Crawford did for the New York Times, AI has a white guy problem.