Senators are asking whether artificial intelligence could violate US civil rights laws

#artificialintelligence

Seven members of the US Congress have sent letters to the Federal Trade Commission, the Federal Bureau of Investigation, and the Equal Employment Opportunity Commission asking whether the agencies have vetted the potential biases of the artificial intelligence algorithms being used for commerce, surveillance, and hiring. "We are concerned by the mounting evidence that these technologies can perpetuate gender, racial, age, and other biases," a letter to the FTC says. "As a result, their use may violate civil rights laws and could be unfair and deceptive." The letters request that the agencies respond by the end of September with any complaints they have received about unfair use of facial recognition or artificial intelligence, as well as details on how these algorithms are tested for fairness before being deployed by the government. In the letter to the EEOC, Senators Kamala Harris, Patty Murray, and Elizabeth Warren specifically ask the agency to determine whether this technology could violate the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990.


Daniel Wagner Interview on AI

#artificialintelligence

The following is an interview with Daniel Wagner, ahead of the release of his new book, AI Supremacy: Winning in the Era of Machine Learning. Can you briefly explain the differences between artificial intelligence, machine learning, and deep learning? Artificial intelligence (AI) is the overarching science and engineering associated with intelligent algorithms, whether or not they learn from data. However, the definition of intelligence is subject to philosophical debate, and even the term algorithm can be interpreted in a wide range of contexts. This is one of the reasons there is confusion about what is and is not AI: people use the word loosely and have their own definitions of what they believe AI to be. AI is best understood as a catch-all term for technology that tends to imply the latest advances in intelligent algorithms, but the context in which the phrase is used determines its meaning, which can vary quite widely.


#KindrGrindr: Gay dating app launches anti-racism campaign

BBC News

If you're a black or Asian user of the gay dating app Grindr, it's possible you've encountered racism while using it. Some users of the app say they've come across what they believe are discriminatory statements on other profiles, such as "no blacks and no Asians". Others say they've faced racist comments in conversation after rejecting another user's advances. Now Grindr has taken a stand against discrimination on its platform, saying no user is entitled to tear another down for "being who they are". It has launched the #KindrGrindr campaign to raise awareness of racism and discrimination and to promote inclusivity among users.


Left unchecked, artificial intelligence can become prejudiced all on its own

#artificialintelligence

If artificial intelligence were to run the world, it wouldn't be so bad: it could objectively make decisions on exactly the kinds of things humans tend to screw up. But if we're gonna hand over the reins to AI, it's gotta be fair. AI systems trained on datasets that were annotated or curated by people tend to learn the same racist, sexist, or otherwise bigoted biases as those people. Slowly, programmers seem to be correcting for these biases. But even if we succeed at keeping our own prejudice out of our code, it seems that AI is now capable of developing it all on its own.


How Yazidi refugees are using drones and helium balloons to collect evidence of genocide

The Independent

The British installation at the London Design Biennale is an international project that demonstrates how victims of human rights violations around the world can gather proof of their own experiences. Plastic bottles, digital cameras and kites, just some of the low-cost items in the exhibition, are being used in the Sinjar region of northern Iraq to gather the remaining evidence of Isis's 2014 treatment of the Yazidi ethnic minority, treatment that survivors and their supporters have called genocide and hope to prosecute in the international courts. Not only do they say thousands were killed by the terrorist group and thousands more displaced, but Yazidi cultural and religious heritage sites were destroyed and their temples used as mass graves. Four years later, the region is still dangerous, littered with landmines and booby-traps left by the militants as they retreated. So when Yazda, a global rights organisation established by the Yazidi diaspora, sought help with its documentation efforts from Forensic Architecture, an independent research agency based at Goldsmiths, University of London, the agency's team of architects, photographers, software developers, lawyers and archaeologists adapted its investigative methods to give Yazidis ways to gather video and data without entering the most hazardous areas.


Google's prototype Chinese search engine links users' activity to their phone numbers, report claims

Daily Mail

Google's secretive plans in China are attracting renewed scrutiny from privacy advocates. The tech giant is said to be building a prototype version of a censored Chinese search engine that links users' activity to their personal phone numbers, according to the Intercept. In doing so, it would be able to comply with the Chinese government's censorship requirements, increasing the chances that such a product would launch there in the future. A bipartisan group of 16 US lawmakers has asked Google whether it would comply with China's internet censorship and surveillance policies should it re-enter the search engine market there. While China is home to the world's largest number of internet users, a 2015 report by the US think tank Freedom House found that the country had the most restrictive online use policies of the 65 nations it studied, ranking below Iran and Syria. But China has maintained that its various forms of web censorship are necessary for protecting its national security.


Extracting Fairness Policies from Legal Documents

arXiv.org Machine Learning

The machine learning community has recently been exploring the implications of bias and fairness in AI applications. The definition of fairness for such applications varies with the domain. The policies governing the use of a machine learning system in a given context are defined by the constitutional laws of nations and by the regulatory policies of the organizations involved in its use. Fairness-related laws and policies are often spread across large documents such as constitutions, agreements, and organizational regulations, whose long, complex sentences are written for rigour and robustness. Automatically extracting fairness policies, or any specific kind of policy, from a large legal corpus can therefore be very useful for studying bias and fairness in AI applications. We attempt to automatically extract fairness policies from publicly available legal documents using two approaches based on semantic relatedness. The experiments reveal how classical WordNet-based similarity and vector-based similarity differ in addressing this task. We show that similarity based on word vectors beats the classical approach by a large margin, whereas other vector representations of senses and sentences fail to even match the classical baseline. Further, we present a thorough error analysis, with examples from the dataset, to explain the results.
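
As a concrete illustration of the two approaches the abstract compares, the sketch below scores a sentence against a handful of fairness-related seed terms in both ways: classical WordNet path similarity and cosine similarity over word vectors. The seed terms, the tokenization, and the source of the word vectors (any gensim-style word-to-vector mapping) are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of the two semantic-relatedness scorers contrasted in the
# abstract. Seed terms, tokenization, and thresholds are illustrative
# assumptions, not the paper's actual configuration.
import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

FAIRNESS_SEEDS = ["discrimination", "equality", "fairness"]  # assumed seeds


def wordnet_score(sentence):
    """Classical WordNet relatedness: best path similarity between any
    word in the sentence and any synset of a fairness seed term."""
    seed_synsets = [s for term in FAIRNESS_SEEDS for s in wn.synsets(term)]
    best = 0.0
    for token in sentence.lower().split():
        for syn in wn.synsets(token):
            for seed in seed_synsets:
                sim = syn.path_similarity(seed)
                if sim is not None and sim > best:
                    best = sim
    return best


def vector_score(sentence, vectors):
    """Vector-based relatedness: cosine similarity between the averaged
    word vectors of the sentence and of the seed terms. `vectors` is any
    word -> np.ndarray mapping, e.g. gensim KeyedVectors."""
    def mean_vec(words):
        vs = [vectors[w] for w in words if w in vectors]
        return np.mean(vs, axis=0) if vs else None

    s, q = mean_vec(sentence.lower().split()), mean_vec(FAIRNESS_SEEDS)
    if s is None or q is None:
        return 0.0
    return float(np.dot(s, q) / (np.linalg.norm(s) * np.linalg.norm(q)))
```

One plausible use, not spelled out in the abstract, is to flag a sentence as a candidate fairness policy when its score clears a tuned threshold; the abstract's result is that the word-vector score separates policy from non-policy sentences far better than the WordNet score does.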


Artificial Intelligence can develop racism on its own

Daily Mail

Robots could teach themselves to treat other forms of life, including humans, as less valuable than themselves, new research claims. Experts say prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines, which could teach each other the value of excluding outsiders from their immediate group. The findings are based on computer simulations of how AIs, or virtual agents, form groups and interact with one another.
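
The summary gives little detail on the simulations, but the general mechanism it describes, in which agents that exclude outsiders earn higher payoffs and are then imitated, can be shown in a deliberately simplified toy model. The sketch below is an assumption-laden illustration of that mechanism, not the researchers' actual code: refusing to donate to out-group agents saves cost, agents copy the strategy of higher scorers, and prejudice spreads without any prejudiced input data.

```python
# Toy model of prejudice emerging from payoff imitation alone. All
# parameters and rules are illustrative assumptions, not the study's model.
import random

N_AGENTS, N_ROUNDS = 100, 500
BENEFIT, COST = 1.0, 0.5

# Each agent has a group and a "prejudice" level: the probability of
# refusing a donation to an out-group member.
agents = [{"group": i % 2, "prejudice": random.random(), "score": 0.0}
          for i in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    for a in agents:
        a["score"] = 0.0
    # Donation game: a donor may pay COST so a recipient gains BENEFIT.
    for _ in range(N_AGENTS):
        donor, recipient = random.sample(agents, 2)
        in_group = donor["group"] == recipient["group"]
        if in_group or random.random() > donor["prejudice"]:
            donor["score"] -= COST
            recipient["score"] += BENEFIT
    # Social learning: copy the prejudice level of a better-scoring
    # agent, with a little mutation.
    for a in agents:
        other = random.choice(agents)
        if other["score"] > a["score"]:
            a["prejudice"] = min(1.0, max(0.0,
                other["prejudice"] + random.gauss(0.0, 0.05)))

print("mean prejudice:", sum(a["prejudice"] for a in agents) / N_AGENTS)
```

Over the rounds, the mean prejudice level drifts upward, since refusing out-group donations is cheap and successful strategies get copied, which is the flavour of result the article describes.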


'Right to be forgotten' could threaten global free speech, say NGOs

The Guardian

The "right to be forgotten" online is in danger of being transformed into a tool of global censorship through a test case at the European court of justice (ECJ) this week, free speech organisations are warning. An application by the French data regulator for greater powers to remove out of date or embarrassing content from internet domains around the world will enable authoritarian regimes to exert control over publicly available information, according to a British-led alliance of NGOs. The right to be forgotten was originally established by an ECJ ruling in 2014 after a Spaniard sought to delete an auction notice of his repossessed home dating from 1998 on the website of a mass circulation newspaper in Catalonia. He had resolved his social security debts, he said, and his past troubles should no longer be automatically linked to him whenever anyone searched for his name on Google. The power to de-list from online searches was limited to national internet domains.


Deep Recurrent Survival Analysis

arXiv.org Machine Learning

Survival analysis is an active area of statistical research for modeling time-to-event data in the presence of censoring, and it has been widely used in applications such as clinical research, information systems, and other fields affected by survivorship bias. Many approaches have been proposed, ranging from traditional statistical methods to machine learning models. However, existing methodologies either rely on counting-based statistics over segmented data or assume a particular form for the event probability distribution over time. Moreover, few works consider sequential patterns within the feature space. In this paper, we propose a Deep Recurrent Survival Analysis model that combines deep learning, for fine-grained conditional probability prediction, with survival analysis, for handling censoring. By capturing time dependency through the conditional probability of the event at each step, our method predicts the likelihood of the true event occurrence and estimates the survival rate over time, i.e., the probability that the event has not yet occurred, for the censored data. Meanwhile, without assuming any specific form for the event probability distribution, our model shows great advantages over previous work in fitting various sophisticated data distributions. In experiments on three real-world tasks from different fields, our model significantly outperforms state-of-the-art solutions under various metrics.
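
A minimal sketch of the idea, under assumptions drawn only from the abstract: a recurrent network emits a conditional hazard h_t (the probability that the event occurs at step t given that it has not occurred earlier), the survival curve is the running product of (1 - h_t), and the likelihood treats censored and uncensored samples differently. The LSTM, layer sizes, and loss details below are illustrative, not the paper's exact architecture.

```python
# DRSA-style sketch: an RNN predicts a per-step conditional hazard, and the
# survival curve is its running complement-product. Sizes are illustrative.
import torch
import torch.nn as nn


class RecurrentSurvival(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                         # x: (batch, time, features)
        out, _ = self.rnn(x)
        h = self.head(out).squeeze(-1)            # conditional hazards h_t
        survival = torch.cumprod(1.0 - h, dim=1)  # S_t = prod_{s<=t} (1 - h_s)
        return h, survival


def neg_log_likelihood(h, survival, event_step, observed):
    """Uncensored samples contribute P(event exactly at step t) = h_t * S_{t-1};
    censored samples contribute P(no event through step t) = S_t."""
    idx = torch.arange(h.size(0))
    prior = torch.where(event_step > 0,
                        survival[idx, (event_step - 1).clamp(min=0)],
                        torch.ones_like(h[:, 0]))
    event_prob = h[idx, event_step] * prior
    surv_prob = survival[idx, event_step]
    lik = torch.where(observed.bool(), event_prob, surv_prob)
    return -lik.clamp_min(1e-8).log().mean()


# Usage sketch with random data: 8 sequences, 20 steps, 5 features.
x = torch.randn(8, 20, 5)
model = RecurrentSurvival(n_features=5)
h, survival = model(x)
loss = neg_log_likelihood(h, survival,
                          event_step=torch.randint(0, 20, (8,)),
                          observed=torch.randint(0, 2, (8,)))
loss.backward()
```

Because the hazards are free per-step outputs rather than draws from a fixed parametric family, this setup can in principle fit the varied event-time distributions the abstract mentions, while the two-branch likelihood is what handles the censored samples.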