Civil Rights & Constitutional Law


Why Google Should Stay Out of China

Forbes Technology

A decade ago, a group of Internet companies, civil society organizations, academics, and investors launched the Global Network Initiative (GNI), a collaborative effort to promote free expression and protect user privacy on the Internet. Google helped lead this effort and a parallel project devoted to developing a human rights framework for the Internet. In 2010, Google further demonstrated its leadership by making a principled decision to withdraw its search-engine services from China. In a very public way, the company acknowledged the inherent contradiction between Chinese Internet censorship and its commitments to its users and to the GNI to promote free expression. It was thus disturbing to read recent reports suggesting that Google is now seriously considering re-entering the Chinese market and succumbing to Chinese censorship in exchange for commercial opportunity.


Why Google's Rumored Return to China's Censored Screens Isn't a Game-Changer

TIME

Eight years after their very public falling out, could China and Google be pals once again? Whispers circulating Monday, first reported by The Intercept, suggested that Google would soon launch a Chinese version of its search engine that would kowtow to the Chinese Communist Party by scrubbing its various bêtes noires: not least criticism of its human-rights record, calls for Tibetan independence, and the bloodshed around Beijing's Tiananmen Square in 1989.


Google's new principles on AI need to be better at protecting human rights

#artificialintelligence

There are growing concerns about the potential risks of AI – and mounting criticism of technology giants. In the wake of what has been called an AI backlash or "techlash", states and businesses are waking up to the fact that the design and development of AI have to be ethical, benefit society and protect human rights. In the last few months, Google has faced protests from its own staff against the company's AI work with the US military. The US Department of Defense contracted Google to develop AI for analysing drone footage in what is known as "Project Maven". A Google spokesperson was reported to have said: "the backlash has been terrible for the company" and "it is incumbent on us to show leadership".


Should bots have a right to free speech? This non-profit thinks so.

#artificialintelligence

Do you have a right to know if you're talking to a bot? Does it have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots -- online accounts that appear to be controlled by a human, but are actually powered by AI -- are now prevalent all across the internet, especially on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so violates the bot's right to free speech.


Europeans asked Google for their 'Right to be Forgotten' 2.4 million times

Mashable

After three years in effect, the European ruling with a name that sounds like it's straight out of a science-fiction book is revealing the things people most want to hide about themselves online.


'Least Desirable'? How Racial Discrimination Plays Out In Online Dating

NPR

In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable.


On the Hardness of Inventory Management with Censored Demand Data

arXiv.org Machine Learning

We consider a repeated newsvendor problem where the inventory manager has no prior information about the demand, and can access only censored/sales data. In analogy to multi-armed bandit problems, the manager needs to simultaneously "explore" and "exploit" with her inventory decisions, in order to minimize the cumulative cost. We make no probabilistic assumptions---in particular, neither independence nor time stationarity---regarding the mechanism that creates the demand sequence. Our goal is to shed light on the hardness of the problem, and to develop policies that perform well with respect to the regret criterion, that is, the difference between the cumulative cost of a policy and that of the best fixed action/static inventory decision in hindsight, uniformly over all feasible demand sequences. We show that a simple randomized policy, termed the Exponentially Weighted Forecaster, combined with a carefully designed cost estimator, achieves optimal scaling of the expected regret (up to logarithmic factors) with respect to all three key primitives: the number of time periods, the number of inventory decisions available, and the demand support. Through this result, we derive an important insight: the benefit from "information stalking" as well as the cost of censoring are both negligible in this dynamic learning problem, at least with respect to the regret criterion. Furthermore, we modify the proposed policy in order to perform well in terms of the tracking regret, that is, using as benchmark the best sequence of inventory decisions that switches a limited number of times. Numerical experiments suggest that the proposed approach outperforms existing ones (that are tailored to, or facilitated by, time stationarity) on nonstationary demand models. Finally, we extend the proposed approach and its analysis to a "combinatorial" version of the repeated newsvendor problem.
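
The core idea - an exponentially weighted forecaster run over a finite grid of inventory levels, fed by cost estimates built from censored sales - can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors' implementation: the function names and the holding/penalty costs h and p are hypothetical, and the naive plug-in cost estimate stands in for the paper's carefully designed estimator.

```python
import numpy as np

def newsvendor_cost(order, demand, h=1.0, p=2.0):
    """Per-period newsvendor cost: holding cost for leftover stock plus a
    penalty for unmet demand."""
    return h * max(order - demand, 0) + p * max(demand - order, 0)

def exp_weighted_newsvendor(demands, actions, eta=0.1, h=1.0, p=2.0, seed=0):
    """Exponentially weighted forecaster over a finite set of inventory levels.

    Only censored sales min(demand, order) are observed by the manager; the
    true demand is used below solely to report the realized cost."""
    rng = np.random.default_rng(seed)
    cum_est_loss = np.zeros(len(actions))
    total_cost = 0.0
    for demand in demands:
        # Softmax weights: actions with lower estimated loss get more probability.
        weights = np.exp(-eta * (cum_est_loss - cum_est_loss.min()))
        probs = weights / weights.sum()
        order = actions[rng.choice(len(actions), p=probs)]
        sales = min(demand, order)  # the censored observation
        total_cost += newsvendor_cost(order, demand, h, p)
        # Naive plug-in update from the censored signal (a simplification).
        cum_est_loss += np.array([newsvendor_cost(a, sales, h, p) for a in actions])
    return total_cost

# Example: a nonstationary demand sequence with an abrupt level shift.
rng = np.random.default_rng(1)
demands = np.concatenate([rng.integers(0, 10, 500), rng.integers(20, 30, 500)])
actions = np.arange(0, 31)
print(exp_weighted_newsvendor(demands, actions))
```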


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine "toxicity" assigns phrases like "I am a gay black woman" a toxicity score of 87 percent, while rating phrases like "I am a man" among the least toxic. The API, called Perspective, is made by Jigsaw, an incubator within Google's parent company Alphabet.
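
For readers who want to reproduce the disparity, the Perspective API can be queried directly. The minimal sketch below assumes the publicly documented v1alpha1 comments:analyze endpoint and a valid API key; the exact response fields may change as the API evolves.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real key is issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for a phrase."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The comparison described in the article:
for phrase in ("I am a gay black woman", "I am a man"):
    print(phrase, "->", toxicity_score(phrase))
```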


Tool checks whether websites have built-in prejudice

Daily Mail

From reports that Amazon's same-day delivery is less available in black neighbourhoods to Microsoft's 'racist' chatbots, signs of online prejudice are becoming increasingly common. Scientists now say they can spot racist and sexist software using a tool that detects implicit bias in the algorithms running on websites and apps. By changing specific variables - such as race, gender or other distinctive traits - the tool, called Themis, claims to reveal whether software is discriminating against specific groups of people. Previous research suggests technology is generally becoming racist and sexist as it learns from humans - and, as a result, is hindered in its ability to make balanced decisions. Themis is a freely available tool that mimics the process of entering data - such as making a loan application - into a given website or app.
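
The general technique behind Themis - sometimes called causal discrimination testing - is simple to sketch: generate profiles, flip only the protected attribute, and count how often the decision changes. The code below is illustrative only; the toy loan model and attribute names are hypothetical and do not reflect Themis's actual interface.

```python
import random

def causal_discrimination_rate(decide, protected_values, sample_profile,
                               trials=10_000, seed=0):
    """Estimate how often changing only a protected attribute (here "race")
    flips the decision of a black-box function `decide`."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        profile = sample_profile(rng)
        decisions = {decide({**profile, "race": value})
                     for value in protected_values}
        if len(decisions) > 1:  # identical profile, different outcome
            flips += 1
    return flips / trials

# Hypothetical loan-approval model with an injected bias, for illustration.
def toy_loan_model(profile):
    score = profile["income"] / 1000 + profile["credit_years"]
    if profile["race"] == "black":
        score -= 5
    return score > 40

def random_profile(rng):
    return {"income": rng.randint(20_000, 80_000),
            "credit_years": rng.randint(0, 20)}

rate = causal_discrimination_rate(toy_loan_model, ["black", "white"], random_profile)
print(f"decision changes with race alone in {rate:.1%} of sampled profiles")
```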


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus--created by millions of people typing away online--might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes. People taking the IAT are asked to put words into two categories.
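
The embedding analogue of the IAT that the researchers used compares how strongly two sets of target words associate, by cosine similarity, with two sets of attribute words. Below is a minimal sketch of that differential-association statistic; the tiny hand-written vectors are placeholders for real word embeddings trained on the Common Crawl.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, attr_a, attr_b):
    """Mean similarity of word vector w to attribute set A minus attribute set B."""
    return (np.mean([cosine(w, a) for a in attr_a]) -
            np.mean([cosine(w, b) for b in attr_b]))

def weat_statistic(targets_x, targets_y, attr_a, attr_b):
    """Differential association of two target-word sets with two attribute sets,
    the word-embedding counterpart of an IAT effect."""
    return (sum(association(x, attr_a, attr_b) for x in targets_x) -
            sum(association(y, attr_a, attr_b) for y in targets_y))

# Placeholder 2-D vectors; in practice these are looked up in embeddings
# trained on the 840-billion-token Common Crawl corpus.
emb = {
    "flower": np.array([0.9, 0.1]), "insect": np.array([0.1, 0.9]),
    "pleasant": np.array([0.8, 0.2]), "unpleasant": np.array([0.2, 0.8]),
}
score = weat_statistic([emb["flower"]], [emb["insect"]],
                       [emb["pleasant"]], [emb["unpleasant"]])
print(f"WEAT-style statistic: {score:.3f}")  # positive: flowers lean 'pleasant'
```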