Results


People are incensed that an elitist dating app is promoting itself with racist slurs

Mashable

An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, the app advertised itself in a Facebook post using the term "banglas," a racist slur for Bangladeshi migrant workers in Singapore. In an earlier Medium post from December, founder Eng said his app would allow filtering by "prestigious schools."


The March on Austin: Washington Casts a Shadow on SXSW

#artificialintelligence

For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump's election will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. This year's dialogue will focus on how "social media can drive organized protests and provide support for causes our current administration has reprioritized," like the environment, gender equality and women's rights, said Neil Carty, senior VP-innovation strategy at consultancy MediaLink. "There is a shift away from interruptive TV ads to content people want to watch in its own right," said Jody Raida, director-branded entertainment at McGarryBowen. Artificial intelligence and virtual reality will also be hot, with dozens of sessions dedicated to the technologies, along with the application of chatbots and live video.


A collection of 13,500 insults lobbed by Wikipedia editors is helping researchers understand and fight trolls

#artificialintelligence

The researchers say the data will boost efforts to train software to understand and police online harassment. The collaborators have already used it to train machine-learning algorithms that rival crowdsourced workers at spotting personal attacks. When they ran the system through the full collection of 63 million discussion posts made by Wikipedia editors, they found that only around one in 10 attacks had resulted in action by moderators. The Wikimedia Foundation made reducing harassment among Wikipedia editors a priority last year.
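
The approach described -- training a classifier on human-labelled comments so it can flag personal attacks at scale -- can be illustrated with a toy sketch. This is not the Wikipedia/researchers' actual pipeline or data; the labels, example comments, and Naive Bayes model below are all assumptions chosen for brevity.

```python
# Toy illustration: a Naive Bayes text classifier trained on a handful of
# hand-labelled comments, flagging new comments as "attack" or "ok".
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {'attack', 'ok'}."""
    counts = {"attack": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(model, text):
    counts, totals = model
    vocab = set(counts["attack"]) | set(counts["ok"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training comments standing in for crowdsourced labels.
model = train([
    ("you are an idiot and a moron", "attack"),
    ("what a stupid worthless edit", "attack"),
    ("thanks for fixing the citation", "ok"),
    ("good edit, the sources check out", "ok"),
])
print(classify(model, "such a stupid idiot"))   # attack
print(classify(model, "thanks, good sources"))  # ok
```

A production system would use far richer features and millions of labelled examples, but the core idea -- learn word statistics from labelled comments, then score unseen ones -- is the same.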


How to Keep Your AI From Turning Into a Racist Monster

WIRED

Algorithmic bias -- when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed -- causes everything from warped Google searches to the barring of qualified women from medical school. Microsoft's chatbot Tay, which quickly embraced humanity's worst attributes after its Twitter debut, is one example. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads: when the researchers simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women.
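
The Carnegie Mellon result came from a black-box audit: simulated browsing agents with otherwise-equivalent male and female profiles, with the researchers counting how often each group was shown a high-income job ad. The sketch below illustrates that audit shape only -- the `serve_ad` function is a made-up stand-in for the opaque ad system (here deliberately biased), not Google's actual behavior or the team's tooling.

```python
# Sketch of a disparate-exposure audit: equivalent simulated profiles,
# differing only in gender, query a black-box ad server many times.
import random

def serve_ad(profile, rng):
    # Stand-in for the opaque system under audit: this *simulated* server
    # shows the high-income ad to male profiles far more often.
    p = 0.6 if profile["gender"] == "male" else 0.1
    return "high_income_job" if rng.random() < p else "other"

def audit(n_agents=10_000, seed=0):
    rng = random.Random(seed)
    shown = {"male": 0, "female": 0}
    for gender in shown:
        for _ in range(n_agents):
            if serve_ad({"gender": gender}, rng) == "high_income_job":
                shown[gender] += 1
    # Ratio of high-income ad impressions, male vs. female profiles.
    return shown["male"] / max(shown["female"], 1)

ratio = audit()
print(f"high-income ad shown {ratio:.1f}x more often to male profiles")
```

With the assumed bias baked into `serve_ad`, the audit recovers a ratio of roughly six -- the point being that such disparities are measurable from the outside even when the serving logic itself is hidden.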


How artificial intelligence can be corrupted to repress free speech

#artificialintelligence

According to a 2016 report from internet liberty watchdog Freedom House, two-thirds of all internet users reside in countries where criticism of the ruling administration is censored -- and 27 percent live in nations where posting, sharing or supporting unpopular opinions on social media can get you arrested. By keeping ISPs and websites under threat of closure, the government is able to leverage that additional labor force to help monitor a larger population than it otherwise could. This past July, the Cyberspace Administration of China, the agency in charge of online censorship, issued new rules to websites and service providers enabling the government to punish any outlet that publishes "directly as news reports unverified content found on online platforms such as social media." "The Supreme Court, especially the Roberts Court, has been, on the main, a strong defender of free expression," Danielle Keats Citron, professor of law at the University of Maryland Carey School of Law, wrote to Engadget. "Context is crucial to many free-speech questions like whether a threat amounts to a true threat and whether a person is a limited-purpose public figure," Keats Citron added. Automated moderation can also cut the other way: League of Legends reduced toxic language and the abuse of other players by 11 percent and 6.2 percent, respectively, after developer Riot Games instituted an automated notification system that reminded players not to be jerks at various points throughout each match.


A massive AI partnership is tapping civil rights and economic experts to keep AI safe

#artificialintelligence

The organizations themselves are not officially affiliated yet -- that process is still underway -- but the Partnership's board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a "founding member," putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board. "In its most ideal form, [the Partnership] puts on the agenda the idea of human rights and civil liberties in the science and data science community," says Carol Rose, the executive director of the ACLU of Massachusetts, who is joining the Partnership's board. "While there will be many benefits from AI, it is important to ensure that challenges such as protecting and advancing civil rights, civil liberties, and security are accounted for," Sears says. Google will be represented by director of augmented intelligence research Greg Corrado; Facebook by its director of AI research, Yann LeCun; Amazon by its director of machine learning, Ralf Herbrich; Microsoft by Horvitz; and IBM by Francesca Rossi, a research scientist at its T.J. Watson Research Center.

