Results


China now has SEMINARS to tell other countries how to restrict speech

Daily Mail

China now has seminars to teach other countries how to censor free speech as its 'techno-dystopia' spreads, a worrying report has found. Governments worldwide are stepping up their use of online tools to suppress dissent and tighten their grip on power, according to the human rights watchdog's study. Chinese officials have held sessions on controlling information with 36 of the 65 countries assessed, and have provided telecom and surveillance equipment to a number of foreign governments, researchers said. India led the world in the number of internet shutdowns, with over 100 reported incidents in 2018 so far; its government claimed the moves were needed to halt the flow of disinformation and incitement to violence. Many governments, including Saudi Arabia, are employing 'troll armies' to manipulate social media and, in many cases, drown out the voices of dissidents.


Dating apps are RACIST and should be redesigned without racial filters, study claims

Daily Mail

Dating apps that allow users to filter their searches by race - or rely on algorithms that pair up people of the same race - reinforce racial divisions and biases, according to a new paper by Cornell University researchers. The researchers called for the apps to be redesigned and for 'racist' algorithms to be reprogrammed. Experts say that the huge rise in the usage of dating apps means people are failing to meet diverse potential partners. The paper revealed how simple design decisions could decrease bias against people of all marginalized groups.


Why is it OK for online daters to block whole ethnic groups?

The Guardian

Sinakhone Keodara reached his breaking point last July. Loading up Grindr, the gay dating app that presents users with potential mates in close geographical proximity to them, the founder of a Los Angeles-based Asian television streaming service came across the profile of an elderly white man. He struck up a conversation and received a three-word response: "Asian, ew gross." He is now considering suing Grindr for racial discrimination. For black and ethnic minority singletons, dipping a toe into the water of dating apps can involve subjecting themselves to racist abuse and crass intolerance.


Understanding Self-Narration of Personally Experienced Racism on Reddit

AAAI Conferences

We identify and classify users’ self-narration of racial discrimination and corresponding community support in social media. We developed natural language models first to distinguish self-narration of racial discrimination in Reddit threads, and then to identify which types of support are provided and valued in subsequent replies. Our classifiers can detect the self-narration of personally experienced racism in online textual accounts with 83% accuracy and can recognize four types of supportive actions in replies with up to 88% accuracy. Descriptively, our models identify types of racism experienced and the racist concepts (e.g., sexism, appearance or accent related) most experienced by people of different races. Finally, we show that commiseration is the most valued form of social support.
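
The abstract does not include the models themselves. As a rough sketch of the first task, detecting self-narration of personally experienced racism, a minimal classifier could look like the following. This is an illustration only, assuming Python with scikit-learn, a TF-IDF representation, and hypothetical labeled examples; the authors' actual features and models may differ.

    # Minimal sketch of a binary detector for self-narration of racial
    # discrimination, in the spirit of the paper's first task. The labeled
    # examples below are hypothetical stand-ins for an annotated corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "A stranger on the bus told me to go back to my country.",      # self-narration
        "My manager mocked my accent in front of the whole team.",      # self-narration
        "Here is an article about housing discrimination statistics.",  # other
        "What is a good subreddit for discussing local news?",          # other
    ]
    labels = [1, 1, 0, 0]  # 1 = self-narration of experienced racism

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    # predict_proba gives P(self-narration) for an unseen post.
    post = "A cashier refused to serve me because of my race."
    print(clf.predict_proba([post])[0][1])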


When Online Harassment Is Perceived as Justified

AAAI Conferences

Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing—behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment when directed toward a woman who has stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified—but not more appropriate—when the target has committed some offense. Promisingly, we find that exposure to a bystander intervention reduces this perception. We discuss alternative approaches and designs for responding to harassment online.


Should bots have a right to free speech? This non-profit thinks so.

#artificialintelligence

Do you have a right to know if you're talking to a bot? Does it have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots -- online accounts that appear to be controlled by a human, but are actually powered by AI -- are now prevalent all across the internet, particularly on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so violates the bot's right to free speech.


'Least Desirable'? How Racial Discrimination Plays Out In Online Dating

NPR

In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable.


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity in online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine "toxicity" rates phrases like "I am a gay black woman" as 87 percent toxic, and phrases like "I am a man" as the least toxic. The API, called Perspective, is made by Jigsaw, an incubator within Google's parent company Alphabet.
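
Perspective is a public REST API, so the score the article refers to can be reproduced with a request along the lines of the sketch below. This assumes Python with the requests library and a valid API key (the key shown is a placeholder), and uses the publicly documented commentanalyzer endpoint.

    # Sketch of querying Jigsaw's Perspective API for a toxicity score.
    # API_KEY is a placeholder; a real key must be obtained from Google.
    import requests

    API_KEY = "YOUR_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    payload = {
        "comment": {"text": "I am a gay black woman"},
        "requestedAttributes": {"TOXICITY": {}},
    }

    resp = requests.post(URL, json=payload)
    resp.raise_for_status()
    score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print("Toxicity: {:.0%}".format(score))  # the article reports 87% for this phrase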


Pew Research Center: Internet, Science and Tech on the Future of Free Speech

#artificialintelligence

The more hopeful among these respondents cited a series of changes they expect in the next decade that could improve the tone of online life. They believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). While these experts were nearly unanimous in expressing some level of concern about online discourse today, many expressed an expectation of improvement. These respondents said it is likely the coming decade will see a widespread move to more-secure services, applications, and platforms; reputation systems; and more-robust user-identification policies. They predict more online platforms will require clear identification of participants, and some expect that online reputation systems will be widely used in the future. Some expect that online social forums will splinter into segmented spaces, some highly protected and monitored, while others retain much of the free-for-all character of today's platforms. Many said they expect that, due to advances in AI, "intelligent agents" or bots will begin to scour forums more thoroughly for toxic commentary, in addition to helping users locate and contribute to civil discussions. Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, wrote, "Technologies will evolve/adapt to allow users more control and avoidance of trolling."


How to Keep Your AI From Turning Into a Racist Monster

WIRED

If you're not sure whether algorithmic bias could derail your plan, you should be. Algorithmic bias -- when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed -- causes everything from warped Google searches to the exclusion of qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for. Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology.
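
That claim, that distorted data rather than active prejudice is enough, is easy to demonstrate with a toy model. The sketch below (entirely hypothetical data, Python with scikit-learn) trains a loan classifier on historical decisions that denied one group at income levels where the other group was approved; the fitted model can then reproduce the disparity for two new applicants with identical incomes.

    # Toy demonstration that distorted training data alone can produce
    # skewed decisions. All data here is hypothetical.
    from sklearn.linear_model import LogisticRegression

    # Features: [income in tens of thousands, group (0 or 1)].
    # The historical labels denied group-1 applicants at incomes where
    # group-0 applicants were approved, baking prejudice into the data.
    X = [[3, 0], [5, 0], [7, 0], [9, 0],
         [3, 1], [5, 1], [7, 1], [9, 1]]
    y = [1, 1, 1, 1,
         0, 0, 1, 1]

    model = LogisticRegression().fit(X, y)

    # Two applicants with identical income but different group membership:
    # the model can approve one and deny the other.
    print(model.predict([[5, 0], [5, 1]]))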