Results


Understanding Self-Narration of Personally Experienced Racism on Reddit

AAAI Conferences

We identify and classify users’ self-narration of racial discrimination and the corresponding community support in social media. We developed natural language models first to distinguish self-narration of racial discrimination in Reddit threads, and then to identify which types of support are provided and valued in subsequent replies. Our classifiers can detect the self-narration of personally experienced racism in online textual accounts with 83% accuracy and can recognize four types of supportive actions in replies with up to 88% accuracy. Descriptively, our models identify the types of racism experienced and the racist concepts (e.g., sexism, appearance- or accent-related discrimination) most often experienced by people of different races. Finally, we show that commiseration is the most valued form of social support.
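The abstract does not describe the model architecture, so the sketch below is only a generic supervised text-classification baseline (TF-IDF features with logistic regression) of the kind such detectors are often built on. The example posts and labels are invented stand-ins for the authors' annotated Reddit data, not material from the paper.

```python
# Toy sketch of a self-narration detector. The model choice and all
# example texts are assumptions for illustration; the paper's actual
# classifier and training data are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for annotated posts (1 = self-narration, 0 = not).
posts = [
    "A cashier refused to serve me today and muttered a slur at me.",
    "My landlord told me people like me bring down the neighborhood.",
    "Here is a news article about discrimination statistics.",
    "What are good books on the history of civil rights?",
    "At work my manager mocked my accent in front of the team.",
    "Reminder: the community meetup is on Thursday.",
]
labels = [1, 1, 0, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# Probability that a new post is a first-person account of discrimination.
print(clf.predict_proba(["The interviewer said my name sounded too foreign."])[0][1])
```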


When Online Harassment Is Perceived as Justified

AAAI Conferences

Most models of criminal justice seek to identify and punish offenders. However, these models break down in online environments, where offenders can hide behind anonymity and lagging legal systems. As a result, people turn to their own moral codes to sanction perceived offenses. Unfortunately, this vigilante justice is motivated by retribution, often resulting in personal attacks, public shaming, and doxing, behaviors known as online harassment. We conducted two online experiments (n=160; n=432) to test the relationship between retribution and the perception of online harassment as appropriate, justified, and deserved. Study 1 tested attitudes about online harassment directed toward a woman who had stolen from an elderly couple. Study 2 tested the effects of social conformity and bystander intervention. We find that people believe online harassment is more deserved and more justified, but not more appropriate, when the target has committed some offense. Promisingly, we find that exposure to a bystander intervention reduces this perception. We discuss alternative approaches and designs for responding to harassment online.


Should bots have a right to free speech? This non-profit thinks so.

#artificialintelligence

Do you have a right to know if you're talking to a bot? Does the bot have the right to keep that information from you? Those questions have been stirring in the minds of many since well before Google demoed Duplex, a human-like AI that makes phone calls on a user's behalf, earlier this month. Bots -- online accounts that appear to be controlled by a human but are actually powered by AI -- are now prevalent across the internet, especially on social media sites. While some people think legally forcing these bots to "out" themselves as non-human would be beneficial, others think doing so violates the bot's right to free speech.


'Least Desirable'? How Racial Discrimination Plays Out In Online Dating

NPR

In 2014, user data on OkCupid showed that most men on the site rated black women as less attractive than women of other races and ethnicities. That resonated with Ari Curtis, 28, and inspired her blog, Least Desirable.


Google's comment ranking system will be a hit with the alt-right

Engadget

A recent, sprawling Wired feature outlined the results of its analysis of toxicity among online commenters across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia "is the least toxic city in the US." The underlying API used to determine toxicity rates phrases like "I am a gay black woman" as 87 percent toxic, while phrases like "I am a man" score among the least toxic. The API, called Perspective, is made by Jigsaw, an incubator inside Google's parent company, Alphabet.
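For readers who want to reproduce scores like the one quoted above: Perspective is exposed as a REST endpoint (comments:analyze). The sketch below assumes you have obtained an API key; the request and response fields follow the public v1alpha1 API, and the exact score returned will drift over time as the model is retrained.

```python
# Minimal sketch of querying Google's Perspective API for a toxicity score.
# Assumes a valid API key; scores change as the underlying model is updated.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

payload = {
    "comment": {"text": "I am a gay black woman"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

resp = requests.post(URL, json=payload).json()
score = resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity: {score:.0%}")  # the article reports roughly 87% for this phrase
```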


Pew Research Center: Internet, Science and Tech on the Future of Free Speech

#artificialintelligence

The more hopeful among these respondents cited a series of changes they expect in the next decade that could improve the tone of online life. While many of these experts expressed real concern about online discourse today, many also expected improvement. They believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). These respondents said the coming decade will likely see a widespread move to more-secure services, applications, and platforms; reputation systems; and more-robust user-identification policies. They predict more online platforms will require clear identification of participants, and some expect that online reputation systems will be widely used in the future. Some expect that online social forums will splinter into segmented spaces, some highly protected and monitored while others retain much of the free-for-all character of today's platforms. Many said they expect that, thanks to advances in AI, "intelligent agents" or bots will begin to more thoroughly scour forums for toxic commentary, in addition to helping users locate and contribute to civil discussions. Jim Hendler, professor of computer science at Rensselaer Polytechnic Institute, wrote, "Technologies will evolve/adapt to allow users more control and avoidance of trolling."


How to Keep Your AI From Turning Into a Racist Monster

WIRED

If you're not sure whether algorithmic bias could derail your plan, you should be. Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology. Algorithmic bias -- when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed -- causes everything from warped Google searches to the barring of qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices or corrects for.
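The mechanism is easy to reproduce: if historical labels are skewed against one group, a model trained on them learns the skew even when underlying merit is identical. The sketch below is a synthetic illustration with invented data and a plain logistic regression, not a description of any deployed system.

```python
# Synthetic demo: distorted training labels produce a biased model.
# All data here is invented; this illustrates the mechanism only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
qualified = income + rng.normal(0, 5, n) > 50    # same merit rule for everyone

# Distorted historical labels: 30% of qualified group-B applicants were
# recorded as rejected, and no one notices or corrects for it.
label = qualified.copy()
label[(group == 1) & qualified & (rng.random(n) < 0.3)] = False

model = LogisticRegression().fit(np.column_stack([income, group]), label)

# Two identical applicants (income 60), differing only in group membership:
a = model.predict_proba([[60.0, 0]])[0][1]
b = model.predict_proba([[60.0, 1]])[0][1]
print(f"approval probability: group A {a:.2f}, group B {b:.2f}")
```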


How artificial intelligence can be corrupted to repress free speech

#artificialintelligence

In fact, in many countries the internet, the very thing that was supposed to smash down the walls of authoritarianism like a sledgehammer of liberty, has instead been co-opted by those very regimes to push their own agendas while crushing dissent and opposition. And with the emergence of conversational AI -- the technology at the heart of services like Google's Allo and Jigsaw or Intel's Hack Harassment initiative -- these governments could have a new tool to further censor their citizens. Turkey, Brazil, Egypt, India and Uganda have all shut off internet access when doing so was politically beneficial to their ruling parties. Nations like Singapore, Russia and China exert outsize control over the structure and function of their national networks, often relying on a mix of political, technical and social schemes to control the flow of information within their digital borders. The effects of these policies are self-evident.


How artificial intelligence can be corrupted to repress free speech

Engadget

The internet was supposed to become an overwhelming democratizing force against illiberal administrations. It was supposed to open repressed citizens' eyes, expose them to new democratic ideals and help them rise up against their authoritarian governments to claim their basic human rights. It was supposed to be inherently resistant to centralized control. In fact, in many countries the internet, the very thing that was supposed to smash down the walls of authoritarianism like a sledgehammer of liberty, has instead been co-opted by those very regimes to push their own agendas while crushing dissent and opposition. And with the emergence of conversational AI -- the technology at the heart of services like Google's Allo and Jigsaw or Intel's Hack Harassment initiative -- these governments could have a new tool to further censor their citizens.