Facebook issues $100K challenge to build an AI that can identify hateful memes


Memes are now an integral part of how people communicate on the internet. While many memes can cheer you up, plenty of others are hateful and discriminatory. At the same time, AI models trained primarily on text to detect hate speech struggle to identify hateful memes. So Facebook is throwing a new $100,000 challenge to developers to create models that can recognize hateful images and memes. As part of the challenge, Facebook said it'll provide developers with a dataset of 10,000 'hateful' images licensed from Getty Images: "We worked with trained third-party annotators to create new memes similar to existing ones that had been shared on social media sites."

Facebook deploys AI in its fight against hate speech and misinformation


Even in the year 2020, it's not very hard to be led astray on Facebook. Click a few misleading links and you can find yourself at the bottom of an ethnonationalist rabbit hole, facing a flurry of hate speech and medical misinformation. But with the help of AI and machine learning systems, the social media platform is accelerating its efforts to keep this content from spreading. It's bad enough that we're dealing with the COVID-19 pandemic without being bombarded on Facebook with ads for sham cures and conspiracy theories passed off as gospel truth. The company is already partnering with 60 fact-checking organizations to fight this disinformation, and since the start of the outbreak in March it has issued a temporary ban on the sale of PPE, hand sanitizers, and cleaning supplies on the platform.

Facebook wants to build a 'hateful meme' AI to clean up its platform

The Independent - Tech

Social media giant Facebook has launched a competition with a $100,000 prize pool to develop an artificial intelligence system that can detect "hateful" memes. The company will provide a data set on which researchers can train their algorithms (an algorithm being a finite sequence of instructions that a computer follows to solve a problem). While humans understand that the words and images in a meme are supposed to be read together, with contextual information from one informing the other, Facebook says that memes are difficult for computers to analyse because they cannot simply analyse the text and image separately. Instead, they must "combine these different modalities and understand how the meaning changes when they are presented together," the company wrote in its announcement. To create a dataset that could be used by researchers, Facebook created new memes based on those that had been shared on social media sites, but replaced the original images with licensed pictures from Getty Images that preserved the original message of the meme.
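The multimodal combination Facebook describes is often implemented as "late fusion": separate text and image encoders each produce an embedding, and a classifier operates on their concatenation, so innocuous text plus an innocuous image can still score as hateful together. The following is a minimal illustrative sketch of that idea, not Facebook's actual architecture; the encoder functions are stand-in assumptions (a real system would use, e.g., a pretrained text transformer and an image CNN):

```python
import numpy as np


def encode_text(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in for a pretrained text encoder (assumption, illustrative only):
    # hash the text to seed a deterministic pseudo-embedding.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)


def encode_image(image_id: str, dim: int = 8) -> np.ndarray:
    # Stand-in for a pretrained image encoder (assumption, illustrative only).
    seed = abs(hash(image_id)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)


def late_fusion_features(text: str, image_id: str) -> np.ndarray:
    # The downstream classifier sees both modalities jointly, which is what
    # lets it model how meaning changes when text and image are combined.
    return np.concatenate([encode_text(text), encode_image(image_id)])


feats = late_fusion_features("look how many people love you", "desert_photo")
print(feats.shape)  # (16,)
```

A unimodal text-only or image-only classifier would see just half of this vector, which is exactly why it misses memes whose harm emerges from the combination.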

Facebook says AI has a ways to go to detect nasty memes


Facebook contends the problem of "hateful" memes is a problem of computing the interstices between innocuous phrases and innocuous images that, when combined, have a derogatory effect. The company illustrates the matter with artificial examples rather than republishing actual memes found in the wild. Mean memes -- combinations of words and images that denigrate people based on qualities such as religion or ethnicity -- pose an interesting challenge for machine learning programs, and will for some time, according to Facebook. New research by the social media giant shows that deep learning forms of artificial intelligence fall far short of humans in the ability to detect hurtful memes. A research paper disseminated by Facebook on Tuesday, titled "The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes," compiles a data set of 10,000 examples of mean memes found in the wild, including on Facebook, and compares how various state-of-the-art deep learning models perform relative to human annotators.

Facebook's AI is still largely baffled by covid misinformation – MIT Technology Review


The news: In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first. Behind the scenes: The improvement is largely driven by two updates to Facebook's AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post.
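The moderation pipeline described above can be sketched as simple threshold-based routing: content scored above a high-confidence threshold is removed automatically, while mid-range scores are queued for human review. The threshold values and function names here are illustrative assumptions, not Facebook's actual parameters:

```python
def route_content(hate_score: float,
                  auto_remove_threshold: float = 0.95,
                  review_threshold: float = 0.5) -> str:
    """Route a post based on a classifier's hate-speech score in [0, 1].

    Thresholds are hypothetical, chosen only to illustrate the idea that
    automatic removal requires much higher confidence than human triage.
    """
    if hate_score >= auto_remove_threshold:
        return "auto_remove"    # high confidence: remove without human review
    if hate_score >= review_threshold:
        return "human_review"   # uncertain: most content is checked by a person
    return "keep"


print(route_content(0.99))  # auto_remove
print(route_content(0.70))  # human_review
print(route_content(0.10))  # keep
```

Keeping the auto-removal threshold far above the review threshold reflects the trade-off the report describes: the AI acts alone only when it is very confident, and humans handle the ambiguous middle.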