Facebook has announced a new initiative to stop “hateful” memes from popping up around the net. The project, dubbed the “Hateful Memes Challenge,” makes a data set and baseline models accessible to journalists and researchers for review, which, in turn, is supposed to turn “dangerous memes” into something “positive” and “keep people safe” while browsing the web.
Firstly, if you want a quick summary of Facebook’s newly announced program to curb hateful memes, you can check out Memology 101’s video right here:
In case you did not watch the video: according to a post on facebook.com by Douwe Kiela (research scientist), Hamed Firooz (applied research scientist), and Aravind Mohan (data scientist), harmful content “affects the entire tech industry and society at large.”
In addition to the above, here’s one of the many pitches behind The Hateful Memes Challenge and what it aims to accomplish:
“We’ve built and are now sharing a data set designed specifically to help AI researchers develop new systems to identify multimodal hate speech. This content combines different modalities, such as text and images, making it difficult for machines to understand.
The Hateful Memes data set contains 10,000+ new multimodal examples created by Facebook AI. We licensed images from Getty Images so that researchers can use the data set to support their work. We are also releasing the code for baseline-trained models.
We are also launching the Hateful Memes Challenge, a first-of-its-kind online competition hosted by DrivenData with a $100,000 total prize pool. The challenge has been accepted as part of the NeurIPS 2020 competition track.”
The three scientists and the Facebook team describe Facebook AI, along with the Hateful Memes Challenge, as the best solution available at this moment. However, the team is also looking at ways to prevent potential misuse of the data set, which is why access to it will be limited.
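For researchers who do get access, the public release distributes the memes as JSON Lines, one record per meme with an `id`, an image path, a binary `label` (1 = hateful, 0 = benign), and the overlaid `text`. The minimal sketch below parses that layout; the two sample records are invented for illustration and are not real entries from the data set.

```python
import json

# Two made-up records in the JSON Lines layout of the public release:
# one benign meme and one labeled hateful. Not real data-set entries.
SAMPLE_JSONL = """\
{"id": 10001, "img": "img/10001.png", "label": 0, "text": "have a nice day"}
{"id": 10002, "img": "img/10002.png", "label": 1, "text": "an example attack"}
"""

def load_memes(jsonl_text: str) -> list:
    """Parse JSON Lines text into a list of meme records (dicts)."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

memes = load_memes(SAMPLE_JSONL)
hateful = [m for m in memes if m["label"] == 1]
print(f"{len(memes)} memes loaded, {len(hateful)} labeled hateful")
```

The JSON Lines format keeps each meme self-contained, so researchers can stream the file without loading all 10,000+ records at once.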
Furthermore, we also learn that the team is trying to train the AI systems to distinguish genuinely hateful examples from potential false positives. To make sure the classification decisions are actionable, the team adopted the following definition of hate speech:
“A direct or indirect attack on people based on characteristics, including ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease. We define attack as violent or dehumanizing (comparing people to non-human things, e.g., animals) speech, statements of inferiority, and calls for exclusion or segregation. Mocking hate crime is also considered hate speech.”
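A toy sketch of why those false positives are hard to avoid: in a multimodal meme, the text and the image can each look benign in isolation, with the hateful meaning arising only from their combination. The functions and scores below are invented purely for illustration; they are not Facebook's actual models or scoring.

```python
# Hypothetical late-fusion vs. joint scoring of a meme.
# Scores are in [0, 1], where higher means "more likely hateful".

def unimodal_score(text_score: float, image_score: float) -> float:
    """Late fusion: judge each modality separately, take the worst case."""
    return max(text_score, image_score)

def multimodal_score(text_score: float, image_score: float,
                     interaction_score: float) -> float:
    """Joint scoring: also consider meaning created only by the
    text/image combination (the interaction term)."""
    return max(text_score, image_score, interaction_score)

# A benign caption over a benign image: each modality scores low,
# but the (invented) interaction score captures the combined meaning.
text_only, image_only, combined = 0.1, 0.1, 0.9

print(unimodal_score(text_only, image_only))               # stays low
print(multimodal_score(text_only, image_only, combined))   # flags the meme
```

The same asymmetry cuts the other way: a harsh-sounding caption over a harmless image can trip a text-only classifier, which is the false-positive case the team is trying to train against.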
Lastly, the team says it is making progress on improving AI systems that detect “hate speech” and other “harmful content” on Facebook. However, it wants the Hateful Memes Challenge to spread beyond Facebook to other platforms in order to “keep people safe.”