Last Tuesday, Facebook announced a plan to address offensive memes using artificial intelligence supported by crowdsourcing, which can help identify malicious and hateful posts.
Facebook says memes are becoming harder to detect despite its previous efforts and improved tools.
To combat this, Facebook has created a data set of 10,000 hateful memes and is offering $100,000 in prizes to developers who can use it to detect hate speech. The data set is a collection of images, often containing text, that deliver a specific message. The scheme is part of an increased effort against hate speech.
“These efforts will spur the broader AI research community to test new methods, compare their work, and benchmark their results in order to accelerate work on detecting multimodal hate speech,” Facebook stated in a blog post.
However, the challenge with the new tactic lies in 'multimodal' content, which consists of both words and images. While each element can be inoffensive on its own, their combination might not be. This is difficult because the two elements need to be analyzed together.
For example, the caption "love the way you smell today" is acceptable on its own. Combined with a picture of a skunk, however, it no longer carries the same message.
To address this challenge, Facebook's research community plans to focus on developing tools that consider the various modalities present in a single piece of content and combine them during classification.
“This approach enables the system to analyze the different modalities together as people do,” said the company.
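The fusion approach the company describes can be sketched with a toy example. Everything here is hypothetical: the feature vectors, the `embed_image` and `embed_text` helpers, and the classifier weights are illustrative stand-ins, not Facebook's actual system. The sketch only shows why scoring concatenated modalities together lets the same caption receive different verdicts depending on the image it accompanies:

```python
import math

# Hypothetical stand-ins for real encoders; in practice these vectors
# would come from a trained vision model and a trained language model.
def embed_image(image_id):
    features = {
        "skunk_photo": [0.9, 0.1, 0.8, 0.2],
        "flower_photo": [0.1, 0.9, 0.2, 0.7],
    }
    return features[image_id]

def embed_text(caption):
    # Crude bag-of-words style features, purely for illustration.
    words = caption.lower().split()
    return [
        1.0 if "love" in words else 0.0,
        1.0 if "smell" in words else 0.0,
        float(len(words)) / 10.0,
        0.0,
    ]

def fuse_and_score(image_id, caption, weights, bias=0.0):
    # Early fusion: concatenate both modalities, then classify jointly,
    # so the model sees text and image together rather than separately.
    fused = embed_image(image_id) + embed_text(caption)
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # probability the meme is hateful

# Hypothetical weights; a real system would learn these end to end.
weights = [0.5, -0.5, 0.6, -0.4, 0.3, 0.7, 0.1, 0.0]

caption = "love the way you smell today"
p_skunk = fuse_and_score("skunk_photo", caption, weights)
p_flower = fuse_and_score("flower_photo", caption, weights)
print(p_skunk > p_flower)  # same caption, different image, different score
```

A unimodal text classifier would assign both memes the same score, since the caption is identical; only the fused representation lets the image flip the outcome.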
“To provide researchers with a data set with clear licensing terms, we licensed assets from Getty Images. We worked with trained third-party annotators to create new memes similar to existing ones that had been shared on social media sites,” the company says. “The annotators used Getty Images’ collection of stock images to replace the original visuals while still preserving the semantic content.”
Currently, a study states that the best available deep learning models can only achieve a 64.7 percent success rate when identifying offensive memes, compared with a human success rate of 84.7 percent.
“We define attack as violent or dehumanizing (comparing people to non-human things, e.g. animals) speech, statements of inferiority, and calls for exclusion or segregation. Mocking hate crime is also considered hate speech,” Facebook says.
According to the company, hateful content is described as a direct or indirect attack based on characteristics such as ethnicity, sex, sexual orientation, gender identity, caste, religion, disease, or disability.
Participants are said to have until October to study the data set and build models. They are welcome to take part in the final competition in December, held at the NeurIPS machine learning conference, where the task will be to identify hateful memes from a new set of memes.
The winner will receive $50,000, according to Facebook.
Over time, Facebook has been investing heavily in addressing hateful content through automated removal. In fact, 88.8 percent of the hate speech content removed during the first quarter of 2020 was detected using AI.
Source: ABS-CBN News