Hate speech can be defined as speech that attacks a person or group on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability or gender. Hateful speech is widespread in public debate, including on online platforms, social media and forums. Online hate speech has debilitating consequences for individual victims' well-being: it inflicts psychological harm, damages self-worth and induces fear. Because the content remains available online, exposure lasts longer, which is associated with greater harm to victims and greater empowerment of perpetrators compared to offline hate speech. The sooner the content is taken down, therefore, the better the chances of mitigating its negative effects on victims' well-being.
We believe that online communities should be inclusive, respectful and diverse. To fight online hate speech, we created Hatebusters, a web application that lets volunteers review YouTube comments flagged as hate speech candidates by our machine learning model and report them to YouTube for removal. Become a member of Hatebusters and help reduce online hate speech.
The machine learning model behind Hatebusters was initially trained on a dataset of YouTube comments that we collected and labelled ourselves as hate speech or non-hate speech. The dataset is available here: Link to download dataset and Link to download description. If you use this dataset, please cite the publication below.
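To illustrate how a classifier can be trained on such labelled comments, here is a minimal sketch using a multinomial Naive Bayes model in pure Python. The toy comments and the choice of Naive Bayes are assumptions for illustration only; they are not the actual Hatebusters dataset or model.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and split on whitespace; a real system would use a proper tokenizer.
    return text.lower().split()

def train(examples):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def predict(model, text):
    """Return the most probable label for a piece of text."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus log likelihoods with Laplace (add-one) smoothing.
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy labelled comments standing in for the real dataset (hypothetical examples).
data = [
    ("you are all terrible people and should leave", "hate"),
    ("i hate everyone like you", "hate"),
    ("great video thanks for sharing", "ok"),
    ("really enjoyed this thanks", "ok"),
]
model = train(data)
print(predict(model, "terrible people like you"))  # -> hate
```

In practice, a pipeline like this would be trained on thousands of labelled comments and use richer features (e.g. TF-IDF weights or character n-grams) rather than raw word counts.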
Together with colleagues from other universities, we won CrowdFlower's AI for Everyone Challenge for Q4 of 2017. Thanks to this award, we aim to publish a new multi-labelled dataset on hate speech in the near future.
A. Anagnostou, I. Mollas, G. Tsoumakas (2018). Hatebusters: A Web Application for Actively Reporting YouTube Hate Speech. Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (IJCAI-ECAI 2018), Stockholm, Sweden, July 13-19, 2018.