Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia
Today’s digital landscape is characterized by the pervasive presence of online communities. A persistent challenge to the ideal of free-flowing discourse in these communities is online abuse.
Wikipedia is a case in point: its large community of contributors has experienced both the shared sense of belonging found online and the perils of abuse, ranging from hate speech to personal attacks to spam.
Currently, Wikipedia has a human-driven process in place to identify online abuse. For their 2019 capstone project, DSI Master of Science in Data Science students Charu Rawat, Arnab Sarkar, and Sameer Singh proposed a framework to understand and detect such abuse in the English Wikipedia community.
Rawat, Sarkar, and Singh received the award for Best Paper in the Data Science for Society category at the 2019 Systems and Information Engineering Design Symposium (SIEDS). In "Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia," the team presented an analysis of user misconduct in Wikipedia and a system for the automated early detection of inappropriate behavior.
This is significant as a first attempt to support Wikipedia's human-based moderation with machine-generated suggestions of potential misconduct. As participation in Wikipedia grows, automated support for patrolling user submissions becomes increasingly valuable, freeing human moderators from routine tasks.
The research started with the collection and analysis of publicly available data sources, such as Wikipedia user account block logs and the text of individual Wikipedia edits. From this dataset the team developed an abuse detection model by combining natural language processing techniques with machine learning algorithms. Among the models they tested, the XGBoost classifier performed best, achieving an AUC of 0.84.
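The paper does not publish the team's code, but the general approach can be sketched as follows: extract text features (here, TF-IDF n-grams) from edit or comment text, train a gradient-boosted classifier, and evaluate it with ROC AUC. The toy comments and labels below are invented for illustration, and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
# Illustrative sketch only (not the authors' code): TF-IDF features plus a
# gradient-boosted tree classifier, evaluated with ROC AUC. The data below
# is a tiny invented stand-in for Wikipedia edit comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in data: comments labeled 1 (abusive) or 0 (benign).
comments = [
    "thanks for the helpful edit",
    "you are an idiot and your edits are garbage",
    "good catch, I fixed the citation",
    "get lost, nobody wants your vandalism here",
    "I added a source for that claim",
    "stop posting this spam link everywhere",
    "nice cleanup of the references section",
    "this user is a troll and should be blocked",
]
labels = [0, 1, 0, 1, 0, 1, 0, 1]

# Turn raw text into unigram/bigram TF-IDF features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(comments)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)

# Gradient-boosted trees; XGBoost exposes a very similar fit/predict API.
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train.toarray(), y_train)

# Score held-out comments and compute ROC AUC on the predicted probabilities.
scores = clf.predict_proba(X_test.toarray())[:, 1]
auc = roc_auc_score(y_test, scores)
print(f"ROC AUC: {auc:.2f}")
```

In practice the team's feature set also drew on user block logs and edit metadata, not text alone, and meaningful AUC figures require a real labeled corpus rather than the eight-example toy above.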
More details and documentation about the project are available on the Wikipedia project page.
Likely directions for future research include focusing on specific types of misconduct to achieve higher prediction rates within those subsets, designing the interface and process by which humans respond to automated suggestions, and extending the methodology to non-English editions of Wikipedia.
The student team was supported by faculty advisor Rafael Alvarado and Wikimedian-in-Residence Lane Rasberry. The team also thanks the Wikimedia Foundation staff who guided the project direction, especially Trust and Safety Policy Manager Patrick Earley and Design Researcher Claudia Lo of the Anti-Harassment Tools Team.