Detecting toxicity triggers in online discussions

Hind Almerekhi, Bernard J. Jansen, Haewoon Kwak, Joni Salminen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Despite the considerable interest in the detection of toxic comments, there has been little research investigating the causes - i.e., triggers - of toxicity. In this work, we first propose a formal definition of triggers of toxicity in online communities. We proceed to build an LSTM neural network model using textual features of comments, and then, based on a comprehensive review of previous literature, we incorporate topical and sentiment shift in interactions as features. Our model achieves an average accuracy of 82.5% in detecting toxicity triggers from diverse Reddit communities.
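The abstract describes an LSTM model over comment text combined with topical and sentiment shift features. The following is a minimal sketch of that kind of two-branch architecture, not the authors' implementation: the vocabulary size, sequence length, layer widths, and feature names are illustrative assumptions.

```python
# Sketch of an LSTM classifier that combines textual comment features with
# topic-shift and sentiment-shift features (sizes are assumed, not from the paper).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20000    # assumed vocabulary size
MAX_LEN = 200         # assumed maximum comment length in tokens
N_SHIFT_FEATURES = 2  # e.g., topic-shift and sentiment-shift scores

# Textual branch: token ids -> embedding -> LSTM
text_in = layers.Input(shape=(MAX_LEN,), name="comment_tokens")
x = layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(text_in)
x = layers.LSTM(64)(x)

# Auxiliary branch: shift features computed over the interaction
shift_in = layers.Input(shape=(N_SHIFT_FEATURES,), name="shift_features")

# Combine both branches and classify: trigger vs. non-trigger
combined = layers.concatenate([x, shift_in])
h = layers.Dense(32, activation="relu")(combined)
out = layers.Dense(1, activation="sigmoid", name="is_trigger")(h)

model = Model(inputs=[text_in, shift_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Binary cross-entropy and accuracy are used here because trigger detection is framed as a binary classification task; the reported 82.5% average accuracy refers to the authors' model, not this sketch.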

Original language: English (US)
Title of host publication: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media
Publisher: Association for Computing Machinery, Inc
Pages: 291-292
Number of pages: 2
ISBN (Electronic): 9781450368858
DOIs
State: Published - Sep 12 2019
Event: 30th ACM Conference on Hypertext and Social Media, HT 2019 - Hof, Germany
Duration: Sep 17 2019 - Sep 20 2019

Publication series

Name: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media

Conference

Conference: 30th ACM Conference on Hypertext and Social Media, HT 2019
Country: Germany
City: Hof
Period: 9/17/19 - 9/20/19

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
