Recently hyped ML content linked in one simple page
Sources: reddit/r/{MachineLearning,datasets}, arxiv-sanity, twitter, kaggle/kernels, hackernews, awesome-datasets, sota changes
Made by: Deep Phrase HK Limited
[1910.12574v1] A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media
We propose a transfer learning approach that leverages the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets.
Abstract. Hateful and toxic content generated by a portion of social media users is a rising phenomenon that has motivated researchers to dedicate substantial effort to the challenging task of hateful content identification. We need not only an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train such a model. The lack of sufficient labelled hate speech data, along with existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT to capture hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall compared to existing approaches. Consequently, our model can capture some biases in the data annotation and collection process and can potentially lead us to a more accurate model. Keywords: hate speech detection, transfer learning, language modeling, BERT, fine-tuning, NLP, social media.
Figures: Fig. 1: BERT-base fine-tuning; Fig. 2: Insert nonlinear layers; Fig. 3: Insert Bi-LSTM layer; Fig. 4: Insert CNN layer; Fig. 5: Fine-tuning strategies (Methodology); Fig. 6: Waseem dataset's confusion matrix; Fig. 7: Davidson dataset's confusion matrix (Error Analysis)
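To give a concrete sense of the fine-tuning strategies the figures list (e.g. Fig. 2, inserting nonlinear layers on top of BERT), here is a minimal sketch of such a classification head applied to BERT's pooled [CLS] representation. The hidden width (128) and the three classes (hate / offensive / neither) are illustrative assumptions, not the paper's exact configuration, and the [CLS] vector is faked with random numbers rather than produced by a real BERT encoder.

```python
import numpy as np

# Sketch of a nonlinear classification head on BERT's [CLS] output.
# 768 matches BERT-base's hidden size; the head width (128) and the
# 3 output classes are assumptions for illustration only.
HIDDEN, WIDTH, CLASSES = 768, 128, 3

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.02, size=(HIDDEN, WIDTH))   # dense layer weights
b1 = np.zeros(WIDTH)
W2 = rng.normal(scale=0.02, size=(WIDTH, CLASSES))  # output projection
b2 = np.zeros(CLASSES)

def classify(cls_vec):
    """Map a [CLS] embedding to class probabilities."""
    h = np.tanh(cls_vec @ W1 + b1)       # inserted nonlinear layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Stand-in for an encoded tweet's [CLS] vector.
probs = classify(rng.normal(size=HIDDEN))
```

In actual fine-tuning, this head and all BERT layers would be trained jointly with cross-entropy loss on the labelled tweets; the Bi-LSTM and CNN variants (Figs. 3-4) replace the dense layer with recurrent or convolutional layers over the full token sequence instead of the [CLS] vector alone.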
Related: TF-IDF
[1811.02906] Transfer Learning from LDA to BiLSTM-CNN for Offensive Language Detection in Twitter
[1801.04433] Detecting Offensive Language in Tweets Using Deep Learning
[1910.01043] Neural Word Decomposition Models for Abusive Language Detection
[1802.00385] A Unified Deep Learning Architecture for Abuse Detection
[1908.06024] Tackling Online Abuse: A Survey of Automated Abuse Detection Methods
[1902.06734] Author Profiling for Hate Speech Detection
[1904.09072] Identifying Offensive Posts and Targeted Offense from Twitter
[1706.01206] One-step and Two-step Classification for Abusive Language Detection on Twitter
[1904.08770] An Empirical Evaluation of Text Representation Schemes on Multilingual Social Web to Filter the Textual Aggression