• Title/Summary/Keyword: malicious comments detection

Design and Implementation of a LSTM-based YouTube Malicious Comment Detection System (유튜브 악성 댓글 탐지를 위한 LSTM 기반 기계학습 시스템 설계 및 구현)

  • Kim, Jeongmin;Kook, Joongjin
    • Smart Media Journal, v.11 no.2, pp.18-24, 2022
  • Problems caused by malicious comments occur on many social media platforms. YouTube in particular, with its strong character as a medium and its easy accessibility from mobile devices, suffers increasing harm from malicious comments. In this paper, we designed and implemented a YouTube malicious comment detection system that identifies malicious comments in YouTube content through LSTM-based natural language processing and visually displays the percentage of malicious comments, the nicknames of the commenters, and their frequency, and we evaluated the system's performance. Using a dataset of about 50,000 comments, malicious comments were detected with an accuracy of about 92%. We therefore expect this system to help solve the social problems that malicious comments cause for many YouTubers by automatically generating malicious comment statistics.
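The statistics stage described in this abstract (percentage of malicious comments, commenter nicknames, and their frequency) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the trained LSTM classifier is abstracted behind a stand-in `detect` function, and all names and the toy word list are hypothetical.

```python
from collections import Counter

def summarize(comments, is_malicious):
    """Aggregate per-video statistics once each comment has been
    classified; `is_malicious` stands in for the trained LSTM model."""
    flagged = [c for c in comments if is_malicious(c["text"])]
    ratio = len(flagged) / len(comments) if comments else 0.0
    by_nickname = Counter(c["nickname"] for c in flagged)
    return {"malicious_ratio": ratio, "by_nickname": by_nickname}

# Toy stand-in for the classifier (hypothetical word list).
BAD_WORDS = {"idiot", "trash"}
detect = lambda text: any(w in text.lower() for w in BAD_WORDS)

comments = [
    {"nickname": "userA", "text": "great video!"},
    {"nickname": "userB", "text": "you are an idiot"},
    {"nickname": "userB", "text": "total trash content"},
    {"nickname": "userC", "text": "thanks for sharing"},
]
stats = summarize(comments, detect)
# stats["malicious_ratio"] is 0.5; userB is flagged twice
```

The returned counter is exactly what a dashboard would need to rank frequent offenders by nickname.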

A Filtering Method of Malicious Comments Through Morpheme Analysis (형태소 분석을 통한 악성 댓글 필터링 방안)

  • Ha, Yeram;Cheon, Junseok;Wang, Inseo;Park, Minuk;Woo, Gyun
    • The Journal of the Korea Contents Association, v.21 no.9, pp.750-761, 2021
  • Even though replying comments on Internet articles have positive effects on discussion and communication, malicious comments remain a source of problems, even driving people to death. Automatic detection of malicious comments is therefore important. However, the current filtering method for malicious comments, based on forbidden words, is not very effective, especially for comments written in Korean. This paper proposes a new filtering approach based on morpheme analysis that identifies coarse and polite morphemes. Based on these two groups of morphemes, the soundness of a comment can be calculated. Further, this paper proposes various impact measures for comments based on soundness. Experiments on malicious comments show that one of the impact measures is effective for detecting them. Compared with the clean-bot of a portal site, recall is improved by 37.93 percentage points and the F-measure by up to 47.66 points. These results suggest that the new morpheme-based filtering method is a promising alternative to methods based on forbidden words.
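The soundness idea, computing a score from counts of coarse versus polite morphemes, can be sketched in a few lines. The exact formula and morpheme lexicons are assumptions for illustration; the paper defines its own soundness and impact measures.

```python
def soundness(morphemes, coarse, polite):
    """Score a comment from counts of coarse vs. polite morphemes.
    This particular formula (in [-1, 1]) is an illustrative assumption,
    not the one defined in the paper."""
    c = sum(1 for m in morphemes if m in coarse)
    p = sum(1 for m in morphemes if m in polite)
    if c + p == 0:
        return 0.0              # neutral: no marked morphemes at all
    return (p - c) / (c + p)    # negative values suggest malicious text

COARSE = {"바보", "멍청이"}     # hypothetical coarse-morpheme lexicon
POLITE = {"감사", "습니다"}     # hypothetical polite-morpheme lexicon

score = soundness(["이", "바보", "야"], COARSE, POLITE)  # -1.0, purely coarse
```

An impact measure could then weight this score by, for example, comment length or reply depth, which is where the paper's variants differ.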

A Malicious Comments Detection Technique on the Internet using Sentiment Analysis and SVM (감성분석과 SVM을 이용한 인터넷 악성댓글 탐지 기법)

  • Hong, Jinju;Kim, Sehan;Park, Jeawon;Choi, Jaehyun
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.2, pp.260-267, 2016
  • The Internet has brought many changes by letting us share information with one another. However, like all social phenomena it has two sides, and it also causes serious social problems. Vicious users take advantage of anonymity on the Internet, posting aggressive comments for defamation, personal attacks, privacy violations, and more. Malicious comments are among the biggest problems regarding the unlawful acts and insults that occur on the Internet. To address these issues, several studies have been done on managing comments efficiently. However, previous research has limitations in recognizing modified malicious vocabulary. In this paper, we therefore propose a malicious comment detection technique that improves on those limitations. The experimental results show an accuracy of 87.8%, higher than that of previous studies.
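The key difficulty this abstract names, modified malicious vocabulary, is typically handled by normalizing tokens before feature extraction. The sketch below is an assumption-laden illustration (hypothetical lexicons, hypothetical feature choice): it strips evasion characters and builds a small feature tuple of the kind an SVM classifier could consume.

```python
import re

PROFANITY = {"stupid", "idiot"}   # hypothetical profanity lexicon
NEGATIVE = {"hate", "worst"}      # hypothetical negative-sentiment lexicon

def normalize(token):
    # Strip characters commonly inserted to evade filters, e.g. "s.t*u-pid".
    return re.sub(r"[^0-9a-zA-Z가-힣]", "", token).lower()

def features(comment):
    """Feature vector (profanity count, negative-sentiment count); the
    actual features used in the paper are not specified here."""
    toks = [normalize(t) for t in comment.split()]
    return (sum(t in PROFANITY for t in toks),
            sum(t in NEGATIVE for t in toks))

vec = features("you s.t*u-pid idiot , worst video")  # (2, 1)
```

Normalization is what lets "s.t*u-pid" match the lexicon entry "stupid" even though the raw token never would.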

Abusive Language Detection Using Bidirectional Long Short-Term Memory Networks (양방향 장단기 메모리 신경망을 이용한 욕설 검출)

  • Na, In-Seop;Lee, Sin-Woo;Lee, Jae-Hak;Koh, Jin-Gwang
    • The Journal of Bigdata, v.4 no.2, pp.35-45, 2019
  • Recently, the social cost of the damage caused by malicious comments has been increasing, as has news of celebrities committing suicide under the effects of such comments. The damage from malicious comments, including abusive language and slang, is increasing and spreading throughout society in various types and forms. In this paper, we propose a technique for detecting abusive language using a bidirectional long short-term memory (BiLSTM) neural network model. We collected comments from the web with a crawler and removed stopwords such as English letters and special characters. For the preprocessed comments, a BiLSTM model that considers the words before and after each position in a sentence was used to detect abusive language. To train the BiLSTM network, the collected comments were morphologically analyzed and vectorized, and each word was labeled as abusive or not. Experiments showed a performance of 88.79% on a total of 9,288 collected comments.
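The stopword-removal step described here, dropping English letters and special characters before morphological analysis, can be sketched with a single regular expression. This is a plausible reconstruction of that preprocessing, not the paper's exact rule set.

```python
import re

def clean_comment(text):
    """Remove English letters and special characters, keeping Hangul,
    digits, and spaces, then collapse runs of whitespace. A sketch of
    the stopword preprocessing described in the abstract."""
    text = re.sub(r"[^가-힣0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

cleaned = clean_comment("이 영상 LOL!! 진짜 별로@@")  # "이 영상 진짜 별로"
```

The cleaned text would then go to a morphological analyzer and a vectorizer before reaching the BiLSTM.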

BERT-Based Logits Ensemble Model for Gender Bias and Hate Speech Detection

  • Sanggeon Yun;Seungshik Kang;Hyeokman Kim
    • Journal of Information Processing Systems, v.19 no.5, pp.641-651, 2023
  • Malicious hate speech and gender-biased comments are common in online communities, causing social problems in our society. Gender bias and hate speech detection has been investigated, but it remains difficult because there are diverse ways to express them in words. To address this problem, we attempted to detect malicious comments in a Korean hate speech dataset constructed in 2020. We explored bidirectional encoder representations from transformers (BERT)-based deep learning models, utilizing hyperparameter tuning, data sampling, and logits ensembles with a label distribution. We evaluated our models in Kaggle competitions for gender bias, general bias, and hate speech detection. For gender bias detection, an F1-score of 0.7711 was achieved using an ensemble of the Soongsil-BERT and KcELECTRA models. The general bias task subsumed the gender bias task, and there the ensemble model achieved its best F1-score of 0.7166.
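The logits-ensemble idea can be illustrated with plain softmax averaging across models. This is a generic sketch: the paper's label-distribution weighting is abstracted into optional per-model weights, and the two-model example only stands in for combinations like Soongsil-BERT plus KcELECTRA.

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble(per_model_logits, weights=None):
    """Average per-model softmax probabilities and return the argmax
    label. A plain probability ensemble; the paper's label-distribution
    weighting is abstracted into the optional `weights`."""
    n = len(per_model_logits)
    weights = weights or [1.0 / n] * n
    probs = [softmax(l) for l in per_model_logits]
    n_labels = len(probs[0])
    avg = [sum(w * p[i] for w, p in zip(weights, probs))
           for i in range(n_labels)]
    return max(range(n_labels), key=avg.__getitem__)

# Two hypothetical models disagree; the more confident one wins on average.
pred = ensemble([[2.0, 0.5], [0.2, 0.4]])  # label 0
```

Averaging probabilities rather than raw logits keeps models with different logit scales from dominating the vote.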

Preprocessing Technique for Malicious Comments Detection Considering the Form of Comments Used in the Online Community (온라인 커뮤니티에서 사용되는 댓글의 형태를 고려한 악플 탐지를 위한 전처리 기법)

  • Kim Hae Soo;Kim Mi Hui
    • KIPS Transactions on Computer and Communication Systems, v.12 no.3, pp.103-110, 2023
  • With the spread of the Internet, anonymous communities emerged alongside communities for communication between people, and many users abuse that anonymity to harm others, posting aggressive posts and comments. In the past, administrators checked posts and comments directly and then deleted or blocked them, but as the number of community users grew, continuous monitoring by administrators became infeasible. Initially, word filtering techniques prevented malicious writing by rejecting any post or comment containing a specific word, but users evade such filtering in bypassed forms, for example by using similar-looking words. Deep learning has since been used to monitor user posts in real time, but communities now use expressions understandable only within the community, or only from a human perspective, rather than standard Korean words, and the variety of character types and forms makes it difficult for an artificial intelligence model to learn them all. Therefore, this paper proposes a preprocessing technique in which each character of a sentence is rendered as an image, and a CNN model trained on images of Korean consonants, vowels, and spacing converts characters that are understandable only from a human perspective into the characters the CNN predicts. Experiments confirmed that the proposed preprocessing improved the performance of LSTM, BiLSTM, and CNN-BiLSTM models by 3.2%, 3.3%, and 4.88%, respectively.
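A prerequisite for working at the consonant/vowel level, as this paper's CNN does, is decomposing each Hangul syllable into its jamo. The arithmetic below is the standard Unicode decomposition (not code from the paper; the CNN image-matching stage itself is omitted).

```python
# Standard Unicode jamo tables: 19 initial consonants, 21 vowels,
# 28 finals (index 0 means "no final consonant").
CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    """Split one composed Hangul syllable into (initial, vowel, final)
    using the Unicode syllable-block arithmetic."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < 11172:
        return (syllable,)          # not a composed Hangul syllable
    return (CHOSEONG[code // 588],
            JUNGSEONG[(code % 588) // 28],
            JONGSEONG[code % 28])

parts = decompose("한")  # ("ㅎ", "ㅏ", "ㄴ")
```

Each jamo (or the whole character) can then be drawn to a small bitmap and fed to the CNN that maps visually similar variants back to canonical characters.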

Token-Based Classification and Dataset Construction for Detecting Modified Profanity (변형된 비속어 탐지를 위한 토큰 기반의 분류 및 데이터셋)

  • Sungmin Ko;Youhyun Shin
    • The Transactions of the Korea Information Processing Society, v.13 no.4, pp.181-188, 2024
  • Traditional profanity detection methods have limitations in identifying intentionally altered profanities. This paper introduces a new method based on named entity recognition, a subfield of natural language processing. We developed a profanity detection technique using sequence labeling, for which we constructed a dataset by labeling some profanities in Korean malicious comments, and conducted experiments. Additionally, to enhance the model's performance, we augmented the dataset by labeling parts of a Korean hate speech dataset using a large language model, ChatGPT, and trained on the result. In this process, we confirmed that simply having humans filter the dataset created by the large language model could improve performance, which suggests that human oversight is still necessary in the dataset augmentation process.
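Sequence labeling for profanity detection means tagging each token and then recovering spans from the tags. The sketch below assumes a conventional BIO scheme with a hypothetical `PROF` type; the paper's actual tag set and tokenization are not specified here.

```python
def extract_profanity(tokens, labels):
    """Recover profanity spans from token-level BIO labels, NER-style.
    `B-PROF` opens a span, `I-PROF` continues it, `O` is outside."""
    spans, current = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "B-PROF":
            if current:
                spans.append("".join(current))
            current = [tok]
        elif lab == "I-PROF" and current:
            current.append(tok)
        else:
            if current:
                spans.append("".join(current))
            current = []
    if current:
        spans.append("".join(current))
    return spans

# A modified profanity split across sub-tokens is still recovered whole.
tokens = ["야", "이", "바", "#보", "야"]
labels = ["O", "O", "B-PROF", "I-PROF", "O"]
found = extract_profanity(tokens, labels)  # ["바#보"]
```

Because spans are recovered from labels rather than matched against a fixed lexicon, an altered form like "바#보" is detected even though it appears in no word list.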