• Title/Summary/Keyword: spam blog detection (스팸 블로그 판별)

Search results: 4

A Splog Detection System Using Support Vector Machines (지지벡터기계를 이용한 스팸 블로그(Splog) 판별 시스템)

  • Lee, Song-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.1
    • /
    • pp.163-168
    • /
    • 2011
  • Blogs are an easy way to publish information, engage in discussions, and form communities on the Internet. Recently, several varieties of spam blog have appeared whose purpose is to host ads or to raise the PageRank of target sites. Our purpose is to develop a system that automatically detects these spam blogs (splogs) among blogs in the Web environment. After the HTML of a blog is removed, its text is tagged by a part-of-speech (POS) tagger, and the words together with their POS tags are used as features. Among these features, we select useful ones with the $\chi^2$ statistic and train an SVM on the selected features. Our system achieved an F1 measure of 90.5% on the SPLOG data set.
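
A minimal sketch of the pipeline this abstract describes (bag-of-words features, $\chi^2$ feature selection, a linear SVM classifier). The use of scikit-learn, the toy corpus, and k=10 are illustrative assumptions, not the authors' setup; the original system works on Korean text with POS-tagged tokens, which is omitted here.

```python
# Sketch: word-count features -> chi-square feature selection -> linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: 1 = splog, 0 = normal blog.
posts = [
    "cheap ads buy now click here best price",
    "my trip to the mountains last weekend was great",
    "free free free casino bonus click click",
    "notes from the reading group on machine learning",
]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("bow", CountVectorizer()),           # word occurrence counts
    ("select", SelectKBest(chi2, k=10)),  # keep features with high chi-square scores
    ("svm", LinearSVC()),                 # linear support vector machine
])

pipeline.fit(posts, labels)
print("F1 on the toy data:", f1_score(labels, pipeline.predict(posts)))
```

In a real experiment the feature selection and SVM would be fit on training data only, with the F1 measure computed on held-out posts rather than on the fitting data as in this toy run.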

A Splog Detection System Using Support Vector Machines and $\chi^2$ Statistics (지지벡터기계와 카이제곱 통계량을 이용한 스팸 블로그(Splog) 판별 시스템)

  • Lee, Song-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.905-908
    • /
    • 2010
  • Our purpose is to develop a system that automatically detects splogs among blogs in the Web environment. After the HTML of a blog is removed, its text is tagged by a part-of-speech (POS) tagger, and the words together with their POS tags are used as features. Among these features, we select useful ones with the $\chi^2$ statistic and train an SVM on the selected features. Our system achieved an F1 measure of 90.5% on the SPLOG data set.
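
For reference, a common contingency-table form of the $\chi^2$ statistic used for term-class feature selection is sketched below; the abstract does not state the exact formulation the authors use, so this is only the standard textbook version:

$\chi^2(t, c) = \dfrac{N\,(AD - CB)^2}{(A+C)\,(B+D)\,(A+B)\,(C+D)}$

where $A$ is the number of training documents of class $c$ that contain term $t$, $B$ the number of documents outside $c$ that contain $t$, $C$ the number of documents of class $c$ without $t$, $D$ the number of documents outside $c$ without $t$, and $N = A+B+C+D$. Terms with the highest scores are kept as features for the SVM.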

Spam Classification by Analyzing Characteristics of a Single Web Document (단일 문서의 특징 분석을 이용한 스팸 분류 방법)

  • Sim, Sangkwon;Lee, Soowon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.11a
    • /
    • pp.845-848
    • /
    • 2014
  • Blogs are an important means of expressing personal information and opinions and forming communities on the Internet, but spam blogs created for various purposes, such as attracting advertisements, raising page rank, and generating junk data, are also being abused. To address this problem, this study proposes a spam detection method that uses features appearing in a web document. First, a feature vector is generated for each document from features that can be extracted from a single web document, such as the length of the blog body, the ratio of tags, the number of tags, the number of images, and the number of links; a classification model is then built through machine learning to identify spam blogs. To evaluate the performance of the proposed method, comparative experiments with an existing spam classification study were conducted on blog post data. Compared with the existing work that uses a Bayesian filtering technique, the proposed method achieved better accuracy while also showing faster feature extraction and more efficient memory use.
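
A rough sketch, assuming Python and BeautifulSoup, of extracting single-document features of the kind listed in this abstract (body length, tag ratio, tag count, image count, link count). The exact feature definitions and the downstream classifier in the paper may differ; the sample document and function name are hypothetical.

```python
# Sketch: extract a per-document feature vector from raw HTML.
from bs4 import BeautifulSoup

def extract_features(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(separator=" ", strip=True)
    tags = soup.find_all(True)  # every HTML element in the document
    return {
        "body_length": len(text),                    # length of visible text
        "tag_count": len(tags),                      # number of HTML tags
        "tag_ratio": len(tags) / max(len(text), 1),  # tags per character of text
        "image_count": len(soup.find_all("img")),    # number of images
        "link_count": len(soup.find_all("a")),       # number of hyperlinks
    }

sample = ("<html><body><p>Buy now!</p>"
          "<a href='http://spam.example'>click</a>"
          "<img src='ad.png'/></body></html>")
print(extract_features(sample))  # feature vector for one document
```

Each such vector would then be fed to a machine-learning classifier, as the abstract describes.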

A Study on Spam Document Classification Method using Characteristics of Keyword Repetition (단어 반복 특징을 이용한 스팸 문서 분류 방법에 관한 연구)

  • Lee, Seong-Jin;Baik, Jong-Bum;Han, Chung-Seok;Lee, Soo-Won
    • The KIPS Transactions: Part B
    • /
    • v.18B no.5
    • /
    • pp.315-324
    • /
    • 2011
  • In the Web environment, a flood of spam causes serious social problems such as personal information leaks, monetary loss from phishing, and the distribution of harmful content. Moreover, the types and techniques of spam distribution that must be controlled keep changing day by day. The learning-based spam classification method using the Bag-of-Words model has so far been the most widely used approach. However, this method is vulnerable to the anti-spam avoidance techniques that recent spam commonly employs, because it classifies spam documents using only keyword occurrence information from the classification model training process. In this paper, we propose a spam document detection method that uses the characteristic repetition of words in spam documents as a countermeasure to such avoidance techniques. Most recent spam documents tend to repeat the key phrases they are designed to spread, and this tendency can be used as a measure for classifying spam documents. We define six variables that represent this word-repetition characteristic and use them as a feature set for constructing a classification model. The effectiveness of the proposed method is evaluated in an experiment with blog posts and e-mail data. The experimental results show that the proposed method outperforms other approaches.
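
A hedged sketch of word-repetition statistics in the spirit of the features this abstract describes. The paper defines six specific variables whose exact definitions are not reproduced here; the statistics below (and their names) are illustrative stand-ins, not the authors' feature set.

```python
# Sketch: simple repetition statistics over the tokens of one document.
from collections import Counter

def repetition_features(text: str) -> dict:
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    most_common = counts.most_common(1)[0][1] if counts else 0
    repeated_tokens = sum(c for c in counts.values() if c > 1)
    return {
        "max_term_freq": most_common,                     # count of the most repeated word
        "max_term_ratio": most_common / total,            # its share of all tokens
        "type_token_ratio": len(counts) / total,          # vocabulary diversity
        "repeated_token_ratio": repeated_tokens / total,  # share of tokens that repeat
    }

print(repetition_features("free casino bonus free free casino click here free"))
```

Such repetition-based variables would form the feature set for a classifier trained as in the experiment described above.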