• Title/Summary/Keyword: Web Spam


A Study on Spam Document Classification Method using Characteristics of Keyword Repetition (단어 반복 특징을 이용한 스팸 문서 분류 방법에 관한 연구)

  • Lee, Seong-Jin;Baik, Jong-Bum;Han, Chung-Seok;Lee, Soo-Won
    • The KIPS Transactions:PartB / v.18B no.5 / pp.315-324 / 2011
  • In the Web environment, a flood of spam causes serious social problems such as personal information leaks, monetary loss from phishing, and the distribution of harmful content. Moreover, the types and techniques of spam distribution that must be controlled are changing day by day. The learning-based spam classification method using the Bag-of-Words model has been the most widely used method to date. However, this method is vulnerable to the anti-spam avoidance techniques that recent spam commonly employs, because it classifies spam documents using only keyword occurrence information from the classification model training process. In this paper, we propose a spam document detection method that uses the characteristic repetition of words in spam documents as a countermeasure to anti-spam avoidance techniques. Most recent spam documents tend to repeat key phrases that are designed to spread, and this tendency can be used as a measure for classifying spam documents. We define six variables that represent the characteristics of word repetition and use them as a feature set for constructing a classification model. The effectiveness of the proposed method is evaluated in experiments with blog posts and e-mail data. The results show that the proposed method outperforms other approaches.
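The word-repetition idea above can be approximated in code. The sketch below computes three illustrative repetition statistics over a document's tokens; the feature names and definitions are hypothetical stand-ins, not the paper's six variables:

```python
from collections import Counter

def repetition_features(text):
    """Illustrative word-repetition statistics for spam scoring.

    The paper defines six variables; these three are hypothetical
    stand-ins that show the general idea, not the authors' definitions.
    """
    words = text.lower().split()
    if not words:
        return {"max_repeat": 0, "repeat_ratio": 0.0, "unique_ratio": 0.0}
    counts = Counter(words)
    repeated = sum(c for c in counts.values() if c > 1)
    return {
        "max_repeat": max(counts.values()),       # count of most-repeated word
        "repeat_ratio": repeated / len(words),    # share of tokens that repeat
        "unique_ratio": len(counts) / len(words), # vocabulary diversity
    }
```

A feature vector like this would then be fed to any standard classifier in place of raw Bag-of-Words counts.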

A Spam Mail Classification Using Link Structure Analysis (링크구조분석을 이용한 스팸메일 분류)

  • Rhee, Shin-Young;Khil, A-Ra;Kim, Myung-Won
    • Journal of KIISE:Software and Applications / v.34 no.1 / pp.30-39 / 2007
  • Existing content-based spam mail filtering algorithms have difficulty filtering spam mails that contain images but little text. In this thesis we propose an efficient spam mail classification algorithm that utilizes the link structure of e-mails. We compute the number of hyperlinks in an e-mail and the in-link frequencies of the web pages hyperlinked in the e-mail. Using these two features, we classify spam and legitimate mails with a decision tree trained for spam mail classification. We also suggest a hybrid system that combines three different algorithms by majority voting: the link structure analysis algorithm; a modified link structure analysis algorithm, in which only the host part of the pages hyperlinked in an e-mail is used for link structure analysis; and a content-based method using SVM (support vector machines). The experimental results show that the link structure analysis algorithm slightly outperforms the existing content-based method, with an accuracy of 94.8%. Moreover, the hybrid system achieves an accuracy of 97.0%, a significant improvement over the existing method.
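The majority-voting step of the hybrid system is straightforward to sketch. Assuming the three classifiers each emit a "spam"/"ham" label, the combiner could look like:

```python
def majority_vote(predictions):
    """Combine binary spam/ham votes from several classifiers.

    A sketch of the paper's hybrid scheme: the link-structure,
    host-only link-structure, and content-based (SVM) classifiers
    each cast one vote, and the majority label wins.
    """
    spam_votes = sum(1 for p in predictions if p == "spam")
    # Strict majority of the voters must say "spam".
    return "spam" if spam_votes * 2 > len(predictions) else "ham"
```

With an odd number of voters, as in the paper's three-classifier setup, ties cannot occur.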

Intelligent Spam-mail Filtering Based on Textual Information and Hyperlinks (텍스트정보와 하이퍼링크에 기반한 지능형 스팸 메일 필터링)

  • Kang, Sin-Jae;Kim, Jong-Wan
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.7 / pp.895-901 / 2004
  • This paper describes a two-phase intelligent method for filtering spam mail based on textual information and hyperlinks. Since the body of a spam mail often contains little text, it provides insufficient hints to distinguish spam from legitimate mail. To resolve this problem, we follow the hyperlinks contained in the e-mail body, fetch the contents of the remote web pages, and extract hints (i.e., features) from both the original e-mail body and the fetched web pages. We divide the hints into two kinds of information: definite information (the sender's information and definite spam keyword lists) and less definite textual information (words or phrases, and particular features of the e-mail). In filtering spam mails, definite information is applied first, followed by the less definite textual information. In our experiments, fetching web pages improved the F-measure by 9.4% over using only the original e-mail header and body.
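The two-phase decision order (definite information first, then less definite textual information) can be sketched as follows; the field names, score function, and threshold are illustrative assumptions, not the paper's exact design:

```python
def two_phase_filter(mail, blacklist_senders, spam_keywords, score_fn,
                     threshold=0.5):
    """Sketch of a two-phase spam decision.

    Phase 1 uses definite information (sender blacklist, definite spam
    keywords); only when phase 1 is inconclusive does the less definite
    textual score decide.  `score_fn` and `threshold` are hypothetical
    placeholders for the paper's textual-information classifier.
    """
    # Phase 1: definite information.
    if mail["sender"] in blacklist_senders:
        return "spam"
    if any(kw in mail["body"] for kw in spam_keywords):
        return "spam"
    # Phase 2: less definite textual information.
    return "spam" if score_fn(mail["body"]) > threshold else "legitimate"
```

In a full implementation, `score_fn` would also consume features extracted from the web pages fetched via the mail's hyperlinks.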

Spam-Filtering by Identifying Automatically Generated Email Accounts (자동 생성 메일계정 인식을 통한 스팸 필터링)

  • Lee Sangho
    • Journal of KIISE:Software and Applications / v.32 no.5 / pp.378-384 / 2005
  • In this paper, we describe a novel spam-filtering method that improves the performance of conventional spam-filtering systems. Conventional systems filter e-mails by investigating the distribution of words in e-mail headers or bodies. Nowadays, spammers create e-mail accounts on web-based e-mail service sites and send e-mails as if they were not spam. Investigating the e-mail accounts of such spam, we noticed a large difference between automatically generated accounts and ordinary ones. Based on that difference, incoming e-mails are classified as spam or non-spam. To classify e-mails from account strings alone, we used decision trees, which are widely used for conventional pattern classification problems. We collected about 2.15 million account strings from e-mail service sites, and our account checker achieved an accuracy of 96.3%. Adding the checker to an existing filter system further improved filtering performance.
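A decision tree needs numeric features extracted from each account string. The sketch below shows plausible features of the kind such a classifier might branch on; the specific features are assumptions, not the paper's:

```python
def account_features(account):
    """Hypothetical features over an e-mail account string.

    A decision tree could use features like these to separate
    auto-generated accounts (e.g. 'qx7k2p91') from human-chosen
    ones (e.g. 'john.doe'); the exact feature set is an assumption.
    """
    digits = sum(ch.isdigit() for ch in account)
    letters = sum(ch.isalpha() for ch in account)
    vowels = sum(ch in "aeiou" for ch in account.lower())
    return {
        "length": len(account),
        "digit_ratio": digits / max(len(account), 1),  # random accounts mix in digits
        "vowel_ratio": vowels / max(letters, 1),       # pronounceable names have vowels
    }
```

Feature dictionaries like this would be the training rows for the decision tree learner.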

Splog Detection Using Post Structure Similarity and Daily Posting Count (포스트의 구조 유사성과 일일 발행수를 이용한 스플로그 탐지)

  • Beak, Jee-Hyun;Cho, Jung-Sik;Kim, Sung-Kwon
    • Journal of KIISE:Software and Applications / v.37 no.2 / pp.137-147 / 2010
  • A blog is a website, usually maintained by an individual, with regular entries of commentary, descriptions of events, or other material such as graphics or video. Entries are commonly displayed in reverse chronological order. Blog search engines, like web search engines, seek information for searchers on blogs, but they sometimes return unsatisfactory results, mainly due to spam blogs, or splogs. Splogs are blogs hosting spam posts with plagiarized or auto-generated content, for the sole purpose of hosting advertisements or raising the search rankings of target sites. This thesis focuses on splog detection and proposes a new detection method based on blog post structure similarity and the posting count per day. Experiments show that the proposed method performs excellently on splog detection tasks, with over 90% accuracy.
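The two signals, post structure similarity and daily posting count, can be combined as in this sketch; the tag-set Jaccard similarity and both thresholds are illustrative assumptions rather than the paper's definitions:

```python
def structure_similarity(tags_a, tags_b):
    """Jaccard similarity over the sets of HTML tags in two posts.

    An illustrative stand-in for the paper's post-structure similarity:
    auto-generated splog posts tend to share one fixed template, so
    their tag structures look nearly identical.
    """
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_splog_candidate(similarity, posts_per_day,
                       sim_threshold=0.9, rate_threshold=50):
    # Thresholds are hypothetical; the paper tunes its own cut-offs.
    return similarity >= sim_threshold and posts_per_day >= rate_threshold
```

A blog whose posts are near-duplicates in structure and that publishes at machine-like rates is flagged; either signal alone is weaker evidence.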

Design and Implementation of Web Mail Filtering Agent for Personalized Classification (개인화된 분류를 위한 웹 메일 필터링 에이전트)

  • Jeong, Ok-Ran;Cho, Dong-Sub
    • The KIPS Transactions:PartB / v.10B no.7 / pp.853-862 / 2003
  • More and more people use e-mail on a purely personal basis, and the pool of e-mail users grows daily. The volume of mail transmitted in electronic commerce is also increasing. Because of its convenience, a mass of spam mail floods in every day, and yet automated techniques for learning to filter e-mail have yet to significantly affect the e-mail market. This paper suggests a web mail filtering agent for personalized classification, which automatically manages mail by adjusting to the user. It is based on web mail, which can be accessed at any time and place without system limitations. When new mail arrives, the agent first builds personal rules from the results of observing the user; based on these personal rules, it automatically classifies mail into categories according to its content and saves the classified mail in the relevant folders, or deletes unnecessary mail and spam. To improve the system's accuracy, we applied a Bayesian algorithm with a dynamic threshold.
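A minimal sketch of a Bayesian spam score combined with a per-user dynamic threshold, assuming per-word spam likelihoods and a feedback-driven threshold update; both pieces are illustrative, not the paper's exact algorithm:

```python
import math

def spam_probability(words, spam_word_probs, prior=0.5):
    """Naive-Bayes-style spam posterior from per-word spam likelihoods.

    `spam_word_probs` maps a word to P(spam | word); unknown words are
    treated as neutral (0.5).  A generic sketch, not the paper's model.
    """
    log_odds = math.log(prior / (1 - prior))
    for w in words:
        p = spam_word_probs.get(w, 0.5)
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

def update_threshold(threshold, was_false_positive, step=0.05):
    """Hypothetical dynamic-threshold rule driven by user corrections:
    raise the cut-off after a false positive, lower it after a miss."""
    if was_false_positive:
        return min(0.95, threshold + step)
    return max(0.05, threshold - step)
```

Mail is flagged as spam when `spam_probability(...)` exceeds the user's current threshold, which drifts as the user corrects misclassifications.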

A Splog Detection System Using Support Vector Machines (지지벡터기계를 이용한 스팸 블로그(Splog) 판별 시스템)

  • Lee, Song-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.1 / pp.163-168 / 2011
  • Blogs are an easy way to publish information, engage in discussions, and form communities on the Internet. Recently, several varieties of spam blog have appeared whose purpose is to host ads or raise the PageRank of target sites. Our purpose is to develop a system that automatically detects these spam blogs (splogs) among blogs on the Web. After removing the HTML of each blog, the text is tagged by a part-of-speech (POS) tagger. Words and their POS tag information are used as feature types. We select useful features with the χ2 statistic and train an SVM on the selected features. Our system achieved an F1-measure of 90.5% on the SPLOG data set.
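The χ2 feature-selection step scores each (term, class) pair from its contingency table. A minimal sketch of the standard two-by-two χ2 statistic used for such term selection:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a (term, class) contingency table.

    n11 = docs in the class containing the term,
    n10 = docs outside the class containing the term,
    n01 = docs in the class without the term,
    n00 = docs outside the class without the term.
    Terms with the highest scores are kept as classifier features,
    as in the paper's selection step before SVM training.
    """
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0
```

Ranking all candidate terms by this score and keeping the top-k is the usual way to shrink a word/POS feature space before training the SVM.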

A Discovery System of Malicious Javascript URLs hidden in Web Source Code Files

  • Park, Hweerang;Cho, Sang-Il;Park, Jungkyu;Cho, Youngho
    • Journal of the Korea Society of Computer and Information / v.24 no.5 / pp.27-33 / 2019
  • One serious security threat is the botnet-based attack. A botnet in general consists of numerous bots: computing devices with networking capability, such as personal computers, smartphones, or tiny IoT sensor devices, that have been compromised by malicious code or attackers. Such botnets can launch various serious cyber-attacks over the network, including DDoS attacks, malware propagation, and spam e-mail distribution. To establish a botnet, attackers usually inject malicious URLs into web source code stealthily, using data hiding methods such as Javascript obfuscation techniques to avoid discovery by traditional security systems such as firewalls, IPS (Intrusion Prevention Systems), or IDS (Intrusion Detection Systems). Meanwhile, it is non-trivial in practice for software developers to manually find such malicious URLs hidden in the numerous web source files stored on web servers. In this paper, we propose a security defense system that discovers such suspicious, malicious URLs hidden in web source code, and present experimental results showing its discovery performance. In particular, in our experiments the proposed system discovered 100% of the URLs hidden by Javascript encoding obfuscation within sample web source files.
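A simplified sketch of the discovery idea: decode common Javascript string-escape obfuscations and then scan for URL patterns. Real systems, including the one proposed in the paper, handle many more obfuscation layers than shown here:

```python
import codecs
import re
from urllib.parse import unquote

URL_RE = re.compile(r"https?://[^\s'\"<>]+")

def find_hidden_urls(source):
    """Decode simple Javascript string obfuscations, then scan for URLs.

    Handles only hex escapes (\\x68\\x74...) and percent-encoding; a
    deliberately minimal sketch of the paper's broader discovery system.
    """
    decoded = source
    if "\\x" in decoded:
        # Undo \xNN hex escapes, e.g. '\x68\x74\x74\x70' -> 'http'.
        decoded = codecs.decode(decoded, "unicode_escape")
    decoded = unquote(decoded)  # undo %-encoding
    return URL_RE.findall(decoded)
```

Running such a scanner over every source file in a web root would surface candidate URLs for manual or automated review.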

An Implementation of the Spam Mail Prevention System Using Reply Message with Secret Words (비밀단어의 회신을 이용한 스팸메일 차단 시스템의 구현)

  • Ko Joo Young;Shim Jae Chang;Kim Hyun Ki
    • Journal of Korea Multimedia Society / v.8 no.1 / pp.111-118 / 2005
  • This paper describes an implementation of a spam mail prevention system that uses reply messages with secret words. When the user receives a new e-mail, the system compares the sender's address with the white-listed e-mail addresses in a database. If the sender's address is not in the white list, a reply e-mail containing secret words is sent automatically. The system also checks white-listed domains first in an intranet environment, which speeds up processing. The proposed algorithm requires only a small database and is faster than comparing against black e-mail address lists. The system is implemented using procmail, PHP, and IMAP on Linux, and the user can manage the databases on the web.
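The challenge-response flow can be sketched as a small state machine; the fixed secret word and in-memory whitelist are simplifying assumptions (the paper uses a database and procmail/PHP/IMAP machinery):

```python
class SecretWordFilter:
    """Sketch of the challenge-response flow from the paper.

    Mail from unknown senders triggers a reply containing a secret
    word; senders who echo the word back are whitelisted.  Storage is
    an in-memory set here, whereas the paper uses a small database.
    """

    def __init__(self, secret="open-sesame"):
        self.secret = secret        # hypothetical fixed secret word
        self.whitelist = set()

    def receive(self, sender, body):
        if sender in self.whitelist:
            return "deliver"
        if self.secret in body:     # correct reply to our challenge
            self.whitelist.add(sender)
            return "deliver"
        return "challenge"          # auto-send reply with the secret word
```

Because only the whitelist is consulted for known senders, the common case is a single set (or database) lookup, which is what makes the scheme fast compared with scanning blacklists.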


Indirection based Multilevel Security Model and Application of Rehabilitation Psychology Analysis System (재활심리분석시스템의 다중 우회기반 접근통제 모델 및 응용)

  • Kim, Young-Soo;Jo, Sun-Goo
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.10 / pp.2301-2308 / 2013
  • These days, rehabilitation psychology analysis systems are used over the World Wide Web in everyday life, and at the same time we face the problem of spam messages. To block spam messages, filtering or pricing systems are used, but these solutions raise other problems, such as impeded reception or availability caused by false positives, and resistance to payment. To solve these problems, we propose an Indirect Model on Message Control System (IMMCS), which controls unsolicited messages while preventing useful messages from being discarded. We design and implement the IMMCS to enhance its usefulness and availability. Tests of the IMMCS to verify its usability and efficiency gave very successful results.