• Title/Summary/Keyword: Recall and Precision


Junk-Mail Filtering by Mail Address Validation and Title-Content Weighting (메일 주소 유효성과 제목-내용 가중치 기법에 의한 스팸 메일 필터링)

  • Kang Seung-Shik
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.2
    • /
    • pp.255-263
    • /
    • 2006
  • Junk mail commonly shows an inconsistency between the addresses in the mail header and the actual recipients. In addition, users can often tell whether an email is junk or legitimate simply by looking at its title. In this paper, we apply a mail-address validation check combined with a title-content weighting method to improve the performance of a junk mail filtering system. To verify the effectiveness of the proposed method, we applied both techniques to a Naive Bayesian classifier, testing each filter individually and in combination. As a result, our method improved recall by 11.6% and precision by 2.1%, contributing to the enhancement of the junk mail filtering system.

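The entry above combines weighted title and body evidence with an address-validity check before feeding a Naive Bayesian filter. Below is a minimal, hypothetical sketch of that weighting idea in Python; the token scores, weight values, and the address check are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Illustrative per-token spam evidence, log P(token|spam) - log P(token|ham).
# In a real filter these would be estimated from a labeled training corpus.
TOKEN_LOG_RATIO = {"free": 1.2, "winner": 1.5, "meeting": -0.8, "report": -0.6}

TITLE_WEIGHT = 2.0            # assumed: title tokens count more than body tokens
BODY_WEIGHT = 1.0
ADDRESS_MISMATCH_BONUS = 1.5  # assumed extra spam evidence for a header/recipient mismatch

def spam_score(title, body, header_from, recipient_domain):
    """Combine weighted title/body token evidence with an address validity check."""
    score = 0.0
    for token in title.lower().split():
        score += TITLE_WEIGHT * TOKEN_LOG_RATIO.get(token, 0.0)
    for token in body.lower().split():
        score += BODY_WEIGHT * TOKEN_LOG_RATIO.get(token, 0.0)
    # Junk mail often shows a mismatch between the header address and the recipient's domain.
    if not header_from.endswith("@" + recipient_domain):
        score += ADDRESS_MISMATCH_BONUS
    return score

if __name__ == "__main__":
    s = spam_score("free winner prize", "claim your prize now",
                   "promo@example.org", "mycompany.com")
    print("spam" if s > 0 else "ham", round(s, 2))
```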

Modeling and Evaluating Information Diffusion for Spam Detection in Micro-blogging Networks

  • Chen, Kan;Zhu, Peidong;Chen, Liang;Xiong, Yueshan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3005-3027
    • /
    • 2015
  • Spam has become one of the top threats in micro-blogging networks, manifesting as rumor spreading, advertisement abuse, and malware distribution. With the increasing popularity of micro-blogging, these problems will only worsen. Prior detection tools are either designed for specific types of spam or not robust enough; spammers can easily escape detection by adjusting their behavior. In this paper, we present a novel model to quantitatively evaluate information diffusion in micro-blogging networks. Under this model, we find that spam posts differ widely from non-spam posts. First, the propagation of non-spam posts mostly results from their followers, whereas that of spam posts comes mainly from strangers. Second, non-spam posts last relatively longer than spam posts. Moreover, non-spam posts receive their first reposts/comments much sooner than spam posts. With the features defined in our model, we propose an RBF-based approach to detect spam. Unlike previous work, in which features are extracted from individual profiles or contents, the diffusion features are determined not by any single user but by the crowd. Our method is therefore more robust, because changes in any single user's behavior do not affect its effectiveness. Moreover, although spam varies in type and form, it propagates in the same way, so our method is effective for all types of spam. Using real data crawled from the leading micro-blogging services of China, we evaluate the effectiveness of our model. The experimental results show that our model achieves high precision and recall.
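
The paper above classifies posts from diffusion features (share of reposts from strangers, propagation lifetime, time to first repost) with an RBF-based classifier. The sketch below is an illustrative stand-in using scikit-learn's RBF-kernel SVM; the feature values and labels are invented for demonstration and do not come from the paper's dataset.

```python
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

# Each row: [fraction of reposts from strangers, propagation lifetime (hours),
#            minutes until first repost/comment] -- invented demo values.
X_train = [
    [0.9, 2.0, 200.0],   # spam-like: strangers repost, dies fast, slow first repost
    [0.8, 1.5, 180.0],
    [0.2, 30.0, 5.0],    # non-spam-like: followers repost, lives long, quick first repost
    [0.1, 48.0, 3.0],
]
y_train = [1, 1, 0, 0]   # 1 = spam, 0 = non-spam

clf = SVC(kernel="rbf", gamma="scale")  # RBF classifier over diffusion features
clf.fit(X_train, y_train)

X_test = [[0.85, 1.0, 150.0], [0.15, 40.0, 4.0]]
y_test = [1, 0]
y_pred = clf.predict(X_test)
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
```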

Photo Retrieval System using Kinect Sensor in Smart TV Environment (스마트 TV 환경에서 키넥트 센서를 이용한 사진 검색 시스템)

  • Choi, Ju Choel
    • Journal of Digital Convergence
    • /
    • v.12 no.3
    • /
    • pp.255-261
    • /
    • 2014
  • Advances in digital device technology, such as digital cameras, smartphones, and tablets, have made it convenient for people to take pictures throughout their lives. Photo data spreads rapidly through social networks, producing an excessive amount of data on the internet. Photo retrieval is categorized into three types: keyword-based search, example-based search, and visual query-based search. The multimedia search methods commonly implemented on Smart TVs adapt earlier methods optimized for the PC environment, which leaves some of their features unsuitable for the Smart TV setting. This paper proposes a novel visual query-based photo retrieval method for the Smart TV environment using a motion-sensing input device, the Kinect sensor. We detect hand gestures with the Kinect sensor and use this information to mimic the control functions of a mouse. The average precision and recall of the proposed system are 81% and 80%, respectively, with the threshold value set to 0.7.
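
The system above reports precision and recall at a similarity threshold of 0.7. A minimal sketch of how such threshold-based precision and recall can be computed is shown below; the similarity scores and relevance labels are made-up examples, not the paper's data.

```python
def precision_recall_at_threshold(scores, relevant, threshold=0.7):
    """scores: similarity of each candidate photo to the visual query.
    relevant: ground-truth relevance flags for the same candidates."""
    retrieved = [rel for s, rel in zip(scores, relevant) if s >= threshold]
    tp = sum(retrieved)                       # relevant photos actually returned
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / sum(relevant) if sum(relevant) else 0.0
    return precision, recall

# Made-up example: five candidate photos with similarity scores and relevance flags.
scores = [0.95, 0.82, 0.71, 0.65, 0.40]
relevant = [1, 1, 0, 1, 0]
p, r = precision_recall_at_threshold(scores, relevant, threshold=0.7)
print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.67, recall=0.67
```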

Construction and Evaluation of a Sentiment Dictionary Using a Web Corpus Collected from Game Domain (게임 도메인 웹 코퍼스를 이용한 감성사전 구축 및 평가)

  • Jeong, Woo-Young;Bae, Byung-Chull;Cho, Sung Hyun;Kang, Shin-Jin
    • Journal of Korea Game Society
    • /
    • v.18 no.5
    • /
    • pp.113-122
    • /
    • 2018
  • This paper describes an approach to building and evaluating a sentiment dictionary using a Web corpus in the game domain. To build the sentiment dictionary, we collected vocabulary from game-related web documents on a domestic portal site using the Twitter Korean Processor. From the collected vocabulary, we selected words whose POS tags are verbs or adjectives and assigned a sentiment score to each selected word. To evaluate the constructed sentiment dictionary, we calculated the F1 score from precision and recall, using Korean-SWN, which is based on the English SentiWordNet (SWN). The evaluation results show average F1 scores of 0.85 for adjectives and 0.77 for verbs.
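
The dictionary above is evaluated with the F1 score computed from precision and recall against Korean-SWN. The snippet below shows the standard F1 computation used for such an evaluation; the counts are invented placeholders, not the paper's results.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical agreement counts against a reference lexicon such as Korean-SWN.
tp, fp, fn = 85, 15, 15   # invented numbers for illustration only
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1(precision, recall):.2f}")
```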

Comparison of term weighting schemes for document classification (문서 분류를 위한 용어 가중치 기법 비교)

  • Jeong, Ho Young;Shin, Sang Min;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.2
    • /
    • pp.265-276
    • /
    • 2019
  • The document-term frequency matrix is a common representation of text objects in text mining. In this study, we introduce TF-IDF (term frequency-inverse document frequency), a traditional term weighting scheme applied to the document-term frequency matrix and used for text classification. In addition, we introduce and compare the more recent TF-IDF-ICSDF and TF-IGM schemes. This study also provides a method to extract keywords that enhance the quality of text classification. Based on the extracted keywords, we applied a support vector machine for text classification. To compare the performance of the term weighting schemes, we used performance metrics such as precision, recall, and F1-score. The results show that the TF-IGM scheme yielded the highest performance metrics and was optimal for text classification.
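
The study above weights a document-term matrix and classifies documents with a support vector machine. A minimal baseline sketch of that pipeline (TF-IDF weighting followed by a linear SVM, using scikit-learn) is shown below; the tiny corpus and labels are invented, and the TF-IDF-ICSDF and TF-IGM variants are not implemented here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_score, recall_score, f1_score

# Tiny invented corpus: label 1 = sports, 0 = finance (for illustration only).
docs = ["the team won the match", "goal scored in the final match",
        "stock prices fell sharply", "the bank raised interest rates"]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()              # TF-IDF weighted document-term matrix
X = vectorizer.fit_transform(docs)
clf = LinearSVC().fit(X, labels)            # SVM classifier on the weighted matrix

test_docs = ["the final match result", "interest rates and stock prices"]
test_labels = [1, 0]
pred = clf.predict(vectorizer.transform(test_docs))
for name, metric in [("precision", precision_score), ("recall", recall_score), ("F1", f1_score)]:
    print(name, metric(test_labels, pred))
```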

Drug-Drug Interaction Prediction Using Krill Herd Algorithm Based on Deep Learning Method

  • Al-Marghilani, Abdulsamad
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.6
    • /
    • pp.319-328
    • /
    • 2021
  • Parallel administration of multiple drugs increases the risk of Drug-Drug Interaction (DDI), because one drug may affect the activity of another. DDI can have negative or positive impacts on therapeutic outcomes, so there is a need to discover DDIs to enhance the safety of drug consumption. Although several DDI systems exist to predict interactions, it is becoming impossible to keep pace with the rapidly growing volume of biomedical text. Most existing DDI systems address classification issues and rely on handcrafted features, some of which depend on domain-specific tools. The objective of this paper is to predict DDIs so as to avoid adverse effects caused by the consumed drugs. To estimate similarities among drugs, a drug-pair similarity calculation is performed. The best weights are obtained with the support of the Krill Herd Algorithm (KHA), and an LSTM using the weights obtained from KHA makes the final DDI prediction. Our methodology (LSTM-KHA) thus detects DDIs by measuring drug similarities through the drug-pair similarity calculation and by using KHA to find the optimal weights used by the LSTM. Experiments were conducted on three datasets, DS1 (CYP), DS2 (NCYP), and DS3, taken from the DrugBank database, and the proposed work was evaluated in terms of accuracy, recall, precision, F-measure, AUPR, AUC, and AUROC. The experimental results show that the proposed method outperforms existing methods for predicting DDIs, and LSTM-KHA produces reasonable performance metrics compared with existing DDI prediction models.
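
The abstract above builds on drug-pair similarity as input to an LSTM whose weights are tuned by the Krill Herd Algorithm. The sketch below illustrates only the drug-pair similarity step, using a Jaccard similarity over hypothetical feature sets; the LSTM and KHA components are not shown, and the drug features are invented for demonstration.

```python
# Hypothetical per-drug feature sets (e.g., targets or metabolizing enzymes).
DRUG_FEATURES = {
    "drugA": {"CYP3A4", "CYP2D6", "P-gp"},
    "drugB": {"CYP3A4", "OATP1B1"},
    "drugC": {"CYP2C9"},
}

def pair_similarity(d1, d2):
    """Jaccard similarity between two drugs' feature sets,
    a simple stand-in for a drug-pair similarity calculation."""
    a, b = DRUG_FEATURES[d1], DRUG_FEATURES[d2]
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    for pair in [("drugA", "drugB"), ("drugA", "drugC")]:
        print(pair, round(pair_similarity(*pair), 2))
```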

A Study on Constructing the Ontology of LIS Journal (문헌정보학 학술지를 대상으로 한 온톨로지 구축에 관한 연구)

  • Noh, Young-Hee
    • Journal of the Korean Society for Information Management
    • /
    • v.28 no.2
    • /
    • pp.177-193
    • /
    • 2011
  • This study constructed an ontology of journal articles and evaluated its performance. The performance of the triple-structure ontology was also compared with a knowledge base built as an inverted index file for a simple keyword search engine. The coverage was three years of articles published in the Journal of the Korean Society for Information Management, from 2007 to 2009. Protege was used to construct the ontology, while an inverted index file was used for the performance comparison. The concept ontology was built manually and the bibliography ontology was built automatically, producing an OWL concept ontology and an OWL bibliography ontology, respectively. This study compared the performance of the ontology knowledge base, queried with the Jena search engine, against the inverted index file queried with the Lucene search engine. As a result, Lucene showed a higher precision rate, while Jena showed a higher recall rate.
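
The study above contrasts structured (triple) retrieval against plain keyword retrieval. The sketch below illustrates that contrast in Python, using rdflib as a stand-in for the Jena/SPARQL side and a toy inverted index for the Lucene side; the namespace, triples, and documents are hypothetical examples, not the study's data or tools.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# A tiny bibliographic ontology, standing in for the study's OWL ontologies.
EX = Namespace("http://example.org/bib#")   # hypothetical namespace
g = Graph()
article = URIRef("http://example.org/bib#article1")
g.add((article, EX.title, Literal("Ontology-based retrieval of LIS articles")))
g.add((article, EX.keyword, Literal("ontology")))

# Structured (triple) search, analogous to querying the ontology with SPARQL.
results = g.query(
    """SELECT ?a WHERE { ?a <http://example.org/bib#keyword> "ontology" . }"""
)
print("triple search hits:", [str(row.a) for row in results])

# Simple keyword search over an inverted index, analogous to the keyword-engine baseline.
inverted_index = {}
for doc_id, text in {"article1": "ontology based retrieval of LIS articles"}.items():
    for token in text.lower().split():
        inverted_index.setdefault(token, set()).add(doc_id)
print("keyword search hits:", inverted_index.get("ontology", set()))
```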

An Efficient Algorithm for Detecting Tables in HTML Documents (HTML 문서의 테이블 식별을 위한 효율적인 알고리즘)

  • Kim Yeon-Seok;Lee Kyong-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1339-1353
    • /
    • 2004
  • <TABLE> tags in HTML documents are widely used both for formatting the layout of Web documents and for describing genuine tables with relational information. As a prerequisite for information extraction from the Web, this paper presents an efficient method for sophisticated table detection. The proposed method consists of two phases: preprocessing and attribute-value relation extraction. In the preprocessing phase, where genuine or non-genuine tables are filtered out, appropriate rules are devised based on a careful examination of the general characteristics of <TABLE> tags. The remaining tables are detected in the attribute-value relation extraction phase. Specifically, a value area is extracted and checked for syntactic coherency. Furthermore, the method looks for semantic coherency between the attribute area and the value area of a table for which the syntactic coherency check is inappropriate. Experimental results with 11,477 <TABLE> tags from 1,393 HTML documents show that the method performs better than previous work, with an average precision of 97.54% and recall of 99.22%.

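The paper above filters layout tables from genuine relational tables with rules over <TABLE> tag characteristics. Below is a minimal, hypothetical rule sketch using Python's standard html.parser; the specific rule (row count plus header cells) is an invented toy heuristic, not the paper's rule set.

```python
from html.parser import HTMLParser

class TableStats(HTMLParser):
    """Collects simple <TABLE> statistics for rule-based genuine-table filtering."""
    def __init__(self):
        super().__init__()
        self.in_table = False
        self.rows = 0
        self.header_cells = 0
    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.in_table = True
        elif self.in_table and tag == "tr":
            self.rows += 1
        elif self.in_table and tag == "th":
            self.header_cells += 1
    def handle_endtag(self, tag):
        if tag == "table":
            self.in_table = False

def looks_like_genuine_table(html):
    """Toy rule: a table with several rows and at least one header cell is
    treated as a genuine (relational) table rather than layout markup."""
    stats = TableStats()
    stats.feed(html)
    return stats.rows >= 2 and stats.header_cells >= 1

layout = "<table><tr><td><img src='banner.png'></td></tr></table>"
genuine = "<table><tr><th>Name</th><th>Age</th></tr><tr><td>Kim</td><td>30</td></tr></table>"
print(looks_like_genuine_table(layout), looks_like_genuine_table(genuine))
```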

XML Schema Matching based on Ontology Update for the Transformation of XML Documents (XML 문서의 변환을 위한 온톨로지 갱신 기반 XML 스키마 매칭)

  • Lee, Kyong-Ho;Lee, Jun-Seung
    • Journal of KIISE:Databases
    • /
    • v.33 no.7
    • /
    • pp.727-740
    • /
    • 2006
  • Schema matching is important as a prerequisite to the transformation of XML documents. This paper presents a schema matching method for the transformation of XML documents. The proposed method consists of two steps: preliminary matching relationships between leaf nodes in the two XML schemas are computed based on the proposed ontology and a leaf-node similarity, and final matchings are extracted based on a proposed path similarity. In particular, for sophisticated schema matching, the proposed ontology is incrementally updated through users' feedback. Furthermore, since the ontology can describe various relationships between concepts, the proposed method can compute complex matchings as well as simple matchings. Experimental results with schemas used in various domains show that the proposed method is superior to previous work, with a precision of 97% and a recall of 83% on average. Furthermore, the dynamic ontology grew by 9% overall.
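
The method above scores candidate matches between leaf nodes of two schemas using a path similarity. The sketch below illustrates one simple way such a similarity could look, as token overlap between path steps; the paths are hypothetical and the measure is a simplified stand-in, not the paper's formula.

```python
def path_similarity(path_a, path_b):
    """Token-overlap (Jaccard) similarity between two leaf-node paths,
    a simplified stand-in for a path similarity measure."""
    steps_a = set(path_a.lower().strip("/").split("/"))
    steps_b = set(path_b.lower().strip("/").split("/"))
    if not steps_a or not steps_b:
        return 0.0
    return len(steps_a & steps_b) / len(steps_a | steps_b)

# Hypothetical leaf-node paths from two XML schemas to be matched.
source = "/order/customer/name"
targets = ["/purchase/client/name", "/order/customer/fullName", "/order/item/price"]
for t in targets:
    print(t, round(path_similarity(source, t), 2))
```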

Automatic Construction of Alternative Word Candidates to Improve Patent Information Search Quality (특허 정보 검색 품질 향상을 위한 대체어 후보 자동 생성 방법)

  • Baik, Jong-Bum;Kim, Seong-Min;Lee, Soo-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.861-873
    • /
    • 2009
  • There are many reasons why information retrieval fails to return appropriate results. Allomorphs are one cause of search failure due to keyword mismatch. This research proposes a method to construct alternative word candidates automatically in order to minimize search failures caused by keyword mismatch. Assuming that two words have similar meanings if they share similar co-occurring words, the proposed method uses the concept of concentration, association word sets, cosine similarity between association word sets, and a filtering technique based on confidence. The performance of the proposed method is evaluated using a manually extracted list of alternative words. Evaluation results show that the proposed method outperforms the context-window overlapping method in both precision and recall.
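
The approach above judges two words as alternatives when their association (co-occurrence) word sets are similar under cosine similarity. A minimal sketch of that comparison is shown below; the window size, corpus, and word pair are invented for illustration and do not reflect the paper's concentration or confidence-filtering steps.

```python
import math
from collections import Counter

def association_vector(word, corpus, window=2):
    """Co-occurrence counts of words appearing near `word` within a window."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Tiny invented corpus; "cellphone" and "mobile" share co-occurring words.
corpus = ["the cellphone battery lasts long", "the mobile battery lasts long",
          "patent filed for cellphone screen", "patent filed for mobile screen"]
sim = cosine(association_vector("cellphone", corpus), association_vector("mobile", corpus))
print(round(sim, 2))  # high similarity suggests "mobile" as an alternative word candidate
```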