• Title/Summary/Keyword: Supervised learning methods


Malicious URL Detection by Visual Characteristics with Machine Learning: Roles of HTTPS (시각적 특징과 머신 러닝으로 악성 URL 구분: HTTPS의 역할)

  • Sung-Won HONG;Min-Soo KANG
    • Journal of Korea Artificial Intelligence Association / v.1 no.2 / pp.1-6 / 2023
  • In this paper, we present a new method for classifying malicious URLs, aimed at reducing the learning difficulties caused by unfamiliar and difficult information-security terminology. The study extracts only visually distinguishable features within the URL structure, compares them across supervised learning algorithms, and then compares the feature contribution values of the best-performing algorithm to identify the features with the greatest impact on classifying malicious URLs. As research data, we used a Kaggle dataset of 7,046 malicious URLs and 7,046 normal URLs. Among the three supervised learning algorithms tested (Decision Tree, Support Vector Machine, and Logistic Regression), the Decision Tree showed the best performance, with 83% accuracy, an 83.1% F1-score, and 83.6% recall. From the Decision Tree's feature contributions, the use of HTTPS had the highest contribution value among the visually distinguishable features (HTTPS use, subdomain, and prefix/suffix). Whereas unfamiliar terminology has made learning difficult so far, this study provides an intuitive judgment method that requires no explanation of the terms and demonstrates its usefulness in the field of malicious URL detection.
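The abstract's pipeline, a decision tree over three binary "visual" URL features followed by a feature-contribution read-out, can be sketched as below. The data here is synthetic (the Kaggle set is not reproduced), and the label rule deliberately makes missing HTTPS the strongest malicious signal, mirroring the paper's finding:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Three visually distinguishable features, as in the paper's setup.
uses_https = rng.integers(0, 2, n)
has_subdomain = rng.integers(0, 2, n)
has_prefix_suffix = rng.integers(0, 2, n)
# Synthetic label: lack of HTTPS is made the dominant malicious signal.
p_mal = 0.15 + 0.55 * (1 - uses_https) + 0.1 * has_prefix_suffix
y = (rng.random(n) < p_mal).astype(int)
X = np.column_stack([uses_https, has_subdomain, has_prefix_suffix])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("feature contributions:", dict(zip(
    ["uses_https", "has_subdomain", "has_prefix_suffix"],
    clf.feature_importances_)))
```

With real URL data, the same `feature_importances_` attribute yields the contribution values the paper compares.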

Wrapper Generation for Collecting Comparative Shopping Information

  • Shin, Ju-Ri;Sohn, Bong-Ki;Lee, Keon-Myung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.127-132 / 2003
  • This paper proposes a wrapper generation method for collecting comparative shopping information from various Internet shopping malls. The proposed method is a supervised learning method that learns wrappers from sample web pages along with information locations designated by administrators. It generates wrappers expressed as generalized tag sequences together with frame-filling procedures for semi-structured web pages. The paper also presents how to use the learned wrappers and describes a prototype system that implements the proposed ideas and methods.
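A minimal illustration of the tag-sequence idea (an assumed simplification, not the paper's algorithm): from sample pages sharing one template plus administrator-designated target values, keep the common tag sequence preceding every target, then reuse it as a wrapper on a new page:

```python
import re

def learn_wrapper(pages, targets):
    """Keep the longest common tag sequence preceding every designated target."""
    prefixes = []
    for page, target in zip(pages, targets):
        before = page[:page.index(target)]
        prefixes.append(re.findall(r"<[^>]+>", before))
    # Generalize: common suffix of the tag sequences across sample pages.
    common = []
    for tags in zip(*(reversed(p) for p in prefixes)):
        if len(set(tags)) == 1:
            common.insert(0, tags[0])
        else:
            break
    return common

def apply_wrapper(wrapper, page):
    """Extract the text that follows the learned tag sequence."""
    pattern = "".join(re.escape(t) for t in wrapper) + r"([^<]+)"
    m = re.search(pattern, page)
    return m.group(1).strip() if m else None

pages = [
    "<html><body><table><tr><td class='price'>12,000</td></tr></table></body></html>",
    "<html><body><table><tr><td class='price'>9,900</td></tr></table></body></html>",
]
w = learn_wrapper(pages, ["12,000", "9,900"])
new_page = "<html><body><table><tr><td class='price'>7,500</td></tr></table></body></html>"
print(apply_wrapper(w, new_page))  # extracts the price field
```

The paper's wrappers additionally include frame-filling procedures for multi-field records; this sketch only shows the single-field case.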

Automatic Generation of Information Extraction Rules Through User-interface Agents (사용자 인터페이스 에이전트를 통한 정보추출 규칙의 자동 생성)

  • 김용기;양재영;최중민
    • Journal of KIISE: Software and Applications / v.31 no.4 / pp.447-456 / 2004
  • Information extraction is the process of recognizing and fetching particular information fragments from a document. In order to extract information uniformly from many heterogeneous information sources, it is necessary to produce information extraction rules, called a wrapper, for each source. Previous methods of information extraction can be categorized into manual and automatic wrapper generation. In the manual method, the wrapper is generated by a human expert who analyzes documents and writes rules; its precision is very high, but it reveals problems in scalability and efficiency. In the automatic method, an agent program analyzes a set of example documents and produces a wrapper through learning. Although very scalable, this method has difficulty generating correct rules per se, and the generated rules are sometimes unreliable. This paper combines the manual and automatic methods by proposing a new method of learning information extraction rules. We adopt a supervised learning scheme in which a user-interface agent obtains information from the user about what to extract from a document, and XML-based information extraction rules are then generated through learning from these inputs. The interface agent is used not only to generate new extraction rules but also to modify and extend existing ones, enhancing the precision and recall of the extraction system. A series of experiments testing the system produced very promising results. We expect that our system can be applied to practical systems such as information-mediator agents.
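A hypothetical miniature of the supervised scheme: from one user-designated example value, derive an extraction rule (here just an element path, far simpler than the paper's XML-based rules) and reuse it on a new document of the same structure:

```python
import xml.etree.ElementTree as ET

def derive_rule(doc_xml, example_value):
    """From a user-highlighted value, record the tag path that locates it."""
    root = ET.fromstring(doc_xml)
    parent_map = {c: p for p in root.iter() for c in p}
    for elem in root.iter():
        if (elem.text or "").strip() == example_value:
            path = []
            node = elem
            while node is not root:
                path.append(node.tag)
                node = parent_map[node]
            return "/".join(reversed(path))
    return None

def apply_rule(doc_xml, rule):
    """Apply the learned path rule to a structurally similar document."""
    return ET.fromstring(doc_xml).find(rule).text.strip()

train = "<page><item><name>Camera</name><price>199</price></item></page>"
rule = derive_rule(train, "199")  # the user highlighted "199"
print("rule:", rule)
print("extracted:", apply_rule(
    "<page><item><name>Phone</name><price>650</price></item></page>", rule))
```

The paper's agent also supports modifying and extending existing rules from further user feedback, which this one-shot sketch omits.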

A Comparison of Classification Methods for Credit Card Approval Using R (R의 분류방법을 이용한 신용카드 승인 분석 비교)

  • Song, Jong-Woo
    • Journal of Korean Society for Quality Management / v.36 no.1 / pp.72-79 / 2008
  • The policy for credit card approval/disapproval is based on the applicant's personal and financial information. In this paper, we analyze two credit card approval datasets with several classification methods and identify which variables are important factors in deciding the approval of a credit card. Our main tool is R, an open-source statistical programming environment freely available from http://www.r-project.org. It has recently become popular because of its flexibility and the many packages (libraries) contributed by R users around the world. For comparison, we use the most widely used methods: LDA/QDA, logistic regression, CART (Classification and Regression Trees), neural networks, and SVM (Support Vector Machines).
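The paper works in R; a rough Python/scikit-learn analogue of the same comparison is sketched below on a synthetic approval-style dataset (the paper's data is not reproduced here), cross-validating each of the listed methods:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for applicant features and approve/decline labels.
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_redundant=0, random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "Logistic": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(random_state=0),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                               random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

On real approval data, variable importance (e.g. from CART) would identify the decisive applicant attributes the paper discusses.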

SkelGAN: A Font Image Skeletonization Method

  • Ko, Debbie Honghee;Hassan, Ammar Ul;Majeed, Saima;Choi, Jaeyoung
    • Journal of Information Processing Systems / v.17 no.1 / pp.1-13 / 2021
  • In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, in contrast with the state-of-the-art methods that use mathematical algorithms. Several studies have been concerned with skeletonization, but few have utilized deep learning. Further, no study has considered generative models based on deep neural networks for skeletonizing font characters, which are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters with SkelGAN, our end-to-end deep adversarial network. The proposed skeleton generator proves superior to all well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the dominance of our method over the state-of-the-art supervised image-to-image translation method in the font character skeletonization task.

A concise overview of principal support vector machines and its generalization

  • Jungmin Shin;Seung Jun Shin
    • Communications for Statistical Applications and Methods / v.31 no.2 / pp.235-246 / 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach for both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimizations akin to popular supervised learning methods, such as the support vector machine, logistic regression, and quantile regression, to name a few. This makes the PM straightforward to handle and extend in both theoretical and computational aspects, as we show throughout this article.
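The "sequence of convex optimizations" can be illustrated with a rough sketch of the linear PSVM idea: dichotomize the response at several cutpoints, fit a linear SVM to each dichotomy, and read the SDR direction off the span of the fitted normal vectors. The data-generating model below is an assumption for illustration, not an example from the article:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.standard_normal((n, p))
beta = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)  # true SDR direction
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)        # single-index model

coefs = []
for c in np.quantile(y, [0.25, 0.5, 0.75]):   # slice the response
    svm = LinearSVC(C=1.0, max_iter=10000).fit(X, (y > c).astype(int))
    coefs.append(svm.coef_.ravel())
C = np.array(coefs)
M = C.T @ C                                   # candidate matrix
_, vecs = np.linalg.eigh(M)
b_hat = vecs[:, -1]                           # leading eigenvector
print("estimated direction:", np.round(b_hat / np.sign(b_hat[0]), 2))
```

Each SVM normal vector approximately lies in the central subspace, so the leading eigenvector of `M` recovers the direction of `beta` up to sign.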

Comparative Study of Tokenizer Based on Learning for Sentiment Analysis (고객 감성 분석을 위한 학습 기반 토크나이저 비교 연구)

  • Kim, Wonjoon
    • Journal of Korean Society for Quality Management / v.48 no.3 / pp.421-431 / 2020
  • Purpose: The purpose of this study is to compare and analyze tokenizers for natural language processing in customer sentiment analysis. Methods: In this study, the supervised learning-based tokenizer Mecab-Ko and the unsupervised learning-based tokenizer SentencePiece were compared. Three algorithms, Naïve Bayes, k-Nearest Neighbor, and Decision Tree, were selected to compare the performance of each tokenizer, using three metrics: accuracy, precision, and recall. Results: Through performance evaluation and verification, it was confirmed that SentencePiece shows better classification performance than Mecab-Ko. To confirm the robustness of the derived results, independent t-tests were conducted on the evaluation results for the two tokenizers. The classification performance of the SentencePiece tokenizer was high with the k-Nearest Neighbor and Decision Tree algorithms, and the Decision Tree showed slightly higher accuracy among the three classification algorithms. Conclusion: The SentencePiece tokenizer can be used to classify and interpret customer sentiment from online reviews in Korean more accurately. In addition, it appears possible to assign a specific meaning to short words or jargon that users often use when evaluating products but that are not defined in advance.
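The study's experimental grid (two tokenizations, three classifiers) can be sketched schematically. Word-level analysis stands in for Mecab-Ko and character n-grams stand in for SentencePiece subwords; real Korean reviews and the actual tokenizers would replace these toy pieces:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy sentiment corpus (1 = positive, 0 = negative).
texts = ["great quality fast delivery", "terrible quality broke fast",
         "love it works great", "awful broke on arrival",
         "excellent product love it", "bad product terrible service"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

tokenizers = {
    "word (Mecab-Ko stand-in)": CountVectorizer(analyzer="word"),
    "subword (SentencePiece stand-in)": CountVectorizer(analyzer="char_wb",
                                                        ngram_range=(2, 4)),
}
classifiers = {
    "NaiveBayes": MultinomialNB(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for tname, vec in tokenizers.items():
    for cname, clf in classifiers.items():
        acc = cross_val_score(make_pipeline(vec, clf), texts, labels, cv=5).mean()
        print(f"{tname} + {cname}: {acc:.2f}")
```

The paper's comparison further reports precision and recall and applies independent t-tests across repeated runs, which this sketch omits.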

Learning Discriminative Fisher Kernel for Image Retrieval

  • Wang, Bin;Li, Xiong;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.522-538 / 2013
  • Content-based image retrieval has become an increasingly important research topic because of its wide applications. It is highly challenging when facing a large-scale database with large variance. Retrieval systems rely on a key component: predefined or learned similarity measures over images. We note that similarity measures can potentially be improved if data distribution information is exploited in a more sophisticated way. In this paper, we propose a similarity measure learning approach for image retrieval. The similarity measure, the so-called Fisher kernel, is derived from the probabilistic distribution of images and is a function of the observed data, hidden variables, and model parameters, where the hidden variables encode high-level information that is powerful for discrimination but was not exploited by previous methods. We further propose a discriminative learning method for the similarity measure, i.e., encouraging the learned similarity to take a large value for a pair of images with the same label and a small value for a pair of images with distinct labels. The learned similarity measure, fully exploiting the data distribution, is well adapted to the dataset and improves the retrieval system. We evaluate the proposed method on the Corel-1000, Corel5k, Caltech101 and MIRFlickr 25,000 databases. The results show the competitive performance of the proposed method.
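The Fisher-kernel construction itself can be shown under a deliberately simple model, a single isotropic Gaussian with mean mu, whereas the paper uses richer latent-variable models and learns the kernel discriminatively. This sketch only shows the score-function step K(x, x') = U(x)ᵀ F⁻¹ U(x'):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4)) + 2.0   # stand-in image feature vectors
mu = X.mean(axis=0)                       # fitted model parameter

def fisher_score(x, mu):
    # Gradient of log N(x | mu, I) with respect to mu is simply x - mu.
    return x - mu

U = np.array([fisher_score(x, mu) for x in X])
F = U.T @ U / len(X)                      # empirical Fisher information
F_inv = np.linalg.inv(F)
K = U @ F_inv @ U.T                       # Fisher kernel Gram matrix
print("Gram matrix shape:", K.shape, "symmetric:", np.allclose(K, K.T))
```

In the paper, the hidden variables enter through the model's likelihood, so the score vector carries the high-level information that plain feature distances miss.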

Wifi Fingerprint Calibration Using Semi-Supervised Self Organizing Map (반지도식 자기조직화지도를 이용한 wifi fingerprint 보정 방법)

  • Thai, Quang Tung;Chung, Ki-Sook;Keum, Changsup
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.2 / pp.536-544 / 2017
  • Wireless RSSI (Received Signal Strength Indication) fingerprinting is one of the most popular methods for indoor positioning, as it provides reasonable accuracy while exploiting existing wireless infrastructure. However, the process of radio map construction (aka fingerprint calibration) is laborious and time consuming, as precise physical coordinates and wireless signals have to be measured at multiple locations of the target environment. This paper proposes a method to build the map from a combination of RSSIs without location information, collected in a crowdsourcing fashion, and a handful of labeled RSSIs, using a semi-supervised self-organizing map learning algorithm. Experiments on simulated data show promising results, as the method is able to recover the full map effectively with only 1% of the RSSI samples from the fingerprint database.
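A toy sketch of the idea (assumed for illustration, not the paper's exact algorithm): train a small self-organizing map on unlabeled RSSI vectors, then propagate the few labeled coordinates onto the map so unlabeled fingerprints inherit a location from their best-matching unit:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated environment: RSSI decays log-linearly with distance to 3 APs.
aps = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
def rssi(pos):
    d = np.linalg.norm(aps - pos, axis=1)
    return -40 - 20 * np.log10(d + 1) + rng.normal(0, 1, 3)

positions = rng.uniform(0, 10, (300, 2))
signals = np.array([rssi(p) for p in positions])
labeled = rng.choice(300, 15, replace=False)   # only a handful labeled

# Train a 6x6 SOM on the signal vectors (unsupervised step).
grid = np.array([(i, j) for i in range(6) for j in range(6)], float)
W = signals[rng.choice(300, 36)]               # weights initialized from data
for t in range(3000):
    x = signals[rng.integers(300)]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))
    sigma = 2.0 * np.exp(-t / 1500); lr = 0.5 * np.exp(-t / 1500)
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

# Semi-supervised step: attach coordinates to units via the labeled samples.
unit_pos = np.full((36, 2), np.nan)
for i in labeled:
    unit_pos[np.argmin(((W - signals[i]) ** 2).sum(axis=1))] = positions[i]
known = ~np.isnan(unit_pos[:, 0])
bmus = np.argmin(((W[None] - signals[:, None]) ** 2).sum(axis=2), axis=1)
est = np.array([unit_pos[b] if known[b]
                else unit_pos[known][np.argmin(((grid[known] - grid[b]) ** 2).sum(axis=1))]
                for b in bmus])
err = np.linalg.norm(est - positions, axis=1).mean()
print(f"mean localization error: {err:.2f} (grid units)")
```

Because the SOM grid preserves the topology of the signal space, unlabeled units can borrow the position of the nearest labeled unit, which is what lets a small labeled fraction calibrate the whole map.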

Soft Sensor Design Using Image Analysis and its Industrial Applications Part 2. Automatic Quality Classification of Engineered Stone Countertops (화상분석을 이용한 소프트 센서의 설계와 산업응용사례 2. 인조대리석의 품질 자동 분류)

  • Ryu, Jun-Hyung;Liu, J. Jay
    • Korean Chemical Engineering Research / v.48 no.4 / pp.483-489 / 2010
  • An image analysis-based soft sensor is designed and applied to automatic quality classification of product appearance with color-textural characteristics. In this work, multiresolutional multivariate image analysis (MR-MIA) is used to analyze product images with color as well as texture. Fisher's discriminant analysis (FDA) is used as a supervised learning method for automatic classification. The use of FDA, one of the latent variable methods, enables us not only to classify product appearance into distinct classes, but also to numerically and consistently estimate appearance with continuous variations and to analyze its characteristics. This approach is successfully applied to automatic quality classification of intermediate and final products in the industrial manufacturing of engineered stone countertops.
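The classification step can be sketched with Fisher's discriminant analysis (scikit-learn's LDA implementation) on synthetic stand-ins for the MR-MIA color-texture features; the continuous discriminant scores are what allow the numeric, graded estimate of appearance the abstract mentions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
grades = 3
# Synthetic 8-dim feature vectors, one cluster per quality grade.
X = np.vstack([rng.normal(g, 1.0, (50, 8)) for g in range(grades)])
y = np.repeat(np.arange(grades), 50)

fda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
scores = fda.transform(X)      # continuous latent discriminant coordinates
print("training accuracy:", fda.score(X, y))
print("latent scores shape:", scores.shape)
```

Discrete class predictions come from `predict`, while positions along the discriminant axes give the consistent continuous appearance estimate.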