• Title/Summary/Keyword: 지지벡터기계학습 (support vector machine learning)


Learning and Performance Comparison of Multi-class Classification Problems based on Support Vector Machine (지지벡터기계를 이용한 다중 분류 문제의 학습과 성능 비교)

  • Hwang, Doo-Sung
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.7
    • /
    • pp.1035-1042
    • /
    • 2008
  • The support vector machine, as a binary classifier, has been shown in a variety of experiments to outperform other classifiers on binary classification problems. Although its theory rests on the maximal-margin classifier, the support vector machine does not extend directly to multi-class problems. In this paper, we review techniques for extending the support vector machine to multi-class classification and compare their performance. Depending on how the training data are decomposed, the support vector machine can be adapted to a multi-class problem without modifying the intrinsic characteristics of the binary classifier. Performance is evaluated on a collection of benchmark data sets and compared across the selected learning strategies, the training time, and the results of a neural network trained with backpropagation. The experiments suggest that, compared with the neural network, the support vector machine is applicable and effective for general multi-class classification problems (see the sketch below).

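The data-decomposition idea summarized above is commonly realized with one-versus-rest or one-versus-one training. The sketch below illustrates both with scikit-learn; the iris data set, RBF kernel, and split are illustrative assumptions, not the paper's benchmarks or implementation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Load a small multi-class data set (a stand-in for the paper's benchmarks).
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each scheme decomposes the multi-class task into several binary SVMs.
schemes = {
    "one-vs-rest": OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")),
    "one-vs-one": OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")),
}
for name, clf in schemes.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))  # held-out accuracy of each decomposition
```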

An analysis of Speech Acts for Korean Using Support Vector Machines (지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석)

  • En Jongmin;Lee Songwook;Seo Jungyun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.365-368
    • /
    • 2005
  • We propose a speech act analysis method for Korean dialogue using Support Vector Machines (SVM). We use the lexical form of each word, its part-of-speech (POS) tags, and POS-tag bigrams as sentence features, and the context of the previous utterance as context features. Informative features are selected with the chi-square statistic. After training the SVM on the selected features, the SVM classifiers determine the speech act of each utterance. In experiments on a dialogue corpus for the hotel reservation domain, we obtained an overall accuracy of 90.54% (see the sketch below).
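
A minimal sketch of such a pipeline, assuming scikit-learn: count features over utterances, chi-square feature selection, and a linear SVM. Word n-grams stand in for the paper's lexical and POS-bigram features, and the toy utterances and parameter values are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy utterances labeled with speech acts (illustrative only).
utterances = ["i want to book a room", "how much is a double room",
              "yes that is fine", "please cancel my reservation"]
speech_acts = ["request", "wh-question", "accept", "request"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # word uni/bigrams stand in for lexical + POS-bigram features
    SelectKBest(chi2, k=10),              # keep the most informative features by the chi-square statistic
    LinearSVC(),                          # the SVM classifier that decides the speech act
)
clf.fit(utterances, speech_acts)
print(clf.predict(["can i book a room please"]))
```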

Learning Multiple Instance Support Vector Machine through Positive Data Distribution (긍정 데이터 분포를 반영한 다중 인스턴스 지지 벡터 기계 학습)

  • Hwang, Joong-Won;Park, Seong-Bae;Lee, Sang-Jo
    • Journal of KIISE
    • /
    • v.42 no.2
    • /
    • pp.227-234
    • /
    • 2015
  • This paper proposes a modified MI-SVM algorithm that takes the data distribution into account. The original MI-SVM algorithm seeks the margin by considering the "most positive" instance in each positive bag. Positive instances contained in positive bags tend to lie in a similar region of the feature space. To reflect this characteristic, the proposed method selects the "most positive" instance by calculating the distance between each instance in the bag and a pivot point where the positive instances intersect. The paper suggests two ways to select this pivot point from the training data. First, the algorithm locates the "most positive" pivot point using the currently estimated parameters and then selects the instance in the bag nearest to the pivot as the bag's representative. Second, the algorithm finds the "most positive" pivot point using the Diverse Density framework. Experiments on 12 benchmark multi-instance data sets show that the proposed method outperforms the original MI-SVM algorithm (see the sketch below).
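
A minimal sketch of the representative-selection step, under the simplifying assumption that the pivot is the centroid of the instances the current model scores as positive; the paper's exact pivot construction and its Diverse Density variant are not reproduced here.

```python
import numpy as np

def select_representative(bag, w, b):
    """Pick a positive bag's representative instance for the next MI-SVM iteration.

    bag: (n_instances, n_features) array; w, b: current linear SVM parameters.
    """
    scores = bag @ w + b                          # current SVM score of every instance in the bag
    positives = bag[scores > 0]                   # instances the current model treats as positive
    # Pivot: centroid of the positively scored instances (falls back to the top-scoring instance).
    pivot = positives.mean(axis=0) if len(positives) else bag[np.argmax(scores)]
    dists = np.linalg.norm(bag - pivot, axis=1)   # distance of every instance to the pivot
    return bag[np.argmin(dists)]                  # the instance nearest the pivot represents the bag

rng = np.random.default_rng(0)
bag = rng.normal(size=(8, 5))                     # a toy positive bag
w, b = rng.normal(size=5), 0.0                    # toy "current" SVM parameters
print(select_representative(bag, w, b))
```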

A Spam Message Filter System for Mobile Environment (휴대폰의 스팸문자메시지 판별 시스템)

  • Lee, Songwook
    • Annual Conference on Human and Language Technology
    • /
    • 2010.10a
    • /
    • pp.194-196
    • /
    • 2010
  • With the widespread adoption of mobile phones, the use of text messages has increased rapidly, and unwanted advertising spam messages are flooding in at the same time. This study develops a system that automatically identifies such spam text messages. We trained the system with a Support Vector Machine, a machine learning method, and selected features using the chi-square statistic. Experimental results show an accuracy of about 95.5% by the F1 measure (see the sketch below).

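A minimal sketch of such a filter, assuming scikit-learn with character n-gram features, chi-square selection, and F1 evaluation; the paper's corpus, feature definitions, and reported figures are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy SMS messages: 1 = spam, 0 = ham (illustrative only).
train_msgs = ["free prize call now", "see you at lunch", "win cash today now", "meeting moved to three"]
train_y = [1, 0, 1, 0]
test_msgs = ["call now to win cash", "see you at the meeting"]
test_y = [1, 0]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)),  # character n-grams suit very short messages
    SelectKBest(chi2, k=30),                                  # chi-square feature selection, as in the abstract
    LinearSVC(),
)
clf.fit(train_msgs, train_y)
print(f1_score(test_y, clf.predict(test_msgs)))               # F1 measure used for evaluation
```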

Support Vector Machine Algorithm for Imbalanced Data Learning (불균형 데이터 학습을 위한 지지벡터기계 알고리즘)

  • Kim, Kwang-Seong;Hwang, Doo-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.7
    • /
    • pp.11-17
    • /
    • 2010
  • This paper proposes an improved SMO algorithm that solves the quadratic optimization problem for class-imbalanced learning. The SMO algorithm is appropriate for solving the optimization problem of a support vector machine that assigns different regularization values to the two classes, and the proposed SMO learning algorithm iterates the learning steps to find the current optimal solutions of only two Lagrange variables selected per class. The proposed algorithm is tested on UCI benchmark problems and compared with the standard SMO algorithm using the g-mean measure, which accounts for the class-imbalanced distribution when assessing generalization performance. Compared with the SMO algorithm, the proposed algorithm improves the prediction rate on the minority-class data and shortens the training time (see the sketch below).
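
The sketch below illustrates the underlying idea of per-class regularization and g-mean evaluation, using scikit-learn's class_weight as a stand-in for the paper's modified SMO solver; the synthetic data and the weight values are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# An imbalanced two-class problem (about 9:1), standing in for the UCI benchmarks.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, {0: 1.0, 1: 9.0}):                  # equal C vs. a larger effective C for the minority class
    clf = SVC(C=1.0, class_weight=weights).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    sensitivity = recall_score(y_te, pred, pos_label=1)   # recall on the minority class
    specificity = recall_score(y_te, pred, pos_label=0)   # recall on the majority class
    print(weights, np.sqrt(sensitivity * specificity))    # g-mean, the imbalance-aware measure
```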

Distributed Support Vector Machines for Localization on a Sensor Network (센서 네트워크에서 위치 측정을 위한 분산 지지 벡터 머신)

  • Moon, Sangook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.944-946
    • /
    • 2014
  • Localization of sensor network nodes using machine learning has recently been studied. The support vector machine algorithm is easy to implement in a high-level language that enables parallelism. In this paper, we implemented a support vector machine in Python and built a sensor network cluster with 5 Pi's. We also set up a Hadoop software framework to employ the MapReduce mechanism, and modified the existing support vector machine algorithm to fit the distributed Hadoop architecture for localization of a sensor node. In our experiments, we ran the test sensor network with a variety of parameters and evaluated it in terms of classification proficiency, resource usage, and processing time (see the sketch below).

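A single-machine sketch of one common way to fit SVM training into MapReduce: each map task trains a local SVM on a partition and emits only its support vectors, and the reduce task retrains on the pooled support vectors. This cascade-style scheme and the synthetic data are assumptions; the paper's actual Hadoop job and algorithmic modification are not shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy localization-style data; each partition plays the role of one worker node (e.g. one Pi).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
partitions = np.array_split(np.arange(len(X)), 5)

def map_task(idx):
    """'Map': train a local SVM on one partition and emit only its support vectors."""
    local = SVC(kernel="linear").fit(X[idx], y[idx])
    return idx[local.support_]

def reduce_task(support_idx_lists):
    """'Reduce': retrain a single SVM on the pooled support vectors."""
    keep = np.concatenate(support_idx_lists)
    return SVC(kernel="linear").fit(X[keep], y[keep])

final_model = reduce_task([map_task(part) for part in partitions])
print(final_model.score(X, y))
```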

Constructing a Support Vector Machine for Localization on a Low-End Cluster Sensor Network (로우엔드 클러스터 센서 네트워크에서 위치 측정을 위한 지지 벡터 머신)

  • Moon, Sangook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.12
    • /
    • pp.2885-2890
    • /
    • 2014
  • Localization of sensor network nodes using machine learning has recently been studied. The support vector machine algorithm is easy to implement in a high-level language that enables parallelism. The Raspberry Pi is a Linux system that can serve as a sensor node and can be used to construct IP-based Hadoop clusters. In this paper, we implemented a support vector machine in Python, built a sensor network cluster with 5 Pi's, and set up a Hadoop software framework to employ the MapReduce mechanism. In our experiments, we ran the test sensor network with a variety of parameters and evaluated it in terms of classification proficiency, resource usage, and processing time. The experiments showed that, given more computing power and memory, the Pi can be an appropriate member node of the cluster, achieving precise classification for sensor localization using machine learning.

Effective Korean Speech-act Classification Using Classification Priority Application and Post-correction Rules (분류 우선순위 적용과 후보정 규칙을 이용한 효과적인 한국어 화행 분류)

  • Song, Namhoon;Bae, Kyoungman;Ko, Youngjoong
    • Journal of KIISE
    • /
    • v.43 no.1
    • /
    • pp.80-86
    • /
    • 2016
  • A speech act is the behavior a user intends with an utterance, and speech-act classification is important in a dialogue system. Machine learning and rule-based methods have mainly been used for speech-act classification. In this paper, we propose a speech-act classification method that combines support vector machines (SVM) and transformation-based learning (TBL). A user's utterance is first classified by SVMs, which are applied preferentially to categories with a low utterance rate in the training data. Then, when an utterance receives negative scores for all categories, it is passed to a rule-based correction phase. Our method achieved higher performance than the baseline system, along with error reduction (see the sketch below).
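
A minimal sketch of the decision procedure, assuming one-vs-rest SVM scores inspected in order of ascending category frequency, with a simple cue-phrase rule standing in for the paper's transformation-based post-correction; the toy data and the rule are illustrative only.

```python
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy utterances and speech acts (illustrative only).
utts = ["book a room please", "how much is it", "ok thanks", "goodbye", "book two rooms"]
acts = ["request", "question", "accept", "closing", "request"]

vec = CountVectorizer()
svm = LinearSVC().fit(vec.fit_transform(utts), acts)
# Priority: categories with a low utterance rate in the training data are checked first.
priority = [cat for cat, _ in sorted(Counter(acts).items(), key=lambda kv: kv[1])]

def classify(utt):
    scores = dict(zip(svm.classes_, svm.decision_function(vec.transform([utt]))[0]))
    for cat in priority:                     # prefer a rare category as soon as its SVM score is positive
        if scores[cat] > 0:
            return cat
    # All scores negative: fall back to a post-correction rule (an illustrative cue-phrase rule).
    return "closing" if "bye" in utt else max(scores, key=scores.get)

print(classify("goodbye then"))
```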

Dual SMS SPAM Filtering: A Graph-based Feature Weighting Method (듀얼 SMS 스팸 필터링: 그래프 기반 자질 가중치 기법)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Annual Conference on Human and Language Technology
    • /
    • 2014.10a
    • /
    • pp.95-99
    • /
    • 2014
  • This paper proposes a dual SMS spam filtering method for SMS spam, which has recently grown rapidly and become a social issue. For SMS messages that keep increasing and mutating, filtering with patterns and spam-word dictionaries requires too much manual work and is unsuitable, so an automated system based on machine learning is needed; for effective machine learning, feature selection and feature weighting are important. However, because SMS messages are short, few features occur in each message, which makes classification difficult. To alleviate this problem, we expand the features through sliding-window-based N-gram expansion and build graphs over the expanded features to express shallow structural characteristics. Separate HAM and SPAM graphs are constructed, with the N-gram features observed in the training data as vertices and the occurrence frequencies of the features as edge weights. Based on these graphs, the final feature weights are determined from the node importance and the edge weights. When an input message arrives, two feature vectors are generated for it, one from the spam graph and one from the ham graph. The generated feature vectors are fed to Support Vector Machines, and the resulting SVM probability scores determine whether the message is spam. In three experimental settings, the F1-score improved over a baseline system using bigram features and binary weights by at most about 2.7% and at least 0.5%, for an average improvement of about 1.35% (see the sketch below).

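A minimal sketch of the dual-graph weighting step, using weighted degree as a stand-in for the paper's node-importance measure and a tiny corpus for illustration; the final SVM probability scoring is omitted.

```python
from collections import defaultdict

def bigrams(msg):
    toks = msg.split()
    return list(zip(toks, toks[1:]))                    # sliding-window n-gram expansion (bigrams here)

def build_weights(messages):
    """Build one class graph: n-grams are vertices, adjacent n-gram counts are edge weights."""
    edge = defaultdict(int)
    for m in messages:
        grams = bigrams(m)
        for a, b in zip(grams, grams[1:]):              # adjacent n-grams in a message share an edge
            edge[(a, b)] += 1
    node_weight = defaultdict(float)
    for (a, b), w in edge.items():                      # node importance approximated by weighted degree
        node_weight[a] += w
        node_weight[b] += w
    return node_weight

spam_w = build_weights(["win free cash now", "free cash prize now"])
ham_w = build_weights(["see you at lunch", "lunch at noon then"])

def dual_vectors(msg):
    grams = bigrams(msg)
    spam_vec = [spam_w.get(g, 0.0) for g in grams]      # message weighted by the SPAM graph
    ham_vec = [ham_w.get(g, 0.0) for g in grams]        # message weighted by the HAM graph
    return spam_vec, ham_vec                            # the two vectors would then be scored by SVMs

print(dual_vectors("win free cash at lunch"))
```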

A Study on automatic assignment of descriptors using machine learning (기계학습을 통한 디스크립터 자동부여에 관한 연구)

  • Kim, Pan-Jun
    • Journal of the Korean Society for Information Management
    • /
    • v.23 no.1 s.59
    • /
    • pp.279-299
    • /
    • 2006
  • This study utilizes various approaches of machine learning in the process of automatically assigning descriptors to journal articles. The effectiveness of feature selection and the size of training set were examined, after selecting core journals in the field of information science and organizing test collection from the articles of the past 11 years. Regarding feature selection, after reducing the feature set using $x^2$ statistics(CHI) and criteria that prefer high-frequency features(COS, GSS, JAC), the trained Support Vector Machines(SVM) performed the best. With respect to the size of the training set, it significantly influenced the performance of Support Vector Machines(SVM) and Voted Perceptron(VTP). However, it had little effect on Naive Bayes(NB).