• Title/Summary/Keyword: Naïve Bayes

Relationship between Result of Sentiment Analysis and User Satisfaction -The case of Korean Meteorological Administration- (감성분석 결과와 사용자 만족도와의 관계 -기상청 사례를 중심으로-)

  • Kim, In-Gyum;Kim, Hye-Min;Lim, Byunghwan;Lee, Ki-Kwang
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.393-402 / 2016
  • To compensate for the limitations of the satisfaction survey currently conducted by the Korea Meteorological Administration (KMA), sentiment analysis of social networking service (SNS) data can be utilized. Tweets mentioning 'KMA' posted from 2011 to 2014 were collected and, using Naïve Bayes classification, categorized into three sentiments: positive, negative, and neutral. An additional dictionary was built from morphemes that appeared in only one of the positive, negative, or neutral classes of the basic Naïve Bayes classification, which improved the accuracy of the sentiment analysis. When sentiments were classified with the basic Naïve Bayes classifier, the training data were reproduced with about 75% accuracy, whereas classification with the additional dictionary reached about 97% accuracy. On the verification data, classification with the additional dictionary achieved about 75% accuracy. This lower accuracy could be improved not only by a better-qualified dictionary built from a larger amount of training data covering diverse weather-related keywords, but also by continuously updating the dictionary. Meanwhile, in contrast to sentiment analysis based on the dictionary definitions of individual words, classifying sentiment by the meaning of whole sentences could explain the increased rate of negative sentiment and the change in satisfaction. Therefore, sentiment analysis via SNS can be considered a useful tool for complementing surveys in the future.
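
The following is a minimal, illustrative sketch of the three-class Naïve Bayes step described in this abstract, using scikit-learn; the example tweets, tokenization, and labels are invented placeholders, not the paper's data or additional dictionary.

```python
# Hedged sketch: three-class (positive/negative/neutral) Naive Bayes
# sentiment classification in the spirit of the KMA study.
# The tiny corpus and labels below are made up for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "the forecast was accurate today, thanks KMA",        # positive
    "rain again and the forecast missed it completely",   # negative
    "KMA issued a heat advisory for tomorrow",            # neutral
    "great job predicting the typhoon path",              # positive
    "wrong forecast ruined my weekend plans",              # negative
    "weekly outlook released this morning",                # neutral
]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

# Bag-of-words features (morpheme-level in the original work) + multinomial NB.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)

print(model.predict(["the forecast was wrong again"]))
```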

Informal Quality Data Analysis via Sentimental analysis and Word2vec method (감성분석과 Word2vec을 이용한 비정형 품질 데이터 분석)

  • Lee, Chinuk;Yoo, Kook Hyun;Mun, Byeong Min;Bae, Suk Joo
    • Journal of Korean Society for Quality Management / v.45 no.1 / pp.117-128 / 2017
  • Purpose: This study analyzes automobile quality review data to develop an alternative method for analyzing informal data. Existing methods rely mainly on the frequency of terms in informal data; this research instead tries to use the correlation information among the data. Method: After sentiment analysis to acquire user opinions on automobile products, three classification methods, namely Naïve Bayes, random forest, and support vector machine, were employed to accurately classify the informal user opinions with respect to automobile quality. Additionally, Word2vec was applied to discover correlated information in the informal data. Result: Among the three classification methods, random forest was the most effective. The Word2vec method was able to discover the data most closely related to automobile components. Conclusion: The proposed method is effective in terms of accuracy and sensitivity for the analysis of informal quality data; however, only two sentiments (positive or negative) could be categorized due to human errors. Further studies are required to derive more sentiments so that informal quality data can be classified more accurately. The Word2vec method also shows comparable results in precisely discovering the relevance of components.
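
Below is a rough sketch of the classifier comparison and the Word2vec query described in this abstract, assuming scikit-learn and gensim; the review snippets and labels are invented placeholders.

```python
# Hedged sketch: compare Naive Bayes, random forest, and SVM on toy
# positive/negative review text, then query Word2vec for related terms.
# The sentences and labels are placeholders, not the paper's data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from gensim.models import Word2Vec

reviews = ["engine noise is terrible", "brakes respond well",
           "transmission failed early", "seats are comfortable",
           "paint peeled off quickly", "fuel economy is excellent"]
labels = [0, 1, 0, 1, 0, 1]                        # 0 = negative, 1 = positive

X = TfidfVectorizer().fit_transform(reviews)
for name, clf in [("NaiveBayes", MultinomialNB()),
                  ("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC(kernel="linear"))]:
    print(name, cross_val_score(clf, X, labels, cv=3).mean())

# Word2vec over tokenized reviews to find terms related to a component.
tokens = [r.split() for r in reviews]
w2v = Word2Vec(tokens, vector_size=50, window=2, min_count=1, seed=1)
print(w2v.wv.most_similar("engine", topn=3))
```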

Security tendency analysis techniques through machine learning algorithms applications in big data environments (빅데이터 환경에서 기계학습 알고리즘 응용을 통한 보안 성향 분석 기법)

  • Choi, Do-Hyeon;Park, Jung-Oh
    • Journal of Digital Convergence / v.13 no.9 / pp.269-276 / 2015
  • Recently, with the growth of the big data industry, global security companies have expanded their scope from structured to unstructured data for intelligent security threat monitoring and prevention, and they increasingly utilize user tendency analysis for security prevention. This is because the information that can be deduced from analyzing only the existing structured (already quantified) data is limited. This study applies machine learning algorithms (Naïve Bayes, Decision Tree, K-nearest neighbor, Apriori) in a big data environment to analyze security tendency, that is, distinguishing the purpose of classified items, judging positive or negative polarity, and analyzing the relevance of key keywords. The capability analysis confirmed that security items and specific indexes for determining security tendency could be extracted from both structured and unstructured data.
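
A hedged sketch of applying the four algorithms named in this abstract to toy data follows; the feature vectors, labels, and keyword transactions are invented, and the Apriori step uses the mlxtend package as an assumed stand-in.

```python
# Hedged sketch: Naive Bayes, decision tree, and k-NN on toy "security tendency"
# feature vectors, plus Apriori (via mlxtend) for frequently co-occurring
# security keywords. All data below are invented.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.frequent_patterns import apriori

# Toy numeric features (e.g. counts of security-related keywords per user)
# with a positive/negative tendency label.
X = [[3, 0, 1], [0, 2, 4], [5, 1, 0], [1, 3, 3], [4, 0, 2], [0, 4, 5]]
y = [1, 0, 1, 0, 1, 0]

for name, clf in [("NaiveBayes", GaussianNB()),
                  ("DecisionTree", DecisionTreeClassifier()),
                  ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(X, y)
    print(name, clf.predict([[2, 1, 1]]))

# Apriori over one-hot "keyword present in document" transactions.
transactions = pd.DataFrame([[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 0, 1]],
                            columns=["malware", "phishing", "leak"]).astype(bool)
print(apriori(transactions, min_support=0.5, use_colnames=True))
```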

Software Quality Classification using Bayesian Classifier (베이지안 분류기를 이용한 소프트웨어 품질 분류)

  • Hong, Euy-Seok
    • Journal of Information Technology Services / v.11 no.1 / pp.211-221 / 2012
  • Many metric-based classification models have been proposed to predict the fault-proneness of software modules. This paper presents two prediction models using Bayesian classifiers, one of the most popular families of modern classification algorithms. Bayesian models based on Bayesian probability theory can be a promising technique for software quality prediction, owing to their ability to represent uncertainty using probabilities and to partly incorporate expert knowledge into the training data. The two models, Naïve Bayes (NB) and Bayesian Belief Network (BBN), are constructed, and dimensionality reduction of the training and test data is performed before model evaluation. Prediction accuracy is evaluated using two prediction error measures, Type I error and Type II error, and compared with well-known prediction models: a backpropagation neural network model and a support vector machine model. The results show that the prediction performance of the BBN model is slightly better than that of NB. Although the BBN model's prediction accuracy is not as good as that of the compared models on the data set with ambiguity, it achieves better performance than they do on the data set without ambiguity.
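
The sketch below illustrates the Naïve Bayes half of such a model together with the two error measures, assuming the common convention that Type I error counts fault-free modules predicted as faulty and Type II error counts faulty modules predicted as fault-free; the metric values and labels are invented.

```python
# Hedged sketch: Gaussian Naive Bayes on toy software-metric vectors, reporting
# Type I (fault-free predicted faulty) and Type II (faulty predicted fault-free)
# error rates. The metric values and labels are invented.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

# Columns: e.g. lines of code, cyclomatic complexity, coupling (all made up).
X = np.array([[120, 4, 2], [900, 25, 9], [300, 8, 3], [1500, 40, 12],
              [200, 5, 1], [1100, 30, 10], [250, 6, 2], [1300, 35, 11]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])             # 1 = fault-prone module

clf = GaussianNB().fit(X[:6], y[:6])               # simple train split
pred = clf.predict(X[6:])                          # hold-out test split
tn, fp, fn, tp = confusion_matrix(y[6:], pred, labels=[0, 1]).ravel()
type1 = fp / (fp + tn) if (fp + tn) else 0.0       # false positive rate
type2 = fn / (fn + tp) if (fn + tp) else 0.0       # false negative rate
print("Type I error:", type1, "Type II error:", type2)
```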

A Semantic Analysis of Korean Compound Nouns with Enforced Semantic Constraints using a Naïve Bayes Classifier (나이브 베이즈 분류기를 이용한 의미제약이 강화된 한국어 복합명사 의미 분석)

  • Lee, Yong-Hoon;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2011.10a / pp.102-106 / 2011
  • This paper introduces a semantic constraint technique that adds a Naïve Bayes classifier to the existing method based on the original-word (etymological) information in a dictionary. As a preprocessing step for semantic analysis, semantic constraint partially resolves ambiguity, which greatly helps not only the analysis accuracy of the input compound nouns but also the reduction of overall analysis time. The Naïve Bayes approach attempts to constrain the 2-grams that cannot be constrained because of dictionary dependency. The training data for the classifier are generated from a sense-tagged, pre-analyzed 2-gram dictionary using the relation information of U-WIN, the dictionary, and patterns. Of the 34.63% of 2-grams that could not be resolved with the original-word information, an additional 2.83% were successfully constrained.
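
As a loose, hedged illustration of the idea of a Naïve Bayes constraint check over sense-tagged 2-grams, the sketch below uses invented sense labels and acceptability judgments; the actual system relies on U-WIN relations, dictionaries, and patterns that are not reproduced here.

```python
# Hedged sketch: a Naive Bayes acceptability check over sense-tagged 2-grams,
# loosely mirroring the paper's idea. The sense labels and training pairs are
# invented; the real system uses U-WIN relations, dictionaries, and patterns.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

# Each 2-gram is represented by the semantic classes of its two nouns.
pairs = [{"head": "place", "mod": "organization"},
         {"head": "person", "mod": "organization"},
         {"head": "artifact", "mod": "material"},
         {"head": "material", "mod": "person"},
         {"head": "event", "mod": "artifact"}]
allowed = [1, 1, 1, 0, 0]    # 1 = semantically acceptable combination

model = make_pipeline(DictVectorizer(), BernoulliNB())
model.fit(pairs, allowed)
print(model.predict([{"head": "place", "mod": "material"}]))
```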

Using Naïve Bayes Classifier and Confusion Matrix Spelling Correction in OCR (나이브 베이즈 분류기와 혼동 행렬을 이용한 OCR에서의 철자 교정)

  • Noh, Kyung-Mok;Kim, Chang-Hyun;Cheon, Min-Ah;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology / 2016.10a / pp.310-312 / 2016
  • To reduce the errors of OCR (Optical Character Recognition), this paper proposes a spelling correction system that uses a confusion matrix of correction word pairs and a Naïve Bayes classifier. The system corrects only Hangul spelling errors. The corpora used in the experiments are a raw Korean corpus, an OCR output corpus, and an OCR gold-standard corpus. From the raw Korean corpus, a grapheme-level language model and a prefix corpus for retrieving correction candidates were built; from the OCR output corpus and the OCR gold-standard corpus, correction word pairs were extracted and decomposed into graphemes to build a confusion matrix, which was then used to construct an error model. Correction candidates are retrieved with the prefix corpus, and the n candidates with the highest probability under the Naïve Bayes classifier are suggested. A correction is counted as successful if the correct word is among the n candidates. As a result, for an OCR engine with a recognition rate of about 97.73%, suggesting three correction candidates raised the recognition rate by about 0.28 percentage points to 98.01%. This covers only Hangul errors; better results are expected if special characters, digits, and the like are also handled in the correction process.
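
A hedged sketch of the underlying candidate-scoring idea (language-model probability combined with a confusion-matrix error model) follows; the toy vocabulary, counts, and confusion probabilities are invented, and Hangul grapheme decomposition is omitted.

```python
# Hedged sketch of noisy-channel style candidate scoring:
#   score(candidate) = P(candidate) * P(observed | candidate),
# where P(observed | candidate) comes from a per-character confusion matrix.
# Vocabulary, counts, and confusion probabilities are invented; real Hangul
# grapheme decomposition is omitted.
import math

lm_counts = {"cat": 50, "cart": 10, "cut": 30}       # toy unigram language model
total = sum(lm_counts.values())
confusion = {("a", "a"): 0.90, ("a", "u"): 0.10,     # P(observed char | true char)
             ("u", "u"): 0.90, ("u", "a"): 0.10}

def channel_prob(observed, candidate):
    """Very rough error model: product over character-aligned positions."""
    if len(observed) != len(candidate):
        return 1e-6                                   # crude length-mismatch penalty
    p = 1.0
    for o, c in zip(observed, candidate):
        p *= confusion.get((c, o), 0.99 if o == c else 0.01)
    return p

def correct(observed, n=3):
    """Return the n candidates with the highest log LM + log channel score."""
    scored = [(math.log(cnt / total) + math.log(channel_prob(observed, cand)), cand)
              for cand, cnt in lm_counts.items()]
    return [cand for _, cand in sorted(scored, reverse=True)[:n]]

print(correct("cut"))    # candidates ranked by LM probability x error model
```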

Mobile Junk Message Filter Reflecting User Preference

  • Lee, Kyoung-Ju;Choi, Deok-Jai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.11 / pp.2849-2865 / 2012
  • In order to block mobile junk messages automatically, many studies on spam filters have applied machine learning algorithms. Most previous research focused only on the accuracy of spam filters from the viewpoint of the algorithm used, not on individual users' preferences. In terms of individual taste, a spam filter implemented on a mobile device has an advantage over a spam filter on a network node, because it deals only with incoming messages on the user's phone and generates no additional traffic during the filtering process. However, a spam filter on a mobile phone has to consider resource consumption, because energy, memory, and computing ability are limited. Moreover, as time passes, an increasing number of feature words are likely to exhaust mobile resources. In this paper we propose a spam filter model distributed between a user's computer and smartphone. We expect the model to follow personal decision boundaries while using a uniform amount of smartphone resources. An authorized user's computer takes on the more complex and time-consuming jobs, such as feature selection and training, while the smartphone performs only the minimum amount of work for filtering, utilizing the results computed on the desktop. Our experiments show that the accuracy of our method is more than 95% with Naïve Bayes and Support Vector Machine, and that our model, which uses a uniform amount of memory, does not affect other applications running on the smartphone.
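
The sketch below illustrates the desktop/phone split described in this abstract, assuming scikit-learn on the desktop side: select a small feature set, train multinomial Naïve Bayes, and export only the vocabulary and log-probability tables so the phone can score messages with plain arithmetic; the messages and labels are placeholders.

```python
# Hedged sketch of the desktop/phone split: the desktop selects features and
# trains multinomial Naive Bayes; only compact log-probability tables are
# "shipped" to the phone, which scores messages with simple arithmetic.
# Messages and labels are placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

msgs = ["win a free prize now", "meeting moved to 3pm", "free loan call now",
        "lunch tomorrow?", "claim your free gift", "see you at the gym"]
y = [1, 0, 1, 0, 1, 0]                           # 1 = junk

vec = CountVectorizer()
X = vec.fit_transform(msgs)
sel = SelectKBest(chi2, k=5).fit(X, y)           # keep only a small feature set
nb = MultinomialNB().fit(sel.transform(X), y)

# "Exported" model: selected words, per-class log-probabilities, class priors.
words = np.array(vec.get_feature_names_out())[sel.get_support()]
table = {w: nb.feature_log_prob_[:, i] for i, w in enumerate(words)}
priors = nb.class_log_prior_

def phone_score(message):
    """Lightweight on-device scoring: sum log-probabilities, take the argmax."""
    scores = priors.copy()
    for w in message.lower().split():
        if w in table:
            scores = scores + table[w]
    return int(np.argmax(scores))                # 1 = junk, 0 = legitimate

print(phone_score("free prize call now"))
```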

A Framework for Semantic Interpretation of Noun Compounds Using Tratz Model and Binary Features

  • Zaeri, Ahmad;Nematbakhsh, Mohammad Ali
    • ETRI Journal / v.34 no.5 / pp.743-752 / 2012
  • Semantic interpretation of the relationship between noun compound (NC) elements has been a challenging issue due to the lack of contextual information, the unbounded number of combinations, and the absence of a universally accepted categorization system. Current models require a huge corpus of data to extract contextual information, which limits their usage in many situations. In this paper, a new semantic relation interpreter for NCs based on lightweight binary features, some of which are novel, is proposed. In addition, the interpreter uses a new feature selection method. By developing these new features and techniques, the proposed method removes the need for any huge corpus. Implementing this method in a modular and plugin-based framework, and training it on the largest and most current fine-grained data set, shows that its accuracy is better than that of previously reported methods that rely on large corpora. This improvement in accuracy and efficiency is achieved not only by improving older features with techniques such as semantic scattering and sense collocation, but also by using various novel features and a maximum entropy classifier. It is also shown that the accuracy of the maximum entropy classifier is higher than that of other classifiers, such as a support vector machine, Naïve Bayes, and a decision tree.
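
A rough sketch of "binary features plus a maximum entropy classifier" follows, using logistic regression as the maximum entropy model; the feature names and relation labels are invented and do not reproduce the Tratz data set.

```python
# Hedged sketch: maximum entropy (multinomial logistic regression) over binary
# features for noun-compound relation labels. Feature names and relation labels
# are invented placeholders, not the Tratz inventory.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

compounds = [{"head_is_artifact": 1, "mod_is_material": 1},   # "steel knife"
             {"head_is_artifact": 1, "mod_is_purpose": 1},    # "bread knife"
             {"head_is_event": 1, "mod_is_time": 1},          # "morning meeting"
             {"head_is_artifact": 1, "mod_is_material": 1},   # "glass door"
             {"head_is_event": 1, "mod_is_time": 1}]          # "night flight"
relations = ["MATERIAL", "PURPOSE", "TIME", "MATERIAL", "TIME"]

maxent = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
maxent.fit(compounds, relations)
print(maxent.predict([{"head_is_artifact": 1, "mod_is_purpose": 1}]))
```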

Defect Detection in Laser Welding Using Multidimensional Discretization and Event-Codification (Multidimensional Discretization과 Event-Codification 기법을 이용한 레이저 용접 불량 검출)

  • Baek, Su Jeong;Oh, Rocku;Kim, Duck Young
    • Journal of the Korean Society for Precision Engineering / v.32 no.11 / pp.989-995 / 2015
  • In the literature, various stochastic anomaly detection methods, such as limit checking and PCA-based approaches, have been applied to weld defect detection. However, it is still a challenge to identify meaningful defect patterns from the very limited sensor signals of laser welding, which are characterized by intermittent, discontinuous, very short, and non-stationary random signals. In order to effectively analyze the physical characteristics of the laser weld signals (plasma intensity, weld pool temperature, and back reflection), we first transform the raw laser weld signals into the form of event logs. This is done by multidimensional discretization and event-codification, after which the event logs are decoded to extract weld defect patterns by a Naïve Bayes classifier. The performance of the proposed method is examined in comparison with PRECITEC's commercial LWM™ solution and the most recent PCA-based detection method. The results show the higher performance of the proposed method in terms of sensitivity (1.00) and specificity (0.98).
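
The sketch below gives a hedged reading of the pipeline described in this abstract: discretize each sensor channel into symbols, codify the joint symbols as events, classify welds with Naïve Bayes, and report sensitivity and specificity; all signal values and labels are synthetic.

```python
# Hedged sketch: discretize each of three weld-signal channels into symbols,
# codify each time step as a joint event, count events per weld, and classify
# defect vs. normal welds with Naive Bayes. All numbers are synthetic.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

def codify(signals, bins=3):
    """Discretize each channel into `bins` levels and fuse them into event codes."""
    cuts = np.linspace(0, 1, bins + 1)[1:-1]
    levels = np.stack([np.digitize(ch, np.quantile(ch, cuts)) for ch in signals])
    codes = levels[0] * bins**2 + levels[1] * bins + levels[2]   # joint event id
    return np.bincount(codes, minlength=bins**3)                 # event histogram

def simulate(defect):
    """Toy 3-channel signal (plasma intensity, temperature, back reflection)."""
    base = rng.normal(0, 1, (3, 200))
    if defect:
        base[:, 80:120] += 3.0                                   # injected anomaly
    return base

X = np.array([codify(simulate(d)) for d in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = MultinomialNB().fit(X[::2], y[::2])                        # even rows: train
pred = clf.predict(X[1::2])                                      # odd rows: test
tn, fp, fn, tp = confusion_matrix(y[1::2], pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```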