• Title/Abstract/Keywords: k-NN Classification

Search results: 192 items

회전기계 고장 진단에 적용한 인공 신경회로망과 통계적 패턴 인식 기법의 비교 연구 (A Comparison of Artificial Neural Networks and Statistical Pattern Recognition Methods for Rotation Machine Condition Classification)

  • 김창구;박광호;기창두
    • 한국정밀공학회지
    • /
    • 제16권12호
    • /
    • pp.119-125
    • /
    • 1999
  • This paper gives an overview of various approaches to designing statistical pattern recognition schemes based on the Bayes discrimination rule and artificial neural networks for rotating machine condition classification. Concerning the Bayes discrimination rule, the paper covers the linear discrimination rule, applied to classification among several multivariate normal distributions with a common covariance matrix, and the quadratic discrimination rule for the case of different covariance matrices. We also describe the k-nearest neighbor method, which directly estimates the posterior probability of each class. Five features are extracted from time-domain vibration signals. Employing these five features, statistical pattern classifiers and neural networks were built to detect defects on rotating machines. Four different conditions of the rotating machine were observed. The effects of the value of k and of the neural network structure on monitoring performance were also investigated. To compare the diagnostic performance of the two approaches, their recognition success rates were calculated from the test data. The experimental classification of rotating machine conditions with each method shows that the neural network achieves the highest recognition rate. (An illustrative sketch of the statistical classifiers follows this entry.)

  • PDF
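As a rough, hedged illustration of the statistical classifiers surveyed above, the sketch below compares a linear discriminant (common covariance), a quadratic discriminant (class-specific covariances), and a k-NN classifier using scikit-learn. The synthetic five-feature, four-class data, the train/test split, and k=5 are assumptions standing in for the paper's vibration features and experimental setup.

```python
# Hedged sketch: LDA (common covariance), QDA (per-class covariances), and k-NN
# compared on synthetic 5-feature, 4-class data standing in for the paper's
# time-domain vibration features. The data and k=5 are assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "LDA (linear rule, common covariance)": LinearDiscriminantAnalysis(),
    "QDA (quadratic rule, per-class covariance)": QuadraticDiscriminantAnalysis(),
    "k-NN (k=5, posterior estimated from neighbors)": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: recognition rate = {clf.score(X_te, y_te):.3f}")
```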

문서 분류에서의 SVM 오류 감소를 위한 하이브리드 방법 (Hybrid Approach to SVM Error Reduction in Document Classification)

  • 이준석;김상수;박성배;이상조
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2005년도 가을 학술발표논문집 Vol.32 No.2 (2)
    • /
    • pp.544-546
    • /
    • 2005
  • In this paper, we propose the following method to improve document classification performance. Documents are first classified with an SVM (Support Vector Machine), which performs well on pattern classification problems, and the data that fall within the margin are then re-classified with k-NN. Using k-NN together with the SVM showed higher performance than using the SVM alone. (An illustrative sketch of this hybrid follows this entry.)

  • PDF
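The following is a minimal sketch of the hybrid described above, assuming a binary problem, a linear-kernel SVM, a margin threshold of 1.0 on the decision value, and k=5; the synthetic data merely stands in for real document vectors.

```python
# Hedged sketch of the SVM + k-NN hybrid: test samples whose SVM decision value
# falls inside the margin are re-labelled by k-NN. The synthetic data, linear
# kernel, margin threshold of 1.0, and k=5 are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

svm = SVC(kernel="linear").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

pred = svm.predict(X_te)
margin = np.abs(svm.decision_function(X_te)) < 1.0   # samples inside the margin
pred[margin] = knn.predict(X_te[margin])             # re-classify them with k-NN

print("SVM alone :", svm.score(X_te, y_te))
print("SVM + k-NN:", (pred == y_te).mean())
```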

초월평면 최적화를 이용한 최근접 초월평면 학습법의 성능 향상 방법 (An Optimizing Hyperrectangle method for Nearest Hyperrectangle Learning)

  • 이형일
    • 한국지능시스템학회논문지
    • /
    • 제13권3호
    • /
    • pp.328-333
    • /
    • 2003
  • The nearest-hyperrectangle method, based on NGE theory and proposed for efficient use of memory and improved classification performance in memory-based reasoning, classifies with hyperrectangles generated by projecting the training data onto hyperrectangles. Its drawback is that erroneous examples contained in the training data are incorporated into the hyperrectangles as they are, degrading classification accuracy. This paper proposes an Optimizing Hyperrectangle (OH) method that remedies this drawback of the conventional nearest-hyperrectangle approach. The proposed method assigns a feature-weight vector to each hyperrectangle during learning and, after learning, extracts the most frequent interval of each feature over all generated hyperrectangles to build optimized hyperrectangles used for classification. Like the EACH system, the proposed method uses only about 40% of the memory space required by a k-NN classifier, while showing better recognition performance than the EACH system.
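Below is a minimal numpy sketch of nearest-hyperrectangle classification with per-rectangle feature weights. The per-class bounding boxes and uniform weights are placeholders; the OH construction from the paper (learned weights and most-frequent feature intervals) is not reproduced here.

```python
# Hedged sketch of nearest-hyperrectangle classification with feature weights.
# The hyperrectangles below are simple per-class bounding boxes; the paper's OH
# construction is not reproduced.
import numpy as np

def rect_distance(x, lower, upper, weights):
    """Weighted distance from point x to an axis-aligned hyperrectangle."""
    # Per-feature gap: 0 if x lies inside the interval, else distance to the face.
    gap = np.maximum(lower - x, 0.0) + np.maximum(x - upper, 0.0)
    return np.sqrt(np.sum(weights * gap ** 2))

def nearest_rect_predict(x, rects):
    """rects: list of (label, lower, upper, weights); return label of nearest one."""
    return min(rects, key=lambda r: rect_distance(x, r[1], r[2], r[3]))[0]

# Toy data: two classes in 2-D, one bounding-box hyperrectangle per class.
rng = np.random.default_rng(0)
class0 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
class1 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
rects = [
    (0, class0.min(axis=0), class0.max(axis=0), np.ones(2)),
    (1, class1.min(axis=0), class1.max(axis=0), np.ones(2)),
]
print(nearest_rect_predict(np.array([0.2, -0.1]), rects))  # -> 0
print(nearest_rect_predict(np.array([2.8, 3.4]), rects))   # -> 1
```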

개선된 데이터마이닝을 위한 혼합 학습구조의 제시 (Hybrid Learning Architectures for Advanced Data Mining:An Application to Binary Classification for Fraud Management)

  • Kim, Steven H.;Shin, Sung-Woo
    • 정보기술응용연구
    • /
    • 제1권
    • /
    • pp.173-211
    • /
    • 1999
  • The task of classification permeates all walks of life, from business and economics to science and public policy. In this context, nonlinear techniques from artificial intelligence have often proven to be more effective than the methods of classical statistics. The objective of knowledge discovery and data mining is to support decision making through the effective use of information. The automated approach to knowledge discovery is especially useful when dealing with large data sets or complex relationships. For many applications, automated software may find subtle patterns which escape the notice of manual analysis, or whose complexity exceeds the cognitive capabilities of humans. This paper explores the utility of a collaborative learning approach involving integrated models in the preprocessing and postprocessing stages. For instance, a genetic algorithm performs feature-weight optimization in a preprocessing module, while an inductive tree, an artificial neural network (ANN), and k-nearest neighbor (kNN) techniques serve as postprocessing modules. More specifically, the postprocessors act as second-order classifiers which determine the best first-order classifier on a case-by-case basis. In addition to the second-order models, a voting scheme is investigated as a simple but efficient postprocessing model. The first-order models consist of statistical and machine learning models such as logistic regression (logit), multivariate discriminant analysis (MDA), ANN, and kNN. The genetic algorithm, inductive decision tree, and voting scheme act as kernel modules for collaborative learning. These ideas are explored against the background of a practical application relating to financial fraud management, which exemplifies a binary classification problem. (A sketch of the voting post-processor follows this entry.)

  • PDF
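As a hedged sketch of the voting post-processor over the first-order models named above, the code below combines logit, discriminant analysis, an ANN, and kNN with a hard vote. LDA stands in for MDA, the synthetic imbalanced binary data stands in for the fraud data set, and the genetic-algorithm feature-weighting stage is omitted.

```python
# Hedged sketch of a voting post-processor over the paper's first-order models.
# LDA replaces MDA, synthetic imbalanced data replaces the fraud data set, and
# the GA pre-processing stage is left out.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = make_classification(n_samples=800, n_features=15, weights=[0.9, 0.1],
                           random_state=2)  # imbalanced binary problem
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

first_order = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("mda", LinearDiscriminantAnalysis()),
    ("ann", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=2)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]
vote = VotingClassifier(estimators=first_order, voting="hard").fit(X_tr, y_tr)

for name, clf in first_order:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
print("voting ensemble", vote.score(X_te, y_te))
```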

Off-line PD Model Classification of Traction Motor Stator Coil Using BP

  • Park Seong-Hee;Jang Dong-Uk;Kang Seong-Hwa;Lim Kee-Joe
    • KIEE International Transactions on Electrophysics and Applications
    • /
    • 제5C권6호
    • /
    • pp.223-227
    • /
    • 2005
  • Insulation failure of a traction motor stator coil depends on the continuous stress imposed on it, so knowing its insulation condition is significant for safe operation. In this paper, the application of a neural network (NN) as an off-line diagnosis scheme for partial discharge (PD) occurring at the stator coil of a traction motor was studied. For PD data acquisition, three defect models were made: an internal void discharge model, a slot discharge model, and a surface discharge model. PD data for recognition were acquired from a PD detector. Statistical distributions and their parameters were calculated to discriminate between the model discharge sources. These statistical distribution parameters are fed to the NN, which classifies the PD sources with a good recognition rate.
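A minimal sketch of the classification stage is given below, assuming synthetic feature vectors in place of the statistical distribution parameters computed from real PD patterns; scikit-learn's MLPClassifier serves as the back-propagation NN.

```python
# Hedged sketch: a back-propagation MLP classifying three discharge sources from
# statistical distribution parameters. The synthetic feature vectors merely stand
# in for the parameters computed from real PD data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=3, random_state=3)  # void / slot / surface
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000,
                                 random_state=3))
nn.fit(X_tr, y_tr)
print("recognition rate:", nn.score(X_te, y_te))
```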

Research on Fault Diagnosis of Wind Power Generator Blade Based on SC-SMOTE and kNN

  • Peng, Cheng;Chen, Qing;Zhang, Longxin;Wan, Lanjun;Yuan, Xinpan
    • Journal of Information Processing Systems
    • /
    • 제16권4호
    • /
    • pp.870-881
    • /
    • 2020
  • Because the SCADA monitoring data of wind turbines are large and fast-changing, the unbalanced proportion of data across operating conditions makes fault feature data difficult to process. Existing methods mainly introduce new, non-repeating instances by interpolating between adjacent minority samples. To overcome the shortcoming of these methods, which do not consider boundary conditions when balancing the data, an improved over-sampling balancing algorithm, SC-SMOTE (safe circle synthetic minority oversampling technology), is proposed to optimize the data sets. Then, for the balanced data sets, a fault diagnosis method based on improved k-nearest neighbors (kNN) classification is adopted for wind turbine blade icing. Compared with the SMOTE algorithm, the experimental results show that the method is effective in diagnosing blade icing faults and improves diagnostic accuracy.
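As a hedged sketch of the oversample-then-classify pipeline, the code below uses the standard SMOTE implementation from imbalanced-learn as a stand-in for SC-SMOTE and synthetic imbalanced data in place of the SCADA blade-icing records.

```python
# Hedged sketch: standard SMOTE stands in for the paper's SC-SMOTE variant, and
# synthetic imbalanced data replaces the SCADA blade-icing records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE   # pip install imbalanced-learn

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05],
                           random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=4)

X_bal, y_bal = SMOTE(random_state=4).fit_resample(X_tr, y_tr)  # balance classes

plain = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
balanced = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)

print("kNN, raw data   F1:", f1_score(y_te, plain.predict(X_te)))
print("kNN, SMOTE data F1:", f1_score(y_te, balanced.predict(X_te)))
```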

Utilizing Data Mining Techniques to Predict Students Performance using Data Log from MOODLE

  • Noora Shawareb;Ahmed Ewais;Fisnik Dalipi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • 제18권9호
    • /
    • pp.2564-2588
    • /
    • 2024
  • Due to the COVID-19 pandemic, most educational institutions and schools changed from traditional teaching to online teaching and learning using well-known Learning Management Systems (LMS) such as Moodle, Canvas, and Blackboard. Accordingly, LMSs have started to generate large amounts of data related to students' characteristics, achievements, and other course-related information, which makes it difficult for teachers to monitor students' behaviour and performance. There is therefore a need to support teachers with a tool that alerts them to students who might be at risk, based on the activities and achievements recorded in the school's LMS. This paper focuses on the benefits of using data recorded in LMS platforms, specifically Moodle, to predict students' performance by analysing their behavioural data and engagement activities using data mining techniques. As part of the overall process, this study addresses the task of extracting and selecting relevant data features for predicting performance, along with designing the framework and choosing appropriate machine learning techniques. The collected data underwent pre-processing to remove random partitions, empty values, and duplicates, and to code the data. Different machine learning techniques, including k-NN, TREE, Ensembled Tree, SVM, and MLPNNs, were applied to the processed data. The results showed that the MLPNN technique outperformed the other classification techniques, achieving a classification accuracy of 93%, while SVM and k-NN achieved 90% and 87% respectively. This indicates the possibility for future research to investigate incorporating other neural network methods for categorizing students using data from an LMS.
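The sketch below mirrors the classifier comparison in spirit only: synthetic features replace the Moodle activity-log features, a random forest stands in for the ensembled tree, and default hyper-parameters are assumptions.

```python
# Hedged sketch of the classifier comparison; synthetic features stand in for
# the Moodle activity-log features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "TREE": DecisionTreeClassifier(random_state=5),
    "Ensembled Tree": RandomForestClassifier(random_state=5),
    "SVM": SVC(),
    "MLPNN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=5),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    print(f"{name}: accuracy = {pipe.score(X_te, y_te):.3f}")
```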

HKIB-20000 & HKIB-40075: Hangul Benchmark Collections for Text Categorization Research

  • Kim, Jin-Suk;Choe, Ho-Seop;You, Beom-Jong;Seo, Jeong-Hyun;Lee, Suk-Hoon;Ra, Dong-Yul
    • Journal of Computing Science and Engineering
    • /
    • 제3권3호
    • /
    • pp.165-180
    • /
    • 2009
  • The HKIB, or Hankookilbo, test collections are two archives of Korean newswire stories manually categorized with semi-hierarchical or hierarchical category taxonomies. The base newswire stories were made available by the Hankook Ilbo (The Korea Daily) for research purposes. First, Chungnam National University and KISTI collaborated to manually tag 40,075 news stories with categories under a semi-hierarchical, balanced three-level classification scheme, where each news story has only one level-3 category (single-labeling). We refer to this original data set as the HKIB-40075 test collection. Yonsei University and KISTI then collaborated to select 20,000 newswire stories from the HKIB-40075 test collection, rearrange the classification scheme to be fully hierarchical but unbalanced, and assign one or more categories to each news story (multi-labeling). We refer to this modified data set as the HKIB-20000 test collection. We benchmark a k-NN categorization algorithm on both HKIB-20000 and HKIB-40075, illustrating properties of the collections, providing baseline results for future studies, and suggesting new directions for further research on the Korean text categorization problem.
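A minimal sketch of k-NN text categorization in the style benchmarked here is shown below, using a TF-IDF representation and cosine-distance k-NN; the tiny English toy corpus is only a placeholder for the Hankookilbo collections.

```python
# Hedged sketch of k-NN text categorization with TF-IDF and cosine distance.
# The toy corpus is a placeholder for the HKIB newswire collections.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_docs = ["stocks fell sharply on the exchange",
              "the central bank raised interest rates",
              "the team won the championship final",
              "the striker scored twice in the match"]
train_labels = ["economy", "economy", "sports", "sports"]

clf = make_pipeline(TfidfVectorizer(),
                    KNeighborsClassifier(n_neighbors=3, metric="cosine"))
clf.fit(train_docs, train_labels)
print(clf.predict(["the bank cut rates amid falling stocks"]))  # -> ['economy']
```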

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • 한국감성과학회:학술대회논문집
    • /
    • 한국감성과학회 2000년도 춘계 학술대회 및 국제 감성공학 심포지움 논문집 Proceeding of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium
    • /
    • pp.366-369
    • /
    • 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of the pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For the HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition result was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification. (A sketch of the MLB- and NN-style classifiers follows this entry.)

  • PDF
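Below is a rough sketch of the MLB- and NN-style pattern matchers, with synthetic vectors standing in for pitch/energy statistics; GaussianNB (diagonal covariances) only approximates the paper's maximum-likelihood Bayes classifier, and the HMM branch is omitted.

```python
# Hedged sketch of MLB- and NN-style emotion classifiers. Synthetic 6-class
# vectors stand in for real pitch/energy statistics; GaussianNB is only an
# approximation of maximum-likelihood Bayes, and the HMM is not shown.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# 4 features standing in for pitch mean/std and energy mean/std, 6 emotions.
X, y = make_classification(n_samples=600, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=6, n_clusters_per_class=1,
                           random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)

mlb = GaussianNB().fit(X_tr, y_tr)                        # Bayes-style classifier
nn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)  # nearest neighbor

print("MLB-style recognition rate:", mlb.score(X_te, y_te))
print("NN recognition rate       :", nn.score(X_te, y_te))
```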

Improving Weighted k Nearest Neighbor Classification Through The Analytic Hierarchy Process Aiding

  • Park, Cheol-Soo;Ingoo Han
    • 한국데이타베이스학회:학술대회논문집
    • /
    • 한국데이타베이스학회 1999년도 춘계공동학술대회: 지식경영과 지식공학
    • /
    • pp.187-194
    • /
    • 1999
  • Case-Based Reasoning (CBR) systems support ill-structured decision making. The success of a CBR system depends on its ability to retrieve the previous cases most relevant to the solution of a new case. One of the methodologies widely used in existing CBR systems to retrieve previous cases is the Nearest Neighbor (NN) matching function. The NN matching function is based on assumptions of the independence of attributes in previous cases and the availability of rules and procedures for matching. (omitted) (A sketch of feature-weighted k-NN follows this entry.)

  • PDF
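A minimal sketch of feature-weighted k-NN retrieval follows, where features are rescaled by a weight vector before the distance computation; the weight values are arbitrary placeholders for weights that AHP pairwise comparisons would supply.

```python
# Hedged sketch of feature-weighted k-NN: features are rescaled by a weight
# vector before distances are computed. The weights are hypothetical stand-ins
# for AHP-derived weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

weights = np.array([0.30, 0.25, 0.20, 0.10, 0.10, 0.05])  # hypothetical AHP weights

plain = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
weighted = KNeighborsClassifier(n_neighbors=5).fit(X_tr * weights, y_tr)

print("unweighted k-NN  :", plain.score(X_te, y_te))
print("AHP-weighted k-NN:", weighted.score(X_te * weights, y_te))
```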