• Title/Summary/Keyword: k-nearest neighbor method

313 search results

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, which has drawn the interest of many researchers and created a need for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification, the task of assigning a text document to one or more predefined categories or classes, remains challenging in modern data analysis. Available techniques include K-Nearest Neighbor, Naïve Bayes, Support Vector Machine, Decision Tree, and Artificial Neural Network classifiers. When dealing with huge amounts of text data, however, model performance and accuracy become a challenge: depending on the vocabulary of the corpus and the features created for classification, the performance of a text classification model can vary widely. Most previous attempts propose a new algorithm or modify an existing one, and such research has arguably reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we search for a better way to use the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by the classifiers built from them. In this study, we consider that data from different domains, i.e., heterogeneous data, carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build classifiers under the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different data representations at once and combining them into the same generalization. To utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier. We therefore propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. Three types of real-world data sources were used: news, Twitter, and blogs.
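
The confidence-based selection step described above can be illustrated with a minimal self-training loop: a classifier trained on labeled documents repeatedly absorbs only the most confidently pseudo-labeled documents from an unlabeled, heterogeneous pool. This is only a sketch of the general idea, not the authors' RSESLA (which builds multiple views and selects classification rules); the threshold and round count are illustrative assumptions.

```python
# Minimal self-training sketch: strengthen a text classifier with unlabeled,
# heterogeneous documents, keeping only high-confidence pseudo-labels.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def self_train(labeled_docs, labels, unlabeled_docs, threshold=0.9, rounds=3):
    vec = TfidfVectorizer()
    X_lab = vec.fit_transform(labeled_docs)
    y_lab = np.asarray(labels)
    X_unl = vec.transform(unlabeled_docs)
    clf = MultinomialNB()
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if X_unl.shape[0] == 0:
            break
        proba = clf.predict_proba(X_unl)
        keep = proba.max(axis=1) >= threshold       # confidence-based selection
        if not keep.any():
            break
        X_lab = vstack([X_lab, X_unl[keep]])        # absorb confident documents
        y_lab = np.concatenate([y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
        X_unl = X_unl[~keep]
    return vec, clf
```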

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues in the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have become popular in this research area because they do not require huge training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM, for example the selection of an appropriate kernel function with its parameters and proper feature subset selection. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of the SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose a proper instance subset from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously; we call this model ISVM (SVM with Instance selection). Experiments on stock market data are conducted using ISVM, in which the GA searches for optimal or near-optimal kernel parameter values and relevant instances for the SVM. The GA chromosomes therefore encode two sets of parameters: the codes for the kernel parameters and for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), for a total of 2218 trading days. We separate the data into training, test, and hold-out subsets of 1056, 581, and 581 samples, respectively. This study compares ISVM with several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM); in particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1056 original training instances are used to produce this result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models: ISVM outperforms ANN and 1-NN at the 1% statistical significance level and performs better than Logit, SVM, and PSVM at the 5% level.
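
As a rough illustration of jointly evolving an instance-selection mask and kernel parameters, the sketch below runs a simple genetic algorithm over chromosomes of the form [instance bits, log C, log gamma] and scores them by validation accuracy. It is not the paper's exact GA design; the search ranges, selection scheme, and mutation details are assumptions.

```python
# GA sketch of joint instance selection and kernel-parameter search for an SVM,
# in the spirit of the ISVM idea; population size and operators are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(chrom, X_tr, y_tr, X_val, y_val):
    n = X_tr.shape[0]
    mask, log_c, log_g = chrom[:n].astype(bool), chrom[n], chrom[n + 1]
    if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
        return 0.0                                   # degenerate selection: worst fitness
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    clf.fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)                   # validation accuracy as fitness

def ga_isvm(X_tr, y_tr, X_val, y_val, pop=20, gens=10, cx=0.7, mut=0.1):
    n = X_tr.shape[0]
    # chromosome = [instance mask (n bits), log10 C, log10 gamma]
    P = np.column_stack([rng.integers(0, 2, (pop, n)).astype(float),
                         rng.uniform(-1, 3, pop),
                         rng.uniform(-4, 0, pop)])
    for _ in range(gens):
        scores = np.array([fitness(c, X_tr, y_tr, X_val, y_val) for c in P])
        P = P[np.argsort(scores)[::-1]]              # sort population, best first
        children = P[: pop // 2].copy()              # breed from the better half
        for child in children:
            if rng.random() < cx:                    # one-point crossover on the mask genes
                cut = int(rng.integers(1, n))
                child[:cut] = P[int(rng.integers(0, pop // 2))][:cut]
            flips = rng.random(n) < mut              # bit-flip mutation on the mask
            child[:n][flips] = 1.0 - child[:n][flips]
            child[n:] += rng.normal(0.0, 0.1, 2)     # small Gaussian step on kernel genes
        P[pop // 2:] = children                      # replace the worse half
    best = P[0]
    return best[:n].astype(bool), 10.0 ** best[n], 10.0 ** best[n + 1]
```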

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and the predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good base classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of the base classifiers in the ensemble. This study proposes a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for computing the fitness value, in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
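
For context, a plain (non-optimized) KNN random subspace ensemble of the kind the paper takes as its starting point can be written with scikit-learn's bagging machinery, as sketched below; the GA that jointly tunes each member's k and feature subset, which is the paper's actual contribution, is not reproduced, and the subspace fraction and ensemble size are illustrative choices.

```python
# Baseline random-subspace KNN ensemble (no GA optimization).
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

knn_subspace = BaggingClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5),  # 'base_estimator' in scikit-learn < 1.2
    n_estimators=30,          # number of KNN base classifiers
    max_features=0.5,         # each member sees a random half of the features
    bootstrap=False,          # random subspace: sample features, not instances
    bootstrap_features=False, # draw each feature subset without replacement
    random_state=0,
)
# knn_subspace.fit(X_train, y_train); knn_subspace.predict(X_valid)
```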

A Study for 8 Constitution Medicine Diagnosis Expert System Development(2) (8체질 진단을 위한 전문가 시스템 개발에 관한 연구(2))

  • Shin, Yong-Sup;Park, Young-Bae;Park, Young-Jae;Kim, Min-Yong;Lee, Sang-Chul;Oh, Hwan-Sup
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.12 no.2
    • /
    • pp.107-126
    • /
    • 2008
  • Background: Few studies have examined methods of diagnosing the 8 constitutions other than pulse diagnosis in 8 Constitution Medicine. Objectives: This study develops an 8 Constitution Medicine diagnosis expert system using case-based reasoning (CBR). Methods: First, in the case-base construction step, we built a case base for the CBR implementation by gathering 925 cases from patients whose constitutions had been verified. Second, in the study-model establishment step, the reasoning process was divided into a fundamental-type CBR that uses the raw data values and expert-type I, II, and III CBR models that apply weights to the data values according to expert opinion, with the aim of developing a superior expert system. Third, in the system implementation step, we explained how constitutions are diagnosed through the nearest neighbor sampling process of the CBR technique and how the weights are assigned. Fourth, in the system evaluation step, we selected the superior CBR model by comparing the diagnosis rates of the fundamental-type system (GECBR) and the expert-type I, II, and III CBR systems (AVCBR, AACBR, AGCBR) that reflect expert opinion; GECBR and AGCBR were chosen as the superior study models. Through these four steps, we developed the 8 constitution diagnosis expert system. Results: 1. With GECBR, the fundamental-type reasoning system, the expected diagnosis rate of the expert system is 78.91%, with constitution-specific rates of Hepatonia 90.4%, Cholecystonia 63.0%, Pancreotonia 91.1%, Gastrotonia 0%, Pulmotonia 71.2%, Colonotonia 74.4%, Renotonia 37.5%, and Vesicotonia 67.1%. 2. With AGCBR, the expert type III reasoning system, the expected diagnosis rate is 77.51%, with constitution-specific rates of Hepatonia 93.4%, Cholecystonia 58.5%, Pancreotonia 91.1%, Gastrotonia 0%, Pulmotonia 73.1%, Colonotonia 64.4%, Renotonia 41.7%, and Vesicotonia 72.2%. Conclusion: The 8 constitution diagnosis expert system developed in this study may help diagnose the 8 constitutions and may serve as an objective assessment tool for 8 constitution diagnosis; further study of a CBR-based 8 Constitution Medicine diagnosis expert system is needed to supplement this work.
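
The nearest-neighbor retrieval step that such a CBR system relies on reduces to a weighted distance computation followed by a vote among the k most similar stored cases. The sketch below is a generic illustration under assumed numeric attributes; the attribute weights (expert types I, II, III) and the encoding of the 925 cases are placeholders, not taken from the paper.

```python
# Weighted nearest-neighbor case retrieval, the core step of a CBR diagnosis.
import numpy as np

def retrieve_constitution(query, case_base, labels, weights, k=5):
    """Return the majority constitution label among the k most similar cases."""
    labels = np.asarray(labels)
    diffs = case_base - query                                # (n_cases, n_attributes)
    dists = np.sqrt(((weights * diffs) ** 2).sum(axis=1))    # weighted Euclidean distance
    nearest = np.argsort(dists)[:k]                          # indices of the k closest cases
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[counts.argmax()]
```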


A Study for 8 Constitution Medicine Diagnosis Expert System Development (8체질의학을 위한 진단 전문가 시스템 개발 및 고찰)

  • Shin, Yong-Sup;Park, Young-Bae;Park, Young-Jae;Kim, Min-Yong;Oh, Hwan-Sup
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.12 no.1
    • /
    • pp.142-184
    • /
    • 2008
  • Background: Few studies have examined methods of diagnosing the 8 constitutions other than pulse diagnosis in 8 Constitution Medicine. Objectives: This study develops an 8 Constitution Medicine diagnosis expert system using case-based reasoning (CBR). Methods: First, in the case-base construction step, we built a case base for the CBR implementation by gathering 925 cases from patients whose constitutions had been verified. Second, in the study-model establishment step, the reasoning process was divided into a fundamental-type CBR that uses the raw data values and expert-type I, II, and III CBR models that apply weights to the data values according to expert opinion, with the aim of developing a superior expert system. Third, in the system implementation step, we explained how constitutions are diagnosed through the nearest neighbor sampling process of the CBR technique and how the weights are assigned. Fourth, in the system evaluation step, we selected the superior CBR model by comparing the diagnosis rates of the fundamental-type system (GECBR) and the expert-type I, II, and III CBR systems (AVCBR, AACBR, AGCBR) that reflect expert opinion; GECBR and AGCBR were chosen as the superior study models. Through these four steps, we developed the 8 constitution diagnosis expert system. Results: 1. With GECBR, the fundamental-type reasoning system, the expected diagnosis rate of the expert system is 78.91%, with constitution-specific rates of Hepatonia 90.4%, Cholecystonia 63.0%, Pancreotonia 91.1%, Gastrotonia 0%, Pulmotonia 71.2%, Colonotonia 74.4%, Renotonia 37.5%, and Vesicotonia 67.1%. 2. With AGCBR, the expert type III reasoning system, the expected diagnosis rate is 77.51%, with constitution-specific rates of Hepatonia 93.4%, Cholecystonia 58.5%, Pancreotonia 91.1%, Gastrotonia 0%, Pulmotonia 73.1%, Colonotonia 64.4%, Renotonia 41.7%, and Vesicotonia 72.2%. Conclusion: The 8 constitution diagnosis expert system developed in this study may help diagnose the 8 constitutions and may serve as an objective assessment tool for 8 constitution diagnosis; further study of a CBR-based 8 Constitution Medicine diagnosis expert system is needed to supplement this work.


Multi-target Data Association Filter Based on Order Statistics for Millimeter-wave Automotive Radar (밀리미터파 대역 차량용 레이더를 위한 순서통계 기법을 이용한 다중표적의 데이터 연관 필터)

  • Lee, Moon-Sik;Kim, Yong-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.5
    • /
    • pp.94-104
    • /
    • 2000
  • The accuracy and reliability of target tracking is a critical issue in the design of automotive collision warning radar. A significant problem in multi-target tracking (MTT) is target-to-measurement data association: if an incorrect measurement is associated with a target, the track may diverge and be prematurely terminated, or cause other tracks to diverge as well. Moreover, most methods for target-to-measurement data association tend to coalesce neighboring targets. Many algorithms have therefore been developed to solve this data association problem. In this paper, a new multi-target data association method based on order statistics is described. The new approaches, called order statistics probabilistic data association (OSPDA) and order statistics joint probabilistic data association (OSJPDA), are formulated using the association probabilities of the probabilistic data association (PDA) and joint probabilistic data association (JPDA) filters, respectively. Using decision logic, an optimal or near-optimal target-to-measurement data association is made. A computer simulation of the proposed method under heavy clutter is given, including a comparison with the nearest-neighbor (NN), PDA, and JPDA filters. Simulation results show that the OSPDA and OSJPDA filters outperform the PDA and JPDA filters in tracking accuracy by about 18% and 19%, respectively. In addition, the proposed method is implemented on a digital signal processing (DSP) board that can be interfaced with the engine control unit (ECU) of a car engine and with the driver through the controller area network (CAN).
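
For contrast with the probabilistic filters above, a simplified global nearest-neighbor association step, which commits each gated measurement to at most one track by minimizing squared distance, can be sketched as follows; the gate value and the use of plain squared Euclidean distance (rather than a Mahalanobis gate derived from the filter covariance) are simplifying assumptions.

```python
# Simplified global nearest-neighbor (GNN) track-to-measurement association.
import numpy as np
from scipy.optimize import linear_sum_assignment

def nn_associate(predicted, measurements, gate=9.0):
    """Assign each measurement to at most one track by minimum squared distance."""
    cost = ((predicted[:, None, :] - measurements[None, :, :]) ** 2).sum(axis=2)
    cost[cost > gate] = 1e6                     # gate out implausible pairings
    rows, cols = linear_sum_assignment(cost)    # optimal one-to-one assignment
    return [(t, m) for t, m in zip(rows, cols) if cost[t, m] < 1e6]
```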


An Implementation of Automatic Genre Classification System for Korean Traditional Music (한국 전통음악 (국악)에 대한 자동 장르 분류 시스템 구현)

  • Lee Kang-Kyu;Yoon Won-Jung;Park Kyu-Sik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.1
    • /
    • pp.29-37
    • /
    • 2005
  • This paper proposes an automatic genre classification system for Korean traditional music. The proposed system accepts a queried input piece and classifies it into one of six musical genres, Royal Shrine Music, Classical Chamber Music, Folk Song, Folk Music, Buddhist Music, or Shamanist Music, based on the music content. In general, content-based music genre classification consists of two stages: music feature vector extraction and pattern classification. For feature extraction, the system extracts 58-dimensional feature vectors including the spectral centroid, spectral rolloff, and spectral flux based on the STFT, together with coefficient-domain features such as LPC and MFCC; these features are then further optimized using the SFS method. For pattern (genre) classification, the k-NN, Gaussian, GMM, and SVM algorithms are considered. In addition, the proposed system adopts the MFC method to settle the uncertainty in system performance caused by different query patterns (or portions). The experimental results verify a successful genre classification performance of over 97% for both the k-NN and SVM classifiers; the SVM classifier, however, classifies almost three times faster than k-NN.
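
A minimal version of the two-stage pipeline (feature extraction followed by k-NN or SVM classification) might look like the sketch below, with librosa assumed as a stand-in for the original STFT/LPC/MFCC front end; the paper's 58-dimensional feature set, SFS feature selection, and MFC query handling are not reproduced.

```python
# Two-stage sketch: clip-level audio features, then k-NN or SVM classification.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def extract_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    # summarize frame-level features by their mean to get a clip-level vector
    return np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1), rolloff.mean(axis=1)])

# X = np.array([extract_features(p) for p in clip_paths]); y = genre_labels
# KNeighborsClassifier(n_neighbors=5).fit(X, y)  or  SVC(kernel="rbf").fit(X, y)
```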

A Parameter-Free Approach for Clustering and Outlier Detection in Image Databases (이미지 데이터베이스에서 매개변수를 필요로 하지 않는 클러스터링 및 아웃라이어 검출 방법)

  • Oh, Hyun-Kyo;Yoon, Seok-Ho;Kim, Sang-Wook
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.1
    • /
    • pp.80-91
    • /
    • 2010
  • As the volume of image data increases dramatically, good organization of the data is crucial for efficient image retrieval. Clustering is a typical way of organizing image data. However, traditional clustering methods have the difficulty of requiring the user to provide the number of clusters as a parameter before clustering. In this paper, we discuss an approach for clustering image data that does not require this parameter. The proposed approach is based on Cross-Association, which finds structure or patterns hidden in data using the relationships between individual objects. In order to apply Cross-Association to the clustering of image data, we first convert the image data into a graph. We then perform Cross-Association on the resulting graph and interpret the results from a clustering perspective. We also propose a hierarchical clustering method and an outlier detection method based on Cross-Association. Through a series of experiments, we verify the effectiveness of the proposed approach. Finally, we discuss how to find a good value of k for the k-nearest neighbor search and compare the clustering results obtained with symmetric and asymmetric ways of building the graph.
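
The first step, converting image data into a graph before running Cross-Association, can be approximated by a k-nearest-neighbor adjacency matrix over image feature vectors, as in the sketch below; the Cross-Association co-clustering itself and the paper's outlier detection are not shown, and k is only an illustrative value.

```python
# Build a binary k-NN graph over image feature vectors as input for co-clustering.
from sklearn.neighbors import kneighbors_graph

def knn_adjacency(image_features, k=10):
    """Binary adjacency matrix linking each image to its k nearest neighbors."""
    graph = kneighbors_graph(image_features, n_neighbors=k, mode="connectivity")
    return graph.maximum(graph.T)               # symmetrize the directed k-NN graph
```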

A Learning Agent for Automatic Bookmark Classification (북 마크 자동 분류를 위한 학습 에이전트)

  • Kim, In-Cheol;Cho, Soo-Sun
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.455-462
    • /
    • 2001
  • The World Wide Web has become one of the major services provided through the Internet. When searching the vast web space, users use bookmarking facilities to record the sites of interest encountered during navigation. A typical problem with bookmarking is that the list of bookmarks loses coherent organization when it becomes too lengthy, and thus ceases to function as a practical finding aid. In order to maintain the bookmark file in an efficient, organized manner, the user has to classify all bookmarks newly added to the file and update the folders. This paper introduces our learning agent, BClassifier, which automatically classifies bookmarks by analyzing the contents of the corresponding web documents. The chief source of training examples is the set of bookmarks the user has already classified into folders by subject. Additionally, web pages found under the top categories of the Yahoo site are collected and included as training examples to diversify the subject categories represented. Our agent employs the naive Bayesian learning method, a well-tested, probability-based categorization technique. The outcomes of our experiments are outlined and evaluated, including a comparison of the naive Bayesian learning method with other learning methods such as k-Nearest Neighbor and TFIDF.
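
A minimal version of the comparison described above, naive Bayes versus k-Nearest Neighbor over TF-IDF features, can be set up as below; the bookmark and Yahoo-category training corpus is replaced by placeholder variables, and the exact feature weighting used by BClassifier is not reproduced.

```python
# Two text classifiers over TF-IDF features for bookmarked-page categorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

naive_bayes = make_pipeline(TfidfVectorizer(), MultinomialNB())
knn = make_pipeline(TfidfVectorizer(),
                    KNeighborsClassifier(n_neighbors=5, metric="cosine"))
# naive_bayes.fit(page_texts, folder_labels); naive_bayes.predict(new_page_texts)
```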


Semantic Similarity Search using the Signature Tree (시그니처 트리를 사용한 의미적 유사성 검색 기법)

  • Kim, Ki-Sung;Im, Dong-Hyuk;Kim, Cheol-Han;Kim, Hyoung-Joo
    • Journal of KIISE:Databases
    • /
    • v.34 no.6
    • /
    • pp.546-553
    • /
    • 2007
  • As ontologies come into wide use, interest in semantic similarity search is also increasing. In this paper, we suggest a query evaluation scheme for the k-nearest neighbor query, which retrieves the k objects most similar to the query object. We use the best-match method to calculate the semantic similarity between objects and use the signature tree to index the annotation information of objects in the database. The signature tree is usually used for set similarity search. When the signature tree is used for similarity search, we must predict an upper bound of similarity for a node, i.e., the highest similarity value that can be found when we traverse into that node. We therefore suggest a prediction function for the best-match similarity function and prove its correctness. We also modify the original signature tree structure so that identical signatures are not stored redundantly; this improved structure not only reduces the size of the signature tree but also increases the efficiency of query evaluation. We use the Gene Ontology (GO), which provides large ontologies and a large amount of annotation data, for our experiments. Using GO, we show that the proposed method improves query efficiency, and we present several experimental results varying the page size and using several node-splitting methods.
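
The pruning role of the similarity upper bound can be illustrated with a toy branch-and-bound k-nearest-neighbor search: a subtree is skipped whenever its upper bound cannot beat the current k-th best similarity. The node interface and the two callback functions below are hypothetical placeholders; the paper's signature encoding and best-match bound are abstracted behind them.

```python
# Toy branch-and-bound k-NN search over a tree index with a similarity upper bound.
import heapq

def knn_search(root, query, k, similarity, upper_bound):
    """similarity(query, obj) scores leaf objects; upper_bound(query, node) must
    never underestimate the best similarity reachable in the node's subtree.
    Nodes are assumed to expose .is_leaf, .objects, and .children."""
    heap = []                                   # min-heap of the k best scores so far
    stack = [root]
    while stack:
        node = stack.pop()
        if len(heap) == k and upper_bound(query, node) <= heap[0][0]:
            continue                            # nothing below can beat the current k-th best
        if node.is_leaf:
            for obj in node.objects:
                s = similarity(query, obj)
                if len(heap) < k:
                    heapq.heappush(heap, (s, id(obj), obj))
                elif s > heap[0][0]:
                    heapq.heapreplace(heap, (s, id(obj), obj))
        else:
            stack.extend(node.children)
    return [obj for _, _, obj in sorted(heap, reverse=True)]
```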