• Title/Summary/Keyword: K-Nearest Neighbor Algorithm

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop;Chang Jeong-Ho
    • The KIPS Transactions:PartB
    • /
    • v.11B no.6
    • /
    • pp.749-758
    • /
    • 2004
  • In this paper, we propose a new method that uses only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). Both can represent complex semantic structures of given contexts, such as text passages. We construct linguistic semantic knowledge with these techniques and use it for target word selection in English-Korean machine translation, exploiting grammatical relationships stored in a dictionary. To resolve the data sparseness problem in target word selection, we use the k-nearest neighbor learning algorithm and estimate the distance between instances with these models. In the experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation calculations, we show how the accuracy relates to two important factors: the dimensionality of the latent space and the k value of k-NN learning.
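
A minimal, hypothetical sketch of the core mechanism described above, kNN classification with distances measured in a truncated-SVD (LSA) space; the toy contexts, the Korean target words for English "bank", and all parameter values are our own stand-ins, not the paper's data:

```python
# Hypothetical sketch: k-NN target word selection in an LSA space.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy English contexts standing in for the TREC/AP training text.
contexts = ["bank river water flow", "bank money loan interest",
            "river bank erosion soil", "loan bank credit account"]
labels = ["강둑", "은행", "강둑", "은행"]   # candidate Korean target words

vec = TfidfVectorizer()
lsa = TruncatedSVD(n_components=2)          # latent semantic space
X = lsa.fit_transform(vec.fit_transform(contexts))

# Distances between instances are estimated in the latent space; cosine
# distance stands in for semantic relatedness between contexts.
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(X, labels)
query = lsa.transform(vec.transform(["bank water stream"]))
print(knn.predict(query))                   # expected: 강둑 (riverbank)
```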

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and created demand for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. Various techniques are available for this task, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge, and performance can vary with the type of words used in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be exploited in the classification process. Machine learning algorithms are normally built on the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different data representations at once and combining them in the same generalization. To utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. Since unlabeled data may degrade the classifier's performance, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making. Three different types of real-world data sources were used: news, Twitter, and blogs.
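
A heavily simplified sketch of the multi-view selection idea, not RSESLA itself; two classifier types act as views, and an unlabeled document is kept for pseudo-labeling only when the views agree confidently (documents, threshold, and names below are all ours):

```python
# Hedged sketch of confidence-based selection across multiple views.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

labeled = ["stock market fell sharply", "the team won the final match"]
y = np.array([0, 1])                        # 0 = finance, 1 = sports
unlabeled = ["shares rallied today", "the striker scored twice"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
Xl, Xu = vec.transform(labeled), vec.transform(unlabeled)

# Two model types act as two "views" of the same documents.
views = [LogisticRegression().fit(Xl, y), MultinomialNB().fit(Xl, y)]
probs = np.stack([m.predict_proba(Xu) for m in views])  # (views, docs, classes)

# Keep a pseudo-labeled document only if all views agree and are confident.
votes = probs.argmax(axis=2)
agree = (votes == votes[0]).all(axis=0)
confident = probs.mean(axis=0).max(axis=1) > 0.6
print("selected:", agree & confident, "pseudo labels:", votes[0])
```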

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.6
    • /
    • pp.419-431
    • /
    • 2021
  • In Seoul, it has been confirmed that with a changing climate the duration of rainfall events is shortening while the frequency and intensity of heavy rains are increasing. In addition, owing to high population density and urbanization in most areas, floods frequently occur in flood-prone districts as the impervious area grows. The city is pursuing various structural and non-structural measures to resolve flood-prone areas, and a disaster prevention performance target has been set that considers the impact of climate change on future precipitation; this study was conducted to reduce overall long-term flood damage in Seoul. We used 29 GCMs under the RCP4.5 and RCP8.5 scenarios for spatial and temporal disaggregation and considered three research periods: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, daily GCM data were processed through quantile mapping against the rainfall record of the Seoul station operated by the Korea Meteorological Administration; for temporal downscaling, the daily data were disaggregated into hourly data through k-nearest neighbor resampling and a nonparametric temporal disaggregation technique using genetic algorithms. Temporal downscaling produced 100 detailed scenarios for each GCM scenario, IDF curves were computed from the resulting total of 2,900 detailed scenarios, and their average was used to quantify the change in future extreme rainfall. As a result, the 100-year frequency, 1-hour duration probability rainfall was found to increase by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under RCP8.5. Based on these results, the design rainfall with which Seoul can prepare for future climate change was estimated, and it can be used to establish purpose-specific water-related disaster prevention policies.
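
A hedged sketch of the empirical quantile-mapping step named above (the k-nearest neighbor temporal disaggregation is omitted); the gamma-distributed series standing in for the Seoul station record and the raw GCM output are synthetic:

```python
# Illustrative empirical quantile mapping: GCM daily rainfall is corrected
# so that its distribution matches the observed one.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.8, scale=12.0, size=3650)  # stand-in: station daily rain
gcm = rng.gamma(shape=0.6, scale=9.0, size=3650)   # stand-in: raw GCM daily rain

def quantile_map(x, model_ref, obs_ref):
    """Map a value from the model distribution onto the observed one."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)  # empirical CDF
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))            # inverse obs CDF

corrected = [quantile_map(v, gcm, obs) for v in gcm[:5]]
print(np.round(corrected, 2))
```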

Estimation of River Flow Data Using Machine Learning (머신러닝 기법을 이용한 유량 자료 생산 방법)

  • Kang, Noel;Lee, Ji Hun;Lee, Jung Hoon;Lee, Chungdae
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.261-261
    • /
    • 2020
  • Securing the continuous discharge data that underpin water management requires the development of a highly accurate stage-discharge rating curve. The rating curve is the basis for the design of all hydraulic facilities and is also important for responding to water-related disasters such as floods and droughts. In general, however, discharge measurement is costly and time-consuming, and as controls such as vegetation growth and cross-section change vary, nonlinear patterns such as segment separation and period separation appear, making the data difficult to interpret. Korean rivers in particular undergo diverse natural and artificial environmental changes, requiring detailed analysis by station and period. Machine learning refers to the process by which a computer learns from data on its own, builds a model, and improves its performance. Whereas a conventional rating curve derives regression parameters from data types and periods chosen by the developer's judgment, machine learning can learn by itself from all valid data, discover correlations among the data, build a model, and continuously improve its performance. Provided that sufficient hydrological data are available, machine learning has the advantage of continuously improving the accuracy of discharge estimation while reflecting a complex and variable water-resources environment. In this study, models for estimating discharge were built using representative machine learning algorithms and their performances were compared and analyzed. The study site is the Geoungyo(거운교) station in the Han River basin, which maintains a stable flow, and the data used are time, stage, discharge, and water-surface width from 2010 to 2018. The Python-based machine learning library scikit-learn (sklearn) was used, applying random forest regression, decision tree, KNN (K-Nearest Neighbor), and XGBoost algorithms. Training data were combined by input-variable type into six sets to build the models, which were then evaluated on the test data with the RMSE (Root Mean Square Error). Depending on the combination of model and input data, a rather wide range of values, from 3.67 to 171.46, was obtained. The best case combined three inputs, stage, year, and water-surface width, in the random forest regression model. For comparison, estimating discharge for the same test data using the stage-specific rating curves of the Geoungyo station in the Annual Hydrological Report on Korea (2018) yielded an RMSE of 3.76, confirming that machine learning can perform at a level comparable to finely segmented rating curves. As a basic study examining the applicability of machine learning techniques to producing high-quality discharge data from existing hydrological records, this work should help make hydrological measurement and rating-curve derivation in Korea more efficient. In the future, input data such as meteorological records and water intake volumes should be added to identify the various variables affecting the water-resources environment and controls, and further research on more sophisticated models such as deep learning should be carried out.
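
The abstract names scikit-learn, random forest regression, and KNN explicitly and evaluates with RMSE, so a minimal sketch of that workflow might look as follows; the synthetic stage/year/width/discharge arrays are stand-ins for the Geoungyo record, not the actual data:

```python
# Hedged sketch of the stage-discharge estimation workflow with sklearn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)
stage = rng.uniform(0.5, 5.0, 1000)                   # water level (m)
width = 20 + 3 * stage + rng.normal(0, 0.5, 1000)     # water-surface width (m)
year = rng.integers(2010, 2019, 1000).astype(float)
flow = 15 * stage ** 1.8 + rng.normal(0, 5, 1000)     # discharge (m^3/s), synthetic

X = np.column_stack([stage, year, width])             # the best-performing input set
X_tr, X_te, y_tr, y_te = train_test_split(X, flow, random_state=0)

for model in (RandomForestRegressor(random_state=0),
              KNeighborsRegressor(n_neighbors=5)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, "RMSE:",
          round(mean_squared_error(y_te, pred) ** 0.5, 2))
```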

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models, and ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble: each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and the members' predictions are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but very sensitive to changes in the feature space, which makes KNN a good classifier for the random subspace method, and the KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble model, yet few studies have focused on optimizing them. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to optimize the ensemble and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable; of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions, one for training the model and the other for avoiding overfitting; the prediction accuracy against the latter was used as the fitness value. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
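
A sketch of a KNN random-subspace ensemble in the spirit of the base model described above; the genetic-algorithm search is omitted, and both the k values and the feature subsets are drawn at random purely for illustration:

```python
# Illustrative KNN random-subspace ensemble with majority voting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

rng = np.random.default_rng(0)
members = []
for _ in range(15):
    feats = rng.choice(24, size=8, replace=False)     # random feature subspace
    k = int(rng.integers(1, 10))                      # k would be GA-optimized
    members.append((feats,
                    KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, feats], y_tr)))

votes = np.stack([m.predict(X_te[:, f]) for f, m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)         # majority vote
print("ensemble accuracy:", (pred == y_te).mean())
```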

Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.464-472
    • /
    • 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered-subsets EM), and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear, and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms, and the reconstructed images were compared in terms of a percent error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to the parallel geometry when accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented and provided significantly improved reconstructions.
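
For reference, the one-step-late rule these MAP-EM reconstructions rest on is Green's OSL update; in the notation below, a_{ij} is the system-matrix element linking pixel j to projection bin i, y_i the measured counts, U the prior energy (membrane or thin-plate here), and beta the regularization weight:

```latex
% Green's one-step-late (OSL) MAP-EM update, with the prior gradient
% evaluated at the current estimate \lambda^{(n)}:
\lambda_j^{(n+1)} =
\frac{\lambda_j^{(n)}}
     {\sum_i a_{ij} + \beta \left.\dfrac{\partial U(\lambda)}{\partial \lambda_j}\right|_{\lambda=\lambda^{(n)}}}
\sum_i a_{ij} \frac{y_i}{\sum_k a_{ik}\,\lambda_k^{(n)}}
```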

Multi-target Data Association Filter Based on Order Statistics for Millimeter-wave Automotive Radar (밀리미터파 대역 차량용 레이더를 위한 순서통계 기법을 이용한 다중표적의 데이터 연관 필터)

  • Lee, Moon-Sik;Kim, Yong-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.5
    • /
    • pp.94-104
    • /
    • 2000
  • The accuracy and reliability of target tracking is a very critical issue in the design of automotive collision warning radar. A significant problem in multi-target tracking (MTT) is target-to-measurement data association: if an incorrect measurement is associated with a target, the track may diverge and be prematurely terminated, or it may cause other tracks to diverge. Most methods for target-to-measurement data association tend to coalesce neighboring targets, so many algorithms have been developed to solve this association problem. In this paper, a new multi-target data association method based on order statistics is described. The new approaches, called order statistics probabilistic data association (OSPDA) and order statistics joint probabilistic data association (OSJPDA), are formulated using the association probabilities of the probabilistic data association (PDA) and joint probabilistic data association (JPDA) filters, respectively. Using decision logic, an optimal or near-optimal target-to-measurement data association is made. A computer simulation of the proposed method under heavy clutter is given, including a comparison with the nearest-neighbor (NN), PDA, and JPDA filters. Simulation results show that the tracking accuracy of the OSPDA and OSJPDA filters is superior to that of the PDA and JPDA filters by about 18% and 19%, respectively. In addition, the proposed method was implemented on a digital signal processing (DSP) board that can be interfaced with the engine control unit (ECU) of a car engine and with the driver through the controller area network (CAN).
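
For orientation, the nearest-neighbor baseline the proposed filters are compared against can be written as a global assignment over a distance matrix; the gate value and coordinates below are invented:

```python
# Minimal global nearest-neighbor (NN) data association via the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])           # predicted target positions
measurements = np.array([[0.5, -0.2], [9.6, 10.3], [30.0, 30.0]])

cost = cdist(tracks, measurements)                       # Euclidean distances
rows, cols = linear_sum_assignment(cost)                 # optimal assignment
gate = 2.0                                               # reject implausible pairings
pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
print(pairs)                                             # [(0, 0), (1, 1)]
```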

Effect of Location Error on the Estimation of Aboveground Biomass Carbon Stock (지상부 바이오매스 탄소저장량의 추정에 위치 오차가 미치는 영향)

  • Kim, Sang-Pil;Heo, Joon;Jung, Jae-Hoon;Yoo, Su-Hong;Kim, Kyoung-Min
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.29 no.2
    • /
    • pp.133-139
    • /
    • 2011
  • Estimation of biomass carbon stock is important research for estimating the public benefits of forests. Previous studies on biomass carbon stock estimation have limitations stemming from the deterministic models they use; the most serious problem is that deterministic models provide no account of the effects of errors. In this study, the effects of location errors on the estimation of the biomass carbon stock of the Danyang area were analyzed using the Monte Carlo simulation method. More specifically, the k-Nearest Neighbor (kNN) algorithm was used for the basic estimation. Random and systematic errors were added to the locations of the sample plots, and their effect on estimation error was analyzed by tracking changes in RMSE. In the random error simulation, the mean RMSE of the estimates increased from 24.8 tonC/ha to 26 tonC/ha when 0.5 to 1 pixel location errors were added; however, the mean RMSE converged once the added location error exceeded 0.8 pixels, because of the characteristics of the study site. In the systematic error simulation, no significant trend in RMSE was detected in the test data.
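
A hedged sketch of the Monte Carlo procedure: random location errors of increasing magnitude are added to synthetic plot coordinates and the kNN estimation RMSE is recomputed (data, magnitudes, and the kNN setup below are illustrative, not the Danyang values):

```python
# Illustrative location-error simulation for kNN biomass estimation.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, (200, 2))                    # plot locations (pixels)
carbon = 25 + 0.1 * coords[:, 0] + rng.normal(0, 2, 200)  # tonC/ha, synthetic

for err in (0.0, 0.5, 1.0):                               # location error (pixels)
    noisy = coords + rng.normal(0, err, coords.shape)
    model = KNeighborsRegressor(n_neighbors=5).fit(noisy[:150], carbon[:150])
    rmse = mean_squared_error(carbon[150:], model.predict(noisy[150:])) ** 0.5
    print(f"error {err:.1f} px -> RMSE {rmse:.2f} tonC/ha")
```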

Optimizing Similarity Threshold and Coverage of CBR (사례기반추론의 유사 임계치 및 커버리지 최적화)

  • Ahn, Hyunchul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.535-542
    • /
    • 2013
  • Since case-based reasoning (CBR) has many advantages, it has been used to support decision making in various areas, including medical checkups, production planning, and customer classification. However, several factors must be set heuristically when designing effective CBR systems. Among them, this study addresses the selection of appropriate neighbors in the case retrieval step. As the criterion for selecting appropriate neighbors, conventional studies have used a preset number of neighbors to combine (i.e., the k of k-nearest neighbor) or a relative proportion of the maximum similarity. This study instead proposes an absolute similarity threshold, varying from 0 to 1, as the criterion for selecting the neighbors to combine. In this case, too small a similarity threshold may make the model rarely produce a solution. To avoid this, we adopt coverage, the ratio of the cases for which solutions are produced to the total number of training cases, and set it as a constraint when optimizing the similarity threshold. To validate the usefulness of the proposed model, we applied it to a real-world target marketing case of an online shopping mall in Korea and found that it can significantly improve the performance of CBR.
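
A minimal sketch of the proposed retrieval criterion on toy data of our own: cases are retrieved only when similarity exceeds an absolute threshold, and coverage is the fraction of queries for which at least one case qualifies:

```python
# Illustrative absolute-similarity retrieval with a coverage measure.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
case_base = rng.random((100, 5))                 # stored cases (toy features)
queries = rng.random((30, 5))                    # new problems to solve

def coverage(threshold):
    sims = cosine_similarity(queries, case_base)     # similarity in [0, 1] here
    solved = (sims >= threshold).any(axis=1)         # a solution is produced
    return solved.mean()

for t in (0.90, 0.99, 0.999):                    # the threshold to be optimized
    print(f"threshold {t}: coverage {coverage(t):.2f}")
```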

A Study on Data Clustering of Light Buoy Using DBSCAN(I) (DBSCAN을 이용한 등부표 위치 데이터 Clustering 연구(I))

  • Gwang-Young Choi;So-Ra Kim;Sang-Won Park;Chae-Uk Song
    • Journal of Navigation and Port Research
    • /
    • v.47 no.4
    • /
    • pp.231-238
    • /
    • 2023
  • The position of a light buoy constantly shifts under external forces such as tides and wind, and it can be checked through the AIS (Automatic Identification System) or the RTU (Remote Terminal Unit) for AtoN. An analysis of the position data of light buoys for the last five years (2017-2021) showed an average position error rate of 15.4%. Detecting position error data and obtaining refined position data is necessary both to prevent navigation safety accidents and for management. This study aimed to detect position error data and obtain refined position data by applying DBSCAN clustering to position data obtained through the AIS or RTU for AtoN. For this purpose, 21 position records of the Gunsan Port No. 1 light buoy, an RTU-equipped buoy in the western waters where position errors are most frequent, were clustered with DBSCAN using a Python library. For minPts, the value commonly used for two-dimensional data was applied, and epsilon was calculated with the k-NN (k-nearest neighbor) algorithm. As a result of DBSCAN clustering, position error data that did not satisfy minPts and epsilon were detected, and refined position data were acquired. This study can serve as basic data for obtaining reliable position data from light buoys equipped with AIS or RTU for AtoN and is expected to be of great help in preventing navigation safety accidents.
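
A minimal sketch of the described procedure under synthetic buoy fixes: epsilon is read off the sorted k-distance curve from a k-NN search and passed to DBSCAN, with minPts = 4 as a common default for two-dimensional data (the 95th-percentile elbow read is our simplification):

```python
# Illustrative DBSCAN on buoy position fixes, with epsilon from the k-distance curve.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
positions = rng.normal([126.50, 35.90], 0.001, (100, 2))  # synthetic buoy fixes
positions = np.vstack([positions, [[126.52, 35.93]]])     # one injected position error

min_pts = 4                                     # common choice for 2-D data
dist, _ = NearestNeighbors(n_neighbors=min_pts).fit(positions).kneighbors(positions)
k_dist = np.sort(dist[:, -1])                   # sorted k-distance curve
eps = k_dist[int(0.95 * len(k_dist))]           # heuristic elbow read

labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(positions)
print("position errors flagged as noise:", int((labels == -1).sum()))
```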