• Title/Summary/Keyword: Nearest Neighbor (최근접 이웃)

Search Results: 187

A Comparison of Ensemble Methods Combining Resampling Techniques for Class Imbalanced Data (데이터 전처리와 앙상블 기법을 통한 불균형 데이터의 분류모형 비교 연구)

  • Lee, Hee-Jae;Lee, Sungim
    • The Korean Journal of Applied Statistics / v.27 no.3 / pp.357-371 / 2014
  • There are many studies on imbalanced data, in which the class distribution is highly skewed. To address the problem of imbalanced data, previous studies apply resampling techniques that correct the skewness of the class distribution in each sampled subset through under-sampling, over-sampling, or hybrid sampling such as SMOTE. Ensemble methods have also been used to alleviate the class imbalance problem. In this paper, we compare around a dozen algorithms that combine ensemble methods with resampling techniques, based on simulated data sets generated by the Backbone model, which can control the imbalance rate. Results on various real imbalanced data sets are also presented to compare the effectiveness of the algorithms. As a result, we strongly recommend combining resampling techniques with ensemble methods for imbalanced data in which the proportion of the minority class is less than 10%. We also find that each ensemble method has a well-matched sampling technique: algorithms that combine bagging or random forest ensembles with random under-sampling tend to perform well, whereas the boosting ensemble performs better with over-sampling. Ensemble methods combined with SMOTE perform well in most situations.
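
As a rough sketch of the pairing the abstract recommends, the snippet below combines a SMOTE-style over-sampler, built here from scratch on scikit-learn's NearestNeighbors, with a random forest ensemble. The data, class ratio, and all parameters are illustrative placeholders, not the paper's Backbone simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_new, k=5, seed=0):
    """Create synthetic minority samples by interpolating each chosen
    point toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)                 # idx[:, 0] is the point itself
    base = rng.integers(0, len(X_min), n_new)     # minority points to start from
    nbr = idx[base, rng.integers(1, k + 1, n_new)]
    gap = rng.random((n_new, 1))
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

# Synthetic imbalanced data (placeholder): roughly 5% minority class.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.05).astype(int)
X[y == 1] += 1.5                                  # separate the minority class a bit

X_min = X[y == 1]
X_new = smote_like(X_min, n_new=len(X) - 2 * len(X_min))  # balance the classes
X_bal = np.vstack([X, X_new])
y_bal = np.concatenate([y, np.ones(len(X_new), dtype=int)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bal, y_bal)
```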

A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics (실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구)

  • Ku, Jin-Hee
    • Journal of Convergence for Information Technology / v.8 no.1 / pp.201-206 / 2018
  • Recently, as technologies for realizing artificial intelligence have become more common, machine learning is widely used. Machine learning provides insight by collecting large amounts of data, batch processing it, and taking a final action, but the effects of that work are not immediately integrated into the learning process. In this paper, we propose an adaptive learning model to improve the performance of real-time stream analysis, which has become a major business issue. Adaptive learning generates the ensemble by adapting to the complexity of the data set, and the algorithm uses only the data needed to determine the optimal data points to sample. In experiments on six standard data sets, the adaptive learning model outperformed a simple machine learning model for classification in both learning time and accuracy. In particular, the support vector machine showed excellent performance at the end of all the ensembles. Adaptive learning is expected to be applicable to a wide range of problems that need to be adaptively updated as various parameters change over time.
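
The paper's adaptive ensemble is not specified in the abstract; as a hedged illustration of the underlying idea of updating a model incrementally as a stream arrives, the sketch below uses scikit-learn's partial_fit API on a synthetic drifting stream. The batch size, drift rate, and learner are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

# Simulate a stream arriving in mini-batches with slow concept drift;
# partial_fit updates the model in place instead of retraining from scratch.
for step in range(100):
    X_batch = rng.normal(size=(32, 8))
    drift = 0.02 * step
    y_batch = (X_batch[:, 0] + drift * X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.score(X_batch, y_batch))  # accuracy on the latest batch
```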

Collaborative Filtering for Recommendation based on Neural Network (추천을 위한 신경망 기반 협력적 여과)

  • 김은주;류정우;김명원
    • Journal of KIISE:Software and Applications / v.31 no.4 / pp.457-466 / 2004
  • Recommendation offers information that fits a user's interests and tastes in order to provide better service and reduce information overload. It has recently drawn attention from Internet users and information providers. Collaborative filtering is one of the most widely used methods for recommendation: it recommends an item to a user based on the reference users' preferences for the target item, or on the target user's preferences for the reference items. In this paper, we propose a neural network based collaborative filtering method. Our method builds a model by learning the correlation between users or items with a multilayer perceptron. We also investigate integrating diverse information to solve the sparsity problem and selecting the reference users or items based on similarity to improve performance. Finally, we demonstrate that our method outperforms existing methods through experiments on the EachMovie data.
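
A minimal sketch of the general scheme the abstract describes: predicting a target user's ratings from similarity-selected reference users with a multilayer perceptron. The rating matrix, similarity measure, reference count, and network size are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
R = rng.integers(1, 6, size=(200, 50)).astype(float)    # toy user-item rating matrix

# Pick reference users by Pearson similarity to the target user, then train
# an MLP to map their ratings of an item to the target user's rating of it.
target, k = 0, 10
sims = np.array([np.corrcoef(R[target], R[u])[0, 1] for u in range(len(R))])
sims[target] = -np.inf                                  # exclude the target itself
refs = np.argsort(sims)[-k:]

train_items, test_items = np.arange(40), np.arange(40, 50)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(R[refs][:, train_items].T, R[target, train_items])
pred = mlp.predict(R[refs][:, test_items].T)            # ratings for unseen items
```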

Stock prediction analysis through artificial intelligence using big data (빅데이터를 활용한 인공지능 주식 예측 분석)

  • Choi, Hun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1435-1440 / 2021
  • With the advent of the low interest rate era, many investors are flocking to the stock market. In the past, people invested in stocks labor-intensively, through company analysis and their own investment techniques. In recent years, however, stock investment using artificial intelligence and data has become widespread. The success rate of stock prediction through artificial intelligence is currently not high, so various artificial intelligence models attempt to increase the stock prediction rate. In this study, we look at various artificial intelligence models and examine the pros, cons, and prediction rates of each model. The stock prediction programs investigated use artificial neural networks (ANN), deep neural networks (DNN), the k-nearest neighbor algorithm (k-NN), convolutional neural networks (CNN), recurrent neural networks (RNN), and LSTMs.
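
Since k-NN is the simplest of the surveyed models, here is a hedged sketch of a k-NN up/down classifier over lagged log returns. The price series is synthetic and the window length and neighbor count are arbitrary choices, not values from the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 600)))  # synthetic price path
returns = np.diff(np.log(prices))

# Features: the last 5 daily log returns; label: next day's direction.
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

split = 500
knn = KNeighborsClassifier(n_neighbors=15).fit(X[:split], y[:split])
print(knn.score(X[split:], y[split:]))  # near chance on a pure random walk, as expected
```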

Development of Machine Learning based Flood Depth and Location Prediction Model (머신러닝을 이용한 침수 깊이와 위치예측 모델 개발)

  • Ji-Wook Kang;Jong-Hyeok Park;Soo-Hee Han;Kyung-Jun Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.91-98 / 2023
  • With flood damage increasing due to frequent localized heavy rains, flood prediction research is being conducted to prevent flooding damage in advance. In this paper, we present a machine learning scheme for developing a flood depth and location prediction model using real-time rainfall data. The scheme proposes a dataset configuration method that uses the rainfall data as input, can robustly represent various rainfall distribution patterns, and trains the model with less memory. The data come in two forms: valid total data and valid local data. Valid total data, which have a significant effect on flooding, predicted the flooding location well but tended to produce different values for specific rainfall patterns. Valid local data, which indicate the areas where flooding has a partial effect, were learned well for the fixed-point method but did not accurately indicate the flooding location for the arbitrary-point method. Through this study, it is expected that much damage can be prevented by predicting the depth and location of flooding in real time.
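
The paper's valid total/valid local dataset construction is specific to its flood maps; the sketch below only illustrates the general mapping it learns (rainfall input to per-cell flood depth), using a multi-output random forest on placeholder data. Every shape, threshold, and name here is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_events, n_steps, n_cells = 300, 24, 25     # 24 hourly rainfall values, a 5x5 grid

rain = rng.gamma(2.0, 2.0, size=(n_events, n_steps))              # rainfall hyetographs
weights = rng.random((n_steps, n_cells))
depth = rain @ weights + rng.normal(0, 0.1, (n_events, n_cells))  # toy flood depths

# scikit-learn random forests support multi-output regression natively,
# so one model predicts the flood depth at every grid cell at once.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(rain[:250], depth[:250])
pred = model.predict(rain[250:])             # (50, 25) predicted depth maps
flooded = pred > 0.5                         # threshold depths to flag flooded cells
```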

Study on Soil Moisture Predictability using Machine Learning Technique (머신러닝 기법을 활용한 토양수분 예측 가능성 연구)

  • Jo, Bongjun;Choi, Wanmin;Kim, Youngdae;Kim, Kisung;Kim, Jonggun
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.248-248 / 2020
  • Soil moisture is one of the key variables closely related to water balance components such as evapotranspiration, runoff, and infiltration. Soil moisture varies spatially with soil characteristics, land use, and weather conditions, and in particular shows temporal variability driven by the weather. Existing soil moisture measurement relies on laboratory experiments with collected soil samples or on field surveys with measurement equipment, both of which have temporal and economic limitations, while remote sensing covers spatially wide areas but has low temporal resolution. Prediction of soil moisture through modeling also requires expert knowledge and the construction of complex input data. Recently, machine learning techniques have been widely used to derive desired outputs by learning from large amounts of data. This study therefore analyzes the predictability of soil moisture through iterative machine learning using various weather factors related to soil moisture (precipitation, wind speed, humidity, etc.). To this end, machine learning was applied to the Cheongmicheon and Seolmacheon watersheds, where observed soil moisture data are well established in space and time. Hydrological data for 2008-2012 were obtained for the two sites, and weather data were obtained from the Open Weather Data Portal and WAMIS. The soil moisture and weather data were learned with machine learning algorithms, and soil moisture was predicted from the 2012 weather data. The machine learning techniques used were decision tree, neural network (multilayer perceptron, MLP), K-nearest neighbors (KNN), support vector machine (SVM), random forest, and gradient boosting. A heat map was used to analyze the correlation between soil moisture and the weather factors. The heat map analysis showed that, among the various weather data, precipitation and relative humidity have the greatest influence on the temporal variation of soil moisture. In the application of the machine learning techniques based on the various weather factors, all techniques except the neural network (MLP) produced results generally similar to the observations in both regions, and the comparison graphs also showed similar trends between observed and predicted values. Therefore, machine learning based prediction of the temporal variation of soil moisture appears feasible using correlated past weather data.

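A hedged sketch mirroring the six-model comparison described above on stand-in data; the Cheongmicheon and Seolmacheon records are not reproduced here, so the features, target, and train/test split are synthetic assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(5)
# Stand-in weather features: precipitation, wind speed, humidity, temperature.
X = rng.random((1000, 4))
soil_moisture = 0.6 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 1000)

models = {
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=10),
    "SVM": SVR(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X[:800], soil_moisture[:800])
    print(name, round(m.score(X[800:], soil_moisture[800:]), 3))  # held-out R^2
```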

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.189-198 / 2024
  • Lipreading is an important part of speech recognition, and several studies have been conducted to improve the lipreading performance of lipreading systems for speech recognition. Recent studies have improved recognition performance by modifying the model architecture of the lipreading system. Unlike previous research, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set other regions, such as the chin and cheeks, as regions of interest along with the lip region, the existing region of interest of lipreading systems, and compare the recognition rate of each region of interest to propose the best performing one. In addition, assuming that differences in normalization results caused by the interpolation method used when normalizing the size of the region of interest affect recognition performance, we interpolate the same region of interest using nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation, and compare the recognition rate of each interpolation method to propose the best performing one. Each region of interest was detected by training an object detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the combined features into a low-dimensional space by dimensionality reduction. The recognition rate was evaluated by comparing the distance between the generated dynamic time warping templates and the data mapped to the low-dimensional space. In the comparison of regions of interest, the region of interest containing only the lip region showed an average recognition rate of 97.36%, which is 3.44% higher than the average recognition rate of 93.92% in the previous study; in the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65% higher than nearest neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
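
A minimal sketch of the interpolation comparison in the abstract using OpenCV's resize; the ROI is a random placeholder, the target size is an assumption, and the paper's object detection and dynamic time warping pipeline are out of scope.

```python
import numpy as np
import cv2  # assumes the opencv-python package is installed

rng = np.random.default_rng(6)
roi = rng.integers(0, 256, size=(37, 52), dtype=np.uint8)  # stand-in lip-region crop

# Normalize every ROI to a fixed size; the three interpolation modes below
# are the ones compared in the abstract.
target_size = (64, 64)
resized = {
    "nearest": cv2.resize(roi, target_size, interpolation=cv2.INTER_NEAREST),
    "bilinear": cv2.resize(roi, target_size, interpolation=cv2.INTER_LINEAR),
    "bicubic": cv2.resize(roi, target_size, interpolation=cv2.INTER_CUBIC),
}
```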

Hyper-Rectangle Based Prototype Selection Algorithm Preserving Class Regions (클래스 영역을 보존하는 초월 사각형에 의한 프로토타입 선택 알고리즘)

  • Baek, Byunghyun;Euh, Seongyul;Hwang, Doosung
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.83-90 / 2020
  • Prototype selection offers the advantage of low learning time and storage space by selecting, from the training data, the minimum data representative of in-class partitions. This paper designs a new training data generation method using hyper-rectangles that can be applied to general classification algorithms. A hyper-rectangular region contains no data of other classes and divides the space of its own class. The median of the data within a hyper-rectangle is selected as a prototype to form the new training data, and the size of the hyper-rectangle is adjusted to reflect the data distribution in the class region. A set cover optimization algorithm is proposed to select the minimum prototype set that represents the whole training data. The proposed method reduces the polynomial time required by set cover optimization by using a greedy algorithm and a distance equation that avoids multiplication. In experimental comparison with hyper-sphere prototype selection, the proposed method is superior in terms of prototype rate and generalization performance.
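
The paper's hyper-rectangles and multiplication-free distance are not reproduced here; the sketch below shows only the greedy set-cover step, with each candidate prototype covering the same-class points within a fixed radius as a simplified stand-in for the hyper-rectangular regions.

```python
import numpy as np

def greedy_prototypes(X, y, radius):
    """Greedy set cover: repeatedly pick the point whose same-class
    neighborhood covers the most training points not yet covered."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)        # squared distances
    covers = (d2 <= radius ** 2) & (y[:, None] == y[None, :])  # same-class balls
    uncovered = np.ones(n, dtype=bool)
    prototypes = []
    while uncovered.any():
        gains = (covers & uncovered[None, :]).sum(axis=1)
        best = int(gains.argmax())
        prototypes.append(best)
        uncovered &= ~covers[best]
    return prototypes

rng = np.random.default_rng(7)
X = rng.random((200, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(int)
protos = greedy_prototypes(X, y, radius=0.15)  # indices of the selected prototypes
```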

Classifying Cancer Using Partially Correlated Genes Selected by Forward Selection Method (전진선택법에 의해 선택된 부분 상관관계의 유전자들을 이용한 암 분류)

  • 유시호;조성배
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.83-92 / 2004
  • A gene expression profile is numerical data of gene expression levels from an organism, measured on a microarray. Generally, each specific tissue shows different expression levels in its related genes, so cancer can be classified from gene expression profiles. Because not all genes are related to classification, the related genes must be selected, a process called feature selection. This paper proposes a new gene selection method using the forward selection method from regression analysis. The method reduces redundant information in the selected genes, yielding more efficient classification. We used the k-nearest neighbor classifier and tested on the colon cancer dataset. The results were compared with Pearson's and Spearman's correlation coefficient methods, and the proposed method showed better performance, reaching 90.3% classification accuracy. The method was also successfully applied to the lymphoma cancer dataset.
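
A hedged sketch of a forward-selection wrapper scored by k-NN cross-validation, the general scheme the paper builds on; its partial-correlation criterion is not reproduced, and the expression data below is a synthetic stand-in (62 samples, to echo the colon dataset's size, with arbitrary informative columns).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(62, 200))                 # stand-in expression profiles
y = (X[:, 3] + X[:, 17] + rng.normal(0, 0.5, 62) > 0).astype(int)

def forward_select(X, y, max_features=5):
    """Greedy forward selection scored by k-NN cross-validation accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    knn = KNeighborsClassifier(n_neighbors=3)
    for _ in range(max_features):
        scores = [(cross_val_score(knn, X[:, selected + [f]], y, cv=5).mean(), f)
                  for f in remaining]
        _, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

genes = forward_select(X, y)                   # indices of the selected genes
```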

Analysis of GPU-based Parallel Shifted Sort Algorithm by comparing with General GPU-based Tree Traversal (일반적인 GPU 트리 탐색과의 비교실험을 통한 GPU 기반 병렬 Shifted Sort 알고리즘 분석)

  • Kim, Heesu;Park, Taejung
    • Journal of Digital Contents Society / v.18 no.6 / pp.1151-1156 / 2017
  • It is common to achieve lower performance than expected when traversing tree data structures on a GPU. In this paper, we analyze the reasons for this lower-than-expected performance in GPU tree traversal and show that warp divergence is caused by the branch instructions ("if ... else") that commonly appear in tree traversal CUDA code. We also compare the parallel shifted sort algorithm, which can reduce the number of warp divergences, with a kd-tree CUDA implementation, and show that shifted sort can run faster thanks to fewer warp divergences. In our analysis, the shifted sort algorithm ran about 16 times faster than the kd-tree CUDA implementation for 2^23 query points and 2^23 data points in R^3 space, and the performance gap tends to grow with the number of query points and data points.
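
The CUDA implementations compared in the paper are not shown here; the pure NumPy sketch below only illustrates the Morton-ordering idea behind shifted sort: points are sorted along a space-filling curve under several random shifts, so a near neighbor lands nearby in at least one sorted order with high probability. It is an approximate, branch-light CPU illustration under assumed parameters, not the paper's exact GPU algorithm.

```python
import numpy as np

def morton3d(p, bits=10):
    """Interleave the bits of quantized (x, y, z) coordinates in [0, 1)^3."""
    q = (p * (2 ** bits - 1)).astype(np.int64)
    code = np.zeros(len(p), dtype=np.int64)
    for b in range(bits):
        for axis in range(3):
            code |= ((q[:, axis] >> b) & 1) << (3 * b + axis)
    return code

rng = np.random.default_rng(9)
data = rng.random((1 << 15, 3))
query = rng.random(3)

best_idx, best_d = -1, np.inf
for shift in rng.random((4, 3)) * 0.5:          # a few random shifted copies
    codes = morton3d((data + shift) % 1.0)
    order = np.argsort(codes)
    qcode = morton3d(((query + shift) % 1.0)[None, :])[0]
    pos = np.searchsorted(codes[order], qcode)
    lo, hi = max(pos - 16, 0), min(pos + 16, len(data))
    cand = order[lo:hi]                          # candidates near the query in this order
    d = np.linalg.norm(data[cand] - query, axis=1)
    if d.min() < best_d:
        best_d, best_idx = float(d.min()), int(cand[d.argmin()])
```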