• Title/Summary/Keyword: Overfitting Problem

An Improved AdaBoost Algorithm by Clustering Samples (샘플 군집화를 이용한 개선된 아다부스트 알고리즘)

  • Baek, Yeul-Min;Kim, Joong-Geun;Kim, Whoi-Yul
    • Journal of Broadcast Engineering / v.18 no.4 / pp.643-646 / 2013
  • We present an improved AdaBoost algorithm that avoids overfitting. AdaBoost is widely regarded as one of the best solutions for object detection, but it tends to overfit when the training dataset contains noisy samples. To avoid this, the proposed method divides the positive samples into K clusters using the k-means algorithm and then uses only one cluster to minimize the training error at each iteration of weak learning. This prevents excessive partitioning of the samples and excludes noisy samples from the training of weak learners, so overfitting is effectively reduced. In our experiments, the proposed method shows better classification and generalization ability than conventional boosting algorithms on various real-world datasets.
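
The clustering-then-boosting idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes binary labels in {-1, +1}, and the cluster-selection rule (pick the positive cluster whose stump yields the lowest weighted error) is one plausible reading of the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def clustered_adaboost(X, y, n_rounds=50, n_clusters=5, random_state=0):
    pos = np.where(y == 1)[0]
    neg = np.where(y == -1)[0]
    # cluster only the positive samples, as the abstract describes
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=random_state).fit_predict(X[pos])
    w = np.full(len(y), 1.0 / len(y))              # AdaBoost sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        best = None
        for k in range(n_clusters):
            # train a weak learner on negatives plus one positive cluster
            idx = np.concatenate([neg, pos[clusters == k]])
            stump = DecisionTreeClassifier(max_depth=1).fit(
                X[idx], y[idx], sample_weight=w[idx])
            err = np.sum(w * (stump.predict(X) != y))  # weighted error, all data
            if best is None or err < best[0]:
                best = (err, stump)
        err, stump = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)          # standard AdaBoost weight
        w *= np.exp(-alpha * y * stump.predict(X))     # re-weight samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)

def predict(learners, alphas, X):
    votes = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(votes)
```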

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul;Kim, Sang-Kwi;Yoon, Chung-Hwa
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.11-17 / 2007
  • One of the popular methods for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory and assigns the class of the nearest training pattern, it cannot explain how the classification result is obtained. To overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) to extract IF-THEN rules that describe regularities inherent in the training patterns. However, the rules generated by RPA eventually overfit, because they depend too strongly on the details of the given training patterns. RPA also produces more rules than necessary, due to over-partitioning of the pattern space. Consequently, we present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from rules and reduces the number of rules at the same time. We verify the performance of the proposed algorithm using benchmark datasets from the UCI Machine Learning Repository.
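
The condition-pruning step can be illustrated as follows. This is a hypothetical sketch, assuming rules are hyper-rectangles stored as {feature: (low, high)} bounds; the greedy drop-a-condition loop is one plausible reading of "removing useless conditions", not the paper's algorithm.

```python
import numpy as np

def rule_covers(rule, X):
    """rule: dict {feature_index: (low, high)} -> boolean mask of covered rows."""
    mask = np.ones(len(X), dtype=bool)
    for f, (lo, hi) in rule.items():
        mask &= (X[:, f] >= lo) & (X[:, f] <= hi)
    return mask

def rule_accuracy(rule, label, X, y):
    covered = rule_covers(rule, X)
    return np.mean(y[covered] == label) if covered.any() else 0.0

def prune_rule(rule, label, X, y):
    """Greedily drop conditions that do not reduce accuracy on training data."""
    rule = dict(rule)
    for f in list(rule):
        candidate = {g: b for g, b in rule.items() if g != f}
        if candidate and (rule_accuracy(candidate, label, X, y)
                          >= rule_accuracy(rule, label, X, y)):
            rule = candidate            # condition was useless; remove it
    return rule
```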

Indoor positioning system using Xgboosting (Xgboosting 기법을 이용한 실내 위치 측위 기법)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo;Kim, Dae-Jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.492-494 / 2021
  • The decision tree is a widely used classification technique in machine learning. However, it tends to consume considerable time and resources because of overfitting. Bagging and boosting address this problem: bagging creates multiple samplings of the data and builds a model from each, while boosting models the sampled data and adjusts sample weights to reduce overfitting. More recently, techniques such as Xgboost have been introduced to improve performance further. In this paper, we therefore collect Wi-Fi signal data for indoor positioning, apply both the existing methods and Xgboost to it, and evaluate their performance.
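
As a usage sketch, Wi-Fi RSSI fingerprints can be fed to the xgboost package roughly as below; the data files, feature layout, and hyperparameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per scan, one column per access point (RSSI in dBm, e.g. -90..-30)
# y: integer room/zone label for each scan (hypothetical file names)
X = np.load("wifi_rssi.npy")
y = np.load("room_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    max_depth=4,            # shallow trees curb overfitting
    learning_rate=0.1,
    subsample=0.8,          # row subsampling, as in stochastic boosting
    colsample_bytree=0.8,   # column subsampling per tree
)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```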

Application of Random Over Sampling Examples(ROSE) for an Effective Bankruptcy Prediction Model (효과적인 기업부도 예측모형을 위한 ROSE 표본추출기법의 적용)

  • Ahn, Cheolhwi;Ahn, Hyunchul
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.525-535 / 2018
  • If the frequency of a particular class is excessively higher than that of the other classes in a classification problem, a data imbalance problem occurs and distorts machine learning. Corporate bankruptcy prediction often suffers from data imbalance, since the ratio of insolvent companies is generally very low whereas that of solvent companies is very high. To mitigate this, a proper sampling technique must be applied. Until now, oversampling techniques, which adjust the class distribution of a dataset by sampling the minor class with replacement, have been popular, but they carry a risk of overfitting. Against this background, this study applies the ROSE (Random Over Sampling Examples) technique, proposed by Menardi and Torelli in 2014, to effective corporate bankruptcy prediction. ROSE creates new learning samples by synthesizing them from the training samples, so it improves the prediction accuracy of classifiers while avoiding the risk of overfitting. Specifically, we propose combining ROSE with the SVM (support vector machine), which is known as one of the best binary classifiers. We applied the proposed method to a real-world bankruptcy prediction case from a major Korean bank and compared its performance with other sampling techniques. The experimental results show that ROSE improved the prediction accuracy of the SVM in bankruptcy prediction compared to the other techniques, with statistical significance. These results shed light on the fact that ROSE can be a good alternative for resolving data imbalance problems in prediction tasks in the social sciences beyond bankruptcy prediction.
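
ROSE's smoothed bootstrap can be approximated in a few lines of numpy before fitting the SVM: draw minority samples with replacement and perturb them with a Gaussian kernel. A minimal sketch follows; the bandwidth is a simplified rule of thumb and the function names are assumptions, not the R package's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def rose_oversample(X_min, n_new, shrink=1.0, random_state=0):
    rng = np.random.default_rng(random_state)
    n, d = X_min.shape
    # diagonal Gaussian bandwidth per feature (rule-of-thumb, simplified)
    h = shrink * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4)) * X_min.std(axis=0)
    base = X_min[rng.integers(0, n, size=n_new)]       # bootstrap draws
    return base + rng.normal(0.0, h, size=(n_new, d))  # kernel smoothing

def fit_svm_with_rose(X, y, minority=1):
    # balance the classes, then fit the SVM on the augmented training set
    X_min, X_maj = X[y == minority], X[y != minority]
    X_new = rose_oversample(X_min, len(X_maj) - len(X_min))
    X_bal = np.vstack([X, X_new])
    y_bal = np.concatenate([y, np.full(len(X_new), minority)])
    return SVC(kernel="rbf", C=1.0).fit(X_bal, y_bal)
```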

A Study on Random Selection of Pooling Operations for Regularization and Reduction of Cross Validation (정규화 및 교차검증 횟수 감소를 위한 무작위 풀링 연산 선택에 관한 연구)

  • Ryu, Seo-Hyeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.161-166 / 2018
  • In this paper, we propose a method for the random selection of pooling operations for regularization and for reducing cross validation in convolutional neural networks. The pooling operation in convolutional neural networks is used to reduce the size of the feature map and for its shift-invariant properties. In the existing approach, one pooling operation is applied in each pooling layer. Because this fixes the convolutional network, the network suffers from overfitting, meaning that it fits the model too closely to the training samples. In addition, finding the combination of pooling operations that maximizes performance requires cross validation. To solve these problems, we introduce the concept of probability into the pooling layers. The proposed method does not fix one pooling operation per pooling layer; instead, during training we randomly select one of multiple pooling operations in each pooling region, and at test time we use probabilistic weighting to produce the expected output. The proposed method can be seen as approximately averaging many networks, each using a different pooling operation in each pooling region. It therefore avoids the overfitting problem while reducing the amount of cross validation required. The experimental results show that the proposed method achieves better generalization performance and reduces the need for cross validation.
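
A compact PyTorch sketch of the idea follows: sample a pooling operation stochastically during training and return the probability-weighted mixture at test time. For brevity it samples one operation per layer per forward pass, whereas the paper selects per pooling region.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomPool2d(nn.Module):
    def __init__(self, kernel_size=2, p_max=0.5):
        super().__init__()
        self.k, self.p_max = kernel_size, p_max

    def forward(self, x):
        if self.training:
            # stochastic choice between max and average pooling
            if torch.rand(1).item() < self.p_max:
                return F.max_pool2d(x, self.k)
            return F.avg_pool2d(x, self.k)
        # test time: expected output over the pooling choices
        return (self.p_max * F.max_pool2d(x, self.k)
                + (1 - self.p_max) * F.avg_pool2d(x, self.k))
```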

A Method of Activity Recognition in Small-Scale Activity Classification Problems via Optimization of Deep Neural Networks (심층 신경망의 최적화를 통한 소규모 행동 분류 문제의 행동 인식 방법)

  • Kim, Seunghyun;Kim, Yeon-Ho;Kim, Do-Yeon
    • KIPS Transactions on Software and Data Engineering / v.6 no.3 / pp.155-160 / 2017
  • Recently, deep learning has been used successfully to solve many recognition problems, and it has many advantages over existing machine learning methods that extract feature points through hand-crafting. Deep neural networks for human activity recognition split video data into frame images and then classify activities by analysing the connectivity of the frame images over time. However, they are difficult to apply to real problems with small-scale activity classes, because such settings suffer from overfitting and insufficient training data. In this paper, we defined five types of small-scale human activities and classified them. We constructed a video database of 700 video clips and obtained a classification accuracy of 74.00%.
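
One common architecture matching the frame-based description above is sketched below; only the five-class output comes from the abstract, while the layer sizes and the temporal-averaging aggregation are assumptions.

```python
import torch
import torch.nn as nn

class FrameActivityNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.frame_encoder = nn.Sequential(     # applied to each frame
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clips):                   # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.frame_encoder(clips.flatten(0, 1))   # (b*t, 32)
        feats = feats.view(b, t, -1).mean(dim=1)          # average over time
        return self.classifier(feats)
```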

A Study on Simplification of Machine Learning Model (기계학습 모델의 간략화 방법에 대한 연구)

  • Lee, Gye-Sung;Kim, In-Kook
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.147-152 / 2016
  • One of the major issues in machine learning, which extracts and acquires knowledge implicit in data, is finding an appropriate way of representing that knowledge. Knowledge can be represented by a number of structures, such as networks, trees, lists, and rules. These differ not only in structure but also in the effectiveness of the resulting models for problem solving. In this paper, we propose partition utility as a criterion function for clustering that leads to simplification of the model and thus avoids the overfitting problem. In addition, we propose a heuristic for constructing balanced hierarchical models.
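
The paper's exact definition of partition utility is not given here, but a score in the same spirit as category utility can be sketched as follows, assuming integer-coded categorical data: it rewards partitions whose clusters make attribute values more predictable than they are globally, while the division by the number of clusters discourages needless splitting.

```python
import numpy as np

def partition_utility(X, labels):
    """X: (n, d) integer-coded categorical data; labels: cluster id per row."""
    n, d = X.shape
    clusters = np.unique(labels)
    # global term: sum over attributes and values of P(x_j = v)^2
    global_term = sum((X[:, j] == v).mean() ** 2
                      for j in range(d) for v in np.unique(X[:, j]))
    score = 0.0
    for c in clusters:
        Xc = X[labels == c]
        # within-cluster predictability: sum of P(x_j = v | cluster)^2
        within = sum((Xc[:, j] == v).mean() ** 2
                     for j in range(d) for v in np.unique(X[:, j]))
        score += (len(Xc) / n) * (within - global_term)
    return score / len(clusters)   # favors fewer, more predictive clusters
```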

Predictive Optimization Adjusted With Pseudo Data From A Missing Data Imputation Technique (결측 데이터 보정법에 의한 의사 데이터로 조정된 예측 최적화 방법)

  • Kim, Jeong-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.200-209 / 2019
  • When forecasting future values, a model estimated by minimizing training errors can yield test errors higher than its training errors. This is the overfitting problem, caused by an increase in model complexity when the model focuses only on a given dataset. Some regularization and resampling methods have been introduced to reduce test errors by alleviating this problem, but they were designed for use with only the given dataset. In this paper, we propose a new optimization approach that reduces test errors by transforming the test error minimization problem into a training error minimization problem. This transformation requires additional data beyond the given dataset, termed pseudo data. To make proper use of pseudo data, we used three types of missing data imputation techniques. As the optimization tool, we chose the least squares method and combined it with an extra pseudo data instance. We present numerical results supporting the proposed approach, which yielded lower test errors than the ordinary least squares method.
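
The augmentation idea can be sketched with ordinary least squares in numpy: build a pseudo instance by imputation and append it to the training set before fitting. Mean imputation stands in below for the three imputation techniques the paper actually compares; the function name is an assumption.

```python
import numpy as np

def lstsq_with_pseudo(X, y):
    # pseudo instance: feature-wise means with the mean response,
    # one simple stand-in for a missing-data imputation technique
    x_pseudo = X.mean(axis=0, keepdims=True)
    y_pseudo = np.array([y.mean()])
    X_aug = np.vstack([X, x_pseudo])
    y_aug = np.concatenate([y, y_pseudo])
    # ordinary least squares on the augmented training set
    design = np.c_[np.ones(len(X_aug)), X_aug]      # prepend intercept column
    coef, *_ = np.linalg.lstsq(design, y_aug, rcond=None)
    return coef   # intercept first, then slopes
```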

Overfitting Reduction of Intelligence Web Search based on Enforcement Learning (강화학습에 기초한 지능형 웹 검색의 과잉적합 감소방안)

  • Han, Song-Yi;Jung, Yong-Gyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.3 / pp.25-30 / 2009
  • In recent years, intelligent systems using reinforcement learning have been studied in various fields, from games to web search applications. A good model should fit the training data and also classify new records accurately. A model overfitted to the training data can commit the fallacy of hasty generalization, yet some overfitting is unavoidable in the real world. This paper suggests an entropy and mutation model to reduce overfitting. It describes the variation of entropy and the artificial development of entropy in data mining, by analogy with the development of mutations needed for survival in the natural world. The periodic generation of maximum entropy is introduced to reduce overfitting; the maximum entropy model can be regarded as a periodic generalization within the reinforcement process of intelligent web search.

Comparison of model selection criteria in graphical LASSO (그래프 LASSO에서 모형선택기준의 비교)

  • Ahn, Hyeongseok;Park, Changyi
    • Journal of the Korean Data and Information Science Society / v.25 no.4 / pp.881-891 / 2014
  • Graphical models can be used as an intuitive tool for modeling a complex stochastic system with a large number of interrelated variables, because the conditional independence between random variables can be visualized as a network. The graphical least absolute shrinkage and selection operator (LASSO) is considered effective in avoiding overfitting when estimating Gaussian graphical models for high-dimensional data. In this paper, we consider the model selection problem in the graphical LASSO. In particular, we compare various model selection criteria via simulations and analyze a real financial dataset.
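
A minimal sketch of such a comparison with scikit-learn's GraphicalLasso follows; the BIC-style criterion below is one common formulation and may differ from the criteria the paper actually compares.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def gaussian_bic(S, precision, n):
    # Gaussian log-likelihood term plus a penalty on the number of edges
    ll = n / 2.0 * (np.linalg.slogdet(precision)[1] - np.trace(S @ precision))
    off_diag = precision[np.triu_indices_from(precision, k=1)]
    n_edges = (np.abs(off_diag) > 1e-8).sum()
    return -2.0 * ll + np.log(n) * n_edges

def select_alpha(X, alphas=(0.01, 0.05, 0.1, 0.2, 0.5)):
    n = len(X)
    S = np.cov(X, rowvar=False)
    scores = {}
    for a in alphas:
        model = GraphicalLasso(alpha=a).fit(X)
        scores[a] = gaussian_bic(S, model.precision_, n)
    return min(scores, key=scores.get), scores   # alpha with the lowest BIC
```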