• Title/Summary/Keyword: Overfitting Problem

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul;Kim, Sang-Kwi;Yoon, Chung-Hwa
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.11-17 / 2007
  • One of the popular methods for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory and assigns the class of the nearest training pattern, it cannot explain how a classification result is obtained. To overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) that extracts IF-THEN rules describing the regularities inherent in the training patterns. Rules generated by RPA, however, eventually overfit, because they depend too strongly on the details of the given training patterns; RPA also produces more rules than necessary, owing to over-partitioning of the pattern space. We therefore present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from rules while reducing the number of rules. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
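
A minimal sketch of the MBR nearest-neighbor step the abstract describes (illustrative only; this is not the paper's RPA/IREA implementation, and the toy data are made up):

```python
import numpy as np

def mbr_classify(test_pattern, train_patterns, train_labels):
    """Memory-Based Reasoning in its simplest form: assign the class of
    the nearest stored training pattern (Euclidean distance)."""
    dists = np.linalg.norm(train_patterns - test_pattern, axis=1)
    return train_labels[int(np.argmin(dists))]

# Hypothetical toy data: two 2-D classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(mbr_classify(np.array([0.95, 0.9]), X, y))  # -> 1
```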

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han;Kim, Eun-Hu;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1772-1781 / 2017
  • In this study, a design methodology that alleviates the overfitting problem of Polynomial Neural Networks (PNN) is realized with the aid of two techniques: $L_2$ regularization and the Sum of Squared Coefficients (SSC). PNN is widely used for mathematical modeling tasks such as identifying a linear system from input/output data and building regression models for prediction. It obtains a preferred network structure by generating consecutive layers and nodes from multivariate polynomial subexpressions, and it has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting caused by noise sensitivity and by excessive training during the generation of successive network layers. To alleviate this overfitting and to design the ensuing deep network structure effectively, the two techniques are applied to the consecutive generation of each layer and its nodes when constructing the deep PNN structure. $L_2$ regularization estimates minimal coefficients by adding a penalty term to the cost function; it is a representative way of reducing the influence of noise by flattening the solution space and shrinking coefficient size. The SSC technique minimizes the sum of squared polynomial coefficients instead of the squared errors. As a result, the overfitting of the deep PNN structure is stabilized by the proposed method. This study points to the possibility of deep network structure design and big data processing, and the superiority of the network performance is shown through experiments.
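
The $L_2$ penalty the abstract describes amounts, in closed form, to ridge-style coefficient estimation for each polynomial node. A hedged sketch (the layer-by-layer PNN generation is omitted; `lam` and the toy polynomial features are assumptions):

```python
import numpy as np

def ridge_coefficients(Phi, y, lam=0.1):
    """Minimize ||y - Phi w||^2 + lam * ||w||^2, i.e. squared error plus
    an L2 penalty on the coefficients; closed-form ridge solution."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

# Hypothetical polynomial node: features [1, x1, x2, x1*x2] per sample.
X = np.random.rand(50, 2)
Phi = np.column_stack([np.ones(50), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
y = 2 * X[:, 0] - X[:, 1] + 0.05 * np.random.randn(50)
print(ridge_coefficients(Phi, y, lam=0.1))
```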

Variable Selection Theorems in General Linear Model

  • Park, Jeong-Soo;Yoon, Sang-Hoo
    • Proceedings of the Korean Data and Information Science Society Conference / 2006.04a / pp.171-179 / 2006
  • For the problem of variable selection in linear models, we consider errors that are correlated with covariance matrix V. Hocking's theorems on the effects of overfitting and underfitting in the linear model are extended to the less-than-full-rank and correlated-error model, and to the ANCOVA model.
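
For context, the correlated-error setting referred to above is the general linear model with covariance matrix V; a standard statement of the full-rank generalized least squares estimator (background only, not the paper's extended theorems) is:

```latex
% General linear model with correlated errors:
%   y = X\beta + \varepsilon, \quad \operatorname{Cov}(\varepsilon) = \sigma^2 V
% Generalized least squares estimator (full-rank case):
\hat{\beta}_{\mathrm{GLS}} = (X^{\top} V^{-1} X)^{-1} X^{\top} V^{-1} y
```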

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.119-127 / 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting. Much research has been done to overcome this problem. In this paper, a deep neural network combining early stopping with the ADAM optimizer is presented. This form of deep network is useful for handling big data because it automatically stops training before overfitting occurs. Its generalization ability is also better than that of a plain deep neural network model.
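
A minimal sketch of the early stopping plus ADAM combination in Keras (the architecture, data, and hyperparameters here are assumptions, not the paper's):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data; the paper's dataset is not specified here.
X = np.random.rand(500, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving,
# i.e. before overfitting sets in, and restores the best weights.
stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
model.fit(X[:400], y[:400], validation_data=(X[400:], y[400:]),
          epochs=200, callbacks=[stopper], verbose=0)
```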

A comparison of methods to reduce overfitting in neural networks

  • Kim, Ho-Chan;Kang, Min-Jae
    • International Journal of Advanced Smart Convergence / v.9 no.2 / pp.173-178 / 2020
  • A common problem in neural network learning is that the model fits the specifics of the training data too closely. In this paper, various methods for avoiding overfitting were compared: regularization, dropout, different amounts of data, and different types of neural networks. Comparative studies of these methods are provided, evaluated by test accuracy. We found that using more data works better than the regularization and dropout methods. Moreover, deep convolutional neural networks outperform both multi-layer neural networks and simple convolutional neural networks.
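
Two of the compared remedies, an L2 weight penalty and dropout, can be expressed in a few Keras lines (rates and layer sizes are assumed, not the paper's settings):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        128, activation="relu", input_shape=(784,),
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2 penalty
    tf.keras.layers.Dropout(0.5),  # randomly zeroes 50% of activations
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```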

A Spatial Regularization of LDA for Face Recognition

  • Park, Lae-Jeong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.2 / pp.95-100 / 2010
  • This paper proposes a new spatial regularization of Fisher linear discriminant analysis (LDA) to reduce the overfitting due to the small sample size (SSS) problem in face recognition. Many regularized LDAs have been proposed to alleviate overfitting by regularizing an estimate of the within-class scatter matrix. Spatial regularization methods have been suggested that make the discriminant vectors spatially smooth, mitigating the overfitting. As a generalized version of spatially regularized LDA, the proposed method exploits the non-uniformity of spatial correlation structures in face images by adding a spatial smoothness constraint to the LDA framework. The region-dependent spatial regularization is advantageous for capturing the non-flat spatial correlation structure within a face image as well as for obtaining a spatially smooth projection of LDA. Experimental results on public face databases such as ORL and CMU PIE show that the proposed regularized LDA performs well, especially when the number of training images per individual is quite small, compared with other regularized LDAs.
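
For orientation, a generic regularized-LDA step that shrinks the within-class scatter estimate (a simple ridge-style stand-in; the paper's region-dependent spatial smoothness term is more elaborate and is not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

def regularized_lda(X, y, alpha=0.1):
    """Fisher LDA with a ridge-style regularizer on the within-class
    scatter Sw, guarding against the small sample size problem."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)  # between-class scatter
    Sw += alpha * np.eye(d)                     # regularization term
    evals, evecs = eigh(Sb, Sw)                 # solve Sb v = lambda Sw v
    return evecs[:, ::-1][:, :len(classes) - 1] # top discriminant vectors

# Hypothetical two-class data in 5 dimensions.
X = np.vstack([np.random.randn(10, 5), np.random.randn(10, 5) + 2])
y = np.array([0] * 10 + [1] * 10)
W = regularized_lda(X, y, alpha=0.1)  # 5 x 1 projection matrix
```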

A Study on Simplification of Machine Learning Model (기계학습 모델의 간략화 방법에 대한 연구)

  • Lee, Gye-Sung;Kim, In-Kook
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.147-152 / 2016
  • A major issue in machine learning, which extracts and acquires knowledge implicit in data, is finding an appropriate way of representing that knowledge. Knowledge can be represented by a number of structures, such as networks, trees, lists, and rules. These differ not only in structure but also in how effective the resulting models are at problem solving. In this paper, we propose partition utility as a criterion function for clustering that leads to simplification of the model and thus avoids the overfitting problem. In addition, a heuristic is proposed as a way to construct balanced hierarchical models.
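
The abstract does not define partition utility, but it belongs to the same family as COBWEB's category utility; a hedged sketch of that classic measure for nominal attributes follows, as an analogue rather than the paper's criterion:

```python
from collections import Counter

def category_utility(clusters):
    """Category utility over clusters of nominal-attribute tuples:
    rewards partitions whose clusters make attribute values more
    predictable than they are in the data as a whole."""
    data = [item for cl in clusters for item in cl]
    n = len(data); k = len(clusters)
    n_attrs = len(data[0])
    # Baseline predictability of each attribute value over all data.
    base = sum((cnt / n) ** 2
               for a in range(n_attrs)
               for cnt in Counter(row[a] for row in data).values())
    score = 0.0
    for cl in clusters:
        p_cl = len(cl) / n
        within = sum((cnt / len(cl)) ** 2
                     for a in range(n_attrs)
                     for cnt in Counter(row[a] for row in cl).values())
        score += p_cl * (within - base)
    return score / k

# Perfectly separated toy partition scores positively.
print(category_utility([[("red",), ("red",)], [("blue",), ("blue",)]]))
```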

Indoor positioning system using Xgboosting (Xgboosting 기법을 이용한 실내 위치 측위 기법)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo;Kim, Dae-Jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.492-494 / 2021
  • The decision tree is used as a classification technique in machine learning. However, decision trees suffer from overfitting and can consume considerable time or resources. Bagging and boosting address this problem: bagging draws multiple bootstrap samples and builds a model on each, while boosting models the sampled data and adjusts instance weights to reduce overfitting. More recently, techniques such as XGBoost have been introduced to improve performance further. In this paper, we collect Wi-Fi signal data for indoor positioning, apply both the existing methods and XGBoost to it, and evaluate their performance.
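
A minimal sketch of the XGBoost step on Wi-Fi-style fingerprint data (the data, access-point count, and hyperparameters are stand-ins, not the paper's collected set):

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical fingerprint data: RSSI readings from 5 access points,
# labels = indoor zone IDs (stand-ins for the paper's collected data).
rng = np.random.default_rng(0)
X = rng.uniform(-90, -30, size=(300, 5))
y = rng.integers(0, 4, size=300)

# Boosted trees with shallow depth, shrinkage, and row subsampling,
# the usual knobs that keep boosting from overfitting.
clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1,
                    subsample=0.8)
clf.fit(X[:240], y[:240])
print("hold-out accuracy:", (clf.predict(X[240:]) == y[240:]).mean())
```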

Predictive Optimization Adjusted With Pseudo Data From A Missing Data Imputation Technique (결측 데이터 보정법에 의한 의사 데이터로 조정된 예측 최적화 방법)

  • Kim, Jeong-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.200-209 / 2019
  • When forecasting future values, a model estimated by minimizing training errors can yield test errors higher than the training errors. This is the overfitting problem, caused by the increase in model complexity when the model focuses only on a given dataset. Some regularization and resampling methods have been introduced to reduce test errors by alleviating this problem, but they were designed for use with only the given dataset. In this paper, we propose a new optimization approach that reduces test errors by transforming the test error minimization problem into a training error minimization problem. To carry out this transformation, we need additional data beyond the given dataset, termed pseudo data. To make proper use of the pseudo data, we use three types of missing data imputation techniques. As the optimization tool we choose the least squares method and combine it with an extra pseudo data instance. We also present numerical results supporting the proposed approach, which yields lower test errors than the ordinary least squares method.
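
A hedged sketch of the general idea, appending one imputed pseudo instance before refitting ordinary least squares (mean imputation is used as a simple stand-in; the paper compares three imputation techniques and its exact adjustment is not reproduced):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Hypothetical regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

# Pseudo instance: impute its features by column means (one simple
# missing-data imputation), predict its target with a first-pass fit,
# then refit on the augmented data.
x_pseudo = X.mean(axis=0)
y_pseudo = x_pseudo @ ols(X, y)
X_aug = np.vstack([X, x_pseudo])
y_aug = np.append(y, y_pseudo)
print(ols(X_aug, y_aug))
```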

Variable Selection Theorems in General Linear Model

  • Yoon, Sang-Hoo;Park, Jeong-Soo
    • Proceedings of the Korean Statistical Society Conference / 2005.11a / pp.187-192 / 2005
  • For the problem of variable selection in linear models, we consider errors that are correlated with covariance matrix V. Hocking's theorems on the effects of overfitting and underfitting in the linear model are extended to the less-than-full-rank and correlated-error model, and to the ANCOVA model.
