• Title/Summary/Keyword: Overfitting Problem

Search Results: 69

A Technique for Pattern Recognition of Concrete Surface Cracks (콘크리트 표면 균열 패턴인식 기법 개발)

  • Lee Bang-Yeon;Park Yon-Dong;Kim Jin-Keun
    • Journal of the Korea Concrete Institute / v.17 no.3 s.87 / pp.369-374 / 2005
  • This study proposes a technique for the recognition of crack patterns, which include horizontal, vertical, diagonal($-45^{\circ}$), diagonal($+45^{\circ}$), and random cracks, based on image processing techniques and an artificial neural network. A MATLAB code was developed for the proposed image processing algorithm and artificial neural network. Features were determined using the total projection technique, and the structure (number of layers and hidden neurons) and weights of the artificial neural network were determined by learning from artificial crack images. In this process, we adopted the Bayesian regularization technique as a generalization method to eliminate the overfitting problem. Numerical tests were performed on thirty-eight crack images to examine the validity of the algorithm. Within the limited tests in the present study, the proposed algorithm accurately recognized the crack patterns when compared to those classified by a human expert.
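
The sketch below is not the authors' MATLAB implementation; it is a minimal Python stand-in in which an L2 weight penalty (scikit-learn's `alpha`) loosely plays the role of Bayesian regularization while a small neural network is trained on hypothetical crack-pattern feature vectors. All shapes, labels, and settings are assumptions.

```python
# Illustrative sketch only (not the paper's MATLAB code).
# A small MLP with an L2 penalty as a rough stand-in for Bayesian
# regularization, classifying hypothetical projection features into
# five crack-pattern classes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 32))            # hypothetical total-projection features
y = rng.integers(0, 5, size=200)     # horizontal / vertical / -45 / +45 / random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,),  # structure would be chosen by learning
                    alpha=1e-2,                # weight penalty to curb overfitting
                    max_iter=2000,
                    random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```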

A layered-wise data augmenting algorithm for small sampling data (적은 양의 데이터에 적용 가능한 계층별 데이터 증강 알고리즘)

  • Cho, Hee-chan;Moon, Jong-sub
    • Journal of Internet Computing and Services / v.20 no.6 / pp.65-72 / 2019
  • Data augmentation is a method that increases the amount of data through various algorithms based on a small amount of sample data. When machine learning and deep learning techniques are used to solve real-world problems, data sets are often lacking. A lack of data raises the risk of underfitting and overfitting and prevents a model from properly reflecting the characteristics of the data during learning. Thus, in this paper, the proposed layer-wise data augmenting method produces substantially meaningful augmented data at each layer of a deep neural network, and experiments measuring classification accuracy show that the method is effective for model learning.
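
The abstract does not spell out the layer-wise procedure, so the following is only a loose Python sketch of the general idea: perturbing hidden-layer activations during training so that a small sample set produces more varied internal representations. The network, dimensions, and noise scale are all hypothetical, not the paper's algorithm.

```python
# Loose illustration of layer-wise augmentation: Gaussian noise is injected
# into each hidden layer only during training, generating extra variation
# from a small sample set. NOT the paper's exact method.
import torch
import torch.nn as nn

class NoisyMLP(nn.Module):
    def __init__(self, in_dim=16, hidden=32, n_classes=3, noise_std=0.1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_classes)
        self.noise_std = noise_std

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if self.training:                       # augment only while training
            h = h + torch.randn_like(h) * self.noise_std
        h = torch.relu(self.fc2(h))
        if self.training:
            h = h + torch.randn_like(h) * self.noise_std
        return self.out(h)

model = NoisyMLP()
x = torch.randn(8, 16)                          # a tiny, hypothetical batch
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 3, (8,)))
loss.backward()
```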

Optimized Bankruptcy Prediction through Combining SVM with Fuzzy Theory (퍼지이론과 SVM 결합을 통한 기업부도예측 최적화)

  • Choi, So-Yun;Ahn, Hyun-Chul
    • Journal of Digital Convergence / v.13 no.3 / pp.155-165 / 2015
  • Bankruptcy prediction has been one of the important research topics in finance since the 1960s. In Korea, it has drawn attention from researchers since the IMF crisis in 1998. This study proposes a novel model for better bankruptcy prediction by converging three techniques: support vector machine (SVM), fuzzy theory, and genetic algorithm (GA). Our convergence model is based on SVM, a classification algorithm that enables accurate prediction and avoids overfitting. It also incorporates fuzzy theory to extend the dimensions of the input variables, and GA to optimize the controlling parameters and feature subset selection. To validate the usefulness of the proposed model, we applied it to H Bank's data on non-externally-audited companies. We also experimented with six comparative models to validate the superiority of the proposed model. As a result, our model showed the best prediction accuracy among the models. Our study is expected to contribute to the relevant literature and to practitioners working on bankruptcy prediction.
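
As a hedged illustration of the GA+SVM part of this convergence (the fuzzy input extension is omitted), the toy Python sketch below evolves an SVM's C and gamma together with a binary feature-selection mask on synthetic data. The GA settings, fitness function, and data are all assumptions, not the paper's configuration.

```python
# Toy GA that searches SVM hyperparameters (log10 C, log10 gamma) plus a
# binary feature mask, scoring each individual by 3-fold CV accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, random_state=1)

def fitness(ind):
    mask = ind[:20].astype(bool)
    log_c, log_g = ind[20], ind[21]
    if mask.sum() == 0:
        return 0.0
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_individual():
    return np.concatenate([rng.integers(0, 2, 20), rng.uniform(-2, 2, 2)])

pop = [random_individual() for _ in range(20)]
for _ in range(10):                                  # a few generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):                              # crossover + mutation
        i, j = rng.choice(len(parents), size=2, replace=False)
        cut = int(rng.integers(1, 20))
        child = np.concatenate([parents[i][:cut], parents[j][cut:]])
        if rng.random() < 0.3:
            k = int(rng.integers(0, 20))
            child[k] = 1 - child[k]                  # flip one mask bit
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best 3-fold CV accuracy:", round(float(fitness(best)), 3))
```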

Predictive Analysis of Problematic Smartphone Use by Machine Learning Technique

  • Kim, Yu Jeong;Lee, Dong Su
    • Journal of the Korea Society of Computer and Information / v.25 no.2 / pp.213-219 / 2020
  • In this paper, we propose a classification analysis method for diagnosing and predicting problematic smartphone use, which is getting worse year after year, in order to provide policy data and to identify the key variables that affect it. For this purpose, the classification rates of Decision Tree, Random Forest, and Support Vector Machine, which are machine learning (artificial intelligence) analysis methods, were compared. The data came from 25,465 people who responded to the '2018 Problematic Smartphone Use Survey' provided by the Korea Information Society Agency and were analyzed using the R statistical package (ver. 3.6.2). As a result, the three classification techniques showed similar classification rates, and there was no problem of overfitting the model. The classification rate of the Support Vector Machine was the highest among the three classification methods, followed by Decision Tree and Random Forest. The top three variables affecting the classification rate among smartphone use types were the Life Service type, the Information Seeking type, and the Leisure Activity Seeking type.
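
The study itself was done in R; the hedged Python sketch below only illustrates the same kind of three-way comparison (Decision Tree, Random Forest, SVM) by cross-validated classification rate, on synthetic stand-in data rather than the survey responses.

```python
# Compare three classifiers by 5-fold cross-validated accuracy on
# hypothetical survey-style features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=15, random_state=42)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=5, random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # CV guards against overfitting
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```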

Enhancement of Tongue Segmentation by Using Data Augmentation (데이터 증강을 이용한 혀 영역 분할 성능 개선)

  • Chen, Hong;Jung, Sung-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.5 / pp.313-322 / 2020
  • A large volume of data improves the robustness of deep learning models and helps avoid overfitting problems. In automatic tongue segmentation, the availability of annotated tongue images is often limited because collecting and labeling tongue image datasets is difficult in practice. Data augmentation can expand the training dataset and increase the diversity of training data by using label-preserving transformations without collecting new data. In this paper, augmented tongue image datasets were developed using seven augmentation techniques such as image cropping, rotation, flipping, and color transformations. The performance of the data augmentation techniques was studied using state-of-the-art transfer learning models such as InceptionV3, EfficientNet, ResNet, and DenseNet. Our results show that geometric transformations lead to larger performance gains than color transformations and that segmentation accuracy can be increased by 5% to 20% compared with no augmentation. Furthermore, a dataset augmented with a random linear combination of geometric and color transformations gives better segmentation performance than all other datasets and achieves the best accuracy of 94.98% with the InceptionV3 model.
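
The following is a small illustrative sketch, using torchvision, of the two augmentation families compared above (geometric and color); it does not reproduce the paper's seven specific techniques or their parameters.

```python
# Label-preserving augmentations: geometric (crop, rotate, flip) and color
# (jitter), combined into one training transform. Parameters are illustrative.
from PIL import Image
from torchvision import transforms

geometric = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # cropping
    transforms.RandomRotation(degrees=15),                  # rotation
    transforms.RandomHorizontalFlip(p=0.5),                 # flipping
])

color = transforms.ColorJitter(brightness=0.2, contrast=0.2,
                               saturation=0.2, hue=0.05)    # color transform

train_transform = transforms.Compose([geometric, color, transforms.ToTensor()])

img = Image.new("RGB", (256, 256))          # dummy image standing in for a tongue photo
tensor = train_transform(img)               # shape: (3, 224, 224)
```

Note that in a real segmentation pipeline the geometric transforms must be applied identically to the image and its mask so the labels stay aligned, while color transforms apply to the image only.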

A study on bias effect of LASSO regression for model selection criteria (모형 선택 기준들에 대한 LASSO 회귀 모형 편의의 영향 연구)

  • Yu, Donghyeon
    • The Korean Journal of Applied Statistics / v.29 no.4 / pp.643-656 / 2016
  • High-dimensional data, in which the number of variables is greater than the number of samples, are frequently encountered in various fields. For such data, it is usually necessary to select variables in order to estimate regression coefficients and avoid overfitting. A penalized regression model performs variable selection and coefficient estimation simultaneously, which makes it a frequent choice for high-dimensional data. However, a penalized regression model still needs to select the optimal model by choosing a tuning parameter based on a model selection criterion. This study deals with the effect of the bias of LASSO regression on model selection criteria. We numerically describe the bias effect on the model selection criteria and apply the proposed correction to the identification of biomarkers for lung cancer based on gene expression data.
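
As context for the criteria being discussed, the sketch below computes a plain, uncorrected BIC along a LASSO path on synthetic p >> n data. The paper's bias correction is not implemented, and the degrees-of-freedom and BIC forms used here are common textbook choices, not the authors'.

```python
# Choose the LASSO tuning parameter by minimizing an (uncorrected) BIC
# computed along the regularization path.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=80, n_features=200, n_informative=5,
                       noise=5.0, random_state=0)          # p >> n setting

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
n = len(y)
bic_values = []
for k in range(len(alphas)):
    beta = coefs[:, k]
    rss = np.sum((y - X @ beta) ** 2)
    df = int(np.sum(beta != 0))                 # active-set size as degrees of freedom
    bic_values.append(n * np.log(rss / n) + np.log(n) * df)

best = int(np.argmin(bic_values))
print(f"BIC-selected alpha = {alphas[best]:.4f}, "
      f"{int(np.sum(coefs[:, best] != 0))} variables selected")
```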

LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.17-23 / 2018
  • We propose an LSTM-based RNN model that can effectively perform automatic word spacing. For long or noisy sentences, which are known to be difficult to handle in neural network learning, we defined proper input and decoding data formats and added dropout, bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust learning model developed in this study, which avoided overfitting through dropout, trained well and returned meaningful results on Korean word spacing and its patterns. The experimental results showed that the LSTM sequence-to-sequence model achieves an F1-measure of 0.94, which is better than the rule-based deep-learning method using GRU-CRF.
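
The sketch below simplifies the paper's sequence-to-sequence model with attention into a character-level tagging setup (predict "space after this character" or not) using a bidirectional multi-layer LSTM with dropout, the regularization credited above with avoiding overfitting. Vocabulary size and dimensions are made up for illustration.

```python
# Simplified word-spacing model: tag each character with space / no space.
import torch
import torch.nn as nn

class SpacingTagger(nn.Module):
    def __init__(self, vocab_size=2000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True,
                            dropout=0.3)                    # dropout between LSTM layers
        self.classifier = nn.Linear(hidden * 2, 2)          # space / no space

    def forward(self, char_ids):
        h, _ = self.lstm(self.embed(char_ids))
        return self.classifier(h)                           # (batch, seq, 2)

model = SpacingTagger()
batch = torch.randint(0, 2000, (4, 30))                     # 4 sentences, 30 characters
logits = model(batch)
labels = torch.randint(0, 2, (4, 30))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2), labels.reshape(-1))
loss.backward()
```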

Study on Beamforming of Conformal Array Antenna Using Support Vector Regression (Support Vector Regression을 이용한 컨포멀 배열 안테나의 빔 형성 연구)

  • Lee, Kang-In;Jung, Sang-Hoon;Ryu, Hong-Kyun;Yoon, Young-Joong;Nam, Sang-Wook;Chung, Young-Seek
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.29 no.11 / pp.868-877 / 2018
  • In this paper, we propose a new beamforming algorithm for a conformal array antenna based on support vector regression (SVR). While the conventional least squares method (LSM) considers all sample errors, SVR considers only the errors beyond a given error bound when obtaining the optimum weight vector, which yields a sparse solution and has the advantage of minimizing the overfitting problem. To verify the performance of the proposed algorithm, we apply SVR to the experimentally measured active element patterns of the conformal array antenna and obtain the weights for beamforming. In addition, we compare the beamforming results of SVR and LSM.
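
The toy Python sketch below only demonstrates the SVR property the abstract leans on, namely that samples whose errors stay inside the epsilon tube do not become support vectors, giving a sparse solution. It is a real-valued curve fit over angle with made-up numbers, not the complex-valued weight computation on measured active element patterns.

```python
# Epsilon-insensitive SVR fit of a hypothetical beam-pattern shape over angle.
import numpy as np
from sklearn.svm import SVR

angles = np.linspace(-90, 90, 181).reshape(-1, 1)            # degrees
desired = np.exp(-(angles.ravel() / 20.0) ** 2)               # hypothetical main lobe
noisy = desired + 0.05 * np.random.default_rng(0).standard_normal(desired.shape)

# Errors smaller than epsilon are ignored, so only a subset of the 181
# samples end up as support vectors (a sparse solution).
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(angles, noisy)

print("support vectors used:", len(svr.support_), "of", len(angles))
rmse = np.sqrt(np.mean((svr.predict(angles) - desired) ** 2))
print("fit RMSE against the clean pattern:", round(float(rmse), 4))
```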

A review of gene selection methods based on machine learning approaches (기계학습 접근법에 기반한 유전자 선택 방법들에 대한 리뷰)

  • Lee, Hajoung;Kim, Jaejik
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.667-684 / 2022
  • Gene expression data present the level of mRNA abundance for each gene, and analyses of gene expression have provided key ideas for understanding the mechanisms of diseases and developing new drugs and therapies. Nowadays, high-throughput technologies such as DNA microarray and RNA-sequencing enable the simultaneous measurement of thousands of gene expressions, giving rise to the characteristic of gene expression data known as high dimensionality. Due to this high dimensionality, learning models for gene expression data are prone to overfitting, and to address this issue, dimension reduction or feature selection techniques are commonly used as a preprocessing step. In particular, gene selection methods applied in the preprocessing step can remove irrelevant and redundant genes and identify important genes. Various gene selection methods have been developed in the context of machine learning. In this paper, we intensively review recent work on gene selection methods using machine learning approaches. In addition, the underlying difficulties of current gene selection methods as well as future research directions are discussed.
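
As a generic illustration of two of the surveyed families, the sketch below runs a filter method (ANOVA F-test) and an embedded method (L1-penalized logistic regression) on a synthetic expression-like matrix; it is not tied to any particular method discussed in the review.

```python
# Filter-style and embedded-style gene selection on a synthetic p >> n matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)       # p >> n, like expression data

# Filter method: keep the 50 "genes" with the largest ANOVA F statistics.
filter_sel = SelectKBest(score_func=f_classif, k=50).fit(X, y)
print("filter keeps:", int(filter_sel.get_support().sum()), "features")

# Embedded method: only features with nonzero coefficients under L1 survive.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("embedded keeps:", int(np.sum(l1_model.coef_ != 0)), "features")
```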

Boosting the Performance of the Predictive Model on the Imbalanced Dataset Using SVM Based Bagging and Out-of-Distribution Detection (SVM 기반 Bagging과 OoD 탐색을 활용한 제조공정의 불균형 Dataset에 대한 예측모델의 성능향상)

  • Kim, Jong Hoon;Oh, Hayoung
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.455-464 / 2022
  • Datasets from a manufacturing process have two distinctive characteristics: severe class imbalance and many out-of-distribution (OoD) samples. Well-known strategies such as oversampling the minority class and down-sampling the majority class handle the class imbalance, and SMOTE has recently become a common choice for addressing the issue. However, out-of-distribution samples have been studied mainly with neural networks, and OoD detection has rarely been applied to predictive models built on conventional machine learning algorithms such as SVM, Random Forest, and KNN. Conventional machine learning algorithms can outperform neural networks in prediction performance here, because neural networks are vulnerable to overfitting and require much larger datasets than conventional machine learning algorithms do. Therefore, we suggest a new approach that utilizes out-of-distribution detection based on the SVM algorithm. In addition, a bagging technique is adopted to improve the precision of the model.
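
Below is a hedged sketch of the overall pipeline described above: a one-class SVM stands in for the OoD detector and a bagging ensemble of SVMs handles classification. Thresholds, resampling (SMOTE is replaced here by class weighting), and all hyperparameters are assumptions that do not reproduce the paper's setup.

```python
# OoD filtering with a one-class SVM, then bagged SVM classification on the
# remaining in-distribution samples of an imbalanced synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.svm import SVC, OneClassSVM
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)                  # severe class imbalance

# 1) OoD detection: keep only samples the one-class SVM accepts as inliers.
ood = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X)
inlier_mask = ood.predict(X) == 1
X_in, y_in = X[inlier_mask], y[inlier_mask]

# 2) Bagging of SVMs; class weighting stands in for SMOTE-style resampling.
base_svm = SVC(kernel="rbf", class_weight="balanced")
bagged = BaggingClassifier(estimator=base_svm, n_estimators=10,
                           random_state=0).fit(X_in, y_in)
print("training accuracy on inliers:", round(bagged.score(X_in, y_in), 3))
```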