• Title/Abstract/Keyword: Training and Validation Data

Search results: 317

인공지능 데이터 품질검증 기술 및 오픈소스 프레임워크 분석 연구 (An Evaluation Study on Artificial Intelligence Data Validation Methods and Open-source Frameworks)

  • 윤창희;신호경;추승연;김재일
    • 한국멀티미디어학회논문지
    • /
    • Vol. 24, No. 10
    • /
    • pp.1403-1413
    • /
    • 2021
  • In this paper, we investigate automated data validation techniques for artificial intelligence training, and review open-source frameworks, such as Google's TensorFlow Data Validation (TFDV), that support automated data validation in the AI model development process. We also introduce an experimental study using public data sets to demonstrate the effectiveness of the open-source data validation framework. In particular, we present experimental results of the data validation functions for schema testing and discuss the limitations of the current open-source frameworks for semantic data. Lastly, we introduce the latest studies on semantic data validation using machine learning techniques.
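The schema-testing idea above can be sketched in plain Python. This is a hand-rolled illustration of the concept, not the TFDV API; the column names, anomaly categories, and range rule are all assumptions:

```python
# Minimal sketch of schema-based data validation in the spirit of TFDV:
# infer a schema from trusted training data, then flag anomalies in new
# (serving) data. All field names and thresholds here are illustrative.

def infer_schema(rows):
    """Record, per column, the observed type and value range/domain."""
    schema = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        if all(isinstance(v, (int, float)) for v in values):
            schema[col] = {"type": "numeric",
                           "min": min(values), "max": max(values)}
        else:
            schema[col] = {"type": "string", "domain": set(values)}
    return schema

def validate(rows, schema):
    """Return a list of (row_index, column, reason) anomalies."""
    anomalies = []
    for i, row in enumerate(rows):
        for col, spec in schema.items():
            if col not in row:
                anomalies.append((i, col, "missing column"))
            elif spec["type"] == "numeric":
                v = row[col]
                if not isinstance(v, (int, float)):
                    anomalies.append((i, col, "type mismatch"))
                elif not spec["min"] <= v <= spec["max"]:
                    anomalies.append((i, col, "out of range"))
            elif row[col] not in spec["domain"]:
                anomalies.append((i, col, "unseen category"))
    return anomalies

train = [{"age": 25, "label": "yes"}, {"age": 40, "label": "no"}]
schema = infer_schema(train)
serving = [{"age": 33, "label": "yes"},     # clean row
           {"age": 120, "label": "maybe"}]  # out of range + unseen category
issues = validate(serving, schema)
```

TFDV provides this workflow through statistics generation, schema inference, and anomaly reporting; the sketch only mirrors the shape of that process, and (as the abstract notes) such schema checks say nothing about semantic validity.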

신경회로망을 이용한 원자력발전소 증기발생기의 모델링 (Modeling of Nuclear Power Plant Steam Generator using Neural Networks)

  • 이재기;최진영
    • 제어로봇시스템학회논문지
    • /
    • Vol. 4, No. 4
    • /
    • pp.551-560
    • /
    • 1998
  • This paper presents a neural network model representing the complex hydro-thermo-dynamic characteristics of a steam generator in nuclear power plants. The key modeling processes include the training data gathering process, analysis of the system dynamics and determination of the neural network structure, the training process, and the final process of validating the trained model. In this paper, we suggest a method of gathering training data from an unstable steam generator so that the data sufficiently represent the dynamic characteristics of the plant over a wide operating range. In addition, we define the inputs and outputs of the neural network model by analyzing the system dimension, relative degree, and inputs/outputs of the plant. Several types of neural networks are applied in the modeling and training process. The trained networks are verified using a class of test data, and their performances are discussed.
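The input/output definition step described above, where past plant outputs and inputs become the network's regressors, can be sketched as follows. The lag counts and toy signals are illustrative assumptions, not the paper's actual system order or relative degree:

```python
# A minimal sketch of building a NARX-style training set for dynamic
# system modeling: the network input at time t stacks past plant
# outputs y and inputs u, whose lag counts play the role of the system
# order / relative degree analysis. The lags and signals are toy values.

def build_narx_dataset(u, y, n_y=2, n_u=2):
    """Return (X, T): regressors [y(t-1..t-n_y), u(t-1..t-n_u)] -> y(t)."""
    X, T = [], []
    start = max(n_y, n_u)
    for t in range(start, len(y)):
        row = [y[t - k] for k in range(1, n_y + 1)]   # past outputs
        row += [u[t - k] for k in range(1, n_u + 1)]  # past inputs
        X.append(row)
        T.append(y[t])                                # one-step target
    return X, T

u = [0.0, 1.0, 1.0, 0.5, 0.2, 0.0]   # toy control input sequence
y = [0.0, 0.1, 0.4, 0.6, 0.5, 0.3]   # toy plant output sequence
X, T = build_narx_dataset(u, y)
```

The resulting (X, T) pairs are what a feedforward network would then be trained on; validation with a held-out class of test data proceeds as in the paper.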


신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 - (A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction)

  • 이영찬;곽수환
    • 지능정보연구
    • /
    • Vol. 5, No. 1
    • /
    • pp.95-101
    • /
    • 1999
  • In this paper, a comparison between different methods to combine predictions from neural networks will be given. These methods are bagging, bumping, and balancing. They are based on decomposing the ensemble generalization error into an ambiguity term and a term incorporating the generalization performances of the individual networks. Neural networks and AI machine learning models are prone to overfitting. A strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process. The complete data set is split up into a training set and a validation set. Training is stopped when the error on the validation set starts increasing. The stability of the networks is highly dependent on the division into training and validation sets, and also on the random initial weights and the chosen minimization procedure. This causes early-stopped networks to be rather unstable: a small change in the data or different initial conditions can produce large changes in the prediction. Therefore, it is advisable to apply the same procedure several times, starting from different initial weights. This technique is often referred to as training ensembles of neural networks. In this paper, we present a comparison of three statistical methods to prevent overfitting of neural networks.
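The early-stopping rule and the ensemble of restarts described above can be sketched as follows. The validation-error curves and toy predictions are illustrative assumptions rather than real training runs:

```python
# Sketch of early stopping plus an ensemble of restarts: training halts
# once validation error stops improving, and because the stopping point
# depends on the split and the random initial weights, several runs are
# averaged. Error curves are supplied directly here for illustration.

def early_stop_epoch(val_errors, patience=2):
    """Return the epoch of the best validation error, scanning until
    the error has failed to improve for `patience` epochs."""
    best_epoch, best_err, waited = 0, val_errors[0], 0
    for epoch, err in enumerate(val_errors[1:], start=1):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

def ensemble_predict(member_predictions):
    """Average the predictions of independently trained networks."""
    n = len(member_predictions)
    return [sum(p[i] for p in member_predictions) / n
            for i in range(len(member_predictions[0]))]

# Three restarts stop at different epochs -- the instability the
# abstract describes -- but their averaged predictions are more stable
# than any single early-stopped network.
curves = [[0.9, 0.5, 0.4, 0.45, 0.5],
          [0.8, 0.6, 0.55, 0.6, 0.7],
          [0.7, 0.4, 0.42, 0.41, 0.5]]
stops = [early_stop_epoch(c) for c in curves]
preds = ensemble_predict([[1.0, 0.0], [0.8, 0.2], [0.6, 0.4]])
```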


Cross-Validation Probabilistic Neural Network Based Face Identification

  • Lotfi, Abdelhadi;Benyettou, Abdelkader
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 5
    • /
    • pp.1075-1086
    • /
    • 2018
  • In this paper, a cross-validation algorithm for training probabilistic neural networks (PNNs) is presented for application to automatic face identification. Standard PNNs perform quite well for small and medium-sized databases, but they suffer from serious problems when used with large databases like those encountered in biometrics applications. To address this issue, we propose a new training algorithm for PNNs that reduces the hidden layer's size and avoids over-fitting at the same time. The proposed training algorithm generates networks with a smaller hidden layer that contains only representative examples from the training data set. Moreover, adding new classes or samples after training does not require retraining, which is one of the main characteristics of this solution. Results presented in this work show a great improvement in both the processing speed and the generalization of the proposed classifier. This improvement is mainly due to the significant reduction in the size of the hidden layer.
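The classification rule of a standard PNN, where each hidden unit stores one training pattern and class scores are summed Gaussian kernels, can be sketched as follows. The kernel width and toy patterns are assumptions, and the paper's actual representative-example selection rule is not reproduced here:

```python
# A numpy sketch of a probabilistic neural network (PNN) classifier in
# the Parzen-window sense: each hidden unit stores one pattern, and a
# class score sums the Gaussian kernel over that class's patterns.
# Shrinking the hidden layer amounts to keeping fewer stored patterns.

import numpy as np

def pnn_predict(x, patterns, labels, sigma=1.0):
    """Classify x by the class with the largest summed Gaussian kernel."""
    d2 = np.sum((patterns - x) ** 2, axis=1)      # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel activations
    classes = np.unique(labels)
    scores = {c: k[labels == c].sum() for c in classes}
    return max(scores, key=scores.get)

patterns = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
labels = np.array([0, 0, 1, 1])
# Adding a new class or sample later needs no retraining: simply append
# rows to `patterns` and `labels`, as the abstract emphasizes.
pred = pnn_predict(np.array([0.1, 0.1]), patterns, labels)
```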

Finding Unexpected Test Accuracy by Cross Validation in Machine Learning

  • Yoon, Hoijin
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 12spc
    • /
    • pp.549-555
    • /
    • 2021
  • Machine learning (ML) splits data into three parts, usually 60% for training, 20% for validation, and 20% for testing. The split is purely quantitative rather than selecting each set of data by a criterion, which is a very important concept for the adequacy of test data. ML measures a model's accuracy by applying a set of validation data, and revises the model until the validation accuracy reaches a certain level. After the validation process, the completed model is tested with the set of test data, which the model has not yet seen. If the set of test data covers the model's attributes well, the test accuracy will be close to the validation accuracy of the model. To check whether ML's set of test data works adequately, we design an experiment and see if the test accuracy of a model is always close to its validation accuracy as expected. The experiment builds 100 different SVM models for each of six data sets published in the UCI ML repository. From the test accuracy and validation accuracy of the 600 cases, we find some unexpected cases where the test accuracy is very different from the validation accuracy. Consequently, it is not always true that ML's set of test data is adequate to assure a model's quality.
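The purely quantitative 60/20/20 split the paper questions can be sketched as follows; the helper name and seed are illustrative assumptions:

```python
# Sketch of a purely quantitative 60/20/20 split: data are divided by
# position after shuffling, with no criterion guaranteeing that the
# test set covers the model's attributes -- exactly the gap the paper's
# experiment probes.

import random

def split_60_20_20(data, seed=0):
    rng = random.Random(seed)
    data = data[:]            # copy, so the caller's list is untouched
    rng.shuffle(data)
    n = len(data)
    n_train, n_val = int(n * 0.6), int(n * 0.2)
    return (data[:n_train],                       # 60% training
            data[n_train:n_train + n_val],        # 20% validation
            data[n_train + n_val:])               # 20% testing

train, val, test = split_60_20_20(list(range(100)))
```

Repeating such a split with different seeds (100 models per data set in the paper) is what exposes cases where validation and test accuracy diverge.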

Prediction of the compressive strength of fly ash geopolymer concrete using gene expression programming

  • Alkroosh, Iyad S.;Sarker, Prabir K.
    • Computers and Concrete
    • /
    • Vol. 24, No. 4
    • /
    • pp.295-302
    • /
    • 2019
  • Evolutionary algorithms based on conventional statistical methods such as regression and classification have been widely used in data mining applications. This work applies gene expression programming (GEP) to predicting the compressive strength of fly ash geopolymer concrete, which is gaining increasing interest as an environmentally friendly alternative to Portland cement concrete. Based on 56 test results from the existing literature, a model was obtained relating the compressive strength of fly ash geopolymer concrete to the significantly influencing mix design parameters. The predictions of the model in training and validation were evaluated. The coefficient of determination (R²), mean (μ) and standard deviation (σ) were 0.89, 1.0 and 0.12 respectively for the training set, and 0.89, 0.99 and 0.13 respectively for the validation set. The prediction error of the model was also evaluated and found to be very low. This indicates that the predictions of the GEP model are in close agreement with the experimental results, suggesting GEP as a promising method for compressive strength prediction of fly ash geopolymer concrete.
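The reported statistics can be computed as follows. The sample values are invented for illustration, and μ and σ are assumed here to be taken over the predicted-to-measured strength ratio (which their values near 1.0 suggest, though the abstract does not say so explicitly):

```python
# A numpy sketch of the evaluation statistics reported above: the
# coefficient of determination R^2, plus the mean and standard
# deviation of the predicted-to-measured ratio (an assumption about
# what mu and sigma refer to). Strengths are toy values in MPa.

import numpy as np

def r_squared(measured, predicted):
    """R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

measured = np.array([40.0, 45.0, 50.0, 55.0, 60.0])
predicted = np.array([41.0, 44.0, 51.0, 54.0, 61.0])

r2 = r_squared(measured, predicted)
ratio = predicted / measured          # close to 1.0 for a good model
mu, sigma = ratio.mean(), ratio.std()
```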

Subset 샘플링 검증 기법을 활용한 MSCRED 모델 기반 발전소 진동 데이터의 이상 진단 (Anomaly Detection In Real Power Plant Vibration Data by MSCRED Base Model Improved By Subset Sampling Validation)

  • 홍수웅;권장우
    • 융합정보논문지
    • /
    • Vol. 12, No. 1
    • /
    • pp.31-38
    • /
    • 2022
  • This paper presents a field application of MSCRED (Multi-Scale Convolutional Recurrent Encoder-Decoder), an expert-independent, unsupervised neural-network model for multivariate time-series analysis, together with Subset Sampling Validation, a training-data sampling technique devised to overcome a limitation of the autoencoder-based MSCRED: its training data must be uncontaminated. Using labeled vibration data from power plant equipment, we compare the anomaly scores obtained when 1) abnormal data are mixed into the training set and the model is trained on it, and 2) under the same conditions, the abnormal data are removed from the training set by Subset Sampling Validation, thereby evaluating the effectiveness of MSCRED combined with Subset Sampling Validation. In this way, the paper presents an anomaly diagnosis framework that is expert-independent and robust to corrupted data, offering a concise and accurate solution for a wide range of multivariate time-series domains.
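The subset-sampling idea, removing suspect samples from a contaminated training set before fitting the detector, can be sketched as follows. The per-feature-mean "model" is a deliberate stand-in assumption for the MSCRED network, and the keep ratio is illustrative:

```python
# Sketch of subset-sampling-style training-data filtering: an
# autoencoder-style detector assumes clean training data, so samples
# whose anomaly score against a fitted model is too high are dropped
# before the final training. Here the "model" is just the per-feature
# mean and the score is the distance to it -- a toy stand-in.

import numpy as np

def subset_sampling_filter(X, keep_ratio=0.8):
    """Score each sample against a model fitted on the data, then keep
    only the keep_ratio lowest-scoring (most normal) samples."""
    center = X.mean(axis=0)                       # fit on all data
    scores = np.linalg.norm(X - center, axis=1)   # anomaly scores
    n_keep = int(len(X) * keep_ratio)
    keep = np.argsort(scores)[:n_keep]            # lowest scores first
    # The returned subset no longer contains the contaminating samples
    # that would otherwise drag the final model toward them.
    return X[np.sort(keep)]

normal = np.full((8, 3), 1.0)
anomalies = np.full((2, 3), 10.0)     # contamination mixed into training
X_clean = subset_sampling_filter(np.vstack([normal, anomalies]))
```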

대용량 훈련 데이타의 점진적 학습에 기반한 얼굴 검출 방법 (Face Detection Based on Incremental Learning from Very Large Size Training Data)

  • 박지영;이준호
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 31, No. 7
    • /
    • pp.949-958
    • /
    • 2004
  • This study proposes a new method that allows additional data to be learned incrementally while training a face detection classifier on very large training data. The goal of incremental learning is to update previously acquired knowledge by learning new information from the added data. In designing a classifier based on this learning scheme, it is very important that the final classifier reflect the characteristics of the entire training data set. To obtain an optimized final classifier, the proposed algorithm generates a validation set that represents the global characteristics of the training set, and determines the weights of the intermediate-stage classifiers by their classification performance on this set. Each intermediate classifier, the result of learning an individual data batch, is combined into the final classifier by weighted combination. Through repeated experiments, we confirmed that a face detection classifier trained with the proposed algorithm shows better detection performance than classifiers based on AdaBoost and Learn++.
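The weighted-combination step, where intermediate classifiers are weighted by their performance on a shared validation set, can be sketched as follows; the normalized-accuracy weighting rule and the toy threshold classifiers are illustrative assumptions:

```python
# Sketch of validation-weighted combination of intermediate
# classifiers: each classifier, trained on one data batch, gets a
# weight from its accuracy on a validation set that represents the
# whole training distribution; the final classifier is a weighted vote.

def combine_by_validation(classifiers, val_x, val_y):
    """Return a final classifier: weighted vote of the intermediates."""
    accs = []
    for clf in classifiers:
        correct = sum(1 for x, y in zip(val_x, val_y) if clf(x) == y)
        accs.append(correct / len(val_y))
    total = sum(accs)
    weights = [a / total for a in accs]   # normalized-accuracy weights

    def final(x):
        votes = {}
        for w, clf in zip(weights, classifiers):
            label = clf(x)
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)
    return final

# Two toy "intermediate classifiers": thresholds trained on different
# batches, one better matched to the validation distribution.
c1 = lambda x: int(x > 0.5)
c2 = lambda x: int(x > 0.9)
val_x, val_y = [0.2, 0.6, 0.8, 1.0], [0, 1, 1, 1]
final = combine_by_validation([c1, c2], val_x, val_y)
```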

뉴로-퍼지 소프트웨어 신뢰성 예측에 대한 최적의 데이터 분할비율에 관한 연구 (A Study of Optimal Ratio of Data Partition for Neuro-Fuzzy-Based Software Reliability Prediction)

  • 이상운
    • 정보처리학회논문지D
    • /
    • Vol. 8D, No. 2
    • /
    • pp.175-180
    • /
    • 2001
  • This paper studies the optimal ratio of validation data allocation when a neuro-fuzzy system is used to accurately predict the number of future software failures or the time of failure. Given training data, the early stopping method is commonly used to obtain optimal generalization ability while avoiding underfitting and overfitting. However, how much data to allocate to training versus validation must be determined empirically by trial and error, which takes excessive time. To find the optimal amount of validation data, various validation-data allocations were tried while increasing the number of rules. The experimental results showed good prediction ability even with minimal validation data. These results provide practical guidelines for applying neuro-fuzzy systems to the software reliability field.
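The trial-and-error search over allocation ratios that the paper seeks to shortcut can be sketched as follows; the candidate ratios and the toy mean-predictor "model" are illustrative assumptions standing in for the neuro-fuzzy system:

```python
# Sketch of the trial-and-error search over validation fractions:
# for each candidate ratio, fit on the training part, score on the
# held-out validation part, and keep the ratio with the lowest error.
# The "model" here is a toy mean predictor, not a neuro-fuzzy system.

def best_validation_ratio(data, ratios, fit_and_score):
    """Return (ratio, error) minimizing held-out error over ratios."""
    results = []
    for r in ratios:
        n_val = max(1, int(len(data) * r))
        train, val = data[:-n_val], data[-n_val:]
        results.append((r, fit_and_score(train, val)))
    return min(results, key=lambda t: t[1])

def mean_model_error(train, val):
    """Fit = training mean; error = mean absolute error on validation."""
    mean = sum(train) / len(train)
    return sum(abs(v - mean) for v in val) / len(val)

data = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0]
ratio, err = best_validation_ratio(data, [0.1, 0.2, 0.3],
                                   mean_model_error)
```

Each candidate ratio requires a full training run, which is why the paper calls the search time-consuming and looks for a minimal allocation that still generalizes.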


딥러닝을 이용한 당뇨성황반부종 등급 분류의 정확도 개선을 위한 검증 데이터 증강 기법 (Validation Data Augmentation for Improving the Grading Accuracy of Diabetic Macular Edema using Deep Learning)

  • 이태수
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 40, No. 2
    • /
    • pp.48-54
    • /
    • 2019
  • This paper proposed a method of validation data augmentation for improving the grading accuracy of diabetic macular edema (DME) using deep learning. The data augmentation technique is basically applied in order to secure diversity of data by transforming one image into several images through random translation, rotation, scaling and reflection when preparing input data for a deep neural network (DNN). In this paper, we apply this technique in the validation process of the trained DNN, and improve the grading accuracy by combining the classification results of the augmented images. To verify the effectiveness, 1,200 retinal images of the Messidor dataset were divided into training and validation data at a ratio of 7:3. By applying random augmentation to the 359 validation images, an accuracy improvement of 1.61 ± 0.55% was achieved in the case of six-fold augmentation (N = 6). This simple method showed that the accuracy can be improved over the N range from 2 to 6, with a correlation coefficient of 0.5667. Therefore, it is expected to help improve the diagnostic accuracy of DME with the grading information provided by the proposed DNN.
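The validation-time augmentation scheme, classifying N transformed copies of an image and combining the results, can be sketched as follows; the flip/shift transforms and the brightness-based scorer are illustrative assumptions, not the paper's DNN or its transform set:

```python
# A numpy sketch of validation/test-time augmentation: one image is
# randomly transformed N times, each variant is scored, and the class
# scores are averaged before taking the final grade. The transforms
# and the toy scorer are illustrative stand-ins.

import numpy as np

def augment(image, rng):
    """One random transform: optional horizontal flip plus a 1-pixel roll."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    shift = rng.integers(-1, 2)          # -1, 0, or +1 pixel
    return np.roll(image, shift, axis=1)

def predict_with_augmentation(image, score_fn, n=6, seed=0):
    """Average class scores over n augmented copies, then argmax."""
    rng = np.random.default_rng(seed)
    scores = np.mean([score_fn(augment(image, rng)) for _ in range(n)],
                     axis=0)
    return int(np.argmax(scores))

# Toy two-class scorer: "grades" by overall brightness, so augmentation
# averaging stabilizes the score without changing the decision.
def toy_scores(image):
    m = image.mean()
    return np.array([1.0 - m, m])

bright = np.full((4, 4), 0.9)
grade = predict_with_augmentation(bright, toy_scores)
```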