• Title/Summary/Keyword: Test Validation


Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.103-110 / 2022
  • Since satellite images generally include clouds, it is essential to detect or mask them before further image processing. Earlier research detected clouds using their physical characteristics; more recently, cloud detection methods based on deep learning image-segmentation techniques such as CNNs and modified U-Net architectures have been studied. Because image segmentation assigns a label to every pixel in an image, a precise pixel-based dataset is required for cloud detection, and obtaining an accurate training dataset matters more than the network configuration. Existing deep learning studies used different training datasets, and their test data were drawn from the intra-dataset, i.e., acquired by the same sensor and procedure as the training data, which makes it difficult to determine which network performs better overall. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained: one on the KOMPSAT-3 cloud dataset provided by the AIHUB site, and one on the L8-Cloud dataset from Landsat8 images released publicly by a Cloud-Net author. Test data from the intra-dataset of the KOMPSAT-3 cloud dataset were used to validate both networks. The simulation results show that the network trained on the KOMPSAT-3 cloud dataset outperforms the one trained on the L8-Cloud dataset, because Landsat8 and KOMPSAT-3 images have different ground sample distances (GSDs), which makes it difficult to achieve good results in cross-sensor validation. A network can be superior on intra-dataset tests yet inferior on cross-sensor data, so techniques that perform well on cross-sensor validation datasets need to be studied in the future.
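Pixel-level validation of cloud masks, as described above, typically relies on overlap metrics. A minimal sketch of a pixel-wise intersection-over-union (IoU) score for binary cloud masks in NumPy (the function name and arrays are illustrative, not from the paper):

```python
import numpy as np

def cloud_iou(pred_mask, true_mask):
    """Pixel-wise IoU between a predicted and a reference binary cloud mask."""
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union else 1.0

# Intra-dataset vs. cross-sensor evaluation would call this on test masks
# from the same sensor and from a different sensor, respectively.
pred = np.array([[1, 1], [0, 0]])
true = np.array([[1, 0], [0, 0]])
print(cloud_iou(pred, true))  # 0.5
```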

Validation of the Korean version of the Perinatal Infant Care Social Support scale: a methodological study

  • Park, Mihyeon;Yoo, Hyeji;Ahn, Sukhee
    • Women's Health Nursing / v.27 no.4 / pp.307-317 / 2021
  • Purpose: The purpose of this study was to develop and test the validity and reliability of the Korean version of the Perinatal Infant Care Social Support scale (K-PICSS) for postpartum mothers. Methods: This study used a cross-sectional design. The K-PICSS was developed through forward-backward translation. Online survey data were collected from 284 Korean mothers with infants 1-2 months of age. The 19-item K-PICSS consists of functional and structural domains; the functional domain measures social support for the infant care practices of postpartum mothers. Exploratory factor analysis (EFA) and known-group comparison were used to verify the construct validity of the K-PICSS, and social support and postpartum depression were also measured to test criterion validity. Psychometric testing was not applicable to the structural social support domain. Results: The average age of the mothers was 32.76±3.34 years, and they had been married for 38.45±29.48 months. Construct validity was supported by the EFA results, which confirmed a three-factor structure of the scale (informational support, supporting presence, and practical support). Significant correlations of the K-PICSS with social support (r=.71, p<.001) and depression (r=-.40, p<.001) were found. The K-PICSS showed reliable internal consistency, with Cronbach's α values of .90 overall and .82-.83 for the three subscales. The vast majority of respondents reported that their husband or their parents were their main sources of support for infant care. Conclusion: This study demonstrates that the K-PICSS has satisfactory construct validity and reliability for measuring infant care social support in Korea.
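Internal-consistency figures like the Cronbach's α = .90 reported above can be computed directly from an item-score matrix. A minimal sketch, assuming scores arranged as a respondents × items NumPy array (names and data are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 4 respondents x 3 items; real K-PICSS data are not public.
scores = np.array([[2., 3., 3.], [4., 4., 5.], [5., 5., 5.], [3., 4., 4.]])
print(round(cronbach_alpha(scores), 3))
```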

Development and Validation of a Korean Nursing Work Environment Scale for Critical Care Nurses (한국형 중환자실 간호근무환경 측정도구 개발 및 평가)

  • Lee, Hyo Jin;Moon, Ji Hyun;Kim, Se Ra;Shim, Mi Young;Kim, Jung Yeon;Lee, Mi Aie
    • Journal of Korean Clinical Nursing Research / v.27 no.3 / pp.279-293 / 2021
  • Purpose: The purpose of this study was to develop a Korean nursing work environment scale for critical care nurses (KNWES-CCN) and verify its validity and reliability. Methods: A total of 46 preliminary items were selected through expert content validity analysis of 64 candidate items derived from literature reviews and in-depth interviews with critical care nurses. In total, 535 critical care nurses from 21 hospitals responded to the preliminary questionnaire from February to March 2021. The collected data were analyzed for construct, convergent, and discriminant validity, internal consistency, and test-retest reliability. Results: Item analysis and exploratory factor analysis (EFA) identified 23 items in 4 factors that accounted for 55.6% of the total variance; the EFA was performed with the maximum likelihood method and direct oblimin rotation. In the confirmatory factor analysis, the KNWES-CCN was reduced to 21 items in 4 factors by deleting items that did not meet the criteria of a factor loading over .50 or a squared multiple correlation over .30. This model was considered suitable because it satisfied the fit indices and their acceptance criteria [χ2=440.47 (p<.001), CMIN/DF=2.41, GFI=.86, SRMR=.06, RMSEA=.07, TLI=.90, CFI=.91]. Item-total correlations ranged from .32 to .73, and internal consistency was Cronbach's α=.92. The test-retest correlation coefficient was .72 and the intra-class correlation coefficient was .83. Conclusion: The KNWES-CCN showed good validity and reliability, so its use is expected to help measure and improve the nursing work environment for critical care nurses in Korea.
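The item-total correlations reported above (.32 to .73) are commonly computed by correlating each item with the sum of the remaining items. A hedged NumPy sketch (function name and data are illustrative, not the study's):

```python
import numpy as np

def corrected_item_total(items):
    """Correlate each item with the sum of all other items (per column)."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)  # total minus item j
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

# Illustrative 4 respondents x 3 items.
scores = np.array([[2., 3., 3.], [4., 4., 5.], [5., 5., 5.], [3., 4., 4.]])
print(corrected_item_total(scores).round(2))
```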

Validation of the seismic response of an RC frame building with masonry infill walls - The case of the 2017 Mexico earthquake

  • Albornoz, Tania C.;Massone, Leonardo M.;Carrillo, Julian;Hernandez, Francisco;Alberto, Yolanda
    • Advances in Computational Design / v.7 no.3 / pp.229-251 / 2022
  • In 2017, an intraplate earthquake of Mw 7.1 occurred 120 km from Mexico City (CDMX). Most of the buildings that collapsed in the earthquake were flat-slab systems joined to reinforced concrete (RC) columns, unreinforced masonry, confined masonry, or dual systems. This article presents the simulated response of an actual six-story RC frame building with masonry infill walls that did not collapse during the 2017 earthquake; it has a structural system similar to that of many of the collapsed buildings and is located in a zone of high seismic amplification. Five 3D numerical models were used to simulate the seismic response of the building. The building's dynamic properties were identified using an ambient vibration test (AVT), enabling validation of the building's finite element models. Several assumptions were made to calibrate the numerical models to the properties identified from the AVT, such as the presence of adjacent buildings, variations in masonry properties, soil-foundation-structure interaction, and the contribution of non-structural elements. The results showed that the infill masonry wall would act as a compression strut and crack along the transverse direction because the shear stresses in the original model (0.85 MPa) exceeded the shear strength (0.38 MPa). In compression, the strut carries lower stresses (3.42 MPa), well below its capacity (6.8 MPa). Although the non-structural elements were not considered part of the lateral resisting system, the results showed that they could contribute by resisting part of the base shear force, reaching a force of 82 kN.
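The cracking conclusion above follows from a standard demand-versus-capacity comparison. A tiny sketch of that check using the stresses reported in the abstract (the helper function is illustrative):

```python
def demand_capacity(demand, capacity):
    """Demand-to-capacity ratio; a value above 1.0 means the limit state is exceeded."""
    return demand / capacity

shear = demand_capacity(0.85, 0.38)      # transverse shear in the infill strut, MPa
compression = demand_capacity(3.42, 6.8) # strut compression, MPa
print(round(shear, 2), round(compression, 2))  # shear > 1 (cracks), compression < 1 (ok)
```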

Deep Learning-based Pes Planus Classification Model Using Transfer Learning

  • Kim, Yeonho;Kim, Namgyu
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.21-28 / 2021
  • This study proposes a deep learning-based flat foot (pes planus) classification methodology using transfer learning. We used transfer learning with a pre-trained VGG16 model and a data augmentation technique to build a model with high predictive accuracy from a total of 176 images, consisting of 88 flat feet and 88 normal feet. To evaluate the performance of the proposed model, we compared the prediction accuracy of a basic CNN-based model with that of the model derived through the proposed methodology. The basic CNN model achieved 77.27% training accuracy, 61.36% validation accuracy, and 59.09% test accuracy, whereas the proposed model achieved 94.32%, 86.36%, and 84.09%, respectively, indicating that its accuracy was significantly higher than that of the basic CNN model.

Development and Validation of Physical Education Teaching Anxiety Scale for Preservice Special Teacher (예비특수교사의 체육교수불안 척도 개발 및 타당화)

  • Lee, Yong-Kuk
    • 한국체육학회지인문사회과학편 (The Korean Journal of Physical Education: Humanities and Social Sciences) / v.57 no.3 / pp.375-389 / 2018
  • This study aims to develop and validate a tool to measure physical education teaching anxiety (PETA) among preservice special teachers. To achieve this purpose, preliminary items for the measurement scale were first collected through open-ended questions and developed by inductive content analysis of responses from 100 preservice special teachers. Second, construct validity was investigated with an exploratory factor analysis of data from 100 preservice special teachers. Third, external validity was verified with a confirmatory factor analysis and t-tests on data from 300 preservice special teachers. As a result, the PETA measurement scale for preservice special teachers consists of 4 main factors with 14 items: pedagogical content knowledge (n=4), understanding students (n=4), class environment (n=3), and class management (n=3), and the scale shows adequate fit.

Development of Long-Term Hospitalization Prediction Model for Minor Automobile Accident Patients (자동차 사고 경상환자의 장기입원 예측 모델 개발)

  • DoegGyu Lee;DongHyun Nam;Sung-Phil Heo
    • Journal of Korea Society of Industrial Information Systems / v.28 no.6 / pp.11-20 / 2023
  • The cost of medical treatment for motor vehicle accidents is increasing every year. In this study, we created a model to predict long-term hospitalization (more than 18 days) among patients with minor injuries, the main driver of rising traffic accident medical expenses, using five algorithms including decision trees, and analyzed the factors affecting long-term hospitalization. The accuracy of the prediction models ranged from 91.377% to 91.451%, with no significant difference between models, although the random forest and XGBoost models had the highest accuracy of 91.451%. There were significant differences between models in the importance of explanatory variables, such as hospital location, disease name, and hospital type, between the long-stay and non-long-stay groups. The models were validated by comparing the average accuracy of each model under 10-fold cross-validation on the training data with its accuracy on the validation data. The chi-square test was used to test the categorical explanatory variables.
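Model comparison via 10-fold cross-validation, as described above, follows a standard pattern. A sketch with scikit-learn on synthetic stand-in data (the real patient variables are not public, so features and labels here are invented):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for patient records (hospital type, diagnosis, etc.).
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = long stay (> 18 days)

results = {}
for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV as in the study
    results[name] = scores.mean()
    print(name, round(results[name], 3))
```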

Performance Evaluation of U-net Deep Learning Model for Noise Reduction according to Various Hyper Parameters in Lung CT Images (폐 CT 영상에서의 노이즈 감소를 위한 U-net 딥러닝 모델의 다양한 학습 파라미터 적용에 따른 성능 평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.17 no.5 / pp.709-715 / 2023
  • In this study, image quality after noise reduction was evaluated using the U-net deep learning architecture on computed tomography (CT) images. To generate input data, Gaussian noise was applied to ground truth (GT) data, and 1,300 CT images were split into train, validation, and test sets at an 8:1:1 ratio. Adagrad, Adam, and AdamW were used as optimizer functions, and 10, 50, and 100 epochs were applied. In addition, learning rates of 0.01, 0.001, and 0.0001 were applied to the U-net model to compare output image quality. For quantitative analysis, the peak signal-to-noise ratio (PSNR) and coefficient of variation (COV) were calculated. Based on the results, the deep learning model was useful for noise reduction, and we suggest that the optimized hyperparameters for noise reduction in CT images are the AdamW optimizer, 100 epochs, and a learning rate of 0.0001.
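The PSNR and COV metrics used above have standard definitions. A minimal NumPy sketch (the peak value of 255 and the test image are illustrative; CT data may use a different dynamic range):

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    """Peak signal-to-noise ratio between a denoised image and ground truth."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def cov(roi):
    """Coefficient of variation of a region of interest (a noise measure)."""
    roi = np.asarray(roi, float)
    return roi.std() / roi.mean()

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                 # constant offset -> MSE = 100
print(round(psnr(noisy, ref), 2))  # 10*log10(255^2/100) ≈ 28.13
```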

Cold sensitivity classification using facial image based on convolutional neural network

  • Ilkoo Ahn;Younghwa Baek;Kwang-Ho Bae;Bok-Nam Seo;Kyoungsik Jung;Siwoo Lee
    • The Journal of Korean Medicine / v.44 no.4 / pp.136-149 / 2023
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian medicine. In this paper, we propose a model that quantitatively classifies cold sensitivity using a fully automated facial image analysis system. Methods: We investigated cold sensitivity in 452 subjects. Cold sensitivity was determined using a questionnaire, and the Cold Pattern Score (CPS) was used for analysis. Subjects with a CPS below the first quartile (low CPS group) formed the cold non-sensitivity group, and subjects with a CPS above the third quartile (high CPS group) formed the cold sensitivity group. After splitting the facial images into train/validation/test sets, the train and validation sets were fed into a convolutional neural network to train the model, and classification accuracy was then calculated on the test set. Results: The classification accuracy between the low and high CPS groups using facial images of all subjects was 76.17%; by sex, it was 69.91% for females and 62.86% for males. The deep learning model presumably used facial color or facial shape to separate the two groups, but it is difficult to determine which feature was more important. Conclusions: The experimental results show that the low and high CPS groups can be classified with a modest level of accuracy using only facial images. More advanced models need to be developed to increase classification accuracy.
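The quartile-based grouping above (below Q1 versus above Q3 of the CPS) can be sketched directly with NumPy percentiles (the scores here are illustrative, not study data):

```python
import numpy as np

# Illustrative Cold Pattern Scores for eight subjects.
cps = np.array([2, 5, 7, 9, 12, 15, 18, 21], dtype=float)

q1, q3 = np.percentile(cps, [25, 75])  # first and third quartiles
low_group = cps[cps < q1]    # cold non-sensitivity group
high_group = cps[cps > q3]   # cold sensitivity group
print(len(low_group), len(high_group))
```

Subjects between the quartiles are excluded, which sharpens the contrast the classifier has to learn at the cost of discarding the middle of the distribution.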

Coating defect classification method for steel structures with vision-thermography imaging and zero-shot learning

  • Jun Lee;Kiyoung Kim;Hyeonjin Kim;Hoon Sohn
    • Smart Structures and Systems / v.33 no.1 / pp.55-64 / 2024
  • This paper proposes a fusion imaging-based coating-defect classification method for steel structures that uses zero-shot learning. In the proposed method, a halogen lamp generates heat energy on the coating surface of a steel structure; the resulting heat responses are measured by an infrared (IR) camera, while photos of the coating surface are captured by a charge-coupled device (CCD) camera. The measured heat responses and visual images are then analyzed with zero-shot learning to classify the coating defects, and the estimated defects are visualized across the inspection surface. In contrast to older approaches that relied on visual inspection and were limited to surface defects, and to older artificial neural network (ANN)-based methods that required large amounts of data for training and validation, the proposed method accurately classifies both internal and external defects and can classify coating defects for unobserved classes not included in training. Additionally, the proposed model easily learns additional classification conditions, making it simple to add classes for problems of interest and field application. Based on validation via field testing, defect-type classification accuracy improved by 22.7% when fusing visual and thermal imaging compared with using only a visual dataset. Furthermore, the classification accuracy of the proposed method on a test dataset containing only trained classes was validated to be 100%; with word-embedding vectors for the labels of untrained classes, the classification accuracy was 86.4%.
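The word-embedding step above is what lets zero-shot learning label untrained classes: image features are projected into the label-embedding space and matched to the nearest label by similarity. A toy sketch with made-up 3-d "embeddings" (real systems use word2vec/GloVe-scale vectors, and these defect labels are only illustrative):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Illustrative word-embedding vectors for defect labels (values are made up).
label_vecs = {
    "blister":      np.array([0.9, 0.1, 0.0]),
    "crack":        np.array([0.1, 0.9, 0.1]),
    "delamination": np.array([0.0, 0.2, 0.9]),  # a class absent from training
}

def zero_shot_classify(feature_vec, label_vecs):
    """Assign the label whose embedding is most similar to the image feature."""
    return max(label_vecs, key=lambda k: cosine(feature_vec, label_vecs[k]))

# A fused visual/thermal feature already projected into the embedding space.
sample = np.array([0.05, 0.3, 0.85])
print(zero_shot_classify(sample, label_vecs))  # delamination
```

Because classification happens in the shared embedding space, adding a new defect class only requires an embedding for its label, not retraining on new images.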