• Title/Summary/Keyword: 인공지능모델 (artificial intelligence model)

Search Results: 1,546

A Performance Comparison of Machine Learning Classification Methods for Soil Creep Susceptibility Assessment (땅밀림 위험지 평가를 위한 기계학습 분류모델 비교)

  • Lee, Jeman;Seo, Jung Il;Lee, Jin-Ho;Im, Sangjun
    • Journal of Korean Society of Forest Science
    • /
    • v.110 no.4
    • /
    • pp.610-621
    • /
    • 2021
  • Soil creep, primarily caused by earthquakes and torrential rainfall, has occurred widely across the country. The Korea Forest Service has attempted to quantify soil creep susceptible areas using a discriminant value table to prevent or mitigate casualties and property damage in advance. With the advent of advanced computing technologies, machine learning-based classification models have been employed for managing mountainous disasters such as landslides and debris flows. This study aims to quantify soil creep susceptibility using several classifiers, namely the k-Nearest Neighbor (k-NN), Naive Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM) models. To develop the classification models, we downsampled 292 records from 4,618 field survey records. About 70% of the selected data were used for training, with the remaining 30% used for model testing. Against the test dataset, the developed models achieved classification accuracies of 0.727 for k-NN, 0.750 for NB, 0.807 for RF, and 0.750 for SVM. Furthermore, we estimated Cohen's kappa indices of 0.534, 0.580, 0.673, and 0.585, with AUC values of 0.872, 0.912, 0.943, and 0.834, respectively. Ranked by classification performance for soil creep susceptibility, the models were RF, NB, SVM, and k-NN in that order. Our findings indicate that machine learning classifiers can provide valuable information for establishing and implementing natural disaster management plans in mountainous areas.
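
The workflow described in this abstract maps naturally onto scikit-learn. Below is a minimal sketch, not the authors' code, of training the four classifiers on a 70/30 split and reporting accuracy, Cohen's kappa, and AUC; the file and column names are hypothetical.

```python
# Minimal sketch of the classifier comparison described above (hypothetical data file).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

df = pd.read_csv("soil_creep_survey.csv")          # hypothetical file name
X, y = df.drop(columns=["label"]), df["label"]     # hypothetical label column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "k-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    prob = model.predict_proba(X_te)[:, 1]
    print(name,
          round(accuracy_score(y_te, pred), 3),     # classification accuracy
          round(cohen_kappa_score(y_te, pred), 3),  # Cohen's kappa
          round(roc_auc_score(y_te, prob), 3))      # AUC
```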

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지
    • /
    • v.21 no.3
    • /
    • pp.129-146
    • /
    • 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to greater danger than sighted people because they cannot become aware of the fire quickly. Conventional fire detection methods such as smoke detectors are slow and unreliable because they rely on chemical sensors to detect fire particles. By using a vision sensor instead, fire can be detected much faster, as we show in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because they require hand-crafted features that do not generalize to various scenarios. With the help of recent advances in deep learning, this problem can be addressed with a deep learning-based object detector that detects fire in images from security cameras. Deep learning-based approaches learn features automatically, so they usually generalize well to various scenes. To maximize performance, we applied recent computer vision technologies such as the YOLO detector to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks with slightly different complexities that detect fire at different recall rates. Both models detect fire at 99% average precision, but one achieves 76% recall at 30 FPS while the other achieves 61% recall at 50 FPS. We also compare the memory consumption of the two models and demonstrate their robustness by testing on various real-world scenarios.
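
As a rough illustration of the inference loop this abstract describes, the sketch below runs a YOLO-style detector on camera frames using the modern ultralytics API as a stand-in for the detector used in the paper; the weights file name is hypothetical and would have to come from training on fire images.

```python
# Sketch of per-frame fire detection; "fire_detector.pt" is a hypothetical trained weights file.
import cv2
from ultralytics import YOLO

model = YOLO("fire_detector.pt")        # hypothetical fire-trained weights
cap = cv2.VideoCapture(0)               # security camera / webcam stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]      # run detection on one frame
    for box in results.boxes:                     # draw detected fire regions
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("fire detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```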

A Study of the Nonlinear Characteristics Improvement for a Electronic Scale using Multiple Regression Analysis (다항식 회귀분석을 이용한 전자저울의 비선형 특성 개선 연구)

  • Chae, Gyoo-Soo
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.6
    • /
    • pp.1-6
    • /
    • 2019
  • This study presents the development of a weight estimation model for an electronic scale with nonlinear characteristics using polynomial regression analysis. The output voltage of the load cell was measured directly using reference masses, and a polynomial regression model was obtained using the matrix and curve-fitting functions of MS Office Excel. The weight was measured in 100 g increments using a load-cell electronic scale with a capacity of up to 5 kg, and polynomial regression models were fitted. The error was calculated for simple (first-order), second-order, and third-order polynomial regressions. To analyze the suitability of each regression function, the coefficient of determination was presented to indicate the correlation between the estimated mass and the measured data. Using the third-order polynomial model proposed here, a very accurate model was obtained with a standard deviation of 10 g and a coefficient of determination of 1.0. The regression modeling approach presented here can also be applied to statistical research in areas such as weather forecasting, new drug development, and economic indicator analysis, where logistic regression analysis has been widely used in artificial intelligence fields.
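
The curve-fitting step described here can also be reproduced outside Excel. The sketch below, using made-up voltage/mass pairs, fits first-, second-, and third-order polynomials and reports the coefficient of determination for each.

```python
# Sketch of polynomial regression from load-cell voltage to mass (illustrative sample values).
import numpy as np

voltage = np.array([0.02, 0.41, 0.80, 1.21, 1.60, 2.01])   # measured output (V), illustrative
mass_g = np.array([0, 1000, 2000, 3000, 4000, 5000])        # reference mass (g)

for order in (1, 2, 3):
    coeffs = np.polyfit(voltage, mass_g, order)    # least-squares polynomial fit
    est = np.polyval(coeffs, voltage)              # estimated mass
    ss_res = np.sum((mass_g - est) ** 2)
    ss_tot = np.sum((mass_g - mass_g.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                       # coefficient of determination
    print(f"order {order}: R^2 = {r2:.4f}, "
          f"max error = {np.max(np.abs(mass_g - est)):.1f} g")
```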

Comparative Analysis of Anomaly Detection Models using AE and Suggestion of Criteria for Determining Outliers

  • Kang, Gun-Ha;Sohn, Jung-Mo;Sim, Gun-Wu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.8
    • /
    • pp.23-30
    • /
    • 2021
  • In this study, we present a comparative analysis of major autoencoder (AE)-based anomaly detection methods for quality determination in manufacturing processes, together with a new anomaly discrimination criterion. Owing to the characteristics of manufacturing sites, anomalous instances are few and their types vary greatly. These properties degrade the performance of AI-based anomaly detection models trained on datasets containing both normal and anomalous cases, and obtaining additional data for performance improvement incurs considerable time and cost. To address this problem, studies on AE-based models such as the AE and VAE, which perform anomaly detection using only normal data, are underway. In this work, based on convolutional AE, VAE, and dilated VAE models, statistics of the residual image, MSE, and information entropy were selected as outlier discrimination criteria to compare and analyze the performance of each model. In particular, the range statistic applied to the convolutional AE model showed the best performance, with a PRC AUC of 0.9570, F1 score of 0.8812, ROC AUC of 0.9548, and accuracy of 87.60%. This is an improvement of about 20 percentage points in accuracy over MSE, which has frequently been used as an outlier criterion, and confirms that model performance can be improved by the choice of outlier discrimination criterion.
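
For clarity, the sketch below contrasts the two outlier criteria discussed above, per-image MSE versus the range of the residual image from a convolutional AE; the autoencoder and image arrays are assumed to exist, and thresholds would be tuned on validation data.

```python
# Conceptual sketch (not the paper's implementation) of two outlier criteria
# computed from an autoencoder trained on normal images only.
import numpy as np

def anomaly_scores(autoencoder, images):
    """images: array of shape (N, H, W, C); autoencoder: a trained Keras-style model (assumed)."""
    recon = autoencoder.predict(images)                  # reconstructions
    residual = images - recon                            # residual images
    mse = np.mean(residual ** 2, axis=(1, 2, 3))         # classic MSE criterion
    rng = residual.max(axis=(1, 2, 3)) - residual.min(axis=(1, 2, 3))  # range criterion
    return mse, rng

# Usage (names assumed): flag samples whose score exceeds a validation-tuned threshold.
# mse_scores, range_scores = anomaly_scores(conv_ae, test_images)
# anomalies = range_scores > range_threshold
```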

A Study on the Restoration of Korean Traditional Palace Image by Adjusting the Receptive Field of Pix2Pix (Pix2Pix의 수용 영역 조절을 통한 전통 고궁 이미지 복원 연구)

  • Hwang, Won-Yong;Kim, Hyo-Kwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.5
    • /
    • pp.360-366
    • /
    • 2022
  • This paper presents an AI model structure for restoring photographs of Korean traditional palaces, of which only black-and-white photographs remain, to color photographs using Pix2Pix, one of the generative adversarial network techniques. Pix2Pix combines a synthetic image generator model with a discriminator model that determines whether a synthetic image is real or fake. This paper adjusts the receptive field of the discriminator and analyzes the results in light of the characteristics of the ancient palace photographs. The receptive field of the Pix2Pix discriminator used to restore black-and-white photographs is commonly fixed in size, but a fixed receptive field is not suitable for photographs containing varied image content. We therefore observed the results of changing the size of the fixed receptive field to identify a discriminator size that could reflect the characteristics of the ancient palaces. In the experiments, the receptive field of the discriminator was adjusted using the prepared palace photographs. We measure the model loss as the discriminator's receptive field changes and examine the photographs restored by the trained model.
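
The receptive-field adjustment discussed above is usually realized by changing the depth of the PatchGAN-style discriminator. The PyTorch sketch below follows the common Pix2Pix discriminator pattern, with the number of strided convolution layers (n_layers) controlling the receptive field; it is an illustration, not the authors' exact architecture.

```python
# PatchGAN-style discriminator sketch: deeper stacks of strided convolutions
# give each output patch a larger receptive field.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=6, base=64, n_layers=3):
        # in_channels=6: input/target image pair concatenated along channels
        super().__init__()
        layers = [nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, inplace=True)]
        ch = base
        for _ in range(n_layers - 1):          # more layers -> larger receptive field
            layers += [nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                       nn.BatchNorm2d(ch * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch *= 2
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # per-patch real/fake map
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

d = PatchDiscriminator(n_layers=3)
print(d(torch.randn(1, 6, 256, 256)).shape)   # patch-wise prediction map
```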

A Study on Orthogonal Image Detection Precision Improvement Using Data of Dead Pine Trees Extracted by Period Based on U-Net model (U-Net 모델에 기반한 기간별 추출 소나무 고사목 데이터를 이용한 정사영상 탐지 정밀도 향상 연구)

  • Kim, Sung Hun;Kwon, Ki Wook;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.251-260
    • /
    • 2022
  • Although the number of trees affected by pine wilt disease is decreasing, the affected area is expanding across the country. With recent advances in deep learning, the technology is being rapidly applied to studies on detecting pine wood nematode damage and dead trees. The purpose of this study is to acquire deep learning training data efficiently and to obtain accurate ground truth in order to further improve the detection ability of U-Net models through training. To achieve this, a filtering method that applies a step-by-step deep learning algorithm minimizes ambiguity in the model's analysis basis, enabling efficient analysis and judgment. As a result, in detecting dead pine trees caused by pine wood nematodes with the U-Net algorithm, the U-Net model trained on ground truth analyzed by period showed a recall 0.5%p lower, a precision 7.6%p higher, and an F1 score 4.1%p higher than the U-Net model trained on the previously provided ground truth. In the future, applying various filtering techniques may further increase detection precision, and a drone surveillance method combining drone orthoimages and artificial intelligence is judged to be usable in pine wilt disease prevention projects.
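
The recall, precision, and F1 comparison reported above reduces to pixel-wise counts on the predicted and ground-truth masks. The sketch below shows this evaluation for binary masks; it is a generic illustration rather than the study's code.

```python
# Pixel-wise precision/recall/F1 for a binary segmentation mask against ground truth.
import numpy as np

def segmentation_scores(pred_mask, true_mask):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Comparing the two trainings (previous vs. period-specific ground truth) then
# amounts to comparing these three scores on the same orthoimage test tiles.
```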

Development of Deep Learning Model for Detecting Road Cracks Based on Drone Image Data (드론 촬영 이미지 데이터를 기반으로 한 도로 균열 탐지 딥러닝 모델 개발)

  • Young-Ju Kwon;Sung-ho Mun
    • Land and Housing Review
    • /
    • v.14 no.2
    • /
    • pp.125-135
    • /
    • 2023
  • Drones are used in various fields, including land surveying, transportation, forestry/agriculture, marine, environment, disaster prevention, water resources, cultural assets, and construction, as their industrial importance and market size have increased. In this study, image data for deep learning were collected using a Mavic 3 drone capturing images at a shooting altitude of 20 m with ×7 magnification. Swin Transformer and UperNet were employed as the backbone and architecture of the deep learning model. About 800 labeled images were augmented to increase the amount of data. Training was performed in three rounds: the cross-entropy loss function was used in the first and second rounds, and the Tversky loss function in the third. In the future, when the crack detection model is advanced through convergence with the Internet of Things (IoT) in additional research, it will become possible to detect patching and potholes as well. In addition, real-time detection by drones is expected to quickly identify pavement sections requiring maintenance.
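
The Tversky loss used in the third training round weights false positives and false negatives asymmetrically. The PyTorch sketch below shows a standard binary-segmentation form of this loss; the alpha/beta values are illustrative, not those used in the study.

```python
# Tversky loss for binary crack segmentation: TI = TP / (TP + a*FP + b*FN), loss = 1 - TI.
import torch

def tversky_loss(logits, targets, alpha=0.3, beta=0.7, eps=1e-6):
    """logits, targets: tensors of shape (N, 1, H, W); targets in {0, 1}."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum(dim=(1, 2, 3))
    fp = (probs * (1 - targets)).sum(dim=(1, 2, 3))
    fn = ((1 - probs) * targets).sum(dim=(1, 2, 3))
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()

# Weighting false negatives more heavily (beta > alpha) pushes the model toward
# recovering thin cracks that cross-entropy alone tends to miss.
```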

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.9-17
    • /
    • 2024
  • Owing to the recent pandemic and the development of ICT, the use of non-face-to-face and unmanned systems is expanding, and understanding emotions in non-face-to-face communication is very important. Because emotion recognition methods for various facial expressions are required, artificial intelligence-based research is being conducted to improve facial expression emotion recognition from image data. However, existing research on facial expression emotion recognition requires high computing power and long training times because it relies on large amounts of data to improve accuracy. To overcome these limitations, this paper proposes a method of recognizing facial expressions using age and gender, which are image meta information, so that facial expressions can be recognized even with a small amount of data. For facial expression emotion recognition, faces are detected from the original image data using the YOLO Face model, age and gender are classified with a VGG model to obtain the image meta information, and seven emotions are then recognized using an EfficientNet model. Compared with a model trained on all the data, the proposed meta-information-based data classification learning model achieved higher accuracy.
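
Structurally, the proposed method is a three-stage pipeline. The sketch below outlines that flow with hypothetical placeholders for the trained YOLO Face, VGG, and EfficientNet models; only the overall control flow follows the abstract.

```python
# Structural sketch of the three-stage pipeline; the model objects are hypothetical
# placeholders (face_detector ~ YOLO Face, age_gender_model ~ VGG, emotion_models ~ EfficientNet).
def recognize_emotion(image, face_detector, age_gender_model, emotion_models):
    faces = face_detector(image)                     # 1) detect face regions
    results = []
    for face in faces:
        age_group, gender = age_gender_model(face)           # 2) classify image meta information
        emotion_model = emotion_models[(age_group, gender)]  # per-group emotion model
        emotion = emotion_model(face)                # 3) recognize one of seven emotions
        results.append((age_group, gender, emotion))
    return results
```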

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.317-326
    • /
    • 2024
  • Biometric recognition is a technology that identifies a person by extracting information on physiological and behavioral characteristics with a specific device. Cyber threats such as forgery, duplication, and hacking of biometric characteristics are increasing in the biometrics field. In response, security systems are being strengthened and becoming more complex, making them harder for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have suggested feature fusion methods, but comparisons between fusion methods are insufficient. Therefore, in this paper, we compare and evaluate fusion methods for multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with the 'Feature-Level' fusion method. However, because the EfficientNet-B7 model is large, research on model lightweighting is needed for biometric feature fusion.
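
'Feature-Level' fusion, which performed best here, concatenates feature vectors extracted per modality before classification. The PyTorch sketch below illustrates this with torchvision EfficientNet-B1 backbones for fingerprint, face, and iris inputs; the backbones and dimensions are illustrative, not the exact models used in the paper.

```python
# Feature-level fusion sketch: per-modality feature extraction, concatenation, classification.
import torch
import torch.nn as nn
from torchvision import models

class FeatureLevelFusion(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        def backbone():
            m = models.efficientnet_b1(weights=None)
            m.classifier = nn.Identity()        # keep the 1280-dim feature vector
            return m
        self.finger_net, self.face_net, self.iris_net = backbone(), backbone(), backbone()
        self.classifier = nn.Linear(1280 * 3, num_classes)

    def forward(self, fingerprint, face, iris):
        feats = torch.cat([self.finger_net(fingerprint),
                           self.face_net(face),
                           self.iris_net(iris)], dim=1)   # feature-level fusion
        return self.classifier(feats)

model = FeatureLevelFusion(num_classes=100)
out = model(torch.randn(2, 3, 224, 224),
            torch.randn(2, 3, 224, 224),
            torch.randn(2, 3, 224, 224))
print(out.shape)   # (2, 100)
```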

Deep Learning in Thyroid Ultrasonography to Predict Tumor Recurrence in Thyroid Cancers (인공지능 딥러닝을 이용한 갑상선 초음파에서의 갑상선암의 재발 예측)

  • Jieun Kil;Kwang Gi Kim;Young Jae Kim;Hye Ryoung Koo;Jeong Seon Park
    • Journal of the Korean Society of Radiology
    • /
    • v.81 no.5
    • /
    • pp.1164-1174
    • /
    • 2020
  • Purpose: To evaluate a deep learning model for predicting recurrence of thyroid tumors using preoperative ultrasonography (US). Materials and Methods: We included representative images from 229 patients (male:female = 42:187; mean age, 49.6 years) who had been diagnosed with thyroid cancer on preoperative US and subsequently underwent thyroid surgery. After selecting a representative transverse or longitudinal US image for each patient, we created a dataset of 898 images after augmentation. Python 2.7.6 and the Keras 2.1.5 framework were used for deep learning with a convolutional neural network. We compared the clinical and histological features between patients with and without recurrence. The predictive performance of the deep learning model was evaluated using receiver operating characteristic (ROC) analysis, and the area under the ROC curve served as a summary of its prognostic performance for predicting recurrent thyroid cancer. Results: Tumor recurrence was noted in 49 (21.4%) of the 229 patients. Tumor size and multifocality differed significantly between the groups with and without recurrence (p < 0.05). The overall mean area under the curve (AUC) of the deep learning model for predicting recurrent thyroid cancer was 0.9 ± 0.06. The mean AUC was 0.87 ± 0.03 for macrocarcinoma and 0.79 ± 0.16 for microcarcinoma. Conclusion: A deep learning model analyzing US images of thyroid cancer showed the potential to predict recurrence of thyroid cancer.
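
As an illustration of the evaluation step, the sketch below trains a small Keras CNN to output a recurrence probability and summarizes its performance with ROC AUC; the image arrays and labels (x_train, y_train, x_test, y_test) are assumed to be prepared beforehand, and the architecture is a placeholder, not the study's network.

```python
# Placeholder CNN for binary recurrence prediction from US images, evaluated with ROC AUC.
# x_train/x_test: preprocessed grayscale image arrays; y_train/y_test: binary labels (assumed).
import tensorflow as tf
from sklearn.metrics import roc_auc_score

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of recurrence
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_split=0.1, epochs=20, batch_size=16)

probs = model.predict(x_test).ravel()
print("ROC AUC:", roc_auc_score(y_test, probs))       # summary of prognostic performance
```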