• Title/Summary/Keyword: Deep Learning


A Study on the Automatic Digital DB of Boring Log Using AI (AI를 활용한 시추주상도 자동 디지털 DB화 방안에 관한 연구)

  • Park, Ka-Hyun;Han, Jin-Tae;Yoon, Youngno
    • Journal of the Korean Geotechnical Society / v.37 no.11 / pp.119-129 / 2021
  • The process of constructing the DB in the current geotechnical information DB system requires considerable human and time resources. In addition, it frequently causes accuracy problems because the current input method relies on a person viewing the PDF and entering the results manually. Therefore, this study proposes building an automatic digital DB of boring logs using AI (artificial intelligence). In order to automatically construct the DB for various boring log formats without exception, the boring log forms were classified using the deep learning model ResNet34 over a total of 6 boring log forms. As a result, the overall accuracy was 99.7% and the ROC_AUC score was 1.0, separating the boring log forms with very high performance. After that, the text in the PDF is automatically read using a robotic process automation (RPA) technique fine-tuned for each form. Furthermore, the general information, strata information, and standard penetration test information were extracted, separated, and saved in the same format provided by the geotechnical information DB system. Finally, the information in the boring logs was automatically converted into a DB at a speed of 140 pages per second.
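The reported ROC_AUC of 1.0 means every score the classifier assigned to one form class ranked above every score for the other. As a hedged illustration (the function and the toy scores below are hypothetical, not from the paper), AUC can be computed as the probability that a randomly chosen positive example outranks a randomly chosen negative one:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive example
    is scored higher than a randomly chosen negative one (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # pairwise comparison of every positive score against every negative score
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# perfectly separated toy scores, mirroring the reported AUC of 1.0
labels = [0, 0, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.3, 0.8, 0.9, 0.95]
print(roc_auc(scores, labels))  # 1.0
```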

Analysis Study on the Detection and Classification of COVID-19 in Chest X-ray Images using Artificial Intelligence (인공지능을 활용한 흉부 엑스선 영상의 코로나19 검출 및 분류에 대한 분석 연구)

  • Yoon, Myeong-Seong;Kwon, Chae-Rim;Kim, Sung-Min;Kim, Su-In;Jo, Sung-Jun;Choi, Yu-Chan;Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology / v.16 no.5 / pp.661-672 / 2022
  • After the outbreak of the SARS-CoV-2 virus that causes COVID-19, the disease spread around the world, and the rapidly rising numbers of infections and deaths caused a shortage of medical resources. As a way to address this problem, chest X-ray diagnosis using Artificial Intelligence (AI) received attention as a primary diagnostic method. The purpose of this study is to comprehensively analyze the detection of COVID-19 via AI. To achieve this purpose, 292 studies were collected through a series of classification methods. Based on these data, performance measurement information including Accuracy, Precision, Area Under the Curve (AUC), Sensitivity, Specificity, F1-score, Recall, K-fold, Architecture, and Class was analyzed. As a result, the average Accuracy, Precision, AUC, Sensitivity, and Specificity were 95.2%, 94.81%, 94.01%, 93.5%, and 93.92%, respectively. The performance measurements gradually increased year on year; furthermore, we studied the rate of change according to the number of classes and the amount of image data, the share of each architecture used, and the K-fold settings. Currently, diagnosis of COVID-19 using AI has several problems that prevent it from being used independently; however, it is expected to be sufficient for use as a doctor's assistant.
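The metrics surveyed in this study all derive from a binary confusion matrix. A minimal sketch with made-up counts (not any surveyed study's actual numbers):

```python
def metrics(tp, fp, tn, fn):
    """Classification metrics reviewed in the study, computed
    from the cells of a binary confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, precision=precision,
                sensitivity=sensitivity, specificity=specificity, f1=f1)

# hypothetical counts for a COVID-19 / non-COVID test split
m = metrics(tp=90, fp=5, tn=95, fn=10)
print(m)
```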

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.793-810 / 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different intensity between water and land makes it adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease the accuracy of waterbody detection. In this study, we suggest two modules, named the 'morphology module' and the 'edge-enhanced module', which are combinations of pooling layers and convolutional layers that improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which produce the effect of a morphological transformation. The edge-enhanced module is composed of convolution layers with the fixed weights of a traditional edge detection algorithm. After comparing the accuracy of various versions of each module for U-Net, we found the optimal combination to be the case where the morphology module of min-pooling followed by successive min-pooling and max-pooling layers, together with the edge-enhanced module using the Scharr filter, formed the inputs of conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model is capable of detecting small waterbodies and detailed water edges, which are the distinct advances of the model presented in this research compared to the original U-Net.
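The two modules can be sketched in plain NumPy: stride-1 max-pooling behaves like grayscale dilation and min-pooling like erosion, while the edge-enhanced module is a convolution whose weights are fixed to a classical kernel such as Scharr. This is an illustrative reconstruction, not the authors' implementation; the window size, padding, and test image are assumptions:

```python
import numpy as np

def minmax_pool(img, mode, k=2):
    """k x k pooling with stride 1 over an edge-padded image.
    Max-pooling acts like grayscale dilation, min-pooling like erosion,
    which is why stacking them mimics a morphological transformation."""
    pad = np.pad(img, ((0, k - 1), (0, k - 1)), mode="edge")
    fn = np.max if mode == "max" else np.min
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = fn(pad[i:i + k, j:j + k])
    return out

# Scharr x-kernel: fixed (non-learned) weights for edge extraction
SCHARR_X = np.array([[ 3, 0,  -3],
                     [10, 0, -10],
                     [ 3, 0,  -3]], dtype=float)

def conv2d(img, kernel):
    """'valid' 2-D correlation with fixed weights, as in the edge-enhanced module."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# a vertical step edge: water (0) next to land (1)
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edges = conv2d(img, SCHARR_X)   # strong response along the water/land boundary
```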

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors were provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their views do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are provided. The intrinsic parameters are composed of the focal length, skew factor, and principal point, and the extrinsic parameters are composed of the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
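The intrinsic parameters listed (focal length, skew factor, principal point) form the matrix K of the pinhole camera model, which maps a 3-D point in the camera frame to pixel coordinates. A small sketch with hypothetical parameter values, not the calibrated values from the paper:

```python
import numpy as np

def project(K, point_3d):
    """Pinhole projection of a 3-D camera-frame point to pixel
    coordinates using the intrinsic matrix K."""
    X, Y, Z = point_3d
    uvw = K @ np.array([X, Y, Z])
    return uvw[:2] / uvw[2]            # homogeneous -> pixel coordinates

# hypothetical intrinsics: focal lengths fx, fy, skew s, principal point (cx, cy)
fx, fy, s, cx, cy = 400.0, 400.0, 0.0, 320.0, 240.0
K = np.array([[fx, s,  cx],
              [0., fy, cy],
              [0., 0., 1.]])

uv = project(K, (0.5, 0.25, 2.0))      # a point 2 m in front of the camera
print(uv)  # [420. 290.]
```

A re-projection error, as reported in the paper, would then be the pixel distance between `uv` and the point's detected image location.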

A Design of the Vehicle Crisis Detection System(VCDS) based on vehicle internal and external data and deep learning (차량 내·외부 데이터 및 딥러닝 기반 차량 위기 감지 시스템 설계)

  • Son, Su-Rak;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.2 / pp.128-133 / 2021
  • Autonomous vehicle makers are currently commercializing Level 3 autonomous vehicles, but due to stability issues there is a possibility that an accident may occur even during fully autonomous driving; in fact, autonomous vehicles have recorded 81 accidents. This is because, unlike Level 3, autonomous vehicles at Level 4 and above have to judge and respond to emergency situations by themselves. Therefore, this paper proposes a vehicle crisis detection system (VCDS) that collects and stores information outside the vehicle through a CNN and uses the stored information together with vehicle sensor data to output the crisis level of the vehicle as a number between 0 and 1. The VCDS consists of two modules. The vehicle external situation collection module (VESCM) collects surrounding vehicle and pedestrian data using a CNN-based neural network model. The vehicle crisis situation determination module detects a crisis situation by using the output of the VESCM and the vehicle's internal sensor data. In the experiment, the average operation time of the VESCM was 55 ms, versus 74 ms for R-CNN and 101 ms for CNN. In particular, R-CNN shows a computation time similar to the VESCM when the number of pedestrians is small, but takes longer than the VESCM as the number of pedestrians increases. On average, the VESCM was 25.68% faster than R-CNN and 45.54% faster than CNN, and the accuracy of all three models did not fall below 80%, showing high accuracy.
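The abstract specifies only that the determination module outputs a crisis score between 0 and 1 from fused external and internal features. One common way to get such a bounded score, shown purely as an assumed sketch (weights, features, and bias are invented, not the paper's model), is a weighted sum squashed by a sigmoid:

```python
import math

def crisis_score(external_feat, internal_feat, weights, bias):
    """Fuse external-situation features (e.g. from a CNN module) with
    internal sensor features and squash the result to a 0..1 crisis score."""
    z = sum(w * x for w, x in zip(weights, external_feat + internal_feat)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid keeps the score in (0, 1)

# hypothetical features: pedestrian proximity, closing speed, brake pressure
score = crisis_score([0.9, 0.7], [0.4], weights=[1.2, 0.8, 1.5], bias=-1.0)
print(score)
```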

Application of convolutional autoencoder for spatiotemporal bias-correction of radar precipitation (CAE 알고리즘을 이용한 레이더 강우 보정 평가)

  • Jung, Sungho;Oh, Sungryul;Lee, Daeeop;Le, Xuan Hien;Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.7 / pp.453-462 / 2021
  • As the frequency of localized heavy rainfall has increased in recent years, the importance of high-resolution radar data has also increased. This study aims to correct the bias of dual-polarization radar, which still shows spatial and temporal bias. In many studies, various statistical techniques have been attempted to correct the bias of radar rainfall. In this study, bias correction of the S-band dual-polarization radar used in the flood forecasting of the ME was implemented with a Convolutional Autoencoder (CAE) algorithm, a type of Convolutional Neural Network (CNN). The CAE model was trained on radar data sets with a 10-min temporal resolution for the July 2017 flood event in Cheongju. The results showed that the newly developed CAE model improved the simulation in time and space by reducing the bias of the raw radar rainfall. Therefore, the CAE model, which learns the spatial relationship between adjacent grid cells, can be used for real-time updates of grid-based climate data generated by radar and satellites.
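A CAE pairs a downsampling encoder with an upsampling decoder over the rainfall grid. The toy version below replaces the learned convolutions with fixed average pooling and nearest-neighbour upsampling purely to illustrate the spatial round-trip a CAE performs; it is not the trained model, and the grid values are invented:

```python
import numpy as np

def encode(grid, k=2):
    """Toy 'encoder': k x k average pooling halves each spatial axis."""
    h, w = grid.shape
    return grid.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def decode(code, k=2):
    """Toy 'decoder': nearest-neighbour upsampling restores the grid size."""
    return np.repeat(np.repeat(code, k, axis=0), k, axis=1)

radar = np.arange(16, dtype=float).reshape(4, 4)   # stand-in rainfall grid
recon = decode(encode(radar))
print(recon.shape)   # same spatial shape as the input grid
```

In the actual model, the encoder and decoder weights are learned so that the output matches bias-corrected (gauge-consistent) rainfall rather than merely reconstructing the input.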

Real-time Segmentation of Black Ice Region in Infrared Road Images

  • Li, Yu-Jie;Kang, Sun-Kyoung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.33-42 / 2022
  • In this paper, we propose a deep learning model based on multi-scale dilated convolution feature fusion for the segmentation of black ice regions in road images, in order to send black ice warnings to drivers in real time. In the proposed network, convolutions with different dilation ratios are connected in parallel in the encoder blocks, different dilation ratios are used in feature maps of different resolutions, and multi-layer feature information is fused together. The multi-scale dilated convolution feature fusion improves performance by diversifying and expanding the receptive field of the network, preserving detailed spatial information, and enhancing the effectiveness of dilated convolutions. The performance of the proposed network model gradually improved as the number of dilated convolution branches increased. The mIoU value of the proposed method is 96.46%, higher than that of existing networks such as U-Net, FCN, PSPNet, ENet, and LinkNet. The number of parameters was 1,858K, about six times smaller than the existing LinkNet model. In experiments on a Jetson Nano, the FPS of the proposed method was 3.63, which enables segmentation of black ice regions in real time.
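Dilated convolution enlarges the receptive field without adding weights by spacing the kernel taps apart, which is what lets the parallel branches above see multiple scales cheaply. A 1-D sketch with a hypothetical difference kernel and rates (not the paper's actual layers):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D 'valid' convolution whose taps are spaced `rate` samples apart,
    so an n-tap kernel covers a receptive field of (n - 1) * rate + 1 samples."""
    span = (len(kernel) - 1) * rate
    out = np.empty(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(w * x[i + j * rate] for j, w in enumerate(kernel))
    return out

x = np.arange(10, dtype=float)
diff = np.array([1.0, 0.0, -1.0])        # a simple difference kernel
print(dilated_conv1d(x, diff, rate=1))   # receptive field 3 -> constant -2.0
print(dilated_conv1d(x, diff, rate=3))   # receptive field 7 -> constant -6.0
```

Running the same three weights at a larger rate measures the same pattern over a wider span, which is the multi-scale effect the branches exploit.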

Spatial Replicability Assessment of Land Cover Classification Using Unmanned Aerial Vehicle and Artificial Intelligence in Urban Area (무인항공기 및 인공지능을 활용한 도시지역 토지피복 분류 기법의 공간적 재현성 평가)

  • Park, Geon-Ung;Song, Bong-Geun;Park, Kyung-Hun;Lee, Hung-Kyu
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.4 / pp.63-80 / 2022
  • As technologies that analyze and predict issues by reconstructing real space in virtual space have developed, it has become more important to acquire precise spatial information in complex cities. In this study, images of an urban area with complex landscapes were acquired using an unmanned aerial vehicle, and land cover classification was performed using object-based image analysis and semantic segmentation techniques, which are image classification techniques suitable for high-resolution imagery. In addition, based on imagery collected at the same time, the replicability of the land cover classification of each artificial intelligence (AI) model was examined for areas the model had not learned. When the AI models were trained on the training site, the land cover classification accuracy was 89.3% for OBIA-RF, 85.0% for OBIA-DNN, and 95.3% for U-Net. When the AI models were applied to the replicability assessment site, the accuracy of OBIA-RF decreased by 7%, OBIA-DNN by 2.1%, and U-Net by 2.3%. U-Net, which considers both morphological and spectral characteristics, performed well in both land cover classification accuracy and the replicability evaluation. As precise spatial information becomes more important, the results of this study are expected to contribute to urban environment research as a method for generating basic data.

Analysis of Temporal Variations of Marine Debris Using Raspberry Pi and YOLOv5 (라즈베리파이와 YOLOv5를 이용한 해양쓰레기 시계열 변화량 분석)

  • Kim, Bo-Ram;Park, Mi-So;Kim, Jea-Won;Do, Ye-Been;Oh, Se-Yun;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1249-1258 / 2022
  • Marine debris is defined as a substance that is intentionally or inadvertently left on the shore, or is introduced or discharged into the ocean, and that has or is likely to have a harmful effect on the marine environment. In this study, the detection of marine debris and the analysis of its change over time were performed using an object detection method, as an efficient way to quantify marine debris and analyze its variation. The study area is Yuho Mongdol Beach in the northeastern part of Geoje Island, and the change was analyzed through images collected at 15-minute intervals for 32 days, from September 12 to October 14, 2022. Marine debris detection using YOLOv5x, a one-stage object detection model, achieved an mAP of 0.869 for plastic bottles and 0.862 for styrofoam buoys. As a result, marine debris showed a large decrease at 8-day intervals; the quantity of styrofoam buoys was about three times larger, and their range of change was also larger.
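The mAP values above rest on an Intersection-over-Union (IoU) criterion that decides whether a predicted box matches a ground-truth annotation. A minimal IoU sketch with invented box coordinates (not data from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes, the overlap
    criterion underlying mAP figures for detectors such as YOLOv5."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# a predicted buoy box vs. its ground-truth annotation
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```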

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) was developed to help users practice the facial expressions that convey emotions. The developed AI feeds multimodal inputs, consisting of sentences and facial images, to deep neural networks (DNNs). The DNNs calculate the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions based on the situation given by a sentence, and the AI provides the user with numerical feedback based on the similarity between the two predicted emotions. A ResNet34 model was trained on the public FER2013 data to predict emotions from facial images. To predict emotions in sentences, a KoBERT model was trained by transfer learning on the conversational speech dataset for emotion classification released to the public by AIHub. The DNN that predicts emotions from facial images achieved 65% accuracy, comparable to human emotion classification ability, and the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary person participated by changing facial expressions.
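The numerical feedback is a similarity between two emotion predictions. As an assumed sketch (the paper's exact similarity measure is not specified in this abstract), one natural choice is the cosine similarity between the softmax probability distributions produced by the two models; all logits below are invented:

```python
import math

def softmax(logits):
    """Convert raw model outputs into an emotion probability distribution."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cosine_similarity(p, q):
    """Similarity between the sentence-predicted and face-predicted
    emotion distributions, usable as a numeric feedback score."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

text_emotion = softmax([2.0, 0.1, 0.1])   # e.g. sentence model leaning "happy"
face_emotion = softmax([1.8, 0.3, 0.2])   # e.g. face model also leaning "happy"
score = cosine_similarity(text_emotion, face_emotion)
print(score)   # close to 1.0 when the two predictions agree
```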