• Title/Summary/Keyword: u-learning (u-러닝)


Effect of Learning Data on the Semantic Segmentation of Railroad Tunnel Using Deep Learning (딥러닝을 활용한 철도 터널 객체 분할에 학습 데이터가 미치는 영향)

  • Ryu, Young-Moo;Kim, Byung-Kyu;Park, Jeongjun
    • Journal of the Korean Geotechnical Society
    • /
    • v.37 no.11
    • /
    • pp.107-118
    • /
    • 2021
  • Scan-to-BIM can precisely model structures by measuring them with Light Detection and Ranging (LiDAR) and building a 3D Building Information Modeling (BIM) model from the measurements, but it has the limitation of consuming a great deal of manpower, time, and cost. To overcome this, studies have applied deep learning algorithms to the semantic segmentation of 3D point cloud data, but how the segmentation results change with the training data remains insufficiently studied. In this study, a parametric study was conducted to determine how the size and track type of the railroad tunnels constituting the training data affect deep-learning-based semantic segmentation of railroad tunnels. The parametric study showed that segmentation accuracy was higher when the tunnels used for training and testing were of similar size, and that training on a double-track tunnel gave better results than training on a single-track tunnel. In addition, when the training data comprised two or more tunnels, overall accuracy (OA) and mean intersection over union (mIoU) increased by 10% to 50%, confirming that varied configurations of training data can contribute to efficient learning.
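
The two scores reported above, overall accuracy (OA) and mean intersection over union (mIoU), can be computed directly from predicted and ground-truth label maps. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, the standard
    score for semantic segmentation of images or point clouds."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([0, 0, 1, 1, 2, 2])  # toy per-point class labels
target = np.array([0, 1, 1, 1, 2, 0])
print(round(mean_iou(pred, target, 3), 3))  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```

Classes missing from both prediction and ground truth are skipped so they do not drag the mean toward zero; whether to do this is a common implementation choice.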

Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.3
    • /
    • pp.345-349
    • /
    • 2023
  • In this paper, we propose a deep learning structure for detecting defective pixels of a next-generation smart LED display board using an imaging device. The research introduces a technique that combines imaging devices and deep learning to automatically detect defects in outdoor LED billboards, aiming at effective management of the billboards and the resolution of various errors and issues. The research process consists of three stages. First, the planarized image of the billboard is calibrated to completely remove the background and undergoes the preprocessing needed to generate a training dataset. Second, the generated dataset is used to train an object recognition network composed of a Backbone and a Head: the Backbone employs CSP-Darknet to extract feature maps, while the Head uses the extracted feature maps for object detection. Throughout this process, the network is adjusted to align the confidence score with the Intersection over Union (IoU) error, sustaining continuous learning. In the third stage, the trained model is used to automatically detect defective pixels on actual outdoor LED billboards. In accredited measurement experiments, the proposed method detected 100% of the defective pixels on real LED billboards, confirming improved efficiency in managing and maintaining them. These findings are expected to bring about a significant advance in the management of LED billboards.
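
The Intersection over Union (IoU) that the detection head aligns with its confidence score is the overlap ratio between a predicted and a ground-truth bounding box. A small illustrative sketch (coordinates and names are hypothetical, not from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2):
    intersection area divided by union area."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if no overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by (5, 5): intersection 25, union 175
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.143
```

An IoU of 1.0 means a perfect box match and 0.0 means no overlap; detection losses are typically built from this quantity.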

Image-to-Image Translation Based on U-Net with R2 and Attention (R2와 어텐션을 적용한 유넷 기반의 영상 간 변환에 관한 연구)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • In image processing and computer vision, the problem of translating one image into another or generating a new image has drawn steady attention as hardware advances. However, computer-generated images often still look unnatural to the human eye. With the recent surge of deep learning research, image generation and enhancement are being actively studied, and among these approaches the Generative Adversarial Network (GAN) performs well at image generation. Since the original GAN was proposed, various GAN models have been presented, allowing the generation of more natural images than earlier work. Among them, pix2pix is a conditional GAN model, a general-purpose network that performs well on a variety of datasets. pix2pix is based on U-Net, but many U-Net-based networks show better performance. In this study, therefore, images are generated by replacing the U-Net of pix2pix with various networks, and the results are compared and evaluated. The generated images confirm that pix2pix with the Attention, R2, and Attention-R2 networks outperforms the original pix2pix with U-Net; examining the limitations of the strongest network is suggested as future work.

Deep Learning-based Spine Segmentation Technique Using the Center Point of the Spine and Modified U-Net (척추의 중심점과 Modified U-Net을 활용한 딥러닝 기반 척추 자동 분할)

  • Sungjoo Lim;Hwiyoung Kim
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.2
    • /
    • pp.139-146
    • /
    • 2023
  • Osteoporosis is a disease in which the risk of bone fracture increases due to a decrease in bone density caused by aging. It is diagnosed by measuring bone density in the total hip, femoral neck, and lumbar spine. To accurately measure bone density in the lumbar spine, the vertebral region must be segmented from the lumbar X-ray image, and deep-learning-based automatic spine segmentation can provide fast and precise information about that region. In this study, we used 695 lumbar spine images as training and test datasets for a deep learning segmentation model and proposed an automatic lumbar segmentation model, CM-Net, which combines the center point of the spine with a modified U-Net. As a result, the average Dice similarity coefficient (DSC) was 0.974, precision was 0.916, recall was 0.906, accuracy was 0.998, and the area under the precision-recall curve (AUPRC) was 0.912. This study demonstrates a high-performance automatic segmentation model for lumbar X-ray images that is robust to noise such as spinal fractures and implants. Furthermore, accurate bone density measurement on lumbar X-ray images using automatic spine segmentation can help prevent the risk of compression fractures at an early stage and improve the accuracy and efficiency of osteoporosis diagnosis.
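
The Dice similarity coefficient (DSC) reported above measures the overlap between a predicted binary mask and an expert label as 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (names and toy masks are illustrative, not the paper's data):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return float(2.0 * np.logical_and(pred, target).sum() / denom)

pred   = np.array([1, 1, 1, 0])  # flattened predicted vertebra mask
target = np.array([1, 1, 0, 0])  # flattened expert label
print(dice(pred, target))  # 2*2 / (3+2) = 0.8
```

Unlike IoU, Dice double-counts the intersection, so it is more forgiving of small boundary errors; the two are related by DSC = 2·IoU / (1 + IoU).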

Evaluation of the Feasibility of Deep Learning for Vegetation Monitoring (딥러닝 기반의 식생 모니터링 가능성 평가)

  • Kim, Dong-woo;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.26 no.6
    • /
    • pp.85-96
    • /
    • 2023
  • This study proposes a method for forest vegetation monitoring using high-resolution aerial imagery captured by unmanned aerial vehicles (UAVs) and deep learning technology. The research site was a forested area of Mountain Dogo, Asan City, Chungcheongnam-do, and the target species for monitoring were Pinus densiflora, Quercus mongolica, and Quercus acutissima. To classify vegetation species at the pixel level in UAV imagery based on characteristics such as leaf shape, size, and color, the study employed semantic segmentation with the prominent U-Net deep learning model. The results indicated that Pinus densiflora Siebold & Zucc., Quercus mongolica Fisch. ex Ledeb., and Quercus acutissima Carruth. could be visually distinguished in 135 aerial images captured by UAV. Of these, 104 images were used as training data for the deep learning model and 31 for inference. After optimization, the model achieved an overall average pixel accuracy of 92.60%, with an mIoU of 0.80 and an FIoU of 0.82, demonstrating the construction of a reliable deep learning model. This study is significant as a pilot case of applying UAVs and deep learning to monitor and manage representative climate-vulnerable species, including Pinus densiflora, Quercus mongolica, and Quercus acutissima. It is expected that UAVs and deep learning models can be applied to a wider variety of vegetation species to support forest management in the future.
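
The pixel accuracy and IoU-based scores above can be sketched as follows. Here FIoU is assumed to denote frequency-weighted IoU, in which each class's IoU is weighted by its share of the ground-truth pixels; this reading is an assumption, since the abstract does not spell the acronym out, and all names are illustrative:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the label."""
    return float((pred == target).mean())

def fw_iou(pred, target, num_classes):
    """Frequency-weighted IoU: each class's IoU weighted by its
    frequency among the ground-truth pixels."""
    score = 0.0
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            score += (target == c).sum() / target.size * inter / union
    return score

pred   = np.array([0, 0, 1, 1])  # toy 2-class pixel labels
target = np.array([0, 1, 1, 1])
print(pixel_accuracy(pred, target))          # 3 of 4 pixels correct: 0.75
print(round(fw_iou(pred, target, 2), 3))     # 0.25*0.5 + 0.75*(2/3) = 0.625
```

Frequency weighting keeps rare classes from dominating the score, which matters when one species covers far more canopy area than the others.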

Development of Marine Debris Monitoring Methods Using Satellite and Drone Images (위성 및 드론 영상을 이용한 해안쓰레기 모니터링 기법 개발)

  • Kim, Heung-Min;Bak, Suho;Han, Jeong-ik;Ye, Geon Hui;Jang, Seon Woong
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1109-1124
    • /
    • 2022
  • This study proposes marine debris monitoring methods using satellite and drone multispectral images. A multi-layer perceptron (MLP) model was applied to detect marine debris in Sentinel-2 satellite images, and for detection in drone multispectral images, the U-Net, DeepLabv3+ (ResNet50), and DeepLabv3+ (Inceptionv3) deep learning models were evaluated and compared. Marine debris detection using satellite images achieved an F1-Score of 0.97. Detection in drone multispectral images was performed on vegetative debris and plastics; DeepLabv3+ (Inceptionv3) achieved the best model accuracy, with a mean intersection over union (mIoU) of 0.68. Vegetative debris showed an F1-Score of 0.93 and an IoU of 0.86, while plastics showed low performance with an F1-Score of 0.50 and an IoU of 0.33. However, the spectral index applied to generate plastic mask images achieved an F1-Score of 0.81, higher than the plastics detection performance of DeepLabv3+ (Inceptionv3), confirming that plastics monitoring using the spectral index is possible. The proposed monitoring technique can be used to establish plans for marine debris collection and treatment, as well as to provide quantitative data on marine debris generation.
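
The F1-Scores reported per debris class are the harmonic mean of precision and recall, which reduces to F1 = 2TP / (2TP + FP + FN). A small sketch with hypothetical pixel counts (not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one debris class
print(f1_score(tp=50, fp=20, fn=10))  # 2*50 / (2*50 + 20 + 10) ≈ 0.769
```

Because the harmonic mean punishes imbalance, a class can score a low F1 (as plastics did here) even when overall pixel accuracy looks high.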

3DentAI: U-Nets for 3D Oral Structure Reconstruction from Panoramic X-rays (3DentAI: 파노라마 X-ray로부터 3차원 구강구조 복원을 위한 U-Nets)

  • Anusree P.Sunilkumar;Seong Yong Moon;Wonsang You
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.7
    • /
    • pp.326-334
    • /
    • 2024
  • Extra-oral imaging techniques such as panoramic X-rays (PXs) and cone beam computed tomography (CBCT) are the preferred imaging modalities in dental clinics owing to their patient convenience during imaging and their ability to visualize the entire dentition. PXs are preferred for routine clinical treatment and CBCT for complex surgeries and implant treatment. However, PXs lack third-dimensional spatial information, whereas CBCT exposes the patient to a high radiation dose. When a PX is already available, reconstructing the 3D oral structure from it avoids further expense and radiation dose. In this paper, we propose 3DentAI, a U-Net-based deep learning framework for 3D reconstruction of the oral structure from a PX image. Our framework consists of three modules: a reconstruction module based on an attention U-Net that estimates depth from a PX image, a realignment module that aligns the predicted flattened volume to the shape of the jaw using a predefined focal trough and ray data, and a refinement module based on a 3D U-Net that interpolates the missing information to obtain a smooth representation of the oral cavity. Synthetic PXs obtained from CBCT by ray tracing and rendering were used to train the networks, removing the need for paired PX and CBCT datasets. Our method, trained and tested on a diverse dataset of 600 patients, achieved performance superior to GAN-based models, even with low computational complexity.

Design and Implementation learning English words Smart-phone application for Elementary school students on Android platform by Focus on form (형태초점교수법 기반 초등학교 영어 단어 학습 스마트폰 어플리케이션 설계 및 구현)

  • Kim, Seung-Jun;Kim, Kap-Su
    • Journal of The Korean Association of Information Education
    • /
    • v.16 no.2
    • /
    • pp.223-231
    • /
    • 2012
  • Recently, with the spread of smartphones and the rise of digital natives, our education needs to change. We need teaching-learning methods, materials, and software that realize student-centered, customized education beyond e-learning and u-learning. Accordingly, this study presents how to design and implement an Android smartphone application, based on the focus-on-form teaching method, for the 800 elementary-school English words recommended by the Ministry of Education, Science, and Technology.


Prediction on Web-based simulation result through Machine learning (머신러닝을 통한 웹 기반 시뮬레이션 결과 예측)

  • Kim, JiSu;Kang, MinKyu;Kwon, Hoon;Lee, JeongCheol
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.789-792
    • /
    • 2019
  • Recent advances in IT have made web-based simulation useful in many fields of research. EDISON is a platform that provides such a simulation environment, offering apps (hereafter, solvers) in various specialized fields such as computational fluid and thermal dynamics, nanophysics, and computational chemistry. These solvers conveniently compute a variety of results once the user enters a few solver-specific parameters, but depending on the input data a job can take a very long time, or even run indefinitely, so the user must repeatedly check whether a job of unknown duration has finished. If the execution time could be predicted, this inconvenience would be reduced, and knowing the result of a long-running job in advance would be a great help to users. Accordingly, this paper applies prediction models for a simulation's result and its execution time. We predicted the results of the uChem solver in computational chemistry, which computes the optimized energy and structure of compounds made of period-1 and period-2 atoms. The predictions achieved quite high accuracy of over 99% for energy and about 90% for execution time, which should allow a more convenient service for users.

A Computer Aided Diagnosis Algorithm for Classification of Malignant Melanoma based on Deep Learning (딥 러닝 기반의 악성흑색종 분류를 위한 컴퓨터 보조진단 알고리즘)

  • Lim, Sangheon;Lee, Myungsuk
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.14 no.4
    • /
    • pp.69-77
    • /
    • 2018
  • Malignant melanoma accounts for about 1 to 3% of all malignant tumors in the West; in the US in particular, it causes more than 9,000 deaths each year. In general, the features of skin lesions are difficult to detect in photographs. In this paper, we propose a deep-learning-based computer-aided diagnosis algorithm for classifying malignant melanoma and benign skin tumors in RGB skin images. The proposed deep learning model comprises a tumor lesion segmentation model and a malignant melanoma classification model. First, U-Net was used to segment the skin lesion area in the dermoscopic image. We then implemented an algorithm in ResNet that classifies malignant melanoma and benign tumors using the skin lesion image and the results of experts' labeling. The U-Net model achieved a Dice similarity coefficient of 83.45% compared with the experts' labels, and the classification accuracy for malignant melanoma was 83.06%. As a result, we expect the proposed artificial intelligence algorithm to serve as a computer-aided diagnosis tool and to help detect malignant melanoma at an early stage.