• Title/Summary/Keyword: deep learning fusion image processing (딥러닝 융합 영상처리)

Search Results: 72

Adversarial learning for underground structure concrete crack detection based on semi-supervised semantic segmentation (지하구조물 콘크리트 균열 탐지를 위한 semi-supervised 의미론적 분할 기반의 적대적 학습 기법 연구)

  • Shim, Seungbo; Choi, Sang-Il; Kong, Suk-Min; Lee, Seong-Won
    • Journal of Korean Tunnelling and Underground Space Association / v.22 no.5 / pp.515-528 / 2020
  • Underground concrete structures are usually designed to last for decades, but many of them are now nearing their original life expectancy. Aging can cause a loss of fundamental functions and bring unexpected problems, so prompt inspection and repair are necessary. Maintenance has traditionally relied on personnel-based inspections and repairs, but objective inspection technologies that fuse deep learning with image processing are now being actively developed. In particular, various studies have developed concrete crack detection algorithms based on supervised learning. Most of these studies require a large amount of image data, especially label images, and securing those images takes considerable time and labor. To address this problem, this paper introduces a method that applies adversarial learning to improve the accuracy of crack area detection by 0.25% on average. The adversarial learning scheme consists of a segmentation neural network and a discriminator neural network, and it improves recognition performance by generating virtual label images within a competitive structure. Using this approach, the study proposes an efficient method for training deep neural networks that is expected to support accurate crack detection in the future.
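The competitive segmentation/discriminator scheme described in this abstract can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the authors' code: the module names SimpleSegNet and PixelDiscriminator, the two-class (background/crack) setup, and all hyperparameters are placeholders, and the semi-supervised pseudo-labeling of unlabeled images is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 2  # assumed: background vs. crack

class SimpleSegNet(nn.Module):
    """Toy encoder/decoder standing in for the paper's segmentation network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, NUM_CLASSES, 1)

    def forward(self, x):
        return self.head(self.body(x))  # per-pixel class logits

class PixelDiscriminator(nn.Module):
    """Fully convolutional discriminator: label map -> per-pixel real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, label_map):
        return self.net(label_map)

def train_step(seg, disc, opt_seg, opt_disc, image, label, lambda_adv=0.01):
    """One labeled-batch step of the competitive (adversarial) scheme."""
    # 1) Update the discriminator: real = one-hot ground truth, fake = prediction.
    probs = F.softmax(seg(image), dim=1)
    onehot = F.one_hot(label, NUM_CLASSES).permute(0, 3, 1, 2).float()
    d_real, d_fake = disc(onehot), disc(probs.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # 2) Update the segmentation network: supervised cross-entropy plus an
    #    adversarial term that pushes its predictions toward maps the
    #    discriminator would accept as "real" label images.
    for p in disc.parameters():
        p.requires_grad_(False)
    logits = seg(image)
    d_fake = disc(F.softmax(logits, dim=1))
    loss_seg = (F.cross_entropy(logits, label)
                + lambda_adv * F.binary_cross_entropy_with_logits(
                    d_fake, torch.ones_like(d_fake)))
    opt_seg.zero_grad(); loss_seg.backward(); opt_seg.step()
    for p in disc.parameters():
        p.requires_grad_(True)
    return loss_seg.item(), loss_d.item()
```

Under these assumptions, a call such as `train_step(seg, disc, opt_seg, opt_disc, images, masks)` with images of shape (B, 3, H, W) and integer masks of shape (B, H, W) runs one labeled update; the virtual-label idea in the abstract corresponds to the discriminator scoring predicted probability maps against one-hot ground truth.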

Real-Time 3D Volume Deformation and Visualization by Integrating NeRF, PBD, and Parallel Resampling (NeRF, PBD 및 병렬 리샘플링을 결합한 실시간 3D 볼륨 변형체 시각화)

  • Sangmin Kwon; Sojin Jeon; Juni Park; Dasol Kim; Heewon Kye
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.189-198 / 2024
  • Research combining deep learning-based models and physical simulations is making important advances in the medical field. Such approaches extract the necessary information from medical image data and enable fast, accurate prediction of skeletal and soft-tissue deformation based on physical laws. This study proposes a system that integrates Neural Radiance Fields (NeRF), Position-Based Dynamics (PBD), and Parallel Resampling to generate 3D volume data and to deform and visualize it in real time. NeRF uses 2D images and camera coordinates to produce high-resolution 3D volume data, while PBD enables real-time deformation and interaction through physics-based simulation. Parallel Resampling improves rendering efficiency by dividing the volume into tetrahedral meshes and utilizing GPU parallel processing. The system renders the deformed volume data using ray casting, again leveraging GPU parallelism for fast real-time visualization. Experimental results show that the system can generate and deform 3D data without expensive equipment, demonstrating potential applications in engineering, education, and medicine.
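The PBD deformation stage mentioned in this abstract can be sketched on the CPU as a single position-based dynamics step. This is a generic NumPy illustration with gravity and distance constraints only, assuming a function name pbd_step and its parameters as placeholders; it is not the paper's GPU implementation, which operates on the tetrahedral mesh used for parallel resampling.

```python
import numpy as np

def pbd_step(pos, vel, edges, rest_len, inv_mass,
             dt=1.0 / 60.0, gravity=(0.0, -9.8, 0.0),
             iterations=10, stiffness=1.0):
    """Advance the mesh vertices one PBD step.
    pos: (N, 3) vertex positions      vel: (N, 3) velocities
    edges: (M, 2) vertex index pairs  rest_len: (M,) rest lengths
    inv_mass: (N,) inverse masses (0 pins a vertex in place)."""
    pos = np.asarray(pos, dtype=float)
    vel = np.asarray(vel, dtype=float)

    # 1) Predict positions from external forces (gravity only in this sketch).
    vel = vel + dt * np.asarray(gravity) * (inv_mass[:, None] > 0)
    pred = pos + dt * vel

    # 2) Iteratively project distance constraints onto the predicted positions.
    for _ in range(iterations):
        for (i, j), r in zip(edges, rest_len):
            d = pred[i] - pred[j]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0.0:
                continue
            corr = stiffness * (dist - r) / (dist * w) * d
            pred[i] -= inv_mass[i] * corr
            pred[j] += inv_mass[j] * corr

    # 3) Derive new velocities from the corrected positions and commit them.
    vel = (pred - pos) / dt
    return pred, vel
```

In the described system, an analogous update would run per frame on the GPU over the tetrahedral mesh, after which the deformed volume is resampled and rendered with ray casting.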