• Title/Summary/Keyword: image deep learning


Phase Segmentation of PVA Fiber-Reinforced Cementitious Composites Using U-net Deep Learning Approach (U-net 딥러닝 기법을 활용한 PVA 섬유 보강 시멘트 복합체의 섬유 분리)

  • Jeewoo Suh;Tong-Seok Han
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.5 / pp.323-330 / 2023
  • The development of an analysis model that reflects the microstructure characteristics of polyvinyl alcohol (PVA) fiber-reinforced cementitious composites, which have a highly complex microstructure, enables synergy between efficient material design and physical experiments. PVA fiber orientation is an important factor influencing the mechanical behavior of PVA fiber-reinforced cementitious composites. Because the gray-level values of PVA fibers in micro-CT images are difficult to distinguish from those of adjacent phases, fiber segmentation is time-consuming. In this study, a micro-CT test with a voxel size of 0.65 ㎛³ was performed to investigate the three-dimensional distribution of fibers. Histogram-, morphology-, and gradient-based phase-segmentation methods were used to segment the fibers and generate training data. A U-net model was proposed to segment fibers from micro-CT images of PVA fiber-reinforced cementitious composites. Data augmentation was applied to increase training accuracy, using a total of 1,024 images as training data. The performance of the model was evaluated using accuracy, precision, recall, and F1 score. The trained model achieved high fiber-segmentation performance and efficiency, and the approach can be applied to other specimens as well.
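
A minimal sketch of the evaluation metrics named in this abstract (accuracy, precision, recall, F1) computed on binary fiber masks with NumPy; the array names and random test data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> dict:
    """Compare a predicted binary fiber mask against a ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)          # fiber voxels correctly labeled
    fp = np.sum(pred & ~truth)         # matrix voxels labeled as fiber
    fn = np.sum(~pred & truth)         # fiber voxels missed
    tn = np.sum(~pred & ~truth)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall + eps),
    }

# Example on one 256 x 256 slice (random masks, for illustration only).
rng = np.random.default_rng(0)
print(segmentation_metrics(rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5))
```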

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems used in autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of camera and LiDAR sensors is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems have focused on preventing obstacle detection by lowering the confidence score of the object recognition model, but such attacks are effective only on the targeted model. Attacks on the sensor fusion stage, by contrast, can cascade errors into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on the LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR data. In attack performance experiments conducted with scaling algorithms of various sizes, the attack caused fusion errors averaging more than 77%.
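
A hedged sketch of the general idea described above, a scaling perturbation applied to the LiDAR points fed into a camera-LiDAR calibration model; the scale factor and centering choice are illustrative assumptions, and the actual attack on LCCNet is not reproduced here.

```python
import numpy as np

def scale_point_cloud(points: np.ndarray, factor: float, center=None) -> np.ndarray:
    """Scale an (N, 3) LiDAR point cloud about a center point.

    The mismatch between the scaled geometry and the unmodified camera image
    can degrade the extrinsic parameters predicted by a calibration network.
    """
    if center is None:
        center = points.mean(axis=0)
    return (points - center) * factor + center

# Example: enlarge a toy point cloud by 10% before passing it to the fusion model.
cloud = np.random.default_rng(1).uniform(-10.0, 10.0, size=(2048, 3))
attacked_cloud = scale_point_cloud(cloud, factor=1.1)
```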

Applicability Evaluation of Deep Learning-Based Object Detection for Coastal Debris Monitoring: A Comparative Study of YOLOv8 and RT-DETR (해안쓰레기 탐지 및 모니터링에 대한 딥러닝 기반 객체 탐지 기술의 적용성 평가: YOLOv8과 RT-DETR을 중심으로)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Seungyeol Oh;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1195-1210 / 2023
  • Coastal debris has emerged as a salient issue due to its adverse effects on coastal aesthetics, ecological systems, and human health. In pursuit of effective countermeasures, the present study constructed a specialized image dataset for coastal debris detection and conducted a comparative analysis of two leading real-time object detection algorithms, YOLOv8 and RT-DETR. Robustness was rigorously assessed by subjecting the models to various image distortions. YOLOv8 achieved a mean Average Precision (mAP) ranging from 0.927 to 0.945 at 65 to 135 frames per second (FPS), whereas RT-DETR yielded an mAP of 0.917 to 0.918 at 40 to 53 FPS. While RT-DETR exhibited greater robustness against color distortions, YOLOv8 showed superior resilience under the other evaluation criteria. The findings of this investigation provide practical guidance for algorithm selection in the deployment of marine debris monitoring systems.
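
A hedged sketch of the kind of side-by-side evaluation reported above, assuming the ultralytics package; the dataset config file and sample image are hypothetical, and metric attribute names can vary between package versions.

```python
import time
from ultralytics import YOLO, RTDETR

models = {"YOLOv8": YOLO("yolov8m.pt"), "RT-DETR": RTDETR("rtdetr-l.pt")}

for name, model in models.items():
    # mAP on the validation split of a YOLO-format dataset (hypothetical config).
    metrics = model.val(data="coastal_debris.yaml")
    # Rough single-image latency (includes warm-up, so sustained FPS will be higher).
    start = time.perf_counter()
    model.predict("sample_beach_image.jpg", verbose=False)
    fps = 1.0 / (time.perf_counter() - start)
    print(f"{name}: mAP50-95 = {metrics.box.map:.3f}, ~{fps:.0f} FPS")
```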

Classification of Industrial Parks and Quarries Using U-Net from KOMPSAT-3/3A Imagery (KOMPSAT-3/3A 영상으로부터 U-Net을 이용한 산업단지와 채석장 분류)

  • Che-Won Park;Hyung-Sup Jung;Won-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang;Moung-Jin Lee
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1679-1692 / 2023
  • South Korea is a country that emits a large amount of pollutants as a result of population growth and industrial development, and it is also severely affected by transboundary air pollution due to its geographical location. Because pollutants from both domestic and foreign sources contribute to air pollution in Korea, the locations of air pollutant emission sources are crucial for understanding the movement and distribution of pollutants in the atmosphere and for establishing national-level air pollution management and response strategies. Against this background, this study aims to effectively acquire spatial information on domestic and international air pollutant emission sources, which is essential for analyzing air pollution status, by utilizing high-resolution optical satellite images and deep learning-based image segmentation models. In particular, industrial parks and quarries, which have been evaluated as contributing significantly to transboundary air pollution, were selected as the main research subjects, and images of these areas from KOMPSAT-3 and KOMPSAT-3A were collected, preprocessed, and converted into input and label data for model training. Training the U-Net model on these data achieved an overall accuracy of 0.8484 and a mean Intersection over Union (mIoU) of 0.6490, and the predicted maps extracted object boundaries more accurately than the label data created by coarse annotations.
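
A minimal sketch of the overall accuracy and mIoU metrics reported above, computed from a per-pixel confusion matrix; the three-class setup (background, industrial park, quarry) and random labels are illustrative assumptions.

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray, num_classes: int):
    """Overall accuracy and mean IoU for integer-labeled segmentation maps."""
    valid = (truth >= 0) & (truth < num_classes)
    hist = np.bincount(
        num_classes * truth[valid].astype(int) + pred[valid].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)          # rows: ground truth, cols: prediction
    overall_acc = np.diag(hist).sum() / hist.sum()
    iou = np.diag(hist) / (hist.sum(0) + hist.sum(1) - np.diag(hist) + 1e-8)
    return overall_acc, float(np.mean(iou))

# Example on random 512 x 512 label maps with 3 classes.
rng = np.random.default_rng(2)
print(pixel_metrics(rng.integers(0, 3, (512, 512)), rng.integers(0, 3, (512, 512)), 3))
```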

Semantic Segmentation of Hazardous Facilities in Rural Area Using U-Net from KOMPSAT Ortho Mosaic Imagery (KOMPSAT 정사모자이크 영상으로부터 U-Net 모델을 활용한 농촌위해시설 분류)

  • Sung-Hyun Gong;Hyung-Sup Jung;Moung-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1693-1705 / 2023
  • Rural areas, which account for about 90% of the country's land area, are increasing in importance and value as spaces that perform various public functions. However, facilities that adversely affect residents' lives, such as livestock facilities, factories, and solar panels, are being built indiscriminately near residential areas, damaging the rural environment and landscape and lowering residents' quality of life. To prevent disorderly development and manage rural space in a planned manner, the detection and monitoring of hazardous facilities in rural areas are necessary. Satellite imagery, which can be acquired periodically and provides information on entire regions, is a suitable data source, and effective detection is possible with image-based deep learning techniques using convolutional neural networks. Therefore, the U-Net model, which shows high performance in semantic segmentation, was used to classify potentially hazardous facilities in rural areas. In this study, KOMPSAT ortho-mosaic optical imagery provided by the Korea Aerospace Research Institute in 2020 with a spatial resolution of 0.7 m was used, and AI training data for livestock facilities, factories, and solar panels were produced by hand for training and inference. After training, the U-Net achieved a pixel accuracy of 0.9739 and a mean Intersection over Union (mIoU) of 0.7025. The results of this study can be used for monitoring hazardous facilities in rural areas and are expected to serve as a basis for rural planning.
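
A hedged sketch of one data-preparation step implied above, tiling a large ortho-mosaic into fixed-size patches for U-Net training; the patch size, stride, and dummy array stand in for the actual KOMPSAT preprocessing.

```python
import numpy as np

def tile_image(image: np.ndarray, patch: int = 512, stride: int = 512):
    """Yield (row, col, tile) windows from an H x W x C mosaic array."""
    h, w = image.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]

# Example: a dummy 4096 x 4096 mosaic yields 64 non-overlapping 512 x 512 patches.
mosaic = np.zeros((4096, 4096, 3), dtype=np.uint8)
print(sum(1 for _ in tile_image(mosaic)))
```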

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data, with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded ICCs of 0.99 for all vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
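
A minimal sketch of the category-agreement analysis described above: binning Agatston scores into the five risk categories and computing a linearly weighted kappa with scikit-learn. Only the category cut-points come from the abstract; the example scores are made up.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agatston_category(score: float) -> int:
    """Map a score to category 0-4 for the bins 0, 1-10, 11-100, 101-400, >400."""
    return int(np.digitize(score, [0, 10, 100, 400], right=True))

auto_scores = [0, 5, 250, 800, 37]    # hypothetical CAC_auto Agatston scores
hand_scores = [0, 8, 180, 950, 37]    # hypothetical CAC_hand Agatston scores

auto_cat = [agatston_category(s) for s in auto_scores]
hand_cat = [agatston_category(s) for s in hand_scores]
print(cohen_kappa_score(auto_cat, hand_cat, weights="linear"))
```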

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society / v.40 no.4 / pp.69-79 / 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
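
A hedged sketch of the text-searching step described above, flagging report pages whose extracted text contains target keywords; pdfplumber is one possible library choice, and the keywords and file path are illustrative assumptions rather than the authors' implementation.

```python
import pdfplumber

KEYWORDS = ("boring log", "SPT", "N-value")   # hypothetical search terms

def find_candidate_pages(pdf_path: str) -> list[int]:
    """Return page numbers whose extracted text mentions any target keyword."""
    hits = []
    with pdfplumber.open(pdf_path) as pdf:
        for page_no, page in enumerate(pdf.pages, start=1):
            text = (page.extract_text() or "").lower()
            if any(keyword.lower() in text for keyword in KEYWORDS):
                hits.append(page_no)
    return hits

# Example usage (hypothetical path):
# print(find_candidate_pages("geotechnical_report.pdf"))
```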

Spine Computed Tomography to Magnetic Resonance Image Synthesis Using Generative Adversarial Networks : A Preliminary Study

  • Lee, Jung Hwan;Han, In Ho;Kim, Dong Hwan;Yu, Seunghan;Lee, In Sook;Song, You Seon;Joo, Seongsu;Jin, Cheng-Bin;Kim, Hakil
    • Journal of Korean Neurosurgical Society / v.63 no.3 / pp.386-396 / 2020
  • Objective: To generate synthetic spine magnetic resonance (MR) images from spine computed tomography (CT) using generative adversarial networks (GANs), and to determine the similarities between synthesized and real MR images. Methods: GANs were trained to transform spine CT image slices into spine magnetic resonance T2-weighted (MRT2) axial image slices by combining adversarial loss and voxel-wise loss. Experiments were performed using 280 pairs of lumbar spine CT scans and MRT2 images. MRT2 images were then synthesized from 15 other spine CT scans. To evaluate whether the synthetic MR images were realistic, two radiologists, two spine surgeons, and two residents blindly classified the real and synthetic MRT2 images. Two experienced radiologists then evaluated the similarities between subdivisions of the real and synthetic MRT2 images. Quantitative analysis of the synthetic MRT2 images was performed using the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results: The mean overall similarity of the synthetic MRT2 images evaluated by the radiologists was 80.2%. In the blind classification of the real MRT2 images, the failure rate ranged from 0% to 40%. The MAE value of each image ranged from 13.75 to 34.24 pixels (mean, 21.19 pixels), and the PSNR of each image ranged from 61.96 to 68.16 dB (mean, 64.92 dB). Conclusion: This was the first study to apply GANs to synthesize spine MR images from CT images. Despite the small dataset of 280 pairs, the synthetic MR images were implemented relatively well. Synthesis of medical images using GANs is a new paradigm of artificial intelligence application in medical imaging. We expect that synthesizing MR images from spine CT images using GANs will improve the diagnostic usefulness of CT. To better inform the clinical applications of this technique, further studies are needed involving a larger dataset, a variety of pathologies, and other MR sequences of the lumbar spine.
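
A minimal sketch of the MAE and PSNR metrics used above for comparing a synthetic MRT2 slice against the real one; the 8-bit random arrays are placeholders for actual image pairs.

```python
import numpy as np

def mae(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error in pixel-intensity units."""
    return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(3)
real_slice = rng.integers(0, 256, (256, 256), dtype=np.uint8)
synthetic_slice = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(mae(real_slice, synthetic_slice), psnr(real_slice, synthetic_slice))
```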

A Study on Similar Trademark Search Model Using Convolutional Neural Networks (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 유사상표 검색 모형 개발)

  • Yoon, Jae-Woong;Lee, Suk-Jun;Song, Chil-Yong;Kim, Yeon-Sik;Jung, Mi-Young;Jeong, Sang-Il
    • Management & Information Systems Review / v.38 no.3 / pp.55-80 / 2019
  • Recently, many companies have improved their management performance by building strong brand value that is protected by trademark rights. However, as the online commerce market grows, infringement of trademark rights is increasing. According to various studies and reports, cases of foreign and domestic companies infringing on trademark rights have increased. Because the manpower and cost required for trademark protection are enormous, small and medium-sized enterprises (SMEs) cannot conduct preliminary investigations to protect their trademark rights. In addition, because no trademark image search service exists, many domestic companies face the problem of manually investigating huge numbers of trademarks when conducting preliminary investigations to protect their trademark rights. Therefore, we developed an intelligent similar-trademark search model to reduce the manpower and cost of preliminary investigation. To measure the performance of the model developed in this study, test data selected by intellectual property experts were used, and ResNet V1 101 showed the highest performance. The significance of this study is as follows: the experimental results empirically demonstrate that image classification algorithms perform well not only in object recognition but also in image retrieval. Since the model developed in this study was trained on actual trademark image data, it is expected to be applicable in real industrial environments.
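
A hedged sketch of a CNN-feature similarity search in the spirit of the model above, using a torchvision ResNet-101 as the feature extractor (the study used ResNet V1 101, presumably in a different framework); the image paths are hypothetical and the weights API depends on the torchvision version.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with the classification head removed -> 2048-d embeddings.
model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for one trademark image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(model(x), dim=1)

# Cosine similarity between a query mark and one registered mark (hypothetical files):
# similarity = (embed("query_mark.png") @ embed("registered_mark.png").T).item()
```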

Comparison of Artificial Intelligence Multitask Performance using Object Detection and Foreground Image (물체탐색과 전경영상을 이용한 인공지능 멀티태스크 성능 비교)

  • Jeong, Min Hyuk;Kim, Sang-Kyun;Lee, Jin Young;Choo, Hyon-Gon;Lee, HeeKyung;Cheong, Won-Sik
    • Journal of Broadcast Engineering / v.27 no.3 / pp.308-317 / 2022
  • Research is underway to efficiently reduce the size of video data transmitted and stored in image analysis processes that use deep learning-based machine vision technology. MPEG (Moving Picture Experts Group) has newly established a standardization project called VCM (Video Coding for Machines) and is conducting research on video encoding for machines rather than for humans. In this study, we investigate a multitask pipeline that performs various tasks from a single image input. Rather than running the object detection that precedes each task separately for every task, the proposed pipeline performs detection only once and uses the result as the input for each task. In this paper, we propose a pipeline for efficient multitasking and conduct comparative experiments on the compression efficiency of the input image, execution time, and result accuracy to verify its efficiency. As a result of the experiments, the size of the input image decreased by more than 97.5%, while the accuracy of the results decreased only slightly, confirming the feasibility of efficient multitasking.
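
A hedged sketch of the shared-detection idea described above: object detection runs once per image and its outputs feed every downstream task, instead of each task detecting objects again. The detector and task heads are placeholders, not the authors' pipeline.

```python
import numpy as np

def detect_objects(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Placeholder detector returning (x1, y1, x2, y2) boxes."""
    return [(10, 10, 110, 110), (150, 40, 260, 200)]

def crop(image: np.ndarray, box) -> np.ndarray:
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

def run_multitask(image: np.ndarray, task_heads: dict) -> dict:
    boxes = detect_objects(image)                     # detection runs only once
    crops = [crop(image, b) for b in boxes]           # shared foreground regions
    return {name: head(crops) for name, head in task_heads.items()}

# Two dummy task heads consuming the same foreground crops.
tasks = {
    "count_objects": lambda crops: len(crops),
    "crop_shapes": lambda crops: [c.shape for c in crops],
}
print(run_multitask(np.zeros((480, 640, 3), dtype=np.uint8), tasks))
```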