• Title/Summary/Keyword: Convolutional Neural Networks (CNNs)

Damage localization and quantification of a truss bridge using PCA and convolutional neural network

  • Jiajia, Hao; Xinqun, Zhu; Yang, Yu; Chunwei, Zhang; Jianchun, Li
    • Smart Structures and Systems / Vol. 30, No. 6 / pp.673-686 / 2022
  • Deep learning algorithms for Structural Health Monitoring (SHM) have been attracting the interest of researchers and engineers. These algorithms commonly use loss functions and evaluation indices, such as the mean square error (MSE), that were not originally designed for SHM problems. This study proposes an updated loss function constructed specifically for deep-learning-based structural damage detection. By tuning the coefficients of the loss function, the weights for damage localization and quantification can be adapted to the real situation, so the deep learning network can avoid unnecessary iterations on damage localization and focus on identifying damage severity. To demonstrate the efficiency of the proposed method, structural damage detection using convolutional neural networks (CNNs) was conducted on a truss bridge model. Results showed that the validation curve with the updated loss function converged faster than with the traditional MSE. Data augmentation was conducted to improve the noise robustness of the proposed method. To reduce the training time, the normalized modal strain energy change (NMSEC) was extracted and principal component analysis (PCA) was adopted for dimension reduction. The results showed that the training time was reduced by 90% while the damage identification accuracy also increased slightly. Furthermore, the effect of different modes and elements on the training dataset was analyzed. The proposed method greatly improves structural damage detection in both training time and detection accuracy.
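
For illustration only (not the authors' formulation), a minimal PyTorch sketch of a loss that weights damage localization and quantification with tunable coefficients; the coefficient values, tensor shapes, and the split into the two error terms are assumptions:

```python
import torch

def weighted_damage_loss(pred, target, alpha=1.0, beta=2.0):
    """Hypothetical split of the error into a localization term (severity
    wrongly assigned to undamaged elements) and a quantification term
    (severity error on truly damaged elements), weighted by alpha and beta.

    pred, target: tensors of shape (batch, n_elements) with per-element
    damage severities; a zero target entry means the element is undamaged.
    """
    damaged = (target > 0).float()
    # Localization: penalize severity predicted on undamaged elements.
    loc_err = torch.mean(((1.0 - damaged) * pred) ** 2)
    # Quantification: severity error averaged over truly damaged elements.
    quant_err = torch.sum(damaged * (pred - target) ** 2) / damaged.sum().clamp(min=1.0)
    return alpha * loc_err + beta * quant_err
```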

A Remote Sensing Scene Classification Model Based on EfficientNetV2L Deep Neural Networks

  • Aljabri, Atif A.; Alshanqiti, Abdullah; Alkhodre, Ahmad B.; Alzahem, Ayyub; Hagag, Ahmed
    • International Journal of Computer Science & Network Security / Vol. 22, No. 10 / pp.406-412 / 2022
  • Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Real-world application requirements have not been addressed by conventional techniques for remote sensing image classification. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective for this task thanks to their strong feature extraction capabilities. To improve classification performance, these approaches rely primarily on semantic information. However, because abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high inter-class similarity, such approaches achieve low classification accuracy. We propose a VHR remote sensing image classification model that extracts global features from the original VHR image using an EfficientNetV2-L CNN pre-trained to detect similar classes. The image is then classified using a multilayer perceptron (MLP). This method was evaluated on two benchmark remote sensing datasets: the 21-class UC Merced dataset and the 38-class PatternNet dataset. Compared with other state-of-the-art models, the proposed model significantly improves performance.
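
A minimal PyTorch sketch of the described pipeline, assuming torchvision's ImageNet-pretrained EfficientNetV2-L as a frozen global feature extractor with a small MLP head; the head sizes, dropout rate, and input resolution are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 21  # UC Merced; use 38 for PatternNet

# ImageNet-pretrained EfficientNetV2-L, frozen, as a global feature extractor.
backbone = models.efficientnet_v2_l(weights=models.EfficientNet_V2_L_Weights.DEFAULT)
backbone.classifier = nn.Identity()           # expose the 1280-d pooled features
for p in backbone.parameters():
    p.requires_grad = False

# MLP head that classifies the extracted global features.
head = nn.Sequential(
    nn.Linear(1280, 512), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(512, NUM_CLASSES),
)
model = nn.Sequential(backbone, head)

logits = model(torch.randn(2, 3, 480, 480))   # example forward pass
```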

영상 생성적 데이터 증강을 이용한 딥러닝 기반 SAR 영상 선박 탐지 (Deep-learning based SAR Ship Detection with Generative Data Augmentation)

  • 권형준; 정소미; 김성태; 이재석; 손광훈
    • 한국멀티미디어학회논문지 / Vol. 25, No. 1 / pp.1-9 / 2022
  • Ship detection in synthetic aperture radar (SAR) images is an important application of marine monitoring in the military and civilian domains. Over the past decade, object detection has achieved significant progress with the development of convolutional neural networks (CNNs) and large labeled databases. However, because collecting and labeling SAR images is difficult, training CNNs for SAR ship detection remains a challenging task. To overcome this problem, some methods have employed conventional data augmentation techniques such as flipping, cropping, and affine transformation, but these are insufficient for robust performance across a wide variety of ship types. In this paper, we present a novel and effective approach for deep SAR ship detection that exploits label-rich electro-optical (EO) images. The proposed method consists of two components: a data augmentation network and a ship detection network. First, we train the data augmentation network, based on a conditional generative adversarial network (cGAN), to generate additional SAR images from EO images. Since it is trained on unpaired EO and SAR images, we impose a cycle-consistency loss to preserve structural information while translating the characteristics of the images. After training the data augmentation network, we use the augmented dataset, consisting of real and translated SAR images, to train the ship detection network. The experimental results include a qualitative evaluation of the translated SAR images and a comparison of the detection performance of networks trained with the non-augmented and augmented datasets, which demonstrates the effectiveness of the proposed framework.
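
The cycle-consistency term mentioned in the abstract can be sketched as in CycleGAN; the generator interfaces and the weight `lam` below are assumptions, not the authors' implementation:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_eo2sar, G_sar2eo, real_eo, real_sar, lam=10.0):
    """CycleGAN-style cycle loss for unpaired EO <-> SAR translation: an EO
    image translated to SAR and back should reconstruct the original image,
    and vice versa, which preserves structural content during translation."""
    rec_eo = G_sar2eo(G_eo2sar(real_eo))    # EO -> SAR -> EO
    rec_sar = G_eo2sar(G_sar2eo(real_sar))  # SAR -> EO -> SAR
    return lam * (l1(rec_eo, real_eo) + l1(rec_sar, real_sar))
```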

딥러닝 기반 객체 인식 기술 동향 (Trends on Object Detection Techniques Based on Deep Learning)

  • 이진수; 이상광; 김대욱; 홍승진; 양성일
    • 전자통신동향분석 / Vol. 33, No. 4 / pp.23-32 / 2018
  • Object detection, which identifies objects in visual scenes and their locations, is a challenging field in visual understanding research. It has recently been applied in various areas such as autonomous driving, image surveillance, and face recognition. In traditional object detection methods, handcrafted features have been designed to cope with diverse visual environments; however, they suffer a trade-off between accuracy and computational efficiency. Deep learning is a revolutionary paradigm in the machine-learning field, and deep-learning-based methods, particularly convolutional neural networks (CNNs), have outperformed conventional methods in object detection and have therefore been studied intensively in recent years. In this article, we provide a brief descriptive summary of several recent deep-learning methods for object detection and deep learning architectures. We also compare the performance of these methods and present a research guide for the object detection field.

딥 러닝 기반의 초해상도 이미지 복원 기법 성능 분석 (Performance Analysis of Deep Learning-based Image Super Resolution Methods)

  • 이현재; 신현광; 최규상; 진성일
    • 대한임베디드공학회논문지 / Vol. 15, No. 2 / pp.61-70 / 2020
  • Convolutional neural networks (CNNs) have recently been used extensively to solve image classification and segmentation problems. However, the use of CNNs for image super-resolution remains relatively unexplored. Filter interpolation and prediction model methods are the most commonly used algorithms in super-resolution implementations; their major limitation is that images become blurred and much of the edge information is lost. In this paper, we analyze CNN-based super-resolution and the wavelet-transform super-resolution method, and compare their performance according to the number of layers and the amount of CNN training data.
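
As one representative CNN super-resolution architecture of the kind compared in such analyses, a compact SRCNN-style model in PyTorch; the layer widths are illustrative, and the paper itself varies the number of layers and the training data:

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Compact SRCNN-style network: feature extraction, non-linear mapping,
    and reconstruction, applied to a bicubic-upscaled low-resolution image."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)
```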

Impact of Hull Condition and Propeller Surface Maintenance on Fuel Efficiency of Ocean-Going Vessels

  • Tien Anh Tran; Do Kyun Kim
    • 한국해양공학회지 / Vol. 37, No. 5 / pp.181-189 / 2023
  • The fuel consumption of marine diesel engines is of paramount importance in contemporary maritime transportation and shapes the energy efficiency strategies of ocean-going vessels. Nonetheless, a noticeable knowledge gap remains concerning the influence of ship hull conditions and propeller roughness on fuel consumption. This study bridges this gap by using artificial intelligence techniques in Matlab, particularly convolutional neural networks (CNNs), to investigate these factors comprehensively. We propose a time-series prediction model built on numerical simulations and aimed at forecasting ship hull and propeller conditions. The model's accuracy was validated through a careful comparison of predictions with actual ship hull and propeller conditions. Furthermore, we performed a comparative analysis, using the fuzzy clustering method, of the predictive outcomes against navigational environmental factors including wind speed, wave height, and ship loading conditions. The significance of this research lies in its role as a foundation for a more detailed understanding of energy consumption in maritime transport.
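
A hedged sketch of what such a CNN-based time-series predictor could look like in PyTorch; the input features, window length, and scalar output are hypothetical placeholders rather than the study's model:

```python
import torch.nn as nn

class FuelConsumptionCNN(nn.Module):
    """Hypothetical 1D-CNN regressor: a window of past operating data
    (e.g., speed, shaft power, draft, fuel rate) -> a fuel-consumption /
    condition indicator. Feature count and layer sizes are placeholders."""
    def __init__(self, n_features=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, 1)

    def forward(self, x):              # x: (batch, n_features, window_len)
        return self.fc(self.conv(x).squeeze(-1))
```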

딥 러닝 기반의 이기종 무선 신호 구분을 위한 데이터 수집 효율화 기법 (An Efficient Data Collection Method for Deep Learning-based Wireless Signal Identification in Unlicensed Spectrum)

  • 최재혁
    • 전기전자학회논문지 / Vol. 26, No. 1 / pp.62-66 / 2022
  • Research on classifying various communication signals in unlicensed spectrum with data-driven deep learning techniques has recently been active. However, such approaches rely on complex neural network models and thus require high computational power, which limits their use on resource-constrained wireless interfaces and Internet of Things (IoT) devices. This study examines data-driven approaches for recognizing heterogeneous wireless technologies in unlicensed spectrum and addresses the problem of efficient signal feature extraction and data collection. Specifically, we consider a method that classifies signals by training a convolutional neural network (CNN) model on time-series data of received signal strength measurements to distinguish different wireless communication technologies in unlicensed spectrum. To keep a network of the same structure lightweight, we propose a method that additionally characterizes frequency-band features when collecting the time-series signal data, and we evaluate its effectiveness. Measurement-based experiments using Bluetooth-compatible Ubertooth devices show that the proposed sampling technique maintains the same accuracy for the same neural network while using only about 10% of the sampled data.
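
A minimal PyTorch sketch of the kind of classifier described, assuming the frequency-band information is supplied as an extra input channel alongside the RSSI time series; the class count and layer sizes are illustrative, not the paper's configuration:

```python
import torch.nn as nn

class RSSISignalCNN(nn.Module):
    """Sketch of a 1D CNN that classifies wireless technologies from RSSI
    time series. Following the idea of characterizing the frequency band
    together with the samples, the input has two channels: RSSI and the
    (normalized) channel index each sample was taken on."""
    def __init__(self, n_classes=3, in_channels=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 2, n_samples)
        return self.classifier(self.features(x).squeeze(-1))
```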

CCTV 영상을 활용한 합성곱 신경망 기반 강우강도 산정 (Revolutionizing rainfall estimation through convolutional neural networks leveraging CCTV imagery)

  • 변종윤; 김현준; 이진욱; 전창현
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2023년도 학술발표회 / pp.120-120 / 2023
  • This study proposes a convolutional neural network (CNN)-based model that estimates rainfall intensity from the characteristics of rain streaks in CCTV images. The study used CCTV footage obtained at Chung-Ang University and in the large climate-environment test chamber of the Korea Conformity Laboratories, and the estimated rainfall intensities were compared and verified against ground observations such as disdrometer data. First, data preprocessing was performed to capture the fine-scale variation of rain streaks in the CCTV images; this consists of three steps: separating the rain-streak layer from the original image, separating raindrop particles from the rain-streak layer, and recognizing the raindrop particles. To build the CNN-based rainfall intensity model, the preprocessed images were used as inputs and the ground observations corresponding to the capture time as outputs for training. To prevent overfitting, in which the estimate becomes biased toward a specific area of the raw CCTV frame, five regions of interest (ROIs) were defined within the raw footage. In addition, the CCTV resolution was divided into four levels (2560×1440, 1920×1080, 1280×720, 720×480) to analyze and evaluate differences in training results according to resolution. Compared with previous work, the proposed model estimates rainfall intensity by directly or indirectly considering physical phenomena such as rain-streak behavior in CCTV images and applies machine learning to capture the intrinsic features of rainfall images, so the model proposed in this study is expected to be highly valuable in future applications.
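
A minimal sketch of the ROI-based CNN regression step, assuming five fixed crop windows over the preprocessed rain-streak layer; the coordinates, patch size, and network are placeholders, not the study's configuration:

```python
import torch
import torch.nn as nn

class RainIntensityCNN(nn.Module):
    """Illustrative CNN regressor from a preprocessed rain-streak ROI patch
    to rainfall intensity (mm/h)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

# Five regions of interest (y, x, h, w); averaging their predictions
# counteracts bias toward any single area of the frame.
ROIS = [(100, 100, 128, 128), (100, 600, 128, 128), (400, 100, 128, 128),
        (400, 600, 128, 128), (250, 350, 128, 128)]

def estimate_rainfall(model, frame):   # frame: (1, H, W) rain-streak layer
    patches = torch.stack([frame[:, y:y+h, x:x+w] for y, x, h, w in ROIS])
    return model(patches).mean()
```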

Edge 분석과 ROI 기법을 활용한 콘크리트 균열 분석 - Edge와 ROI를 적용한 콘크리트 균열 분석 및 검사 - (Edge Detection and ROI-Based Concrete Crack Detection)

  • 박희원; 이동은
    • 한국건설관리학회논문집 / Vol. 25, No. 2 / pp.36-44 / 2024
  • This paper introduces concrete crack analysis using a convolutional neural network and an ROI technique. Structures such as concrete surfaces and beams are exposed to fatigue stress and cyclic loading, which generally causes cracks that begin at a microscopic level on the surface of the structure. Cracks degrade a structure's stability and reduce its robustness. Early detection allows preventive measures to be taken against damage and possible failure. In general, manual inspection results are of poor quality, access is difficult for large-scale infrastructure, and cracks are hard to detect accurately. Because automating such manual inspection can overcome the limitations of existing practice, computer-vision-based studies have been conducted; however, studies covering diverse crack types or using thermal imaging cameras remain scarce. This study therefore develops and presents a methodology for automatically detecting cracks in concrete walls, with the following objectives. First, using image processing techniques, the main advantage of image-based crack analysis, it provides results and information with improved accuracy compared with conventional manual methods. Second, it implements automatic crack detection for non-destructive testing by developing an algorithm based on an enhanced Sobel edge segmentation technique and an ROI method.
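
A minimal sketch of the Sobel-edge and ROI stage using OpenCV; the ROI coordinates and threshold are placeholders, and the paper's enhanced segmentation and CNN analysis stages are not reproduced here:

```python
import cv2
import numpy as np

def crack_edge_map(image_path, roi=(0, 0, 400, 400), thresh=60):
    """Illustrative Sobel-edge + ROI step of a crack-detection pipeline.
    roi is (x, y, w, h) in pixels; thresh is the edge-magnitude cutoff."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = roi
    patch = cv2.GaussianBlur(gray[y:y+h, x:x+w], (3, 3), 0)
    gx = cv2.Sobel(patch, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(patch, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)
    edges = (magnitude > thresh).astype(np.uint8) * 255
    return edges  # binary map; thin connected ridges are crack candidates
```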

Assessing Stream Vegetation Dynamics and Revetment Impact Using Time-Series RGB UAV Images and ResNeXt101 CNNs

  • Seung-Hwan Go; Kyeong-Soo Jeong; Jong-Hwa Park
    • 대한원격탐사학회지 / Vol. 40, No. 1 / pp.9-18 / 2024
  • Small streams, despite their rich ecosystems, face challenges in vegetation assessment due to the limitations of traditional, time-consuming methods. This study presents a groundbreaking approach, combining unmanned aerial vehicles (UAVs), convolutional neural networks (CNNs), and the visible-band difference vegetation index (VDVI), to revolutionize both the assessment and management of stream vegetation. Focusing on Idong Stream in South Korea (2.7 km long, 2.34 km² basin area) with eight diverse revetment methods, we leveraged high-resolution RGB images captured by UAVs across five dates (July-December). These images trained a ResNeXt101 CNN model, achieving an impressive 89% accuracy in classifying vegetation cover (soil, water, and vegetation). This enabled detailed spatial and temporal analysis of vegetation distribution. Further, VDVI calculations on the classified vegetation areas allowed assessment of vegetation vitality. Our key findings showcase the power of this approach: (a) The CNN model generated highly accurate cover maps, facilitating precise monitoring of vegetation changes over time and space. (b) August displayed the highest average VDVI (0.24), indicating peak vegetation growth crucial for stabilizing streambanks and resisting flow. (c) Different revetment methods impacted vegetation vitality. Fieldstone sections exhibited initially high vitality followed by decline due to leaf browning; block-type sections and the control group showed a gradual decline after peak growth. Interestingly, the "H environment block" exhibited minimal change, suggesting potential benefits for specific ecological functions. (d) Despite initial differences, all sections converged in vegetation distribution trends after 15 years due to the influence of surrounding vegetation. This study demonstrates the immense potential of UAV-based remote sensing and CNNs for revolutionizing small-stream vegetation assessment and management. By providing high-resolution, temporally detailed data, this approach offers distinct advantages over traditional methods, ultimately benefiting both the environment and surrounding communities through informed decision-making for improved stream health and ecological conservation.
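
The VDVI used for the vitality assessment has the standard visible-band form, sketched below with NumPy; the masking to classified vegetation areas is indicated only as a comment:

```python
import numpy as np

def vdvi(rgb):
    """Visible-band difference vegetation index from an RGB UAV image,
    VDVI = (2G - R - B) / (2G + R + B); values closer to 1 indicate more
    vigorous vegetation. rgb: float array of shape (H, W, 3) in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) / (2 * g + r + b + 1e-6)

# In the workflow described above, the index would be averaged only over
# pixels the CNN classified as vegetation, e.g. np.nanmean(vdvi(img)[veg_mask]).
```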