• Title/Summary/Keyword: Grad-CAM

Face Recognition Network using gradCAM (gradCam을 사용한 얼굴인식 신경망)

  • Chan Hyung Baek;Kwon Jihun;Ho Yub Jung
    • Smart Media Journal / v.12 no.2 / pp.9-14 / 2023
  • In this paper, we propose a face recognition network that attempts to use more facial features while using a smaller training set. When combining neural networks for face recognition, we want to use networks that rely on different parts of the facial features. However, network training randomly determines where these facial features are obtained from. On the other hand, the judgment basis of a network model can be expressed as a saliency map through gradCAM. Therefore, in this paper, we use gradCAM to visualize where the trained face recognition model has made its observations and recognition judgments, so that the network combination can be constructed based on the different facial features used. Using this approach, we trained a network for a small face recognition problem. In a simple toy face recognition example, the recognition network used in this paper improves the accuracy by 1.79% and reduces the equal error rate (EER) by 0.01788 compared to the conventional approach.
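
The building block of this approach is the standard Grad-CAM saliency map of a trained classifier, which can then be compared across networks to see whether they attend to different regions. A minimal PyTorch sketch is given below; the two ImageNet backbones, the target layer, and the random input are stand-ins chosen for illustration, not the face-recognition networks used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, x, class_idx=None):
    """Compute a Grad-CAM heatmap for one image tensor x of shape (1, 3, H, W)."""
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    # Channel weights: global-average-pooled gradients (Grad-CAM's alpha_k).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam, class_idx

# Example: compare where two pretrained backbones "look" for the same face crop.
model_a = models.resnet18(weights="IMAGENET1K_V1").eval()
model_b = models.resnet50(weights="IMAGENET1K_V1").eval()
face = torch.rand(1, 3, 224, 224)          # stand-in for a preprocessed face image
cam_a, _ = grad_cam(model_a, model_a.layer4, face)
cam_b, _ = grad_cam(model_b, model_b.layer4, face)
overlap = (cam_a * cam_b).mean()           # low overlap suggests different facial regions are used
print(float(overlap))
```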

Research of a Method of Generating an Adversarial Sample Using Grad-CAM (Grad-CAM을 이용한 적대적 예제 생성 기법 연구)

  • Kang, Sehyeok
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.878-885 / 2022
  • Research in the field of computer vision based on deep learning is being actively conducted. However, deep learning-based models are vulnerable to adversarial attacks, which increase the model's misclassification rate by applying adversarial perturbation. In particular, FGSM is recognized as one of the effective attack methods because it is simple, fast, and has a considerable attack success rate. Meanwhile, as one of the efforts to visualize deep learning models, Grad-CAM enables visual explanation of convolutional neural networks. In this paper, I propose a method to generate adversarial examples with a high attack success rate by applying Grad-CAM to FGSM. The method uses Grad-CAM to choose pixels that are closely related to the label and adds perturbations to those pixels intensively. The proposed method achieves a higher success rate than FGSM with the same perturbation for both targeted and untargeted examples. In addition, unlike FGSM, it has the advantage that the distribution of noise is not uniform, and when the success rate is increased by repeatedly applying noise, the attack succeeds with fewer iterations.
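
The core step described here is restricting the FGSM perturbation to pixels that Grad-CAM marks as label-relevant. The sketch below illustrates that idea under stated assumptions: the backbone, the 30% keep ratio, and the epsilon value are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam_mask(model, layer, x, y, keep_ratio=0.3):
    """Binary mask of the pixels with the highest Grad-CAM scores for label y."""
    store = {}
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, y].backward()
    h1.remove(); h2.remove()
    w = store["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["a"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, x.shape[2:], mode="bilinear", align_corners=False).squeeze()
    thresh = torch.quantile(cam.flatten(), 1.0 - keep_ratio)
    return (cam >= thresh).float()

def cam_fgsm(model, layer, x, y, eps=8 / 255, keep_ratio=0.3):
    """Untargeted FGSM whose perturbation is limited to Grad-CAM-salient pixels."""
    mask = grad_cam_mask(model, layer, x, y, keep_ratio)          # (H, W)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), torch.tensor([y]))
    loss.backward()
    # Apply the sign step only where the mask is 1, instead of uniformly as in plain FGSM.
    x_adv = x_adv + eps * x_adv.grad.sign() * mask
    return x_adv.clamp(0, 1).detach()

model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224)            # stand-in for a test image in [0, 1]
y = model(x).argmax(1).item()             # attack the currently predicted label
x_adv = cam_fgsm(model, model.layer4, x, y)
print(model(x_adv).argmax(1).item())
```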

An XAI approach based on Grad-CAM to analyze learning criteria for DCGANS (DCGAN의 학습 기준을 분석하기 위한 Grad-CAM 기반의 XAI 접근 방법)

  • Jin-Ju Ok
    • Annual Conference of KIPS / 2023.11a / pp.479-480 / 2023
  • Generative AI models make it difficult to identify the criteria by which they learn. Focusing on DCGAN, we propose one method for judging the generator's learning criteria by analyzing the discriminator. In the process, the XAI technique Grad-CAM is used to analyze which parts the model regards as important during training, and we introduce a method for distinguishing data that is suitable for training from data that is not.
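
The abstract describes applying Grad-CAM to the DCGAN discriminator to see which image regions drive its real/fake judgment. A hedged sketch of that single step is below; the discriminator architecture and the 64x64 input are generic DCGAN-style assumptions, not the model from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small DCGAN-style discriminator for 64x64 RGB images (assumed architecture).
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(256, 1, 8)        # -> (N, 1, 1, 1) real/fake logit

    def forward(self, x):
        return self.head(self.features(x)).flatten(1)

def discriminator_cam(disc, x):
    """Grad-CAM over the discriminator's last conv block w.r.t. its real/fake logit."""
    feats = disc.features(x)
    feats.retain_grad()
    logit = disc.head(feats).flatten(1)[0, 0]
    disc.zero_grad()
    logit.backward()
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats).sum(1, keepdim=True))
    cam = F.interpolate(cam, x.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

disc = Discriminator().eval()
sample = torch.rand(1, 3, 64, 64)       # stand-in for a generated or training image
heatmap = discriminator_cam(disc, sample)
print(heatmap.shape)                    # regions driving the real/fake decision
```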

Detection of Power Transmission Equipment in Image using Guided Grad-CAM (Guided Grad-CAM 을 이용한 영상 내 송전설비 검출기법)

  • Park, Eun-Soo;Kim, SeungHwan;Mujtaba, Ghulam;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.709-713 / 2020
  • In this paper, we propose a method for effectively detecting power transmission equipment, including objects such as transmission lines that are hard to distinguish even with the naked eye. An object recognition model is trained on a transmission tower dataset to extract the Region of Interest (ROI) of the transmission equipment. ResNet50 is trained on a transmission line dataset, and a Guided Grad-CAM map is produced for the extracted ROI image. Noise-removal post-processing is then applied to the Guided Grad-CAM output to extract the transmission equipment. When the proposed technique is applied, transmission equipment can be maintained using footage captured by drones or UAV helicopters.
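
One possible shape for the Guided Grad-CAM step on a detector-cropped ROI is sketched below. This is a compressed single-pass approximation (the guided-backprop ReLU hooks also affect the Grad-CAM gradients, unlike the original two-pass formulation), and the backbone, target layer, and simple thresholding used as "noise removal" are illustrative assumptions rather than the authors' pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def guided_grad_cam(model, target_layer, roi):
    """Guided Grad-CAM for one ROI crop (1, 3, H, W): guided backprop x Grad-CAM."""
    # Guided backprop: only positive gradients flow back through every ReLU.
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            m.inplace = False
            m.register_full_backward_hook(lambda mod, gi, go: (torch.clamp(gi[0], min=0.0),))

    store = {}
    target_layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    target_layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))

    roi = roi.clone().requires_grad_(True)
    logits = model(roi)
    model.zero_grad()
    logits[0, logits.argmax(1).item()].backward()

    # Grad-CAM map from the target layer.
    w = store["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["a"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, roi.shape[2:], mode="bilinear", align_corners=False)

    # Guided-backprop map from the input gradient, combined with the CAM.
    gbp = roi.grad.abs().max(dim=1, keepdim=True).values
    ggc = (gbp * cam).squeeze()
    return (ggc - ggc.min()) / (ggc.max() - ggc.min() + 1e-8)

model = models.resnet50(weights="IMAGENET1K_V1").eval()
roi = torch.rand(1, 3, 224, 224)              # stand-in for a detector-cropped ROI
ggc = guided_grad_cam(model, model.layer4, roi)
# Simple noise-removal post-processing: keep only strong responses, zero the rest.
mask = (ggc > 0.5).float() * ggc
print(mask.shape)
```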

Automatic detection of icing wind turbine using deep learning method

  • Hacıefendioglu, Kemal;Basaga, Hasan Basri;Ayas, Selen;Karimi, Mohammad Tordi
    • Wind and Structures / v.34 no.6 / pp.511-523 / 2022
  • Detecting icing on the blades of wind turbines built in cold regions with conventional methods is a laborious, expensive, and difficult task. Regarding this issue, the use of smart systems has recently come onto the agenda, and deep learning makes it possible to address it. In this study, an application was implemented that detects icing on wind turbine blade images using deep-learning-based visualization techniques. Pre-trained ResNet-50, VGG-16, VGG-19, and Inception-V3 models, which are well-known deep learning approaches, are used to classify objects automatically. The Grad-CAM, Grad-CAM++, and Score-CAM visualization techniques were considered, depending on the deep learning method used, to accurately predict the location of icing regions on the wind turbine blades. It was clearly shown that the best visualization technique for localization is Score-CAM. Finally, visualization performance analyses in various cases, including close-up and remote photos of a wind turbine, icing density, and lighting, were carried out using Score-CAM for ResNet-50. As a result, it is understood that these methods can detect icing occurring on the wind turbine with acceptably high accuracy.
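
Score-CAM, which the study found best for localization, weights each activation map by the class score obtained when that map is used as a soft mask over the input, so no gradients are needed. A minimal sketch is below; the ImageNet ResNet-50 and the random "blade" tensor are placeholders, not the study's fine-tuned models or data.

```python
import torch
import torch.nn.functional as F
from torchvision import models

@torch.no_grad()
def score_cam(model, target_layer, x, class_idx=None, batch=32):
    """Score-CAM: weight each activation map by the class score it produces
    when used as a soft mask over the input (gradient-free)."""
    store = {}
    h = target_layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    logits = model(x)
    h.remove()
    if class_idx is None:
        class_idx = logits.argmax(1).item()

    acts = store["a"]                                           # (1, K, h, w)
    # Upsampling all K maps at full resolution is memory-hungry; a real
    # implementation would batch or prune channels.
    maps = F.interpolate(acts, x.shape[2:], mode="bilinear", align_corners=False)[0]
    mn = maps.flatten(1).min(1).values.view(-1, 1, 1)
    mx = maps.flatten(1).max(1).values.view(-1, 1, 1)
    maps = (maps - mn) / (mx - mn + 1e-8)                       # each map in [0, 1]

    scores = []
    for i in range(0, maps.shape[0], batch):
        masked = x * maps[i:i + batch].unsqueeze(1)             # (b, 3, H, W)
        scores.append(model(masked)[:, class_idx])
    weights = torch.softmax(torch.cat(scores), dim=0).view(-1, 1, 1)

    cam = F.relu((weights * maps).sum(0))
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8), class_idx

model = models.resnet50(weights="IMAGENET1K_V1").eval()
blade = torch.rand(1, 3, 224, 224)        # stand-in for a wind-turbine blade photo
cam, cls = score_cam(model, model.layer4, blade)
print(cam.shape, cls)
```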

Analyze weeds classification with visual explanation based on Convolutional Neural Networks

  • Vo, Hoang-Trong;Yu, Gwang-Hyun;Nguyen, Huy-Toan;Lee, Ju-Hwan;Dang, Thanh-Vu;Kim, Jin-Young
    • Smart Media Journal / v.8 no.3 / pp.31-40 / 2019
  • To understand how a Convolutional Neural Network (CNN) model captures the features of a pattern to determine which class it belongs to, in this paper we use Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize and analyze how a CNN model behaves on the CNU weeds dataset. We apply this technique to a ResNet model and figure out which features the model captures to determine a specific class, what makes the model produce a correct or wrong classification, and how wrongly labeled images can negatively affect a CNN model during the training process. In the experiment, Grad-CAM highlights the important regions of weeds, depending on the patterns learned by ResNet, such as the lobe and limb of 미국가막사리 (devil's beggarticks), or the entire leaf surface of 단풍잎돼지풀 (giant ragweed). Besides, Grad-CAM shows that a CNN model can localize the object even though it is trained only for the classification problem.
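
One way to carry out the kind of per-image analysis described here is to compare the Grad-CAM maps of the predicted class and the ground-truth class when they disagree. The sketch below assumes a generic ImageNet ResNet-50, a random input, and a made-up class index rather than the CNU weeds model and labels.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def cam_for(model, layer, x, class_idx):
    """Grad-CAM heatmap for a specific class index (sketch)."""
    store = {}
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    w = store["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["a"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, x.shape[2:], mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

model = models.resnet50(weights="IMAGENET1K_V1").eval()
leaf = torch.rand(1, 3, 224, 224)          # stand-in for a weed image
true_class = 985                           # hypothetical ground-truth index
pred_class = model(leaf).argmax(1).item()
cam_pred = cam_for(model, model.layer4, leaf, pred_class)
cam_true = cam_for(model, model.layer4, leaf, true_class)
# Low overlap suggests the prediction was driven by regions other than the ones
# that characterize the true class, which helps diagnose misclassifications.
print("predicted:", pred_class, "overlap with true-class map:", float((cam_pred * cam_true).mean()))
```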

Indirect Inspection Signal Diagnosis of Buried Pipe Coating Flaws Using Deep Learning Algorithm (딥러닝 알고리즘을 이용한 매설 배관 피복 결함의 간접 검사 신호 진단에 관한 연구)

  • Sang Jin Cho;Young-Jin Oh;Soo Young Shin
    • Transactions of the Korean Society of Pressure Vessels and Piping / v.19 no.2 / pp.93-101 / 2023
  • In this study, a deep learning algorithm was used to diagnose electric potential signals obtained through CIPS and DCVG, indirect inspection methods used to confirm the soundness of buried pipes. The deep learning algorithm consists of a CNN (Convolutional Neural Network) model for diagnosing the electric potential signal and Grad-CAM (Gradient-weighted Class Activation Mapping) for showing the predicted flaw location. The CNN model classifies input data as normal or abnormal according to the presence or absence of a flaw in the buried pipe, and for abnormal data, Grad-CAM generates a heat map that visualizes the predicted flaw location along the pipe. The CIPS/DCVG signals and piping layout obtained from a 3D finite element model were used as input data for training the CNN. The trained CNN classified normal/abnormal data with 93% accuracy, and Grad-CAM predicted flaw locations with an average error of 2 m. As a result, it was confirmed that the electric potential signal of a buried pipe can be diagnosed using a CNN-based deep learning algorithm.
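
The described pipeline pairs a CNN classifier on the potential signal with a Grad-CAM heat map that is read back as a position along the pipe. A small 1-D sketch of that idea follows; the network, signal length, and 200 m pipe length are invented for illustration and do not come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small 1-D CNN over a potential signal sampled along the pipe (assumed architecture).
class SignalCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32, n_classes)      # normal / abnormal

    def forward(self, x):
        return self.classifier(self.features(x).mean(dim=2))

def grad_cam_1d(model, signal, pipe_length_m=200.0):
    """Grad-CAM along the signal axis, converted to a position along the pipe."""
    feats = model.features(signal)                      # (1, 32, L/4)
    feats.retain_grad()
    logits = model.classifier(feats.mean(dim=2))
    cls = logits.argmax(1).item()
    model.zero_grad()
    logits[0, cls].backward()
    w = feats.grad.mean(dim=2, keepdim=True)
    cam = F.relu((w * feats).sum(1, keepdim=True))      # (1, 1, L/4)
    cam = F.interpolate(cam, size=signal.shape[-1], mode="linear", align_corners=False)[0, 0]
    flaw_pos_m = cam.argmax().item() / signal.shape[-1] * pipe_length_m
    return cls, flaw_pos_m

model = SignalCNN().eval()
trace = torch.randn(1, 1, 400)        # stand-in for one CIPS/DCVG potential trace
cls, pos = grad_cam_1d(model, trace)
print("predicted class:", cls, "- peak CAM position along pipe [m]:", round(pos, 1))
```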

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.11-20 / 2019
  • With the development of deep learning technology, research is being actively carried out on user-friendly interfaces suitable for virtual reality or augmented reality applications. To support an interface that uses the user's hands, this paper proposes a deep learning-based fingertip detection method that enables the tracking of fingertip coordinates to select virtual objects, or to write or draw in the air. The approximate region of the fingertip object is first cropped from the input image using Grad-CAM, and a convolutional neural network with atrous convolution is then applied to the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms and requires no preprocessing for annotating objects. To verify the method, we implemented an air-writing application and showed that, with a recognition rate of 81% and a processing speed of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
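
The two-stage idea, cropping a rough hand region from a Grad-CAM map and then running a dilated-convolution network on the crop, can be sketched as below. The cropping threshold, network layout, and coordinate head are hypothetical stand-ins, not the architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cam_to_crop(image, cam, thresh=0.5, pad=8):
    """Crop the image region whose Grad-CAM score exceeds a threshold (sketch)."""
    ys, xs = torch.nonzero(cam > thresh, as_tuple=True)
    if len(ys) == 0:
        return image
    y0, y1 = max(int(ys.min()) - pad, 0), min(int(ys.max()) + pad, image.shape[-2])
    x0, x1 = max(int(xs.min()) - pad, 0), min(int(xs.max()) + pad, image.shape[-1])
    return image[..., y0:y1, x0:x1]

# Small regression head built from atrous (dilated) convolutions, which enlarge the
# receptive field without extra pooling; sizes and dilation rates are illustrative.
class AtrousFingertipNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                  # (x, y) fingertip coordinates in [0, 1]
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

image = torch.rand(1, 3, 240, 320)             # stand-in for a camera frame
cam = torch.rand(240, 320)                     # stand-in for a hand-region Grad-CAM map
crop = cam_to_crop(image, cam)                 # rough hand/fingertip region
crop = F.interpolate(crop, size=(128, 128), mode="bilinear", align_corners=False)
xy = AtrousFingertipNet()(crop)                # fingertip location within the crop
print(xy)
```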

Real-Time Fire Detection based on CNN and Grad-CAM (CNN과 Grad-CAM 기반의 실시간 화재 감지)

  • Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.12 / pp.1596-1603 / 2018
  • Rapidly detecting and warning of fires is necessary for minimizing human injury and property damage. Generally, when a fire occurs, both smoke and flames are generated, so fire detection systems need to detect both. However, most fire detection systems detect only flames or only smoke, and their processing is slowed by additional preprocessing tasks. In this paper, we implemented a fire detection system that predicts flames and smoke at the same time by constructing a CNN model that supports multi-label classification. The system can also monitor the fire status in real time by using Grad-CAM, which visualizes the position of each class based on the characteristics of the CNN. We tested the proposed system with 13 fire videos and obtained average accuracies of 98.73% and 95.77% for flames and smoke, respectively.
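
The multi-label part of this design can be sketched with two independent sigmoid outputs (flame, smoke) trained with a binary cross-entropy loss, plus a per-class Grad-CAM map for whichever classes fire. The backbone, threshold, and label vector below are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Multi-label head: two independent sigmoid outputs instead of one softmax,
# so flame and smoke can be detected in the same frame.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)          # [flame, smoke] logits
criterion = nn.BCEWithLogitsLoss()                     # multi-label training loss

frame = torch.rand(1, 3, 224, 224)                     # stand-in for one video frame
loss = criterion(model(frame), torch.tensor([[1.0, 0.0]]))   # hypothetical label: flame only

def per_class_cam(model, layer, x, class_idx):
    """Grad-CAM for one of the multi-label outputs."""
    store = {}
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    w = store["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["a"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, x.shape[2:], mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

model.eval()
probs = torch.sigmoid(model(frame))[0]                 # independent flame/smoke probabilities
for idx, name in enumerate(["flame", "smoke"]):
    if probs[idx] > 0.5:                               # visualize only detected classes
        cam = per_class_cam(model, model.layer4, frame, idx)
        print(name, "strongest response at flat index", cam.argmax().item())
```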

Grad-CAM based deep learning network for location detection of the main object (주 객체 위치 검출을 위한 Grad-CAM 기반의 딥러닝 네트워크)

  • Kim, Seon-Jin;Lee, Jong-Keun;Kwak, Nae-Jung;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.204-211 / 2020
  • In this paper, we propose an optimal deep learning network architecture for detecting the location of the main object through weakly-supervised learning. The proposed network adds convolution blocks to improve the localization accuracy of the main object: it consists of five additional blocks of convolution layers added on top of VGG-16. The network was trained by weakly-supervised learning, which does not require ground-truth location information for objects. In addition, Grad-CAM was used to compensate for the weakness of GAP in CAM, one of the weakly-supervised localization methods. The proposed network was tested on the CUB-200-2011 dataset and achieved a top-1 localization error of 50.13%. It also shows higher accuracy in detecting the main object than the existing method.
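
Top-1 localization error, the metric reported here, counts an image as correct only when the predicted class is right and a box derived from the CAM overlaps the ground-truth box with IoU of at least 0.5. The sketch below illustrates that evaluation for a single image using a stock VGG-16 and an invented annotation; it is not the authors' five-block network.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def cam_to_box(cam, thresh=0.2):
    """Bounding box around CAM responses above a fraction of the maximum (CAM-style)."""
    ys, xs = torch.nonzero(cam > thresh * cam.max(), as_tuple=True)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix1 - ix0, 0) * max(iy1 - iy0, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def grad_cam(model, layer, x):
    store = {}
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(g=go[0]))
    logits = model(x)
    cls = logits.argmax(1).item()
    model.zero_grad()
    logits[0, cls].backward()
    h1.remove(); h2.remove()
    w = store["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * store["a"]).sum(1, keepdim=True))
    cam = F.interpolate(cam, x.shape[2:], mode="bilinear", align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8), cls

# Top-1 localization: correct class AND IoU(CAM box, ground-truth box) >= 0.5.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224)                 # stand-in for a CUB-200-2011 image
gt_class, gt_box = 15, (40, 30, 190, 180)          # hypothetical annotation
cam, pred = grad_cam(model, model.features, image)
correct = (pred == gt_class) and iou(cam_to_box(cam), gt_box) >= 0.5
print("top-1 localization correct:", correct)
```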