• Title/Summary/Keyword: Knowledge distillation

Search results: 49

Visual Explanation of Black-box Models Using Layer-wise Class Activation Maps from Approximating Neural Networks (신경망 근사에 의한 다중 레이어의 클래스 활성화 맵을 이용한 블랙박스 모델의 시각적 설명 기법)

  • Kang, JuneGyu;Jeon, MinGyeong;Lee, HyeonSeok;Kim, Sungchan
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.4 / pp.145-151 / 2021
  • In this paper, we propose a novel visualization technique to explain the predictions of deep neural networks. We use knowledge distillation (KD) to approximate the interior of a black-box model for which only the inputs and outputs are known: through KD, the information of the black-box model is transferred to a white-box model, which learns the black-box model's representation. The white-box model then generates attention maps for each of its layers using Grad-CAM, and we combine the attention maps of the different layers by pixel-wise summation to produce a final saliency map that contains information from all layers of the model. The experiments show that the proposed technique identifies important layers and explains which parts of the input are important. Saliency maps generated by the proposed technique performed better than those of Grad-CAM in the deletion game.
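To make the layer-wise combination concrete, the following is a minimal sketch (not the authors' code) of computing Grad-CAM maps at several layers of a surrogate white-box network and merging them by pixel-wise summation; the ResNet-18 backbone, chosen layers, and input size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # stand-in for the white-box student
target_layers = [model.layer2, model.layer3, model.layer4]

def grad_cam(model, layer, x, class_idx):
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)           # channel importance weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True)) # weighted activation map
    return F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)

x = torch.randn(1, 3, 224, 224)                           # dummy input image
cls = model(x).argmax(dim=1).item()
# pixel-wise summation of the per-layer maps gives the final saliency map
saliency = sum(grad_cam(model, layer, x, cls) for layer in target_layers)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```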

Dictionary Distillation in Face Super-Resolution (딕셔너리 증류 기법을 적용한 얼굴 초해상화)

  • Jo, Byungho;Park, In Kyu;Hong, Sungeun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.193-194 / 2021
  • In this paper, we propose a face super-resolution model that applies knowledge distillation. The proposed method selects, as the teacher, a model that uses facial-region dictionary information, which has recently shown strong performance in face restoration, and builds an efficient student model through adversarial knowledge distillation. This paper presents a face super-resolution method that avoids the additional cost incurred by facial prior information at test time, and demonstrates its superiority through quantitative and qualitative comparisons with various existing super-resolution methods.
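As a rough illustration of adversarial knowledge distillation for super-resolution (the paper's dictionary-based teacher and network architectures are not reproduced), the sketch below trains a placeholder student SR network to imitate a teacher's outputs while a small discriminator tries to tell them apart; every module, loss weight, and hyperparameter here is an assumption.

```python
import torch
import torch.nn as nn

class StudentSR(nn.Module):             # small placeholder 4x super-resolution network
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 16, 3, padding=1), nn.PixelShuffle(4))
    def forward(self, x):
        return self.body(x)

disc = nn.Sequential(                    # distinguishes teacher outputs from student outputs
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

student, bce, l1 = StudentSR(), nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(lr_img, teacher_sr):      # teacher_sr: output of the dictionary-based teacher
    sr = student(lr_img)
    # 1) discriminator: teacher output = "real", student output = "fake"
    real, fake = disc(teacher_sr), disc(sr.detach())
    d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) student: imitate the teacher and try to fool the discriminator
    adv = disc(sr)
    g_loss = l1(sr, teacher_sr) + 1e-3 * bce(adv, torch.ones_like(adv))
    opt_s.zero_grad(); g_loss.backward(); opt_s.step()
```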


On Evaluating Recommender Systems with Knowledge Distillation in Multi-Class Feedback Environment (다중클래스 피드백을 이용한 지식증류기법 기반의 추천시스템 정확도 평가)

  • Kim, Jiyeon;Bae, Hong-Kyun;Kim, Sang-Wook
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.310-311 / 2021
  • A recommender system suggests items a user is likely to prefer, based on the feedback the user has left on items in the past. In recommender systems, user preferences can be expressed in two ways: a single-class setting and a multi-class setting. We experiment with Ranking Distillation, a knowledge distillation method proposed for recommender systems, in the multi-class setting to examine whether training a small model with distilled knowledge is effective.
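The sketch below illustrates the general idea of Ranking Distillation in a simplified, hedged form: a large teacher recommender ranks items, and its top-K items are used as rank-weighted extra positives for a small student. The matrix-factorization models, weights, and dimensions are placeholders, not this paper's experimental setup.

```python
import torch
import torch.nn as nn

n_users, n_items, K = 1000, 5000, 10

class MF(nn.Module):                     # simple matrix-factorization scorer
    def __init__(self, dim):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
    def forward(self, u):                # scores of users u for all items
        return self.user(u) @ self.item.weight.T

teacher, student = MF(dim=64).eval(), MF(dim=8)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def train_step(u, pos_items):            # u: (B,), pos_items: (B, P) observed positives
    with torch.no_grad():
        topk = teacher(u).topk(K, dim=1).indices        # teacher's top-K ranked items
    weights = 1.0 / torch.arange(1, K + 1).float()      # position-based weights
    scores = student(u)
    # ground-truth positives: standard binary loss
    loss_gt = bce(scores.gather(1, pos_items),
                  torch.ones_like(pos_items, dtype=torch.float)).mean()
    # distillation: push the teacher's top-K toward high scores, weighted by rank
    loss_kd = (weights * bce(scores.gather(1, topk), torch.ones(u.size(0), K))).mean()
    opt.zero_grad(); (loss_gt + 0.5 * loss_kd).backward(); opt.step()
```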

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology / v.24 no.6 / pp.541-552 / 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The proposed model was first pre-trained with chest radiographs to utilize common knowledge between modalities, then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs. The proposed model was trained using data from supine and erect abdominal radiographs. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The proposed model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated the performance in diagnosing pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performances improved with the assistance of the proposed model. Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
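DISTL itself combines Vision Transformer self-supervision with self-training and is not reproduced here; purely as a generic illustration of the self-training ingredient, the hedged sketch below pseudo-labels unlabeled radiographs with a fixed teacher and keeps only confident predictions. All networks, class heads, and thresholds are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

teacher = models.resnet18(weights=None)   # stand-in for the fine-tuned teacher
student = models.resnet18(weights=None)   # stand-in for the self-trained model
teacher.fc = nn.Linear(512, 2)            # "pneumoperitoneum" vs "non-pneumoperitoneum"
student.fc = nn.Linear(512, 2)
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def self_train_step(labeled_x, labels, unlabeled_x, threshold=0.9):
    # supervised loss on the small labeled set
    loss = ce(student(labeled_x), labels)
    # pseudo-labels from the teacher on unlabeled radiographs, kept only if confident
    with torch.no_grad():
        probs = teacher(unlabeled_x).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
    mask = conf > threshold
    if mask.any():
        loss = loss + ce(student(unlabeled_x[mask]), pseudo[mask])
    opt.zero_grad(); loss.backward(); opt.step()
```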

Layer-wise hint-based training for knowledge transfer in a teacher-student framework

  • Bae, Ji-Hoon;Yim, Junho;Kim, Nae-Soo;Pyo, Cheol-Sig;Kim, Junmo
    • ETRI Journal / v.41 no.2 / pp.242-253 / 2019
  • We devise a layer-wise hint training method to improve the existing hint-based knowledge distillation (KD) training approach, which is employed for knowledge transfer in a teacher-student framework using a residual network (ResNet). To achieve this objective, the proposed method first iteratively trains the student ResNet and incrementally employs hint-based information extracted from the pretrained teacher ResNet containing several hint and guided layers. Next, typical softening factor-based KD training is performed using the previously estimated hint-based information. We compare the recognition accuracy of the proposed approach with that of KD training without hints, hint-based KD training, and ResNet-based layer-wise pretraining using reliable datasets, including CIFAR-10, CIFAR-100, and MNIST. When using the selected multiple hint-based information items and their layer-wise transfer in the proposed method, the trained student ResNet more accurately reflects the pretrained teacher ResNet's rich information than the baseline training methods, for all the benchmark datasets we consider in this study.
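The following is a minimal sketch of the hint (intermediate-layer) training stage in a teacher-student ResNet pair, in the spirit of hint-based KD; the specific hint/guided layers, the 1x1 regressor, and the optimizer settings are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

teacher = models.resnet34(weights=None).eval()
student = models.resnet18(weights=None)
# 1x1 conv regressor maps the student's guided layer onto the teacher's hint layer
regressor = nn.Conv2d(128, 128, kernel_size=1)   # layer2 of both nets has 128 channels

def features(net, x, upto):          # run the ResNet stem and residual stages up to `upto`
    x = net.maxpool(net.relu(net.bn1(net.conv1(x))))
    for name in ["layer1", "layer2", "layer3", "layer4"][:upto]:
        x = getattr(net, name)(x)
    return x

opt = torch.optim.Adam(list(student.parameters()) + list(regressor.parameters()), lr=1e-4)
mse = nn.MSELoss()

def hint_step(x):
    with torch.no_grad():
        hint = features(teacher, x, upto=2)       # teacher hint layer (layer2 output)
    guided = regressor(features(student, x, upto=2))
    loss = mse(guided, hint)                      # stage 1: hint-based training
    opt.zero_grad(); loss.backward(); opt.step()
    # stage 2 (not shown): softened-logit KD training initialized from this student
```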

Lightweight Deep Learning Model for Real-Time 3D Object Detection in Point Clouds (실시간 3차원 객체 검출을 위한 포인트 클라우드 기반 딥러닝 모델 경량화)

  • Kim, Gyu-Min;Baek, Joong-Hwan;Kim, Hee Yeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1330-1339 / 2022
  • 3D object detection generally targets relatively large objects such as automobiles, buses, people, and furniture, so it tends to perform poorly on small objects. In addition, in resource-constrained environments such as embedded devices, the model is difficult to deploy because of its huge amount of computation. In this paper, the accuracy of small-object detection is improved by focusing on local features using only one layer, and the inference speed is improved through the proposed knowledge distillation method from a large pre-trained network to a small network, together with an adaptive quantization method based on parameter size. The proposed model was evaluated on the SUN RGB-D Val set and a self-made apple-tree dataset. It achieved 62.04% mAP@0.25 and 47.1% mAP@0.5 with an inference speed of 120.5 scenes per second, demonstrating fast real-time processing.
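As a hedged sketch of the two generic ingredients named above, distillation from a large pre-trained network to a small one followed by quantization, the code below uses placeholder fully connected networks and off-the-shelf dynamic int8 quantization; the paper's point-cloud detector and size-adaptive quantization scheme are not reproduced.

```python
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 256)).eval()
student = nn.Sequential(nn.Linear(1024, 64), nn.ReLU(), nn.Linear(64, 256))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def distill_step(x):                     # x: e.g. pooled point-cloud features
    with torch.no_grad():
        t_out = teacher(x)
    loss = mse(student(x), t_out)        # student mimics the large teacher's outputs
    opt.zero_grad(); loss.backward(); opt.step()

# After distillation, weights can be quantized to shrink the model further; dynamic
# int8 quantization of Linear layers is shown as a simple stand-in for the paper's
# size-adaptive scheme.
quantized = torch.quantization.quantize_dynamic(student, {nn.Linear}, dtype=torch.qint8)
```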

Utilizing Mean Teacher Semi-Supervised Learning for Robust Pothole Image Classification

  • Inki Kim;Beomjun Kim;Jeonghwan Gwak
    • Journal of the Korea Society of Computer and Information / v.28 no.5 / pp.17-28 / 2023
  • Potholes on paved roads can have fatal consequences for vehicles traveling at high speed and may even lead to fatalities. Manual detection of potholes by human workers is commonly used to prevent pothole-related accidents, but it is economically and temporally inefficient because it exposes workers on the road, and potholes in certain categories are difficult to predict. Completely preventing potholes is therefore nearly impossible, and even limiting their formation is difficult because of the ground conditions closely tied to the road environment. In addition, building a dataset requires labeling work guided by experts. In this paper, we therefore use the Mean Teacher technique, a semi-supervised, knowledge distillation-based learning method, to achieve robust pothole image classification even with limited labeled data. Using performance metrics and Grad-CAM, we show that with semi-supervised learning, 15 pre-trained CNN models achieved an average accuracy of 90.41%, with a performance difference of between 2% and 9% compared to supervised learning.
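A minimal Mean Teacher sketch is given below: the teacher is an exponential moving average of the student, and a consistency loss ties their predictions on differently perturbed unlabeled images. The ResNet-18 backbone, Gaussian noise, and loss weights are assumptions for illustration, not the paper's 15 CNN configurations.

```python
import copy
import torch
import torch.nn as nn
import torchvision.models as models

student = models.resnet18(weights=None)
student.fc = nn.Linear(512, 2)                    # pothole vs. normal road
teacher = copy.deepcopy(student)                  # EMA copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def step(labeled_x, y, unlabeled_x, alpha=0.99, w_cons=1.0):
    noisy1 = unlabeled_x + 0.05 * torch.randn_like(unlabeled_x)
    noisy2 = unlabeled_x + 0.05 * torch.randn_like(unlabeled_x)
    # supervised loss + consistency between student and EMA-teacher predictions
    loss = ce(student(labeled_x), y) + \
           w_cons * mse(student(noisy1).softmax(1), teacher(noisy2).softmax(1).detach())
    opt.zero_grad(); loss.backward(); opt.step()
    # teacher weights follow the student as an exponential moving average
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(alpha).add_(sp, alpha=1 - alpha)
```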

Lightweight Deep Learning Model for Heart Rate Estimation from Facial Videos (얼굴 영상 기반의 심박수 추정을 위한 딥러닝 모델의 경량화 기법)

  • Gyutae Hwang;Myeonggeun Park;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.2 / pp.51-58 / 2023
  • This paper proposes a deep learning method for estimating the heart rate from facial videos. The proposed method estimates remote photoplethysmography (rPPG) signals to predict the heart rate. Although several methods have been proposed for estimating rPPG signals, most of them cannot be used on low-power single-board computers because of their computational complexity. To address this problem, we construct a lightweight student model and employ a knowledge distillation technique to reduce the performance degradation relative to a deeper network. The teacher model consists of 795k parameters, whereas the student model contains only 24k parameters; the inference time is therefore reduced by a factor of 10. By distilling the knowledge in the intermediate feature maps of the teacher model, we improve the accuracy of the student model for heart rate estimation. Experiments were conducted on the UBFC-rPPG dataset to demonstrate the effectiveness of the proposed method, and we also collected our own dataset to verify its accuracy and processing time in a real-world setting. Experimental results on an NVIDIA Jetson Nano board show that the proposed method infers the heart rate in real time with a mean absolute error of 2.5183 bpm.
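The sketch below illustrates intermediate feature-map distillation in a hedged form: a small student learns both the teacher's output signal and, through a 1x1 adapter, its intermediate features. The toy 1D convolutional networks merely stand in for the paper's 795k-parameter teacher and 24k-parameter student; all layer choices and loss weights are assumptions.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.feat = nn.Sequential(nn.Conv1d(3, ch, 5, padding=2), nn.ReLU())
        self.head = nn.Conv1d(ch, 1, 1)           # predicted rPPG waveform
    def forward(self, x):
        f = self.feat(x)
        return self.head(f), f

teacher, student = Net(64).eval(), Net(8)         # toy wide teacher vs. narrow student
adapter = nn.Conv1d(8, 64, 1)                     # match student features to the teacher's width
opt = torch.optim.Adam(list(student.parameters()) + list(adapter.parameters()), lr=1e-3)
mse = nn.MSELoss()

def distill_step(x):                              # x: (B, 3, T) face-region color traces
    with torch.no_grad():
        t_sig, t_feat = teacher(x)
    s_sig, s_feat = student(x)
    # output-level distillation + intermediate feature-map distillation via the adapter
    loss = mse(s_sig, t_sig) + 0.1 * mse(adapter(s_feat), t_feat)
    opt.zero_grad(); loss.backward(); opt.step()
```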

Cooperative Multi-agent Reinforcement Learning on Sparse Reward Battlefield Environment using QMIX and RND in Ray RLlib

  • Minkyoung Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.11-19 / 2024
  • Multi-agent systems can be utilized in various real-world cooperative settings such as battlefield engagements and unmanned transport vehicles. In battlefield engagements, where dense reward design is challenging due to limited domain knowledge, it is crucial to consider settings in which agents learn from explicit sparse rewards. This paper explores the collaborative potential among allied agents in a battlefield scenario. Using the Multi-Robot Warehouse Environment (RWARE) as a sparse-reward environment, we define analogous problems and establish evaluation criteria. We construct a learning environment with the QMIX algorithm from the reinforcement learning library Ray RLlib, enhance the Agent Network of QMIX, and integrate Random Network Distillation (RND). This enables the extraction of patterns and temporal features from the agents' partial observations, confirming that intrinsic rewards can improve the acquisition of sparse-reward experiences.
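As a standalone illustration of the RND component only (without Ray RLlib or QMIX integration), the sketch below trains a predictor against a fixed random target network and uses the prediction error as an intrinsic reward for rarely seen observations; dimensions and learning rates are assumptions.

```python
import torch
import torch.nn as nn

obs_dim, emb_dim = 32, 64
target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
for p in target.parameters():                      # target stays fixed and random
    p.requires_grad_(False)

opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs):                         # obs: (B, obs_dim) partial observations
    err = (predictor(obs) - target(obs)).pow(2).mean(dim=1)
    # rarely seen observations are poorly predicted -> larger intrinsic reward
    opt.zero_grad(); err.mean().backward(); opt.step()
    return err.detach()                            # added to the sparse environment reward
```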

Recent R&D Trends for Lightweight Deep Learning (경량 딥러닝 기술 동향)

  • Lee, Y.J.;Moon, Y.H.;Park, J.Y.;Min, O.G.
    • Electronics and Telecommunications Trends / v.34 no.2 / pp.40-50 / 2019
  • Considerable accuracy improvements have recently been achieved with deep learning in many applications, but at the cost of large amounts of computation and expensive memory. Accordingly, advanced techniques for compacting and accelerating deep learning models have been developed so that they can be deployed on lightweight devices with constrained resources. Lightweight deep learning techniques can be categorized into two schemes: algorithms that are lightweight by design (model simplification and efficient convolutional filters) and techniques that transfer trained models into compact/small ones (model compression and knowledge distillation). In this report, we briefly summarize various lightweight deep learning techniques and possible research directions.
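For the "transferring models into compact/small ones" scheme, the classic soft-target knowledge distillation loss can be sketched as follows; the networks, temperature, and mixing weight are arbitrary placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 1200), nn.ReLU(), nn.Linear(1200, 10)).eval()
student = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def kd_step(x, y, T=4.0, alpha=0.7):
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # soft targets from the teacher at temperature T, plus hard-label cross-entropy
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(s_logits, y)
    loss = alpha * soft + (1 - alpha) * hard
    opt.zero_grad(); loss.backward(); opt.step()
```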