• Title/Summary/Keyword: Knowledge distillation

Satellite Building Segmentation using Deformable Convolution and Knowledge Distillation (변형 가능한 컨볼루션 네트워크와 지식증류 기반 위성 영상 빌딩 분할)

  • Choi, Keunhoon;Lee, Eungbean;Choi, Byungin;Lee, Tae-Young;Ahn, JongSik;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.25 no.7 / pp.895-902 / 2022
  • Building segmentation from satellite imagery such as EO (Electro-Optical) and SAR (Synthetic-Aperture Radar) images is widely used for a variety of applications. EO images have the advantage of color information and are noise-free. In contrast, SAR images capture physical characteristics and geometric information that EO images cannot. This paper proposes a learning framework for efficient building segmentation that consists of teacher-student privileged knowledge distillation and a deformable convolution block. The teacher network uses EO and SAR images simultaneously to produce richer features and provides them to the student network, while the student network uses only EO images. To do this, we present objective functions that consist of a Kullback-Leibler divergence loss and a knowledge distillation loss. Furthermore, we introduce deformable convolution to avoid pixel-level noise and to efficiently capture hard samples such as small and thin buildings at the global level. Experimental results show that our method outperforms other methods and efficiently captures complex samples such as small or narrow buildings. Moreover, our method can be applied to various methods.
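
The objective described above pairs a supervised term with a Kullback-Leibler divergence term that lets the EO-only student imitate the EO+SAR teacher. Below is a minimal PyTorch sketch of such a privileged-distillation objective; the `teacher`/`student` names, the temperature `T`, and the weighting `alpha` are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of privileged knowledge distillation for segmentation,
# assuming hypothetical `teacher` (EO+SAR input) and `student` (EO-only) models.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Combine supervised cross-entropy with a pixel-wise KL divergence
    between the softened teacher and student class distributions."""
    # Supervised segmentation loss on the EO-only student.
    ce = F.cross_entropy(student_logits, labels)
    # Per-pixel KL divergence against the multimodal teacher (privileged info).
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kl

# Usage (shapes only): logits are (N, num_classes, H, W), labels are (N, H, W).
# with torch.no_grad():
#     teacher_logits = teacher(eo_images, sar_images)
# student_logits = student(eo_images)
# loss = distillation_loss(student_logits, teacher_logits, labels)
```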

Determining Whether to Enter a Hazardous Area Using Pedestrian Trajectory Prediction Techniques and Improving the Training of Small Models with Knowledge Distillation (보행자 경로 예측 기법을 이용한 위험구역 진입 여부 결정과 Knowledge Distillation을 이용한 작은 모델 학습 개선)

  • Choi, In-Kyu;Lee, Young Han;Song, Hyok
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.9 / pp.1244-1253 / 2021
  • In this paper, we propose a method for predicting in advance whether pedestrians will enter a hazardous area, based on pedestrian trajectory prediction, together with an efficient method for simplifying the trajectory prediction network. In addition, we propose a method for applying KD (Knowledge Distillation) to a small network for real-time operation in an embedded environment. Using the correlation between predicted future paths and hazard zones, we decide whether a pedestrian will enter, and we apply KD when training small networks to minimize performance degradation. Experimentally, we confirmed that the model with the proposed simplification was 37.49% faster than the existing model, with a slight decrease in accuracy. Training a small network with an initial accuracy of 91.43% using KD improved its accuracy to 94.76%.
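
The entry decision above correlates a predicted future path with a hazard zone. The following is a small illustrative sketch of that decision step, assuming an axis-aligned rectangular zone and a hypothetical `min_hits` threshold; the paper's actual zone representation may differ.

```python
# Minimal sketch of deciding hazard-zone entry from a predicted trajectory.
# The rectangular zone representation and threshold are assumptions for illustration.
import numpy as np

def will_enter_hazard(pred_traj, zone_xyxy, min_hits=1):
    """pred_traj: (T, 2) array of predicted (x, y) positions for future frames.
    zone_xyxy: (x_min, y_min, x_max, y_max) axis-aligned hazardous area.
    Returns True if at least `min_hits` predicted points fall inside the zone."""
    x_min, y_min, x_max, y_max = zone_xyxy
    inside = (
        (pred_traj[:, 0] >= x_min) & (pred_traj[:, 0] <= x_max)
        & (pred_traj[:, 1] >= y_min) & (pred_traj[:, 1] <= y_max)
    )
    return int(inside.sum()) >= min_hits

# Example: a pedestrian predicted to pass through the zone triggers an alert.
traj = np.array([[0.0, 0.0], [1.0, 1.0], [2.5, 2.5], [4.0, 4.0]])
print(will_enter_hazard(traj, zone_xyxy=(2.0, 2.0, 3.0, 3.0)))  # True
```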

Neural Network Model Compression Algorithms for Image Classification in Embedded Systems (임베디드 시스템에서의 객체 분류를 위한 인공 신경망 경량화 연구)

  • Shin, Heejung;Oh, Hyondong
    • The Journal of Korea Robotics Society / v.17 no.2 / pp.133-141 / 2022
  • This paper introduces model compression algorithms that make a deep neural network smaller and faster for embedded systems. Model compression algorithms can be largely categorized into pruning, quantization, and knowledge distillation. In this study, gradual pruning, quantization-aware training, and knowledge distillation that learns the activation boundaries in the hidden layers of the teacher network are integrated. As a large deep neural network is compressed and accelerated by these algorithms, embedded computing boards can run it much faster with less memory usage while preserving reasonable accuracy. To evaluate the compressed neural networks, we measure the size, latency, and accuracy of DenseNet201 for image classification on the CIFAR-10 dataset on an NVIDIA Jetson Xavier.
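
Of the three compression steps integrated here, gradual pruning is the most mechanical to illustrate. Below is a minimal sketch of gradual magnitude pruning using PyTorch's `torch.nn.utils.prune`; the sparsity target, number of rounds, and per-round schedule are assumptions for illustration, not the paper's settings, and quantization-aware training and activation-boundary distillation are omitted.

```python
# Minimal sketch of gradual magnitude pruning, one of the three compression
# steps the paper integrates. Sparsity target and schedule are illustrative.
import torch.nn as nn
import torch.nn.utils.prune as prune

def gradual_prune(model, target_sparsity=0.8, rounds=4):
    # Each round prunes a fraction of the *remaining* weights so that overall
    # sparsity reaches `target_sparsity` after `rounds` rounds:
    # (1 - per_round) ** rounds == 1 - target_sparsity.
    per_round = 1.0 - (1.0 - target_sparsity) ** (1.0 / rounds)
    for _ in range(rounds):
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                prune.l1_unstructured(module, name="weight", amount=per_round)
        # ... fine-tune (optionally with the distillation loss) between rounds ...
    # Make the pruning permanent by folding the masks into the weights.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.remove(module, "weight")
```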

Knowledge Distillation for Unsupervised Depth Estimation (비지도학습 기반의 뎁스 추정을 위한 지식 증류 기법)

  • Song, Jimin;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.209-215 / 2022
  • This paper proposes a novel approach for training an unsupervised depth estimation algorithm. The objective of unsupervised depth estimation is to estimate pixel-wise distances from the camera without external supervision. While most previous works focus on model architectures, loss functions, and masking methods for handling dynamic objects, this paper focuses on a training framework that makes effective use of depth cues. The main loss function of unsupervised depth estimation algorithms is known as the photometric error. In this paper, we claim that a direct depth cue is more effective than the photometric error. To obtain the direct depth cue, we adopt knowledge distillation, a teacher-student learning framework. We train a teacher network based on a previous unsupervised method, and its depth predictions are utilized as pseudo labels. The pseudo labels are then employed to train a student network. In experiments, our proposed algorithm shows performance comparable to the state-of-the-art algorithm, and we demonstrate that our teacher-student framework is effective for unsupervised depth estimation.
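
The core idea is to replace the photometric error with a dense depth cue produced by a frozen teacher. A minimal sketch of this pseudo-label supervision follows; the `teacher`/`student` names and the choice of an L1 loss are assumptions.

```python
# Minimal sketch of the teacher-student pseudo-label idea for depth estimation,
# assuming hypothetical `teacher` and `student` monocular depth networks.
import torch
import torch.nn.functional as F

def depth_distillation_loss(student_depth, teacher_depth):
    """Supervise the student directly with the teacher's depth predictions
    (pseudo labels), giving a dense depth cue instead of a photometric error."""
    return F.l1_loss(student_depth, teacher_depth)

# Training step (shapes only): depth maps are (N, 1, H, W).
# with torch.no_grad():
#     pseudo_depth = teacher(images)   # teacher trained with an unsupervised method
# loss = depth_distillation_loss(student(images), pseudo_depth)
```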

Area-wise relational knowledge distillation

  • Sungchul Cho;Sangje Park;Changwon Lim
    • Communications for Statistical Applications and Methods / v.30 no.5 / pp.501-516 / 2023
  • Knowledge distillation (KD) refers to extracting knowledge from a large and complex model (teacher) and transferring it to a relatively small model (student). This can be done by training the teacher model to obtain the activation values of its hidden or output layers, and then training the student model on the same training data using the obtained values. Recently, relational KD (RKD) has been proposed to extract knowledge about relative differences between training samples; this method improved the performance of the student model compared to conventional KD. In this paper, we propose a new RKD method by introducing a new loss function. The proposed loss function is defined using the area difference between the teacher model and the student model in a specific hidden layer, and we show that the model can be successfully compressed and its generalization performance improved. For model compression on audio data, the accuracy of the model trained with the proposed method is up to 1.8% higher than that of the existing method. For model generalization, introducing the proposed RKD method into self-KD on image data improves accuracy by up to 0.5%.
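
The paper's area-based loss is not reproduced here, but the distance-wise relational KD loss it builds on can be sketched briefly: pairwise distances between samples are computed in the teacher and student feature spaces and then matched. The function names and the smooth-L1 penalty below are illustrative choices.

```python
# Minimal sketch of the distance-wise relational KD loss that area-wise RKD
# builds on; the paper's area-based term itself is not reproduced here.
import torch
import torch.nn.functional as F

def pairwise_distances(features):
    """features: (N, D) embeddings; returns the (N, N) Euclidean distance matrix,
    normalized by its mean so teacher and student scales are comparable."""
    dist = torch.cdist(features, features, p=2)
    mean = dist[dist > 0].mean()
    return dist / (mean + 1e-8)

def rkd_distance_loss(student_feat, teacher_feat):
    """Match relative pairwise distances between samples in the two models."""
    with torch.no_grad():
        t_d = pairwise_distances(teacher_feat)
    s_d = pairwise_distances(student_feat)
    return F.smooth_l1_loss(s_d, t_d)
```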

Compressed Ensemble of Deep Convolutional Neural Networks with Global and Local Facial Features for Improved Face Recognition (얼굴인식 성능 향상을 위한 얼굴 전역 및 지역 특징 기반 앙상블 압축 심층합성곱신경망 모델 제안)

  • Yoon, Kyung Shin;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.1019-1029 / 2020
  • In this paper, we propose a novel knowledge distillation algorithm to create a compressed deep ensemble network that combines local and global features of face images. In order to transfer the high recognition performance of the ensemble of deep networks to a single deep network, the class-prediction probability, i.e., the softmax output of the ensemble network, is used as a soft target for training the single deep network. By applying the knowledge distillation algorithm, the local feature information obtained by training the deep ensemble network on facial subregions is transferred to a single deep network, creating a so-called compressed ensemble DCNN. The experimental results demonstrate that the proposed compressed ensemble deep network maintains the recognition performance of the complex ensemble deep networks and is superior to a single deep network. In addition, the proposed method significantly reduces the storage (memory) space and execution time compared to conventional ensemble deep networks developed for face recognition.
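
A minimal sketch of the soft-target mechanism described above: the ensemble's averaged, temperature-softened softmax output serves as the target distribution for the single network. The member-logit interface, temperature, and loss form are assumptions.

```python
# Minimal sketch of using the ensemble's averaged softmax as a soft target
# for a single network; member models and temperature are assumed.
import torch
import torch.nn.functional as F

def ensemble_soft_target(member_logits, T=4.0):
    """member_logits: list of (N, C) logits from the ensemble members.
    Returns the averaged softened class-probability distribution."""
    probs = [F.softmax(l / T, dim=1) for l in member_logits]
    return torch.stack(probs, dim=0).mean(dim=0)

def soft_target_loss(student_logits, soft_target, T=4.0):
    """KL divergence between the student's softened prediction and the ensemble target."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    soft_target, reduction="batchmean") * (T * T)
```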

Face Super-Resolution using Adversarial Distillation of Multi-Scale Facial Region Dictionary (다중 스케일 얼굴 영역 딕셔너리의 적대적 증류를 이용한 얼굴 초해상화)

  • Jo, Byungho;Park, In Kyu;Hong, Sungeun
    • Journal of Broadcast Engineering / v.26 no.5 / pp.608-620 / 2021
  • Recent deep learning-based face super-resolution (FSR) works have shown significant performance by utilizing facial prior knowledge such as facial landmarks and dictionaries that reflect structural or semantic characteristics of the human face. However, most of these methods require additional processing time and memory. To solve this issue, this paper proposes an efficient FSR model using knowledge distillation techniques. The intermediate features of the teacher network, which contain dictionary information based on major face regions, are transferred to the student through adversarial multi-scale feature distillation. Experimental results show that the proposed model is superior to other SR methods and demonstrate its effectiveness compared to the teacher model.
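
Adversarial feature distillation can be sketched as a small discriminator that tries to distinguish teacher features from student features, while the student learns both to fool it and to match the teacher features directly. The discriminator architecture and loss terms below are illustrative assumptions, not the paper's exact multi-scale design.

```python
# Minimal sketch of adversarial feature distillation at a single scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Tiny conv discriminator over intermediate feature maps (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, 3, stride=2, padding=1),
        )

    def forward(self, feat):
        return self.net(feat)  # per-patch real/fake scores

def distill_step(student_feat, teacher_feat, disc):
    real = disc(teacher_feat.detach())
    fake = disc(student_feat.detach())
    # Discriminator: classify teacher features as real, student features as fake.
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    # Student: fool the discriminator and also match the teacher features directly.
    fooled = disc(student_feat)
    g_loss = (F.binary_cross_entropy_with_logits(fooled, torch.ones_like(fooled))
              + F.l1_loss(student_feat, teacher_feat.detach()))
    return d_loss, g_loss
```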

Optimizing SR-GAN for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation

  • Sajid Hussain;Jung-Hun Shin;Kum-Won Cho
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.479-481 / 2023
  • Generative Adversarial Networks (GANs) have facilitated substantial improvement in single-image super-resolution (SR) by enabling the generation of photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly their generators) lead to reduced performance and increased energy consumption, making it difficult to deploy them on resource-constrained devices. In this study, we propose an efficient, compressed architecture for the SR-GAN generator using the model compression technique of knowledge distillation. Our approach transfers knowledge from a heavy network to a lightweight one, which reduces the storage requirement of the model by 58% while also improving its performance. Experimental results on various benchmarks indicate that our compressed model improves PSNR, SSIM, and image quality for x4 super-resolution tasks.
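
A minimal sketch of output-level distillation for a compact SR generator, in the spirit of the approach described above: the student generator is trained both to reconstruct the ground-truth high-resolution image and to imitate the heavy teacher generator's output. The `teacher_gen`/`student_gen` names, the L1 losses, and the weighting are assumptions.

```python
# Minimal sketch of output-level distillation for a lightweight SR generator.
import torch
import torch.nn.functional as F

def sr_distillation_loss(student_sr, teacher_sr, hr_target, alpha=0.5):
    """Blend reconstruction against the ground-truth HR image with imitation
    of the large teacher generator's output."""
    recon = F.l1_loss(student_sr, hr_target)
    imitate = F.l1_loss(student_sr, teacher_sr)
    return alpha * recon + (1 - alpha) * imitate

# Usage (shapes only), with hypothetical generators:
# with torch.no_grad():
#     teacher_sr = teacher_gen(lr_images)
# loss = sr_distillation_loss(student_gen(lr_images), teacher_sr, hr_images)
```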

A Comparative Study of Knowledge Distillation Methods in Lightening a Super-Resolution Model (초해상화 모델 경량화를 위한 지식 증류 방법의 비교 연구)

  • Yeojin Lee;Hanhoon Park
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.21-26 / 2023
  • Knowledge distillation (KD) is a model lightening technique that transfers the knowledge of deep models to lighter models. Most KD methods have been developed for classification models, and there have been few KD studies in the field of super-resolution (SR). In this paper, various KD methods are applied to an SR model and their performance is compared. Specifically, we modified the loss function to apply each KD method to the SR model and conducted experiments in which a student model about 27 times lighter than the teacher model was trained to double the image resolution. Through the experiments, we confirmed that some KD methods were not effective when applied to SR models, and that performance was highest when relational KD and traditional KD were combined.
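
Since the study finds the combination of traditional and relational KD most effective for SR, the sketch below shows one plausible way to combine a supervised reconstruction term, an output-imitation (traditional KD) term, and a relational term over per-image feature vectors; the weights and feature choices are assumptions, not the paper's exact loss.

```python
# Minimal sketch of combining traditional (output) KD with relational KD for SR.
import torch
import torch.nn.functional as F

def _norm_pdist(feat):
    """Mean-normalized pairwise Euclidean distances over (N, D) features."""
    d = torch.cdist(feat, feat)
    return d / (d[d > 0].mean() + 1e-8)

def combined_sr_kd_loss(student_sr, teacher_sr, student_feat, teacher_feat,
                        hr_target, w_trad=1.0, w_rel=1.0):
    recon = F.l1_loss(student_sr, hr_target)           # supervised SR reconstruction
    trad = F.l1_loss(student_sr, teacher_sr)           # traditional (output) KD
    rel = F.smooth_l1_loss(_norm_pdist(student_feat),  # relational KD over
                           _norm_pdist(teacher_feat))  # pairwise sample distances
    return recon + w_trad * trad + w_rel * rel
```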

State-of-the-Art Knowledge Distillation for Recommender Systems in Explicit Feedback Settings: Methods and Evaluation (익스플리싯 피드백 환경에서 추천 시스템을 위한 최신 지식증류기법들에 대한 성능 및 정확도 평가)

  • Hong-Kyun Bae;Jiyeon Kim;Sang-Wook Kim
    • Smart Media Journal / v.12 no.9 / pp.89-94 / 2023
  • Recommender systems provide users with the most favorable items by analyzing users' explicit or implicit feedback on items. Recently, as the size of deep-learning-based models employed in recommender systems has increased, many studies have focused on reducing inference time while maintaining high recommendation accuracy. Among them, research on recommender systems using knowledge distillation (KD) is being actively conducted. In KD, a small model (student) is trained with knowledge extracted from a large model (teacher), and the trained student is then used as the recommendation model. Existing studies on KD for recommender systems have mainly been performed for implicit feedback settings. Thus, in this paper, we investigate the performance and accuracy of these techniques when applied to explicit feedback settings. To this end, we leveraged a total of five state-of-the-art KD methods and three real-world datasets for recommender systems.
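
For explicit feedback, a response-level KD setup can be sketched with a small matrix-factorization student that fits both the observed ratings and the teacher's rating estimates. The `MF` model and the MSE-based blending below are illustrative assumptions and do not correspond to any of the five methods evaluated in the paper.

```python
# Minimal sketch of response-level KD for explicit-feedback recommendation,
# with small matrix-factorization models standing in for teacher and student.
import torch
import torch.nn as nn

class MF(nn.Module):
    def __init__(self, n_users, n_items, dim):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(dim=1)  # predicted rating

def kd_rating_loss(student, teacher, users, items, ratings, alpha=0.5):
    with torch.no_grad():
        soft_ratings = teacher(users, items)           # teacher's rating estimates
    pred = student(users, items)
    hard = nn.functional.mse_loss(pred, ratings)       # fit observed ratings
    soft = nn.functional.mse_loss(pred, soft_ratings)  # imitate the teacher
    return alpha * hard + (1 - alpha) * soft
```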