• Title/Summary/Keyword: distillation

Search results: 1,091

Satellite Building Segmentation using Deformable Convolution and Knowledge Distillation (변형 가능한 컨볼루션 네트워크와 지식증류 기반 위성 영상 빌딩 분할)

  • Choi, Keunhoon;Lee, Eungbean;Choi, Byungin;Lee, Tae-Young;Ahn, JongSik;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.25 no.7 / pp.895-902 / 2022
  • Building segmentation using satellite imagery such as EO (Electro-Optical) and SAR (Synthetic-Aperture Radar) images is widely used due to its many applications. EO images have the advantage of color information and are noise-free; in contrast, SAR images can capture physical characteristics and geometric information that EO images cannot. This paper proposes a learning framework for efficient building segmentation that consists of teacher-student privileged knowledge distillation and a deformable convolution block. The teacher network uses EO and SAR images simultaneously to produce richer features and provides them to the student network, while the student network uses only EO images. To this end, we present objective functions consisting of a Kullback-Leibler divergence loss and a knowledge distillation loss. Furthermore, we introduce deformable convolution to avoid pixel-level noise and to efficiently capture hard samples, such as small and thin buildings, at the global level. Experimental results show that our method outperforms other methods and efficiently captures complex samples such as small or narrow buildings. Moreover, our framework can be applied to various segmentation methods.
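
A minimal sketch of the teacher-student objective this abstract describes: a Kullback-Leibler soft-target term plus a supervised term. PyTorch is assumed, and the tensor names, temperature, and weighting `alpha` are illustrative choices, not the paper's exact formulation:

```python
# Hedged sketch of privileged knowledge distillation for segmentation:
# the student (EO-only) mimics the teacher (EO+SAR) via a KL term while
# still fitting the ground-truth masks. All hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """student_logits, teacher_logits: (N, C, H, W); labels: (N, H, W) long."""
    # Soft-target KL term: the student matches the richer EO+SAR teacher.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard-target supervised term on the EO-only student.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```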

Effect of membrane deformation on performance of vacuum assisted air gap membrane distillation (V-AGMD)

  • Kim, Yusik;Choi, Jihyeok;Choi, Yongjun;Lee, Sangho
    • Membrane and Water Treatment / v.13 no.1 / pp.51-62 / 2022
  • Vacuum-assisted air gap membrane distillation (V-AGMD) has the potential to achieve higher flux and productivity than conventional air gap membrane distillation (AGMD). Nevertheless, little information is available on the technical aspects of V-AGMD operation. Accordingly, this study analyzes the effect of membrane deformation on flux in V-AGMD operation. Experiments were carried out using a bench-scale V-AGMD system, and statistical models based on MLR, GNN, and MLFNN techniques were developed to describe the experimental data and interpret the flux behaviors. Results showed that the flux increased by up to 4 times with the application of vacuum in V-AGMD compared with conventional AGMD. The flux in both AGMD and V-AGMD is affected by the difference between the air gap pressure and the saturation pressure of water vapor, but the dependence differs between the two configurations. In V-AGMD, the membranes were found to deform under the vacuum pressure because they were not fully supported by the spacer; this deformation reduced the effective air gap width. Nevertheless, the rejection and LEP did not change even when deformation occurred. The flux behaviors in V-AGMD were successfully interpreted by the GNN and MLFNN models, and according to the model calculations, the relative impact of the membrane deformation ranges from 10.3% to 16.1%.
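
The driving-force relationship the abstract refers to (air gap pressure versus the saturation pressure of water vapor) can be illustrated with the Antoine equation. A small sketch, assuming the commonly tabulated Antoine coefficients for water (roughly valid for 1-100 °C, pressures in mmHg); the feed temperature and gap pressures below are hypothetical:

```python
# Illustration only: how lowering the air-gap pressure (vacuum) enlarges
# the vapor-pressure driving force in AGMD / V-AGMD.
def water_psat_mmhg(t_celsius):
    """Antoine equation for water; A, B, C are the standard tabulated values."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

def driving_force_mmhg(feed_temp_c, air_gap_pressure_mmhg):
    # Saturation pressure of water vapor at the feed temperature
    # minus the pressure maintained in the air gap.
    return water_psat_mmhg(feed_temp_c) - air_gap_pressure_mmhg

print(driving_force_mmhg(70.0, 150.0))  # conventional AGMD-like gap pressure
print(driving_force_mmhg(70.0, 50.0))   # vacuum-assisted gap: larger driving force
```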

Anchor Free Object Detection Continual Learning According to Knowledge Distillation Layer Changes (Knowledge Distillation 계층 변화에 따른 Anchor Free 물체 검출 Continual Learning)

  • Gang, Sumyung;Chung, Daewon;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.25 no.4 / pp.600-609 / 2022
  • In supervised learning, labeling of all data is essential; in particular, for object detection, every object that appears in a training image has to be labeled. Because of this burden, continual learning, which accumulates previously learned knowledge and minimizes catastrophic forgetting, has recently attracted attention. In this study, a continual learning model is proposed that accumulates previously learned knowledge and enables learning about new objects. The proposed method is applied to CenterNet, an anchor-free object detection model, and uses a knowledge distillation algorithm to enable continual learning. In particular, we assume that all output layers of the model have to be distilled for the approach to be most effective. Compared to LWF, the proposed method improves mAP by 23.3%p in the 19+1 scenario and by 28.8%p in the 15+5 scenario.
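
A sketch of the "distill every output layer" idea for an anchor-free detector such as CenterNet, as described above. PyTorch is assumed; the head names and the MSE choice are illustrative, not the paper's exact losses:

```python
# Continual-learning distillation across all CenterNet-style output heads:
# the current model is regularized toward the frozen previous-task model's
# responses so old-class knowledge is preserved (less catastrophic forgetting).
import torch
import torch.nn.functional as F

def continual_kd_loss(old_outputs, new_outputs):
    """old_outputs / new_outputs: dicts of head tensors produced by the
    frozen previous-task model and the current model on the same images."""
    loss = 0.0
    for head in ("heatmap", "width_height", "offset"):  # all output layers
        loss = loss + F.mse_loss(new_outputs[head], old_outputs[head])
    return loss
# Total objective = detection loss on the new classes + this distillation term.
```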

Ensemble Knowledge Distillation for Classification of 14 Thorax Diseases using Chest X-ray Images (흉부 X-선 영상을 이용한 14 가지 흉부 질환 분류를 위한 Ensemble Knowledge Distillation)

  • Ho, Thi Kieu Khanh;Jeon, Younghoon;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.313-315 / 2021
  • Timely and accurate diagnosis of lung diseases using chest X-ray images has gained much attention from the computer vision and medical imaging communities. Although previous studies have demonstrated the capability of deep convolutional neural networks by achieving competitive binary classification results, their models appeared unreliable at distinguishing multiple disease groups across a large number of X-ray images. In this paper, we build an approach called Ensemble Knowledge Distillation (EKD) to boost classification accuracy over traditional KD methods by distilling knowledge from a cumbersome teacher model into an ensemble of lightweight student models with parallel branches trained on ground-truth labels. Learning features at different branches of the student models enables the network to learn diverse patterns and improve the quality of the final predictions through an ensemble learning solution. Our experiments on the well-established ChestX-ray14 dataset showed the classification improvements of traditional KD over a baseline transfer learning approach, and EKD is expected to further enhance classification accuracy and model generalization, especially given the imbalance of the dataset and the interdependency of the 14 weakly annotated thorax diseases.
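
A minimal sketch of the EKD objective the abstract outlines: a cumbersome teacher distilled into several lightweight student branches whose predictions are averaged. PyTorch is assumed; the soft-target BCE form (ChestX-ray14 is multi-label), temperature, and weights are illustrative assumptions:

```python
# Hedged EKD sketch: each student branch fits the ground-truth labels and
# the teacher's softened multi-label predictions; inference averages branches.
import torch
import torch.nn.functional as F

def ekd_loss(branch_logits, teacher_logits, labels, T=3.0, alpha=0.7):
    """branch_logits: list of (N, 14) tensors; labels: float multi-hot (N, 14)."""
    soft_targets = torch.sigmoid(teacher_logits / T)  # softened teacher predictions
    total = 0.0
    for logits in branch_logits:
        kd = F.binary_cross_entropy_with_logits(logits / T, soft_targets)
        ce = F.binary_cross_entropy_with_logits(logits, labels)
        total = total + alpha * kd + (1 - alpha) * ce
    return total / len(branch_logits)

def ensemble_predict(branch_logits):
    # Ensemble solution: average the per-branch disease probabilities.
    return torch.stack([torch.sigmoid(l) for l in branch_logits]).mean(dim=0)
```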


Separation of Electronic Grade Highly Pure Carbon Dioxide Using Combined Process of Membrane, LNG Cold Heat Assisted Cryogenic Distillation (분리막 공정과 LNG 냉열 및 심냉 증류를 이용한 전자급 고순도 이산화탄소의 분리)

  • Ko, Youngsoo;Jang, Kyungryong;Kim, Junghoon;Jo, Youngjoo;Cho, Jungho
    • Journal of Hydrogen and New Energy / v.35 no.1 / pp.90-96 / 2024
  • In this paper, a new technology for obtaining electronic-grade, highly pure carbon dioxide using membrane separation and liquefied natural gas (LNG) cold heat assisted cryogenic distillation is proposed. PRO/II with PROVISION release 2023.1 from AVEVA was used to model the membrane and cryogenic distillation processes, with the Peng-Robinson equation of state combined with Twu's alpha function selected to predict pure-component vapor pressure as a function of temperature more accurately. The advantage of using membrane separation instead of an absorber-stripper configuration for the concentration of carbon dioxide is the reduction of the carbon dioxide capture cost.
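
The modeling choice highlighted above, Twu's alpha function in place of the standard Peng-Robinson alpha, can be sketched as follows. The functional forms are standard; the Twu L, M, N values below are placeholders (in practice they are fitted per component to vapor-pressure data):

```python
# Comparing the standard Peng-Robinson alpha with Twu's alpha function,
# which the study selected for more accurate pure-component vapor pressures.
import math

def pr_alpha(Tr, omega):
    """Standard PR alpha: [1 + kappa*(1 - sqrt(Tr))]^2."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    return (1.0 + kappa * (1.0 - math.sqrt(Tr))) ** 2

def twu_alpha(Tr, L, M, N):
    """Twu alpha: Tr^(N*(M-1)) * exp(L * (1 - Tr^(N*M)))."""
    return Tr ** (N * (M - 1.0)) * math.exp(L * (1.0 - Tr ** (N * M)))

# Example: CO2 (acentric factor ~0.224) at reduced temperature 0.9;
# the Twu parameters here are hypothetical, not fitted values.
print(pr_alpha(0.9, 0.224))
print(twu_alpha(0.9, L=0.5, M=0.9, N=1.5))
```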

A Survey on Privacy Vulnerabilities through Logit Inversion in Distillation-based Federated Learning (증류 기반 연합 학습에서 로짓 역전을 통한 개인 정보 취약성에 관한 연구)

  • Subin Yun;Yungi Cho;Yunheung Paek
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.711-714 / 2024
  • In the dynamic landscape of modern machine learning, Federated Learning (FL) has emerged as a compelling paradigm designed to enhance privacy by enabling participants to collaboratively train models without sharing their private data. Specifically, Distillation-based Federated Learning, such as Federated Learning with Model Distillation (FedMD), Federated Gradient Encryption and Model Sharing (FedGEMS), and Differentially Secure Federated Learning (DS-FL), has arisen as a novel approach aimed at addressing non-IID data challenges. These methods refine the standard FL framework by distilling insights from public-dataset predictions, securing data transmissions through gradient encryption, and applying differential privacy to mask individual contributions. Despite these innovations, our survey identifies persistent vulnerabilities, particularly the susceptibility to logit inversion attacks, in which malicious actors reconstruct private data from shared public predictions. This exploration reveals that even advanced Distillation-based Federated Learning systems harbor significant privacy risks, challenging prevailing assumptions about their security and underscoring the need for continued advances in secure Federated Learning methodologies.
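
A sketch of the logit-inversion threat the survey examines: an attacker who observes the logits a participant shares on public samples optimizes a synthetic input to reproduce them, recovering information about private training data. PyTorch is assumed, and the model, step count, and learning rate are illustrative:

```python
# Gradient-based logit inversion: reconstruct an input whose logits match
# the publicly shared ones. A toy sketch, not a production attack.
import torch
import torch.nn.functional as F

def invert_logits(model, target_logits, input_shape, steps=500, lr=0.1):
    x = torch.randn(1, *input_shape, requires_grad=True)  # random starting input
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(model(x), target_logits)  # match the shared logits
        loss.backward()
        opt.step()
    return x.detach()  # candidate reconstruction of private-looking data
```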

An Experimental Study on Topic Distillation Using Web Site Structure (웹 사이트 구조를 이용한 토픽 검색 연구)

  • Lee, Jee-Suk;Chung, Yung-Mee
    • Journal of the Korean Society for Information Management / v.24 no.3 / pp.201-218 / 2007
  • This study proposes a topic distillation algorithm that ranks the relevant sites selected from retrieved web pages and evaluates the performance of the algorithm. The algorithm calculates the topic score of a site using its hierarchical structure. The TREC .GOV test collection and a set of TREC-2004 queries for the topic distillation task were used for the experiment. The experimental results showed that the algorithm returned at least 2 relevant sites in the top ten retrieval results. We performed an in-depth analysis of the relevant-sites list provided by TREC-2004 and found that the definition of topic distillation was not strictly applied in selecting relevant sites. When we re-evaluated the retrieved sites/sub-sites using the revised list of relevant sites, the performance of the proposed algorithm improved significantly.
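
The paper's exact scoring function is not given in the abstract; one plausible reading of "topic score of a site using its hierarchical structure" is to aggregate the retrieval scores of a site's pages while discounting pages deeper in the site tree. A purely illustrative sketch:

```python
# Hypothetical hierarchical topic score: deeper pages contribute less.
from urllib.parse import urlparse

def topic_score(pages, decay=0.5):
    """pages: list of (url, retrieval_score) tuples belonging to one site."""
    score = 0.0
    for url, s in pages:
        depth = urlparse(url).path.strip("/").count("/")  # 0 at the site root
        score += s * (decay ** depth)
    return score

pages = [("https://example.gov/", 0.9),
         ("https://example.gov/topic/report.html", 0.7)]
print(topic_score(pages))  # sites are then ranked by this score
```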

Performance Analysis of Hint-KD Training Approach for the Teacher-Student Framework Using Deep Residual Networks (딥 residual network를 이용한 선생-학생 프레임워크에서 힌트-KD 학습 성능 분석)

  • Bae, Ji-Hoon;Yim, Junho;Yu, Jaehak;Kim, Kwihoon;Kim, Junmo
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.5 / pp.35-41 / 2017
  • In this paper, we analyze the performance of the recently introduced hint-based knowledge distillation (Hint-KD) training approach, which builds on the teacher-student framework for knowledge distillation and knowledge transfer. As the deep neural network (DNN) considered in this paper, the deep residual network (ResNet), one of the most recent DNN architectures at the time, is used for the teacher-student framework. When implementing Hint-KD training, we investigate the impact of the weight of the KD information, based on the softening factor, on classification accuracy using the widely used open-source deep learning framework Caffe. As a result, the recognition accuracy of the student model improves when the weight of the KD information is kept fixed rather than gradually decreased during training.
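
A sketch of the two Hint-KD ingredients discussed above: a hint loss matching an intermediate student feature map to the teacher's (via a small regressor, since the widths differ), and the KD-weight schedule the paper compares. PyTorch is assumed and all shapes are illustrative; the paper's own experiments used Caffe:

```python
# Hint loss (FitNets-style) plus a fixed-vs-decayed KD weight schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F

regressor = nn.Conv2d(64, 256, kernel_size=1)  # map student width to teacher width

def hint_loss(student_feat, teacher_feat):
    """student_feat: (N, 64, H, W); teacher_feat: (N, 256, H, W)."""
    return F.mse_loss(regressor(student_feat), teacher_feat)

def kd_weight(epoch, fixed=True, init=1.0, decay=0.95):
    # The reported finding: keeping the KD weight fixed during training
    # worked better than gradually decreasing it.
    return init if fixed else init * decay ** epoch
```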

Flavor Components of Poncirus trifoliata (탱자(Poncirus trifoliata)의 향기성분 분석에 관한 연구)

  • Oh, Chang-Hwan;Kim, Jung-Han;Kim, Kyoung-Rae;Ahn, Hey-Joon
    • Korean Journal of Food Science and Technology / v.21 no.6 / pp.749-754 / 1989
  • The essential oil was prepared by a gas co-distillation method from the flavedo of Poncirus trifoliata and was analyzed by GC/retention index (RI) and GC/MS. The essential oil prepared by gas co-distillation gave the whole fragrance of Poncirus trifoliata. The identification of the flavor components was performed by multi-dimensional analysis using GC/RI and GC/MS, which were complementary to each other. In applying GC/RI for identification, using two columns of different polarities was more effective. Thirty volatile flavor constituents were identified in Poncirus trifoliata. Limonene, myrcene, β-caryophyllene, trans-β-ocimene, β-pinene, 3-thujene, and 7-geranyloxycoumarin were the major constituents, and cis-3-hexenyl acetate, n-hexyl acetate, 2-methyl acetophenone, elixene, and elemicine had not been reported earlier as citrus components.
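
The GC/RI identification step rests on the retention index: a compound's retention time is located between those of the n-alkanes eluting before and after it. A small sketch of the linear (temperature-programmed) form of the index; the retention times below are hypothetical:

```python
def retention_index(t_unknown, t_n, t_n1, n):
    """Linear retention index: n is the carbon number of the alkane eluting
    just before the unknown; t_n and t_n1 are its retention time and the
    next alkane's."""
    return 100.0 * (n + (t_unknown - t_n) / (t_n1 - t_n))

# Example: a compound eluting between C10 (n-decane) and C11 (n-undecane).
print(retention_index(t_unknown=10.8, t_n=10.2, t_n1=11.4, n=10))  # 1050.0
```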


Development of Machine Learning Model for Predicting Distillation Column Temperature (증류공정 내부 온도 예측을 위한 머신 러닝 모델 개발)

  • Kwon, Hyukwon;Oh, Kwang Cheol;Chung, Yongchul G.;Cho, Hyungtae;Kim, Junghwan
    • Applied Chemistry for Engineering / v.31 no.5 / pp.520-525 / 2020
  • In this study, we developed a machine learning-based model for predicting the production-stage temperature of a distillation process. An accurate temperature prediction is necessary because the distillation process is controlled through the production-stage temperature. The temperature in a distillation process has a complex nonlinear relationship with other variables and is time-series data, so we used recurrent neural network algorithms to predict it. In the model development process, we adjusted three recurrent neural network-based algorithms and the batch size, and selected the most appropriate model for predicting the production-stage temperature: LSTM128. The prediction performance of the selected model against the actual temperature is an RMSE of 0.0791 and an R² of 0.924.
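
A minimal sketch of the kind of recurrent model the study compares: an LSTM regressor over windows of process time-series data that predicts the production-stage temperature. PyTorch is assumed; the layer sizes and feature count are illustrative (the selected model is referred to as LSTM128, suggesting a hidden size of 128):

```python
# Toy LSTM regressor for distillation-column stage temperature.
import torch
import torch.nn as nn

class TempLSTM(nn.Module):
    def __init__(self, n_features, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-step stage temperature

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # regress from the last time step

model = TempLSTM(n_features=8)            # 8 hypothetical process variables
y_hat = model(torch.randn(4, 30, 8))      # 4 windows of 30 time steps each
```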