• Title/Summary/Keyword: Lightweight convolutional neural network

A Study on Lightweight CNN-based Interpolation Method for Satellite Images

  • Kim, Hyun-ho; Seo, Doochun; Jung, JaeHeon; Kim, Yongwoo
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.167-177 / 2022
  • Producing satellite image products from the imagery transmitted to the ground station involves many image pre- and post-processing steps. During this processing, geometric correction is essential when converting level 1R images to level 1G images, and the interpolation method that geometric correction relies on determines the quality of the resulting level 1G images. It is equally important that the interpolation algorithm run fast within the level processor. In this paper, we propose a lightweight CNN-based interpolation method for the geometric correction performed during level 1R to level 1G conversion. The proposed method doubles the resolution of satellite images using a lightweight deep convolutional neural network designed for fast processing. In addition, we propose a feature map fusion method that improves the image quality of the multispectral (MS) bands using panchromatic (PAN) band information. Compared to existing deep learning-based interpolation methods, the proposed method improves the quantitative peak signal-to-noise ratio (PSNR) by about 0.4 dB for the PAN image and about 4.9 dB for the MS image. It also reduces the time required to produce an output at twice the resolution of a 36,500×36,500 PAN input by a factor of about 1.6 compared to the existing deep learning-based interpolation method.
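
Below is a minimal sketch of the two ideas described in this abstract: 2x upsampling with a lightweight CNN and fusing PAN feature maps into the MS branch. It is an illustration under assumptions, not the paper's architecture; the `LightweightFusionSR` name, layer counts, channel widths, and the PixelShuffle upsampling are all hypothetical, and the PSNR helper only mirrors the metric the paper reports.

```python
# Hypothetical sketch, not the architecture from the paper.
import torch
import torch.nn as nn

class LightweightFusionSR(nn.Module):
    def __init__(self, ms_bands: int = 4, feats: int = 32):
        super().__init__()
        # Shallow feature extractors for each input branch.
        self.pan_head = nn.Sequential(
            nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(inplace=True))
        self.ms_head = nn.Sequential(
            nn.Conv2d(ms_bands, feats, 3, padding=1), nn.ReLU(inplace=True))
        # Fusion: concatenate PAN and MS feature maps, then mix them.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feats, feats, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True))
        # PixelShuffle performs the 2x upsampling cheaply.
        self.up = nn.Sequential(
            nn.Conv2d(feats, ms_bands * 4, 3, padding=1),
            nn.PixelShuffle(2))

    def forward(self, ms: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.ms_head(ms), self.pan_head(pan)], dim=1)
        return self.up(self.fuse(f))

def psnr(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # PSNR in dB, the quality metric quoted in the abstract.
    mse = torch.mean((x - y) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

# Example: a 4-band MS patch with a co-registered PAN patch -> 2x MS output.
ms = torch.randn(1, 4, 64, 64)
pan = torch.randn(1, 1, 64, 64)
print(LightweightFusionSR()(ms, pan).shape)  # torch.Size([1, 4, 128, 128])
```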

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms

  • Minha Lee; Seongjae Lee; Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and parameters than CNN models and are therefore not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods and lightweight architectures for vision transformers; however, only a few studies have evaluated how such compression techniques affect performance. To address this gap, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model was evaluated in terms of accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we additionally evaluated each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed performance improvement on EdgeTPU. The profiling results indicate that the degree of performance improvement of each vision transformer model depends strongly on the proportion of its operations that can be optimized on the target edge device. In summary, to apply vision transformers in on-device AI solutions, both proper operation composition and optimizations specific to the target edge device must be considered.
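
The evaluation above centers on quantizing vision transformers and measuring their latency on edge hardware. The sketch below illustrates that workflow using PyTorch's dynamic quantization and torchvision's ViT-B/16 as a stand-in for the paper's models (DeiT, LeViT, MobileViT); the timing loop, run count, and model choice are assumptions for illustration, not the paper's protocol.

```python
# Illustrative sketch: post-training dynamic quantization of a vision
# transformer and a simple CPU latency comparison. Not the paper's setup.
import time
import torch
from torchvision.models import vit_b_16

model = vit_b_16(weights=None).eval()

# Dynamic quantization converts nn.Linear weights to int8 and quantizes
# activations on the fly; in a transformer this targets the attention
# projections and MLP layers that dominate CPU inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

def latency_ms(m: torch.nn.Module, runs: int = 10) -> float:
    # Average single-image inference time in milliseconds.
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        m(x)  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs * 1e3

print(f"fp32: {latency_ms(model):.1f} ms")
print(f"int8: {latency_ms(quantized):.1f} ms")
```

How much the int8 model actually helps depends on the share of runtime spent in the quantized layers, which echoes the paper's finding that gains track the proportion of operations the target device can optimize.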