• Title/Summary/Keyword: VGG Net


Classification of Whole Body Bone Scan Image with Bone Metastasis using CNN-based Transfer Learning (CNN 기반 전이학습을 이용한 뼈 전이가 존재하는 뼈 스캔 영상 분류)

  • Yim, Ji Yeong;Do, Thanh Cong;Kim, Soo Hyung;Lee, Guee Sang;Lee, Min Hee;Min, Jung Joon;Bom, Hee Seung;Kim, Hyeon Sik;Kang, Sae Ryung;Yang, Hyung Jeong
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1224-1232 / 2022
  • Whole body bone scan is the most frequently performed nuclear medicine imaging study for evaluating bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer learning classifier for bone scan images in which metastatic bone lesions were present. A total of 1,000 bone scans from 1,000 cancer patients (500 with bone metastasis, 500 without) were evaluated. Bone scans were labeled as abnormal or normal for bone metastasis using medical reports and image review. Subsequently, gradient-weighted class activation maps (Grad-CAMs) were generated for explainable AI. The proposed model achieved an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualization showed that the proposed model focuses on hot uptakes, which indicate active bone lesions, when classifying whole body bone scan images with bone metastases.
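
A minimal sketch of a VGG16-based transfer-learning classifier of the kind described above, assuming PyTorch/torchvision; the frozen layers, head size, and input preprocessing are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_vgg16_transfer_classifier(num_classes: int = 2) -> nn.Module:
    # Load VGG16 pre-trained on ImageNet and reuse its convolutional features.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False          # freeze the convolutional backbone
    # Replace the final fully connected layer with a task-specific head
    # (normal vs. bone metastasis).
    in_features = model.classifier[6].in_features
    model.classifier[6] = nn.Linear(in_features, num_classes)
    return model

model = build_vgg16_transfer_classifier()
logits = model(torch.randn(1, 3, 224, 224))   # dummy forward pass on a 224x224 input
```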

Knowledge Distillation based-on Internal/External Correlation Learning

  • Hun-Beom Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.31-39 / 2023
  • In this paper, we propose Internal/External Knowledge Distillation (IEKD), which utilizes both external correlations between feature maps of heterogeneous models and internal correlations between feature maps of the same model for transferring knowledge from a teacher model to a student model. To achieve this, we transform feature maps into a sequence format and extract new feature maps suitable for knowledge distillation by considering internal and external correlations through a transformer. We learn both internal and external correlations by distilling the extracted feature maps, and we improve the accuracy of the student model by utilizing the extracted feature maps with feature matching. To demonstrate the effectiveness of the proposed knowledge distillation method, we achieved 76.23% Top-1 image classification accuracy on the CIFAR-100 dataset with the "ResNet-32×4 / VGG-8" teacher-student combination, outperforming state-of-the-art KD methods.
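
The abstract builds on two standard ingredients, logit distillation and feature matching. Below is a minimal PyTorch sketch of those generic pieces; the transformer that models internal/external correlations in IEKD is not reproduced here, so this is ordinary KD, not the authors' method.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soft-target term: student mimics the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def feature_matching_loss(student_feat, teacher_feat, proj):
    # proj: a small learned layer (e.g. 1x1 conv) aligning student channels
    # to the teacher's channel dimension before the L2 comparison.
    return F.mse_loss(proj(student_feat), teacher_feat)
```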

Deep learning-based de-fogging method using fog features to solve the domain shift problem (Domain Shift 문제를 해결하기 위해 안개 특징을 이용한 딥러닝 기반 안개 제거 방법)

  • Sim, Hwi Bo;Kang, Bong Soon
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1319-1325 / 2021
  • Because images taken in foggy adverse weather suffer from poor image quality due to scattering and absorption of light, which degrades the performance of various vision-based applications, it is important to remove fog during preprocessing for accurate object recognition and detection. This paper proposes an end-to-end deep learning-based single-image de-fogging method using a U-Net architecture. The loss function used in the algorithm is based on the Mahalanobis distance with fog features, which solves the domain shift problem, and the method demonstrates superior performance in qualitative and quantitative comparisons with conventional methods. We also design the network to generate fog through a VGG19 loss function and use the generated images as the next training dataset.
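
The VGG19 loss mentioned above is typically a feature (perceptual) loss computed with a fixed, ImageNet-pre-trained VGG19. A minimal PyTorch sketch follows; the layer cut-off (features up to index 18) is an assumption, and the paper's Mahalanobis-distance fog-feature loss is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class VGG19FeatureLoss(nn.Module):
    def __init__(self, layer_index: int = 18):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        # Keep only the early convolutional blocks as a fixed feature extractor.
        self.extractor = nn.Sequential(*list(vgg.children())[:layer_index]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False      # the loss network itself is never trained

    def forward(self, prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Compare deep features of the generated image and the reference image.
        return F.mse_loss(self.extractor(prediction), self.extractor(target))
```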

Analysis of Cultural Context of Image Search with Deep Transfer Learning (심층 전이 학습을 이용한 이미지 검색의 문화적 특성 분석)

  • Kim, Hyeon-sik;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.5 / pp.674-677 / 2020
  • The cultural background of users of image search engines has a significant impact on their satisfaction with the search results. Therefore, it is important to analyze and understand the cultural context of images for more accurate image search. In this paper, we investigate how the cultural context of images can affect the performance of image classification. To this end, we first collected various types of images (e.g., food, temples, etc.) with various cultural contexts (e.g., Korea, Japan, etc.) from web search engines. Afterwards, a deep transfer learning approach using VGG19 and MobileNetV2 pre-trained on ImageNet was adopted to learn the cultural features of the collected images. Through various experiments, we show that the performance of image classification can be affected differently according to the cultural context of images.
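
A minimal PyTorch sketch of the transfer-learning setup described above: an ImageNet-pre-trained VGG19 or MobileNetV2 backbone with a new classification head. Freezing the whole backbone and the number of cultural-context classes are assumptions, not the authors' reported setup.

```python
import torch.nn as nn
from torchvision import models

def build_backbone_classifier(name: str, num_classes: int) -> nn.Module:
    if name == "vgg19":
        net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
        frozen = net.features
    elif name == "mobilenet_v2":
        net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
        net.classifier[1] = nn.Linear(net.classifier[1].in_features, num_classes)
        frozen = net.features
    else:
        raise ValueError(f"unsupported backbone: {name}")
    for p in frozen.parameters():
        p.requires_grad = False   # keep ImageNet features fixed, train only the classifier
    return net
```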

Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, Hyeon-Cheol;Moon, Gi-Hwa;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.133-135 / 2020
  • Recently, CNNs (Convolutional Neural Networks) have shown excellent performance in various vision fields such as image classification and object recognition, but as the computation and memory requirements of CNN models grow very large, their application to low-power environments such as mobile or IoT (Internet of Things) devices is limited. Therefore, techniques that compress the network model while maintaining the task performance of the CNN are being studied. In this paper, we propose a technique that compresses CNN models by combining the matrix decomposition techniques of low-rank approximation and CP (Canonical Polyadic) decomposition. Unlike existing techniques that apply a single matrix decomposition method regardless of layer type, the proposed technique selectively applies the two decomposition methods according to the layer type of the CNN to improve compression performance. To verify the proposed technique, it was applied to the compression of the image classification CNN models VGG-16, ResNet50, and MobileNetV2, and it was confirmed that selectively applying the two decomposition methods according to the layer type improves classification performance at the same compression ratio of 1.5 to 12.1 times compared to applying only the low-rank approximation technique.
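
For the low-rank half of the method, a fully connected layer with an m×n weight matrix can be replaced by two thin layers of rank r obtained from a truncated SVD, reducing the parameter count from m·n to r(m+n). A minimal PyTorch sketch follows; the rank is a user-chosen assumption, and the CP decomposition used for convolutional layers is not shown here.

```python
import torch
import torch.nn as nn

def lowrank_factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    W = layer.weight.data                        # shape (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                 # (out, r), singular values folded in
    V_r = Vh[:rank, :]                           # (r, in)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)          # r*(m+n) parameters instead of m*n
```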


A Lightweight Deep Learning Model for Line-Art Colorization Using Two Stage Generator Model (이중 생성자를 사용한 저용량 선화 자동채색 모델)

  • Lee, Yeongseop;Lee, Seongjin
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.19-20 / 2020
  • With the development of the media industry, research on the automatic colorization of line-art images such as storyboards is being conducted both domestically and abroad. However, no research has yet focused on the size of automatic colorization models. Existing automatic colorization studies have the drawback of large model sizes of at least 567 MB. In this paper, using a two-stage generator structure that divides colorization into two steps and a generator that improves on the existing U-Net, we produce a 106 MB model that is 30% smaller than the existing U-Net and up to 85% smaller than methods using VGG16/19, and image evaluation with FID (Fréchet Inception Distance) yields a colorization performance of 153.69 at 512x512 px.
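
Model-size figures such as 567 MB versus 106 MB follow directly from the parameter count. A minimal sketch of that estimate, assuming 32-bit float weights; the two-stage generators themselves are not reproduced here.

```python
import torch.nn as nn

def model_size_mb(model: nn.Module, bytes_per_param: int = 4) -> float:
    # Count all parameters and convert to megabytes (float32 = 4 bytes per weight).
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * bytes_per_param / (1024 ** 2)
```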


Processing-in-Memory Architecture for Enhanced Convolutional Neural Network Performance (합성곱 신경망 성능 향상을 위한 메모리 내 연산 구조)

  • Kun-Mo Jeong;Ho-Yun Youm;Han-Jun Kim
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.61-64 / 2024
  • Recently, along with the growing demand for high-performance computing devices, hardware architectures that enable computation inside memory have been announced. This paper proposes a new processing-in-memory architecture that integrates compute units into conventional DRAM. In particular, this architecture, optimized for data-intensive convolutional neural network workloads, improves the speed and energy efficiency of CNN operations by adding branches to the existing structure while still using the conventional memory organization. Experimental results with various CNN models such as VGG19, AlexNet, and ResNet-50 confirm that the PINN architecture can achieve up to a 2.95x performance improvement over previous work. These results suggest that PINN technology can overcome the limits of storage and computation performance and meet the demands of advanced applications such as machine learning.
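
As a purely software-level illustration of the dataflow idea (not the proposed DRAM hardware), a convolution can be split into per-channel-group partial sums, as if each memory bank processed its local slice of the data, and the partial results then accumulated; the bank count here is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def banked_conv2d(x, weight, num_banks=4):
    # x: (N, C, H, W), weight: (Cout, C, kH, kW); split input channels across "banks".
    x_chunks = torch.chunk(x, num_banks, dim=1)
    w_chunks = torch.chunk(weight, num_banks, dim=1)
    # Each bank computes a partial convolution over its channel slice.
    partial_sums = [F.conv2d(xc, wc) for xc, wc in zip(x_chunks, w_chunks)]
    return torch.stack(partial_sums).sum(dim=0)   # accumulate across banks

x = torch.randn(1, 64, 32, 32)
w = torch.randn(128, 64, 3, 3)
assert torch.allclose(banked_conv2d(x, w), F.conv2d(x, w), atol=1e-4)
```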


Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol;Lee, Ho-Young;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.200-207 / 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in various fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models increase for most applications, it is hard to apply neural networks to IoT and mobile environments. Therefore, neural network compression algorithms that reduce the model size while keeping the performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. To evaluate the performance, the classification accuracy and inference time of the original CNN models and the compressed CNN models on images captured by a camera are measured on an embedded board equipped with the QCS605, a customized AI chip. The CNN models MobileNetV2, ResNet50, and VGG-16 are compressed by applying pruning and matrix decomposition. The experimental results show that the compressed models give not only a model size reduction of 1.3~11.2 times at a classification performance loss of less than 2% compared to the original models, but also an inference time reduction of 1.2~2.21 times and a memory reduction of 1.2~3.8 times on the embedded board.
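
A minimal sketch of the pruning step using PyTorch's built-in magnitude-pruning utilities; the 30% ratio and the choice of MobileNetV2 are illustrative assumptions, and the matrix-decomposition step evaluated in the paper is not shown.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero 30% of weights
        prune.remove(module, "weight")   # make the pruning permanent (bake in the mask)
```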

Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods (저계수 행렬 근사 및 CP 분해 기법을 이용한 CNN 압축)

  • Moon, HyeonCheol;Moon, Gihwa;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.26 no.2 / pp.125-131 / 2021
  • In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision tasks such as image classification, object detection, and visual quality enhancement. However, as a huge amount of computation and memory is required by CNN models, there is a limitation in applying CNNs to low-power environments such as mobile or IoT devices. Therefore, the need for neural network compression that reduces the model size while keeping the task performance as much as possible has been emerging. In this paper, we propose a method to compress CNN models by combining the matrix decomposition methods of LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional methods that apply a single matrix decomposition method to CNN models, we selectively apply the two decomposition methods depending on the layer types of the CNN to enhance the compression performance. To evaluate the proposed method, we use image classification models such as VGG-16, ResNet50, and MobileNetV2. The experimental results show that the proposed method gives improved classification performance at the same compression ratio, in the range of 1.5 to 12.1 times, compared to the existing method that applies only the LR approximation.
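
For the CP branch, a convolution kernel of shape (C_out, C_in, kH, kW) can be factorized into rank-R factors and replaced by four lightweight convolutions (Lebedev-style). A hedged sketch using tensorly follows; the rank, the random initialization, and the stride-1 assumption are illustrative choices, not the paper's exact procedure.

```python
import torch.nn as nn
import tensorly as tl
from tensorly.decomposition import parafac

tl.set_backend("pytorch")

def cp_decompose_conv(layer: nn.Conv2d, rank: int) -> nn.Sequential:
    # Rank-R CP factors of the 4-D kernel: A (Cout x R), B (Cin x R), C (kH x R), D (kW x R).
    weights, (A, B, C, D) = parafac(layer.weight.data, rank=rank, init="random")
    c_out, c_in, kh, kw = layer.weight.shape
    pointwise_in = nn.Conv2d(c_in, rank, kernel_size=1, bias=False)
    vertical = nn.Conv2d(rank, rank, kernel_size=(kh, 1), groups=rank,
                         padding=(layer.padding[0], 0), bias=False)
    horizontal = nn.Conv2d(rank, rank, kernel_size=(1, kw), groups=rank,
                           padding=(0, layer.padding[1]), bias=False)
    pointwise_out = nn.Conv2d(rank, c_out, kernel_size=1, bias=layer.bias is not None)
    # Load the CP factors into the four small convolutions.
    pointwise_in.weight.data = B.t().reshape(rank, c_in, 1, 1)
    vertical.weight.data = C.t().reshape(rank, 1, kh, 1)
    horizontal.weight.data = (D * weights).t().reshape(rank, 1, 1, kw)
    pointwise_out.weight.data = A.reshape(c_out, rank, 1, 1)
    if layer.bias is not None:
        pointwise_out.bias.data = layer.bias.data
    return nn.Sequential(pointwise_in, vertical, horizontal, pointwise_out)
```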

Food Detection by Fine-Tuning Pre-trained Convolutional Neural Network Using Noisy Labels

  • Alshomrani, Shroog;Aljoudi, Lina;Aljabri, Banan;Al-Shareef, Sarah
    • International Journal of Computer Science & Network Security / v.21 no.7 / pp.182-190 / 2021
  • Deep learning is an advanced technology for large-scale data analysis, with numerous promising applications such as image processing and object detection. It has become customary to use transfer learning and fine-tune a pre-trained CNN model for most image recognition tasks. People taking photos and tagging them provides a valuable source of data. However, these tags and labels might be noisy, as the people who annotate the images might not be experts. This paper aims to explore the impact of noisy labels on fine-tuning pre-trained CNN models. This effect is measured on a food recognition task using Food101 as a benchmark. Four pre-trained CNN models are included in this study: InceptionV3, VGG19, MobileNetV2, and DenseNet121. Symmetric label noise is added at different ratios. In all cases, models based on DenseNet121 outperformed the other models. When noisy labels were introduced to the data, the performance of all models degraded almost linearly with the amount of added noise.
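
A minimal sketch of symmetric label noise as studied above: with probability equal to the noise ratio, a label is replaced by a different class drawn uniformly at random. Food101 loading and the fine-tuning loop are omitted, and the random seed is an arbitrary assumption.

```python
import numpy as np

def add_symmetric_noise(labels: np.ndarray, num_classes: int,
                        noise_ratio: float, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_ratio   # which samples get a corrupted label
    for i in np.where(flip)[0]:
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)             # uniform over the other classes
    return noisy
```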