• Title/Summary/Keyword: Vision Transformer

Search Results: 62

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.9
    • /
    • pp.17-25
    • /
    • 2023
  • In this paper, we propose a model built on MobileViT that performs human pose estimation with fewer parameters and faster inference. The base model achieves its light weight through a structure that combines convolutional neural network features with Vision Transformer features. The Transformer, the core mechanism in this study, has grown increasingly influential as Transformer-based models outperform convolutional neural networks in computer vision. Likewise, in human pose estimation, the Vision Transformer-based ViTPose holds the best performance on all major benchmarks, including COCO, OCHuman, and MPII. However, because the Vision Transformer is a heavy architecture with many parameters and a relatively large computational load, training it is costly for users. The base model compensates for the Vision Transformer's lack of inductive bias, which would otherwise demand heavy computation, by learning local representations through a convolutional structure. Finally, the proposed model obtains a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, roughly 1/5 and 1/9 those of ViTPose, respectively.
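
To make the parameter comparison concrete, here is a minimal sketch, assuming the `timm` library, an illustrative `mobilevit_s` backbone, and a hypothetical 17-keypoint head (not the authors' actual configuration), of how a MobileViT pose model's parameter count could be tallied:

```python
# A minimal sketch (not the authors' code): load a MobileViT backbone via the
# timm library and count parameters, to illustrate how a figure like the 9.72M
# reported above could be checked. The model name and the 17-keypoint head are
# illustrative assumptions.
import timm
import torch

backbone = timm.create_model("mobilevit_s", pretrained=False, num_classes=0)

# Hypothetical keypoint head: one heatmap channel per COCO keypoint (17).
head = torch.nn.Conv2d(backbone.num_features, 17, kernel_size=1)

n_params = sum(p.numel() for p in backbone.parameters())
n_params += sum(p.numel() for p in head.parameters())
print(f"parameters: {n_params / 1e6:.2f}M")
```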

Design of Clustering CoaT Vision Model Based on Transformer (Transformer 기반의 Clustering CoaT 모델 설계)

  • Bang, Ji-Hyeon;Park, Jun;Jung, Se-Hoon;Sim, Chun-Bo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.05a
    • /
    • pp.546-548
    • /
    • 2022
  • Research adopting the Transformer in computer vision has recently been very active. Because these models use the Transformer architecture almost unchanged, they scale well and have shown excellent performance in large-scale training. However, vision models built on the Transformer lack inductive bias and therefore require large amounts of data and time to train. For this reason, many improved Vision Transformer models are currently being studied. In this paper, we likewise propose a Clustering CoaT model that addresses these weaknesses of the Vision Transformer.

Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer (Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망)

  • Kwang-Yeob Lee;Ji-Won Lee;Tae-Ryong Park
    • Journal of IKEEE
    • /
    • v.27 no.3
    • /
    • pp.313-318
    • /
    • 2023
  • This paper proposes a fine-tuned neural network to improve the performance of Vision Transformer-based video classification. The need for real-time, deep-learning-based video analysis has grown recently, and the CNN models traditionally used for image classification struggle to capture associations across consecutive frames. We compare and analyze the Vision Transformer and the Non-local neural network, both built on the attention mechanism, to identify the better-suited model. In addition, we propose an optimal fine-tuned network by applying several fine-tuning strategies as a form of transfer learning. In our experiments, the model was trained on the UCF101 dataset and then evaluated by transferring it to the UTA-RLDD dataset.
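
As one illustration of the fine-tuning strategies compared in this line of work, the following is a minimal sketch of the freeze-the-backbone variant of transfer learning, assuming a `timm` ViT and an illustrative 3-class target head (not the paper's exact recipe):

```python
# A minimal sketch of one fine-tuning variant: freeze a pretrained ViT
# backbone and retrain only a new classification head on the target dataset.
# The timm model name and the 3-class head are assumptions.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True)

# Freeze every backbone parameter...
for p in model.parameters():
    p.requires_grad = False

# ...then replace the head for the new label set and train only its weights.
model.head = torch.nn.Linear(model.head.in_features, 3)  # 3 illustrative classes
optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
```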

Performance Analysis of Human Facial Age Classification Method Based on Vision Transformer (Vision Transformer 기반 얼굴 연령 분류 기법의 성능 분석)

  • Junhwi Park;Namjung Kim;Changjoon Park;Jaehyun Lee;Jeonghwan Gwak
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.343-345
    • /
    • 2024
  • Facial age classification has broad applicability, including more capable identity-verification systems, automated foot-traffic statistics, and stronger management of age-restricted content. Given this breadth, high classification accuracy is essential for the stability of any system it is applied to. This paper therefore compares the facial age classification performance of Vision Transformer (ViT)-based classification algorithms. Three widely used ViT-family models were selected: ViT, Swin Transformer (ST), and Neighborhood Attention Transformer (NAT); ViT achieved a facial age classification accuracy of 65.19%.
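
For reference, a minimal sketch of instantiating two of the three compared model families via `timm`; the model names and the 8-class age grouping are assumptions, and NAT is omitted because it typically requires the separate `natten` package:

```python
# A minimal sketch of instantiating two of the three compared model families
# with timm; the model names and the 8 age classes are assumptions. NAT is
# omitted because it typically requires the separate natten package.
import timm

NUM_AGE_CLASSES = 8  # illustrative age-group count, not from the paper

models = {
    "ViT": timm.create_model("vit_base_patch16_224",
                             pretrained=True, num_classes=NUM_AGE_CLASSES),
    "Swin": timm.create_model("swin_base_patch4_window7_224",
                              pretrained=True, num_classes=NUM_AGE_CLASSES),
}
for name, m in models.items():
    n_params = sum(p.numel() for p in m.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```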


A Survey on Vision Transformers for Object Detection Task (객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구)

  • Ha, Jungmin;Lee, Hyunjong;Eom, Jungmin;Lee, Jaekoo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.6
    • /
    • pp.319-327
    • /
    • 2022
  • Transformers are among the most prominent deep learning models, having achieved great success in natural language processing and shown strong performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for the object detection task, and conduct comprehensive comparative experiments to understand the characteristics of each model. We evaluate the models in three groups: standard transformers, transformers with keypoint attention, and transformers with added coordinate attention, comparing them in terms of object detection accuracy and real-time performance. For the comparison we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, across a range of experiments we identify the trends and trade-offs between detection accuracy and real-time performance among the transformer models.
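
As a concrete illustration of the FPS metric used in the survey, here is a minimal sketch that times inference over a batch of images; the torchvision detector is a stand-in, not one of the surveyed transformer models:

```python
# A minimal sketch of measuring frames per second (FPS), one of the survey's
# two metrics. The torchvision detector is a stand-in, not one of the surveyed
# transformer models.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
images = [torch.rand(3, 480, 640) for _ in range(10)]  # dummy frames

with torch.no_grad():
    start = time.perf_counter()
    for img in images:
        model([img])                      # one image per forward pass
    elapsed = time.perf_counter() - start

print(f"FPS: {len(images) / elapsed:.1f}")
```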

From Masked Reconstructions to Disease Diagnostics: A Vision Transformer Approach for Fundus Images (마스크된 복원에서 질병 진단까지: 안저 영상을 위한 비전 트랜스포머 접근법)

  • Toan Duc Nguyen;Gyurin Byun;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.557-560
    • /
    • 2023
  • In this paper, we introduce a pre-training method that leverages the Vision Transformer (ViT) for disease diagnosis in conventional fundus images. Recognizing the need for effective representation learning in medical imaging, we combine the Vision Transformer with a Masked Autoencoder to generate meaningful and pertinent image augmentations. During pre-training, the Masked Autoencoder produces an altered version of the original image, which serves as a positive pair. The Vision Transformer then applies contrastive learning to this image pair to refine its weights. Our experiments demonstrate that this dual-model approach harnesses the strengths of both the ViT and the Masked Autoencoder, resulting in robust and clinically relevant feature embeddings. Preliminary results suggest significant improvements in diagnostic accuracy, underscoring the potential of our methodology for enhancing automated disease diagnosis in fundus imaging.
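
A minimal sketch of the positive-pair contrastive step described above, assuming an InfoNCE-style loss and placeholder embeddings in place of the authors' ViT and Masked Autoencoder outputs:

```python
# A minimal sketch of the positive-pair idea described above, assuming an
# InfoNCE-style contrastive loss. The random tensors are placeholders for the
# ViT embeddings of an original fundus image and its MAE reconstruction.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss where (z1[i], z2[i]) are positive pairs within the batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z1.size(0))   # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

z_orig = torch.randn(8, 256)   # placeholder: ViT embedding of original image
z_recon = torch.randn(8, 256)  # placeholder: ViT embedding of reconstruction
print(info_nce(z_orig, z_recon))
```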

Textile material classification in clothing images using deep learning (딥러닝을 이용한 의류 이미지의 텍스타일 소재 분류)

  • So Young Lee;Hye Seon Jeong;Yoon Sung Choi;Choong Kwon Lee
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.43-51
    • /
    • 2023
  • As online transactions increase, clothing images strongly influence consumer purchasing decisions. Image information about clothing materials has grown in importance, and it matters for the fashion industry to analyze clothing images and identify the materials used. Textile materials are difficult to identify with the naked eye, and sorting them consumes considerable time and cost. This study classifies textile materials from clothing images using deep learning algorithms. Classifying materials can help reduce clothing production costs, increase the efficiency of the manufacturing process, and support services that recommend products of specific materials to consumers. We used the machine-vision deep learning algorithms ResNet and Vision Transformer to classify clothing images. A total of 760,949 images were collected and preprocessed to detect abnormal images, leaving 167,299 clothing images with 19 textile labels and 20 fabric labels. We compared the performance of the two algorithms with the Top-k Accuracy Score metric; the Vision Transformer outperformed ResNet.
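
For clarity, a minimal sketch of the Top-k Accuracy Score used in the comparison, with random placeholder logits and labels standing in for real model outputs:

```python
# A minimal sketch of the Top-k Accuracy Score used to compare the two models;
# the logits and labels are random placeholders, not the study's data.
import torch

def top_k_accuracy(logits, labels, k=3):
    """Fraction of samples whose true label is among the k largest logits."""
    topk = logits.topk(k, dim=1).indices            # (N, k) predicted classes
    hits = (topk == labels.unsqueeze(1)).any(dim=1) # true label in top k?
    return hits.float().mean().item()

logits = torch.randn(100, 19)            # e.g., scores over 19 textile labels
labels = torch.randint(0, 19, (100,))
print(top_k_accuracy(logits, labels, k=3))
```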

Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook;Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.67-70
    • /
    • 2022
  • As the Transformer model, first widely adopted in NLP, has been applied to vision, it has pushed past the plateaued performance of CNN-based models in areas such as object detection and segmentation. Moreover, a ViT (Vision Transformer) trained by self-supervision on images alone, without label data, can produce saliency maps that locate the important objects in an image, which has spurred active research on object detection and semantic segmentation via self-supervised ViT. In this paper, we attach a classifier to a ViT and visually compare the saliency maps of a model trained normally with those of a model transfer-learned from self-supervised pretrained weights. Through this comparison, we show how class-classification-based transfer learning affects the transformer's saliency maps.
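
As a simple stand-in for the saliency maps discussed above (the paper visualizes attention-derived maps; this sketch uses input-gradient saliency instead, with an assumed `timm` model):

```python
# A simple stand-in for the saliency maps compared above: input-gradient
# saliency for a timm ViT. The paper visualizes attention-derived maps; this
# gradient-based variant is only an illustrative substitute.
import timm
import torch

model = timm.create_model("vit_small_patch16_224", pretrained=True).eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder image

logits = model(x)
logits[0, logits.argmax()].backward()   # gradient of the top class w.r.t. input

saliency = x.grad.abs().max(dim=1).values  # (1, 224, 224) per-pixel saliency
print(saliency.shape)
```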


Unleashing the Potential of Vision Transformer for Automated Bone Age Assessment in Hand X-rays (자동 뼈 연령 평가를 위한 비전 트랜스포머와 손 X 선 영상 분석)

  • Kyunghee Jung;Sammy Yap Xiang Bang;Nguyen Duc Toan;Hyunseung Choo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.687-688
    • /
    • 2023
  • Bone age assessment is a crucial task in pediatric radiology for evaluating growth and development in children. In this paper, we explore the potential of the Vision Transformer, a state-of-the-art deep learning model, for bone age assessment from X-ray images. We generate heatmap outputs with a pre-trained Vision Transformer on a publicly available dataset of hand X-rays and show that the model tends to attend to the hand as a whole and specifically to the bone regions of the image, suggesting it can accurately identify the regions of interest for bone age assessment without pre-processing to remove background noise. We also suggest two methods for extracting the region of interest from the heatmap output. Our study indicates that the Vision Transformer holds great potential for bone age assessment from X-ray images, as it can provide accurate and interpretable output that may assist radiologists in identifying potential abnormalities or areas of interest in the X-ray image.
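
One plausible form of the region-of-interest extraction mentioned above is thresholding the heatmap and taking the bounding box of the surviving pixels; the following sketch is an assumption, not the authors' exact method:

```python
# A minimal sketch of one plausible ROI-extraction method: threshold the
# heatmap and take the bounding box of the surviving pixels. This is an
# assumption, not necessarily either of the paper's two methods.
import numpy as np

def heatmap_roi(heatmap, threshold=0.5):
    """Bounding box (y0, x0, y1, x1) of pixels above threshold * max value."""
    mask = heatmap >= threshold * heatmap.max()
    ys, xs = np.nonzero(mask)
    return ys.min(), xs.min(), ys.max(), xs.max()

heatmap = np.random.rand(224, 224)  # placeholder model heatmap
print(heatmap_roi(heatmap))
```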

Diagnosis of the Rice Lodging for the UAV Image using Vision Transformer (Vision Transformer를 이용한 UAV 영상의 벼 도복 영역 진단)

  • Hyunjung Myung;Seojeong Kim;Kangin Choi;Donghoon Kim;Gwanghyeong Lee;Hyunggeun Ahn;Sunghwan Jeong;Byoungjun Kim
    • Smart Media Journal
    • /
    • v.12 no.9
    • /
    • pp.28-37
    • /
    • 2023
  • The main factor behind declines in rice yield is damage from localized heavy rain or typhoons. Analyzing rice lodging areas through visual inspection and on-site field surveys makes objective results hard to obtain and requires considerable time and money. In this paper, we propose a method for estimating and diagnosing rice lodging areas in RGB images captured by unmanned aerial vehicles, using the Vision Transformer-based Segformer. The proposed method estimates the lodging, normal, and background areas with the Segformer model, and the lodging rate is diagnosed according to the rice field inspection criteria in the Seed Industry Act. The diagnosis can be used to map the distribution of lodging areas, track lodging trends, and support the government's quality management of certified seed. The proposed estimation method achieves a mean accuracy of 98.33% and an mIoU of 96.79%.
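
To make the diagnosis step concrete, a minimal sketch of the lodging rate and per-class IoU computations, assuming a 3-class mask encoding (0 = background, 1 = normal, 2 = lodging) that is not specified in the abstract:

```python
# A minimal sketch of the two quantities above: the lodging rate from a
# 3-class segmentation mask and a per-class IoU. The class encoding
# (0 = background, 1 = normal, 2 = lodging) is an assumption.
import numpy as np

def lodging_rate(mask):
    """Share of the rice area (normal + lodging pixels) that is lodged."""
    rice = np.isin(mask, (1, 2)).sum()
    return (mask == 2).sum() / rice if rice else 0.0

def class_iou(pred, target, cls):
    inter = np.logical_and(pred == cls, target == cls).sum()
    union = np.logical_or(pred == cls, target == cls).sum()
    return inter / union if union else 0.0

pred = np.random.randint(0, 3, (512, 512))    # placeholder predicted mask
target = np.random.randint(0, 3, (512, 512))  # placeholder ground truth
print(lodging_rate(pred),
      np.mean([class_iou(pred, target, c) for c in range(3)]))
```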