• Title/Summary/Keyword: Vision Transformer

Non-pneumatic Tire Design System based on Generative Adversarial Networks (적대적 생성 신경망 기반 비공기압 타이어 디자인 시스템)

  • JuYong Seong;Hyunjun Lee;Sungchul Lee
    • Journal of Platform Technology, v.11 no.6, pp.34-46, 2023
  • The design of non-pneumatic tires, in which the space between the wheel and the tread is filled with elastomeric compounds or polygonal spokes, has become an important research topic in the automotive and aerospace industries. In this study, a system was designed for non-pneumatic tire design through the implementation of a generative adversarial network. We specifically examined factors that could affect the design, including the type of non-pneumatic tire, its intended usage environment, manufacturing techniques, distinctions from pneumatic tires, and how spoke design affects load distribution. Using OpenCV, various shapes and spoke configurations were generated as images, and a Projected GAN model was trained on these images to generate shapes and spokes for non-pneumatic tire designs. The generated designs were labeled as usable or not, and a Vision Transformer image classification model was trained on these labels for classification. Evaluation of the classification model shows convergence to near-zero loss and a 99% accuracy rate, confirming that valid non-pneumatic tire designs were generated.
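The shape-generation step the abstract describes (rendering spoke configurations as training images with OpenCV) can be sketched without OpenCV in pure numpy. This is a minimal, hypothetical stand-in: the image size, spoke count, and radii are illustrative choices, not values from the paper.

```python
import numpy as np

def spoke_pattern(size=64, n_spokes=6, spoke_width=0.25, rim=0.45, hub=0.12):
    """Rasterize a simple radial-spoke wheel as a binary image.

    A stand-in for the OpenCV shape generation described in the paper:
    pixels are set where the angle to the centre falls near one of
    n_spokes evenly spaced directions, between the hub and rim radii.
    """
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    dx, dy = x - c, y - c
    r = np.hypot(dx, dy) / size          # radius, normalized by image size
    theta = np.arctan2(dy, dx)           # angle in [-pi, pi]
    period = 2 * np.pi / n_spokes
    # angular distance to the nearest spoke direction
    d = np.abs((theta % period) - period / 2)
    spokes = (d < spoke_width * period / 2) & (r > hub) & (r < rim)
    ring = (np.abs(r - rim) < 0.03) | (np.abs(r - hub) < 0.03)
    return (spokes | ring).astype(np.uint8)

img = spoke_pattern(n_spokes=8)
```

Varying `n_spokes` and `spoke_width` yields the kind of image variety a GAN would then be trained on.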

A Study on Utilization of Vision Transformer for CTR Prediction (CTR 예측을 위한 비전 트랜스포머 활용에 관한 연구)

  • Kim, Tae-Suk;Kim, Seokhun;Im, Kwang Hyuk
    • Knowledge Management Research, v.22 no.4, pp.27-40, 2021
  • Click-Through Rate (CTR) prediction is a key function that determines the ranking of candidate items in a recommendation system, recommending high-ranking items to reduce customer information overload and to maximize profit through sales promotion. The fields of natural language processing and image classification have achieved remarkable growth through deep neural networks, and recently a transformer model based on an attention mechanism, differentiated from the mainstream models in those fields, has achieved state-of-the-art results. In this study, we present a method for improving the performance of a transformer model for CTR prediction. To analyze how the discrete, categorical characteristics of CTR data, which differ from natural language and image data, affect performance, we experiment with embedding regularization and transformer normalization. The experimental results confirm that the prediction performance of the transformer improved significantly when L2 regularization was applied to the embeddings during CTR input processing and when batch normalization was used in the transformer model instead of layer normalization, its default normalization method.
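The normalization swap the abstract reports can be made concrete with a small numpy sketch contrasting the two statistics; the tensor shapes (batch, feature fields, embedding dimension) and scale factors are hypothetical, chosen only to mimic categorical CTR embeddings whose scale varies per field.

```python
import numpy as np

rng = np.random.default_rng(0)
# A batch of CTR feature embeddings: (batch, fields, embed_dim).
# Categorical embeddings can have very different scales per feature field,
# which is the setting the paper's normalization experiments probe.
x = rng.normal(size=(32, 10, 16)) * rng.uniform(0.1, 5.0, size=(1, 10, 1))

def layer_norm(x, eps=1e-5):
    # Normalize each token over its embedding dimension (Transformer default).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    # Normalize each (field, channel) over the batch, as in the paper's variant.
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

ln, bn = layer_norm(x), batch_norm(x)
# Batch normalization equalizes scale across feature fields over the batch;
# layer normalization only standardizes within each token.
```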

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems, v.29 no.4, pp.15-30, 2023
  • Recently, various services using artificial intelligence (AI) have been emerging in the media field as well. However, most video editing, which involves finding an editing point and joining footage, is still carried out manually, requiring a great deal of time and human resources. This study therefore proposes a methodology that detects the editing points of a video according to whether the person on screen is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment, so that the temporal and spatial changes of the face are reflected from the input video data. The behavior of the person in the video is then classified by the Video Swin Transformer-based model proposed in this study: the feature map generated by the Video Swin Transformer is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the proposed editing-point detection model improved from 87.46% to 89.17% compared with the model without facial keypoints.
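The fusion step described above (combining a Video Swin feature map with facial keypoints before a classification head) can be sketched in numpy. All shapes here are hypothetical placeholders: a 768-dimensional pooled clip feature and 68 facial keypoints, with a toy linear head standing in for the paper's convolution layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes: a pooled clip-level feature from a Video Swin
# Transformer (random placeholder here) and 68 (x, y) facial keypoints.
video_feat = rng.normal(size=(768,))        # pooled Video Swin feature
keypoints  = rng.normal(size=(68, 2))       # facial keypoints from alignment

# Fusion as in the abstract: concatenate the two representations, then
# classify speaking / not-speaking with a small learned head (toy weights).
fused = np.concatenate([video_feat, keypoints.ravel()])   # 768 + 136 = 904

W = rng.normal(scale=0.01, size=(904,))     # toy classifier weights
logit = fused @ W
p_speaking = 1.0 / (1.0 + np.exp(-logit))   # sigmoid probability
```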

A label-free high precision automated crack detection method based on unsupervised generative attentional networks and swin-crackformer

  • Shiqiao Meng;Lezhi Gu;Ying Zhou;Abouzar Jafari
    • Smart Structures and Systems, v.33 no.6, pp.449-463, 2024
  • Automated crack detection is crucial for structural health monitoring and rapid post-earthquake damage detection. However, achieving high-precision automatic crack detection in the absence of manual labels presents a formidable challenge. This paper presents a novel crack segmentation transfer learning method and a novel crack segmentation model called Swin-CrackFormer. The proposed method enables efficient crack image style transfer through a carefully designed data preprocessing technique, followed by a GAN model for the style transfer itself. Moreover, Swin-CrackFormer combines the advantages of Transformer and convolution operations to achieve effective local and global feature extraction. To verify its effectiveness, this study validates the proposed method on three unlabeled crack datasets and evaluates the Swin-CrackFormer model on the METU dataset. Experimental results demonstrate that the transfer learning method significantly improves crack segmentation performance on unlabeled crack datasets, and that Swin-CrackFormer achieves the best detection result on the METU dataset, surpassing existing crack segmentation models.

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata, v.7 no.2, pp.11-29, 2022
  • Pre-trained language models are among the most important and widely used tools in natural language processing. Because they have been pre-trained on a large corpus, high performance can be expected even when fine-tuning with a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models providing these advantages, and they are also being actively used in other fields such as computer vision and audio. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper defines the language model and the pre-trained language model, and discusses the development of pre-trained language models, focusing on representative Transformer variants.

Development of a Robotic System for Measuring Hole Displacement Using Contact-Type Displacement Sensors (접촉식 변위센서를 이용한 홀 변위 측정 로봇시스템 개발)

  • Kang, Hee-Jun;Kweon, Min-Ho;Suh, Young-Soo;Ro, Young-Shick
    • Journal of the Korean Society for Precision Engineering, v.25 no.1, pp.79-84, 2008
  • For the precision measurement of industrial products, the locations of holes inside the products, where present, are often selected as feature points. Hole location is usually measured with vision or laser-vision sensors, but their use is limited under large changes in light intensity or on reflective, shiny surfaces. To overcome these difficulties, we developed a hole-displacement measuring device using contact-type displacement sensors (LVDTs). Attached to a robot, the device measures the small displacement of a hole by allowing X-Y movement of its plates under the contact forces between the hole and its own circular cone. The device consists of three plates connected in series, each with its own function: the first attaches to an industrial robot through ball-bush joints and springs, while the second and third allow X and Y motion on LM guides. The bottom of the third plate is designed so that various circular cones can be attached according to the shape of the hole. Experiments confirmed the system's effectiveness, with a measurement accuracy better than 0.05 mm.
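The measurement principle reduces to reading two orthogonal LVDTs and converting to an (x, y) offset. A minimal sketch, assuming a hypothetical linear calibration factor (the paper does not give sensor specifications):

```python
import numpy as np

# Hypothetical calibration: mm of plate travel per volt of LVDT output.
LVDT_SENSITIVITY_MM_PER_V = 0.5

def hole_displacement(vx, vy, sensitivity=LVDT_SENSITIVITY_MM_PER_V):
    """Convert two orthogonal LVDT voltage readings into the (x, y) offset
    of the hole and the offset magnitude, as the cone seats in the hole."""
    dx = vx * sensitivity   # X-plate travel, mm
    dy = vy * sensitivity   # Y-plate travel, mm
    return dx, dy, float(np.hypot(dx, dy))

# Example readings in volts; magnitude comes out to 0.05 mm, the accuracy
# figure reported in the abstract.
dx, dy, mag = hole_displacement(0.06, -0.08)
```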

A Study of ZVS Two-Switch Forward Converter Using Auxiliary Switch (보조 스위치를 사용한 ZVS Two-Switch 포워드 컨버터에 대한 연구)

  • Jung, Min-Hyuk;Kim, Yong;Um, Tae-Min;Lee, Kyu-Hun;Lee, Dong-Hyun
    • Proceedings of the KIEE Conference, 2009.07a, pp.965-966, 2009
  • In this paper, a new soft-switching two-switch forward converter topology is proposed. Compared with the conventional two-switch forward converter, the proposed converter employs an auxiliary switch and a clamp capacitor in place of the two reset diodes; not only can its duty cycle exceed 0.5 to cover a wide input-voltage range, but soft switching is also achieved for all switches. In particular, the voltage stress across the main switches is clamped at $\frac{1}{2}V_{in}$, and the voltage stress across the auxiliary switch is clamped at $V_{in}$. In addition, because the clamp capacitor is in series with the transformer, the duty ratio can be extended according to $V_o=\frac{V_{in}D(1-D)}{N}$. As a cost-effective approach, it is therefore attractive for high-input, wide-range, high-efficiency applications.
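The extended transfer ratio can be motivated by volt-second balance, under the assumption (ours, not stated in the abstract) that the series clamp capacitor settles to a DC voltage $V_c$ and the converter behaves like an asymmetric half-bridge forward stage with turns ratio $N$:

```latex
\begin{aligned}
\text{magnetizing volt-second balance:}\quad
(V_{in}-V_c)\,D &= V_c\,(1-D)
&&\Rightarrow\quad V_c = D\,V_{in},\\[4pt]
\text{averaged, rectified secondary:}\quad
V_o &= \frac{D\,(V_{in}-V_c)}{N}
= \frac{V_{in}\,D(1-D)}{N}.
\end{aligned}
```

Because the on-time primary voltage $(1-D)V_{in}$ shrinks as $D$ grows, the relation remains well defined for $D > 0.5$, which is the duty-cycle extension the abstract claims.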


Small Marker Detection with Attention Model in Robotic Applications (로봇시스템에서 작은 마커 인식을 하기 위한 사물 감지 어텐션 모델)

  • Kim, Minjae;Moon, Hyungpil
    • The Journal of Korea Robotics Society, v.17 no.4, pp.425-430, 2022
  • As robots become one of the mainstream digital transformations, machine vision for robots has become a major area of study, providing the ability to check what robots see and to make decisions based on it. However, finding a small object in an image is difficult for most visual recognition networks, because they are mostly convolutional neural networks that consider mainly local features. We therefore build a model that considers global as well as local features. In this paper, we propose a deep learning method for detecting a small marker on an object, with an algorithm that captures global features by combining the Transformer's self-attention mechanism with a convolutional neural network. We suggest a self-attention model with a new definition of Query, Key, and Value that lets the model learn global features, along with a simplified formulation that removes the position vector and classification token, which make the model heavy and slow. Finally, we show that our model achieves higher mAP than the state-of-the-art model YOLOR.
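A minimal numpy sketch of the simplification the abstract describes: single-head scaled dot-product self-attention with no positional embeddings and no classification token, applied to flattened feature-map cells. The dimensions and random weights are illustrative, not the paper's.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def simplified_self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention without positional embeddings or a class
    token, in the spirit of the simplification described in the abstract.

    x: (tokens, d) feature vectors, e.g. flattened CNN feature-map cells.
    """
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product
    A = softmax(scores, axis=-1)              # each row sums to 1
    return A @ V, A

rng = np.random.default_rng(2)
d = 8
x = rng.normal(size=(16, d))                  # 16 feature-map locations
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
out, attn = simplified_self_attention(x, Wq, Wk, Wv)
```

Every output token is a mixture of all 16 locations, which is the global-feature behavior that convolution alone lacks.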

A Research Trends on Robustness in ViT-based Models (ViT 기반 모델의 강건성 연구동향)

  • Shin, Yeong-Jae;Hong, Yoon-Young;Kim, Ho-Won
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.510-512, 2022
  • CNNs (Convolutional Neural Networks), long used in computer vision, are highly vulnerable to perturbations added maliciously to cause misclassification. ViT (Vision Transformer), by applying an attention structure that explores global features of the input image, shows more robustness against adversarial attacks that add perturbations to specific pixels than the local feature extraction of CNNs; however, recent robustness analyses of the attention structure and the development of various attack techniques have raised security concerns. This paper surveys research analyzing the structural characteristics that give ViT its robustness relative to CNNs and introduces recent attack techniques targeting the attention structure, highlighting what must be addressed to keep future ViT-derived models robust.
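The adversarial perturbations discussed in this line of work can be illustrated with an FGSM-style sketch on a toy linear score; the model, dimensions, and step size are hypothetical, chosen so the gradient is transparent (for a linear score $w \cdot x$, the input gradient is just $w$).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear "classifier": score = w @ x. An FGSM-style attack perturbs the
# input by the sign of the input gradient; here that gradient is simply w.
w = rng.normal(size=(32,))
x = rng.normal(size=(32,))
eps = 0.1                                    # infinity-norm budget

score_clean = float(w @ x)
# Step each coordinate by eps against the current score's sign.
x_adv = x - eps * np.sign(w) * np.sign(score_clean)
score_adv = float(w @ x_adv)

# The perturbation is tiny per pixel (at most eps), yet it moves the score
# by eps * ||w||_1, which grows with dimension -- the core of the threat.
```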

Survey of the Model Inversion Attacks and Defenses to ViT (ViT 기반 모델 역전 공격 및 방어 기법들에 대한 연구)

  • Miseon Yu;Yunheung Peak
    • Proceedings of the Korea Information Processing Society Conference, 2023.05a, pp.15-17, 2023
  • ViT (Vision Transformer) is a model that splits an image into patches and feeds them all at once into a Transformer architecture. Because it achieves SOTA (state-of-the-art) performance on various image recognition tasks with less training computation than CNN-based models, research applying ViT to a wide range of vision tasks is being actively pursued. However, ViT models have also been shown to be unsafe against model inversion attacks, which use the gradients generated during AI model training to reconstruct the original training data. Model inversion attacks and defenses for CNN-based models have been studied extensively, but related research on ViT is only beginning; since ViT has characteristics different from CNN-based models, attack and defense techniques need to be studied anew. This study therefore describes the characteristics of model inversion attacks and defenses specialized for ViT models.
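Why shared gradients leak training data can be shown with a classic linear-layer observation (a general fact about gradients, not the specific ViT attacks the paper surveys): for $y = Wx + b$, the weight gradient is an outer product of the upstream gradient with the input, so dividing a row of it by the matching bias-gradient entry recovers the input exactly. The dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# For a linear layer y = W x + b with upstream gradient g = dL/dy:
#   dL/dW = outer(g, x)   and   dL/db = g,
# so any row of dL/dW divided by the matching entry of dL/db yields x.
d_in, d_out = 6, 4
x = rng.normal(size=(d_in,))          # the "private" training input
g = rng.normal(size=(d_out,))         # upstream gradient dL/dy

grad_W = np.outer(g, x)               # what gradient sharing would expose
grad_b = g

row = int(np.argmax(np.abs(grad_b)))  # pick a numerically safe row
x_recovered = grad_W[row] / grad_b[row]
# x_recovered equals x: the gradient alone reconstructed the input.
```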