• Title/Abstract/Keywords: vision transformer

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering / v.13 no.1 / pp.35-49 / 2024
  • Video captioning technology, as a significant outcome of the integration between computer vision and natural language processing, has emerged as a key research direction in the field of artificial intelligence. This technology aims to achieve automatic understanding and language expression of video content, enabling computers to transform the visual information in videos into textual form. This paper analyzes the research trends in deep learning-based video captioning, categorizes them into four main groups: CNN-RNN-based, RNN-RNN-based, multimodal-based, and Transformer-based models, and explains the concept, features, pros, and cons of each model. The paper also lists commonly used datasets and performance evaluation methods in the video captioning field. The datasets encompass diverse domains and scenarios, offering extensive resources for training and validating video captioning models. The evaluation methods cover the major evaluation indicators and provide practical references for assessing model performance from various angles. Finally, the paper identifies major challenges that require continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which increase complexity in real-world applications, and presents new tasks to be studied, such as temporal relationship modeling and multimodal data integration.
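
Of the four families the survey identifies, the CNN-RNN design is the simplest to illustrate. Below is a minimal, hypothetical sketch of such an encoder-decoder in PyTorch; the architecture, dimensions, and the use of ResNet-18 are our own illustrative assumptions, not any model from the surveyed papers.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnRnnCaptioner(nn.Module):
    """Minimal CNN-RNN video captioner (illustrative sketch only)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = resnet18(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop FC head
        self.proj = nn.Linear(512, hidden_dim)      # frame feature -> LSTM state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, captions):
        # frames: (B, T, 3, H, W) sampled frames; captions: (B, L) token ids
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).flatten(1)  # (B*T, 512)
        video = self.proj(feats.view(b, t, -1).mean(dim=1))    # mean-pool frames
        h0 = video.unsqueeze(0)                                # init hidden state
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.head(out)                                  # (B, L, vocab)
```

Mean-pooling the frame features discards temporal order, which is precisely the weakness that the survey's later Transformer-based family addresses.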

Korean Text Image Super-Resolution for Improving Text Recognition Accuracy (텍스트 인식률 개선을 위한 한글 텍스트 이미지 초해상화)

  • Junhyeong Kwon;Nam Ik Cho
    • Journal of Broadcast Engineering / v.28 no.2 / pp.178-184 / 2023
  • Finding text in general scene images and recognizing its contents is a very important task that can serve as a basis for robot vision, visual assistance, and so on. However, in low-resolution text images, degradations such as noise or blur are more noticeable, which leads to a severe drop in text recognition accuracy. In this paper, we propose a new Korean text image super-resolution method based on a Transformer, which generally shows higher performance than convolutional neural networks. In the experiments, we show that text recognition accuracy for Korean text images improves when our proposed super-resolution method is used. We also propose a new Korean text image dataset for training our model, which contains a massive number of HR-LR Korean text image pairs.
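
The paper's central claim, that super-resolving low-resolution text images raises recognition accuracy, implies a simple before/after measurement protocol. The sketch below shows only that protocol; `sr_model` and `recognizer` are hypothetical callables standing in for the paper's Transformer SR network and a Korean text recognizer.

```python
import torch

@torch.no_grad()
def recognition_accuracy(pairs, recognizer, sr_model=None):
    """Share of LR text images whose transcription matches ground truth.

    pairs: iterable of (lr_image_tensor, ground_truth_string).
    recognizer/sr_model: hypothetical callables; sr_model upscales the
    image, recognizer returns a decoded string.
    """
    correct, total = 0, 0
    for lr_image, gt_text in pairs:
        image = sr_model(lr_image) if sr_model is not None else lr_image
        correct += int(recognizer(image) == gt_text)
        total += 1
    return correct / max(total, 1)

# Usage: compare accuracy with and without super-resolution.
# base = recognition_accuracy(test_pairs, recognizer)
# sr   = recognition_accuracy(test_pairs, recognizer, sr_model)
```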

200[W] Half-Bridge LLC Series Resonant Converter for driving LED Lamp (LED 조명장치 구동용 200[W]급 하프브리지 LLC 직렬공진형 컨버터)

  • Han, Woo-Yong;Park, Hyo-Sik
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.11 / pp.4483-4488 / 2010
  • In comparison with other light sources, LEDs have merits such as long lifetime, freedom from pollution, and high energy efficiency. With the recent development of high-brightness, high-capacity LEDs, they have moved beyond display systems into general lighting applications. The driving current of a power LED must be controlled below its rated value. This paper describes a half-bridge LLC series resonant converter with a current-limiting function. Compared with other resonant converters, the half-bridge LLC series resonant converter operates over a relatively wide input voltage and output load range. It also allows one magnetic component to be eliminated, because the leakage inductance of the transformer serves as the resonant inductance. The design was validated by building and testing a 200[W] half-bridge LLC converter with a DC 24[V] output for an LED lamp driver, including current limiting and power factor correction (PFC) functions.
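
The design centers on the two characteristic resonant frequencies of the LLC tank. As a brief refresher (these are standard LLC relations, not taken from the paper), with resonant inductance $L_r$ (here the transformer's leakage inductance), magnetizing inductance $L_m$, and resonant capacitance $C_r$:

$$f_{r1} = \frac{1}{2\pi\sqrt{L_r C_r}}, \qquad f_{r2} = \frac{1}{2\pi\sqrt{(L_r + L_m)\,C_r}}$$

Designs typically operate near $f_{r1}$, where the tank remains inductive and the half-bridge switches achieve zero-voltage switching; using the leakage inductance as $L_r$ is what lets the discrete resonant inductor be removed.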

Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals

  • Kiduk Kim;Kyungjin Cho;Ryoungwoo Jang;Sunggu Kyung;Soyoung Lee;Sungwon Ham;Edward Choi;Gil-Sun Hong;Namkug Kim
    • Korean Journal of Radiology / v.25 no.3 / pp.224-242 / 2024
  • The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine, and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI; these models have made valuable contributions to the field of radiology. The review also explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, and emphasizes the role of inversion in the investigation of generative models, outlining an approach to replicate this process. We provide an overview of large language models, such as the GPT family and bidirectional encoder representations from transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the large language and vision assistant for biomedicine (LLaVa-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.

SEL-RefineMask: A Seal Segmentation and Recognition Neural Network with SEL-FPN

  • Dun, Ze-dong;Chen, Jian-yu;Qu, Mei-xia;Jiang, Bin
    • Journal of Information Processing Systems / v.18 no.3 / pp.411-427 / 2022
  • Mining historical and cultural information from seals in ancient books is of great significance. However, ancient Chinese seal samples are scarce, carving methods are diverse, and traditional greyscale-based digital image processing methods struggle to achieve good segmentation and recognition performance. Recently, some deep learning algorithms have been proposed to address this problem; however, current neural networks are difficult to train owing to the lack of datasets. To solve the aforementioned problems, we propose SEL-RefineMask, which combines a selector of feature pyramid network (SEL-FPN) with RefineMask to segment and recognize seals. The SEL-FPN intelligently selects the specific FPN layer that represents the appropriate scale, reducing the number of anchor frames. We benchmarked several instance segmentation networks as baselines; the top-1 segmentation result of 64.93% is 5.73% higher than that of humans. The top-1 result of the SEL-RefineMask network reached 67.96%, surpassing the baselines. After segmentation, a vision transformer was used to recognize the segmented output, reaching an accuracy of 91%. Furthermore, a dataset of seals in ancient Chinese books (SACB) for segmentation and a small seal font (SSF) dataset for recognition were established and are publicly available online.
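
The abstract gives only the intent of SEL-FPN (pick one pyramid level so fewer anchors are processed), so the following is a speculative sketch of a scale selector, not the paper's architecture; every layout choice here is an assumption.

```python
import torch
import torch.nn as nn

class ScaleSelector(nn.Module):
    """Sketch of a selector over FPN levels (hypothetical layout).

    Instead of running the detection head on every pyramid level, a
    small classifier picks the single level whose scale best matches
    the input, cutting the number of anchor boxes to process. During
    training the logits would be supervised with a cross-entropy loss
    against a scale label derived from the ground-truth seal size.
    """
    def __init__(self, channels=256, num_levels=4):
        super().__init__()
        self.selector = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_levels),
        )

    def forward(self, pyramid):
        # pyramid: list of (B, C, Hi, Wi) maps, fine to coarse.
        logits = self.selector(pyramid[-1])   # score from the coarsest map
        level = logits.argmax(dim=1)          # chosen level per image
        # Per-image inference (B = 1, as for a single seal crop):
        return pyramid[int(level[0])], logits
```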

A study on the effectiveness of intermediate features in deep learning on facial expression recognition

  • KyeongTeak Oh;Sun K. Yoo
    • International journal of advanced smart convergence / v.12 no.2 / pp.25-33 / 2023
  • The purpose of this study is to evaluate the impact of intermediate features on facial expression recognition (FER) performance. To achieve this objective, intermediate features were extracted from the input images at specific layers (FM1-FM4) of a pre-trained network (ResNet-18). These intermediate features and the original images were used as inputs to a vision transformer (ViT), and the FER performance was compared. When a single image was used as input, intermediate features extracted from FM2 yielded the best performance (training accuracy: 94.35%, testing accuracy: 75.51%), versus a training accuracy of 91.32% and a testing accuracy of 74.68% for the original image. When the original image was combined with intermediate features as input, the best FER performance was achieved by combining the original image with FM2, FM3, and FM4 (training accuracy: 97.88%, testing accuracy: 79.21%). These results imply that incorporating intermediate features alongside the original image can lead to superior performance. The findings can be referenced when designing the preprocessing stages of a deep learning model for FER; by considering the effectiveness of intermediate features, practitioners can make informed decisions to enhance the performance of FER systems.
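
The extraction step described here maps naturally onto torchvision's ResNet-18 with forward hooks. In this sketch, identifying FM1-FM4 with `layer1`-`layer4` is our assumption, and the adapter that feeds the captured maps into the ViT is omitted.

```python
import torch
from torchvision.models import resnet18

backbone = resnet18(weights="IMAGENET1K_V1").eval()
captured = {}

def save_to(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Assumed mapping: FM1..FM4 correspond to layer1..layer4 (our guess).
for fm, layer in zip(["FM1", "FM2", "FM3", "FM4"],
                     [backbone.layer1, backbone.layer2,
                      backbone.layer3, backbone.layer4]):
    layer.register_forward_hook(save_to(fm))

with torch.no_grad():
    backbone(torch.randn(1, 3, 224, 224))  # one face crop

# For a 224x224 input, captured["FM2"] is (1, 128, 28, 28); a 1x1 conv /
# resize adapter (not shown) would map it to the ViT's expected input.
print({k: tuple(v.shape) for k, v in captured.items()})
```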

Efficient Recognition of Easily-confused Chinese Herbal Slices Images Using Enhanced ResNeSt

  • Qi Zhang;Jinfeng Ou;Huaying Zhou
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.8 / pp.2103-2118 / 2024
  • Automated recognition of Chinese herbal slices (CHS) based on computer vision plays a critical role in the practical application of intelligent Chinese medicine. Due to the complexity and similarity of herbal images, identifying Chinese herbal slices is still a challenging task. Easily-confused CHS in particular exhibit high inter-class and intra-class complexity and similarity, and existing deep learning models are poorly adapted to identifying them efficiently. To address these problems comprehensively, a novel tiny easily-confused CHS dataset was built first, comprising six confusable pairs (twelve categories) with about 2,395 samples. Furthermore, we propose a ResNeSt-CHS model that combines multilevel perception fusion (MPF) and perceptive sparse fusion (PSF) blocks for efficiently recognizing easily-confused CHS images. To verify the superiority of ResNeSt-CHS and the effectiveness of our dataset, experiments were conducted, validating that ResNeSt-CHS is optimal for easily-confused CHS recognition, with a 2.1% improvement over the original ResNeSt model. Additionally, the results indicate that ResNeSt-CHS achieves high accuracy even on a relatively small-scale dataset. The model obtains state-of-the-art easily-confused CHS classification performance, with an accuracy of 90.8%, far beyond other models (EfficientNet, Transformer, ResNeSt, etc.) on the evaluation criteria.
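
The MPF and PSF blocks are not specified in the abstract, so they cannot be reproduced here; the sketch below only sets up the plain ResNeSt baseline the paper reports improving on by 2.1%, re-headed for the twelve CHS classes. The timm model name and hyperparameters are assumptions.

```python
import timm
import torch
import torch.nn as nn

# Plain ResNeSt baseline: pretrained backbone with a new 12-class head.
model = timm.create_model("resnest50d", pretrained=True, num_classes=12)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step; images (B, 3, 224, 224), labels (B,)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```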

Twin models for high-resolution visual inspections

  • Seyedomid Sajedi;Kareem A. Eltouny;Xiao Liang
    • Smart Structures and Systems / v.31 no.4 / pp.351-363 / 2023
  • Visual structural inspections are an inseparable part of post-earthquake damage assessments. With unmanned aerial vehicles (UAVs) establishing a new frontier in visual inspections, there are major computational challenges in processing the massive amounts of high-resolution visual data they collect. We propose twin deep learning models that can efficiently provide accurate high-resolution structural component and damage segmentation masks. The traditional approaches to high memory demands are either to uniformly downsample the raw images, at the price of losing fine local details, or to crop smaller parts of the images, losing global contextual information. Our twin models, comprising the Trainable Resizing for high-resolution Segmentation Network (TRS-Net) and DmgFormer, therefore approach the global and local semantics from different perspectives. TRS-Net is a compound high-resolution segmentation architecture equipped with learnable downsampler and upsampler modules to minimize information loss for optimal performance and efficiency. DmgFormer utilizes a transformer backbone and a convolutional decoder head with skip connections on a grid of crops, aiming for high-precision learning without downsizing. An augmented inference technique is used to further boost performance and reduce the possible loss of context due to grid cropping. Comprehensive experiments have been performed in the 3D physics-based graphics model (PBGM) synthetic environments of the QuakeCity dataset. The proposed framework is evaluated using several metrics on three segmentation tasks: component type, component damage state, and global damage (crack, rebar, spalling). The models were developed as part of the 2nd International Competition for Structural Health Monitoring.
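
DmgFormer's grid-of-crops strategy can be sketched independently of its backbone: tile the high-resolution image, segment each tile, and stitch the per-tile logits back together. The tile size and the `segment` callable below are assumptions; the paper's augmented inference (e.g., averaging predictions over transformed crops) is noted but not implemented.

```python
import torch

@torch.no_grad()
def grid_inference(image, segment, tile=512):
    """Run a segmentation model over non-overlapping tiles of a
    high-resolution image and stitch the per-tile masks back together.

    image: (C, H, W) tensor; H and W divisible by `tile` for brevity.
    segment: hypothetical callable mapping (1, C, tile, tile) ->
             (1, num_classes, tile, tile) logits.
    """
    c, h, w = image.shape
    rows = []
    for top in range(0, h, tile):
        cols = []
        for left in range(0, w, tile):
            crop = image[:, top:top + tile, left:left + tile]
            cols.append(segment(crop.unsqueeze(0))[0])
        rows.append(torch.cat(cols, dim=2))   # stitch along width
    return torch.cat(rows, dim=1)             # stitch along height
```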

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new method and important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial features being distorted by surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module encodes facial features as graph nodes, and a multi-scale feature enhancement residual module extracts facial keypoint features in conjunction with it. Second, the obtained facial keypoints are constructed into a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two faces belong to the same person. Across various tests, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. For face recognition, it achieves an accuracy of 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. The experimental results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance videos.
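
The second and third steps (graph attention over keypoint nodes, then a pairwise comparison scored by a fully connected layer) can be sketched generically. Everything below, including the dimensions, the single-head attention form, and the pooling, is an illustrative assumption, not the paper's network.

```python
import torch
import torch.nn as nn

class KeypointGraphAttention(nn.Module):
    """One graph-attention pass over facial keypoint nodes (sketch).

    nodes: (B, N, D) keypoint features; adj: (N, N) 0/1 directed
    adjacency, assumed to include self-loops so no row is all zeros.
    """
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        scores = self.q(nodes) @ self.k(nodes).transpose(1, 2)  # (B, N, N)
        scores = scores / nodes.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))    # edges only
        return scores.softmax(dim=-1) @ self.v(nodes)           # (B, N, D)

def same_identity_logit(feat_a, feat_b, head):
    """Pool two faces' graph features and score whether they match.
    `head` is a hypothetical nn.Linear(2 * D, 1) discriminator."""
    pooled = torch.cat([feat_a.mean(dim=1), feat_b.mean(dim=1)], dim=-1)
    return head(pooled)  # (B, 1) logit
```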

Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park;Jong Chul Ye;Eun Sun Lee;Gyeongme Cho;Jin Woo Yoon;Joo Hyeok Choi;Ijin Joo;Yoon Jin Lee
    • Korean Journal of Radiology / v.24 no.6 / pp.541-552 / 2023
  • Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography. Materials and Methods: A model that can utilize "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the proposed model with limited training data and weak labels, it was trained using a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The model was first pre-trained on chest radiographs to exploit knowledge common across modalities, then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs in both supine and erect positions. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two institutions. We evaluated the performance in diagnosing pneumoperitoneum using the area under the receiver operating characteristic curve (AUC) and compared it with that of radiologists. Results: In the internal validation, the proposed model had an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model. Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
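
DISTL itself is a published method, so the sketch below is deliberately generic: one teacher-student self-training step in the same spirit, where confident pseudo-labels from an exponential-moving-average (EMA) teacher supervise the student. This is not the authors' code; the confidence threshold and EMA rate are assumptions.

```python
import torch
import torch.nn.functional as F

def self_train_step(student, teacher, optimizer,
                    x_labeled, y_labeled, x_unlabeled,
                    threshold=0.9, ema=0.999):
    """One generic self-training step (sketch, not published DISTL code)."""
    with torch.no_grad():
        probs = teacher(x_unlabeled).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > threshold                        # confident samples only

    loss = F.cross_entropy(student(x_labeled), y_labeled)
    if keep.any():
        loss = loss + F.cross_entropy(student(x_unlabeled[keep]), pseudo[keep])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():                              # EMA teacher update
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema).add_(ps, alpha=1 - ema)
    return loss.item()
```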