• Title/Summary/Keyword: Image Captioning

Using similarity based image caption to aid visual question answering (유사도 기반 이미지 캡션을 이용한 시각질의응답 연구)

  • Kang, Joonseo;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.34 no.2 / pp.191-204 / 2021
  • Visual Question Answering (VQA) and image captioning both require understanding the visual features of images and the linguistic features of text, so co-attention, which connects image and text, may be key to both tasks. In this paper, we propose a model that improves VQA performance by using image captions generated with a standard transformer model pretrained on the MSCOCO dataset. Because captions unrelated to the question can interfere with answering, only captions similar to the question were selected, based on their similarity to the question. In addition, since stopwords in a caption neither help nor hinder answering, the experiments were conducted after removing them. Experiments on the VQA-v2 data compared the proposed model with the deep modular co-attention network (MCAN) model, which had shown good performance by applying co-attention between images and text. The proposed model outperformed the MCAN model.
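The caption-selection step this abstract describes can be illustrated with a small sketch: rank candidate captions by similarity to the question, keep the top-k, and strip stopwords. TF-IDF cosine similarity, the toy stopword list, and the function name `select_captions` are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of similarity-based caption filtering for VQA (assumptions noted above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

STOPWORDS = {"a", "an", "the", "is", "are", "of", "on", "in", "with"}  # toy list

def select_captions(question: str, captions: list[str], k: int = 2) -> list[str]:
    """Return the k captions most similar to the question, with stopwords removed."""
    vec = TfidfVectorizer().fit([question] + captions)
    q_vec = vec.transform([question])
    c_vec = vec.transform(captions)
    scores = cosine_similarity(q_vec, c_vec)[0]          # similarity of each caption to the question
    top = scores.argsort()[::-1][:k]                     # indices of the k most similar captions
    kept = []
    for i in top:
        tokens = [w for w in captions[i].lower().split() if w not in STOPWORDS]
        kept.append(" ".join(tokens))
    return kept

if __name__ == "__main__":
    q = "What color is the dog on the couch?"
    caps = [
        "A brown dog is lying on a couch.",
        "A city street at night with traffic.",
        "Two people are playing frisbee in a park.",
    ]
    print(select_captions(q, caps, k=1))  # -> ['brown dog lying couch.']
```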

Meme Analysis using Image Captioning Model and GPT-4

  • Marvin John Ignacio;Thanh Tin Nguyen;Jia Wang;Yong-Guk Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.628-631 / 2023
  • We present a new approach to evaluating texts generated by Large Language Models (LLMs) for meme classification. Analyzing an image with embedded text, i.e., a meme, is challenging even for existing state-of-the-art computer vision models. By leveraging large image-to-text models, we can extract image descriptions that can be used in other tasks, such as classification. In our methodology, we first generate image captions using BLIP-2 models. We then use GPT-4 to evaluate the relationship between each caption and the meme text. The results show that OPT6.7B provides better ratings than other LLMs, suggesting that the proposed method has potential for meme classification.
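The two-stage pipeline outlined in this abstract, BLIP-2 captioning followed by a GPT-4 relatedness judgment, could look roughly like the sketch below. The checkpoint name `Salesforce/blip2-opt-2.7b`, the prompt wording, the 1-5 rating scale, and the helper names are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch: caption a meme image with BLIP-2, then ask GPT-4 how the caption
# relates to the meme's embedded text. Model choices and prompt are illustrative.
import torch
from PIL import Image
from openai import OpenAI
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

def caption_image(path: str) -> str:
    """Generate a short caption for the image at `path` with BLIP-2."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(device, dtype)
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(out, skip_special_tokens=True)[0].strip()

def rate_relationship(caption: str, meme_text: str) -> str:
    """Ask GPT-4 to rate (1-5) how strongly the caption relates to the meme text."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "Rate from 1 to 5 how strongly this image caption relates to the meme text.\n"
        f"Caption: {caption}\nMeme text: {meme_text}\n"
        "Answer with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()

# Hypothetical usage (file name is a placeholder):
# score = rate_relationship(caption_image("meme.jpg"), "when the build passes on the first try")
```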

Sports Video Position Retrieval System Using Frame Merging (프레임 병합을 이용한 스포츠 동영상 위치 검색 시스템)

  • 이지현;임정훈;이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.11a / pp.619-623 / 2002
  • Captions are an indispensable source of information in sports video, and sports highlights are composed in a way that can be recognized through captions. This paper presents the necessary intermediate step toward caption analysis: retrieving and discriminating the position of captions. The image is first enhanced and simplified with a threshold-value algorithm during preprocessing, and the caption is then analyzed with a multi-frame merging algorithm. The resulting processing is faster and simpler than a region-growing approach.
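The abstract leaves the algorithmic details out, but the general idea, thresholding each frame and merging consecutive binary frames so that only static caption pixels survive, can be sketched as follows. Otsu thresholding, the frame window, and the `min_area` filter are stand-in assumptions for details the abstract does not specify.

```python
# Rough sketch of caption-position localization via thresholding + frame merging.
import cv2
import numpy as np

def caption_regions(frames: list[np.ndarray], min_area: int = 500) -> list[tuple[int, int, int, int]]:
    """frames: consecutive BGR frames; returns (x, y, w, h) boxes of static, text-like regions."""
    merged = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Binarize each frame (Otsu stands in for the paper's threshold-value algorithm).
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # AND the binary frames: moving content cancels out, static captions remain.
        merged = binary if merged is None else cv2.bitwise_and(merged, binary)
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Example usage (hypothetical video path): merge a short run of consecutive frames.
# cap = cv2.VideoCapture("sports.mp4")
# frames = [cap.read()[1] for _ in range(5)]
# print(caption_regions(frames))
```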


Action Recognition Reference Image Captioning (행동 인식 참조 이미지 캡셔닝)

  • Park, Eun-Soo;Kim, Seunghwan;Ryu, Jaesung;Kim, Seondae;Mujtaba, Ghulam;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.21-24 / 2019
  • This paper addresses a weakness of existing image captioning related to action recognition. We show that when an action recognition dataset is built from the action parts of image captioning training data, that is, the verb parts, it ends up with many classes and only a small amount of data per class. Therefore, this paper adds an action recognition model and sets a threshold: when the confidence of the verb in the generated caption is low and the confidence of the action recognition model is high, the two outputs are swapped, thereby addressing this weakness of image captioning. We describe the proposed model, its implementation process, and experimental results showing image captioning that is robust for action recognition.
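The swap rule described in this abstract can be written as a small, self-contained sketch. The threshold values, the way the verb's position and confidence are obtained, and the function name `fuse_caption` are illustrative assumptions; the paper's actual thresholds and integration details are not given in the abstract.

```python
# Minimal sketch of the caption/action-recognition fusion rule (assumptions noted above).
def fuse_caption(caption_tokens: list[str],
                 verb_index: int,
                 verb_confidence: float,
                 action_label: str,
                 action_confidence: float,
                 caption_threshold: float = 0.4,
                 action_threshold: float = 0.8) -> str:
    """Replace the caption's verb with the action recognizer's label when the
    captioner is unsure about the verb but the action model is confident."""
    tokens = list(caption_tokens)
    if verb_confidence < caption_threshold and action_confidence > action_threshold:
        tokens[verb_index] = action_label
    return " ".join(tokens)

# Example: the captioner is unsure about "holding", the action model is sure about "throwing".
print(fuse_caption(["a", "man", "is", "holding", "a", "baseball"], 3, 0.21, "throwing", 0.93))
# -> "a man is throwing a baseball"
```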


Deep Learning in Radiation Oncology

  • Cheon, Wonjoong;Kim, Haksoo;Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.111-123 / 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, whose structure is loosely modeled on the human neural system, trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming. It has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. The application areas of DL in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL in radiation oncology and what more can be achieved by utilizing it. Studies that apply advances in DL to the development of radiation oncology were investigated comprehensively. In this article, the radiation treatment workflow was divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized according to each stage of the radiation treatment process, state-of-the-art studies were identified, and their clinical utility was examined. DL models can provide faster and more accurate solutions to problems faced by oncologists. While the benefit of a data-driven approach for improving the quality of care for cancer patients is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for both clinicians and medical physicists on issues that need to be addressed in time.