• Title/Summary/Keyword: caption (캡션)


Efficient Caption Detection Algorithm Using Temporal Information in Video (시간적 정보를 이용한 비디오에서의 효과적인 캡션 검출 알고리즘)

  • Kim, Su-Yeon;Shin, Chung-Ho;Kwon, Chul-Hyun;Park, Sang-Hui
    • Proceedings of the KIEE Conference / 2003.07d / pp.2720-2722 / 2003
  • This paper proposed a new caption detection and recognition algorithm that makes full use of temporal information in continuous video. A phrase-registration technique is used to locate captions in space and time from accumulated frame-difference information, and a new multi-frame integration method is used to handle complex background images. Unlike previous work, the method involves no complex computation, allowing fast execution. The proposed method was applied to a variety of news video data, and the results were highly accurate and effective. (A minimal code sketch of the frame-difference idea follows below.)

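The abstract does not spell out the paper's phrase-registration or multi-frame integration steps, so the following is only a minimal sketch of the underlying idea, with assumed thresholds and a helper name chosen for illustration: overlaid captions stay still while the scene changes, so pixels with low accumulated frame difference but strong edge structure are caption candidates.

```python
import cv2
import numpy as np

def caption_candidate_mask(frames, diff_thresh=8.0):
    """frames: list of consecutive grayscale uint8 frames from one shot."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        acc += cv2.absdiff(prev, curr)          # accumulate frame-to-frame change
    acc /= max(len(frames) - 1, 1)

    static = acc < diff_thresh                  # temporally stable pixels
    edges = cv2.Canny(frames[len(frames) // 2], 100, 200) > 0
    mask = (static & edges).astype(np.uint8) * 255
    # merge character strokes into horizontal text lines
    return cv2.dilate(mask, np.ones((3, 15), np.uint8))

# Wide, short connected components in the returned mask (found with
# cv2.connectedComponentsWithStats) are the caption candidates.
```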

Image captioning and video captioning using Transformer (Transformer를 사용한 이미지 캡셔닝 및 비디오 캡셔닝)

  • Gi-Duk Kim;Geun-Hoo Lee
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.303-305 / 2023
  • This paper proposes image captioning and video captioning methods using a Transformer. Features extracted by a pre-trained image classification model are fed into the Transformer, and captions for images and videos are produced through its encoder-decoder. For image captioning, the model was trained on a Korean dataset to output Korean captions; for video captioning, the model was trained on the MSVD dataset and the resulting captions were compared against other video captioning models. When GPT-2, a modified Transformer decoder, was used to improve video captioning performance, the BLEU-1 score improved from 0.62 with the plain Transformer to 0.80 with GPT-2. (A hedged sketch of this pipeline follows below.)

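As a rough illustration of the pipeline the abstract describes (pretrained classifier features fed into a Transformer encoder-decoder), here is a hedged PyTorch sketch; the backbone choice, layer sizes, and tokenization are assumptions, not the authors' configuration. Swapping the decoder half for a GPT-2-style stack, as the authors did for video captioning, would replace the decoder of nn.Transformer.

```python
import torch
import torch.nn as nn
from torchvision import models

class TransformerCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # pooled 2048-d feature
        for p in self.cnn.parameters():                  # keep the classifier frozen
            p.requires_grad = False
        self.proj = nn.Linear(2048, d_model)             # CNN feature -> encoder input
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model, nhead,
                                          num_encoder_layers=num_layers,
                                          num_decoder_layers=num_layers,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, images, caption_tokens):
        feats = self.cnn(images).flatten(1)              # (B, 2048)
        memory = self.proj(feats).unsqueeze(1)           # (B, 1, d_model) source sequence
        tgt = self.embed(caption_tokens)                 # (B, T, d_model)
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.transformer(memory, tgt, tgt_mask=mask)
        return self.out(dec)                             # (B, T, vocab) next-token logits
```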

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multimodal recurrent neural network consisting of four distinct layers: a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multimodal layer for combining visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a unique structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent neural network layer but also to the input of the multimodal layer, so that the visual information extracted from the image can be used at each recurrent step to generate the corresponding textual caption. Through various comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrated that the proposed multimodal recurrent neural network model achieves high performance in terms of caption accuracy and model-transfer effect. (An illustrative sketch of this structure appears below.)
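
The layer wiring described above translates fairly directly into code. The following is an illustrative PyTorch sketch with assumed dimensions, not the authors' implementation; note how the image feature both initializes the LSTM state and re-enters the multimodal layer at every step.

```python
import torch
import torch.nn as nn

class MultimodalRNNCaptioner(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)   # image feature -> initial h0
        self.init_c = nn.Linear(feat_dim, hidden_dim)   # image feature -> initial c0
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # multimodal layer: fuses the LSTM output with the visual feature at each step
        self.multimodal = nn.Linear(hidden_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, tokens):
        # image_feats: (B, feat_dim) from a CNN; tokens: (B, T) word indices
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)  # (1, B, H)
        c0 = torch.tanh(self.init_c(image_feats)).unsqueeze(0)
        seq, _ = self.lstm(self.embed(tokens), (h0, c0))        # (B, T, H)
        vis = image_feats.unsqueeze(1).expand(-1, seq.size(1), -1)
        fused = torch.tanh(self.multimodal(torch.cat([seq, vis], dim=-1)))
        return self.out(fused)                                  # (B, T, vocab)
```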

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon;Dongha Jo;Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • The development of technology has brought digital innovation throughout the media industry, including production and editing techniques, and has diversified how consumers view content in the OTT and streaming era. The convergence of big data and deep learning networks has enabled automatic generation of text in formats such as news articles, novels, and scripts, but few studies have generated contextually smooth stories that reflect the author's intention. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques and automatically generate story-tailored scenarios through language models. Using image captioning with a CNN and an attention mechanism, we generate sentences describing the pictures on the storyboard and feed them into KoGPT-2, a natural language processing model, to automatically generate scenarios that match the planning intention. In this way, scenarios tailored to the author's intention and story can be produced in large quantities to ease the burden of content creation, with artificial intelligence participating in the overall process of digital content production. (A hedged sketch of the caption-to-KoGPT-2 loop follows below.)
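
A hedged sketch of the caption-to-language-model loop follows. The KoGPT-2 checkpoint id skt/kogpt2-base-v2 and the prompt format are assumptions, and the storyboard captioner itself (the paper's CNN + attention model) is abstracted away: plain caption strings are fed in directly.

```python
from transformers import PreTrainedTokenizerFast, GPT2LMHeadModel

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="</s>", eos_token="</s>",
    unk_token="<unk>", pad_token="<pad>", mask_token="<mask>")
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

def expand_scene(caption, story_so_far, max_new_tokens=80):
    """Recursively grow the scenario: prior story text + next storyboard caption."""
    prompt = (story_so_far + " " + caption).strip()
    ids = tokenizer.encode(prompt, return_tensors="pt")
    out = model.generate(ids, max_new_tokens=max_new_tokens,
                         do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

story = ""
for caption in ["A boy walks along the beach.", "He finds an old bottle."]:
    story = expand_scene(caption, story)   # each output feeds the next call
print(story)
```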

A Study on the Indexing Method of Moving Image for TV News (TV뉴스 영상 색인에 관한 연구)

  • 장재화
    • Proceedings of the Korean Society for Information Management Conference / 1999.08a / pp.35-38 / 1999
  • TV news footage covers a wide range of subjects, so it expresses the topic of the news while also containing other content. Because the footage can be reused, it must be searchable not only by the content of the news but also by what the images themselves depict. This study discusses how to extract index terms for TV news footage not only from the narration and captions used conventionally but also from the footage itself. Taking 「KBS News 9」 as an example, index terms were assigned in four categories: visual content information, narration and caption information, broadcast information, and filming information. (A small data-structure sketch of these categories follows below.)

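Since the study's contribution is an indexing scheme rather than an algorithm, a small data-structure sketch suffices; the English field names below are loose translations chosen for illustration, not the study's own terms.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NewsShotIndex:
    visual_content: List[str] = field(default_factory=list)    # what the footage itself shows
    narration_caption: List[str] = field(default_factory=list) # terms from narration/captions
    broadcast_info: List[str] = field(default_factory=list)    # program, air date, segment
    filming_info: List[str] = field(default_factory=list)      # location, shot type, camera

record = NewsShotIndex(
    visual_content=["crowd", "city hall"],
    narration_caption=["budget vote"],
    broadcast_info=["KBS News 9", "1999-08-12"],
    filming_info=["exterior", "long shot"],
)
```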

A Study on the Content-Based Video Information Indexing and Retrieval Using Closed Caption and Speech Recognition (캡션정보 및 음성인식을 이용한 내용기반 비디오 정보 색인 및 검색에 관한 연구)

  • 손종목;김진웅;배건성
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.11b / pp.141-145 / 1999
  • When searching video such as news, dramas, and films, semantic analysis and indexing of the video data are needed to obtain results that best match ordinary users' needs. The speech signal generally represents the content of video data well and is synchronized with the video, so it can be used effectively to segment video data for content-based retrieval. This paper proposes a method of applying speech recognition technology to segment video data for efficient retrieval and indexing, targeting broadcast news programs for which closed-caption information is available, and presents the corresponding experimental results. (A toy sketch of the alignment idea follows below.)

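A toy sketch of the segmentation idea, with an assumed recognizer output format of (word, start-time) pairs; a real system would use acoustic alignment rather than exact string matching.

```python
def segment_boundaries(captions, recognized):
    """captions: list of caption strings, one per news item.
    recognized: list of (word, start_sec) pairs from a speech recognizer."""
    boundaries = []
    pos = 0
    for cap in captions:
        first_word = cap.split()[0].lower()
        # scan forward for the first recognized word matching the caption start
        for i in range(pos, len(recognized)):
            word, start = recognized[i]
            if word.lower() == first_word:
                boundaries.append(start)   # segment begins where the caption's
                pos = i + 1                # opening word is first spoken
                break
    return boundaries

print(segment_boundaries(
    ["Budget vote today", "Weather outlook"],
    [("budget", 1.2), ("vote", 1.6), ("today", 2.0), ("weather", 45.3)]))
# -> [1.2, 45.3]
```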

A general-purpose model capable of image captioning in Korean and English and a method to generate text suitable for the purpose (한국어 및 영어 이미지 캡션이 가능한 범용적 모델 및 목적에 맞는 텍스트를 생성해주는 기법)

  • Cho, Su Hyun;Oh, Hayoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1111-1120 / 2022
  • Image captioning is the problem of viewing an image and describing it in language. It is important because it connects and brings together two areas: image processing and natural language processing. By automatically recognizing images and describing them in text, images can be converted into text and then into speech to help visually impaired people understand their surroundings, and the technology also bears on issues such as image search, art therapy, sports commentary, and real-time traffic information commentary. So far, image captioning research has focused solely on recognizing images and converting them into text. However, for practical use, the various environments found in reality must be considered, and image descriptions must be able to serve the intended purpose. In this work, we present a general-purpose Korean and English image captioning model together with a technique for generating text suited to the intended purpose.

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires processing both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability. In addition, many studies have been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple adoption of transfer learning using expertise data may invoke another type of problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results. On the contrary, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
To confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and this data was used for the expertise-transplantation experiments. The experiments confirmed that captions generated according to the proposed methodology are written from the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation: a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that many studies will actively address the lack of expertise data and improve the performance of image captioning. (A minimal code sketch of the per-characteristic transfer-learning idea follows below.)
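
A minimal sketch of the per-characteristic idea under stated assumptions: one independently fine-tuned copy of the pre-trained captioner per expertise characteristic, so the small datasets never interfere with one another. The captioner interface (model(features, tokens) returning next-token logits) is an assumption, not taken from the paper.

```python
import copy
import torch

def expertise_transplant(pretrained_model, expertise_sets, epochs=5, lr=1e-4):
    """expertise_sets: dict mapping characteristic name -> DataLoader of
    (image_feature, caption_tokens) pairs (~300 pairs each in the paper)."""
    specialists = {}
    loss_fn = torch.nn.CrossEntropyLoss()
    for name, loader in expertise_sets.items():
        model = copy.deepcopy(pretrained_model)   # independent copy per characteristic
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for feats, tokens in loader:
                logits = model(feats, tokens[:, :-1])           # teacher forcing
                loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                               tokens[:, 1:].reshape(-1))       # shifted targets
                opt.zero_grad()
                loss.backward()
                opt.step()
        specialists[name] = model
    return specialists
```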

Efficient Object Classification Scheme for Scanned Educational Book Image (교육용 도서 영상을 위한 효과적인 객체 자동 분류 기술)

  • Choi, Young-Ju;Kim, Ji-Hae;Lee, Young-Woon;Lee, Jong-Hyeok;Hong, Gwang-Soo;Kim, Byung-Gyu
    • Journal of Digital Contents Society / v.18 no.7 / pp.1323-1331 / 2017
  • Although copyright has grown into a large-scale business, persistent problems remain, especially with image copyright. In this study, we propose an automatic object extraction and classification system for scanned educational book images, combining document image processing with intelligent information technology such as deep learning. The proposed technique first removes noise components and then performs visual-attention-assessment-based region separation. We then carry out a grouping operation on the extracted block areas and categorize each block as a picture or a character area. Finally, the caption area is extracted by searching around each classified picture area. The performance evaluation showed an average accuracy of 83% in extracting image and caption areas; for image region detection alone, an accuracy of up to 97% was verified. (A rough sketch of this pipeline follows below.)
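
A rough OpenCV sketch of the pipeline stages named above (denoise, separate regions, group into blocks, classify, then search near pictures for captions); the thresholds are guesses, and the deep-learning classifier is reduced to a size heuristic here.

```python
import cv2
import numpy as np

def extract_blocks(page_gray):
    # 1) noise removal, 2) region separation, 3) grouping into blocks
    den = cv2.fastNlMeansDenoising(page_gray, h=10)
    _, binary = cv2.threshold(den, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    merged = cv2.dilate(binary, np.ones((5, 25), np.uint8))  # fuse words into blocks
    n, _, stats, _ = cv2.connectedComponentsWithStats(merged)
    blocks = []
    for x, y, w, h, area in stats[1:]:                       # skip background label 0
        kind = "picture" if (w * h > 40000 and h > 100) else "text"
        blocks.append({"box": (x, y, w, h), "kind": kind})
    # 4) caption search: text blocks directly below a picture block
    for pic in [b for b in blocks if b["kind"] == "picture"]:
        px, py, pw, ph = pic["box"]
        for t in [b for b in blocks if b["kind"] == "text"]:
            tx, ty, _, _ = t["box"]
            if 0 < ty - (py + ph) < 40 and abs(tx - px) < pw:
                t["kind"] = "caption"
    return blocks
```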