• Title/Summary/Keyword: 이미지 캡션 (image caption)

The Development of Image Caption Generating Software for Auditory Disabled (청각장애인을 위한 동영상 이미지캡션 생성 소프트웨어 개발)

  • Lim, Kyung-Ho; Yoon, Joon-Sung
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2007.02a / pp.1069-1074 / 2007
  • When hearing-impaired users access video content such as films, broadcasts, and animations in a PC environment, partial accessibility barriers beyond visual reception arise depending on the degree of impairment. Content and technologies for improving the information accessibility of the hearing impaired, such as sign-language animation and lip-reading education, have been developed to overcome these barriers, but they have notable limitations. This paper therefore extracts the artistic expression methods of contemporary new-media artworks as design elements and develops a technology for producing original content in which technology and sensibility are harmonized, thereby deriving ways to improve the accessibility of video content for the hearing impaired in a PC environment; through the development of an interface for the visual conversion of auditory effects and of image caption generating software, it proposes a methodology for maximizing the usability of video content for the hearing impaired. The paper presents a step-by-step methodology: first, analysis of the video content accessibility of the hearing impaired; second, selective analysis of media art works and extraction of dynamic elements; third, production of the interface and content. In the third step, the image caption generating software is developed and image caption content in the form of bitmap icons is generated. The developed software is an interface for converting everyday linguistic elements, grounded in usability, and auditory elements extracted from artworks into visual elements. Technically, this development establishes an original interface extraction environment that mitigates the various web content accessibility barriers faced by the hearing impaired and broadens its application areas; it organically expands the technical domain through interdisciplinary attempts by extending established engineering techniques into the new area of content development; and it activates content visualization techniques by converting text and audio into images and visual effects, suggesting ways to cross-utilize multiple media. Beyond the limited domain of improving accessibility for the hearing impaired, it can also develop into a new methodology suited to the global era through attempts at, access to, and production of versatile supplementary video content capable of transcending linguistic barriers between countries.
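
Purely as an illustration of the conversion interface described above (auditory effects rendered as bitmap-icon captions), the sketch below maps sound-event labels to icon files; the label set and file names are invented for this example and are not the paper's actual software.

```python
# Hypothetical sketch: render auditory effects as bitmap-icon captions.
# The labels and icon paths are illustrative assumptions only.
ICON_MAP = {
    "doorbell": "icons/doorbell.bmp",
    "applause": "icons/applause.bmp",
    "phone_ring": "icons/phone_ring.bmp",
    "footsteps": "icons/footsteps.bmp",
}

def icon_caption(sound_label: str) -> str:
    """Return the bitmap icon standing in for a detected auditory effect."""
    return ICON_MAP.get(sound_label, "icons/generic_sound.bmp")

# A detected doorbell sound would appear on screen as its icon caption.
print(icon_caption("doorbell"))  # icons/doorbell.bmp
```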

High-Quality Multimodal Dataset Construction Methodology for ChatGPT-Based Korean Vision-Language Pre-training (ChatGPT 기반 한국어 Vision-Language Pre-training을 위한 고품질 멀티모달 데이터셋 구축 방법론)

  • Jin Seong; Seung-heon Han; Jong-hun Shin; Soo-jong Lim; Oh-woog Kwon
    • Annual Conference on Human and Language Technology / 2023.10a / pp.603-608 / 2023
  • This study examines the need for constructing a large-scale vision-language multimodal dataset for training Korean Vision-Language Pre-training models. At present, Korean vision-language multimodal datasets are scarce and high-quality data is difficult to obtain. We therefore propose a dataset construction methodology that uses machine translation to render foreign-language (English) vision-language data into Korean and then builds on it with generative AI. Among various caption generation approaches, we propose a new method that uses ChatGPT to automatically generate natural, high-quality Korean captions. This guarantees better caption quality than existing machine translation methods, and we ensemble multiple translation results to construct the multimodal dataset effectively. In addition, we introduce Caption Projection Consistency, a semantic-similarity-based evaluation measure, compare the English-to-Korean caption projection performance of various translation systems, and present criteria for evaluating them. Finally, this study presents a new methodology for building a Korean multimodal image-text dataset using ChatGPT and demonstrates English-Korean caption projection performance superior to that of representative machine translators. Our work thus shows a direction for automatically constructing scarce high-quality Korean datasets at scale, which we expect to contribute to improving the performance of deep-learning-based Korean Vision-Language Pre-training models.
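
The abstract does not give the exact formula for Caption Projection Consistency, so the sketch below stands in for it with an assumed cosine similarity between multilingual sentence embeddings; the sentence-transformers library and model name are choices of this illustration, not the paper's.

```python
# A minimal semantic-similarity sketch in the spirit of Caption Projection
# Consistency: cosine similarity between multilingual sentence embeddings
# is an assumption standing in for the paper's actual metric.
from sentence_transformers import SentenceTransformer, util

# Multilingual encoder so English and Korean captions share one space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def projection_consistency(english_caption: str, korean_caption: str) -> float:
    """Cosine similarity between an English caption and its Korean projection."""
    emb = model.encode([english_caption, korean_caption], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Compare candidate translations (e.g., machine translation vs. ChatGPT)
# and keep the one most consistent with the source caption.
src = "A man is playing with a dog on the beach."
candidates = ["한 남자가 해변에서 개와 놀고 있다.", "남자가 바닷가에서 강아지와 논다."]
best = max(candidates, key=lambda ko: projection_consistency(src, ko))
print(best)
```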

Automated Story Generation with Image Captions and Recursive Calls (이미지 캡션 및 재귀호출을 통한 스토리 생성 방법)

  • Isle Jeon; Dongha Jo; Mikyeong Moon
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.42-50 / 2023
  • The development of technology has driven digital innovation throughout the media industry, including production and editing techniques, and the OTT-service and streaming era has diversified the ways consumers watch content. The convergence of big data and deep learning networks has enabled automatic text generation in formats such as news articles, novels, and scripts, but few studies have reflected the author's intention and generated stories that flow smoothly in context. In this paper, we describe the flow of pictures in a storyboard using image caption generation techniques and automatically generate story-tailored scenarios through a language model. Using a CNN with an attention mechanism for image captioning, we generate sentences describing the pictures on the storyboard and feed the generated sentences into KoGPT-2, an artificial-intelligence natural language processing model, to automatically generate scenarios that meet the planning intention. Through this approach, scenarios customized to the author's intention and story can be created in large quantities, easing the burden of content creation, and artificial intelligence can participate in the overall process of digital content production, activating media intelligence.
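
As a rough illustration of the second stage, the sketch below feeds a hypothetical storyboard caption into a public KoGPT-2 checkpoint via Hugging Face transformers; the checkpoint and decoding parameters are assumptions of this example, not the paper's configuration.

```python
# Minimal sketch: continue a scenario from a storyboard caption with KoGPT-2.
# "skt/kogpt2-base-v2" is a publicly available checkpoint used as a stand-in;
# the caption below is assumed to come from the CNN+attention captioner.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")

caption = "한 소년이 비 오는 거리에서 우산을 들고 서 있다."  # from the captioning stage
inputs = tokenizer(caption, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=128,   # keep the generated scene short
        do_sample=True,   # sample for varied story continuations
        top_p=0.92,
        temperature=0.8,
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```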

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multimodal recurrent neural network consisting of four distinct layers: a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multimodal layer for combining visual and language information. The recurrent layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a unique structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent layer but also to the input of the multimodal layer, so that the visual information extracted from the image can be used at each recurrent step to generate the corresponding textual caption. Through comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multimodal recurrent neural network model performs well in terms of caption accuracy and model transfer effect.
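
A simplified PyTorch sketch of the described wiring, with illustrative layer sizes and backbone choice (not the paper's): the CNN feature both initializes the LSTM state and is fused again at every step in a multimodal layer.

```python
# Sketch: CNN feature initializes the LSTM state AND feeds the multimodal
# layer at every time step. Sizes and the ResNet backbone are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class MultimodalCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()               # reuse ResNet trunk as feature extractor
        self.cnn = cnn
        self.img_proj = nn.Linear(512, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Multimodal layer combines LSTM output with the image feature.
        self.multimodal = nn.Linear(hidden_dim * 2, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feat = self.img_proj(self.cnn(images))           # (B, H)
        h0 = feat.unsqueeze(0)                           # image initializes LSTM state
        c0 = torch.zeros_like(h0)
        seq, _ = self.lstm(self.embed(captions), (h0, c0))
        # Inject the image feature again at every time step.
        img = feat.unsqueeze(1).expand(-1, seq.size(1), -1)
        fused = torch.tanh(self.multimodal(torch.cat([seq, img], dim=-1)))
        return self.out(fused)                           # per-step word logits

logits = MultimodalCaptioner(vocab_size=10000)(
    torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```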

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. Deep learning is known to perform particularly well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of its high entry barrier, which requires analysts to process both image and text data, image captioning has established itself as one of the key fields of A.I. research owing to its wide applicability. Many studies have also been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning to expertise data may invoke another problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning with vast amounts of data, most of this interference is self-purified and has little impact on the results. On the contrary, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created and used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on mitigating the lack of expertise data and improving the performance of image captioning.
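
A minimal sketch of the character-independent idea, assuming a hypothetical pre-trained captioner and per-characteristic data loaders: each characteristic fine-tunes its own clone of the model, so gradients never mix across characteristics.

```python
# Sketch of character-independent transfer learning: clone the shared
# pre-trained captioner once per characteristic and fine-tune each clone
# in isolation. The captioner signature (images, captions) -> logits and
# the dataset dict are hypothetical placeholders.
import copy
import torch

def finetune_independently(pretrained, datasets, epochs=5, lr=1e-4):
    """Return one fine-tuned model per characteristic, trained in isolation."""
    specialists = {}
    for name, loader in datasets.items():     # e.g. {"art_therapy": DataLoader(...)}
        model = copy.deepcopy(pretrained)     # clone: no cross-characteristic updates
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for images, captions, targets in loader:
                loss = torch.nn.functional.cross_entropy(
                    model(images, captions).flatten(0, 1), targets.flatten())
                opt.zero_grad()
                loss.backward()
                opt.step()
        specialists[name] = model
    return specialists
```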

The Signification of Words and Photography in Photojournalism (포토저널리즘 사진과 캡션의 의미작용에 대한 연구)

  • Chung, Hong-Gi
    • Korean Journal of Communication and Information / v.18 / pp.231-268 / 2002
  • This study examines how audiences decode the photographs and words that make up photojournalism. Audiences' decoding of five pictures depicting a foreign worker's death, along with the caption for each, was analyzed, using an ethnographic method to find distinctive features of how audiences decode the pictures. Picture images shown without any title or caption produced quite different decodings from titled and captioned images, which led to three conclusions. First, adding a caption to a picture image transforms the way the audience decodes it. Second, the cultural background of an audience works as a major variable: how the audience decodes the picture image depends on the audience's cultural background. Third, a picture image without a caption cannot represent reality faithfully. The author experimentally examined how the communication process can work successfully between the reality that photojournalists try to represent and the way audiences decode it.

The Evaluation Structure of Auditory Images on the Streetscapes - The Semantic Issues of Soundscape based on the Students' Fieldwork - (거리경관에 대한 청각적 이미지의 평가구조 - 대학생들의 음풍경 체험을 통한 의미론적 고찰 -)

  • Han, Myung-Ho
    • The Journal of the Acoustical Society of Korea / v.24 no.8 / pp.481-491 / 2005
  • The purpose of this study is to interpret the evaluation structure of auditory images of streetscapes in urban areas from the semantic view of soundscapes. Using the caption evaluation method, a new method, a total of 45 college students participated in fieldwork from 2001 to 2005 to identify the images of sounds while walking the main streets of Namwon city. Various data were obtained covering the elements, features, impressions, and preferences of the auditory scene. In Namwon city, the elements forming auditory images are classified into natural sounds and artificial sounds, the latter including machinery sounds, community sounds, and signal sounds. The features of the auditory scene are classified by kind of sound, behavior, condition, character, relationship to the surroundings, and image. Finally, the impression of the auditory scene falls into three categories: human emotions, the atmosphere of the streets, and the characteristics of the sound itself. From the relationship between auditory scene and evaluation, the elements, features, and impressions of the auditory scene comprise items with positive, neutral, and negative images. The evaluation model of streetscapes in Namwon city also made it possible to grasp the characteristics of the auditory image of a place or space.

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.101-125 / 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, contextual anomalies, which require an understanding of the overall situation, are very difficult to detect. In general, anomalies in image data are detected using a model pre-trained on large datasets. However, since such pre-trained models are built around object classification, their applicability to anomaly detection that must understand complex situations created by various objects is limited. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning so as to understand not only individual objects but also the complicated situations they create. Specifically, the proposed methodology transfers knowledge from a pre-trained model that has learned object classification on ImageNet data to an image captioning model and uses captions that describe the situation represented by each image. Afterwards, the weights that capture situational characteristics learned from images and captions are extracted, and fine-tuning is performed to produce an anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results show that the proposed methodology is superior to an existing conventional pre-trained model in terms of anomaly detection accuracy and F1-score.
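
A rough sketch of the two-step transfer under stated assumptions: the visual encoder of a caption-trained model (step 1) is reused as the backbone of a binary normal/abnormal classifier and fine-tuned on the small situational dataset (step 2). `caption_trained_encoder` is a hypothetical stand-in for the paper's ImageNet-then-captioning pre-trained weights.

```python
# Sketch: reuse a caption-trained visual encoder as the backbone of an
# anomaly classifier, then fine-tune on situational images.
import torch
import torch.nn as nn

class AnomalyDetector(nn.Module):
    def __init__(self, caption_trained_encoder, feat_dim=512):
        super().__init__()
        self.encoder = caption_trained_encoder   # carries situational knowledge
        self.head = nn.Linear(feat_dim, 2)       # normal vs. abnormal

    def forward(self, images):
        return self.head(self.encoder(images))

def fine_tune(detector, loader, lr=1e-4, epochs=3):
    """Fine-tune end to end on the small situational-image dataset."""
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    detector.train()
    for _ in range(epochs):
        for images, labels in loader:
            loss = loss_fn(detector(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```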

Efficient Object Classification Scheme for Scanned Educational Book Image (교육용 도서 영상을 위한 효과적인 객체 자동 분류 기술)

  • Choi, Young-Ju; Kim, Ji-Hae; Lee, Young-Woon; Lee, Jong-Hyeok; Hong, Gwang-Soo; Kim, Byung-Gyu
    • Journal of Digital Contents Society / v.18 no.7 / pp.1323-1331 / 2017
  • Although copyright has grown into a large-scale business, persistent problems remain, especially in image copyright. In this study, we propose an automatic object extraction and classification system for scanned educational book images that combines document image processing with intelligent information technology such as deep learning. The proposed technique first removes noise components and then performs region separation based on a visual attention assessment. We then group the extracted block areas and categorize each block as a picture or a text area. Finally, the caption area is extracted by searching around each classified picture area. Performance evaluation shows an average accuracy of 83% in extracting picture and caption areas; for picture region detection alone, accuracy of up to 97% is verified.
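
As a loose illustration of the pipeline's steps (denoising, region separation, block classification, caption search around pictures), the OpenCV sketch below uses crude size/density heuristics where the paper uses visual-attention assessment and a learned classifier; all thresholds are assumptions.

```python
# Heuristic stand-in for the described pipeline: denoise, binarize, merge
# characters into blocks, classify blocks, then look for captions below
# picture blocks. Thresholds are illustrative assumptions.
import cv2

def extract_blocks(page_path):
    gray = cv2.imread(page_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 3)                      # noise removal
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate so nearby characters merge into block regions.
    merged = cv2.dilate(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5)))
    n, _, stats, _ = cv2.connectedComponentsWithStats(merged)
    pictures, texts = [], []
    for x, y, w, h, area in stats[1:]:                  # skip background label 0
        density = area / float(w * h)
        # Crude stand-in for the paper's classifier: large dense blocks
        # are treated as pictures, the rest as text.
        (pictures if (w * h > 40000 and density > 0.5) else texts).append((x, y, w, h))
    # A text block directly below a picture is taken as its caption.
    captions = [t for t in texts for (px, py, pw, ph) in pictures
                if abs(t[1] - (py + ph)) < 30 and abs(t[0] - px) < pw]
    return pictures, texts, captions
```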

A Study on Image Indexing Method based on Content (내용에 기반한 이미지 인덱싱 방법에 관한 연구)

  • Yu, Won-Gyeong; Jeong, Eul-Yun
    • The Transactions of the Korea Information Processing Society / v.2 no.6 / pp.903-917 / 1995
  • In most database systems, images have been indexed indirectly using related texts such as captions, annotations, and image attributes. But there has been an increasing requirement for image database systems that support the storage and retrieval of images directly by content, using the information contained in the images themselves. A few content-based indexing methods have been proposed. Among them, Petrakis proposed an image indexing method that considers the spatial relationships and properties of the objects forming the images, expanding on other studies based on 2-D strings. But this method needs too much storage space and lacks flexibility. In this paper, we propose a more flexible index structure based on a kd-tree using paging techniques. We show an example of extracting keys from the raw image using normalization. Simulation results show that our method improves flexibility and needs much less storage space.
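
A toy sketch of a kd-tree over normalized feature keys, in the spirit of the proposed index; paging, the key dimensionality, and the node layout are illustrative assumptions, not the paper's structure.

```python
# Toy kd-tree index over normalized image feature keys (paging omitted).
class KDNode:
    def __init__(self, key, image_id):
        self.key, self.image_id = key, image_id
        self.left = self.right = None

def insert(node, key, image_id, depth=0):
    if node is None:
        return KDNode(key, image_id)
    axis = depth % len(key)                      # cycle through key dimensions
    if key[axis] < node.key[axis]:
        node.left = insert(node.left, key, image_id, depth + 1)
    else:
        node.right = insert(node.right, key, image_id, depth + 1)
    return node

def search(node, key, depth=0):
    if node is None or node.key == key:
        return node
    axis = depth % len(key)
    child = node.left if key[axis] < node.key[axis] else node.right
    return search(child, key, depth + 1)

root = None
for key, img in [((0.2, 0.7), "img1"), ((0.8, 0.1), "img2"), ((0.5, 0.5), "img3")]:
    root = insert(root, key, img)               # keys are normalized to [0, 1]
print(search(root, (0.5, 0.5)).image_id)        # -> img3
```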
