• Title/Summary/Keyword: Caption

A Novel Image Captioning based Risk Assessment Model (이미지 캡셔닝 기반의 새로운 위험도 측정 모델)

  • Jeon, Min Seong;Ko, Jae Pil;Cheoi, Kyung Joo
    • The Journal of Information Systems
    • /
    • v.32 no.4
    • /
    • pp.119-136
    • /
    • 2023
  • Purpose: We introduce a surveillance system designed to overcome the limitations of conventional surveillance systems, which typically focus on object-centric behavior analysis. Design/methodology/approach: The study introduces an approach to risk assessment in surveillance that employs image captioning to generate descriptive captions encapsulating the interactions among objects, actions, and spatial elements within observed scenes. To support the methodology, we built a dedicated dataset of [image-caption-danger score] pairs for training. We fine-tuned the BLIP-2 model on this dataset and used BERT to interpret the semantic content of the generated captions when assessing risk levels. Findings: In experiments with our self-constructed datasets, we show that they provide rich information for risk assessment and that the model performs strongly on this task. Compared with models pre-trained on established datasets, our generated captions more thoroughly cover the object attributes, behaviors, and spatial context required by the surveillance system, and they adapt to novel sentence structures, ensuring versatility across a range of contexts.
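
The captioning-then-scoring pipeline described in this abstract (a fine-tuned BLIP-2 captioner whose output is interpreted by BERT to predict a danger score) could be sketched roughly as follows. This is a minimal illustration, not the authors' code: the checkpoint names, the linear regression head, and the use of the [CLS] vector are assumptions.

```python
# Hedged sketch of an image -> caption -> risk score pipeline.
# Checkpoint names and the regression head are illustrative assumptions.
import torch
from PIL import Image
from transformers import (Blip2Processor, Blip2ForConditionalGeneration,
                          BertTokenizer, BertModel)

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
risk_head = torch.nn.Linear(bert.config.hidden_size, 1)  # assumed head, trained on [image-caption-danger score] pairs

def risk_score(image_path: str) -> float:
    image = Image.open(image_path).convert("RGB")
    # 1) Generate a caption describing objects, actions and spatial context.
    inputs = processor(images=image, return_tensors="pt")
    caption_ids = captioner.generate(**inputs, max_new_tokens=30)
    caption = processor.batch_decode(caption_ids, skip_special_tokens=True)[0]
    # 2) Encode the caption with BERT and map its [CLS] vector to a danger score.
    enc = bert_tok(caption, return_tensors="pt")
    with torch.no_grad():
        cls = bert(**enc).last_hidden_state[:, 0, :]
        return risk_head(cls).item()
```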

Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion (Textual Inversion을 활용한 Adversarial Prompt 생성 기반 Text-to-Image 모델에 대한 멤버십 추론 공격)

  • Yoonju Oh;Sohee Park;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1111-1123
    • /
    • 2023
  • In recent years, as generative models have advanced, research on attacks that threaten them has also been actively conducted. We propose a new membership inference attack against text-to-image models. Existing membership inference attacks on text-to-image models generate a single image from the caption of a query image. In contrast, this paper obtains a personalized embedding of the query image through Textual Inversion and proposes a membership inference attack that effectively generates multiple images by constructing adversarial prompts. In addition, the membership inference attack is tested for the first time on the Stable Diffusion model, which is attracting attention among text-to-image models, and achieves an accuracy of up to 1.00.
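
The attack idea in this abstract (learn a personalized Textual Inversion embedding for the query image, generate several images from prompts containing it, and infer membership from how close the generations are to the query) might look roughly like the sketch below. The similarity metric, threshold, prompt, and embedding file are assumptions, not the paper's procedure.

```python
# Hedged sketch: membership inference via a textual-inversion token for the query image.
import numpy as np
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Assumes a textual-inversion embedding was already trained on the query image.
pipe.load_textual_inversion("query_image_embedding.bin", token="<query>")

def infer_membership(query_path: str, n_samples: int = 8, threshold: float = 0.9) -> bool:
    query = np.asarray(Image.open(query_path).convert("RGB").resize((512, 512)), dtype=np.float32)
    sims = []
    for _ in range(n_samples):
        gen = pipe(prompt="a photo of <query>").images[0].resize((512, 512))
        gen = np.asarray(gen, dtype=np.float32)
        # Pixel-space cosine similarity stands in for the paper's similarity measure.
        sims.append(float(query.ravel() @ gen.ravel() /
                          (np.linalg.norm(query) * np.linalg.norm(gen) + 1e-8)))
    # High average similarity suggests the query image was part of the training set.
    return float(np.mean(sims)) > threshold
```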

A Study on Image Indexing Method based on Content (내용에 기반한 이미지 인덱싱 방법에 관한 연구)

  • Yu, Won-Gyeong;Jeong, Eul-Yun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.6
    • /
    • pp.903-917
    • /
    • 1995
  • In most database systems, images have been indexed indirectly using related texts such as captions, annotations, and image attributes. However, there is an increasing requirement for image database systems that support the storage and retrieval of images directly by content, using the information contained in the images themselves. A few content-based indexing methods have been proposed. Among them, Pertains proposed an image indexing method that considers the spatial relationships and properties of the objects forming an image, an extension of earlier studies based on the 2-D string. However, this method requires too much storage space and lacks flexibility. In this paper, we propose a more flexible index structure based on a kd-tree using paging techniques. We show an example of extracting keys from the raw image using normalization. Simulation results show that our method improves flexibility and requires much less storage space.
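
The kd-tree indexing of normalized image keys described above can be illustrated with a small sketch; the paper's paging scheme and its key-extraction rules are not reproduced, and the normalization and key length below are assumptions.

```python
# Hedged sketch: index normalized image keys in a kd-tree and query by similarity.
import numpy as np
from scipy.spatial import KDTree

def normalize_key(raw_key: np.ndarray) -> np.ndarray:
    # Scale each component of the raw key into [0, 1] (illustrative normalization).
    lo, hi = raw_key.min(), raw_key.max()
    return (raw_key - lo) / (hi - lo + 1e-8)

# Stand-in for keys extracted from raw images; each image reduces to a short key vector.
raw_keys = np.random.rand(1000, 4) * 255.0
index = KDTree(np.vstack([normalize_key(k) for k in raw_keys]))

query_key = normalize_key(np.random.rand(4) * 255.0)
distances, ids = index.query(query_key, k=5)   # the five most similar indexed images
print(ids)
```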

Design of OpenScenario Structure for Content Creation Service Based on User Defined Story (사용자 정의 스토리 기반 콘텐츠 제작 서비스를 위한 오픈 시나리오 언어 구조 설계)

  • Lee, Hyejoo;Kwon, Ki-Ryong;Lee, Suk-Hwan;Park, Yun-Kyong;Moon, Kyong Deok
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.170-179
    • /
    • 2016
  • A story-based content creation service provides users with suitable content based on a story written by the user, in order to make use of the large amount of content accumulated on the Internet. For this service, the story has to be described in a computer-readable representation. In this paper, by analyzing the structure of a scenario, also known as a screenplay or script, we define a structure for story representation referred to as OpenScenario. The proposed method is intended to let users produce their own content from the massive amount of content available on the Internet. OpenScenario consists of two main parts: OSD (OpenScenario Descriptors), a set of descriptors that describe the various objects of shots, such as visual, aural, and textual objects, and OSS (OpenScenario Scripts), a set of scripts that add effects such as images, captions, transitions between shots, and background music. As a use case of the proposed method, we describe how to create new content using OpenScenario and discuss the technologies required to apply the proposed method effectively.
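
The two-part structure described above (OSD for the objects of each shot, OSS for effects applied to shots) could be modeled as in the following sketch; the field names and effect types are illustrative assumptions, not the paper's schema.

```python
# Hedged sketch of an OpenScenario-like structure: OSD descriptors plus OSS scripts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OSD:                      # OpenScenario Descriptor for one shot
    shot_id: str
    visual_objects: List[str] = field(default_factory=list)
    aural_objects: List[str] = field(default_factory=list)
    textual_objects: List[str] = field(default_factory=list)

@dataclass
class OSS:                      # OpenScenario Script: an effect applied to a shot
    target_shot: str
    effect: str                 # e.g. "image", "caption", "transition", "bgm"
    value: str = ""

@dataclass
class OpenScenario:
    descriptors: List[OSD] = field(default_factory=list)
    scripts: List[OSS] = field(default_factory=list)

scenario = OpenScenario(
    descriptors=[OSD("shot-1", visual_objects=["beach", "two people"],
                     textual_objects=["Once upon a time"])],
    scripts=[OSS("shot-1", effect="caption", value="Chapter 1"),
             OSS("shot-1", effect="bgm", value="calm_theme.mp3")],
)
```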

The Examination of Reliability of Lower Limb Joint Angles with Free Software ImageJ

  • Kim, Heung Youl
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.6
    • /
    • pp.583-595
    • /
    • 2015
  • Objective: The purpose of this study was to determine the reliability of lower limb joint angles computed with the software ImageJ during jumping movements. Background: Kinematics is the study of bodies in motion without regard to the forces or torques that may produce the motion. The most common method for collecting motion data uses an imaging and motion-capture system to record the 2D or 3D coordinates of markers attached to a moving object, followed by manual or automatic digitizing software. Passive optical motion capture systems (e.g. the Vicon system) have been regarded as the gold standard for collecting motion data. On the other hand, ImageJ is widely used as free software for image analysis and can collect the 2D coordinates of markers. Although much research has made use of the ImageJ software, little is known about its reliability. Method: Seven healthy female students participated as subjects in this study. Seventeen reflective markers were attached to the right and left lower limbs to measure two- and three-dimensional joint angular motions. Jump performance was recorded by a ten-camera Vicon system (250 Hz) and one digital video camera (240 Hz). The joint angles of the ankle and knee joints were calculated from the 2D (ImageJ) and 3D (Vicon-MX) motion data, respectively. Results: Pearson's correlation coefficients between the two methods were calculated and significance tests were conducted (α = 1%). Correlation coefficients between the two methods were over 0.98. Examination of validity using the Bland-Altman method showed no systematic error between Vicon-MX and ImageJ, and all data fell within the 95% limits of agreement. Conclusion: In this study, the correlation coefficients are generally high and the regression line is near the identity line. Therefore, motion analysis using ImageJ is considered a useful tool for evaluating human movements in various research areas. Application: This result can be utilized as a practical tool to analyze human performance in various fields.
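
The two analyses reported in the Results (Pearson correlation between the ImageJ and Vicon-MX angles, and Bland-Altman 95% limits of agreement) can be reproduced in outline as follows; the angle values here are placeholders, not the study's data.

```python
# Hedged sketch: Pearson correlation and Bland-Altman limits of agreement.
import numpy as np
from scipy.stats import pearsonr

imagej_angles = np.array([95.2, 110.4, 123.8, 101.5, 118.0])   # placeholder joint angles (deg)
vicon_angles  = np.array([96.0, 109.8, 124.5, 102.1, 117.2])

r, p = pearsonr(imagej_angles, vicon_angles)

diff = imagej_angles - vicon_angles
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)        # 95% limits of agreement: bias +/- 1.96 * SD of differences
print(f"r={r:.3f}, p={p:.4f}, bias={bias:.2f}, LoA=({bias - loa:.2f}, {bias + loa:.2f})")
```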

Improving Watching HDTV Environment by Analyzing Visual Perception of Character Graphics (문자그래픽 시각인지도 분석에 따른 HDTV시청환경 개선 연구)

  • Lee, Kook-Se;Moon, Nam-Mee
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.583-589
    • /
    • 2009
  • New HDTV technologies provide crystal-clear images and surround sound in order to deliver higher-quality broadcasting. They offer four times the resolution of conventional TV and handle 16:9 wide-screen images. Due to this technological shift, the role of character graphics has been re-evaluated: it used to be only a subsidiary means of delivering visual information, but it is now considered one of the essential elements capable of adding value to broadcasting programs. There is an urgent need to adapt its attributes, such as fonts, sizes, colors, and moving speeds, to the wider aspect ratio and higher image quality of HDTV. To meet this need, Delphi surveys were conducted twice with three groups of TV production staff, divided according to their roles in the broadcasting production process: art directors, CG designers, and the production and transmission team. Based on the results of these surveys, this article analyzes how the attributes of character graphics affect viewers' visual perception, and then suggests a new format designed for OSMU (One Source Multi Use), by which TV character graphics can be properly delivered to various media formats.

Efficient Content-Based Image Retrieval Method using Shape and Color feature (형태와 칼러성분을 이용한 효율적인 내용 기반의 이미지 검색 방법)

  • Youm, Sung-Ju;Kim, Woo-Saeng
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.4
    • /
    • pp.733-744
    • /
    • 1996
  • Content-based image retrieval (CBIR) is an image retrieval methodology that uses characteristic values generated automatically by the system, without any caption or text information. In this paper, we propose a content-based image retrieval method that uses the shape and color features of image data as characteristic values. We present the image processing techniques used for feature extraction, together with indexing techniques based on a trie and an R-tree for fast image retrieval. In our approach, query results are more reliable because both shape and color features are considered. We also show an image database implemented according to our approach, sample retrieval results selected by our system from 200 sample images, and an analysis of the results that considers the effect of the shape and color characteristic values.
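
Combining a color feature with a shape feature for retrieval, as described above, might look like the sketch below; the HSV histogram, Hu moments, and linear nearest-neighbour search are stand-ins for the paper's features and its trie/R-tree index.

```python
# Hedged sketch: shape + color features with a simple nearest-neighbour search.
import cv2
import numpy as np

def feature_vector(path: str) -> np.ndarray:
    img = cv2.imread(path)
    # Color feature: normalized 2D hue/saturation histogram.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).ravel()
    hist /= hist.sum() + 1e-8
    # Shape feature: Hu moments of the grayscale image.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()
    return np.concatenate([hist, hu])

def retrieve(query_path: str, database_paths: list, k: int = 5) -> list:
    q = feature_vector(query_path)
    scored = sorted((np.linalg.norm(q - feature_vector(p)), p) for p in database_paths)
    return [p for _, p in scored[:k]]
```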

The Development of Image Caption Generating Software for Auditory Disabled (청각장애인을 위한 동영상 이미지캡션 생성 소프트웨어 개발)

  • Lim, Kyung-Ho;Yoon, Joon-Sung
    • 한국HCI학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.1069-1074
    • /
    • 2007
  • When hearing-impaired users access video content such as films, broadcasts, and animation in a PC environment, partial barriers to content accessibility beyond visual reception arise depending on the degree of impairment. Content and technologies for improving information accessibility for the hearing impaired, such as sign-language animation and lip-reading education, have been developed to overcome these barriers, but they have had limitations. This paper therefore extracts the artistic expression methods of contemporary new-media art works as components and develops a technology for producing original content in which technology and sensibility are harmonized, thereby deriving a way to improve the accessibility of video content for the hearing impaired in a PC environment. Through the development of an interface that converts auditory effects into visual form and of image caption generating software, we present a methodology that maximizes the usability of video content for the hearing impaired. The paper presents a step-by-step methodology: first, analysis of the accessibility of video content for the hearing impaired; second, selective analysis of media art works and extraction of dynamic elements; third, production of the interface and the content. In the third step, the image caption generating software is developed and image caption content in the form of bitmap icons is generated. The developed software is an interface for converting everyday linguistic elements, grounded in usability, and auditory elements extracted from art works into visual elements. Technically, this development establishes an original interface-extraction environment that improves access to various web content for the hearing impaired and broadens its applications; it extends an engineering-bound technology area into the new area of content development technology through an interdisciplinary approach; and, by converting text and audio into images and visual effects, it suggests multifaceted cross-media utilization and stimulates technologies for giving form to content. Furthermore, going beyond the limited domain of improving accessibility for the hearing impaired, it can develop into a new methodology suited to the global era through the attempt at, access to, and production of diverse supplementary video content that transcends language barriers between countries.

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.559-571
    • /
    • 2013
  • This paper presents several problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve these problems, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the video with holes, which are then processed separately. This paper focuses only on detecting and extracting the overlay text graphic region, the first step of the proposed scenario. To decide whether overlay text is present in a frame, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid method that combines the color and motion information of the region. The experiments show the results of the overlay text region detection and extraction process on video sequences of several genres.
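
The frame-level decision step described above (thresholding a corner density map built from the Harris corner detector) can be sketched as follows; the block size and thresholds are assumptions, and the subsequent color/motion extraction step is not shown.

```python
# Hedged sketch: decide whether a frame contains overlay text from Harris corner density.
import cv2
import numpy as np

def has_overlay_text(frame_bgr: np.ndarray, block: int = 32,
                     corner_thresh: float = 0.01, density_thresh: float = 0.05) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = harris > corner_thresh * harris.max()   # binary corner map
    h, w = corners.shape
    # Overlay text tends to produce dense local clusters of corners.
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            if corners[y:y + block, x:x + block].mean() > density_thresh:
                return True
    return False
```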

An Improved Method for Detecting Caption in image using DCT-coefficient and Transition-map Analysis (DCT계수와 천이지도 분석을 이용한 개선된 영상 내 자막영역 검출방법)

  • An, Kwon-Jae;Joo, Sung-Il;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.4
    • /
    • pp.61-71
    • /
    • 2011
  • In this paper, we propose a method for detecting text regions in images using DCT-coefficient and transition-map analysis. The detection rate of the traditional DCT-coefficient-based method is high, but so is its false positive rate, and the transition-map-based method often rejects true text regions in the verification step because of its strict threshold. To overcome these problems, we generate a PTRmap (Promising Text Region map) through DCT-coefficient analysis and apply it to the transition-map-based text region detection method. As a result, the false positive rate decreases compared with the DCT-coefficient-based method, and the detection rate increases compared with the transition-map-based method.
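
The PTRmap idea described above (marking 8x8 blocks whose DCT coefficients show strong high-frequency energy as promising text regions) can be sketched as follows; the energy measure and threshold are illustrative assumptions, and the transition-map verification step is not reproduced.

```python
# Hedged sketch: build a promising-text-region map from block DCT coefficients.
import cv2
import numpy as np

def promising_text_region_map(gray: np.ndarray, block: int = 8,
                              energy_thresh: float = 500.0) -> np.ndarray:
    h, w = gray.shape
    ptr_map = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            patch = gray[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(np.float32)
            coeffs = cv2.dct(patch)
            ac_energy = np.abs(coeffs).sum() - abs(coeffs[0, 0])   # drop the DC term
            if ac_energy > energy_thresh:
                ptr_map[by, bx] = 1   # block is a promising text region
    return ptr_map
```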