• Title/Summary/Keyword: 자막 위치 (caption position)

Search Results: 30

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE: Software and Applications / v.29 no.4 / pp.235-247 / 2002
  • For efficient indexing and retrieval of digital video data, research on video caption extraction and recognition is required. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the beginning and ending frames of each caption and combine the multiple frames in each group with a logical operation to remove background noise. During this process, an evaluation step detects integration results that mix different caption contents. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. These operations improve the image quality even of phonemes with complex strokes. Locating the beginning and ending frames of each caption can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images containing both Hangul and English characters from cinema and obtained improved character recognition results.
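
The multi-frame integration step described above (combining every frame that shows the same caption with a logical operation so that moving background pixels cancel out) can be illustrated with a short sketch. This is not the authors' implementation: the frame format, threshold values, and kernel size are assumptions for illustration only.

```python
import cv2
import numpy as np

def integrate_caption_frames(frames, thresh=180):
    """Combine frames showing the same caption with a logical AND.

    Caption pixels stay bright in every frame, while moving background
    pixels fail the threshold in at least one frame and are removed.
    `thresh` is an illustrative value, not the paper's.
    """
    masks = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        masks.append(mask)
    combined = masks[0]
    for mask in masks[1:]:
        combined = cv2.bitwise_and(combined, mask)
    return combined

def enhance_caption_image(binary):
    """Rough stand-ins for the enhancement stages named in the abstract."""
    # Resolution enhancement: upscale 2x (cubic interpolation as a placeholder).
    upscaled = cv2.resize(binary, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    # Re-binarization after interpolation (stand-in for stroke-based binarization).
    _, rebin = cv2.threshold(upscaled, 127, 255, cv2.THRESH_BINARY)
    # Morphological smoothing: closing then opening with a small kernel.
    kernel = np.ones((3, 3), np.uint8)
    smoothed = cv2.morphologyEx(rebin, cv2.MORPH_CLOSE, kernel)
    return cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, kernel)
```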

Development of Video Caption Editor with Kinetic Typography (글자가 움직이는 동영상 자막 편집 어플리케이션 개발)

  • Ha, Yea-Young;Kim, So-Yeon;Park, In-Sun;Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.17 no.3 / pp.385-392 / 2014
  • In this paper, we developed an Android application named VIVID with which users can easily edit moving captions on smartphone videos. It makes it convenient to set the time range, text, location, and motion of caption text on a video. The editing result is uploaded to a web server as HTML and can be shared with other users.
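
As a rough illustration of the kind of per-caption data such an editor has to manage, here is a minimal sketch. The field names and the JSON export format are assumptions for illustration, not VIVID's actual schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class KineticCaption:
    text: str
    start_sec: float      # time range on the video
    end_sec: float
    x: float              # normalized position on the frame (0..1)
    y: float
    motion: str           # e.g. "slide_up", "fade_in" (illustrative names)

def export_captions(captions):
    """Serialize captions for upload to a sharing server (hypothetical format)."""
    return json.dumps([asdict(c) for c in captions], ensure_ascii=False, indent=2)

if __name__ == "__main__":
    caps = [KineticCaption("안녕하세요", 1.0, 3.5, 0.5, 0.85, "slide_up")]
    print(export_captions(caps))
```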

Automatic sentence segmentation of subtitles generated by STT (STT로 생성된 자막의 자동 문장 분할)

  • Kim, Ki-Hyun;Kim, Hong-Ki;Oh, Byoung-Doo;Kim, Yu-Seop
    • Annual Conference on Human and Language Technology / 2018.10a / pp.559-560 / 2018
  • Long Short-Term Memory (LSTM), a recurrent neural network (RNN) variant, performs well on natural language processing tasks. Services that generate subtitles with Speech-to-Text (STT) and simultaneously translate them into other languages are being actively developed. When subtitles are extracted with STT, however, the output is one continuous stream of text without periods, which makes accurate translation impossible. This paper proposes a method that splits the text into sentences by inserting periods in order to improve the accuracy of automatic translation of English subtitles. Training an LSTM on the data and testing it, period positions were predicted with 62.3% accuracy.
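
A period-insertion model of this kind can be framed as per-token tagging with a bidirectional LSTM. The sketch below is not the authors' model: the vocabulary size, layer sizes, tagging scheme, and dummy data are placeholder assumptions.

```python
import torch
import torch.nn as nn

class PeriodTagger(nn.Module):
    """Bi-LSTM that labels each token: 1 = a period follows it, 0 = it does not."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # two classes per token

    def forward(self, token_ids):
        x = self.embed(token_ids)             # (batch, seq, emb_dim)
        h, _ = self.lstm(x)                   # (batch, seq, 2*hidden)
        return self.out(h)                    # (batch, seq, 2) logits

# One training step on dummy data, just to show the shapes involved.
model = PeriodTagger()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(1, 10000, (8, 40))     # batch of 8 sequences, 40 tokens each
labels = torch.randint(0, 2, (8, 40))         # 1 where a period should be inserted

optimizer.zero_grad()
logits = model(tokens)
loss = criterion(logits.view(-1, 2), labels.view(-1))
loss.backward()
optimizer.step()
```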

Caption Position Retrieval System for Sports Video (스포츠 비디오를 위한 자막 위치검색 시스템)

  • 임정훈;곽순영;국나영;이지현;이양원
    • Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.628-630 / 2002
  • Highlight construction used to be done manually. Research continues to automate this process, and many papers on the topic are being published. This paper proposes a method that performs Shannon upsampling to enhance low-resolution video, finds a suitable threshold to produce a binary image as preprocessing, and then locates the caption position by combining horizontal/vertical histogram projection with multi-frame integration. Compared with existing edge-based methods, it is simpler and relatively fast.
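
The projection-histogram idea can be sketched as follows; the Shannon upsampling and automatic threshold selection steps are omitted, and the threshold and row-density ratio are illustrative assumptions rather than the paper's values.

```python
import cv2
import numpy as np

def locate_caption_band(frame_bgr, thresh=180, row_ratio=0.20):
    """Find a likely caption band with horizontal/vertical projection histograms.

    Simplified sketch: binarize, count white pixels per row to find dense
    horizontal bands, then use the column histogram inside that band to get
    the left/right extent.  Threshold and ratio are illustrative values.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    row_hist = (binary > 0).sum(axis=1)                   # horizontal projection
    rows = np.where(row_hist > row_ratio * binary.shape[1])[0]
    if rows.size == 0:
        return None
    top, bottom = rows.min(), rows.max()

    col_hist = (binary[top:bottom + 1] > 0).sum(axis=0)   # vertical projection
    cols = np.where(col_hist > 0)[0]
    left, right = cols.min(), cols.max()
    return left, top, right, bottom                       # caption bounding box
```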

A Study on Efficient Positioning of Subtitles in 360 VR (360 VR 영상에서 효율적인 자막 위치 선정에 관한 연구)

  • Kim, Hyeong-Gyun
    • Journal of Digital Convergence / v.18 no.6 / pp.93-98 / 2020
  • In this paper, we propose a technique in which subtitles follow changes in the user's viewpoint in 360 VR. A Sphere object is created in a Unity Scene and a 360-degree image is mapped onto its surface. The ReverseNormals script is used to flip the view to the inside of the sphere, and the SightOrbitproved script is used to control the camera view, setting up an environment in which subtitles can move with the viewpoint. Next, the desired 3D text (subtitle) is added as a child layer of the main camera and the 360 VR object is built. The 3D text subtitles implemented in this study were compared as the user's viewpoint changed. As a result, ordinary subtitles drift out of the line of sight when the viewpoint changes, whereas the 3D text subtitles move with the user's viewpoint, so the user can always see the subtitles.
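
The effect of parenting the 3D text under the main camera can be described with simple vector math. The sketch below uses plain Python/NumPy rather than Unity C#, and the viewing distance and vertical offset are assumed values; it only shows how a subtitle anchored to the camera's forward direction stays in view as the viewpoint changes.

```python
import numpy as np

def yaw_pitch_to_forward(yaw_deg, pitch_deg):
    """Unit forward vector for a camera given yaw (around y) and pitch (around x)."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
        np.cos(pitch) * np.cos(yaw),
    ])

def subtitle_position(cam_pos, yaw_deg, pitch_deg, distance=2.0, drop=0.4):
    """Place the subtitle `distance` ahead of the camera, slightly below center.

    Because the position is recomputed from the current view direction, the
    text follows the viewpoint -- the effect achieved in the Unity setup by
    making the 3D text a child of the main camera.
    """
    forward = yaw_pitch_to_forward(yaw_deg, pitch_deg)
    return np.asarray(cam_pos) + distance * forward + np.array([0.0, -drop, 0.0])

# The subtitle tracks the view as the user looks around.
for yaw in (0, 90, 180):
    print(yaw, subtitle_position([0.0, 1.6, 0.0], yaw, 0).round(2))
```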

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to give viewers better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult because of the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region update between frames is also exploited to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
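
A rough sketch of the four-step pipeline using OpenCV follows. The Harris parameters, density window, thresholds, and box-shape filter are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def detect_text_regions(frame_bgr, density_win=15, density_thresh=0.05):
    """Sketch of the four steps: corner map, corner-density candidates,
    labeling, and simple post-processing.  Parameter values are illustrative."""
    gray = np.float32(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))

    # 1) Corner map with the Harris detector.
    harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = (harris > 0.01 * harris.max()).astype(np.uint8)

    # 2) Candidate pixels where the local corner density is high
    #    (text areas produce dense clusters of corners).
    density = cv2.boxFilter(corners.astype(np.float32), -1, (density_win, density_win))
    candidates = (density > density_thresh).astype(np.uint8) * 255

    # 3) Labeling: group candidate pixels into connected regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)

    # 4) Post-processing: keep wide, not-too-tall boxes typical of caption text.
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if w > 3 * h and area > 200:
            boxes.append((x, y, w, h))
    return boxes
```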

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.544-553 / 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers better visual understanding. Since the content of the scene or the editor's intention can be well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information. However, existing methods have difficulty handling text with various contrasts or text inserted into a complex background. In this paper, we propose a novel framework to localize overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated. Candidate regions are then extracted using the transition map, and overlay text is finally determined based on the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of the overlay text, and it is language independent. Text region update between frames is also exploited to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
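
As a much-simplified stand-in for the transition-map idea (not the paper's formulation), one can mark pixels where the intensity both rises and falls sharply within a short horizontal window, which is what happens at the thin transient border between overlay text and its background, and then keep regions where such pixels are dense. Window size and thresholds below are assumptions.

```python
import cv2
import numpy as np

def simple_transition_map(frame_bgr, diff_thresh=40, win=3):
    """Very simplified transition-map stand-in: a pixel is marked when a strong
    intensity rise and a strong fall both occur within `win` pixels horizontally,
    as at the border between overlay text and its background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    dx = np.diff(gray, axis=1)                      # horizontal intensity change
    rise = (dx > diff_thresh).astype(np.uint8)
    fall = (dx < -diff_thresh).astype(np.uint8)

    kernel = np.ones((1, win), np.uint8)
    rise_near = cv2.dilate(rise, kernel)            # spread marks over the window
    fall_near = cv2.dilate(fall, kernel)
    transition = cv2.bitwise_and(rise_near, fall_near) * 255

    # Candidate text regions: places where transition pixels are dense.
    density = cv2.boxFilter(transition.astype(np.float32) / 255.0, -1, (21, 21))
    candidates = (density > 0.1).astype(np.uint8) * 255
    return transition, candidates
```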

Permeation and diffusion of gases through polytetrafluoroethylene membrane (Polytetrafluoroethylene막을 통한 기체의 투과 및 확산)

  • 김형민;김남인;이우태
    • Proceedings of the Membrane Society of Korea Conference / 1994.10a / pp.34-35 / 1994
  • Membrane-based separation has attracted interest for the separation and purification of gas mixtures, from the standpoint of energy saving and the development of new functional polymers. Oxygen enrichment from air, removal of radioactive xenon and krypton, hydrogen separation from smelter waste gases, and helium recovery from natural gas have already been put to practical industrial use. However, polymer membranes generally show a trade-off between permeability and selectivity, so diverse research is needed to develop functional polymer membranes with both good permeability and good separation performance. The PTFE (polytetrafluoroethylene) used in this study is a crystalline polymer that occupies a unique position among industrial polymer materials because of its low friction coefficient over a wide temperature range, excellent electrical insulation, high thermal stability arising from the strong carbon-fluorine bond, and chemical inertness. Zinc-air batteries, recently commercialized mainly in the United States and Japan, are attracting attention as a replacement for mercury batteries because of the excellent hydrophobicity and chemical resistance of PTFE membranes; since their performance degrades during long-term discharge, reducing the oxygen permeation through the membrane to the minimum value required for discharge has become an important challenge.
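
For context, the standard solution-diffusion description of gas permeation through a dense polymer film relates the steady-state flux to the pressure difference and membrane thickness; these are textbook relations, not necessarily the analysis used in this work.

```latex
% Steady-state flux J through a membrane of thickness l under pressure difference \Delta p:
J = \frac{P \, \Delta p}{l}, \qquad P = D \cdot S
% P: permeability, D: diffusion coefficient, S: solubility coefficient.
% Time-lag method, commonly used to obtain D from the permeation time lag \theta:
D = \frac{l^{2}}{6\,\theta}
```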

Effects of Caption-Utilized English Classes on Primary School Students' Character Recognition and Vocabulary Ability (자막을 활용한 영어수업이 초등학생의 문자인지 능력과 어휘력에 미치는 효과)

  • So, Suk;Lee, Je-Young;Hwang, Chee-Bok
    • The Journal of the Korea Contents Association / v.18 no.7 / pp.423-431 / 2018
  • The purpose of the present study was to investigate the effect of caption-embedded video on the character recognition and vocabulary ability of primary school students. The subjects were students at two elementary schools in G city, Jeonbuk province. They were divided into two groups: a control group that used video materials without captions and an experimental group that used video materials with captions. Each group was tested over the course of two months (10 classes). A statistical analysis was then conducted to determine the effects of captions on character recognition and vocabulary ability, using independent-samples and paired-samples t-tests. There were no significant differences between the groups, but significant differences were found within the groups. Pedagogical implications based on the findings and suggestions for further research are also discussed.
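
For reference, the two analyses named in the abstract (independent-samples and paired-samples t-tests) look like this in code; the score arrays are made-up placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Made-up placeholder scores, not the study's data.
control_post = np.array([62, 58, 71, 66, 60, 75, 68, 64])
experimental_post = np.array([70, 65, 74, 72, 69, 78, 73, 71])
experimental_pre = np.array([61, 57, 66, 63, 60, 70, 65, 62])

# Between-group comparison (control vs. experimental post-test scores).
t_between, p_between = stats.ttest_ind(experimental_post, control_post)

# Within-group comparison (experimental group, pre- vs. post-test).
t_within, p_within = stats.ttest_rel(experimental_post, experimental_pre)

print(f"independent-samples t = {t_between:.2f}, p = {p_between:.3f}")
print(f"paired-samples t = {t_within:.2f}, p = {p_within:.3f}")
```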

Surface Analysis of Self-Assembled Monolayers Using a Multi-Channel Surface Plasmon Resonance Imaging System (다채널 표면 플라즈몬 공명 영상장치를 이용한 자기조립 단분자막의 표면 분석)

  • Pyo, Hyeon-Bong;Sin, Yong-Beom;Yun, Hyeon-Cheol
    • Proceedings of the Korean Society for Biotechnology and Bioengineering Conference (한국생물공학회 학술대회논문집) / 2003.04a / pp.74-78 / 2003
  • Multi-channel images of 11-MUA and 11-MUOH self-assembled monolayers were obtained using two-dimensional surface plasmon resonance (SPR) absorption. The patterning process was simplified by exploiting direct photo-oxidation of the thiol bond (photolysis) instead of conventional photolithography. Sharper images were resolved by using a white light source combined with a narrow bandpass filter in the visible region, minimizing diffraction patterns in the images. Line-profile calibration of the image contrast caused by the different resonance conditions at each point on the sensor surface (at a fixed incident angle) enables us to discriminate the monolayer thickness at the sub-nanometer scale. Furthermore, there is no signal degradation such as photobleaching or quenching, which are common in fluorescence-based detection methods.
