• Title/Summary/Keyword: Video Sequence


Efficient Representation and Matching of Object Movement using Shape Sequence Descriptor (모양 시퀀스 기술자를 이용한 효과적인 동작 표현 및 검색 방법)

  • Choi, Min-Seok
    • The KIPS Transactions:PartB / v.15B no.5 / pp.391-396 / 2008
  • The motion of an object in a video clip often plays an important role in characterizing the content of the clip. A number of methods have been developed to analyze and retrieve video content using motion information, but most of them focus on the direction or trajectory of the motion rather than on the movement of the object itself. In this paper, we propose the shape sequence descriptor to describe and compare movements based on the shape deformation caused by object motion over time. Movement information is first represented as a sequence of 2D object shapes extracted from the input image sequence, and each 2D shape is then converted into a 1D shape feature using a shape descriptor. The shape sequence descriptor is obtained from the resulting sequence of shape descriptors by applying a frequency transform along the time axis. Our experimental results show that the proposed method is simple and effective for describing object movement and is applicable to semantic applications such as content-based video retrieval and human movement recognition.
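The pipeline this abstract describes (per-frame 2D shape → 1D shape feature → frequency transform over time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the radial-distance histogram stands in for whatever MPEG-7-style shape descriptor the paper actually uses, and the bin and coefficient counts are assumptions.

```python
import numpy as np

def shape_descriptor(mask: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Reduce a 2D binary object mask to a 1D shape feature.
    Illustrative choice: histogram of contour/region point distances
    from the centroid (a stand-in for the paper's shape descriptor)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    hist, _ = np.histogram(r, bins=n_bins, range=(0, r.max() + 1e-6))
    return hist / max(hist.sum(), 1)

def shape_sequence_descriptor(masks, n_coeffs: int = 8) -> np.ndarray:
    """Stack per-frame 1D shape features and apply a frequency
    transform along the time axis, keeping low-frequency terms."""
    feats = np.stack([shape_descriptor(m) for m in masks])  # (T, n_bins)
    spectrum = np.abs(np.fft.rfft(feats, axis=0))           # transform over time
    return spectrum[:n_coeffs].ravel()                      # compact descriptor

def movement_distance(d1: np.ndarray, d2: np.ndarray) -> float:
    """Compare two movements via their shape sequence descriptors."""
    return float(np.linalg.norm(d1 - d2))
```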

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.83-98 / 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, retrieving the desired information takes a long time because the web presents too much irrelevant information, so this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas, or hotspots to interact with users: when users click an object on the interactive video, they can instantly see additional information related to the video. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time at which it is displayed on the video; and (3) set an interactive action, such as a link to pages or a hyperlink. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time in step (2). wireWAX reduces the time needed to set an object's location and display time because it uses a vision-based annotation method, but users must wait while objects are detected and tracked. It is therefore necessary to reduce the time spent in step (2) by effectively combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas. The method consists of two steps: pre-processing and annotation. Pre-processing is necessary because the system detects shots so that users can easily find content within the video. The pre-processing step is as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster shots by similarity and align them as shot sequences; and 3) detect and track faces in every shot of a shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then selects a keyframe of a shot within it; 2) the annotator places objects relative to the position of the actor's face on the selected keyframe, and the same objects are then annotated automatically through the end of the shot sequence wherever a face area has been detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after object annotation, namely wrongly aligned shots, wrongly detected faces, and inaccurate locations. Users can also interpolate the positions of objects deleted by the feedback. After feedback, the annotated object data can be saved to the interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method with the presented models. The experiments analyze object annotation time and include a user evaluation. The average object annotation time shows that the proposed tool is two times faster than existing authoring tools, although it occasionally took longer when wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems. Nineteen recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for system evaluation. The user evaluation showed that the proposed tool scored about 10% higher for authoring interactive video than the other interactive video authoring systems.
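Pre-processing step 1 relies on color-histogram-based shot boundary detection, which can be sketched roughly as follows with OpenCV. The histogram binning and the cut threshold are assumptions made for illustration; the paper does not specify them.

```python
import cv2
import numpy as np

def detect_shot_boundaries(video_path: str, threshold: float = 0.4):
    """Color-histogram-based shot boundary detection: a frame whose
    histogram differs strongly from the previous frame starts a new shot."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1 means similar frames; a sharp drop marks a cut.
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < 1.0 - threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```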

Frame Rearrangement Method by Time Information Remarked on Recovered Image (복원된 영상에 표기된 시간 정보에 의한 프레임 재정렬 기법)

  • Kim, Yong Jin;Lee, Jung Hwan;Byun, Jun Seok;Park, Nam In
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1641-1652 / 2021
  • In analyzing a crime scene, digital evidence such as CCTV and black box footage plays a very important role. Such digital evidence is often damaged due to device defects or intentional deletion. In this case, the deleted video can be restored by well-known techniques such as frame-based recovery. In particular, when the storage is almost full, data such as video are generally fragmented when saved, and if a fragmented video is recovered in units of images, the sequence of the recovered images may not be continuous. In this paper, we propose a new video restoration method that matches the sequence of recovered images. First, the images are recovered through a frame-based recovery technique. Then, the time information marked on the images is extracted and recognized via optical character recognition (OCR). Finally, the recovered images are rearranged based on the time information obtained by OCR. For performance evaluation, we measure the recovery rate of the proposed video restoration method. The results show that the recovery rate for fragmented videos ranged from a minimum of about 47% to a maximum of 98%.
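The core of the rearrangement step, reading the overlaid timestamp with OCR and sorting by it, can be sketched as follows. The overlay position (ROI), the timestamp format, and the use of Tesseract are all assumptions for illustration; real CCTV overlays vary.

```python
import re
from datetime import datetime
import cv2
import pytesseract

TIME_PATTERN = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def read_timestamp(image_path: str, roi=(0, 0, 400, 40)):
    """OCR the on-screen time overlay of one recovered frame.
    ROI and timestamp format are assumed; adjust for the actual footage."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = roi
    crop = img[y:y + h, x:x + w]
    _, crop = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(crop)
    m = TIME_PATTERN.search(text)
    return datetime.strptime(m.group(), "%Y-%m-%d %H:%M:%S") if m else None

def rearrange_frames(image_paths):
    """Order recovered frames by their OCR'd timestamps, dropping
    frames whose timestamp could not be read."""
    stamped = [(read_timestamp(p), p) for p in image_paths]
    return [p for t, p in sorted((s for s in stamped if s[0]),
                                 key=lambda s: s[0])]
```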

ViStoryNet: Neural Networks with Successive Event Order Embedding and BiLSTMs for Video Story Regeneration (ViStoryNet: 비디오 스토리 재현을 위한 연속 이벤트 임베딩 및 BiLSTM 기반 신경망)

  • Heo, Min-Oh;Kim, Kyung-Min;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.24 no.3 / pp.138-144 / 2018
  • A video is a vivid medium similar to human visual-linguistic experience, since it can contain a sequence of situations, actions, or dialogues that can be told as a story. In this study, we propose story learning/regeneration frameworks for videos with successive event order supervision for contextual coherence. The supervision induces each episode to take the form of a trajectory in the latent space, which constitutes a composite representation of ordering and semantics. We used kids' videos as training data; their advantages include an omnibus style, a short storyline with a simple and explicit plot, a chronological narrative order, and a relatively limited number of characters and spatial environments. We build an encoder-decoder structure with successive event order embedding (SEOE) and train bidirectional LSTMs as sequence models for multi-step sequence prediction. Using a series of approximately 200 episodes of the kids' video 'Pororo the Little Penguin', we give empirical results for story regeneration tasks and SEOE. In addition, each episode shows a trajectory-like shape in the latent space of the model, which provides geometric information for the sequence models.
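The encoder-BiLSTM-decoder structure with order supervision can be sketched in PyTorch as below. All dimensions are illustrative assumptions, and the loss shown is only one plausible reading of the successive-event-order supervision (pull the prediction at step t toward the embedding at step t+1, so episodes trace trajectories in latent space); the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

class ViStoryNetSketch(nn.Module):
    """Minimal sketch: a linear event embedding, a BiLSTM sequence
    model, and a decoder that predicts the embedding of the following
    event (multi-step prediction would chain this)."""

    def __init__(self, feat_dim=512, emb_dim=128, hidden=256):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, emb_dim)        # event embedding
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.decoder = nn.Linear(2 * hidden, emb_dim)      # predict next event

    def forward(self, scene_feats):                        # (B, T, feat_dim)
        z = self.encoder(scene_feats)                      # (B, T, emb_dim)
        h, _ = self.bilstm(z)                              # (B, T, 2*hidden)
        return self.decoder(h), z                          # predictions, targets

# Successive-event-order supervision (assumed form): prediction at step t
# should match the embedding at step t+1.
model = ViStoryNetSketch()
feats = torch.randn(4, 20, 512)                            # 4 episodes, 20 events
pred, z = model(feats)
loss = nn.functional.mse_loss(pred[:, :-1], z[:, 1:].detach())
loss.backward()
```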

Temporal Color Rolling Suppression Algorithm Considering Time-varying Illuminant (조도 변화를 고려한 동영상 색 유동성 저감 알고리즘)

  • Oh, Hyun-Mook;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.55-62 / 2011
  • In this paper, a temporal color and luminance variation suppression algorithm for digital video sequences is proposed that takes a time-varying light source into account. When a video sequence is sampled under a periodically emitting illuminant with a short exposure time, the color rolling phenomenon occurs, in which the color and luminance of the image change periodically from field to field. With conventional signal processing techniques, the luminance variation remaining in the resulting video sequence degrades the constancy of the image sequence. In the proposed method, we obtain video sequences with constant luminance and color by compensating for the inter-field luminance variation. Based on a motion detection technique, the amount of luminance variation for each channel is estimated on the background of the sequence, free of the effects of moving objects. The experimental results clearly show that our strategy efficiently estimates the illuminant change without being affected by moving objects and that the variations are effectively reduced.
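The background-based compensation idea can be sketched as follows: estimate a per-channel gain on static pixels only (a crude motion detector) and normalize each frame toward a reference. The motion threshold and the fixed first-frame reference are assumptions; the paper's estimation model is more elaborate.

```python
import numpy as np

def suppress_color_rolling(frames, motion_thresh=12.0):
    """Estimate per-channel gains on static background pixels and
    normalize each frame toward the first frame, compensating the
    inter-field color/luminance variation."""
    ref = frames[0].astype(np.float64)
    out, prev = [frames[0]], ref
    for f in frames[1:]:
        cur = f.astype(np.float64)
        # Motion detection: keep only pixels that barely changed.
        still = np.abs(cur - prev).mean(axis=2) < motion_thresh
        if not still.any():
            still = np.ones(still.shape, dtype=bool)  # fall back: whole frame
        gains = np.array([ref[..., c][still].mean() /
                          max(cur[..., c][still].mean(), 1e-6)
                          for c in range(3)])
        out.append(np.clip(cur * gains, 0, 255).astype(np.uint8))
        prev = cur
    return out
```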

Pattern Similarity Retrieval of Data Sequences for Video Retrieval System (비디오 검색 시스템을 위한 데이터 시퀀스 패턴 유사성 검색)

  • Lee Seok-Lyong
    • The KIPS Transactions:PartD / v.13D no.3 s.106 / pp.347-356 / 2006
  • A video stream can be represented by a sequence of data points in a multidimensional space. In this paper, we introduce a trend vector that approximates the values of data points in a sequence and represents the moving trend of points in the sequence, and we present a pattern similarity matching method for data sequences using the trend vector. A sequence is partitioned into multiple segments, each of which is represented by a trend vector. Query processing is based on the comparison of these vectors instead of scanning the data elements of entire sequences. Using the trend vector, our method filters out irrelevant sequences from a database and finds sequences similar to a query. We performed extensive experiments on synthetic sequences as well as video streams. The experimental results show that, compared with an existing method, the precision of our method is up to 2.1 times higher and its processing time is reduced by up to 45%.
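One plausible reading of the trend vector can be sketched like this: summarize each segment by its mean point plus its average per-step displacement, and compare sequences segment-wise as a cheap filtering step. The exact composition of the paper's trend vector is not given in the abstract, so this form is an assumption.

```python
import numpy as np

def trend_vectors(seq: np.ndarray, seg_len: int):
    """Summarize each fixed-length segment of a multidimensional
    sequence by [mean point, average per-step displacement].
    Requires seg_len >= 2."""
    vecs = []
    for s in range(0, len(seq) - seg_len + 1, seg_len):
        seg = seq[s:s + seg_len]
        mean = seg.mean(axis=0)
        trend = (seg[-1] - seg[0]) / (seg_len - 1)   # moving trend of the segment
        vecs.append(np.concatenate([mean, trend]))
    return np.stack(vecs)

def sequence_distance(a: np.ndarray, b: np.ndarray, seg_len: int = 16) -> float:
    """Compare two sequences via their trend vectors instead of
    scanning every element; usable to filter out irrelevant sequences."""
    va, vb = trend_vectors(a, seg_len), trend_vectors(b, seg_len)
    n = min(len(va), len(vb))
    return float(np.linalg.norm(va[:n] - vb[:n], axis=1).mean())
```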

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
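The anti-flicker update described at the end of the abstract (use the difference image to carry depth forward where the scene is static) can be sketched as follows. The depth-gradient model, the blending weight, and the difference threshold are all assumptions made for illustration.

```python
import numpy as np

def depth_gradient_model(h: int, w: int) -> np.ndarray:
    """Hypothesized initial depth: near at the bottom, far at the top."""
    return np.tile(np.linspace(255.0, 0.0, h)[:, None], (1, w))

def update_depth(init_depth, prev_depth, cur_gray, prev_gray,
                 diff_thresh=10, keep=0.9):
    """Temporal refinement sketch: pixels whose inter-frame difference
    is small keep most of the previous depth, suppressing frame-to-frame
    depth flicker; changed pixels take the current initial depth."""
    static = np.abs(cur_gray.astype(int) - prev_gray.astype(int)) < diff_thresh
    out = init_depth.astype(np.float64).copy()
    out[static] = keep * prev_depth[static] + (1 - keep) * init_depth[static]
    return out
```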

Fractal Depth Map Sequence Coding Algorithm with Motion-vector-field-based Motion Estimation

  • Zhu, Shiping;Zhao, Dongyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.242-259 / 2015
  • Three-dimensional video coding is one of the main challenges restricting the widespread application of 3D video and free viewpoint video. In this paper, a novel fractal coding algorithm with motion-vector-field-based motion estimation for depth map sequences is proposed. We first add a pre-search restriction that rules improper domain blocks out of the matching search process, so that the number of blocks involved in the search is reduced. Several improvements to motion estimation, including initial search point prediction, a threshold transition condition, and an early termination condition, are made based on the characteristics of fractal coding. A motion-vector-field-based adaptive hexagon search algorithm, built on the center-biased distribution of depth motion vectors, is proposed to accelerate the search. Experimental results show that the proposed algorithm reaches optimal quality while saving coding time. The PSNR of the synthesized view is increased by 0.56 dB with a 36.97% bit rate decrease on average compared with H.264 Full Search, and the depth encoding time is reduced by up to 66.47%. Moreover, the proposed fractal depth map sequence codec outperforms recent alternative codecs that improve on H.264/AVC, especially in bitrate saving and encoding time reduction.
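A center-biased hexagon search of the kind the abstract mentions can be sketched as below. This shows only the generic search pattern; the paper's pre-search restriction, threshold transition, and early termination conditions are omitted, and the SAD block matching is a standard stand-in.

```python
import numpy as np

HEX = [(-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
SMALL = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(cur, ref, bx, by, mx, my, bs=16):
    """Sum of absolute differences between a block and a candidate."""
    y, x = by + my, bx + mx
    if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
        return np.inf
    return np.abs(cur[by:by + bs, bx:bx + bs].astype(int)
                  - ref[y:y + bs, x:x + bs].astype(int)).sum()

def hexagon_search(cur, ref, bx, by, start=(0, 0), bs=16):
    """Center-biased hexagon search: start from a predicted vector
    (e.g., from neighboring blocks' motion vector field), walk the
    large hexagon while it improves, then refine with a small cross."""
    mx, my = start
    best = sad(cur, ref, bx, by, mx, my, bs)
    while True:   # large hexagon: move while some neighbor is better
        c, dx, dy = min((sad(cur, ref, bx, by, mx + dx, my + dy, bs), dx, dy)
                        for dx, dy in HEX)
        if c >= best:
            break
        best, mx, my = c, mx + dx, my + dy
    # small cross refinement around the converged center
    cands = [(sad(cur, ref, bx, by, mx + dx, my + dy, bs), mx + dx, my + dy)
             for dx, dy in SMALL] + [(best, mx, my)]
    _, mx, my = min(cands)
    return mx, my
```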

Error Resilience Method of MPEG-2 Header Parameters by using LSB Coding for Robust DTV Video Transmission (견실한 DTV 영상 전송을 위해 LSB 부호화를 이용한 MPEG-2 헤더 정보의 오류 복원 방법)

  • Lim Tae-gyun;Lee Sang-hak
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.5 / pp.1019-1024 / 2005
  • MPEG-2 achieves a high compression ratio by exploiting the temporal and spatial correlations in a real image sequence, using motion-compensated prediction and transform coding, respectively. However, as the image sequence is more highly compressed, the encoded bitstream becomes more vulnerable to transmission errors over noisy channels. Furthermore, errors in the headers are fatal to the decoding process, because the header parameters in the video coding standard carry much important information connected to the syntax elements, tables, and decoding process. In this paper, we propose a new error resilience method using LSB coding for header parameters in MPEG-2 coded video transmission. The experimental results for the 'football' and 'susie' video sequences demonstrate that the proposed error resilience method for header parameters in the MPEG-2 bitstream performs well.
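The basic mechanics of LSB coding can be sketched as follows: redundantly embed critical header bits into the least significant bits of other data so a corrupted header can be rebuilt at the decoder. Which values carry the bits, and how the redundancy is organized, are assumptions here; the abstract does not specify them.

```python
import numpy as np

def embed_header_bits(data: np.ndarray, bits) -> np.ndarray:
    """Embed header bits (e.g., a picture-type code) in the least
    significant bits of selected 8-bit values."""
    out = data.astype(np.uint8).copy()
    flat = out.ravel()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | (b & 1)   # clear LSB, then set it
    return out

def extract_header_bits(data: np.ndarray, n_bits: int):
    """Read the embedded header bits back from the LSBs."""
    return [int(v) & 1 for v in data.ravel()[:n_bits]]

# Usage: protect a 3-bit parameter by embedding it in the first LSBs.
protected = embed_header_bits(np.arange(16, dtype=np.uint8), [1, 0, 1])
assert extract_header_bits(protected, 3) == [1, 0, 1]
```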

Video Haze Removal Method in HLS Color Space (HLS 색상 공간에서 동영상의 안개제거 기법)

  • An, Jae Won;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.20 no.1 / pp.32-42 / 2017
  • This paper proposes a new haze removal method for moving image sequences. Since the conventional dark channel prior haze removal method adjusts each color component separately in the RGB color space, severe color distortion can appear in the haze-removed output image. To resolve this problem, this paper proposes a haze removal scheme that adjusts the luminance and saturation components in the HLS color space while retaining the hue component. Moreover, the conventional dark channel prior method was designed to achieve the best haze removal performance on a single image; if it is applied to a moving image sequence, the estimated parameter values change rapidly and the haze-removed output sequence shows unnatural flickering defects. To overcome this problem, a new parameter estimation method using a Kalman filter is proposed for moving image sequences. Experimental results demonstrate that the haze removal performance of the proposed method is better than that of the conventional dark channel prior method.
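Temporally smoothing a per-frame haze parameter with a Kalman filter can be sketched as follows, assuming a scalar constant-state model (for, say, the estimated atmospheric light) so the parameter cannot jump between frames. The state model and noise variances are assumptions; the paper's filter design may differ.

```python
class ScalarKalman:
    """1D Kalman filter with a constant-state model, used to smooth a
    per-frame haze parameter and suppress frame-to-frame flicker."""

    def __init__(self, x0: float, process_var=1e-4, meas_var=1e-2):
        self.x, self.p = x0, 1.0          # state estimate and its variance
        self.q, self.r = process_var, meas_var

    def update(self, z: float) -> float:
        self.p += self.q                  # predict: state assumed constant
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with the new measurement
        self.p *= (1 - k)
        return self.x

# Usage: smooth raw per-frame atmospheric light estimates.
kf = ScalarKalman(x0=0.9)
smoothed = [kf.update(a) for a in [0.91, 0.95, 0.88, 0.92]]
```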