• Title/Summary/Keyword: Video sequence

Search Result 507

A New Adaptive Window Size-based Three Step Search Scheme (적응형 윈도우 크기 기반 NTSS (New Three-Step Search Algorithm) 알고리즘)

  • Yu Jonghoon;Oh Seoung-Jun;Ahn Chang-bum;Park Ho-Chong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.1 s.307 / pp.75-84 / 2006
  • By exploiting the center-biased distribution of motion vectors, NTSS (New Three-Step Search) improves on TSS (Three-Step Search), one of the most popular fast block matching algorithms (BMAs) for motion vector search in a video sequence. Although NTSS generally yields better quality than TSS for small-motion sequences, it cannot be said to outperform TSS for large-motion sequences; enlarging the search window can even degrade quality under NTSS. To address this drawback, this paper develops a new adaptive window size-based three-step search scheme, called AWTSS, which improves quality at various window sizes for both small- and large-motion video sequences. In this scheme, the search window size is changed dynamically according to the characteristics of the motion vectors to improve coding efficiency. AWTSS improves video quality by more than 0.5 dB for large motion while maintaining the same quality for small motion.
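The TSS baseline that NTSS and AWTSS refine can be sketched as follows. This is a generic illustration, not the paper's method: the block size, step schedule, and SAD cost are conventional choices.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return float(np.abs(a - b).sum())

def three_step_search(ref, cur, bx, by, bs=8, step=4):
    """Classic TSS: probe the 8 neighbours of the current best match
    at the current step size, then halve the step and repeat."""
    block = cur[by:by+bs, bx:bx+bs].astype(float)
    best = (0, 0)
    best_cost = sad(ref[by:by+bs, bx:bx+bs].astype(float), block)
    while step >= 1:
        cx, cy = best
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                mx, my = cx + dx, cy + dy
                x, y = bx + mx, by + my
                if x < 0 or y < 0 or x + bs > ref.shape[1] or y + bs > ref.shape[0]:
                    continue  # candidate block falls outside the frame
                cost = sad(ref[y:y+bs, x:x+bs].astype(float), block)
                if cost < best_cost:
                    best_cost, best = cost, (mx, my)
        step //= 2
    return best  # motion vector (mx, my)
```

NTSS adds extra checking points around the window center in the first step; AWTSS, per the abstract, additionally adapts the window size to the motion statistics.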

Content-Based Video Retrieval System Using Color and Motion Features (색상과 움직임 정보를 이용한 내용기반 동영상 검색 시스템)

  • 김소희;김형준;정연구;김회율
    • Proceedings of the IEEK Conference / 2001.06c / pp.133-136 / 2001
  • Numerous attempts have been made to retrieve video by its content. Recently, MPEG-7 standardized a set of visual descriptors for searching and retrieving multimedia data. Among them, color and motion descriptors are employed to develop a content-based video retrieval system that searches for videos with similar color and motion characteristics over the video sequence. In this paper, the performance of the proposed system is analyzed and evaluated. Experimental results indicate that the processing time required for retrieval using MPEG-7 descriptors is relatively short, at the expense of some retrieval accuracy.
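As a rough illustration of color-feature matching (a plain joint RGB histogram compared by histogram intersection; this is not the actual MPEG-7 Scalable Color or motion descriptors the paper uses):

```python
import numpy as np

def color_histogram(frame, bins=8):
    # Quantize each RGB channel into `bins` levels and build a
    # normalized joint color histogram of the frame.
    q = frame.astype(int) // (256 // bins)
    q = q.reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    # 1.0 for identical color distributions, 0.0 for disjoint ones.
    return float(np.minimum(h1, h2).sum())
```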

SUPER RESOLUTION RECONSTRUCTION FROM IMAGE SEQUENCE

  • Park Jae-Min;Kim Byung-Guk
    • Proceedings of the KSRS Conference / 2005.10a / pp.197-200 / 2005
  • Super-resolution image reconstruction refers to image processing algorithms that produce a high-resolution (HR) image from several observed low-resolution (LR) images of the same scene. The method has proved useful in many practical cases where multiple frames of the same scene can be obtained, such as satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. In this paper we apply a spatial-domain super-resolution reconstruction method to video sequences. The test images are adjacently sampled frames from continuous video sequences with a high overlap ratio. We construct the observation model between the HR image and the LR images and apply Maximum A Posteriori (MAP) reconstruction, one of the major methods for super-resolution reconstruction. Based on this method, we reconstruct high-resolution images from low-resolution images and compare the results with those of other well-known interpolation methods.
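The observation-model idea can be shown in its simplest special case: shift-and-add reconstruction under ideal, noise-free sampling. This is far simpler than the MAP estimation the paper actually uses, but it illustrates how several shifted LR frames jointly determine the HR grid.

```python
import numpy as np

def decimate(hr, dx, dy, factor=2):
    # LR observation: sample the HR grid starting at offset (dx, dy),
    # one pixel per factor-by-factor neighbourhood (no blur, no noise).
    return hr[dy::factor, dx::factor]

def shift_and_add(lr_frames, offsets, factor=2):
    # Interleave the LR samples back onto the HR grid.
    h, w = lr_frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    for lr, (dx, dy) in zip(lr_frames, offsets):
        hr[dy::factor, dx::factor] = lr
    return hr
```

With all factor*factor sub-pixel offsets observed, the HR image is recovered exactly; MAP reconstruction handles the realistic case of blur, noise, and arbitrary offsets.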

Face Detection and Tracking using Skin Color Information and Haar-Like Features in Real-Time Video (실시간 영상에서 피부색상 정보와 Haar-Like Feature를 이용한 얼굴 검출 및 추적)

  • Kim, Dong-Hyeon;Im, Jae-Hyun;Kim, Dae-Hee;Kim, Tae-Kyung;Paik, Joon-Ki
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.146-149 / 2009
  • Face detection and recognition in real-time video is one of the active topics in computer vision. In this paper, we propose a face detection and tracking algorithm that uses skin color and Haar-like features in a real-time video sequence. The proposed algorithm additionally exploits color-space information to enhance the results obtained from Haar-like features and skin color. Experimental results demonstrate real-time processing speed and an improved tracking rate.
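A minimal sketch of the skin-color stage, assuming the widely used fixed Cb/Cr thresholds; the paper's own thresholds and its Haar-like detection cascade are not reproduced here.

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel skin test in YCbCr space. Thresholds Cb in [77, 127]
    and Cr in [133, 173] are a common heuristic, not the paper's values."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    # Standard RGB -> CbCr conversion (JPEG convention).
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

In a full pipeline, the mask would restrict where the Haar-like face detector is run, which is what makes the combination fast enough for real-time video.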

Key Frame Assignment for Compressed Video Based on DC Image Activity

  • Kim, Kang-Wook;Lee, Jae-Seung;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.14 no.9 / pp.1109-1116 / 2011
  • In this paper, we propose a new, fast method for assigning a number of key frames to each shot. We first segment the entire video sequence into elementary content units called shots, and then allocate key frames by calculating the accumulated value of an activity function (AF). The proposed algorithm is based on the amount of content variation, measured using DC images extracted from the compressed video. Key frames are assigned one at a time to the shot with the largest value of the content function until all of the given key frames are exhausted. The main advantage of the proposed method is that it allocates key frames over the shots fully automatically, without time-consuming computations.
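A hedged sketch of activity-driven allocation: the greedy rule below (one key frame at a time to the shot with the largest activity per already-assigned key frame) is an assumption about the allocation loop, not the paper's exact content function.

```python
def allocate_key_frames(activity, total):
    """Distribute `total` key frames over shots in proportion to their
    accumulated activity, one assignment per round."""
    counts = [0] * len(activity)
    for _ in range(total):
        # Shot whose activity per assigned key frame is currently largest.
        i = max(range(len(activity)), key=lambda s: activity[s] / (counts[s] + 1))
        counts[i] += 1
    return counts
```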

NO REFERENCE QUALITY ASSESSMENT OVER PACKET VIDEO NETWORK

  • Sung, Duk-Gu;Hong, Seung-Seok;Kim, Yo-Han;Kim, Yong-Gyoo;Park, Tae-Sung;Shin, Ji-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.250-253 / 2009
  • This paper presents an NR (No Reference) quality assessment method for IPTV or mobile IPTV. Because a no-reference method does not access the original signal, it is suitable for real-time streaming services. The proposed method uses decoding parameters, such as the quantization parameter and motion vectors, together with packet loss as the major network parameter. To evaluate the performance of the proposed algorithm, we carried out a subjective video quality test using the ITU-T P.910 ACR (Absolute Category Rating) method and obtained mean opinion score (MOS) values for 180 QVGA video sequences coded with an H.264/AVC encoder. Experimental results show that the proposed quality metric has a high correlation (84%) with subjective quality.
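One way such decoding and network parameters can be pooled into a quality estimate is a linear model fitted against subjective MOS data. The features and numbers below are purely illustrative assumptions, not the paper's metric.

```python
import numpy as np

def fit_quality_model(features, mos):
    # Least-squares fit of MOS = w0 + w . features over training sequences,
    # where each feature row is e.g. (QP, motion magnitude, packet loss).
    X = np.hstack([np.ones((len(features), 1)), np.asarray(features, float)])
    w, *_ = np.linalg.lstsq(X, np.asarray(mos, float), rcond=None)
    return w

def predict_mos(w, feats):
    # No-reference prediction for a new sequence's parameters.
    return float(w[0] + np.dot(w[1:], feats))
```

The paper validates its (more elaborate) metric by correlating predictions with ACR subjective scores.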

Three-Dimensional Subband Coding of Video using Wavelet Packet Algorithm (웨이브릿 패킷 알고리즘을 이용한 3차원 비디오 서브밴드 코딩)

  • Chu, Hyung Suk;An, Chong Koo
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.11 / pp.673-679 / 2005
  • This paper presents a 3D wavelet transform-based video compression system that supports progressive transmission with increasing resolution and increasing rate for multimedia applications. The 3D wavelet packet-based video compression system removes the temporal correlation of the input sequences using a motion compensation filter and decomposes the spatio-temporal subbands using the spatial wavelet packet transform. The proposed system allocates a higher bit rate to the low-frequency subband of the 3D wavelet sequences and improves the PSNR of the reconstructed image by 0.49 dB compared with H.263. In addition to limiting the propagation of motion compensation error through the 3D wavelet transform, the proposed system transmits the input sequence progressively according to resolution and rate scalability.
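The subband structure can be illustrated with a one-level 3-D Haar split (temporal axis first, then the two spatial axes). This is only a sketch: the paper's motion-compensated temporal filtering and adaptive wavelet-packet decomposition are not reproduced.

```python
import numpy as np

def haar_pairs(x, axis):
    # One level of the orthonormal Haar transform along `axis`:
    # low band = scaled pairwise sums, high band = scaled differences.
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_3d(video):
    # Split along time (axis 0), then rows, then columns:
    # yields 8 spatio-temporal subbands, as in 3-D subband coding.
    bands = [video]
    for axis in (0, 1, 2):
        bands = [part for band in bands for part in haar_pairs(band, axis)]
    return bands
```

Because each split is orthonormal, total signal energy is preserved across the 8 subbands, which is what makes band-wise bit allocation meaningful.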

Digital Hologram Coding Technique using Block Matching of Localized Region and MCTF (로컬영역의 정합기법 및 MCTF를 이용한 디지털 홀로그램 부호화 기술)

  • Seo, Young-Ho;Choi, Hyun-Jun;Kim, Dong-Wook
    • Proceedings of the IEEK Conference / 2006.06a / pp.415-416 / 2006
  • In this paper, we propose a new coding technique for digital hologram video using a 3D scanning method and video compression. The proposed coding consists of: capturing a digital hologram and separating it into RGB color space components; localization by segmenting the fringe pattern; a frequency transform using an $M{\times}N$ (segment size) 2D DCT (2-Dimensional Discrete Cosine Transform) to extract redundancy; a 3D scan of the segments to form a video sequence; motion-compensated temporal filtering (MCTF); and a modified video coding that uses H.264/AVC.
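A sketch of the per-segment frequency-transform stage only: an orthonormal 2D DCT-II over an M-by-N segment, built from first principles. The scanning, MCTF, and H.264/AVC stages of the pipeline are omitted.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

def dct2(segment):
    # Separable 2D DCT of an M x N fringe-pattern segment.
    c = dct_matrix(segment.shape[0])
    d = dct_matrix(segment.shape[1])
    return c @ segment @ d.T

def idct2(coeffs):
    c = dct_matrix(coeffs.shape[0])
    d = dct_matrix(coeffs.shape[1])
    return c.T @ coeffs @ d
```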

Robust Video-Based Barcode Recognition via Online Sequential Filtering

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.1 / pp.8-16 / 2014
  • We consider the visual barcode recognition problem in a noisy video setting. Unlike most existing single-frame recognizers, which require considerable user effort to acquire clean, motionless, and blur-free barcode signals, we eliminate such extra human effort with a robust video-based barcode recognition algorithm. We handle a sequence of noisy, blurred barcode image frames by posing recognition as an online filtering problem. In the proposed dynamic recognition model, at each frame we infer both the blur level of the frame and the digit class label. In contrast to a frame-by-frame approach with a heuristic majority-voting scheme, our model propagates the class labels and frame-wise noise levels along the frame sequence, and hence exploits, in a probabilistically principled way, all cues from the noisy frames that are potentially useful for predicting the barcode label. We also suggest a visual barcode tracking approach that efficiently localizes barcode areas in video frames. The effectiveness of the proposed approaches is demonstrated empirically on both synthetic and real data.
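The online-filtering idea can be sketched as a discrete Bayes filter over the digit classes. The near-static transition model and the likelihood values below are assumptions for illustration, and the paper's joint inference of per-frame blur level is omitted.

```python
import numpy as np

def sequential_class_filter(likelihoods, stay_prob=0.95):
    """Recursive posterior over a digit class across frames. Because the
    true label is (nearly) static, the transition model keeps the current
    class with probability `stay_prob` and spreads the rest uniformly."""
    n = likelihoods.shape[1]
    trans = np.full((n, n), (1 - stay_prob) / (n - 1))
    np.fill_diagonal(trans, stay_prob)
    post = np.full(n, 1.0 / n)        # uniform prior over classes
    for lik in likelihoods:           # one likelihood vector per frame
        post = trans.T @ post         # predict step
        post = post * lik             # update with this frame's evidence
        post /= post.sum()
    return post
```

Unlike majority voting over per-frame decisions, weak evidence from every frame accumulates in the posterior.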

Video-Based Augmented Reality without Euclidean Camera Calibration (유클리드 카메라 보정을 하지 않는 비디오 기반 증강현실)

  • Seo, Yong-Deuk
    • Journal of the Korea Computer Graphics Society / v.9 no.3 / pp.15-21 / 2003
  • An algorithm is developed for augmenting real video with virtual graphics objects without computing Euclidean information. The real motion of the camera is obtained in affine space by a direct linear method using image matches. The virtual camera is then initialized by determining the locations of four basis points in the two input images. The four pairs of 2D locations and their 3D affine coordinates provide a Euclidean orthographic projection camera throughout the whole video sequence. Our method can generate views of objects shaded by virtual light sources, because we can make use of all the functions of graphics libraries written on the basis of Euclidean geometry. Our novel formulation and experimental results with real video sequences are presented.
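The affine-camera machinery can be illustrated by fitting a 2x4 affine camera from point correspondences in the least-squares sense. This shows only that sub-step, under the assumption of exact correspondences; the basis-point initialization and affine structure recovery from matches are not shown.

```python
import numpy as np

def fit_affine_camera(X, x):
    """Recover a 2x4 affine camera P with x = P @ [X; 1] from >= 4
    correspondences between 3-D affine coordinates and 2-D image points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])     # homogeneous 3-D points
    P, *_ = np.linalg.lstsq(Xh, x, rcond=None)    # solves Xh @ P.T = x
    return P.T                                    # 2 x 4 camera matrix

def project(P, X):
    # Affine projection of 3-D points into the image.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    return Xh @ P.T
```

Once such a camera is available for each frame, virtual objects given in the same affine coordinates can be rendered consistently over the sequence.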
