• Title/Summary/Keyword: video signal processing


Face Detection and Matching for Video Indexing (비디오 인덱싱을 위한 얼굴 검출 및 매칭)

  • Islam Mohammad Khairul;Lee Sun-Tak;Yun Jae-Yoong;Baek Joong-Hwan
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2006.06a / pp.45-48 / 2006
  • This paper presents an approach to visual-information-based temporal indexing of video sequences. The objective of this work is the integration of an automatic face detection system and a matching system for video indexing. Face detection is done using color information. The matching stage is based on Principal Component Analysis (PCA) followed by the Minimax Probability Machine (MPM). Using PCA, one feature vector is calculated for each face detected in the previous stage from the video sequence, and MPM is applied to these feature vectors to match them against training faces that were manually indexed after extraction from the video sequences. The integration of the two stages gives good results: a rate of 86.3% correctly classified frames shows the efficiency of the system.
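The PCA feature-extraction step described above can be sketched as follows; the MPM classifier is replaced here by a simple nearest-neighbour match, and the function names (`pca_features`, `match_face`) and synthetic data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pca_features(faces, k):
    """Project flattened face images onto the top-k principal components.

    faces: (n_samples, n_pixels) array; k: number of components.
    Returns (features, mean, basis) so new faces can be projected later.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal directions in Vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                       # (k, n_pixels)
    return centered @ basis.T, mean, basis

def match_face(query, train_feats, labels, mean, basis):
    """Nearest-neighbour match in PCA space (a stand-in for the MPM stage)."""
    q = (query - mean) @ basis.T
    d = np.linalg.norm(train_feats - q, axis=1)
    return labels[int(np.argmin(d))]
```

In the paper the matching stage is MPM, not nearest neighbour; the sketch only shows where the PCA feature vectors come from and how they are compared against manually indexed training faces.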


Post-processing of 3D Video Extension of H.264/AVC for a Quality Enhancement of Synthesized View Sequences

  • Bang, Gun;Hur, Namho;Lee, Seong-Whan
    • ETRI Journal / v.36 no.2 / pp.242-252 / 2014
  • Since July 2012, the 3D video extension of H.264/AVC has been under development to support the multi-view video plus depth format. In 3D video applications such as multi-view and free-viewpoint applications, synthesized views are generated using coded texture video and coded depth video. Such synthesized views can be distorted by quantization noise and by inaccuracy of the 3D warping positions, so it is important to improve their quality where possible. To achieve this, the relationship among the depth video, texture video, and synthesized view is investigated herein. Based on this investigation, two methods are proposed: an edge-noise suppression filtering process that preserves the edges of the depth video, and a method based on a total variation approach to maximum a posteriori probability estimation that reduces the quantization noise of the coded texture video. The experimental results show that the proposed methods improve the peak signal-to-noise ratio and visual quality of a synthesized view compared to a synthesized view without post-processing.
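The total-variation-based MAP estimation mentioned above can be sketched as gradient descent on a data-fidelity term plus a smoothed TV penalty; the step size, iteration count, smoothing constant, and the name `tv_denoise` are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.2, iters=200, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam*TV(u) for a 2D image.

    A small eps smooths the TV term so its gradient is defined everywhere.
    """
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences (last row/column padded so shapes match)
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # divergence of the normalised gradient field (TV subgradient)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```

Minimising this objective is one standard way to realise a MAP estimate under a TV prior; the paper's formulation for coded texture video may differ in the data term and optimisation method.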

A Beamforming-Based Video-Zoom Driven Audio-Zoom Algorithm for Portable Digital Imaging Devices

  • Park, Nam In;Kim, Seon Man;Kim, Hong Kook;Kim, Myeong Bo;Kim, Sang Ryong
    • IEIE Transactions on Smart Processing and Computing / v.2 no.1 / pp.11-19 / 2013
  • A video-zoom driven audio-zoom algorithm is proposed to provide audio zooming effects according to the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone array, in conjunction with a soft masking process that uses the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by the audio gain derived from the video-zoom level. The proposed algorithm is then implemented on a portable digital imaging device with a clock speed of 600 MHz after several levels of optimization: algorithmic-level, C-code, and memory optimization. As a result, the proposed audio-zoom algorithm occupies 14.6% or less of the device's processing capacity. A performance evaluation conducted in a semi-anechoic chamber shows that signals from the front direction can be amplified by approximately 10 dB relative to the other directions.
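The "audio gain derived from the video-zoom level" might look like the sketch below; the linear dB mapping, the 10 dB ceiling (borrowed from the reported front-direction amplification), and the function names are assumptions, since the abstract does not specify the exact mapping:

```python
import numpy as np

def zoom_gain_db(zoom, zoom_max=10.0, max_gain_db=10.0):
    """Map a video-zoom level to an audio gain in dB (linear mapping assumed)."""
    z = np.clip(zoom, 1.0, zoom_max)
    return max_gain_db * (z - 1.0) / (zoom_max - 1.0)

def apply_audio_zoom(masked_signal, zoom):
    """Scale the beamformed/soft-masked signal by the zoom-derived gain."""
    g = 10.0 ** (zoom_gain_db(zoom) / 20.0)   # dB -> linear amplitude
    return g * masked_signal
```

At maximum zoom the gain is 10 dB, i.e. an amplitude factor of about 3.16, matching the order of amplification reported for the front direction.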


Robust Extraction of Heartbeat Signals from Mobile Facial Videos (모바일 얼굴 비디오로부터 심박 신호의 강건한 추출)

  • Lomaliza, Jean-Pierre;Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.20 no.1 / pp.51-56 / 2019
  • This paper proposes an improved heartbeat signal extraction method for ballistocardiography (BCG)-based heart-rate measurement in a mobile environment. First, a handshake-free head motion signal is extracted from a mobile facial video by tracking facial features and background features at the same time. Then, a novel signal-periodicity computation method is proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method robustly extracts heartbeat signals from mobile facial videos and enables more accurate heart-rate measurement (measurement errors were reduced by 3-4 bpm) compared with the existing method.
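A minimal sketch of recovering a heart rate from a periodic head-motion signal, using the dominant spectral peak in a plausible heart-rate band rather than the paper's periodicity method (the band limits and the function name are assumptions):

```python
import numpy as np

def estimate_heart_rate(motion, fs, lo=0.75, hi=4.0):
    """Estimate heart rate (bpm) from a 1-D head-motion signal.

    Takes the dominant FFT peak inside [lo, hi] Hz (45-240 bpm), a band
    where a heartbeat-driven component can plausibly sit.
    """
    x = motion - np.mean(motion)           # remove the DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * peak
```

In the paper the motion signal itself is first cleaned by jointly tracking facial and background features, which is what makes the subsequent periodicity analysis robust on handheld video.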

Signal Synchronization Using a Flicker Reduction and Denoising Algorithm for Video-Signal Optical Interconnect

  • Sangirov, Jamshid;Ukaegbu, Ikechi Augustine;Lee, Tae-Woo;Cho, Mu-Hee;Park, Hyo-Hoon
    • ETRI Journal / v.34 no.1 / pp.122-125 / 2012
  • A video signal transmitted through a high-density optical link has been demonstrated to show the reliability of optical links for high-data-rate transmission. To reduce the number of optical point-to-point links, an electrical link has been utilized for control and clock signaling. Latency and flicker with background noise occur while transferring data across the optical link, due to the electrical-to-optical and optical-to-electrical conversions. The proposed synchronization technology, combined with a flicker-reduction and denoising algorithm, has given good results and can be applied in high-definition serial data interface (HD-SDI), ultra-HD-SDI, and HD multimedia interface transmission system applications.

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.294-300 / 2004
  • With the development of digital media technologies, various algorithms for video clip matching have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip matching or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and to retrieve them for a video clip query. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between histograms of consecutive frames, which yields high performance compared with conventional measures. The key frames extracted from segmented video shots, where a key frame is defined as a frame that is significantly different from the previous frames, can be used not only for video shot clustering but also for video sequence matching or browsing. Experimental results with color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.
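One plausible reading of a Cauchy-function similarity between consecutive-frame histograms is sketched below; the exact functional form, the `sigma` scale, and the key-frame threshold are assumptions, not taken from the paper:

```python
import numpy as np

def cauchy_similarity(h1, h2, sigma=0.05):
    """Cauchy-kernel similarity between two normalised histograms.

    Each bin difference d is scored with 1 / (1 + (d/sigma)^2), so small
    differences score near 1 and large outlier bins are down-weighted,
    which is what makes a Cauchy measure robust compared with L2.
    """
    d = np.asarray(h1, float) - np.asarray(h2, float)
    return float(np.mean(1.0 / (1.0 + (d / sigma) ** 2)))

def is_key_frame(prev_hist, cur_hist, threshold=0.8, sigma=0.05):
    """Declare a key frame when similarity to the previous frame drops."""
    return cauchy_similarity(prev_hist, cur_hist, sigma) < threshold
```

The heavy-tailed Cauchy weighting is the usual motivation for such a measure: a few strongly changed bins (e.g. from a lighting flash) do not dominate the score the way they would under a squared-difference measure.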


Fusion of Background Subtraction and Clustering Techniques for Shadow Suppression in Video Sequences

  • Chowdhury, Anuva;Shin, Jung-Pil;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.231-234 / 2013
  • This paper introduces a combination of a background subtraction technique and the K-Means clustering algorithm for removing shadows from video sequences. Lighting conditions cause issues with segmentation. The proposed method can successfully eradicate artifacts associated with lighting changes, such as highlights and reflections, and remove the cast shadows of moving objects from the segmentation. In this paper, the K-Means clustering algorithm is applied to the foreground, which is initially segmented by the background subtraction technique. The estimated shadow region is then superimposed on the background to eliminate the effects that cause redundancy in object detection. Simulation results show that the proposed approach is capable of removing shadows and reflections from moving objects with an accuracy of more than 95% in every case considered.

The Implementation of DSP-Based Real-Time Video Transmission System using In-Vehicle Multimedia Network (차량 내 멀티미디어 네트워크를 이용한 DSP 기반 실시간 영상 전송 시스템의 구현)

  • Jeon, Young-Joon;Kim, Jin-II
    • Journal of the Institute of Convergence Signal Processing / v.14 no.1 / pp.62-69 / 2013
  • This paper proposes a real-time video transmission system for car-mounted cameras based on a MOST network. Existing vehicles transmit video by connecting car-mounted cameras through analog links. However, the increasing number of car-mounted cameras has driven the development of a network to connect them. In this paper, a DSP is applied to perform MPEG-2 encoding/decoding for real-time video transmission in a short period of time. MediaLB is employed to transfer the data stream between the DSP and the MOST network controller. During this procedure, the DSP cannot receive the data stream directly from MediaLB; therefore, an FPGA is used to deliver the MediaLB data stream to the DSP. MediaLB is designed to streamline hardware/software application development for MOST networks and to support all MOST network data transport methods. The test results verify that real-time video transmission using the proposed system operates normally.

Content-Based Video Search Using Eigen Component Analysis and Intensity Component Flow (고유성분 분석과 휘도성분 흐름 특성을 이용한 내용기반 비디오 검색)

  • 전대홍;강대성
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.47-53 / 2002
  • In this paper, we propose a content-based video search method using the eigenvalues of key frames and the intensity component. We divide the video stream into shot units, extract a key frame representing each shot, and obtain the intensity distribution of each shot from the database generated using ECA (Eigen Component Analysis). The generated codebook, the index value of each key frame, and the intensity values are stored in the database. Given a query image, the video stream containing the most similar frame is found using the Euclidean distance among the codewords in the codebook. The experimental results show that the proposed algorithm is superior to other methods in search accuracy, since it makes use of eigenvalues and intensity components, and it also reduces processing time.
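The codebook-based retrieval step (Euclidean distance among codewords) can be sketched as follows; the `video_index` mapping and both function names are illustrative assumptions about how such a database might be organised:

```python
import numpy as np

def nearest_codeword(query_feat, codebook):
    """Index of the codebook entry closest to the query feature vector
    (Euclidean distance, as in the key-frame matching step)."""
    d = np.linalg.norm(codebook - query_feat, axis=1)
    return int(np.argmin(d))

def search_videos(query_feat, video_index, codebook):
    """video_index maps codeword index -> ids of videos whose key frames
    were quantised to that codeword (assumed database layout)."""
    return video_index.get(nearest_codeword(query_feat, codebook), [])
```

Quantising each key frame to its nearest codeword at indexing time is what lets a query be answered by a single nearest-codeword lookup instead of a scan over every stored frame.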


A Study on the Development of Radar Signal Detecting & Processor (Radar Signal Detecting & Processing 장치의 개발에 관한 연구)

  • 송재욱
    • Journal of the Korean Institute of Navigation / v.24 no.5 / pp.435-441 / 2000
  • This paper deals with the development of RACOM (Radar Signal Detecting & Processing Computer). RACOM is a radar display system specially designed for radar scan conversion, signal processing, and PCI radar image display. RACOM contains two components: i) the RSP (Radar Signal Processor) board, a PCI-based board that receives video, trigger, and heading and bearing signals from the radar scanner and transceiver units and processes these signals to generate a high-resolution radar image; and ii) applications that perform ordinary radar display functions such as EBL, VRM, and so on. Since RACOM is designed to meet a wide variety of specifications (type of output signal from the transceiver unit), to record radar images, and to distribute those images in real time to anywhere in a networked environment, it can be applied to AIS (Automatic Identification System) and VDR (Voyage Data Recorder) systems.
