• Title/Summary/Keyword: Video matching


Implementation of the Integrated Navigation Parameter Extraction from the Aerial Image Sequence Using TMS320C80 MVP (TMS320C80 MVP 상에서의 연속항공영상을 이용한 통합 항법 변수 추출 시스템 구현)

  • Sin, Sang-Yun;Park, In-Jun;Lee, Yeong-Sam;Lee, Min-Gyu;Kim, Gwan-Seok;Jeong, Dong-Uk;Kim, In-Cheol;Park, Rae-Hong;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.49-57
    • /
    • 2002
  • In this paper, we deal with a real-time implementation of an integrated image-based navigation parameter extraction system using the TMS320C80 MVP (multimedia video processor). Our system consists of relative position estimation and absolute position compensation, which is further divided into high-resolution aerial image matching, DEM (digital elevation model) matching, and IRS (Indian Remote Sensing) satellite image matching. These algorithms are implemented in real time on the MVP. To achieve real-time operation, the aerial image is partitioned and the partitioned images are processed in parallel on the four parallel processors of the MVP, as sketched below. We also examine the performance of the implemented integrated system in terms of estimation accuracy, confirming proper operation of our system.
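
A minimal Python sketch of the partition-and-parallelize idea described above: the frame is split into four strips and each strip is scored against the reference image on its own worker process, mirroring the four parallel processors of the MVP. The normalized-correlation scoring step is an illustrative placeholder, not the paper's matching algorithm.

```python
import numpy as np
from multiprocessing import Pool

def match_strip(args):
    """Placeholder matching step: normalized correlation of one image strip
    against the corresponding strip of the reference image."""
    strip, reference = args
    s = (strip - strip.mean()) / (strip.std() + 1e-9)
    r = (reference - reference.mean()) / (reference.std() + 1e-9)
    return float((s * r).mean())

def parallel_match(frame, reference, workers=4):
    """Split the frame into `workers` horizontal strips and match them in parallel."""
    strips = np.array_split(frame, workers, axis=0)
    refs = np.array_split(reference, workers, axis=0)
    with Pool(workers) as pool:
        return pool.map(match_strip, list(zip(strips, refs)))

if __name__ == "__main__":
    frame = np.random.rand(480, 640)
    reference = np.random.rand(480, 640)
    print(parallel_match(frame, reference))
```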

A New Block-based Gradient Descent Search Algorithm for a Fast Block Matching (고속 블록 정합을 위한 새로운 블록 기반 경사 하강 탐색 알고리즘)

  • Kwak, Sung-Keun
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.731-740
    • /
    • 2003
  • Since motion estimation removes redundant data by exploiting the temporal correlation between adjacent frames in a video sequence, it plays an important role in digital video coding. In block matching algorithms, the shape and size of the search pattern and the distribution of motion vectors have a large impact on both search speed and image quality. In this paper, we propose a new fast block matching algorithm that combines a small cross search pattern with the block-based gradient descent search pattern. The algorithm first finds motion vectors close to the center of the search window using the small cross search pattern, and then quickly finds the remaining motion vectors using the block-based gradient descent search pattern (a simplified sketch follows this entry). Experiments show that, compared with the block-based gradient descent search algorithm (BBGDS), the proposed algorithm improves the average number of search points per motion vector by 26-40%.

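A simplified Python sketch (not the authors' implementation) of the two-stage search described in the abstract above: a small cross pattern around the zero vector first, then a block-based gradient descent over 3x3 neighbourhoods. Block size, search range, and frame handling are illustrative assumptions.

```python
import numpy as np

def sad(cur, ref, x, y, dx, dy, bs):
    """Sum of absolute differences between the block at (x, y) in the current
    frame and the block displaced by (dx, dy) in the reference frame."""
    h, w = ref.shape
    if not (0 <= y + dy and y + dy + bs <= h and 0 <= x + dx and x + dx + bs <= w):
        return float("inf")
    c = cur[y:y + bs, x:x + bs].astype(np.int32)
    r = ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(np.int32)
    return int(np.abs(c - r).sum())

def small_cross_then_bbgds(cur, ref, x, y, bs=16, search=7):
    # Stage 1: small cross pattern around the zero vector, which catches the
    # center-biased motion vectors cheaply.
    candidates = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    best = min(candidates, key=lambda v: sad(cur, ref, x, y, v[0], v[1], bs))
    best_cost = sad(cur, ref, x, y, best[0], best[1], bs)
    # Stage 2: block-based gradient descent, moving to the best point of each
    # 3x3 neighbourhood until no neighbour improves the SAD.
    while True:
        improved = False
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                cand = (best[0] + i, best[1] + j)
                if abs(cand[0]) > search or abs(cand[1]) > search:
                    continue
                cost = sad(cur, ref, x, y, cand[0], cand[1], bs)
                if cost < best_cost:
                    best, best_cost, improved = cand, cost, True
        if not improved:
            return best
```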

Quasi-Lossless Fast Motion Estimation Algorithm using Distribution of Motion Vector and Adaptive Search Pattern and Matching Criterion (움직임벡터의 분포와 적응적인 탐색 패턴 및 매칭기준을 이용한 유사 무손실 고속 움직임 예측 알고리즘)

  • Park, Seong-Mo;Ryu, Tae-Kyung;Jung, Yong-Jae;Moon, Kwang-Seok;Kim, Jong-Nam
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.7
    • /
    • pp.991-999
    • /
    • 2010
  • In this paper, we propose a fast motion estimation algorithm for video encoding. Conventional fast motion estimation algorithms suffer from low prediction quality in some frames, while fast algorithms based on the full search achieve only a small reduction in computation. We propose an algorithm that significantly reduces unnecessary computation while keeping prediction quality almost identical to that of the full search. The proposed algorithm uses the distribution probability of motion vectors together with adaptive search patterns and block matching criteria: by selecting different search patterns and block matching error criteria according to the distribution probability of motion vectors (sketched below), only unnecessary computations are removed. Our algorithm requires only 20~30% of the computation of the fast full search in the H.264 reference software, with a decrease in prediction quality of only about 0~0.02 dB. The algorithm will be useful for real-time video coding applications based on the MPEG-2 or MPEG-4 AVC standards.
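
A hedged Python sketch of the general idea: the search pattern and the matching criterion for a block are chosen according to how probable the predicted motion vector is. The thresholds, pattern shapes, and criterion names are illustrative assumptions, not the parameters used in the paper.

```python
SMALL_CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
LARGE_DIAMOND = SMALL_CROSS + [(2, 0), (-2, 0), (0, 2), (0, -2),
                               (1, 1), (1, -1), (-1, 1), (-1, -1)]

def choose_pattern_and_criterion(mv_probability, high=0.7, low=0.3, search=7):
    """Return (search pattern, matching criterion) for one block, given the
    distribution probability of its predicted motion vector."""
    if mv_probability >= high:
        # Motion is very likely near the predictor: a tiny pattern and a
        # cheaper partial-SAD criterion are usually enough.
        return SMALL_CROSS, "partial_sad"
    if mv_probability >= low:
        return LARGE_DIAMOND, "sad"
    # Unreliable prediction: fall back to an exhaustive search so quality
    # stays close to the full search ("quasi-lossless" behaviour).
    full = [(dx, dy) for dx in range(-search, search + 1)
                     for dy in range(-search, search + 1)]
    return full, "sad"
```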

A Block Matching Algorithm using Motion Vector Predictor Candidates and Adaptive Search Pattern (움직임 벡터 예측 후보들과 적응적인 탐색 패턴을 이용하는 블록 정합 알고리즘)

  • Kwak, Sung-Keun;Wee, Young-Cheul;Kim, Ha-Jin
    • The KIPS Transactions: Part B
    • /
    • v.11B no.3
    • /
    • pp.247-256
    • /
    • 2004
  • In this paper, we propose a prediction search algorithm for block matching that uses the temporal and spatial correlation of the video sequence and the center-biased property of motion vectors. The proposed algorithm determines a better starting point for the search for the exact motion vector as the point with the smallest SAD (sum of absolute differences) among the motion vector predicted from the same block of the previous frame, the predictor candidate points in each search region, and the motion vector predicted from the neighbouring blocks of the current frame (a sketch of this starting-point selection follows this entry). The search then proceeds from the starting point using an adaptive search pattern chosen according to the magnitude of the motion vector. Simulation results show that PSNR (peak signal-to-noise ratio) values are improved by up to 0.75 dB depending on the video sequence, and by about 0.05∼0.34 dB on average, compared with conventional fast algorithms other than the FS (full search) algorithm.
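
A minimal Python sketch of the starting-point selection described above: evaluate the SAD at each motion-vector predictor candidate, start from the best one, and pick a search pattern from the magnitude of that vector. The candidate list, the `cost` interface, and the magnitude threshold are illustrative assumptions.

```python
def pick_start_and_pattern(cost, prev_mv, neighbour_mvs):
    """cost(mv) is assumed to return the block-matching SAD for a candidate
    motion vector mv given as a (dx, dy) tuple."""
    # Predictor candidates: zero vector, the same block's vector from the
    # previous frame, and the vectors of neighbouring blocks in the current frame.
    candidates = [(0, 0), tuple(prev_mv)] + [tuple(mv) for mv in neighbour_mvs]
    start = min(candidates, key=cost)
    # Adaptive pattern: a compact cross for small motion, a wider diamond otherwise.
    if abs(start[0]) <= 1 and abs(start[1]) <= 1:
        pattern = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    else:
        pattern = [(dx, dy) for dx in range(-2, 3) for dy in range(-2, 3)
                   if abs(dx) + abs(dy) <= 2]
    return start, pattern
```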

A Temporal Error Concealment Technique Using The Adaptive Boundary Matching Algorithm (적응적 경계 정합을 이용한 시간적 에러 은닉 기법)

  • Kim, Won-Ki;Lee, Doo-Soo;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.5C
    • /
    • pp.683-691
    • /
    • 2004
  • To transmit MPEG-2 video over an error-prone channel, a number of error control techniques are needed. In particular, error concealment techniques, which can be implemented at the receiver independently of the transmitter, are essential for good video quality. In this paper, motion vector prediction and an adaptive boundary matching algorithm (BMA) are presented for temporal error concealment. Before applying the more complex BMA, error concealment is first performed by predicting the motion vector from neighboring motion vectors. If this concealment candidate is not satisfactory, the search range and reliable boundary pixels are selected according to the temporal activity of the motion vectors, and the damaged macroblock is concealed by applying an adaptive BMA (a simplified boundary matching sketch follows this entry). This error concealment technique reduces complexity while maintaining a PSNR gain of 0.3∼0.7 dB over the conventional BMA.
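
A simplified Python sketch of boundary matching for temporal error concealment (the plain BMA step, not the paper's adaptive variant): each candidate motion vector is scored by how well the border of the candidate block from the reference frame matches the correctly received pixels surrounding the lost macroblock.

```python
import numpy as np

def boundary_match_error(frame, ref, x, y, mv, bs=16):
    """Border mismatch between the candidate block in `ref`, displaced by mv,
    and the pixels just outside the lost macroblock at (x, y) in `frame`."""
    dx, dy = mv
    h, w = frame.shape
    if not (0 <= y + dy and y + dy + bs <= h and 0 <= x + dx and x + dx + bs <= w):
        return float("inf")
    cand = ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(np.int32)
    err = 0
    if y > 0:           # row above the lost block vs. top row of the candidate
        err += int(np.abs(frame[y - 1, x:x + bs].astype(np.int32) - cand[0]).sum())
    if y + bs < h:      # row below vs. bottom row
        err += int(np.abs(frame[y + bs, x:x + bs].astype(np.int32) - cand[-1]).sum())
    if x > 0:           # column on the left vs. left column
        err += int(np.abs(frame[y:y + bs, x - 1].astype(np.int32) - cand[:, 0]).sum())
    if x + bs < w:      # column on the right vs. right column
        err += int(np.abs(frame[y:y + bs, x + bs].astype(np.int32) - cand[:, -1]).sum())
    return err

def conceal(frame, ref, x, y, candidate_mvs, bs=16):
    """Replace the lost macroblock with the best-matching reference block.
    Assumes at least one candidate vector keeps the block inside the frame."""
    best = min(candidate_mvs,
               key=lambda mv: boundary_match_error(frame, ref, x, y, mv, bs))
    frame[y:y + bs, x:x + bs] = ref[y + best[1]:y + best[1] + bs,
                                    x + best[0]:x + best[0] + bs]
    return best
```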

Soccer Ball Tracking Robust Against Occlusion (가려짐에 강인한 축구공 추적)

  • Lee, Kwon;Lee, Chulhee
    • Journal of Broadcast Engineering
    • /
    • v.17 no.6
    • /
    • pp.1040-1047
    • /
    • 2012
  • In this paper, we propose a ball tracking algorithm that is robust against occlusion in broadcast soccer video sequences. Soccer ball tracking is a challenging task because of occlusion, fast motion, and rapid direction changes. Many methods based on ball trajectories have been proposed, but this approach requires heavy computation. We propose a ball tracking algorithm with occlusion handling capability: the initial ball location is found using the circular Hough transform, the ball is then tracked using template matching, and occlusion is detected from the matching score (a rough sketch follows this entry). In occlusion cases, we generate a set of ball candidates; candidates that already exist in the previous frame are removed, and the newly appearing candidate is determined to be the ball. Experiments with several broadcast soccer video sequences show that the proposed method handles occlusion efficiently.
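
A rough OpenCV sketch of the pipeline the abstract outlines: locate the ball once with the circular Hough transform, then track it by template matching and treat a low matching score as a sign of occlusion. All parameter values and the occlusion threshold are illustrative assumptions.

```python
import cv2

def init_ball(gray):
    """Find an initial ball position (x, y, radius) with the circular Hough transform."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=20, minRadius=3, maxRadius=15)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return int(x), int(y), int(r)

def track(gray, template, occlusion_thresh=0.5):
    """Track by template matching; a low peak score suggests the ball is occluded."""
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    occluded = max_val < occlusion_thresh
    return max_loc, max_val, occluded
```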

Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.75-83
    • /
    • 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database of conditional probabilities of motion between the limbs of a walking subject. With a suitable amount of video footage from various human subjects included in the database, a probabilistic model characterizing the relationships between limb motions is developed. Motion tracking of a test subject begins by identifying and tracking limbs in the surveillance video using edge and silhouette detection. When any of the tracked limbs becomes occluded, the approach uses the probabilistic motion model together with the minimum-cost edge and silhouette tracking model to determine the motion of the occluded limb (sketched below). The method has shown promising results in tracking occluded limbs in validation tests.
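
A hedged Python sketch of the occlusion-handling idea: when a limb cannot be tracked directly, each candidate limb motion is scored by combining the edge/silhouette matching cost with the conditional probability of that motion given the visible limbs, taken from the learned model. The cost combination and the function interfaces are illustrative assumptions.

```python
import math

def occluded_limb_motion(candidates, image_cost, cond_prob, weight=1.0):
    """candidates: candidate motions for the occluded limb.
    image_cost(m): minimum-cost edge/silhouette matching score (lower is better).
    cond_prob(m): probability of motion m given the motions of the visible limbs."""
    def total_cost(m):
        p = max(cond_prob(m), 1e-12)           # guard against log(0)
        return image_cost(m) + weight * (-math.log(p))
    return min(candidates, key=total_cost)
```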

Video Signature using Spatio-Temporal Information for Video Copy Detection (동영상 복사본 검출을 위한 시공간 정보를 이용한 동영상 서명 - 동심원 구획 기반 서술자를 이용한 동영상 복사본 검출 기술)

  • Cho, Ik-Hwan;Oh, Weon-Geun;Jeong, Dong-Seok
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.607-611
    • /
    • 2008
  • This paper proposes a new video signature using spatio-temporal information for copy detection. The proposed video copy detection method is based on a concentric-circle partitioning of each key frame. First, key frames are extracted periodically from the whole video using temporal bilinear interpolation, and each key frame is partitioned into concentric circles. For the partitioned sub-regions, four feature distributions (average intensity and its difference, symmetric difference, and circular difference distributions) are obtained from the relations between sub-regions. Finally, these feature distributions are converted into a binary signature using a simple hash function and merged together. The similarity between signatures is calculated with a simple Hamming distance, so matching is very fast (a simplified sketch follows this entry). Experimental results show that the proposed method achieves a high detection success ratio of 97.4% on average under various modifications, and it is therefore expected to be widely applicable to video copy detection.

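A simplified Python sketch of the concentric-circle signature idea described above: partition a key frame into rings, take the average intensity of each ring, binarize adjacent-ring differences into a short signature, and compare signatures with the Hamming distance. The ring count and the binarization rule are illustrative assumptions; the paper combines four feature distributions rather than this single one.

```python
import numpy as np

def ring_means(gray, n_rings=8):
    """Average intensity of each concentric ring around the frame centre."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    edges = np.linspace(0.0, r.max() + 1e-6, n_rings + 1)
    return np.array([gray[(r >= edges[i]) & (r < edges[i + 1])].mean()
                     for i in range(n_rings)])

def binary_signature(gray, n_rings=8):
    """One bit per adjacent-ring comparison (a very simple hash)."""
    m = ring_means(gray, n_rings)
    return (m[1:] > m[:-1]).astype(np.uint8)

def hamming(sig_a, sig_b):
    """Number of differing bits between two binary signatures."""
    return int(np.count_nonzero(sig_a != sig_b))
```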

Natural Language based Video Retrieval System with Event Analysis of Multi-camera Image Sequence in Office Environment (사무실 환경 내 다중카메라 영상의 이벤트분석을 통한 자연어 기반 동영상 검색시스템)

  • Lim, Soo-Jung;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.384-389
    • /
    • 2008
  • Recently, the need for systems that effectively store and retrieve video data has increased. Conventional video retrieval systems rely on menus or text-based keywords; because such queries carry little information, many video clips are retrieved at once, and the user must have a certain level of knowledge to use the system. In this paper, we present a natural-language-based conversational video retrieval system that reflects the user's intention and carries more information than keyword-based queries. The system can retrieve not only events and people but also their movements. First, an event database is constructed from metadata generated by domain analysis of video collected in an office environment. A script database is then constructed based on query pre-processing and analysis. On top of these, a method for retrieving video by matching natural language queries against answers is proposed (a toy sketch of the matching step follows this entry) and validated through performance and process evaluation with 10 users. The natural-language-based retrieval system showed better performance and user satisfaction than a menu-based retrieval system.

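A toy Python sketch of the query-to-event matching step described above: a natural-language query is reduced to keywords and matched against event records produced by the multi-camera analysis. The record fields, the sample data, and the scoring rule are illustrative assumptions, not the paper's event/script database design.

```python
EVENTS = [
    {"person": "alice", "action": "enters", "place": "meeting room", "time": "09:10"},
    {"person": "bob", "action": "leaves", "place": "office", "time": "18:02"},
]

def retrieve(query, events=EVENTS):
    """Rank events by how many of their field tokens appear in the query."""
    words = set(query.lower().split())
    def score(event):
        return sum(1 for value in event.values()
                   if any(token in words for token in value.lower().split()))
    ranked = sorted(events, key=score, reverse=True)
    return [event for event in ranked if score(event) > 0]

if __name__ == "__main__":
    print(retrieve("when does alice enter the meeting room"))
```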

Video Reality Improvement Using Measurement of Emotion for Olfactory Information (후각정보의 감성측정을 이용한 영상실감향상)

  • Lee, Guk-Hee;Kim, ShinWoo
    • Science of Emotion and Sensibility
    • /
    • v.18 no.3
    • /
    • pp.3-16
    • /
    • 2015
  • Will an orange scent enhance video reality if it is presented with a video that vividly illustrates orange juice? Or will a romantic scent improve video reality if it is presented along with a date scene? The former concerns reality improvement when concrete objects or places are present in a video, whereas the latter concerns the case in which they are absent. This paper reviews previous research that tested diverse videos and scents to answer these two questions, and discusses implications, limitations, and future research directions. In particular, the paper focuses on measurement methods and results regarding the acceptability of olfactory information, perception of scent similarity, olfactory vividness and video reality, matching between scent and color (or color temperature), and the description of various scents using emotional adjectives. We expect this paper to help researchers and engineers who are interested in using scents to improve video reality.