• Title/Summary/Keyword: zoom motion estimation (줌 움직임 추정)

6 search results

Zoom Motion Estimation Method by Using Depth Information (깊이 정보를 이용한 줌 움직임 추정 방법)

  • Kwon, Soon-Kak; Park, Yoo-Hyun; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.16 no.2 / pp.131-137 / 2013
  • Zoom motion estimation for video sequences is very complicated to implement. In this paper, we propose a method that implements zoom motion estimation by using a depth camera together with a color camera. The depth camera provides the distances of the current block and the reference block, and the zoom ratio between the two blocks is calculated from this distance information. By zooming the reference block appropriately according to this ratio, the motion-estimated difference signal can be reduced. The proposed method can therefore increase the accuracy of motion estimation while keeping the complexity of zoom motion estimation from growing. Simulations measuring the motion estimation accuracy of the proposed method show that the motion estimation error is reduced significantly compared with the conventional block matching method.
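The key step described in the abstract, scaling the reference block by a depth-derived zoom ratio before block matching, could look roughly like the following minimal sketch (NumPy only; the function names, the nearest-neighbour rescaling, and the pinhole-style ratio depth_reference / depth_current are illustrative assumptions, not the paper's exact formulation).

```python
import numpy as np

def zoom_ratio(depth_current, depth_reference):
    # Under a pinhole-camera assumption, apparent size is inversely
    # proportional to distance, so the reference block is scaled by
    # depth_reference / depth_current to match the current block.
    return depth_reference / depth_current

def scale_block(block, ratio):
    # Nearest-neighbour rescale of a square block about its centre,
    # resampled back to the original size so the blocks stay comparable.
    n = block.shape[0]
    coords = (np.arange(n) - n / 2) / ratio + n / 2
    idx = np.clip(np.round(coords).astype(int), 0, n - 1)
    return block[np.ix_(idx, idx)]

def zoom_compensated_sad(cur_block, ref_block, cur_depth, ref_depth):
    # SAD between the current block and the zoom-compensated reference block.
    r = zoom_ratio(cur_depth, ref_depth)
    scaled = scale_block(ref_block, r)
    return int(np.abs(cur_block.astype(int) - scaled.astype(int)).sum())

# Hypothetical usage: depths come from the depth camera, blocks from the colour frames.
cur = np.random.randint(0, 256, (16, 16))
ref = np.random.randint(0, 256, (16, 16))
print(zoom_compensated_sad(cur, ref, cur_depth=1.2, ref_depth=1.5))
```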

A Study on The Tracking and Analysis of Moving Object in MPEG Compressed domain (MPEG 압축 영역에서의 움직이는 객체 추적 및 해석)

  • 문수정; 이준환; 박동선
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2001.11b / pp.103-106 / 2001
  • In this paper, we estimate camera motion using information that can be obtained directly from an MPEG2 video stream, and on that basis estimate moving objects. Because the motion vectors in MPEG2 are not predicted in a regular order, for reasons of compression efficiency, we first convert them into motion vectors carrying optical flow by exploiting the properties of the predicted frames. Using these vectors, we define the basic camera operations of pan, tilt, and zoom. To this end, we define Δx, Δy, and α, which carry the same meaning as the parameters of a pan/tilt-zoom camera model, and obtain their values by applying a Hough transform to the motion-vector components. Since camera operations occur continuously in time, the camera motion obtained for each frame is corrected using this temporal continuity. Finally, for moving-object estimation, the user first specifies the desired object as a bounding box; the camera-motion-compensated motion vectors of the object are then accumulated over one GOP (Group of Pictures) according to their area contribution to track and analyze the object, and the object region is re-adjusted using DCT texture information. Since the information available from compressed MPEG2 video is at best block-level, object definition is likewise limited to objects of block size or larger. By obtaining information directly from the video stream, the proposed method improves computation speed and, by exploiting camera-motion characteristics and moving-object tracking, can also be widely applied to existing content-based retrieval and analysis. These techniques are expected to be useful for searching and analyzing compressed data, particularly in retrieval tools, video editing tools, traffic monitoring systems, or unattended surveillance systems that require storage and fast analysis of compressed video.
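The Δx, Δy, α estimation described above amounts to voting in a parameter space. A rough Hough-style voting sketch under the simple flow model u = Δx + α·x, v = Δy + α·y might look like this; the model, bin sizes, and parameter ranges are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def hough_pan_tilt_zoom(points, flows, alphas=np.linspace(-0.05, 0.05, 21),
                        t_range=(-16, 16), t_step=1.0):
    # Assumed flow model at block centre (x, y): u = dx + a*x, v = dy + a*y.
    # Each motion vector votes, for every candidate zoom factor a, for the
    # (dx, dy) bin it implies; the accumulator peak gives the camera motion.
    bins = int((t_range[1] - t_range[0]) / t_step) + 1
    acc = np.zeros((len(alphas), bins, bins), dtype=int)
    for (x, y), (u, v) in zip(points, flows):
        for ai, a in enumerate(alphas):
            dx, dy = u - a * x, v - a * y
            ix = int(np.round((dx - t_range[0]) / t_step))
            iy = int(np.round((dy - t_range[0]) / t_step))
            if 0 <= ix < bins and 0 <= iy < bins:
                acc[ai, ix, iy] += 1
    ai, ix, iy = np.unravel_index(acc.argmax(), acc.shape)
    return (t_range[0] + ix * t_step, t_range[0] + iy * t_step, alphas[ai])

# Hypothetical check: synthetic flow from pan (3, -2) plus zoom a = 0.02,
# sampled at block centres spanning a small frame.
pts = np.array([[x, y] for x in range(-5, 6) for y in range(-5, 6)], dtype=float) * 32
flo = np.column_stack([3.0 + 0.02 * pts[:, 0], -2.0 + 0.02 * pts[:, 1]])
print(hough_pan_tilt_zoom(pts, flo))   # approximately (3.0, -2.0, 0.02)
```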

A Study on Frame Interpolation and Nonlinear Moving Vector Estimation Using GRNN (GRNN 알고리즘을 이용한 비선형적 움직임 벡터 추정 및 프레임 보간연구)

  • Lee, Seung-Joo; Bang, Min-Suk; Yun, Kee-Bang; Kim, Ki-Doo
    • Journal of IKEEE / v.17 no.4 / pp.459-468 / 2013
  • Given the nonlinear characteristics of frames, we propose a frame interpolation method using a GRNN (general regression neural network) to enhance visual picture quality. Using a full search over block sizes from 128x128 down to 1x1 to reduce blocking artifacts and image overlay, we select the frame containing the block with minimum error and re-estimate the nonlinear motion vector using the GRNN. We compare our scheme with forward (backward) motion compensation and bidirectional motion compensation in cases where the object movement is large, the scene includes zoom-in and zoom-out, or the camera focus has changed. Experimental results show that the proposed method provides better subjective image quality than conventional MCFI methods.
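The GRNN used for the re-estimation step is, in essence, Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets. A minimal sketch of such a predictor is shown below; the choice of inputs (block position and time) and targets (motion vectors) is purely illustrative and not taken from the paper.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    # General regression neural network (Nadaraya-Watson kernel regression):
    # weight each training target by a Gaussian kernel of its input distance.
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances to the query
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
    return (w[:, None] * y_train).sum(axis=0) / (w.sum() + 1e-12)

# Hypothetical use: interpolate the motion vector at an intermediate time t = 0.5
# from (block index, time) -> motion vector samples of neighbouring frames.
x_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # (block idx, time)
y_train = np.array([[2.0, 0.0], [4.0, 1.0], [2.5, 0.0], [4.5, 1.5]])  # (dx, dy)
print(grnn_predict(x_train, y_train, np.array([0.0, 0.5]), sigma=0.5))
```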

Analysis of Camera Operation in MPEG2 Compressed Domain Using Generalized Hough Transform Technique (일반화된 Hough 변환기법을 이용한 MPEG2 압축영역에서의 카메라의 움직임 해석)

  • Yoo, Won-Young; Choi, Jeong-Il; Lee, Joon-Whoan
    • The Transactions of the Korea Information Processing Society / v.7 no.11 / pp.3566-3575 / 2000
  • In this paper, we propose a simple and efficient method to estimate camera operation using compressed information extracted directly from an MPEG2 stream without complete decoding. Because the motion vectors in an MPEG2 video stream do not form a regular sequence, they are first converted into approximate optical flow using the properties of the predicted frames. These flows are then used to estimate the camera operation, consisting of pan and zoom, by a Hough transform technique. The method provided better results than the least-squares method for video streams of basketball and soccer games. The proposed method has reduced computational complexity because the information is obtained directly in the compressed domain, and it can be a useful technology for content-based searching and analysis of video information. The estimated camera operation is also applicable to searching for or tracking objects in an MPEG2 video stream without decoding.
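For comparison with the least-squares baseline mentioned above, a minimal least-squares fit of a simple pan/zoom flow model u = dx + a·x, v = dy + a·y to block motion vectors could look like the sketch below; the model and the variable names are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def fit_pan_zoom(points, flows):
    # Least-squares fit of a pan/zoom camera model to block motion vectors.
    # Assumed model at block centre (x, y): u = dx + a*x, v = dy + a*y,
    # where (dx, dy) is the translation and a is the zoom factor.
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    # Stack the u- and v-equations into one linear system A @ [dx, dy, a] = b.
    A = np.vstack([np.column_stack([ones, zeros, x]),
                   np.column_stack([zeros, ones, y])])
    b = np.concatenate([u, v])
    (dx, dy, a), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy, a

# Hypothetical check: synthetic flow generated by dx = 1.5, dy = -0.5, a = 0.01.
pts = np.array([[x, y] for x in range(-4, 5) for y in range(-4, 5)], dtype=float) * 16
flo = np.column_stack([1.5 + 0.01 * pts[:, 0], -0.5 + 0.01 * pts[:, 1]])
print(fit_pan_zoom(pts, flo))   # approximately (1.5, -0.5, 0.01)
```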

Object Tracking Using Weighted Average Maximum Likelihood Neural Network (최대우도 가중평균 신경망을 이용한 객체 위치 추적)

  • Sun-Bae Park; Do-Sik Yoo
    • Journal of Advanced Navigation Technology / v.27 no.1 / pp.43-49 / 2023
  • Object tracking has been studied with various techniques such as the Kalman filter and the Luenberger tracker. Even in situations to which existing signal processing techniques are not successfully applicable, such as when the system model is not well specified, artificial neural networks can be designed to track objects. In this paper, we propose an artificial neural network, which we call a 'maximum-likelihood weighted-average neural network', to continuously track unpredictably moving objects. This neural network does not estimate object locations directly; instead, it obtains location estimates by taking a weighted average of several maximum-likelihood tracking results computed with different data lengths. We compare the performance of the proposed system with those of the Kalman filter and maximum-likelihood object trackers and show that the proposed scheme exhibits excellent performance, adapting well to changes in the object's motion characteristics.
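A minimal sketch of the combining idea follows, assuming a locally constant-position model with white Gaussian noise so that each window's maximum-likelihood estimate reduces to a window mean, and with the neural-network-produced weights replaced by a fixed illustrative weight vector. This is only an interpretation of the weighted-average step, not the paper's network.

```python
import numpy as np

def ml_estimates(observations, window_lengths):
    # ML position estimates from the most recent k samples, under an
    # assumed locally constant-position model with white Gaussian noise
    # (the ML estimate over a window is then simply the window mean).
    return np.array([observations[-k:].mean(axis=0) for k in window_lengths])

def weighted_average_track(observations, window_lengths, weights):
    # Combine the per-window ML estimates with weights; in the paper the
    # weights come from a neural network, here they are given directly.
    est = ml_estimates(observations, window_lengths)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * est).sum(axis=0) / w.sum()

# Hypothetical usage: noisy 2-D positions of a slowly drifting object.
rng = np.random.default_rng(0)
obs = np.cumsum(rng.normal(0.1, 0.05, (50, 2)), axis=0) + rng.normal(0, 0.3, (50, 2))
print(weighted_average_track(obs, window_lengths=[2, 5, 10], weights=[0.5, 0.3, 0.2]))
```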

A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong Dong-Gil; Kang Dong-Goo; Yang Yu Kyung; Ra Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.1-8 / 2005
  • In this paper, we propose a two-stage head tracking algorithm suitable for a real-time active camera system with pan-tilt-zoom functions. In the color convergence stage, we assume that the shape of a head is an ellipse and that its model color histogram has been acquired in advance. The mean-shift method is then applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect temporal changes of object color and enhance the reliability of mean-shift based tracking, the target histogram obtained in the previous frame is used to update the model histogram. In this updating process, to alleviate error accumulation due to outliers in the previous frame's target ellipse, the previous target histogram is computed within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to enhance tracking reliability further, we set the initial position closer to the true position by compensating the global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information, for which we define a robust shape-similarity function based on the gradient direction. Extensive experimental results show that the proposed algorithm tracks heads well even when a person moves fast, the head size changes drastically, or the background is cluttered with distracting colors. The proposed algorithm also runs at about 30 fps on a standard PC.
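A minimal sketch of the histogram machinery this kind of tracker relies on is given below: a normalized color histogram, a Bhattacharyya similarity for comparing the model with a candidate ellipse, and a blended model update. The per-channel histogram, the update rate alpha, and the random test patches are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def color_histogram(patch, bins=16):
    # Normalized per-channel color histogram of an (H, W, 3) uint8 patch
    # (a simplification of the joint histogram used for the head model).
    hist = np.concatenate([np.histogram(patch[..., c], bins=bins,
                                        range=(0, 256))[0] for c in range(3)])
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    # Histogram similarity used to score a candidate ellipse against the model.
    return float(np.sqrt(p * q).sum())

def update_model(model_hist, target_hist, alpha=0.1):
    # Blend the previous-frame target histogram into the model so that
    # gradual color changes are followed; alpha is an illustrative rate.
    h = (1.0 - alpha) * model_hist + alpha * target_hist
    return h / h.sum()

# Hypothetical usage with random stand-in patches.
rng = np.random.default_rng(1)
model = color_histogram(rng.integers(0, 256, (40, 30, 3), dtype=np.uint8))
cand = color_histogram(rng.integers(0, 256, (40, 30, 3), dtype=np.uint8))
print(bhattacharyya(model, cand))
model = update_model(model, cand)
```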