• Title/Summary/Keyword: camera motion estimation


Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal / v.44 no.1 / pp.10-23 / 2022
  • Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the individual images' connected structure at different camera pose estimation steps, we can estimate camera poses more accurately for all structured mosaic-based image sets, including omnidirectional scenes.
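The bundle adjustment step above augments the usual reprojection error with a constraint term that keeps the rotation vectors of neighboring array cameras consistent. The abstract does not give the term's exact form; a minimal sketch, assuming a simple quadratic penalty over adjacent camera pairs (the function name, weight, and pairing scheme are illustrative, not the paper's formulation):

```python
import numpy as np

def rotation_consistency_penalty(rot_vecs, pairs, weight=1.0):
    """Quadratic penalty pulling the rotation vectors of adjacent
    array cameras toward each other; added to the reprojection error
    during bundle adjustment (hypothetical form of the constraint)."""
    penalty = 0.0
    for i, j in pairs:
        diff = rot_vecs[i] - rot_vecs[j]
        penalty += weight * float(diff @ diff)
    return penalty

# A 1x3 camera array: cameras 0 and 1 agree, camera 2 has drifted.
rots = np.array([[0.0, 0.1, 0.0],
                 [0.0, 0.1, 0.0],
                 [0.0, 0.3, 0.0]])
adjacent = [(0, 1), (1, 2)]
penalty = rotation_consistency_penalty(rots, adjacent)
```

An optimizer minimizing reprojection error plus this penalty would pull the drifted camera's rotation back toward its array neighbors.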

Feature-based Object Tracking using an Active Camera (능동카메라를 이용한 특징기반의 물체추적)

  • 정영기;호요성
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.694-701 / 2004
  • In this paper, we propose a feature-based tracking system that traces moving objects with a pan-tilt camera after separating the global motion of the active camera from the local motion of the moving objects. The tracking system traces only the local motion of the corner features in the foreground objects by finding the block motions between two consecutive frames using block-based motion estimation and eliminating the global motion from the block motions. For robust estimation of the camera motion using only the background motion, we suggest a dominant motion extraction method that classifies the background motions among the block motions. We also propose an efficient clustering algorithm based on the attributes of the motion trajectories of corner features to remove the motions of noise objects from the separated local motion. The proposed tracking system has demonstrated good performance on several test video sequences.
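The core of the scheme above is separating the camera-induced global motion from the block motions so that only object motion remains. A minimal sketch, assuming the dominant (camera) motion can be approximated by the median block motion; the paper's actual dominant motion extraction classifies background motions more carefully:

```python
import numpy as np

def separate_local_motion(block_vectors):
    """Estimate the dominant (camera) motion as the median block motion
    and subtract it, leaving only local object motion. A simplified
    stand-in for the paper's dominant motion extraction."""
    mv = np.asarray(block_vectors, dtype=float)
    global_motion = np.median(mv, axis=0)   # robust to a few moving objects
    local = mv - global_motion              # camera motion removed
    return global_motion, local

# Most blocks follow the panning camera (dx=2, dy=0); one block is an object.
vectors = [(2, 0)] * 8 + [(7, 3)]
g, local = separate_local_motion(vectors)
```

The object block's residual motion (5, 3) is what the corner-feature tracker would then follow.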

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun;Song, Jae-Bok;Kang, Sin-Cheon
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.263-269 / 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6 DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera can generate outliers which deteriorate the estimation. The outliers are removed by analyzing the magnitude histogram of the motion vectors of the corresponding features and by the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimation inaccurate. To eliminate the effect of dynamic objects, several candidate dynamic objects are generated by clustering the 3D positions of features, and each candidate is checked, based on the standard deviation of its features, to determine whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and by comparisons with both IMU- and wheel-based odometry. It is shown that the proposed scheme works well when wheel slip occurs or dynamic objects exist.
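The magnitude-histogram outlier test can be sketched as follows: motion vectors of correctly matched static features cluster in one magnitude range, while mismatches fall outside it. The bin count and the keep-only-the-dominant-bin policy here are assumptions, not the paper's exact procedure:

```python
import numpy as np

def reject_by_magnitude_histogram(flow, n_bins=10):
    """Keep feature motions whose magnitude falls in the dominant
    histogram bin — a simplified version of the magnitude-histogram
    outlier test described in the abstract."""
    mags = np.linalg.norm(flow, axis=1)
    counts, edges = np.histogram(mags, bins=n_bins)
    k = int(np.argmax(counts))                       # dominant magnitude range
    keep = (mags >= edges[k]) & (mags <= edges[k + 1])
    return keep

# Three consistent feature motions and one gross mismatch.
flow = np.array([[1.0, 0.1], [1.1, 0.0], [0.9, 0.2], [8.0, 5.0]])
mask = reject_by_magnitude_histogram(flow, n_bins=5)
```

Survivors of this test would then go through RANSAC before the 6 DOF motion is estimated.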

Adaptive Zoom Motion Estimation Method (적응적 신축 움직임 추정 방법)

  • Jang, Won-Seok;Kwon, Oh-Jun;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.17 no.8 / pp.915-922 / 2014
  • We propose an adaptive zoom motion estimation method in which a picture is divided into two areas based on distance information from a depth camera: an object area and a background area. In the proposed method, zoom motion estimation is applied only to the object area, not the background area. Furthermore, the block size for motion estimation in the object area is set smaller than that of the background area. Compared with the conventional zoom motion estimation method, this adaptive scheme reduces the complexity of motion estimation and improves its performance through the smaller block size in the object area. Based on the simulation results, the proposed method is compared with the conventional methods in terms of motion estimation accuracy and computational complexity.
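The depth-based division of a picture into object and background blocks can be sketched as below; the block size and depth threshold are illustrative, and the zoom motion search itself (applied only to "object" blocks) is omitted:

```python
import numpy as np

def classify_blocks(depth_map, block=16, near_thresh=2.0):
    """Label each block as 'object' or 'background' by its mean depth.
    Object blocks would get zoom motion estimation with a smaller block
    size; the threshold and sizes here are illustrative, not the paper's."""
    h, w = depth_map.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            mean_d = depth_map[y:y + block, x:x + block].mean()
            labels[(y, x)] = "object" if mean_d < near_thresh else "background"
    return labels

depth = np.full((32, 32), 5.0)      # far background plane
depth[0:16, 0:16] = 1.0             # a near object filling the top-left block
labels = classify_blocks(depth)
```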

Camera Motion Parameter Estimation Technique using 2D Homography and LM Method based on Invariant Features

  • Cha, Jeong-Hee
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.4 / pp.297-301 / 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, the feature information of an image has drawbacks: it varies with the camera viewpoint, and the quantity of information grows over time. The LM (Levenberg-Marquardt) method, which uses nonlinear least-squares estimation for camera extrinsic parameters, also has a weak point: the number of iterations required to approach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To address these shortfalls, we first propose constructing feature models using geometrically invariant vectors. Second, we propose a two-stage calculation method that improves accuracy and convergence by combining a homography with the LM method. In the experiments, we compare and analyze the proposed method with an existing method to demonstrate the superiority of the proposed algorithms.
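The first, linear stage of the two-stage scheme — estimating a 2D homography from point correspondences — can be sketched with the standard direct linear transform (DLT); the resulting H would then seed the LM refinement, which is omitted here:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: a 2D homography from point pairs, found
    as the null vector of the stacked constraint matrix. In a two-stage
    scheme this linear estimate initializes the nonlinear LM refinement."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# A pure translation by (2, 3) should be recovered exactly.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 2, y + 3) for x, y in src]
H = homography_dlt(src, dst)
```

With exact correspondences the constraint matrix has a one-dimensional null space and the DLT solution is exact; with noisy invariant features it is only a least-squares seed, which is why the LM stage follows.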

Dynamic Mosaic-based Foreground Motion Tracking and Compression/Transmission (동적 모자이크 기반의 전경 움직임 추적 및 압축전송)

  • 박동진;윤인모;김찬수;현웅근;김남호;정영기
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.741-744 / 2003
  • In this paper, we propose a dynamic mosaic-based compression system that creates a mosaic background and transmits only the change information. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For the camera motion estimation, we calculate perspective projection parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions. A modified block-based motion estimation is used to separate the background region.
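Sequentially estimated frame-to-frame perspective parameters can be chained so that every frame is registered into the mosaic's reference coordinates. A minimal sketch using 3x3 homography matrices; the composition step shown is standard, while the per-frame parameter estimation itself is omitted:

```python
import numpy as np

def accumulate_to_reference(per_frame_H):
    """Chain frame-to-previous-frame homographies so every frame maps
    into the first frame's (mosaic) coordinates — the sequential
    registration step described in the abstract, in sketch form."""
    H_total = np.eye(3)
    mosaic_transforms = [H_total.copy()]
    for H in per_frame_H:                 # H maps frame k into frame k-1
        H_total = H_total @ H
        mosaic_transforms.append(H_total / H_total[2, 2])
    return mosaic_transforms

# Two frames, each translated 10 px right relative to the previous one.
T = np.array([[1.0, 0, 10], [0, 1, 0], [0, 0, 1]])
transforms = accumulate_to_reference([T, T])
```

Warping each frame by its accumulated transform and compositing the background pixels yields the progressively growing mosaic.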

ESTIMATION OF PEDESTRIAN FLOW SPEED IN SURVEILLANCE VIDEOS

  • Lee, Gwang-Gook;Ka, Kee-Hwan;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.330-333 / 2009
  • This paper proposes a method to estimate the flow speed of pedestrians in surveillance videos. In the proposed method, the average moving speed of pedestrians is measured by estimating the size of the real-world motion from the observed motion vectors. For this purpose, pixel-to-meter conversion factors are calculated from the camera geometry. Also, the height information, which is lost through camera projection, is predicted statistically from simulation experiments. Compared to previous work on flow speed estimation, our method can be applied to various camera views because it separates the scene parameters explicitly. Experiments are performed on both simulated image sequences and real video. In the experiments on simulated videos, the proposed method estimated the flow speed with an average error of about 0.1 m/s. The proposed method also showed promising results on the real video.
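The pixel-to-meter conversion at the heart of the method can be sketched for the simplest fronto-parallel pinhole case; the paper derives its factors from the full camera geometry and statistically predicts the missing height, both of which are beyond this sketch:

```python
def meters_per_pixel(depth_m, focal_px):
    """Pinhole conversion factor on a plane at a given depth: one pixel
    subtends depth / focal_length meters. A simplified, fronto-parallel
    stand-in for the paper's scene-geometry factors."""
    return depth_m / focal_px

def flow_speed_mps(motion_px_per_frame, depth_m, focal_px, fps):
    """Convert an image-space motion magnitude to real-world speed."""
    return motion_px_per_frame * meters_per_pixel(depth_m, focal_px) * fps

# A pedestrian 10 m away, moving 4 px/frame at 30 fps, focal length 800 px.
speed = flow_speed_mps(4, 10.0, 800.0, 30)
```

Note how the scene parameters (depth, focal length, frame rate) enter as explicit factors — this separation is what lets the method transfer across camera views.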

Analysis of Camera Rotation Using Three Symmetric Motion Vectors in Video Sequence (동영상에서의 세 대칭적 움직임벡터를 이용한 카메라 회전각 분석)

  • 문성헌;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing / v.3 no.2 / pp.7-14 / 2002
  • This paper proposes a camera motion estimation technique that uses special relations among the motion vectors of geometrically symmetric triple points in two consecutive views from a single camera. The proposed technique uses camera-induced motion vectors and their relations rather than feature points and epipolar constraints. In contrast to the time-consuming iterative or numerical methods for calculating the E-matrix or F-matrix induced by epipolar constraints, the proposed technique calculates camera motion parameters such as panning, tilting, rolling, and zooming at once by applying the proposed linear equation sets to the motion vectors. With the devised background discriminants, it effectively reflects only the background region in the calculation of the motion parameters, making the calculation more accurate and fast enough to meet MPEG-4 requirements. Experimental results on various types of sequences show the validity and broad applicability of the proposed technique.

Camera Motion Estimation using Geometrically Symmetric Points in Subsequent Video Frames (인접 영상 프레임에서 기하학적 대칭점을 이용한 카메라 움직임 추정)

  • Jeon, Dae-Seong;Mun, Seong-Heon;Park, Jun-Ho;Yun, Yeong-U
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.2 / pp.35-44 / 2002
  • The translation and rotation of a camera cause global motion, which affects the entire frame in a video sequence. For video sequences containing global motion, it is practically impossible to extract exact video objects and to calculate genuine object motions. Therefore, a high compression ratio cannot be achieved, due to the large motion vectors. This problem can be solved when global motion-compensated frames are used. The existing camera motion estimation methods for global motion compensation have a large amount of computation in common. In this paper, we propose a simple global motion estimation algorithm that consists of linear equations without any iteration. The algorithm uses information of symmetric points in the frames of the video sequence. Discriminant conditions to distinguish regions belonging to the distant view from the foreground in the frame are presented. Only for the distant view satisfying the discriminant conditions are the linear equations for the panning, tilting, and zooming parameters applied. From the experimental results using the MPEG test sequences, we confirm that the proposed algorithm estimates correct global motion parameters. Moreover, the real-time capability of the proposed technique makes it applicable to many MPEG-4 and MPEG-7 related areas.
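The idea of recovering camera parameters from symmetric points with plain linear equations can be illustrated with a reduced zoom-plus-pan motion model: for points placed symmetrically about the image center, summing their motion vectors cancels the zoom term and differencing cancels the pan term, so no iteration is needed. This reduced model is an illustration only; the papers' full equation sets also recover tilting:

```python
import numpy as np

def pan_zoom_from_symmetric_points(p, mv_p, mv_q):
    """Closed-form pan and zoom from motion vectors at two points placed
    symmetrically about the image center (q = -p). Reduced model:
    v(x) = (s - 1) * x + pan, so the vector sum isolates pan and the
    difference isolates zoom. Illustrative, not the papers' equations."""
    mv_p = np.asarray(mv_p, dtype=float)
    mv_q = np.asarray(mv_q, dtype=float)
    pan = (mv_p + mv_q) / 2.0            # zoom terms cancel
    diff = (mv_p - mv_q) / 2.0           # pan terms cancel
    scale = 1.0 + diff[0] / p[0]         # recover (s - 1) at x = p
    return pan, scale

# Ground truth s = 1.1, pan = (3, 1); symmetric points at (±100, ±50):
# mv_p = 0.1 * (100, 50) + (3, 1) = (13, 6)
# mv_q = 0.1 * (-100, -50) + (3, 1) = (-7, -4)
pan, s = pan_zoom_from_symmetric_points((100.0, 50.0), (13.0, 6.0), (-7.0, -4.0))
```

Because the solution is closed-form, applying it per frame is cheap enough for the real-time use cases mentioned in the abstract.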

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho;Ko, Yun Ho
    • Journal of Korea Multimedia Society / v.20 no.12 / pp.1865-1873 / 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with a low computational burden, whereas a stereo camera suffers from the impossibility of stereo matching in low-texture image regions, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even lead to motion estimation failure. Therefore, in this paper, we propose three interpolation methods that can be applied to interpolate sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that the selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
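Bilinear interpolation of sparse LiDAR depth inside a cell of four known returns can be sketched as below; the "selective" part is represented here by a simple consistency check that rejects cells straddling a depth discontinuity (the threshold and rejection policy are assumptions, not the paper's exact rule):

```python
def bilinear_interpolate(d00, d10, d01, d11, fx, fy):
    """Bilinear interpolation of depth inside a cell bounded by four
    known LiDAR returns; (fx, fy) are fractional positions in [0, 1]."""
    top = d00 * (1 - fx) + d10 * fx
    bot = d01 * (1 - fx) + d11 * fx
    return top * (1 - fy) + bot * fy

def selective_bilinear(d00, d10, d01, d11, fx, fy, max_jump=1.0):
    """Interpolate only when the four depths are mutually consistent;
    otherwise return None so the feature point is discarded rather than
    assigned a depth across an object boundary (assumed policy)."""
    corners = (d00, d10, d01, d11)
    if max(corners) - min(corners) > max_jump:
        return None
    return bilinear_interpolate(d00, d10, d01, d11, fx, fy)

depth_mid = selective_bilinear(4.0, 4.2, 4.1, 4.3, 0.5, 0.5)  # smooth cell
rejected = selective_bilinear(4.0, 30.0, 4.1, 4.3, 0.5, 0.5)  # depth edge
```

Feature points that land in rejected cells simply contribute no 3D-2D correspondence, which trades a little density for the accuracy gains the abstract reports.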