• Title/Summary/Keyword: camera motion estimation


Lane Detection-based Camera Pose Estimation (차선검출 기반 카메라 포즈 추정)

  • Jung, Ho Gi;Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.23 no.5
    • /
    • pp.463-470
    • /
    • 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that the patterns are costly to make and install, and previous approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are hard to find in everyday environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while changing the horizontal angle with respect to the markings. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and a vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the fact that planar motion does not change the vanishing line of the plane and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates. This is verified by checking that the lane markings are upright in the bird's-eye-view image once the pan angle is compensated.
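As a rough sketch of the geometry this abstract relies on (not the authors' implementation): under a pinhole model, the lane markings' vanishing point is the intersection of the two image lines, and pan/tilt can be read off from the vanishing point's offset from the principal point. The intrinsics `fx, fy, cx, cy` and the function names are assumptions for illustration.

```python
import numpy as np

def line_intersection(l1, l2):
    # lines in homogeneous form (a, b, c) with ax + by + c = 0;
    # their intersection is the cross product, normalized to z = 1
    p = np.cross(l1, l2)
    return p / p[2]

def pan_tilt_from_vanishing_point(vp, fx, fy, cx, cy):
    # pan: horizontal angle of the lane direction relative to the optical
    # axis; tilt: vertical angle (pinhole model, radians)
    pan = np.arctan2(vp[0] - cx, fx)
    tilt = np.arctan2(vp[1] - cy, fy)
    return pan, tilt
```

With the vanishing point at the principal point, both angles are zero; offsets map to angles through the focal lengths.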

Motion Field Estimation Using U-Disparity Map in Vehicle Environment

  • Seo, Seung-Woo;Lee, Gyu-Cheol;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.428-435
    • /
    • 2017
  • In this paper, we propose a novel motion field estimation algorithm that applies a U-disparity map and forward-backward error removal in a vehicular environment. In general, an image obtained by a camera attached to a vehicle contains motion caused by the vehicle's movement; however, the obtained motion vectors are inaccurate because of surrounding environmental factors such as illumination changes and vehicle shaking. It is especially difficult to extract accurate motion vectors on the road surface because adjacent pixel values are so similar. The proposed algorithm therefore first removes the road surface region from the image using a U-disparity map, and then uses optical flow to obtain the motion vectors of objects in the remaining part of the image. The algorithm also applies a forward-backward error-removal technique to improve motion-vector accuracy, and the vehicle's movement is predicted by applying RANSAC (RANdom SAmple Consensus) to the obtained motion vectors, resulting in a motion field. Experimental results show that the proposed algorithm outperforms an existing algorithm.
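For context on the U-disparity map the abstract mentions: it is a per-column histogram of disparity values, in which a flat road surface appears as a distinctive curve that can be masked out. A minimal sketch (assumed layout: rows indexed by disparity, columns matching the image columns; not the paper's code):

```python
import numpy as np

def u_disparity(disp, max_d):
    # U-disparity: for each image column, a histogram of the disparity
    # values occurring in that column (rows: disparity 0..max_d-1)
    h, w = disp.shape
    u = np.zeros((max_d, w), dtype=np.int32)
    for col in range(w):
        vals = disp[:, col]
        vals = vals[(vals > 0) & (vals < max_d)]  # drop invalid disparities
        np.add.at(u[:, col], vals.astype(int), 1)  # unbuffered accumulation
    return u
```

Road pixels share a smoothly varying disparity per column, so they accumulate into a strong curve in this map, which is what makes the road region separable.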

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture a great deal of scene information from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively conducted. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, the corresponding contours across image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between epipolar planes and back-projected vectors from each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that the algorithm achieves precise contour matching and camera motion estimation.

3-D Facial Motion Estimation using Extended Kalman Filter (확장 칼만 필터를 이용한 얼굴의 3차원 움직임량 추정)

  • 한승철;박강령;김재희
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.883-886
    • /
    • 1998
  • In order to detect the user's gaze position on a monitor by computer vision, accurate estimates of the 3D positions and 3D motion of facial features are required. In this paper, we apply an EKF (Extended Kalman Filter) to estimate 3D motion, assuming that the motion is "smooth" in the sense of being represented by a constant-velocity translational and rotational model. Rotational motion is defined about the origin of a face-centered coordinate system, while translational motion is defined about that of a camera-centered coordinate system. For the experiments, we use 3D facial motion data generated by computer simulation. Experimental results show that the simulation data and the EKF estimates are similar.
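To illustrate the constant-velocity assumption the abstract describes, here is a linear Kalman predict/update step for the translational part only (the paper's full EKF also linearizes the rotational model; the matrix layout and function names here are assumptions, not the authors' code):

```python
import numpy as np

def make_cv_model(dt, dim=3):
    # state: [position, velocity] per axis, constant-velocity dynamics
    F = np.eye(2 * dim)
    F[:dim, dim:] = dt * np.eye(dim)  # position += velocity * dt
    H = np.zeros((dim, 2 * dim))
    H[:, :dim] = np.eye(dim)          # only positions are observed
    return F, H

def kf_step(x, P, z, F, H, Q, R):
    # predict under the constant-velocity model
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured feature position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With perfect measurements and negligible measurement noise, the filter simply confirms the predicted position, which is the behavior the "smooth motion" prior buys.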

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.245-254
    • /
    • 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, many of these methods mainly focus on finding the database image that corresponds to a query image. Thus, if the environment changes, for example when objects move within it, a robot is unlikely to find consistent corresponding points with any of the database images. To solve these problems, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme to handle the kidnapping problem, defined as re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on camera motion estimation for planning the robot's next movement and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of the vision-based autonomous navigation in dynamic environments.

1-Point Ransac Based Robust Visual Odometry

  • Nguyen, Van Cuong;Heo, Moon Beom;Jee, Gyu-In
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.2 no.1
    • /
    • pp.81-89
    • /
    • 2013
  • Many current visual odometry algorithms suffer from severe limitations, such as high computation time, algorithmic complexity, and failure in urban environments. In this paper, we present an approach that addresses all of the above problems using a single camera. Using a planar motion assumption and Ackermann's principle of motion, we model the vehicle's motion as circular planar motion (2 DOF). We then adopt a 1-point method to improve both the RANSAC algorithm and the relative motion estimation. In the RANSAC stage, the 1-point method generates the hypothesis, and the Levenberg-Marquardt method minimizes the geometric error function and verifies inliers. In motion estimation, we combine the 1-point method with a simple least-squares minimization to handle cases in which only a few feature points are present. The 1-point method is the key to speeding up our visual odometry application to real-time systems. Finally, a Bundle Adjustment algorithm refines the pose estimation. Results on real datasets in dynamic urban environments demonstrate the effectiveness of the proposed algorithm.
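The circular planar motion model from the Ackermann assumption can be sketched as follows: one parameter gives the yaw angle, and the translation lies along a chord at half that angle. This is an illustrative sketch (ground plane taken as the x-y plane; the axis convention and function name are assumptions, not the paper's code):

```python
import numpy as np

def circular_motion(theta, rho):
    # Ackermann / circular-arc planar motion (2 DOF): the vehicle yaws by
    # theta about the vertical axis while translating a distance rho along
    # a chord at angle theta / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = rho * np.array([np.cos(theta / 2.0), np.sin(theta / 2.0), 0.0])
    return R, t
```

Because the whole relative motion reduces to one angle (the scale rho is unobservable from a single camera anyway), one point correspondence suffices per RANSAC hypothesis, which is where the speedup comes from.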

Uncertainty Analysis of Observation Matrix for 3D Reconstruction (3차원 복원을 위한 관측행렬의 불확실성 분석)

  • Koh, Sung-shik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.3
    • /
    • pp.527-535
    • /
    • 2016
  • Statistical optimization algorithms have been widely developed to estimate 3D shape and motion. However, statistical approaches are limited in analyzing how sensitive SfM (Shape from Motion) is to the camera's geometric position, viewing angles, and so on. This paper proposes a quantitative method for estimating the uncertainty of an observation matrix from camera imaging configuration factors in order to predict reconstruction ambiguities in SfM. This is a very efficient way to predict the final reconstruction performance of an SfM algorithm. Moreover, the method shows how to derive practical guidelines for setting camera imaging configurations that can be expected to lead to reasonable reconstruction results. The experimental results verify the quantitative estimates of the observation matrix obtained from camera imaging configurations and confirm the effectiveness of the algorithm.

Wavelet Picture Compression and Decompression System Using Difference Image (차영상을 이용한 웨이브렛 동영상 압축 및 복원 시스템)

  • 오정태;나지명;김형주;김영민
    • Proceedings of the IEEK Conference
    • /
    • 2000.06b
    • /
    • pp.242-245
    • /
    • 2000
  • In this paper we present a new method for highly compressing video. The previous image is transformed with a wavelet and the transformed data are transmitted. The previous image is then subtracted from the next image, and the per-pixel difference values are scanned to find motion areas and their boundaries. Within the motion boundaries, motion vectors and error values are wavelet-transformed and transmitted. We also include camera motion estimation and compensation. Compared to MPEG-2 and MPEG-4, the system offers higher compression, better picture quality, and shorter processing time.
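The "scan the difference values to find motion areas" step can be sketched as a simple threshold on the per-pixel difference followed by a bounding box of the changed pixels; only that region would then be wavelet-coded. This is a minimal illustration (the threshold, the single-box simplification, and the function name are assumptions, not the paper's scheme):

```python
import numpy as np

def motion_region(prev, curr, thresh):
    # threshold the absolute per-pixel difference, then return the bounding
    # box (x_min, y_min, x_max, y_max) of the "moving" pixels
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None  # no motion between the two frames
    return xs.min(), ys.min(), xs.max(), ys.max()
```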

Tracking of Single Moving Object based on Motion Estimation (움직임 추정에 기반한 단일 이동객체 추적)

  • Oh Myoung-Kwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.6 no.4
    • /
    • pp.349-354
    • /
    • 2005
  • Research on computer vision aims at creating systems that substitute for the human visual sense. In particular, moving-object tracking systems have become an important area of study. In this study, we propose a tracking system for a single moving object based on motion estimation. The system performs motion estimation using a differential image, and then tracks the moving object by controlling the pan/tilt device of the camera. The proposed system is divided into an image acquisition and preprocessing phase, a motion estimation phase, and an object tracking phase. Experimental results show that the motion of the moving object was estimated and that the object was tracked correctly without being lost.
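One way to picture the differential-image tracking loop the abstract describes: threshold the frame difference, take the centroid of the changed pixels, and steer pan/tilt toward it. A minimal sketch, assuming grayscale frames and a pan/tilt unit that accepts normalized offsets (the threshold, sign convention, and function name are illustrative assumptions):

```python
import numpy as np

def track_step(prev, curr, thresh=20):
    # differential image: pixels whose absolute change exceeds thresh
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None  # no motion detected; hold the camera still
    cx, cy = xs.mean(), ys.mean()  # centroid of the moving region
    h, w = curr.shape
    # normalized offsets in [-0.5, 0.5] to feed the pan/tilt controller
    return (cx / w - 0.5, cy / h - 0.5)
```

Driving the pan/tilt by this offset each frame re-centers the moving object, which is the core of the tracking phase.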

Video Processing of MPEG Compressed Data For 3D Stereoscopic Conversion (3차원 입체 변환을 위한 MPEG 압축 데이터에서의 영상 처리 기법)

  • 김만배
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06a
    • /
    • pp.3-8
    • /
    • 1998
  • The conversion of monoscopic video to 3D stereoscopic video has been studied by some pioneering researchers. In spite of the commercial potential of the technology, two problems have hampered progress in this research area: vertical motion parallax and high computational complexity. The former degrades 3D perception, while the latter demands complex hardware. Previous research has dealt with NTSC video, thus requiring complex processing steps, one of which is motion estimation. This paper proposes a 3D stereoscopic conversion method for MPEG-encoded data. The proposed method has the advantage that motion estimation can be avoided by extracting motion data directly from the MPEG compressed stream, and that camera and object motion in random directions can be handled.
