• Title/Summary/Keyword: 3D-to-2D motion estimation


CALOS : Camera And Laser for Odometry Sensing (CALOS : 주행계 추정을 위한 카메라와 레이저 융합)

  • Bok, Yun-Su;Hwang, Young-Bae;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.180-187 / 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. The 2D laser sensor provides accurate depth information along a single scan plane, not of the whole 3D structure. In contrast, the CCD cameras provide a projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors, the laser sensor and the CCD cameras, and develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points among the scan data and their corresponding image points, and is then refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as for 3D reconstruction.

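The refinement step described in the CALOS abstract minimizes the discrepancy between observed image points and the reprojections of the scan points. As a minimal sketch of that objective (the planar pose parameterization `(yaw, tx, tz)`, the pinhole model, and all numbers are illustrative assumptions, not the paper's actual formulation):

```python
import math

def project(point3d, pose, focal=1.0):
    """Pinhole projection of a 3D point after applying a planar pose (yaw, tx, tz)."""
    yaw, tx, tz = pose
    x, y, z = point3d
    # rotate about the vertical axis, then translate in the ground plane
    xc = math.cos(yaw) * x + math.sin(yaw) * z + tx
    zc = -math.sin(yaw) * x + math.cos(yaw) * z + tz
    return (focal * xc / zc, focal * y / zc)

def reprojection_error(points3d, points2d, pose):
    """Mean squared distance between observed and predicted image points;
    this is the quantity a nonlinear refinement would drive toward zero."""
    err = 0.0
    for p3, (u_obs, v_obs) in zip(points3d, points2d):
        u, v = project(p3, pose)
        err += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return err / len(points3d)
```

At the true pose the error vanishes, and any off-the-shelf optimizer could refine an initial three-point estimate by minimizing this function.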

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho;Ko, Yun Ho
    • Journal of Korea Multimedia Society / v.20 no.12 / pp.1865-1873 / 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with low computational burden, whereas a stereo camera suffers from the impossibility of stereo matching in image regions with simple texture, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even cause motion estimation to fail. Therefore, in this paper, we propose three interpolation methods that can be applied to sparse LiDAR data. Simulation results obtained by applying the three methods to a visual odometry algorithm demonstrate that selective bilinear interpolation performs best in terms of computation speed and accuracy.
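A selective bilinear interpolation of sparse depth can be sketched as follows; the spread threshold and the nearest-corner fallback are assumptions for illustration, not the paper's exact rule:

```python
def selective_bilinear(corners, fx, fy, max_spread=1.0):
    """Interpolate a depth value inside a cell bounded by four sparse
    LiDAR samples corners = (d00, d10, d01, d11); fx, fy in [0, 1] are
    the fractional position inside the cell. When the corner depths
    disagree by more than max_spread (a likely depth discontinuity),
    fall back to the nearest corner instead of smearing across the edge."""
    d00, d10, d01, d11 = corners
    if max(corners) - min(corners) > max_spread:
        nearest = (round(fx), round(fy))
        return {(0, 0): d00, (1, 0): d10, (0, 1): d01, (1, 1): d11}[nearest]
    top = d00 * (1 - fx) + d10 * fx
    bottom = d01 * (1 - fx) + d11 * fx
    return top * (1 - fy) + bottom * fy
```

The selectivity is what keeps interpolated depths from landing between a foreground object and the background, which would corrupt the 3D-2D motion estimate.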

Motion Estimation Using 3-D Straight Lines (3차원 직선을 이용한 카메라 모션 추정)

  • Lee, Jin Han;Zhang, Guoxuan;Suh, Il Hong
    • The Journal of Korea Robotics Society / v.11 no.4 / pp.300-309 / 2016
  • This paper proposes a method for motion estimation of consecutive cameras using 3-D straight lines. The motion estimation algorithm uses two non-parallel 3-D line correspondences to quickly establish an initial guess for the relative pose of adjacent frames, which requires fewer correspondences than current approaches that need three correspondences when using 3-D points or 3-D planes. The estimated motion is further refined by a nonlinear optimization technique over the inlier correspondences for higher accuracy. Since there is no dominant line representation in 3-D space, we simulate the two line representations most widely adopted in the field and verify the better choice from the simulation results. We also propose a simple but effective 3-D line fitting algorithm that exploits the fact that the variance arises along the projective directions, so the problem can be reduced to a 2-D fitting problem. We provide experimental results comparing the proposed motion estimation system with state-of-the-art algorithms on an open benchmark dataset.
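The 2-D reduction of the line-fitting step can be illustrated with a standard total least-squares fit, where the line direction is the principal eigenvector of the 2x2 sample covariance (this generic fit is a stand-in, not the paper's specific algorithm):

```python
import math

def fit_line_2d(points):
    """Total least-squares 2-D line fit. Returns (centroid, direction),
    where direction is the principal axis of the covariance matrix,
    obtained in closed form for the 2x2 case."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # angle of the principal eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))
```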

Motion Depth Generation Using MHI for 3D Video Conversion (3D 동영상 변환을 위한 MHI 기반 모션 깊이맵 생성)

  • Kim, Won Hoi;Gil, Jong In;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering / v.22 no.4 / pp.429-437 / 2017
  • 2D-to-3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. Motion is also an important cue for depth estimation and is estimated by block-based motion estimation, optical flow, and so forth. This paper proposes a new method for motion depth generation using the Motion History Image (MHI) and evaluates the feasibility of utilizing the MHI. In the experiments, the proposed method was applied to eight video clips with a variety of motion classes. A qualitative test on the motion depth maps, together with a comparison of processing times, validated the feasibility of the proposed method.
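The MHI itself follows a standard update rule: pixels currently detected as moving are stamped with a maximum timestamp value, and the rest of the history decays each frame. A minimal sketch (the parameters `tau` and `delta` are the usual conventions, not values taken from the paper):

```python
def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One Motion History Image update step over frames stored as
    2D lists: moving pixels are set to tau, all others decay by
    delta per frame with a floor at 0, so recent motion stays bright."""
    return [[tau if moving else max(0, old - delta)
             for old, moving in zip(mhi_row, mask_row)]
            for mhi_row, mask_row in zip(mhi, motion_mask)]
```

In a depth-from-motion pipeline, the brighter (more recently moving) MHI regions would be mapped to nearer depth values.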

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method of 3D facial synthesis using the motion of 2D facial images. We use an optical flow-based method for motion estimation. We extract parameterized motion vectors using optical flow between adjacent images in the sequence in order to estimate the facial features and the facial motion in the 2D image sequence. Then, we combine the parameters of the parameterized motion vectors and estimate the facial motion information. We use parameterized vector models according to the facial features: our motion vector models cover the eye area, the lip-eyebrow area, and the face area. Combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the 3D facial model.


Object-based Conversion of 2D Image to 3D (객체 기반 3D 입체 영상 변환 기법)

  • Lee, Wang-Ro;Kang, Keun-Ho;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.9C / pp.555-563 / 2011
  • In this paper, we propose an object-based 2D-to-3D image conversion algorithm using motion estimation, color labeling, and non-local means filtering. In the proposed algorithm, we first extract the motion vector of each object by estimating the motion between frames and then segment the given image frame with a color labeling method. Combining the results of motion estimation and color labeling, we extract object regions and assign an exact depth value to each object to generate the right-view image. Occlusion regions occur while generating the right-view image, but they are effectively recovered by a non-local means filter. The experimental results show that the proposed algorithm performs much better than a conventional conversion scheme by effectively reducing eye fatigue.

Fast Motion Estimation using Adaptive Search Region Prediction (적응적 탐색 영역 예측을 이용한 고속 움직임 추정)

  • Ryu, Kwon-Yeol
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.7 / pp.1187-1192 / 2008
  • This paper proposes fast motion estimation using an adaptive search region and a new three-step search. The proposed method improves the quality of the motion-compensated image by 0.43 dB to 2.19 dB by predicting the motion of the current block from the motion vectors of neighboring blocks and adaptively setting up the search region using the predicted motion information. We show that the proposed method, with its new three-step search pattern, enables fast motion estimation by reducing the computational complexity per block by 1.3% to 1.9% compared with the conventional method.
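For reference, the classic three-step search that this work builds on can be sketched in a few lines with SAD block matching; the adaptive search-region prediction described in the abstract is not reproduced here, and the block size and step schedule are the textbook defaults:

```python
def sad(ref, cur, bx, by, dx, dy, bs=4):
    """Sum of absolute differences between the current block at (bx, by)
    and the block displaced by (dx, dy) in the reference frame."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def three_step_search(ref, cur, bx, by, bs=4):
    """Classic three-step search: probe 9 candidates around the current
    best at step sizes 4, 2, 1, re-centering on the best match each time."""
    best = (0, 0)
    for step in (4, 2, 1):
        candidates = [(best[0] + dx, best[1] + dy)
                      for dy in (-step, 0, step) for dx in (-step, 0, step)]
        best = min(candidates,
                   key=lambda d: sad(ref, cur, bx, by, d[0], d[1], bs))
    return best
```

Each step includes the incumbent among its candidates, so the final match is never worse than the zero-motion hypothesis, while only 25 SAD evaluations are needed instead of a full search over the ±7 window.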

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from 2D requires depth information for each picture element. The manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth information of every region from a single-view image with camera translation. The paper is based on the fact that the motion of every point within an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates the depth of each region relative to the average frame depth. Simulation results show that the estimated depth of regions belonging to near or far objects is consistent with the relative depth that a human perceives.

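The underlying relation is that, under pure camera translation, apparent motion magnitude is inversely proportional to depth. A minimal sketch of depth-relative-to-frame-average estimation under that assumption (the normalization against the frame average is this sketch's choice, mirroring the abstract's description rather than reproducing the paper's exact formula):

```python
import math

def relative_region_depth(motion_vectors):
    """Given one compensated motion vector (dx, dy) per region, return
    each region's depth relative to the frame-average depth: motion
    magnitude ~ 1/depth, so relative depth = avg_motion / region_motion.
    A zero vector means no observable parallax, i.e. effectively
    infinite depth."""
    mags = [math.hypot(dx, dy) for dx, dy in motion_vectors]
    avg = sum(mags) / len(mags)
    return [avg / m if m > 0 else float('inf') for m in mags]
```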

Reconfigurable Architecture Design for H.264 Motion Estimation and 3D Graphics Rendering of Mobile Applications (이동통신 단말기를 위한 재구성 가능한 구조의 H.264 인코더의 움직임 추정기와 3차원 그래픽 렌더링 가속기 설계)

  • Park, Jung-Ae;Yoon, Mi-Sun;Shin, Hyun-Chul
    • Journal of KIISE:Computer Systems and Theory / v.34 no.1 / pp.10-18 / 2007
  • Mobile communication devices such as PDAs and cellular phones need to perform several kinds of computation-intensive functions, including H.264 encoding/decoding and 3D graphics processing. In this paper, a new reconfigurable architecture is described that can perform either motion estimation for H.264 or rendering for 3D graphics. The proposed motion estimation techniques use a new efficient SAD computation ordering together with the DAU and FDVS algorithms. The new approach reduces the computation by 70% on average compared with JM 8.2, without affecting the quality. In 3D rendering, a midline traversal algorithm is used for parallel processing to increase throughput. The memories are partitioned into 8 blocks so that 2.4 Mbits (47%) of memory is shared and selective power shutdown is possible during motion estimation and 3D graphics rendering. Processing elements are also shared, further reducing the chip area by 7%.

Bundle Adjustment and 3D Reconstruction Method for Underwater Sonar Image (수중 영상 소나의 번들 조정과 3차원 복원을 위한 운동 추정의 모호성에 관한 연구)

  • Shin, Young-Sik;Lee, Yeong-jun;Cho, Hyun-Taek;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.11 no.2 / pp.51-59 / 2016
  • In this paper we present (1) an analysis of imaging sonar measurements for two-view relative pose estimation of an autonomous vehicle and (2) a bundle adjustment and 3D reconstruction method using imaging sonar. Sonar has been a popular sensor for underwater applications due to its robustness to water turbidity and limited visibility in the water medium. While vision-based motion estimation has been applied to many ground vehicles for motion estimation and 3D reconstruction, imaging sonar poses challenges for relative sensor-frame motion estimation. We focus on the fact that the sonar measurement is inherently ambiguous. This paper illustrates the source of the ambiguity in sonar measurements and summarizes the assumptions required for sonar-based robot navigation. For validation, we synthetically generated underwater seafloors of varying complexity to analyze the error in the motion estimation.
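The ambiguity referred to in the abstract comes from the imaging sonar's measurement model: it reports range and bearing but integrates over the elevation angle, so distinct 3D points can produce identical measurements. A minimal sketch of this model (the coordinate convention is an assumption for illustration):

```python
import math

def sonar_measurement(x, y, z):
    """Idealized imaging sonar measurement of a 3D point: range and
    bearing are observed, but the elevation angle is lost. Any point
    on the same range arc within the vertical beam maps to the same
    (range, bearing) pair, which is the source of the ambiguity."""
    rng = math.sqrt(x * x + y * y + z * z)
    bearing = math.atan2(y, x)
    return rng, bearing
```

Because of this many-to-one mapping, two-view relative pose estimation from sonar alone is under-constrained, and the paper's assumptions (e.g. about the seafloor) are what restore observability.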