• Title/Summary/Keyword: camera translation


A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, including stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: correspondences between feature points detected in the images are used to estimate depth from the motion of those points. Approaches based on motion vectors suffer from occlusion and missing-part problems, and image blur is ignored during feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system integrates a thin-lens camera model, which accounts for the light and optical properties, with a perspective projection camera model, which accounts for depth from lens translation. Depth from lens translation is then formulated using feature points detected on the edges of the image blur; these feature points carry depth information derived from the blur width. Shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to maintain the orthogonality of the factor matrices of the singular value decomposition. Experiments on sequences of real and synthetic images compare the presented method with conventional depth from lens translation, and the results demonstrate the validity and applicability of the proposed method to depth estimation.
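
The sequential SVD factorization used here follows the Tomasi-Kanade factorization idea: centered feature tracks form a measurement matrix of rank 3, whose truncated SVD separates motion and shape. A minimal batch-form sketch in NumPy (names are illustrative, and the sequential update and metric upgrade of the paper are omitted):

```python
import numpy as np

def factorize_shape_motion(tracks):
    """tracks: (F, P, 2) array of P feature points tracked over F frames.
    Returns a 2F x 3 motion matrix and a 3 x P shape matrix, up to an
    affine ambiguity (no metric upgrade is performed in this sketch)."""
    # Centering each frame on its feature centroid removes the translation.
    centered = tracks - tracks.mean(axis=1, keepdims=True)
    # Stack x- and y-coordinates into the 2F x P measurement matrix W.
    W = np.concatenate([centered[..., 0], centered[..., 1]], axis=0)
    # Under affine projection, W has rank 3; truncate the SVD accordingly.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    S3 = np.sqrt(np.diag(s[:3]))
    motion = U[:, :3] @ S3    # per-frame camera axes (orthogonality from the SVD)
    shape = S3 @ Vt[:3, :]    # 3D point coordinates
    return motion, shape
```

A sequential variant updates this rank-3 decomposition as each new frame arrives instead of re-factorizing the full measurement matrix.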


Determination of Camera System Orientation and Translation in Cartesian Coordinate (직교 좌표에서 카메라 시스템의 방향과 위치 결정)

  • 이용중
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.04a / pp.109-114 / 2000
  • This paper presents a new method for determining camera system rotation and translation in 3-D space using a recursive least squares method. The computation reduces to a linear algorithm, where the equations are either given or obtained from five or more point correspondences; good results are obtained when more than eight points are available. A main advantage of the new method is that it decouples rotation from translation and thereby reduces computation. With respect to errors in the input image data, adding one feature correspondence beyond the required minimum improves the solution accuracy drastically, whereas further increases in the number of correspondences improve the accuracy only slowly. The proposed algorithm makes camera system rotation and translation easy to determine even when the camera is attached to the end effector of a six-degrees-of-freedom industrial robot manipulator deployed in the field.
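
A minimal sketch of the recursive least squares update that such a linear algorithm can be built on, in NumPy (this shows only the generic RLS step with illustrative names, not the paper's parameterization of rotation and translation):

```python
import numpy as np

class RecursiveLeastSquares:
    """Refine a linear parameter estimate as point correspondences arrive."""

    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)   # current parameter estimate
        self.P = np.eye(n_params) * 1e6   # large initial covariance
        self.lam = forgetting             # forgetting factor (1.0 = none)

    def update(self, x, y):
        """Incorporate one scalar observation y = x @ theta + noise."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)             # gain vector
        self.theta += k * (y - x @ self.theta)   # correct the estimate
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.theta
```

Each extra correspondence feeds in more rows (x, y), matching the behavior described above: the first correspondence beyond the minimum helps a lot, later ones much less.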


Robot Posture Estimation Using Circular Image of Inner-Pipe (원형관로 영상을 이용한 관로주행 로봇의 자세 추정)

  • Yoon, Ji-Sup; Kang, E-Sok
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. Such a robot is usually equipped with a lighting device and a camera on its head for monitoring and inspecting defects on the pipe wall and for maintenance operations. The proposed method uses these existing devices, without introducing extra sensors, and is based on the fact that the position and the intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and the camera. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. Since the vanishing point of the reflected light moves in the direction opposite to the camera rotation, the camera rotation angle can be estimated; and since the brightest part of the reflected light moves in the same direction as the camera translation, the camera position can be obtained. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
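
The two image cues can be sketched directly: the brightest reflected-light region tracks the camera translation, and the vanishing-point offset tracks the (negated) camera rotation. A minimal NumPy illustration (the threshold and the small-angle model are assumptions, not the paper's implementation):

```python
import numpy as np

def brightest_region_offset(gray):
    """Centroid of the brightest pixels, as an offset from the image center.
    This offset moves in the same direction as the camera translation."""
    ys, xs = np.nonzero(gray >= 0.9 * gray.max())  # top-intensity pixels
    h, w = gray.shape
    return np.array([xs.mean() - w / 2.0, ys.mean() - h / 2.0])

def rotation_from_vanishing_point(vp_offset, focal_px):
    """Small-angle pan/tilt from the vanishing-point offset of the reflected
    light, which moves opposite to the camera rotation (hence the signs)."""
    pan = -np.arctan2(vp_offset[0], focal_px)
    tilt = -np.arctan2(vp_offset[1], focal_px)
    return pan, tilt
```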

Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation (열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템)

  • Seunghyeon Lee; Taejoo Kim; Yukyung Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.48-52 / 2023
  • Many studies have been conducted to make Visual Place Recognition reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are strongly affected by illumination changes. In this paper, we therefore use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, instead of RGB; it is shown to be more reliable in low-light and highly noisy conditions. In addition, although the sensor used is an LWIR camera, the thermal image is converted into an RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline method by about 0.19 in recall.
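
Recall for place recognition is usually measured by nearest-neighbor retrieval over global descriptors; a minimal sketch of that evaluation (illustrative only; it assumes the thermal-to-RGB translation and descriptor extraction have already produced feature vectors and ground-truth positions):

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=25.0):
    """query_desc: (Q, D) descriptors of pseudo-RGB queries; db_desc: (N, D).
    query_pos, db_pos: metric positions used as ground truth. A query counts
    as correct if its retrieved match lies within dist_thresh of its position."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    nearest = (q @ d.T).argmax(axis=1)        # cosine-similarity retrieval
    err = np.linalg.norm(db_pos[nearest] - query_pos, axis=1)
    return float((err <= dist_thresh).mean())
```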

A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun; Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC / v.37 no.8 / pp.11-23 / 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique that can be applied to platforms with rather limited motion capability. In particular, the proposed technique can find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation using calibration data obtained from pure translation of the camera along two different axes. We derive the calibration algorithm by exploiting an image rectification technique, such that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and the translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.
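
The geometric core, recovering the camera-to-platform rotation from pure translations along two known platform axes, can be viewed as an orthogonal Procrustes problem: each translation produces a focus-of-expansion direction in camera coordinates, and the rotation aligning the platform axes to those directions follows from an SVD. A hedged sketch (the paper's actual procedure works through image rectification and the epipolar constraint):

```python
import numpy as np

def rotation_from_translations(platform_dirs, camera_dirs):
    """platform_dirs, camera_dirs: (K, 3) unit vectors, K >= 2. Each row pairs
    a platform translation axis with the focus-of-expansion direction it
    induces in the image. Returns R such that camera_dir = R @ platform_dir."""
    H = camera_dirs.T @ platform_dirs               # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce a proper rotation
    return U @ D @ Vt
```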


Measurement of object depth information using two CCD camera (두 대의 CCD 카메라를 이용한 물체의 깊이정보 측정)

  • 전정희; 노경완; 김충원
    • Proceedings of the IEEK Conference / 1998.06a / pp.693-696 / 1998
  • For camera calibration, this paper describes two steps: estimating the camera constants and the camera parameters. The former comprise the radial lens distortion, the image center, the focal length, and so on; the latter comprise the translation, rotation, and so on. The calibration uses Tsai's algorithm. Matching points acquired from the two CCD cameras are then introduced into an overdetermined system whose solution yields the object depth information.
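
Once both CCD cameras are calibrated, depth follows from triangulating each matched point pair. A minimal linear (DLT) triangulation sketch in NumPy, assuming Tsai calibration has already produced the two 3x4 projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: matched pixels (u, v).
    Returns the 3D point; its coordinate along a camera's optical axis
    is the object depth."""
    # Each view gives two linear constraints A X = 0 on the homogeneous point.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```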


Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki; Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and its illumination from all directions in far fewer omnidirectional images. Owing to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that takes the inlier distribution into account. First, a one-parameter non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we compute the essential matrix of the camera under unknown motions and then determine the camera information: rotation and translation. Standard deviations are used as a quantitative measure for selecting a proper inlier set. The experimental results show that the method achieves a precise estimation of the omnidirectional camera model and of the extrinsic parameters, including rotation and translation.
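
Determining rotation and translation from the essential matrix is a standard decomposition; a minimal sketch (the correct candidate among the four must still be selected by checking that points reconstruct in front of both cameras, which is omitted here):

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs encoded by an essential matrix;
    t is recovered only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                # translation direction
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```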


Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung; Ryu, Minsoo; Park, Jong-Il
    • ETRI Journal / v.37 no.4 / pp.766-771 / 2015
  • Robust motion estimation plays an important role in novel methods of human-computer interaction with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion, owing to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of that point. The exo-cameras easily track the exact position of the point of interest by triangulation; once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, the ambiguity between rotation and translation is eliminated, and the exact motion of the target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, for example in interactive games designed for special purposes such as remote rehabilitation care systems.
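
The exo-camera triangulation step can be sketched with the midpoint method: each exo-camera contributes a ray toward the ego-camera, and the target position is the midpoint of the shortest segment between the rays (illustrative names; the paper does not specify this particular triangulation):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: exo-camera centers; d1, d2: unit rays toward the ego-camera.
    Returns the midpoint of the shortest segment between the two rays."""
    b = c2 - c1
    a = d1 @ d2
    denom = 1.0 - a ** 2                   # rays must not be parallel
    s = (b @ d1 - (b @ d2) * a) / denom    # parameter along ray 1
    t = ((b @ d1) * a - b @ d2) / denom    # parameter along ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

With the position fixed by the exo-cameras, the ego-camera only has to supply orientation, which is what removes the rotation/translation ambiguity.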

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만; 박영민; 윤영우
    • Journal of the Institute of Convergence Signal Processing / v.5 no.2 / pp.96-105 / 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and manually creating such 3D models is time consuming and expensive. The goal of this paper is to estimate the relative depth of every region in a single-view image sequence taken under camera translation. The work is based on the fact that the motion of every point in an image taken under camera translation depends on that point's depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We develop a framework that estimates the average frame depth by analyzing the motion vectors and then calculates each region's depth relative to that average. Simulation results show that the estimated depths of regions belonging to near and far objects are consistent with the relative depths a human observer perceives.
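
Under lateral camera translation, image motion is inversely proportional to depth, so once rotation and zoom have been compensated, relative region depth can be read off the motion-vector magnitudes. A minimal sketch of that relationship (illustrative, not the paper's exact framework):

```python
import numpy as np

def relative_region_depth(motion_vectors, labels):
    """motion_vectors: (H, W, 2) rotation/zoom-compensated motion field.
    labels: (H, W) integer region segmentation. Under lateral translation,
    depth ~ 1 / |motion|; returns each region's depth relative to the
    frame average (values > 1 mean farther than average)."""
    mag = np.linalg.norm(motion_vectors, axis=2)
    frame_mag = mag.mean()   # proxy for the average frame (inverse) depth
    return {int(r): frame_mag / max(mag[labels == r].mean(), 1e-9)
            for r in np.unique(labels)}
```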


Motion Compensated Subband Video Coding with Arbitrarily Shaped Region Adaptivity

  • Kwon, Oh-Jin; Choi, Seok-Rim
    • ETRI Journal / v.23 no.4 / pp.190-198 / 2001
  • The performance of Motion Compensated Discrete Cosine Transform (MC-DCT) video coding is improved by using region-adaptive subband image coding [18]. On the assumption that the video is acquired from a camera on a moving platform and that the distance between the camera and the scene is large enough, both the camera motion and the motion of moving objects in a frame are compensated. For the compensation of camera motion, a feature matching algorithm is employed: several feature points extracted using a Sobel operator are used to compensate the camera's translation, rotation, and zoom. The illumination change between frames is also compensated. The motion-compensated frame differences are divided into three arbitrarily shaped regions, called stationary background, moving objects, and newly emerging areas, and a different quantizer is used for each region. Compared with conventional MC-DCT video coding using a block matching algorithm, the proposed video coding scheme shows about a 1.0 dB improvement on average for the experimental video samples.
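
The block matching algorithm that the scheme is compared against is a full-search SAD minimization; a minimal sketch:

```python
import numpy as np

def full_search_block_match(ref, cur, y, x, block=16, radius=8):
    """Motion vector for the block of `cur` at (y, x), found by exhaustive
    SAD search over a +/- radius window in the reference frame `ref`."""
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```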
