• Title/Abstract/Keywords: camera translation

Search results: 73 items (processing time: 0.024 s)

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed in the past, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is a shape-from-motion approach that uses feature points: it relies on correspondences between feature points detected in the images and estimates depth from the motion of those points. Approaches based on motion vectors, however, suffer from occlusion and missing-part problems, and image blur is ignored during feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. To capture this relationship, we first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then formulated using feature points detected at the edges of the image blur; these feature points carry depth information derived from the width of the blur. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments on sequences of real and synthetic images compare the presented method with conventional depth from lens translation, and the results demonstrate the validity and applicability of the proposed method to depth estimation. An illustrative sketch of the factorization step follows this entry.

  • PDF
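
Below is a minimal, hedged sketch of the rank-3 SVD factorization idea the abstract above refers to (in the spirit of Tomasi-Kanade shape-from-motion). The defocus modeling and the sequential updating described in the paper are not reproduced; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def factorize_measurements(W):
    """W: 2F x P matrix of tracked feature-point coordinates
    (F frames, P points). Returns motion (2F x 3) and shape (3 x P)."""
    # Register the measurements: subtract each row's mean over the points.
    W = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the three dominant singular values (rank-3 approximation).
    M = U[:, :3] * np.sqrt(s[:3])            # motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape factor
    return M, S

# Toy usage: rank-3 synthetic data is recovered exactly (up to an affine ambiguity).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 20))
M, S = factorize_measurements(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))
```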

직교 좌표에서 카메라 시스템의 방향과 위치 결정 (Determination of Camera System Orientation and Translation in Cartesian Coordinate)

  • 이용중
    • 한국공작기계학회: 학술대회논문집 / 한국공작기계학회 2000년도 춘계학술대회논문집 / pp.109-114 / 2000
  • A new method for determining camera system rotation and translation in 3-D space using a recursive least squares method is presented in this paper. With this method, the solution is computed by a linear algorithm, where the required equations are either given or obtained from five or more point correspondences. Good results can be obtained when more than eight point correspondences are available. A main advantage of the new method is that it decouples rotation from translation and thereby reduces computation. With respect to errors related to the number of points in the input image data, adding one feature correspondence beyond the required minimum improves the solution accuracy drastically; further increases in the number of feature correspondences, however, improve the accuracy only slowly. The algorithm proposed in this paper makes the camera system rotation and translation easy to determine even when the camera is attached to the end-effector of a six-degree-of-freedom industrial robot manipulator in an industrial setting. An illustrative pose-recovery sketch follows this entry.

  • PDF
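
As a hedged illustration of the kind of linear pose recovery the entry above describes (not the paper's own recursive least squares formulation), the sketch below recovers rotation and translation from eight or more point correspondences via the essential matrix, using OpenCV routines.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """pts1, pts2: Nx2 float arrays of corresponding image points (N >= 8).
    K: 3x3 camera intrinsic matrix. Returns (R, t), with t known up to scale."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose the essential matrix and pick the physically valid (R, t).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```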

원형관로 영상을 이용한 관로주행 로봇의 자세 추정 (Robot Posture Estimation Using Circular Image of Inner-Pipe)

  • 윤지섭; 강이석
    • 대한전기학회논문지: 시스템및제어부문D / Vol. 51, No. 6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. An inner-pipe crawling robot is usually equipped with a lighting device and a camera on its head for monitoring and inspecting defects on the pipe wall and for maintenance operations. The proposed methodology uses these devices without introducing extra sensors, and is based on the fact that the position and intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and of the camera. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. The camera rotation angle is estimated from the fact that the vanishing point of the reflected light moves in the direction opposite to the camera rotation, and the camera position is obtained from the fact that the brightest part of the reflected light moves in the same direction as the camera translation. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
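
A hypothetical sketch of the two geometric cues described above follows; the function name, the small-angle conversion, and the inputs are illustrative assumptions rather than the paper's actual algorithm.

```python
import numpy as np

def estimate_camera_offsets(vanish_point, bright_center, image_center, focal_px):
    """vanish_point, bright_center, image_center: (x, y) pixel coordinates;
    focal_px: focal length in pixels. Returns (pan, tilt) in radians and the
    lateral offset direction in pixels."""
    # Rotation cue: the vanishing point of the reflected light moves opposite
    # to the camera rotation, hence the sign flip.
    dv = np.asarray(vanish_point, float) - np.asarray(image_center, float)
    pan, tilt = -np.arctan2(dv, focal_px)
    # Translation cue: the brightest part of the reflection moves in the same
    # direction as the camera translation.
    db = np.asarray(bright_center, float) - np.asarray(image_center, float)
    return (pan, tilt), db
```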

열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템 (Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation)

  • 이승현; 김태주; 최유경
    • 로봇학회논문지 / Vol. 18, No. 1 / pp.48-52 / 2023
  • Many studies have been conducted to make Visual Place Recognition reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are greatly influenced by illumination changes. In this paper, we therefore use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, instead of RGB, which is more reliable in low-light and highly noisy conditions. In addition, although the sensor used is an LWIR camera, the thermal image is converted into an RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline method by about 0.19 in recall.
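
As a minimal illustration of the retrieval step in such a place recognition system (the thermal-to-RGB translation network itself is not shown, and the descriptor source is assumed), Recall@1 can be computed as the fraction of queries whose nearest database descriptor belongs to the correct place:

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_ids, db_ids):
    """query_desc: QxD and db_desc: NxD L2-normalised descriptor arrays;
    query_ids, db_ids: 1-D arrays of ground-truth place labels."""
    sims = query_desc @ db_desc.T          # cosine similarity
    top1 = sims.argmax(axis=1)             # best database match per query
    return float(np.mean(np.asarray(db_ids)[top1] == np.asarray(query_ids)))
```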

영상 교정을 이용한 헤드-아이 보정 기법 (A Head-Eye Calibration Technique Using Image Rectification)

  • Kim, Nak-Hyun; Kim, Sang-Hyun
    • 대한전자공학회논문지TC / Vol. 37, No. 8 / pp.11-23 / 2000
  • Head-eye calibration is the process of estimating the orientation and position of a camera mounted on a movable platform such as a robot. This paper presents a new head-eye calibration technique that can be applied to platforms with rather limited degrees of freedom of motion. The proposed technique is particularly applicable to finding the relative rotation of a camera mounted on a linear platform that has no rotational degree of freedom. The algorithm computes the rotation from calibration data obtained through pure translations along two different axes, and is derived from the property that a rectified stereo image pair must satisfy the epipolar constraint. The paper presents an algorithm for finding the rotation and translation parameters of the camera with respect to the platform coordinate frame, and verifies its validity through simulations and real experiments. A sketch of the two-translation idea follows this entry.

  • PDF
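
The sketch below illustrates the core two-translation idea in the abstract above under simplifying assumptions (known, consistently signed epipole directions); it is not the paper's algorithm, and the names are hypothetical.

```python
import numpy as np

def rotation_from_translations(platform_axes, camera_epipoles):
    """platform_axes, camera_epipoles: 2x3 arrays of corresponding unit
    direction vectors. Returns R mapping platform directions to camera ones."""
    def frame(a, b):
        a = a / np.linalg.norm(a)
        c = np.cross(a, b); c = c / np.linalg.norm(c)
        return np.column_stack([a, c, np.cross(a, c)])   # right-handed basis
    P = frame(platform_axes[0], platform_axes[1])
    C = frame(camera_epipoles[0], camera_epipoles[1])
    return C @ P.T

# Synthetic check: a camera rotated 25 degrees about z is recovered exactly.
th = np.deg2rad(25.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
axes = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(np.allclose(rotation_from_translations(axes, (R_true @ axes.T).T), R_true))
```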

두 대의 CCD 카메라를 이용한 물체의 깊이정보 측정 (Measurement of object depth information using two CCD camera)

  • 전정희; 노경완; 김충원
    • 대한전자공학회: 학술대회논문집 / 대한전자공학회 1998년도 하계종합학술대회논문집 / pp.693-696 / 1998
  • For camera calibration, this paper describes two steps: estimating the camera constants and the camera parameters. The former include the radial lens distortion, image center, and focal length; the latter include the translation and rotation. The calibration uses Tsai's algorithm. In this paper, matching points acquired from the two CCD cameras are introduced into an overdetermined system, and the object depth information is measured. A sketch of the depth measurement from a rectified pair follows this entry.

  • PDF
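
As a minimal sketch of the final measurement step described above (Tsai's calibration itself is not reproduced): once the two CCD cameras are calibrated and the pair rectified, depth follows directly from the disparity of matched points. The focal length and baseline values below are assumptions for illustration.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point seen at column x_left in the left image and
    x_right in the right image of a rectified stereo pair."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: mismatched or infinitely far point")
    return focal_px * baseline_m / disparity

# e.g. f = 800 px, B = 0.12 m, disparity = 16 px  ->  Z = 6.0 m
print(depth_from_disparity(412, 396, 800, 0.12))
```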

인라이어 분포를 이용한 전방향 카메라의 보정 (Calibration of Omnidirectional Camera by Considering Inlier Distribution)

  • 홍현기; 황용호
    • 한국게임학회 논문지 / Vol. 7, No. 4 / pp.63-70 / 2007
  • An omnidirectional camera system with a wide field of view can acquire a large amount of information about the surrounding scene from only a few images, and is therefore widely applied in fields such as surveillance and 3-D analysis. This paper proposes a new self-calibration algorithm that automatically estimates the translation and rotation parameters of an omnidirectional camera with a fisheye lens from its input images. First, the projection model of the camera, expressed with a single parameter, is estimated from images obtained by rotating the camera to arbitrary angles. The essential matrix is then computed for the various camera poses; in this process, the standard deviation, which reflects how the feature points are distributed over the image region, is used as a quantitative criterion for selecting a suitable inlier set from the target images. Various experiments confirm that the proposed algorithm accurately estimates the projection model and the rotation and translation parameters of the omnidirectional camera. A sketch of the distribution-aware inlier selection follows this entry.

  • PDF
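
A hedged sketch of the inlier-distribution criterion mentioned above: among candidate inlier sets (for example, those produced by RANSAC hypotheses during essential matrix estimation), the set whose points are both numerous and widely spread, as measured by the standard deviation of their coordinates, is preferred. The exact weighting below is an assumption, not the paper's score.

```python
import numpy as np

def inlier_set_score(points):
    """points: Nx2 array of inlier feature coordinates. Larger is better."""
    spread = points.std(axis=0).mean()   # average of x and y standard deviations
    return len(points) * spread          # favour large, well-distributed sets

def pick_best_inlier_set(candidate_sets):
    """candidate_sets: list of Nx2 arrays; returns the best-scoring set."""
    return max(candidate_sets, key=inlier_set_score)
```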

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung; Ryu, Minsoo; Park, Jong-Il
    • ETRI Journal / Vol. 37, No. 4 / pp.766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that can extract human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the orientation of the point of interest. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of the target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
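
A minimal sketch of the exo-camera part described above: two calibrated external cameras triangulate the 3-D position of the point of interest by linear (DLT) triangulation. The ego-camera's orientation estimation is not modelled here, and the function name is an illustrative assumption.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices of the two exo-cameras;
    x1, x2: (u, v) pixel observations of the same point. Returns the 3-D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                  # de-homogenise
```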

단안영상에서 움직임 벡터를 이용한 영역의 깊이추정 (A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence)

  • 손정만; 박영민; 윤영우
    • 융합신호처리학회논문지 / Vol. 5, No. 2 / pp.96-105 / 2004
  • Reconstructing a 3-D image from a 2-D image requires depth information for every pixel, and manually building a 3-D model generally takes much time and cost. The goal of this paper is to extract the relative depth information of regions from monocular images acquired while the camera is moving. The approach is based on the fact that the motion of every point in the image induced by the camera movement depends on its depth. Motion vectors obtained with a full-search technique are compensated for camera rotation and zoom. The average depth is estimated by analyzing the motion vectors, and the relative depth of each region with respect to the average depth is then computed. Experimental results show that the relative depths of the regions agree with the relative depths perceived by humans. A rough sketch of this relative-depth computation follows this entry.

  • PDF
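
Below is a rough sketch of the relative-depth computation the abstract describes, under the simplifying assumption that, after rotation and zoom compensation, a region's image motion is inversely proportional to its depth; the names and toy values are illustrative.

```python
import numpy as np

def relative_region_depths(region_motion):
    """region_motion: dict {region_id: (dx, dy)} of compensated mean motion
    vectors. Returns {region_id: depth relative to the average-depth region}."""
    mags = {r: np.hypot(dx, dy) for r, (dx, dy) in region_motion.items()}
    mean_mag = np.mean(list(mags.values()))
    # Larger image motion -> nearer region -> smaller relative depth.
    return {r: mean_mag / max(m, 1e-6) for r, m in mags.items()}

print(relative_region_depths({"sky": (0.5, 0.1), "road": (6.0, 0.5)}))
```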

Motion Compensated Subband Video Coding with Arbitrarily Shaped Region Adaptivity

  • Kwon, Oh-Jin; Choi, Seok-Rim
    • ETRI Journal / Vol. 23, No. 4 / pp.190-198 / 2001
  • The performance of Motion Compensated Discrete Cosine Transform (MC-DCT) video coding is improved by using region-adaptive subband image coding [18]. On the assumption that the video is acquired from a camera on a moving platform and that the distance between the camera and the scene is large enough, both the motion of the camera and the motion of moving objects in a frame are compensated. For the compensation of camera motion, a feature matching algorithm is employed: several feature points extracted with a Sobel operator are used to compensate the camera motion of translation, rotation, and zoom. The illumination change between frames is also compensated. The motion-compensated frame differences are divided into three arbitrarily shaped regions, called stationary background, moving objects, and newly emerging areas, and different quantizers are used for the different regions. Compared to conventional MC-DCT video coding with a block matching algorithm, the proposed scheme shows about a 1.0-dB improvement on average for the experimental video samples. A least-squares sketch of such a global motion fit follows this entry.

  • PDF
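
As a hedged illustration of the camera-motion compensation step described above, the sketch below fits a similarity model (zoom, rotation, translation) to feature correspondences by linear least squares; the paper's region segmentation and per-region quantization are not reproduced.

```python
import numpy as np

def fit_similarity(src, dst):
    """src, dst: Nx2 arrays of matched feature points (N >= 2).
    Returns (scale, angle_rad, tx, ty) for dst ~ scale * R(angle) @ src + t."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, -y, 1, 0]); b.append(u)   # u = a*x - c*y + tx
        A.append([y,  x, 0, 1]); b.append(v)   # v = c*x + a*y + ty
    a, c, tx, ty = np.linalg.lstsq(np.asarray(A, float),
                                   np.asarray(b, float), rcond=None)[0]
    return np.hypot(a, c), np.arctan2(c, a), tx, ty
```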