• Title/Abstract/Keyword: Orthogonal robot

Search results: 32 (processing time: 0.017 s)

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / ICCAS 2005 / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected across the images and estimates depth from the motion of those points. Approaches that use motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in feature point detection. This paper presents a novel defocus-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. For this, we first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, which accounts for the light and optical properties, with a perspective projection camera model, which accounts for depth from lens translation. Depth from lens translation is then formulated using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments were performed on sequences of real and synthetic images, comparing the presented method with conventional depth from lens translation. The experimental results demonstrate the validity of the proposed method and show its applicability to depth estimation.

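The factorization step mentioned in the abstract above follows the classical shape-from-motion idea: stack the tracked feature coordinates from F frames and P points into a 2F x P measurement matrix, center it, and take a rank-3 SVD whose factors give motion and shape up to an affine ambiguity. The sketch below is a minimal batch version of that step in NumPy; it is not the authors' sequential implementation, and the function name and the even split of the singular values between the two factors are illustrative choices.

```python
# Minimal sketch of rank-3 SVD factorization for shape from motion
# (Tomasi-Kanade style). Assumption: W is a 2F x P measurement matrix of
# P feature points tracked over F frames.
import numpy as np

def factorize_shape_and_motion(W):
    """Factor a 2F x P measurement matrix into motion (2F x 3) and shape (3 x P)."""
    # Register the measurements: subtract the point centroid in every row.
    W_centered = W - W.mean(axis=1, keepdims=True)

    # Under an affine camera the centered matrix is (ideally) rank 3,
    # so keep only the three largest singular values.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]

    # Split the singular values evenly between the two factors (one common choice).
    motion = U3 * np.sqrt(s3)            # 2F x 3 camera-motion factor
    shape = np.sqrt(s3)[:, None] * Vt3   # 3 x P scene structure (up to an affine ambiguity)
    return motion, shape

# Synthetic, noise-free example: 5 frames (10 rows), 20 points.
rng = np.random.default_rng(0)
true_shape = rng.standard_normal((3, 20))
true_motion = rng.standard_normal((10, 3))
W = true_motion @ true_shape
M, S = factorize_shape_and_motion(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True
```

The paper's sequential variant updates this decomposition incrementally as new frames arrive instead of recomputing the SVD in batch, which is what makes it usable on an image stream.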

Development of Oriental Melon Harvesting Robot in Greenhouse Cultivation (시설재배 참외 수확 로봇 개발)

  • 하유신;김태욱
    • Journal of Bio-Environment Control / Vol. 23, No. 2 / pp.123-130 / 2014
  • In oriental melon cultivation, the fruit must be harvested from a horizontal bed on the soil; recognition is difficult because the melons are covered by leaves, and the vine stems make gripping the fruit very unfavorable. To suit this cultivation environment, an oriental melon harvesting robot consisting of an end-effector, a manipulator, and a recognition device was developed and tested. The end-effector is divided into a gripper for grasping the fruit and a cutter for severing the stem: the gripper drives four fingers simultaneously, and the two cutters were designed to move back and forth so that the gripping force and the cutting force can be controlled. The manipulator was designed as a 4-axis structure that combines a Cartesian (orthogonal coordinate) type and a shuttle type manipulator with an L-R type model rotating about a central axis. The recognition system identifies the melons with a primary recognition device (GVC) and a secondary recognition device (LVC), and sorts them by predicting sugar content or ripeness. In performance tests of the robot, the average harvesting time was 18.2 sec per fruit, the average pick-up rate was 91.4%, the average damage rate was 8.2%, and the average sorting rate was 72.6%.
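The harvesting sequence this abstract describes (primary GVC detection, secondary LVC recognition with ripeness/sugar grading, approach with the 4-axis manipulator, 4-finger grip, stem cut) amounts to a simple pick-and-place loop. The sketch below is a hypothetical Python outline of that loop, assuming stub interfaces for the vision stages and the hardware; none of the names, interfaces, or the ripeness threshold come from the paper.

```python
# Hypothetical outline of the harvest loop described above. The vision callables
# and hardware objects are assumed stubs, not interfaces from the paper.
def harvest_pass(gvc_detections, lvc_grade, manipulator, gripper, cutter,
                 ripeness_threshold=0.5):
    """Pick every detected melon whose predicted ripeness exceeds the threshold."""
    picked = 0
    for rough_position in gvc_detections:          # primary (global) vision detections
        target = lvc_grade(rough_position)          # secondary (local) recognition + grading
        if target is None or target["ripeness"] < ripeness_threshold:
            continue                                # leave unripe fruit or misdetections
        manipulator.move_to(target["position"])     # rotary + Cartesian + shuttle axes
        gripper.close()                             # four fingers close simultaneously
        cutter.cut()                                # two reciprocating cutters sever the stem
        manipulator.move_to_bin()
        gripper.open()
        picked += 1
    return picked
```

The reported figures (91.4% pick-up, 8.2% damage, 72.6% sorting accuracy) would then be per-fruit statistics accumulated over repeated runs of such a loop.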