• Title/Abstract/Keyword: 3D Camera

Search results: 1,634 (processing time: 0.031 s)

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • 한국멀티미디어학회논문지 / Vol. 10, No. 6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at any position and separate again, an adaptive algorithm recognizes which is the left one and which is the right one. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and carries no distance from the camera. With stereo vision, as with human vision, we can acquire 3D motion data such as left-right motion, motion from the bottom, and the distance of objects from the camera. Transforming the x- and y-axis coordinates of a mono image into 3D coordinates requires a depth value; this depth value (z axis) is calculated from the stereo disparity, using only the end effectors of the images. The positions of the inner joints are then calculated, and the 3D character can be visualized using inverse kinematics.
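The depth value derived from stereo disparity in this abstract follows the standard rectified-stereo relation z = f·B/d. A minimal sketch of that computation (the function name, variable names, and numbers are illustrative assumptions, not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (z axis) of a matched point from stereo disparity.

    z = f * B / d for a rectified parallel stereo pair, where f is the
    focal length in pixels, B the baseline between the two cameras in
    meters, and d the horizontal disparity of the matched point in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# An end effector seen 40 px apart by two cameras 0.1 m apart,
# with f = 800 px, lies about 2 m from the camera pair.
z = depth_from_disparity(800.0, 0.1, 40.0)
```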


Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features

  • 길세기;이종실;유제군;이응혁;홍승홍;신동범
    • 대한전기학회논문지:시스템및제어부문D / Vol. 55, No. 4 / pp.173-180 / 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building with features recognized from the environment. To achieve this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from parallel cameras attached to the front of the robot and detect scale-invariant features in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location by 3D reconstruction of the matched points. A stereo camera requires high-precision extrinsic parameters and precise pixel matching between the two camera images; because we use two ordinary cameras rather than a stereo camera, together with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no additional sensor, and its results can be used simultaneously for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and capture 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges below 2 m and ±15 cm at ranges between 2 m and 4 m.
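The 3D reconstruction of a matched SIFT feature from a parallel camera pair reduces to triangulation from disparity under the pinhole model. A hedged sketch, reusing the 20 cm baseline mentioned above but with otherwise illustrative intrinsics and pixel values:

```python
def triangulate(u_left, u_right, v, f_px, cx, cy, baseline_m):
    """Reconstruct a matched feature point in the left camera's frame
    from a rectified parallel camera pair (as with the SIFT matches
    above). (u, v) are pixel coordinates; (cx, cy) the principal point."""
    d = u_left - u_right            # horizontal disparity in pixels
    z = f_px * baseline_m / d       # depth along the optical axis
    x = (u_left - cx) * z / f_px    # lateral offset in meters
    y = (v - cy) * z / f_px         # vertical offset in meters
    return x, y, z

# A feature matched at u = 420 / 380 in the left / right images of a
# pair 0.2 m apart (the paper's 20 cm baseline), with f = 700 px.
x, y, z = triangulate(420, 380, 260, 700.0, 320.0, 240.0, 0.2)
```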

Motion Detection Using Stereo Vision

  • 권창일;원성혁;김민기;이기식;김광택;정일준
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 하계종합학술대회 논문집(4) / pp.206-209 / 2000
  • Most vision application systems use 2D information obtained from a single camera. Recently the need has arisen to utilize 3D information, namely the distance from the camera to the object, because 2D information alone is not sufficient; we therefore adopt a stereo camera system. The motion detection algorithm using stereo vision operates like a single-camera system, taking advantage of correlation, edge, and difference algorithms, when detecting motion. To recover distance, it compares the two images from the two cameras and calculates the disparity, which contains the distance information. From the disparity, the real distance and the real size of the object can be computed. We describe a motion detection algorithm that computes 3D distance and object size in real time.
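The combination described above, difference-based motion detection followed by disparity-based metric sizing, can be sketched as follows. This is an illustrative toy on tiny grayscale arrays, not the authors' implementation:

```python
def moving_pixels(prev, curr, thresh=20):
    """Binary motion mask from the per-pixel absolute difference of two
    grayscale frames (1 = moving). A simple 'difference algorithm'."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(prev, curr)]

def real_size(pixel_extent, depth_m, focal_px):
    """Metric size of a moving blob: its pixel extent scaled by depth/f,
    where depth comes from the stereo disparity."""
    return pixel_extent * depth_m / focal_px

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 80, 10], [10, 90, 10]]
mask = moving_pixels(prev, curr)       # middle column flagged as moving
width_m = real_size(40, 2.0, 800.0)    # a 40 px wide blob at 2 m depth
```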


3D Range Measurement Using Infrared Light and a Camera

  • 김인철;이수용
    • 제어로봇시스템학회논문지 / Vol. 14, No. 10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Sensing the environment and obstacles is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate but too expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment; however, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used here, in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Identifying the cells of the pattern is the key issue in the proposed method; several methods of correctly identifying the cells are discussed and verified with experiments.
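Range estimation from a projected pattern is a triangulation between the camera ray and the projector ray. A simplified sketch for a single projected spot and a pinhole camera; the geometry and all names here are my assumptions, not the paper's pattern model:

```python
import math

def range_from_pattern(u_px, f_px, baseline_m, angle_rad):
    """Triangulated range of one projected-pattern cell.

    The camera sits at the origin looking along +z; the IR projector
    sits at x = baseline and emits a ray tilted by angle_rad toward the
    optical axis (projector ray: x = baseline - z*tan(angle)). The spot
    imaged at pixel offset u (camera ray: x = z*u/f) lies where the two
    rays intersect: z = baseline / (u/f + tan(angle)).
    """
    return baseline_m / (u_px / f_px + math.tan(angle_rad))

# Projector 0.1 m beside the camera, ray parallel to the axis, spot
# imaged at u = 50 px with f = 500 px.
z = range_from_pattern(50.0, 500.0, 0.1, 0.0)
```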

A Study on a Vision-based Calibration Method for Bin-Picking Robots for Semiconductor Automation

  • 구교문;김기현;김효영;심재홍
    • 반도체디스플레이기술학회지 / Vol. 22, No. 1 / pp.72-77 / 2023
  • In many manufacturing settings, including the semiconductor industry, products are completed by producing and assembling various components. Sorting and classifying randomly mixed parts takes a lot of time and labor. Recently, many efforts have been made to have robots select and assemble the correct parts from mixed parts. Automating the sorting and classification of randomly mixed components is difficult, since the various objects and the positions and attitudes of robots and cameras in 3D space need to be known. Previously, only objects in specific positions were grasped by robots, or people sorted the items directly. To enable robots to pick up arbitrarily placed objects in 3D space, bin picking technology is required, and realizing it demands knowledge of the coordinate-system relationships between the robot, the object to be grasped, and the camera. Calibration is necessary to understand these coordinate-system relationships so that the object recognized by the camera can be grasped. Restoring the depth value from 2D images during the 3D reconstruction required for bin picking is difficult. In this paper, we propose using the depth information of an RGB-D camera as the Z value in the rotation and translation transformations used in calibration. We perform camera calibration for accurate coordinate transformation of objects in 2D images, followed by calibration between the robot and the camera. We prove the effectiveness of the proposed method through accuracy evaluations of the camera calibration and of the robot-camera calibration.
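With the RGB-D depth reading supplying Z directly, a recognized pixel can be back-projected into the camera frame using the pinhole intrinsics. A minimal sketch of that step (the intrinsic values are illustrative, not from the paper's setup):

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel (u, v) plus an RGB-D depth reading -> 3D point in the
    camera frame. The depth channel supplies Z directly, so the hard
    problem of recovering depth from the 2D image alone is avoided.
    fx, fy are focal lengths in pixels; (cx, cy) the principal point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# An object detected at pixel (400, 300) with a 0.5 m depth reading.
p = backproject(400, 300, 0.5, 600.0, 600.0, 320.0, 240.0)
```

The resulting camera-frame point would then be mapped into the robot base frame by the robot-camera (hand-eye) calibration transform.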


High-quality 3D Video Generation Using Scale Space (generating high-quality depth maps by combining a high-definition hybrid camera system, which refines depth-map boundary regions using a multi-level scale-space concept, with a high-quality 3D scanner)

  • 이은경;정영기;호요성
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp.620-624 / 2009
  • This paper proposes a system that combines a high-definition (HD) hybrid camera system and a high-quality 3D scanner to generate multi-view video and the corresponding multi-view depth maps. To generate 3D video with this setup, depth information for the static background regions is first acquired in advance with the high-quality 3D scanner, while for the dynamically moving foreground regions the multi-view video and depth maps are acquired with a hybrid camera system that combines multi-view cameras with a depth camera. Using the depth information obtained from the 3D scanner and the depth camera, 3D warping is applied to predict initial depth information for each multi-view camera; this prediction yields an initial depth map at each viewpoint. To generate high-quality multi-view depth maps, the initial depth maps are refined with belief propagation. Finally, to correct the irregular depth values along the boundary of the foreground regions, the contour of the foreground is extracted and the boundary regions of the generated depth maps are refined once more. The proposed system combining the 3D scanner and hybrid cameras generated 3D video with more accurate multi-view depth maps than existing depth estimation methods, and produced more natural 3D images.
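The composition step at the heart of the hybrid setup, merging the pre-scanned static-background depth with the live foreground depth from the depth camera, can be sketched as a per-pixel selection. This is a toy illustration, not the system's actual warping/refinement pipeline:

```python
def compose_depth(background, foreground, fg_mask):
    """Merge the pre-scanned static-background depth map with the
    per-frame foreground depth: pixels flagged as foreground take the
    live depth-camera reading, all others keep the scanner depth."""
    return [[f if m else b for b, f, m in zip(br, fr, mr)]
            for br, fr, mr in zip(background, foreground, fg_mask)]

# One scanline: background scanned at 3.0 m, a foreground pixel at 1.2 m.
depth = compose_depth([[3.0, 3.0]], [[0.0, 1.2]], [[0, 1]])
```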


A Study on the 3D Video Generation Technique Using Multi-view and Depth Cameras

  • 엄기문;장은영;허남호;이수인
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2005년도 추계종합학술대회 / pp.549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system uses 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained both from the depth camera and from the multi-view stereo matching technique; these two maps are fused to obtain a more reliable depth map. The fused depth map is used not only to insert a virtual object into the scene based on depth keying, but also to synthesize virtual-viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
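The abstract does not state its fusion rule, so purely as an illustration, a per-pixel weighted average of the depth-camera map and the stereo-matching map might look like this (the weight and values are assumptions):

```python
def fuse_depth(d_cam, d_stereo, w_cam=0.5):
    """Per-pixel fusion of the depth-camera map and the map from
    multi-view stereo matching into a single, more reliable estimate.
    The fixed confidence weight here is an assumption; the paper's
    actual fusion rule is not given in the abstract."""
    return [[w_cam * a + (1 - w_cam) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(d_cam, d_stereo)]

# One scanline where the two sources disagree slightly.
fused = fuse_depth([[2.0, 3.0]], [[2.2, 2.8]])
```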


Evaluation of Geometric Calibration Accuracy of a Zoom-lens CCD Camera

  • 유환희;정상용;김성삼
    • 한국측량학회지 / Vol. 21, No. 3 / pp.245-254 / 2003
  • A zoom-lens CCD camera has many practical advantages, but it is geometrically unstable, which makes camera calibration difficult; as is generally known, this is because the parameters of a zoom camera change with the zoom position. In this study, the DLT method proposed by Abdel-Aziz and Karara and Tsai's method were compared in order to compute the parameters of a zoom-lens CCD camera and to evaluate its 3D positioning accuracy. The results show that when the control points are placed in the same space as the object, the 3D positioning accuracies of Tsai's method and the DLT model are similar, but when the control points are separated from the object, Tsai's method is more stable than DLT. Therefore, further research on parameter optimization of the widely used DLT method is considered necessary to improve 3D positioning accuracy.
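Calibration methods such as DLT and Tsai's are commonly compared by reprojection or positioning error. A minimal sketch of an RMS reprojection-error check under an ideal pinhole model with no distortion (a simplifying assumption; both methods actually estimate fuller parameter sets):

```python
import math

def project(point3d, f_px, cx, cy):
    """Ideal pinhole projection of a point given in the camera frame."""
    x, y, z = point3d
    return (f_px * x / z + cx, f_px * y / z + cy)

def rms_reproj_error(points3d, observed_px, f_px, cx, cy):
    """RMS reprojection error over a set of check points: project each
    3D point with the estimated parameters and compare with where the
    camera actually observed it."""
    se = 0.0
    for p, (u, v) in zip(points3d, observed_px):
        pu, pv = project(p, f_px, cx, cy)
        se += (pu - u) ** 2 + (pv - v) ** 2
    return math.sqrt(se / len(points3d))

# A single check point that the estimated parameters explain exactly.
err = rms_reproj_error([(0.1, 0.0, 1.0)], [(420.0, 240.0)],
                       1000.0, 320.0, 240.0)
```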

A Study on the Estimation of Camera Calibration Parameters Using the Corresponding Points Method

  • 최성구;고현민;노도환
    • 대한전기학회논문지:시스템및제어부문D / Vol. 50, No. 4 / pp.161-167 / 2001
  • Camera calibration is a very important problem in 3D measurement using a vision system. This paper proposes a simple method for camera calibration, designed around the principle of vanishing points and the concept of corresponding points extracted from pairs of parallel lines. Conventional methods require 4 reference points in one frame, whereas the proposed method needs only 2 reference points to estimate the vanishing points, from which the camera parameters (focal length, camera attitude, and position) are calculated. Our experiments show the validity and usability of the method, with an absolute error of attitude and position on the order of 10^-2.
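A vanishing point is estimated as the intersection of the images of parallel scene lines, which is convenient to compute with homogeneous line coordinates. A small self-contained sketch (the point values are illustrative, not from the paper's experiments):

```python
def line_through(p1, p2):
    """Homogeneous line (a, b, c) through two image points, satisfying
    a*x + b*y + c = 0 (the cross product of the homogeneous points)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection of two homogeneous lines. When the lines are the
    images of a pair of parallel scene lines, this intersection is the
    vanishing point of their common direction."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)

# Two image lines converging toward the right: their intersection is
# the estimated vanishing point.
vp = intersect(line_through((0, 100), (50, 50)),
               line_through((0, -100), (50, -50)))
```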
