• Title/Summary/Keyword: Kinect Calibration

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong; Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering, v.3 no.8, pp.309-314, 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, the 3D transformations between the cameras must be found. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. The proposed method instead uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the sphere is moved freely through the modeling space, depth and texture images of it are acquired from all RGB-D cameras at the same time. Then the external parameters of each RGB-D camera are calibrated so that the sphere-center coordinates coincide in the world coordinate system. A minimal alignment sketch follows below.
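
A minimal sketch of the kind of alignment this abstract describes: given per-frame sphere centers already fitted in two cameras' coordinate frames, a Kabsch/SVD step estimates the rigid transform between them. The function name, the synthetic data, and the pairing of observations are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/SVD estimate of R, t such that dst ~= src @ R.T + t (both Nx3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# Synthetic check: sphere centers tracked by camera 1, re-expressed in camera 2's frame.
rng = np.random.default_rng(0)
centers_cam1 = rng.random((20, 3)) * 2.0                         # placeholder centers (m)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.1, -0.2])
centers_cam2 = centers_cam1 @ R_true.T + t_true
R, t = rigid_transform(centers_cam1, centers_cam2)               # recovers R_true, t_true
print(np.allclose(R, R_true), np.allclose(t, t_true))
```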

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul; Kang, Won-Young; Jeong, Yeong-Hu; Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.10, pp.920-927, 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike conventional chroma keying, image composition for stereoscopic 3D display requires a natural composition method in 3D space. This paper composites images in 3D space using depth keying, which relies on high-resolution depth information. A high-resolution depth map is obtained through camera calibration between the DSLR and the Kinect sensor, a 3D mesh model is created from this depth map and mapped with RGB color values, and the object is converted into a point cloud in 3D space after being separated from its background according to depth. The object is then composited with a 3D virtual background, and stereoscopic 3D images are rendered and played back with a virtual camera. A minimal depth-keying sketch follows below.
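
A minimal sketch of the depth-keying step described above, assuming the depth map is already registered to the color image: pixels inside a depth window are kept and back-projected to a colored point cloud. The intrinsics, the depth window, and the dummy frames are placeholder assumptions.

```python
import numpy as np

def depth_key_to_points(depth_m, color_rgb, fx, fy, cx, cy, near=0.5, far=1.5):
    """Keep pixels whose depth (meters) lies in [near, far] and back-project them."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    mask = (depth_m > near) & (depth_m < far)                # the depth "key"
    z = depth_m[mask]
    x = (u[mask] - cx) * z / fx
    y = (v[mask] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)                     # Nx3 in camera coordinates
    colors = color_rgb[mask]                                 # matching RGB per point
    return points, colors

# Usage with dummy frames; a real pipeline would feed Kinect depth and DSLR color.
depth = np.full((424, 512), 2.0)
depth[150:300, 200:350] = 1.0                                # fake foreground object
color = np.zeros((424, 512, 3), dtype=np.uint8)
pts, cols = depth_key_to_points(depth, color, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```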

Measurement Value Model and System based on Kinect Sensor for Sitting Position Calibration (앉은 자세 교정을 위한 키넥트 센서 기반 자세 측정값 모델 및 시스템)

  • Yoo, Hyunwoo; Kim, Dongkwan; Kim, Taeuk
    • Proceedings of the Korea Information Processing Society Conference, 2017.04a, pp.423-426, 2017
  • Posture-related disorders have risen sharply with the recent growth in smart-device use. This stems from using smart devices with incorrect posture; many people use their devices in poor postures without being aware of it. In this study, feature points of the skeleton model provided by the Kinect sensor are extracted so that computer and smartphone users can capture their own sitting-posture information as data. Based on these features, we propose a sitting-posture measurement-value model that computes sitting angles and reports how correct the posture is, together with a system built on this model. This paper describes the design and implementation of the proposed model and system, and examines the model's potential for commercialization through experiments. A minimal angle-computation sketch follows below.
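
A minimal sketch of one posture measurement this abstract implies: the tilt of the spine segment (hip center to shoulder center) against the vertical, computed from Kinect skeleton joints. The joint values and the threshold are illustrative; the paper's exact measurement-value model is not reproduced here.

```python
import numpy as np

def spine_tilt_deg(hip_center, shoulder_center):
    """Angle in degrees between the hip->shoulder vector and the world up axis."""
    spine = np.asarray(shoulder_center, float) - np.asarray(hip_center, float)
    up = np.array([0.0, 1.0, 0.0])                           # Kinect camera-space Y is up
    cos_a = spine @ up / np.linalg.norm(spine)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Placeholder joint positions in meters (camera space).
tilt = spine_tilt_deg(hip_center=(0.0, 0.2, 1.8), shoulder_center=(0.05, 0.7, 1.9))
print("upright" if tilt < 15.0 else "slouching", round(tilt, 1))  # illustrative threshold
```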

Development on Multi-view synthesis system for producing 3D image (3D 영상 제작을 위한 다시점 영상 획득 시스템 개발)

  • Lee, Sang-Ha; Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2016.11a, pp.89-91, 2016
  • This paper proposes a system for efficiently acquiring multi-view images to generate 3D video from live-action footage. Most existing systems acquire multi-view images with multiple cameras, which requires calibration between the cameras as well as depth extraction through stereo matching. In the proposed system, the camera stays fixed while the target object is placed on a turntable and captured as it rotates. The camera is Microsoft's Kinect v2, which captures color and depth information simultaneously. Experiments confirm that the proposed system generates multi-view images more efficiently than existing systems. A minimal merging sketch follows below.
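
A minimal merging sketch under the assumptions that the turntable axis and center have been estimated in camera coordinates and that each view's rotation angle is known; a Rodrigues rotation undoes the turntable motion so all per-view clouds share one object frame. All values below are placeholders, not the paper's pipeline.

```python
import numpy as np

def rotate_about_axis(points, center, axis, angle_rad):
    """Rodrigues rotation of Nx3 points about the line through `center` along `axis`."""
    k = np.asarray(axis, float) / np.linalg.norm(axis)
    p = points - center
    rotated = (p * np.cos(angle_rad)
               + np.cross(k, p) * np.sin(angle_rad)
               + np.outer(p @ k, k) * (1.0 - np.cos(angle_rad)))
    return rotated + center

# Undo the turntable rotation of each view so all clouds share one object frame.
views = [np.random.rand(1000, 3) for _ in range(8)]          # placeholder per-view clouds
axis = np.array([0.0, 1.0, 0.0])                             # assumed turntable axis
center = np.array([0.0, 0.0, 1.2])                           # assumed turntable center (m)
step = np.deg2rad(360.0 / len(views))                        # angle between captures
merged = np.vstack([rotate_about_axis(v, center, axis, -i * step)
                    for i, v in enumerate(views)])
```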

Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim; Daehwan Kim; Dongchun Lee; Kisuk Lee; Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering, v.12 no.7, pp.315-324, 2023
  • As age-related diseases steadily increase, a simple and accurate whole-body rehabilitation interaction technology using immersive digital content is needed for elderly patients. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and using a transform matrix; it requires manual work and has lower usability, but has a relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized by HoloLens and Kinect simultaneously to achieve an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the position of the head or HMD recognized by both devices and the positions of both hands or controllers; it is less accurate, but requires no additional tools or manual work, making it more user-friendly. Additionally, we reduced the error by more than 10% using RANSAC as a post-processing technique. These three methods can be applied selectively depending on the usability and accuracy required by the content. We validated the technology by applying it to the "Thunder Punch" and rehabilitation therapy content. A RANSAC-style alignment sketch follows below.
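
A minimal sketch of the kind of post-processing mentioned above: a RANSAC loop over corresponding 3D points (e.g., marker or feature-point pairs seen by both devices) around a Kabsch least-squares fit. The estimator, thresholds, and sampling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares R, t with dst ~= src @ R.T + t (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def ransac_rigid(src, dst, iters=200, thresh=0.01, seed=0):
    """RANSAC over point correspondences; `thresh` is the inlier residual in meters."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)    # minimal sample: 3 pairs
        R, t = fit_rigid(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_rigid(src[best], dst[best])                   # refit on the inlier set
```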

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung; Kim, Tae-Yeon; Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.1, pp.1-7, 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system, in which the drawbacks of each camera are compensated for by the other. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments, and the geometry between the two cameras is established with these parameters to match the color and depth images. After this preprocessing step, the relation between depth values and distance is derived experimentally as a simple linear function, and a 3D image is constructed by coordinate transformations of the matched images. The scheme is realized with Microsoft's Kinect hybrid camera system, and experimental results for the 3D images and the distance measurements are given to evaluate the method. A minimal calibration sketch follows below.
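
A minimal sketch of the linear depth-to-distance relation described above: fit distance ≈ a·depth + b from a few measured pairs, then apply the model to a depth frame. The measurement pairs and the frame are placeholders, not the paper's data.

```python
import numpy as np

# Placeholder calibration pairs: raw Kinect depth readings vs. measured distances.
raw = np.array([600.0, 700.0, 800.0, 900.0, 1000.0])
dist_m = np.array([0.82, 0.95, 1.10, 1.24, 1.38])
a, b = np.polyfit(raw, dist_m, deg=1)                        # distance ~= a * raw + b

def depth_to_distance(depth_raw):
    """Apply the fitted linear model to a raw depth value or array."""
    return a * depth_raw + b

depth_map = np.full((480, 640), 850.0)                       # placeholder depth frame
distance_map = depth_to_distance(depth_map)                  # metric distances in meters
```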