• Title/Summary/Keyword: Kinect Calibration

Search Results: 16

Marker-less Calibration of Multiple Kinect Devices for 3D Environment Reconstruction (3차원 환경 복원을 위한 다중 키넥트의 마커리스 캘리브레이션)

  • Lee, Suwon
    • Journal of Korea Multimedia Society / v.22 no.10 / pp.1142-1148 / 2019
  • Reconstruction of the three-dimensional (3D) environment is a key aspect of augmented reality and augmented virtuality, which utilize and incorporate a user's surroundings. Such reconstruction can be easily realized with a single Kinect device; however, multiple Kinect devices are required to increase the reconstruction density and to expand spatial coverage. When multiple Kinect devices are employed, they must first be calibrated with respect to each other, and a marker is often used for this purpose. However, a marker must be placed at each calibration, and the accuracy of marker detection strongly affects the calibration accuracy. Therefore, this study proposes a user-friendly, efficient, accurate, and marker-less method for calibrating multiple Kinect devices. The proposed method uses a joint tracking algorithm to obtain an approximate calibration, which is then refined by applying the iterative closest point (ICP) algorithm. Experimental results indicate that the proposed method is a convenient alternative to conventional marker-based methods. Hence, it can be incorporated in various augmented reality and augmented virtuality applications that require 3D environment reconstruction with multiple Kinect devices.
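The ICP refinement step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes two roughly pre-aligned point clouds (as the joint-tracking step would provide) and uses brute-force nearest-neighbour matching with a Kabsch/SVD rigid fit.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Refine the alignment of src onto dst by alternating matching and fitting."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)  # total transform src -> dst
```

In a multi-Kinect setup, `src` and `dst` would be point clouds from two sensors expressed in their own frames after the approximate joint-based calibration; the returned `R, t` refine one sensor's pose relative to the other.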

Optimal Depth Calibration for Kinect™ Sensors via an Experimental Design Method (실험 계획법에 기반한 키넥트 센서의 최적 깊이 캘리브레이션 방법)

  • Park, Jae-Han;Bae, Ji-Hum;Baeg, Moon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1003-1007 / 2015
  • Depth calibration is a procedure for finding the conversion function that maps disparity data from a depth-sensing camera to actual distance information. In this paper, we present an optimal depth calibration method for Kinect™ sensors based on an experimental design and convex optimization. The proposed method, which utilizes multiple measurements from only two points, suggests a simplified calibration procedure. The confidence ellipsoids obtained from a series of simulations confirm that a simpler procedure produces a more reliable calibration function.
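The abstract does not give the conversion function itself; for structured-light Kinect sensors a commonly used model is affine in inverse depth, 1/z = a·d + b. A minimal sketch of the fitting step, assuming repeated (disparity, true distance) measurements at known points, is below. The function names are illustrative, and the paper's experimental-design choice of measurement points is not reproduced here.

```python
import numpy as np

def fit_depth_model(disparities, true_depths):
    """Fit 1/z = a*d + b by ordinary least squares; returns (a, b).

    disparities: raw sensor disparity readings
    true_depths: ground-truth distances for each reading
    """
    d = np.asarray(disparities, dtype=float)
    A = np.column_stack([d, np.ones_like(d)])
    coeffs, *_ = np.linalg.lstsq(A, 1.0 / np.asarray(true_depths, dtype=float),
                                 rcond=None)
    return coeffs  # (a, b)

def disparity_to_depth(d, a, b):
    """Convert a disparity reading to metric depth under the fitted model."""
    return 1.0 / (a * d + b)
```

Because the model is linear in its parameters, measurements taken at just two well-separated distances suffice in principle, which matches the two-point design the abstract mentions.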

Viewing Angle-Improved 3D Integral Imaging Display with Eye Tracking Sensor

  • Hong, Seokmin;Shin, Donghak;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of information and communication convergence engineering / v.12 no.4 / pp.208-214 / 2014
  • In this paper, to address the narrow viewing angle and the flipping effect of a three-dimensional (3D) integral imaging display, we propose an improved system that uses an eye-tracking method based on the Kinect sensor. The proposed method introduces two calibration processes. The first calibrates the two cameras within the Kinect sensor so that accurate 3D information can be collected. The second applies a space calibration that converts coordinates from the Kinect sensor to the coordinate system of the display panel. These calibration processes improve the estimation of the 3D position of the observer's eyes and allow elemental images to be generated in real time from the estimated position. To demonstrate the usefulness of the proposed method, we implement an integral imaging display system with the eye-tracking process based on our calibration and carry out preliminary experiments, measuring the viewing angle and the flipping effect of the reconstructed 3D images. The experimental results show that the proposed method extends the viewing angle and removes the flipped images seen in the conventional system.

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, for building a 3D image and spatial information of a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters, describing the relationship between the two Kinect cameras, consist of a rotation matrix and a translation vector. The 2D projection-space images are converted into 3D images, producing spatial information on the basis of the depth and RGB data. The measurement is verified by comparison with the length and location of the target structure in the 2D images.
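The pixel-to-space conversion the abstract formulates follows the standard pinhole camera model. A minimal sketch, ignoring lens distortion and using illustrative parameter names, is:

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole model: pixel (u, v) with measured depth z -> 3D point
    in the camera frame. fx, fy: focal lengths; cx, cy: principal point."""
    return np.array([(u - cx) * z / fx,
                     (v - cy) * z / fy,
                     z])

def to_other_camera(p, R, t):
    """Map a point into a second camera's frame using the extrinsic
    rotation matrix R and translation vector t between the two cameras."""
    return R @ p + t
```

A pixel at the principal point back-projects onto the optical axis, i.e. to (0, 0, z); off-centre pixels spread out in proportion to depth and inverse focal length.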

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process still relies largely on manual work or marker-based motion capture, which requires experienced professionals and long production times to obtain realistic motion. To solve these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of reproducing natural human movement in an animation production system based on a deep learning model and a Kinect camera, each chosen for its environmental characteristics and accuracy. The first method uses only a Kinect camera; the second uses a Kinect camera with a calibration algorithm; the third uses only a deep learning model; and the fourth combines a deep learning model with the Kinect. Experiments show that the fourth method, using the deep learning model and the Kinect simultaneously, produces the best results.

Design and Development of the Multiple Kinect Sensor-based Exercise Pose Estimation System (다중 키넥트 센서 기반의 운동 자세 추정 시스템 설계 및 구현)

  • Cho, Yongjoo;Park, Kyoung Shin
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.3 / pp.558-567 / 2017
  • In this research, we developed an efficient real-time human exercise pose estimation system using multiple Kinects. The main objective of this system is to measure and recognize the user's posture (such as a knee curl or lunge) more accurately by placing Kinects at the front and the sides. It is designed as an extensible, modular system that can support additional postures in the future. The system is configured as multiple clients and a Unity3D server. Each client processes Kinect skeleton data and sends it to the server. The server performs the multiple-Kinect calibration and then applies a pose estimation algorithm based on a Kinect posture recognition model, using feature extraction and the weighted averaging of feature values across the different Kinects. This paper presents the design and implementation of the system and also describes how to build and execute an interactive Unity3D exergame.
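The server-side fusion the abstract mentions, weighted averaging of per-sensor values after calibration into a common frame, might be sketched as follows. This is a simplified illustration; the paper's actual features and weighting scheme are not specified here, and the confidence weights are an assumption.

```python
import numpy as np

def fuse_joint(positions, weights):
    """Weighted average of one joint's position as seen by several Kinects.

    positions: (k, 3) array, the joint position reported by each of k sensors,
               already transformed into a common frame by the calibration step
    weights:   (k,) per-sensor confidence (e.g. tracking state or view angle);
               they need not sum to 1
    """
    w = np.asarray(weights, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (p * w[:, None]).sum(axis=0) / w.sum()
```

Down-weighting a sensor that sees the joint at a grazing angle (or reports an inferred rather than tracked joint) keeps occlusion noise from one view from corrupting the fused estimate.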

Hand Language Translation Using Kinect

  • Pyo, Junghwan;Kang, Namhyuk;Bang, Jiwon;Jeong, Yongjin
    • Journal of IKEEE / v.18 no.2 / pp.291-297 / 2014
  • As hand gesture recognition has become practical thanks to improved image processing algorithms, sign language translation has become an important application for the hearing-impaired. In this paper, we extract human hand figures from a real-time image stream and detect gestures in order to determine which sign they represent. We use depth-color calibrated images from the Kinect to extract the hands and build a decision tree to recognize each gesture. The decision tree uses features such as the number of fingers, contours, and the hand's position inside a uniformly sized image. We succeeded in recognizing 'Hangul', the Korean alphabet, with a recognition rate of 98.16%. The average execution time per letter was about 76.5 ms, a reasonable speed considering that sign language translation operates on nearly still images. We expect that this research will help communication between the hearing-impaired and people who do not know sign language.

Motion Capture of the Human Body Using Multiple Depth Sensors

  • Kim, Yejin;Baek, Seongmin;Bae, Byung-Chull
    • ETRI Journal / v.39 no.2 / pp.181-190 / 2017
  • The movements of the human body are difficult to capture owing to the complexity of the three-dimensional skeleton model and to occlusion problems. In this paper, we propose a motion capture system that tracks dynamic human motions in real time. Without using external markers, the proposed system adopts multiple depth sensors (Microsoft Kinect) to overcome occlusion and body-rotation problems. To combine the joint data retrieved from the multiple sensors, our calibration process samples a point cloud from the depth images and unifies the point clouds into a single coordinate system via the iterative closest point method. From the noisy skeletal data of the sensors, a posture reconstruction method estimates the optimal joint positions for consistent motion generation. Based on the high tracking accuracy of the proposed system, we demonstrate that it is applicable to various motion-based training programs in dance and Taekwondo.

Hand Tracking Based Projection Mapping System and Applications (손 위치 트래킹 기반의 프로젝션 매핑 시스템 및 응용)

  • Lee, Cheongun;Park, Sanghun
    • Journal of the Korea Computer Graphics Society / v.22 no.4 / pp.1-9 / 2016
  • In this paper we present a projection mapping system that projects onto a person's moving hand, using a projector as the information delivery medium and a Kinect to recognize hand motion. Most traditional projection mapping techniques project images onto stationary objects, whereas our system provides a new user experience by projecting images onto the center of the moving palm. We explain the development process of the system and the production of content as applications on it. We propose the hardware organization and the development process of an open software architecture based on object-oriented programming. For stable image projection, we describe a device calibration method between the projector and the Kinect in three-dimensional space, and a denoising technique that minimizes artifacts from jitter in the Kinect coordinates and from unstable hand tremor.

Development of Wave Height Field Measurement System Using a Depth Camera (깊이카메라를 이용한 파고장 계측 시스템의 구축)

  • Kim, Hoyong;Jeon, Chanil;Seo, Jeonghwa
    • Journal of the Society of Naval Architects of Korea / v.58 no.6 / pp.382-390 / 2021
  • The present study applies a depth camera to wave height field measurement, focusing on the calibration procedure and test setup. An Azure Kinect system is used to measure the water surface elevation, with a field of view of 800 mm × 800 mm and a repetition rate of 30 Hz. In the optimal optical setup, the spatial resolution of the field of view is 288 × 320 pixels. To make the water surface detectable by the depth camera, tracer particles that float on the water and reflect infrared light are added. The calibration consists of wave height scaling and correction of the barrel distortion; a polynomial regression model for the image correction is established using machine learning. The measurements from the depth camera are compared with those of a capacitance-type wave height gauge and show good agreement.
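In its simplest form, a polynomial correction of barrel distortion is a radial rescaling fitted to reference measurements. The sketch below is illustrative only and assumes the radial mapping itself is polynomial; the paper's actual regression model is not given in the abstract.

```python
import numpy as np

def fit_radial_correction(r_measured, r_true, degree=3):
    """Fit a polynomial r_true ~ p(r_measured) from calibration targets
    whose true radial position from the image centre is known."""
    return np.polyfit(r_measured, r_true, degree)

def correct_points(x, y, cx, cy, coeffs):
    """Undistort pixels by rescaling each one radially about (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    r_safe = np.where(r > 0, r, 1.0)          # avoid divide-by-zero at centre
    scale = np.where(r > 0, np.polyval(coeffs, r) / r_safe, 1.0)
    return cx + dx * scale, cy + dy * scale
```

Fitting in radius rather than per-pixel keeps the model small and enforces the rotational symmetry that barrel distortion has about the optical axis.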