• Title/Summary/Keyword: 3D depth Camera

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single-photon avalanche diode (SPAD) has been suggested for its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD depth map using an RGB stereo camera. Our current SPAD ToF sensor has a resolution of 64 x 32, which is low compared with higher-resolution depth sensors such as the Kinect V2 and Cube-Eye. Although this may seem to be a weak point of our system, we exploit the resolution gap instead: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using the higher-resolution depth data as label data. The upsampled depth from the CNN and the stereo camera depth are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for the embedded system.
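
As a rough illustration of the upsampling step described in this abstract, the sketch below trains a tiny residual CNN to upsample a 64 x 32 depth map against a higher-resolution depth label. The architecture, scale factor, and tensor sizes are assumptions made for illustration, not the authors' network, and the SGM fusion stage is not shown.

```python
# Minimal sketch (not the paper's actual network): a small CNN that upsamples a
# 64x32 SPAD ToF depth map by 4x, trained against a higher-resolution depth map
# (e.g. from a Kinect V2) used as the label, as the abstract describes.
import torch
import torch.nn as nn

class DepthUpsampleCNN(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, low_res_depth):
        # Bilinear upsampling followed by convolutional residual refinement.
        x = nn.functional.interpolate(low_res_depth, scale_factor=self.scale,
                                      mode="bilinear", align_corners=False)
        return x + self.net(x)

# Toy training step: 64x32 SPAD depth -> 256x128, supervised by a dense label.
model = DepthUpsampleCNN(scale=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spad_depth = torch.rand(1, 1, 32, 64)        # low-resolution ToF depth (H=32, W=64)
label_depth = torch.rand(1, 1, 128, 256)     # higher-resolution depth used as label
pred = model(spad_depth)
loss = nn.functional.l1_loss(pred, label_depth)
loss.backward()
optimizer.step()
```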

Accelerated Generation Algorithm for an Elemental Image Array Using Depth Information in Computational Integral Imaging

  • Piao, Yongri;Kwon, Young-Man;Zhang, Miao;Lee, Joon-Jae
    • Journal of Information and Communication Convergence Engineering / v.11 no.2 / pp.132-138 / 2013
  • In this paper, an accelerated generation algorithm for effectively generating an elemental image array in a computational integral imaging system is proposed. In the proposed method, the depth information of the 3D object is extracted from images picked up by a stereo camera or a depth camera. The elemental image array can then be generated by the proposed accelerated generation algorithm using this depth information. The resulting 3D image generated by the proposed accelerated algorithm was compared with that of the conventional direct algorithm to verify the efficiency of the proposed method. The experimental results confirm the accuracy of the elemental images generated by the proposed method.
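
The geometric mapping that such generation algorithms build on can be sketched as follows: each 3D object point with known depth is projected through every pinhole of the lens array onto its elemental image. The sketch follows the straightforward direct approach, not the paper's accelerated algorithm, and the lens pitch, gap, and resolutions are made-up parameters.

```python
# Illustrative only: generate an elemental image array by projecting 3D object
# points (with known depth) through a pinhole lens array onto the image plane.
import numpy as np

def generate_eia(points, colors, num_lens=(10, 10), pitch=1.0, gap=3.0, res=32):
    """points: (N,3) object points (x, y, z>0 in front of the lens array);
    colors: (N,) intensities; returns the elemental image array."""
    eia = np.zeros((num_lens[1] * res, num_lens[0] * res))
    for (x, y, z), c in zip(points, colors):
        for j in range(num_lens[1]):              # lens row
            for i in range(num_lens[0]):          # lens column
                lx = (i - num_lens[0] / 2 + 0.5) * pitch   # lens center
                ly = (j - num_lens[1] / 2 + 0.5) * pitch
                # Pinhole projection onto the image plane at distance 'gap'
                u = lx + gap * (lx - x) / z
                v = ly + gap * (ly - y) / z
                # Map the local image-plane offset to pixels of this elemental image
                px = int((u - lx) / pitch * res + res / 2) + i * res
                py = int((v - ly) / pitch * res + res / 2) + j * res
                if i * res <= px < (i + 1) * res and j * res <= py < (j + 1) * res:
                    eia[py, px] = c
    return eia

# Example: a few random points at depths 20-40 units in front of the array.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-3, 3, 50), rng.uniform(-3, 3, 50),
                       rng.uniform(20, 40, 50)])
eia = generate_eia(pts, rng.uniform(0.5, 1.0, 50))
```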

3D Image Processing for Recognition and Size Estimation of the Fruit of Plum (Japanese Apricot) (3D 영상을 활용한 매실 인식 및 크기 추정)

  • Jang, Eun-Chae;Park, Seong-Jin;Park, Woo-Jun;Bae, Yeonghwan;Kim, Hyuck-Joo
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.130-139 / 2021
  • In this study, the size of Japanese apricot (plum) fruit was estimated with a plum recognition and size estimation program using 3D images, in order to enable timely control of Eurytoma maslovskii, the pest that causes the most damage to plums. Night imaging was carried out with a Kinect 2.0 camera in 2018 and with a RealSense Depth Camera D415 in 2019. Based on the acquired images, a plum recognition and size estimation program consisting of four stages (image preprocessing, sizeable-plum extraction, RGB and depth image matching, and plum size estimation) was implemented in MATLAB R2018a. Running the program on 10 images produced an average plum recognition rate of 61.9%, an average plum recognition error rate of 0.5%, and an average size measurement error rate of 3.6%. Continued development of this plum recognition and size estimation program is expected to enable accurate fruit size monitoring and the development of timely control systems for Eurytoma maslovskii.
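
A hedged sketch of the size-estimation idea follows (the actual program described above was implemented in MATLAB, and the color thresholds and focal length below are assumptions, not values from the paper): detect fruit-like blobs in the RGB image and convert their pixel diameter to millimetres with the aligned depth value via the pinhole relation size = pixel_diameter * depth / focal_length.

```python
# Hypothetical fruit-size estimation from an RGB image aligned to a depth image.
import cv2
import numpy as np

FX = 615.0  # assumed focal length in pixels (roughly a RealSense D415 color stream)

def estimate_fruit_sizes(bgr, depth_mm, fx=FX):
    """bgr: color image aligned to depth_mm (uint16, millimetres)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))      # assumed green-fruit range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sizes = []
    for c in contours:
        if cv2.contourArea(c) < 100:              # discard small blobs
            continue
        (cx, cy), r_px = cv2.minEnclosingCircle(c)
        z = float(depth_mm[int(cy), int(cx)])     # depth at the blob centre, in mm
        if z > 0:
            sizes.append(2.0 * r_px * z / fx)     # diameter in mm via the pinhole model
    return sizes
```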

Repeatability Test for the Asymmetry Measurement of Human Appearance using General-purpose Depth Cameras (범용 깊이 카메라를 이용한 인체 외형 비대칭 측정의 반복성 평가)

  • Jang, Jun-Su
    • Journal of Physiology & Pathology in Korean Medicine / v.30 no.3 / pp.184-189 / 2016
  • Human appearance analysis is an important part of both Eastern and Western medicine, in fields such as Sasang constitutional medicine, rehabilitation medicine, and dentistry. With the rapid growth of depth camera technology, 3D measurement has become popular in many applications, including the medical field. In this study, the possibility of using depth cameras for asymmetry analysis of human appearance is examined. We describe the development of a 3D measurement system using two Microsoft Kinect depth cameras and fully automated asymmetry analysis algorithms based on computer vision. We compare the proposed automated method with the manual method usually used in asymmetry analysis. As a measure of repeatability, the standard deviations of the asymmetry indices are examined over 10 repeated experiments. Experimental results show that the standard deviation of the automated method (1.00 mm for the face, 1.22 mm for the body) is lower than that of the manual method (2.06 mm for the face, 3.44 mm for the body) on the same 3D measurements. We conclude that the automated method using depth cameras is applicable to practical asymmetry analysis and can contribute to reliable human appearance analysis.
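
The repeatability measure used above, the standard deviation of an asymmetry index over 10 repeated scans, can be illustrated with the following sketch. The asymmetry index here (mean distance between left-side landmarks and mirrored right-side landmarks) is a placeholder definition for illustration, not the paper's formula, and the landmark data are synthetic.

```python
# Hypothetical asymmetry index and its repeatability over 10 repeated scans.
import numpy as np

def asymmetry_index(left_pts, right_pts):
    """left_pts, right_pts: (N,3) corresponding landmarks in mm, with the
    mid-sagittal plane at x = 0."""
    mirrored_right = right_pts * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
    return np.mean(np.linalg.norm(left_pts - mirrored_right, axis=1))

rng = np.random.default_rng(1)
indices = []
for _ in range(10):                                   # 10 repeated measurements
    left = rng.normal([40, 0, 0], 1.0, size=(20, 3))
    right = rng.normal([-40, 0, 0], 1.0, size=(20, 3))
    indices.append(asymmetry_index(left, right))

repeatability = np.std(indices)                       # std of asymmetry indices (mm)
print(f"asymmetry index std over 10 trials: {repeatability:.2f} mm")
```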

Realtime 3D Human Full-Body Convergence Motion Capture using a Kinect Sensor (Kinect Sensor를 이용한 실시간 3D 인체 전신 융합 모션 캡처)

  • Kim, Sung-Ho
    • Journal of Digital Convergence / v.14 no.1 / pp.189-194 / 2016
  • Recently, demand for image processing technology has been increasing along with the widespread use of equipment such as cameras, camcorders, and CCTV. In particular, research and development on 3D imaging technology using depth cameras such as the Kinect sensor has become more active. The Kinect sensor is a high-performance camera that can acquire the 3D human skeleton structure from RGB, skeleton, and depth images frame by frame in real time. In this paper, we develop a system that captures the motion of the 3D human skeleton structure using the Kinect sensor and stores it in the general-purpose motion file formats TRC and BVH. The system also provides a function that converts captured TRC motion files into the BVH format. Finally, we confirm visually, through a motion capture data viewer, that the motion data captured using the Kinect sensor is captured correctly.
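
To illustrate the export step, the sketch below writes per-frame joint positions to a simplified TRC file. The joint list, header fields, and data are reduced placeholders; a real exporter would stream frames from the Kinect SDK and would also need a BVH writer with a full skeleton hierarchy, which is omitted here.

```python
# Simplified TRC export of captured joint positions (placeholder joints and data).
JOINTS = ["Head", "SpineMid", "HandLeft", "HandRight"]   # subset for illustration

def write_trc(path, frames, joints=JOINTS, rate=30.0):
    """frames: list of dicts mapping joint name -> (x, y, z) in mm."""
    with open(path, "w") as f:
        f.write(f"PathFileType\t4\t(X/Y/Z)\t{path}\n")
        f.write("DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\n")
        f.write(f"{rate}\t{rate}\t{len(frames)}\t{len(joints)}\tmm\n")
        f.write("Frame#\tTime\t" + "\t\t\t".join(joints) + "\n")
        f.write("\t\t" + "\t".join(f"X{i+1}\tY{i+1}\tZ{i+1}"
                                   for i in range(len(joints))) + "\n")
        for i, frame in enumerate(frames):
            row = [str(i + 1), f"{i / rate:.4f}"]
            for j in joints:
                row += [f"{v:.2f}" for v in frame[j]]
            f.write("\t".join(row) + "\n")

# Two dummy frames standing in for skeleton data streamed from the sensor.
dummy = {j: (0.0, 1000.0, 2000.0) for j in JOINTS}
write_trc("capture.trc", [dummy, dummy])
```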

Development of Algorithm for Depth Extraction in Stereo Endoscopic Image (스테레오 내시경 영상의 깊이정보추출 알고리즘 개발)

  • Lee, S.H.;Kim, J.H.;Hwang, D.S.;Song, C.G.;Lee, Y.M.;Kim, W.K.;Lee, M.H.
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.142-145 / 1997
  • This paper presents the development of a depth extraction algorithm for 3D endoscopic data using a stereo matching method and depth calculation. Whereas the purpose of other algorithms is to reconstruct the 3D object surface and produce a depth map, the purpose of this paper is to measure the exact depth, in centimeters, from the camera to the object. For this, we carried out camera calibration.
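
The depth-calculation step rests on the standard stereo triangulation relation Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the matched disparity. A minimal sketch follows, with assumed calibration values rather than ones from the paper:

```python
# Camera-to-object distance from a stereo disparity via Z = f * B / d.
def depth_cm(disparity_px, focal_px=800.0, baseline_cm=0.6):
    """Distance from the stereo camera to the object, in cm. The focal length
    and baseline are assumed values that would come from camera calibration."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_cm / disparity_px

# e.g. a 60-pixel disparity with a 6 mm baseline endoscope -> 8 cm
print(depth_cm(60.0))   # 8.0
```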

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing the camera view with a graphics engine, little has been known about how to apply the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space with camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views, as well as their depth images, in real time.
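
The geometry behind this kind of view synthesis can be sketched without OpenGL: back-project each pixel with its depth and the intrinsics K, transform the points into the virtual camera frame (R, t), and re-project them. The NumPy sketch below uses assumed camera parameters and skips z-buffering and hole filling, which the OpenGL pipeline in the paper would handle.

```python
# NumPy sketch of depth-based view synthesis (no z-buffering or hole filling).
import numpy as np

def synthesize_view(depth, color, K, R, t):
    """depth: (H,W) metric depth; color: (H,W,3); returns the warped color image."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T      # 3 x HW
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                    # back-project
    pts_virtual = R @ pts + t.reshape(3, 1)                                # to virtual camera
    proj = K @ pts_virtual
    uv = (proj[:2] / proj[2]).round().astype(int)                          # re-project
    out = np.zeros_like(color)
    ok = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (proj[2] > 0)
    out[uv[1, ok], uv[0, ok]] = color.reshape(-1, 3)[ok]
    return out

# Assumed intrinsics, a flat synthetic depth map, and a small sideways translation.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
depth = np.full((240, 320), 2.0)
color = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
novel = synthesize_view(depth, color, K, np.eye(3), np.array([0.05, 0.0, 0.0]))
```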

3D Fingertip Estimation based on the TOF Camera for Virtual Touch Screen System (가상 터치스크린 시스템을 위한 TOF 카메라 기반 3차원 손 끝 추정)

  • Kim, Min-Wook;Ahn, Yang-Keun;Jung, Kwang-Mo;Lee, Chil-Woo
    • The KIPS Transactions: Part B / v.17B no.4 / pp.287-294 / 2010
  • The TOF technique is one of the methods that can obtain an object's 3D depth information. However, the depth image has low resolution and the fingertip occupies a very small region, so it is difficult to find precise 3D fingertip information using only the depth image from a TOF camera. In this paper, we estimate the fingertip's 3D location using an arm model and reliable 3D hand location information refined with a hexahedral hand model. Using the proposed method, we can obtain more precise 3D fingertip information than by using the depth image alone.
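
A generic illustration of locating a fingertip in a low-resolution depth image is sketched below; it simply takes the closest valid pixel inside a hand region and back-projects it with assumed intrinsics, and does not reproduce the paper's arm model or hexahedral hand model.

```python
# Generic illustration: nearest point of a hand region as a rough fingertip candidate.
import numpy as np

def fingertip_3d(depth, hand_mask, fx=280.0, fy=280.0, cx=80.0, cy=60.0):
    """depth: (H,W) metric depth; hand_mask: boolean (H,W) hand segmentation.
    Intrinsics are assumed values for a low-resolution ToF camera."""
    d = np.where(hand_mask & (depth > 0), depth, np.inf)
    v, u = np.unravel_index(np.argmin(d), d.shape)    # pixel nearest to the camera
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Synthetic example: a closer blob stands in for the hand.
depth = np.full((120, 160), 1.2)
depth[40:60, 70:90] = 0.6
mask = np.zeros((120, 160), dtype=bool)
mask[35:65, 65:95] = True
print(fingertip_3d(depth, mask))
```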

3D Depth Camera-based Obstacle Detection in the Active Safety System of an Electric Wheelchair (전동휠체어 주행안전을 위한 3차원 깊이카메라 기반 장애물검출)

  • Seo, Joonho;Kim, Chang Won
    • Journal of Institute of Control, Robotics and Systems / v.22 no.7 / pp.552-556 / 2016
  • Obstacle detection is a key feature in the safe driving control of electric wheelchairs. The suggested obstacle detection algorithm was designed to provide an obstacle avoidance direction and to detect the existence of cliffs. By means of this information, the wheelchair can determine where to steer and whether to stop or go. A 3D depth camera (Microsoft Kinect) is used to scan the 3D point data of the scene, extract information on obstacles, and produce a steering direction for obstacle avoidance. To be specific, ground detection is applied to extract the obstacle candidates from the scanned data, and the candidates are projected onto a 2D map. The 2D map provides discretized information on the extracted obstacles for deciding the avoidance direction (left or right) of the wheelchair. As an additional function, cliff detection is developed: by defining a "cliff band" and computing the ratio of the detected area within this band to the predefined band area, the cliff detection algorithm can decide whether a cliff is in front of the wheelchair. Vehicle tests were carried out by applying the algorithm to the electric wheelchair, and the detailed functions of obstacle detection, such as providing an avoidance direction and detecting the existence of cliffs, were demonstrated.
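
The decision logic described above can be summarized in a short sketch: drop ground points, project the remaining obstacle points onto a 2D map to pick a left/right avoidance direction, and flag a cliff when the ground area detected inside a predefined band falls below a ratio threshold. All thresholds, band dimensions, and the point cloud below are assumed values for illustration, not the paper's parameters.

```python
# Illustrative obstacle-avoidance direction and cliff-band check on a point cloud.
import numpy as np

def avoidance_direction(points, ground_height=0.05):
    """points: (N,3) camera points as (x right, y height, z forward) in metres."""
    obstacles = points[points[:, 1] > ground_height]           # discard the ground plane
    left = np.count_nonzero(obstacles[:, 0] < 0)
    right = np.count_nonzero(obstacles[:, 0] >= 0)
    return "steer right" if left > right else "steer left"

def cliff_detected(points, band_z=(0.5, 1.0), band_x=(-0.4, 0.4),
                   min_ratio=0.5, cell=0.02):
    """Flag a cliff when the ground area actually detected inside a predefined
    band ahead of the wheelchair is a small fraction of the band's full area
    (few ground returns there usually indicates a drop-off)."""
    ground = points[np.abs(points[:, 1]) < 0.05]                # near-ground points
    in_band = ground[(ground[:, 2] > band_z[0]) & (ground[:, 2] < band_z[1]) &
                     (ground[:, 0] > band_x[0]) & (ground[:, 0] < band_x[1])]
    cells = {(int(x // cell), int(z // cell)) for x, _, z in in_band}
    expected = ((band_x[1] - band_x[0]) / cell) * ((band_z[1] - band_z[0]) / cell)
    return len(cells) / expected < min_ratio

# Synthetic cloud standing in for a Kinect scan of the scene ahead.
rng = np.random.default_rng(2)
cloud = rng.uniform([-1, 0, 0.3], [1, 1.5, 3.0], size=(5000, 3))
print(avoidance_direction(cloud), cliff_detected(cloud))
```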