• Title/Summary/Keyword: 3D Depth Camera

Noise Reduction Method Using Randomized Unscented Kalman Filter for RGB+D Camera Sensors (랜덤 무향 칼만 필터를 이용한 RGB+D 카메라 센서의 잡음 보정 기법)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.808-811
    • /
    • 2020
  • This paper proposes a method to minimize the error of the Kinect camera sensor by using a randomized unscented Kalman filter. Kinect cameras, which provide RGB values and depth information, suffer from nonlinear sensor errors that cause problems in applications such as skeleton detection. Conventional methods have tried to remove these errors with various filtering techniques, but they are limited in how effectively they can suppress nonlinear noise. Therefore, in this paper, a randomized unscented Kalman filter is applied to predict and update the nonlinear noise characteristics, and the result is used to improve skeleton detection performance. The experimental results confirm that the proposed method is superior to conventional methods both in quantitative results and in images reconstructed in 3D space.
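The sketch below illustrates only the general predict/update cycle of an unscented Kalman filter applied to a noisy depth reading, using the filterpy library with a constant-velocity model; it is not the randomized variant the paper proposes, and the frame rate, noise covariances, and sample values are illustrative assumptions.

```python
# Sketch: smoothing a noisy Kinect depth value with a standard UKF (filterpy).
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0 / 30.0  # assumed Kinect frame interval (30 fps)

def fx(x, dt):
    # constant-velocity motion model for the state [depth, depth_rate]
    return np.array([x[0] + dt * x[1], x[1]])

def hx(x):
    # the sensor observes only the (noisy) depth value
    return np.array([x[0]])

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([1000.0, 0.0])   # initial depth (mm) and velocity, assumed
ukf.P *= 100.0                    # initial uncertainty
ukf.R = np.array([[25.0]])        # measurement noise (mm^2), assumed
ukf.Q = np.eye(2) * 0.1           # process noise, assumed

for z in [1003.0, 998.0, 1010.0, 995.0]:   # synthetic noisy depth samples (mm)
    ukf.predict()
    ukf.update(np.array([z]))
    print(f"filtered depth: {ukf.x[0]:.1f} mm")
```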

Object detection using a light field camera (라이트 필드 카메라를 사용한 객체 검출)

  • Jeong, Mingu;Kim, Dohun;Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.109-111
    • /
    • 2021
  • Recently, computer vision research using light field cameras has been actively conducted. Since light field cameras capture spatial information, they are being studied in fields such as depth map estimation, super resolution, and 3D object detection. In this paper, we propose a method for detecting objects in blurred images using the 7×7 array of sub-images acquired by a light field camera. Blurred images, which are a weakness of conventional cameras, are handled by detecting objects through the light field camera. The proposed method uses the SSD algorithm and evaluates detection performance on blurred images acquired from the light field camera.
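As a rough illustration of the detection stage, the sketch below runs torchvision's pre-trained SSD300-VGG16 over a 7×7 set of sub-aperture images. The file layout and confidence threshold are assumptions, and the paper's own training on blurred light-field data is not reproduced.

```python
# Sketch: SSD detection over the 7x7 sub-aperture views of a light-field capture.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

detections = []
for u in range(7):
    for v in range(7):
        img = read_image(f"lightfield/view_{u}_{v}.png")   # hypothetical path
        img = convert_image_dtype(img, torch.float)        # uint8 -> float in [0, 1]
        with torch.no_grad():
            out = model([img])[0]       # dict with 'boxes', 'labels', 'scores'
        keep = out["scores"] > 0.5      # assumed confidence threshold
        detections.append((u, v, out["boxes"][keep], out["labels"][keep]))
```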

3D Depth Measurement System based on Parameter Calibration of the Multi-Sensors (실거리 파라미터 교정식 복합센서 기반 3차원 거리측정 시스템)

  • Kim, Jong-Man;Kim, Won-Sop;Hwang, Jong-Sun;Kim, Yeong-Min
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2006.05a
    • /
    • pp.125-129
    • /
    • 2006
  • The depth measurement system with multiple sensors (laser, camera, mirror) is analyzed and a parameter calibration technique is proposed. In the proposed system, the laser beam is reflected onto the object by a rotating mirror, and the position of the laser spot is observed by the camera through the same mirror. The depth of the object point illuminated by the laser beam is computed from the spot's pixel position on the CCD. Several internal and external parameters, such as the inter-pixel distance, focal length, and the position and orientation of the system components, contribute to the depth measurement error. In this paper, an error sensitivity analysis of these parameters shows that the most important error sources are the laser beam angle and the inter-pixel distance.
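A much simpler fixed-geometry triangulation (without the rotating mirror) shows how the depth follows from the spot's pixel position and why the beam angle and inter-pixel distance dominate the error. All numerical values in the sketch below are illustrative assumptions.

```python
# Sketch: single-point laser triangulation with a camera at the origin, a laser
# offset by baseline b, and the beam tilted by angle theta toward the optical axis.
import math

f = 16.0e-3       # focal length [m], assumed
p = 7.4e-6        # inter-pixel distance [m], assumed
b = 0.10          # baseline between laser and camera [m], assumed
theta = math.radians(20.0)   # laser beam angle, assumed
u = 150.0         # laser spot offset from the principal point [pixels]

# The beam x = b - z*tan(theta) images at u*p = f*(b - z*tan(theta))/z, hence:
z = f * b / (f * math.tan(theta) + u * p)
print(f"depth: {z * 1000:.1f} mm")

# Sensitivity check: a small error in the beam angle already shifts the estimate,
# consistent with the paper's conclusion about the dominant error sources.
z_err = f * b / (f * math.tan(theta + math.radians(0.1)) + u * p)
print(f"depth with 0.1 deg beam-angle error: {z_err * 1000:.1f} mm")
```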

Multi-slit prompt-gamma camera for locating of distal dose falloff in proton therapy

  • Park, Jong Hoon;Kim, Sung Hun;Ku, Youngmo;Kim, Chan Hyeong;Lee, Han Rim;Jeong, Jong Hwi;Lee, Se Byeong;Shin, Dong Ho
    • Nuclear Engineering and Technology
    • /
    • v.51 no.5
    • /
    • pp.1406-1416
    • /
    • 2019
  • In this research, a multi-slit prompt-gamma camera was developed to locate the distal dose falloff of proton beam spots in spot-scanning proton therapy. To evaluate the performance of the developed camera, therapeutic proton beams were delivered to a solid plate phantom and the prompt gammas from the phantom were measured with the camera. Our results show that the camera locates the 90% distal dose falloff (d90%) within about 2-3 mm of error for spots composed of 3.8×10^8 protons or more. The measured location of d90% is not very sensitive to the irradiation depth of the proton beam (i.e., the depth of the proton beam from the phantom surface toward which the camera is located). Considering the number of protons per spot for the most distal spots in typical treatment cases (i.e., a 2 Gy dose divided into 2 fields), the camera can locate d90% only for a fraction of the spots, depending on the treatment case. However, the information from those spots is still valuable in that the multi-slit prompt-gamma camera locates the distal dose falloff solely from the prompt-gamma measurement, i.e., without referring to Monte Carlo simulation.
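As a simplified illustration of locating a distal falloff position, the sketch below finds the depth at which a synthetic one-dimensional profile drops to 90% of its maximum, using linear interpolation between samples. It does not model the camera's slit geometry, detector response, or background subtraction.

```python
# Sketch: locating the 90% falloff depth from a synthetic 1-D depth profile.
import numpy as np

depth = np.arange(0.0, 200.0, 2.0)                       # mm along the beam axis
profile = 1.0 / (1.0 + np.exp((depth - 150.0) / 3.0))    # synthetic distal falloff

target = 0.9 * profile.max()
idx = np.argmax(profile < target)          # first sample below 90% of the maximum
x0, x1 = depth[idx - 1], depth[idx]
y0, y1 = profile[idx - 1], profile[idx]
d90 = x0 + (target - y0) * (x1 - x0) / (y1 - y0)   # linear interpolation
print(f"90% falloff located at {d90:.1f} mm")
```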

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho;Ko, Yun Ho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.12
    • /
    • pp.1865-1873
    • /
    • 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points in order to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with a low computational burden, whereas a stereo camera suffers from the impossibility of stereo matching in regions of simple texture, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even lead to motion estimation failure. Therefore, in this paper, we propose three interpolation methods that can be applied to densify sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
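The sketch below shows the general idea of densifying a sparse projected LiDAR depth image. It uses SciPy's piecewise-linear griddata as a stand-in for bilinear interpolation on the image grid; the "selective" criterion the paper proposes for choosing which pixels to interpolate is not implemented.

```python
# Sketch: interpolating a sparse LiDAR depth image over its valid samples.
import numpy as np
from scipy.interpolate import griddata

def densify(sparse_depth):
    """sparse_depth: HxW array with 0 where no LiDAR return was projected."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)
    values = sparse_depth[ys, xs]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    dense = griddata((ys, xs), values, (grid_y, grid_x), method="linear")
    # pixels outside the convex hull of the samples stay NaN; keep them invalid
    return np.nan_to_num(dense, nan=0.0)

sparse = np.zeros((8, 8))
sparse[1, 1], sparse[1, 6], sparse[6, 1], sparse[6, 6] = 2.0, 2.5, 3.0, 3.5
print(densify(sparse))
```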

Active Shape Model-based Object Tracking using Depth Sensor (깊이 센서를 이용한 능동형태모델 기반의 객체 추적 방법)

  • Jung, Hun Jo;Lee, Dong Eun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.1
    • /
    • pp.141-150
    • /
    • 2013
  • This study proposes a method that uses an Active Shape Model to track an object after separating it from the background with a depth sensor. Unlike a common visual camera, the depth sensor is not affected by illumination intensity, so the object can be extracted more robustly. The proposed algorithm removes the horizontal component from the initial depth map and separates the object using the vertical component. In addition, morphology operations and labeling are applied to perform image correction and object extraction more efficiently. By applying the Active Shape Model to the extracted object, the object can be tracked more robustly; the Active Shape Model is robust to object occlusion. Compared with visual camera-based object tracking algorithms, the proposed depth sensor-based technique is more efficient and more robust at object tracking. Experimental results show that the proposed ASM-based algorithm using a depth sensor can robustly track objects in real time.
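A minimal sketch of the kind of depth-based pre-processing described above (segmentation by a depth range, morphological cleanup, and connected-component labeling) is given below using OpenCV. The working depth range and kernel size are assumptions, and the ASM fitting itself is not shown.

```python
# Sketch: extract the foreground object from a depth map before ASM tracking.
import cv2
import numpy as np

def extract_object(depth_mm):
    # keep pixels in an assumed working range (0.5 m - 2.0 m)
    mask = ((depth_mm > 500) & (depth_mm < 2000)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small gaps
    # connected-component labeling; keep the largest blob (label 0 is background)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255
```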

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.29-35
    • /
    • 2012
  • Depth image-based rendering (DIBR) is a process for rendering virtual views from a color image and its corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which relies on infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and filtered by median filtering to reduce truncation errors. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. In order to fill the remaining holes caused by dis-occlusion, we perform a background-based image in-painting operation. Finally, we obtain a synthesized image without any dis-occlusion. Experimental results show that the proposed algorithm generates very natural images in real time.
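The sketch below shows only the core forward-warping step of DIBR for a horizontally shifted virtual view: each pixel is shifted by a depth-dependent disparity with a z-buffer, and unfilled pixels are marked as holes for a later in-painting stage. The baseline and depth range are assumptions, and the paper's pre-processing, median filtering, and background-based in-painting are omitted.

```python
# Sketch: forward-warp a color image to a horizontally shifted virtual viewpoint.
import numpy as np

def warp_to_virtual_view(color, depth_mm, baseline_px=30.0, z_near=500.0, z_far=4000.0):
    """color: HxWx3 image, depth_mm: HxW depth map. Returns warped image and hole mask."""
    h, w = depth_mm.shape
    depth_mm = np.clip(depth_mm, z_near, z_far)   # guard against invalid/zero depths
    # disparity grows as the point gets closer to the camera
    disparity = baseline_px * (1.0 / depth_mm - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    warped = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    z_buffer = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    xs_new = np.clip((xs + disparity).astype(int), 0, w - 1)
    for y, x, xn in zip(ys.ravel(), xs.ravel(), xs_new.ravel()):
        if depth_mm[y, x] < z_buffer[y, xn]:      # keep the closest surface
            z_buffer[y, xn] = depth_mm[y, x]
            warped[y, xn] = color[y, x]
            filled[y, xn] = True
    holes = ~filled                               # dis-occluded areas to in-paint later
    return warped, holes
```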

Autostereoscopic 3D display system with moving parallax barrier and eye-tracking (이동형 패럴랙스배리어와 시점 추적을 이용한 3D 디스플레이 시스템)

  • Chae, Ho-Byung;Ryu, Young-Roc;Lee, Gang-Sung;Lee, Seung-Hyun
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.419-427
    • /
    • 2009
  • We present a novel head tracking system for stereoscopic displays that allows the viewer a high degree of freedom of movement. The tracker segments the viewer from background objects using their relative distance. A time-of-flight (TOF) depth camera is used to generate a key signal for the eye tracking application. A moving parallax barrier is also introduced to overcome the disadvantage of a fixed parallax barrier, which supports viewing only at specific locations.

A Study on Depth Data Extraction for Object Based on Camera Calibration of Known Patterns (기지 패턴의 카메라 Calibration에 기반한 물체의 깊이 데이터 추출에 관한 연구)

  • 조현우;서경호;김태효
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.173-176
    • /
    • 2001
  • In this paper, a new measurement system for depth data extraction is implemented based on camera calibration with a known pattern. The relation between the 3D world coordinate system and the 2D image coordinate system is analyzed. A new camera calibration algorithm is established from this analysis, and the internal and external parameters of the CCD camera are obtained. Assuming the measurement plane is horizontal, approximate solutions of the 2D plane equation and the coordinate transformation equation are obtained with the Newton-Raphson method and stored in a look-up table for real-time processing. A slit laser light is projected onto the object, and a 2D image is obtained on the x-z plane of the measurement system. A 3D shape image can be obtained by continuously acquiring the 2D (x-z) images while the object moves in the y direction. The 3D shape images are displayed on a computer monitor using OpenGL software. The measurement results show that the depth data have about ±1% error at pixel resolution, which appears to be caused by vibration of the mechanical and optical systems. We expect that the measurement system needs mechanical stability and a precision optical system in order to improve its accuracy.
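As a modern stand-in for the calibration step described above, the sketch below estimates the camera's internal parameters from images of a known planar pattern with OpenCV. The board size, square size, and image paths are assumptions; the paper's own calibration algorithm, Newton-Raphson look-up table, and slit-laser scanning are not reproduced.

```python
# Sketch: checkerboard-based estimation of the camera intrinsic parameters.
import glob
import cv2
import numpy as np

board = (9, 6)                 # inner corners of the checkerboard, assumed
square = 0.025                 # square size in meters, assumed
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):                  # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K)
```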

A Study on a Measurement Method for 2D Anthropometry using Digital Camera (디지털 카메라를 이용한 2D 인체계측법 연구)

  • 손희정;김효숙;최창석;손희순;김창우
    • The Research Journal of the Costume Culture
    • /
    • v.11 no.1
    • /
    • pp.11-19
    • /
    • 2003
  • This study suggests a new 2D anthropometric method using a digital camera. The MK2001 program, which can convert 2D measurements into 3D measurements, is used. To validate the method, 100 college students were measured with both direct and indirect anthropometric methods. The measurements were processed with the SPSS ver. 10 statistical package, and the average, standard deviation, and t-test were calculated for each category. Most of the 2D measurements are larger than the direct measurements, but the difference between direct and indirect measurements is less than 2 cm. In the t-test results, the height measurements and 16 other measurements that are easy to measure show no meaningful difference, staying within 1 cm. The depth measurements show the largest differences. These results show that the MK2001 program (a 2D anthropometry method using a digital camera) is usable for measuring the human body.
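The sketch below illustrates the kind of paired comparison described above (direct vs. camera-based measurements of one body dimension) using SciPy instead of SPSS. The sample values are made up for illustration only.

```python
# Sketch: paired t-test between direct and indirect (2D camera) measurements.
import numpy as np
from scipy import stats

direct = np.array([92.1, 88.4, 95.0, 90.2, 87.9])      # cm, direct anthropometry (synthetic)
indirect = np.array([92.8, 89.5, 95.6, 91.0, 88.3])    # cm, 2D camera method (synthetic)

t_stat, p_value = stats.ttest_rel(direct, indirect)
print(f"mean difference: {np.mean(indirect - direct):.2f} cm, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```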
