• Title/Summary/Keyword: Depth camera


Automatic Extraction of Particle Streaks for 3D Flow Measurement

  • Kawasue, Kikuhito;Ohya, Yuichiro
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1999.10a / pp.270-273 / 1999
  • Circular dynamic stereo has special advantages: it enables 3-D measurement using a single TV camera and achieves highly accurate measurement without cumbersome calibration. Annular particle streaks are recorded with this system, and the size of an annular streak is directly related to its depth from the TV camera. That is, the size of an annular streak is inversely proportional to the depth from the TV camera, so the depth can be measured automatically by image processing techniques. Overlapped streaks can also be processed by our method. Flow measurement in a water tank is one application of our system. Tracer particles are introduced into the water and, since they flow with the water, three-dimensional velocity distributions in the tank can be obtained by measuring the movement of all tracer particles. Experimental results demonstrate the feasibility of our method.

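The core relation in the abstract above — annular streak size inversely proportional to depth — can be written as Z = k/d. A minimal sketch, where the function names and the scale constant k are hypothetical (the paper's actual calibration is not given here) and k is fixed by one reference measurement at a known depth:

```python
def calibrate_scale(diameter_px, known_depth):
    """Solve Z = k / d for k from one reference streak at a known depth."""
    return diameter_px * known_depth

def depth_from_streak_diameter(diameter_px, k):
    """Depth is inversely proportional to the annular streak diameter: Z = k / d."""
    if diameter_px <= 0:
        raise ValueError("streak diameter must be positive")
    return k / diameter_px

# One reference streak of 50 px at depth 100 fixes k; a 25 px streak
# then lies twice as far from the camera.
k = calibrate_scale(50.0, 100.0)
far_depth = depth_from_streak_diameter(25.0, k)
```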

Object Recognition using 3D Depth Measurement System (3차원 거리 측정 장치를 이용한 물체 인식)

  • Gim, Seong-Chan;Ko, Su-Hong;Kim, Hyong-Suk
    • Proceedings of the IEEK Conference / 2006.06a / pp.941-942 / 2006
  • A depth measurement system that recognizes the 3D shape of objects using a single camera, a line laser, and a rotating mirror has been investigated. The camera and the light source are fixed, facing the rotating mirror. The laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined. The camera detects the laser light on the object surfaces through the same mirror, and the area to be measured is scanned by rotating the mirror. The segmentation step of object recognition is performed on the restored 3D depth data. The object recognition domain can be reduced by separating the objects of interest from the complex background.

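The single-camera/line-laser geometry described above reduces to plane-ray triangulation: intersecting the camera ray through a detected laser pixel with the laser light plane yields the depth. A simplified sketch that folds the mirror optics into an effective baseline and projection angle (all parameter values hypothetical):

```python
import math

def triangulate_depth(x_px, focal_px, baseline_m, laser_angle_rad):
    """Depth of the laser spot seen at image coordinate x_px.

    Camera at the origin looking along +Z; laser plane source offset by
    baseline_m along +X and tilted laser_angle_rad toward the optical axis.
    Intersecting the camera ray X = x*Z/f with the laser plane
    X = b - Z*tan(theta) gives Z = b / (x/f + tan(theta)).
    """
    denom = x_px / focal_px + math.tan(laser_angle_rad)
    if denom <= 0:
        raise ValueError("no intersection in front of the camera")
    return baseline_m / denom

# Laser parallel to the optical axis (theta = 0), 10 cm baseline,
# spot detected 100 px off-center with a 1000 px focal length.
z = triangulate_depth(100.0, 1000.0, 0.1, 0.0)
```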

Camera Identification of DIBR-based Stereoscopic Image using Sensor Pattern Noise (센서패턴잡음을 이용한 DIBR 기반 입체영상의 카메라 판별)

  • Lee, Jun-Hee
    • Journal of the Korea Institute of Military Science and Technology / v.19 no.1 / pp.66-75 / 2016
  • A stereoscopic image generated by depth image-based rendering (DIBR) for a surveillance robot or camera is appropriate for a low-bandwidth network. The image is critical data for a commander's decision-making, so its integrity must be guaranteed. One way to detect manipulation is to check whether the stereoscopic image was taken by the original camera. Sensor pattern noise (SPN), widely used for camera identification, cannot be applied directly to a stereoscopic image because of the stereo warping in DIBR. To solve this problem, we find a shifted object in the stereoscopic image and relocate it to its original location in the center image. The similarity between the SPNs extracted from the stereoscopic image and from the original camera is then measured only over the object area, so the source camera can be determined.
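The SPN check the abstract describes — correlate the noise residual of the questioned image with the camera's reference fingerprint over the relocated object area — can be sketched roughly as follows. The mean-filter denoiser and the function names are simplistic stand-ins for illustration (SPN work typically uses a wavelet-based denoiser); only the residual-plus-normalized-correlation structure is shown:

```python
import numpy as np

def noise_residual(img, kernel=3):
    """Residual = image minus a denoised copy; approximates the SPN.
    Here the denoiser is a crude local mean filter (a stand-in)."""
    img = np.asarray(img, dtype=float)
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for dy in range(kernel):
        for dx in range(kernel):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= kernel * kernel
    return img - smooth

def ncc(a, b):
    """Normalized cross-correlation between two residual patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

A high `ncc` between the object-area residual and the camera's reference fingerprint would support the claim that the stereoscopic image came from that camera.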

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye contact image, we capture a pair of color and depth videos, and the single foreground user is separated from the background. Since the raw depth data contains several types of noise, we apply a joint bilateral filtering method, followed by a discontinuity-adaptive depth filter on the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system realizes eye contact efficiently, providing realistic telepresence.
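The joint bilateral filtering step mentioned above weights each depth neighbor by spatial closeness and by color similarity in the guide image, so smoothed depth edges stay aligned with color edges. A minimal, unoptimized CPU sketch (the paper's GPU implementation and parameter values are not reproduced; all values here are hypothetical):

```python
import math
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth `depth` using weights from spatial distance (sigma_s) and
    from intensity differences in the color `guide` image (sigma_r)."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        diff = float(guide[ny, nx]) - float(guide[y, x])
                        wr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ny, nx]
                        wsum += ws * wr
            out[y, x] = acc / wsum
    return out
```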

Volumetric Visualization using Depth Information of Stereo Images (스테레오 영상에서의 깊이정보를 이용한 3차원 입체화)

  • Lee, S.J.;Kim, J.H.;Lee, J.W.;Ahn, J.S.;Kim, H.S.;Lee, M.H.
    • Proceedings of the KIEE Conference / 1999.11c / pp.839-841 / 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, we performed feature-point-based stereo matching to find the depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) algorithm. The final resulting image aids the visual understanding of the depth information.


Development of an Algorithm for Depth Extraction in Stereo Endoscopic Images (스테레오 내시경 영상의 깊이정보추출 알고리즘 개발)

  • Lee, S.H.;Kim, J.H.;Hwang, D.S.;Song, C.G.;Lee, Y.M.;Kim, W.K.;Lee, M.H.
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.142-145 / 1997
  • This paper presents the development of a depth extraction algorithm for 3D endoscopic data using a stereo matching method and depth calculation. Other algorithms aim to reconstruct the 3D object surface and build a depth map, but the purpose of this paper is to measure the exact depth, in centimeters [cm], from the camera to the object. For this, we carried out camera calibration.

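The exact metric depth in centimeters that the abstract above targets follows, for a calibrated and rectified stereo pair, from the standard relation Z = f·B/d (focal length f in pixels, baseline B, disparity d). A minimal sketch with hypothetical parameter values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_cm):
    """Rectified-stereo depth Z = f * B / d, in the unit of the baseline."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_cm / disparity_px

# A 7 px disparity with a 700 px focal length and a 1 cm baseline
# corresponds to a point 100 cm from the camera.
z_cm = depth_from_disparity(7.0, 700.0, 1.0)
```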

Reconstruction of 3D Virtual Reality Using Depth Information of Stereo Image (스테레오 영상에서의 깊이정보를 이용한 3D 가상현실 구현)

  • Lee, S.J.;Kim, J.H.;Lee, J.W.;Ahn, J.S.;Lee, D.J.;Lee, M.H.
    • Proceedings of the KIEE Conference / 1999.07g / pp.2950-2952 / 1999
  • This paper presents a method for 3D reconstruction of depth information from endoscopic stereoscopic images. After camera modeling to find the camera parameters, we performed feature-point-based stereo matching to find the depth information. The acquired depth information is then reconstructed in 3D using the NURBS (Non-Uniform Rational B-Spline) method and OpenGL. The final resulting image aids the visual understanding of the depth information.


Accelerated Generation Algorithm for an Elemental Image Array Using Depth Information in Computational Integral Imaging

  • Piao, Yongri;Kwon, Young-Man;Zhang, Miao;Lee, Joon-Jae
    • Journal of information and communication convergence engineering / v.11 no.2 / pp.132-138 / 2013
  • In this paper, an accelerated algorithm for effectively generating an elemental image array in a computational integral imaging system is proposed. In the proposed method, the depth information of a 3D object is extracted from images picked up by a stereo camera or a depth camera, and the elemental image array is then generated by the proposed accelerated algorithm using this depth information. The 3D image generated by the proposed algorithm was compared with that of the conventional direct algorithm to verify the efficiency of the proposed method. The experimental results confirm the accuracy of the elemental images generated by the proposed method.

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result; registration is complete once a depth map corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; for instance, automatic driver assistance systems, robotics, and other systems that require visual information processing might find this work useful. Since the LIDAR only provides depth values, processing and generating a depth map that corresponds to the RGB image is recommended. Experimental results are provided to validate the proposed approach.
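Registering LIDAR points to an RGB image, as the abstract above outlines, amounts to transforming each point by the extrinsics (R, t) between the two sensors and projecting it with the camera intrinsics K. A minimal pinhole sketch (the RPLIDAR-A3 is a 2-D scanner and the paper's actual calibration is not reproduced here; all values are illustrative):

```python
import numpy as np

def project_lidar_point(p_lidar, R, t, K):
    """Map a 3-D LIDAR point into the camera image.

    R, t: rotation and translation from the LIDAR frame to the camera frame.
    K:    3x3 pinhole intrinsics. Returns (u, v, depth) in pixels/meters,
    or None if the point lies behind the camera (outside the shared FOV).
    """
    p_cam = R @ np.asarray(p_lidar, dtype=float) + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_cam[2]

# Sensors coincident (identity extrinsics): a point 2 m straight ahead
# lands at the principal point, and its depth fills that pixel of the map.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
result = project_lidar_point([0.0, 0.0, 2.0], np.eye(3), np.zeros(3), K)
```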

A Study on Control of Drone Swarms Using Depth Camera (Depth 카메라를 사용한 군집 드론의 제어에 대한 연구)

  • Lee, Seong-Ho;Kim, Dong-Han;Han, Kyong-Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.8 / pp.1080-1088 / 2018
  • General methods of controlling a drone are divided into manual control and automatic control, in which the drone moves along a route. For manual control, an operator must be able to determine the location and status of the drone and have a controller to control it remotely. When people control a drone, they gather information about its location and attitude with their eyes and receive internal information such as battery voltage and atmospheric pressure through telemetry. They make decisions about the drone's movement based on the gathered information and control it with a radio device. Automatic control, in which a drone finds its route itself, is not much different from manual control by a human: information about the drone's attitude is collected with gyro and accelerometer sensors, the internal information is delivered to the CPU digitally, and the location is obtained from GPS, atmospheric pressure sensors, camera sensors, and ultrasonic sensors. This paper presents an investigation into drone control by a remote computer. Instead of using the drone's automatic control functions, this approach has a computer observe the drone, determine its movement from the observations, and control it with a radio device. A computer with a depth camera collects information, makes decisions, and controls the drone in a similar way to a human operator, which makes the approach applicable to various fields. Its usability is further enhanced because it can control common commercial drones rather than drones specially manufactured for swarm flight. It can also be used to prevent drones from colliding with each other, to control access by drones, and to control drones without a permit.