• Title/Summary/Keyword: TOF camera

Search Result 28, Processing Time 0.026 seconds

Autostereoscopic 3D display system with moving parallax barrier and eye-tracking (이동형 패럴랙스배리어와 시점 추적을 이용한 3D 디스플레이 시스템)

  • Chae, Ho-Byung;Ryu, Young-Roc;Lee, Gang-Sung;Lee, Seung-Hyun
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.419-427
    • /
    • 2009
  • We present a novel head tracking system for stereoscopic displays that allows the viewer a high degree of freedom of movement. The tracker can segment the viewer from background objects using their relative distance. A depth camera using TOF (time-of-flight) is used to generate a key signal for the eye-tracking application. A moving parallax barrier is also introduced to overcome the limitation of a fixed parallax barrier, which restricts observation to specific locations.
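
  The depth-keyed segmentation described above can be sketched as a simple threshold on the TOF depth map; the range values and function name below are illustrative assumptions, not taken from the paper:

  ```python
  import numpy as np

  def depth_key(depth_map, near_mm=500, far_mm=1500):
      """Binary key signal: 1 where a pixel falls in the viewer's depth range.

      near_mm / far_mm are illustrative thresholds, not values from the paper.
      """
      return ((depth_map >= near_mm) & (depth_map <= far_mm)).astype(np.uint8)

  # Toy TOF frame: background at 3000 mm, a "viewer" patch at 1000 mm.
  frame = np.full((4, 4), 3000)
  frame[1:3, 1:3] = 1000
  key = depth_key(frame)   # 1 on the viewer patch, 0 on the background
  ```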

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.13-18
    • /
    • 2011
  • In this paper, we explain capturing, postprocessing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (TOF) depth camera measures the scene's depth in real time, the output depth images contain noise and lens distortion, and their correlation with the multi-view color images is low. Therefore, it is essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
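
  One common correction step for noisy TOF depth images is outlier suppression; a minimal sketch (a naive median filter, not the paper's specific postprocessing pipeline) might look like:

  ```python
  import numpy as np

  def median_filter_depth(depth, k=3):
      """Naive k x k median filter to suppress TOF depth outliers (illustrative)."""
      pad = k // 2
      padded = np.pad(depth, pad, mode='edge')
      out = np.empty_like(depth)
      h, w = depth.shape
      for y in range(h):
          for x in range(w):
              out[y, x] = np.median(padded[y:y + k, x:x + k])
      return out

  noisy = np.full((5, 5), 100.0)
  noisy[2, 2] = 1000.0               # a single "flying pixel" outlier
  clean = median_filter_depth(noisy)  # outlier replaced by the local median
  ```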

A Real Time Low-Cost Hand Gesture Control System for Interaction with Mechanical Device (기계 장치와의 상호작용을 위한 실시간 저비용 손동작 제어 시스템)

  • Hwang, Tae-Hoon;Kim, Jin-Heon
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1423-1429
    • /
    • 2019
  • Recently, systems that support efficient interaction, known as human-machine interfaces (HMI), have become a hot topic. In this paper, we propose a new real-time, low-cost hand gesture control system as a vehicle interaction method. To reduce computation time, depth information was acquired using a time-of-flight (TOF) camera, because detecting hand regions with an RGB camera requires a large amount of computation. In addition, Fourier descriptors were used to reduce the size of the learning model. Since the Fourier descriptor uses only a small number of points from the whole image, the learning model can be miniaturized. To evaluate the performance of the proposed technique, we compared the speeds of a desktop and a Raspberry Pi 2. Experimental results show that the performance difference between the small embedded board and the desktop is not significant. In the gesture recognition experiment, a recognition rate of 95.16% was confirmed.
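
  A standard way to build a Fourier descriptor from a hand contour is to treat the boundary points as complex numbers and keep a few normalized FFT magnitudes; the sketch below shows the general technique (coefficient count and normalization choices are assumptions, not the paper's exact recipe):

  ```python
  import numpy as np

  def fourier_descriptor(contour, n_coeffs=8):
      """Compact shape signature from a closed contour (N x 2 array of x, y).

      Dropping the DC term gives translation invariance; normalizing by the
      first harmonic gives scale invariance. n_coeffs is illustrative.
      """
      z = contour[:, 0] + 1j * contour[:, 1]   # complex contour representation
      coeffs = np.fft.fft(z)
      coeffs[0] = 0                            # remove DC term (translation)
      mags = np.abs(coeffs)
      mags /= mags[1]                          # normalize (scale)
      return mags[1:1 + n_coeffs]

  # A contour and a scaled + translated copy yield the same descriptor.
  theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
  circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
  d1 = fourier_descriptor(circle)
  d2 = fourier_descriptor(3.0 * circle + 10.0)
  ```

  Because only `n_coeffs` magnitudes feed the classifier, the learning model stays small, which matches the miniaturization argument in the abstract.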

Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1477-1486
    • /
    • 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth image synthesis. We acquire a reliable depth image using a TOF depth camera and extract the parameters of the reference-view cameras. Once the position of the virtual view-point camera is defined, we select the optimal reference-view camera considering its position and its distance from the virtual view-point camera. Setting a reference-view camera on the opposite side of the primary reference-view camera as the sub reference-view, we generate the depth image of the virtual view-point and compensate the occlusion boundaries of the virtual view-point depth image using the depth image of the sub reference-view. In this step, remaining hole boundaries are filled with the minimum values of their neighborhoods, producing the final depth image of the virtual view-point. Finally, using the resulting depth image, we generate the CGH. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
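
  The hole-filling step mentioned above (replacing remaining holes with the minimum of their neighborhoods, which in typical 8-bit depth maps favors the farther background) can be sketched as follows; the hole marker value and window size are assumptions for illustration:

  ```python
  import numpy as np

  def fill_holes_min(depth, hole_val=0):
      """Fill hole pixels with the minimum valid 3x3 neighbor (illustrative)."""
      out = depth.copy()
      h, w = depth.shape
      ys, xs = np.where(depth == hole_val)
      for y, x in zip(ys, xs):
          y0, y1 = max(0, y - 1), min(h, y + 2)
          x0, x1 = max(0, x - 1), min(w, x + 2)
          neigh = depth[y0:y1, x0:x1]
          valid = neigh[neigh != hole_val]     # ignore other hole pixels
          if valid.size:
              out[y, x] = valid.min()
      return out

  d = np.array([[5, 5, 5],
                [5, 0, 9],
                [5, 5, 9]])
  filled = fill_holes_min(d)   # the hole takes the minimum neighbor, 5
  ```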

Spot Inspection System for Camera Target Lens using the Computer Aided Vision System (비젼을 이용한 카메라 렌즈 이물질 검사 시스템 개발)

  • 이일환;안우정;박희재;황두현;김왕도
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.04a
    • /
    • pp.271-275
    • /
    • 1996
  • In this paper, an automatic spot inspection system has been developed for camera target lenses using a computer-aided vision system. The developed system comprises a light source, magnifying optics, a vision camera, an XY robot, and a PC. An efficient algorithm for spot detection has been implemented, so spots down to a few micrometers in size can be effectively identified in real time. The developed system has been fully interfaced with the XY robot system and PLCs, so a practical spot inspection system has been implemented. The system has been applied to a practical camera manufacturing process and showed its efficiency.

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.8-17
    • /
    • 2011
  • Recently, fusion camera systems consisting of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (TOF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, including camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map placed in the image plane of the color image. However, the result of the pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. Therefore, in this paper, we present a depth map upsampling method robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map can be generated by a modified kernel regression method that excludes depth values with low confidence. Our proposed algorithm guarantees a high-quality result in the presence of camera calibration errors. Experimental comparison with other data fusion techniques shows the superiority of our proposed method.
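
  The confidence-gated kernel regression idea can be sketched as a Nadaraya-Watson estimate that discards low-confidence depth samples; the threshold, kernel width, and data layout below are illustrative assumptions, not the paper's exact formulation:

  ```python
  import numpy as np

  def confidence_weighted_regression(samples, query, sigma=1.0, conf_thresh=0.3):
      """Estimate depth at `query` = (x, y) from sparse samples
      [(x, y, depth, confidence), ...]: drop low-confidence samples,
      weight the rest with a Gaussian spatial kernel (all values illustrative)."""
      num = den = 0.0
      qx, qy = query
      for x, y, d, c in samples:
          if c < conf_thresh:            # exclude unreliable measurements
              continue
          w = np.exp(-((x - qx) ** 2 + (y - qy) ** 2) / (2 * sigma ** 2))
          num += w * d
          den += w
      return num / den if den > 0 else None

  # Two reliable samples at depth 10 and one bad outlier that gets excluded.
  samples = [(0, 0, 10.0, 0.9), (2, 0, 10.0, 0.9), (1, 0, 99.0, 0.1)]
  est = confidence_weighted_regression(samples, query=(1, 0))
  ```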

Camera Module for Vehicle Safety (차량 안전용 카메라 모듈)

  • Shin, Seong-Yoon;Cho, Seung-Pyo;Lee, Hyun-Chang;Shin, Kwang-Seong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.633-634
    • /
    • 2022
  • In this paper, we research and develop a camera that is fixed to the same view as a time-of-flight (TOF) sensor and can be mounted horizontally along the vehicle's direction of travel. To improve object recognition accuracy, the camera uses a 1,280×720 resolution, outputs video at 30 fps, and can be fitted with a wide-angle fisheye lens of 180° or more.

The Design of the Obstacle Avoidances System for Unmanned Vehicle Using a Depth Camera (깊이 카메라를 이용한 무인이동체의 장애물 회피 시스템 설계)

  • Kim, Min-Joon;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.224-226
    • /
    • 2016
  • With technical development and a rapid increase in private demand, the new market for unmanned vehicles, which combines the characteristics of 'unmanned automation' and 'vehicle', is growing rapidly. Even though pilot driving is currently allowed in some countries, no country has institutionalized the formal driving of self-driving cars. In the case of existing vehicles, safety incidents frequently happen due to malfunction of the rear sensor, blind spots of the rear camera, or drivers' carelessness. Once such flaws are complemented, the relevant regulations for the commercialization of self-driving cars and small drones could be relaxed. Unlike the ultrasonic and laser sensors used in existing vehicles, this paper attempts distance measurement using a depth sensor. A depth camera calculates distance data based on the TOF method, which measures the time difference between shining laser or infrared light onto an object or area and receiving the beam coming back. As this camera can obtain depth data at the pixel resolution of a CCD camera, it can be used to collect depth data in real time. This paper proposes to solve the problems mentioned above using real-time depth data and to design an obstacle avoidance system based on distance measurement.
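
  The TOF distance calculation described above reduces to d = c·t/2, since the measured time covers the round trip of the light pulse:

  ```python
  C = 299_792_458.0  # speed of light, m/s

  def tof_distance(round_trip_s):
      """Distance from a TOF round-trip time: the light travels out and back,
      so the one-way distance is c * t / 2."""
      return C * round_trip_s / 2.0

  # A 10 ns round trip corresponds to roughly 1.5 m.
  d = tof_distance(10e-9)
  ```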

Estimation of Body Weight Using Body Volume Determined from Three-Dimensional Images for Korean Cattle (한우의 3차원 영상에서 결정된 몸통 체적을 이용한 체중 추정)

  • Jang, Dong Hwa;Kim, Chulsoo;Kim, Yong Hyeon
    • Journal of Bio-Environment Control
    • /
    • v.30 no.4
    • /
    • pp.393-400
    • /
    • 2021
  • Body weight of livestock is a crucial indicator for assessing feed requirements and nutritional status. This study was performed to estimate the body weight of Korean cattle (Hanwoo) using body volume determined from three-dimensional (3-D) images. A TOF camera with a resolution of 640×480 pixels, a frame rate of 44 fps, and a field of view of 47°(H)×37°(V) was used to capture the 3-D images of Hanwoo. A grid image of the body was obtained through preprocessing steps such as separating the body from the background and removing outliers from the obtained 3-D image. The body volume was determined by numerical integration using the depth information of each grid cell. The coefficient of determination for a linear regression model of body weight on body volume for the calibration dataset was 0.8725. On the other hand, the coefficient of determination was 0.9083 for a multiple regression model estimating body weight, in which the age of Hanwoo was added to the body volume as an explanatory variable. The mean absolute percentage error and root mean square error of the multiple regression model for the validation dataset were 8.2% and 24.5 kg, respectively. Using body volume improved the performance of the regression model and reduced the effort required for weight estimation. From these results, it was concluded that the body volume determined from 3-D images of Hanwoo can be used as an effective variable for estimating body weight.
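
  The numerical integration of depth over the grid can be sketched as summing, per cell, the height of the body surface above the floor; the geometry (top-down camera, distances in cm) and all names here are illustrative assumptions:

  ```python
  import numpy as np

  def body_volume(depth_grid, cell_area_cm2, cam_to_floor_cm):
      """Approximate body volume by integrating surface height over grid cells.

      depth_grid holds camera-to-surface distances (cm) from a top view;
      subtracting from the camera-to-floor distance gives height above floor.
      """
      height = cam_to_floor_cm - depth_grid
      height[height < 0] = 0                       # clamp floor pixels
      return float(np.sum(height) * cell_area_cm2)  # cm^3

  # Toy case: camera 200 cm above the floor, body top 50 cm high over
  # four 4 cm^2 cells -> 4 * 4 * 50 = 800 cm^3.
  grid = np.full((2, 2), 150.0)
  vol = body_volume(grid, cell_area_cm2=4.0, cam_to_floor_cm=200.0)
  ```

  The volume (plus age) then feeds the regression model for body weight, as described in the abstract.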

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, the virtual view generation method using depth data is employed to support the advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to user at 3D video rendering, its accuracy is very important since it determines the quality of generated virtual view image. Many works are related to such depth enhancement exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at center and two high-resolution cameras at both sides. Since we need two depth data for both color cameras, we obtain two views' depth data from the center using the 3D warping technique. Holes in warped depth maps are filled by referring to the surrounded background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we used the joint bilateral filter on the warped depth data. Finally, using two color images and depth maps, we generated 10 additional intermediate images. To realize fast capturing system, we implemented the proposed system using multi-threading technique. Experimental results show that the proposed capturing system captured two viewpoints' color and depth videos in real-time and generated 10 additional views at 7 fps.