• Title/Summary/Keyword: Time-of-Flight camera


Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min;Ho, Yo-Sung
    • Journal of Broadcast Engineering, v.21 no.6, pp.957-964, 2016
  • Acquisition of reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information of objects in real time, their output is noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information. However, these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem using higher-order regularization for depth map upsampling. We reduce texture copying in the upsampled depth map by using an edge weighting term chosen according to the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
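
The edge-weighted variational idea behind this paper can be illustrated with a much-simplified sketch: a first-order, edge-weighted total-variation upsampler solved by plain gradient descent, rather than the second-order TGV solver the authors actually use. All names, parameters, and the assumption of a grayscale guide image at an integer multiple of the depth resolution are illustrative, not taken from the paper.

```python
import numpy as np

def edge_weights(guide, beta=10.0):
    """Per-pixel weights that become small across strong edges of the
    high-resolution guide image, so smoothing is relaxed only there."""
    gy, gx = np.gradient(guide.astype(np.float64))
    grad_mag = np.sqrt(gx**2 + gy**2)
    return np.exp(-beta * grad_mag / (grad_mag.max() + 1e-12))

def upsample_depth_tv(low_depth, guide, lam=0.1, iters=300, step=0.1):
    """Toy edge-weighted TV upsampling (first order, gradient descent) --
    an illustration of the idea, not the paper's TGV formulation."""
    h, w = guide.shape
    scale_y = h // low_depth.shape[0]
    scale_x = w // low_depth.shape[1]
    # Nearest-neighbour initialisation of the high-resolution depth map.
    u = np.kron(low_depth, np.ones((scale_y, scale_x))).astype(np.float64)
    assert u.shape == guide.shape, "guide must be an integer multiple of the depth size"
    data = u.copy()
    wgt = edge_weights(guide)
    eps = 1e-6
    for _ in range(iters):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # Divergence of the edge-weighted, normalised gradient field.
        div = np.gradient(wgt * gx / mag, axis=1) + np.gradient(wgt * gy / mag, axis=0)
        u += step * (div - lam * (u - data))   # smoothness term vs. data fidelity
    return u
```

Because the weights shrink only where the guide image has strong edges, smoothing is not relaxed at pure texture edges, which is the intuition behind reducing texture copying.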

Variation in Echolocation and Prey-capture Behavior of Rhinolophus ferrumequinum during Foraging Flight (관박쥐(Rhinolophus ferrumequinum)의 먹이포획 과정에 대한 행동 및 반향정위 변화)

  • Chung, Chul Un;Kim, Sung Chul;Jeon, Young Shin;Han, Sang Hoon
    • Journal of Environmental Science International, v.26 no.6, pp.779-788, 2017
  • In this study, we analyzed the changes in the echolocation and prey-capture behavior of the horseshoe bat Rhinolophus ferrumequinum from the search phase to the moment of capture. The experiment was conducted in an indoor free-flight room fitted with an ultra-high-speed camera. We found that the bats searched for food while hanging from a structure and that capture was carried out using the flight membrane. In addition, we confirmed that the mouth and uropatagium were used continuously in tandem during capture. Furthermore, using the Constant Frequency (CF) component, we confirmed that the prey-catching method reflected the wing morphology and echolocation pattern of R. ferrumequinum. The echolocation analysis revealed that pulse duration, pulse interval, peak frequency, start-FM bandwidth, and CF duration decreased as the search phase approached the terminal phase. Detailed analysis of the echolocation pulses showed that the end-FM bandwidth, which increased as the moment of capture approached, was closely related to accurately determining the location of an insect. At the final moment of prey capture, passive listening, in which emission of echolocation calls stopped, was identified; this was interpreted as a way of minimizing interference between the echoes of the bat's own echolocation calls and the sound waves emitted by the prey.

Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering, v.16 no.7, pp.1477-1486, 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth image synthesis. We acquire a reliable depth image using a ToF depth camera and extract the parameters of the reference-view cameras. Once the position of the virtual view-point camera is defined, we select the optimal reference-view cameras by considering their positions and their distances from the virtual view-point camera. Setting a reference-view camera positioned opposite the primary reference-view camera as the sub reference-view, we generate the depth image of the virtual view-point. Occlusion regions in the virtual view-point depth image are then compensated using the depth image of the sub reference-view, and any remaining holes are filled with the minimum value of their neighborhood. Finally, we generate the CGH from the resulting virtual view-point depth image. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
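
The final hole-compensation step described above (filling remaining holes in the warped virtual-view depth map with the minimum value of their neighborhood) can be sketched as follows. The function name, the 3x3 neighborhood, and the multi-pass loop are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fill_holes_with_neighborhood_min(depth, hole_value=0, max_passes=50):
    """Fill hole pixels with the minimum valid depth among their 8 neighbours.
    Taking the minimum favours background depth, which is the usual choice for
    disocclusion holes that appear behind foreground objects."""
    d = depth.astype(np.float64).copy()
    h, w = d.shape
    for _ in range(max_passes):
        holes = np.argwhere(d == hole_value)
        if holes.size == 0:
            break                                 # nothing left to fill
        for y, x in holes:
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            patch = d[y0:y1, x0:x1]
            valid = patch[patch != hole_value]
            if valid.size:
                d[y, x] = valid.min()             # propagate the deepest neighbour
    return d
```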

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication, v.7 no.1, pp.15-19, 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, depth keying, etc. There are various methods of acquiring depth information, such as using a stereo camera, a time-of-flight (ToF) depth camera, 3D modeling software, a 3D scanner, or structured light as in Microsoft's Kinect. In particular, a ToF depth camera measures distance using infrared light, and the ToF sensor depends on the light sensitivity of the image sensor (CCD/CMOS). Existing image sensors therefore have to obtain the infrared image by bundling several pixels together, which reduces the resolution of the image. This paper proposes a method of acquiring a high-resolution image through gradual area movement while acquiring low-resolution images through the pixel bundling method. In this way, image information with improved illumination sensitivity (lux) and resolution can be obtained without increasing the performance of the image sensor, whereas simple pixel bundling improves low-light sensitivity only at the cost of image resolution.
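
A toy simulation of the gradual pixel bundling idea, binning sensor pixels for sensitivity while shifting the binning window step by step so that a higher-resolution image can be reassembled from several low-resolution captures, might look like the sketch below. The 2x2 binning, the shift-and-add reconstruction, and all names are assumptions for illustration, not the authors' hardware implementation.

```python
import numpy as np

def bin_with_offset(frame, bin_size=2, dy=0, dx=0):
    """Sum bin_size x bin_size blocks of the sensor frame, with the binning
    grid shifted by (dy, dx) pixels -- one 'gradual' step of the bundling."""
    f = np.roll(frame, shift=(-dy, -dx), axis=(0, 1)).astype(np.float64)
    h, w = f.shape
    h, w = h - h % bin_size, w - w % bin_size
    return f[:h, :w].reshape(h // bin_size, bin_size,
                             w // bin_size, bin_size).sum(axis=(1, 3))

def gradual_bundling(frames, bin_size=2):
    """Accumulate a sequence of shifted, binned low-resolution captures back
    onto the full-resolution grid (simple shift-and-add reconstruction)."""
    h, w = frames[0].shape
    accum = np.zeros((h, w))
    count = np.zeros((h, w))
    offsets = [(dy, dx) for dy in range(bin_size) for dx in range(bin_size)]
    for frame, (dy, dx) in zip(frames, offsets):
        low = bin_with_offset(frame, bin_size, dy, dx)
        up = np.kron(low, np.ones((bin_size, bin_size))) / bin_size**2
        hh = min(up.shape[0], h - dy)
        ww = min(up.shape[1], w - dx)
        accum[dy:dy + hh, dx:dx + ww] += up[:hh, :ww]
        count[dy:dy + hh, dx:dx + ww] += 1
    return accum / np.maximum(count, 1)
```

Each binned capture keeps the sensitivity benefit of bundling, while the sub-bin offsets between captures are what allow resolution to be partially recovered.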

Mixed Reality Image Generation Method for HMD-based Flight Simulator (HMD기반 비행 시뮬레이터를 위한 혼합현실 영상 생성 기법)

  • Joo Ha Hyun;Mun Hye Kang;Yong Ho Moon
    • Journal of Aerospace System Engineering, v.17 no.1, pp.59-67, 2023
  • Recently, interest in flight simulators based on HMDs and mixed reality has been increasing. However, such simulators have limitations in providing varied interaction and a sense of presence to a pilot wearing an HMD. To overcome these limitations, a mixed reality image corresponding to the interaction in the actual cockpit environment must be generated in real time and provided to the pilot. For this purpose, we propose a mixed reality image generation method in which the cockpit area, including the pilot's body, is extracted from the real image obtained from the camera attached to the HMD and then composited with a virtual image to generate a high-resolution mixed reality image. Simulation results show that the proposed method can provide mixed reality images to the HMD at a 30 Hz frame rate with 99% image composition accuracy.
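
The compositing step described above, keeping the real cockpit region (including the pilot's body) from the HMD camera and filling the rest with the rendered virtual scene, amounts to a per-pixel masked blend. A minimal sketch is shown below; it assumes the cockpit mask has already been produced by whatever extraction method the system uses, and is not the authors' pipeline.

```python
import numpy as np

def compose_mixed_reality(real_frame, virtual_frame, cockpit_mask):
    """Per-pixel composition: cockpit pixels come from the HMD camera image,
    everything else from the rendered virtual out-the-window scene.

    real_frame, virtual_frame : HxWx3 uint8 images of the same size
    cockpit_mask              : HxW boolean mask, True where the real cockpit
                                (and the pilot's body) should be kept
    """
    mask = cockpit_mask.astype(bool)[..., None]   # HxWx1 so it broadcasts over RGB
    return np.where(mask, real_frame, virtual_frame)
```

To sustain a 30 Hz frame rate, a real system would run this blend on the GPU; NumPy is used here only to make the operation explicit.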

Geometric Modelling and Coordinate Transformation of Satellite-Based Linear Pushbroom-Type CCD Camera Images (선형 CCD카메라 영상의 기하학적 모델 수립 및 좌표 변환)

  • 신동석;이영란
    • Korean Journal of Remote Sensing, v.13 no.2, pp.85-98, 1997
  • A geometric model of pushbroom-type linear CCD camera images is proposed in this paper. At present, this type of camera is used for obtaining almost all kinds of high-resolution optical images from satellites. The proposed geometric model includes not only a forward transformation but also an inverse transformation. An inverse transformation function cannot, however, be derived analytically in closed form because the focal point of an image varies with time; in this paper, therefore, an iterative algorithm in which the focal point is converged to a given pixel position is proposed. Although the proposed model can be applied to any pushbroom-type linear CCD camera image, the geometric model of the high-resolution multi-spectral camera on board KITSAT-3 is used in this paper as an example. The flight model of KITSAT-3 is currently in development and is due to be launched in late 1998.
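
The iterative inverse transformation described above, finding the image line and sample of a given ground point when no closed-form inverse exists because the sensor position changes with every scan line, can be sketched generically as below. The `forward_project` callback, the starting guess, and the convergence tolerance are placeholders, not the KITSAT-3 model itself.

```python
def invert_pushbroom(ground_point, forward_project, n_lines, tol=0.01, max_iter=50):
    """Iteratively search for the scan line whose forward projection of
    ground_point lands on that same line (i.e. on the across-track CCD array).

    forward_project(ground_point, line) -> (line_coord, sample_coord):
        the forward geometric model, evaluated with the satellite position
        and attitude at the acquisition time of the given scan line.
    """
    line = n_lines / 2.0                      # start from the middle of the image
    for _ in range(max_iter):
        proj_line, proj_sample = forward_project(ground_point, line)
        error = proj_line - line              # how far the projection misses the assumed line
        if abs(error) < tol:
            return proj_line, proj_sample     # converged: (line, sample) image coordinates
        line += error                         # move the assumed line toward the projection
    raise RuntimeError("inverse transformation did not converge")
```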

A Kinematical Analysis of the Kenmotsu on the Parallel Bars (평행봉 Kenmotsu 동작의 운동학적 분석)

  • Kong, Tae-Ung;Kim, Young-Sun;Yoon, Chang-Sun
    • Korean Journal of Applied Biomechanics, v.15 no.3, pp.61-70, 2005
  • The purpose of this study was to investigate the kinematic variables of the Kenmotsu motion on the parallel bars. For this study, kinematic data were collected with video cameras and analyzed by three-dimensional kinematic analysis of four male national gymnasts who participated in the 28th Olympic Games in Athens in 2004. Coordinate data were smoothed using a fourth-order Butterworth low-pass digital filter with a cutoff frequency of 6 Hz. The conclusions were as follows. 1. In P2, because the constrained swing movement made the rising-back movement difficult, the Reg. movements were performed at a low position after the Air phase. 2. In E5, for the sake of a stable handstand and applied techniques such as the Belle (E-value) and the Belle Piked (super E-value), high vertical velocity in E2, horizontal velocity in E3, and vertical velocity in E4 were necessary. 3. In E4, it appeared that a large shoulder-joint angle and flexion and extension of the hip joint were necessary in the Air phase for a flexible body movement during the vertical up-flight, and that a long flight time and large vertical displacement made the Reg. movements stable at a high position.
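
The smoothing step mentioned in the abstract, a fourth-order Butterworth low-pass filter with a 6 Hz cutoff applied to coordinate data, is a standard operation. A minimal SciPy sketch is given below; the 60 Hz sampling rate is an assumption typical of video-based motion capture, since the study's actual frame rate is not stated here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_coordinates(coords, fs=60.0, cutoff=6.0, order=4):
    """Zero-phase Butterworth low-pass filtering of marker coordinates,
    applied column by column.

    coords : (n_frames, n_channels) array of raw coordinate data
    fs     : sampling (camera frame) rate in Hz -- assumed value
    cutoff : cutoff frequency in Hz (6 Hz, as stated in the abstract)
    """
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    # filtfilt runs the filter forward and backward, so there is no phase lag;
    # note this doubles the effective order, which is why some biomechanics
    # studies design a second-order filter and report the result as fourth order.
    return filtfilt(b, a, np.asarray(coords, dtype=float), axis=0)
```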

Radar Sensor System Concept for Collision Avoidance of Smart UAV (무인기 충돌방지를 위한 레이다 센서 시스템 설계)

  • Kwag, Young-Kil;Kang, Jung-Wan
    • Proceedings of the Korea Electromagnetic Engineering Society Conference, 2003.11a, pp.203-207, 2003
  • Due to the inherent nature of low-flying UAVs, obstacle detection along the flight path is a fundamental requirement for avoiding collisions with obstacles as well as with manned aircraft. In this paper, preliminary sensor requirements of an obstacle detection system for a UAV in low-altitude flight are analyzed, and an automated obstacle detection sensor system is proposed after assessing both passive and active sensors such as EO cameras, IR sensors, laser radar, and microwave and millimeter-wave radar. In addition, TCAS (Traffic Alert and Collision Avoidance System) is reviewed with respect to collision avoidance in manned aircraft. It is suggested that a small-sized radar sensor is the best candidate for the smart UAV because an active radar can provide real-time information on range and range rate in all-weather conditions. However, important constraints on the small UAV must be resolved in terms of the mass, volume, and power allocated to the payload in the UAV system design requirements.
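
The advantage cited for an active radar sensor, direct measurement of range and range rate, follows from the round-trip delay and the Doppler shift of the return. The small calculation below is a generic illustration of those two relations, not material from the paper; the 94 GHz carrier and the example numbers are assumptions.

```python
C = 3.0e8  # speed of light, m/s

def range_from_delay(round_trip_delay_s):
    """Target range from the measured round-trip time of the radar pulse."""
    return C * round_trip_delay_s / 2.0

def range_rate_from_doppler(doppler_shift_hz, carrier_freq_hz):
    """Range rate from the measured Doppler shift.
    A positive Doppler shift means the target is closing, so range is decreasing."""
    return -doppler_shift_hz * C / (2.0 * carrier_freq_hz)

# Example: an assumed 94 GHz millimetre-wave radar, 2 us round trip, +10 kHz Doppler
print(range_from_delay(2e-6))                # ~300 m to the obstacle
print(range_rate_from_doppler(10e3, 94e9))   # ~ -16 m/s (closing at about 16 m/s)
```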

Development of a Forest Fire Tracking and GIS Mapping Base on Live Streaming (실시간 영상 기반 산불 추적 및 매핑기법 개발)

  • Cho, In-Je;Kim, Gyou-Beom;Park, Beom-Sun
    • Journal of Convergence for Information Technology, v.10 no.10, pp.123-127, 2020
  • In order to obtain overall fire-line information for medium and large forest fires at night, a ground control system was developed that determines whether a forest fire has occurred from real-time video and calculates the location of the detected fire on the map using the drone's position, the video camera's angle information, and the altitude information, thereby reducing the time otherwise required to match the recorded video after the mission is completed. To verify the reliability of the developed function, the error distance of the camera's aiming position was measured at each flight altitude, and location information within a reliable range was displayed on the map. As the function developed in this paper allows real-time identification of multiple forest fire locations, it is expected that overall fire-line information for establishing forest fire suppression measures can be obtained more quickly.
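
The geolocation step described above, computing where the camera's line of sight meets the ground from the drone's position, the camera's angle information, and the altitude, can be sketched under a flat-terrain assumption. Everything here (function name, the flat-ground and small-offset simplifications) is illustrative, not the authors' implementation.

```python
import math

def locate_fire(drone_lat, drone_lon, altitude_m, heading_deg, gimbal_pitch_deg):
    """Estimate the ground position the camera is aimed at, assuming flat
    terrain at the drone's takeoff level and a camera pointed
    gimbal_pitch_deg below the horizon along the aircraft heading."""
    pitch = math.radians(gimbal_pitch_deg)
    if pitch <= 0:
        raise ValueError("camera must point below the horizon")
    ground_dist = altitude_m / math.tan(pitch)   # horizontal distance to the aim point
    heading = math.radians(heading_deg)
    north = ground_dist * math.cos(heading)
    east = ground_dist * math.sin(heading)
    # Small-offset conversion from metres to degrees of latitude/longitude.
    dlat = north / 111_320.0
    dlon = east / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon
```

The measured error distance at each flight altitude, as described in the abstract, is what would bound how far such an estimate can drift from the true fire location.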

A Fast Flight-path Generation Algorithm for Virtual Colonoscopy System (가상 대장 내시경 시스템을 위한 고속 경로 생성 알고리즘)

  • 강동구;이재연;나종범
    • Journal of Biomedical Engineering Research, v.24 no.2, pp.77-82, 2003
  • Virtual colonoscopy is a non-invasive computerized procedure to detect polyps by examining the colon from a CT data set. To fly through the inside of colons. the extraction of a suitable flight-path is necessary to Provide the viewpoint and view direction of a virtual camera. However. manual path extraction by Picking Points is a very time-consuming and difficult task due 1,c, the long and complex shape of colon. Also, existing automatic methods are computationally complex. and tend to generate an improper and/or discontinuous path for complicated regions. In this paper, we propose a fast flight-path generation algorithm using the distance and order maps. The order map Provides all Possible directions of a path. The distance map assigns the Euclidean distance value from each inside voxel to the nearest background voxel. By jointly using these two maps. we can obtain a proper centerline regardless of thickness and curvature of an object. Also, we Propose a simple smoothing technique that guarantees not to collide with the surface of an object. The phantom and real colon data are used for experiments. Experimental results show that for a set of human colon data, the proposed algorithm can provide a smoothened and connected flight-path within a minute on an 800MHz PC. And it is proved that the obtained flight-Path provides successive volume-rendered images satisfactory for virtual navigation.