• Title/Summary/Keyword: Time-of-Flight camera


Real-Time Shooting Area Analysis Algorithm of UAV Considering Three-Dimensional Topography (입체적 지형을 고려한 무인항공기의 실시간 촬영 영역 분석 알고리즘)

  • Park, Woo-Min;Choi, Jeong-Hun;Choi, Seong-Geun;Hwang, Nam-Du;Kim, Hwan-Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.12
    • /
    • pp.1196-1206
    • /
    • 2013
  • In this paper, we propose an algorithm that, based on the navigation data of a UAV equipped with a PTZ camera and on 3D topographic information, shows the UAV's geographical shooting location in real time and automatically calculates the area of the region being imaged. We also present a method that automatically estimates whether the UAV can image a specific area. When a UAV attempts to image a specific area, whether a valid image can be obtained depends not only on the UAV's location but also on the 3D topography. As a result, the Ground Control Center receives real-time information on whether the UAV can image the required terrain, enabling accurate remote flight control in real time. Furthermore, the algorithm and the shooting-feasibility estimation method can be applied to pre-flight simulation and flight-route planning.
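Whether a terrain point can be validly imaged is, at its core, a line-of-sight test against the 3D terrain. A minimal sketch of such a test follows; the terrain model, sample spacing, and coordinates are hypothetical illustrations, not the paper's implementation:

```python
# Line-of-sight check over 3D terrain: sample points along the ray from the
# UAV to the target and reject it if the ground rises above the sight line.

def los_clear(uav, target, terrain_height, samples=100):
    """Return True if no terrain blocks the ray from uav to target.

    uav, target: (x, y, z) tuples; terrain_height(x, y) -> ground elevation.
    """
    ux, uy, uz = uav
    tx, ty, tz = target
    for i in range(1, samples):
        t = i / samples
        x = ux + t * (tx - ux)
        y = uy + t * (ty - uy)
        z = uz + t * (tz - uz)
        if terrain_height(x, y) >= z:
            return False  # ground intersects the sight line
    return True

# Flat terrain at elevation 0: a ground target is visible.
flat = lambda x, y: 0.0
print(los_clear((0, 0, 100), (500, 0, 1), flat))   # True
# A 60 m ridge halfway along the path blocks a low sight line.
ridge = lambda x, y: 60.0 if 200 < x < 300 else 0.0
print(los_clear((0, 0, 50), (500, 0, 1), ridge))   # False
```

Summing such tests over a grid of terrain cells inside the camera footprint yields the visible shooting area.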

Experimental Framework for Controller Design of a Rotorcraft Unmanned Aerial Vehicle Using Multi-Camera System

  • Oh, Hyon-Dong;Won, Dae-Yeon;Huh, Sung-Sik;Shim, David Hyun-Chul;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.2
    • /
    • pp.69-79
    • /
    • 2010
  • This paper describes an experimental framework for the control system design and validation of a rotorcraft unmanned aerial vehicle (UAV). Our approach follows the general procedure of nonlinear modeling, linear controller design, nonlinear simulation, and flight test, but uses an indoor multi-camera system, which provides full 6-degree-of-freedom (DOF) navigation information with high accuracy, to overcome the limitations of outdoor flight experiments. In addition, a 3-DOF flying mill is used to validate attitude control performance, considering the characteristics of the multi-rotor type rotorcraft UAV. The framework is applied to the mathematical modeling and control system design of a quad-rotor UAV, selected as the test-bed vehicle, and the controller design using the classical proportional-integral-derivative (PID) control method is explained. The experimental results show that the proposed approach is a successful tool for developing controllers for new rotorcraft UAVs with reduced cost and time.
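The classical PID loop mentioned above can be sketched as follows; the gains, time step, and toy first-order plant are illustrative stand-ins, not the paper's tuned controller or quad-rotor model:

```python
# Minimal discrete PID attitude-control step driving a toy roll-rate plant.
# All numbers here are assumptions chosen for illustration only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order roll model toward a 10-degree setpoint for 20 s.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
roll = 0.0
for _ in range(2000):
    u = pid.step(10.0, roll)
    roll += u * 0.01  # toy plant: roll rate proportional to command
print(round(roll, 2))
```

In the paper's setting the measured attitude would come from the multi-camera system rather than a simulated plant.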

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • v.2 no.1
    • /
    • pp.13-16
    • /
    • 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range-imaging camera system that resolves distance based on the known speed of light, measuring the round-trip time of a light signal between the camera and the subject for each point of the image. This is among the safest ways of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using Marching Cubes and the SDF. Results: The ground truth was determined by measuring each of 10 participants three times, and the corresponding region of the reconstructed 3D model was measured as well. The actual head circumference and the reconstructed model were measured according to the layer 3 standard, and measurement errors were calculated. We obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method was able to complete the 3D model while minimizing errors, and the model is very effective for quantitative and objective evaluation. However, because measurements were made according to the layer 3 standard, the measurement range lacks some of the 3D information needed to manufacture protective helmets. The measurement range will therefore need to be widened, by scanning the entire head circumference, to enable production of more precise and effective protective helmets in the future.
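The time-of-flight principle described in the abstract reduces to a simple relation: range is half the round trip of light. A minimal illustration (the timing value is invented for the example):

```python
# Time-of-flight ranging: distance = c * t / 2, because the measured time
# covers the pulse's travel out to the subject and back to the sensor.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Range in meters from a measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A target at 1 m returns the pulse in about 6.67 nanoseconds.
t = 2.0 / C             # round-trip time for a 1 m range
print(tof_distance(t))  # 1.0
```

The nanosecond scale of these times is why practical ToF cameras typically measure phase shift of modulated light rather than timing pulses directly.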

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.239-249
    • /
    • 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is not directly visible to the user during 3D video rendering, its accuracy is very important because it determines the quality of the generated virtual view images. Much related work enhances such depth data by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system that uses one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
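The joint bilateral filtering step, which aligns warped depth edges with color edges, can be sketched as below. This is a generic pure-Python cross bilateral filter with illustrative kernel parameters, not the paper's implementation:

```python
# Joint (cross) bilateral filter: smooth the depth map, but take the range
# weight from a color guide image so depth edges snap to color edges.
import math

def joint_bilateral(depth, guide, radius=1, sigma_s=1.0, sigma_r=10.0):
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial weight from pixel distance,
                        # range weight from the COLOR guide image
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((guide[ny][nx] - guide[y][x]) ** 2)
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ny][nx]
                        wsum += ws * wr
            out[y][x] = acc / wsum
    return out

# The color edge sits between columns 1 and 2; the depth edge lags at 2/3.
guide = [[0, 0, 255, 255]] * 4
depth = [[10, 10, 10, 50]] * 4
# Depth at column 2 is averaged only with color-consistent neighbors,
# pulling the depth edge toward the color edge position.
print(joint_bilateral(depth, guide)[0])
```

A real pipeline would use an optimized implementation (e.g. a separable or GPU variant) rather than this quadruple loop.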

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.563-572
    • /
    • 2017
  • The purpose of this study is to investigate the effect of flight height, flight speed, camera shutter exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also aims to evaluate how easily various types of aerial targets can be recognized. For this purpose, we measured visual resolution using a 7952*5304-pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. With autofocusing, the measured visual resolution was 1.1 to 1.6 times the theoretical GSD; without it, 1.5 to 3.5 times. Next, images were captured at 80 m above ground at a constant flight speed of 5 m/s while halving the exposure time from 1/60 s to 1/2000 s. Assuming that blur is allowed within one pixel, the visual resolution was 1.3 to 1.5 times the theoretical GSD when the exposure time was kept within the longest allowable exposure time, and 1.4 to 3.0 times when it was not. When aerial targets printed on A4 paper are imaged from within 80 m above ground, coded targets can be recognized automatically by commercial software, and both general and coded targets of various types can be recognized manually with ease.
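The abstract's setup can be sanity-checked with the standard GSD relation, GSD = pixel pitch × height / focal length, and the one-pixel blur constraint, blur = speed × exposure ≤ GSD. The ~4.5 µm pixel pitch is an assumption inferred from a 7952-pixel-wide full-frame sensor, not a value stated in the paper:

```python
# Back-of-envelope GSD and longest-allowable-exposure calculation for the
# abstract's camera setup. PIXEL_PITCH is an assumed value.

PIXEL_PITCH = 4.5e-6   # m, assumed for a 7952-px-wide 35 mm sensor
FOCAL = 0.055          # m, 55 mm prime lens

def gsd(height_m):
    """Theoretical ground sample distance in meters."""
    return PIXEL_PITCH * height_m / FOCAL

def max_exposure(height_m, speed_mps):
    # forward-motion blur = speed * exposure; keep it within one GSD
    return gsd(height_m) / speed_mps

print(round(gsd(120) * 100, 2))          # 0.98 cm: subcentimeter at 120 m
print(round(1 / max_exposure(80, 5.0)))  # 764, i.e. ~1/764 s allowed
```

At 80 m and 5 m/s this puts the longest safe exposure near 1/1000 s, consistent with the abstract's finding that only the shorter tested exposures keep blur within a pixel.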

Coordinates Tracking Algorithm Design (표적 좌표지향 알고리즘 설계)

  • 박주광
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.5 no.3
    • /
    • pp.62-76
    • /
    • 2002
  • This paper describes the design of a coordinates-tracking algorithm for an EOTS and its error analysis. An EOTS stabilizes image sensors such as a FLIR, a CCD TV camera, and an LRF/LD, tracks targets automatically, and provides navigation capability for vehicles. The coordinates-tracking algorithm calculates the azimuth and elevation angles of the EOTS using the vehicle's inertial navigation system and attitude sensors, so that the line of sight (LOS) points at the target coordinates generated by a radar or an operator. In the error analysis, the unexpected behavior of the EOTS caused by time delay and deadbeat in the digital signals of the vehicle equipment is anticipated, and countermeasures are suggested. The algorithm is verified and the error analysis confirmed through simulations. Applying this algorithm to an EOTS will improve operational capability by reducing the time required to find a target, especially supporting flight at night and in poor weather conditions.
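The core geometric step, pointing the LOS at given target coordinates, amounts to computing azimuth and elevation from the vehicle to the target. A minimal sketch follows; the local-frame convention is an assumption, and the paper's algorithm additionally compensates for vehicle attitude from the INS:

```python
# Azimuth/elevation of the line of sight from vehicle to target in a local
# frame (x north, y east, z up -- an assumed convention for illustration).
import math

def az_el(vehicle, target):
    dx = target[0] - vehicle[0]  # north offset
    dy = target[1] - vehicle[1]  # east offset
    dz = target[2] - vehicle[2]  # up offset
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, elevation

# Target 1000 m north and 1000 m east, 100 m below the vehicle.
az, el = az_el((0, 0, 100), (1000, 1000, 0))
print(round(az, 1), round(el, 1))  # 45.0 -4.0
```

These angles would then be transformed through the vehicle's attitude (roll, pitch, yaw) to gimbal commands, which is where the paper's delay and deadbeat errors enter.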

Adaptive Depth Noise Removal for Time-of-Flight Camera using Depth Noise Modeling (Time-of-Flight 카메라의 잡음 모델링을 통한 적응적 거리 잡음 제거 방법)

  • Kim, JoongSik;Baek, Yeul-Min;Kim, Whoi-Yul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2013.06a
    • /
    • pp.325-328
    • /
    • 2013
  • In this paper, we propose an adaptive SUSAN (Smallest Univalue Segment Assimilating Nucleus) filter based on a depth-noise model parameterized by distance and amplitude, as a method for removing the depth noise of ToF (Time-of-Flight) cameras. Existing methods for removing ToF depth noise either ignore the characteristics of the noise or consider only its amplitude-dependent component. In practice, however, the depth noise contained in a ToF range image varies with both amplitude and distance, so a noise model that accounts for both is required. The proposed method therefore first models the depth-noise characteristics of the ToF camera as a function of distance and amplitude, and then removes noise from the ToF range image using an adaptive SUSAN filter whose parameters are determined by the resulting noise model. Experimental results show that, compared with existing methods, the proposed method removes the noise of the range image more effectively while preserving detail well.
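The idea of a SUSAN-style filter whose parameters adapt to a modeled noise level can be sketched as follows; the noise model here is a hypothetical placeholder, not the paper's fitted distance-amplitude model:

```python
# Adaptive SUSAN-style depth filtering: the similarity threshold at each
# pixel scales with a modeled noise level sigma(distance, amplitude), so
# flat regions are smoothed while genuine depth edges are kept.
import math

def noise_sigma(depth_m, amplitude):
    # assumed model: noise grows with range, shrinks with signal amplitude
    return 0.01 * depth_m / max(amplitude, 1e-6)

def susan_filter(depth, amplitude, radius=1):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            sigma = noise_sigma(depth[y][x], amplitude[y][x])
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # SUSAN weight: depths within the noise band count,
                        # outliers across an edge are ignored
                        d = depth[ny][nx] - depth[y][x]
                        wgt = math.exp(-(d / (2 * sigma)) ** 2)
                        acc += wgt * depth[ny][nx]
                        wsum += wgt
            out[y][x] = acc / wsum
    return out

depth = [[1.0, 1.0, 5.0],
         [1.02, 0.98, 5.0],
         [1.0, 1.0, 5.0]]
amplitude = [[1.0] * 3 for _ in range(3)]
smoothed = susan_filter(depth, amplitude)
print(smoothed[1][1])  # close to 1.0: flat-region noise suppressed
print(smoothed[0][2])  # 5.0: the depth edge is preserved
```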


Hybrid Camera System with a TOF and DSLR Cameras (TOF 깊이 카메라와 DSLR을 이용한 복합형 카메라 시스템 구성 방법)

  • Kim, Soohyeon;Kim, Jae-In;Kim, Taejung
    • Journal of Broadcast Engineering
    • /
    • v.19 no.4
    • /
    • pp.533-546
    • /
    • 2014
  • This paper presents a method for constructing a hybrid (color and depth) camera system using photogrammetric technology. A TOF depth camera is efficient because it measures the range to objects in real time. However, TOF depth cameras suffer from problems such as low resolution and noise that depends on surface conditions. It is therefore essential not only to correct depth noise and distortion but also to construct a hybrid camera system that provides a high-resolution texture map for generating a 3D model from the depth camera. We estimated the geometry of the hybrid camera using a traditional relative-orientation algorithm and performed texture mapping using backward mapping based on the collinearity condition. The proposed method was compared with another algorithm in terms of model accuracy and texture-mapping performance. The results showed that the proposed method produced higher model accuracy.

Development of Low Cost Flight Test Equipment by Using Bang-pai Kite (방패연을 이용한 저비용 비행시험장치 개발)

  • Park, Jongseo;Kim, Bonggyun;Lee, Sangchul
    • Journal of the Korean Society for Aviation and Aeronautics
    • /
    • v.25 no.3
    • /
    • pp.68-73
    • /
    • 2017
  • In this study, we design low-cost test equipment for real-time image observation and a transmission/reception distance test using a 1 m by 1.5 m bang-pai kite. For image observation, two servo motors give the camera two-axis attitude control. The image-observation equipment is hung on the kite's string, and a test is performed to observe the real-time image. The transmission and reception distance test of the wireless RF transceiver module is conducted on the ground and in the air.

Development of A Prototype Device to Capture Day/Night Cloud Images based on Whole-Sky Camera Using the Illumination Data (정밀조도정보를 이용한 전천카메라 기반의 주·야간 구름영상촬영용 원형장치 개발)

  • Lee, Jaewon;Park, Inchun;cho, Jungho;Ki, GyunDo;Kim, Young Chul
    • Atmosphere
    • /
    • v.28 no.3
    • /
    • pp.317-324
    • /
    • 2018
  • In this study, we describe a ground-based whole-sky camera (WSC) developed to continuously capture day and night cloud images using illumination data from a precision Lightmeter with high temporal resolution. The WSC combines a precision Lightmeter, developed during the IYA (International Year of Astronomy) for analyzing artificial light pollution at night, with a DSLR camera equipped with a fish-eye lens, as widely used in observational astronomy. The WSC adjusts the shutter speed and ISO of the camera according to the illumination data so as to capture cloud images stably. A Raspberry Pi automatically controls the process of taking cloud and sky images every minute, 24 hours a day, under the various conditions indicated by the Lightmeter data; it also post-processes and stores the cloud images and uploads the data to a web page in real time. Finally, by analyzing the cloud images captured with the developed device, we assess the technical feasibility of observing cloud distribution (cover, type, height) quantitatively and objectively with the optical system.
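The illumination-driven exposure control described above amounts to mapping an illuminance reading to a (shutter speed, ISO) pair. A sketch with invented breakpoints (the abstract does not give the device's actual exposure table):

```python
# Pick DSLR exposure settings from a Lightmeter illuminance reading.
# The lux thresholds and settings below are illustrative assumptions.

def exposure_settings(illuminance_lux):
    """Return (shutter_seconds, iso) for a given sky illuminance."""
    if illuminance_lux > 1000:    # daylight
        return 1 / 1000, 100
    elif illuminance_lux > 10:    # twilight
        return 1 / 30, 400
    elif illuminance_lux > 0.1:   # moonlit night
        return 5, 1600
    else:                         # dark, moonless night
        return 30, 3200

print(exposure_settings(50000))  # (0.001, 100)
print(exposure_settings(0.01))   # (30, 3200)
```

In the actual device this selection would run once per minute under Raspberry Pi control, before each capture.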