• Title/Summary/Keyword: Time-of-Flight camera


Development of Intelligent Multiple Camera System for High-Speed Impact Experiment (고속충돌 시험용 지능형 다중 카메라 시스템 개발)

  • Chung, Dong Teak;Park, Chi Young;Jin, Doo Han;Kim, Tae Yeon;Lee, Joo Yeon;Rhee, Ihnseok
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.9 / pp.1093-1098 / 2013
  • Single-crystal sapphire is used as a transparent bulletproof window material; however, few studies have investigated its dynamic behavior and fracture properties under high-speed impact. High-speed, high-resolution sequential images are required to study the interaction of a bullet with brittle ceramic materials. In this study, a device was developed to capture the sequence of high-speed impact/penetration phenomena. The system consists of a speed-measurement device, a microprocessor-based camera controller, and multiple CCD cameras. Using a linear array sensor, the speed-measurement device can measure a small (diameter: 1~2 mm) and fast (speed: up to Mach 3) bullet. Once a bullet is launched, it passes through the speed-measurement device, where its passage time and speed are recorded; the camera controller then computes the exact time of arrival at the target during flight and sends trigger signals to the cameras and flashes with specific delays to capture the impact images sequentially. Without this estimate of the time of arrival, it is almost impossible to capture high-speed images. Using the new system, we were able to capture high-speed images with precise timing.
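The controller's arrival-time computation reduces to constant-velocity kinematics: divide the gate-to-target distance by the measured speed, then stagger one trigger per camera. A minimal sketch (function name, distances, and camera count are illustrative, not from the paper):

```python
def camera_trigger_delays(speed_mps, gate_to_target_m, n_cameras, interframe_s):
    """Estimate the bullet's time of arrival at the target from the measured
    speed, then stagger one trigger delay per camera so the cameras capture
    the impact sequence. All parameter values are illustrative."""
    if speed_mps <= 0:
        raise ValueError("measured speed must be positive")
    time_of_arrival = gate_to_target_m / speed_mps  # constant-velocity assumption
    return [time_of_arrival + i * interframe_s for i in range(n_cameras)]

# e.g. a Mach 3 (~1029 m/s) bullet, 2 m from the speed gate to the target,
# four cameras fired 20 microseconds apart
delays = camera_trigger_delays(1029.0, 2.0, 4, 20e-6)
```

The flash delays would be derived the same way, offset by each flash's own latency.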

The Biomechanical Analysis of a One-Legged Jump in Traditional Korean Dance According to Breathing Method (호흡 방법에 따른 한국무용 외발뛰기 동작의 운동역학적 분석)

  • An, Ju-Yeun;Yi, Kyung-Ock
    • Korean Journal of Applied Biomechanics / v.25 no.2 / pp.199-206 / 2015
  • Objective: The purpose of this study was to conduct a biomechanical analysis of a one-legged jump in traditional Korean dance (Wae Bal Ddwigi) according to breathing method. Method: Participants were 10 dancers with at least 10 years of experience in traditional Korean dance. The independent variable was the breathing method (two types); dependent variables were ground reaction force and lower-extremity kinematic variables. The jumping movement was divided into three stages: take-off, flight, and landing. After the experiment, the subjects completed a questionnaire on the perceived impact force and the stability of the landing posture. A Kistler force plate (9281B, Switzerland) was used to measure ground reaction force, and a digital camera was used to measure the joint angles of the lower body. SPSS was used for statistical analysis via the dependent t-test (p<.05). Results: There were significant differences in jumping according to breathing method. The inhalation-and-exhalation method yielded significantly longer flight times combined with greater ground reaction force. The breath-holding method required more core flexion during landing, increasing movement at the hips and shoulders. Conclusion: Consequently, there was more flexion at the knee to compensate for this movement, and landing time was significantly longer for breath-holding.
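The dependent t-test the study ran in SPSS can be reproduced in a few lines; a minimal sketch using hypothetical paired data (one value per dancer under each breathing condition):

```python
import math

def dependent_t(x, y):
    """Paired (dependent) t-test statistic and degrees of freedom for two
    conditions measured on the same subjects. The data below is hypothetical,
    not the study's measurements."""
    d = [a - b for a, b in zip(x, y)]          # per-subject differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# e.g. flight times (s): inhalation-exhalation vs. breath-holding
t_stat, df = dependent_t([2.1, 2.3, 1.9, 2.5], [1.8, 2.0, 1.7, 2.2])
```

The resulting t is then compared against the critical value at p<.05 for the given degrees of freedom.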

PID Controled UAV Monitoring System for Fire-Event Detection (PID 제어 UAV를 이용한 발화 감지 시스템의 구현)

  • Choi, Jeong-Wook;Kim, Bo-Seong;Yu, Je-Min;Choi, Ji-Hoon;Lee, Seung-Dae
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.1 / pp.1-8 / 2020
  • If a dangerous situation arises in a place out of reach of humans, UAVs can be used to determine its size and location and so reduce further damage. With this in mind, this paper tunes the UAV's roll, pitch, and yaw proportional, integral, and derivative (PID) gains using Betaflight so that the UAV stays horizontal and hovers safely with minimal error. For the camera, OpenCV is installed on a Raspberry Pi and the image is converted to HSV (hue, saturation, value) color space; a filter renders everything black and white except red, the color closest to the fire we want to detect, so that the UAV can detect fire in aerial images in real time. Finally, it was confirmed that hovering was possible at heights of 0.5 to 5 m, and red-color recognition was possible at distances from 5 cm to 5 m.
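The red-filtering step amounts to a threshold in HSV space. A minimal pure-Python sketch of the same idea (the paper uses OpenCV's equivalent; the hue, saturation, and value thresholds here are assumed, not taken from the paper):

```python
import colorsys

def is_fire_red(r, g, b, sat_min=0.5, val_min=0.3):
    """Return True if an RGB pixel (0-255 per channel) falls in the 'red'
    HSV band used to flag fire. The thresholds are illustrative values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # red wraps around hue 0, so accept hues near 0 or near 1
    in_red_band = h <= 10 / 360 or h >= 350 / 360
    return in_red_band and s >= sat_min and v >= val_min

def fire_mask(pixels):
    """Black-and-white mask that keeps only red pixels, as in the filter
    described above (1 = candidate fire pixel, 0 = everything else)."""
    return [1 if is_fire_red(*p) else 0 for p in pixels]
```

With OpenCV, the same mask would come from `cv2.cvtColor` to HSV followed by `cv2.inRange` over two hue intervals.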

3D Display Method for Moving Viewers (움직이는 관찰자용 3차원 디스플레이 방법)

  • Heo, Gyeong-Mu;Kim, Myeong-Sin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.4 / pp.37-45 / 2000
  • In this paper, we suggest a method for detecting the two-eye position of a moving viewer from images obtained through a color CCD camera, together with a method for rendering view-dependent 3D images that consists of depth estimation, image-based 3D object modeling, and a stereoscopic display process. In experiments applying the suggested methods, we found the two-eye position accurately with a success rate of 97.5% and a processing time of 0.39 seconds on a personal computer, and displayed view-dependent 3D images using an F16 flight model. Through similarity measurements between stereo images rendered in the z-buffer by Open Inventor and images captured by a robot-mounted stereo camera, we found that the view-dependent 3D picture obtained by the proposed method is well matched to the viewer.


UAV-based Image Acquisition, Pre-processing, Transmission System Using Mobile Communication Networks (이동통신망을 활용한 무인비행장치 기반 이미지 획득, 전처리, 전송 시스템)

  • Park, Jong-Hong;Ahn, Il-Yeop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.594-596 / 2022
  • This paper describes a system for pre-processing high-definition images acquired through a camera mounted on an unmanned aerial vehicle (UAV) and transmitting them to a server over a mobile communication network. In existing UAV systems for image acquisition services, the acquired images are stored on the external storage device of the camera mounted on the UAV, and they are checked by physically retrieving the storage device after the flight is completed. With this approach, it is impossible to verify that image acquisition or pre-processing was performed properly before directly inspecting the image data on the storage device. In addition, since the data is stored only on an external storage device, sharing it is cumbersome. To solve these problems, we propose a system that can check images remotely in real time, together with a system and method for performing pre-processing such as geo-tagging and transmitting the results over a mobile communication network in addition to image acquisition through shooting on a UAV.


Example of Application of Drone Mapping System based on LiDAR to Highway Construction Site (드론 LiDAR에 기반한 매핑 시스템의 고속도로 건설 현장 적용 사례)

  • Seung-Min Shin;Oh-Soung Kwon;Chang-Woo Ban
    • Journal of the Korean Society of Industry Convergence / v.26 no.6_3 / pp.1325-1332 / 2023
  • Recently, much research has been conducted on point cloud data to support innovations such as construction automation in the transportation field and virtual national spatial data. Such data is often measured by remote control using devices such as UAVs and UGVs in terrain that is difficult for humans to access. Drones, one type of UAV, are mainly used to acquire point cloud data, but photogrammetry with a vision camera takes a long time to produce a point cloud map and is difficult to apply at construction sites where the terrain changes periodically and surveying is difficult. In this paper, we developed a point cloud mapping system based on non-repetitive scanning LiDAR and confirmed its improvements through field application. For accuracy analysis, a point cloud map was created with a 2-minute 40-second flight and about 30 seconds of software post-processing over a terrain measuring 144.5 × 138.8 m. Comparing against actual measured distances on structures averaging 4 m, an average error of 4.3 cm was recorded, confirming performance within an error range applicable in the field.
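The accuracy check in the last sentence is a mean absolute error between distances read from the point cloud map and reference measurements. A minimal sketch with hypothetical numbers (the paper's individual measurements are not given):

```python
def mean_abs_error_cm(cloud_m, tape_m):
    """Average absolute difference (in cm) between distances measured on the
    point cloud map and ground-truth distances measured on site.
    The sample values below are hypothetical."""
    if len(cloud_m) != len(tape_m):
        raise ValueError("need one point-cloud value per reference value")
    errors = [abs(c - t) * 100.0 for c, t in zip(cloud_m, tape_m)]
    return sum(errors) / len(errors)

# e.g. four ~4 m structures measured both ways
err = mean_abs_error_cm([4.05, 3.96, 4.02, 4.03], [4.00, 4.00, 4.00, 4.00])
```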

The study of environmental monitoring by science airship and high accuracy digital multi-spectral camera

  • Choi, Chul-Uong;Kim, Young-Seop;Nam, Kwang-Woo
    • Proceedings of the KSRS Conference / 2002.10a / pp.750-750 / 2002
  • The Airship PKNU is a roughly 12 m (32 ft) long helium-filled blimp whose two gasoline engines (3 hp each) are independently radio controlled. The motors and propellers can be tilted and are attached to the gondola through an axle and supporting braces. Four stabilizing fins are mounted at the tail of the airship, and a valve at the bottom of the hull is used to fill the helium. The inaugural flight was on July 31, 2002 in Pusan, South Korea. Most environmental monitoring systems use satellite imagery, but low-resolution multispectral satellite images (1 km to 250 m ground resolution) make it hard to acquire detailed information over complex terrain, while high-resolution (30 m) black-and-white satellite images are expensive and have long revisit cycles and delivery times. We therefore built a high-accuracy airship photogrammetry system. The airship can capture multispectral aerial photographs (visible, near-infrared, and thermal infrared) at high resolution (over 6 million pixels), and it can record atmospheric data: temperature (wet bulb, dew point, ambient), static and dynamic pressure, humidity, and wind speed. The system is quick to deploy (installation takes less than 30 minutes), compact, and easy to transport. It can store up to 628 frames per flight (over 6 million pixels in 4 bands: R, G, B, NIR) along with high-accuracy navigation data (position and rotation angles) from DGPS and a gyro system. The airship will be used to monitor red tide, sea surface temperature, chlorophyll-a, suspended solids, and so on.


Block Adjustment with GPS/INS in Aerial Photogrammetry (GPS/INS에 의한 항공사진측량의 블럭조정)

  • Park Woon Yong;Lee Kang Won;Lee Jae One;Jeong Gong Uhn
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.3 / pp.285-291 / 2004
  • GPS photogrammetry and GPS/INS photogrammetry are based on the direct measurement of the projection centers and attitude at the moment of camera exposure, by carrying a GPS receiver or an INS in the aircraft. Both methods allow the exterior orientation parameters to be acquired with only a minimum of ground control points; the ground control process can even be skipped entirely. Consequently, the time and cost of the mapping process can be drastically reduced. In this study, a test flight was conducted over the Suwon area to evaluate accuracy and efficiency by comparing three photogrammetric methods: traditional photogrammetry, GPS photogrammetry, and GPS/INS photogrammetry. The results verify a variety of advantages of GPS and GPS/INS photogrammetry over the traditional method; in particular, the number of ground control points needed for exterior orientation could be reduced by more than 70~80%.
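"Direct measurement of the projection centers" means the camera's projection center is obtained from the GPS antenna position plus a body-frame lever-arm offset rotated into the mapping frame by the INS attitude. A minimal sketch (the rotation convention and all numeric values are assumptions, not from the paper):

```python
import math

def projection_center(antenna_xyz, lever_arm_body, roll, pitch, yaw):
    """Direct georeferencing: shift the GPS antenna position by the body-frame
    camera lever arm rotated into the mapping frame. Angles in radians;
    Z(yaw)*Y(pitch)*X(roll) rotation convention assumed."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # body-to-mapping rotation matrix
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [antenna_xyz[i] + sum(R[i][j] * lever_arm_body[j] for j in range(3))
            for i in range(3)]
```

With level flight and zero heading, the lever arm passes through unchanged; any attitude change rotates it accordingly.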

Study of the UAV for Application Plans and Landscape Analysis (UAV를 이용한 경관분석 및 활용방안에 관한 기초연구)

  • Kim, Seung-Min
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.32 no.3 / pp.213-220 / 2014
  • This study conducted topographical analysis using orthophotographic data from waypoint flights with a UAV and constructed the system required for automatic waypoint flight using a multicopter. The results of the waypoint photographing are as follows. First, the waypoint flight over a 9.3 ha area took 40 minutes of photogrammetry in total. The multicopter maintained a fixed flight altitude and a constant speed, so accurate photographs were taken at the waypoints determined by the ground station, confirming the effectiveness of the photogrammetry. Second, a digital camera was attached to the multicopter, which is lightweight and low in cost compared to a general photogrammetric unmanned airplane, to check its mobility and economy; matching of the photo data and production of DEM and DXF files made topographic analysis possible. Third, a high-resolution (2 cm) orthophoto was produced for the inside of a river, showing that changes in vegetation and topography around the river can be analyzed. Fourth, the method can support more in-depth landscape research such as terrain analysis and visibility analysis. It may be widely used to analyze various terrains in cities and rivers, and for the landscape control of cultural remains and tourist sites, as well as the management of cultural and historical resources, such as visibility analysis based on a constructed DSM.
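The 2 cm orthophoto resolution follows from the camera's ground sample distance (GSD), the footprint of one sensor pixel on the ground. A minimal sketch (the pixel pitch, focal length, and altitude below are illustrative, not the study's values):

```python
def ground_sample_distance_cm(pixel_pitch_um, focal_mm, altitude_m):
    """GSD of a nadir aerial photo in centimetres:
    GSD = pixel pitch x flying height / focal length."""
    gsd_m = (pixel_pitch_um * 1e-6) * altitude_m / (focal_mm * 1e-3)
    return gsd_m * 100.0

# e.g. a 4 micron pixel pitch and a 20 mm lens flown at 100 m
gsd = ground_sample_distance_cm(4.0, 20.0, 100.0)
```

Flying lower or using a longer lens shrinks the GSD proportionally.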

Study on Structure Visual Inspection Technology using Drones and Image Analysis Techniques (드론과 이미지 분석기법을 활용한 구조물 외관점검 기술 연구)

  • Kim, Jong-Woo;Jung, Young-Woo;Rhim, Hong-Chul
    • Journal of the Korea Institute of Building Construction / v.17 no.6 / pp.545-557 / 2017
  • This study concerns an efficient alternative for the visual inspection of concrete surfaces on deteriorated infrastructure. By combining industrial drones and deep-learning-based image analysis techniques with traditional visual inspection, we tried to reduce the manpower, time, and cost required and to overcome the difficulty of inspecting tall and domed structures. The onboard device mounted on the drone consists of a high-resolution camera for detecting cracks wider than 0.3 mm, a LiDAR sensor, and an embedded image-processing module. Mounted on an industrial drone, it took sample images of damage on a site specimen through automatic flight navigation. The damaged parts of the specimen were used to measure not only the width and length of cracks but also white rust, and the measurements were compared with the final image analysis results. Using the image analysis techniques, 54 sample images of damage were analyzed through a segmentation, feature extraction, and decision-making process, and the analysis parameters were extracted using the supervised mode of a deep learning platform. Image analysis of 60 newly added, unlabeled image samples was then performed based on the extracted parameters, yielding a damage detection rate of 90.5%.
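Whether a 0.3 mm crack is resolvable depends on the object-space size of one pixel at the shooting distance. A minimal sketch of that conversion (the focal length and pixel pitch defaults are assumed values, not the paper's camera specification):

```python
def crack_width_mm(pixel_count, distance_m, focal_mm=35.0, pixel_pitch_um=4.0):
    """Convert a crack's width in image pixels to millimetres from the camera
    geometry: one pixel covers pixel_pitch * distance / focal_length on the
    surface. Default focal length and pixel pitch are illustrative."""
    mm_per_pixel = (pixel_pitch_um / 1000.0) * (distance_m * 1000.0) / focal_mm
    return pixel_count * mm_per_pixel

def detectable(pixel_count, distance_m, min_width_mm=0.3):
    """True if the measured crack meets the 0.3 mm detection target."""
    return crack_width_mm(pixel_count, distance_m) >= min_width_mm
```

Flying closer (smaller distance) makes each pixel cover less surface, so narrower cracks span enough pixels to be measured.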