• Title/Summary/Keyword: Time-of-Flight camera

The Kinematical Analysis of Li Xiaopeng Motion in Horse Vaulting (도마운동 Li Xiaopeng 동작의 운동학적 분석)

  • Park, Jong-Hoon;Yoon, Sang-Moon
    • Korean Journal of Applied Biomechanics
    • /
    • v.13 no.3
    • /
    • pp.81-98
    • /
    • 2003
  • The purpose of this study is to examine the kinematic characteristics of the Li Xiaopeng motion in horse vaulting by jump phase and to provide training data. Kinematic variables were analyzed through three-dimensional cinematography with a high-speed video camera of the Li Xiaopeng motion first performed at the men's vault competition of the 14th Busan Asian Games, and the following conclusions were obtained. 1. In the post-flight, increases in flight time, flight height, and twisting rotational velocity had a decisive effect on the increase of twist displacement. The Li Xiaopeng motion showed a longer flight time and greater flight height than the Ropez motion with the same total twist displacement. The rotational displacement of the trunk at the peak of the COG was well short of $360^{\circ}$ (one rotation), while the twist displacement reached $606^{\circ}$; the Li Xiaopeng motion thus concentrates the twist in the early flight. 2. At landing, the Li Xiaopeng motion moves the hip backward, straightens the trunk, and slows the horizontal velocity of the COG. This is thought to reflect a well-prepared landing, made possible by the large rotational and twist displacement secured while airborne. 3. At board contact, the Li Xiaopeng motion rotates rapidly while uprighting the trunk to recover the velocity lost by jumping with the horse at the back, and the trunk is already twisted by nearly $40^{\circ}$ at board contact. Provided that elasticity is generated without changing the position of the feet on the board, this aids the rotation and twist of the pre-flight. Thus, in the round-off phase, the tap of the waist produced by flexion and extension of the hip joint together with the arm push is thought to be very important. 4. In the pre-flight, the Li Xiaopeng motion showed a larger movement toward the horse than the techniques reported in previous studies and overcame its relatively low jumping power through rapid rotation of the trunk. It secured a large twist displacement and increased rotational displacement with the trunk bent forward, producing the effect of rushing toward the horse. 5. At horse contact, the Li Xiaopeng motion keeps the contact short and maintains a horse take-off angle close to vertical, contributing to the increase of post-flight time and height. This is thought to result from the rapid movement in the direction of motion together with the trunk rotational velocity gained quickly before horse contact, and from the small deviation of the rotation axis during twisting because the twist is performed effectively in the same direction.

A Study on Attitude Estimation of UAV Using Image Processing (영상 처리를 이용한 UAV의 자세 추정에 관한 연구)

  • Paul, Quiroz;Hyeon, Ju-Ha;Moon, Yong-Ho;Ha, Seok-Wun
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.5
    • /
    • pp.137-148
    • /
    • 2017
  • Recently, researchers have been actively working to utilize Unmanned Aerial Vehicles (UAVs) for military and industrial applications. One such application is to covertly follow a preceding aircraft when the route of a suspicious reconnaissance aircraft must be tracked, which requires estimating the attitude of the target aircraft, namely its roll, pitch, and yaw angles, at every instant. In this paper, we propose a method for estimating the attitude of a target aircraft in real time using the video provided by an external camera on the following aircraft. Image processing methods such as color space segmentation and template matching, together with statistical methods such as linear regression, are applied to detect key points and estimate the Euler angles. A comparison of the X-Plane flight data with the flight data estimated in the simulation experiment shows that the proposed method can be an effective way to estimate the flight attitude of the preceding aircraft.
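
The abstract above describes detecting key points of the target aircraft with template matching and then mapping image measurements to Euler angles with linear regression. The sketch below is only a rough illustration of that pipeline, not the paper's implementation: it locates a keypoint with OpenCV template matching and fits a linear model from a hypothetical image feature (a pixel offset) to a roll angle. The templates, features, training pairs, and function names are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): locate a keypoint by
# template matching, then map keypoint geometry to an Euler angle by linear
# regression. Template images and training pairs are hypothetical.
import cv2
import numpy as np

def find_keypoint(frame_gray, template_gray):
    """Return (x, y) of the best template match in the frame."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    h, w = template_gray.shape
    return max_loc[0] + w // 2, max_loc[1] + h // 2  # center of the matched region

def fit_angle_model(features, angles):
    """Least-squares fit of angle ~ [features, 1] @ coeffs."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(X, angles, rcond=None)
    return coeffs

def predict_angle(coeffs, feature):
    return float(np.append(feature, 1.0) @ coeffs)

if __name__ == "__main__":
    # Hypothetical training data: vertical offset between wingtip keypoints vs. roll angle.
    features = np.array([[-40.0], [-20.0], [0.0], [20.0], [40.0]])
    rolls = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])   # degrees
    model = fit_angle_model(features, rolls)
    print("estimated roll for a 30 px offset:", predict_angle(model, np.array([30.0])))
```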

Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.13-18
    • /
    • 2011
  • In this paper, we explain capturing, post-processing, and depth generation methods using multiple color and depth cameras. Although the time-of-flight (ToF) depth camera measures scene depth in real time, its output depth images contain noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching guided by the disparity information from the depth cameras showed better performance than the previous method. Moreover, we obtained accurate depth information even in occluded or textureless regions, which are the weaknesses of stereo matching.
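
Since the paper uses disparity information derived from the depth cameras to guide stereo matching, the central geometric relation is that, for a rectified pair with focal length f (pixels) and baseline B, disparity d = f·B / Z. Below is a minimal sketch of that conversion and of using it as a per-pixel search prior; the camera parameters and the search margin are illustrative assumptions, not values from the paper.

```python
# Sketch: convert ToF depth to an initial disparity estimate for stereo matching.
# Camera parameters and the search margin below are illustrative assumptions.
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m, eps=1e-6):
    """d = f * B / Z, elementwise; invalid (zero) depths map to disparity 0."""
    depth = np.asarray(depth_m, dtype=np.float64)
    return np.where(depth > eps, focal_px * baseline_m / np.maximum(depth, eps), 0.0)

def disparity_search_range(init_disparity, margin_px=8):
    """Per-pixel [d_min, d_max] window centered on the ToF-derived disparity."""
    d_min = np.clip(init_disparity - margin_px, 0.0, None)
    d_max = init_disparity + margin_px
    return d_min, d_max

if __name__ == "__main__":
    focal_px, baseline_m = 1050.0, 0.12             # assumed rectified stereo setup
    tof_depth = np.array([[1.5, 2.0], [3.0, 0.0]])  # metres; 0.0 marks invalid samples
    d0 = depth_to_disparity(tof_depth, focal_px, baseline_m)
    print("initial disparities:\n", d0)
    print("search windows:\n", disparity_search_range(d0))
```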

Development of An Integrated Display Software Platform for Small UAV with Parallel Processing Technique (병렬처리 기법을 이용한 소형 무인비행체용 통합 시현 소프트웨어 플랫폼 개발)

  • Lee, Young-Min;Hwang, In-So;Lim, Bae-Hyeon;Moon, Yong-Ho
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.1
    • /
    • pp.21-27
    • /
    • 2016
  • An integrated display software platform for a small UAV is developed in this paper based on a parallel processing technique. When a small UAV carrying a high-performance camera and avionics modules is employed for various surveillance missions, it is important to reduce the operator's workload and increase monitoring efficiency. For this purpose, efficient monitoring software is needed that can process the image and flight data obtained during flight within the given processing time and display them simultaneously. In this paper, we set up the requirements and suggest an architecture for the software platform. The integrated software platform is implemented with a parallel processing scheme. Using an AR drone, we verified that the various data are displayed concurrently by the suggested software platform.
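
The key idea described above is to process the video stream and the flight data in parallel so that both can be displayed within the frame period. The following is a hedged sketch of that producer/consumer structure in Python (threads plus queues); the frame and telemetry sources are stubs, not the platform's actual interfaces.

```python
# Sketch of a parallel display loop: one worker decodes frames, another parses
# telemetry, and the main loop merges the latest of each for display.
# The data sources here are stubs, not the platform's real interfaces.
import queue
import threading
import time

frame_q = queue.Queue(maxsize=2)
telemetry_q = queue.Queue(maxsize=10)

def frame_worker(stop):
    n = 0
    while not stop.is_set():
        time.sleep(1 / 30)                      # stand-in for camera capture/decoding
        if not frame_q.full():
            frame_q.put(f"frame {n}")
        n += 1

def telemetry_worker(stop):
    while not stop.is_set():
        time.sleep(1 / 10)                      # stand-in for avionics data parsing
        telemetry_q.put({"alt_m": 12.3, "roll_deg": 1.2})

def display_loop(duration_s=1.0):
    stop = threading.Event()
    workers = [threading.Thread(target=frame_worker, args=(stop,), daemon=True),
               threading.Thread(target=telemetry_worker, args=(stop,), daemon=True)]
    for w in workers:
        w.start()
    latest_telemetry = None
    t_end = time.time() + duration_s
    while time.time() < t_end:
        frame = frame_q.get()                   # block until a new frame arrives
        while not telemetry_q.empty():          # keep only the most recent telemetry
            latest_telemetry = telemetry_q.get()
        print("display:", frame, latest_telemetry)
    stop.set()

if __name__ == "__main__":
    display_loop()
```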

Quality Enhancement of 3D Volumetric Contents Based on 6DoF for 5G Telepresence Service

  • Byung-Seo Park;Woosuk Kim;Jin-Kyum Kim;Dong-Wook Kim;Young-Ho Seo
    • Journal of Web Engineering
    • /
    • v.21 no.3
    • /
    • pp.729-750
    • /
    • 2022
  • In general, the importance of 6DoF (degrees of freedom) 3D volumetric content technology is growing in 5G telepresence services, Web-based (WebGL) graphics, computer vision, robotics, and next-generation augmented reality. Since RGB and depth images can be acquired in real time through depth sensors that use various depth acquisition methods such as time of flight (ToF) and lidar, much has changed in object detection, tracking, and recognition research. In this paper, we propose a method to improve the quality of 3D models for 5G telepresence by processing images acquired through depth and RGB cameras on a multi-view camera system. The quality is improved in two major ways. The first concerns the shape of the 3D model: noise outside the object is removed by applying a mask obtained from a color image, together with a combined filtering operation based on the difference in depth information between pixels inside the object. Second, we propose an illumination compensation method for images acquired through the multi-view camera system for photo-realistic 3D model generation. It is assumed that the volumetric capture is done indoors and that the location and intensity of the illumination are constant over time. Since the multi-view system uses a total of eight camera pairs converging toward the center of the space, the intensity and angle of the light incident on each camera differ even under constant illumination. Therefore, all cameras capture a color correction chart, and a color optimization function is used to obtain a color conversion matrix that defines the relationship between the eight acquired images. Using this matrix, the images from all cameras are corrected with respect to the color correction chart. It was confirmed that the proposed method effectively removes noise and improves the quality of the 3D model when acquiring images of a 3D volumetric object with eight cameras, and it was experimentally shown that the color difference between images is reduced.
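
The illumination compensation step fits a color conversion matrix per camera from a shared color chart. As a rough sketch only (the paper's specific color optimization function is not reproduced here), a least-squares affine fit from each camera's measured chart patches to the reference patch colors could look like the following; the patch values are made up for illustration.

```python
# Sketch: fit a 3x4 affine color-conversion matrix from measured color-chart
# patches to reference patch colors by least squares, then apply it to pixels.
# Patch values below are illustrative, not measurements from the paper.
import numpy as np

def fit_color_matrix(measured_rgb, reference_rgb):
    """Solve reference ~ M @ [r, g, b, 1] for a 3x4 matrix M."""
    X = np.hstack([measured_rgb, np.ones((measured_rgb.shape[0], 1))])  # N x 4
    M, *_ = np.linalg.lstsq(X, reference_rgb, rcond=None)               # 4 x 3
    return M.T                                                          # 3 x 4

def apply_color_matrix(M, rgb):
    rgb = np.asarray(rgb, dtype=np.float64)
    homog = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return homog @ M.T

if __name__ == "__main__":
    reference = np.array([[115, 82, 68], [194, 150, 130], [98, 122, 157]], float)
    measured = reference * 0.9 + 12.0       # pretend this camera is slightly warm/dark
    M = fit_color_matrix(measured, reference)
    print("corrected patches:\n", apply_color_matrix(M, measured).round(1))
```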

An efficient range measurement method using stereoscopic disparity (양안 시차를 이용한 거리 계측의 고속 연산 알고리즘)

  • 김재한
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2001.10a
    • /
    • pp.592-595
    • /
    • 2001
  • Non-contact range measurement (ranging) at a distance is very important for military equipment, construction, navigation, automation, and so on. Measurement methods are divided into active and passive approaches. Active methods use the time of flight of a laser, microwave, or ultrasonic signal, or analyze camera images of a scene under laser illumination, among other techniques, but the equipment is complex. Passive methods use the binocular images of a stereo camera or focus characteristics, but most of them require a large amount of computation time. In this study, the basic approach is to extract the binocular disparity from a passive stereo camera and measure the distance to the target point by triangulation, and a new method is proposed that performs the computationally expensive steps of the conventional distance calculation efficiently and at high speed. Specifically, binocular edge images around the target point are obtained, and their accumulation profiles are correlated to rapidly extract the binocular disparity, which is the key element of the distance computation. The efficiency of the proposed method is demonstrated through experimental results.
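
As the abstract outlines, the disparity is extracted by correlating accumulation profiles of the binocular edge images, and the range then follows from triangulation (Z = f·B / d). The small sketch below illustrates that idea with synthetic images and assumed camera parameters; it is an illustration of the described approach rather than the authors' implementation.

```python
# Sketch of the described approach: column-wise accumulation profiles of left/right
# edge images are cross-correlated to estimate disparity, then range follows from
# triangulation Z = f * B / d. Images and camera parameters are synthetic.
import numpy as np

def accumulation_profile(edge_image):
    """Sum edge magnitudes down each column (1-D profile along x)."""
    return edge_image.sum(axis=0).astype(np.float64)

def estimate_disparity(left_edges, right_edges):
    """Pixel shift that best aligns the two accumulation profiles."""
    lp = accumulation_profile(left_edges)
    rp = accumulation_profile(right_edges)
    lp, rp = lp - lp.mean(), rp - rp.mean()
    corr = np.correlate(lp, rp, mode="full")
    return int(np.argmax(corr)) - (len(rp) - 1)

def range_from_disparity(disparity_px, focal_px, baseline_m):
    return focal_px * baseline_m / float(disparity_px)

if __name__ == "__main__":
    left = np.zeros((60, 100), np.uint8)
    right = np.zeros((60, 100), np.uint8)
    left[:, 52] = 255          # a vertical edge at x = 52 in the left image
    right[:, 40] = 255         # the same edge appears at x = 40 in the right image
    d = estimate_disparity(left, right)
    print("disparity:", d, "px, range:",
          round(range_from_disparity(d, 800.0, 0.2), 2), "m")
```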

Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering
    • /
    • v.15 no.1
    • /
    • pp.102-110
    • /
    • 2021
  • Recently, multi-copters equipped with various collision avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize a three-dimensional position. Multiple cameras and real-time SLAM technology are also used to calculate the relative position to obstacles, as is a three-dimensional depth sensor combining a small processor and camera. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for the development of collision avoidance software technology. The multi-copter system was equipped with a LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision avoidance technology, the system design and manufacturing process, and flight test results.
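
As a rough illustration of how a detector output and a depth image can be combined for collision avoidance (the paper's actual YOLO pipeline and control logic are not shown here), the sketch below checks the median depth inside each detected bounding box against a stop distance; the detection format, depth units, and threshold are assumptions.

```python
# Illustrative sketch only: combine object-detection boxes with a depth image to
# decide whether to trigger an avoidance maneuver. Box format, depth units, and
# the stop distance are assumptions, not details from the paper.
import numpy as np

STOP_DISTANCE_M = 1.5   # assumed minimum clearance

def obstacle_too_close(depth_m, boxes, stop_distance=STOP_DISTANCE_M):
    """boxes: list of (x1, y1, x2, y2) pixel rectangles from a detector (e.g. YOLO)."""
    for (x1, y1, x2, y2) in boxes:
        patch = depth_m[y1:y2, x1:x2]
        valid = patch[patch > 0]                 # ignore missing depth samples
        if valid.size and np.median(valid) < stop_distance:
            return True
    return False

if __name__ == "__main__":
    depth = np.full((120, 160), 4.0)             # background 4 m away
    depth[40:80, 60:100] = 1.0                   # an object 1 m ahead
    detections = [(60, 40, 100, 80)]
    print("avoid:", obstacle_too_close(depth, detections))
```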

Drone Obstacle Avoidance Algorithm using Camera-based Reinforcement Learning (카메라 기반 강화학습을 이용한 드론 장애물 회피 알고리즘)

  • Jo, Si-hun;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.27 no.5
    • /
    • pp.63-71
    • /
    • 2021
  • Among drone autonomous flight technologies, obstacle avoidance is a very important technology that can prevent damage to the drone or its surroundings and avert danger. Although the LiDAR sensor-based obstacle avoidance method shows relatively high accuracy and is widely used in recent studies, it has the disadvantages of a high unit price and a limited capacity for processing visual information. Therefore, this paper proposes an obstacle avoidance algorithm for drones using camera-based PPO (Proximal Policy Optimization) reinforcement learning, which is relatively inexpensive and highly scalable because it uses visual information. The drone, obstacles, target points, and so on are randomly placed in a three-dimensional learning environment, stereo images are obtained using a Unity camera, and YOLOv4-Tiny object detection is performed. Next, the distance between the drone and each detected object is measured through triangulation of the stereo camera. Based on this distance, the presence or absence of an obstacle is determined; penalties are given for obstacles and rewards for target points. Experiments with this method show that the camera-based obstacle avoidance algorithm achieves accuracy and average target-point arrival time comparable to those of a LiDAR-based obstacle avoidance algorithm, so it is highly likely to be usable.
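
Two concrete pieces of the description are the stereo triangulation of detected objects (Z = f·B / disparity) and the reward signal that penalizes obstacles and rewards reaching the target point. A hedged sketch of both follows; the focal length, baseline, and reward magnitudes are placeholders, not the values used in the paper.

```python
# Sketch: distance from the stereo disparity of a detected object's center, plus
# a simple reward rule (penalty near obstacles, bonus at the target). Parameters
# are placeholders, not the paper's values.
FOCAL_PX = 320.0       # assumed focal length in pixels
BASELINE_M = 0.25      # assumed stereo baseline

def distance_from_disparity(x_left_px, x_right_px):
    """Triangulated depth Z = f * B / d for a matched detection center."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        return float("inf")          # no reliable depth for zero/negative disparity
    return FOCAL_PX * BASELINE_M / disparity

def step_reward(detected_class, distance_m, safe_distance_m=1.0):
    """Penalize getting close to obstacles, reward reaching the target point."""
    if detected_class == "target" and distance_m < safe_distance_m:
        return +10.0, True            # episode ends successfully
    if detected_class == "obstacle" and distance_m < safe_distance_m:
        return -10.0, True            # collision risk: penalize and end episode
    return -0.01, False               # small time penalty encourages shorter paths

if __name__ == "__main__":
    d = distance_from_disparity(210.0, 190.0)
    print("distance:", round(d, 2), "m ->", step_reward("obstacle", d))
```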

Tracking of ground objects using image information for autonomous rotary unmanned aerial vehicles (자동 비행 소형 무인 회전익항공기의 영상정보를 이용한 지상 이동물체 추적 연구)

  • Kang, Tae-Hwa;Baek, Kwang-Yul;Mok, Sung-Hoon;Lee, Won-Suk;Lee, Dong-Jin;Lim, Seung-Han;Bang, Hyo-Choong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.38 no.5
    • /
    • pp.490-498
    • /
    • 2010
  • This paper presents an autonomous target tracking approach, together with a technique for periodically transmitting images to the ground control station, for an unmanned aerial vehicle using an onboard gimbaled (pan-tilt) camera system. The miniature rotary-wing UAV used in this study carries a small high-performance camera, an improved target acquisition technique, and an autonomous target tracking algorithm. An image stabilization algorithm was also adopted to stabilize the real-time image sequences. Finally, the target tracking performance was verified through a real flight test.
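
A common way to keep a ground target centered with a pan-tilt gimbal is a proportional correction that converts the target's pixel offset into pan and tilt rate commands. The sketch below shows that general idea under assumed camera and gain values and a simple sign convention (positive pan = rotate right, positive tilt = rotate down); it is not the control law of the paper.

```python
# Sketch: proportional pan/tilt correction that drives a target's pixel offset
# toward the image center. Field of view, resolution, gains, and the sign
# convention are assumptions for illustration.
IMG_W, IMG_H = 640, 480
HFOV_DEG, VFOV_DEG = 60.0, 45.0      # assumed camera field of view
KP_PAN, KP_TILT = 0.8, 0.8           # assumed proportional gains

def pixel_offset_to_angles(target_px, target_py):
    """Angular offset (deg) of the target from the optical axis."""
    dx = target_px - IMG_W / 2
    dy = target_py - IMG_H / 2
    pan_err = dx / IMG_W * HFOV_DEG
    tilt_err = dy / IMG_H * VFOV_DEG
    return pan_err, tilt_err

def gimbal_rate_command(target_px, target_py):
    """Rate command (deg/s) that rotates the gimbal toward the target to re-center it."""
    pan_err, tilt_err = pixel_offset_to_angles(target_px, target_py)
    return KP_PAN * pan_err, KP_TILT * tilt_err

if __name__ == "__main__":
    # Target detected 100 px right of and 50 px below the image center.
    print("rate command (pan, tilt) deg/s:",
          tuple(round(c, 2) for c in gimbal_rate_command(420, 290)))
```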

Implementation of the Verification and Analysis System for the High-Resolution Stereo Camera (고해상도 다기능 스테레오 카메라 지상 검증 및 분석 시스템 구현)

  • Shin, Sang-Youn;Ko, Hyoungho
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.3
    • /
    • pp.471-482
    • /
    • 2019
  • The mission of the high-resolution camera for lunar exploration is to provide 3D topographic information, which makes it possible to find an appropriate landing site or to control an accurate landing using short-distance stereo images in real time. In this paper, a ground verification and analysis system using the multi-application stereo camera is proposed for developing the high-resolution camera for lunar exploration. Mission test items and test plans for the mission requirements are provided, and the test results are analyzed with the ground verification and analysis system. For a realistic simulation of the lunar orbiter, a target area with characteristics similar to the real lunar surface is chosen, and an aircraft flight is planned to take images of the area. A DEM is extracted from the stereo images to compose the three-dimensional results. The high-resolution camera mission requirements for lunar exploration are verified, and the ground data analysis system is developed.
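
Since the analysis system derives a DEM from the stereo images, the core geometric step is converting per-pixel disparity into terrain elevation. A minimal sketch under simplified assumptions (a rectified nadir stereo pair with known flying height H, baseline B, and focal length f, so Z = f·B / d and h = H − Z) is shown below; the numbers are illustrative and not from the paper.

```python
# Sketch: convert a disparity map from a rectified nadir stereo pair into a
# simple DEM, using Z = f * B / d and elevation h = H - Z. The flying height,
# baseline, focal length, and disparities are illustrative values only.
import numpy as np

def disparity_to_dem(disparity_px, focal_px, baseline_m, flying_height_m):
    """Elevation above the datum for each valid (> 0) disparity sample."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-6), np.nan)
    return flying_height_m - depth            # NaN marks pixels without a match

if __name__ == "__main__":
    focal_px, baseline_m, flying_height_m = 4000.0, 400.0, 3000.0
    disparity = np.array([[533.3, 540.0],
                          [550.0, 0.0]])      # 0 marks an unmatched pixel
    dem = disparity_to_dem(disparity, focal_px, baseline_m, flying_height_m)
    print("elevations (m):\n", np.round(dem, 1))
```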