• Title/Summary/Keyword: air object tracking


An Aerial Robot System Tracking a Moving Object

  • Ogata, Takehito;Tan, Joo Kooi;Ishikawa, Seiji
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2003.10a / pp.1917-1920 / 2003
  • Automatic tracking of a moving object such as a person is in high demand, especially in surveillance. This paper describes an experimental system that tracks a moving object on the ground using a visually controlled aerial robot. A blimp is used as the aerial robot because of its slow, localized motion and silent operation. The developed blimp is equipped with a downward-facing camera and four rotors that control its motion. Once the camera captures an image of a specified moving object on the ground, the blimp is controlled to follow the object using this visual information. Experimental results show satisfactory performance. Advantages of the system include that aerial images often avoid occlusion among objects on the ground and that a blimp's motion is far less restricted in the air than that of, e.g., a mobile robot running on the ground.

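The abstract does not give the blimp's control law, so the following is a minimal sketch of one plausible visual-servo scheme: the tracked object's pixel offset from the center of the downward image is mapped proportionally to thrust commands for the four rotors. The gain k_p, image size, and rotor layout are all illustrative assumptions, not the paper's design.

```python
import numpy as np

def rotor_commands(centroid_px, image_size=(320, 240), k_p=0.002):
    """Map the tracked object's pixel offset from the image center to
    four rotor thrust commands (front, rear, left, right).

    A hypothetical proportional visual-servo law; the paper does not
    specify its controller, gains, or rotor layout.
    """
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = centroid_px[0] - cx, centroid_px[1] - cy  # pixel error

    surge = k_p * ey   # forward/backward thrust component
    sway = k_p * ex    # left/right thrust component

    # Differential mixing onto the four rotors, clipped to actuator limits.
    return np.clip([surge, -surge, sway, -sway], -1.0, 1.0)

print(rotor_commands((200, 90)))  # object right of and above image center
```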

Digital Twin and Visual Object Tracking using Deep Reinforcement Learning

  • Park, Jin Hyeok;Farkhodov, Khurshedjon;Choi, Piljoo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.145-156 / 2022
  • Object tracking models must now cope with complex, unpredictable tracking environments on real hardware, demanding versatile algorithms. In this paper, we build a virtual city environment in AirSim (Aerial Informatics and Robotics Simulation, CityEnvironment) and apply a DQN (Deep Q-Network) deep reinforcement learning model within it. The proposed object tracking DQN observes the environment by taking as input the continuous images produced by the virtual environment simulation system and controls the operation of a virtual drone. The model is pre-trained on various existing continuous image sets; since those sets are image data of real environments and objects, the system is implemented in 3D to track virtual environments and the moving objects within them.
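As a concrete illustration of the kind of model the abstract describes, here is a minimal DQN sketch in PyTorch: a small convolutional Q-network over stacked camera frames with epsilon-greedy action selection. The action set, network shape, and frame format are assumptions; the paper's actual network and its AirSim wiring are not specified in the abstract.

```python
import random
import torch
import torch.nn as nn

N_ACTIONS = 5  # e.g. forward, left, right, up, down (hypothetical action set)

class QNet(nn.Module):
    """Tiny convolutional Q-network over 4 stacked grayscale frames."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(N_ACTIONS)  # one Q-value per discrete action

    def forward(self, x):
        return self.head(self.conv(x))

def select_action(qnet, state, eps=0.1):
    """Epsilon-greedy choice over the network's Q-values."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return qnet(state.unsqueeze(0)).argmax(1).item()

qnet = QNet()
state = torch.zeros(4, 84, 84)  # placeholder for stacked simulator frames
print(select_action(qnet, state))
```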

Application Of Probability Filter For Maintenance Of Air Objects

  • Piskunov, Stanislav;Iasechko, Maksym;Yukhno, Oleksandr;Polstiana, Nadiia;Gnusov, Yurii;Bashynskyi, Kyrylo;Kozyr, Anton
    • International Journal of Computer Science & Network Security / v.21 no.5 / pp.31-34 / 2021
  • The article considers ways to increase the accuracy of target trajectory parameter estimates while guaranteeing a given probability of stable tracking of an air object, in particular during its maneuver. The aim of the work is to develop a filtering algorithm that provides the given probability of stable tracking by determining the regular components of the filtering errors, particularly when the air object maneuvers, and compensating for them through appropriate correction of the filter parameters and of the trajectory parameter estimates.
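The article's specific filter is not reproduced in the abstract; the sketch below shows the general idea of a trajectory filter with maneuver compensation, using a constant-velocity Kalman filter whose process noise is inflated when the normalized innovation exceeds a gate. This is a common stand-in for detecting and correcting a maneuver, not the article's algorithm; all matrices and thresholds are illustrative.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])        # position-only measurement
R = np.array([[25.0]])            # measurement noise (assumed)

def step(x, P, z, q=0.1, gate=9.0):
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Innovation and its covariance
    y = z - H @ x
    S = H @ P @ H.T + R
    if (y.T @ np.linalg.inv(S) @ y).item() > gate:  # maneuver suspected
        P = P + 100 * Q                             # inflate uncertainty
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.zeros((2, 1)), np.eye(2) * 100
for z in [0.0, 4.8, 10.1, 30.0]:                    # last point: sudden maneuver
    x, P = step(x, P, np.array([[z]]))
print(x.ravel())
```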

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System

  • Kim, Youngsoo;Lee, Junbeom;Lee, Chanyoung;Jeon, Hyeri;Kim, Seungpil
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.163-169 / 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and hand-motion control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images from the drone's camera with deep learning algorithms: YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms offer both higher target detection accuracy and higher processing speed than the conventional color-based CAMShift algorithm. In addition, to make the drone easy to control by hand from the ground control station, we classified hand motions and generated flight control commands through motion recognition using the YOLO algorithm. We confirmed that this deep learning-based target tracking and hand-motion control system tracks targets stably and makes the drone easy to control.
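Structurally, the described system is a detect-then-track loop. The skeleton below shows that control flow with stub classes standing in for the real YOLO/MobileNet detector and DeepSORT tracker; the stubs' internals are placeholders, not the paper's implementation.

```python
# Skeletal detect-then-track loop in the spirit of the paper's pipeline.

class Detector:
    """Stand-in for a YOLO/MobileNet detector."""
    def detect(self, frame):
        # Would run the CNN; here: return (x, y, w, h, confidence) boxes.
        return [(100, 120, 40, 80, 0.9)]

class Tracker:
    """Stand-in for a DeepSORT-style tracker."""
    def __init__(self):
        self.next_id, self.tracks = 0, {}

    def update(self, boxes):
        # Real DeepSORT associates detections to tracks by appearance
        # embedding plus Kalman motion; this stub just assigns fresh IDs.
        self.tracks = {}
        for box in boxes:
            self.tracks[self.next_id] = box
            self.next_id += 1
        return self.tracks

detector, tracker = Detector(), Tracker()
for frame in range(3):                      # stands in for the camera stream
    tracks = tracker.update(detector.detect(frame))
    print(frame, tracks)                    # track_id -> bounding box
```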

Design of Vehicle Location Tracking System using Mobile Interface

  • Chung, Ji-Moon;Choi, Sung;Ryu, Keun-Ho
    • The Society of Digital Policy & Management: Conference Proceedings / 2004.11a / pp.185-202 / 2004
  • Recent developments in wireless computing and GPS technology have spurred active development of real-time location-information applications such as transportation vehicle management, air traffic control, and location-based services. In particular, the vehicle location tracking system, which monitors vehicle positions from a control center, has emerged as a representative application. However, current vehicle location tracking systems cannot give users a vehicle's position at a time for which no record is stored in the database. We designed a vehicle location tracking system that tracks vehicle locations through a mobile interface such as a PDA. The proposed system consists of a vehicle location retrieving server and a mobile interface. It provides not only a moving vehicle's current location but also its position at past and future times not stored in the database.

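The abstract's key claim, serving positions at times not stored in the database, can be illustrated with linear interpolation or extrapolation from the two most recent stored fixes: a query between them interpolates, a query after them extrapolates. The schema and motion model below are assumptions; the paper's server may estimate differently.

```python
from datetime import datetime

# (timestamp, x, y) rows as they might come from the DB (assumed schema).
fixes = [
    (datetime(2004, 11, 1, 12, 0, 0), 0.0, 0.0),
    (datetime(2004, 11, 1, 12, 1, 0), 60.0, 30.0),
]

def position_at(t):
    """Linearly interpolate/extrapolate from the last two stored fixes."""
    (t0, x0, y0), (t1, x1, y1) = fixes[-2], fixes[-1]
    a = (t - t0).total_seconds() / (t1 - t0).total_seconds()
    return x0 + a * (x1 - x0), y0 + a * (y1 - y0)  # a > 1 extrapolates

print(position_at(datetime(2004, 11, 1, 12, 0, 30)))  # past, interpolated
print(position_at(datetime(2004, 11, 1, 12, 1, 30)))  # future, extrapolated
```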

An Ultrasonic Positioning System Using Zynq SoC

  • Kang, Moon-Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.8 / pp.1250-1256 / 2017
  • In this research, a high-performance ultrasonic positioning system is proposed to track the position of an indoor mobile object. Composed of an ultrasonic sender (the mobile object) and a receiver (anchor), the system uses three ultrasonic time-of-flight (TOF) measurements and trilateration to estimate the object's position with sub-centimeter accuracy. Because ultrasonic waves propagating in air are disturbed by temperature, wind, and obstacles in the propagation path, an ultrasonic pulse debounce technique and a Kalman filter are applied to the TOF and position calculations, respectively, to compensate for the interference and obtain a more accurate position for the moving object. To perform these tasks in real time, the ultrasonic signals are processed fully digitally on a Zynq SoC, and the whole signal processing system is designed as hierarchical block diagrams in the Vivado IDE (integrated design environment). A hardware/software co-design is implemented in which the digital circuit portion resides in the Zynq's FPGA and the software portion is written in C on the Zynq's processors, using a bare-metal multiprocessing scheme that distributes the C code across the dual cores, cpu0 and cpu1. Experiments verified the usefulness of the proposed system, confirming that the moving object can be tracked with sub-centimeter accuracy.
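A minimal sketch of the core TOF-to-position step: the three TOFs are scaled by the speed of sound into distances, and the resulting circle equations are linearized into a least-squares solve. The anchor layout and example TOFs are invented, and the paper's pulse-debounce and Kalman-filter stages are omitted here.

```python
import numpy as np

C_SOUND = 343.0  # m/s at ~20 degC; the real system compensates temperature

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # assumed layout (m)
tofs = np.array([0.00652, 0.00652, 0.00825])              # example TOFs (s)
d = C_SOUND * tofs                                        # distances (m)

# Each anchor gives |p - a_i|^2 = d_i^2. Subtracting the first equation
# from the others cancels |p|^2 and leaves a linear system A p = b.
x, y = anchors[:, 0], anchors[:, 1]
A = 2 * np.column_stack([x[1:] - x[0], y[1:] - y[0]])
b = d[0]**2 - d[1:]**2 + x[1:]**2 - x[0]**2 + y[1:]**2 - y[0]**2
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)  # estimated (x, y) of the ultrasonic sender, here ~(2, 1)
```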

Fingertip Detection through Atrous Convolution and Grad-CAM

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.11-20 / 2019
  • With the development of deep learning technology, research on user-friendly interfaces for virtual reality and augmented reality applications is being actively pursued. To support interfaces driven by the user's hands, this paper proposes a deep learning-based fingertip detection method that tracks fingertip coordinates so the user can select virtual objects or write and draw in the air. The method first crops the approximate fingertip region from the input image using Grad-CAM, then applies a convolutional neural network with atrous convolution to the cropped image to detect the fingertip location. The method is simpler and easier to implement than existing object detection algorithms and requires no preprocessing to annotate objects. To verify the method, we implemented an air-writing application; with a recognition rate of 81% and a processing time of 76 ms, users could write smoothly in the air without delay, making real-time use practical.
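The detection stage can be sketched as a small dilated-convolution network applied to the Grad-CAM crop, with the fingertip taken as the argmax of a predicted heatmap. Channel counts, dilation rates, crop size, and the heatmap head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FingertipHead(nn.Module):
    """Atrous (dilated) conv stack over a Grad-CAM-cropped hand patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            # Dilation widens the receptive field without extra parameters.
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(),
            nn.Conv2d(32, 1, 1),   # 1-channel fingertip heatmap
        )

    def forward(self, patch):
        heat = self.net(patch)
        # Heatmap argmax -> fingertip pixel inside the cropped patch.
        idx = heat.flatten(2).argmax(-1)
        return torch.stack([idx % heat.shape[-1], idx // heat.shape[-1]], -1)

patch = torch.randn(1, 3, 96, 96)   # Grad-CAM crop (assumed size)
print(FingertipHead()(patch))       # (x, y) in patch coordinates
```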

Human Sensory Feedback Research in the Armstrong Laboratory

  • Weisenberger, Janet M.
    • Journal of the Ergonomics Society of Korea / v.16 no.2 / pp.83-100 / 1997
  • The Human Sensory Feedback Laboratory, part of the Armstrong Laboratory at Wright-Patterson Air Force Base, Ohio, is involved in the development and evaluation of systems that provide sensory feedback to the human operator in telerobotic and virtual environment applications. Projects underway in the laboratory are primarily concerned with the information that force and vibrotactile feedback provide to the operator in dexterous manipulation tasks. Four research projects are described in the present report: 1) experiments evaluating a 30-element fingertip display that employs a titanium-nickel shape-memory-alloy actuator design to provide vibrotactile feedback about object shape and surface texture; 2) development of a fingertip force-feedback display for 3-dimensional information about object shape and surface texture; 3) use of a force-feedback joystick to provide "force tunnel" information in pilot pursuit tracking tasks; and 4) evaluations of a 7 degree-of-freedom exoskeleton used to control a robotic arm. Both basic and applied research questions are discussed.


Design of Vehicle Location Tracking System using Mobile Interface

  • Oh, Jun-Seok;Ahn, Yoon-Ae;Jang, Seung-Youn;Lee, Bong-Gyou;Ryu, Keun-Ho
    • The KIPS Transactions: Part D / v.9D no.6 / pp.1071-1082 / 2002
  • Recent developments in wireless computing and GPS technology have spurred active development of real-time location-information applications such as transportation vehicle management, air traffic control, and location-based services. In particular, the vehicle location tracking system, which monitors vehicle positions from a control center, has emerged as a representative application. However, current vehicle location tracking systems cannot give users a vehicle's position at a time for which no record is stored in the database. We designed a vehicle location tracking system that tracks vehicle locations through a mobile interface such as a PDA. The proposed system consists of a vehicle location retrieving server and a mobile interface. It provides not only a moving vehicle's current location but also its position at past and future times not stored in the database.

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.306-313 / 2023
  • Realistic and graphics-based virtual reality content builds on 360-degree videos, and viewport extraction, whether driven by the viewer's intent or by an automatic recommendation function, is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree videos and proposes the parallel computing structure needed to extract multiple viewports. Viewport extraction is parallelized with per-pixel threads that transform ERP coordinates to 3D spherical surface coordinates and then map those spherical coordinates to 2D coordinates within the viewport. Evaluated on aerial 360-degree video sequences with up to 30 simultaneous viewport extractions, the proposed structure achieved up to a 5240-fold speedup over a CPU-based implementation whose computation time grows in proportion to the number of viewports. With high-speed I/O or memory buffers that reduce ERP frame I/O time, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can serve simultaneous multi-access services for 360-degree videos or virtual reality content, as well as video summarization services for individual users.
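The per-pixel mapping that the paper parallelizes on the GPU can be sketched as follows: each viewport pixel becomes a ray through a pinhole model, is rotated to the viewport's yaw/pitch, converted to spherical angles, and used to sample the ERP frame. This is a vectorized NumPy version (each GPU thread would handle one pixel); the field of view, frame sizes, and rotation order are illustrative assumptions.

```python
import numpy as np

def viewport(erp, yaw, pitch, fov=np.pi / 2, out_w=320, out_h=180):
    """Sample one rectilinear viewport from an equirectangular (ERP) frame."""
    H, W = erp.shape[:2]
    f = (out_w / 2) / np.tan(fov / 2)                  # pinhole focal length
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    rays = np.stack([u, v, np.full_like(u, f, dtype=float)], -1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate rays by pitch (about x) then yaw (about y) to the viewport center.
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rays = rays @ (Ry @ Rx).T

    # Spherical angles -> ERP pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])       # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))      # [-pi/2, pi/2]
    x = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)  # ERP column
    y = ((lat / np.pi + 0.5) * (H - 1)).astype(int)    # ERP row
    return erp[y, x]

erp = np.random.randint(0, 255, (960, 1920, 3), dtype=np.uint8)
print(viewport(erp, yaw=0.3, pitch=-0.1).shape)        # (180, 320, 3)
```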