• Title/Summary/Keyword: 3D tracking

Fireworks Modeling Technique based on Particle Tracking (입자추적기반의 불꽃 모델링 기법)

  • Cho, ChangWoo;Kim, KiHyun;Jeong, ChangSung
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.102-109 / 2014
  • A particle system is used for modeling physical phenomena. Many traditional simulation modeling methods are well suited to applications such as branches, clouds, waves, fog, rain, snow, and fireworks in three-dimensional space. In this paper, we present a new technique for modeling 3D fireworks based on Firework Particle Tracking (FPT) using the particle system. Our method can track and recognize the launched and exploded particles of fireworks and extract relatively accurate 3D positions of the particles using 3D depth values. It can realize 3D simulation by using tracking information such as the position, speed, color, and lifetime of each firework particle. We exploit a Region of Interest (ROI) for fast particle extraction and for preventing false particle extraction caused by noise. Moreover, a Kalman filter is used to enhance robustness in the launch step. We propose a new fireworks particle tracking method for efficient tracking of particles that considers the maximum moving range and moving direction of particles, and show that the 3D speeds of particles can be obtained by finding the rotation angles of the fireworks. We also evaluate the performance of particle tracking, namely tracking speed and accuracy of tracking, classification, and rotation angle, for four types of fireworks: sphere, circle, chrysanthemum, and heart.
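
The abstract above mentions a Kalman filter for robustness in the launch step and gating by a maximum moving range. As a rough illustration only, the following is a minimal 3D constant-velocity Kalman-filter sketch in Python with OpenCV; the noise covariances, frame rate, and gating threshold are assumptions, not values from the paper.

```python
# Minimal 3D constant-velocity Kalman filter sketch for the launch step.
# All noise parameters and the gating threshold are illustrative assumptions.
import numpy as np
import cv2

dt = 1.0 / 30.0                      # assumed frame interval
kf = cv2.KalmanFilter(6, 3)          # state: [x, y, z, vx, vy, vz], measurement: [x, y, z]
kf.transitionMatrix = np.eye(6, dtype=np.float32)
kf.transitionMatrix[0, 3] = kf.transitionMatrix[1, 4] = kf.transitionMatrix[2, 5] = dt
kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2
kf.errorCovPost = np.eye(6, dtype=np.float32)

def track_launch(observations, max_step=0.5):
    """Gate each 3D observation by a maximum moving range before correcting."""
    first = np.asarray(observations[0], dtype=np.float32)
    kf.statePost = np.vstack([first.reshape(3, 1), np.zeros((3, 1), np.float32)])
    trajectory = [first.copy()]
    for z in observations[1:]:                            # z: (x, y, z) from the depth map
        pred = kf.predict()[:3].ravel()
        if np.linalg.norm(np.asarray(z) - pred) <= max_step:   # reject noisy detections
            kf.correct(np.asarray(z, dtype=np.float32).reshape(3, 1))
        trajectory.append(kf.statePost[:3].ravel().copy())
    return trajectory
```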

Robust Position Tracking for Position-Based Visual Servoing and Its Application to Dual-Arm Task (위치기반 비주얼 서보잉을 위한 견실한 위치 추적 및 양팔 로봇의 조작작업에의 응용)

  • Kim, Chan-O;Choi, Sung;Cheong, Joo-No;Yang, Gwang-Woong;Kim, Hong-Seo
    • The Journal of Korea Robotics Society / v.2 no.2 / pp.129-136 / 2007
  • This paper introduces a position-based robust visual servoing method developed for the operation of a human-like robot with two arms. The proposed visual servoing method utilizes the SIFT algorithm for object detection and the CAMSHIFT algorithm for object tracking. While the conventional CAMSHIFT has mainly been used for object tracking in a 2D image plane, we extend its usage to object tracking in 3D space by combining the results of CAMSHIFT for the two image planes of a stereo camera. This approach shows robust and dependable results. Once the robot's task is defined based on the extracted 3D information, the robot is commanded to carry out the task. We conduct several position-based visual servoing tasks and compare performances under different conditions. The results show that the proposed visual tracking algorithm is simple but very effective for position-based visual servoing.
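
The key step described above, running CAMSHIFT on both image planes of a stereo camera and combining the results into a 3D position, can be sketched as follows. This is not the authors' implementation: the SIFT-based detection is omitted, and the projection matrices `P_left`/`P_right`, the target hue histogram `hist`, and the initial search windows are assumed inputs.

```python
# Sketch: CAMSHIFT on both images of a stereo pair, then triangulate the two
# window centers to a single 3D point.
import numpy as np
import cv2

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

def camshift_center(frame_bgr, hist, window):
    """Run CAMSHIFT on one view; return the tracked window center and window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, window = cv2.CamShift(backproj, window, term)
    return np.array(rot_rect[0], dtype=np.float64), window   # (cx, cy)

def track_3d(left, right, hist, win_l, win_r, P_left, P_right):
    """Combine the two 2D CAMSHIFT results into one 3D position."""
    c_l, win_l = camshift_center(left, hist, win_l)
    c_r, win_r = camshift_center(right, hist, win_r)
    X = cv2.triangulatePoints(P_left, P_right,
                              c_l.reshape(2, 1), c_r.reshape(2, 1))
    X = (X[:3] / X[3]).ravel()           # homogeneous -> Euclidean
    return X, win_l, win_r
```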

Realtime Markerless 3D Object Tracking for Augmented Reality (증강현실을 위한 실시간 마커리스 3차원 객체 추적)

  • Min, Jae-Hong;Islam, Mohammad Khairul;Paul, Anjan Kumar;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.14 no.2 / pp.272-277 / 2010
  • AR (Augmented Reality) needs a medium between the real and virtual worlds, and recognition techniques are necessary to track an object continuously. Marker-based optical tracking is mainly used, but attaching markers to the target objects is time-consuming and inconvenient. Therefore, many researchers are now trying to develop markerless tracking techniques. In this paper, we extract features and 3D positions from 3D objects and propose real-time tracking based on these features and positions, rather than relying only on coplanar features and 2D positions. We extract features using SURF, obtain the rotation matrix and translation vector of the 3D object using POSIT with these features, and track the object in real time. If the extracted features are not sufficient and tracking of the object fails, new features are extracted and re-matched to recover the tracking.
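
A rough sketch of the feature-then-pose idea described above is shown below. ORB and `cv2.solvePnPRansac` stand in for the paper's SURF and POSIT (SURF needs opencv-contrib and POSIT exists only in the legacy OpenCV API); the per-keypoint 3D coordinates `ref_points_3d` and the camera matrix `K` are assumed inputs.

```python
# Sketch of the feature-then-pose idea: detect features, match them to a
# reference view with known 3D coordinates, and estimate rotation/translation.
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_pose(frame_gray, ref_descriptors, ref_points_3d, K, dist=None):
    """Return (rvec, tvec) of the object in the camera frame, or None on failure."""
    kps, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = matcher.match(ref_descriptors, desc)
    if len(matches) < 6:                       # not enough features: trigger re-extraction
        return None
    obj_pts = np.float32([ref_points_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([kps[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    return (rvec, tvec) if ok else None
```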

User classification and location tracking algorithm using deep learning (딥러닝을 이용한 사용자 구분 및 위치추적 알고리즘)

  • Park, Jung-tak;Lee, Sol;Park, Byung-Seo;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.78-79 / 2022
  • In this paper, we propose a technique for classifying and tracking the location of each user through body-proportion analysis of the normalized skeletons of multiple users obtained with RGB-D cameras. To this end, each user's 3D skeleton is extracted from the 3D point cloud and its body proportion information is stored. The stored body proportion information is then compared with the body proportion data obtained over the entire frame, which yields a user classification and location tracking algorithm for the whole image.
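
The body-proportion comparison described above might look like the following sketch. The joint names, the particular limb ratios, and the matching threshold are illustrative assumptions rather than the paper's feature set.

```python
# Sketch of user identification from body proportions of a 3D skeleton.
import numpy as np

def limb(joints, a, b):
    """Euclidean length between two named 3D joints."""
    return np.linalg.norm(np.asarray(joints[a]) - np.asarray(joints[b]))

def body_proportions(joints):
    """Normalized (scale-invariant) body-proportion feature vector."""
    torso = limb(joints, "neck", "pelvis")
    return np.array([
        limb(joints, "shoulder_l", "elbow_l") / torso,   # upper arm / torso
        limb(joints, "elbow_l", "wrist_l") / torso,      # forearm  / torso
        limb(joints, "hip_l", "knee_l") / torso,         # thigh    / torso
        limb(joints, "knee_l", "ankle_l") / torso,       # shin     / torso
    ])

def classify(joints, profiles, threshold=0.15):
    """Match a skeleton against stored per-user proportion profiles."""
    feat = body_proportions(joints)
    user, dist = min(((uid, np.linalg.norm(feat - ref)) for uid, ref in profiles.items()),
                     key=lambda t: t[1])
    return user if dist < threshold else None            # None: unknown user
```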

Unscented Kalman Snake for 3D Vessel Tracking

  • Lee, Sang-Hoon;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.17-25 / 2015
  • Purpose: In this paper, we propose a robust 3D vessel tracking algorithm that utilizes an active contour model and the unscented Kalman filter, two representative algorithms for segmentation and tracking. Materials and Methods: The proposed algorithm first accepts user input to produce an initial estimate of the vessel boundary segmentation. On each Computed Tomography Angiography (CTA) slice, the active contour is applied to segment the vessel boundary. After that, the estimation process of the unscented Kalman filter is applied to track the vessel boundary of the current slice and to estimate the inter-slice translation and shape deformation of the vessel. Finally, the active contour and the unscented Kalman filter inter-operate for vessel segmentation of the next slice. Results: The arbitrarily shaped blood vessel boundary on each slice is segmented using the active contour model, and the Kalman filter is employed to track the translation and shape deformation between CTA slices. The proposed algorithm is applied to the 3D visualization of chest CTA images using graphics hardware. Conclusion: This algorithm could give the radiologist more opportunities for quick, preliminary diagnosis before detailed diagnosis using 2D CTA slices. For the surgeon, it could be used for surgical planning, simulation, navigation, and rehearsal, and it is expected to be applied to highly valuable applications for more accurate 3D vessel tracking and rendering.
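
As a much-simplified illustration of the inter-slice tracking step, the sketch below runs a plain linear Kalman filter over the vessel centroid from slice to slice; the paper's unscented Kalman filter and the active-contour segmentation are not reproduced here, and the noise covariances are assumptions.

```python
# Simplified sketch: a linear Kalman filter predicts the vessel centroid on the
# next CTA slice from its position and inter-slice drift.
import numpy as np

F = np.block([[np.eye(2), np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])  # [x, y, dx, dy]
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q = np.eye(4) * 0.01          # process noise (assumed)
R = np.eye(2) * 1.0           # measurement noise in pixels (assumed)

def track_centroids(centroids):
    """Filter per-slice vessel centroids (x, y) and return smoothed positions."""
    x = np.hstack([centroids[0], [0.0, 0.0]])   # start with zero inter-slice drift
    P = np.eye(4)
    out = [np.asarray(centroids[0], dtype=float)]
    for z in centroids[1:]:
        x, P = F @ x, F @ P @ F.T + Q                     # predict next slice
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)               # correct with segmentation result
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return out
```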

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in applications of human-computer interaction and in understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between an actual hand observed by Kinect and a hypothesized 3D hand model. Since each 3D hand pose has 23 degrees of freedom, hand articulation tracking carries an excessive computational burden in minimizing the 3D shape discrepancy between the observed hand and the 3D hand model. For this, we first created a 3D hand model which represents the hand with 17 different parts. Secondly, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used for estimating the joint positions of the hand skeleton. Through the experiment, we were able to show that the proposed method improved hand-part recognition rates and achieved a performance of 20-30 fps. The results confirmed its practical use in classifying the hand area, and it successfully tracked and recovered the 3D hand pose in real time.
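
The Haar-like, region-based classification step could be sketched as below: depth differences between offset patches around a pixel are fed to a scikit-learn random forest that predicts one of the 17 hand-part labels. The offsets, patch size, and forest parameters are assumptions; the paper trains on synthetic renderings of its own 3D hand model.

```python
# Sketch: Haar-like depth features (differences of patch means at random offsets)
# classified with a random forest into hand-part labels 0..16.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
OFFSETS = rng.integers(-15, 16, size=(32, 2, 2))   # 32 random offset pairs (assumed)

def rect_mean(depth, u, v, du, dv, half=3):
    """Mean depth over a small square patch at an offset from pixel (u, v)."""
    h, w = depth.shape
    cu = int(np.clip(u + du, half, h - 1 - half))
    cv = int(np.clip(v + dv, half, w - 1 - half))
    return depth[cu - half:cu + half + 1, cv - half:cv + half + 1].mean()

def haar_depth_features(depth, u, v):
    """Haar-like features: differences between mean depths of two offset patches."""
    return np.array([rect_mean(depth, u, v, du1, dv1) - rect_mean(depth, u, v, du2, dv2)
                     for (du1, dv1), (du2, dv2) in OFFSETS])

def train_hand_part_classifier(X, y):
    """X: feature vectors from synthetic depth images, y: hand-part labels (0..16)."""
    forest = RandomForestClassifier(n_estimators=50, max_depth=15)
    return forest.fit(X, y)

# Prediction for one pixel: part = forest.predict([haar_depth_features(depth, u, v)])[0]
```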

Real-time Avatar Animation using Component-based Human Body Tracking (구성요소 기반 인체 추적을 이용한 실시간 아바타 애니메이션)

  • Lee Kyoung-Mi
    • Journal of Internet Computing and Services / v.7 no.1 / pp.65-74 / 2006
  • Human tracking is a requirement for advanced human-computer interfaces (HCI). This paper proposes a method that uses a component-based human model, detects body parts, estimates human postures, and animates an avatar. Each body part consists of color, connection, and location information and is matched to the corresponding component of the human model. For human tracking, the 2D information of the human posture is used for body tracking by computing similarities between frames. The depth information is decided by the relative location between components and is converted into a moving direction to build a 2-1/2D human model. While each body part is modeled by posture and direction, the corresponding component of a 3D avatar is rotated in 3D using the information transferred from the human model. We achieved a 90% tracking rate on a test video containing a variety of postures, and the rate increased as the proposed system processed more frames.
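
A minimal sketch of the frame-to-frame component matching described above, combining a color-histogram similarity with a location term, is given below. The histogram bins, weights, and distance scale are assumptions, not the paper's parameters.

```python
# Sketch of body-part matching between frames: color similarity + location distance.
import numpy as np
import cv2

def part_descriptor(frame_bgr, bbox):
    """Hue-saturation histogram and center of a body-part region (x, y, w, h)."""
    x, y, w, h = bbox
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist, np.array([x + w / 2.0, y + h / 2.0])

def similarity(desc_prev, desc_curr, w_color=0.7, w_loc=0.3, loc_scale=100.0):
    """Higher is better: color correlation plus a penalized center displacement."""
    hist_p, c_p = desc_prev
    hist_c, c_c = desc_curr
    color = cv2.compareHist(hist_p, hist_c, cv2.HISTCMP_CORREL)     # in [-1, 1]
    loc = max(0.0, 1.0 - np.linalg.norm(c_p - c_c) / loc_scale)     # in [0, 1]
    return w_color * color + w_loc * loc
```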

Viewing Angle-Improved 3D Integral Imaging Display with Eye Tracking Sensor

  • Hong, Seokmin;Shin, Donghak;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of information and communication convergence engineering / v.12 no.4 / pp.208-214 / 2014
  • In this paper, in order to solve the problems of a narrow viewing angle and the flip effect in a three-dimensional (3D) integral imaging display, we propose an improved system that uses an eye tracking method based on the Kinect sensor. In the proposed method, we introduce two types of calibration processes. The first process performs a calibration between the two cameras within the Kinect sensor to collect specific 3D information. The second process uses a space calibration for coordinate conversion between the Kinect sensor and the coordinate system of the display panel. Our calibration processes improve the estimation of the 3D position of the observer's eyes and allow elemental images to be generated at real-time speed based on the estimated position. To show the usefulness of the proposed method, we implement an integral imaging display system using the eye tracking process based on our calibration processes and carry out preliminary experiments by measuring the viewing angle and the flipping effect for the reconstructed 3D images. The experimental results reveal that the proposed method extends the viewing angle and removes the flipped images compared with the conventional system.
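
The second ("space") calibration step could be sketched as below: a transform from Kinect coordinates to the display-panel coordinate system is estimated from a few corresponding 3D points and then applied to the tracked eye position. The correspondence set is an assumed input, and this is not the paper's exact procedure.

```python
# Sketch of Kinect-to-display space calibration and eye-position conversion.
import numpy as np
import cv2

def calibrate_kinect_to_panel(pts_kinect, pts_panel):
    """pts_*: (N, 3) corresponding points measured in both coordinate systems."""
    ok, T, inliers = cv2.estimateAffine3D(np.asarray(pts_kinect, np.float32),
                                          np.asarray(pts_panel, np.float32))
    if not ok:
        raise RuntimeError("calibration failed")
    return T                                   # 3x4 matrix [R | t]

def eye_in_panel_coords(T, eye_kinect):
    """Map a tracked 3D eye position from Kinect space to panel space."""
    p = np.append(np.asarray(eye_kinect, np.float64), 1.0)   # homogeneous coordinates
    return T @ p
```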

Markov Chain of Active Tracking in a Radar System and Its Application to Quantitative Analysis on Track Formation Range

  • Ahn, Chang-Soo;Roh, Ji-Eun;Kim, Seon-Joo;Kim, Young-Sik;Lee, Juseop
    • Journal of Electrical Engineering and Technology / v.10 no.3 / pp.1275-1283 / 2015
  • This article presents Markov chains for active tracking, which assigns additional track illuminations evenly between search illuminations in a radar system, and uses them for quantitative analyses of track formation range. Compared with track-while-search (TWS) tracking, which uses scan-to-scan correlation at search illuminations to track a target, active tracking shows a maximum improvement in track formation range of about 27.6%. It is also shown that the number and detection probability of additional track beams have an impact on the track formation range. The presented analysis method can be used easily, without the need for Monte Carlo simulation, when considering radar resource management at the preliminary radar system design stage.
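
As a toy illustration of analyzing track formation with a Markov chain instead of Monte Carlo simulation, the sketch below uses an absorbing chain whose states count consecutive detections; the detection probability, the confirmation rule, and the number of illuminations are assumptions, and the paper's chain models its own active-tracking illumination policy.

```python
# Illustrative absorbing Markov chain: a track is declared formed after M
# consecutive detections; the final state is absorbing.
import numpy as np

def track_formation_probability(p_d, M=3, n_illuminations=10):
    """Cumulative probability that a track is formed within n_illuminations beams."""
    n = M + 1                                  # states 0..M-1 consecutive hits, M = formed
    P = np.zeros((n, n))
    for s in range(M):
        P[s, s + 1] = p_d                      # detection: one more consecutive hit
        P[s, 0] = 1.0 - p_d                    # miss: consecutive-hit count resets
    P[M, M] = 1.0                              # absorbing "track formed" state
    state = np.zeros(n)
    state[0] = 1.0
    probs = []
    for _ in range(n_illuminations):
        state = state @ P
        probs.append(state[M])                 # probability of being absorbed so far
    return probs

# Example: probs = track_formation_probability(p_d=0.8)
```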

Vision-Based Trajectory Tracking Control System for a Quadrotor-Type UAV in Indoor Environment (실내 환경에서의 쿼드로터형 무인 비행체를 위한 비전 기반의 궤적 추종 제어 시스템)

  • Shi, Hyoseok;Park, Hyun;Kim, Heon-Hui;Park, Kwang-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.1 / pp.47-59 / 2014
  • This paper deals with a vision-based trajectory tracking control system for a quadrotor-type UAV for entertainment purposes in an indoor environment. In contrast to outdoor flights, which emphasize the autonomy to complete special missions such as aerial photography and reconnaissance, indoor flights for entertainment require trajectory-following and hovering skills with particular demands on precision and stability of performance. This paper proposes a trajectory tracking control system consisting of a motion generation module, a pose estimation module, and a trajectory tracking module. The motion generation module generates a sequence of motions specified by 3-D locations at each sampling time. In the pose estimation module, the 3-D position and orientation of the quadrotor are estimated by recognizing a circular ring pattern installed on the vehicle. The trajectory tracking module controls the 3-D position of the quadrotor in real time using the information from the motion generation module and the pose estimation module. The proposed system is tested through several experiments covering one-point, multi-point, and trajectory tracking control.
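
A minimal sketch of the trajectory tracking module's role is given below: at each sampling time it compares the commanded 3-D location from the motion generation module with the estimate from the pose estimation module and outputs a bounded velocity command. The PID law, gains, and command interface are assumptions; the paper does not specify this controller.

```python
# Sketch: per-axis PID position control toward the commanded 3-D waypoint.
import numpy as np

class PositionPID:
    def __init__(self, kp=1.2, ki=0.05, kd=0.4, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_error = np.zeros(3)

    def step(self, target_xyz, estimated_xyz):
        """Return a 3-D velocity command driving the quadrotor toward the target."""
        error = np.asarray(target_xyz) - np.asarray(estimated_xyz)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        cmd = self.kp * error + self.ki * self.integral + self.kd * derivative
        return np.clip(cmd, -1.0, 1.0)        # saturate the velocity command

# At each sampling time: cmd = pid.step(trajectory[k], pose_estimate[:3])
```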