• Title/Summary/Keyword: 3-D model-based tracking


Estimation of People Tracking by Kalman Filter based Observations from Laser Range Sensor (레이저스케너 센서기반의 칼만필터 관측을 이용한 사람이동예측)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.3
    • /
    • pp.265-272
    • /
    • 2019
  • For tracking a varying number of people with a laser range finder, it is important to handle the appearance and disappearance of people due to various causes, including occlusions. We propose a method for tracking people with automatic initialization by integrating observations from the laser range finder. In our method, the problem of estimating the 2D positions and walking directions of multiple people is formulated based on a mixture Kalman filter. Proposal distributions of the Kalman filter are constructed using a mixture model that incorporates information from the laser range scanner. Our experimental results demonstrate the effectiveness and robustness of the method.
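As a minimal illustration of the filtering machinery this abstract builds on, here is a plain constant-velocity Kalman filter tracking one 2D position from range-sensor-style observations. The paper's mixture-model proposal distributions are not reproduced; all matrices, noise levels, and the walking trajectory below are illustrative assumptions.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-2, r=1e-1):
    """One predict/update cycle. State x = [px, py, vx, vy], measurement z = [px, py]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)     # only positions are observed
    Q = q * np.eye(4)                             # process noise (assumed)
    R = r * np.eye(2)                             # sensor noise (assumed)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the range-sensor observation z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a person walking diagonally at 1 m/s in both x and y.
x, P = np.zeros(4), np.eye(4)
for k in range(1, 51):
    t = k * 0.1
    z = np.array([t, t])          # noiseless measurements for the demo
    x, P = kalman_step(x, P, z)
```

Because the motion model matches the true trajectory, the velocity estimate converges and the position estimate tracks the walker with negligible lag.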

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with the non-parametric HT skin color model and template matching, we can detect the facial region efficiently from the video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected using the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-stage fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are moved using a Radial Basis Function (RBF). 
From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
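The RBF deformation step mentioned at the end of the abstract can be sketched generically: control-point displacements are interpolated to surrounding non-feature points with Gaussian radial basis functions. The kernel width and the points below are made-up examples, not the paper's face model.

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=1.0):
    """Propagate control-point displacements to nearby points with Gaussian RBFs."""
    def phi(a, b):
        # Pairwise Gaussian kernel between point sets a (na, d) and b (nb, d).
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    A = phi(controls, controls)               # interpolation matrix (positive definite)
    W = np.linalg.solve(A, displacements)     # RBF weights, one row per control point
    return phi(points, controls) @ W          # interpolated displacement at query points

controls = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.1], [0.05, 0.05]])
moved = rbf_deform(controls, disp, controls)  # exact at the control points themselves
```

Querying at the control points recovers the prescribed displacements exactly, while intermediate points receive a smooth blend.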

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement are challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: one is to capture images from multiple cameras or a stereo camera, and the other is to capture images from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D input image sequence captured by a single camera, by means of Belief Propagation in a graphical model, and for recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes carrying the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the motion information for each finger, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and it achieved a high recognition rate of 94.66% on 300 test samples.
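The hidden-Markov-model side of such a pipeline reduces to the standard forward algorithm for sequence likelihood. A minimal sketch follows, with made-up two-state transition and emission tables standing in for the trained finger-action model.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    pi: initial state distribution, A: transition matrix (rows = from-state),
    B: emission matrix (rows = state, cols = observation symbol).
    """
    alpha = pi * B[:, obs[0]]                 # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # propagate and absorb next observation
    return alpha.sum()

# Two hidden states (e.g. "finger moving", "finger still"), two observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p = forward(pi, A, B, [0, 0, 1])              # likelihood of the symbol sequence 0,0,1
```

Comparing such likelihoods across one HMM per action class yields the recognized action.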

Measurement of Brownian motion of nanoparticles in suspension using a network-based PTV technique

  • Banerjee A.;Choi C. K.;Kihm K. D.;Takagi T.
    • The Korean Society of Visualization: Conference Proceedings
    • /
    • 2004.12a
    • /
    • pp.91-110
    • /
    • 2004
  • A comprehensive three-dimensional nano-particle tracking technique with micro- and nano-scale spatial resolution using the Total Internal Reflection Fluorescence Microscope (TIRFM) is discussed. Evanescent waves from the total internal reflection of a 488 nm argon-ion laser are used to measure the hindered Brownian diffusion within a few hundred nanometers of a glass-water interface. 200-nm fluorescence-coated polystyrene spheres are used as tracers to achieve three-dimensional tracking within the near-wall penetration depth. A novel ratiometric imaging technique coupled with a neural network model is used to tag and track the tracer particles; this technique allows the relative depth-wise locations of the particles to be determined. This analysis is, to our knowledge, the first three-dimensional ratiometric nano-particle tracking velocimetry technique applied to measuring Brownian diffusion close to a wall.
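Once particle tracks are available, the Brownian diffusion coefficient is conventionally estimated from the mean-squared displacement (MSD = 4Dt for in-plane 2D motion). The sketch below uses synthetic free (unhindered) diffusion and does not reproduce the paper's near-wall hindrance model.

```python
import numpy as np

def msd(track, lag):
    """Mean-squared displacement of a track at a given frame lag."""
    d = track[lag:] - track[:-lag]
    return (d ** 2).sum(axis=1).mean()

def diffusion_coefficient(track, dt, lag=1):
    """Estimate D from MSD = 4*D*t for in-plane 2D Brownian motion."""
    return msd(track, lag) / (4 * lag * dt)

# Synthetic 2D random walk: per-step displacement std 0.1 in each axis,
# so the expected lag-1 MSD is 2 * 0.1**2 = 0.02 and D = 0.02 / 4 = 0.005.
rng = np.random.default_rng(0)
steps = rng.normal(0.0, 0.1, size=(20000, 2))
track = np.cumsum(steps, axis=0)
D = diffusion_coefficient(track, dt=1.0)
```

Near a wall, the measured D falls below this free-diffusion value, which is exactly the hindrance effect the paper quantifies.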


Study on Underwater Object Tracking Based on Real-Time Recurrent Regression Networks Using Multi-beam Sonar Images (실시간 순환 신경망 기반의 멀티빔 소나 이미지를 이용한 수중 물체의 추적에 관한 연구)

  • Lee, Eon-ho;Lee, Yeongjun;Choi, Jinwoo;Lee, Sejin
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.1
    • /
    • pp.8-15
    • /
    • 2020
  • This research is a case study of underwater object tracking based on real-time recurrent regression networks (Re3). Re3 is a generic object tracker, which makes it well suited to unclear underwater sonar images. Because it is a tracking model rather than a per-frame object detection model, it also avoids the computational load that detection-based approaches can incur. The model is highly intuitive as well, so it maintains excellent tracking continuity even when the tracked object temporarily becomes partially occluded or faded. The dataset consists of four types of multi-beam sonar images: (a) a dummy object floating in the testbed; (b) a dummy object settled on the bottom of the sea; (c) a tire settled on the bottom of the testbed; (d) multiple objects settled on the bottom of the testbed. For this study, experiments were conducted to obtain underwater sonar images from the sea and an underwater testbed, and we verified that objects can be tracked robustly even in noisy underwater sonar images.
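Tracking output of this kind is typically evaluated and associated by bounding-box overlap. For reference, here is the standard intersection-over-union measure; it is a generic utility, not part of Re3 itself.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```

A predicted box is usually counted as a successful track when its IoU with the ground-truth box exceeds a threshold such as 0.5.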

Visual servoing based on neuro-fuzzy model

  • Jun, Hyo-Byung;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.712-715
    • /
    • 1997
  • In image-Jacobian-based visual servoing, the inverse Jacobian generally must be calculated through complicated coordinate transformations. This requires excessive computation, and the singularity of the image Jacobian must be considered. This paper presents a visual servoing method that controls the pose of a robotic manipulator for tracking and grasping a 3-D moving object whose pose and motion parameters are unknown. Because the object is in motion, tracking and grasping must be done on-line, and the controller must have continuous learning ability. To estimate the parameters of the moving object we use a Kalman filter, and for tracking and grasping the moving object we use a fuzzy-inference-based reinforcement learning algorithm with dynamic recurrent neural networks. Computer simulation results are presented to demonstrate the performance of this visual servoing method.
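For context, the classical image-based control law whose analytic inversion the paper avoids is v = -λ J⁺ e, mapping feature error to a camera velocity through the pseudo-inverse of the image Jacobian. A minimal sketch with an arbitrary well-conditioned Jacobian (the paper replaces this step with a neuro-fuzzy controller):

```python
import numpy as np

def ibvs_velocity(J, error, lam=0.5):
    """Velocity command from image-feature error via the Jacobian pseudo-inverse."""
    return -lam * np.linalg.pinv(J) @ error

# Illustrative 2x3 image Jacobian (2 feature coordinates, 3 velocity components).
J = np.array([[1.0, 0.0,  0.2],
              [0.0, 1.0, -0.1]])
e = np.array([0.4, -0.2])        # current feature error in the image
v = ibvs_velocity(J, e)          # commanded velocity
```

For a full-row-rank Jacobian, J @ v equals -λe, so the commanded motion drives the image error exponentially toward zero; near singularities of J this inversion degrades, which is one motivation for learning-based alternatives.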


Dynamic Human Pose Tracking using Motion-based Search (모션 기반의 검색을 사용한 동적인 사람 자세 추적)

  • Jung, Do-Joon;Yoon, Jeong-Oh
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.7
    • /
    • pp.2579-2585
    • /
    • 2010
  • This paper proposes a dynamic human pose tracking method that uses a motion-based search strategy on an image sequence obtained from a monocular camera. The proposed method compares image features between 3D human model projections and real input images, repeats the process until predefined criteria are met, and then selects the 3D human pose that generates the best match. When searching for the configuration that best matches the input image, the search region is determined from the estimated 2D image motion, and the search over body configurations is then performed randomly within that region. As the 2D image motion is highly constrained, this significantly reduces the dimensionality of the feasible space. This strategy has two advantages: the motion estimation leads to an efficient allocation of the search space, and the pose estimation method adapts to various kinds of motion.
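The motion-constrained random search can be sketched abstractly: candidates are sampled only inside a window predicted from the 2D image motion and scored by a matching cost. The quadratic cost and window size below are invented stand-ins for the paper's model-to-image matching.

```python
import numpy as np

def random_search(cost, center, radius, n=500, rng=None):
    """Sample n candidates inside the motion-predicted window, keep the best-scoring one."""
    if rng is None:
        rng = np.random.default_rng(0)
    cands = center + rng.uniform(-radius, radius, size=(n, center.size))
    scores = np.array([cost(c) for c in cands])
    return cands[scores.argmin()]

true_pose = np.array([1.0, 2.0])                  # unknown configuration to recover
cost = lambda c: ((c - true_pose) ** 2).sum()     # proxy for the image-matching error
predicted = np.array([0.8, 2.2])                  # window center from estimated 2D motion
best = random_search(cost, predicted, radius=0.5)
```

Restricting the sampling window this way is what keeps the search tractable despite the high dimensionality of full body configurations.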

A Robust Object Detection and Tracking Method using RGB-D Model (RGB-D 모델을 이용한 강건한 객체 탐지 및 추적 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.18 no.4
    • /
    • pp.61-67
    • /
    • 2017
  • Recently, CCTV has been combined with areas such as big data, artificial intelligence, and image analysis to detect various abnormal behaviors and to detect and analyze objects such as people. Research on image analysis for this kind of intelligent video surveillance is progressing actively. However, CCTV images based on 2D information generally suffer from limitations such as object misrecognition due to the lack of topological information. This problem can be solved by adding depth information, obtained using two cameras, to the image. In this paper, we perform background modeling using the Mixture of Gaussians technique and detect moving objects by segmenting the foreground from the modeled background. To perform depth-based segmentation on top of the RGB-based segmentation results, stereo depth maps are generated using two cameras. Next, the RGB-segmented region is set as the domain for extracting depth information, and depth-based segmentation is performed within that domain. To detect the center point of the robustly segmented object and track its direction, the movement of the object is tracked by applying the CAMShift technique, the most basic object tracking method. The experiments demonstrate the efficiency of the proposed object detection and tracking method using the RGB-D model.
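A single-Gaussian-per-pixel running background model conveys the idea behind the Mixture of Gaussians step in simplified form: one mode per pixel instead of a mixture, with illustrative thresholds and learning rate.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel Gaussian background model (simplified stand-in for MoG)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # initial variance (assumed)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # pixel far from its Gaussian -> foreground
        # Update the model only where the pixel matched the background.
        a = np.where(fg, 0.0, self.alpha)
        self.mean = (1 - a) * self.mean + a * frame
        self.var = (1 - a) * self.var + a * d2
        return fg

bg = RunningGaussianBackground(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[3, 0] = 200.0                               # one moving pixel
mask = bg.apply(frame)
```

A true MoG keeps several Gaussians per pixel so that dynamic backgrounds (flickering lights, swaying foliage) are absorbed instead of flagged as foreground.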

Towards 3D Modeling of Buildings using Mobile Augmented Reality and Aerial Photographs (모바일 증강 현실 및 항공사진을 이용한 건물의 3차원 모델링)

  • Kim, Se-Hwan;Ventura, Jonathan;Chang, Jae-Sik;Lee, Tae-Hee;Hollerer, Tobias
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.2
    • /
    • pp.84-91
    • /
    • 2009
  • This paper presents an online partial 3D modeling methodology that uses a mobile augmented reality system and aerial photographs, and a tracking methodology that compares the 3D model with a video image. Instead of relying on models created in advance, the system generates a 3D model of a real building on the fly by combining frontal and aerial views. The user's initial pose is estimated using an aerial photograph, retrieved from a database according to the user's GPS coordinates, and an inertial sensor that measures pitch. We detect the edges of the rooftop based on graph cuts, and find the edges and a corner of the bottom by minimizing the proposed cost function. To track the user's position and orientation in real time, feature-based tracking is carried out based on salient points on the edges and sides of the building the user is viewing. We implemented camera pose estimators using both a least-squares estimator and an unscented Kalman filter (UKF), evaluated the speed and accuracy of both approaches, and demonstrated the usefulness of our computations as building blocks for an Anywhere Augmentation scenario.
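The UKF mentioned above rests on the sigma-point (unscented) transform, which propagates a mean and covariance through a nonlinear function without linearization. A generic sketch follows; the camera measurement model itself is not reproduced, and the test function here is linear only to make the result checkable.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through f using 2n+1 sigma points."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)          # columns are sigma offsets
    sigma = np.vstack([mean[None, :], mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))    # symmetric point weights
    w[0] = kappa / (n + kappa)                         # center point weight
    y = np.array([f(p) for p in sigma])                # push each point through f
    y_mean = w @ y
    d = y - y_mean
    return y_mean, (w[:, None] * d).T @ d              # weighted output covariance

m = np.array([1.0, 2.0])
P = np.array([[0.3, 0.1],
              [0.1, 0.2]])
A = np.array([[2.0, 0.0],
              [0.5, 1.0]])
ym, yP = unscented_transform(m, P, lambda x: A @ x)
```

For a linear function the transform is exact (ym = A m, yP = A P Aᵀ); for a nonlinear camera model it captures the mean and covariance to second order, which is why the paper weighs it against a plain least-squares estimator.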

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.355-362
    • /
    • 2010
  • Face tracking is to estimate the motion of a non-rigid face together with a rigid head in 3D, and plays important roles in higher levels such as face/facial expression/emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAM has been widely used to segment and track deformable objects, but there are still many difficulties. Particularly, it often tends to diverge or converge into local minima when a target object is self-occluded, partially or completely occluded. To address this problem, we utilize the scale invariant feature transform (SIFT). SIFT is an effective method for self and partial occlusion because it is able to find correspondence between feature points under partial loss. And it enables an AAM to continue to track without re-initialization in complete occlusions thanks to the good performance of global matching. We also register and use the SIFT features extracted from multi-view face images during tracking to effectively track a face across large pose changes. Our proposed algorithm is validated by comparing other algorithms under the above 3 kinds of occlusions.