• Title/Summary/Keyword: Human Tracking

Search Results: 652

A Motion Capture and Mimic System for Motion Controls (운동 제어를 위한 운동 포착 및 재현 시스템)

  • Yoon, Joongsun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.14 no.7
    • /
    • pp.59-66
    • /
    • 1997
  • A general procedure for a motion capture and mimic system has been delineated. Utilizing sensors operated in magnetic fields, complicated and optimized movements are easily digitized for analysis and reproduction. The system consists of a motion capture module, a motion visualization module, a motion plan module, a motion mimic module, and a GUI module. The design concepts of the system are modular, open, and user-friendly to ensure overall system performance. Custom-built and/or off-the-shelf modules are easily integrated into the system. With modifications, this procedure can be applied to complicated motion controls. The procedure is demonstrated on tracking a head and balancing a pole. A neural controller based on this control scheme, utilizing human motions, can easily evolve from a small amount of learning data.


Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction (휴먼-로봇 인터액션을 위한 하이브리드 스켈레톤 특징점 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.2
    • /
    • pp.178-183
    • /
    • 2008
  • Human motion analysis is studied as a new approach to human-robot interaction (HRI) because it involves key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After the silhouette of the human body is found in sequential images obtained by a CCD color camera, a skeleton model is frequently used to represent the human motion. In this paper, we propose a feature extraction method based on a hybrid skeleton for detecting human motion from the body silhouette. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
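The pipeline above starts from a binary silhouette. As a rough, illustrative sketch (not the paper's hybrid-skeleton method), the first stage of locating gross body landmarks from a silhouette mask might look like this; the mask and landmark choices here are invented for illustration:

```python
import numpy as np

def silhouette_landmarks(mask):
    """Return (centroid, topmost point) of a binary silhouette mask."""
    ys, xs = np.nonzero(mask)
    centroid = (ys.mean(), xs.mean())
    top = (ys.min(), xs[np.argmin(ys)])   # highest silhouette pixel (head candidate)
    return centroid, top

mask = np.zeros((6, 5), dtype=bool)
mask[1:5, 2] = True                       # a crude vertical "body"
(cy, cx), (ty, tx) = silhouette_landmarks(mask)
print(cy, cx, ty, tx)                     # -> 2.5 2.0 1 2
```

A skeleton model would then connect such landmarks into joints and limbs; the paper's hybrid scheme refines this far beyond the toy version shown.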

A real-time multiple vehicle tracking method for traffic congestion identification

  • Zhang, Xiaoyu;Hu, Shiqiang;Zhang, Huanlong;Hu, Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2483-2503
    • /
    • 2016
  • Traffic congestion is a severe problem in many modern cities around the world. Real-time and accurate traffic congestion identification can provide advanced traffic management systems with a reliable basis for taking measures. The most commonly used data sources for traffic congestion are loop detectors, GPS data, and video surveillance. Video-based traffic monitoring systems have gained much attention due to their advantages, such as low cost, flexibility in system redesign, and a rich information source for human understanding. In general, most existing video-based systems for monitoring road traffic rely on stationary cameras and multiple vehicle tracking. However, the most commonly used multiple vehicle tracking methods lack effective track initiation schemes. Based on the assumption that vehicle motion usually obeys a constant velocity model, a novel vehicle recognition method is proposed. The state of each recognized vehicle is sent to the GM-PHD filter as a birth target. In this way, we alleviate the insensitivity of the GM-PHD filter to newly entering vehicles. Combined with advanced vehicle detection and data association techniques, this multiple vehicle tracking method is used to identify traffic congestion. It can be implemented in real time with high accuracy and robustness. The advantages of the proposed method are validated on four real traffic datasets.
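The key idea is seeding the GM-PHD filter with birth targets under a constant-velocity assumption. A minimal sketch of that piece, with invented noise levels and helper names (not the paper's implementation), could look like:

```python
import numpy as np

def cv_predict(state, cov, dt=0.1, q=1.0):
    """One constant-velocity prediction step; state is [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)                       # simplified process noise
    return F @ state, F @ cov @ F.T + Q

def make_birth_component(detection, velocity_guess=(0.0, 0.0)):
    """Turn a newly recognized vehicle detection into a Gaussian birth
    component (weight, mean, covariance) for a GM-PHD filter."""
    mean = np.array([detection[0], detection[1], *velocity_guess])
    cov = np.diag([4.0, 4.0, 25.0, 25.0])   # loose prior on velocity
    return 0.1, mean, cov                    # small birth weight

state = np.array([0.0, 0.0, 10.0, 0.0])      # vehicle moving along x
cov = np.eye(4)
state, cov = cv_predict(state, cov, dt=0.5)
print(state[:2])                             # position advances by v * dt
```

Injecting such birth components at each detected vehicle is what makes the filter responsive to newly entering targets, as the abstract describes.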

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. These inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
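The final classification stage uses a k-NN vote over features derived from the tracked facial points. A toy sketch of that vote, with made-up 2-D stand-in features (the paper's actual descriptors are not given), could be:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(train_X - x, axis=1)        # Euclidean distances
    nearest = np.argsort(d)[:k]                    # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy features: 2-D stand-ins for geometric facial-feature descriptors.
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 0.9], [0.0, 1.0]])
y = np.array(["joy", "joy", "anger", "anger", "disgust"])
print(knn_classify(X, y, np.array([1.05, 0.95])))  # -> anger
```

In practice the feature vectors would be distances and angles between tracked ASM points, and an SVM could be swapped in for the same role.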

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner (카메라-레이저스캐너 상호보완 추적기를 이용한 이동 로봇의 사람 추종)

  • Kim, Hyoung-Rae;Cui, Xue-Nan;Lee, Jae-Hong;Lee, Seung-Jun;Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.78-86
    • /
    • 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person. Maintaining this distance consists of two parts: object tracking and person-following. Object tracking consists of particle filtering and online learning using shape features extracted from an image. A monocular camera easily fails to track a person due to its narrow field of view and its sensitivity to illumination changes, and has therefore been combined with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness in tracking and following a person, with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object passes between the robot and the person.
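The complementary idea can be caricatured as falling back on the laser estimate when camera tracking confidence drops. This sketch is an assumption-laden illustration only; the threshold, blending rule, and confidence measure are invented, not taken from the paper:

```python
import numpy as np

def fuse(camera_pos, camera_conf, laser_pos, conf_thresh=0.5):
    """Return a fused 2-D target position for the person-follower.
    camera_conf in [0, 1] is a hypothetical tracker-confidence score."""
    if camera_conf >= conf_thresh:
        # Trust the camera, blending in the laser proportionally.
        w = camera_conf
        return w * np.asarray(camera_pos) + (1 - w) * np.asarray(laser_pos)
    return np.asarray(laser_pos)       # camera failed: laser takes over

print(fuse([2.0, 0.0], 0.9, [2.2, 0.1]))   # mostly camera
print(fuse([5.0, 3.0], 0.2, [2.1, 0.0]))   # laser fallback
```

The paper's actual tracker couples the sensors through their calibrated geometric relation rather than a scalar blend, but the fallback structure is the same in spirit.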

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society
    • /
    • v.21 no.5
    • /
    • pp.41-48
    • /
    • 2021
  • This paper introduces a mobile augmented visualization technology that augments a three-dimensional virtual human body on a mannequin model using two pose (position and rotation) tracking sensors. The conventional camera tracking technology used for augmented visualization has the disadvantage of failing to calculate the camera pose when the camera shakes or moves quickly, because it relies on the camera image; using a pose tracking sensor overcomes this disadvantage. Moreover, even if the mannequin is moved or rotated, augmented visualization remains possible using the data from the pose tracking sensor attached to the mannequin, and above all, there is no computational load for camera tracking.
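The core arithmetic behind such two-sensor augmentation is composing homogeneous transforms: with one tracker on the viewing device and one on the mannequin, the mannequin's pose relative to the viewer is inv(T_device) @ T_mannequin, independent of any camera image. A minimal sketch with invented pose values:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_device, T_mannequin):
    """Mannequin pose expressed in the viewing device's frame."""
    return np.linalg.inv(T_device) @ T_mannequin

T_dev = pose(np.eye(3), [1.0, 0.0, 0.0])    # device 1 m along x
T_man = pose(np.eye(3), [3.0, 0.0, 0.0])    # mannequin 3 m along x
print(relative_pose(T_dev, T_man)[:3, 3])   # -> [2. 0. 0.]
```

Because both transforms come from the tracking sensors, this relation keeps holding when the device shakes or the mannequin is moved, which is the advantage the abstract emphasizes.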

A Study on the Tracking Algorithm for BSD Detection of Smart Vehicles (스마트 자동차의 BSD 검지를 위한 추적알고리즘에 관한 연구)

  • Kim, Wantae
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.2
    • /
    • pp.47-55
    • /
    • 2023
  • Recently, sensor technologies have been emerging to prevent traffic accidents and support safe driving in complex environments where human perception may be limited. The UWS is a technology that uses an ultrasonic sensor to detect objects at short distances. While it has the advantage of being simple to use, it has the disadvantage of a limited detection distance. The LDWS, on the other hand, uses front-image processing to detect lane departure and ensure the safety of the driving path. However, it may not be sufficient for determining the driving environment around the vehicle. To overcome these limitations, systems that utilize FMCW radar are being used. A BSD radar system using FMCW continuously emits signals while driving, and the emitted signals bounce off nearby objects and return to the radar. The key technologies involved in designing a BSD radar system are tracking algorithms for detecting the surrounding situation of the vehicle. This paper presents a tracking algorithm for designing a BSD radar system, explaining the principles of FMCW radar technology and its signal types. In addition, this paper presents the target tracking procedure and target filter needed to design an accurate tracking system, and performance is verified through simulation.
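A classic building block for such radar target filters is the alpha-beta tracker, which smooths noisy range measurements with fixed gains. The sketch below is a generic illustration of that filter class, not the paper's design; gains and time step are invented:

```python
def alpha_beta_step(x, v, z, dt=0.1, alpha=0.85, beta=0.005):
    """One predict/update cycle of an alpha-beta filter on range z."""
    x_pred = x + v * dt            # predict position from velocity
    r = z - x_pred                 # innovation (measurement residual)
    x_new = x_pred + alpha * r     # correct position estimate
    v_new = v + (beta / dt) * r    # correct velocity estimate
    return x_new, v_new

x, v = 0.0, 0.0
for z in [1.0, 2.0, 3.0, 4.0]:     # target receding at a constant rate
    x, v = alpha_beta_step(x, v, z, dt=1.0)
print(round(x, 2), round(v, 3))    # estimate approaches the true track
```

Tuning alpha and beta trades responsiveness against noise rejection; a full BSD design would run one such filter (or a Kalman filter) per associated target.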

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.113-121
    • /
    • 2020
  • Human pose estimation (HPE), which localizes human body joints, has high potential for high-level applications in the field of computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost, using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot solve the above challenges, due to the inherent characteristics of color and texture. On the other hand, depth information, which allows an HPE framework to detect human body parts in 3D coordinates, can usefully address these challenges. However, depth-based HPE requires a depth-dependent device, which has space constraints and is costly. In particular, the result of depth-based HPE is less reliable due to the requirement of pose initialization and less stable frame tracking. Therefore, this paper proposes a new HPE method that is robust in estimating self-occlusion. Many human parts can be occluded by other body parts; however, this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity (OKS) metric. By taking advantage of both RGB image-based and depth information-based HPE, our RGB-D-based method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both the RGB-based and depth-based HPE.
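The OKS metric used for evaluation scores each predicted keypoint with a Gaussian of the distance to ground truth, normalized by object scale s and a per-keypoint constant k, then averages over labeled keypoints. A minimal sketch (the k values here are illustrative, not the official COCO constants):

```python
import numpy as np

def oks(pred, gt, visible, s, k):
    """Object Keypoint Similarity.
    pred, gt: (N, 2) keypoints; visible: (N,) bool; s: object scale."""
    d2 = np.sum((pred - gt) ** 2, axis=1)              # squared distances
    e = np.exp(-d2 / (2 * (s ** 2) * (np.asarray(k) ** 2)))
    return e[visible].mean()                           # average over labeled joints

gt = np.array([[0.0, 0.0], [10.0, 0.0]])
pred = np.array([[0.0, 0.0], [10.0, 0.0]])             # perfect prediction
vis = np.array([True, True])
print(oks(pred, gt, vis, s=50.0, k=[0.05, 0.05]))      # -> 1.0
```

mAP/mAR are then computed by thresholding OKS at a range of values (0.50 to 0.95 in the COCO protocol) and averaging precision/recall, which is how the 0.903/0.938 figures above are obtained.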

Robust 3D Hand Tracking based on a Coupled Particle Filter (결합된 파티클 필터에 기반한 강인한 3차원 손 추적)

  • Ahn, Woo-Seok;Suk, Heung-Il;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.1
    • /
    • pp.80-84
    • /
    • 2010
  • Hand tracking is an essential technique for hand gesture recognition, an efficient modality in human-computer interaction (HCI). Recently, many researchers have focused on hand tracking using a 3D hand model, showing more robust tracking results than with 2D hand models. In this paper, we propose a novel 3D hand tracking method based on a coupled particle filter. It provides robust and fast tracking by estimating global hand poses and local finger motions separately and then utilizing each estimate as a prior for the other. Furthermore, to improve robustness, we apply a multi-cue method that integrates color-based area matching and edge-based distance matching. In our experiments, the proposed method showed robust tracking results for complex hand motions against a cluttered background.
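A single particle-filter cycle (predict, weight, resample), which the paper couples across global pose and finger motion, can be sketched minimally as follows. The 1-D state and noise levels are stand-ins for the actual hand-pose parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, observation, motion_std=0.5, obs_std=1.0):
    """One predict/weight/resample cycle of a basic particle filter."""
    # Predict: diffuse particles with the motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for obs in [2.0, 2.1, 2.0, 1.9]:               # target near 2.0
    particles, weights = pf_step(particles, weights, obs)
print(round(particles.mean(), 1))               # estimate close to 2.0
```

The coupling in the paper runs two such filters (global pose, finger articulation) and feeds each filter's estimate into the other's prior, which keeps the joint state space tractable.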

Gaussian mixture model for automated tracking of modal parameters of long-span bridge

  • Mao, Jian-Xiao;Wang, Hao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.24 no.2
    • /
    • pp.243-256
    • /
    • 2019
  • Determining the most meaningful structural modes and gaining insight into how these modes evolve are important issues for long-term structural health monitoring of long-span bridges. To address this issue, modal parameters identified throughout the life of the bridge need to be compared and linked with each other, a process known as mode tracking. The modal frequencies of a long-span bridge are typically closely spaced and sensitive to the environment (e.g., temperature, wind, traffic), which makes automated tracking of modal parameters difficult, often requiring human intervention. Machine learning methods are well suited to uncovering complex underlying relationships between processes and thus have the potential to realize accurate and automated modal tracking. In this study, the Gaussian mixture model (GMM), a popular unsupervised machine learning method, is employed to automatically determine and update baseline modal properties from the identified unlabeled modal parameters. On this foundation, a new method is proposed for automated mode tracking of long-span bridges. First, a numerical example of a three-degree-of-freedom system is employed to validate the feasibility of using a GMM to automatically determine the baseline modal properties. Subsequently, field monitoring data from a long-span bridge are utilized to illustrate the practical use of the GMM for automated determination of the baseline list. Finally, continuously monitored bridge acceleration data during strong typhoon events are employed to validate the reliability of the proposed method in tracking changing modal parameters. The results show that the proposed method can automatically track modal parameters in disastrous scenarios and provides valuable references for condition assessment of the bridge structure.
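Once GMM baseline components (mean frequency, spread, weight) are established, the tracking step amounts to assigning each newly identified modal frequency to the component with the highest responsibility. A minimal sketch of that assignment, with invented baseline values for three closely spaced bridge modes:

```python
import numpy as np

def assign_mode(freq, means, stds, weights):
    """Index of the Gaussian component most responsible for freq."""
    means, stds, weights = map(np.asarray, (means, stds, weights))
    # Unnormalized Gaussian responsibilities (the shared 1/sqrt(2*pi)
    # constant cancels in the argmax).
    resp = weights * np.exp(-0.5 * ((freq - means) / stds) ** 2) / stds
    return int(np.argmax(resp))

# Hypothetical baseline list: three closely spaced modes (Hz).
means = [0.185, 0.226, 0.241]
stds = [0.004, 0.005, 0.005]
weights = [1 / 3, 1 / 3, 1 / 3]

print(assign_mode(0.228, means, stds, weights))   # tracks to mode index 1
```

In the full method the components are fitted and updated from the stream of unlabeled identified parameters (frequency, damping, mode shape), so the baseline list itself adapts as the bridge's condition evolves.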