• Title/Abstract/Keyword: Human Tracking


A Motion Capture and Mimic System for Motion Controls

  • 윤중선
    • 한국정밀공학회지
    • /
    • Vol. 14, No. 7
    • /
    • pp.59-66
    • /
    • 1997
  • A general procedure for a motion capture and mimic system is delineated. Using sensors operated in magnetic fields, complicated and optimized movements are easily digitized for analysis and reproduction. The system consists of a motion capture module, a motion visualization module, a motion plan module, a motion mimic module, and a GUI module. The design concepts of the system are modular, open, and user-friendly to ensure overall system performance. Custom-built and/or off-the-shelf modules are easily integrated into the system. With modifications, this procedure can be applied to complicated motion controls. The procedure is demonstrated on tracking a head and balancing a pole. A neural controller based on this control scheme, utilizing human motions, can easily evolve from a small amount of learning data.


Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction

  • 주영훈;소제윤
    • 제어로봇시스템학회논문지
    • /
    • Vol. 14, No. 2
    • /
    • pp.178-183
    • /
    • 2008
  • Human motion analysis is studied as a new method for human-robot interaction (HRI) because it involves key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After finding the silhouette of the human body in sequential images obtained by a CCD color camera, a skeleton model is frequently used to represent the human motion. In this paper, using the silhouette of the human body, we propose a hybrid-skeleton-based feature extraction method for detecting human motion. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
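The abstract does not spell out the hybrid skeleton computation, but a common silhouette-based baseline for this family of methods is the "star skeleton": boundary points whose distance from the body centroid is a local maximum (typically head, hands, and feet). A minimal sketch under that assumption (`star_skeleton` and `n_bins` are illustrative names, not the paper's API):

```python
import numpy as np

def star_skeleton(mask, n_bins=72):
    """Extract 'star skeleton' feature points from a binary silhouette:
    boundary points whose distance from the centroid is a local maximum
    of the (circular) centroid-to-boundary distance profile."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # boundary = silhouette pixels with at least one background 4-neighbour
    pad = np.pad(mask, 1)
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    by, bx = np.nonzero(mask & ~interior)
    ang = np.arctan2(by - cy, bx - cx)
    dist = np.hypot(by - cy, bx - cx)
    # farthest boundary point in each angular bin -> distance profile d(theta)
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    prof = np.zeros(n_bins)
    idx = np.full(n_bins, -1)
    for i, (b, d) in enumerate(zip(bins, dist)):
        if d > prof[b]:
            prof[b], idx[b] = d, i
    # local maxima of the circular profile are the extremities
    left, right = np.roll(prof, 1), np.roll(prof, -1)
    peaks = np.nonzero((prof > left) & (prof >= right) & (idx >= 0))[0]
    return [(by[idx[p]], bx[idx[p]]) for p in peaks], (cy, cx)
```

On a plus-shaped test silhouette this recovers one or two extremities per arm tip, which downstream pose logic would then merge and label.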

A real-time multiple vehicle tracking method for traffic congestion identification

  • Zhang, Xiaoyu;Hu, Shiqiang;Zhang, Huanlong;Hu, Xing
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 10, No. 6
    • /
    • pp.2483-2503
    • /
    • 2016
  • Traffic congestion is a severe problem in many modern cities around the world. Real-time, accurate congestion identification gives advanced traffic management systems a reliable basis for taking countermeasures. The most commonly used data sources for congestion detection are loop detectors, GPS data, and video surveillance. Video-based traffic monitoring systems have gained much attention due to their advantages, such as low cost, flexibility in redesigning the system, and a rich information source for human understanding. Most existing video-based systems for monitoring road traffic rely on stationary cameras and multiple vehicle tracking, but the most commonly used multiple vehicle tracking methods lack effective track initiation schemes. Based on the assumption that vehicle motion usually obeys a constant-velocity model, a novel vehicle recognition method is proposed: the state of each recognized vehicle is sent to the GM-PHD filter as a birth target, alleviating the filter's insensitivity to newly entering vehicles. Combined with advanced vehicle detection and data association techniques, this multiple vehicle tracking method is used to identify traffic congestion in real time with high accuracy and robustness. The advantages of the proposed method are validated on four real traffic datasets.
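The track-initiation idea above, pairing detections whose implied motion is consistent with a constant-velocity (CV) model and feeding them to the GM-PHD filter as birth targets, can be sketched as follows. This is a toy illustration, not the paper's implementation; `cv_birth_candidates` and the `v_max` gate are assumptions of the sketch:

```python
import numpy as np

def cv_birth_candidates(dets_prev, dets_curr, dt=1.0, v_max=30.0):
    """Toy track-initiation scheme: pair detections from two consecutive
    frames whose implied velocity is physically plausible under a
    constant-velocity (CV) model, and return birth states [x, y, vx, vy]
    that could seed the birth intensity of a GM-PHD filter."""
    births = []
    for p in dets_prev:
        for c in dets_curr:
            v = (np.asarray(c, float) - np.asarray(p, float)) / dt
            if np.linalg.norm(v) <= v_max:        # plausible CV motion
                births.append(np.array([c[0], c[1], v[0], v[1]]))
    return births

def cv_predict(state, dt=1.0):
    """One CV-model prediction step x' = F x used inside the tracker."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ state
```

In a full GM-PHD filter these birth states would become new Gaussian components with small initial weight, rather than separate hard-initialized tracks.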

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition

  • 안병태;김응희;손진훈;권인소
    • 로봇학회논문지
    • /
    • Vol. 8, No. 4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
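The LK optical-flow update that propagates a feature point between frames can be sketched with plain NumPy. This is a single-window, single-level version for illustration; practical trackers use the pyramidal variant (e.g. OpenCV's `calcOpticalFlowPyrLK`):

```python
import numpy as np

def lk_flow(img0, img1, y, x, win=7):
    """One Lucas-Kanade step: least-squares flow (dx, dy) of the window
    centred at (y, x) between two grayscale frames, from the brightness
    constancy constraint Ix*dx + Iy*dy = -It."""
    h = win // 2
    p0 = img0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = img1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p0)                 # spatial gradients
    It = p1 - p0                             # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    # least-squares solve; well-posed when the window is textured
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)
```

For a smooth pattern shifted one pixel to the right, the recovered flow is close to (1, 0); in the framework above, each ASM landmark would be re-seeded by the ASM fit whenever the flow estimate drifts or the point is occluded.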

Person-following of a Mobile Robot using a Complementary Tracker with a Camera-laser Scanner

  • 김형래;최학남;이재홍;이승준;김학일
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 1
    • /
    • pp.78-86
    • /
    • 2014
  • This paper proposes a method of tracking an object for a person-following mobile robot by combining a monocular camera and a laser scanner, where each sensor compensates for the weaknesses of the other. For human-robot interaction, a mobile robot needs to maintain a distance between itself and a moving person; doing so involves two parts, object tracking and person-following. Object tracking consists of particle filtering and online learning using shape features extracted from an image. A monocular camera easily fails to track a person due to its narrow field of view and sensitivity to illumination changes, and is therefore used together with a laser scanner. After constructing the geometric relation between the differently oriented sensors, the proposed method demonstrates its robustness, tracking and following a person with a success rate of 94.7% in indoor environments with varying lighting conditions, even when a moving object passes between the robot and the person.
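The complementary-fusion idea, in which each sensor contributes a likelihood factor to the same particle weights so that one sensor carries the update when the other fails, can be sketched in 1D. This is a toy model, not the paper's tracker; the Gaussian likelihoods and noise levels are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, cam_meas, laser_meas,
            sigma_motion=0.5, sigma_cam=2.0, sigma_laser=0.3):
    """One particle-filter step fusing two sensors. Each available sensor
    contributes a Gaussian likelihood; if one measurement is missing
    (None), its factor drops out and the other sensor alone carries the
    update -- the 'complementary' behaviour described above."""
    particles = particles + rng.normal(0, sigma_motion, size=particles.shape)
    w = np.copy(weights)
    if cam_meas is not None:
        w = w * np.exp(-0.5 * ((particles - cam_meas) / sigma_cam) ** 2)
    if laser_meas is not None:
        w = w * np.exp(-0.5 * ((particles - laser_meas) / sigma_laser) ** 2)
    w = w / w.sum()
    # systematic resampling to avoid weight degeneracy
    pos = (np.arange(len(w)) + rng.random()) / len(w)
    idx = np.searchsorted(np.cumsum(w), pos)
    return particles[idx], np.full(len(w), 1.0 / len(w))
```

With the camera measurement set to `None` (e.g. tracking lost under an illumination change), the particle cloud still converges on the laser measurement alone.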

Mobile Augmented Visualization Technology Using Vive Tracker

  • 이동춘;김항기;이기석
    • 한국게임학회 논문지
    • /
    • Vol. 21, No. 5
    • /
    • pp.41-48
    • /
    • 2021
  • This paper introduces a mobile augmented visualization technology that overlays a three-dimensional virtual human body on a mannequin model using two pose (position and orientation) tracking sensors. Conventional camera-tracking techniques for augmented visualization rely on camera images, so they fail to compute the camera pose under camera shake or fast motion; using a Vive Tracker overcomes this drawback. In addition, even when the mannequin being augmented is moved or rotated, augmented visualization remains possible through the pose tracking sensor attached to the mannequin, and above all there is no computational load for camera tracking.
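The core operation behind such tracker-based augmentation is pose composition: the virtual model's pose in the viewing device's frame follows from the two tracked poses alone, independent of camera imagery. A minimal sketch with 4x4 homogeneous transforms (the frame names are illustrative):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def model_in_view(T_world_device, T_world_mannequin, T_mannequin_model):
    """Pose of the virtual anatomy model in the viewing device's frame:
    T_device_model = inv(T_world_device) @ T_world_mannequin @ T_mannequin_model.
    Because both poses come from trackers, moving or rotating the mannequin
    (or shaking the device) changes the inputs but not this relation."""
    return np.linalg.inv(T_world_device) @ T_world_mannequin @ T_mannequin_model
```

The invariance is easy to check: applying the same rigid world motion to both tracked poses leaves the device-relative model pose unchanged, which is why this approach does not suffer from the camera-shake failure mode described above.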

A Study on the Tracking Algorithm for BSD Detection of Smart Vehicles

  • 김완태
    • 디지털산업정보학회논문지
    • /
    • Vol. 19, No. 2
    • /
    • pp.47-55
    • /
    • 2023
  • Recently, sensor technologies have emerged to prevent traffic accidents and support safe driving in complex environments where human perception may be limited. UWS technology uses an ultrasonic sensor to detect objects at short range; while simple to use, it has a limited detection distance. LDWS, on the other hand, uses front image processing to detect lane departure and ensure the safety of the driving path, but it may not be sufficient for assessing the driving environment around the vehicle. To overcome these limitations, systems based on FMCW radar are being used. A BSD radar system using FMCW continuously emits signals while driving; the emitted signals bounce off nearby objects and return to the radar. The key technologies in designing a BSD radar system are the tracking algorithms for detecting the situation around the vehicle. This paper presents a tracking algorithm for designing a BSD radar system, explaining the principles of FMCW radar technology and its signal types. It also presents the target tracking procedure and target filter needed to design an accurate tracking system, and verifies performance through simulation.
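Two building blocks mentioned above can be made concrete: FMCW ranging, where the beat frequency between transmitted and received chirps maps linearly to range via R = c·f_b·T_c/(2B), and a minimal target filter, here an alpha-beta filter used as a simple stand-in for the paper's tracking filter (the parameter values are illustrative):

```python
C = 3.0e8  # speed of light, m/s

def beat_to_range(f_beat, bandwidth, chirp_time):
    """FMCW ranging: beat frequency is proportional to target range,
    R = c * f_b * T_c / (2 * B)."""
    return C * f_beat * chirp_time / (2.0 * bandwidth)

class AlphaBetaTracker:
    """Minimal alpha-beta target filter: predict range with a constant
    closing velocity, then correct range and velocity with fixed gains."""
    def __init__(self, r0, alpha=0.85, beta=0.2, dt=0.05):
        self.r, self.v = r0, 0.0
        self.a, self.b, self.dt = alpha, beta, dt

    def update(self, r_meas):
        pred = self.r + self.v * self.dt        # constant-velocity prediction
        resid = r_meas - pred                   # innovation
        self.r = pred + self.a * resid
        self.v += self.b / self.dt * resid
        return self.r
```

For a 150 MHz chirp swept over 20 microseconds, a 1.5 MHz beat corresponds to a 30 m target; fed noiseless measurements of a vehicle closing at 10 m/s, the filter's velocity estimate converges to the true closing rate within a few dozen updates.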

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • 림빈보니카;성낙준;마준;최유주;홍민
    • 인터넷정보학회논문지
    • /
    • Vol. 21, No. 3
    • /
    • pp.113-121
    • /
    • 2020
  • Human pose estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges for real-time HPE are occlusion, illumination change, and diversity of pose appearance. Feeding a single RGB image into an HPE framework reduces computation cost, since it requires only a depth-independent device such as a common camera, webcam, or phone camera; however, HPE based on a single RGB image cannot overcome the above challenges due to the inherent limitations of color and texture cues. Depth information, which lets an HPE framework localize body parts in 3D coordinates, can help address these challenges, but depth-based HPE requires a depth device that has space constraints and is costly, and its results are less reliable because of the need for pose initialization and less stable frame tracking. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the proposed method using the COCO Object Keypoint Similarity (OKS) metric. By taking advantage of both the RGB image-based and depth information-based HPE methods, our RGB-D-based HPE achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both RGB-based and depth-based HPE.
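The COCO Object Keypoint Similarity used for evaluation scores each predicted keypoint with a Gaussian of its distance to ground truth, normalized by object scale and a per-keypoint constant, and averages over labelled keypoints. A minimal sketch of the formula:

```python
import numpy as np

def oks(pred, gt, visible, area, k):
    """COCO Object Keypoint Similarity: per-keypoint score
    exp(-d_i^2 / (2 * s^2 * k_i^2)) averaged over labelled keypoints,
    where d_i is the prediction-to-truth distance, s^2 the object
    segment area, and k_i a per-keypoint falloff constant."""
    d2 = np.sum((np.asarray(pred, float) - np.asarray(gt, float)) ** 2, axis=1)
    e = np.exp(-d2 / (2.0 * area * np.asarray(k, float) ** 2))
    v = np.asarray(visible) > 0
    return e[v].mean()
```

mAP/mAR figures such as those above are then obtained by thresholding OKS at a sweep of values (0.50 to 0.95 in COCO), exactly as IoU thresholds are used for boxes.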

Robust 3D Hand Tracking based on a Coupled Particle Filter

  • 안우석;석흥일;이성환
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 37, No. 1
    • /
    • pp.80-84
    • /
    • 2010
  • Hand tracking is a core enabling technology for hand-gesture recognition, which supports efficient human-machine communication. Recent hand tracking research has focused on approaches that use a 3D hand model, which show more robust tracking performance than earlier methods based on 2D hand models. In this paper, we propose a new 3D hand tracking method based on a coupled particle filter. It estimates the global hand pose and the local finger motion separately, and uses each estimation result as prior information for the other, enabling faster and more robust tracking than existing methods. In addition, to improve tracking performance, we apply a multiple-cue integration method that considers color and edges together. Experimental results show that the proposed method tracks robustly even with complex backgrounds and motions.
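The coupling idea, estimating global pose and local articulation with separate particle sets that use each other's current estimates as priors, can be sketched with a toy 1D "hand". The observation model (z_palm ≈ g, z_tip ≈ g + l) is an assumption of this sketch, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def resample(p, w):
    """Systematic resampling of particles p by weights w."""
    idx = np.searchsorted(np.cumsum(w / w.sum()),
                          (np.arange(len(w)) + rng.random()) / len(w))
    return p[idx]

def coupled_pf_step(pg, pl, z_palm, z_tip, s_motion=0.2, s_obs=0.3):
    """One step of a toy coupled particle filter: global pose g (palm)
    and local articulation l (finger offset) are held by separate
    particle sets, and the local filter conditions its likelihood on the
    global filter's current estimate -- the decoupling idea above."""
    pg = pg + rng.normal(0, s_motion, pg.size)
    pl = pl + rng.normal(0, s_motion, pl.size)
    g_hat = pg.mean()                    # prior handed to the local filter
    wg = np.exp(-0.5 * ((pg - z_palm) / s_obs) ** 2)
    wl = np.exp(-0.5 * ((pl - (z_tip - g_hat)) / s_obs) ** 2)
    return resample(pg, wg), resample(pl, wl)
```

Splitting the state this way keeps each particle set low-dimensional, which is the source of the speed-up over a single particle filter on the joint pose-plus-articulation state.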

Gaussian mixture model for automated tracking of modal parameters of long-span bridge

  • Mao, Jian-Xiao;Wang, Hao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • Vol. 24, No. 2
    • /
    • pp.243-256
    • /
    • 2019
  • Determination of the most meaningful structural modes, and insight into how these modes evolve, are important issues for long-term structural health monitoring of long-span bridges. To address this issue, modal parameters identified throughout the life of the bridge need to be compared and linked with each other, a process known as mode tracking. The modal frequencies of a long-span bridge are typically closely spaced and sensitive to the environment (e.g., temperature, wind, traffic), which makes automated tracking of modal parameters difficult and often requires human intervention. Machine learning methods are well suited to uncovering complex underlying relationships and thus have the potential to enable accurate, automated modal tracking. In this study, the Gaussian mixture model (GMM), a popular unsupervised machine learning method, is employed to automatically determine and update baseline modal properties from the identified, unlabeled modal parameters. On this foundation, a new method is proposed for automated mode tracking of long-span bridges. First, a numerical example of a three-degree-of-freedom system is used to validate the feasibility of using a GMM to automatically determine baseline modal properties. Subsequently, field monitoring data from a long-span bridge illustrate the practical use of the GMM for automated determination of the baseline list. Finally, continuously monitored bridge acceleration data recorded during strong typhoon events validate the reliability of the proposed method in tracking changing modal parameters. Results show that the proposed method can automatically track modal parameters in disastrous scenarios and provide valuable references for condition assessment of the bridge structure.
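The core step, fitting a Gaussian mixture to unlabeled modal parameters so that the mixture means become the baseline modes, can be sketched with a minimal 1D EM loop. In practice a library implementation (e.g. scikit-learn's `GaussianMixture`) would be used, and tracking would cluster frequency-damping-mode-shape features jointly; the frequencies and component count here are illustrative:

```python
import numpy as np

def gmm_em_1d(x, mu, sigma=0.05, pi=None, iters=50):
    """Minimal EM for a 1D Gaussian mixture with K components: cluster
    unlabeled modal-frequency samples; the learned means serve as the
    baseline frequencies of the tracked modes."""
    x = np.asarray(x, float)
    mu = np.asarray(mu, float)
    K = mu.size
    pi = np.full(K, 1.0 / K) if pi is None else np.asarray(pi, float)
    sig = np.full(K, float(sigma))
    for _ in range(iters):
        # E-step: responsibility of component k for each sample
        p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        nk = r.sum(axis=0)
        pi = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mu, sig, pi
```

A newly identified frequency is then assigned to the baseline mode with the highest responsibility, and the baseline is updated as the mixture is refit, which is what removes the need for manual intervention.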