• Title/Summary/Keyword: Pose Tracking

Search Results: 157 (processing time: 0.031 seconds)

Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.345-354
    • /
    • 2010
  • Visual tracking techniques for augmented reality are classified as either marker tracking approaches or natural feature tracking approaches. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. Natural feature tracking methods, on the other hand, require many computationally expensive procedures: most previous methods include heavy feature extraction and pattern matching steps for each input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices. The required computation time is also proportional to the number of patterns to be matched. To speed up the natural feature tracking process, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and is suitable for mobile augmented reality applications. Moreover, during tracking, we maintain the total number of feature points by inserting new feature points in proportion to the number of vanished ones. Experimental results show that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.
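The feature-replenishment step described in this abstract (inserting new feature points in proportion to the number of vanished ones) can be sketched as follows. This is a minimal sketch, assuming the optical-flow tracker has already produced a per-point success flag; `detect_new` stands in for a hypothetical corner detector (e.g. a Shi-Tomasi-style "good features to track" routine) and is not from the paper:

```python
import numpy as np

def replenish_features(points, status, detect_new, target_count):
    """Keep the tracked feature set at a constant size.

    points: (N, 2) array of feature positions after optical-flow tracking
    status: (N,) boolean array, False where tracking failed (vanished points)
    detect_new: callable(k) -> (k, 2) array of freshly detected corners
    target_count: total number of features to maintain
    """
    survivors = points[status]                  # drop vanished features
    n_missing = target_count - len(survivors)   # insert as many as were lost
    if n_missing > 0:
        survivors = np.vstack([survivors, detect_new(n_missing)])
    return survivors
```

In a real pipeline the status flags would come from a pyramidal Lucas-Kanade tracker, which reports per-point tracking success each frame.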

Detection of Smoking Behavior in Images Using Deep Learning Technology (딥러닝 기술을 이용한 영상에서 흡연행위 검출)

  • Dong Jun Kim;Yu Jin Choi;Kyung Min Park;Ji Hyun Park;Jae-Moon Lee;Kitae Hwang;In Hwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.4
    • /
    • pp.107-113
    • /
    • 2023
  • This paper proposes a method for detecting smoking behavior in images using artificial intelligence technology. Since smoking is an action rather than a static phenomenon, object detection technology was combined with pose estimation technology, which can detect actions. A smoker detection learning model was developed to detect smokers in images, and the characteristics of smoking behavior were applied to pose estimation to detect smoking behavior in images. YOLOv8 was used for object detection, and OpenPose was used for pose estimation. In addition, when both smokers and non-smokers appear in the image, a method of separating the individual people was applied. The proposed method was implemented in Python on a Google Colab NVIDIA Tesla T4 GPU, and test results showed that smoking behavior was detected perfectly in the given video.
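One plausible pose-based cue for a smoking-like gesture is a wrist held close to the face, scaled by body size so the rule is distance-invariant. The sketch below illustrates that idea only; the COCO-style keypoint indices and the `ratio` threshold are assumptions, not the paper's actual decision rule:

```python
import numpy as np

# COCO-style keypoint indices (an assumption; adjust to the pose model used)
NOSE, R_SHOULDER, R_WRIST, L_SHOULDER, L_WRIST = 0, 2, 4, 5, 7

def hand_near_mouth(kp, ratio=0.6):
    """Flag a possible smoking gesture for one person's 2D keypoints.

    kp: (K, 2) array of joint positions. The wrist-to-nose distance is
    compared against the shoulder width as a body-size reference.
    """
    scale = np.linalg.norm(kp[R_SHOULDER] - kp[L_SHOULDER])
    d = min(np.linalg.norm(kp[R_WRIST] - kp[NOSE]),
            np.linalg.norm(kp[L_WRIST] - kp[NOSE]))
    return d < ratio * scale
```

A per-frame flag like this would typically be smoothed over time (e.g. requiring the gesture across several consecutive frames) before reporting a smoking event.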

A study on accident prevention AI system based on estimation of bus passengers' intentions (시내버스 승하차 의도분석 기반 사고방지 AI 시스템 연구)

  • Seonghwan Park;Sunoh Byun;Junghoon Park
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.57-66
    • /
    • 2023
  • In this paper, we present a study on an AI-based system that utilizes the CCTV system within city buses to predict the intentions of boarding and alighting passengers, with the aim of preventing accidents. The proposed system employs the YOLOv7 Pose model to detect passengers and an LSTM model to predict the intentions of tracked passengers. The system can be installed on the bus's CCTV terminals, allowing real-time visual confirmation of passengers' intentions throughout the journey, and it alerts the driver to mitigate potential accidents while passengers board and alight. Test results show accuracy rates of 0.81 for analyzing boarding intentions and 0.79 for predicting alighting intentions. To ensure real-time performance, we verified that analysis at a minimum of 5 frames per second is achievable in a GPU environment. This algorithm enhances the safety of passengers boarding and alighting during bus operation. In the future, with improved hardware specifications and abundant data collection, the system can be extended to various safety-related metrics, and it is anticipated to play a pivotal role in ensuring safety when autonomous driving becomes commercialized. Its applicability could also extend to other modes of public transportation, such as subways and other forms of mass transit, contributing to the overall safety of public transportation systems.
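The LSTM-based intention predictor can be sketched as one LSTM layer run over a sequence of flattened pose keypoints, with a logistic read-out of the final hidden state. This is a generic, untrained NumPy sketch; the paper does not specify its architecture here, and every dimension and weight below is a placeholder:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_intention_score(seq, W, U, b, w_out, b_out):
    """Run one LSTM layer over a pose-keypoint sequence and return an
    intention probability (e.g. "about to alight") from the last state.

    seq: (T, D) sequence of flattened 2D keypoints for one tracked passenger
    W: (4H, D), U: (4H, H), b: (4H,) gate parameters (i, f, g, o stacked)
    w_out: (H,), b_out: scalar logistic read-out parameters
    """
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)            # input/forget/cell/output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)             # cell state update
        h = o * np.tanh(c)                     # hidden state
    return sigmoid(w_out @ h + b_out)          # probability in (0, 1)
```

In practice such a model would be trained end-to-end in a deep learning framework; the point of the sketch is only the data flow: tracked pose sequence in, per-passenger intention probability out.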

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters describing the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
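The RBF step above, deforming non-feature points around the control points, can be sketched as standard RBF scattered-data interpolation: solve for weights that reproduce the control-point displacements, then evaluate the interpolant at the remaining vertices. The Gaussian kernel and `sigma` below are assumptions; the paper does not state which basis function it uses:

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=1.0):
    """Propagate control-point displacements to nearby mesh vertices.

    controls: (C, 3) control vertices, displacements: (C, 3) their offsets,
    points: (P, 3) non-feature vertices to deform.
    Solves Phi w = d for RBF weights, then evaluates the field at `points`.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))   # Gaussian RBF (an assumption)

    w = np.linalg.solve(kernel(controls, controls), displacements)
    return points + kernel(points, controls) @ w
```

By construction the deformation field reproduces the given displacements exactly at the control points, and falls off smoothly for vertices farther away, which is why RBFs are a common choice for filling in non-feature vertices of a face mesh.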

Robust Face and Facial Feature Tracking in Image Sequences (연속 영상에서 강인한 얼굴 및 얼굴 특징 추적)

  • Jang, Kyung-Shik;Lee, Chan-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.9
    • /
    • pp.1972-1978
    • /
    • 2010
  • AAM (Active Appearance Model) is one of the most effective ways to detect deformable 2D objects and is a mathematical optimization method. Its cost function is convex because it is a least-squares function, but the search space is not convex, so a local minimum is not guaranteed to be the optimal solution. That is, unless the initial value starts near the global minimum, the search converges to a local minimum, which makes it difficult to detect the face contour correctly. In this study, an AAM-based face tracking algorithm is proposed that is robust to various lighting conditions and backgrounds. Eye detection is performed using SIFT and a genetic algorithm, and the eye information is used as the AAM's initial matching information. Experiments verify that the proposed AAM-based face tracking method is more robust with respect to face pose and background than the conventional basic AAM-based face tracking method.

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.756-760
    • /
    • 2010
  • This paper proposes a novel facial behavior recognition system for driver fatigue detection. Facial behavior appears in various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to clearly discriminate a particular behavior from the obtained features alone, because human behavior is complicated and the face provides only vague, limited information about it. The proposed system first performs facial feature detection, including eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and represents the obtained features as Action Units (AUs) of the Facial Action Coding System (FACS). On the basis of the obtained AUs, it infers the probability of each driver state through a Bayesian network.
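The Bayesian inference over AUs can be sketched, under a naive-Bayes independence assumption that the paper's full network need not make, as posterior ∝ prior × product of per-AU likelihoods. The states, AU probabilities, and priors below are illustrative placeholders:

```python
import numpy as np

def state_posterior(au_evidence, p_au_given_state, prior):
    """Naive-Bayes reading of a simple two-layer Bayesian network:
    P(state | AUs) ∝ P(state) * prod_k P(AU_k | state).

    au_evidence: (K,) binary observed AU activations (1 = AU present)
    p_au_given_state: (S, K) array of P(AU_k = 1 | state s)
    prior: (S,) prior over driver states (e.g. alert, fatigued)
    """
    lik = np.where(au_evidence, p_au_given_state, 1 - p_au_given_state)
    post = prior * lik.prod(axis=1)
    return post / post.sum()                   # normalize to a distribution
```

A full Bayesian network would additionally model dependencies among AUs and intermediate nodes; the sketch only shows how AU evidence shifts probability mass between driver states.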

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha;Kim, Ji-Hyun;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.173-175
    • /
    • 2011
  • In this paper, we present a user interface that can show 3D objects at various angles using the tracked 3D head position and orientation. In the implemented interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are moved toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount as the user's head. Experimental results from a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization.
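The pitch/yaw coupling described above amounts to building a rotation matrix from the tracked head angles and applying it to the displayed objects. A minimal sketch, with axis conventions (X = pitch, Y = yaw, radians) chosen to match the abstract but otherwise an assumption:

```python
import numpy as np

def view_rotation(pitch, yaw):
    """Rotation applied to displayed objects so they turn with the head."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about X
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about Y
    return Ry @ Rx   # composed rotation applied to object vertices
```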


Estimating Interest Levels based on Visitor Behavior Recognition Towards a Guide Robot (안내 로봇을 향한 관람객의 행위 인식 기반 관심도 추정)

  • Ye Jun Lee;Juhyun Kim;Eui-Jung Jung;Min-Gyu Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.463-471
    • /
    • 2023
  • This paper proposes a method to estimate the level of interest shown by visitors towards a specific target, a guide robot, in spaces such as exhibition halls and museums where large numbers of visitors can show interest in a specific subject. To accomplish this, we apply deep-learning-based behavior recognition and object tracking techniques to multiple visitors and, based on these, derive a behavior analysis and interest level for each visitor. To support this research, a personalized dataset tailored to the characteristics of exhibition hall and museum environments was created, and a deep learning model was built on it. Four scenarios that visitors can exhibit were classified, and predicted and experimental values were obtained for them, validating the interest estimation method proposed in this paper.

Three Dimensional Tracking of Road Signs based on Stereo Vision Technique (스테레오 비전 기술을 이용한 도로 표지판의 3차원 추적)

  • Choi, Chang-Won;Choi, Sung-In;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.12
    • /
    • pp.1259-1266
    • /
    • 2014
  • Road signs provide drivers with important safety information about road and traffic conditions. They include not only common traffic signs but also warnings about unexpected obstacles and road construction. Accurate detection and identification of road signs is therefore one of the most important research topics related to safe driving. In this paper, we propose a 3-D vision technique to automatically detect and track road signs in a video sequence acquired from a stereo vision camera mounted on a vehicle. First, color information is used to detect initial sign candidates. Second, an SVM (Support Vector Machine) is employed to determine true signs among the candidates. Once a road sign is detected in a video frame, it is continuously tracked in subsequent frames until it disappears. The 2-D position of a detected sign in the next frame is predicted from the 3-D motion of the vehicle, which is in turn acquired from the 3-D pose information of the detected sign. Finally, the predicted 2-D position is corrected by template matching of the scaled template of the detected sign within a window around the predicted position. Experimental results show that the proposed method can detect and track many types of road signs successfully, and tracking comparisons with two other methods are presented.
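The final correction step, template matching within a window around the motion-predicted position, can be sketched with normalized cross-correlation in plain NumPy (in practice a library routine such as OpenCV's `matchTemplate` would be used); the exhaustive window scan below is a simplification:

```python
import numpy as np

def correct_position(image, template, predicted, search_radius):
    """Refine a motion-predicted sign position by template matching.

    Scans candidate top-left positions within `search_radius` of
    `predicted` (a (row, col) pair) and returns the one maximizing
    normalized cross-correlation with the sign template.
    """
    th, tw = template.shape
    t = template - template.mean()
    best, best_score = predicted, -np.inf
    py, px = predicted
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = py + dy, px + dx
            if y < 0 or x < 0:
                continue                       # window outside the image
            patch = image[y:y + th, x:x + tw]
            if patch.shape != (th, tw):
                continue
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom == 0:
                continue                       # flat patch, NCC undefined
            score = (p * t).sum() / denom
            if score > best_score:
                best, best_score = (y, x), score
    return best
```

Restricting the scan to a small window around the motion-predicted position is what keeps this correction cheap compared with matching over the whole frame.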

Face Tracking and Recognition on the arbitrary person using Nonlinear Manifolds (비선형적 매니폴드를 이용한 임의 얼굴에 대한 얼굴 추적 및 인식)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.342-347
    • /
    • 2008
  • Face tracking and recognition are difficult problems because the face is a non-rigid object, and they become even harder when the system must track or recognize an unknown face continuously. In this paper, we propose a method to track and recognize the face of an unknown person in video sequences using a linear combination of the nonlinear manifold models constructed in the system. An arbitrary input face has different similarities to the different persons in the system according to its shape and pose, so we can approximate a new nonlinear manifold model for the input face by estimating these similarities statistically. The approximated model is updated at each frame for the input face. Our experimental results show that the proposed method tracks and recognizes arbitrary persons efficiently.
