• Title/Summary/Keyword: Pose Detection

Detection of Smoking Behavior in Images Using Deep Learning Technology

  • Dong Jun Kim;Yu Jin Choi;Kyung Min Park;Ji Hyun Park;Jae-Moon Lee;Kitae Hwang;In Hwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.107-113 / 2023
  • This paper proposes a method for detecting smoking behavior in images using artificial intelligence technology. Since smoking is an action rather than a static phenomenon, object detection is combined with pose estimation, which can capture the action. A smoker detection model was trained to locate smokers in images, and the characteristics of smoking behavior were applied to the pose-estimation results to detect smoking in images. YOLOv8 was used for object detection and OpenPose for pose estimation. In addition, when both smokers and non-smokers appear in an image, a step that isolates each person was applied. The proposed method was implemented in Python on a Google Colab NVIDIA Tesla T4 GPU, and in tests the smoking behavior was detected perfectly in the given video.
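
As a rough illustration of the pipeline this abstract describes, the sketch below detects people with a pretrained YOLOv8 model and hands each crop to a pose estimator. Here `estimate_pose` and `looks_like_smoking` are hypothetical placeholders for the OpenPose call and the paper's smoking-behavior rule; they are not the authors' code.

```python
# Minimal sketch of the detector + pose-estimator pipeline described above.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained COCO weights; class 0 is "person"

def detect_smokers(frame, estimate_pose, looks_like_smoking):
    results = model(frame, classes=[0])          # keep person boxes only
    smokers = []
    for box in results[0].boxes.xyxy.tolist():   # [x1, y1, x2, y2] per person
        x1, y1, x2, y2 = map(int, box)
        crop = frame[y1:y2, x1:x2]               # isolate one person
        keypoints = estimate_pose(crop)          # placeholder: OpenPose joints
        if looks_like_smoking(keypoints):        # placeholder: hand-near-mouth rule
            smokers.append(box)
    return smokers
```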

Lane Detection-based Camera Pose Estimation

  • Jung, Ho Gi;Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers / v.23 no.5 / pp.463-470 / 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for relating camera coordinates to world coordinates. Previous approaches using huge calibration patterns have the disadvantage that such patterns are costly to make and install, while approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are rare in everyday environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while the horizontal angle with respect to the markings changes. One vanishing point, the intersection of the left and right lane markings, is detected in each image, the vanishing line is estimated from the detected vanishing points, and the camera pose is finally estimated from the vanishing line. The method is based on the fact that planar motion does not change the vanishing line of the plane, and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates in each case, which is verified by checking that the lane markings stand upright in the bird's-eye-view image once the pan angle is compensated.
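
The geometric core of this method can be sketched in a few lines of numpy: each image's vanishing point is the intersection of the left and right lane-marking lines (a cross product in homogeneous coordinates), and a line is fit through the vanishing points collected over several images. This is a minimal sketch under the paper's stated geometry; the variable names are illustrative.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two pixel points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(left_line, right_line):
    vp = np.cross(left_line, right_line)    # intersection of the two lane lines
    return vp / vp[2]                       # normalize to (x, y, 1)

def fit_vanishing_line(vps):
    """Least-squares line a*x + b*y + c = 0 through the vanishing points."""
    pts = np.asarray(vps)[:, :2]
    A = np.column_stack([pts, np.ones(len(pts))])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]                           # null-space vector = line coefficients

# Given intrinsics K, the plane normal in camera coordinates is n ~ K^T l,
# from which tilt and roll follow; the pan angle needs the markings' heading.
```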

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Displays are becoming large and varied in form, which previous gaze-tracking methods do not accommodate; mounting the gaze-tracking camera above the display avoids the constraints of display size and height. However, this setup cannot use the infrared corneal-reflection information that previous methods rely on. This paper proposes a pupil detection method that is robust to eye occlusion, and a simple way to compute the gaze position on the display from the inner eye corner, the pupil center, and the face pose. In the proposed method, the camera switches between wide- and narrow-angle modes according to the person's position: if a face is detected in the wide-angle field of view (FOV), the camera switches to narrow-angle mode aimed at the computed face position, so that frames captured in narrow-angle mode carry the gaze-direction information of a person at long distance. Gaze computation consists of a face pose estimation step and a gaze-direction step. Face pose is estimated by mapping detected facial feature points to a 3D model. For gaze direction, an ellipse is first fit to the pupil using edge information split from the iris boundary; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then computed from the pupil center, the inner eye corner, and the face pose. Experiments show that the proposed algorithm removes the constraints imposed by display form and effectively computes the gaze direction of a person at long distance with a single camera, as demonstrated at various distances.
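
The pupil-localization step might look like the following OpenCV sketch: threshold the eye region, take the largest dark contour, and fit an ellipse to it. The threshold value is an assumption, and the paper's deformable-template fallback for occluded pupils is only indicated by the size check.

```python
import cv2

def pupil_center(eye_gray):
    """Return (x, y) of the pupil center in an eye-region grayscale image."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    _, dark = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:   # fitEllipse needs >= 5 points; likely occlusion,
        return None        # where the paper falls back to a deformable template
    (cx, cy), _, _ = cv2.fitEllipse(largest)
    return cx, cy
```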

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions

  • Choi Jae-Young;Whangbo Taeg-Keun;Kim Nak-Bin
    • Journal of Internet Computing and Services / v.5 no.6 / pp.79-91 / 2004
  • Detecting and recognizing faces in various poses has been a difficult problem, because the distribution of varied poses in a feature space is more dispersed and more complicated than that of frontal faces. This thesis proposes a robust pose- and expression-invariant face recognition method to overcome the shortcomings of existing face recognition systems. First, the TSL color model is applied to detect the facial region, and the direction of the face is estimated from facial features; the estimated pose vector is decomposed into X, Y, and Z axes. Second, the input face is mapped with a deformable template using these vectors and the 3D CANDIDE face model. Finally, the mapped face is transformed by the estimated pose vector into a frontal face suitable for recognition. The experiments validate the face detection model and the pose estimation method, and the tests show that the recognition rate is greatly boosted by normalizing poses and expressions.
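
The pose-normalization idea, rotating an estimated pose back to frontal, can be sketched with plain rotation matrices, as below. This illustrates only the axis decomposition and inverse rotation; the TSL-based detection and CANDIDE template mapping are omitted, and the angle convention is an assumption.

```python
import numpy as np

def rotation_xyz(rx, ry, rz):
    """Compose rotations about the X, Y, Z axes (angles in radians)."""
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def frontalize(vertices, pose):
    """vertices: (N, 3) model points posed by pose = (rx, ry, rz)."""
    R = rotation_xyz(*pose)
    return vertices @ R   # row vectors: v @ R == (R.T @ v.T).T, the inverse rotation
```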

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management / 2022.06a / pp.328-335 / 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting construction workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially the construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D joint locations, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its accuracy and practical feasibility. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
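
The fusion step this abstract describes resembles weighted linear (DLT) triangulation, sketched below. The per-view confidence weights stand in for the paper's probabilistic 2D representations, and the 3x4 projection matrices are assumed known from calibration; this is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def triangulate_joint(points_2d, proj_mats, weights=None):
    """points_2d: list of (x, y) per view; proj_mats: list of 3x4 arrays."""
    if weights is None:
        weights = [1.0] * len(points_2d)
    rows = []
    for (x, y), P, w in zip(points_2d, proj_mats, weights):
        rows.append(w * (x * P[2] - P[0]))   # each view contributes two linear
        rows.append(w * (y * P[2] - P[1]))   # constraints on X = (X, Y, Z, 1)
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)              # least-squares homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]                      # dehomogenize to a 3D point
```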

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditional camera pose estimation methods rely on target objects such as markers or patterns, but these marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that performs camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. This framework streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV applications and data collection at construction sites, and holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
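
The final PnP step is standard enough to sketch with OpenCV's solvePnP. The panel dimensions, pixel coordinates, and intrinsics below are illustrative assumptions, not the paper's specifications.

```python
import cv2
import numpy as np

# 3D corners of a reference panel in world coordinates (mm), on the z = 0 plane;
# a 600 x 1200 mm form panel is an assumed example of a standardized material.
object_pts = np.array([[0, 0, 0], [1200, 0, 0],
                       [1200, 600, 0], [0, 600, 0]], dtype=np.float64)
# Matching 2D corners from the keypoint detector (illustrative pixel values)
image_pts = np.array([[412, 310], [895, 325],
                      [880, 588], [405, 570]], dtype=np.float64)
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)     # rotation matrix from the rotation vector
camera_pos = -R.T @ tvec       # camera center in world coordinates
```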

A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2720-2736 / 2013
  • Multi-view face detection has become an active research area in the last few years. In this paper, a novel multi-view human face detection algorithm based on an improved Real AdaBoost is presented. The Real AdaBoost algorithm is improved by a weighted combination of weak classifiers, for which approximately optimal combination coefficients are obtained. We then show that the sample-weight adjustment and weak-classifier training procedures serve to guarantee the independence of the weak classifiers. A coarse-to-fine hierarchical face detector is proposed that combines the efficiency of Haar features with a pose estimation phase based on our Real AdaBoost algorithm. The algorithm greatly reduces training time compared with classical Real AdaBoost, speeds up the convergence of the strong classifier, and reduces the number of weak classifiers. On the MIT+CMU frontal face test set, the experiments yield a 96.4% detection rate with 528 false alarms; on a real-time multi-view face test set, a 94.7% detection rate. The experimental results verify the effectiveness of the proposed approach.
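
For orientation, the boosting machinery the paper builds on can be sketched as follows. This is generic discrete AdaBoost with a scikit-learn-style weak-learner interface (an assumption); the paper's Real AdaBoost variant replaces the hard ±1 predictions with confidence-rated outputs, and its improved weighted-combination scheme is not reproduced here.

```python
import numpy as np

def boost(weak_learners, X, y, rounds):
    """y: numpy array in {-1, +1}; each learner has fit/predict with sample weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform initial sample weights
    ensemble = []
    for t in range(rounds):
        h = weak_learners[t].fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.sum(w[pred != y])
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # combination coefficient
        w *= np.exp(-alpha * y * pred)       # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * h.predict(X) for a, h in ensemble))

# e.g. boost([DecisionTreeClassifier(max_depth=1) for _ in range(10)], X, y, 10)
```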

Robust Vehicle Occupant Detection based on RGB-Depth-Thermal Camera

  • Song, Changho;Kim, Seung-Hun
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.31-37 / 2018
  • Recently, in-vehicle safety has become a hot topic with the development of self-driving cars. Passive safety systems such as airbags and seat belts are evolving into active systems that monitor the status and behavior of the passengers, including the driver, to mitigate risk. Furthermore, occupant information is expected to enable customized services such as seat adjustment, air-conditioning control, and D.W.D. (Distraction While Driving) warnings tailored to each passenger. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera for obtaining passenger information. The RGB-Depth-Thermal sensor system was configured to be robust in various environments, and OpenPose, a deep learning algorithm, was used for occupant detection. This algorithm works not only on RGB images but also on thermal images, even with an existing trained model. The algorithm will be extended to acquire higher-level information, such as the passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.
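
The claim that an RGB-trained pose model also works on thermal frames usually comes down to channel handling, sketched below. `pose_model` is a placeholder for an OpenPose-style network returning a list of detections; the normalization is an assumption, not the paper's preprocessing.

```python
import numpy as np

def thermal_to_rgb_like(thermal):
    """Map a raw (e.g., 16-bit) thermal frame to an 8-bit 3-channel image."""
    t = thermal.astype(np.float32)
    t = (t - t.min()) / max(t.max() - t.min(), 1e-6)   # normalize to [0, 1]
    gray = (t * 255).astype(np.uint8)
    return np.dstack([gray, gray, gray])                # replicate as fake RGB

def detect_occupants(rgb, thermal, pose_model):
    # run the same trained model on both modalities and pool the detections;
    # a depth channel could additionally gate out-of-cabin false positives
    return pose_model(rgb) + pose_model(thermal_to_rgb_like(thermal))
```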

Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera

  • Kim, Seung-Hun;Jung, Il-Kyun;Park, Chang-Woo;Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from the two different images of a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical-flow, skin-color-model, and Haar-model methods. The pose of the human body is also estimated from the body detection result in the IR image using a PCA algorithm together with AdaBoost. The results of the individual detection algorithms are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, several experiments were conducted in various environments. The experimental results show good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robot or home systems; it extends to surveillance and military systems.
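
The ROI-fusion idea can be sketched as overlap matching between the two detectors' boxes, keeping detections confirmed by both sensors. The IoU threshold below is an illustrative choice, not a value from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse_rois(ccd_boxes, ir_boxes, thresh=0.3):
    fused = []
    for c in ccd_boxes:
        best = max(ir_boxes, key=lambda r: iou(c, r), default=None)
        if best is not None and iou(c, best) >= thresh:
            fused.append(c)          # confirmed by both sensors
    return fused
```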

Human Skeleton Keypoints based Fall Detection using GRU

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies of falls have focused on analyzing fall motions with a recurrent neural network (RNN), building on deep learning approaches that detect 2D human poses well from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using the skeletal keypoint information extracted with PoseNet from images captured by a low-cost 2D RGB camera, thereby increasing the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of post-fall posture within the fall motion-analysis framework. A public data set was used to extract human skeletal features, and in experiments to find a feature extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than a conventional method that uses raw skeletal data.
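
A plausible sketch of the feature extraction this abstract describes: track the head and shoulder keypoints over frames and compute the vertical velocity and acceleration that feed the GRU classifier. The keypoint indices follow the common 17-keypoint COCO layout used by PoseNet (0 = nose, 5/6 = shoulders); the frame rate is an assumption.

```python
import numpy as np

FPS = 30                              # assumed camera frame rate
HEAD, L_SHOULDER, R_SHOULDER = 0, 5, 6

def fall_features(keypoint_track):
    """keypoint_track: (T, 17, 2) array of per-frame (x, y) keypoints."""
    upper_y = keypoint_track[:, [HEAD, L_SHOULDER, R_SHOULDER], 1]
    y = upper_y.mean(axis=1)          # mean image height of head/shoulders
    vel = np.gradient(y) * FPS        # pixels per second (downward positive)
    acc = np.gradient(vel) * FPS      # pixels per second^2
    return np.stack([y, vel, acc], axis=1)   # (T, 3) sequence for the GRU
```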