• Title/Summary/Keyword: Small person pose estimation


Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2328-2344, 2022
  • 2D human pose estimation still faces difficulty in low-resolution images. Most existing top-down approaches upscale the target human bounding box image to a large size and feed the scaled image into the network. Up-sampling introduces artifacts into low-resolution target images, and these degraded images hinder accurate estimation of joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from these images are fused in the network. Moreover, we introduce a guiding channel that induces the multi-resolution input features to affect the network selectively according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative benchmark for 2D human pose estimation, where our method achieves superior performance compared to the strong HRNet baseline and previous state-of-the-art methods.
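The multi-resolution rescaling and weighted fusion described in the abstract can be sketched as below. This is a toy illustration, not the authors' implementation: a nearest-neighbor resize stands in for the CNN backbone, and the guiding-channel weighting (favoring the branch whose input size is closest to the original crop resolution) is an assumption for illustration.

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbor resize of a 2D array to (size, size)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols[None, :]]

def guiding_weights(orig_size, input_sizes):
    """Toy guiding channel: weight each branch by how close its input
    size is to the original crop resolution (illustrative assumption)."""
    dist = np.array([abs(s - orig_size) for s in input_sizes], dtype=float)
    w = 1.0 / (1.0 + dist)
    return w / w.sum()

def multi_resolution_fuse(crop, input_sizes=(32, 64, 128), feat_size=16):
    """Rescale the crop to several sizes, extract per-branch 'features'
    (a resize stands in for a backbone), and fuse them with
    resolution-dependent weights."""
    weights = guiding_weights(min(crop.shape), input_sizes)
    fused = np.zeros((feat_size, feat_size))
    for w, s in zip(weights, input_sizes):
        feat = resize_nn(resize_nn(crop, s), feat_size)  # branch "feature"
        fused += w * feat
    return fused
```

In a real top-down pipeline each branch would run a CNN and the fusion would happen on feature maps; the point here is only the structure: multiple rescaled inputs, per-branch features, and a resolution-aware weighted combination.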

Armed person detection using Deep Learning (딥러닝 기반의 무기 소지자 탐지)

  • Kim, Geonuk;Lee, Minhun;Huh, Yoojin;Hwang, Gisu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering, v.23 no.6, pp.780-789, 2018
  • Gun crimes occur frequently around the world, not only in public places but also in alleyways. Since small guns such as pistols are often used in these crimes, detecting a person armed with a pistol is essential for prevention. Conventional approaches to armed person detection treat the armed person as a single object in the input image, so their accuracy is very low: the pistol is a much smaller object than the person, yet the two are detected together as one. To solve this problem, we propose a novel algorithm called APDA (Armed Person Detection Algorithm). APDA detects an armed person by post-processing the positions of both wrists and the pistol, obtained from a CNN-based human body feature detection model and a pistol detection model, respectively. We show that APDA provides 46.3% better recall and 14.04% better precision than SSD-MobileNet.
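The wrist/pistol post-processing idea can be illustrated with a minimal sketch, assuming the decision rule is a proximity check between each wrist keypoint and the detected pistol box. The pixel threshold and the use of the box center are illustrative assumptions, not the paper's exact criterion.

```python
import math

WRIST_DIST_THRESH = 50.0  # pixels; illustrative value, not from the paper

def box_center(box):
    """Center of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def is_armed(left_wrist, right_wrist, pistol_box, thresh=WRIST_DIST_THRESH):
    """Declare the person armed if either wrist keypoint lies within
    `thresh` pixels of the detected pistol's center."""
    cx, cy = box_center(pistol_box)
    for wx, wy in (left_wrist, right_wrist):
        if math.hypot(wx - cx, wy - cy) <= thresh:
            return True
    return False
```

The appeal of this kind of rule is that the pistol detector and the keypoint detector can each be tuned for their own small/large object scales, and the association is deferred to a cheap geometric check.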

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society, v.20 no.2, pp.1-11, 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting is labor intensive, as every frame must be painted. A human is one of the most challenging objects to convert at high quality: the body is an articulated figure with many degrees of freedom (DOF), and various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human in every frame of a monocular video. First, a 3D template model is matched to the person in the video using a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the use of a fully skinned 3D model with many joints and DOFs. The initial depth of the 2D object is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. To handle complex silhouettes and appearances effectively, we introduce a partial depth propagation method based on color segmentation that preserves detail in the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.
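The segment-wise depth propagation step can be sketched as below. This is a deliberately simplified stand-in for the paper's method: missing depths (NaN) inside each color segment are filled with the mean of the known depths in that segment, whereas the actual paper propagates depth with more sophisticated, edge-aware rules.

```python
import numpy as np

def propagate_depth(depth, segments):
    """Fill missing depth values (NaN) within each color segment using
    the mean of the known depths in that segment (simplified rule).
    Segments with no known depth remain NaN."""
    out = depth.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        known = out[mask]
        known = known[~np.isnan(known)]
        if known.size:
            out[mask & np.isnan(out)] = known.mean()
    return out
```

Restricting propagation to color segments is what keeps depth from bleeding across silhouette boundaries such as hair and clothing edges, which is the motivation the abstract gives for the partial, segmentation-based approach.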