• Title/Summary/Keyword: pose estimation

Shape Descriptor for 3D Foot Pose Estimation (3차원 발 자세 추정을 위한 새로운 형상 기술자)

  • Song, Ho-Geun; Kang, Ki-Hyun; Jung, Da-Woon; Yoon, Yong-In
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.2 / pp.469-478 / 2010
  • This paper proposes an effective shape descriptor for 3D foot pose estimation. To reduce processing time, a silhouette-based foot image database is built, and metadata describing the 3D foot pose is appended to it. We also propose a modified Centroid Contour Distance (CCD) whose feature space is small and whose pose estimation performance exceeds that of existing descriptors. To analyze the descriptor's performance, we evaluate its time and space complexity together with retrieval accuracy and compare it with previous methods. Experimental results show that the proposed descriptor outperforms previous methods in both feature extraction time and pose estimation accuracy.
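
The paper's modification of the descriptor is not specified in the abstract; for orientation, the basic Centroid Contour Distance it builds on can be sketched as follows (the angular sampling scheme and max-normalization here are illustrative assumptions, not the paper's):

```python
import numpy as np

def centroid_contour_distance(contour, n_samples=32):
    """Basic Centroid Contour Distance (CCD) descriptor for a closed 2D contour.

    contour: (N, 2) array of boundary points.
    Returns n_samples centroid-to-boundary distances sampled at regular
    angular intervals, normalized for scale invariance.
    """
    contour = np.asarray(contour, dtype=float)
    rel = contour - contour.mean(axis=0)      # boundary relative to centroid
    dist = np.hypot(rel[:, 0], rel[:, 1])     # centroid-to-boundary distances
    angle = np.arctan2(rel[:, 1], rel[:, 0])  # polar angle of each point
    # Sample the distance at regular angles using the nearest boundary point
    # (angle wrap-around is handled only approximately in this sketch).
    bins = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    desc = np.array([dist[np.argmin(np.abs(angle - a))] for a in bins])
    return desc / desc.max()                  # scale normalization
```

Because the descriptor is a fixed-length vector, two silhouettes can be compared with a simple vector distance, which is what makes database retrieval fast.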

Improvement of Face Recognition Speed Using Pose Estimation (얼굴의 자세추정을 이용한 얼굴인식 속도 향상)

  • Choi, Sun-Hyung; Cho, Seong-Won; Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.677-682 / 2010
  • This paper addresses a method for roughly estimating face pose by comparing Haar-wavelet values learned with the AdaBoost algorithm in face detection, and presents its application to face recognition. The learned weak classifiers are used to select a Haar wavelet robust to each pose's features by comparing coefficients during face detection; the Mahalanobis distance is used to measure the matching degree in this selection. When a facial image is detected using the selected Haar wavelet, the pose is estimated. The proposed pose estimation can be used to improve face recognition speed, and experiments are conducted to evaluate its performance.
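
The Mahalanobis distance used for the matching degree is standard; a minimal sketch (the mean and covariance would come from the learned wavelet coefficients per pose, which are assumed here, not taken from the paper):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a distribution
    with the given mean and covariance matrix."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

Unlike Euclidean distance, this weights each feature dimension by its learned variance, so noisy coefficients contribute less to the matching score.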

A Pilot Study on Outpainting-powered Pet Pose Estimation (아웃페인팅 기반 반려동물 자세 추정에 관한 예비 연구)

  • Gyubin Lee; Youngchan Lee; Wonsang You
    • Journal of the Institute of Convergence Signal Processing / v.24 no.1 / pp.69-75 / 2023
  • In recent years, there has been growing interest in deep learning-based animal pose estimation, especially for animal behavior analysis and healthcare. However, existing animal pose estimation techniques do not perform well when body parts are occluded or absent; in particular, occlusion of a dog's tail or ears can significantly degrade pet behavior and emotion recognition. To address this intractable problem, we propose a simple yet novel framework for pet pose estimation in which the pose is predicted on an outpainted image: body parts hidden outside the input image are reconstructed by an outpainting network placed before the pose estimation network. We performed a preliminary study to test the feasibility of the proposed approach, assessing CE-GAN and BAT-Fill for image outpainting and SimpleBaseline for pet pose estimation. Our experimental results show that pet pose estimation on outpainted images generated with BAT-Fill outperforms pose estimation on the original, non-outpainted input images.

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon; Eun, Youngho; Park, Sang-Young
    • Journal of Astronomy and Space Sciences / v.35 no.4 / pp.263-277 / 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together with actual experimental conditions in mind. Two estimation techniques were utilized: a nonlinear least-squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were used in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was incorporated in the estimation algorithm. The proposed algorithm was then verified experimentally using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by examining how estimation accuracy changes with the level of measurement error. The hardware experiments also confirmed the effectiveness of the suggested algorithm and its applicability to real-world tasks.
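
The paper's estimator combines a nonlinear least-squares initializer with an EKF over the full relative pose; as a much-reduced sketch, a single EKF predict/update cycle for one-axis relative position and velocity with a range measurement (such as a laser distance meter might provide) could look like this. The state, dynamics, and noise levels here are illustrative assumptions, not the paper's models:

```python
import numpy as np

def ekf_step(x, P, z, dt, q=1e-4, r=1e-2):
    """One EKF predict/update cycle for state x = [position, velocity]
    with a scalar range measurement z = |position| + noise."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    Q = q * np.eye(2)                      # process noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the range measurement
    h = abs(x[0])                          # predicted measurement
    s = np.sign(x[0]) if x[0] != 0 else 1.0
    H = np.array([[s, 0.0]])               # Jacobian of h at the prediction
    S = H @ P @ H.T + r                    # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain (2x1)
    x = x + (K * (z - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Iterating this cycle over a measurement sequence drives the estimate toward the true relative state; the real algorithm carries the full 6-DOF pose and orbital perturbation terms through the same predict/update structure.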

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum; Bok, Yunsu; Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method on various examples.

Comparison of Deep Learning Based Pose Detection Models to Detect Fall of Workers in Underground Utility Tunnels (딥러닝 자세 추정 모델을 이용한 지하공동구 다중 작업자 낙상 검출 모델 비교)

  • Jeongsoo Kim
    • Journal of the Society of Disaster Information / v.20 no.2 / pp.302-314 / 2024
  • Purpose: This study proposes a fall detection model based on a top-down deep learning pose estimation model to automatically determine falls of multiple workers in an underground utility tunnel, and evaluates its performance. Method: A model is presented that combines fall discrimination rules with the results inferred by YOLOv8-pose, one of the top-down pose estimation models, and its metrics are evaluated on images of up to two standing or fallen workers in the tunnel. The same process is conducted for a bottom-up pose estimation model (OpenPose). In addition, because the models' fall inference depends on worker detection by YOLOv8-pose and OpenPose, metrics were investigated not only for fall detection but also for person detection. Result: For worker detection, the YOLOv8-pose and OpenPose models achieved F1-scores of 0.88 and 0.71, respectively; for fall detection, however, these dropped to 0.71 and 0.23. The OpenPose-based model's results were due to partially detected worker bodies and to workers that were detected but not separated correctly. Conclusion: Top-down pose estimation models are a more effective way to detect worker falls in underground utility tunnels, with respect to joint recognition and separation between workers.
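
The abstract does not state the fall discrimination rules combined with the pose model; a hypothetical rule over 2D keypoints in the COCO 17-keypoint layout (which YOLOv8-pose outputs) could compare torso orientation:

```python
import numpy as np

def is_fallen(keypoints):
    """Hypothetical fall rule on 2D keypoints in COCO-17 order: flag a fall
    when the torso (shoulder midpoint to hip midpoint) is closer to
    horizontal than to vertical. This is an illustrative stand-in, not the
    paper's actual rule set.

    keypoints: (17, 2) array of (x, y) in image coordinates (y grows down).
    """
    kp = np.asarray(keypoints, dtype=float)
    shoulders = kp[[5, 6]].mean(axis=0)  # left/right shoulder midpoint
    hips = kp[[11, 12]].mean(axis=0)     # left/right hip midpoint
    dx, dy = hips - shoulders
    # Torso more horizontal than vertical -> likely fallen.
    return abs(dx) > abs(dy)
```

A production rule set would also need keypoint confidence thresholds and temporal smoothing, since the paper reports that missed or partially detected bodies dominate the fall-detection errors.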

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung; Park, Woojune; Kim, Jung Hee; Kim, Joonsoo; Yun, Kuk-Jin; Kang, Suk-Ju
    • ETRI Journal / v.44 no.1 / pp.10-23 / 2022
  • Despite significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images arranged in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term enforcing consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the images' connected structure at different camera pose estimation steps, we can estimate camera poses more accurately for all structured mosaic-based image sets, including omnidirectional scenes.

Stabilized 3D Pose Estimation of 3D Volumetric Sequence Using 360° Multi-view Projection (360° 다시점 투영을 이용한 3D 볼류메트릭 시퀀스의 안정적인 3차원 자세 추정)

  • Lee, Sol; Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.76-77 / 2022
  • In this paper, we propose a method to stabilize the 3D pose estimation of a 3D volumetric data sequence by matching pose estimation results from multiple views. A circle is drawn around the volumetric model, and the model is projected from viewpoints placed at regular angular intervals along it. After OpenPose 2D pose estimation is performed on each projected image, the 2D joints are matched across views to localize the 3D joint positions. The jitter of the resulting 3D joint sequence as a function of angular spacing was quantified and graphed, and minimum conditions for stable results are suggested.

An Algorithm for a pose estimation of a robot using Scale-Invariant feature Transform

  • Lee, Jae-Kwang; Huh, Uk-Youl; Kim, Hak-Il
    • Proceedings of the KIEE Conference / 2004.11c / pp.517-519 / 2004
  • This paper describes an approach to estimating a robot's pose from an image. The algorithm can be broken down into three stages: extracting scale-invariant features, matching these features, and computing affine invariants. In the first stage, a mono camera mounted on the robot captures an image of the environment; features are then extracted from the captured image and recorded in a database. In the matching stage, a Random Sample Consensus (RANSAC) method is employed to match these features. After matching, the robot pose is estimated from the feature positions by computing affine invariants. The algorithm is implemented and demonstrated in a MATLAB program.
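
The RANSAC verification stage can be illustrated independently of the SIFT extraction and database stages; a minimal sketch fitting a 2D affine transform to point correspondences contaminated by outliers (the sample size, iteration count, and inlier threshold are illustrative choices, not the paper's):

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, thresh=1.0, seed=0):
    """RANSAC estimate of a 2D affine transform dst ~ A @ src + t from
    point correspondences that may contain outlier matches.

    src, dst: (N, 2) arrays of corresponding points.
    Returns the 2x3 affine matrix [A | t] and the boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    src_h = np.c_[src, np.ones(len(src))]  # homogeneous coordinates (N, 3)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample
        M = np.linalg.lstsq(src_h[idx], np.asarray(dst)[idx], rcond=None)[0]
        err = np.linalg.norm(src_h @ M - dst, axis=1)  # reprojection error
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best hypothesis for the final estimate.
    M = np.linalg.lstsq(src_h[best_inliers], np.asarray(dst)[best_inliers],
                        rcond=None)[0]
    return M.T, best_inliers  # M.T has shape (2, 3): [A | t]
```

The inlier mask separates correct feature matches from spurious ones, which is exactly the role RANSAC plays before the pose is computed from the surviving correspondences.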

Localization of Mobile Robot using Local Map and Kalman Filtering (지역 지도와 칼만 필터를 이용한 이동 로봇의 위치 추정)

  • Lim, Byung-Hyun; Kim, Yeong-Min; Hwang, Jong-Sun; Ko, Nak-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.07b / pp.1227-1230 / 2003
  • In this paper, we propose a pose estimation method using a local map acquired from 2D laser range finder data. The proposed method uses an extended Kalman filter: the state equation is the navigation equation of the Nomad Super Scout II, and the measurement equation is a map-based measurement equation for a SICK PLS 101-112 sensor. We describe a map consisting of geometric features such as planes, edges, and corners. For pose estimation, the external environment is scanned with the laser range finder, and these data are fed to the Kalman filter to estimate the robot's pose and position. The proposed method enables very fast simultaneous map building and pose estimation.
