• Title/Summary/Keyword: pose estimation


Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.127-133
    • /
    • 2021
  • Recent studies of human falls have focused on analyzing fall motions with recurrent neural networks (RNNs) and deep learning approaches that detect 2D human poses from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted by PoseNet from images obtained with a low-cost 2D RGB camera, to increase the accuracy of fall judgments. In particular, we propose a fall-detection method based on the characteristics of the post-fall posture within the fall motion-analysis method. A public dataset was used to extract human skeletal features, and in an experiment to find a feature-extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than a conventional method that uses raw skeletal data.
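The core signal described in this abstract, the acceleration of a tracked keypoint's position, can be sketched as follows. This is an illustrative example, not the authors' code: the keypoint trajectory, frame rate, and threshold are all hypothetical.

```python
import numpy as np

def fall_candidate(y_positions, fps=30.0, accel_thresh=2.0):
    """y_positions: per-frame vertical coordinate (normalized image units)
    of a tracked keypoint such as the head, e.g. from PoseNet output.
    Returns True if the keypoint's downward acceleration (image y grows
    downward) exceeds the threshold at any frame."""
    y = np.asarray(y_positions, dtype=float)
    dt = 1.0 / fps
    velocity = np.gradient(y, dt)             # first derivative of position
    acceleration = np.gradient(velocity, dt)  # second derivative
    return bool(np.any(acceleration > accel_thresh))

# A stationary keypoint vs. one accelerating downward.
standing = [0.2] * 30
falling = [0.2 + 0.5 * (t / 10) ** 2 for t in range(10)]
print(fall_candidate(standing + falling))  # True
print(fall_candidate(standing))            # False
```

In the paper such a candidate signal is combined with post-fall posture classification; here only the acceleration test is shown.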

Fuzzy rule-based Hand Motion Estimation for A 6 Dimensional Spatial Tracker

  • Lee, Sang-Hoon;Kim, Hyun-Seok;Suh, Il-Hong;Park, Myung-Kwan
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.82-86
    • /
    • 2004
  • A fuzzy rule-based hand-motion estimation algorithm is proposed for a 6-dimensional spatial tracker that employs low-cost accelerometers and gyros. Specifically, the beginning and end of a hand motion must be detected accurately to initiate and terminate the integration process that recovers the position and pose of the hand from accelerometer and gyro signals, since errors due to noise and/or hand-shaking motions accumulate through the integration. Fuzzy rules deciding whether a hand motion is present are proposed over the accelerometer signals and the sum of the derivatives of the accelerometer and gyro signals. Several experimental results are shown to validate the proposed algorithm.
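The motion/no-motion decision described above can be sketched with simple fuzzy memberships. This is a hedged illustration of the general idea, not the paper's rule base: the membership shapes, units, and thresholds are assumptions.

```python
import numpy as np

def trapezoid(x, a, b):
    """Membership rising linearly from 0 at a to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def moving_membership(accel_mag, deriv_sum):
    """Fuzzy degree that the hand is moving, from the accelerometer
    magnitude and the summed derivatives of accelerometer/gyro signals.
    Rule: motion is present if acceleration is large OR signals change fast."""
    mu_accel = trapezoid(accel_mag, 0.05, 0.3)   # breakpoints assumed
    mu_deriv = trapezoid(deriv_sum, 0.02, 0.2)
    return max(mu_accel, mu_deriv)               # fuzzy OR (max)

# Integration would start when membership crosses 0.5 and stop when it falls back.
print(moving_membership(0.01, 0.01))  # near 0: at rest
print(moving_membership(0.4, 0.25))   # 1.0: clearly moving
```

Gating the integration this way keeps noise and hand-tremor from accumulating into position drift while the hand is actually at rest.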


Survey on Developing Autonomous Micro Aerial Vehicles (드론 자율비행 기술 동향)

  • Kim, S.S.;Jung, S.G.;Cha, J.H.
    • Electronics and Telecommunications Trends
    • /
    • v.36 no.2
    • /
    • pp.1-11
    • /
    • 2021
  • As sensors such as inertial measurement units, cameras, and Light Detection and Ranging (LiDAR) devices have become cheaper and smaller, research has been actively conducted on automating micro aerial vehicles such as multirotor drones, with the goal of fully autonomous flight in the real world without human intervention. In this article, we present a survey of state-of-the-art developments in autonomous drones. The essential components of an autonomous drone can be classified into pose estimation, environmental perception, and obstacle-free trajectory generation. To describe the trend, we selected three leading research groups (the University of Pennsylvania, ETH Zurich, and Carnegie Mellon University) that have demonstrated impressive experimental results in automating drones using their estimation, perception, and trajectory-generation techniques. For each group, we summarize the core of their algorithms and describe how they were implemented in such small drones. Finally, we present our up-to-date research status on developing an autonomous drone.

Implementation of Multi-device Remote Control System using Gaze Estimation Algorithm (시선 방향 추정 알고리즘을 이용한 다중 사물 제어 시스템의 구현)

  • Yu, Hyemi;Lee, Jooyoung;Jeon, Surim;Nah, JeongEun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.812-814
    • /
    • 2022
  • To address a shortcoming of existing "smart home" systems, in which several steps are required to select the object to control, this paper proposes a system that estimates the user's gaze direction and controls the object located in the direction the user is looking. We implemented a gaze-direction estimation algorithm that uses the coordinates of landmarks extracted by pose estimation from an ordinary RGB camera. Compared with conventional gaze-tracking techniques based on near-infrared cameras and gaze-tracking models, it produces lighter data, imposes fewer constraints on the relative position of the user and the sensor, and requires no extra equipment. Experiments demonstrated that the gaze-tracking accuracy of the algorithm is practical for use in a real residential environment, and we finally implemented a gaze-based object control system that applies the algorithm to infrared devices and Google Home products.
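A minimal sketch of estimating a coarse gaze (head-yaw) direction from 2D pose landmarks and mapping it to a device zone, in the spirit of the system described above. This is not the authors' implementation; the landmark layout, zone boundaries, and device names are assumptions.

```python
def head_yaw_ratio(nose_x, left_ear_x, right_ear_x):
    """Return a value in [-1, 1]: 0 when the nose is centered between the
    ears (facing the camera), negative when turned toward the left ear.
    Inputs are normalized x-coordinates from a pose-estimation model."""
    center = (left_ear_x + right_ear_x) / 2.0
    half_width = (right_ear_x - left_ear_x) / 2.0
    return (nose_x - center) / half_width

def select_device(yaw, zones=(("lamp", -1.0, -0.33),
                              ("tv", -0.33, 0.33),
                              ("speaker", 0.33, 1.0))):
    """Map a yaw ratio to the device placed in that angular zone."""
    for name, lo, hi in zones:
        if lo <= yaw <= hi:
            return name
    return None

print(select_device(head_yaw_ratio(0.52, 0.40, 0.60)))  # prints "tv"
```

Because only landmark coordinates are needed, this works with any RGB camera and no per-user calibration hardware, which matches the lightweight-data claim in the abstract.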

Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.6 no.3
    • /
    • pp.197-209
    • /
    • 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For a low-cost, low-complexity system, we propose a single-camera approach that utilizes ceiling images acquired from a camera installed to point upward. For reliable operation, we propose a method using hybrid features, which include natural landmarks in the visible scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features by exploiting both natural and artificial ones. In addition, compared with previous works using only the natural scene, our method converges faster and is more robust, as an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, it is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can easily be utilized in a variety of service-robot applications in large indoor environments.

Virtual Navigation of Blood Vessels using 3D Curve-Skeletons (3차원 골격곡선을 이용한 가상혈관 탐색 방안)

  • Park, Sang-Jin;Park, Hyungjun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.22 no.1
    • /
    • pp.89-99
    • /
    • 2017
  • In order to make a virtual endoscopy system effective for exploring the interior of a 3D model of a human organ, it is necessary to generate an accurate navigation path located inside the 3D model and to obtain consistent camera position and pose estimates along the path. In this paper, we propose an approach to virtual navigation of blood vessels that makes proper use of orthogonal contours and skeleton curves. The approach generates the orthogonal contours and the skeleton curves from the 3D mesh model of the blood vessels and its voxel model. For a navigation zone specified by two nodes on the skeleton curves, it computes the shortest path between the two nodes, estimates the positions and poses of a virtual camera at the nodes in the navigation zone, and interpolates the positions and poses to make the camera move smoothly along the path. In addition to keyboard and mouse input, intuitive hand gestures recognized by the Leap Motion SDK are used as the user interface for virtual navigation of the blood vessels. The proposed approach provides an easy and accurate means for the user to examine the interior of 3D blood vessels without any collisions between the camera and the vessel surface. With a simple user study, we present illustrative examples of applying the approach to 3D mesh models of various blood vessels to show its quality and usefulness.
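The pose-interpolation step described above can be sketched as follows: interpolate camera position linearly between skeleton-curve nodes and derive the view direction from the path tangent. This is an illustrative sketch, not the paper's code; the node coordinates and sampling density are hypothetical.

```python
import numpy as np

def interpolate_path(nodes, samples_per_segment=10):
    """nodes: (N, 3) camera positions at skeleton-curve nodes along the
    vessel centerline. Returns interpolated positions and unit view
    directions so the camera moves smoothly along the path."""
    nodes = np.asarray(nodes, dtype=float)
    positions, directions = [], []
    for a, b in zip(nodes[:-1], nodes[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            positions.append((1 - t) * a + t * b)   # linear position blend
            d = b - a
            directions.append(d / np.linalg.norm(d))  # look along the path
    return np.array(positions), np.array(directions)

# Two segments sampled 10 times each.
pos, dirs = interpolate_path([[0, 0, 0], [0, 0, 5], [3, 0, 5]])
print(pos.shape, dirs.shape)  # (20, 3) (20, 3)
```

A production system would blend orientations smoothly across node boundaries (e.g. with quaternion slerp) rather than snapping the view to each segment's tangent; the sketch keeps only the positional part.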

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.686-689
    • /
    • 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin-detection and depth-cutting algorithms. Second, a convolutional neural network (CNN) classifier, composed of three convolutional layers and two fully connected layers and taking the extracted depth image as input, distinguishes the right hand from the left. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted depth images of the left and right hands. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguished the right and left hands with 96.9% accuracy, and the CNN regressor estimated the 3D hand-joint information with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of applications, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
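The first stage described above (hand extraction by skin detection plus depth cutting) can be sketched as a pair of masks. This is an illustration, not the authors' code: the color rule and depth range are assumptions.

```python
import numpy as np

def extract_hands(rgb, depth, depth_min=0.2, depth_max=0.8):
    """rgb: (H, W, 3) uint8 image; depth: (H, W) array in meters.
    Keep pixels that are both skin-colored and within arm's reach of the
    camera; returns a boolean mask of candidate hand pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Crude RGB skin rule (illustrative; real systems use calibrated models).
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    in_range = (depth > depth_min) & (depth < depth_max)  # the "depth cut"
    return skin & in_range

# One skin-like pixel at arm's reach; everything else dark or far away.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[1, 1] = (200, 120, 80)
depth = np.full((4, 4), 1.5)
depth[1, 1] = 0.5
mask = extract_hands(rgb, depth)
print(mask.sum())  # 1
```

In the full pipeline the masked depth patches would then be cropped and fed to the CNN classifier and regressor.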

The Methodology of the Golf Swing Similarity Measurement Using Deep Learning-Based 2D Pose Estimation

  • Park, Jonghyuk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.39-47
    • /
    • 2023
  • In this paper, we propose a method to measure the similarity between golf swings in videos. As deep-learning-based artificial intelligence has proven effective in computer vision, attempts to apply it to video-based sports data analysis are increasing. In this study, the joint coordinates of a person in a golf swing video were obtained using a deep-learning-based pose estimation model, and the similarity of each swing segment was measured from them. The proposed method was evaluated on driver swing videos from the GolfDB dataset. When the swing videos of 36 players were paired and their similarities measured, for 26 players another one of their own swing sequences was rated the most similar, and the average similarity rank was about fifth. This confirmed that similarity can be measured in detail even between motions that are performed very similarly.
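A minimal keypoint-sequence similarity measure in the spirit of the description above (the paper's exact metric is not given here, so this is an assumed formulation): resample two swings to a common length, normalize each frame's joints for position and scale, and take the mean joint distance.

```python
import numpy as np

def resample(seq, n=32):
    """Pick n frames spread evenly across the sequence."""
    idx = np.linspace(0, len(seq) - 1, n).round().astype(int)
    return np.asarray(seq, dtype=float)[idx]

def swing_distance(a, b, n=32):
    """a, b: (frames, joints, 2) arrays of 2D joint coordinates from a
    pose-estimation model. Lower distance means more similar swings."""
    a, b = resample(a, n), resample(b, n)
    def norm(x):
        # Remove per-frame centroid and scale so camera placement and
        # player size do not dominate the comparison.
        x = x - x.mean(axis=1, keepdims=True)
        return x / (np.linalg.norm(x, axis=(1, 2), keepdims=True) + 1e-9)
    return float(np.mean(np.linalg.norm(norm(a) - norm(b), axis=-1)))

rng = np.random.default_rng(1)
swing = rng.random((60, 17, 2))                      # 60 frames, 17 joints
noisy = swing + rng.normal(0, 0.01, swing.shape)     # near-identical swing
other = rng.random((45, 17, 2))                      # unrelated motion
print(swing_distance(swing, noisy) < swing_distance(swing, other))
```

Ranking a player's swings against all others by this distance reproduces the kind of nearest-swing evaluation the abstract describes.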

Localization of a Mobile Robot Using Ceiling Image with Identical Features (동일한 형태의 특징점을 갖는 천장 영상 이용 이동 로봇 위치추정)

  • Noh, Sung Woo;Ko, Nak Yong;Kuc, Tae Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.2
    • /
    • pp.160-167
    • /
    • 2016
  • This paper reports a localization method for a mobile robot using ceiling images. The ceiling has landmarks that are not distinguishable from one another. The location of every landmark in the map is given a priori, while the correspondence between a detected landmark and a landmark in the map is not; only the initial pose of the robot relative to the landmarks is given. The method uses a particle-filter approach for localization. Along with estimating the robot pose, it also associates each landmark detected in the ceiling image with a landmark in the map. The method is tested in an indoor environment with circular landmarks on the ceiling. The test verifies the feasibility of the method in environments where range data to walls or beacons are unavailable or severely corrupted by noise. This makes the method useful for localization in a warehouse, where laser range-finder measurements and range data from RF or ultrasonic beacons have large uncertainty.
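The particle-filter measurement update with indistinguishable landmarks can be sketched as follows: each particle performs its own nearest-landmark data association and is weighted by the residual. This is an illustrative sketch, not the authors' code; the map, noise scale, and particle count are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
landmarks = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # known map

def update_weights(particles, observed_xy, sigma=0.3):
    """particles: (N, 3) array of [x, y, theta] pose hypotheses.
    observed_xy: one landmark detection in the robot frame (from the
    ceiling image). Returns normalized particle weights."""
    weights = np.empty(len(particles))
    for i, (x, y, th) in enumerate(particles):
        c, s = np.cos(th), np.sin(th)
        # Project the observation into the world frame for this particle.
        wx = x + c * observed_xy[0] - s * observed_xy[1]
        wy = y + s * observed_xy[0] + c * observed_xy[1]
        # Landmarks are identical, so associate with the nearest map entry.
        d2 = np.min(np.sum((landmarks - [wx, wy]) ** 2, axis=1))
        weights[i] = np.exp(-d2 / (2 * sigma ** 2))
    return weights / weights.sum()

particles = np.column_stack([rng.uniform(-1, 3, 50),
                             rng.uniform(-1, 3, 50),
                             rng.uniform(-np.pi, np.pi, 50)])
w = update_weights(particles, np.array([1.0, 0.0]))
print(w.sum())  # weights sum to 1 (up to float error)
```

Resampling particles in proportion to these weights, while propagating them with odometry, concentrates the filter on poses whose implied data association is consistent over time.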

Projection mapping onto multiple objects using a projector robot

  • Yamazoe, Hirotake;Kasetani, Misaki;Noguchi, Tomonobu;Lee, Joo-Ho
    • Advances in robotics research
    • /
    • v.2 no.1
    • /
    • pp.45-57
    • /
    • 2018
  • Even though the popularity of projection mapping continues to increase and it is being implemented in more and more settings, most current projection mapping systems are limited to special purposes, such as outdoor events, live theater, and musical performances. This lack of versatility arises from the large number of projectors needed and the calibration they require; moreover, after the projectors have been calibrated, neither their positions and poses nor those of the projection targets can be changed. To overcome these problems, we propose a projection mapping method using a projector robot that can perform projection mapping in more general or ubiquitous situations, such as shopping malls. A projector's position and pose can be estimated from the robot's self-localization sensors, but the accuracy of this approach alone is inadequate for projection mapping. The proposed method therefore combines self-localization from the robot's sensors with position and pose estimation of the projection targets based on a 3D model. We first obtain the projection target's 3D model and then use it to accurately estimate the target's position and pose, achieving accurate projection mapping with a projector robot. In addition, the proposed method performs accurate projection mapping even after a projection target has been moved, which often occurs in shopping malls. In this paper, we employ the Ubiquitous Display (UD), a projector robot we are developing, to experimentally evaluate the effectiveness of the proposed method.
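The model-based refinement idea above, recovering a target's position and pose by aligning sensed 3D points to the target's model, can be sketched with the Kabsch least-squares rigid alignment. This is a generic illustration, not the authors' pipeline; the point sets are synthetic.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Least-squares rigid transform (Kabsch): observed ≈ R @ model + t.
    model_pts, observed_pts: (N, 3) corresponding point sets."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

# Synthetic target: rotate the model about z and translate it.
rng = np.random.default_rng(2)
model = rng.random((20, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
observed = model @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = estimate_pose(model, observed)
print(np.allclose(R, R_true))  # True
```

In practice the robot's self-localization would seed the correspondence search (e.g. ICP) before such an alignment; the sketch assumes correspondences are already known.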