• Title/Summary/Keyword: pose estimation


An Evaluation Method for the Musculoskeletal Hazards in Wood Manufacturing Workers Using MediaPipe (MediaPipe를 이용한 목재 제조업 작업자의 근골격계 유해요인 평가 방법)

  • Jung, Sungoh;Kook, Joongjin
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.2
    • /
    • pp.117-122
    • /
    • 2022
  • This paper proposes a method for evaluating the working postures of manufacturing workers with MediaPipe as a basis for assessing musculoskeletal risk factors. Recently, musculoskeletal disorders (MSDs) caused by repeated working postures at industrial sites have emerged as one of the biggest problems in the occupational health field, attracting growing public interest. The Korea Occupational Safety and Health Agency recommends tools such as the NIOSH Lifting Equation, the Ovako Working-posture Analysis System (OWAS), Rapid Upper Limb Assessment (RULA), and Rapid Entire Body Assessment (REBA) for quantitatively estimating the musculoskeletal risk arising from workers' repeated working postures; these tools, however, rely on manual observation by trained evaluators. To compensate for these shortcomings, the system proposed in this study obtains joint positions by estimating the worker's posture with MediaPipe's pose estimation model. Joint angles are computed from the joint positions using inverse kinematics and substituted into the REBA equation to calculate the load level of the working posture. The calculated results were compared with an expert's image-based REBA evaluation, and results with large errors were fed back to the expert for re-evaluation.
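
The core step described above (joint positions → joint angles → posture score) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the landmark coordinates are assumed to come from MediaPipe, and the threshold table is a simplified stand-in for the full REBA trunk scoring, so both function names are hypothetical.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def trunk_flexion_score(angle_from_upright):
    """Simplified REBA-style trunk step: larger flexion -> higher load score."""
    a = abs(angle_from_upright)
    if a <= 5:
        return 1   # roughly upright
    if a <= 20:
        return 2   # slight flexion/extension
    if a <= 60:
        return 3   # moderate flexion
    return 4       # severe flexion
```

For example, three landmarks forming a right angle at the middle joint yield 90°, which would then be looked up in the relevant REBA table.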

Optimal Position Estimation of a Service Robot using GVG Nodes and Beacon Trilateral Method (비콘 삼변측량과 보로노이 세선화를 이용한 서비스로봇의 최적 이동위치 추정)

  • Lim, Su-Jong;Lee, Woo-Jin;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.8-11
    • /
    • 2021
  • This paper proposes a method of estimating the optimal position of a robot so that it can approach and serve a user located outside its sensing area in an indoor environment. First, the user's indoor location was estimated by applying trilateration to the beacon-tag module data, and Voronoi thinning was applied to set the optimal movement goal from the estimated user location. Based on the generated nodes, the final position was estimated by jointly considering the user's location, obstacles, and the movement path, and the positioning accuracy of the service robot was verified by driving the actual robot platform to the destination.
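
The trilateration step above can be sketched as a standard linearized least-squares solve over beacon ranges; this is a generic textbook formulation, not the paper's exact code, and the function name is illustrative.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 beacon anchors and ranges.

    Linearizes ||p - a_i||^2 = d_i^2 by subtracting the equation of the
    last anchor, which cancels the quadratic ||p||^2 term.
    """
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (a[:-1] - a[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2) \
        + np.sum(a[:-1] ** 2, axis=1) - np.sum(a[-1] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With noisy ranges from more than three beacons, the same least-squares solve averages out the range errors.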


Estimating Interest Levels based on Visitor Behavior Recognition Towards a Guide Robot (안내 로봇을 향한 관람객의 행위 인식 기반 관심도 추정)

  • Ye Jun Lee;Juhyun Kim;Eui-Jung Jung;Min-Gyu Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.463-471
    • /
    • 2023
  • This paper proposes a method to estimate the level of interest that visitors show towards a specific target, a guide robot, in spaces such as exhibition halls and museums where large numbers of visitors may attend to a particular subject. To accomplish this, we apply deep learning-based behavior recognition and object tracking to multiple visitors and, on that basis, derive behavior analyses and visitor interest levels. To implement this research, a personalized dataset tailored to the characteristics of exhibition-hall and museum environments was created, and a deep learning model was built on it. Four scenarios that visitors can exhibit were classified, and predicted and experimental values were obtained for them, completing the validation of the interest estimation method proposed in this paper.

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min;Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.349-355
    • /
    • 2015
  • Underwater robots generally outperform humans at tasks under underwater constraints such as high pressure and limited light. To properly diagnose structures in an underwater environment using a remotely operated vehicle (ROV), it is important for the vehicle to keep its position and orientation autonomously so as to avoid additional control effort. In this paper, we propose an efficient method to assist the operator against the various disturbances acting on an ROV used for diagnosing underwater structures. The conventional AHRS-based bearing estimation did not work well because of measurement errors caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS to estimate the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. Thus, we suggest an efficient method for fusing the vision sensor and the AHRS using a criterion based on the amount of blur in the image. To evaluate the amount of blur, we adopt two methods: quantifying the high-frequency components via power spectral density analysis of the 2D discrete-Fourier-transformed image, and identifying the blur parameter via cepstrum analysis. We evaluate the robustness of the visual odometry and of the blur estimation methods with respect to changes in lighting and distance, and verify experimentally that the cepstrum-based blur estimation shows the better performance.
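
The first blur criterion above, high-frequency energy in the 2D DFT power spectrum, can be sketched as below. This is a minimal illustration of the idea under assumed conventions (a radial frequency cutoff at a fraction of the Nyquist radius); the function name and cutoff value are not from the paper.

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Share of spectral power beyond `cutoff` of the Nyquist radius.

    Blurred images concentrate power at low frequencies, so this ratio
    drops as blur increases.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    P = np.abs(F) ** 2                       # power spectrum
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)     # radial frequency index
    mask = r > cutoff * min(h, w) / 2
    return P[mask].sum() / P.sum()
```

Comparing this ratio between frames gives a simple per-frame blur ranking that a fusion weight could be derived from.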

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.4
    • /
    • pp.187-194
    • /
    • 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it alternates inlier-set refinement and motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate weighted by the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and an implementation of the 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
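
The motion-refinement step in feature-based RGB-D odometry rests on estimating a rigid transform from matched 3D feature points. A standard least-squares (Kabsch) solution is sketched below as a generic building block; it is not the paper's exact pipeline, which adds inlier refinement around this solve.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch, no scale).

    P, Q: (N, 3) arrays of matched 3D points from consecutive frames.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In a RANSAC-style loop, the residuals ||Q - (P @ R.T + t)|| would then be thresholded to refine the inlier set before re-solving.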

Vehicle Dynamics and Road Slope Estimation based on Cascade Extended Kalman Filter (Cascade Extended Kalman Filter 기반의 차량동특성 및 도로종단경사 추정)

  • Kim, Moon-Sik;Kim, Chang-Il;Lee, Kwang-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.9
    • /
    • pp.208-214
    • /
    • 2014
  • The vehicle dynamic states used in various advanced driving-safety systems are influenced by road geometry. Among the road geometry information, the vehicle pitch angle, which is influenced by road slope and by acceleration and deceleration, is an essential parameter in pose estimation for navigation systems, advanced adaptive cruise control on sag roads, and other applications. Although road slope is an essential parameter, no method of measuring it has been commercialized; digital maps containing road geometry data and high-precision systems such as DGPS (Differential Global Positioning System)-based RTK (Real-Time Kinematics) are not commonly used. In this paper, a low-cost road slope estimation method based on a cascade extended Kalman filter (CEKF) is proposed. It uses two cascaded EKFs, which take as inputs several measured vehicle states, such as yaw rate, longitudinal acceleration, lateral acceleration, and the wheel speeds of the rear tires, together with a 3-DOF (degree-of-freedom) vehicle dynamics model. The performance of the proposed estimation algorithm is evaluated by simulation with the CarSim dynamics tool and by a T-car-based experiment.
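
The underlying measurement relation, that an IMU's longitudinal accelerometer reads vehicle acceleration plus a gravity component g·sin(θ) from road slope, can be illustrated with a scalar EKF. This is a deliberately simplified single-filter sketch, not the paper's cascade of two EKFs with a 3-DOF model; the function and noise parameters are assumptions for illustration.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def estimate_slope(a_imu, v_wheel, dt, q=1e-4, r=0.05):
    """Scalar EKF for road slope theta from a_imu = dv/dt + g*sin(theta).

    a_imu:   measured longitudinal acceleration sequence
    v_wheel: wheel-speed-derived velocity sequence (same length)
    """
    theta, P = 0.0, 1.0
    out = []
    vdot = np.gradient(v_wheel, dt)      # vehicle acceleration from wheel speed
    for a, vd in zip(a_imu, vdot):
        P += q                            # predict: slope as a random walk
        z = a - vd                        # measurement of g*sin(theta)
        H = G * np.cos(theta)             # Jacobian of the measurement model
        K = P * H / (H * P * H + r)       # Kalman gain
        theta += K * (z - G * np.sin(theta))
        P *= (1 - K * H)
        out.append(theta)
    return np.array(out)
```

On a constant 5% grade at constant speed, the estimate settles near the true slope within a few samples under these assumed noise settings.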

State Estimation for Underwater Vehicles by Means of Cascade Observers (계단식 관측기에 의한 수중 차의 상태추정)

  • Kim, Dong-Hun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.2
    • /
    • pp.168-173
    • /
    • 2009
  • This paper investigates the problem of estimating vehicle velocity and propeller angular velocity for an underwater vehicle. Inspired by, but different from, a high-gain observer, the cascade observer features a cascade structure and adaptive observer gains, by which it attempts to overcome some of the typical problems a high-gain observer faces. As with a high-gain observer, the cascade observer's structure is simple and universal in the sense that it is independent of the system dynamics and parameters. The cascade observer is used to estimate velocity from the measured position: in the first step of the observer the output is estimated, in the second step the first-order derivative of the measured output is estimated, and in general the nth-order derivative of the output is estimated in the (n+1)th step. It is shown that the proposed observer guarantees global asymptotic stability. Simulation results show that the proposed observer scheme estimates vehicle velocity and propeller angular velocity better than a scheme based on an existing observer.
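
The two-step structure, estimating the output and then its derivative from position measurements alone, can be sketched as a minimal discrete observer. This illustrates the model-free cascade idea only; the fixed gains below are assumptions chosen for stability of this toy discretization, not the paper's adaptive gains.

```python
import numpy as np

def cascade_velocity_observer(y, dt, l1=2.0, l2=0.5):
    """Two-step observer: stage 1 tracks the measured output y,
    stage 2 tracks its first derivative (the velocity).

    Independent of any plant model: only the measurement sequence is used.
    """
    x1, x2 = y[0], 0.0          # estimated output and its derivative
    vel = []
    for yk in y:
        e = yk - x1                       # output estimation error
        x1 += dt * x2 + l1 * e            # step 1: output estimate
        x2 += (l2 / dt) * e               # step 2: derivative estimate
        vel.append(x2)
    return np.array(vel)
```

For a position ramp y = v·t the derivative estimate converges to v; with noisy measurements the gains trade convergence speed against noise amplification, which is the problem the adaptive gains address.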

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.182-189
    • /
    • 2023
  • Depth estimation is a key technology in 3D map generation for the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods are accurate but expensive and low-resolution, while camera-based methods are more affordable and offer higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and the weight size of the self-attention operation is reduced to keep the computational cost low. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using an occupancy grid with an octree structure. The proposed network is evaluated using synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.
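
The depth-to-point-cloud step above is the standard pinhole back-projection; a minimal sketch follows, assuming known camera intrinsics (fx, fy, cx, cy). The function name is illustrative, and the pose transform and octree mapping stages are omitted.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) to an (N, 3) camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole model
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop zero-depth pixels
```

Each frame's cloud would then be rotated and translated by the estimated camera pose before insertion into the occupancy grid.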

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto an initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters describing the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are moved by use of a Radial Basis Function (RBF). From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
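
The final RBF fitting step, spreading control-point displacements to the surrounding non-feature vertices, can be sketched as standard Gaussian RBF interpolation. This is a generic illustration under an assumed Gaussian kernel; the paper does not specify the kernel, and the function name and sigma are hypothetical.

```python
import numpy as np

def rbf_deform(ctrl, disp, pts, sigma=1.0):
    """Propagate control-point displacements to nearby mesh points.

    ctrl: (C, D) control points, disp: (C, D) their displacements,
    pts:  (N, D) points to deform via Gaussian RBF interpolation.
    """
    d2 = ((pts[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))             # (N, C) basis values
    K = np.exp(-((ctrl[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
               / (2 * sigma ** 2))                   # (C, C) kernel matrix
    W = np.linalg.solve(K, disp)                     # RBF weights
    return pts + Phi @ W
```

By construction the interpolant reproduces the control-point displacements exactly, while points farther than a few sigma from any control point barely move.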

Head Orientation-based Gaze Tracking (얼굴의 움직임을 이용한 응시점 추적)

  • ;R.S. Ramakrishna
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1999.10b
    • /
    • pp.401-403
    • /
    • 1999
  • In this paper, we propose an effective and fast method for locating the facial feature points (eyes, nose, and mouth) and computing the head orientation for gaze-point tracking against an unconstrained background. Many face detection methods have been studied, but many of them are either ineffective or require restrictive conditions. The method proposed here is based on binarized images and measures similarity using complete-graph matching: after labeling the image binarized at an arbitrary threshold, the similarity of each pair of blocks is computed, and the pair most similar to a pair of eyes is selected as the eyes. After the eyes are found, the mouth and nose are located. The average processing time for a 360×240 image was within 0.2 seconds, and within 0.15 seconds when the search region was reduced by predicting the next search area. To estimate head motion, we used template matching based on the angles formed by the feature points. Experiments conducted under various lighting conditions and with several users showed good results in both speed and accuracy. In addition, since only intensity information is used, the method also works with monochrome cameras, so an economic benefit can be expected.
