• Title/Summary/Keyword: pose monitoring

Search results: 65

Laser pose calibration of ViSP for precise 6-DOF structural displacement monitoring

  • Shin, Jae-Uk; Jeon, Haemin; Choi, Suyoung; Kim, Youngjae; Myung, Hyun
    • Smart Structures and Systems / v.18 no.4 / pp.801-818 / 2016
  • To estimate structural displacement, a visually servoed paired structured light system (ViSP) was proposed in previous studies. The ViSP is composed of two sides facing each other, each with one or two laser pointers, a 2-DOF manipulator, a camera, and a screen. By calculating the positions of the laser beams projected onto the screens and the rotation angles of the manipulators, the relative 6-DOF displacement between the two sides can be estimated. Although the performance of the system has been verified through various simulations and experimental tests, it has the limitation that the accuracy of the displacement measurement depends on the alignment of the laser pointers. In deriving the kinematic equation of the ViSP, the laser pointers were assumed to be installed perfectly normal to the screen on the same side. In reality, however, this is very difficult to achieve due to installation errors. In other words, the pose of the laser pointers should be calibrated carefully before measuring the displacement. To calibrate the initial pose of the laser pointers, a specially designed jig device is fabricated and employed. Experimental tests have been performed to validate the proposed calibration method, and the results show that the initial pose calibration increases the accuracy of the 6-DOF displacement estimation.
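
The core idea stated in this abstract, estimating a relative 6-DOF displacement from where laser beams land on the opposite screen, can be illustrated with a toy least-squares fit. This is a sketch only, not the published ViSP kinematics (which also use the camera and 2-DOF manipulator angles); the laser origins/directions, the screen plane, and the slight beam tilt (added purely to keep the toy problem well-conditioned) are all assumptions.

```python
# Toy illustration: recover a 6-DOF relative pose between two facing sides
# from laser spots observed on the far screen, via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

ORIGINS = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0]], float)
DIRS = np.array([[0.1, 0, 1], [-0.1, 0, 1], [0, 0.1, 1], [0, -0.1, 1]], float)

def project_spots(params):
    """Laser spots (x, y) on the far side's screen, taken as the plane z = 0."""
    rot, t = R.from_rotvec(params[:3]).as_matrix(), params[3:]
    o = ORIGINS @ rot.T + t          # laser origins in the far side's frame
    d = DIRS @ rot.T                 # laser directions in the far side's frame
    s = -o[:, 2] / d[:, 2]           # ray / screen-plane intersection
    return (o + s[:, None] * d)[:, :2]

def residuals(params, measured):
    return (project_spots(params) - measured).ravel()

# Simulate spot measurements for a known displacement, then recover it.
true_pose = np.array([0.02, -0.01, 0.005, 0.03, -0.02, -1.0])   # rotvec + t [m]
spots = project_spots(true_pose)
sol = least_squares(residuals, x0=np.array([0, 0, 0, 0, 0, -0.9]), args=(spots,))
print("estimated 6-DOF displacement:", np.round(sol.x, 4))
```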

Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control (차량 안전 제어를 위한 파티클 필터 기반의 강건한 다중 인체 3차원 자세 추정)

  • Park, Joonsang; Park, Hyungwook
    • Journal of Auto-vehicle Safety Association / v.14 no.3 / pp.71-76 / 2022
  • In autonomous driving cars, 3D pose estimation can be one of the effective methods to enhance safety control for OOP (Out of Position) passengers. There have been many studies on human pose estimation using a camera. Previous methods, however, have limitations in automotive applications: CNN-based methods are unreliable due to unexplainable failures, and other methods perform poorly. This paper proposes a robust real-time multi-human 3D pose estimation architecture for in-vehicle use with a monocular RGB camera. Using a particle filter, our approach integrates CNN 2D/3D pose measurements with information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
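
A minimal, generic particle filter sketch of the fusion step this abstract describes (not the authors' architecture): noisy per-frame 3D keypoint measurements, as a CNN might provide, are smoothed by predict/update/resample cycles. The state (a single joint position), noise levels, and motion model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles
particles = rng.normal(0.0, 0.5, (N, 3))  # 3D position hypotheses for one joint
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, z, motion_std=0.02, meas_std=0.05):
    """One predict/update/resample cycle for a single 3D keypoint."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight by Gaussian likelihood of the measurement z.
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample (systematic) to avoid weight degeneracy.
    positions = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx], np.full(N, 1.0 / N)

# Feed a sequence of noisy measurements of a slowly moving joint.
truth = np.linspace([0.0, 0.0, 1.0], [0.3, 0.1, 1.0], 50)
for z in truth + rng.normal(0, 0.05, truth.shape):
    particles, weights = pf_step(particles, weights, z)
print("final estimate:", particles.mean(axis=0), "truth:", truth[-1])
```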

Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System (다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법)

  • Ji Dong Choi; Min Young Kim; Byeong Hak Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.303-310 / 2023
  • A single vision system limits the ability to accurately understand the spatial constraints and interactions between dynamic workers and robots, such as gantry and collaborative robots, during production manufacturing. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep-learning-based pose estimation model, to estimate a worker's dynamic two-dimensional pose in real time and reconstruct it into three-dimensional coordinates. The 3D coordinates reconstructed from the multi-domain vision system were aligned using the ICP algorithm and then registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
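
A minimal sketch of the rigid-alignment step underlying ICP-style registration (not the paper's full pipeline): with skeleton joints, correspondences between the two camera domains are known, so a single Kabsch/SVD solve aligns the two 3D keypoint sets. The 17-joint layout, noise level, and synthetic transform are assumptions.

```python
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical 3D joints of one worker seen from two calibrated camera domains.
rng = np.random.default_rng(1)
joints_cam_a = rng.normal(size=(17, 3))                  # e.g. 17 keypoints
true_R = np.linalg.qr(rng.normal(size=(3, 3)))[0]
true_R *= np.sign(np.linalg.det(true_R))                 # proper rotation (det = +1)
true_t = np.array([0.5, 0.2, 1.0])
joints_cam_b = joints_cam_a @ true_R.T + true_t + rng.normal(0, 0.01, (17, 3))

R_est, t_est = rigid_align(joints_cam_a, joints_cam_b)
err = np.linalg.norm(joints_cam_a @ R_est.T + t_est - joints_cam_b, axis=1).mean()
print("mean registration error [m]:", round(float(err), 4))
```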

Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae; Choi, Dong-Geol; Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method with a mean head pose error of less than 4.5° in real time.
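
A schematic multi-task CNN in PyTorch, illustrating the kind of joint face-detection and head-pose network the abstract describes; it is not the authors' architecture. Layer sizes, the 48x48 grayscale input, and the dummy targets are assumptions.

```python
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(              # shared convolutional features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 12 * 12, 128), nn.ReLU())
        self.face_head = nn.Linear(128, 2)       # face vs. background logits
        self.pose_head = nn.Linear(128, 3)       # yaw, pitch, roll

    def forward(self, x):
        h = self.trunk(x)
        return self.face_head(h), self.pose_head(h)

model = FacePoseNet()
patch = torch.randn(8, 1, 48, 48)                # batch of small grayscale patches
face_logits, pose = model(patch)
# Joint loss: classification plus pose regression (faces only, in a real pipeline).
loss = nn.CrossEntropyLoss()(face_logits, torch.randint(0, 2, (8,))) \
     + nn.SmoothL1Loss()(pose, torch.zeros(8, 3))
loss.backward()
print(face_logits.shape, pose.shape, float(loss))
```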

Pose Estimation of an Object from X-ray Images Based on Principal Axis Analysis

  • Roh, Young-Jun; Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2002.10a / pp.97.4-97 / 2002
  • 1. Introduction Pose estimation of a three-dimensional object has been studied in the robot vision area, and it is needed in a number of industrial applications such as process monitoring and control, assembly, and PCB inspection. In this research, we propose a new pose estimation method based on principal axis analysis. Here, it is assumed that the locations of the X-ray source and the image plane are predetermined and the object geometry is known. To this end, we define a dispersion matrix of an object, which is a discrete form of the inertia matrix of the object. It can be determined from a set of X-ray images; at least three images are required. Then, the pose information is obtained fro...

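A minimal sketch of principal-axis analysis on a single binary (e.g. thresholded X-ray) image: build the second-moment "dispersion" matrix of the object pixels and take its eigenvectors as the principal axes. The full method in the paper combines at least three X-ray views to recover 3D pose; this shows only the per-image building block, and the synthetic ellipse is an assumption.

```python
import numpy as np

# Synthetic binary image of a tilted ellipse standing in for a segmented object.
yy, xx = np.mgrid[0:200, 0:200]
u, v = xx - 120.0, yy - 90.0
c, s = np.cos(0.5), np.sin(0.5)
mask = ((c * u + s * v) / 60.0) ** 2 + ((-s * u + c * v) / 25.0) ** 2 <= 1.0

pts = np.column_stack([xx[mask], yy[mask]]).astype(float)
centroid = pts.mean(axis=0)
disp = (pts - centroid).T @ (pts - centroid) / len(pts)   # 2x2 dispersion matrix
eigvals, eigvecs = np.linalg.eigh(disp)                   # ascending eigenvalues

major_axis = eigvecs[:, -1]                               # largest-variance direction
angle = np.degrees(np.arctan2(major_axis[1], major_axis[0]))
print("centroid:", np.round(centroid, 1), "major-axis angle [deg]:", round(angle, 1))
```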

Pose-graph optimized displacement estimation for structural displacement monitoring

  • Lee, Donghwa; Jeon, Haemin; Myung, Hyun
    • Smart Structures and Systems / v.14 no.5 / pp.943-960 / 2014
  • A visually servoed paired structured light system (ViSP) was recently proposed as a novel method of estimating the 6-DOF (Degree-Of-Freedom) relative displacement in civil structures. In order to apply the ViSP to massive structures, multiple ViSP modules should be installed in a cascaded manner. In this configuration, the estimation errors are propagated through the ViSP modules. In order to resolve this problem, a displacement estimation error back-propagation (DEEP) method was proposed. However, the DEEP method has some disadvantages: the displacement range of each ViSP module must be constrained, and displacement errors are corrected sequentially, so the estimation errors of all modules are not considered concurrently. To address this problem, a pose-graph optimized displacement estimation (PODE) method is proposed in this paper. The PODE method is based on a graph-based optimization technique that considers all errors at the same time. Moreover, this method does not require any constraints on the movement of the ViSP modules. Simulations and experiments are conducted to validate the performance of the proposed method. The results show that the PODE method reduces the propagation errors in comparison with the previous work.
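
A minimal pose-graph sketch (1D translations only, not the full 6-DOF PODE formulation): nodes are module interfaces along a structure, edges are noisy relative displacement measurements plus one absolute anchor, and all errors are minimized at once by linear least squares rather than being propagated module by module. The node count and measurement values are synthetic assumptions.

```python
import numpy as np

n = 6                                    # number of nodes (module interfaces)
true_x = np.array([0.0, 1.0, 2.1, 2.9, 4.2, 5.0])
rng = np.random.default_rng(2)

edges = [(i, i + 1) for i in range(n - 1)] + [(0, n - 1)]      # chain + long edge
meas = [true_x[j] - true_x[i] + rng.normal(0, 0.02) for i, j in edges]

# Build the linear system: each edge gives one row  x_j - x_i = meas,
# plus an anchor row fixing node 0 at zero.
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for row, ((i, j), z) in enumerate(zip(edges, meas)):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0                                     # anchor node 0

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
print("optimized node displacements:", np.round(x_opt, 3))
print("error vs. truth:", np.round(x_opt - true_x, 3))
```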

Human Pose-based Labor Productivity Measurement Model

  • Lee, Byoungmin; Yoon, Sebeen; Jo, Soun; Kim, Taehoon; Ock, Jongho
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.839-846 / 2022
  • Traditionally, the construction industry has shown low labor productivity and productivity growth. To improve labor productivity, it must first be accurately measured. The existing method uses work-sampling techniques through observation of workers' activities at certain time intervals on site. However, a disadvantage of this method is that the results may differ depending on the observer's judgment and may be inaccurate when many activities are missed between observations. Therefore, this study proposes a model to automate labor productivity measurement by monitoring workers' actions using a deep-learning-based pose estimation method. The results are expected to contribute to productivity improvement on construction sites.

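One very simple way to turn per-frame pose keypoints into a work-sampling-style productivity figure (an illustration only, not the model proposed in the paper): label a frame "active" when the wrist keypoints move more than a threshold, and report the active-frame ratio. The keypoint layout, threshold, and synthetic motion data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
frames, joints = 300, 17                               # e.g. COCO-style skeleton
poses = np.cumsum(rng.normal(0, 0.002, (frames, joints, 2)), axis=0)
poses[100:180] += rng.normal(0, 0.02, (80, joints, 2)) # a burst of actual work

WRISTS = [9, 10]                                       # assumed wrist indices
motion = np.linalg.norm(np.diff(poses[:, WRISTS, :], axis=0), axis=2).mean(axis=1)
active = motion > 0.01                                 # per-frame activity label

productivity = active.mean()
print(f"active in {active.sum()} of {len(active)} frames "
      f"-> labor utilization ~ {productivity:.0%}")
```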

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim; Heejae Ahn; Sebeen Yoon; Taehoon Kim; Thomas H.-K. Kang; Young K. Ju; Minju Kim; Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is a burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation methods relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the 2D coordinates of the corresponding image plane through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement as it streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
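
A minimal sketch of the PnP step described in this abstract, using OpenCV's solvePnP rather than the paper's full keypoint-detection pipeline: the four corners of a construction material panel with a standardized size serve as the known 3D points. The 600 x 1200 mm panel size, camera intrinsics, and detected 2D corner coordinates are all hypothetical placeholders.

```python
import cv2
import numpy as np

# 3D corner coordinates of a hypothetical 600 x 1200 mm form panel (Z = 0 plane).
object_pts = np.array([[0, 0, 0], [0.6, 0, 0], [0.6, 1.2, 0], [0, 1.2, 0]],
                      dtype=np.float64)
# 2D pixel coordinates of the same corners, as a keypoint detector might return.
image_pts = np.array([[410, 620], [790, 615], [805, 180], [420, 170]],
                     dtype=np.float64)
# Hypothetical pinhole intrinsics (fx, fy, cx, cy) from a prior calibration.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                   # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                           # extrinsic rotation matrix
cam_pos = (-R.T @ tvec).ravel()                      # camera position in panel frame
print("solved:", ok, "\ncamera position [m]:", np.round(cam_pos, 3))
```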

Robot Posture Estimation Using Circular Image of Inner-Pipe (원형관로 영상을 이용한 관로주행 로봇의 자세 추정)

  • Yoon, Ji-Sup; Kang, E-Sok
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. The inner-pipe crawling robot is usually equipped with a lighting device and a camera on its head for monitoring and inspection of defects on the pipe wall and/or for maintenance operations. The proposed methodology uses these devices without introducing extra sensors and is based on the fact that the position and intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and its camera. The proposed algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. Based on the fact that the vanishing point of the reflected light moves in the opposite direction to the camera rotation, the camera rotation angle can be estimated. And based on the fact that the brightest parts of the reflected light move in the same direction as the camera translation, the camera position can be obtained. To investigate the performance of the algorithm, it is applied to a sewage maintenance robot.
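
A small sketch of the brightest-region cue described in this abstract (not the paper's full algorithm, which also tracks the vanishing point of the reflected light): blur the camera frame and take the offset of the brightest blob from the image center as an indicator of how the camera has translated inside the pipe. The synthetic frame and the pixel-to-displacement interpretation are assumptions.

```python
import cv2
import numpy as np

h, w = 480, 640
yy, xx = np.mgrid[0:h, 0:w]
# Synthetic frame: a bright reflection centered off-axis, standing in for the
# lighting device's reflection on the pipe wall, plus sensor noise.
frame = 180.0 * np.exp(-(((xx - 380) ** 2 + (yy - 210) ** 2) / (2 * 60.0 ** 2)))
frame = np.clip(frame + np.random.default_rng(4).normal(0, 5, frame.shape),
                0, 255).astype(np.uint8)

blurred = cv2.GaussianBlur(frame, (31, 31), 0)       # suppress speckle noise
_, _, _, bright_xy = cv2.minMaxLoc(blurred)          # location of the max pixel

offset = np.array(bright_xy) - np.array([w / 2, h / 2])
print("brightest spot:", bright_xy, "offset from optical axis [px]:", offset)
# Under the abstract's assumption, this offset moves in the same direction as
# the camera's translation, so it can feed into the robot pose estimate.
```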

Personalized VDT Syndrome Prevention System Using PoseNet (PoseNet을 이용한 개인 맞춤형 VDT 증후군 예방 시스템)

  • Young-bok Cho
    • Journal of Practical Engineering Education / v.16 no.2 / pp.115-119 / 2024
  • With the increase in the number of ICT industry workers, there is a demand for research on preventing VDT syndrome. However, existing posture correction products rely heavily on cameras or sensors in wearable devices. In this paper, we have developed a posture correction system that uses built-in cameras and circular pressure sensors to collect posture information. Additionally, the system provides a personalized service by capturing the user's correct posture initially and monitoring the user's posture against that reference. By precisely correcting posture during users' daily tasks, this system aims to prevent and alleviate VDT syndrome, ultimately enhancing the efficiency of ICT industry workers.
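
A minimal sketch of the personalization idea in this abstract: capture the user's correct posture once as a reference, then flag frames whose neck inclination deviates from it by more than a tolerance. The keypoint names, the 2D coordinates, and the 10-degree tolerance are assumptions, not values from the paper.

```python
import numpy as np

def neck_angle(kp):
    """Angle of the ear-to-shoulder line from vertical, in degrees."""
    ear, shoulder = np.asarray(kp["ear"]), np.asarray(kp["shoulder"])
    dx, dy = ear - shoulder            # image coords: y grows downward
    return np.degrees(np.arctan2(abs(dx), -dy))

reference = {"ear": (312, 140), "shoulder": (300, 260)}   # captured at setup time
current = {"ear": (355, 170), "shoulder": (305, 262)}     # a later webcam frame

deviation = neck_angle(current) - neck_angle(reference)
if deviation > 10.0:                                       # tolerance in degrees
    print(f"posture alert: neck tilted {deviation:.1f} deg beyond your baseline")
else:
    print(f"posture OK (deviation {deviation:.1f} deg)")
```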