• Title/Abstract/Keyword: pose estimation

388 results (processing time: 0.025 s)

6 DOF Pose Estimation of Polyhedral Objects Based on Geometric Features in X-ray Images

  • Kim, Jae-Wan; Roh, Young-Jun; Cho, Hyung-S.; Jeon, Hyoung-Jo; Kim, Hyeong-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2001 / pp.63.4-63 / 2001
  • An x-ray vision system can be a unique way to monitor and analyze, in real time, the motion of mechanical parts that are invisible from the outside. Our problem is to identify the pose, i.e., the position and orientation, of an object from x-ray projection images. It is assumed here that the x-ray imaging conditions, including the relative coordinates of the x-ray source and the image plane, are predetermined, and that the object geometry is known. Under these conditions, the x-ray image of an object at a given pose can be computed from an a priori known x-ray projection image model. The approach rests on the assumption that the pose of an object is uniquely determined by a given x-ray projection image. Thus, once we have a numerical model of the x-ray imaging process, the x-ray image of the known object at any pose can be estimated ...
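
The approach above amounts to a forward projection model: render the polyhedron's vertices through the known x-ray source onto the image plane, then score a pose hypothesis by its reprojection error. A minimal sketch of that idea, with entirely hypothetical geometry (point source on the z-axis, detector plane at z = 0), not the paper's actual imaging setup:

```python
import numpy as np

def project_vertices(vertices, R, t, source):
    """Project object vertices (N,3) at pose (R, t) through a point x-ray
    source onto the detector plane z = 0."""
    pts = vertices @ R.T + t                       # object -> world frame
    # ray x(k) = source + k*(p - source); z = 0 gives k = -s_z/(p_z - s_z)
    k = -source[2] / (pts[:, 2] - source[2])
    proj = source + k[:, None] * (pts - source)
    return proj[:, :2]                             # (u, v) on the detector

def pose_score(observed_uv, vertices, R, t, source):
    """Mean reprojection distance of a pose hypothesis (lower is better)."""
    return np.mean(np.linalg.norm(
        project_vertices(vertices, R, t, source) - observed_uv, axis=1))

# usage: a unit cube 100 mm above the detector, source 1 m above it
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
R0, t0 = np.eye(3), np.array([0.0, 0.0, 100.0])
src = np.array([0.0, 0.0, 1000.0])
uv = project_vertices(cube, R0, t0, src)
print(pose_score(uv, cube, R0, t0, src))           # 0.0 at the true pose
```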

자동 3차원 얼굴 포즈 정규화 기법 (Automatic 3D Head Pose-Normalization using 2D and 3D Interaction)

  • 유선진; 김중락; 이상윤
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / Proceedings of the 2007 IEEK Summer Conference / pp.211-212 / 2007
  • Pose variation is a significant problem in 2D face recognition. To address it, various approaches use a 3D face acquisition system capable of generating multi-view images. However, this introduces another pose estimation problem: normalizing the 3D face data. This paper presents a 3D head pose-normalization method using 2D and 3D interaction. The proposed method uses 2D information from an AAM (Active Appearance Model) and 3D information from a 3D normal vector. To verify the performance of the proposed method, we designed an experiment using 2.5D face recognition. Experimental results showed that the proposed method is robust against pose variation.
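
The 3D side of the method turns on estimating a face normal and rotating it onto the camera axis. A small sketch of that normalization step, assuming the AAM stage has already yielded 3D landmark points (the synthetic cloud below is a placeholder, not the paper's data):

```python
import numpy as np

def face_normal(points3d):
    """Least-squares plane normal of 3D landmarks (N,3): the right singular
    vector with the smallest singular value of the centered cloud."""
    centered = points3d - points3d.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n if n[2] > 0 else -n                   # orient toward camera +z

def rotation_to_frontal(n, target=np.array([0.0, 0.0, 1.0])):
    """Rotation turning unit normal n onto the camera axis (Rodrigues)."""
    v = np.cross(n, target)
    c, s = float(n @ target), float(np.linalg.norm(v))
    if s < 1e-12:
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# usage: frontalize a roughly planar synthetic landmark cloud
pts = np.random.rand(68, 3)
pts[:, 2] *= 0.05                                  # flatten along z
normalized = pts @ rotation_to_frontal(face_normal(pts)).T
```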

TRT Pose를 이용한 모바일 로봇의 사람 추종 기법 (Development of Human Following Method of Mobile Robot Using TRT Pose)

  • 최준현; 주경진; 윤상석; 김종욱
    • IEMEK Journal of Embedded Systems and Applications / Vol. 15, No. 6 / pp.281-287 / 2020
  • In this paper, we propose a method for estimating a walking direction by which a mobile robots follows a person using TRT (Tensor RT) pose, which is motion recognition based on deep learning. Mobile robots can measure individual movements by recognizing key points on the person's pelvis and determine the direction in which the person tries to move. Using these information and the distance between robot and human, the mobile robot can follow the person stably keeping a safe distance from people. The TRT Pose only extracts key point information to prevent privacy issues while a camera in the mobile robot records video. To validate the proposed technology, experiment is carried out successfully where human walks away or toward the mobile robot in zigzag form and the robot continuously follows human with prescribed distance.
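
The abstract does not spell out the control law, so the sketch below pairs a hip-keypoint heading estimate with a generic proportional follower; the coordinate conventions, keypoint format, and gains are assumptions, not the paper's:

```python
import numpy as np

def heading_from_hips(left_hip, right_hip):
    """Walking direction (rad) as the normal of the hip line; keypoints are
    (x, z) ground-plane positions in the robot frame."""
    lx, lz = left_hip
    rx, rz = right_hip
    nx, nz = -(rz - lz), (rx - lx)                 # hip line rotated 90 deg
    return float(np.arctan2(nz, nx))

def follow_cmd(distance, bearing, target_dist=1.0, kv=0.6, kw=1.2):
    """Proportional forward/turn velocities keeping the target distance."""
    v = kv * (distance - target_dist)              # m/s toward the person
    w = kw * bearing                               # rad/s to face the person
    return v, w

print(heading_from_hips((0.2, 2.0), (-0.2, 2.1)))
print(follow_cmd(distance=2.3, bearing=0.15))
```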

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi; Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in human-computer interaction and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between the actual hand observed by the Kinect and a hypothesized 3D hand model. Since the 3D hand pose has 23 degrees of freedom, tracking the hand articulation imposes an excessive computational burden when minimizing this shape discrepancy. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model; it was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we showed that the proposed method improves hand-part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying the hand area and in tracking and recovering the 3D hand pose in real time.
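
A toy version of the classification stage, using depth-difference (Haar-like) features and scikit-learn's random forest in place of the paper's training pipeline; the random depth map below stands in for depth images rendered from the 3D hand model, and the feature layout is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def haar_depth_features(depth, centers, offsets):
    """For each sample pixel, the difference of depth at two probe pixels
    displaced by fixed offsets: a Haar-like depth-comparison feature."""
    h, w = depth.shape
    feats = []
    for y, x in centers:
        row = []
        for dy1, dx1, dy2, dx2 in offsets:
            p1 = depth[np.clip(y + dy1, 0, h - 1), np.clip(x + dx1, 0, w - 1)]
            p2 = depth[np.clip(y + dy2, 0, h - 1), np.clip(x + dx2, 0, w - 1)]
            row.append(p1 - p2)
        feats.append(row)
    return np.asarray(feats)

# random data standing in for depth maps rendered from the 3D hand model
rng = np.random.default_rng(0)
depth = rng.random((240, 320)).astype(np.float32)
centers = rng.integers(8, 200, size=(500, 2))      # sampled pixel locations
offsets = rng.integers(-8, 8, size=(32, 4))        # 32 probe-pair offsets
X = haar_depth_features(depth, centers, offsets)
y = rng.integers(0, 17, size=len(X))               # 17 hand-part labels
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))
```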

다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법 (Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System)

  • 최지동; 김민영; 김병학
    • IEMEK Journal of Embedded Systems and Applications / Vol. 18, No. 6 / pp.303-310 / 2023
  • A single vision system is limited in its ability to accurately capture the spatial constraints and interactions between dynamic workers and robots, such as gantry robots and collaborative robots, during production. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep-learning-based pose estimation model, to estimate a worker's dynamic two-dimensional pose in real time and reconstructs it into three-dimensional coordinates. The 3D coordinates reconstructed from the multi-domain vision system were aligned using the ICP algorithm and then registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing-process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
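
The registration step can be illustrated with a closed-form rigid alignment (Kabsch), which is what each ICP iteration reduces to once point correspondences are fixed; the skeletons below are synthetic, and the paper's full multi-camera ICP pipeline is not reproduced:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch: the rigid (R, t) best mapping src (N,3) onto dst (N,3).
    With known joint correspondences this is exactly the update solved
    inside each ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation only
    return R, cd - R @ cs

# usage: register one camera's 18-joint skeleton into the other's frame
rng = np.random.default_rng(1)
joints_a = rng.random((18, 3))
ang = np.pi / 6
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
joints_b = joints_a @ R_true.T + np.array([0.5, 0.0, 0.2])
R, t = rigid_align(joints_a, joints_b)
print(np.linalg.norm(joints_a @ R.T + t - joints_b, axis=1).mean())  # ~0 m
```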

수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법 (Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing)

  • 이상훈; 송진모; 배종수
    • Journal of the Korea Institute of Military Science and Technology / Vol. 18, No. 3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark for the autonomous landing of a vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV). The proposed method takes a distinctive approach to the pose estimation problem: we combine extrinsic parameters from known and unknown 3D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the resulting optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
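
The abstract's core computational idea, solving a stacked linear inhomogeneous system by SVD, can be sketched as follows; the 12-equation, 6-unknown system below is a hypothetical placeholder, not the paper's actual equations:

```python
import numpy as np

def solve_pose_lstsq(A, b):
    """Least-squares solution of the stacked system A x = b via SVD; here
    x would collect the 6-DOF camera parameters of such a formulation."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)        # pseudo-inverse spectrum
    return Vt.T @ (s_inv * (U.T @ b))

# usage with a hypothetical overdetermined system in 6 unknowns
rng = np.random.default_rng(2)
A = rng.standard_normal((12, 6))
x_true = rng.standard_normal(6)
print(np.allclose(solve_pose_lstsq(A, A @ x_true), x_true))  # True
```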

멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구 (Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera)

  • 원대연; 오현동; 허성식; 박봉균; 안종선; 심현철; 탁민제
    • Journal of the Korean Society for Aeronautical and Space Sciences / Vol. 37, No. 12 / pp.1209-1216 / 2009
  • This paper describes a system that uses image information acquired from multiple cameras installed in an indoor space to estimate and control the attitude of a small unmanned aerial vehicle. The proposed system is intended to overcome the limitations of outdoor flight tests and to provide an efficient flight test environment; because no separate onboard sensors are needed to measure the UAV's position and attitude, the testbed can be built with low-cost equipment. We introduce the camera calibration, marker detection, and pose estimation techniques required to implement the system, and we demonstrate the validity and performance of the proposed method through experimental results obtained on the testbed.
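
The pose estimation step of such a testbed is commonly a PnP problem: known marker geometry on the vehicle plus 2D detections from a calibrated fixed camera. A sketch using OpenCV's solvePnP, with placeholder marker layout, detections, and intrinsics (the paper's actual calibration is not given here):

```python
import numpy as np
import cv2

# 3D marker-corner coordinates on the vehicle, in the body frame (m)
object_pts = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                       [0.05, 0.05, 0], [-0.05, 0.05, 0]], np.float32)
# matching 2D detections from one fixed, calibrated camera (pixels)
image_pts = np.array([[310, 242], [330, 241], [331, 262], [309, 263]],
                     np.float32)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                    # placeholder intrinsics
dist = np.zeros(5)                                 # assume distortion removed

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                         # vehicle pose w.r.t. camera
print(ok, tvec.ravel())
```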

원형관로 영상을 이용한 관로주행 로봇의 자세 추정 (Robot Posture Estimation Using Circular Image of Inner-Pipe)

  • 윤지섭; 강이석
    • The Transactions of the Korean Institute of Electrical Engineers D (Systems and Control) / Vol. 51, No. 6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. An inner-pipe crawling robot is usually equipped with a lighting device and a camera on its head for monitoring and inspecting defects on the pipe wall and/or for maintenance operations. The proposed methodology uses these existing devices, without introducing extra sensors, and is based on the fact that the position and intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and its camera. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. The camera rotation angle can be estimated from the fact that the vanishing point of the reflected light moves in the direction opposite to the camera rotation, and the camera position can be obtained from the fact that the brightest part of the reflected light moves in the same direction as the camera translation. To investigate its performance, the algorithm was applied to a sewage maintenance robot.
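
Both cues reduce to locating features of the reflected light in the image. A toy sketch of the translation cue, tracking the brightest spot's offset from the image center (the threshold and geometry are assumptions, not the paper's calibrated mapping):

```python
import numpy as np

def bright_centroid(gray):
    """Centroid of the brightest pixels, a proxy for the reflected-light
    spot on the pipe wall."""
    ys, xs = np.nonzero(gray >= 0.9 * gray.max())
    return xs.mean(), ys.mean()

def camera_shift(gray, image_center):
    """Offset of the light spot from the image center: per the paper's
    observation it moves with the camera translation, while the vanishing
    point of the reflection moves opposite to the camera rotation."""
    bx, by = bright_centroid(gray)
    return bx - image_center[0], by - image_center[1]

img = np.zeros((480, 640))
img[200:220, 340:360] = 255.0                      # synthetic light spot
print(camera_shift(img, (320, 240)))               # spot offset in pixels
```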

YOLOv5 및 OpenPose를 이용한 건설현장 근로자 탐지성능 향상에 대한 연구 (A Study on the Improvement of Construction Site Worker Detection Performance Using YOLOv5 and OpenPose)

  • 윤영근; 오태근
    • The Journal of the Convergence on Culture Technology / Vol. 8, No. 5 / pp.735-740 / 2022
  • The construction industry has the highest number of fatalities of any industry, and despite various institutional improvements the number of deaths has not decreased significantly. Accordingly, real-time safety management that applies artificial intelligence (AI) to CCTV footage is drawing attention. Research on AI-based worker detection in construction site images is ongoing, but the complex backgrounds typical of construction sites limit the achievable performance. In this study, to improve worker detection and pose estimation, the YOLO model and the OpenPose model were fused, improving detection of workers under complex and diverse conditions. The result is expected to be highly useful for monitoring workers' unsafe behavior and managing their health in the future.
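
A rough sketch of the fusion logic, in which pose keypoints confirm person detections in cluttered scenes. YOLOv5 is loaded through its public torch.hub entry point, while `estimate_pose` is a hypothetical stand-in for an OpenPose call; the keypoint-count rule is an assumption, not the paper's fusion criterion:

```python
import torch

# public torch.hub entry point for YOLOv5 (downloads pretrained weights)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

def detect_workers(frame, estimate_pose, kp_thresh=8):
    """Keep a person box only if enough body keypoints are found in it;
    `estimate_pose` is a hypothetical crop -> keypoint-list callable."""
    workers = []
    for *box, conf, cls in model(frame).xyxy[0].tolist():
        if int(cls) != 0:                          # COCO class 0 = person
            continue
        x1, y1, x2, y2 = map(int, box)
        keypoints = estimate_pose(frame[y1:y2, x1:x2])
        if len(keypoints) >= kp_thresh:            # pose confirms detection
            workers.append((box, conf, keypoints))
    return workers
```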

서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획 (Object Pose Estimation and Motion Planning for Service Automation System)

  • 권영우; 이동영; 강호선; 최지욱; 이인호
    • The Journal of Korea Robotics Society / Vol. 19, No. 2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; to grasp a variety of objects, a gripper with a high degree of freedom is required. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The grasping points are grasped by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we used a CNN (Convolutional Neural Network)-based algorithm, and we used a point cloud to estimate each object's 6D pose. Using the recognized object's 6D pose information, we generate grasping points for the multi-degree-of-freedom gripper and re-grasp the object in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, proceeding through identification, 6D pose estimation, and grasping, and the success or failure of barcode recognition was recorded to demonstrate the effectiveness of the proposed system.
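
The paper's CNN-plus-point-cloud pipeline is not reproduced here; as a small illustration of turning a segmented object cloud into a grasp, the sketch below derives a grasp point and closing direction by PCA, a common heuristic and not necessarily the authors' method:

```python
import numpy as np

def grasp_from_cloud(points):
    """Grasp a segmented object cloud (N,3) at its centroid, closing the
    gripper along the thinnest principal axis (smallest-variance PCA
    direction), which tends to fit between parallel jaws."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                        # point, closing direction

# usage: an elongated box-like cloud is grasped across its thinnest side
rng = np.random.default_rng(3)
cloud = rng.standard_normal((2000, 3)) * np.array([0.10, 0.03, 0.01])
p, d = grasp_from_cloud(cloud)
print(p.round(3), d.round(2))                      # closing axis ~ ±z
```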