• Title/Abstract/Keyword: Pose and distance accuracy


Shape Descriptor for 3D Foot Pose Estimation

  • 송호근;강기현;정다운;윤용인
    • 한국정보통신학회논문지, Vol. 14 No. 2, pp.469-478, 2010
  • This paper proposes an effective shape descriptor for estimating 3D foot pose. To reduce processing time, a specially fabricated 3D foot model was projected into 2D to build a foot shape database, and a 2.5D image database was constructed by adding 3D pose summary information as metadata. A modified Centroid Contour Distance is then proposed, which has a small feature space and outperforms other shape descriptors in pose estimation. To analyze the performance of the proposed descriptor, retrieval accuracy and space-time complexity were computed and compared with existing methods. Experimental results show that the proposed descriptor is more effective than existing methods in terms of feature extraction time and pose estimation accuracy.
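As a reference for the descriptor family mentioned above, here is a minimal sketch of a plain Centroid Contour Distance signature (not the modified variant proposed in the paper); the function name, the angular binning scheme, and the normalization are our own assumptions.

```python
import numpy as np

def centroid_contour_distance(contour, num_bins=64):
    """Plain Centroid Contour Distance (CCD) signature of a closed silhouette contour.

    contour: (N, 2) array of (x, y) boundary points.
    Returns a num_bins-dimensional, scale-normalized distance signature.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = contour - centroid
    dist = np.hypot(d[:, 0], d[:, 1])       # boundary-point distance to the centroid
    angle = np.arctan2(d[:, 1], d[:, 0])    # boundary-point angle around the centroid

    # Bucket the distances by angle so every silhouette yields a fixed-length signature.
    bins = np.clip(((angle + np.pi) / (2 * np.pi) * num_bins).astype(int), 0, num_bins - 1)
    signature = np.zeros(num_bins)
    for b in range(num_bins):
        sel = dist[bins == b]
        if sel.size:
            signature[b] = sel.max()

    return signature / (signature.max() + 1e-12)   # crude scale invariance
```

Retrieval against a 2.5D database of this kind would then compare the query signature with the stored signatures using, for example, an L1 or L2 distance.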

A Study on the Performance Evaluation of Heavy Duty Handling Robot using Laser Tracker

  • 고해주;정윤교;신혁;유한식
    • 한국기계가공학회지, Vol. 9 No. 3, pp.1-7, 2010
  • The aim of this research is to evaluate the motion and path characteristics of a developed heavy-duty handling robot using a laser tracker (API T3) according to the ISO 9283 robot performance evaluation criteria. After 3D modeling and simulation in CATIA, a test cube was set up to select the robot's motion and measurement ranges. Performance tests for pose and distance accuracy, and for path and path-velocity accuracy, were carried out under payloads of zero and 440 kgf. The resulting data demonstrate the reliability of the developed robot.
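For context, the positional part of the ISO 9283 pose accuracy and repeatability figures referenced above can be computed as in the following sketch; the function name and the sample layout are assumptions, not taken from the paper.

```python
import numpy as np

def positional_pose_accuracy(commanded, attained):
    """Positional pose accuracy (AP) and repeatability (RP) in the sense of ISO 9283.

    commanded: (3,) commanded position.
    attained : (n, 3) positions attained over n measurement cycles (laser-tracker readings).
    """
    attained = np.asarray(attained, dtype=float)
    barycenter = attained.mean(axis=0)

    # AP: distance from the barycenter of the attained positions to the commanded position.
    ap = np.linalg.norm(barycenter - np.asarray(commanded, dtype=float))

    # RP: mean + 3*std of the distances of the attained positions from their barycenter.
    radial = np.linalg.norm(attained - barycenter, axis=1)
    rp = radial.mean() + 3.0 * radial.std(ddof=1)
    return ap, rp
```

Distance accuracy is evaluated analogously, on the deviation between commanded and attained distances between two poses.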

A Method for Improving Accuracy of Object Recognition and Pose Estimation by Using Kinect Sensor

  • 김안나;이건규;강기태;김용범;최혁렬
    • 로봇학회논문지, Vol. 10 No. 1, pp.16-23, 2015
  • This paper presents a method for improving the pose recognition accuracy of objects using a Kinect sensor. First, using the SURF algorithm, one of the most widely used local feature point algorithms, we modify the algorithm's inner parameters for efficient object recognition: the proposed method adjusts the distance between box filters, modifies the Hessian matrix, and eliminates improper key points. Second, the object orientation is estimated based on the homography. Finally, a novel auto-scaling method is proposed to improve the accuracy of object pose estimation. The proposed algorithm is experimentally tested with objects in the plane and its effectiveness is validated.
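A minimal sketch of the homography-based orientation step, assuming matched keypoint coordinates between a reference view of the object and the current scene are already available (SURF lives in OpenCV's non-free contrib module, so any detector/matcher could supply the matches); the fronto-parallel approximation and the RANSAC threshold are our own simplifications, not the paper's.

```python
import numpy as np
import cv2

def orientation_from_homography(model_pts, scene_pts):
    """Rough in-plane rotation of a (nearly fronto-parallel) planar object.

    model_pts, scene_pts: (N, 2) matched keypoint coordinates between a
    reference view of the object and the current scene image.
    """
    H, inliers = cv2.findHomography(np.float32(model_pts), np.float32(scene_pts),
                                    cv2.RANSAC, 5.0)
    # For a roughly fronto-parallel plane the upper-left 2x2 block of H acts like a
    # rotation-plus-scale, so the in-plane angle can be read directly from it.
    angle = np.degrees(np.arctan2(H[1, 0], H[0, 0]))
    return H, angle, inliers
```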

Experimental Study of Spacecraft Pose Estimation Algorithm Using Vision-based Sensor

  • Hyun, Jeonghoon;Eun, Youngho;Park, Sang-Young
    • Journal of Astronomy and Space Sciences, Vol. 35 No. 4, pp.263-277, 2018
  • This paper presents a vision-based relative pose estimation algorithm and its validation through both numerical and hardware experiments. The algorithm and the hardware system were designed together, considering the actual experimental conditions. Two estimation techniques were used: a nonlinear least squares method for initial estimation, and an extended Kalman filter for subsequent on-line estimation. A measurement model of the vision sensor and equations of motion including nonlinear perturbations were used in the estimation process. Numerical simulations were performed and analyzed for both autonomous docking and formation flying scenarios. A configuration of LED-based beacons was designed to avoid measurement singularity, and its structural information was implemented in the estimation algorithm. The proposed algorithm was then verified in an experimental environment using the Autonomous Spacecraft Test Environment for Rendezvous In proXimity (ASTERIX) facility. Additionally, a laser distance meter was added to the estimation algorithm to improve the relative position estimation accuracy. Throughout this study, the performance required for autonomous docking was characterized by examining how the estimation accuracy changes with the level of measurement error. The hardware experiments also confirmed the effectiveness of the suggested algorithm and its applicability to actual tasks in the real world.
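The on-line stage described above is a standard extended Kalman filter correction; the skeleton below shows only that generic step, with the paper's state vector, dynamics, and vision/laser measurement models left as abstract callables supplied by the caller.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """Generic extended Kalman filter measurement update.

    x, P  : propagated (prior) state estimate and covariance.
    z     : measurement vector (e.g. beacon image coordinates plus a laser range).
    h     : measurement function, h(x) -> predicted measurement.
    H_jac : function returning the Jacobian of h evaluated at x.
    R     : measurement noise covariance.
    """
    H = H_jac(x)
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_post = x + K @ y
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post
```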

Artificial Landmark based Pose-Graph SLAM for AGVs in Factory Environments

  • 허환;송재복
    • 로봇학회논문지, Vol. 10 No. 2, pp.112-118, 2015
  • This paper proposes a pose-graph based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation using a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed at various locations on the ceiling. As the AGV explores, pose nodes are added at fixed odometry distance intervals and landmark nodes are registered whenever the AGV recognizes a fiducial mark. The resulting graph network is optimized with the g2o optimization tool so that the accumulated error due to slip is minimized. Experiments show that the proposed method is robust for SLAM in real factory environments.
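The node-creation policy described above can be sketched independently of the optimization back end; the class below only keeps the graph bookkeeping (a pose node after a fixed travelled distance, a landmark node per recognized ceiling marker), and the g2o optimization itself is not shown. The class name, threshold, and data layout are illustrative assumptions.

```python
import numpy as np

class AGVPoseGraph:
    """Minimal bookkeeping for the pose/landmark graph described above."""

    def __init__(self, keyframe_distance=0.5):
        self.keyframe_distance = keyframe_distance  # metres of travel between pose nodes
        self.pose_nodes = []                        # [(x, y, theta), ...] from odometry
        self.landmark_nodes = {}                    # marker id -> first observed (x, y)
        self.edges = []                             # (edge type, from index, to index/marker id)

    def on_odometry(self, pose):
        """Add a pose node once the AGV has travelled far enough since the last one."""
        if not self.pose_nodes:
            self.pose_nodes.append(pose)
            return
        last = self.pose_nodes[-1]
        if np.hypot(pose[0] - last[0], pose[1] - last[1]) >= self.keyframe_distance:
            self.edges.append(("odom", len(self.pose_nodes) - 1, len(self.pose_nodes)))
            self.pose_nodes.append(pose)

    def on_marker(self, marker_id, observed_xy):
        """Register a ceiling-marker landmark and link it to the current pose node."""
        self.landmark_nodes.setdefault(marker_id, observed_xy)
        self.edges.append(("obs", len(self.pose_nodes) - 1, marker_id))
```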

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning;Park, Jin-Ho;Lee, Eung-Joo
    • 한국멀티미디어학회논문지, Vol. 24 No. 5, pp.659-666, 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person setting, the complexity of the behavior environment poses a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, human detectors in the top-down framework mostly use two-stage target detection models, which run slowly; the single-stage YOLOv3 target detection model is used instead to improve the running speed and the generalization of the model, and depthwise separable convolution further improves the detection speed and the model's ability to extract target proposal regions. Second, based on a feature pyramid network combined with contextual semantic information in the pose estimation model, the OHEM algorithm is used to handle difficult key-point detection cases, improving the accuracy of multi-person pose estimation. Finally, the Euclidean distance between key points is used to measure the spatial similarity of postures in the frame and to eliminate redundant postures.
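The final redundancy-elimination step (Euclidean distance between key points as a posture similarity measure) can be sketched as a simple pose-level non-maximum suppression; the key-point format, scores, and threshold below are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def pose_distance(kpts_a, kpts_b):
    """Mean Euclidean distance between two poses given as (K, 2) key-point arrays."""
    return float(np.mean(np.linalg.norm(np.asarray(kpts_a) - np.asarray(kpts_b), axis=1)))

def suppress_redundant_poses(poses, scores, dist_threshold=20.0):
    """Keep only the highest-scoring pose among groups of near-duplicate poses."""
    order = np.argsort(scores)[::-1]   # best candidate first
    kept = []
    for i in order:
        if all(pose_distance(poses[i], poses[j]) > dist_threshold for j in kept):
            kept.append(i)
    return kept                        # indices of the poses to keep
```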

A New 3D Active Camera System for Robust Face Recognition by Correcting Pose Variation

  • Kim, Young-Ouk;Jang, Sung-Ho;Park, Chang-Woo;Sung, Ha-Gyeong;Kwon, Oh-Yun;Paik, Joon-Ki
    • Proceedings of 제어로봇시스템학회 ICCAS 2004, pp.1485-1490, 2004
  • Recently, there have been remarkable developments in intelligent robot systems. A notable feature of an intelligent robot is that it can track the user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometric recognition methods is that the coerciveness and contact usually required when acquiring biometric characteristics are not needed. However, the accuracy of face recognition is lower than that of other biometric methods because of the dimensional reduction in the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from camera to face, lighting changes, pose changes, and changes in facial expression. In this paper, we implement a new 3D active camera system to prevent the pose variations that degrade face recognition performance, and we propose a face recognition algorithm for intelligent surveillance systems and mobile robot systems.


Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique

  • 김응수;김계경;;박순용
    • 제어로봇시스템학회논문지, Vol. 22 No. 10, pp.839-846, 2016
  • Bin picking is the task of picking a small object from a bin. For accurate bin picking, the 3D pose information, position and orientation, of a small object is required because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object using exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurately measuring the 3D position and orientation of small objects on whose surface a paper label is stuck. We use the maximally stable extremal regions (MSER) algorithm to detect the label areas in a left bin image acquired from a stereo camera. In each label area, image features are detected and their correspondence with the right image is determined by a stereo vision technique. Then, the 3D position and orientation of the objects are measured accurately using a transformation from the camera coordinate system to the new label coordinate system. For stable measurement during a bin picking task, the pose information is filtered by averaging at fixed time intervals. Our experimental results indicate that the proposed technique yields pose accuracy between 0.4 and 0.5 mm in position and 0.2 to 0.6° in angle.
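Two of the steps above are straightforward to sketch with OpenCV: MSER-based label-candidate detection in the left image, and the fixed-interval averaging used to stabilize the measured pose. Function names are assumptions, and the naive angle averaging is a simplification that is acceptable only for the small angular ranges reported.

```python
import numpy as np
import cv2

def detect_label_regions(gray_left):
    """Detect candidate label regions in the left stereo image with MSER."""
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray_left)
    return bboxes                                    # one (x, y, w, h) box per candidate

def average_pose_samples(positions, angles):
    """Average position and orientation samples collected over one fixed time window."""
    # Naive averaging is acceptable here only because the angular spread is small.
    return np.mean(positions, axis=0), np.mean(angles, axis=0)
```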

Extrinsic Calibration Using a Multi-view Camera

  • 김기영;김세환;박종일;우운택
    • Proceedings of the 대한전자공학회 2003 Signal Processing Society Fall Conference, pp.187-190, 2003
  • In this paper, we propose an extrinsic calibration method for a multi-view camera to obtain an optimal pose in 3D space. Conventional calibration algorithms do not guarantee calibration accuracy at mid-to-long range because pixel errors grow as the distance between the camera and the pattern increases. To compensate for the calibration errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. Then, we estimate the extrinsic parameters using distance vectors obtained from the structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we iteratively carry out a non-linear optimization using the relationship between the camera coordinate system and the world coordinate system. The optimal camera parameters can be used to generate a 3D panoramic virtual environment and to support AR applications.
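The final iterative non-linear optimization can be sketched as a reprojection-error minimization over one lens's extrinsics, starting from the Tsai initialization; this sketch uses SciPy and OpenCV helpers and is not the authors' exact formulation.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_extrinsics(rvec0, tvec0, object_pts, image_pts, K, dist):
    """Refine one lens's extrinsics by minimizing the reprojection error.

    rvec0, tvec0 : initial rotation (Rodrigues vector) and translation, e.g. from Tsai's method.
    object_pts   : (N, 3) calibration-pattern points in world coordinates.
    image_pts    : (N, 2) detected pattern points in the image.
    K, dist      : camera intrinsic matrix and distortion coefficients.
    """
    def residual(params):
        rvec, tvec = params[:3].reshape(3, 1), params[3:].reshape(3, 1)
        proj, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
        return (proj.reshape(-1, 2) - image_pts).ravel()

    x0 = np.hstack([np.ravel(rvec0), np.ravel(tvec0)])
    sol = least_squares(residual, x0)
    return sol.x[:3], sol.x[3:]        # refined rvec, tvec
```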


Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion

  • 박종승;이범종
    • 정보처리학회논문지B, Vol. 13B No. 5, pp.499-506, 2006
  • This paper proposes a fast and reliable camera pose estimation method for inserting virtual objects in a real-time augmented reality system. The rotation matrix and translation vector of the camera are estimated from marker feature points extracted in a single frame. A factorization technique under the orthographic projection model is used for camera pose estimation. Because this factorization assumes that all feature points of the object share the same depth coordinate, the accuracy of the camera pose computation depends on the choice of the reference point that serves as the depth reference and on the distribution of the points. This paper proposes a flexible reference point selection method that works well in typical real environments, together with an outlier removal method. Based on the proposed camera pose estimation method, a video augmentation system was implemented that inserts virtual objects at the detected marker positions. Experiments on various videos captured in real environments show that the proposed camera pose estimation technique is as fast as existing pose estimation techniques, more stable than existing methods, and applicable to a variety of augmented reality applications.
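The factorization under the orthographic (weak-perspective) projection model mentioned above can be sketched as follows, assuming the marker's feature points are known in model coordinates and matched to image points; here the reference point is simply the first point, whereas the paper proposes a more robust reference-point selection and an outlier-removal step that are not reproduced.

```python
import numpy as np

def weak_perspective_pose(model_pts, image_pts, ref=0):
    """Recover rotation and scale under a scaled-orthographic (weak-perspective) model.

    model_pts : (N, 3) marker feature points in model coordinates.
    image_pts : (N, 2) corresponding image points.
    ref       : index of the reference point that serves as the depth reference.
    """
    M = np.asarray(model_pts, dtype=float)
    p = np.asarray(image_pts, dtype=float)
    M = M - M[ref]
    u = p[:, 0] - p[ref, 0]
    v = p[:, 1] - p[ref, 1]

    # Solve M @ r1 ~= u and M @ r2 ~= v in the least-squares sense.
    r1, *_ = np.linalg.lstsq(M, u, rcond=None)
    r2, *_ = np.linalg.lstsq(M, v, rcond=None)

    scale = 0.5 * (np.linalg.norm(r1) + np.linalg.norm(r2))
    r1, r2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)
    R = np.vstack([r1, r2, np.cross(r1, r2)])   # approximate rotation; re-orthonormalize in practice
    return R, scale, p[ref]
```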