• Title/Summary/Keyword: Pose Angle Estimation

Search Results: 26

Indoor Location and Pose Estimation Algorithm using Artificial Attached Marker (인공 부착 마커를 활용한 실내 위치 및 자세 추정 알고리즘)

  • Ahn, Byeoung Min;Ko, Yun-Ho;Lee, Ji Hong
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.240-251
    • /
    • 2016
  • This paper presents a real-time indoor location and pose estimation method that uses simple artificial markers and image analysis techniques for warehouse automation. Conventional indoor localization methods cannot work robustly in warehouses, where severe environmental changes occur as stocked goods are moved. To overcome this problem, the proposed framework places artificial markers, each with a distinct interior pattern, at predefined positions on the warehouse floor. The proposed algorithm obtains marker candidate regions from a captured image by a simple binarization and labeling procedure, then extracts the interior pattern information from each candidate region to decide whether it is a true marker. The extracted interior pattern information and the outer boundary of the marker are used to estimate the location and heading angle of the localization system. Experimental results show that the proposed method provides performance almost equivalent to that of a conventional method using an expensive LIDAR sensor and the AMCL algorithm.
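The binarization-and-labeling step for extracting marker candidate regions can be sketched as follows; the threshold and minimum-area values are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy import ndimage

def find_marker_candidates(gray, thresh=128, min_area=20):
    """Binarize a grayscale image and return bounding boxes of connected
    components large enough to be marker candidates."""
    binary = gray > thresh                 # simple global binarization
    labels, n = ndimage.label(binary)      # connected-component labeling
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:            # discard small blobs as noise
            boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```

Each surviving region would then be checked against the known interior patterns, which is the decision step the paper describes.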

Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종;양욱일;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.31-40
    • /
    • 2003
  • Most face recognition systems are based on 2D images and are used in many applications. However, it is difficult to recognize a face when the pose varies severely, so head pose estimation is an inevitable procedure for improving the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as input, we automatically extract facial feature points based on the face curvature, and we propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points. We obtain the initial rotation angle with the SVD method and then perform a refinement procedure to compensate for the remaining errors. The algorithm operates on the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest-neighbor classifier to select face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
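The initial SVD-based rotation step, before the paper's error compensation, is the standard orthogonal-Procrustes (Kabsch) solution for matched 3D feature points; a minimal sketch:

```python
import numpy as np

def svd_rotation(P, Q):
    """Least-squares rotation R with Q ~ P @ R.T for matched 3D point sets
    (rows of P and Q correspond); the SVD step that EC-SVD then refines."""
    Pc = P - P.mean(axis=0)                # remove translation
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```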

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.10
    • /
    • pp.11-19
    • /
    • 2016
  • Displays are becoming larger and more varied in form, so previous gaze-tracking methods no longer apply. Mounting the gaze-tracking camera above the display avoids the problems caused by display size and height, but it prevents the use of the corneal reflection of infrared illumination that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion, and a simple way to compute the gaze position on the display from the inner eye corner, the pupil center, and the face pose information. In the proposed method, the camera switches between wide- and narrow-angle modes according to the person's position: if a face is detected in the field of view (FOV) in wide mode, the camera switches to narrow mode aimed at the computed face position, and the narrow-mode frame contains the gaze direction information of a person at long distance. The gaze calculation consists of a face pose estimation step and a gaze direction step. Face pose is estimated by mapping detected facial feature points to a 3D model. To obtain the gaze direction, an ellipse is first fitted using the split iris edge information of the pupil; when the pupil is occluded, its position is estimated with a deformable template. The pupil center, the inner eye corner, and the face pose information then determine the gaze position on the display. Experiments demonstrate that the proposed gaze-tracking algorithm removes the constraints imposed by display form and effectively computes the gaze direction of a person at long distance with a single camera.
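The ellipse-fitting step on iris edge points can be illustrated with a generic algebraic conic fit; this is a plain least-squares stand-in, not the authors' edge-splitting or deformable-template procedure:

```python
import numpy as np

def fit_ellipse(x, y):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to edge
    points (x, y) by taking the smallest right singular vector of the
    design matrix; coefficients are returned up to scale."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]
```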

Robot Posture Estimation Using Inner-Pipe Image

  • Yoon, Ji-Sup;Kang, E-Sok
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.173.1-173
    • /
    • 2001
  • This paper proposes an image processing methodology that estimates the pose of a pipe-crawling robot. Pipe-crawling robots are usually equipped with a lighting device and a camera on the head for monitoring and inspection purposes. The proposed methodology uses these devices without introducing extra sensors, and it is based on the fact that the position and intensity of the reflected light vary with the robot posture. The algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
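The position of the reflected light spot is the basic observable here; a minimal sketch computes its intensity-weighted centroid relative to the image center (the mapping from this offset to camera translation and rotation is calibration-specific and not reproduced):

```python
import numpy as np

def light_spot_offset(gray):
    """Intensity-weighted centroid of the reflected-light spot, returned as
    (dx, dy) relative to the image center; a shifted spot indicates a tilted
    or translated camera head."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = gray.sum()
    cx = (xs * gray).sum() / total         # centroid column
    cy = (ys * gray).sum() / total         # centroid row
    return cx - (w - 1) / 2, cy - (h - 1) / 2
```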


Stereo-based Robust Human Detection on Pose Variation Using Multiple Oriented 2D Elliptical Filters (방향성 2차원 타원형 필터를 이용한 스테레오 기반 포즈에 강인한 사람 검출)

  • Cho, Sang-Ho;Kim, Tae-Wan;Kim, Dae-Jin
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.600-607
    • /
    • 2008
  • This paper proposes a human detection method that is robust to pose variation, using multiple oriented 2D elliptical filters (MO2DEFs). Unlike the existing object-oriented scale-adaptive filter (OOSAF), the MO2DEFs can detect humans regardless of their poses. To overcome OOSAF's limitation, we introduce the MO2DEFs, whose shapes look like oriented ellipses. We perform human detection by applying four 2D elliptical filters with specific orientations to the 2D spatial-depth histogram and then thresholding the filtered histograms. In addition, we determine the human pose from the convolution results computed with the MO2DEFs. We verify the human candidates by either detecting the face or matching head-shoulder shapes over the estimated rotation. Experimental results show that the accuracy of pose angle estimation is about 88%, and that human detection using the MO2DEFs outperforms the OOSAF by 15 to 20%, especially for posed humans.
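The oriented elliptical filters can be sketched as binary elliptical masks rotated to the four orientations; the mask size and half-axes below are illustrative values, not the paper's:

```python
import numpy as np

def elliptical_kernel(size, a, b, theta):
    """Binary elliptical mask with half-axes a (major) and b (minor),
    rotated by theta radians, on a (2*size+1)^2 grid."""
    y, x = np.mgrid[-size:size + 1, -size:size + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return ((xr / a) ** 2 + (yr / b) ** 2 <= 1.0).astype(float)

# four orientations, as in the MO2DEF idea: 0, 45, 90, 135 degrees
kernels = [elliptical_kernel(10, 9, 4, t) for t in np.deg2rad([0, 45, 90, 135])]
```

Convolving each kernel with the spatial-depth histogram and taking the orientation of the strongest response is how the pose decision in the abstract would be realized.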

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.26 no.2
    • /
    • pp.223-229
    • /
    • 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because flexibility is required in robotic assembly tasks. Stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system that consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then the third, hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.
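The rough first-stage stereo measurement amounts to triangulating feature points from the two calibrated cameras; a minimal linear (DLT) triangulation sketch with hypothetical normalized projection matrices:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices and its normalized image coordinates in each view."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]    # null vector = homogeneous 3D point
    return X[:3] / X[3]
```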

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.508-518
    • /
    • 2016
  • In this paper, we propose a method combining an accelerometer with a cross structured light system to estimate the slope of a golf green. The cross-line laser provides two laser planes whose functions are computed with respect to the camera coordinate frame using a least-squares optimization. By capturing the projections of the cross-line laser on the golf slope in a static pose with a camera, the functions of two 3D curves are approximated as high-order polynomials in the camera coordinate frame. The curves' functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as its height and slope angle. The curve estimation accuracy is verified in experiments that use an OptiTrack camera system as a ground-truth reference.
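The rotation a static accelerometer can supply is the standard gravity-based tilt (roll and pitch; yaw is unobservable from gravity alone). A sketch, with the usual axis conventions assumed:

```python
import numpy as np

def tilt_from_accel(a):
    """Roll and pitch (radians) from one static accelerometer reading
    a = (ax, ay, az); these two angles define the rotation that maps the
    sensor frame onto the gravity-aligned world frame, up to yaw."""
    ax, ay, az = a / np.linalg.norm(a)           # normalize out |g|
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch
```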

The Estimation of Craniovertebral Angle using Wearable Sensor for Monitoring of Neck Posture in Real-Time (실시간 목 자세 모니터링을 위한 웨어러블 센서를 이용한 두개척추각 추정)

  • Lee, Jaehyun;Chee, Youngjoon
    • Journal of Biomedical Engineering Research
    • /
    • v.39 no.6
    • /
    • pp.278-283
    • /
    • 2018
  • Nowadays, many people suffer from neck pain due to forward head posture (FHP) and text neck (TN). To assess the severity of FHP and TN, the craniovertebral angle (CVA) is used in clinics. However, it is difficult to monitor neck posture with the CVA in daily life. We propose a new method that uses the cervical flexion angle (CFA), obtained from a wearable sensor, to monitor neck posture in daily life. Fifteen participants were asked to assume FHP and TN postures. The CFA from the wearable sensor was compared with the CVA observed by a 3D motion camera system to analyze their correlation. The determination coefficients between the CFA and the CVA were 0.80 for TN, 0.57 for FHP, and 0.69 for TN and FHP combined. When monitoring neck posture during 20 minutes of laptop use, the wearable sensor estimated the CVA with a mean squared error of 2.1 degrees.
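The determination coefficient reported between the CFA and the CVA is, for a simple linear fit, the squared Pearson correlation; a sketch on synthetic data (the paper's measurements are not reproduced here):

```python
import numpy as np

def determination_coefficient(x, y):
    """R^2 of a simple linear fit y ~ x, i.e. the squared Pearson
    correlation between the two angle sequences."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r
```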

Vehicle Dynamics and Road Slope Estimation based on Cascade Extended Kalman Filter (Cascade Extended Kalman Filter 기반의 차량동특성 및 도로종단경사 추정)

  • Kim, Moon-Sik;Kim, Chang-Il;Lee, Kwang-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.9
    • /
    • pp.208-214
    • /
    • 2014
  • Vehicle dynamic states used in various advanced driving-safety systems are influenced by road geometry. Among the road geometry information, the vehicle pitch angle, which is influenced by road slope and by acceleration and deceleration, is an essential parameter in pose estimation for navigation systems, advanced adaptive cruise control, and other functions on sag roads. Although road slope is an essential parameter, no method of measuring it has been commercialized; digital maps containing road geometry data and high-precision systems such as DGPS (Differential Global Positioning System)-based RTK (Real-Time Kinematics) are rarely used. In this paper, a low-cost road slope estimation method based on a cascade extended Kalman filter (CEKF) is proposed. It uses two cascaded EKFs, which take measured vehicle states such as yaw rate, longitudinal acceleration, lateral acceleration, and rear-wheel speeds, together with a 3-DOF (degree-of-freedom) vehicle dynamics model. The performance of the proposed estimation algorithm is evaluated in simulation based on the CarSim dynamics tool and in a T-car-based experiment.
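The kinematic relation underlying the filters is that the measured longitudinal acceleration combines the vehicle's own acceleration with the gravity component along the slope; a raw, unfiltered sketch of that relation (sign and axis conventions are assumptions, and the paper's EKFs add the noise modeling this lacks):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def road_slope(ax, v, dt):
    """Road slope angle (radians) from longitudinal accelerometer samples ax
    and wheel-speed samples v at interval dt, using ax ~ dv/dt + g*sin(theta)."""
    v_dot = np.gradient(v, dt)                       # vehicle acceleration
    return np.arcsin(np.clip((ax - v_dot) / G, -1.0, 1.0))
```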

A Model-based 3-D Pose Estimation Method from Line Correspondences of Polyhedral Objects

  • Kang, Dong-Joong;Ha, Jong-Eun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.762-766
    • /
    • 2003
  • In this paper, we present a new approach to estimating the 3-D camera location and orientation from a matched set of 3-D model and 2-D image features. An iterative least-squares method solves for rotation and translation simultaneously, because conventional methods that solve for rotation first and then translation do not provide good solutions. We derive an error equation that uses roll-pitch-yaw angles to represent the rotation matrix. To minimize the error equation, the Levenberg-Marquardt algorithm is introduced with a uniform sampling strategy over the rotation space to avoid getting stuck in local minima. Experimental results using real images are presented.
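Solving for rotation (parametrized by roll-pitch-yaw) and translation simultaneously with Levenberg-Marquardt can be sketched as below; a toy point-reprojection residual and a single initial guess stand in for the paper's line-correspondence error and its uniform sampling of rotation space:

```python
import numpy as np
from scipy.optimize import least_squares

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (Z-Y-X convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def estimate_pose(model_pts, image_pts, x0):
    """Levenberg-Marquardt over x = (roll, pitch, yaw, tx, ty, tz),
    minimizing the reprojection error of 3D model points against their
    normalized 2D image observations."""
    def residual(x):
        pts = model_pts @ rpy_matrix(*x[:3]).T + x[3:]
        return (pts[:, :2] / pts[:, 2:] - image_pts).ravel()
    return least_squares(residual, x0, method="lm").x
```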
