• Title/Summary/Keyword: pose estimation

Search Results: 388

Simultaneous Estimation of Landmark Location and Robot Pose Using Particle Filter Method (파티클 필터 방법을 이용한 특징점과 로봇 위치의 동시 추정)

  • Kim, Tae-Gyun;Ko, Nak-Yong;Noh, Sung-Woo
    • Journal of the Korean Institute of Intelligent Systems, v.22 no.3, pp.353-360, 2012
  • This paper describes a SLAM method which estimates landmark locations and the robot pose simultaneously. The particle filter can deal with the nonlinearity of robot motion as well as the non-Gaussian nature of motion uncertainty and sensor error. The state to be estimated includes the locations of the landmarks in addition to the robot pose. In the experiment, four beacons which transmit ultrasonic signals are used as landmarks. The robot receives the ultrasonic signals from the beacons and measures the distance to each of them. The method uses a range scanning sensor to build geometric features of the environment. Since the robot location and heading are estimated by the particle filter, the scanned range data can be converted into a geometric map. The performance of the method is compared with that of dead reckoning and trilateration.
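The particle filter update described above can be sketched for a single measurement step. This is a minimal sketch with assumed beacon positions and noise level; the paper also puts the landmark locations and the robot heading in the state, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known beacon positions (assumed layout); the robot's true position is
# what the filter should recover from noisy range measurements.
beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
true_pose = np.array([1.0, 2.0])
sigma = 0.1  # range-sensor noise std (assumed)

# Simulated ultrasonic range measurements to the four beacons
ranges = np.linalg.norm(beacons - true_pose, axis=1) + rng.normal(0, sigma, 4)

# 1) Sample particles from a broad uniform prior over the workspace
particles = rng.uniform(0, 4, size=(2000, 2))

# 2) Weight each particle by the likelihood of the observed ranges
pred = np.linalg.norm(particles[:, None, :] - beacons[None, :, :], axis=2)
w = np.exp(-0.5 * np.sum((pred - ranges) ** 2, axis=1) / sigma**2)
w /= w.sum()

# 3) Resample and take the mean as the pose estimate
idx = rng.choice(len(particles), size=len(particles), p=w)
estimate = particles[idx].mean(axis=0)
print(estimate)  # close to (1.0, 2.0)
```

In the full method this weighting/resampling step alternates with a motion-model prediction step each time the robot moves.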

Design and Evaluation of Intelligent Helmet Display System (지능형 헬멧시현시스템 설계 및 시험평가)

  • Hwang, Sang-Hyun
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.45 no.5, pp.417-428, 2017
  • In this paper, we describe the architectural design, unit-component hardware design, and core software design (helmet pose tracking software and terrain elevation data correction software) of the IHDS (Intelligent Helmet Display System), and present the results of unit and integration tests. Following the trend of the latest helmet display systems, the design adopted specifications including 3D map display, FLIR (Forward Looking Infra-Red) display, hybrid helmet pose tracking, a visor-reflection type binocular optical system, NVC (Night Vision Camera) display, and a lightweight composite helmet shell. In particular, we proposed unique design concepts such as automatic correction of the altitude error of 3D map data, high-precision image registration, a multi-color lighting optical system, a transmissive image-emitting surface using a diffraction optical element, a tracking camera minimizing the latency of helmet pose estimation, and air pockets for fixing the helmet on the head. After completing prototypes of all system components, unit tests and system integration tests were performed to verify the functions and performance.

CNN3D-Based Bus Passenger Prediction Model Using Skeleton Keypoints (Skeleton Keypoints를 활용한 CNN3D 기반의 버스 승객 승하차 예측모델)

  • Jang, Jin;Kim, Soo Hyung
    • Smart Media Journal, v.11 no.3, pp.90-101, 2022
  • Buses are a popular means of transportation, so thorough preparation is needed for passenger safety management. However, current safety systems are insufficient: in 2018, for example, a fatal accident occurred when a bus departed without recognizing an elderly passenger approaching to board. Sensors on the rear-door steps prevent pinching accidents, but such systems do not prevent accidents that occur while passengers are getting on and off, like the one above. If the boarding and alighting intentions of bus passengers could be predicted, it would help in developing safety systems to prevent such accidents; however, studies predicting these intentions are scarce. Therefore, in this paper, we propose a 1×1 CNN3D-based model for predicting boarding and alighting intentions, using skeleton keypoints of passengers extracted by UDP-Pose from images of a camera attached to the bus. The proposed model shows approximately 1-2% higher accuracy than RNN and LSTM models in predicting passengers' boarding and alighting intentions.

Lane Map-based Vehicle Localization for Robust Lateral Control of an Automated Vehicle (자율주행 차량의 강건한 횡 방향 제어를 위한 차선 지도 기반 차량 위치추정)

  • Kim, Dongwook;Jung, Taeyoung;Yi, Kyong-Su
    • Journal of Institute of Control, Robotics and Systems, v.21 no.2, pp.108-114, 2015
  • Automated driving systems require a high level of environmental-perception performance, especially in urban environments. Today's on-board sensors, such as radars or cameras, do not reach a satisfying level of robustness and availability, so map data is often used as an additional input to support these systems, with an accurate digital map serving as a powerful additional sensor. In this paper, we propose a new approach for vehicle localization using a lane map and a single-layer LiDAR. The maps are created beforehand using a highly accurate DGPS and a single-layer LiDAR. A pose estimate of the vehicle is derived from an iterative closest point (ICP) match of the LiDAR's intensity data to the lane map, and the estimated pose is used as an observation inside a Kalman filter framework. The accuracy of the proposed localization algorithm is evaluated against a highly accurate DGPS to investigate its performance with respect to lateral vehicle control.
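The core of the ICP pose observation is a best-fit rigid alignment of matched point sets, which can be sketched with the standard Kabsch/SVD solution. The lane points, the simulated pose offset, and the known correspondences are assumptions here; the paper's ICP iterations and Kalman-filter fusion are omitted.

```python
import numpy as np

def rigid_align(src, dst):
    """One ICP step with known correspondences: the best-fit rotation R and
    translation t mapping src onto dst (Kabsch/SVD method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Lane-marking points in the map frame, and the same points as seen from a
# pose offset by 10 degrees and (0.5, -0.2) metres (simulated data)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
map_pts = np.array([[0, 0], [1, 0], [2, 0.1], [3, 0.1], [4, 0.2]], float)
scan = map_pts @ R_true.T + t_true

R, t = rigid_align(map_pts, scan)
yaw = np.arctan2(R[1, 0], R[0, 0])
print(np.rad2deg(yaw), t)  # → 10.0 and (0.5, -0.2)
```

The recovered (yaw, t) is the kind of pose observation that would then enter the Kalman filter's measurement update.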

A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.11, pp.2720-2736, 2013
  • Multi-view face detection has become an active area of research in the last few years. In this paper, a novel multi-view human face detection algorithm based on an improved Real AdaBoost is presented. The Real AdaBoost algorithm is improved by a weighted combination of weak classifiers, for which approximately optimal combination coefficients are obtained. We then show that the sample-weight adjustment and the weak-classifier training method together guarantee the independence of the weak classifiers. A coarse-to-fine hierarchical face detector is proposed, combining the efficiency of Haar features with a pose estimation phase based on our Real AdaBoost algorithm. The algorithm greatly reduces training time compared with the classical Real AdaBoost algorithm; in addition, it speeds up the convergence of the strong classifier and reduces the number of weak classifiers. For frontal face detection, experiments on the MIT+CMU frontal face test set yield a 96.4% detection rate with 528 false alarms; on a real-time multi-view face test set, the detection rate is 94.7%. The experimental results verify the effectiveness of the proposed approach.
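The weighted combination of weak classifiers at the heart of boosting can be illustrated with the classic discrete AdaBoost over threshold stumps — a simplified stand-in for the paper's improved Real AdaBoost; the 1-D toy data and exhaustive stump search are assumptions for the sketch.

```python
import numpy as np

def adaboost_stumps(x, y, rounds):
    """Classic discrete AdaBoost over threshold stumps: each round trains a
    weak classifier on re-adjusted sample weights and assigns it a
    combination coefficient alpha from its weighted error."""
    w = np.full(len(x), 1.0 / len(x))
    ensemble = []                                  # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for thr in np.unique(x):                   # exhaustive stump search
            for pol in (1, -1):
                pred = pol * np.sign(x - thr + 1e-12)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)      # combination coefficient
        pred = pol * np.sign(x - thr + 1e-12)
        w *= np.exp(-alpha * y * pred)             # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, thr, pol))
    return ensemble

def predict(ensemble, x):
    return np.sign(sum(a * p * np.sign(x - t + 1e-12) for a, t, p in ensemble))

# A 1-D toy set that no single stump classifies correctly
x = np.array([0.0, 1, 2, 3, 4, 5, 6, 7])
y = np.array([-1, -1, -1, 1, 1, 1, -1, -1])
model = adaboost_stumps(x, y, rounds=4)
acc = (predict(model, x) == y).mean()
print(acc)  # → 1.0
```

Real AdaBoost differs in that each weak classifier outputs a real-valued confidence rather than a ±1 label, but the weighted-combination idea is the same.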

Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors (RGB-D 센서를 이용한 이동로봇의 안전한 엘리베이터 승하차)

  • Kim, Jihwan;Jung, Minkuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.15 no.1, pp.55-61, 2020
  • Getting on and off an elevator is one of the most important parts of multi-floor navigation for a mobile robot. In this study, we propose a method for recognizing the pose of elevator doors, planning a safe path, and estimating the robot's motion using RGB-D sensors, so that the robot can board and exit an elevator safely. The accurate pose of the elevator doors is recognized using a particle filter algorithm. After the elevator door opens, the robot builds an occupancy grid map that includes the interior of the elevator and generates a safe path that avoids collision with obstacles inside. While the robot is getting on or off, it applies an optical flow algorithm to floor images to detect the situation in which it cannot move because of the elevator door sill. Results from various experiments show that the proposed method enables the robot to get on and off the elevator safely.
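The occupancy-grid step can be sketched as follows. The cell size, the simulated circular "wall", and the simple free/occupied overwrite are assumptions; a real implementation would typically use log-odds updates and the RGB-D depth data instead.

```python
import numpy as np

# Minimal occupancy-grid sketch: mark the cell hit by each range beam as
# occupied and the cells along the beam as free.
res = 0.1                                  # cell size in metres (assumed)
grid = np.full((40, 40), -1, dtype=int)    # -1 unknown, 0 free, 1 occupied
robot = np.array([2.0, 2.0])               # robot at the grid centre (metres)

angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
ranges = np.full(72, 1.5)                  # simulated: wall 1.5 m away all round

for a, r in zip(angles, ranges):
    direction = np.array([np.cos(a), np.sin(a)])
    # sample points along the beam and mark them free
    for s in np.arange(0, r, res / 2):
        p = robot + s * direction
        grid[int(p[1] / res), int(p[0] / res)] = 0
    hit = robot + r * direction            # beam endpoint: obstacle
    grid[int(hit[1] / res), int(hit[0] / res)] = 1

print((grid == 1).sum(), (grid == 0).sum())
```

A path planner can then search the free (0) cells of this map for a collision-free route into the elevator.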

Keypoints-Based 2D Virtual Try-on Network System

  • Pham, Duy Lai;Nguyen, Nhat Tan;Chung, Sun-Tae
    • Journal of Korea Multimedia Society, v.23 no.2, pp.186-203, 2020
  • Image-based virtual try-on systems, which fit a target clothing item onto the image of a model person, are among the most promising solutions for virtual fitting and have attracted considerable research effort. In many cases, current solutions fail to achieve a natural-looking fitted image in which the target clothes are transferred onto the body of a model person of arbitrary shape and pose while preserving clothing details such as texture, text, and logos without distortion or artifacts. In this paper, we propose a new, improved image-based virtual try-on network based on keypoints, which we name KP-VTON. KP-VTON first detects keypoints in the target clothes and reliably predicts the corresponding keypoints in the clothes of the model person image by utilizing dense human pose estimation. Then, through a TPS (thin-plate spline) transformation computed with the keypoints as control points, a warped target clothes image matched to the body area is obtained. Finally, a new try-on module adopting Attention U-Net handles the detailed synthesis of the fitted image. Extensive experiments on a well-known dataset show that KP-VTON outperforms state-of-the-art virtual try-on systems.
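The keypoint-driven TPS warp can be sketched with a standard thin-plate-spline fit in plain NumPy. The clothes keypoints below are hypothetical, and the paper's full warping and synthesis modules are not reproduced; this shows only how control-point pairs determine the warp.

```python
import numpy as np

def tps_fit(src, dst):
    """Solve 2-D thin-plate-spline coefficients so that f(src[i]) = dst[i]."""
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    K = d**2 * np.log(np.where(d > 0, d, 1.0))   # U(r) = r^2 log r, U(0) = 0
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    Y = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, Y)                 # kernel weights + affine part

def tps_apply(src, coef, pts):
    """Evaluate the fitted spline at arbitrary points."""
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
    U = d**2 * np.log(np.where(d > 0, d, 1.0))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return np.hstack([U, P]) @ coef

# Hypothetical clothes keypoints: corners of a flat garment (src) and where
# they should land on the model person's body (dst)
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dst = np.array([[0.0, 0.1], [1.1, 0.0], [1.2, 1.0], [0.0, 1.1]])

coef = tps_fit(src, dst)
print(tps_apply(src, coef, src))  # control points map exactly onto dst
```

Applying `tps_apply` to every pixel coordinate of the clothes image produces the warped garment that the try-on module then blends onto the body.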

Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현)

  • Lee, Seung-H.;Lee, Heon-C.;Lee, Beom-H.
    • Journal of Sensor Science and Technology, v.19 no.3, pp.246-257, 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental technique for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides recognize unknown paths and estimate the relative robot pose on these paths through a Bayesian sensor fusion method, while the camera mounted at the front of the robot recognizes abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance; the forward camera and the infrared array are fused through a rule-based method for this purpose. Experiments in outdoor environments show that a mobile robot with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
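Bayesian fusion of the two side cameras' estimates can be sketched, for a single scalar quantity, as inverse-variance weighting of independent Gaussian measurements. All numbers below are assumed for illustration; the paper's actual fusion operates on full pose estimates.

```python
def fuse(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the same
    quantity: the posterior mean is inverse-variance weighted and the
    posterior variance is smaller than either input variance."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Left camera: robot is 0.30 m from the path centre; right camera: 0.20 m
# but with twice the confidence (hypothetical numbers).
mu, var = fuse(0.30, 0.04, 0.20, 0.02)
print(mu, var)  # the fused estimate leans toward the more confident sensor
```

The same formula iterated over sensors is the measurement update of a Kalman filter for a static state, which is why this kind of fusion composes cleanly with pose tracking.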

The Container Pose Measurement Using Computer Vision (컴퓨터 비젼을 이용한 컨테이너 자세 측정)

  • 주기세
    • Journal of the Korea Institute of Information and Communication Engineering, v.8 no.3, pp.702-707, 2004
  • This article is concerned with container pose estimation using a CCD camera and a range sensor. In particular, the issues of characteristic point extraction and image noise reduction are addressed. The Euler-Lagrange equation for Gaussian and random noise reduction is introduced, and the alternating direction implicit (ADI) method, based on partial differential equations (PDEs), is applied to solve it. The vertex points of the container and the spreader are found as characteristic points using a k-order curvature calculation algorithm, since golden-section and bisection search cannot cope with multiple local minima and maxima. The proposed preprocessing algorithm is effective at image denoising. Furthermore, the proposed system using a camera and a range sensor is inexpensive, since the existing system can be used without modification.
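The k-order curvature step for finding vertex points can be sketched with the classic k-cosine measure on a closed contour. The rectangular stand-in contour and the threshold are assumptions; the paper applies this to extracted container and spreader edges.

```python
import numpy as np

def k_cosine(pts, k=3):
    """k-order curvature at each point of a closed contour: the cosine of the
    angle between the vectors to the k-th previous and k-th next points.
    Straight runs give -1; a right-angle corner gives 0."""
    n = len(pts)
    cos = np.empty(n)
    for i in range(n):
        a = pts[(i - k) % n] - pts[i]
        b = pts[(i + k) % n] - pts[i]
        cos[i] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos

# Closed contour of a 10 x 6 rectangle sampled at unit steps (a stand-in
# for a container edge contour)
bottom = [(x, 0) for x in range(10)]
right  = [(10, y) for y in range(6)]
top    = [(10 - x, 6) for x in range(10)]
left   = [(0, 6 - y) for y in range(6)]
pts = np.array(bottom + right + top + left, dtype=float)

corners = np.where(k_cosine(pts) > -0.2)[0]
print(corners)  # indices of the four rectangle vertices
```

Unlike a 1-D line search (golden section, bisection), this local measure flags every sharp extremum along the contour, which is why it copes with multiple corners.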

A Quantification Method of Human Body Motion Similarity using Dynamic Time Warping for Keypoints Extracted from Video Streams (동영상에서 추출한 키포인트 정보의 동적 시간워핑(DTW)을 이용한 인체 동작 유사도의 정량화 기법)

  • Im, June-Seok;Kim, Jin-Heon
    • Journal of IKEEE, v.24 no.4, pp.1109-1116, 2020
  • A matching score that evaluates how well a person copies a movement can be a good measure for checking children's developmental stages or sports movements such as a golf swing or dance, and can also serve as an HCI modality for AR and VR applications. This paper presents a method to evaluate the similarity between the motion of a demonstrator, who initiates a movement, and that of a participant, who follows the demonstrator's action. We quantify the similarity using the Euclidean L2 distance between OpenPose keypoint vectors. The proposed method adopts DTW (dynamic time warping) and thus can flexibly cope with time-delayed motions.
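The DTW alignment underlying the similarity score can be sketched as follows. The sequences are 1-D stand-ins for flattened OpenPose keypoint vectors, and any normalization the paper applies is omitted.

```python
import numpy as np

def dtw(seq_a, seq_b):
    """Dynamic time warping distance between two keypoint sequences.
    Each frame is a flat vector of keypoint coordinates; the local cost is
    the Euclidean L2 distance between frames."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Demonstrator motion and a time-delayed copy of it (1-D stand-ins for
# keypoint vectors), plus an unrelated motion for contrast
demo = np.array([[0.0], [1.0], [2.0], [3.0], [3.0]])
slow = np.array([[0.0], [0.0], [1.0], [2.0], [3.0], [3.0]])  # starts late
rand = np.array([[3.0], [0.0], [2.0], [1.0], [0.0], [2.0]])

print(dtw(demo, slow), dtw(demo, rand))  # delayed copy scores near 0
```

Because the warping path may stretch or compress time, the delayed copy aligns almost perfectly, which is exactly the flexibility the paper relies on.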