• Title/Summary/Keyword: vision/inertial sensor fusion

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / v.11 no.1 / pp.31-40 / 2010
  • For weapon cueing and Head-Mounted Displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve vision-processing performance, the structure estimation and the motion estimation are separated: the structure estimation tracks features that belong to the helmet model structure in the scene, while the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested with both synthetic and real data, and the results show that the sensor fusion is successful.
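
The fusion loop sketched in this abstract (inertial prediction corrected by stereo-vision feature measurements in an EKF) can be illustrated roughly as follows. This is a minimal sketch with an assumed position-velocity state; the paper's filter also estimates orientation, and all matrices, noise values, and names here are illustrative, not the authors' formulation.

```python
import numpy as np

class FusionEKF:
    """Toy EKF: the IMU drives the predict step, vision corrects it."""

    def __init__(self):
        self.x = np.zeros(6)   # state: position (3) and velocity (3)
        self.P = np.eye(6)     # state covariance

    def predict(self, accel, dt, q=1e-3):
        # Inertial propagation: position integrates velocity,
        # velocity integrates the measured acceleration.
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)
        self.x = F @ self.x
        self.x[3:] += dt * accel
        self.P = F @ self.P @ F.T + q * np.eye(6)

    def update(self, z, h, H, r=1e-2):
        # Vision correction: z is the feature-derived measurement,
        # h its prediction from the state, H the measurement Jacobian.
        R = r * np.eye(len(z))
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h)
        self.P = (np.eye(6) - K @ H) @ self.P

ekf = FusionEKF()
ekf.predict(accel=np.array([0.0, 0.0, 0.1]), dt=0.01)
ekf.update(z=np.zeros(3), h=ekf.x[:3], H=np.hstack([np.eye(3), np.zeros((3, 3))]))
```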

Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Ik-Jae;Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference / 2003.11b / pp.103-106 / 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. As a result, accurate and real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. In order to overcome these individual drawbacks and build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide a natural interface.
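
The predictor-corrector pattern this abstract names can be shown with a toy 1-D Kalman filter: inertial readings drive high-rate prediction, and each lower-rate vision fix corrects the accumulated drift. The rates, noise levels, and fabricated measurements below are assumptions for illustration only.

```python
import numpy as np

def kf_predict(x, P, a_meas, dt, q=1e-4):
    """Predictor: integrate the inertial (accelerometer) reading.
    State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    x = F @ x + B * a_meas
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def kf_correct(x, P, z_pos, r=1e-2):
    """Corrector: fuse a vision position fix to cancel inertial drift."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# assumed 100 Hz inertial rate, vision at 25 Hz:
# predict every step, correct every 4th step
x, P = np.zeros(2), np.eye(2)
for k in range(100):
    x, P = kf_predict(x, P, a_meas=0.1, dt=0.01)
    if k % 4 == 0:
        x, P = kf_correct(x, P, z_pos=0.5 * 0.1 * (k * 0.01) ** 2)  # fake fix
```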

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems / v.18 no.1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, which verifies a satellite system on the ground, is similar to a mobile robot: it has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. A fusion filter generally uses the positions of feature points in the image as measurements. However, if the IMU bias is not properly estimated by the filter, this approach can cause position errors due to the MEMS IMU bias whenever no camera image is available. Therefore, a fusion method is proposed that uses both the positions of the feature points and the velocity of the camera determined from the optical flow of those feature points. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the positions of feature points.
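
The extra measurement proposed here, camera velocity recovered from the optical flow of tracked feature points, can be approximated in the simplest case: a camera translating parallel to landmarks at a known depth. The geometry below is a deliberate simplification of the paper's setup, and all symbols and numbers are illustrative.

```python
import numpy as np

def camera_velocity_from_flow(flow_px, depth_m, focal_px, dt):
    """Estimate planar camera velocity from feature optical flow.

    flow_px: (N, 2) pixel displacements of tracked features over dt.
    For pure translation, metric velocity scales as depth / focal length;
    static landmarks appear to stream opposite the camera's motion.
    """
    vel = -flow_px * depth_m / (focal_px * dt)   # (N, 2) velocities in m/s
    return vel.mean(axis=0)                      # average over all features

flow = np.array([[2.0, 0.5], [1.8, 0.6], [2.1, 0.4]])  # fake pixel flow
v_xy = camera_velocity_from_flow(flow, depth_m=2.5, focal_px=800.0, dt=1 / 30)
# v_xy would then join the feature positions in the EKF measurement vector.
```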

Vision Aided Inertial Sensor Bias Compensation for Firing Lane Alignment (사격 차선 정렬을 위한 영상 기반의 관성 센서 편차 보상)

  • Arshad, Awais;Park, Junwoo;Bang, Hyochoong;Kim, Yun-young;Kim, Heesu;Lee, Yongseon;Choi, Sungho
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.9 / pp.617-625 / 2022
  • This study investigates the use of a movable calibration target for gyroscope and accelerometer bias compensation of inertial measurement units for firing lane alignment. The calibration source is detected with the help of a vision sensor, and its information is fused with the other sensors on the launcher for error correction. An algorithm is proposed and tested in simulation. It is shown that the sensor biases of a firing launcher can be compensated within a few seconds by accurately estimating the location of the calibration target in the inertial frame of reference.
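
The core idea, observing a known calibration target with a vision sensor to expose inertial biases, reduces to a one-axis toy example: if vision reports that the launcher's angle is constant, any angle the gyro accumulates must be bias. The 1-D simplification and all numbers below are assumptions, not the paper's algorithm.

```python
import numpy as np

dt, true_bias = 0.01, 0.02                     # 100 Hz samples, rad/s bias
rng = np.random.default_rng(1)
gyro = true_bias + rng.normal(0, 0.005, 300)   # stationary launcher: rate = bias
angle_gyro = np.cumsum(gyro) * dt              # integrated (drifting) angle
angle_vision = 0.0                             # vision: angle to target is constant

# drift accumulated over 3 s divided by elapsed time gives the bias estimate
est_bias = (angle_gyro[-1] - angle_vision) / (len(gyro) * dt)
print(f"estimated gyro bias: {est_bias:.4f} rad/s (true {true_bias})")
```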

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor has limitations due to the high price of such a sensor. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilize a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
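
The mapping step described above amounts to lifting each planar LiDAR scan into the world frame with the pose reported by the visual-inertial odometry sensor. The sketch below assumes the LiDAR and VIO frames coincide (no mounting extrinsics), which a real system would have to calibrate; the names and pose values are illustrative.

```python
import numpy as np

def scan_to_world(ranges, angles, R_wb, t_wb):
    """Lift one 2-D scan (range/angle pairs in the sensor plane) into 3-D
    world coordinates using the current VIO pose (rotation R_wb, translation t_wb)."""
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)   # (N, 3), scan plane z = 0
    return pts @ R_wb.T + t_wb

# accumulate consecutive scans into one global point cloud
R, t = np.eye(3), np.array([0.0, 0.0, 1.2])           # fake VIO pose sample
cloud = [scan_to_world(np.array([1.0, 2.0]),
                       np.array([0.0, np.pi / 2]), R, t)]
map_points = np.vstack(cloud)                          # grows as poses stream in
```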

Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.2 / pp.139-144 / 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required by autonomous driving. We survey sensor fusion techniques that combine the outputs of the GNSS with inertial navigation sensors such as an odometer and a gyroscope. Moreover, we overview landmark-based positioning, which matches landmarks detected by a lidar or a stereo vision system to high-precision digital maps.

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.7 / pp.107-116 / 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system that consists of a low-cost GPS (Global Positioning System), INS (Inertial Navigation System), EDM (Extended Digital Map), and vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain an RSM. The RSM detection generates candidates in these regions and classifies their types. The proposed system focuses on detecting RSMs without false detections and on operating in real time. To ensure real-time operation, the gating for lane marking detection varies, and the detection method changes according to an FSM (Finite State Machine) that models the driving situation. Also, a single template matching step is used to extract features for both lane marking detection and RSM detection, and it is efficiently implemented with a horizontal integral image. Further, multi-step verification is performed to minimize false detections.
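
The "horizontal integral image" mentioned in this abstract is a row-wise cumulative sum that lets any horizontal pixel span be summed in constant time, which is what makes the shared template-matching step cheap. Below is a minimal sketch of that data structure, not the paper's detector.

```python
import numpy as np

def horizontal_integral(img):
    """Row-wise cumulative sums with a leading zero column."""
    ii = np.cumsum(img.astype(np.int64), axis=1)
    return np.pad(ii, ((0, 0), (1, 0)))

def row_span_sum(ii, row, x0, x1):
    """Sum of img[row, x0:x1] in O(1) via two lookups."""
    return ii[row, x1] - ii[row, x0]

img = np.arange(12).reshape(3, 4)
ii = horizontal_integral(img)
assert row_span_sum(ii, 1, 1, 3) == img[1, 1:3].sum()
```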

A Time Synchronization Scheme for Vision/IMU/OBD by GPS (GPS를 활용한 Vision/IMU/OBD 시각동기화 기법)

  • Lim, JoonHoo;Choi, Kwang Ho;Yoo, Won Jae;Kim, La Woo;Lee, Yu Dam;Lee, Hyung Keun
    • Journal of Advanced Navigation Technology / v.21 no.3 / pp.251-257 / 2017
  • Recently, hybrid positioning systems combining GPS, vision sensors, and inertial sensors have drawn much attention for estimating accurate vehicle positions. Since accurate multi-sensor fusion requires efficient time synchronization, this paper proposes an efficient method to obtain time-synchronized measurements from a vision sensor, an inertial sensor, and an OBD device based on GPS time information. In the proposed method, time and position information is obtained by the GPS receiver, attitude information by the inertial sensor, and speed information by the OBD device. The obtained time, position, speed, and attitude information is converted into color information, which is inserted into several corner pixels of the corresponding image frame. An experiment was performed with real measurements to evaluate the feasibility of the proposed method.
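
One way to realize the tagging scheme described above is to pack each field (for example, a 32-bit GPS time of week) into the RGB bytes of a couple of corner pixels, so every frame carries its own synchronization record. The byte layout below is an assumed encoding for illustration, not the paper's format.

```python
import numpy as np

def encode_u32(frame, value, x=0, y=0):
    """Store a 32-bit value in the RGB bytes of two adjacent corner pixels."""
    b = value.to_bytes(4, "big")
    frame[y, x] = (b[0], b[1], b[2])   # first three bytes in one pixel
    frame[y, x + 1, 0] = b[3]          # last byte in the next pixel's red channel
    return frame

def decode_u32(frame, x=0, y=0):
    px0, px1 = frame[y, x], frame[y, x + 1]
    return int.from_bytes(bytes([px0[0], px0[1], px0[2], px1[0]]), "big")

frame = np.zeros((480, 640, 3), dtype=np.uint8)
gps_time_ms = 123_456_789                      # fake GPS time of week in ms
frame = encode_u32(frame, gps_time_ms)
assert decode_u32(frame) == gps_time_ms
```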

Autonomous Navigation of KUVE (KIST Unmanned Vehicle Electric) (KUVE (KIST 무인 주행 전기 자동차)의 자율 주행)

  • Chun, Chang-Mook;Suh, Seung-Beum;Lee, Sang-Hoon;Roh, Chi-Won;Kang, Sung-Chul;Kang, Yeon-Sik
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.617-624 / 2010
  • This article describes the system architecture of KUVE (KIST Unmanned Vehicle Electric) and its autonomous navigation at KIST. KUVE, an electric light-duty vehicle, is equipped with two laser range finders, a vision camera, a differential GPS system, an inertial measurement unit, odometers, and control computers for autonomous navigation. KUVE estimates and tracks road boundaries, such as curbs and lines, using a laser range finder and a vision camera. When no road boundary is detectable, it follows a predetermined trajectory using the DGPS, IMU, and odometers. KUVE achieves a success rate of over 80% for autonomous navigation at KIST.

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management / 2022.06a / pp.344-352 / 2022
  • Accurate indoor localization of construction workers and mobile assets is essential in safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are erroneous or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This paper proposes a state-of-the-art positioning methodology that addresses the existing limitations by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of the moving assets or workers from the initial starting point. This relative position is transformed into an absolute position when an AprilTag placed at an entry point is decoded. The proposed solution is tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift is estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be utilized in various use cases to increase productivity and improve safety at construction sites, contributing towards 1) indoor monitoring of man-machinery coactivity for collision avoidance and 2) precise real-time knowledge of who is doing what and where.
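
The anchoring step, turning SVIO's start-relative pose into absolute site coordinates once a surveyed AprilTag is decoded, is a transform composition. The sketch below uses 2-D homogeneous transforms for brevity; the frame names and numbers are illustrative assumptions.

```python
import numpy as np

def se2(x, y, theta):
    """2-D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

T_site_tag = se2(10.0, 4.0, np.pi / 2)   # surveyed tag pose on the site map
T_cam_tag = se2(0.5, 0.0, 0.0)           # tag pose as decoded by the camera
T_odom_cam = se2(3.2, 1.1, 0.3)          # SVIO pose in its own odometry frame

# absolute camera pose, then the odom-to-site correction: after this,
# any SVIO pose T_odom_x maps to the site frame as T_site_odom @ T_odom_x
T_site_cam = T_site_tag @ np.linalg.inv(T_cam_tag)
T_site_odom = T_site_cam @ np.linalg.inv(T_odom_cam)
```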
