• Title/Summary/Keyword: VIO

Visual-Inertial Odometry Based on Depth Estimation and Kernel Filtering Strategy (깊이 추정 및 커널 필터링 기반 Visual-Inertial Odometry)

  • Jimin Song;HyungGi Jo;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.4 / pp.185-193 / 2024
  • Visual-inertial odometry (VIO) is a method that leverages sensor data from a camera and an inertial measurement unit (IMU) for state estimation. Whereas conventional VIO has limited capability to estimate the scale of translation, recent approaches improve performance by utilizing depth maps obtained from an RGB-D camera, especially in indoor environments. However, the depth map obtained from an RGB-D camera rapidly loses accuracy as distance increases, so an alternative method is required to improve VIO performance in large environments. In this paper, we argue that leveraging a depth map estimated by a deep neural network benefits state estimation. To improve the reliability of the depth information used in the VIO algorithm, we propose a kernel-based sampling strategy that filters out depth values with low confidence. The proposed method aims to improve the robustness and accuracy of VIO algorithms by selectively utilizing reliable values of the estimated depth maps. Experiments were conducted on a real-world custom dataset acquired in underground parking lot environments. The results demonstrate that the proposed method improves VIO performance, showing the potential of depth estimation networks for state estimation.
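
The abstract describes the kernel-based confidence filtering only at a high level. As a rough illustration of the idea, the sketch below (a hypothetical implementation, not the authors' code) keeps a feature's estimated depth only when the depth values in a small kernel window around it are locally consistent; the window size and threshold are illustrative assumptions.

```python
import numpy as np

def filter_depth_by_kernel(depth_map, keypoints, kernel_size=5, max_rel_std=0.05):
    """Keep only keypoints whose estimated depth is locally consistent.

    depth_map : HxW array of depths predicted by a monocular depth network.
    keypoints : iterable of (u, v) pixel coordinates of tracked features.
    A keypoint is accepted when the depth values inside the kernel window
    around it have a small relative spread, i.e. the patch lies on a locally
    smooth surface rather than on a depth discontinuity.
    """
    h, w = depth_map.shape
    r = kernel_size // 2
    accepted = []
    for u, v in keypoints:
        u, v = int(round(u)), int(round(v))
        if u < r or v < r or u >= w - r or v >= h - r:
            continue  # window would fall outside the image
        center = depth_map[v, u]
        if center <= 0:
            continue  # invalid prediction
        patch = depth_map[v - r:v + r + 1, u - r:u + r + 1]
        if patch.std() / center <= max_rel_std:
            accepted.append(((u, v), float(center)))
    return accepted
```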

Visual Inertial Odometry for 3-Dimensional Pose Estimation (3차원 포즈 추정을 위한 시각 관성 주행 거리 측정)

  • Boeun Lee;Nak Yong Ko
    • Journal of Positioning, Navigation, and Timing / v.13 no.4 / pp.379-387 / 2024
  • Real-time localization is essential for autonomous driving of robots. This paper presents the implementation and a performance analysis of a localization algorithm. To estimate the position and attitude of a robot, a visual inertial odometry (VIO) algorithm based on a multi-state constraint Kalman filter is used. The sensors employed in this study are a stereo camera and an inertial measurement unit (IMU). The performance is analyzed through experiments using three different camera view directions: floor-view, front-view, and ceiling-view. The number of detected features also affects navigation performance. Even if the number of recognized feature points is large, performance degrades if the correspondence between feature points is not accurately identified. The results show that VIO improves navigation performance even with low-cost sensors, thus facilitating map building as well as autonomous navigation.

Integrated Navigation Algorithm using Velocity Incremental Vector Approach with ORB-SLAM and Inertial Measurement (속도증분벡터를 활용한 ORB-SLAM 및 관성항법 결합 알고리즘 연구)

  • Kim, Yeonjo;Son, Hyunjin;Lee, Young Jae;Sung, Sangkyung
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.1 / pp.189-198 / 2019
  • In recent years, visual-inertial odometry (VIO) algorithms have been extensively studied for indoor and urban environments because they are more robust to dynamic scenes and environmental changes. In this paper, we propose a loosely coupled (LC) VIO algorithm that uses the velocity vectors from both visual odometry (VO) and an inertial measurement unit (IMU) as the measurement of an extended Kalman filter. Our approach improves the estimation performance of the filter without adding extra sensors while maintaining a simple integration framework that treats the VO as a black box. For the VO algorithm, we employed the fundamental part of ORB-SLAM, which uses ORB features. We performed an outdoor experiment using an RGB-D camera to evaluate the accuracy of the presented algorithm, and we also evaluated the algorithm on a public dataset to compare it with other visual navigation systems.
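
For readers unfamiliar with the loosely coupled formulation, the fragment below sketches a generic extended Kalman filter update in which a velocity vector produced by the black-box VO front end is fused with the IMU-propagated state. The state layout, with velocity at indices 3:6, and the noise handling are illustrative assumptions rather than the paper's actual filter design.

```python
import numpy as np

def velocity_measurement_update(x, P, v_vo, R_vo):
    """Generic EKF update using a VO-derived velocity vector as the measurement.

    x    : state vector, assumed to hold velocity at indices 3:6
    P    : state covariance matrix
    v_vo : 3-vector velocity reported by visual odometry
    R_vo : 3x3 measurement noise covariance of the VO velocity
    """
    n = x.size
    H = np.zeros((3, n))
    H[:, 3:6] = np.eye(3)            # the measurement observes the velocity states
    y = v_vo - H @ x                 # innovation
    S = H @ P @ H.T + R_vo           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new
```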

Infrared Visual Inertial Odometry via Gaussian Mixture Model Approximation of Thermal Image Histogram (열화상 이미지 히스토그램의 가우시안 혼합 모델 근사를 통한 열화상-관성 센서 오도메트리)

  • Jaeho Shin;Myung-Hwan Jeon;Ayoung Kim
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.260-270 / 2023
  • We introduce a novel visual-inertial odometry (VIO) algorithm designed to improve the performance of thermal-inertial odometry. Thermal infrared images, though advantageous for feature extraction in low-light conditions, typically suffer from a high noise level and significant information loss during 8-bit conversion. Our algorithm overcomes these limitations by approximating the 14-bit raw pixel histogram with a Gaussian mixture model. This conversion method effectively emphasizes image regions where texture for visual tracking is abundant while reducing unnecessary background information. We incorporate robust learning-based feature extraction and matching methods, SuperPoint and SuperGlue, and a zero-velocity detection module to further reduce the uncertainty of the visual odometry. Tested across various datasets, the proposed algorithm shows improved performance compared with other state-of-the-art VIO algorithms, paving the way for robust thermal-inertial odometry.
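
As a hedged sketch of the histogram approximation step (not the authors' implementation), the snippet below fits a Gaussian mixture model to the raw 14-bit pixel values and maps an interval around the dominant component to 8 bits, so that textured regions keep contrast while the broad background is compressed. The number of components and the clipping width are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def thermal_14bit_to_8bit(raw, n_components=2, n_sigma=3.0):
    """Rescale a 14-bit thermal image to 8 bits via a GMM fit of its histogram."""
    samples = raw.reshape(-1, 1).astype(np.float64)[::8]   # subsample to keep the fit fast
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(samples)
    k = int(np.argmax(gmm.weights_))                        # dominant mixture component
    mu = gmm.means_[k, 0]
    sigma = np.sqrt(gmm.covariances_[k, 0, 0])
    lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma     # keep only this intensity band
    scaled = (np.clip(raw, lo, hi) - lo) / max(hi - lo, 1e-6)
    return (scaled * 255.0).astype(np.uint8)
```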

Improvement of Plane Tracking Accuracy in AR Game Using Magnetic Field Sensor (자기장 센서를 사용한 AR 게임에서의 평면 추적 정확도 개선)

  • Lee, Won-Jun;Park, Jong-Seung
    • Journal of Korea Game Society / v.19 no.5 / pp.91-102 / 2019
  • In this paper, we propose an improved plane-tracking method for smartphone AR games that uses the magnetic field sensor. The previous ARCore-based approach is a VIO method that combines SLAM with the smartphone's IMU, and the shortcomings of the accelerometer and gyroscope in the IMU cause errors when tracking a plane. We improve plane tracking by adding the magnetic field sensor to the existing IMU sensors. Experimental results show that our method reduces the error of the smartphone pose estimation.
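
The abstract does not detail how the magnetometer is fused; one plausible ingredient, shown purely as an assumed sketch rather than the paper's method, is correcting gyro-integrated yaw with the magnetometer heading through a complementary filter so that long-term drift in the tracked plane's orientation is suppressed.

```python
import math

def fuse_yaw(yaw_gyro, yaw_mag, alpha=0.98):
    """Complementary filter blending gyro-integrated yaw with magnetometer heading.

    yaw_gyro : yaw angle (rad) obtained by integrating the gyroscope
    yaw_mag  : yaw angle (rad) derived from the magnetic field sensor
    alpha    : trust placed in the gyro; (1 - alpha) corrects long-term drift
    """
    # wrap the difference to (-pi, pi] so the blend takes the shorter way around
    diff = math.atan2(math.sin(yaw_mag - yaw_gyro), math.cos(yaw_mag - yaw_gyro))
    return yaw_gyro + (1.0 - alpha) * diff
```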

Design of an Asynchronous eFuse One-Time Programmable Memory IP of 1 Kilo Bits Based on a Logic Process (Logic 공정 기반의 비동기식 1Kb eFuse OTP 메모리 IP 설계)

  • Lee, Jae-Hyung;Kang, Min-Cheol;Jin, Liyan;Jang, Ji-Hye;Ha, Pan-Bong;Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.7 / pp.1371-1378 / 2009
  • We propose a low-power eFuse one-time programmable (OTP) memory cell based on a logic process. The eFuse OTP memory cell uses separate transistors optimized for program and read modes, and reduces the operating current in read mode by reducing the parasitic capacitances on both WL and BL. An asynchronous interface, separate I/O, and a BL sense-amplifier circuit with a digital sensing method are used to obtain a low-power, small-area eFuse OTP memory IP. Computer simulation shows that the operating currents at the logic supply voltage VDD and the I/O interface supply voltage VIO are 349.5 μA and 3.3 μA, respectively. The layout size of the designed eFuse OTP memory IP in Dongbu HiTek's 0.18 μm generic process is 300 × 557 μm².

Odometry Using Strong Features of Recognized Text (인식된 문자의 강한 특징점을 활용하는 측위시스템)

  • Song, Do-hoon;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.219-222 / 2021
  • In this paper, we use optical character recognition (OCR) within a visual-inertial odometry (VIO) system to detect text regions and store their positions and feature points so that they can be compared when the text is recognized again by the localization system. First, we propose a method that detects text in the images of a camera moving in real time and stores the location at which the text was recognized, together with its feature points, using the relative pose of the camera. We also propose a method for determining whether stored text has been re-recognized when it is detected again. Because it relies on text present in the scene rather than artificial markers or pre-trained objects, this approach can be used in any general space where text exists.
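
A minimal sketch of the text-landmark bookkeeping described above, with hypothetical names and thresholds: each recognized string is stored with its estimated position, and a later detection of the same string near a stored position is treated as a re-recognition.

```python
import numpy as np

class TextLandmarkStore:
    """Stores recognized text strings together with their estimated positions."""

    def __init__(self, max_dist=1.0):
        self.max_dist = max_dist      # illustrative re-recognition distance threshold
        self.landmarks = []           # list of (text, position) tuples

    def observe(self, text, position):
        """Return True if this observation matches a stored text landmark."""
        position = np.asarray(position, dtype=float)
        for stored_text, stored_pos in self.landmarks:
            if stored_text == text and np.linalg.norm(stored_pos - position) < self.max_dist:
                return True           # re-recognized a known text landmark
        self.landmarks.append((text, position))
        return False                  # first observation of this text
```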

Draft Genome Sequences of Three Janthinobacterium lividum Strains Producing Violacein

  • Yu Jeong Lee;Jae-Cheol Lee;Kira Moon;Aslan Hwanhwi Lee;Byung Hee Chun
    • Microbiology and Biotechnology Letters / v.52 no.2 / pp.215-217 / 2024
  • Purple-pigment-producing bacterial strains AMJK, AMJM, and AMRM were isolated from sediment in Sinan-gun, Korea, and their draft genomes were sequenced using the Illumina HiSeq 4000 platform. The lengths of the AMJK, AMJM, and AMRM genomes were 6,380,747 bp, 6,381,259 bp, and 6,380,870 bp, respectively, and their G+C contents were 62.82%, 64.15%, and 62.82%, respectively. Comparative analysis of genomic identity showed that the three strains are closely related to the Janthinobacterium lividum group. Functional analysis of the AMJK, AMJM, and AMRM genomes showed that all strains harbor genes related to violacein production (VioABCDE).