• Title/Summary/Keyword: Monocular Camera


Development of Visual Odometry Estimation for an Underwater Robot Navigation System

  • Wongsuwan, Kandith;Sukvichai, Kanjanapan
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.216-223
    • /
    • 2015
  • The autonomous underwater vehicle (AUV) is being widely researched in order to achieve superior performance when working in hazardous environments. This research focuses on using image processing techniques to estimate the AUV's egomotion and changes in orientation, based on image frames captured at different times by a single high-definition web camera attached to the bottom of the AUV. A visual odometry application is integrated with other sensors. An inertial measurement unit (IMU) sensor is used to select the correct solution among those consistent with the homography motion equation. A pressure sensor is used to resolve the image scale ambiguity. Uncertainty estimation is computed to correct drift that occurs in the system by using a Jacobian method, singular value decomposition, and backward and forward error propagation.
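
The sketch below is not the paper's implementation; it only illustrates the homography-based egomotion idea the abstract describes, using OpenCV. The intrinsic matrix K, the IMU gravity direction, and the depth value derived from the pressure sensor are assumed inputs, and all names are illustrative.

```python
# Minimal sketch (not the paper's code): homography-based egomotion between
# consecutive bottom-facing frames, with the IMU used to pick the physically
# consistent decomposition and the pressure sensor resolving metric scale.
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray, K, imu_gravity, depth_from_pressure):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    # decomposeHomographyMat returns up to four (R, t, n) candidates.
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)

    # Choose the candidate whose plane normal best agrees with the IMU's
    # "down" direction (the seabed seen by a bottom-facing camera).
    down = imu_gravity / np.linalg.norm(imu_gravity)
    best = max(range(len(normals)), key=lambda i: float(normals[i].ravel() @ down))

    # The homography translation is scaled by the inverse plane distance;
    # the pressure sensor supplies that distance.
    t_metric = ts[best] * depth_from_pressure
    return Rs[best], t_metric
```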

An Anti-Glare Technique for Drivers Based on Monocular RGB Camera and Smart Film (자동차 운전자를 위한 단일 RGB 카메라와 스마트 필름 기반 눈부심 측정 및 완화 기법)

  • Kim, Jinu;Bae, Sang-Jun;Kim, Dongho
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.626-629
    • /
    • 2019
  • Glare while driving impairs the driver's perception of road conditions and leaves too little time to properly take in the elements of the road needed while driving, which can ultimately lead to traffic accidents. This paper proposes a glare measurement and mitigation technique for drivers based on a single RGB camera and smart film: glare is detected using the RGB camera and mitigated by interworking with the smart film. We expect that this technique can later serve as a tool to reduce glare arising from various causes while driving and the associated risk of traffic accidents.
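
A minimal sketch of the detection half of the idea, assuming glare appears as large, near-saturated regions in the RGB frame; the interworking with the smart film is not shown, and the threshold values are illustrative, not the authors'.

```python
# Hedged sketch: flag glare as large saturated-bright blobs in a single frame.
import cv2
import numpy as np

def detect_glare(frame_bgr, luma_thresh=240, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, luma_thresh, 255, cv2.THRESH_BINARY)
    # Remove small speckles so only sizeable bright sources remain.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of bright regions large enough to dazzle the driver;
    # a real system would map these boxes onto smart-film segments.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```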

Real-Time Monocular Camera Pose Estimation which is Robust to Dynamic Environment (동적 환경에 강인한 단안 카메라의 실시간 자세 추정 기법)

  • Bak, Junhyeong;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.322-323
    • /
    • 2021
  • Technologies such as augmented reality, autonomous driving, and drones require real-time camera pose estimation to determine the current position and viewpoint. The most common approach, estimating the camera pose from consecutive monocular images, requires robust feature matching between the static objects of the two images. However, ordinary videos are dynamic environments containing various moving objects, so it is difficult to guarantee that only static objects are matched. To solve this dynamic-environment problem, this paper proposes a method that extracts the objects in the images with a neural-network-based object segmentation technique and identifies the static objects to match from per-object feature matching and pose estimation results. Experiments further confirm that using a neural-network-based feature extraction method suited to the proposed static-object identification scheme enables camera pose estimation that is more robust to dynamic environments.
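
As a rough illustration of the masking-and-matching idea (not the authors' code), the sketch below assumes a per-pixel mask of likely-dynamic objects is already available from some segmentation network, matches ORB features only on the remaining static regions, and recovers the relative pose from the essential matrix.

```python
# Hedged sketch: feature matching restricted to static regions, then pose recovery.
import cv2
import numpy as np

def relative_pose_static(img1, img2, dynamic_mask1, dynamic_mask2, K):
    # Masks are assumed to be uint8, 255 on pixels belonging to moving objects.
    static1 = cv2.bitwise_not(dynamic_mask1)
    static2 = cv2.bitwise_not(dynamic_mask2)

    orb = cv2.ORB_create(3000)
    kp1, des1 = orb.detectAndCompute(img1, static1)   # detect only on static areas
    kp2, des2 = orb.detectAndCompute(img2, static2)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t   # camera rotation and unit-scale translation
```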


High accuracy map matching method using monocular cameras and low-end GPS-IMU systems (단안 카메라와 저정밀 GPS-IMU 신호를 융합한 맵매칭 방법)

  • Kim, Yong-Gyun;Koo, Hyung-Il;Kang, Seok-Won;Kim, Joon-Won;Kim, Jae-Gwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.4
    • /
    • pp.34-40
    • /
    • 2018
  • This paper presents a new method to estimate the pose of a moving object accurately using a monocular camera and a low-end GPS-IMU sensor system. For this goal, we adopted a deep neural network for the semantic segmentation of input images and compared the results with a semantic map of the neighborhood. In this map matching, weight tables are used to deal with label inconsistency effectively. Signals from the low-end GPS-IMU sensor system are used to limit the search space and to minimize the proposed cost function. For the evaluation, we added noise to the signals from a high-end GPS-IMU system. The results show that the pose can be recovered from the noisy signals. We also show that the proposed method is effective in handling non-open-sky situations.
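
A minimal sketch of the matching step under assumptions: candidate poses near the noisy GPS-IMU estimate are scored by how well semantic labels rendered from the map agree with the segmented camera image, with a weight table tolerating related labels. `render_map_labels` is a hypothetical helper supplied by the caller; this is not the paper's implementation.

```python
# Hedged sketch: weighted semantic map matching over a small pose search space.
import numpy as np

def match_pose(seg_image, candidate_poses, render_map_labels, weight_table):
    """seg_image: H x W integer labels from the segmentation network.
    weight_table[a, b]: penalty for observing label a where the map says b."""
    best_pose, best_cost = None, np.inf
    for pose in candidate_poses:                      # limited by GPS-IMU prior
        map_labels = render_map_labels(pose)          # H x W labels from the map
        cost = weight_table[seg_image, map_labels].sum()   # per-pixel penalty
        if cost < best_cost:
            best_pose, best_cost = pose, cost
    return best_pose
```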

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.6
    • /
    • pp.65-72
    • /
    • 2009
  • This paper presents a vision-based localization method for indoor mobile robots that uses the types of vertices extracted from a monocular image. In the images captured by the robot's camera, the types of vertices are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices of the image, the vertex types and geometric constraints derived from a geometric analysis are used. The vertices are matched with the corners by a heuristic method using the types and positions of the vertices and corners. From the matched pairs, nonlinear equations derived from the perspective and rigid transformations are produced. The pose of the robot is computed by solving these equations with a least-squares optimization technique. Experimental results show that the proposed method is effective and applicable to localization in indoor environments.
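
A simplified sketch of the final least-squares step only, assuming the vertex-corner correspondences are already established. The projection model below (a planar robot pose (x, y, theta) and the image column of each vertical corner edge) is a stand-in for illustration, not the paper's formulation.

```python
# Hedged sketch: refine a 2-D robot pose from matched map corners by least squares.
import numpy as np
from scipy.optimize import least_squares

def project(pose, corners_xy, fx, cx):
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # Transform world corners into the robot/camera frame (camera looks along +x).
    dx, dy = corners_xy[:, 0] - x, corners_xy[:, 1] - y
    xc = c * dx + s * dy
    yc = -s * dx + c * dy
    return fx * (yc / xc) + cx      # predicted image column of each vertical edge

def localize(pose0, corners_xy, observed_u, fx=600.0, cx=320.0):
    residual = lambda p: project(p, corners_xy, fx, cx) - observed_u
    return least_squares(residual, pose0).x   # refined (x, y, theta)
```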

Lane Detection on Non-flat Road Using Piecewise Linear Model (굴곡진 도로에서의 구간 선형 모델을 이용한 차선 검출)

  • Jeong, Min-Young;Kim, Gyeonghwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.6
    • /
    • pp.322-332
    • /
    • 2014
  • This paper proposes a robust lane detection algorithm for non-flat roads that combines a piecewise linear model with dynamic programming. Compared with other lane models, the piecewise linear model can represent the 3D shape of a road seen by a monocular camera, since it forms a curved surface from a set of planar road segments. To represent the real road, planar segments with various angles and positions are generated for each section, and dynamic programming determines an optimal combination of these planar segments based on lane properties. Experimental results demonstrate the robustness of the proposed algorithm against non-flat roads, curved roads, and camera vibration.
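
The following sketch illustrates only the dynamic-programming selection over per-section plane hypotheses; the lane-evidence scores are assumed to be precomputed, and the smoothness penalty is illustrative rather than taken from the paper.

```python
# Hedged sketch: pick one plane-angle hypothesis per road section so that lane
# evidence is maximised while abrupt angle changes between sections are penalised.
import numpy as np

def best_plane_sequence(evidence, smooth_penalty=1.0):
    """evidence[s, a]: lane-support score of angle hypothesis a in section s."""
    n_sections, n_angles = evidence.shape
    score = evidence[0].copy()
    back = np.zeros((n_sections, n_angles), dtype=int)
    angle_diff = np.abs(np.arange(n_angles)[:, None] - np.arange(n_angles)[None, :])
    for s in range(1, n_sections):
        # trans[i, j]: value of choosing previous angle j before current angle i.
        trans = score[None, :] - smooth_penalty * angle_diff
        back[s] = trans.argmax(axis=1)
        score = evidence[s] + trans.max(axis=1)
    path = [int(score.argmax())]
    for s in range(n_sections - 1, 0, -1):
        path.append(int(back[s][path[-1]]))
    return path[::-1]   # chosen angle index for each section, front to back
```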

Biomimetic approach object detection sensors using multiple imaging (다중 영상을 이용한 생체모방형 물체 접근 감지 센서)

  • Choi, Myoung Hoon;Kim, Min;Jeong, Jae-Hoon;Park, Won-Hyeon;Lee, Dong Heon;Byun, Gi-Sik;Kim, Gwan-Hyung
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.91-93
    • /
    • 2016
  • Extracting three-dimensional information from 2-D images is generally done through "stereo vision", a very important step that can use either a binocular method with two cameras or a monocular camera. In today's CCTV and automatic object-tracking systems, a stereo camera that mimics the human eyes can be used to grasp site and work conditions more clearly, maximizing the efficiency of avoidance/control operations and of multiple tasks. An object-tracking system based on existing 2-D images cannot recognize the distance to a target; by using the parallax of a stereo image, the distance can be recognized on the observer's display and the object can be controlled more effectively.
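
For reference, a minimal sketch of the stereo-vision step the abstract alludes to: dense disparity from a rectified left/right pair with OpenCV's semi-global matcher, converted to metric depth using an assumed focal length and baseline.

```python
# Hedged sketch: disparity and depth from a rectified stereo pair.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    # StereoSGBM returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # depth = f * B / d; zero where the disparity is invalid.
    depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
    return depth   # metres
```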


3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.256-257
    • /
    • 2022
  • In this paper, we introduce a method of extracting three-dimensional feature points from the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed based on feature points: feature points and descriptors are acquired and the feature points are matched. From the matched feature points, the disparity is calculated and a depth value is generated, and the 3D feature points are updated as the camera moves. Finally, the feature points are reset when a scene change is detected using scene change detection. Through this process, an average of 73.5% of additional storage space can be secured in the keypoint database. By applying the proposed algorithm to the RGB images and ground-truth depth values of the TUM dataset, it was confirmed that there was an average distance difference of 26.88 mm between the reconstructed 3D feature points and the ground truth.
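
A hedged sketch of the triangulation step only: given the estimated baseline motion (R, t) and matched feature points, 3D points are recovered with OpenCV. K, R, t and the point arrays are assumed inputs; this is not the authors' pipeline.

```python
# Hedged sketch: triangulate matched points from two camera positions of one device.
import cv2
import numpy as np

def triangulate_features(pts1, pts2, K, R, t):
    """pts1, pts2: N x 2 float32 pixel coordinates of matched feature points."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # second camera after the move
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T    # N x 3 points, scaled by the baseline length
```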


3D Stereoscopic Augmented Reality with a Monocular Camera (단안카메라 기반 삼차원 입체영상 증강현실)

  • Rho, Seungmin;Lee, Jinwoo;Hwang, Jae-In;Kim, Junho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.11-20
    • /
    • 2016
  • This paper introduces an effective method for generating 3D stereoscopic images that give immersive 3D experiences to viewers using mobile binocular HMDs. Most previous AR systems with monocular cameras share a common limitation: the same real-world image is provided to both of the viewer's eyes without parallax. In this paper, based on the assumption that viewers focus on the marker in a marker-based AR scenario, we recover the binocular disparity of the camera image and the virtual object using the pose information of the marker. The basic idea is to generate binocular disparity for the real-world image and the virtual object by placing the image on the 2D plane in 3D defined by the marker's pose. For non-marker areas in the image, we apply blur effects to reduce visual discomfort by decreasing their sharpness. Our user studies show that, compared to previous binocular AR systems, the proposed method provides a strong sense of depth, a high sense of reality, and visual comfort.
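
A rough sketch of two of the ideas (parallax derived from the marker's depth and blurring of non-marker areas), not the authors' renderer; the eye separation, focal length, and marker mask are assumed inputs.

```python
# Hedged sketch: per-eye view of the camera image with marker-depth parallax
# and a blurred background outside the marker region.
import cv2
import numpy as np

def stereo_view(frame, marker_mask, marker_depth_m, eye_sep_m=0.064,
                focal_px=800.0, left_eye=True):
    # Disparity (pixels) of a plane placed at the marker's depth.
    disp = focal_px * eye_sep_m / marker_depth_m
    shift = -disp / 2 if left_eye else disp / 2
    M = np.float32([[1, 0, shift], [0, 1, 0]])
    h, w = frame.shape[:2]
    view = cv2.warpAffine(frame, M, (w, h))

    # Keep the marker area sharp and soften everything else to ease discomfort.
    mask = cv2.warpAffine(marker_mask, M, (w, h))
    blurred = cv2.GaussianBlur(view, (21, 21), 0)
    return np.where(mask[..., None] > 0, view, blurred)
```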

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.942-956
    • /
    • 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking has high ambiguity since a hand has a large number of degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, performed in consecutive order. Assuming the hand to be a plane, the proposed method utilizes a planar hand model, which facilitates hand model regeneration. The regeneration modifies the hand model to fit the current user's hand and improves the robustness and accuracy of the tracking results. The proposed method works in real time and does not require GPU-based processing, so it can be applied to various platforms including mobile devices such as Google Glass. The effectiveness and performance of the proposed method are verified through various experiments.
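
A structural sketch of the step-by-step scheme only; the three stage estimators are passed in as callables because the paper's actual estimation details are not reproduced here.

```python
# Hedged sketch: the consecutive palm -> finger-yaw -> finger-pitch estimation order.
def track_hand(frame, estimate_palm, estimate_yaw, estimate_pitch):
    # Stage 1: palm pose under the planar-hand assumption.
    palm_pose = estimate_palm(frame)
    # Stage 2: finger yaw (sideways spread), conditioned on the palm pose.
    yaw = estimate_yaw(frame, palm_pose)
    # Stage 3: finger pitch (bending), conditioned on the earlier stages.
    pitch = estimate_pitch(frame, palm_pose, yaw)
    return palm_pose, yaw, pitch   # together covering the hand's 21 DoF
```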