• Title/Summary/Keyword: Monocular Camera

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young; Ahn, Sang-Tae; Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a map-based localization procedure for mobile robots that uses a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities is advantageous because they complement and cooperate with each other to yield better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization using monocular vision, the robot extracts vertical edge lines from input camera images and uses them as natural landmark points. With the laser structured light sensor, it uses geometrical features, corners and planes extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
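
A minimal sketch of the kind of Bayesian fusion step the abstract describes, assuming each sensor's pose estimate is summarized as a Gaussian whose covariance encodes its reliability; the paper's experimentally predefined reliability functions are not reproduced here, and all names and numbers are illustrative:

```python
import numpy as np

def fuse_gaussian_estimates(x_cam, P_cam, x_laser, P_laser):
    """Fuse two independent Gaussian pose estimates (x, y, theta), weighting
    each sensor by its inverse covariance. This is a generic stand-in for
    the paper's reliability-function-based fusion, which was calibrated
    experimentally."""
    I_cam = np.linalg.inv(P_cam)                 # information = inverse covariance
    I_laser = np.linalg.inv(P_laser)
    P_fused = np.linalg.inv(I_cam + I_laser)
    x_fused = P_fused @ (I_cam @ x_cam + I_laser @ x_laser)
    return x_fused, P_fused

# Illustrative: vision is confident in heading, the laser sensor in range.
x_cam, P_cam = np.array([1.02, 2.10, 0.31]), np.diag([0.20, 0.20, 0.01])
x_laser, P_laser = np.array([0.98, 2.00, 0.35]), np.diag([0.02, 0.02, 0.10])
print(fuse_gaussian_estimates(x_cam, P_cam, x_laser, P_laser))
```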

Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map (GPS와 단안카메라, HD Map을 이용한 도심 도로상에서의 위치측정 및 맵핑 정확도 향상 방안)

  • Kim, Young-Hun; Kim, Jae-Myeong; Kim, Gi-Chang; Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1095-1109 / 2021
  • The technology used to recognize the location and surroundings of an autonomous vehicle is called SLAM. SLAM stands for Simultaneous Localization and Mapping; it originated in robotics research and has recently been actively applied to autonomous vehicles. Expensive GPS, INS, LiDAR, RADAR, and wheel odometry allow precise positioning and mapping at the centimeter level. If similar accuracy can be secured using cheaper cameras and GPS data, however, it will help advance the era of autonomous driving. In this paper, we present a method that fuses a monocular camera with RTK-enabled GPS data to achieve localization and mapping with an RMSE of 33.7 cm on urban roads.
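
The paper does not detail its fusion pipeline, but a common building block when combining a scale-ambiguous monocular trajectory with RTK GPS fixes is a similarity (Umeyama) alignment that recovers the metric scale; a sketch under that assumption:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Similarity alignment: find scale s, rotation R, translation t with
    dst_i ≈ s * R @ src_i + t, from corresponding (N, 3) position arrays,
    e.g. a monocular visual trajectory (src) and RTK GPS fixes (dst).
    This is an assumed building block, not the paper's stated method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src          # metric scale for the mono track
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical use: vo_traj, gps_traj are (N, 3) arrays of matched positions.
# s, R, t = umeyama_alignment(vo_traj, gps_traj)
```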

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps (천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM)

  • Hwang, Seo-Yeon; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; the orientation is obtained by calculating the principal axis of the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
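
The lamp-orientation step, estimating the principal axis from the pixel distribution of the lamp area, can be sketched as a small eigen-decomposition of the pixel covariance; this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def lamp_orientation(mask):
    """Estimate the principal-axis orientation (radians, image coordinates)
    of a lamp region given a binary mask (2D array, nonzero = lamp pixels)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                  # center the pixel cloud
    cov = pts.T @ pts / len(pts)             # 2x2 covariance of pixel coords
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major = eigvecs[:, -1]                   # eigenvector of largest eigenvalue
    return np.arctan2(major[1], major[0])    # angle of the principal axis

# Illustrative: a synthetic elongated lamp tilted ~45 degrees.
mask = np.zeros((50, 50), dtype=np.uint8)
for i in range(30):
    mask[10 + i, 10 + i] = 1
print(np.degrees(lamp_orientation(mask)))    # ~45
```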

Multi-focus 3D Display (다초점 3차원 영상 표시 장치)

  • Kim, Seong-Gyu; Kim, Dong-Uk; Gwon, Yong-Mu; Son, Jeong-Yeong
    • Proceedings of the Optical Society of Korea Conference / 2008.07a / pp.119-120 / 2008
  • An HMD-type multi-focus 3D display system is developed, and its ability to satisfy eye accommodation is tested. Four LEDs (Light Emitting Diodes) and a DMD are used to generate four parallax images for a single eye, and the system contains no mechanical parts. Multi-focus here means providing monocular depth cues over various depth levels. By achieving a multi-focus function, we developed a single-eye 3D display system that can satisfy accommodation to displayed virtual objects within a defined depth range. Focus adjustment was possible at five depth steps in sequence within a 2 m depth range for one eye. Additionally, the degree of blurring depending on the focusing depth was examined using photos and video captured by a camera and several subjects. The HMD-type multi-focus 3D display can be applied to monocular 3D displays and monocular AR 3D displays.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho; Jeon, Dae-Seong; Yun, Yeong-U
    • The KIPS Transactions: Part B / v.8B no.5 / pp.549-556 / 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element. Manually creating such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts depth information for every region from a monocular image sequence with camera translation, so that 3D video can be implemented in real time. The work is based on the property that the motion of every point within an image taken under camera translation depends on that point's depth. Full-search motion estimation based on a block matching algorithm is exploited in the first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing monocular motion pictures and computes the average depth of each frame along with the depth of each region relative to that average. Simulation results show that whether a region's depth belongs to a near or a distant object accords with the relative depth perceived by the human visual system.

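A toy version of the first step the abstract names, full-search block matching, with relative depth taken as inversely proportional to motion magnitude under camera translation; the paper's compensation for camera rotation and zoom is omitted:

```python
import numpy as np

def block_matching_depth(prev, curr, block=8, search=4):
    """Toy full-search block matching between two grayscale frames taken
    under lateral camera translation. Relative depth is taken as inversely
    proportional to each block's motion magnitude (near objects move more)."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    h, w = prev.shape
    depth = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):      # exhaustive search window
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(ref - curr[yy:yy + block, xx:xx + block]).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mag = np.hypot(*best_mv)
            depth[by, bx] = 1.0 / mag if mag > 0 else 0.0  # 0 = no measurable motion
    return depth
```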

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Hwang, Seung-Jun; Park, Sung-Jun; Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology in 3D map generation for the autonomous driving of vehicles, robots, and drones. Existing sensor-based methods have high accuracy but are expensive and offer low resolution, while camera-based methods are more affordable and provide higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction, and its weight size is reduced to keep the computational cost low. The estimated depth and camera pose are converted into a point cloud, which is mapped into a 3D map using an octree-structured occupancy grid. The proposed network is evaluated on synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.
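
For reference, the plain single-head self-attention operation the paper builds on; the paper's weight-size reduction is not shown, and all shapes are illustrative:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Plain single-head self-attention over a flattened feature map.
    x: (N, C) array of N spatial positions with C channels;
    Wq, Wk, Wv: (C, d) projection matrices (illustrative, untrained)."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # pairwise position similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                            # global context at every position

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 32))                 # e.g. an 8x8 map with 32 channels
Wq, Wk, Wv = (rng.standard_normal((32, 16)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)        # (64, 16)
```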

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei; Guan, Fang-li; Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance to an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experiments show that, for a given abscissa, the ordinates of image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The perpendicular distance from the target object to the optical axis is then calculated according to the camera's imaging principle, and the range is derived from the depth and that perpendicular distance. Experimental results show that this method achieves higher accuracy than methods based on binocular vision: the mean relative error of the depth measurement is 0.937% within 3 m and 1.71% at 3-10 m. Compared with other monocular methods, it requires no calibration before ranging and avoids the error caused by data fitting.
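
The depth extraction model lends itself to a short sketch: fit the assumed linear relation between image ordinate and imaging angle from reference points, then recover ground-plane depth from the camera height; all numbers are hypothetical:

```python
import numpy as np

def fit_angle_model(ordinates, angles_rad):
    """Fit the assumed linear relation theta = a*v + b from reference
    ('conjugate') points whose imaging angles are known."""
    a, b = np.polyfit(ordinates, angles_rad, 1)
    return a, b

def ground_depth(v, a, b, camera_height):
    """Depth of a ground-plane point from its image ordinate v, taking the
    imaging angle as the depression below the horizontal (simplified)."""
    theta = a * v + b
    return camera_height / np.tan(theta)

# Hypothetical calibration: two reference marks at known depression angles.
a, b = fit_angle_model(np.array([400.0, 700.0]), np.radians([10.0, 25.0]))
print(ground_depth(550.0, a, b, camera_height=1.5))  # ~4.8 m
```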

Development of monocular video deflectometer based on inclination sensors

  • Wang, Shuo; Zhang, Shuiqiang; Li, Xiaodong; Zou, Yu; Zhang, Dongsheng
    • Smart Structures and Systems / v.24 no.5 / pp.607-616 / 2019
  • The video deflectometer based on digital image correlation is a non-contact optical measurement method that has become a useful tool for characterizing the vertical deflections of large structures. In this study, a novel imaging model is established that accounts for the variation of pitch angle across the full image, allowing deflection measurement over a wide range of working distances with high accuracy. A monocular video deflectometer has accordingly been developed with an inclination sensor, which enables dynamic determination of the orientation and rotation of the camera's optical axis; this layout is more convenient than theodolite-based video deflectometers. Experiments demonstrate the accuracy of the new imaging model and the performance of the monocular video deflectometer in outdoor applications. Finally, the equipment has been applied to real-time measurement of the vertical deflection of Yingwuzhou Yangtze River Bridge at a distance of hundreds of meters; the results show good agreement with the embedded GPS outputs.
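
The paper's imaging model is not reproduced here, but a simplified pinhole version of the pitch-corrected conversion from a DIC-tracked pixel displacement to a vertical deflection looks roughly like this, with all parameters hypothetical:

```python
import numpy as np

def vertical_deflection(d_pix, pixel_size, distance, focal_len, pitch_rad):
    """Convert a DIC-tracked image displacement (pixels) into a vertical
    deflection under simplified pinhole geometry, correcting for the camera
    pitch reported by the inclination sensor. The paper's full model, which
    handles pitch variation across the image, is not reproduced here."""
    d_img = d_pix * pixel_size                   # displacement on the sensor (m)
    delta_perp = d_img * distance / focal_len    # displacement normal to optical axis
    return delta_perp / np.cos(pitch_rad)        # project back to world vertical

# Hypothetical numbers: 3.2 px shift, 4.8 um pixels, 200 m range, 300 mm lens.
print(vertical_deflection(3.2, 4.8e-6, 200.0, 0.3, np.radians(2.0)))  # ~0.01 m
```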

A Study of Depth Estimate using GPGPU in Monocular Image (GPGPU를 이용한 단일 영상에서의 깊이 추정에 관한 연구)

  • Yoo, Tae Hoon; Lee, Gang Seong; Park, Young Soo; Lee, Jong Yong; Lee, Sang Hun
    • Journal of Digital Convergence / v.11 no.12 / pp.345-352 / 2013
  • In this paper, a depth estimation method for monocular images using the GPU (Graphics Processing Unit) is proposed. A monocular image is a 2D image whose 3D depth information has been lost through camera projection, and we use monocular cues to recover it. The proposed algorithm uses an energy function that combines a variety of cues to create a more generalized and reliable depth map. However, the processing time is long because the energy function is defined over various monocular cues, so we propose a depth estimation method using GPGPU (General-Purpose computing on Graphics Processing Units). The objective effectiveness of the algorithm is demonstrated using PSNR (Peak Signal-to-Noise Ratio), and the processing time is reduced by 61.22%.
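
The paper does not specify its energy function; as a sketch, here is a per-pixel weighted combination of normalized monocular cue maps, the kind of embarrassingly parallel computation that benefits from a GPGPU port:

```python
import numpy as np

def depth_from_cues(cue_maps, weights):
    """Combine per-pixel monocular cue maps (each (H, W), normalized to
    [0, 1]) into a depth map via a weighted sum. The paper's actual energy
    function is not reproduced; the point is that the per-pixel computation
    is independent, which is what a GPGPU port parallelizes."""
    energy = np.zeros_like(cue_maps[0])
    for cue, w in zip(cue_maps, weights):
        energy += w * cue
    return energy / sum(weights)

h, w = 120, 160
rng = np.random.default_rng(1)
vertical = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))  # lower pixels nearer
texture = rng.random((h, w))                                   # stand-in texture cue
depth = depth_from_cues([vertical, texture], [0.8, 0.2])
print(depth.shape, float(depth.min()), float(depth.max()))
```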

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu; Kim, Jeong-ho; Min, Chan-oh; Han, Dong-in; Cho, Kyeum-rae; Lee, Dae-woo; Seong, Kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are estimated from the image using the KLT feature point tracker and the POSIT algorithm. To verify this vision processing algorithm, a field test was performed using two light sport aircraft, and the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies under user commands, the follower tracks it using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader better than guidance without it, with mean absolute errors in relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
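
A sketch of the measurement step using OpenCV: KLT tracking via calcOpticalFlowPyrLK, with solvePnP standing in for POSIT (which modern OpenCV no longer ships); model_pts is a hypothetical known 3D layout of the tracked points on the leader airframe:

```python
import cv2
import numpy as np

def track_and_estimate_pose(prev_img, curr_img, prev_pts, model_pts, K):
    """Track leader feature points with the KLT tracker, then estimate the
    leader's relative pose. solvePnP is used here as a stand-in for POSIT.

    prev_pts: (N, 1, 2) float32 pixel locations in prev_img.
    model_pts: (N, 3) float32 leader-frame coordinates of the same points
               (hypothetical, assumed known from the leader's geometry).
    K: (3, 3) camera intrinsic matrix."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img,
                                                   prev_pts, None)
    good = status.ravel() == 1                    # keep successfully tracked points
    ok, rvec, tvec = cv2.solvePnP(model_pts[good], curr_pts[good], K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return curr_pts, rvec, tvec   # leader rotation (Rodrigues vector) and translation
```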