• Title/Summary/Keyword: Monocular


A Study on Real-Time Localization and Map Building of Mobile Robot using Monocular Camera (단일 카메라를 이용한 이동 로봇의 실시간 위치 추정 및 지도 작성에 관한 연구)

  • Jung, Dae-Seop;Choi, Jong-Hoon;Jang, Chul-Woong;Jang, Mun-Suk;Kong, Jung-Shik;Lee, Eung-Hyuk;Shim, Jae-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.536-538
    • /
    • 2006
  • The most important capabilities of a mobile robot are building a map of its surrounding environment and estimating its own location. This paper proposes a real-time localization and map-building method based on 3-D reconstruction of scale-invariant features from a monocular camera. A mobile robot carrying a monocular camera facing the wall extracts scale-invariant features from each image using SIFT (Scale Invariant Feature Transform) as it follows the wall. The extracted features are matched, transformed into absolute coordinates through 3-D point reconstruction and geometrical analysis of the surrounding environment, and stored in a map database as a feature map. After the feature map is built, the robot matches points against the stored feature map and estimates its pose in real time from affine parameters. The maximum position error of the proposed method was 8 cm, and the angle error was within $10^{\circ}$.

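The matching step described in this abstract can be sketched with a plain nearest-neighbour ratio test on feature descriptors (a minimal illustration, not the authors' implementation; the descriptor values below are synthetic stand-ins for 128-D SIFT descriptors):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match two descriptor sets with Lowe's nearest-neighbour ratio test.

    desc_a, desc_b: (N, D) and (M, D) arrays of feature descriptors.
    Returns a list of (i, j) index pairs of accepted matches."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor i to every descriptor in the other set
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the second best
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)) if False else (i, j))
    return [(i, int(j)) for i, j in matches]

# Toy example: the second set is a shuffled, slightly noisy copy of the first
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))
perm = [2, 0, 4, 1, 3]
b = a[perm] + 0.01 * rng.normal(size=(5, 8))
print(match_features(a, b))  # each a[i] is matched to its noisy copy in b
```

In a real pipeline the descriptors would come from a SIFT detector and the accepted pairs would feed the 3-D reconstruction stage.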

Involvement of GABAergic Mechanism in the Plasticity Phenomenon of Chicken (닭의 Plasticity 현상에서 GABAergic 기작의 관련)

  • 김명순
    • The Korean Journal of Zoology
    • /
    • v.33 no.2
    • /
    • pp.133-140
    • /
    • 1990
  • In monocular vision, head and eye optokinetic nystagmus (OKN) displays directional asymmetry in lower vertebrates such as chickens, with T-N stimulation being more efficient in evoking this visuomotor reflex than N-T stimulation. The N-T component of monocular OKN is significantly weaker in chickens. Coil recordings and observation showed that in adult chickens, prolonged monocular visual deprivation by unilateral eyelid suture provoked a significant and progressive increase of the N-T component. This plasticity phenomenon involved both eye and head OKN. Administration of THIP, a GABA agonist, reversibly abolished the increase of the N-T component. This suggests that the GABAergic system could be involved in determining this plasticity phenomenon observed in adult lower vertebrates.


Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.155-170
    • /
    • 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for a given abscissa, the ordinates of the image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experimental results show that this method achieves higher ranging accuracy than methods based on binocular vision systems: the mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% at 3-10 m. Compared with other methods based on monocular vision systems, this method does not require calibration before ranging and avoids errors caused by data fitting.
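The depth extraction model described here (a linear ordinate-angle map fitted from conjugate points, then depth from the camera height and the imaging angle) can be sketched as follows; the calibration values are hypothetical, not taken from the paper:

```python
import math

def fit_angle_model(y1, a1, y2, a2):
    """Fit the linear map angle = k * y + b from two conjugate points whose
    image ordinates (y, pixels) and actual imaging angles (a, radians) are
    known. The linear ordinate-angle relation is the paper's stated premise."""
    k = (a2 - a1) / (y2 - y1)
    b = a1 - k * y1
    return k, b

def depth_from_ordinate(y, k, b, cam_height):
    """Depth of a ground-plane point: the ray to the point dips below the
    horizontal by the imaging angle, so depth = camera height / tan(angle)."""
    angle = k * y + b
    return cam_height / math.tan(angle)

# Hypothetical calibration: two conjugate points with known imaging angles
k, b = fit_angle_model(100, 0.2, 400, 0.5)
print(depth_from_ordinate(250, k, b, cam_height=1.5))  # depth in metres
```

The horizontal offset to the optical axis would then be recovered from the pinhole model and combined with this depth to give the range.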

VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle (무인수상선의 단일 카메라를 이용한 VFH+ 기반 장애물 회피 기법)

  • Kim, Taejin;Choi, Jinwoo;Lee, Yeongjun;Choi, Hyun-Taek
    • Journal of Ocean Engineering and Technology
    • /
    • v.30 no.5
    • /
    • pp.426-430
    • /
    • 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and researched for various fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of an unmanned surface vehicle. After a polar histogram is created using VFH+, an open space free of histogram peaks is selected as the moving direction. Instead of distance sensor data, monocular vision data are used to build the polar histogram, which encodes obstacle information. Because the method is intended for USVs, any object on the water is treated as an obstacle. The results of a simulation with sea images verify that the moving direction changes according to the positions of objects.
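The VFH+ selection step described above can be sketched as binning obstacle bearings into polar-histogram sectors and steering toward the free sector nearest the goal direction (a minimal single-stage illustration; full VFH+ also applies vehicle-width enlargement and hysteresis thresholds):

```python
import numpy as np

def choose_heading(obstacle_bearings, goal_bearing, n_sectors=36, threshold=0):
    """VFH+-style heading selection.

    obstacle_bearings: bearings (degrees, 0-360) of detected obstacles.
    Returns the centre (degrees) of the free sector closest to the goal."""
    hist = np.zeros(n_sectors)
    width = 360.0 / n_sectors
    for b in obstacle_bearings:
        hist[int(b % 360 // width)] += 1          # polar obstacle density
    centers = (np.arange(n_sectors) + 0.5) * width
    free = np.where(hist <= threshold)[0]          # open sectors
    # angular distance to the goal, wrapped to [-180, 180)
    diff = (centers[free] - goal_bearing + 180) % 360 - 180
    return centers[free[np.argmin(np.abs(diff))]]

# Obstacles ahead-left block sectors 0 and 1; the goal is at 12 degrees
print(choose_heading([5, 15], 12))  # steers to the nearest open sector
```

In the paper's setting the obstacle bearings would come from objects detected on the water in the monocular image rather than from a distance sensor.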

Development of monocular video deflectometer based on inclination sensors

  • Wang, Shuo;Zhang, Shuiqiang;Li, Xiaodong;Zou, Yu;Zhang, Dongsheng
    • Smart Structures and Systems
    • /
    • v.24 no.5
    • /
    • pp.607-616
    • /
    • 2019
  • The video deflectometer based on digital image correlation is a non-contact optical measurement method that has become a useful tool for characterizing the vertical deflections of large structures. In this study, a novel imaging model is established that considers the variation of pitch angle across the full image. The new model allows deflection measurement over a wide range of working distances with high accuracy. A monocular video deflectometer has accordingly been developed with an inclination sensor, which facilitates dynamic determination of the orientation and rotation of the camera's optical axis. This layout is more convenient than video deflectometers based on theodolites. Experiments are presented to show the accuracy of the new imaging model and the performance of the monocular video deflectometer in outdoor applications. Finally, the equipment was applied to real-time measurement of the vertical deflection of Yingwuzhou Yangtze River Bridge at a distance of hundreds of meters. The results show good agreement with the embedded GPS outputs.
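The digital image correlation at the core of such a deflectometer can be sketched as matching a small subset between frames by maximizing zero-normalized cross-correlation (a minimal integer-pixel illustration with a synthetic image; real instruments add subpixel interpolation and, in this paper, the inclination-based imaging model):

```python
import numpy as np

def ncc_displacement(ref, cur, y, x, half=3, search=5):
    """Find the integer-pixel displacement of a subset centred at (y, x)
    by maximizing zero-normalized cross-correlation over a search window."""
    sub = ref[y-half:y+half+1, x-half:x+half+1].astype(float)
    sub = (sub - sub.mean()) / (sub.std() + 1e-12)
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y+dy-half:y+dy+half+1, x+dx-half:x+dx+half+1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = (sub * cand).mean()            # correlation coefficient
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv

# Synthetic rigid shift of (2, -1) pixels between two frames
rng = np.random.default_rng(1)
ref = rng.normal(size=(40, 40))
cur = np.roll(ref, (2, -1), axis=(0, 1))
print(ncc_displacement(ref, cur, 20, 20))  # recovers the shift
```

The measured pixel displacement would then be converted to a physical deflection through the imaging model, which is where the pitch-angle correction enters.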

Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.4
    • /
    • pp.357-362
    • /
    • 2019
  • This paper proposes a model and training method for real-time object detection and distance estimation from a monocular camera using deep learning. We used the YOLOv2 model, which is widely applied to autonomous vehicles and robots because of its fast image processing speed. We modified and retrained the loss function so that the YOLOv2 model can detect objects and estimate distances at the same time: a term for learning a distance value z was added alongside the bounding box values x, y, w, h and the classification losses. In addition, the distance term was multiplied by a parameter to balance learning. We trained the model with object locations and classes from camera images and with distance data measured by lidar, so that it can estimate objects and distances from a monocular camera even when the vehicle is going up or down a hill. To evaluate the performance of object detection and distance estimation, mAP (mean Average Precision) and adjusted R-squared were used, and the performance was compared with previous research. In addition, we compared the FPS (frames per second) of the original YOLOv2 model with that of our model to measure speed.
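The extended regression loss described above can be sketched as adding a weighted squared-error distance term next to the usual box terms (an illustrative simplification: the weight value is hypothetical, and YOLOv2's full loss also includes objectness and classification terms over a grid of anchors):

```python
import numpy as np

def box_distance_loss(pred, target, dist_weight=0.1):
    """Squared-error loss over [x, y, w, h, z] for one responsible box.

    The distance term z is scaled by dist_weight to balance learning
    against the box terms, as the abstract describes; the value 0.1 is
    an illustrative choice, not the paper's parameter."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    box_loss = np.sum((pred[:4] - target[:4]) ** 2)      # x, y, w, h terms
    dist_loss = dist_weight * (pred[4] - target[4]) ** 2  # added z term
    return box_loss + dist_loss

# Perfect box, distance off by 2 m: only the weighted z term contributes
print(box_distance_loss([0, 0, 1, 1, 5], [0, 0, 1, 1, 3]))
```

During training the z targets would come from lidar range measurements associated with each labelled box.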

Designing Vision Experiment Using Active-Shutter Glasses System (보급형 액티브 셔터 방식 안경을 이용한 시각 실험 설계)

  • Kang, Hae-In;Hyun, Joo-Seok
    • Science of Emotion and Sensibility
    • /
    • v.15 no.4
    • /
    • pp.477-488
    • /
    • 2012
  • The effort to implement realistic 3-D depth on 2-D images has continued persistently, together with a theoretical understanding of depth perception and related technical development. This article briefly reviews a number of popular stereoscopes for studying stereoscopic depth perception according to their implementation principles, and introduces a behavioral experiment using active-shutter glasses as a technical example. In the present study, participants' visual memory for perceived depth among a set of items was tested. The depth of the memory and test items was manipulated to be 1) monocular, 2) binocular, or 3) both monocular and binocular. Memory performance was worst in the binocular-depth condition and best in the both-monocular-and-binocular condition. These results indicate that visual memory may benefit more from monocular depth than from stereoscopic depth, and further suggest that storing depth information in visual memory requires both binocular and monocular information for optimal memory performance.


Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.16 no.4
    • /
    • pp.581-589
    • /
    • 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and the position and attitude are measured from the image using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and our experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance verification of the proposed formation flight technology was carried out using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies according to user commands, the follower tracks it using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader aircraft better than guidance without it, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
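The PI control law mentioned above has the standard form shown below (a generic single-axis sketch with illustrative gains, not the paper's tuned controller); in the simulation one such loop would act on each axis of the vision-measured relative position error:

```python
class PIController:
    """Minimal discrete PI law: output = Kp * e + Ki * integral(e).

    Gains kp, ki and the sample time dt are illustrative placeholders."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        # Accumulate the integral of the tracking error, then apply the law
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# One axis of the follower's position loop: a persistent 1 m error
pid = PIController(kp=2.0, ki=0.5, dt=0.1)
print(pid.update(1.0))  # proportional term plus a growing integral term
```

The integral term is what removes the steady-state offset a pure proportional law would leave when the leader holds a constant speed.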

Landmark Initialization for Unscented Kalman Filter Sensor Fusion in Monocular Camera Localization

  • Hartmann, Gabriel;Huang, Fay;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.1
    • /
    • pp.1-11
    • /
    • 2013
  • Determining the pose of the imaging camera is a fundamental problem in computer vision. In the monocular case, the unknown scene scale and the limitation to bearing-only measurements make accurate camera pose estimation difficult. Many mobile phones now contain inertial measurement devices, which may aid the task of determining camera pose. In this study, by means of simulation and real-world experimentation, we explore an approach to monocular camera localization that incorporates both observations of the environment and measurements from accelerometers and gyroscopes. An unscented Kalman filter was implemented for this task. Our main contribution is a novel approach to landmark initialization in a Kalman filter; we characterize the noise tolerance that this approach allows.
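For context, a common way to initialize a landmark from a single bearing-only observation in monocular filters is the inverse-depth parameterization sketched below; this is a standard textbook scheme, not the paper's specific contribution, and the values are illustrative:

```python
import numpy as np

def init_landmark(cam_pos, bearing, rho0=0.1, sigma_rho=0.5):
    """Initialize a 2-D landmark in inverse-depth form from one bearing.

    Returns the state [x0, y0, theta, rho] and its covariance: the camera
    anchor point, the observed bearing, and an inverse depth rho whose
    large variance encodes the unknown monocular scale."""
    state = np.array([cam_pos[0], cam_pos[1], bearing, rho0])
    cov = np.diag([1e-4, 1e-4, 1e-4, sigma_rho ** 2])
    return state, cov

def landmark_point(state):
    """Recover the Euclidean landmark position from the inverse-depth state."""
    x0, y0, theta, rho = state
    return np.array([x0 + np.cos(theta) / rho, y0 + np.sin(theta) / rho])

# A landmark seen straight ahead, initialized at an assumed inverse depth
state, cov = init_landmark((0.0, 0.0), 0.0, rho0=0.5)
print(landmark_point(state))
```

Subsequent filter updates (here, of an unscented Kalman filter fusing inertial measurements) would shrink the inverse-depth variance as parallax accumulates.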

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps (천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM)

  • Hwang, Seo-Yeon;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.2
    • /
    • pp.164-170
    • /
    • 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method that uses both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamps are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose. The orientation is obtained by calculating the principal axis of the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
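The principal-axis computation described above can be sketched as taking the dominant eigenvector of the covariance of the lamp blob's pixel coordinates (equivalent to using second-order image moments; the pixel list below is synthetic):

```python
import numpy as np

def lamp_orientation(pixels):
    """Orientation of a lamp blob: the principal axis of its pixel
    distribution, i.e. the eigenvector of the pixel covariance matrix
    with the largest eigenvalue.

    pixels: (N, 2) array of (row, col) coordinates of lamp pixels.
    Returns the axis angle in radians, folded into [0, pi)."""
    pts = np.asarray(pixels, float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, np.argmax(vals)]          # principal eigenvector
    return np.arctan2(v[1], v[0]) % np.pi  # axis direction is sign-ambiguous

# An elongated lamp lying along the 45-degree diagonal
print(lamp_orientation([(i, i) for i in range(10)]))
```

In the SLAM filter this angle would accompany the lamp's position as part of the landmark observation, constraining the robot's heading as well as its position.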