• Title/Summary/Keyword: Monocular vision


Monocular 3D Vision Unit for Correct Depth Perception by Accommodation

  • Hosomi, Takashi;Sakamoto, Kunio;Nomura, Shusaku;Hirotomi, Tetsuya;Shiwaku, Kuninori;Hirakawa, Masahito
    • Proceedings of the Korean Information Display Society Conference / 2009.10a / pp.1334-1337 / 2009
  • The human visual system has functions for viewing 3D images with correct depth: accommodation, vergence, and binocular stereopsis. Most 3D display systems rely on binocular stereopsis alone. The authors have developed a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.


Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for the same abscissa, the ordinates of image points are linearly related to their actual imaging angles. According to this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance from the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experimental results show that ranging by this method is more accurate than methods based on binocular vision systems: the mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% for 3-10 m. Compared with other monocular methods, this method does not require calibration before ranging and avoids the error caused by data fitting.
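
The depth-extraction idea described in the abstract — fit a linear map from image ordinate to imaging angle using two reference points, then convert the angle to depth via the camera height — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the calibration values and function names are assumptions.

```python
import math

def fit_angle_model(ref1, ref2):
    # ref1, ref2: (ordinate_px, imaging_angle_rad) of two conjugate reference points;
    # the abstract states ordinate and imaging angle are linearly related
    (v1, a1), (v2, a2) = ref1, ref2
    slope = (a2 - a1) / (v2 - v1)
    intercept = a1 - slope * v1
    return slope, intercept

def depth_from_ordinate(v, slope, intercept, cam_height_m):
    # angle below the horizontal at which the ground point is seen
    angle = slope * v + intercept
    # depth along the ground from the camera foot, by right-triangle geometry
    return cam_height_m / math.tan(angle)
```

For example, with reference points (100 px, 0.2 rad) and (300 px, 0.4 rad) and a camera 1.5 m above the ground, a point imaged at ordinate 200 px would be at angle 0.3 rad, giving a depth of about 4.85 m.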

Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, kie-jeong
    • International Journal of Aeronautical and Space Sciences / v.16 no.4 / pp.581-589 / 2015
  • This paper describes monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of a leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, and position and attitude are estimated from the image using the KLT feature point tracker and the POSIT algorithm. To verify the feasibility of this vision processing algorithm, a field test was performed using two light sport aircraft, and the results show that the proposed monocular vision-based measurement algorithm is feasible. Performance of the proposed formation flight technology was verified using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies according to user commands, the follower tracks it using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader better than guidance without it, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
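
The PI control law mentioned in the abstract drives the follower toward the vision-measured relative position. A minimal discrete-time PI controller for one axis might look like the following sketch; the gains and class name are illustrative, not taken from the paper.

```python
class PIController:
    """Discrete-time PI controller for one tracking axis (e.g. relative X)."""

    def __init__(self, kp, ki, dt):
        self.kp = kp          # proportional gain
        self.ki = ki          # integral gain
        self.dt = dt          # control period, seconds
        self.integral = 0.0   # accumulated integral of the error

    def update(self, error):
        # error: desired relative position minus measured relative position
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```

In a formation flight loop, one such controller per axis would convert the monocular relative-position error into a velocity or attitude command for the follower.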

VFH+ based Obstacle Avoidance using Monocular Vision of Unmanned Surface Vehicle (무인수상선의 단일 카메라를 이용한 VFH+ 기반 장애물 회피 기법)

  • Kim, Taejin;Choi, Jinwoo;Lee, Yeongjun;Choi, Hyun-Taek
    • Journal of Ocean Engineering and Technology / v.30 no.5 / pp.426-430 / 2016
  • Recently, many unmanned surface vehicles (USVs) have been developed and researched for fields such as the military, the environment, and robotics. To perform purpose-specific tasks, common autonomous navigation technologies are needed, and obstacle avoidance is essential for safe autonomous navigation. This paper describes a vector field histogram+ (VFH+) based obstacle avoidance method that uses the monocular vision of an unmanned surface vehicle. After creating a polar histogram with VFH+, an open space free of obstacles is selected in the moving direction. Instead of distance sensor data, monocular vision data are used to build the polar histogram, which encodes obstacle information. Because the method targets USVs, any object on the water is treated as an obstacle. Simulations with sea images verify that the moving direction changes according to the positions of objects.
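
The direction-selection step of VFH+ described above — build a polar histogram of obstacle density and steer toward the open sector closest to the goal — can be sketched as follows. This is a simplified illustration of the selection logic, not the paper's implementation; the threshold value is an assumption.

```python
def select_direction(histogram, target_idx, threshold=0.5):
    """Pick the open angular sector nearest the target direction.

    histogram: obstacle densities per angular sector (from vision, here);
    target_idx: index of the sector pointing toward the goal.
    Returns the chosen sector index, or None if every sector is blocked.
    """
    open_sectors = [i for i, h in enumerate(histogram) if h < threshold]
    if not open_sectors:
        return None  # no free space: caller must stop or replan
    return min(open_sectors, key=lambda i: abs(i - target_idx))
```

In the full VFH+ method the histogram is also smoothed and widened by the vehicle radius; this sketch keeps only the core "nearest open sector" choice.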

The Analysis of the P-VEP on the Normal Monocular Vision and Amblyopia in Binocular (앙안에서 정상 단안시와 약시안의 P-VEP 분석)

  • Kim, Douk-Hoon;Kim, Gyu-Su;Sung, A-Young;Park, Won-Hak
    • Journal of Korean Ophthalmic Optics Society / v.10 no.1 / pp.41-46 / 2005
  • The aim of this study was to analyze the P-VEP waveform for normal monocular vision and amblyopia under binocular viewing. P-VEPs from three channels were recorded with the Nicolet system. Five adult subjects (three males, two females; mean age 22 years, range 19 to 24) were recorded. Their histories were taken, including systemic health, medication, genetics, allergies, and ocular disease. Visual acuity and stereopsis were recorded for each subject monocularly and binocularly, and subjects viewed the P-VEP stimulus both monocularly and binocularly with corrected visual acuity while the VEP was recorded. The results suggest that binocular visual acuity is better than monocular acuity, and stereopsis was above about 140 sec of arc. The P-VEP analysis suggests that the wave amplitude is larger for monocular stimulation than for binocular stimulation, while the amplitude in amblyopic eyes was smaller than in normal monocular vision. The P-VEP latency was similar between the normal eye and binocular vision, but was longer in amblyopia than in normal monocular and binocular vision. In conclusion, binocular visual acuity was better than normal monocular acuity, but in the P-VEP test the wave amplitude for normal monocular vision was larger than for binocular vision, and amblyopic eyes showed low P-VEP amplitude and decreased visual acuity.


Monocular Vision based Relative Position Measurement of an Aircraft (단안카메라를 이용한 항공기의 상대 위치 측정)

  • Kim, Jeong-Ho;Lee, Chang-Yong;Lee, Mi-Hyun;Han, Dong-In;Lee, Dae-Woo
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.43 no.4 / pp.289-295 / 2015
  • This paper describes a ground-based monocular vision measurement algorithm that measures the relative range and position of an aircraft using its wingspan and the camera's optical parameters. A technique for obtaining the aircraft image is also described. This technique can be used as an external measurement for autonomous landing instead of ILS. To verify the performance of these algorithms, a flight experiment was performed using a light sport aircraft equipped with GPS and a monocular camera, yielding a reasonable RMSE of 1.85 m.
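
The wingspan-based range measurement described above follows from the pinhole camera model: for a known physical wingspan, range is proportional to focal length over the projected span in pixels. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
def estimate_range(wingspan_m, span_px, focal_px):
    """Range to the aircraft along the optical axis, by similar triangles.

    wingspan_m: known physical wingspan of the aircraft (metres)
    span_px:    projected wingspan in the image (pixels)
    focal_px:   camera focal length expressed in pixels
    """
    return focal_px * wingspan_m / span_px
```

For example, an 8.5 m wingspan projecting to 68 px through an 800 px focal length gives a range of 100 m. Lateral and vertical offsets can then be recovered from the target's image-plane displacement scaled by range over focal length.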

Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo;Kim, Hee Sung;Lee, Je Young;Choi, Kwang Ho;Kang, Sung Jin;Chun, Sebum;Lee, Hyung Keun
    • Journal of Positioning, Navigation, and Timing / v.3 no.1 / pp.1-10 / 2014
  • Recently, the monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues, as accidents involving them incur large costs in terms of the national economy and social welfare. To monitor accidents and respond promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases; however, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by a monocular vision sensor, and the possibility of using angular acceleration to detect accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated in an experiment based on actual measurements.
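
Once a heading angle has been extracted from each image frame, angular acceleration follows from two successive finite differences over the frame sequence. A hedged sketch of that step (the paper's actual estimator may filter the measurements; this shows only the raw differencing):

```python
def angular_acceleration(headings, dt):
    """Second finite difference of per-frame heading angles.

    headings: yaw angles (radians) measured from successive image frames
    dt:       frame interval (seconds)
    Returns a list of angular accelerations (rad/s^2), two shorter than input.
    """
    rates = [(headings[i + 1] - headings[i]) / dt
             for i in range(len(headings) - 1)]
    return [(rates[i + 1] - rates[i]) / dt
            for i in range(len(rates) - 1)]
```

A sustained spike in this signal is the kind of signature the abstract associates with jackknifing or rollover events.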

Monocular Vision and Odometry-Based SLAM Using Position and Orientation of Ceiling Lamps (천장 조명의 위치와 방위 정보를 이용한 모노카메라와 오도메트리 정보 기반의 SLAM)

  • Hwang, Seo-Yeon;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.164-170 / 2011
  • This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) method using both the position and orientation of ceiling lamps. Conventional approaches used corner or line features as landmarks in their SLAM algorithms, but these methods were often unable to achieve stable navigation due to a lack of reliable visual features on the ceiling. Since lamp features are usually placed at some distance from each other in indoor environments, they can be robustly detected and used as reliable landmarks. We use both the position and orientation of a lamp feature to accurately estimate the robot pose; its orientation is obtained by calculating the principal axis from the pixel distribution of the lamp area. Both corner and lamp features are used as landmarks in the EKF (Extended Kalman Filter) to increase the stability of the SLAM process. Experimental results show that the proposed scheme works successfully in various indoor environments.
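
The lamp-orientation step described above — the principal axis of the lamp's pixel distribution — is the dominant eigenvector of the pixel covariance matrix, whose angle has a closed form. A minimal sketch of that computation (the segmentation that produces the lamp pixels is assumed, not shown):

```python
import math

def lamp_orientation(pixels):
    """Angle (radians) of the principal axis of a lamp's pixel blob.

    pixels: iterable of (x, y) image coordinates belonging to the lamp.
    Uses the closed-form principal-axis angle of the 2x2 covariance matrix.
    """
    pts = list(pixels)
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts) / n
    syy = sum((y - my) ** 2 for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)
```

For an elongated fluorescent lamp this angle gives the extra heading constraint that a point landmark alone cannot provide.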

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous because they complement each other to yield better information about the environment. For robust self-localization of a mobile robot with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization using monocular vision, the robot extracts vertical edge lines from input camera images and uses them as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
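
At its core, Bayesian fusion of the two sensors multiplies the prior over pose hypotheses by each sensor's likelihood and renormalizes. The following is a deliberately simplified discrete sketch of that update (the paper works with continuous features and experimentally derived reliability functions, which are not reproduced here):

```python
def fuse(prior, lik_vision, lik_laser):
    """Bayesian update over a discrete set of pose hypotheses.

    prior:      prior probability of each pose hypothesis
    lik_vision: likelihood of the camera's edge-line observation per hypothesis
    lik_laser:  likelihood of the laser sensor's corner/plane observation
    Returns the normalized posterior.
    """
    post = [p * lv * ll for p, lv, ll in zip(prior, lik_vision, lik_laser)]
    total = sum(post)
    return [p / total for p in post]
```

Because the likelihoods multiply, a hypothesis must be consistent with both sensors to keep high posterior mass, which is the source of the robustness the abstract claims.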

A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.5 / pp.422-428 / 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost-effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In much research on unmanned systems, distance has been measured using laser range finders or stereo vision sensors. Although a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating distance with a stereo vision sensor is straightforward, but the sensor is large and heavy, which is unsuitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method for measuring distance with the suggested system is explained, and map building experiments are conducted with these distance measurements. The experimental results are compared to the actual data, verifying the reliability of the suggested system.
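
A common way to realize such a laser-pointer/monocular system is by triangulation: with the laser mounted parallel to the optical axis at a known baseline, the laser dot's pixel offset from the image center maps to an angle, and distance follows from the baseline and that angle. The sketch below follows this standard scheme under stated assumptions (parallel mounting, a linear pixel-to-angle calibration); it is not necessarily the exact geometry used in the paper.

```python
import math

def distance_from_dot(pixel_offset, baseline_m, rad_per_px, rad_offset=0.0):
    """Triangulated distance to the surface hit by the laser dot.

    pixel_offset: laser dot's offset from the image center (pixels)
    baseline_m:   separation between laser pointer and camera axis (metres)
    rad_per_px:   calibrated angle subtended per pixel (radians)
    rad_offset:   calibrated angular offset correction (radians)
    """
    theta = pixel_offset * rad_per_px + rad_offset
    return baseline_m / math.tan(theta)
```

Note the characteristic behavior: the dot moves fewer pixels per metre as distance grows, so resolution degrades at long range, which suits short-range indoor map building.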