• Title/Summary/Keyword: Monocular

Search Results: 237

Study of Accomodation-lag using Monocular Estimation Method(MEM) (단안평가법(MEM)을 이용한 조절지체에 대한 연구)

  • Park, Eun-kyoo;Seo, Jung-Ick
    • Journal of the Korea society of information convergence / v.6 no.2 / pp.51-55 / 2013
  • Accommodation allows the eye to see near objects, and its characteristics differ from individual to individual. A difference also arises between the theoretical and the actual accommodative response; this difference is the accommodative lag. Depth of focus directly affects the accommodative lag and is in turn affected by refractive power and pupil size: depth of focus becomes deeper as the pupil becomes smaller and the refractive power increases, and a deeper depth of focus produces more accommodative lag. In this paper, the relationship between accommodative lag and refractive power was studied, and the Monocular Estimation Method (MEM) was used to measure the accommodative lag. The results measured by MEM showed that the accommodative lag tends to increase as the refractive power increases. The mean accommodative lag was 0.51 D; men measured 0.52 D and women 0.49 D. For both genders, the amount of accommodative lag also tended to increase as refractive power increased.

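As a rough illustration of how the reported lag relates to accommodative demand and response (assuming the common near test distance of about 40 cm, which the abstract does not state), the 0.51 D figure can be read as follows:

```latex
% Accommodative demand at a near test distance d (in meters): demand = 1/d.
% Accommodative lag = demand - actual accommodative response.
\[
\text{demand} = \frac{1}{0.40\,\text{m}} = 2.50\,\text{D}, \qquad
\text{lag} = \text{demand} - \text{response}
\;\Rightarrow\;
\text{response} \approx 2.50\,\text{D} - 0.51\,\text{D} = 1.99\,\text{D}.
\]
```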

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB / v.8B no.5 / pp.549-556 / 2001
  • The general problem of recovering 3D from 2D imagery requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and costly. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, so that 3D video can be implemented in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on its depth. Full-search motion estimation based on a block matching algorithm is exploited in the first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing the monocular motion picture and also calculates the average depth of each frame and the relative depth of each region with respect to that average. Simulation results show that the depth assigned to regions belonging to near or distant objects is in accord with the relative depth perceived by the human visual system.

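A minimal sketch of the core idea described above: full-search block matching followed by a depth proxy that is inversely proportional to motion magnitude under camera translation. Block size, search range, and the inverse-motion depth proxy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def full_search_block_matching(prev, curr, block=16, search=8):
    """Estimate one motion vector per block by exhaustive SAD search."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(np.float32)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(np.float32)
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors

def relative_depth_from_motion(vectors, eps=1e-3):
    """Under pure camera translation, larger apparent motion ~ nearer object."""
    mag = np.linalg.norm(vectors.astype(np.float32), axis=-1)
    depth = 1.0 / (mag + eps)      # depth proxy: inverse motion magnitude
    return depth / depth.mean()    # relative depth w.r.t. the frame average
```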

A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.5 / pp.422-428 / 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In many studies on unmanned systems, distance is measured using laser range finders or stereo vision sensors. Even though a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating the distance using a stereo vision sensor is straightforward; however, the sensor is large and heavy, which is not suitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method to measure distance using the suggested system is explained, and some experiments on map building are conducted with these distance measurements. The experimental results are compared to the actual data, and the reliability of the suggested system is verified.
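
The abstract does not give the measurement equation, but a common way to realize such a laser-pointer/camera setup is simple triangulation: with the laser mounted at a known baseline offset from the camera and, for simplicity here, parallel to the optical axis, the distance follows from the pixel offset of the detected laser dot. The sketch below is a hypothetical illustration under that assumption, not the paper's exact method.

```python
def distance_from_laser_dot(pixel_offset_px: float,
                            focal_length_px: float,
                            baseline_m: float) -> float:
    """Pinhole triangulation for a laser beam parallel to the optical axis.

    pixel_offset_px : offset (in pixels) of the detected laser dot from the
                      principal point, measured along the baseline direction.
    focal_length_px : camera focal length expressed in pixels.
    baseline_m      : camera-to-laser offset in meters.
    """
    if pixel_offset_px <= 0:
        raise ValueError("laser dot must be offset from the principal point")
    # Similar triangles: offset / f = baseline / distance
    return focal_length_px * baseline_m / pixel_offset_px

# Example: f = 800 px, baseline = 5 cm, dot observed 40 px from center
# -> distance = 800 * 0.05 / 40 = 1.0 m
print(distance_from_laser_dot(40.0, 800.0, 0.05))
```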

VFH-based Navigation using Monocular Vision (단일 카메라를 이용한 VFH기반의 실시간 주행 기술 개발)

  • Park, Se-Hyun;Hwang, Ji-Hye;Ju, Jin-Sun;Ko, Eun-Jeong;Ryu, Juang-Tak;Kim, Eun-Yi
    • Journal of Korea Society of Industrial Information Systems / v.16 no.2 / pp.65-72 / 2011
  • In this paper, a real-time monocular vision based navigation system is developed for disabled people, where online background learning and a vector field histogram (VFH) are used for identifying obstacles and recognizing avoidable paths. The proposed system is performed in three steps: obstacle classification, occupancy grid map generation, and VFH-based path recommendation. First, obstacles are discriminated in the images by subtraction against a background model that is learned in real time. Thereafter, based on the classification results, a 32×24 occupancy map is produced, each cell of which represents its own risk with 10 gray levels. Finally, a polar histogram is drawn from the occupancy map, and the sectors corresponding to its valleys are chosen as safe paths. To assess the effectiveness of the proposed system, it was tested with a variety of obstacles indoors and outdoors and showed an accuracy of 88%. Moreover, it showed superior performance when compared with sensor based navigation systems, which proves the feasibility of the proposed system as an assistive device for disabled people.
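
A minimal sketch of the VFH step described above: building a polar histogram from a 32×24 occupancy map and picking low-density sectors as candidate paths. The sector count, threshold, and the assumption that the camera sits at the bottom-center of the grid are illustrative choices, not the paper's parameters.

```python
import numpy as np

def vfh_safe_sectors(occupancy, n_sectors=18, threshold=0.2):
    """Build a polar obstacle histogram and return sectors forming 'valleys'.

    occupancy : 2-D array (e.g. 24x32), cell values in [0, 1] with 1 = blocked.
    Returns indices of sectors whose mean obstacle density is below threshold.
    """
    h, w = occupancy.shape
    origin = np.array([h - 1, w / 2.0])          # robot assumed at bottom-center
    hist = np.zeros(n_sectors)
    counts = np.zeros(n_sectors)
    for r in range(h):
        for c in range(w):
            dy, dx = origin[0] - r, c - origin[1]
            if dy <= 0:
                continue                          # only cells ahead of the robot
            angle = np.arctan2(dx, dy)            # 0 rad = straight ahead
            sector = int((angle + np.pi / 2) / np.pi * n_sectors)
            sector = int(np.clip(sector, 0, n_sectors - 1))
            hist[sector] += occupancy[r, c]
            counts[sector] += 1
    density = hist / np.maximum(counts, 1)
    return np.flatnonzero(density < threshold)

grid = np.zeros((24, 32)); grid[5:10, 20:28] = 1.0   # a synthetic obstacle
print(vfh_safe_sectors(grid))                        # sectors away from it remain safe
```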

An Approach to 3D Object Localization Based on Monocular Vision

  • Jung, Sung-Hoon;Jang, Do-Won;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1658-1667 / 2008
  • Reconstruction of 3D objects from a single-view image is generally an ill-posed problem because of projection distortion. A monocular vision based 3D object localization method is proposed in this paper, which approximates an object on the ground by a simple bounding solid and works automatically without any prior information about the object. A spherical or cylindrical object, determined based on a circularity measure, is approximated by a bounding cylinder, while other general free-shaped objects are approximated by a bounding box or a bounding cylinder as appropriate. For a general object, its silhouette on the ground is first computed by back-projecting its projected image in the image plane onto the ground plane, and then a base rectangle on the ground is determined using the intuition that the parts of the object touching the ground should appear at the lower part of the silhouette. The base rectangle is adjusted and extended until the bounding box derived from it encloses the general object sufficiently, and the height of the bounding box is likewise determined so that it encloses the object. When the general object looks like a round-shaped object, a bounding cylinder that minimally encloses the bounding box is selected instead of the bounding box. A bounding solid can be utilized to localize a 3D object on the ground and to roughly estimate its volume. The usefulness of our approach is demonstrated with experimental results on real image objects, and its limitations are discussed.

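One way to realize the silhouette back-projection step described above is a ground-plane homography. The sketch below assumes the image-to-ground homography H is already known from calibration and uses a plain axis-aligned rectangle for the base; the paper's actual derivation and rectangle fitting may differ.

```python
import numpy as np

def backproject_to_ground(points_px, H_img_to_ground):
    """Map image pixels onto the ground plane (Z = 0) with a 3x3 homography.

    points_px        : (N, 2) array of pixel coordinates of the silhouette.
    H_img_to_ground  : 3x3 homography from the image plane to the ground plane.
    Returns (N, 2) ground-plane coordinates.
    """
    pts = np.asarray(points_px, dtype=float)
    pts = np.hstack([pts, np.ones((len(pts), 1))])      # homogeneous coordinates
    ground = (H_img_to_ground @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]

def base_rectangle(ground_pts):
    """Axis-aligned base rectangle enclosing the back-projected silhouette."""
    mins, maxs = ground_pts.min(axis=0), ground_pts.max(axis=0)
    return mins, maxs   # two opposite corners of the base rectangle
```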

5D Light Field Synthesis from a Monocular Video (단안 비디오로부터의 5차원 라이트필드 비디오 합성)

  • Bae, Kyuho;Ivan, Andre;Park, In Kyu
    • Journal of Broadcast Engineering / v.24 no.5 / pp.755-764 / 2019
  • Currently available commercial light field cameras make it difficult to acquire 5D light field video, since they can capture only still images or are prohibitively expensive. To solve these problems, we propose a deep learning based method for synthesizing light field video from monocular video. To address the difficulty of obtaining light field video training data, we use UnrealCV to acquire synthetic light field data by realistic rendering of 3D graphics scenes and use it for training. The proposed deep learning framework synthesizes light field video with 9×9 sub-aperture images (SAIs) from the input monocular video. The proposed network consists of a network that predicts the appearance flow from the input image converted to a luminance image, and a network that predicts the optical flow between adjacent light field video frames obtained from the appearance flow.
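
As a rough illustration of the appearance-flow idea (warping the input view by a predicted per-pixel flow to form each sub-aperture image), a minimal bilinear warp is sketched below. The flow here is a placeholder for the network's prediction, not the paper's model.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_appearance_flow(image, flow):
    """Bilinearly sample `image` at positions displaced by `flow`.

    image : (H, W) luminance image.
    flow  : (H, W, 2) per-pixel displacement (dy, dx), e.g. a network output.
    """
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# A 9x9 light field would apply 81 such predicted flows to the central view.
```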

Unsupervised Monocular Depth Estimation Using Self-Attention for Autonomous Driving (자율주행을 위한 Self-Attention 기반 비지도 단안 카메라 영상 깊이 추정)

  • Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.182-189 / 2023
  • Depth estimation is a key technology in 3D map generation for autonomous driving of vehicles, robots, and drones. Existing sensor-based methods have high accuracy but are expensive and have low resolution, while camera-based methods are more affordable and provide higher resolution. In this study, we propose self-attention-based unsupervised monocular depth estimation for a UAV camera system. A self-attention operation is applied to the network to improve global feature extraction performance. In addition, we reduce the weight size of the self-attention operation to lower the computational cost. The estimated depth and camera pose are transformed into a point cloud, which is mapped into a 3D map using an occupancy grid with an Octree structure. The proposed network is evaluated using synthesized images and depth sequences from the Mid-Air dataset and demonstrates a 7.69% reduction in error compared to prior studies.
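
A minimal sketch of a spatial self-attention block with channel reduction, the general mechanism the abstract above refers to; the layer sizes, reduction factor, and residual gating are illustrative and not the paper's configuration.

```python
import torch
import torch.nn as nn

class ReducedSelfAttention2d(nn.Module):
    """Self-attention over spatial positions with reduced channels for Q/K."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key   = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)                 # (B, HW, C/r)
        k = self.key(x).flatten(2)                                   # (B, C/r, HW)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)   # (B, HW, HW)
        v = self.value(x).flatten(2)                                 # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                                  # residual connection

feat = torch.randn(1, 64, 16, 24)
print(ReducedSelfAttention2d(64)(feat).shape)   # torch.Size([1, 64, 16, 24])
```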

A Case of Monocular Partial Oculomotor Nerve Palsy in a Patient with Midbrain Hemorrhage (중뇌 출혈 환자에서 나타난 단안의 부분 동안신경마비 여환 치험 1례)

  • Lee, Hyun-Joong;Lee, Bo-Yun;Lee, Young-eun;Yang, Seung-Bo;Cho, Seung-Yeon;Park, Jung-Mi;Ko, Chang-Nam;Park, Seong-Uk
    • The Journal of the Society of Stroke on Korean Medicine / v.16 no.1 / pp.103-109 / 2015
  • This report describes a case of monocular partial oculomotor nerve palsy in a patient with midbrain hemorrhage. The patient developed diplopia while driving. Brain MRI demonstrated a hemorrhage in the right midbrain and left corona radiata, and microbleeds in both cerebral and cerebellar hemispheres, the basal ganglia, the midbrain, and the pons. We used Korean medicine treatment modalities including acupuncture, electroacupuncture, pharmacoacupuncture, and herbal medicines. As a result, the limitation of upward gaze recovered to about 90% of the normal range.


Repeatability of Monocular Spherical Endpoints Test (단안 구면 끝점검사의 반복성 검증)

  • Kim, Sang-Yeob;Moon, Byeong-Yeon;Cho, Hyun Gug
    • Journal of Korean Ophthalmic Optics Society / v.17 no.2 / pp.209-213 / 2012
  • Purpose: To assess the repeatability of monocular spherical endpoints, a test was performed with four methods: retinoscopy, the MPMVA (maximum plus maximum visual acuity) method, the R/G duochrome method, and the crossed cylinder method. Methods: The monocular spherical endpoints were measured by the four methods on 20 subjects (40 eyes), men and women with an average age of 23.0 years. After a week, a retest was performed using the same procedure, and test-retest repeatability was assessed using Bland-Altman plot analysis. Results: The test-retest mean difference was smallest for retinoscopy at -0.03 D and largest for the R/G duochrome method at -0.19 D. The upper/lower 95% limits of agreement for repeatability were narrowest for retinoscopy and widest for the crossed cylinder method. When the spherical endpoints of each eye obtained by retinoscopy were compared with those of the other three methods, the proportion of eyes with an error within ±0.25 D was 85% for the MPMVA method, 80% for the R/G duochrome method, and 24% for the crossed cylinder method. Conclusions: Test-retest repeatability was highest for retinoscopy, and retinoscopy, the MPMVA method, and the R/G duochrome method are suitable for the monocular spherical endpoints test.
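
The Bland-Altman analysis used above amounts to computing the mean test-retest difference and its 95% limits of agreement (mean ± 1.96 × SD of the differences). A minimal sketch with placeholder data, not the study's measurements:

```python
import numpy as np

def bland_altman_limits(test, retest):
    """Mean difference and 95% limits of agreement between two measurement sessions."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    diff = test - retest
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Placeholder spherical endpoints (in diopters) for a few eyes
test   = [-1.00, -2.25, -0.50, -3.00, -1.75]
retest = [-1.00, -2.00, -0.75, -3.00, -1.50]
print(bland_altman_limits(test, retest))   # (mean difference, lower LoA, upper LoA)
```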

High-Quality Depth Map Generation of Humans in Monocular Videos (단안 영상에서 인간 오브젝트의 고품질 깊이 정보 생성 방법)

  • Lee, Jungjin;Lee, Sangwoo;Park, Jongjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.20 no.2 / pp.1-11 / 2014
  • The quality of 2D-to-3D conversion depends on the accuracy of the depth assigned to scene objects. Manual depth painting for given objects is labor intensive, as each frame must be painted. In particular, a human is one of the most challenging objects for high-quality conversion, as the human body is an articulated figure with many degrees of freedom (DOF). In addition, various styles of clothes, accessories, and hair create a very complex silhouette around the 2D human object. We propose an efficient method to estimate visually pleasing depths of a human at every frame of a monocular video. First, a 3D template model is matched to the person in the monocular video using a small number of user-specified correspondences. Our pose estimation with sequential joint angular constraints reproduces a wide range of human motions (e.g., spine bending) by allowing the utilization of a fully skinned 3D model with a large number of joints and DOFs. The initial depth of the 2D object in the video is assigned from the matching results and then propagated toward areas where depth is missing to produce a complete depth map. For effective handling of complex silhouettes and appearances, we introduce a partial depth propagation method based on color segmentation to preserve the detail of the results. We compared our results with depth maps painted by experienced artists; the comparison shows that our method efficiently produces viable depth maps of humans in monocular videos.
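
A simplified illustration of propagating sparse depth within color segments, a stand-in for the partial depth propagation step described above; the real method refines depth per segment rather than using a plain per-segment average as done here.

```python
import numpy as np

def propagate_depth_by_segment(sparse_depth, labels):
    """Fill missing depth (NaN) with the mean known depth of each color segment.

    sparse_depth : (H, W) array with NaN where depth is unknown.
    labels       : (H, W) integer color-segmentation labels.
    """
    dense = sparse_depth.copy()
    for seg in np.unique(labels):
        mask = labels == seg
        known = sparse_depth[mask]
        known = known[~np.isnan(known)]
        if known.size:                                   # leave empty segments untouched
            dense[mask & np.isnan(sparse_depth)] = known.mean()
    return dense
```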