• Title/Summary/Keyword: 단안 (monocular)

Search Results: 158

Possibility about evaluation test of Substitution of NPC Alteration for AC/A Ratio (폭주근점 변화의 AC/A비 대체 평가지표에 대한 가능성)

  • Yoo, Hyun;Lee, Eun-Hee
    • Journal of Digital Convergence, v.14 no.10, pp.375-380, 2016
  • This study evaluated whether changes in the near point of convergence (NPC) could serve as an index of binocular visual function. The subjects were 30 people (16 emmetropic, 14 myopic) who had no eye disease other than phoria and whose monocular corrected visual acuity was 1.0 or better. The near point of accommodation (NPA), NPC, and phoria were measured both under habitual conditions and under +1 D stimulation. The habitual NPC was 1.77 cm shorter than the NPC under +1 D stimulation, and the NPC of the emmetropes was shorter than that of the myopes. The difference between the stimulated and habitual NPC was larger in the emmetropic group. As the NPC increased, the AC/A ratio was elevated and near exophoria appeared. These results suggest that the change in NPC might substitute for the AC/A ratio as a reference variable for binocular vision. If combined and analyzed comparatively with other binocular visual evaluation tests, the change in NPC could be developed into a substitute evaluation test.
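As a concrete illustration of the gradient method this study applies (+1 D stimulation), the gradient AC/A ratio can be sketched as below; the function name and the sign convention (esophoria positive, exophoria negative, values in prism diopters) are illustrative assumptions, not taken from the paper:

```python
def gradient_ac_a(phoria_habitual, phoria_stimulated, lens_power_d=1.0):
    """Gradient AC/A ratio in prism diopters per diopter.

    A plus lens relaxes accommodation by `lens_power_d` diopters, so the
    ratio is the phoria change divided by that change in accommodation.
    Convention assumed here: esophoria positive, exophoria negative.
    """
    return (phoria_habitual - phoria_stimulated) / lens_power_d

# e.g. 2 prism diopters of exophoria habitually, 6 exo through +1.00 D:
# AC/A = (-2 - (-6)) / 1 = 4 prism diopters per diopter
```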

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong;Park, Jong-Il
    • Journal of Broadcast Engineering, v.19 no.6, pp.942-956, 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking is highly ambiguous because a hand has a large number of degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, performed in consecutive order. Assuming the hand to be planar, the method uses a planar hand model, which facilitates hand model regeneration; the regeneration step modifies the hand model to fit the current user's hand and improves the robustness and accuracy of the tracking results. The proposed method runs in real time and does not require GPU-based processing, so it can be applied to various platforms, including mobile devices such as Google Glass. Its effectiveness and performance are verified through various experiments.
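The consecutive-order scheme the abstract describes (palm pose, then finger yaw, then finger pitch) can be sketched as a simple pipeline; the estimator callables and DoF split shown here are hypothetical stand-ins for the paper's actual stages:

```python
def track_hand(frame, estimate_palm, estimate_finger_yaw, estimate_finger_pitch):
    """Step-by-step 21-DoF estimation: each later stage is conditioned
    on the results of the earlier ones, which reduces ambiguity by
    shrinking the search space at every step."""
    palm_pose = estimate_palm(frame)                      # 6 DoF: position + orientation
    yaw = estimate_finger_yaw(frame, palm_pose)           # 5 DoF: one yaw per finger
    pitch = estimate_finger_pitch(frame, palm_pose, yaw)  # 10 DoF: two pitches per finger
    return palm_pose, yaw, pitch
```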

Hybrid Stereoscopic Camera System (이종 카메라를 이용한 스테레오 카메라 시스템)

  • Shin, Hyoung-Chul;Kim, Sang-Hoon;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering, v.16 no.4, pp.602-613, 2011
  • In this paper, we propose a hybrid stereoscopic camera system that acquires and utilizes stereoscopic images from two different camera modules: a main-camera module and a sub-camera module. A hybrid stereoscopic camera can effectively reduce the price and size of a stereoscopic camera by using a relatively small and inexpensive sub-camera module, such as a mobile phone camera. Images from the two modules differ greatly in color, angle of view, scale, resolution, and so on. The proposed system therefore performs an efficient hybrid stereoscopic image registration algorithm that transforms the hybrid stereoscopic images into normal stereoscopic images based on camera geometry. Experimental results present the registered stereoscopic images and applications of the proposed system, demonstrating its performance and functionality.
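One geometric step in aligning such mismatched modules is cropping the wider-angle sub-camera image down to the main camera's field of view before resizing; a minimal pinhole-model sketch (an illustrative fragment, not the paper's full registration algorithm):

```python
import math

def crop_fraction(fov_main_deg, fov_sub_deg):
    """Fraction of the sub-camera image (per axis) covering the same
    angular field as the main camera, assuming pinhole optics and a
    shared optical axis. The central crop of this fraction is then
    resized to the main camera's resolution to equalize scale."""
    return (math.tan(math.radians(fov_main_deg) / 2)
            / math.tan(math.radians(fov_sub_deg) / 2))
```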

Study on Proximal Convergence/Accommodation(PC/A) Ratio by Comparison of Gradient AC/A Ratio and Calculated AC/A Ratio (Gradient AC/A비와 Calculated AC/A비의 비교에 의한 근접성 폭주비(PC/A)에 관한 연구)

  • Han, Gyeong-Ae;Sung, A-Young
    • Journal of Korean Ophthalmic Optics Society, v.9 no.2, pp.223-231, 2004
  • In most previous studies, the accommodative convergence to accommodative stimulus (AC/A) ratio was assessed by measuring the gradient AC/A ratio. This study deals with the proximal convergence/accommodation (PC/A) ratio, obtained by comparing the values of the gradient AC/A ratio and the calculated AC/A ratio, in order to promote clinical use of the AC/A ratio. The visual acuity of all 124 subjects had been corrected to at least 1.0 in either eye through their habitual refractive correction, and MEM dynamic retinoscopy was performed to estimate their accommodative response. The PC/A ratio was then calculated from the calculated AC/A ratio and the gradient AC/A ratio. This study showed that the difference between the mean calculated AC/A ratio and the mean gradient AC/A ratio in the subgroups may be attributable to proximal convergence. Consequently, further studies on proximity cues, including the PC/A ratio, could help promote clinical use of the AC/A ratio.
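The two ratios compared in this study follow standard clinical formulas; a minimal sketch under the usual conventions (phorias in prism diopters, eso positive / exo negative — conventions assumed here, not quoted from the paper):

```python
def calculated_ac_a(pd_cm, distance_phoria, near_phoria, near_demand_d):
    """Heterophoria-method ('calculated') AC/A ratio: interpupillary
    distance in cm plus the phoria shift per diopter of accommodative
    demand at near (e.g. 2.5 D for a 40 cm test distance)."""
    return pd_cm + (near_phoria - distance_phoria) / near_demand_d

def proximal_pc_a(calculated, gradient):
    """The calculated AC/A includes proximal convergence while the
    gradient AC/A does not, so their difference estimates the
    contribution of proximity cues (the PC/A component)."""
    return calculated - gradient
```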

Investigation of Instrument for Photostress Recovery Time Test in the Eye (눈의 광피로회복시간 검사를 위한 도구의 탐색)

  • Kim, Sang-Yeob;Moon, Byeong-Yeon;Cho, Hyun Gug
    • Journal of Korean Ophthalmic Optics Society, v.18 no.2, pp.193-196, 2013
  • Purpose: This study investigated whether instruments other than the direct ophthalmoscope are useful for the ocular photostress recovery time (PSRT) test. Methods: The PSRT test was performed with a direct ophthalmoscope, a transilluminator, a penlight, and a camera flash on 48 subjects (96 eyes, mean age 22.88 years) whose visual acuity was corrected to 0.8~1.2. Results: The mean PSRT measured with the direct ophthalmoscope, transilluminator, penlight, and camera flash was 27.90±18.40 sec, 23.73±12.99 sec, 21.31±15.57 sec, and 18.98±11.64 sec, respectively. No significant difference in PSRT was found between eyes corrected to 1.0 or better and eyes corrected to less than 1.0, nor between dominant and non-dominant eyes. Conclusions: Although the transilluminator came closest to the direct ophthalmoscope, the penlight and camera flash could also be useful instruments for the PSRT test.

Multi-focus 3D display of see-through Head-Mounted Display type (투시형 두부 장착형 디스플레이방식의 다초점 3차원 디스플레이)

  • Kim, Dong-Wook;Yoon, Seon-Kyu;Kim, Sung-Kyu
    • Journal of Broadcast Engineering, v.11 no.4 s.33, pp.441-447, 2006
  • A see-through HMD type 3D display offers the advantage of letting the user see virtual 3D data on a stereoscopic display simultaneously with real objects (MR, Mixed Reality). However, when the user views the stereoscopic display for a long time, eye fatigue occurs and the virtual data are defocused, because the focal distance of the virtual data is fixed. Failure of the eye's focus adjustment (accommodation) can be considered the main cause of this phenomenon. In this paper, we propose applying multi-focus in a see-through HMD as a solution to this problem. As a result, we confirmed under monocular conditions that, with multi-focus, the eye's focus adjustment coincides between real-world objects and the virtual data.

Attitudes Estimation for the Vision-based UAV using Optical Flow (광류를 이용한 영상기반 무인항공기의 자세 추정)

  • Jo, Seon-Yeong;Kim, Jong-Hun;Kim, Jung-Ho;Cho, Kyeum-Rae;Lee, Dae-Woo
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.38 no.4, pp.342-351, 2010
  • UAVs (Unmanned Aerial Vehicles) carry an INS (Inertial Navigation System) and also electro-optical equipment for their missions. This paper proposes a vision-based attitude estimation algorithm for UAVs using a Kalman filter and optical flow. Optical flow is acquired from video captured by a camera mounted on the UAV, and the UAV's attitude is measured from the optical flow. A Kalman filter is used to cope with the low reliability of these raw measurements and to estimate the UAV's attitude. The algorithm was verified through experiments using a rate table and real flight video. On the rate table, the error was within 2 degrees and the trend was similar to the AHRS measurements. On the real flight video, however, the maximum yaw error was 21 degrees and the maximum pitch error was 7.8 degrees.
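The role the Kalman filter plays here (smoothing a low-reliability attitude measurement derived from optical flow) can be illustrated with a minimal scalar filter; the noise variances below are assumed values for illustration, not the paper's:

```python
class ScalarKalman:
    """Minimal constant-state Kalman filter smoothing a noisy attitude
    angle (degrees); q and r are the process and measurement noise
    variances (assumed values, tuned per application)."""
    def __init__(self, q=0.01, r=4.0):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r
    def update(self, z):
        self.p += self.q                 # predict: variance grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward measurement
        self.p *= (1 - k)                # variance shrinks after update
        return self.x
```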

Estimating Surface Orientation Using Statistical Model From Texture Gradient in Monocular Vision (단안의 무늬 그래디언트로 부터 통계학적 모델을 이용한 면 방향 추정)

  • Chung, Sung-Chil;Choi, Yeon-Sung;Choi, Jong-Soo
    • Journal of the Korean Institute of Telematics and Electronics, v.26 no.7, pp.157-165, 1989
  • To recover three-dimensional information in shape from texture, the distorting effects of projection must be distinguished from the properties of the texture on which the distortion acts. In this paper, we present an approximated maximum likelihood estimation method that finds the orientation of the visible surface (the hemisphere of the Gaussian sphere) using local analysis of the texture. In addition, assuming an orthographic projection as the image formation system and a circle as the texel (texture element), we derive the surface orientation from the distribution of variations produced by orthographic projection of the tangent directions spaced regularly along the arc of a circle. We present the orientation parameters of the textured surface as slant and tilt in gradient space, and the surface normals of the resulting orientations as a needle map. The algorithm is applied to a geographic contour (an artificially generated Chejudo, i.e. Jeju Island) and to synthetic texture.
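The slant and tilt parameters used for the needle map correspond to a standard unit-normal formula; a brief sketch (the angle conventions below are the usual ones, assumed rather than quoted from the paper):

```python
import math

def normal_from_slant_tilt(slant_deg, tilt_deg):
    """Unit surface normal from slant (angle away from the viewing
    direction) and tilt (direction of steepest descent in the image
    plane) -- the gradient-space parameterization of orientation."""
    s, t = math.radians(slant_deg), math.radians(tilt_deg)
    return (math.sin(s) * math.cos(t),
            math.sin(s) * math.sin(t),
            math.cos(s))
```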

Development of A Multi-sensor Fusion-based Traffic Information Acquisition System with Robust to Environmental Changes using Mono Camera, Radar and Infrared Range Finder (환경변화에 강인한 단안카메라 레이더 적외선거리계 센서 융합 기반 교통정보 수집 시스템 개발)

  • Byun, Ki-hoon;Kim, Se-jin;Kwon, Jang-woo
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.16 no.2, pp.36-54, 2017
  • The purpose of this paper is to develop a multi-sensor fusion-based traffic information acquisition system that is robust to environmental changes. It combines the characteristics of each sensor and is more robust to environmental changes than a video detector alone. Moreover, it is unaffected by the time of day or night and has lower maintenance cost than an inductive-loop traffic detector. This is accomplished by synthesizing object tracking information from a radar, vehicle classification information from a video detector, and reliable object detections from an infrared range finder. To prove the effectiveness of the proposed system, experiments were conducted for 6 hours over 5 days, during daytime and early evening, on a pedestrian-accessible road. According to the experimental results, the system achieved 88.7% classification accuracy and a 95.5% vehicle detection rate. If the system's parameters are optimized to adapt to changes in the experimental environment, it is expected to contribute to the advancement of ITS.

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems, v.16 no.4, pp.381-390, 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability has advantages in terms of complementarity and cooperation, yielding better information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the input camera images, which serve as natural landmark points. With the laser structured light sensor, it utilizes geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments was performed, and the results are discussed in detail.
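For independent Gaussian estimates of the same quantity, reliability-weighted Bayesian fusion reduces to a precision-weighted average; a minimal one-dimensional sketch (the paper's reliability functions are experimentally predefined, not simple fixed variances as assumed here):

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the
    same quantity (e.g. vision-based and structured-light-based pose):
    the lower-variance, i.e. more reliable, sensor dominates, and the
    fused variance is smaller than either input variance."""
    w = var2 / (var1 + var2)
    mu = w * mu1 + (1 - w) * mu2
    var = var1 * var2 / (var1 + var2)
    return mu, var
```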