• Title/Summary/Keyword: Monocular


Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.727-732 / 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified using this eigen eye feature space. In our experiments, the discrimination rate for subjects close to the camera ranged from 94.67% (minimum) to 100% (maximum).
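As an editorial illustration only (not the authors' code), the minimal sketch below shows one way the eigen-space classification step described above could be realized: PCA is fit on pupil-geometry feature vectors from training images, and a query is assigned the pose label of its nearest training sample in that space.

```python
import numpy as np

def build_eigen_space(features, n_components):
    """features: (N, D) pupil-geometry vectors from training images."""
    mean = features.mean(axis=0)
    centered = features - mean
    # principal axes via SVD of the centered training matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # (n_components, D) eigen directions
    coords = centered @ basis.T          # training coordinates in eigen space
    return mean, basis, coords

def classify_pose(query, mean, basis, coords, pose_labels):
    """Project a query feature vector and return the nearest pose label."""
    q = (query - mean) @ basis.T
    idx = np.argmin(np.linalg.norm(coords - q, axis=1))
    return pose_labels[idx]
```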

A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.904-907 / 2014
  • In this work we present a robust and fast approach to estimating 3D vehicle pose that can provide results under specific traffic surveillance conditions. These conditions are a single fixed CCTV camera located relatively high above the ground, with its pitch axis parallel to the reference plane and the camera focal length assumed to be known. The benefit of our framework is that it does not require prior training or camera calibration and does not rely heavily on a 3D shape model, as most common techniques do. It also handles poorly shaped objects, since we focus on low-resolution surveillance scenes. The pose estimation task is cast as a PnP problem, which we solve with the well-known POSIT algorithm [1]. To use this algorithm, at least four non-coplanar point correspondences are required; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences, and estimated vehicle poses are shown on real image scenes.
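For orientation only, here is a hedged sketch of the 2D-to-3D pose step described above. The original work uses POSIT; OpenCV's solvePnP is used here as a stand-in solver, and the principal point is assumed to lie at the image center.

```python
import cv2
import numpy as np

def estimate_pose(model_pts_3d, image_pts_2d, focal, image_size):
    """model_pts_3d: (N,3) vehicle-model points; image_pts_2d: (N,2) pixels; N >= 4, non-coplanar."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    K = np.array([[focal, 0, cx],
                  [0, focal, cy],
                  [0, 0, 1]], dtype=np.float64)   # known focal length, assumed principal point
    dist = np.zeros(5)                            # lens distortion neglected
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_pts_3d, np.float64),
        np.asarray(image_pts_2d, np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)                    # rotation matrix of the vehicle
    return ok, R, tvec
```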

Augmented Reality system Using Depth-map (Depth-Map을 이용한 객체 증강 시스템)

  • Ban, Kyeong-Jin;Kim, Jong-Chan;Kim, Kyoung-Ok;Kim, Eung-Kon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.343-344 / 2010
  • Markerless augmented-reality systems that estimate a depth map for two-dimensional images typically rely on stereo vision and therefore require expensive equipment. We instead estimate a depth map from a single monocular image: objects are extracted and their depth is estimated relative to the vanishing point. To improve virtual immersion, augmented objects should be drawn at different sizes depending on their distance. In this paper, depth information is derived from the vanishing point of the acquired image, objects are augmented at sizes consistent with that depth, and immersion is further improved through inter-object interaction.
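As a hedged illustration of the idea (assumptions, not the paper's method): for objects standing on the ground plane, depth is inversely proportional to a row's pixel offset from the vanishing-point row, and apparent size falls off in proportion to that offset, which gives a simple rule for sizing augmented overlays.

```python
import numpy as np

def row_depth_map(height, width, vp_y, eps=1.0):
    """Relative depth per image row for a ground plane: depth is inversely
    proportional to the row's pixel offset from the vanishing-point row vp_y
    (known only up to scale)."""
    rows = np.arange(height, dtype=np.float32)
    depth = 1.0 / np.maximum(np.abs(rows - vp_y), eps)
    return np.tile(depth[:, None], (1, width))

def overlay_size(base_size_px, anchor_row, vp_y, reference_row):
    """Scale an augmented object by its row distance from the vanishing point,
    relative to a reference anchor row where it has base_size_px."""
    return max(1, int(base_size_px * abs(anchor_row - vp_y) / abs(reference_row - vp_y)))
```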

A Case of Systemic Lupus Erythematosus Presenting with Amaurosis Fugax without Antiphospholipid Antibodies Syndrome (항인지질항체증후군을 동반하지 않은 일과성 단안 실명으로 발현된 전신성 홍반성 루푸스 1 예)

  • Kim, Jung-Hyun;Hah, Jung-Sang;Park, Mee-Young;Lee, Se-Jin;Lee, Jun
    • Journal of Yeungnam Medical Science / v.23 no.1 / pp.113-117 / 2006
  • Systemic lupus erythematosus (SLE) is a chronic autoimmune disease that may affect many organ systems, including the nervous system. The immune response in patients with SLE can cause inflammation and other damage, resulting in significant injury to the arteries and tissues. A 48-year-old woman was admitted to the hospital because of transient monocular blindness. Magnetic resonance imaging and conventional angiography showed severe stenosis of the distal intracranial internal carotid artery. The patient was diagnosed with SLE, but antiphospholipid antibodies were negative. Amaurosis fugax has not previously been reported as an initial manifestation of SLE in Korea. We report a patient with a retinal transient ischemic attack as the first manifestation of SLE.

Trifocal versus Bifocal Diffractive Intraocular Lens Implantation after Cataract Surgery or Refractive Lens Exchange: a Meta-analysis

  • Yoon, Chang Ho;Shin, In-Soo;Kim, Mee Kum
    • Journal of Korean Medical Science / v.33 no.44 / pp.275.1-275.15 / 2018
  • Background: We compared the efficacy of trifocal and bifocal diffractive intraocular lens (IOL) implantation. Methods: Through PubMed, MEDLINE, EMBASE, and CENTRAL, we searched potentially relevant articles published from 1990 to 2018. Defocus curves and visual acuities (VAs) were measured as primary outcomes. Spectacle dependence, postoperative refraction, contrast sensitivity (CS), glare, and higher-order aberrations (HOAs) were measured as secondary outcomes. Effects were pooled using the random-effects method. Results: We included 11 clinical trials, with a total of 787 eyes (395 subjects). The trifocal IOL group showed better binocular corrected distance VA at defocus levels of -0.5, -1.0, -1.5, and -2.5 diopters than the bifocal IOL group (all $P \leq 0.004$). The trifocal IOL group also showed better monocular uncorrected distance and intermediate VAs (mean difference [MD], -0.04 logarithm of the minimum angle of resolution [logMAR]; 95% confidence interval [CI], -0.07, -0.01; P = 0.006 and MD, -0.07 logMAR; 95% CI, -0.13, -0.01; P = 0.03, respectively). Postoperative refraction, glare, CS, and HOAs did not differ significantly between groups. Conclusion: The overall findings indicate that trifocal diffractive IOL implantation is better than bifocal diffractive IOL implantation in intermediate VA, and provides similar or better distance and near VAs without any major deterioration in visual quality.
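As a purely illustrative aside (not the study's code or data), random-effects pooling of per-study mean differences, as named in the Methods above, is commonly done with the DerSimonian-Laird estimator; a minimal sketch follows.

```python
import numpy as np

def pool_random_effects(md, var):
    """DerSimonian-Laird random-effects pooling of per-study mean differences
    (md) with their variances (var); returns the pooled MD and a 95% CI."""
    md, var = np.asarray(md, float), np.asarray(var, float)
    w = 1.0 / var                               # fixed-effect weights
    md_fe = np.sum(w * md) / np.sum(w)          # fixed-effect estimate
    q = np.sum(w * (md - md_fe) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(md) - 1)) / c)    # between-study variance
    w_re = 1.0 / (var + tau2)                   # random-effects weights
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```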

Improved Object Recognition using Multi-view Camera for ADAS (ADAS용 다중화각 카메라를 이용한 객체 인식 향상)

  • Park, Dong-hun;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.573-579 / 2019
  • To achieve fully autonomous driving, perception of the surrounding environment must be superior to that of humans. The narrow-angle ($60^{\circ}$) and wide-angle ($120^{\circ}$) cameras used primarily in autonomous driving each have disadvantages that depend on the viewing angle. This paper uses a multi-angle object recognition system to overcome the respective disadvantages of wide- and narrow-angle cameras. The aspect ratios of data acquired with the wide- and narrow-angle cameras were analyzed to modify the SSD (Single Shot Detector) algorithm, and the acquired data were used for training to achieve higher performance than when using a single monocular camera alone.
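As a hedged sketch of the kind of modification described (the ratio values are placeholders, not the paper's measured statistics), SSD default (anchor) boxes can be generated from per-camera aspect-ratio sets so that the narrow- and wide-angle streams each match the object shapes seen in that camera.

```python
import numpy as np

def default_boxes(feature_size, scale, aspect_ratios):
    """Default boxes centered on a feature_size x feature_size grid; width and
    height follow the usual SSD rule w = s*sqrt(ar), h = s/sqrt(ar)."""
    boxes = []
    for i in range(feature_size):
        for j in range(feature_size):
            cx, cy = (j + 0.5) / feature_size, (i + 0.5) / feature_size
            for ar in aspect_ratios:
                w, h = scale * np.sqrt(ar), scale / np.sqrt(ar)
                boxes.append((cx, cy, w, h))
    return np.array(boxes)

# Hypothetical per-camera ratio sets (placeholders, not measured values).
narrow_cam_boxes = default_boxes(19, 0.2, aspect_ratios=(1.0, 2.0, 0.5))
wide_cam_boxes = default_boxes(19, 0.2, aspect_ratios=(1.0, 3.0, 1 / 3.0))
```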

Long Distance Vehicle Recognition and Tracking using Shadow (그림자를 이용한 원거리 차량 인식 및 추적)

  • Ahn, Young-Sun;Kwak, Seong-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.1 / pp.251-256 / 2019
  • This paper presents an algorithm for recognizing and tracking a vehicle at a distance using a monocular camera installed at the center of the windshield, for operating an autonomous vehicle in racing. The vehicle is detected using Haar features, and its size and position are determined by detecting the shadow at the bottom of the vehicle. The region around the recognized vehicle is set as the ROI (Region Of Interest), and the vehicle shadow within the ROI is found and tracked in the next frame. The position, relative speed, and direction of the vehicle are then predicted. Experimental results show that the vehicle is recognized with a recognition rate of over 90% at distances of more than 100 meters.
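A minimal sketch (assumptions, not the authors' implementation) of the detection-plus-shadow idea: a Haar cascade proposes vehicle boxes, and the dark shadow band in the lower part of each box refines the vehicle's bottom edge; the cascade file name is a placeholder for a vehicle-trained model.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # hypothetical trained model file

def detect_with_shadow(gray):
    """Return refined (x, y, w, h) vehicle boxes from a grayscale frame."""
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    results = []
    for (x, y, w, h) in candidates:
        under = gray[y + h // 2 : y + h, x : x + w]        # lower half of the box
        thresh = max(1, int(under.mean() - under.std()))   # shadow is darker than road
        shadow = (under < thresh).astype(np.uint8)
        rows = np.where(shadow.sum(axis=1) > 0.5 * w)[0]   # rows dominated by shadow
        if len(rows):
            bottom = y + h // 2 + int(rows.max())          # refined vehicle bottom
            results.append((x, y, w, bottom - y))
    return results
```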

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems / v.23 no.4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), transfer learning techniques, and monocular vision. The identification framework consists of a ship detection (coarse scale) module and a geometric parameter calculation (fine scale) module. For ship detection, the SSD, a deep learning algorithm, was employed and fine-tuned with ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour was created using morphological operations on the saturation channel in hue, saturation, and value color space. Furthermore, a local coordinate system was constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, localization, and velocity. The application of the proposed method to in situ video images, obtained from cameras set on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
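A hedged sketch of the fine-scale contour step described above (the exact thresholding is an assumption, since the abstract does not specify it): inside the detected region of interest, the HSV saturation channel is thresholded, cleaned with morphological opening and closing, and the largest external contour is taken as the ship outline.

```python
import cv2
import numpy as np

def ship_contour(roi_bgr):
    """Return the largest external contour found in the ROI's saturation channel."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]                                        # saturation channel
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```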

A Monocular Vision Based Technique for Estimating Direction of 3D Parallel Lines and Its Application to Measurement of Pallets (모노 비전 기반 3차원 평행직선의 방향 추정 기법 및 파렛트 측정 응용)

  • Kim, Minhwan;Byun, Sungmin;Kim, Jin
    • Journal of Korea Multimedia Society / v.21 no.11 / pp.1254-1262 / 2018
  • Many parallel lines appear in real-life scenes, and they are useful for analyzing the structure of objects and buildings. In this paper, a vision-based technique for estimating the three-dimensional direction of parallel lines is suggested; it uses a calibrated camera and is applicable to images captured from that camera. The correctness of the technique is described and discussed theoretically. The technique is well suited to measuring the orientation of a pallet in a warehouse, because a pair of parallel lines is easily detected on the front face of the pallet. It thereby enables a forklift with a well-calibrated camera to engage a pallet automatically, whether the pallet is on a storage rack or on the ground. The usefulness of the suggested technique for other applications is also discussed. We conducted experiments measuring a real commercial pallet at various orientations and distances and found that the technique works correctly and accurately.
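For reference, the standard geometry behind such a technique (a sketch under the assumption that two image lines from a pair of 3D-parallel edges are given): the lines intersect at a vanishing point v, and with calibrated intrinsics K the 3D direction of the parallel lines is proportional to K^{-1} v.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous image line through two pixel points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def parallel_line_direction(K, line1_pts, line2_pts):
    """3D unit direction (camera frame) of parallel lines whose images are
    given by two point pairs; K is the calibrated intrinsic matrix."""
    l1 = line_through(*line1_pts)
    l2 = line_through(*line2_pts)
    v = np.cross(l1, l2)                 # vanishing point (homogeneous)
    d = np.linalg.inv(K) @ v             # back-project to a 3D direction
    return d / np.linalg.norm(d)
```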

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system receives successive sensor measurements. To improve localization performance, the output of the CNN is therefore used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
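As a hedged, position-only sketch of the smoothing idea (the orientation handling and motion model are simplifications, not the authors' design): the CNN-regressed pose is treated as a noisy measurement, and particles are propagated with simple diffusion, re-weighted against that measurement, and resampled.

```python
import numpy as np

class PoseParticleFilter:
    def __init__(self, n=500, motion_std=0.05, meas_std=0.3):
        self.n, self.motion_std, self.meas_std = n, motion_std, meas_std
        self.particles = None

    def update(self, cnn_position):
        """Fuse one CNN-regressed (x, y, z) output and return a smoothed position."""
        z = np.asarray(cnn_position, float)
        if self.particles is None:
            self.particles = z + np.random.randn(self.n, 3) * self.meas_std
        # predict: diffuse particles with simple motion noise
        self.particles += np.random.randn(self.n, 3) * self.motion_std
        # weight by likelihood of the CNN measurement
        d2 = np.sum((self.particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / self.meas_std ** 2)
        w /= w.sum()
        # resample and report the posterior mean as the smoothed position
        idx = np.random.choice(self.n, self.n, p=w)
        self.particles = self.particles[idx]
        return self.particles.mean(axis=0)
```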