• Title/Abstract/Keyword: Monocular

237 search results

Real Time 3D Face Pose Discrimination Based On Active IR Illumination

  • 박호식;배철수
    • 한국정보통신학회논문지 / Vol. 8, No. 3 / pp.727-732 / 2004
  • In this paper, we propose a new method for 3D face pose discrimination using active infrared illumination. We present an algorithm that detects and tracks, effectively and in real time, the pupils that appear bright under infrared illumination. By detecting the geometric distortion of the pupils across different face orientations, an eigen eye-feature space was built from training data that relate 3D face pose to the geometric properties of the pupils, and the 3D face pose of an input query image could then be measured in real time using this eigenspace. In experiments, discrimination rates ranged from at least 94.67% up to 100% for subjects close to the camera.
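
The eigenspace idea in this abstract can be illustrated with a small Python sketch: project pupil-geometry feature vectors into a PCA-based "eigen eye" space and label a query by its nearest training pose. This is not the authors' code; the feature extraction from the IR images is assumed to happen elsewhere, and all function and variable names are illustrative.

```python
# Hedged sketch (not the authors' code): discriminate face pose by projecting
# pupil-geometry features into a PCA "eigen eye" space and picking the nearest
# training pose. Feature extraction from the IR images is assumed to be done
# elsewhere; all names here are illustrative.
import numpy as np

def build_eigen_eye_space(features, n_components=5):
    """features: (N, D) pupil-geometry vectors (e.g., inter-pupil distance,
    pupil ellipse ratios, intensity ratios) with known pose labels."""
    mean = features.mean(axis=0)
    centered = features - mean
    # PCA via SVD; rows of vt are the eigen eye-feature directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    return basis @ (x - mean)

def discriminate_pose(query, train_features, train_poses, mean, basis):
    """Return the pose label of the nearest training sample in eigenspace."""
    q = project(query, mean, basis)
    coords = (train_features - mean) @ basis.T
    idx = np.argmin(np.linalg.norm(coords - q, axis=1))
    return train_poses[idx]
```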

A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2014년도 춘계학술발표대회 / pp.904-907 / 2014
  • In this work we present a robust and fast approach to estimating 3D vehicle pose that can provide results under specific traffic-surveillance conditions. These constraints are a single fixed CCTV camera that is located relatively high above the ground, whose pitch axis is parallel to the reference plane, and whose focal length is assumed to be known. The benefit of our framework is that it requires neither prior training nor camera calibration and does not rely heavily on a 3D shape model, as most common techniques do. It also copes with poorly resolved object shapes, since we focus on low-resolution surveillance scenes. The pose estimation task is formulated as a PnP problem, which we solve with the well-known POSIT algorithm [1]. This algorithm requires correspondences for at least four non-coplanar points; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to video sequences in real time, and estimated vehicle poses are shown on real image scenes.
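
The PnP step described here can be sketched with OpenCV. The paper uses POSIT, which is only available in legacy OpenCV APIs, so cv2.solvePnP serves as a stand-in below; the vehicle model points, their image correspondences, and the camera intrinsics are placeholders, not values from the paper.

```python
# Hedged sketch of the PnP step (not the authors' code); cv2.solvePnP stands in
# for POSIT. Model points, image points, and intrinsics are placeholders.
import numpy as np
import cv2

# Six 3D points on a coarse vehicle model (metres); the four footprint corners
# are coplanar, and the two roof points make the set non-coplanar overall.
model_points = np.array([
    [0.0, 0.0, 0.0],   # front-left corner, ground level
    [1.8, 0.0, 0.0],   # front-right corner
    [0.0, 4.5, 0.0],   # rear-left corner
    [1.8, 4.5, 0.0],   # rear-right corner
    [0.3, 1.5, 1.4],   # front roof edge
    [0.3, 3.0, 1.4],   # rear roof edge
], dtype=np.float64)

# Corresponding pixel coordinates (illustrative; in the paper these come from
# model and scene geometry).
image_points = np.array([
    [322.0, 410.0], [388.0, 406.0], [301.0, 362.0],
    [352.0, 359.0], [330.0, 345.0], [318.0, 327.0],
], dtype=np.float64)

# Intrinsics with a known focal length and the principal point at image centre.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation of the vehicle frame w.r.t. camera
    print("rotation matrix:\n", R)
    print("translation (m):", tvec.ravel())
```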

Augmented Reality System Using Depth-Map

  • 반경진;김종찬;김경옥;김응곤
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2010년도 추계학술대회 / pp.343-344 / 2010
  • In markerless systems, estimating depth from a 2D image has required expensive equipment such as stereo vision rigs. To augment objects using depth estimated from a monocular image instead, we extract vanishing points and estimate relative depth. To achieve a convincing sense of immersion, virtual objects must be drawn at different sizes according to their distance. In this paper, we generate a vanishing point from the acquired image and use the resulting depth information to render augmented objects at different sizes, improving the mutual sense of immersion among the objects.

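The vanishing-point-based relative depth cue described in this abstract can be sketched as follows: detect strong line segments, intersect them to obtain a vanishing point, and scale a virtual object according to how close its anchor pixel lies to that point. This is an illustrative sketch, not the authors' pipeline, and the thresholds are arbitrary.

```python
# Hedged sketch (not the authors' pipeline): estimate a vanishing point from
# Hough lines and scale a virtual object by a relative-depth cue, so that
# objects placed nearer the vanishing point are drawn smaller.
import numpy as np
import cv2

def estimate_vanishing_point(gray):
    """Intersect the two strongest non-parallel Hough lines (very rough)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None or len(lines) < 2:
        return None
    (r1, t1), (r2, t2) = lines[0][0], lines[1][0]
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    b = np.array([r1, r2])
    if abs(np.linalg.det(A)) < 1e-6:      # nearly parallel image lines
        return None
    return np.linalg.solve(A, b)          # (x, y) of the vanishing point

def relative_scale(anchor_xy, vp_xy, image_height, base_scale=1.0):
    """Scale cue: the closer the anchor pixel is to the vanishing point
    (relative to the image height), the deeper and thus smaller the object."""
    d = np.linalg.norm(np.asarray(anchor_xy, float) - vp_xy)
    return base_scale * np.clip(d / image_height, 0.1, 1.0)
```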

A Case of Systemic Lupus Erythematosus Presenting with Amaurosis Fugax without Antiphospholipid Antibodies Syndrome

  • 김정현;하정상;박미영;이세진;이준
    • Journal of Yeungnam Medical Science / Vol. 23, No. 1 / pp.113-117 / 2006
  • Systemic lupus erythematosus (SLE) is a chronic autoimmune disease that may affect many organ systems, including the nervous system. The immune response in patients with SLE can cause inflammation and other damage that significantly injures arteries and tissues. A 48-year-old woman was admitted to the hospital because of transient monocular blindness. Magnetic resonance imaging and conventional angiography showed severe stenosis of the distal intracranial internal carotid artery. The patient was diagnosed with SLE, but antiphospholipid antibodies were negative. Amaurosis fugax has not previously been reported as an initial manifestation of SLE in Korea. We report a patient with a retinal transient ischemic attack as the first manifestation of SLE.


Trifocal versus Bifocal Diffractive Intraocular Lens Implantation after Cataract Surgery or Refractive Lens Exchange: a Meta-analysis

  • Yoon, Chang Ho;Shin, In-Soo;Kim, Mee Kum
    • Journal of Korean Medical Science / Vol. 33, No. 44 / pp.275.1-275.15 / 2018
  • Background: We compared the efficacy of trifocal and bifocal diffractive intraocular lens (IOL) implantation. Methods: Through PubMed, MEDLINE, EMBASE, and CENTRAL, we searched potentially relevant articles published from 1990 to 2018. Defocus curves and visual acuities (VAs) were measured as primary outcomes. Spectacle dependence, postoperative refraction, contrast sensitivity (CS), glare, and higher-order aberrations (HOAs) were measured as secondary outcomes. Effects were pooled using a random-effects model. Results: We included 11 clinical trials, with a total of 787 eyes (395 subjects). The trifocal IOL group showed better binocular corrected distance VA at defocus levels of -0.5, -1.0, -1.5, and -2.5 diopters than the bifocal IOL group (all P ≤ 0.004). The trifocal IOL group showed better monocular uncorrected distance and intermediate VAs (mean difference [MD], -0.04 logarithm of the minimum angle of resolution [logMAR]; 95% confidence interval [CI], -0.07, -0.01; P = 0.006 and MD, -0.07 logMAR; 95% CI, -0.13, -0.01; P = 0.03, respectively). Postoperative refraction, glare, CS, and HOAs were not significantly different between the groups. Conclusion: The overall findings indicate that trifocal diffractive IOL implantation is better than bifocal diffractive IOL implantation for intermediate VA and provides similar or better distance and near VAs without any major deterioration in visual quality.
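
As a worked illustration of the pooling mentioned in the Methods, the sketch below implements DerSimonian-Laird random-effects pooling of mean differences in Python. The effect sizes and variances are placeholders, not the trial data from this meta-analysis.

```python
# Hedged sketch of random-effects pooling of mean differences
# (DerSimonian-Laird), the kind of model the meta-analysis describes.
# The effect sizes and variances below are placeholders, not the study's data.
import numpy as np

def dersimonian_laird(effects, variances):
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w_fixed = 1.0 / v
    mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    # Cochran's Q and the between-study variance tau^2.
    q = np.sum(w_fixed * (effects - mu_fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights and the pooled estimate with a 95% CI.
    w = 1.0 / (v + tau2)
    mu = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mu, (mu - 1.96 * se, mu + 1.96 * se), tau2

# Placeholder per-trial mean differences (logMAR) and their variances.
md, ci, tau2 = dersimonian_laird([-0.05, -0.03, -0.06], [0.0004, 0.0003, 0.0005])
print(f"pooled MD = {md:.3f} logMAR, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```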

Improved Object Recognition using Multi-view Camera for ADAS

  • 박동훈;김학일
    • 방송공학회논문지 / Vol. 24, No. 4 / pp.573-579 / 2019
  • To reach fully autonomous driving, a vehicle's ability to perceive its surroundings must surpass that of a human. The 60° narrow-angle and 120° wide-angle cameras commonly used in autonomous driving each have drawbacks tied to their field of view. The goal of this paper is to overcome the respective drawbacks of the wide- and narrow-angle cameras by developing a deep neural network algorithm that recognizes objects more accurately over a wider forward region using a multi-FOV front-facing camera system. We modified the SSD (Single Shot Detector) algorithm based on an analysis of the aspect ratios of the data acquired from the wide- and narrow-angle cameras, and by training on these data we achieved higher performance than with a single monocular camera.
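
One plausible way to "modify SSD after analyzing aspect ratios", as the abstract describes, is to derive the detector's default-box (anchor) aspect ratios from the ground-truth box statistics of each camera. The sketch below shows that idea on synthetic boxes; it is an assumption about the adaptation, not the paper's actual modification.

```python
# Hedged sketch (not the paper's modification): derive SSD default-box aspect
# ratios from the ground-truth box statistics of each camera, one way to adapt
# the detector to wide- vs narrow-FOV imagery. The box data here is synthetic.
import numpy as np

def suggest_aspect_ratios(boxes, n_ratios=4):
    """boxes: (N, 4) array of [x1, y1, x2, y2] ground-truth boxes.
    Returns n_ratios representative width/height ratios (quantiles)."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    ratios = w / np.maximum(h, 1e-6)
    qs = np.linspace(0.1, 0.9, n_ratios)
    return np.round(np.quantile(ratios, qs), 2)

def synthetic_boxes(n, w_range, h_range, seed):
    """Stand-in for real annotations: random top-left corners plus sizes."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(0, 500, n)
    y1 = rng.uniform(0, 300, n)
    w = rng.uniform(*w_range, n)
    h = rng.uniform(*h_range, n)
    return np.stack([x1, y1, x1 + w, y1 + h], axis=1)

narrow_boxes = synthetic_boxes(500, (40, 160), (35, 120), seed=0)  # nearer, taller cars
wide_boxes = synthetic_boxes(500, (20, 90), (12, 45), seed=1)      # distant, flatter cars

print("narrow-FOV anchor ratios:", suggest_aspect_ratios(narrow_boxes))
print("wide-FOV anchor ratios:", suggest_aspect_ratios(wide_boxes))
```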

Long Distance Vehicle Recognition and Tracking using Shadow

  • 안영선;곽성우
    • 한국전자통신학회논문지 / Vol. 14, No. 1 / pp.251-256 / 2019
  • In this paper, we propose an algorithm that recognizes and tracks distant vehicles using a monocular camera mounted at the centre of the windshield, for operating an autonomous vehicle in a racing event. Vehicles are detected using Haar features, and the shadow under the vehicle is detected to determine the vehicle's size and position. A region of interest (ROI) is set around the recognized vehicle, and in subsequent frames the vehicle shadow inside the ROI is found and tracked. From this, the vehicle's position, relative speed, and direction of movement are predicted. Experimental results show that vehicles were recognized at distances over 100 m with a recognition rate above 90%.
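
A rough Python sketch of the detect-then-track-by-shadow idea follows: a Haar cascade proposes vehicle boxes, a dark band under each box is taken as the shadow, and subsequent frames re-detect only inside an ROI around the previous box. The cascade file name is a placeholder (OpenCV ships no vehicle cascade), and the shadow heuristic is illustrative rather than the paper's method.

```python
# Hedged sketch (not the paper's implementation): Haar-cascade vehicle
# detection followed by a crude under-vehicle shadow check inside an ROI.
# "cars.xml" is an assumed pretrained vehicle cascade, not an OpenCV asset.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier("cars.xml")  # placeholder cascade file

def detect_vehicles(gray):
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                    minSize=(16, 16))

def shadow_bottom(gray, box, dark_ratio=0.6):
    """Return the image row of the dark shadow band just below the detection
    box, or None. 'Dark' means below 60% of the box's mean intensity."""
    x, y, w, h = box
    strip = gray[y + h:y + h + max(4, h // 4), x:x + w]
    if strip.size == 0:
        return None
    dark = np.where(strip.mean(axis=1) < dark_ratio * gray[y:y + h, x:x + w].mean())[0]
    return (y + h + int(dark[0])) if len(dark) else None

def track_in_roi(gray, prev_box, margin=20):
    """Re-detect only inside an ROI around the previous box (simple tracking)."""
    x, y, w, h = prev_box
    x0, y0 = max(0, x - margin), max(0, y - margin)
    roi = gray[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
    hits = detect_vehicles(roi)
    return (hits[0] + [x0, y0, 0, 0]) if len(hits) else None
```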

Real-time geometry identification of moving ships by computer vision techniques in bridge area

  • Li, Shunlong;Guo, Yapeng;Xu, Yang;Li, Zhonglong
    • Smart Structures and Systems / Vol. 23, No. 4 / pp.359-371 / 2019
  • As part of a structural health monitoring system, the relative geometric relationship between a ship and a bridge has been recognized as important for bridge authorities and ship owners seeking to avoid ship-bridge collisions. This study proposes a novel computer vision method for the real-time geometric parameter identification of moving ships based on a single shot multibox detector (SSD), transfer learning techniques, and monocular vision. The identification framework consists of a ship detection module (coarse scale) and a geometric parameter calculation module (fine scale). For ship detection, the SSD, a deep learning algorithm, was employed and fine-tuned with ship image samples downloaded from the Internet to obtain rectangular regions of interest at the coarse scale. Subsequently, for the geometric parameter calculation, an accurate ship contour was created using morphological operations within the saturation channel of the hue, saturation, and value color space. Furthermore, a local coordinate system was constructed using a projective geometry transformation to calculate the geometric parameters of ships, such as width, length, height, location, and velocity. The application of the proposed method to in situ video images, obtained from cameras set on the girder of the Wuhan Yangtze River Bridge above the shipping channel, confirmed the efficiency, accuracy, and effectiveness of the proposed method.
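
The fine-scale contour step described above can be sketched with OpenCV: threshold the saturation channel of a detected ROI and clean it with morphological operations before extracting the largest contour. The thresholding choice (Otsu) and kernel sizes are illustrative assumptions, not the authors' parameters.

```python
# Hedged sketch of the fine-scale step described in the abstract: extract a
# ship contour from the saturation channel of an SSD-detected ROI using
# thresholding and morphological operations. Thresholds and kernel sizes are
# illustrative choices, not the authors' values.
import cv2
import numpy as np

def ship_contour(bgr_roi):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]
    # Otsu threshold separates the (usually more saturated) hull from water.
    _, mask = cv2.threshold(saturation, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small gaps
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)   # largest blob = ship outline

# Example use on a detected ROI (placeholder image):
roi = np.zeros((200, 400, 3), dtype=np.uint8)
contour = ship_contour(roi)
```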

A Monocular Vision Based Technique for Estimating Direction of 3D Parallel Lines and Its Application to Measurement of Pallets

  • 김민환;변성민;김진
    • 한국멀티미디어학회논문지 / Vol. 21, No. 11 / pp.1254-1262 / 2018
  • Parallel lines appear frequently in everyday scenes and are useful for analyzing the structure of objects and buildings. In this paper, a vision-based technique for estimating the three-dimensional direction of parallel lines is proposed; it uses a calibrated camera and is applicable to images captured from that camera. The correctness of the technique is described and discussed theoretically. The technique is well suited to measuring the orientation of a pallet in a warehouse, because a pair of parallel lines is readily detected on the front face of the pallet. It thereby enables a forklift equipped with a well-calibrated camera to engage a pallet automatically, whether the pallet rests on a storage rack or on the ground. The usefulness of the technique for other applications is also discussed. We conducted experiments measuring a real commercial pallet at various orientations and distances and found that the technique works correctly and accurately.
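
The geometry underlying the technique can be shown compactly: for a calibrated camera with intrinsic matrix K, two images of parallel 3D lines intersect at a vanishing point v, and the lines' 3D direction is proportional to K⁻¹ applied to v in homogeneous form. The sketch below is a minimal illustration of that relation, not the paper's full algorithm; the intrinsics and pixel coordinates are made up.

```python
# Hedged sketch of the underlying geometry (not the paper's algorithm): the 3D
# direction d of a family of parallel lines follows from their vanishing point
# v via d ~ K^-1 [vx, vy, 1]^T for a calibrated camera with intrinsics K.
import numpy as np

def line_through(p, q):
    """Homogeneous image line through two pixels."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def parallel_line_direction(K, line1_pts, line2_pts):
    """line*_pts: two (x, y) pixels on each of two images of parallel 3D lines.
    Returns a unit 3D direction (sign-ambiguous) in the camera frame."""
    l1 = line_through(*line1_pts)
    l2 = line_through(*line2_pts)
    v = np.cross(l1, l2)               # vanishing point (homogeneous)
    d = np.linalg.inv(K) @ v           # back-project through the intrinsics
    return d / np.linalg.norm(d)

# Illustrative intrinsics and two edges of a pallet front face.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
d = parallel_line_direction(K,
                            [(100, 300), (500, 280)],   # top edge pixels
                            [(110, 360), (510, 335)])   # bottom edge pixels
print("3D direction of the pallet edges (camera frame):", d)
```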

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System

  • 조형기;조해민;이성원;김은태
    • 로봇학회논문지 / Vol. 14, No. 2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point-cloud information. We generate a new input that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
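
The fusion step, in which the CNN's regressed pose feeds a particle filter that smooths the trajectory, can be sketched as a bootstrap filter. For brevity the state is reduced to (x, y, yaw) rather than full 6-DOF, and all noise parameters and the toy measurements are illustrative assumptions.

```python
# Hedged sketch of the smoothing step (not the authors' filter): a bootstrap
# particle filter that uses the CNN-regressed pose as its measurement. The
# state is reduced to (x, y, yaw); noise values and inputs are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def predict(particles, odom, motion_noise=(0.05, 0.05, 0.01)):
    """Propagate particles with relative odometry plus Gaussian noise."""
    return particles + odom + rng.normal(0.0, motion_noise, particles.shape)

def update(particles, cnn_pose, meas_noise=(0.3, 0.3, 0.1)):
    """Weight particles by the Gaussian likelihood of the CNN pose
    measurement, then resample (multinomial resampling for brevity)."""
    err = (particles - cnn_pose) / meas_noise
    weights = np.exp(-0.5 * np.sum(err ** 2, axis=1))
    s = weights.sum()
    weights = weights / s if s > 0 else np.full(len(particles), 1.0 / len(particles))
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy run: 500 particles around an initial CNN pose, one predict/update step.
particles = rng.normal([0.0, 0.0, 0.0], [0.5, 0.5, 0.2], size=(500, 3))
particles = predict(particles, odom=np.array([0.1, 0.0, 0.02]))
particles = update(particles, cnn_pose=np.array([0.12, 0.01, 0.02]))
print("smoothed pose estimate:", particles.mean(axis=0))
```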