• Title/Abstract/Keywords: Omnidirectional Camera

37 search results

고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템 (Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera)

  • 라승탁;이선구;이승호
    • 전기전자학회논문지 / Vol. 21, No. 4 / pp. 412-415 / 2017
  • This paper proposes a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar segmentation unit for the 360° circular image and a multi license plate recognition unit. The planar segmentation unit acquires the circular image from the high-resolution 360° omnidirectional IP camera, segments it, converts the segments into planar images, and outputs planar images with improved quality after pixel correction using interpolation, color correction, and edge correction. The multi license plate recognition unit extracts candidate plate regions from the planar images, normalizes and restores the candidate regions, and then recognizes the digits and characters of multiple plates using a neural network. To evaluate the proposed system, experiments were conducted jointly with a company specializing in intelligent parking control systems, and a high plate recognition rate of 97.8% was confirmed.
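
The circular-to-planar conversion step described in this abstract can be illustrated with a polar-to-Cartesian unwarping. The sketch below is not the authors' implementation: the image is a plain list-of-lists grayscale frame, all geometry parameters are invented, and nearest-neighbour sampling stands in for the interpolation-based pixel correction.

```python
import math

def unwrap_circular(img, cx, cy, r_in, r_out, out_w, out_h):
    """Map a donut-shaped region of a circular fisheye frame onto a
    flat panoramic strip (nearest-neighbour sampling for brevity)."""
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # radius grows from the inner ring to the outer ring
        r = r_in + (r_out - r_in) * y / max(out_h - 1, 1)
        for x in range(out_w):
            theta = 2 * math.pi * x / out_w  # column -> azimuth angle
            sx = int(round(cx + r * math.cos(theta)))
            sy = int(round(cy + r * math.sin(theta)))
            if 0 <= sy < len(img) and 0 <= sx < len(img[0]):
                out[y][x] = img[sy][sx]
    return out
```

A production version would replace the nearest-neighbour lookup with bilinear interpolation and add the color and edge correction passes the abstract lists.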

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • 한국측량학회지 / Vol. 36, No. 5 / pp. 319-333 / 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve navigation indoors, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current methods in automation need expensive sensors or datasets that are difficult and expensive to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of Shooting Point is defined to accommodate the usage of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces are then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
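
The node-relation graph described here (spaces as nodes, with separate adjacency and connectivity relations, as IndoorGML distinguishes them) can be sketched in a few lines. The room identifiers and door list below are invented for illustration; the paper derives these relations from CAD data rather than hand-written pairs.

```python
def build_node_relation_graph(shared_walls, doors):
    """shared_walls: (a, b) pairs of spaces with a common boundary.
    doors: the subset of those pairs through which one can actually pass.
    Returns the two IndoorGML-style edge sets: 'adjacency' (shared
    boundary) and 'connectivity' (passable boundary)."""
    adjacency = {frozenset(p) for p in shared_walls}
    # a connectivity edge only makes sense between adjacent spaces
    connectivity = {frozenset(p) for p in doors if frozenset(p) in adjacency}
    return {"adjacency": adjacency, "connectivity": connectivity}
```

Two rooms sharing a solid wall are adjacent but not connected, which is exactly the distinction a navigation network needs for evacuation routing.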

전 방향 카메라 영상에서 사람의 얼굴 위치검출 방법 (Head Position Detection Using Omnidirectional Camera)

  • 배광혁;박강령;김재희
    • 대한전자공학회 학술대회논문집 / 2007 Summer Conference Proceedings / pp. 283-284 / 2007
  • This paper proposes a method for real-time segmentation of moving regions and detection of head position in a single omnidirectional camera image. Segmentation of moving regions uses a mixture-of-Gaussians (MOG) background model and a shadow detection method. A circular constraint is proposed for detecting the head position.
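
The background-modeling step can be illustrated with a per-pixel Gaussian model. The code below is a deliberately simplified single-Gaussian stand-in for the mixture-of-Gaussians model the paper uses, operating on list-of-lists grayscale frames; all thresholds are illustrative.

```python
class GaussianBackground:
    """Per-pixel single-Gaussian background model: a simplified
    stand-in for the mixture-of-Gaussians (MOG) approach."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [[float(v) for v in row] for row in first_frame]
        self.var = [[25.0] * len(first_frame[0]) for _ in first_frame]
        self.alpha, self.k = alpha, k  # learning rate, match threshold

    def apply(self, frame):
        """Return a binary foreground mask, then update the model."""
        mask = []
        for y, row in enumerate(frame):
            out_row = []
            for x, v in enumerate(row):
                d = v - self.mean[y][x]
                fg = d * d > (self.k ** 2) * self.var[y][x]
                out_row.append(1 if fg else 0)
                if not fg:  # adapt statistics only where the scene is static
                    self.mean[y][x] += self.alpha * d
                    self.var[y][x] += self.alpha * (d * d - self.var[y][x])
            mask.append(out_row)
        return mask
```

A full MOG keeps several weighted Gaussians per pixel so it can absorb bimodal backgrounds (e.g. flickering lights), and the shadow step would additionally suppress pixels that darken without changing chromaticity.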


어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘 (Localization using Ego Motion based on Fisheye Warping Image)

  • 최윤원;최경식;최정원;이석규
    • 제어로봇시스템학회논문지
    • /
    • 제20권1호
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera with a reflecting mirror or by stitching multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through fisheye lenses mounted facing downward. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot position and heading with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the positions and angles it produced with measurements from a global vision localization system.
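
The heading-estimation step can be illustrated with a least-squares rotation fit between two sets of tracked feature points. The correspondences below are synthetic; a faithful implementation would obtain them from Lucas-Kanade optical flow and use RANSAC to reject outlier vectors, as the abstract describes.

```python
import math

def heading_change(pts_prev, pts_curr, center=(0.0, 0.0)):
    """Estimate the rotation angle (radians) about `center` that best
    maps pts_prev onto pts_curr, in the least-squares sense."""
    cx, cy = center
    num = den = 0.0
    for (x1, y1), (x2, y2) in zip(pts_prev, pts_curr):
        x1, y1, x2, y2 = x1 - cx, y1 - cy, x2 - cx, y2 - cy
        num += x1 * y2 - y1 * x2   # cross products -> sine component
        den += x1 * x2 + y1 * y2   # dot products   -> cosine component
    return math.atan2(num, den)
```

With a downward-facing fisheye, pure robot rotation appears as an image-plane rotation about the optical center, which is why this 2D fit recovers the heading change between frames.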

360도 VR 촬영을 위한 무인 비행체용 카메라 짐벌 시스템 개발에 관한 연구 (A Study on the Development of Camera Gimbal System for Unmanned Flight Vehicle with VR 360 Degree Omnidirectional Photographing)

  • 정념;김상훈
    • 한국전자통신학회논문지
    • /
    • 제11권8호
    • /
    • pp.767-772
    • /
    • 2016
  • This paper presents a camera gimbal system, mounted on an unmanned aerial vehicle, for shooting 360-degree VR video. In particular, gyro technology keeps the camera attitude fixed no matter which direction the vehicle rotates, so that image shake is minimized. On this basis, we developed a camera gimbal system for unmanned flight vehicles that enables stable 360-degree omnidirectional VR shooting.
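
The stabilization idea, integrating the vehicle's gyro rate and counter-rotating the camera so its world attitude stays constant, can be sketched for a single axis. This is an idealized simulation (instantaneous motor response, one yaw axis only), not the paper's control design.

```python
def stabilize(vehicle_rates_dps, dt=0.01):
    """Simulate a yaw-axis gimbal: integrate vehicle gyro rates and
    counter-rotate the camera so its world heading stays constant.
    Returns the camera's world heading at each tick (ideally zero)."""
    vehicle_deg = 0.0
    history = []
    for rate in vehicle_rates_dps:
        vehicle_deg += rate * dt       # vehicle heading from gyro integration
        gimbal_deg = -vehicle_deg      # ideal counter-rotation command
        history.append(vehicle_deg + gimbal_deg)  # camera heading in world frame
    return history
```

A real gimbal must also handle motor lag and gyro drift, which is where the feedback control in the developed system comes in.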

전방향 카메라를 사용한 원거리 추정을 위한 파라미터 추출 (Camera Parameter Extraction For Long Distance Estimation Using Omnidirectional Camera)

  • 이강산;전주일;강현수
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 2009년도 추계학술대회
    • /
    • pp.227-230
    • /
    • 2009
  • This paper describes a calibration method for omnidirectional cameras, which must be performed before measuring distance with an omnidirectional stereo camera. When calibrating an omnidirectional stereo camera, calibrating the two cameras independently, or calibrating them jointly when the baseline is small, is possible with various existing methods. However, measuring long distances with an omnidirectional stereo camera requires a sufficiently large baseline, and with a large baseline it is very difficult to calibrate the two cameras simultaneously, because the test pattern captured for calibration appears very small in at least one of the two cameras. This paper therefore proposes a method for calibrating two omnidirectional cameras with a large baseline and validates it experimentally.
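
Why long-range measurement forces a large baseline follows directly from stereo triangulation. The toy calculation below uses the generic rectified-pinhole formula Z = fB/d rather than the paper's omnidirectional model, and the numbers are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

def depth_error_per_pixel(focal_px, baseline_m, depth_m):
    """Depth change caused by a one-pixel disparity error at range Z.
    A wider baseline shrinks this error, which is why long-range
    stereo needs the large (hard-to-calibrate) baseline the paper
    addresses."""
    d = focal_px * baseline_m / depth_m          # disparity at that range
    return depth_m - depth_from_disparity(focal_px, baseline_m, d + 1.0)
```

For example, at 50 m with a 700 px focal length, quadrupling the baseline cuts the per-pixel depth error by roughly the same factor.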


사용자 머리 움직임 속도와 가상 카메라 움직임 속도 간 차이에 따른 VR 멀미 측정 (Virtual Reality Sickness Assessment based on Difference between Head Movement Velocity and Virtual Camera Motion Velocity)

  • 김동언;정용주
    • 한국멀티미디어학회논문지
    • /
    • 제22권1호
    • /
    • pp.110-116
    • /
    • 2019
  • Virtual reality (VR) sickness can strongly affect the viewing quality of VR 3D content. In particular, watching 3D content on a head-mounted display (HMD) can cause severe visual discomfort. Despite the importance of assessing VR sickness, most recent studies have focused on identifying its causes. In this paper, we subjectively measure the level of VR sickness induced while viewing omnidirectional 3D graphics content in an HMD environment. In addition, we propose an objective assessment model that estimates the level of induced VR sickness by calculating the difference between head movement velocity and global camera motion velocity.
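
The core quantity of the proposed model, the mismatch between head movement velocity and camera motion velocity, can be sketched as below. The mean-absolute-difference aggregation here is an assumption for illustration; the paper's exact model is not reproduced.

```python
def velocity_mismatch(head_velocities, camera_velocities):
    """Mean absolute difference between head movement velocity and
    global camera motion velocity over a viewing session. Under the
    sensory-conflict view, a larger mismatch predicts more discomfort."""
    assert len(head_velocities) == len(camera_velocities)
    diffs = [abs(h - c) for h, c in zip(head_velocities, camera_velocities)]
    return sum(diffs) / len(diffs)
```

When the virtual camera tracks the head perfectly the score is zero, matching the intuition that self-consistent motion is comfortable.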

UAV-UGV의 협업제어를 위한 향상된 Target Tracking에 관한 연구 (Study on the Improved Target Tracking for the Collaborative Control of the UAV-UGV)

  • 최재영;김성관
    • 제어로봇시스템학회논문지
    • /
    • 제19권5호
    • /
    • pp.450-456
    • /
    • 2013
  • This paper suggests an improved target tracking method for the collaboration of a quadrotor-type UAV (Unmanned Aerial Vehicle) and an omnidirectional UGV (Unmanned Ground Vehicle). If the UAV shakes or the UGV moves rapidly, the existing method loses the tracking target. To solve this problem, we propose an algorithm that can continue tracking after the target is lost. The proposed algorithm stores the landmark's motion vector, and if the target is lost, a control signal is input so that the vehicle keeps moving in the direction in which the landmark left the view. Prior to the experiment, proportional-integral control was applied to the four motors to calibrate the heading of the omnidirectional mobile robot. The landmark on the UGV was recognized by the camera attached to the UAV, and the target was tracked using proportional-integral-derivative control. Finally, the performance of the target tracking controller and the proposed algorithm was evaluated experimentally.
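
The control structure described, PID tracking of the landmark plus falling back to a stored motion vector when the target is lost, can be sketched as follows. The gains and the `track` helper are illustrative, not the paper's tuned controller.

```python
class PID:
    """Textbook PID controller of the kind used to drive the landmark's
    pixel error toward zero (gains here are arbitrary examples)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def track(controller, target_error_px, last_vector, dt=0.1):
    """If the landmark is visible, output the PID correction; if it is
    lost (None), keep commanding motion along the stored landmark
    vector, mirroring the proposed recovery behaviour."""
    if target_error_px is None:
        return last_vector
    return controller.step(target_error_px, dt)
```

Storing the last landmark vector is what lets the tracker re-acquire a target that briefly leaves the frame during UAV shake.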

로봇 응용을 위한 협력 및 결합 비전 시스템 (Mixing Collaborative and Hybrid Vision Devices for Robotic Applications)

  • 바쟝 정샬;김성흠;최동걸;이준영;권인소
    • 로봇학회논문지
    • /
    • 제6권3호
    • /
    • pp.210-219
    • /
    • 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors in order to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two new mixed systems. The first is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera and a pan/tilt stereo-perspective camera, and apply it in a real environment. Compared to previous systems, the two proposed systems maintain both high speed and high resolution through collaborative moving cameras, and cover an enormous search space through the hybrid configuration. Experimental results show the effectiveness of the mixed collaborative and hybrid systems.

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner)

  • 최윤원;최정원;이석규
    • 제어로봇시스템학회논문지
    • /
    • 제21권7호
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omnidirectional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and computing depth for omnidirectional images is slow. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner separated from the camera by a constant distance. We calculate fusion points from the planar coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omnidirectional image sensor, which acquires the surrounding view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
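
The fusion-point idea, combining a 2D laser hit on the floor plane with the obstacle outline seen in the image, can be illustrated with simplified geometry. This sketch assumes the camera sits directly above the laser plane and reduces the image observation to a single elevation angle; it is not the paper's calibration or fusion model.

```python
import math

def fusion_point(laser_x, laser_y, outline_elevation_rad):
    """Combine a 2D laser return (obstacle position on the floor plane)
    with the elevation angle of the obstacle's top edge, as seen in the
    omnidirectional image, to produce one 3D map point."""
    ground_range = math.hypot(laser_x, laser_y)      # distance along the floor
    z = ground_range * math.tan(outline_elevation_rad)  # estimated height
    return (laser_x, laser_y, z)
```

The laser alone gives a flat outline of obstacles; lifting each return by the image-derived height is what turns the 2D scan into the 3D map the paper builds.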