• Title/Summary/Keyword: Omnidirectional Camera


Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak; Lee, Sun-Gu; Lee, Seung-Ho
    • Journal of IKEEE / v.21 no.4 / pp.412-415 / 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar-division part for the 360° circular image and a multi license plate recognition part. The planar-division part converts the 360° circular image into quality-enhanced planar images through circular image acquisition, circular image segmentation, conversion to a planar image, pixel correction using color interpolation, color correction, and edge correction in the high-resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes multiple plates in the planar images through extraction of multi-plate candidate regions, normalization and restoration of the candidate regions, and recognition of the plate numbers and characters using a neural network. To evaluate the proposed system, we conducted experiments with a specialist operating an intelligent parking control system, and a high plate recognition rate of 97.8% was confirmed.
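The circular-to-planar conversion described above can be sketched as a polar unwrap: each column of the output panorama samples one azimuth ray of the circular image. A minimal numpy sketch with nearest-neighbor sampling (the center, radii, and output resolution are illustrative assumptions, not values from the paper):

```python
import numpy as np

def unwrap_circular(img, center, r_min, r_max, out_w, out_h):
    """Unwrap a circular omnidirectional image into a planar panorama.

    Each output column corresponds to one azimuth angle; each output row
    corresponds to one radius between r_min and r_max (nearest-neighbor).
    """
    h, w = img.shape[:2]
    cx, cy = center
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)  # azimuth per column
    radius = np.linspace(r_max, r_min, out_h)                 # radius per row
    # Sample grid: rows sweep radius, columns sweep angle.
    xs = (cx + radius[:, None] * np.cos(theta[None, :])).round().astype(int)
    ys = (cy + radius[:, None] * np.sin(theta[None, :])).round().astype(int)
    xs = np.clip(xs, 0, w - 1)
    ys = np.clip(ys, 0, h - 1)
    return img[ys, xs]

# Tiny synthetic example: a 100x100 image standing in for the circular capture.
circ = np.arange(100 * 100, dtype=np.float32).reshape(100, 100)
pano = unwrap_circular(circ, center=(50, 50), r_min=10, r_max=45,
                       out_w=180, out_h=36)
print(pano.shape)  # (36, 180)
```

A production pipeline would follow this with the interpolation and color/edge-correction steps the abstract lists; the sketch shows only the geometric remapping.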

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard; Lee, Jiyeong; Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.319-333 / 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current methods of automation need expensive sensors or datasets that are difficult and expensive to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces were then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
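The node-relation graph described above can be sketched with a plain adjacency structure: indoor spaces become nodes, and their topological relations become labeled edges. A minimal sketch (the room names and relations are hypothetical, not from the study area):

```python
# Build a simple node-relation graph: nodes are indoor spaces,
# edges carry the topological relation (adjacency vs. connectivity).
edges = [
    ("Room101", "Corridor", "connectivity"),  # a door: navigable
    ("Room102", "Corridor", "connectivity"),
    ("Room101", "Room102", "adjacency"),      # shared wall: not navigable
]

graph = {}
for a, b, rel in edges:
    graph.setdefault(a, []).append((b, rel))
    graph.setdefault(b, []).append((a, rel))

def navigable_neighbors(node):
    """Only connectivity edges support pedestrian movement."""
    return [b for b, rel in graph.get(node, []) if rel == "connectivity"]

print(navigable_neighbors("Corridor"))  # ['Room101', 'Room102']
```

IndoorGML itself encodes this structure in XML; the dict here only illustrates the adjacency/connectivity distinction that the generated node-relation graphs represent.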

Head Position Detection Using Omnidirectional Camera (전 방향 카메라 영상에서 사람의 얼굴 위치검출 방법)

  • Bae, Kwang-Hyuk; Park, Kang-Ryoung; Kim, Jai-Hie
    • Proceedings of the IEEK Conference / 2007.07a / pp.283-284 / 2007
  • This paper proposes a method for real-time segmentation of moving regions and detection of head position using a single omnidirectional camera. Segmentation of moving regions uses a background modeling method based on a mixture of Gaussians (MOG) together with a shadow detection method. A circular constraint is proposed for detecting the head position.

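The background-modeling step can be illustrated with a simplified per-pixel model. The sketch below uses a single running Gaussian per pixel rather than the full mixture of Gaussians the paper employs, and omits shadow detection; the learning rate and threshold are illustrative assumptions:

```python
import numpy as np

class RunningGaussianBG:
    """Per-pixel background model: running mean/variance; a pixel is
    foreground if it deviates by more than k standard deviations
    (single-Gaussian simplification of the MOG model)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        d = frame.astype(np.float64) - self.mean
        fg = d ** 2 > (self.k ** 2) * self.var        # foreground mask
        bg = ~fg
        # Update the model only where the pixel matched the background.
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return fg

model = RunningGaussianBG(np.full((20, 20), 100.0))
frame = np.full((20, 20), 100.0)
frame[5:10, 5:10] = 220.0                             # a "moving" region
mask = model.apply(frame)
print(mask.sum())  # 25 foreground pixels
```

The real MOG model keeps several Gaussians per pixel so it can absorb multimodal backgrounds (e.g. flickering lights); the update rule per matched component has the same running-average form shown here.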

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. The omnidirectional image sensor is desirable for real-time view-based recognition by a robot because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to extract information from the original image. The core of the proposed algorithm can be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through fisheye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow in the preprocessed image. Third, we estimate the robot position and angle using an ego-motion method based on the directions of the vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing the positions and angles it produced in experiments with measurements from a Global Vision Localization System.
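The third step, recovering the robot's rotation from the flow vectors, can be sketched with a least-squares fit. For a pure in-plane rotation by a small angle w about the image center, the flow at point (x, y) is approximately w·(-y, x), so w follows directly. This is a simplified stand-in for the paper's ego-motion step (no RANSAC, no vanishing point):

```python
import numpy as np

def estimate_rotation(points, flows):
    """Least-squares in-plane rotation rate from sparse optical-flow vectors.

    Model: flow(x, y) ~= w * (-y, x) for a small rotation w about the
    image center, giving w = sum(x*vy - y*vx) / sum(x^2 + y^2).
    """
    x, y = points[:, 0], points[:, 1]
    vx, vy = flows[:, 0], flows[:, 1]
    return np.sum(x * vy - y * vx) / np.sum(x ** 2 + y ** 2)

# Synthetic check: feature points rotating at w = 0.02 rad/frame.
rng = np.random.default_rng(1)
pts = rng.uniform(-100, 100, size=(50, 2))
w_true = 0.02
flow = w_true * np.stack([-pts[:, 1], pts[:, 0]], axis=1)
w_est = estimate_rotation(pts, flow)
print(round(w_est, 4))  # 0.02
```

In practice the flow vectors come from a tracker such as pyramidal Lucas-Kanade and contain outliers, which is why the paper applies RANSAC before the estimate.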

A Study on the Development of Camera Gimbal System for Unmanned Flight Vehicle with VR 360 Degree Omnidirectional Photographing (360도 VR 촬영을 위한 무인 비행체용 카메라 짐벌 시스템 개발에 관한 연구)

  • Jung, Nyum; Kim, Sang-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.8 / pp.767-772 / 2016
  • The purpose of this paper is to develop a gimbal system installed on a UFV (unmanned flight vehicle) for 360-degree VR video. In particular, even if the UFV rotates in any direction, the camera attitude is held fixed using the gyro sensor to minimize shaking, so the camera system remains stable for taking 360° panoramic VR images.
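The stabilization idea, holding the camera attitude fixed while the airframe rotates, can be sketched as a counter-rotation loop: the gimbal is driven toward the angle that cancels the gyro-measured body rotation. A minimal one-axis sketch with proportional control only (the gain and rotation rate are illustrative, not from the paper):

```python
def stabilize(body_angles, target=0.0, kp=0.8):
    """Simple gimbal loop: each step, rotate the gimbal toward the angle
    that cancels the body rotation, so camera = body + gimbal stays
    near the target attitude (one axis, proportional control only)."""
    gimbal = 0.0
    camera_trace = []
    for body in body_angles:
        camera = body + gimbal            # camera attitude in the world frame
        gimbal += kp * (target - camera)  # drive the camera toward the target
        camera_trace.append(body + gimbal)
    return camera_trace

# Airframe yaws steadily by 2 degrees per step; camera should stay near 0.
trace = stabilize([2.0 * i for i in range(30)])
print(abs(trace[-1]) < 1.0)  # True: residual error stays small
```

A real gimbal controller would add integral and derivative terms (and act on gyro rates rather than angles) to remove the small steady-state lag this proportional-only loop leaves.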

Camera Parameter Extraction For Long Distance Estimation Using Omnidirectional Camera (전방향 카메라를 사용한 원거리 추정을 위한 파라미터 추출)

  • Lee, Kang-San; Jeon, Joo-Il; Kang, Hyun-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.11a / pp.227-230 / 2009
  • This paper describes a calibration method for omnidirectional cameras, which must be performed before distance measurement with an omnidirectional stereo camera. In omnidirectional stereo calibration, calibrating the two omnidirectional cameras independently, or calibrating them jointly when the baseline between them is small, is possible with various existing methods. However, measuring long distances with an omnidirectional stereo camera requires a sufficiently large baseline, and with such a baseline it is very difficult to calibrate the two cameras simultaneously, because the calibration test pattern appears very small in at least one of the two omnidirectional cameras. This paper therefore proposes a method for calibrating two omnidirectional cameras with a large baseline and validates it through experiments.

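The core difficulty, that a large baseline makes the shared test pattern tiny in at least one camera, can be quantified through the pattern's apparent angular size. A quick illustrative calculation (the pattern size, baseline, and angular resolution are assumed values, not from the paper):

```python
import math

def apparent_pixels(pattern_m, distance_m, px_per_degree):
    """Approximate width in pixels of a calibration pattern seen at a given
    distance, for a camera with roughly uniform angular resolution."""
    angle_deg = math.degrees(2 * math.atan(pattern_m / (2 * distance_m)))
    return angle_deg * px_per_degree

# Omnidirectional camera covering 360 degrees over 3600 px -> 10 px/degree.
near = apparent_pixels(pattern_m=0.5, distance_m=1.0, px_per_degree=10)
# Same pattern placed near the far camera of a 6 m baseline pair.
far = apparent_pixels(pattern_m=0.5, distance_m=6.0, px_per_degree=10)
print(round(near), round(far))  # 281 48
```

With the pattern shrunk to a few dozen pixels, corner localization in the far camera becomes too noisy for joint calibration, which motivates the paper's separate treatment of the large-baseline case.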

Virtual Reality Sickness Assessment based on Difference between Head Movement Velocity and Virtual Camera Motion Velocity (사용자 머리 움직임 속도와 가상 카메라 움직임 속도 간 차이에 따른 VR 멀미 측정)

  • Kim, DongUn; Jung, Yong Ju
    • Journal of Korea Multimedia Society / v.22 no.1 / pp.110-116 / 2019
  • Virtual reality (VR) sickness can strongly affect the viewing quality of VR 3D contents. In particular, watching 3D contents on a head-mounted display (HMD) can cause a severe level of visual discomfort. Despite the importance of assessing VR sickness, most recent studies have focused on unveiling the causes of VR sickness. In this paper, we subjectively measure the level of VR sickness induced by viewing omnidirectional 3D graphics contents in an HMD environment. In addition, we propose an objective assessment model that estimates the level of induced VR sickness by calculating the difference between head movement velocity and global camera motion velocity.
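The proposed objective measure, the mismatch between head-movement velocity and global camera-motion velocity, can be sketched as follows. The sampling rate and the mean-absolute-difference aggregation are illustrative assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def velocity(positions, dt):
    """Finite-difference speed from a sequence of angular positions (deg)."""
    return np.abs(np.diff(positions)) / dt

def sickness_score(head_pos, camera_pos, dt=1 / 90):
    """Mean absolute mismatch between head velocity and camera-motion
    velocity; a larger mismatch predicts more VR sickness."""
    return float(np.mean(np.abs(velocity(head_pos, dt)
                                - velocity(camera_pos, dt))))

t = np.linspace(0, 2, 181)             # 2 s sampled at 90 Hz
head = 10 * np.sin(2 * np.pi * t)      # head yaw trajectory (deg)
matched = sickness_score(head, head)                     # camera follows head
mismatched = sickness_score(head, np.zeros_like(head))   # static camera
print(matched, mismatched)
```

When the virtual camera tracks the head exactly the score is zero, consistent with the sensory-conflict intuition behind the model: discomfort grows as the rendered motion diverges from the vestibular motion.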

Study on the Improved Target Tracking for the Collaborative Control of the UAV-UGV (UAV-UGV의 협업제어를 위한 향상된 Target Tracking에 관한 연구)

  • Choi, Jae-Young; Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.450-456 / 2013
  • This paper presents an improved target tracking method for collaboration between a quadrotor-type UAV (Unmanned Aerial Vehicle) and an omnidirectional UGV (Unmanned Ground Vehicle). With the existing method, if the UAV shakes or the UGV moves rapidly, the tracker loses the target. To solve this problem, we propose an algorithm that can continue tracking after the target is lost. The proposed algorithm stores the motion vector of the landmark, and when the target is lost, a control signal is applied so that the camera keeps moving in the direction in which the landmark left the field of view. Prior to the experiment, proportional-integral control was applied to the four motors to calibrate the heading value of the omnidirectional mobile robot. The landmark on the UGV was recognized by the camera attached to the UAV, and the target was tracked using proportional-integral-derivative control. Finally, the performance of the target tracking controller and the proposed algorithm was evaluated experimentally.
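The recovery idea, remembering the landmark's last motion vector and steering along it when detection fails, can be sketched as a small state machine. The class and variable names are illustrative, not from the paper:

```python
class TargetTracker:
    """Track a landmark; when it is lost, keep commanding motion along
    the last stored landmark vector so the target can be reacquired."""

    def __init__(self):
        self.last_vector = (0.0, 0.0)

    def update(self, detection):
        """detection: (dx, dy) offset of the landmark, or None if lost."""
        if detection is not None:
            self.last_vector = detection  # remember the direction of motion
            return detection              # normal tracking command
        return self.last_vector           # lost: continue along the last vector

tracker = TargetTracker()
print(tracker.update((1.0, 0.5)))  # (1.0, 0.5) - tracking normally
print(tracker.update(None))        # (1.0, 0.5) - lost, reusing stored vector
```

In the paper this memory feeds the PID tracking controller, so the camera sweep continues toward where the landmark exited rather than stalling at the last seen position.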

Mixing Collaborative and Hybrid Vision Devices for Robotic Applications (로봇 응용을 위한 협력 및 결합 비전 시스템)

  • Bazin, Jean-Charles; Kim, Sung-Heum; Choi, Dong-Geol; Lee, Joon-Young; Kweon, In-So
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors in order to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of the new systems. The first system is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second system is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera and a pan/tilt stereo-perspective camera, and apply it in a real environment. Compared to previous systems, the two proposed systems maintain both high speed and high resolution through collaborative moving cameras, and cover an enormous search space through the hybrid configuration. Experimental results show the effectiveness of the collaborative and hybrid systems.
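The handoff in such collaborative setups, cueing the pan/tilt camera from a detection in the 360-degree view, reduces to converting the detection's image column in the panoramic view into a pan command. A minimal sketch of that conversion (the panorama width and the zero-pan reference column are assumptions):

```python
def column_to_pan(col, pano_width, pan_zero_col=0):
    """Map a column of a 360-degree panoramic image to a pan command
    in degrees in [-180, 180), measured from the pan/tilt camera's
    zero-pan direction."""
    pan = (col - pan_zero_col) * 360.0 / pano_width
    return (pan + 180.0) % 360.0 - 180.0  # wrap to the shorter rotation

# A detection at column 2700 of a 3600-px panorama -> pan -90 degrees.
print(column_to_pan(2700, 3600))  # -90.0
```

The wrap to [-180, 180) makes the pan/tilt unit take the shorter rotation, which is what keeps the collaborative system fast while the omnidirectional detector provides the coarse direction.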

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omnidirectional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is larger and computes the depth information for omnidirectional images slowly. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omnidirectional image sensor, which acquires the surrounding view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
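The fusion step, pairing each laser-scanner plane point with the obstacle outline seen by the fisheye camera, can be sketched as lifting 2D laser points to 3D using the height implied by the image outline. A minimal sketch in which the per-point heights stand in for the paper's image-processing step (all values are illustrative):

```python
def fuse(laser_points, outline_heights):
    """Combine 2D laser points (x, y) on the scan plane with per-point
    obstacle heights recovered from the omnidirectional image outline,
    yielding 3D fusion points (x, y, z)."""
    return [(x, y, h) for (x, y), h in zip(laser_points, outline_heights)]

laser = [(1.0, 0.0), (0.0, 2.0), (-1.5, -1.5)]  # obstacle hits on the plane
heights = [0.8, 1.2, 0.5]                       # from image outlines (assumed)
points3d = fuse(laser, heights)
print(points3d[1])  # (0.0, 2.0, 1.2)
```

The laser supplies accurate range on one plane and the fisheye view supplies the vertical extent, so neither sensor alone has to produce full 3D depth, which is the size/speed advantage the abstract claims over multi-camera RGB-D rigs.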