• Title/Summary/Keyword: Multi-Camera System


Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak; Lee, Sun-Gu; Lee, Seung-Ho
    • Journal of IKEEE, v.21 no.4, pp.412-415, 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar-division part for the 360° circular image and a multi license plate recognition part. The planar-division part converts the 360° circular image into a quality-enhanced planar image through circular image acquisition, circular image segmentation, conversion to a plane image, pixel correction using color interpolation, color correction, and edge correction inside the high-resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes the plates in the planar image by extracting multi-plate candidate regions, normalizing and restoring the candidate areas, and recognizing the plate numbers and characters with a neural network. To evaluate the proposed system, experiments were carried out with a specialist in the operation of an intelligent parking control system, and a high plate recognition rate of 97.8% was confirmed.
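
As a rough illustration of the circular-to-planar conversion step described above, the following is a minimal sketch only; the image center, radii, output size, and function names are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def unwrap_circular_image(img, cx, cy, r_inner, r_outer, out_w=2048, out_h=512):
    """Map a donut-shaped 360° circular image onto a rectangular panoramic plane."""
    # Output pixel (u, v): u -> angle around the circle, v -> radius.
    u = np.arange(out_w)
    v = np.arange(out_h)
    theta = 2.0 * np.pi * u / out_w                      # angle per output column
    radius = r_inner + (r_outer - r_inner) * v / out_h   # radius per output row
    theta_grid, radius_grid = np.meshgrid(theta, radius)

    # Source coordinates in the circular image for every output pixel.
    map_x = (cx + radius_grid * np.cos(theta_grid)).astype(np.float32)
    map_y = (cy + radius_grid * np.sin(theta_grid)).astype(np.float32)

    # Bilinear remap stands in for the pixel-interpolation step in the abstract.
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical usage with assumed geometry values:
# pano = unwrap_circular_image(cv2.imread("omni.jpg"), cx=960, cy=960,
#                              r_inner=200, r_outer=950)
```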

Establishment of Test Field for Aerial Camera Calibration (항공 카메라 검정을 위한 테스트 필드 구축방안)

  • Lee, Jae-One; Yoon, Jong-Seong; Sin, Jin-Soo; Yun, Bu-Yeol
    • Journal of Korean Society for Geospatial Information Science, v.16 no.2, pp.67-76, 2008
  • Recently, one of the most notable technological characteristics of aerial surveying is the application of direct georeferencing, which is based on the integration of the main imaging sensors, such as an aerial camera or Lidar, with the positioning sensors GPS and IMU. In addition, a variety of digital aerial mapping cameras have been developed and supplied, with verification of their technical superiority and applicability. In line with this trend, the development of multi-looking aerial photographing systems is making 3-D information acquisition and texture mapping possible for the dead areas arising from building sides and strong terrain variation, where traditional photogrammetry is not valid. However, a multi-looking camera that integrates different sensors and a multi-camera array raises problems of time synchronization among the sensors and of their geometric and radiometric calibration. The establishment of a test field for aerial sensor calibration is absolutely necessary to solve these problems. Therefore, this paper reviews photogrammetric test fields in foreign countries and suggests an establishment scheme for a domestic test field.


Real-Time Detection of Moving Objects from Shaking Camera Based on the Multiple Background Model and Temporal Median Background Model (다중 배경모델과 순시적 중앙값 배경모델을 이용한 불안정 상태 카메라로부터의 실시간 이동물체 검출)

  • Kim, Tae-Ho; Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems, v.16 no.3, pp.269-276, 2010
  • In this paper, we present a method for detecting moving objects based on two background models. These background models help describe the multi-layered environment contained in images taken by a shaking camera; they are the MBM (Multiple Background Model) and the TMBM (Temporal Median Background Model). Because both background models are pixel-based, they are corrupted by noise caused by camera movement. Therefore, a correlation coefficient is used to calculate the similarity between consecutive images and to measure the camera motion vector, which indicates the camera movement. For the correlation calculation, a selected region is chosen in the previous image and a search area in the current image, and the correlation process yields a displacement vector for that region. Every selected region has its own displacement vector, so the global maximum of the histogram of displacement vectors is taken as the camera motion vector between consecutive images. The MBM classifies the intensity distribution of each pixel, continuously related through the camera motion vector, into multiple clusters. However, the MBM is weakly sensitive to temporal intensity variation, so the TMBM is used to compensate for this weakness. In a video-based experiment, we verify that the presented algorithm needs around 49 ms to generate the two background models and detect moving objects.
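
The camera-motion estimation described above can be sketched roughly as follows, assuming grayscale uint8 frames and a list of pre-selected regions; the search margin, function names, and the median-background helper are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def camera_motion_vector(prev_gray, curr_gray, regions, search_margin=16):
    """regions: list of (x, y, w, h) patches selected in the previous frame."""
    displacements = []
    H, W = curr_gray.shape
    for (x, y, w, h) in regions:
        template = prev_gray[y:y + h, x:x + w]
        x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
        x1, y1 = min(W, x + w + search_margin), min(H, y + h + search_margin)
        search = curr_gray[y0:y1, x0:x1]
        # Correlation-coefficient matching gives this region's displacement vector.
        res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        displacements.append(((x0 + max_loc[0]) - x, (y0 + max_loc[1]) - y))
    # Global maximum of the displacement histogram = camera motion vector.
    values, counts = np.unique(np.array(displacements), axis=0, return_counts=True)
    return tuple(values[np.argmax(counts)])

def temporal_median_background(frame_buffer):
    """TMBM-style background: per-pixel median over the last N aligned frames."""
    return np.median(np.stack(frame_buffer, axis=0), axis=0).astype(np.uint8)
```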

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik; Ahn, Seong-Je; Park, Gwang-Yeong; Cha, Jae-Sang; Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.11A, pp.1066-1072, 2010
  • In this paper we propose a multi-event-handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an existing one, that camera cannot handle the two objects simultaneously; in the second case, where the object moves out of the scene during tracking, the camera loses the object. In the proposed method, a nearby camera takes over the role of tracking the new object or re-detecting the lost one in each case. The nearby camera receives the object's location information from the original camera and establishes a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
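
A highly simplified sketch of the camera-to-camera event handoff idea described above; all class, field, and function names here are hypothetical, and the real system's event format is not specified in the abstract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PTZCamera:
    cam_id: int
    busy: bool = False
    target: Optional[dict] = None   # e.g. {"object_id": 7, "pan_tilt": (pan, tilt)}

def hand_off(event, cameras, neighbors):
    """event: {"from_cam": id, "object_id": ..., "last_pan_tilt": (pan, tilt)}
    neighbors: dict mapping a camera id to the ids of its nearby cameras."""
    for cam in cameras:
        if cam.cam_id in neighbors[event["from_cam"]] and not cam.busy:
            cam.busy = True
            cam.target = {"object_id": event["object_id"],
                          "pan_tilt": event["last_pan_tilt"]}
            return cam.cam_id   # a nearby free camera takes over the event
    return None                 # no free nearby camera; the event stays queued
```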

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.6, pp.1307-1312, 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and LiDAR (Light Detection and Ranging) sensors to address the core components of autonomous-driving perception, namely object recognition and distance measurement. We extract objects within the scene and generate precise location and distance information for them using the proposed hybrid camera system. First, we employ the YOLOv7 algorithm, widely used in autonomous driving for its fast computation, high accuracy, and real-time processing, for object recognition within the scene. We then use the multi-focal cameras to create depth maps that provide object positions and distance information. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensors with the generated depth maps. Based on the proposed hybrid camera system, we introduce an autonomous vehicle platform that perceives its surroundings more accurately during operation and provides precise 3D spatial location and distance information. We anticipate that this will improve the safety and efficiency of autonomous vehicles.
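
A hedged sketch of the depth/LiDAR fusion step described above, assuming known LiDAR-to-camera extrinsics `T_cam_lidar` and camera intrinsics `K`; the median-based fusion rule and all names are illustrative, not the paper's exact method.

```python
import numpy as np

def object_distances(boxes, depth_map, lidar_xyz, K, T_cam_lidar, min_points=5):
    """boxes: [(x1, y1, x2, y2), ...] from the detector; lidar_xyz: (N, 3) points."""
    # Transform LiDAR points into the camera frame and project with intrinsics K.
    pts_h = np.hstack([lidar_xyz, np.ones((lidar_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]            # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    distances = []
    for (x1, y1, x2, y2) in boxes:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        if inside.sum() >= min_points:
            distances.append(float(np.median(pts_cam[inside, 2])))   # LiDAR range
        else:
            patch = depth_map[int(y1):int(y2), int(x1):int(x2)]
            distances.append(float(np.median(patch)))                # camera depth map
    return distances
```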

Development of PKNU3: A small-format, multi-spectral, aerial photographic system

  • Lee Eun-Khung; Choi Chul-Uong; Suh Yong-Cheol
    • Korean Journal of Remote Sensing, v.20 no.5, pp.337-351, 2004
  • Our laboratory originally developed the compact, multi-spectral, automatic aerial photographic system PKNU3 to allow greater flexibility in geological and environmental data collection. We are currently developing the PKNU3 system, which consists of a color-infrared spectral camera capable of simultaneous photography in the visible and near-infrared bands; a thermal infrared camera; two computers, each with an 80-gigabyte storage capacity for images; an MPEG board that can compress and transfer data to the computers in real time; and the capability of operating from a helicopter platform. Before actual aerial photographic testing of PKNU3, we experimented with each sensor, analyzing the lens distortion, the sensitivity of the CCD in each band, and the thermal response of the thermal infrared sensor. As of September 2004, the PKNU3 development schedule has reached the second phase of testing. In two aerial photographic tests, R, G, B, and IR images were acquired simultaneously, and images with an overlap rate of 70% could be obtained using the automatic 1-s interval recording. Further study is warranted to enhance the system with the addition of gyroscopic and IMU units. We evaluated the PKNU3 system as a method of environmental remote sensing by comparing chlorophyll images derived from PKNU3 photographs. This appraisal is supported by an existing study that reported a modest improvement in the linear fit between chlorophyll measurements and the RVI, NDVI, and SAVI images derived from photographs taken by a Duncantech MS 3100, which has the same spectral configuration as the MS 4000 used in the PKNU3 system.
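
For reference, the vegetation indices mentioned above (RVI, NDVI, SAVI) can be computed from co-registered red and near-infrared bands as in the following small sketch; the soil-adjustment factor value is an assumption, not taken from the paper.

```python
import numpy as np

def vegetation_indices(red, nir, soil_factor=0.5):
    """red, nir: co-registered band images; returns (RVI, NDVI, SAVI) arrays."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    eps = 1e-6                                       # avoid division by zero
    rvi = nir / (red + eps)                          # ratio vegetation index
    ndvi = (nir - red) / (nir + red + eps)           # normalized difference VI
    savi = (1.0 + soil_factor) * (nir - red) / (nir + red + soil_factor + eps)
    return rvi, ndvi, savi
```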

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun; Wu, Yue; Zeng, Xiangrong; Huangpeng, Qizi; Liu, Yan; Long, Xin; Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.5, pp.2366-2395, 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras. These multi-view images are usually obtained by a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that jointly realizes IF and BD based on an integrated energy-function optimization. First, we reformulate the MVSR problem into a multi-channel blind deblurring (MCBD) problem, which is easier to solve than the original formulation. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization subproblems with respect to the desired HR image and with respect to the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera array system demonstrate the effectiveness of the proposed method.
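
The abstract does not spell out its energy function, so as a hedged reminder only, the generic scaled-form ADMM template it relies on is the following (not the paper's exact subproblems): for a splitting $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, with penalty $\rho$ and scaled dual variable $u$,

$$
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
$$

In the paper's setting, the subproblems in the desired HR image and in the unknown blur are each handled with updates of this kind.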

Multi-Focusing Image Capture System for 3D Stereo Image (3차원 영상을 위한 다초점 방식 영상획득장치)

  • Ham, Woon-Chul; Kwon, Hyeok-Jae; Enkhbaatar, Tumenjargal
    • The Journal of Korea Robotics Society, v.6 no.2, pp.118-129, 2011
  • In this paper, we suggest a new camera capturing and synthesizing algorithm that uses multi-captured left and right images for a more comfortable feeling of 3D depth, and we also propose a 3D image capturing hardware system based on this new algorithm. We further suggest a simple control algorithm for calibrating the camera capture system with a zooming function, based on a performance index measure used as feedback information to stabilize the focusing control problem. We also discuss the mapping theory of projection, based on a pinhole camera model, under the assumption that a viewer sits 50 cm in front of the 3D LCD screen on which the captured image is displayed. We divide the image into 9 segments, propose a method to find the optimal alignment and focusing based on alignment and sharpness measures, and propose a synthesizing fusion of the optimized 9 segment images for the best 3D depth perception.
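
The abstract does not specify its alignment and sharpness measures, so the following sketch uses a common stand-in, the variance of the Laplacian, to score each of the 9 segments; it is illustrative only, not the paper's measure.

```python
import cv2
import numpy as np

def segment_sharpness(gray, rows=3, cols=3):
    """Score each cell of a rows x cols grid (9 segments by default) for sharpness."""
    h, w = gray.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            seg = gray[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            # Variance of the Laplacian: a standard focus measure.
            scores[r, c] = cv2.Laplacian(seg, cv2.CV_64F).var()
    return scores          # higher value = sharper (better-focused) segment
```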

Platform Calibration of an Aerial Multi-View Camera System (항공용 다각사진 카메라 시스템의 플랫폼 캘리브레이션)

  • Lee, Chang-No; Kim, Chang-Jae; Seo, Sang-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.369-375, 2010
  • Since multi-view images can be utilized for 3D visualization as well as surveying, system calibration is an essential procedure. The cameras in the system are mounted to a holder, and their locations and attitudes relative to one another are fixed. Therefore, the locations and attitudes of the perspective centers of the four oblique-looking cameras can be calculated from the location and attitude of the nadir-looking camera and the boresight values between the cameras. In this regard, this research focuses on the analysis of the relative location and attitude between the nadir- and oblique-looking cameras, based on the exterior orientation parameters obtained from the aerial triangulation of real multi-view images. We obtained high standard deviations for the relative locations between the nadir and oblique cameras. Standard deviations of the relative attitudes between the cameras were low when only the exterior orientations of the oblique-looking cameras were allowed to be adjusted. Moreover, low standard deviations of the relative attitudes were also obtained when we adjusted only the attitudes of the cameras rather than all of the exterior orientation parameters.
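
The pose composition described above (an oblique camera's pose from the nadir camera's pose plus fixed boresight/lever-arm values) can be written compactly as in this sketch; the conventions and variable names are assumptions for illustration.

```python
import numpy as np

def oblique_pose(R_nadir, C_nadir, R_bore, t_lever):
    """Compose the oblique camera's pose from the nadir camera's pose.

    R_nadir: 3x3 camera-to-world rotation of the nadir camera.
    C_nadir: nadir camera's perspective center in world coordinates.
    R_bore:  fixed relative rotation (boresight) nadir -> oblique.
    t_lever: fixed offset of the oblique center, expressed in the nadir frame.
    """
    R_oblique = R_nadir @ R_bore                 # compose the fixed relative attitude
    C_oblique = C_nadir + R_nadir @ t_lever      # offset the perspective center
    return R_oblique, C_oblique
```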

Tracking and Recognition of vehicle and pedestrian for intelligent multi-visual surveillance systems (지능형 다중 화상감시시스템을 위한 움직이는 물체 추적 및 보행자/차량 인식 방법)

  • Lee, Saac; Cho, Jae-Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.2, pp.435-442, 2015
  • In this paper, we propose a pedestrian/vehicle tracking and recognition method for an intelligent multi-visual surveillance system. The system consists of several fixed cameras and one calibrated PTZ camera, which automatically tracks and recognizes the detected moving objects. The fixed wide-angle cameras are used to monitor large open areas, but the moving objects in their images are too small to view in detail. The PTZ camera, however, can extend the monitoring area and enhance image quality by tracking and zooming in on a target. The proposed system determines whether the detected moving objects are pedestrians or vehicles using an SVM. To reduce the tracking error, an improved camera calibration algorithm between the fixed cameras and the PTZ camera is proposed. Various experimental results show the effectiveness of the proposed system.
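
The abstract states that an SVM decides pedestrian versus vehicle but does not name the features, so the following sketch assumes HOG descriptors purely for illustration; the classifier would be trained beforehand on labelled pedestrian and vehicle patches.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

hog = cv2.HOGDescriptor()                 # default 64x128 detection window

def hog_feature(patch):
    patch = cv2.resize(patch, (64, 128))  # match the HOG descriptor's window size
    return hog.compute(patch).flatten()

def train_classifier(pedestrian_patches, vehicle_patches):
    X = np.array([hog_feature(p) for p in pedestrian_patches + vehicle_patches])
    y = np.array([0] * len(pedestrian_patches) + [1] * len(vehicle_patches))
    return SVC(kernel="rbf").fit(X, y)    # 0 = pedestrian, 1 = vehicle

def classify(clf, moving_object_patch):
    label = clf.predict([hog_feature(moving_object_patch)])[0]
    return "pedestrian" if label == 0 else "vehicle"
```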