• Title/Summary/Keyword: multi-cameras

Combined Static and Dynamic Platform Calibration for an Aerial Multi-Camera System

  • Cui, Hong-Xia;Liu, Jia-Qi;Su, Guo-Zhong
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.6, pp.2689-2708, 2016
  • Multi-camera systems that integrate two or more low-cost digital cameras are adopted to achieve higher ground coverage and improve the base-height ratio in low-altitude remote sensing. To guarantee accurate multi-camera integration, the geometric relationship among the cameras must be determined through platform calibration. This paper proposes a combined two-step platform calibration method. In the first step, static platform calibration is conducted based on the stable relative orientation constraint and convergent imaging conditions among the cameras in a static environment. In the second step, a dynamic platform self-calibration approach based on both tie points and straight lines corrects the small changes in the relative relationship among the cameras during flight. Experiments with the proposed two-step method were carried out on terrestrial and aerial images from a multi-camera system composed of four consumer-grade digital cameras onboard an unmanned aerial vehicle. The results show that the proposed approach compensates for the varying relative relationship during flight, achieving a mosaicking accuracy of the virtual images better than 0.5 pixel. The approach can be extended to calibrate other low-cost multi-camera systems that lack a rigorous mechanical structure.
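
The static step can be reduced to estimating, for each camera, a boresight rotation and lever arm relative to a reference camera and checking that they stay stable across exposures. Below is a minimal Python sketch under assumed conventions (world-to-camera rotation matrices and projection centres from a prior bundle adjustment); the function names are illustrative, not the authors' implementation.

    import numpy as np

    def relative_orientation(R_ref, t_ref, R_cam, t_cam):
        # R_* are 3x3 world-to-camera rotation matrices, t_* are projection
        # centres in world coordinates, for one exposure station.
        R_rel = R_cam @ R_ref.T            # boresight rotation w.r.t. the reference camera
        lever = R_ref @ (t_cam - t_ref)    # lever arm expressed in the reference-camera frame
        return R_rel, lever

    def static_platform_calibration(eo_ref, eo_cam):
        # Average the per-exposure relative orientation over the static block;
        # the spread of the lever arms indicates how stable the mounting is.
        pairs = [relative_orientation(Rr, tr, Rc, tc)
                 for (Rr, tr), (Rc, tc) in zip(eo_ref, eo_cam)]
        levers = np.array([lv for _, lv in pairs])
        R_mean = np.mean([R for R, _ in pairs], axis=0)
        U, _, Vt = np.linalg.svd(R_mean)   # project the averaged matrix back onto a rotation
        return U @ Vt, levers.mean(axis=0), levers.std(axis=0)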

Procedural Geometry Calibration and Color Correction ToolKit for Multiple Cameras (절차적 멀티카메라 기하 및 색상 정보 보정 툴킷)

  • Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.4, pp.615-618, 2021
  • Recently, 3D reconstruction of real objects with multiple cameras has been widely used in services such as VR/AR, motion capture, and plenoptic video generation. Accurate 3D reconstruction requires geometric and color matching between the cameras. However, previous calibration and correction methods for geometry (internal and external parameters) and color (intensity) are difficult for non-experts to perform manually. In this paper, we propose a toolkit that performs procedural geometry calibration and color correction among cameras of different positions and types. The toolkit provides a simple user interface and proved effective in setting up multiple cameras for reconstruction.
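
The paper does not detail its internals here, but the two ingredients it names can be sketched with standard tools: OpenCV chessboard calibration for the geometric (internal) parameters and a per-channel gain against a reference camera for the colour correction. The function names and the shared-chart assumption are ours, not the toolkit's API.

    import cv2
    import numpy as np

    def calibrate_intrinsics(object_points, image_points, image_size):
        # Standard OpenCV calibration: lists of 3D chessboard corners and their
        # detected 2D positions over several views of the pattern.
        rms, K, dist, _, _ = cv2.calibrateCamera(
            object_points, image_points, image_size, None, None)
        return rms, K, dist

    def channel_gains(reference_patch, target_patch):
        # Per-channel gains mapping the target camera's colours onto the
        # reference camera, estimated from the same chart patch seen by both.
        ref_mean = reference_patch.reshape(-1, 3).mean(axis=0)
        tgt_mean = target_patch.reshape(-1, 3).mean(axis=0)
        return ref_mean / np.maximum(tgt_mean, 1e-6)

    def correct_colors(image, gains):
        return np.clip(image.astype(np.float32) * gains, 0, 255).astype(np.uint8)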

Platform Calibration of an Aerial Multi-View Camera System (항공용 다각사진 카메라 시스템의 플랫폼 캘리브레이션)

  • Lee, Chang-No;Kim, Chang-Jae;Seo, Sang-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.369-375, 2010
  • Since multi-view images can be utilized for 3D visualization as well as surveying, system calibration is an essential procedure. The cameras in the system are mounted on a holder, so their relative locations and attitudes are fixed. Therefore, the locations and attitudes of the perspective centers of the four oblique-looking cameras can be calculated from the location and attitude of the nadir-looking camera and the boresight values between the cameras. Accordingly, this research focuses on analyzing the relative location and attitude between the nadir- and oblique-looking cameras based on the exterior orientation parameters obtained from aerial triangulation of real multi-view images. The standard deviations of the relative locations between the nadir and oblique cameras were high. The standard deviations of the relative attitudes between the cameras were low when only the exterior orientations of the oblique-looking cameras were allowed to be adjusted, and also when only the attitudes of the cameras, rather than all of their exterior orientation parameters, were adjusted.
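
The forward relation described above can be written compactly; the notation is ours, assuming $R$ denotes a camera-to-object attitude matrix:

    X_o = X_n + R_n \, d_{n \to o}, \qquad R_o = R_n \, \Delta R_{n \to o}

Here $X_n$ and $R_n$ are the nadir camera's projection centre and attitude from aerial triangulation, $d_{n \to o}$ is the lever arm to an oblique camera expressed in the nadir-camera frame, and $\Delta R_{n \to o}$ is the boresight rotation; the standard deviations analyzed in the paper describe how stable these two quantities remain across exposures.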

Algorithms for Multi-sensor and Multi-primitive Photogrammetric Triangulation

  • Shin, Sung-Woong;Habib, Ayman F.;Ghanma, Mwafag;Kim, Chang-Jae;Kim, Eui-Myoung
    • ETRI Journal, v.29 no.4, pp.411-420, 2007
  • The steady evolution of mapping technology is leading to the increasing availability of multi-sensory geo-spatial datasets, such as data acquired by single-head frame cameras, multi-head frame cameras, line cameras, and light detection and ranging (LiDAR) systems, at a reasonable cost. The complementary nature of the data collected by these systems makes their integration attractive for obtaining a complete description of the object space. However, such integration is only possible after accurate co-registration of the collected data to a common reference frame. This registration can be carried out reliably through a triangulation procedure that considers the characteristics of the involved data. This paper introduces algorithms for a multi-primitive and multi-sensory triangulation environment geared towards taking advantage of the complementary characteristics of the spatial data available from the above-mentioned sensors. The triangulation procedure ensures the alignment of the involved data to a common reference frame. The devised methodologies are tested and proven efficient through experiments using real multi-sensory data.
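
The common observation model behind such a triangulation for frame imagery is the collinearity condition, stated here in its textbook form rather than quoted from the paper; line cameras use the same equations with time-dependent exterior orientation, and LiDAR contributes object-space primitives such as points and linear features to the same adjustment:

    x = x_0 - c\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \qquad
    y = y_0 - c\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}

where $(x_0, y_0, c)$ are the interior orientation parameters, $(X_c, Y_c, Z_c)$ the perspective centre, and $r_{ij}$ the elements of the rotation matrix from object space to image space.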

Improved Object Recognition using Multi-view Camera for ADAS (ADAS용 다중화각 카메라를 이용한 객체 인식 향상)

  • Park, Dong-hun;Kim, Hakil
    • Journal of Broadcast Engineering, v.24 no.4, pp.573-579, 2019
  • To achieve fully autonomous driving, perception of the surrounding environment must be superior to that of humans. The $60^{\circ}$ narrow-angle and $120^{\circ}$ wide-angle cameras primarily used in autonomous driving each have disadvantages that depend on the viewing angle. This paper uses a multi-angle object recognition system to overcome the respective disadvantages of wide- and narrow-angle cameras. In addition, the aspect ratio of the data acquired with the wide- and narrow-angle cameras was analyzed to modify the SSD (Single Shot Detector) algorithm, and the acquired data were used for training to achieve higher performance than with monocular cameras alone.
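
One concrete way to adapt SSD to the two viewing angles is to change the default-box aspect ratios per camera to match the measured object shapes. The sketch below uses illustrative numbers and is not the paper's exact modification.

    import numpy as np

    def default_boxes(feature_size, scale, aspect_ratios):
        # SSD-style default boxes (cx, cy, w, h) on a square feature map;
        # widths and heights follow the chosen scale and aspect ratios.
        boxes = []
        for i in range(feature_size):
            for j in range(feature_size):
                cx, cy = (j + 0.5) / feature_size, (i + 0.5) / feature_size
                for ar in aspect_ratios:
                    boxes.append([cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)])
        return np.array(boxes)

    # Hypothetical choice: wider boxes for the 120-degree camera, narrower/taller
    # boxes for the 60-degree camera, reflecting the analysed aspect ratios.
    wide_cam_boxes   = default_boxes(38, 0.1, aspect_ratios=[1.0, 2.0, 3.0])
    narrow_cam_boxes = default_boxes(38, 0.1, aspect_ratios=[1.0, 0.5, 1.0 / 3.0])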

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.10a, pp.422-424, 2021
  • In this paper, we present an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which helps the vehicle navigate toward its goal. Running object detection on numerous cameras can slow real-time processing, so the convolutional neural network algorithm chosen to address this problem must also suit the capacity of the hardware. The localization of the classified detected objects is derived from the 3D point cloud environment: the LiDAR point cloud data are first parsed, and an algorithm based on 3D Euclidean clustering then provides accurate localization of the objects. We evaluated the method using our own dataset collected with a VLP-16 and multiple cameras, and the results show the effectiveness of the method and the multi-sensor fusion strategy.
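
A compact sketch of the two fusion steps the abstract names: Euclidean clustering of the LiDAR returns (approximated here with DBSCAN, which groups points by a distance threshold) and projection of each cluster into a camera image so it can be matched to a 2D detection. The parameter values and function names are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def euclidean_clusters(points, tolerance=0.5, min_points=10):
        # Group nearby LiDAR returns (N x 3 array) into object candidates.
        labels = DBSCAN(eps=tolerance, min_samples=min_points).fit_predict(points)
        return [points[labels == k] for k in set(labels) if k != -1]

    def project_to_image(points, K, R, t):
        # Project 3D points into a camera with intrinsics K and extrinsics (R, t);
        # returns pixel coordinates and the corresponding depths.
        cam = R @ points.T + t.reshape(3, 1)
        uv = (K @ cam)[:2] / cam[2]
        return uv.T, cam[2]

    def assign_clusters_to_boxes(clusters, boxes, K, R, t):
        # A cluster is attached to the first 2D detection box containing its
        # projected centroid; the cluster centroid then localises that object.
        matches = []
        for cluster in clusters:
            uv, _ = project_to_image(cluster.mean(axis=0, keepdims=True), K, R, t)
            u, v = uv[0]
            for i, (x1, y1, x2, y2) in enumerate(boxes):
                if x1 <= u <= x2 and y1 <= v <= y2:
                    matches.append((i, cluster.mean(axis=0)))
                    break
        return matches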

Cavitation in Pump Inducer with Axi-asymmetrical Inlet Plate Observed by Multi-cameras

  • Kim, Jun-Ho;Atono, Takashi;Ishizaka, Koichi;Watanabe, Satoshi;Furukawa, Akinori
    • International Journal of Fluid Machinery and Systems, v.3 no.2, pp.122-128, 2010
  • Attaching an inducer in front of the main impeller is a powerful way to improve cavitation performance; however, low-frequency cavitation surge oscillation occurs when the blade cavities grow to each throat section of the blade passages simultaneously. A conceptual method of installing an axi-asymmetrical suction inlet plate has been proposed to keep the throat passages from becoming unstable all at once, and its effect on suppressing the oscillation has been investigated. In the present study, cavitation behavior in the inducer is observed by distributing multiple cameras circumferentially, recording simultaneously, and reconstructing the photographs on one plane as if viewing a moving linear cascade. The observed results are discussed together with other measurements, such as the casing wall pressure distribution, to clarify in more detail the mechanism by which the axi-asymmetrical inlet plate suppresses the oscillation.
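
The reconstruction onto one plane amounts to placing each camera's synchronised frame at the arc-length position corresponding to its circumferential mounting angle, so the passages read as a linear cascade. A rough sketch under the assumptions of equal frame sizes and known mounting angles; this is not the authors' processing code.

    import numpy as np

    def unwrap_to_linear_cascade(frames, angles_deg, radius_mm):
        # frames: list of equally sized image arrays, one per camera, captured
        # simultaneously; angles_deg: circumferential mounting angle of each camera.
        order = np.argsort(angles_deg)
        arc_positions = np.deg2rad(np.asarray(angles_deg)[order]) * radius_mm
        strip = np.hstack([frames[i] for i in order])   # side-by-side unwrapped view
        return strip, arc_positions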

Super-Resolution Image Reconstruction Using Multi-View Cameras (다시점 카메라를 이용한 초고해상도 영상 복원)

  • Ahn, Jae-Kyun;Lee, Jun-Tae;Kim, Chang-Su
    • Journal of Broadcast Engineering, v.18 no.3, pp.463-473, 2013
  • In this paper, we propose a super-resolution (SR) image reconstruction algorithm using multi-view images. We acquire 25 images from a multi-view camera rig consisting of a $5{\times}5$ array of cameras and then reconstruct an SR image of the center view using the low-resolution (LR) input image and the other 24 LR reference images. First, we estimate disparity maps from the input image to each of the 24 reference images. Then, we interpolate an SR image by employing the LR image and the matching points in the reference images. Finally, we refine the SR image using an iterative regularization scheme. Experimental results demonstrate that the proposed algorithm provides higher-quality SR images than conventional algorithms.
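
To make the steps concrete, a toy version of the pipeline: warp each reference LR view to the centre view with its disparity map, fuse, and then nudge the SR estimate towards consistency with the LR observation. This is a simplified stand-in for the paper's interpolation and regularization stages; the array shapes and nearest-neighbour warping are assumptions.

    import numpy as np

    def warp_by_disparity(reference_lr, disparity):
        # Shift every pixel of a reference view towards the centre view by its
        # (horizontal) disparity, using nearest-neighbour sampling.
        h, w = reference_lr.shape
        cols = np.clip(np.arange(w)[None, :] - np.round(disparity).astype(int), 0, w - 1)
        rows = np.arange(h)[:, None]
        return reference_lr[rows, cols]

    def fuse_and_refine(center_lr, warped_refs, scale=2, steps=10, lam=0.2):
        # Initial SR guess: upsample the average of the centre view and the warped
        # references; refinement: pull the downsampled SR back towards the centre LR.
        avg = np.mean([center_lr] + warped_refs, axis=0)
        sr = np.kron(avg, np.ones((scale, scale)))      # naive upsampling
        for _ in range(steps):
            sr[::scale, ::scale] += lam * (center_lr - sr[::scale, ::scale])
        return sr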

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.32 no.3, pp.191-204, 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied to different mapping (e.g., mobile mapping and mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). In order to fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to the data collection procedure. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera and the system mounting parameters are defined relative to that reference camera. The proposed approach is easy to implement and computationally efficient. Its major advantage, compared to available multi-camera system calibration approaches, is the flexibility to be applied to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly developed, indirectly geo-referenced multi-camera system.
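
In our notation (not the paper's), the built-in constraint writes the exterior orientation of every camera $j$ at exposure $t$ through the reference camera's parameters and a single block-invariant set of mounting parameters, and this parameterization is substituted directly into the collinearity equations, so one bundle adjustment recovers the lever arm $d^{j}$ and boresight matrix $\Delta R^{j}$ together with the interior orientation parameters and object points:

    X^{j}(t) = X^{\mathrm{ref}}(t) + R^{\mathrm{ref}}(t)\, d^{j}, \qquad R^{j}(t) = R^{\mathrm{ref}}(t)\, \Delta R^{j}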

The Improved Joint Bayesian Method for Person Re-identification Across Different Camera

  • Hou, Ligang;Guo, Yingqiang;Cao, Jiangtao
    • Journal of Information Processing Systems, v.15 no.4, pp.785-796, 2019
  • Due to variations in viewpoint, illumination, personal gait, and background, person re-identification across cameras has been a challenging task in the video surveillance area. To address this problem, a novel method called Joint Bayesian across different cameras for person re-identification (JBR) is proposed. Motivated by the superior measurement ability of Joint Bayesian, a set of Joint Bayesian matrices is obtained by learning on different camera pairs. Combined with the global Joint Bayesian matrix, the proposed method exploits the characteristics of multi-camera capture for person re-identification and improves the precision of the similarity computed between two individuals by learning the transition between two cameras. The proposed method is evaluated on two large-scale re-ID datasets, Market-1501 and DukeMTMC-reID. The Rank-1 accuracy increases by about 3% and 4%, and the mean average precision (mAP) improves by about 1% and 4%, respectively.
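
For reference, the Joint Bayesian similarity that such a method builds on scores a pair of feature vectors with a log-likelihood ratio derived from a between-identity covariance S_mu and a within-identity covariance S_eps (Chen et al., 2012); per the abstract, one such matrix pair would be learned for each camera pair. A minimal sketch, with the covariance estimation itself assumed to be done elsewhere:

    import numpy as np

    def joint_bayesian_matrices(S_mu, S_eps):
        # From the two d x d covariances, build the matrices A and G of the
        # verification score r(x1, x2) = x1'Ax1 + x2'Ax2 - 2 x1'Gx2.
        d = S_mu.shape[0]
        intra = np.block([[S_mu + S_eps, S_mu], [S_mu, S_mu + S_eps]])
        inv_intra = np.linalg.inv(intra)
        A = np.linalg.inv(S_mu + S_eps) - inv_intra[:d, :d]
        G = inv_intra[:d, d:]
        return A, G

    def jb_similarity(x1, x2, A, G):
        # Higher scores mean the two features more likely belong to the same person.
        return x1 @ A @ x1 + x2 @ A @ x2 - 2 * x1 @ G @ x2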