• Title/Summary/Keyword: multi-camera


Synthesis of Multi-View Images Based on a Convergence Camera Model

  • Choi, Hyun-Jun
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.2
    • /
    • pp.197-200
    • /
    • 2011
  • In this paper, we propose a multi-view stereoscopic image synthesis algorithm for 3DTV systems that uses depth information together with RGB texture from a depth camera. The proposed algorithm synthesizes the multi-view images that a virtual convergence camera model would generate. Experimental results show that the proposed algorithm outperforms conventional methods.
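As a rough illustration of depth-based view synthesis (a generic 3D-warping sketch, not the authors' exact convergence-camera formulation), each pixel can be back-projected using its depth value and reprojected into a virtual camera pose:

```python
import numpy as np

def warp_to_virtual_view(depth, K, R, t):
    """Back-project pixels using depth, then reproject into a virtual camera.

    depth : (H, W) depth map in metric units
    K     : (3, 3) intrinsic matrix (assumed shared by both views)
    R, t  : pose of the virtual camera relative to the reference camera
    Returns an (H, W, 2) array of pixel coordinates in the virtual view.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                    # normalized viewing rays
    pts = rays * depth.reshape(1, -1)                # 3-D points, camera frame
    proj = K @ (R @ pts + t.reshape(3, 1))           # into the virtual camera
    return (proj[:2] / proj[2:]).T.reshape(H, W, 2)  # perspective divide
```

With an identity pose the warp maps each pixel back onto itself, which is a quick sanity check for the geometry.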

Method of vegetation spectrum measurement using multi spectrum camera

  • Takafuji, Yoshifumi.;Kajiwara, Koji.;Honda, Yoshiaki.
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.570-572
    • /
    • 2003
  • In this paper, a method of measuring vegetation spectra using a multi-spectrum camera is studied. Because each pixel of an image taken with a multi-spectrum camera carries spectral data, the relationship between the spectral data and the distribution, structure, etc. of the vegetation can be determined directly. In other words, detailed spectral information about an object, including its spatial distribution, can be obtained from such images. However, the camera poses some problems for field measurement and data analysis; in this study, those problems are addressed.


Self-calibration of a Multi-camera System using Factorization Techniques for Realistic Contents Generation (실감 콘텐츠 생성을 위한 분해법 기반 다수 카메라 시스템 자동 보정 알고리즘)

  • Kim, Ki-Young;Woo, Woon-Tack
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.495-506
    • /
    • 2006
  • In this paper, we propose a self-calibration method for multi-camera systems that uses factorization techniques for realistic contents generation. Traditional self-calibration algorithms for multi-camera systems have focused on stereo(-rig) cameras or multiple cameras in a fixed configuration, which makes them difficult to apply to 3D reconstruction with a mobile multi-camera system and to other general applications. For those reasons, we suggest a robust algorithm for generally structured multi-camera systems, including an algorithm for a plane-structured multi-camera system. We explain the theoretical background and practical usage based on a projective factorization and the proposed affine factorization, and show experimental results with both simulated data and real images. The proposed algorithm can be used for 3D reconstruction and mobile Augmented Reality.
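The affine-factorization step mentioned in this abstract can be illustrated with the classic Tomasi-Kanade rank-3 factorization (a minimal sketch; the paper's own projective and affine factorizations differ in detail):

```python
import numpy as np

def affine_factorization(W):
    """Factor a 2F x P measurement matrix into motion (2F x 3) and shape (3 x P).

    W stacks the x- and y-image coordinates of P points tracked over F frames.
    Under an affine camera model the row-centered matrix has rank 3 (up to noise),
    so a truncated SVD recovers motion and shape up to a 3x3 affine ambiguity.
    """
    t = W.mean(axis=1, keepdims=True)        # per-row translation component
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # affine motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # affine shape
    return M, S, t
```

Self-calibration methods then resolve the remaining 3x3 ambiguity with metric constraints, which this sketch leaves out.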

View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of Multimedia Information System
    • /
    • v.1 no.1
    • /
    • pp.1-10
    • /
    • 2014
  • In this paper, we propose a new view synthesis technique for coding multi-view color and depth data captured under arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. This enhances the visual quality of the views reconstructed from multiple LDIs compared with those from a single LDI. Experimental results show that the proposed scheme achieves better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality.
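The grouping of camera positions around reference cameras might be sketched as follows (a plain k-means over camera centers, with the camera nearest each centroid taken as the LDI reference; the paper does not prescribe this particular clustering algorithm):

```python
import numpy as np

def cluster_cameras(positions, k, iters=20, seed=0):
    """Group camera centers (N x 3, world coordinates) into k clusters and
    pick, per cluster, the camera nearest the centroid as the LDI reference."""
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(positions[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = positions[labels == j].mean(axis=0)
    refs = [int(np.linalg.norm(positions - c, axis=1).argmin()) for c in centers]
    return labels, refs
```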


Extrinsic calibration using a multi-view camera (멀티뷰 카메라를 사용한 외부 카메라 보정)

  • 김기영;김세환;박종일;우운택
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.187-190
    • /
    • 2003
  • In this paper, we propose an extrinsic calibration method for a multi-view camera that obtains an optimal pose in 3D space. Conventional calibration algorithms do not guarantee accuracy at mid-to-long range because pixel errors increase as the distance between the camera and the calibration pattern grows. To compensate for these errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. Then, we estimate the extrinsic parameters using distance vectors obtained from the structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we iteratively carry out a non-linear optimization using the relationship between the camera and world coordinate systems. The optimized camera parameters can be used to generate 3D panoramic virtual environments and to support AR applications.
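One way to picture how the rigid structure of a multi-view camera can correct per-lens extrinsic estimates is to rescale the translations so that the estimated distances between lens centers match the known baseline (a deliberately simplified, hypothetical correction; the paper's distance-vector estimation is more involved):

```python
import numpy as np

def enforce_baselines(Rs, ts, pairs, known_dist):
    """Rescale translation estimates so that the distance between each pair
    of lens centers matches the camera's known rigid baseline.

    Rs, ts     : per-lens extrinsics (world -> camera), lists of (3,3) and (3,)
    pairs      : lens index pairs with a known physical separation
    known_dist : the rigid baseline length shared by every pair (assumption)
    """
    # camera center in world coordinates: C = -R^T t
    C = [-(R.T @ t) for R, t in zip(Rs, ts)]
    est = np.mean([np.linalg.norm(C[i] - C[j]) for i, j in pairs])
    s = known_dist / est                      # single global scale correction
    return [s * t for t in ts]
```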


Multi-tracer Imaging of a Compton Camera (다중 추적자 영상을 위한 컴프턴 카메라)

  • Kim, Soo Mee
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.18-27
    • /
    • 2015
  • Since a Compton camera has high detection sensitivity, thanks to electronic collimation, and good energy resolution, it is a potential imaging system for nuclear medicine. In this study, we investigated the feasibility of a Compton camera for multi-tracer imaging and proposed a rotating Compton camera that satisfies Orlov's condition for 3D imaging. Two software phantoms with 140 and 511 keV radiation sources were used for Monte Carlo simulation, and the simulation data were reconstructed by list-mode ordered-subset expectation maximization to evaluate the multi-tracer imaging capability of a Compton camera. A Compton camera rotating around the object was then proposed and tested with different rotation-angle steps, evaluated via the histogram of angles in spherical coordinates, to improve the limited field-of-view coverage of a fixed conventional Compton camera. The simulation data yielded separate 140 and 511 keV images from simultaneous multi-tracer detection in both 2D and 3D imaging, and the number of valid projection lines on the conical surfaces grew as the rotation-angle step decreased. Considering the computational load and the number of projection lines on the conical surface, a rotation-angle step of 30 degrees was sufficient for 3D imaging (26 min of computation time for 5 million detected events), and the increased detection time can be addressed with a multiple-Compton-camera system. The proposed Compton camera can be an effective system for multi-tracer imaging and is a potential platform for developing various disease diagnosis and therapy approaches.
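The cone half-angle assigned to each list-mode event follows from standard Compton kinematics; a minimal sketch (the function name is illustrative, and the two energies correspond to the 140 and 511 keV tracers in the study):

```python
import math

MEC2 = 510.999  # electron rest energy in keV

def compton_cone_angle(e0, e_dep):
    """Half-angle (radians) of the event cone from Compton kinematics.

    e0    : initial photon energy in keV (e.g. 140 or 511 for the two tracers)
    e_dep : energy deposited in the scatter detector, keV
    Uses cos(theta) = 1 - MEC2 * (1/(e0 - e_dep) - 1/e0).
    """
    cos_t = 1.0 - MEC2 * (1.0 / (e0 - e_dep) - 1.0 / e0)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_t)
```

Events whose energies violate the kinematic bound are rejected, which is one way multi-tracer data can be separated by energy window before reconstruction.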

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.205-216
    • /
    • 2014
  • Multi-camera systems have been widely used as cost-effective tools for collecting geospatial data for various applications. In order to fully achieve the potential accuracy of these systems for object space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the cameras' components and the system mounting parameters cannot be guaranteed over time, a multi-camera system should be calibrated frequently to confirm the stability of the estimated parameters; automated techniques are therefore needed to facilitate and speed up the calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object-space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling is presented. The automated system calibration procedure is then implemented for a newly-developed multi-camera system while considering the optimum configuration for data collection. Experimental results from the implemented procedure are presented to verify the feasibility of the proposed automation. Qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and system mounting parameters.

Development of a Multi-View Camera System Prototype (다각사진촬영시스템 프로토타입 개발)

  • Park, Seon-Dong;Seo, Sang-Il;Yoon, Dong-Jin;Shin, Jin-Soo;Lee, Chang-No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.2
    • /
    • pp.261-271
    • /
    • 2009
  • Due to the recent rise in demand for 3-dimensional geospatial information on urban areas, general interest in aerial multi-view cameras has been increasing. The conventional geospatial information system depends solely upon vertical images, while a multi-view camera is capable of taking both vertical and oblique images from multiple directions, making it easier for the user to interpret the object. Through our research we developed a prototype of a multi-view camera system that includes the camera system, GPS/INS, a flight management system, and a control system. We also studied and experimented with the camera viewing angles, the synchronization of image capture, the exposure delay, and the data storage that must be considered in developing a multi-view camera system.

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환;김기영;우운택
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.137-148
    • /
    • 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D Virtual Environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suited to panoramic 3D VE generation. To remedy this, the geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and the geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated from the extrinsic parameters of every lens. Inter-camera calibration arranges the multiple cameras in a geometric relationship by applying the Iterative Closest Point (ICP) algorithm to back-projected 3D point clouds. Finally, by repeatedly applying intra- and inter-camera calibration to all lenses of the rotating multi-view cameras, we obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
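The ICP step used for inter-camera calibration rests on rigidly aligning back-projected point clouds; a single alignment step, given already-matched correspondences, can be sketched with the SVD-based Kabsch solution (a generic sketch, not the paper's full iterative pipeline):

```python
import numpy as np

def align_point_clouds(P, Q):
    """Rigid transform (R, t) that best maps point set P (N x 3) onto its
    already-matched correspondences Q (N x 3), via SVD (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

A full ICP loop would re-establish nearest-neighbor correspondences and repeat this step until the alignment error stops decreasing.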

A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.549-552
    • /
    • 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system uses 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique, and these two maps are fused to obtain a more reliable depth map. The fused depth map is not only used to insert a virtual object into the scene based on a depth key, but also to synthesize virtual viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
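The fusion of the depth-camera map with the stereo-matching map can be sketched as a per-pixel confidence-weighted average (an assumed fusion rule; the abstract does not specify the exact weighting the authors use):

```python
import numpy as np

def fuse_depth(d_cam, d_stereo, c_cam, c_stereo):
    """Fuse a depth-camera map and a stereo-matching map into one depth map.

    d_cam, d_stereo : (H, W) depth estimates from the two sources
    c_cam, c_stereo : (H, W) non-negative per-pixel confidence weights
    Returns the per-pixel confidence-weighted average of the two maps.
    """
    w = c_cam + c_stereo
    w[w == 0] = 1.0                           # avoid division by zero
    return (c_cam * d_cam + c_stereo * d_stereo) / w
```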
