• Title/Summary/Keyword: Multi-camera


The Analysis of 3D Position Accuracy of Multi-Looking Camera (다각촬영카메라의 3차원 위치정확도 분석)

  • Go, Jong-Sik;Choi, Yoon-Soo;Jang, Se-Jin;Lee, Ki-Wook
    • Spatial Information Research
    • /
    • v.19 no.3
    • /
    • pp.33-42
    • /
    • 2011
  • Since the method of generating 3D spatial information from aerial photographs was introduced, much research on effective generation methods and applications has been performed. In the Pictometry system, nadir and oblique imagery are acquired at the same time, and 3D positioning is then processed through the Multi-Looking Camera procedure. In this procedure, the number of GCPs (ground control points) is the main factor affecting the accuracy of the true-orthoimage. In this study, the 3D positioning accuracies of true-orthoimages generated with various numbers of GCPs were estimated, and a standard for GCP number and distribution was proposed.
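
The accuracy estimation described in this abstract reduces to comparing measured 3D positions against surveyed check points. A minimal sketch of that comparison, with entirely hypothetical coordinates (the function name and values are not from the paper):

```python
import math

def rmse_3d(true_pts, measured_pts):
    """Root-mean-square error of 3D check-point coordinates."""
    n = len(true_pts)
    sq = sum((tx - mx) ** 2 + (ty - my) ** 2 + (tz - mz) ** 2
             for (tx, ty, tz), (mx, my, mz) in zip(true_pts, measured_pts))
    return math.sqrt(sq / n)

# Hypothetical check points: surveyed truth vs. positions read from a true-orthoimage
truth = [(100.0, 200.0, 50.0), (110.0, 210.0, 52.0)]
meas  = [(100.3, 199.8, 50.5), (109.7, 210.2, 51.6)]
print(round(rmse_3d(truth, meas), 3))  # 0.579
```

Repeating this over orthoimages generated with different GCP counts is what yields the accuracy-versus-GCP-number curves the study reports.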

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map, which is then used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
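
The projection step the abstract mentions (LRF points into pixel coordinates via the extrinsics ${\Phi}$, ${\Delta}$ and intrinsics K) is the standard pinhole projection. A sketch with toy identity extrinsics and made-up intrinsics, not the paper's calibration values:

```python
import numpy as np

def project_lrf_points(points_lrf, Phi, Delta, K):
    """Project LRF 3D points into pixel coordinates.

    Phi:   3x3 rotation from the LRF frame to the camera frame (extrinsic)
    Delta: translation from the LRF frame to the camera frame
    K:     3x3 camera intrinsic matrix
    """
    pts_cam = (Phi @ points_lrf.T) + Delta.reshape(3, 1)  # into camera frame
    pix_h = K @ pts_cam                                   # homogeneous pixels
    return (pix_h[:2] / pix_h[2]).T                       # perspective divide

# Toy example: identity extrinsics, simple intrinsics (all values hypothetical)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])  # metres, in the LRF frame
uv = project_lrf_points(pts, np.eye(3), np.zeros(3), K)
print(uv)  # the first point lands at the principal point (320, 240)
```

Interpolating the projected depths over the pixel grid then gives the LRF disparity map used to patch invalid stereo matches.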


Clustering based object feature matching for multi-camera system (멀티 카메라 연동을 위한 군집화 기반의 객체 특징 정합)

  • Kim, Hyun-Soo;Kim, Gyeong-Hwan
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.915-916
    • /
    • 2008
  • We propose clustering-based object feature matching for identifying the same object across the cameras of a multi-camera system. The method focuses on ease of system initialization and extension. Clustering is used to estimate the parameters of Gaussian mixture models of the objects, and the similarity between models is measured by the Kullback-Leibler divergence. The method can also be applied to the occlusion problem in tracking.
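
For two single Gaussians the Kullback-Leibler divergence the abstract uses has a closed form; for full mixture models it does not and is usually approximated (e.g. by Monte Carlo sampling or matched-component bounds). A sketch of the per-component closed form only, not the paper's full model comparison:

```python
import numpy as np

def kl_gaussian(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL divergence KL(N_p || N_q) between two Gaussians."""
    k = mu_p.shape[0]
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p)
                  + diff @ inv_q @ diff
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

# Identical models diverge by zero; shifting the mean increases the divergence
mu, cov = np.zeros(2), np.eye(2)
print(kl_gaussian(mu, cov, mu, cov))        # 0.0
print(kl_gaussian(mu, cov, mu + 1.0, cov))  # 1.0 (= 0.5 * ||mean shift||^2)
```

Note that KL is asymmetric, so matching systems often use a symmetrized form such as KL(p||q) + KL(q||p).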


Analysis of Image and Development of UV Corona Camera for High-Voltage Discharge Detection (고전압 방전 검출용 자외선 코로나 카메라 개발 및 방전 이미지 분석)

  • Kim, Young-Seok;Shong, Kil-Mok;Bang, Sun-Bae;Kim, Chong-Min;Choi, Myeong-Il
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.25 no.9
    • /
    • pp.69-74
    • /
    • 2011
  • In this paper, a UV corona camera for localizing the UV image of a discharge was developed using solar-blind filtering and Multi Channel Plate (MCP) technology. The UV camera has a $6.4[^{\circ}]{\times}4.8[^{\circ}]$ field of view, slightly wider than that of a conventional camera so that power equipment can be diagnosed over a wide range, and a distance meter is attached to measure the distance between the camera and the equipment. The developed camera measures the discharge count together with the UV image; compared with a commercial camera, there was no significant difference. In salt-spray environments the breakdown voltage was lower than in the normal state, and the discharge image accordingly grew rapidly.

NON-UNIFORMITY CORRECTION - SYSTEM ANALYSIS FOR MULTI-SPECTRAL CAMERA

  • Park Jong-Euk;Kong Jong-Pil;Heo Haeng-Pal;Kim Young Sun;Chang Young Jun
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.478-481
    • /
    • 2005
  • The PMU (Payload Management Unit) is the main subsystem for the management, control, and power supply of the MSC (Multi-Spectral Camera) payload. Its most important function for the electro-optical camera system is the Non-Uniformity Correction (NUC) of the raw imagery data: it rearranges the data from the CCD (Charge Coupled Device) detector and outputs it to the Data Compression and Storage Unit (DCSU). This function is performed by the NUC board in the PMU. In this paper, the NUC board system is described in terms of its configuration and function, the efficiency of the non-uniformity correction, and the influence of data compression on the peculiar features of the CCD pixels. The NUC board is an image-processing unit within the PMU that receives video data from the CEV (Camera Electronic Unit) boards via a hotlink and performs non-uniformity corrections on the pixels according to commands received from the SBC (Single Board Computer) in the PMU. The lossy compression in the DCSU makes on-orbit NUC necessary.
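
The abstract does not give the correction formula, but a common scheme for detector non-uniformity is the two-point (gain/offset) correction, sketched here with hypothetical pixel responses rather than MSC calibration data:

```python
import numpy as np

def nuc_two_point(raw, dark, bright, target_low=0.0, target_high=1.0):
    """Two-point non-uniformity correction.

    dark/bright: per-pixel responses to uniform low and high illumination,
    used to derive a gain and an offset for every detector pixel so that
    all pixels map the same scene radiance to the same output value.
    """
    gain = (target_high - target_low) / (bright - dark)
    return gain * (raw - dark) + target_low

# Hypothetical 1x3 CCD line with mismatched pixel responses
dark   = np.array([10.0, 12.0, 8.0])
bright = np.array([210.0, 260.0, 190.0])
raw    = (dark + bright) / 2.0           # uniform mid-level scene
print(nuc_two_point(raw, dark, bright))  # every pixel maps to 0.5
```

Correcting before the DCSU matters because lossy compression would otherwise spend bits encoding fixed-pattern pixel mismatch instead of scene content.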


MIPI CSI-2 & D-PHY Camera Controller Design for Future Mobile Platform (차세대 모바일 단말 플랫폼을 위한 MIPI CSI-2 & D-PHY 카메라 컨트롤러 구현)

  • Hyun, Eu-Gin;Kwon, Soon;Jung, Woo-Young
    • The KIPS Transactions:PartA
    • /
    • v.14A no.7
    • /
    • pp.391-398
    • /
    • 2007
  • In this paper, we design a camera standard interface for future mobile platforms based on the MIPI CSI-2 and D-PHY specifications. The proposed CSI-2 has an efficient multi-lane management layer in which the independent buffers of the individual lanes are merged into a single buffer. This scheme can flexibly manage data on multiple lanes even when the number of supported lanes differs between the camera processor transmitter and the host processor. The proposed CSI-2 and D-PHY are verified with a test bench, experimented on with an FPGA-based test-bed, and implemented on a mobile handset. The proposed CSI-2 and D-PHY modules can be used both as a bridge chip and as a camera processor IP for future SoCs.
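
In CSI-2, the transmitter distributes consecutive payload bytes round-robin across the active lanes, so the receiver's lane-merging step is a de-interleave back into one stream. A toy model of that merging step only (the paper's single-buffer hardware design is not reproduced here):

```python
def merge_lanes(lane_buffers):
    """Re-interleave per-lane byte buffers into the original byte stream.

    CSI-2 distributes consecutive bytes round-robin across the active
    lanes, so merging reads one byte from each lane in turn.
    """
    merged = bytearray()
    for group in zip(*lane_buffers):  # one byte from each lane per cycle
        merged.extend(group)
    return bytes(merged)

# Hypothetical 2-lane capture of the byte stream b"\x01\x02\x03\x04"
lane0, lane1 = b"\x01\x03", b"\x02\x04"
print(merge_lanes([lane0, lane1]))  # b'\x01\x02\x03\x04'
```

Because the merge only depends on the number of buffers passed in, the same routine works for 1, 2, or 4 lanes, which is the flexibility the proposed lane-management layer targets.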

Vision Inspection for Flexible Lens Assembly of Camera Phone (카메라 폰 렌즈 조립을 위한 비전 검사 방법들에 대한 연구)

  • Lee I.S.;Kim J.O.;Kang H.S.;Cho Y.J.;Lee G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.631-632
    • /
    • 2006
  • The assembly of camera lens modules for mobile phones has not been automated so far; they are still assembled manually because of the high precision of all parts and the difficulty of recognizing the lens with a vision camera. In addition, the very short life cycle of camera phone lenses requires flexible and intelligent automation. This study proposes a fast and accurate part-identification system that distributes cameras over a 4-degree-of-freedom assembly robot system. Single or multiple cameras can be installed according to the part's image-capture and processing mode, giving an agile structure that adapts with minimal job changes. The framework is proposed, and experimental results are shown to prove its effectiveness.


Algorithms for Multi-sensor and Multi-primitive Photogrammetric Triangulation

  • Shin, Sung-Woong;Habib, Ayman F.;Ghanma, Mwafag;Kim, Chang-Jae;Kim, Eui-Myoung
    • ETRI Journal
    • /
    • v.29 no.4
    • /
    • pp.411-420
    • /
    • 2007
  • The steady evolution of mapping technology is leading to an increasing availability of multi-sensory geo-spatial datasets, such as data acquired by single-head frame cameras, multi-head frame cameras, line cameras, and light detection and ranging systems, at a reasonable cost. The complementary nature of the data collected by these systems makes their integration desirable for obtaining a complete description of the object space. However, such integration is only possible after accurate co-registration of the collected data to a common reference frame. The registration can be carried out reliably through a triangulation procedure that considers the characteristics of the involved data. This paper introduces algorithms for a multi-primitive and multi-sensory triangulation environment, geared towards taking advantage of the complementary characteristics of the spatial data available from the above-mentioned sensors. The triangulation procedure ensures the alignment of the involved data to a common reference frame. The devised methodologies are tested and proven efficient through experiments using real multi-sensory data.


Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.91-100
    • /
    • 2011
  • Free viewpoint TV can provide view-point images from multiple angles according to the viewer's needs. In the real world, however, images cannot be captured from every angle: only a few view points are captured, each by its own camera, and the group of captured images is called a multi-view image. Free viewpoint TV therefore needs to produce virtual intermediate view-point images from the captured ones, and interpolation methods are the general solution to this problem. Producing a correctly interpolated view-point image requires the depth images of the multi-view image. Unfortunately, multi-view video including depth images amounts to a huge quantity of data, so a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that synthesizes the multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation together with H.264/AVC video coding technology. Experimental results confirm high compression performance and good quality of the reconstructed images.
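
The core of the layered depth image is that each pixel of a reference view keeps a list of (color, depth) layers collected from all views, so occluded surfaces survive the merge. A toy model of that data structure, assuming the views are already warped into the reference viewpoint (the merging threshold `eps` and all sample values are hypothetical, not the paper's):

```python
from collections import defaultdict

def build_ldi(views, eps=0.5):
    """Merge (color, depth) samples from several warped views into a
    layered depth image: each pixel keeps every distinct depth layer.

    views: list of dicts mapping (x, y) -> (color, depth).
    """
    ldi = defaultdict(list)
    for view in views:
        for pixel, (color, depth) in view.items():
            layers = ldi[pixel]
            # Skip samples that duplicate an existing layer's depth
            if all(abs(depth - d) > eps for _, d in layers):
                layers.append((color, depth))
            layers.sort(key=lambda cd: cd[1])  # near-to-far order
    return dict(ldi)

# Two hypothetical views: the second sees a surface behind the first
v0 = {(0, 0): ("red", 1.0)}
v1 = {(0, 0): ("blue", 4.0), (1, 0): ("green", 2.0)}
ldi = build_ldi([v0, v1])
print(ldi[(0, 0)])  # [('red', 1.0), ('blue', 4.0)] -- two layers at one pixel
```

Storing one merged structure instead of N color-plus-depth streams is what makes the representation attractive as input to an H.264/AVC-style coder.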

Parameter Studies for Measuring Vibration by Using Camera (카메라를 이용한 진동 측정 시 주요인자 분석)

  • Jeon, Hyeong-Seop;Choi, Young-Chul;Park, Jin-Ho;Park, Jong-Won
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.20 no.11
    • /
    • pp.1033-1037
    • /
    • 2010
  • Accelerometers and laser vibrometers are widely used to measure the vibration of structures such as buildings or piping. Recently, research on measuring vibration from camera images has been introduced; this method can measure multiple points simultaneously and can also measure from a long distance. When measuring vibration with a camera, an analysis of the main parameters is needed. Therefore, this paper reports an experiment on camera lens selection; the error caused by the characteristics of camera images was theoretically analyzed and verified through experiments, and the accuracy of the method for measuring vibration displacement from camera images was analyzed.
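
The parameters the abstract names (lens focal length, measurement distance, sensor resolution) all enter through the pinhole scale factor that converts a pixel shift into a physical displacement. A sketch of that conversion with hypothetical camera values, not the paper's experimental setup:

```python
def pixel_to_displacement_mm(pixel_shift, pixel_pitch_um, focal_mm, distance_m):
    """Convert an image-plane pixel shift into object displacement (mm).

    Pinhole scale factor: object motion = pixel motion * pixel pitch
    * (object distance / focal length).
    """
    pitch_mm = pixel_pitch_um / 1000.0
    return pixel_shift * pitch_mm * (distance_m * 1000.0) / focal_mm

# Hypothetical setup: 5 um pixels, 50 mm lens, target 10 m away
print(pixel_to_displacement_mm(2.0, 5.0, 50.0, 10.0))  # 2.0 mm
```

The formula makes the trade-off explicit: doubling the distance or halving the focal length doubles the displacement represented by one pixel, which is why lens selection dominates the achievable accuracy.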