• Title/Summary/Keyword: Cameras


Tracing of Moving Objects by Stereo Video Cameras (스테레오 비디오 카메라에 의한 운동물체의 위치추적)

  • Lee, Chang-Kyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.15 no.2
    • /
    • pp.185-193
    • /
    • 1997
  • While close-range photogrammetry has been widely applied to static deformation analysis, video cameras have many characteristics that make them the sensors of choice for dynamic analysis of rapidly changing situations. They also have limitations. The aim of this research is to explore the potential of a video system for monitoring dynamic objects. A pilot system consists of two camcorders, a VCR, and a PC with a frame grabber. To estimate the performance of this system for moving objects, a car was imaged over several phases as it started to drive. The sequential images of the moving car were recorded on the VCR, and 15 images per second were digitized off-line by the frame grabber. The image coordinates of targets attached to the rear bumper of the car were acquired with IDRISI, and the object coordinates were derived by DLT. This research suggests that home video cameras, a PC, and photogrammetric principles are promising tools for monitoring moving objects and vibrations, as well as other time-dependent situations.

  • PDF
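The DLT model mentioned in the abstract above maps object-space coordinates to image coordinates as a ratio of linear expressions. A minimal sketch, in plain Python with illustrative coefficient values and a hypothetical function name (not taken from the paper):

```python
def dlt_project(L, X, Y, Z):
    """Project object-space point (X, Y, Z) to image coordinates (u, v)
    with the 11-parameter DLT model: each image coordinate is a ratio
    of expressions linear in X, Y, Z."""
    denom = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / denom
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / denom
    return u, v

# With these toy coefficients the "camera" simply reads off X and Y:
L = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 0]
print(dlt_project(L, 2.0, 3.0, 5.0))  # → (2.0, 3.0)
```

In a two-camera setup each target yields four such equations, which are solved in a least-squares sense for the unknown (X, Y, Z).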

Demosaicing Method for Digital Cameras with White-RGB Color Filter Array

  • Park, Jongjoo;Jang, Euee Seon;Chong, Jong-Wha
    • ETRI Journal
    • /
    • v.38 no.1
    • /
    • pp.164-173
    • /
    • 2016
  • Demosaicing, or color filter array (CFA) interpolation, estimates missing color channels of raw mosaiced images from a CFA to reproduce full-color images. It is an essential process for single-sensor digital cameras with CFAs. In this paper, a new demosaicing method for digital cameras with Bayer-like W-RGB CFAs is proposed. To preserve the edge structure when reproducing full-color images, we propose an edge direction-adaptive method using color difference estimation between different channels, which can be applied to practical digital camera use. To evaluate the performance of the proposed method in terms of CPSNR, FSIM, and S-CIELAB color distance measures, we perform simulations on sets of mosaiced images captured by an actual prototype digital camera with a Bayer-like W-RGB CFA. The simulation results show that the proposed method demosaics better than a conventional one by approximately +22.4% CPSNR, +0.9% FSIM, and +36.7% S-CIELAB distance.
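The edge direction-adaptive idea can be illustrated with a minimal sketch: interpolate a missing green value along the direction of the smaller gradient. This is plain Python with a hypothetical function name; the paper's actual method additionally exploits color differences between channels of the W-RGB pattern.

```python
def interpolate_green(img, r, c):
    """Estimate the missing green value at (r, c) by interpolating along
    the direction with the smaller gradient (edge-adaptive)."""
    left, right = img[r][c - 1], img[r][c + 1]
    up, down = img[r - 1][c], img[r + 1][c]
    dh = abs(left - right)   # horizontal gradient
    dv = abs(up - down)      # vertical gradient
    if dh < dv:
        return (left + right) / 2.0   # edge runs horizontally
    elif dv < dh:
        return (up + down) / 2.0      # edge runs vertically
    return (left + right + up + down) / 4.0  # no dominant direction
```

Interpolating across an edge blurs it; choosing the low-gradient direction is what preserves the edge structure the abstract refers to.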

Use of Optical Flow Information with three Cameras for Robot Navigation (로봇 주행을 위한 세개의 카메라를 사용한 광류 정보 활용)

  • Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.2
    • /
    • pp.110-117
    • /
    • 2012
  • This paper describes a new design of an optical flow estimation system with three cameras. Optical flow provides useful information about camera movement; however, a unique solution is not usually available for the unknowns, including depth information. One camera and two tilted cameras are used so that each has a different view angle and direction of movement relative to the camera axis. Geometric analysis is performed for several cases of independent movement. Ideas for exploiting the extra information for robot navigation are discussed together with experimental results.
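The constraint underlying all optical-flow estimation can be sketched in one dimension: brightness constancy gives Ix·u + It = 0, so the flow u follows from the spatial and temporal gradients. This toy function is illustrative only; real systems solve the 2D version over image patches.

```python
def flow_1d(frame1, frame2, i):
    """One-dimensional optical-flow constraint: Ix * u + It = 0,
    hence u = -It / Ix at pixel i (central spatial difference)."""
    Ix = (frame1[i + 1] - frame1[i - 1]) / 2.0  # spatial gradient
    It = frame2[i] - frame1[i]                  # temporal gradient
    return -It / Ix

# A ramp signal shifted right by one pixel yields a flow of +1:
print(flow_1d([0, 1, 2, 3, 4], [-1, 0, 1, 2, 3], 2))  # → 1.0
```

In 2D the single constraint per pixel leaves the flow under-determined (the aperture problem), which is exactly why extra views, as in the three-camera design above, add useful information.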

Monitoring Performance of Camera under the High Dose-rate Gamma Ray Environment (고선량율 감마선 환경하에서의 카메라 관측성능)

  • Cho, Jai-Wan;Jeong, Kyung-Min
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.61 no.8
    • /
    • pp.1172-1178
    • /
    • 2012
  • In this paper, gamma-ray irradiation test results for CCD cameras are described. From a low dose rate (2.11 Gy/h) up to a high dose rate (150 Gy/h), the level reached when the hydrogen explosions occurred in reactor units 1-3 of the Fukushima nuclear power plant, the monitoring performance of the cameras in the presence of speckles is evaluated. The number of speckles generated in the camera images by gamma-ray irradiation is counted using image processing techniques, and the legibility of a sensor indicator (dosimeter) as a function of the number of speckles is presented.
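Counting radiation-induced speckles typically reduces to thresholding the image and counting connected components. A minimal stand-in sketch in plain Python (the paper's actual pipeline is not specified beyond "image processing technique"):

```python
def count_speckles(img, threshold):
    """Count bright speckles: 4-connected components of pixels whose
    value exceeds the threshold, found by flood fill."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > threshold and not seen[r][c]:
                count += 1          # new component found
                stack = [(r, c)]
                while stack:        # flood-fill the whole component
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and img[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count
```

On real frames one would first subtract a reference (unirradiated) image so that only transient speckle events exceed the threshold.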

High-Definition Stereoscopic PTV (고해상 스테레오 PTV)

  • Doh Deog-Hee;Lee Won-Je;Cho Yong-Beom;Pyeon Yong-Beom
    • The Korean Society of Visualization: Conference Proceedings
    • /
    • 2002.11a
    • /
    • pp.11-14
    • /
    • 2002
  • A new high-definition stereoscopic PTV system was constructed using two CCD cameras and stereoscopic photogrammetry based on the 3D-PTV principle. The two cameras were arranged in an angular configuration. Camera calibration and pair-matching of the three-dimensional velocity vectors were based on a genetic-algorithm-based 3D-PTV technique. The constructed stereoscopic PTV technique was tested on the standard images of an impinging jet proposed by VSJ. The turbulent properties of the jet obtained with the constructed system showed good agreement with the original LES data.

  • PDF
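The pair-matching step of PTV associates particle positions across frames to form velocity vectors. The paper uses a genetic algorithm; as a much simpler stand-in, a greedy nearest-neighbour matcher with a displacement bound can be sketched (all names and the matching strategy here are illustrative, not the paper's method):

```python
def match_particles(frame_a, frame_b, max_disp):
    """Greedily pair each particle centroid in frame_a with its nearest
    unused neighbour in frame_b within max_disp pixels."""
    pairs, used = [], set()
    for p in frame_a:
        best, best_d = None, max_disp
        for j, q in enumerate(frame_b):
            if j in used:
                continue
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((p, frame_b[best]))
            used.add(best)
    return pairs
```

Each matched pair divided by the inter-frame time gives one velocity vector; a GA-based matcher improves on this greedy scheme by optimizing all pairings jointly.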

The General Analysis of an Active Stereo Vision with Hand-Eye Calibration (핸드-아이 보정과 능동 스테레오 비젼의 일반적 해석)

  • Kim, Jin Dae;Lee, Jae Won;Sin, Chan Bae
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.5
    • /
    • pp.83-83
    • /
    • 2004
  • The analysis of the relative pose (position and rotation) between stereo cameras is very important in determining the solution that provides three-dimensional information for an arbitrarily moving target with respect to the robot end. In the free-camera model, the rotational parameters act as nonlinear factors in acquiring a kinematic solution. In this paper, a general solution for active stereo that gives the three-dimensional pose of a moving object is presented. The focus is the derivation of a linear equation between the robot's end and the active stereo cameras. The equation is derived consistently from vectors in quaternion space, and the camera calibration is also derived in this space. Computer simulation and error-sensitivity results demonstrate the successful operation of the solution. The suggested solution can also be applied to more complex real-time tracking; it is quite general and applicable in various stereo fields.
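The quaternion-space formulation rests on the standard rotation rule v' = q v q⁻¹. A minimal sketch of that operation in plain Python (function names are illustrative; the paper's derivation of the linear hand-eye equation builds on top of this):

```python
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qrotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * v * q^-1
    (conjugate equals inverse for unit quaternions)."""
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)),
                      (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)
```

For example, q = (cos 45°, 0, 0, sin 45°) rotates (1, 0, 0) by 90° about the z-axis onto (0, 1, 0). Because rotation composition becomes quaternion multiplication, calibration constraints can be written without trigonometric nonlinearities.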

A Study on the Interior Orientation for Various Image Formation Sensors

  • Lee, Suk-Kun;Shin, Sung-Woong
    • Korean Journal of Geomatics
    • /
    • v.4 no.1
    • /
    • pp.23-30
    • /
    • 2004
  • This study aims to establish interior orientation for various types of sensors, including frame cameras, panoramic cameras, line cameras, and whisk-broom scanners. To do so, it suggests a classification of the components of interior orientation, whose elements differ according to the sensor. Sensor characteristics are incorporated into mathematical models, and interior orientation parameters are suggested as guidelines for recovering systematic distortions. Finally, the potential errors resulting from treating the sensor model of a whisk-broom scanner as that of a push-broom scanner are discussed.

  • PDF
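A typical interior-orientation correction reduces a measured image coordinate to the principal point and removes radial lens distortion. A minimal sketch with one radial coefficient (parameter names are generic; the paper's models are sensor-specific and richer):

```python
def interior_orientation(x, y, x0, y0, k1):
    """Reduce a measured image point (x, y) to the principal point
    (x0, y0) and apply a one-coefficient radial distortion correction."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy            # squared radial distance
    factor = 1.0 + k1 * r2            # radial distortion polynomial
    return dx * factor, dy * factor
```

With k1 = 0 this is a pure principal-point shift; for line cameras and scanners the same idea is applied per scan line, which is where the sensor-dependent differences discussed above arise.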

High Resolution 360 degree Video Generation System using Multiple Cameras (다수의 카메라를 이용한 고해상도 360도 동영상 생성 시스템)

  • Jeong, Jinwook;Jun, Kyungkoo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.8
    • /
    • pp.1329-1336
    • /
    • 2016
  • This paper develops a 360-degree video system using multiple off-the-shelf webcams and a set of embedded boards. Existing 360-degree cameras have the shortcoming that they do not support real-time video generation, since recorded videos must first be copied to computers or smartphones, which then perform the stitching. Another shortcoming is that wide-FoV (field of view) cameras cannot provide sufficiently high resolution; moreover, the resulting images are visually distorted, bending straight lines. By employing an array of 65-degree-FoV webcams, we were able to generate videos on the spot and achieve over 6K resolution with much less distortion. We describe the configuration and algorithms of the proposed system, and present performance evaluation results for our early-stage prototype.
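After aligning adjacent camera views, stitching blends the overlapping columns so no seam is visible. A toy one-row sketch of linear feathering (the paper's actual pipeline, including alignment, is not detailed here, and these names are illustrative):

```python
def blend_pair(left, right, overlap):
    """Stitch two horizontally adjacent image rows, linearly feathering
    the `overlap` shared columns from the left image to the right."""
    out = left[:-overlap]
    for i in range(overlap):
        a = (i + 1) / (overlap + 1)   # weight ramps toward the right image
        out.append((1 - a) * left[len(left) - overlap + i] + a * right[i])
    out.extend(right[overlap:])
    return out

# Two constant rows blend into a constant row of combined width:
print(blend_pair([2.0, 2.0, 2.0], [2.0, 2.0, 2.0], 1))  # → [2.0, 2.0, 2.0, 2.0, 2.0]
```

Feathering hides exposure differences between cameras; a full panorama applies the same blend per row after warping each view onto a common cylindrical or spherical surface.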

Perceptual Photo Enhancement with Generative Adversarial Networks (GAN 신경망을 통한 자각적 사진 향상)

  • Que, Yue;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.522-524
    • /
    • 2019
  • In spite of rapid improvements in the quality of built-in mobile cameras, certain physical restrictions prevent them from achieving the results of digital single-lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method to translate ordinary mobile-camera images into DSLR-quality photos. The method is based on the framework of generative adversarial networks (GANs), with several improvements. First, we combine the U-Net with DenseNet, connecting dense blocks (DBs) within the U-Net structure; this Dense U-Net acts as the generator in our GAN model. Then, we improve the perceptual loss by using VGG features together with pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery.
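The combined objective described above, pixel-wise content plus a feature-space (perceptual) term, can be sketched with flattened lists standing in for images and VGG feature maps. Names and the weight value are illustrative, not the paper's:

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def enhancement_loss(pred_pixels, target_pixels,
                     pred_feats, target_feats, w_perc=0.1):
    """Generator objective sketch: pixel-wise content loss plus a
    weighted VGG-style perceptual (feature-space) loss."""
    return mse(pred_pixels, target_pixels) + w_perc * mse(pred_feats, target_feats)
```

The perceptual term compares network activations rather than raw pixels, so the generator is rewarded for matching textures and contrast, not just average color; the adversarial loss of the GAN would be added on top of this.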