• Title/Summary/Keyword: Camera calibration data

Search Result 167

The comparative study of PKNU2 Image and Aerial photo & satellite image

  • Lee, Chang-Hun;Choi, Chul-Uong;Kim, Ho-Yong;Jung, Hei-Chul
    • Proceedings of the KSRS Conference / 2003.11a / pp.453-454 / 2003
  • Most research materials (data) used for studies of digital mapping and digital elevation models (DEM) in the fields of remote sensing and aerial photogrammetry are aerial photographs and satellite images. They are also used for national land mapping, national land management, environmental management, military purposes, resource exploration, and Earth-surface analysis. Although aerial photographs have high resolution, they are seldom used for environmental monitoring that requires continuous observation, because they are single-spectral and are acquired at long intervals. Satellite images, while multispectral and therefore highly practical, are influenced by atmospheric conditions at the time of imaging and have far lower resolution than existing aerial photographs, which makes precise interpretation difficult. The PKNU 2 is an aerial photographing system designed to compensate for these weak points of existing aerial photographs and satellite images. It can take very-high-resolution pictures using a 6-megapixel color digital camera and a color infrared camera, and it can take vertical photographs because the PKNU 2 carries equipment that keeps the cameras level. Moreover, photography is very cheap because an ultralight aircraft is used as the platform. Flying at a low altitude of about 800 m, it achieves much higher resolution than existing aerial photographs and satellite images. The PKNU 2 can obtain multispectral images from the visible to the near-infrared band, making it well suited to environmental management and to producing vegetation classification maps.


3D geometric model generation based on a stereo vision system using random pattern projection (랜덤 패턴 투영을 이용한 스테레오 비전 시스템 기반 3차원 기하모델 생성)

  • Na, Sang-Wook;Son, Jeong-Soo;Park, Hyung-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference / 2005.05a / pp.848-853 / 2005
  • 3D geometric modeling of objects of interest has been intensively investigated in many fields, including CAD/CAM and computer graphics. Traditionally, CAD and geometric modeling tools are widely used to create geometric models that closely match the shape of real 3D objects or satisfy the designers' intent. Recently, with the help of reverse engineering (RE) technology, we can acquire 3D point data from objects and create geometric models that fit the scanned data more easily and quickly. In this paper, we present 3D geometric model generation based on a stereo vision system (SVS) using random pattern projection, with a triangular mesh as the resulting geometric model. To obtain reasonable results with SVS-based geometric model generation, we deal with many steps, including camera calibration, stereo matching, scanning from multiple views, noise handling, registration, and triangular mesh generation. To achieve reliable stereo matching, we project random patterns onto the object. Based on experiments with various random patterns, we suggest several tips for improving the quality of the results, and give examples that show their usefulness.
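The role of the projected random pattern in stereo matching can be sketched with a minimal SSD block-matching example (pure NumPy; the window size, search range, and synthetic pattern are illustrative and not taken from the paper):

```python
import numpy as np

def match_disparity(left, right, y, x, max_d=20, win=5):
    """Find the disparity at (y, x) by SSD block matching along the scanline."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_d + 1):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        cost = np.sum((patch - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic experiment: a random pattern (as projected onto the object)
# appears shifted horizontally by a known disparity in the right view.
rng = np.random.default_rng(0)
left = rng.random((40, 80))
true_d = 7
right = np.roll(left, -true_d, axis=1)  # right[:, i] == left[:, i + true_d]

d = match_disparity(left, right, y=20, x=40)
```

On a textureless surface many candidate windows tie, which is exactly the ambiguity the projected random pattern removes; with the pattern, the SSD minimum is sharp. Depth then follows from the disparity via Z = f·B/d for focal length f and baseline B.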


Bundle Block Adjustment of Omni-directional Images by a Mobile Mapping System (모바일매핑시스템으로 취득된 전방위 영상의 광속조정법)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.593-603 / 2010
  • Most spatial data acquisition systems employing a set of frame cameras suffer from small fields of view and a poor base-to-distance ratio. These limitations can be significantly reduced by employing an omni-directional camera, which is capable of acquiring images in every direction. Bundle Block Adjustment (BBA) is an established georeferencing method for determining the exterior orientation parameters of two or more images. In this study, extending the concept of the traditional BBA method, we develop a mathematical model of BBA for omni-directional images. The proposed model includes three main parts: observation equations based on collinearity equations newly derived for omni-directional images, stochastic constraints imposed by GPS/INS data, and stochastic constraints imposed by GCPs. We also report experimental results from applying the proposed BBA to real data obtained mainly in urban areas. Using different combinations of the constraints, we applied four types of mathematical models. With the type where only GCPs are used as constraints, the proposed BBA provides the most accurate results, with an RMSE of ±5 cm in the estimated ground point coordinates. In the future, we plan to perform more sophisticated lens calibration of the omni-directional camera to improve the georeferencing accuracy of omni-directional images. Such georeferenced omni-directional images can be utilized effectively for city modelling, particularly autonomous texture mapping for realistic street views.
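The core of such a bundle adjustment, least-squares minimization of collinearity-based reprojection residuals against GCP observations, can be illustrated with a deliberately reduced model (identity rotation, translation-only unknowns, pinhole rather than omni-directional geometry; all numbers are synthetic):

```python
import numpy as np

def project(X, t, f=1.0):
    """Collinearity projection of 3D points X (N,3) for a camera at
    translation t with identity rotation (a deliberately reduced model)."""
    d = X - t
    return f * d[:, :2] / d[:, 2:3]

def refine_translation(X, obs, t0, iters=20):
    """Gauss-Newton on the reprojection residuals: the normal-equation core
    of a bundle adjustment, restricted here to 3 translation unknowns."""
    t = t0.astype(float)
    for _ in range(iters):
        r = (project(X, t) - obs).ravel()
        # numerical Jacobian with respect to the 3 parameters
        J = np.zeros((r.size, 3))
        eps = 1e-6
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = ((project(X, t + dt) - obs).ravel() - r) / eps
        t -= np.linalg.solve(J.T @ J, J.T @ r)
    return t

# Ground control points and their image observations for a camera at t_true
X = np.array([[0., 0., 10.], [2., 1., 12.], [-1., 3., 8.], [4., -2., 15.]])
t_true = np.array([0.5, -0.3, 1.0])
obs = project(X, t_true)
t_est = refine_translation(X, obs, t0=np.zeros(3))
```

A full BBA stacks such residual blocks for every image and every tie/control point, adds rotation and camera parameters, and exploits the sparse block structure of the normal equations; the GPS/INS and GCP constraints enter as additional weighted observation rows.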

Development of a Vision Based Fall Detection System For Healthcare (헬스케어를 위한 영상기반 기절동작 인식시스템 개발)

  • So, In-Mi;Kang, Sun-Kyung;Kim, Young-Un;Lee, Chi-Geun;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.6 s.44 / pp.279-287 / 2006
  • This paper proposes a method to detect fall actions from stereo images in order to recognize emergency situations. It uses 3D information to extract the visual features for learning and testing, and uses an HMM (Hidden Markov Model) as the recognition algorithm. The proposed system extracts background images from the two camera views. It extracts the moving object from the input video sequence using the difference between the input image and the background image, then finds the bounding rectangle of the moving object and extracts 3D information using the calibration data of the two cameras. We measured the recognition rate of fall actions using, as features, the variation of the rectangle's width and height and the variation of the 3D location of the rectangle's center point. Experimental results show that the variation of the 3D location of the center point achieves a higher recognition rate than the variation of width and height.
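The background-subtraction and bounding-rectangle steps that produce the observation sequence for the HMM can be sketched as follows (NumPy only; the synthetic frame and threshold are illustrative, and a real system would use the stereo calibration to lift the rectangle center to 3D):

```python
import numpy as np

def bounding_rect(frame, background, thresh=0.1):
    """Foreground bounding rectangle (x, y, w, h) from background subtraction."""
    mask = np.abs(frame.astype(float) - background.astype(float)) > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1

def fall_features(rects):
    """Per-frame feature vectors (aspect ratio, vertical center): the kind of
    observation sequence fed to an HMM for upright-vs-fallen classification."""
    return np.asarray([[w / h, y + h / 2.0] for (x, y, w, h) in rects])

bg = np.zeros((24, 32))
frame = bg.copy()
frame[4:16, 10:14] = 1.0          # upright "person": tall, narrow blob
x, y, w, h = bounding_rect(frame, bg)
feats = fall_features([(x, y, w, h)])
```

A fall shows up as the aspect ratio w/h jumping above 1 while the vertical center drops rapidly; training one HMM per action class on such sequences and picking the class with the highest likelihood is the standard recognition setup.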


Design of a Mapping Framework on Image Correction and Point Cloud Data for Spatial Reconstruction of Digital Twin with an Autonomous Surface Vehicle (무인수상선의 디지털 트윈 공간 재구성을 위한 이미지 보정 및 점군데이터 간의 매핑 프레임워크 설계)

  • Suhyeon Heo;Minju Kang;Jinwoo Choi;Jeonghong Park
    • Journal of the Society of Naval Architects of Korea / v.61 no.3 / pp.143-151 / 2024
  • In this study, we present a mapping framework for 3D spatial reconstruction of a digital twin model using navigation and perception sensors mounted on an Autonomous Surface Vehicle (ASV). To improve the realism of digital twin models, 3D spatial information should be reconstructed as a digitalized spatial model and integrated with the components and system models of the ASV. In particular, for 3D spatial reconstruction, color and 3D point cloud data acquired from camera and LiDAR sensors, together with the navigation information at the corresponding time, must be mapped while minimizing noise. To ensure clear and accurate reconstruction of the acquired data, the proposed mapping framework includes an image preprocessing step that enhances the brightness of low-light images and a point cloud preprocessing step that filters out unnecessary data. Subsequently, consecutive 3D point clouds are matched using the Generalized Iterative Closest Point (G-ICP) approach, and the color information is mapped onto the matched 3D point cloud data. The feasibility of the proposed mapping framework was validated on a data set acquired from field experiments in an inland water environment, and the results are described.
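The per-iteration core of point matching with (G-)ICP, once correspondences are fixed, is the closed-form rigid alignment below (a NumPy sketch of the Kabsch/SVD solution; G-ICP itself additionally weights the residuals by local surface covariances, which is omitted here, and all point data are synthetic):

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form rigid transform (R, t) minimizing ||R @ p + t - q|| over
    corresponding rows of P and Q: the inner step of each ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic scan pair: the second cloud is a rotated + translated copy
rng = np.random.default_rng(1)
P = rng.random((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.05])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

In the full pipeline, ICP alternates this solve with nearest-neighbor correspondence search until convergence; once consecutive clouds are registered, each 3D point can be projected into the (brightness-corrected) camera image to pick up its color.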

Comparison of Correction Coefficients for the Non-uniformity of Pixel Response in Satellite Camera Electronics (위성카메라 전자부의 화소간 응답불균일성 보정계수의 비교검토)

  • Kong, Jong-Pil;Lee, Song-Jae
    • Korean Journal of Remote Sensing / v.27 no.2 / pp.89-98 / 2011
  • Four kinds of gain and offset correction coefficients used to correct the non-uniformity between pixels are discussed, and their correction performance is compared by performing image correction, with the calculated coefficients, on real image data obtained from a newly fabricated camera electronics system. The performance of the correction coefficients depends in general on the number of light input levels used to obtain the reference images. The results show that, as expected, when only two light input levels are used, the correction coefficients are relatively easy to calculate but the correction performance is relatively poor; as the number of light input levels is increased beyond two, the correction performance improves. It is noted, however, that no significant performance difference is found between the different correction coefficients employed.
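The simplest case, a two-point correction, computes a per-pixel gain and offset from flat-field images at two light input levels (a NumPy sketch with a simulated detector; the linear response model y = a·x + b and all numbers are illustrative):

```python
import numpy as np

def two_point_nuc(low_img, high_img, low_ref, high_ref):
    """Per-pixel gain/offset from flat-field images at two illumination levels."""
    gain = (high_ref - low_ref) / (high_img - low_img)
    offset = low_ref - gain * low_img
    return gain, offset

# Simulated detector: each pixel has its own linear response y = a*x + b
rng = np.random.default_rng(2)
a = 0.8 + 0.4 * rng.random((8, 8))   # per-pixel gains
b = 10 * rng.random((8, 8))          # per-pixel offsets
capture = lambda level: a * level + b

gain, offset = two_point_nuc(capture(20.0), capture(200.0), 20.0, 200.0)
corrected = gain * capture(120.0) + offset   # uniform scene -> uniform output
```

For a detector that is truly linear, this restores any intermediate level exactly; the multi-level coefficients discussed in the paper matter when the real response deviates from linearity between the two calibration points.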

Thermal Image Real-time estimation and Fire Alarm by using a CCD Camera (CCD 카메라를 이용한 열화상 실시간 추정과 화재경보)

  • Baek, Dong-Hyun
    • Fire Science and Engineering / v.30 no.6 / pp.92-98 / 2016
  • This study evaluated real-time thermal-image estimation and fire alarming using a CCD camera, based on a seamless feature-point analysis method that accounts for angle and position, and on image fusion through the vector-coordinate set-up of points of equal shape. The system achieves higher accuracy by fixing the temperature-sensing and fire-image data values to the range 0-255 and the sensor output value to the range 0-5,000. The response times for a flame specimen at 500 m, 1,000 m, and 1,500 m from the test specimen were 7 s, 26 s, and 62 s, respectively, and image creation was demonstrated. Fire diagnosis was divided into three steps, Caution/Alarm/Fire, and the full processing sequence, including SNS transmission, was verified. A light bulb and a fluorescent bulb were also used in a false-alarm test, and no false alarm occurred. The reduced likelihood of unwanted alarms was verified through forecasting of the fire progress, real-time estimation of the thermal image from time-based changes in the flame image, and analysis of the diffusion velocity.
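The three-step diagnosis can be sketched as simple thresholding of the estimated thermal index (the 0-255 image scale and 0-5,000 sensor range come from the abstract; the specific threshold values below are illustrative assumptions, not the paper's):

```python
def scale_sensor(value, sensor_max=5000, image_max=255):
    """Map a raw sensor output (0-5,000) onto the 0-255 image scale."""
    return round(value / sensor_max * image_max)

def fire_stage(temp_index):
    """Three-step diagnosis of a thermal index; thresholds are hypothetical."""
    if temp_index >= 200:
        return "Fire"
    if temp_index >= 150:
        return "Alarm"
    if temp_index >= 100:
        return "Caution"
    return "Normal"

stage = fire_stage(scale_sensor(4200))
```

In a deployed system each escalation step would also trigger the corresponding notification (e.g., the SNS transmission mentioned above), and hysteresis around the thresholds helps avoid alarm flapping.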

A Study on the Integrated System Implementation of Close Range Digital Photogrammetry Procedures (근거리 수치사진측량 과정의 단일 통합환경 구축에 관한 연구)

  • Yeu, Bock-Mo;Lee, Suk-Kun;Choi, Song-Wook;Kim, Eui-Myoung
    • Journal of Korean Society for Geospatial Information Science / v.7 no.1 s.13 / pp.53-63 / 1999
  • For close-range digital photogrammetry, multi-step procedures should be embodied in an integrated system; however, it is hard to construct such a system through conventional procedural processing. Using object-oriented programming (OOP), photogrammetric processing can be classified by subject, making it easy to construct an integrated system for digital photogrammetry and to add newly developed classes. In this study, a 3-dimensional mathematical model is developed for immediate calibration of a CCD camera whose focal distance varies with the distance to the object. Classes for image input and output are also created to carry out the close-range digital photogrammetric procedures with OOP. Image matching, coordinate transformation, direct linear transformation, and bundle adjustment are performed by producing classes corresponding to each part of the data processing. The bundle adjustment, which adds principal point coordinates and a focal length term for the non-photogrammetric CCD camera, is found to increase the usability of the CCD camera and the accuracy of object positioning. In conclusion, classes and their hierarchies for digital photogrammetry are designed to manage the multi-step procedures using OOP, and the close-range digital photogrammetric process is implemented with a CCD camera in an integrated system.
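The direct linear transformation step can be sketched as a homogeneous least-squares problem (NumPy; the synthetic projection matrix and control points are illustrative, and a practical implementation would normalize coordinates before the SVD):

```python
import numpy as np

def dlt(X, x):
    """Direct linear transformation: estimate the 3x4 projection matrix P
    from >= 6 object-image point pairs by homogeneous least squares."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)          # null vector = stacked rows of P

# Synthetic check: project points with a known P, then recover it
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 20.],
                   [0., 0., 1., 1.]])
X = np.array([[0, 0, 5], [1, 0, 6], [0, 1, 7],
              [1, 1, 8], [2, 1, 5], [1, 2, 9.]])   # non-coplanar GCPs
ph = P_true @ np.c_[X, np.ones(6)].T
x = (ph[:2] / ph[2]).T
P = dlt(X, x)
P /= P[2, 3]                 # fix the projective scale for comparison
Pt = P_true / P_true[2, 3]
```

In the OOP design described above, such a solver would naturally live in its own class alongside the image-matching and bundle-adjustment classes; the DLT result also serves as the initial value for the bundle adjustment.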


Surface Temperature Measurement in Microscale with Temperature Sensitive Fluorescence (온도 민감 형광을 이용한 마이크로 스케일 표면온도 측정)

  • Jung Woonseop;Kim Sungwook;Kim Ho-Young;Yoo Jung Yul
    • Transactions of the Korean Society of Mechanical Engineers B / v.30 no.2 s.245 / pp.153-160 / 2006
  • A technique for measuring surface temperature fields at micro scale is newly proposed. It uses a temperature-sensitive fluorescent (TSF) dye coated on the surface and is easily implemented with a fluorescence microscope and a CCD camera. The TSF dye is chosen, among mixtures of various chemical compositions including rhodamine B as the fluorescent dye, to be most sensitive to temperature change. To examine the effectiveness of this temperature measurement technique, numerical analysis and experiments on transient conduction heat transfer were performed for two different substrate materials, i.e., silicon and glass. In the experiments, to measure the temperature accurately with high resolution, temperature calibration curves were obtained in very fine spatial units. The experimental results agree qualitatively well with the numerical data for both the silicon and glass substrates, so the present temperature measurement method proves to be quite reliable. In addition, it is noteworthy that the glass substrate is more appropriate for use as a thermally insulating, locally heating heater in micro thermal devices; this was identified in temperature measurements on locally heating heaters made on wafers of silicon and glass. Accordingly, this technique is capable of accurate, non-intrusive, high-resolution measurement of temperature fields at micro scale.
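The inverse calibration curve, mapping measured fluorescence intensity back to temperature, can be sketched with a linear fit (NumPy; the ~2 %/K sensitivity and the data points are illustrative, not the paper's measured calibration):

```python
import numpy as np

# Synthetic calibration data: TSF intensity falls roughly linearly with
# temperature over a modest range (illustrative numbers only)
temps = np.array([20., 30., 40., 50., 60.])          # known temperatures, degC
intensity = 1.0 - 0.02 * (temps - 20.0)              # normalized intensity

# Fit temperature as a function of intensity (the inverse calibration curve)
coef = np.polyfit(intensity, temps, 1)
to_temp = np.poly1d(coef)
```

Once `to_temp` is built from the calibration measurements, every pixel of a CCD image of the dye-coated surface can be converted to temperature independently, which is what gives the method its spatial resolution.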

Temperature Field Measurement of Non-Isothermal Jet Flow Using LIF Technique (레이저형광여기(LIF)를 이용한 비등온 제트유동의 온도장 측정)

  • Yoon, Jong-Hwan;Lee, Sang-Joon
    • Transactions of the Korean Society of Mechanical Engineers B / v.24 no.10 / pp.1399-1408 / 2000
  • A 2-dimensional temperature field measurement technique using PLIF (Planar Laser-Induced Fluorescence) was developed and applied to an axisymmetric buoyant jet, with rhodamine B as the fluorescent dye. A laser light sheet illuminated a two-dimensional cross-section of the jet, and the intensity variations of the LIF signal from rhodamine B molecules excited by the laser light were captured with an optical filter and a CCD camera. The spatial variations of the temperature field of the buoyant jet were derived using the calibration data relating the LIF signal to the real temperature. The measured results show that the turbulent jet mixes more efficiently than the transitional and laminar jet flows. As the initial flow condition varies from laminar to turbulent, the entrainment of ambient fluid increases and the temperature decay along the jet centerline becomes larger. In addition to the mean temperature field, the spatial distributions of temperature fluctuations were measured by the PLIF technique, and the results show the shear-layer development from the jet nozzle exit.
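Applying such a calibration per pixel turns a PLIF intensity image into a temperature field. The sketch below assumes a simple linear intensity-temperature relation about a reference condition (the ~2 %/K drop for rhodamine B is a commonly quoted figure; all arrays and parameter values here are synthetic, not the paper's data):

```python
import numpy as np

def lif_temperature_field(img, ref_img, ref_temp, sensitivity):
    """Convert a PLIF intensity image to temperature, assuming the LIF signal
    varies linearly with temperature about a reference condition (a simplified
    single-dye calibration; real calibrations are measured, not assumed)."""
    return ref_temp + (img / ref_img - 1.0) / sensitivity

# Rhodamine B fluorescence drops roughly 2 % per kelvin (sensitivity = -0.02)
ref = np.full((4, 4), 100.0)        # intensity at the reference temperature
img = ref * (1.0 - 0.02 * 5.0)      # a field uniformly 5 K hotter
T = lif_temperature_field(img, ref, ref_temp=20.0, sensitivity=-0.02)
```

Normalizing by a reference image taken at a known uniform temperature also cancels pixel-to-pixel variations in laser sheet intensity and dye concentration, which is why PLIF temperature measurements are usually expressed as intensity ratios rather than raw counts.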