• Title/Summary/Keyword: View Calibration


The Research for the Wide-Angle Lens Distortion Correction by Photogrammetry Techniques (사진측량 기법을 사용한 광각렌즈 왜곡보정에 관한 연구)

  • Kang, Jin-A;Park, Jae-Min;Kim, Byung-Guk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.26 no.2 / pp.103-110 / 2008
  • General-purpose lenses, widely used in photogrammetry, have a narrow field of view, so an image-registration step must be applied after the images are obtained, which costs both money and time. Recently, various studies, mostly in the robotics field, have put wide-angle lenses to practical use in photogrammetry in place of general-purpose lenses. This study analyzes the distortion tendency of wide-angle lenses and applies correction techniques suited to them based on existing photogrammetric methods. After calibrating the wide-angle lens, we calculated the correction parameters and then developed a method that maps each original image point to a new, distortion-corrected image point. To validate the developed algorithm, we inspected shape and position: the 2D RMSE was approximately 3 pixels, with differences of cx = 2 and cy = 3. A sketch of the general calibrate-then-correct pipeline follows below.
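
The abstract follows the standard photogrammetric recipe: calibrate to recover distortion coefficients, then remap image points. Below is a minimal sketch of that generic pipeline using OpenCV, not the authors' own implementation; the chessboard pattern size, file names, and the sample point are illustrative assumptions.

```python
import numpy as np
import cv2

# Chessboard geometry for the calibration target (assumed 9x6 inner corners).
pattern = (9, 6)
grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["calib_01.png", "calib_02.png", "calib_03.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(grid)
        img_pts.append(corners)

# Recover intrinsics K and radial/tangential distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Map a distorted image point to its distortion-corrected location.
src = np.array([[[320.0, 240.0]]], np.float32)
corrected = cv2.undistortPoints(src, K, dist, P=K)
print("corrected image point:", corrected.ravel())
```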

Lane Detection-based Camera Pose Estimation (차선검출 기반 카메라 포즈 추정)

  • Jung, Ho Gi;Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers / v.23 no.5 / pp.463-470 / 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that the patterns are costly to make and install, and approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are hard to find in everyday driving environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while changing the horizontal angle with respect to the markings. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and the vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the facts that planar motion does not change the vanishing line of the plane and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates in each case. This is verified by checking that the lane markings are upright in the bird's-eye-view image when the pan angle is compensated. A sketch of the underlying geometry follows below.
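
As a rough illustration of that geometry, the sketch below intersects two lane-marking lines in homogeneous coordinates to get one vanishing point per image, fits the vanishing line, and derives the road-plane normal as n ∝ K^T l. The intrinsic matrix, line endpoints, and sign conventions are all assumptions, not values from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

def line_through(p, q):
    # Homogeneous line through two pixel points.
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

# One vanishing point per image: intersection of left and right lane lines.
vps = []
for left, right in [(((100, 700), (500, 400)), ((1180, 700), (780, 400))),
                    (((80, 710), (490, 405)), ((1150, 690), (770, 400)))]:
    vp = np.cross(line_through(*left), line_through(*right))
    vps.append(vp / vp[2])

# Vanishing line l: null vector of the stacked vanishing points (A @ l ~ 0).
_, _, vt = np.linalg.svd(np.array(vps))
l = vt[-1]

# Road-plane normal in camera coordinates (camera x right, y down, z forward).
n = K.T @ l
n /= np.linalg.norm(n)
if n[1] > 0:
    n = -n  # orient the normal upward
tilt = np.degrees(np.arctan2(n[2], -n[1]))   # sign conventions are assumed
roll = np.degrees(np.arctan2(-n[0], -n[1]))
print("tilt %.2f deg, roll %.2f deg" % (tilt, roll))
```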

Full-field Distortion Measurement of Virtual-reality Devices Using Camera Calibration and Probe Rotation (카메라 교정 및 측정부 회전을 이용한 가상현실 기기의 전역 왜곡 측정법)

  • Yang, Dong-Geun;Kang, Pilseong;Ghim, Young-Sik
    • Korean Journal of Optics and Photonics / v.30 no.6 / pp.237-242 / 2019
  • A compact virtual-reality (VR) device with a wide field of view provides users with a more realistic experience and a comfortable fit, but VR lens distortion is inevitable, and the amount of distortion must be measured for correction. In this paper, we propose two different full-field distortion-measurement methods that consider the characteristics of VR devices. The first measures distortion from multiple images based on camera calibration, a well-known technique for correcting camera-lens distortion. The other measures lens distortion at multiple measurement points by rotating a camera. Our proposed methods are verified by measuring the lens distortion of Google Cardboard, as a representative commercial VR device, and comparing our measurement results to a simulation using the nominal values. A sketch of a full-field distortion metric follows below.
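
For context, once calibration yields radial distortion coefficients, full-field distortion is often summarized as the percentage deviation (r_d - r)/r across the field. The sketch below evaluates that classic metric for placeholder Brown-model coefficients; these are not the paper's measured values.

```python
import numpy as np

k1, k2 = -0.12, 0.03            # assumed radial distortion coefficients
r = np.linspace(0.05, 1.0, 20)  # normalized field positions

r_d = r * (1 + k1 * r**2 + k2 * r**4)   # Brown radial distortion model
distortion_pct = (r_d - r) / r * 100    # percentage distortion over the field

for ri, di in zip(r[::5], distortion_pct[::5]):
    print(f"field {ri:4.2f}: distortion {di:+6.2f} %")
```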

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.35-44 / 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the surrounding scene from few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, corresponding contours among image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors of each corresponding point. Then the final parameters are computed by minimizing a distance error between the projected contours and the actual contours. Results on synthetic and real images demonstrate that the algorithm achieves precise contour matching and camera motion estimation. A sketch of the first-step angular cost follows below.
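
The sketch below illustrates an angular cost of the kind described for the first step: under a candidate rotation R and translation t, the essential matrix E = [t]xR defines an epipolar plane for each ray in the first view, and the matched ray in the second view should lie on that plane. The synthetic rays, parameterization, and optimizer are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def rot(rx, ry, rz):
    # XYZ Euler composition.
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def angular_cost(params, rays1, rays2):
    rx, ry, rz, tx, ty = params
    E = skew(np.array([tx, ty, 1.0])) @ rot(rx, ry, rz)  # scale fixed by t_z = 1
    n = rays1 @ E.T                      # epipolar-plane normals in view 2
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Angular deviation of each second-view ray from its epipolar plane.
    dev = np.arcsin(np.clip(np.abs(np.sum(n * rays2, axis=1)), 0.0, 1.0))
    return np.sum(dev ** 2)

# Synthetic matched viewing rays from a known motion (stand-ins for the
# contour correspondences in the paper).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (30, 3)) + [0.0, 0.0, 4.0]
R_true, t_true = rot(0.05, -0.03, 0.02), np.array([0.2, -0.1, 1.0])
rays1 = pts / np.linalg.norm(pts, axis=1, keepdims=True)
pts2 = pts @ R_true.T + t_true           # X2 = R X1 + t
rays2 = pts2 / np.linalg.norm(pts2, axis=1, keepdims=True)

coarse = minimize(angular_cost, np.zeros(5), args=(rays1, rays2),
                  method="Nelder-Mead")
print("coarse (rx, ry, rz, tx, ty):", np.round(coarse.x, 3))
```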

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with wide view angles are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that point correspondences have already been established among views. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translation and rotation, from the matched feature points using the epipolar constraint. After choosing interest points adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inverse-projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method. A sketch of the epipolar initialization step follows below.
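
The initialization step, estimating rotation and translation from matched points via the epipolar constraint, is sketched below with OpenCV's perspective-image API; for the omnidirectional model in the paper, pixels would first be mapped to viewing directions. The intrinsics and the synthetic matches are assumptions.

```python
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics

# Hypothetical matched feature points (stand-ins for real detections).
rng = np.random.default_rng(2)
pts1 = (rng.random((50, 2)) * [640, 480]).astype(np.float32)
pts2 = pts1 + rng.normal([4.0, 1.0], 0.5, pts1.shape).astype(np.float32)

# Essential matrix under the epipolar constraint, then decomposition
# into an initial rotation and translation direction.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("initial R:\n", np.round(R, 4), "\ninitial t:", t.ravel())
```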

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and illumination from all directions in far fewer omnidirectional images. Because of these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that considers the inlier distribution. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we compute the essential matrix of the camera under unknown motion and then determine the camera information: rotation and translation. The standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation, can be achieved. A sketch of the inlier-selection idea follows below.
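
Below is a sketch of scoring candidate inlier sets by both their size and the standard deviation of their residuals, in the spirit of the abstract; the residuals and thresholds are synthetic assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def candidate_residuals():
    # Hypothetical epipolar residuals for one sampled motion model:
    # mostly small errors plus a few gross outliers.
    r = np.abs(rng.normal(0.0, 0.01, 80))
    r[rng.choice(80, 8, replace=False)] += rng.uniform(0.1, 0.5, 8)
    return r

best_score, best_inliers = None, None
for _ in range(20):                      # 20 hypothetical model samples
    res = candidate_residuals()
    inliers = res < 0.05                 # assumed angular-residual threshold
    # Prefer many inliers; break ties toward a tighter (low-std) residual set.
    score = (inliers.sum(), -res[inliers].std())
    if best_score is None or score > best_score:
        best_score, best_inliers = score, inliers

print("selected inliers:", best_inliers.sum(),
      "residual std: %.4f" % -best_score[1])
```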

Estimation of Manhattan Coordinate System using Convolutional Neural Network (합성곱 신경망 기반 맨하탄 좌표계 추정)

  • Lee, Jinwoo;Lee, Hyunjoon;Kim, Junho
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.31-38 / 2017
  • In this paper, we propose a system that estimates Manhattan coordinate systems for urban scene images using a convolutional neural network (CNN). Estimating the Manhattan coordinate system of an image under the Manhattan world assumption is the basis for solving computer graphics and vision problems such as image adjustment and 3D scene reconstruction. We construct a CNN that estimates Manhattan coordinate systems based on GoogLeNet [1]. To train the CNN, we collect about 155,000 images satisfying the Manhattan world assumption using the Google Street View APIs and compute their Manhattan coordinate systems with existing calibration methods to generate the dataset. In contrast to PoseNet [2], which trains per-scene CNNs, our method learns from images under the Manhattan world assumption and can therefore estimate Manhattan coordinate systems for new images that were not seen during training. Experimental results show that our method estimates Manhattan coordinate systems with a median error of 3.157° on a test set of Google Street View images of non-trained scenes. In addition, compared to an existing calibration method [3], the proposed method shows lower median errors on the test set. A sketch of the angular-error metric follows below.
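
The median-angular-error evaluation implied here can be computed as the geodesic distance on SO(3) between estimated and ground-truth frames. The sketch below uses synthetic rotations as stand-ins for CNN outputs; it is not the paper's evaluation code.

```python
import numpy as np

def angular_error_deg(R_est, R_gt):
    # Geodesic distance on SO(3): angle of the relative rotation.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def small_rotation(rng, scale=0.05):
    # Random rotation near identity via Rodrigues' formula.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    theta = rng.normal(0.0, scale)
    Kx = np.array([[0, -axis[2], axis[1]],
                   [axis[2], 0, -axis[0]],
                   [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx

rng = np.random.default_rng(0)
errors = [angular_error_deg(small_rotation(rng), np.eye(3)) for _ in range(100)]
print("median angular error: %.3f deg" % np.median(errors))
```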

Automatic Calibration of Storage-Function Rainfall-Runoff Model Using an Optimization Technique (최적화(最適化) 기법(技法)에 의한 저유함수(貯留函數) 유출(流出) 모형(模型)의 자동보정(自動補正))

  • Shim, Soon Bo;Kim, Sun Koo;Ko, Seok Ku
    • KSCE Journal of Civil and Environmental Engineering Research / v.12 no.3 / pp.127-137 / 1992
  • For the real-time control of a multi-purpose reservoir during a storm, it is essential to forecast flood inflows accurately with a good rainfall-runoff model by calibrating its parameters against the on-line rainfall and water-level data transmitted by telemetering systems. To calibrate the parameters of a runoff model, the trial-and-error method of manual calibration has commonly been adopted, which depends on the subjective viewpoint of the model user. The objective of this study is to develop an automatic calibration method using an optimization technique. The pattern-search algorithm was applied as the optimization technique because of the stability of its solutions under various conditions. The objective function was defined as the sum of the squared differences between observed and fitted ordinates of the hydrograph. Two historical flood events were used to verify the developed technique for automatic calibration of the parameters of the storage-function rainfall-runoff model, which the Korea Water Resources Corporation has used for flood control of the Soyanggang multi-purpose reservoir. The developed method was verified to be much more suitable than the manual method for flood forecasting and real-time reservoir control, since it saves calibration time and effort in addition to providing better flood-forecasting capability. A sketch of the calibration loop follows below.
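
A minimal sketch of such a loop appears below: a derivative-free compass-style pattern search minimizing the sum of squared differences between observed and simulated hydrograph ordinates. The toy linear reservoir stands in for the storage-function model, and all values are assumptions.

```python
import numpy as np

rain = np.array([0, 5, 20, 35, 15, 5, 0, 0, 0, 0], float)  # hypothetical storm

def simulate(k):
    # Toy linear reservoir in place of the storage-function model.
    q, out = 0.0, []
    for r in rain:
        q += (r - q) / k
        out.append(q)
    return np.array(out)

observed = simulate(3.2)                 # synthetic "observed" hydrograph

def sse(k):
    # Objective: sum of squared differences of hydrograph ordinates.
    return float(np.sum((observed - simulate(k)) ** 2))

# Compass pattern search over the single parameter k (start keeps k positive).
k, step = 1.0, 0.5
while step > 1e-4:
    trials = {k + step: sse(k + step), k - step: sse(k - step)}
    best_k = min(trials, key=trials.get)
    if trials[best_k] < sse(k):
        k = best_k                       # successful move: keep the step size
    else:
        step /= 2.0                      # failed move: contract the pattern
print("calibrated k = %.4f, SSE = %.6f" % (k, sse(k)))
```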

Characteristics of Ocean Scanning Multi-spectral Imager (OSMI) (Ocean Scanning Multi-spectral Imager (OSMI) 특성)

  • Young Min Cho;Sang-Soon Yong;Sun Hee Woo;Sang-Gyu Lee;Kyoung-Hwan Oh;Hong-Yul Paik
    • Korean Journal of Remote Sensing / v.14 no.3 / pp.223-231 / 1998
  • The Ocean Scanning Multispectral Imager (OSMI) is a payload on the Korean Multi-Purpose SATellite (KOMPSAT) that performs worldwide ocean color monitoring for the study of biological oceanography. The instrument images the ocean surface using a whisk-broom motion with a swath width of 800 km and a ground sample distance (GSD) of less than 1 km over the entire field of view (FOV). The instrument is designed for an on-orbit operation duty cycle of 20% over a mission lifetime of 3 years, with programmable gain/offset and on-orbit image data storage. It also performs sun calibration and dark calibration for on-orbit instrument calibration. OSMI is a multi-spectral imager covering the spectral range from 400 nm to 900 nm using a Charge-Coupled Device (CCD) Focal Plane Array (FPA). Ocean colors are monitored using 6 spectral channels that can be selected via ground commands after launch. The instrument's performance was fully measured for 8 basic spectral bands centered at 412, 443, 490, 510, 555, 670, 765 and 865 nm during ground characterization of the instrument. In addition to the ground calibration, the on-orbit calibration will also be used for on-orbit band selection, which provides great flexibility in ocean color monitoring.

Accuracy Comparison of TOA and TOC Reflectance Products of KOMPSAT-3, WorldView-2 and Pléiades-1A Image Sets Using RadCalNet BTCN and BSCN Data

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.21-32 / 2022
  • The classical question of how well the Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance of high-resolution satellite images matches actual atmospheric and surface reflectance has grown in importance. Based on Radiometric Calibration Network (RadCalNet) BTCN and BSCN data, this study compared the accuracy of TOA and TOC reflectance products of currently available optical satellites, including KOMPSAT-3, WorldView-2, and Pléiades-1A image sets, calculated using the absolute atmospheric correction function of the Orfeo Toolbox (OTB). The comparison experiment used data from 2018 and 2019, together with Landsat-8 image sets from the same period. The results showed that the TOA and TOC reflectance products obtained from the three image sets were highly consistent with the RadCalNet data, implying that any of these images may be applied when high-resolution reflectance products are required for a given application. Meanwhile, the results processed with the OTB tool and those produced by the Apparent Reflection method of another tool for WorldView-2 images were nearly identical. However, in some cases the reflectance products of Landsat-8 images provided by USGS showed lower consistency with the RadCalNet BTCN and BSCN data than those computed by the OTB tool. Continuous experiments on actively vegetated areas beyond the RadCalNet sites are necessary to obtain generalized results. A sketch of the underlying TOA reflectance conversion follows below.
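
For reference, TOA reflectance products rest on the standard conversion rho = pi * L * d^2 / (ESUN * cos(theta_s)) from at-sensor radiance. The sketch below evaluates it with placeholder values; none of the numbers come from the paper or RadCalNet.

```python
import math

L = 85.0          # at-sensor radiance, W / (m^2 sr um) -- placeholder
d = 0.9983        # Earth-Sun distance, AU -- placeholder
esun = 1536.0     # band solar irradiance, W / (m^2 um) -- placeholder
theta_s = 35.0    # solar zenith angle, degrees -- placeholder

# Standard TOA reflectance conversion.
rho_toa = math.pi * L * d**2 / (esun * math.cos(math.radians(theta_s)))
print(f"TOA reflectance: {rho_toa:.4f}")
```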