• Title/Summary/Keyword: 3D Calibration


Procedural Geometry Calibration and Color Correction ToolKit for Multiple Cameras (절차적 멀티카메라 기하 및 색상 정보 보정 툴킷)

  • Kang, Hoonjong;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.4
    • /
    • pp.615-618
    • /
    • 2021
  • Recently, 3D reconstruction of real objects with multiple cameras has been widely used for services such as VR/AR, motion capture, and plenoptic video generation. Accurate 3D reconstruction requires geometry and color matching between the cameras. However, previous methods for calibrating geometry (internal and external parameters) and correcting color (intensity) are difficult for non-specialists to perform manually. In this paper, we propose a toolkit for procedural geometry calibration and color correction among cameras of different positions and types. Our toolkit provides an easy user interface and proved effective in setting up multiple cameras for reconstruction.
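The color (intensity) correction step this toolkit automates can be illustrated with a minimal per-channel gain match against a reference camera. This is a sketch of the general idea only; the abstract does not specify the toolkit's actual correction model.

```python
import numpy as np

def match_color_gain(ref_img, src_img):
    """Scale each color channel of src_img so its mean intensity matches
    ref_img. A minimal per-channel linear gain; real inter-camera color
    correction may also fit offsets or full color matrices."""
    gains = ref_img.mean(axis=(0, 1)) / src_img.mean(axis=(0, 1))
    return np.clip(src_img * gains, 0.0, 255.0)
```

In practice the gains would be estimated on a shared color chart rather than whole frames, so that scene content does not bias the channel means.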

Geometric calibration of a computed laminography system for high-magnification nondestructive test imaging

  • Chae, Seung-Hoon;Son, Kihong;Lee, Sooyeul
    • ETRI Journal
    • /
    • v.44 no.5
    • /
    • pp.816-825
    • /
    • 2022
  • Nondestructive testing, which can monitor a product's interior without disassembly, is becoming increasingly essential for industrial inspection. Computed laminography (CL) is widely used in this application, as it can reconstruct a product, such as a printed circuit board, into a three-dimensional (3D) high-magnification image using X-rays. However, such high-magnification scanning environments can be affected by minute vibrations of the CL device, which can generate motion artifacts in the 3D reconstructed image. Since such vibrations are irregular, geometric corrections must be performed at every scan. In this paper, we propose a geometry calibration method that corrects the geometric information of CL scans based on the image alone, without using geometry calibration phantoms. The proposed method compares the projection and digitally reconstructed radiography images to measure the geometric error. To validate the proposed method, we used both numerical phantom images at various magnifications and images obtained from real industrial CL equipment. The experimental results confirmed that sharpness and the contrast-to-noise ratio (CNR) were improved.
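The CNR metric used to validate the reconstruction can be computed as follows. This is the standard |mean_signal − mean_background| / σ_background definition, which is an assumption; the paper may use a variant.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal region of interest and a
    background region of a reconstructed slice: absolute mean contrast
    divided by background noise (standard deviation)."""
    s = np.asarray(signal_roi, float)
    b = np.asarray(background_roi, float)
    return abs(s.mean() - b.mean()) / b.std()
```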

Accurate Camera Calibration Using GMDH Algorithm (GMDH 알고리즘을 이용한 정확한 카메라의 보정기법)

  • Kim, Myoung-Hwan;Do, Yong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.592-594
    • /
    • 2004
  • Camera calibration is an important problem of determining the relationship between the 3D real world and the 2D camera image. Existing calibration methods can be classified into linear and non-linear models. The linear methods are simple and robust against noise, but their accuracy is generally poor. By contrast, accuracy can be improved when the non-linearity, due mainly to lens distortion, is corrected. However, since the optical characteristics of lenses are diverse, no single non-linear method is effective for all vision systems. In this paper, we propose a new approach that corrects the calibration error of a linear method using the GMDH algorithm. The proposed technique is simple in concept and showed improved accuracy in various cases.

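The linear calibration stage that such methods start from is commonly the Direct Linear Transform (DLT), which estimates a 3×4 projection matrix from 3D–2D point correspondences. The sketch below shows only this linear stage; the paper's GMDH error-correction step is not reproduced here.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 non-coplanar 3D-2D
    correspondences by solving the homogeneous system A p = 0 (DLT).
    The solution is the right singular vector of the smallest singular
    value, i.e. the approximate null space of A."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

A non-linear refinement (lens distortion, or the GMDH correction the paper proposes) would then model the residual between these linear reprojections and the measured image points.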

Camera Calibration when the Accuracies of Camera Model and Data Are Uncertain (카메라 모델과 데이터의 정확도가 불확실한 상황에서의 카메라 보정)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology
    • /
    • v.13 no.1
    • /
    • pp.27-34
    • /
    • 2004
  • Camera calibration is an important and fundamental procedure for applying a vision sensor to 3D problems. Recently, many camera calibration methods have been proposed, particularly in the area of robot vision. However, the reliability of the data used in calibration has seldom been considered in spite of its importance. In addition, no single camera model can guarantee consistently good results under various conditions. This paper proposes methods to overcome such uncertainty in data and camera models, which is often encountered in practical camera calibration. Using the RANSAC (Random Sample Consensus) algorithm, the few data points with excessively large errors are excluded. Artificial neural networks combined in a two-step structure are trained to compensate for the result of a calibration method with a particular model in a given condition. The proposed methods are useful because they can be employed in addition to most existing camera calibration techniques when needed. We applied them to a linear camera calibration method and obtained improved results.
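The RANSAC outlier-rejection idea used above can be sketched on a deliberately simple model, a 2-D line fit, rather than a full camera model: fit a minimal random sample repeatedly, keep the model with the largest consensus set, then refit on the inliers only.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.05, seed=0):
    """RANSAC: repeatedly fit y = a*x + b to a random minimal sample
    (2 points), keep the model with the most inliers within tol, and
    finish with a least-squares fit on the consensus set only."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:          # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b, best_inliers
```

For calibration data, the same loop would sample minimal sets of 3D–2D correspondences and score models by reprojection error instead of point-to-line distance.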

Improvement and Validation of Convective Rainfall Rate Retrieved from Visible and Infrared Image Bands of the COMS Satellite (COMS 위성의 가시 및 적외 영상 채널로부터 복원된 대류운의 강우강도 향상과 검증)

  • Moon, Yun Seob;Lee, Kangyeol
    • Journal of the Korean earth science society
    • /
    • v.37 no.7
    • /
    • pp.420-433
    • /
    • 2016
  • The purpose of this study is to improve the calibration matrices for 2-D and 3-D convective rainfall rates (CRR) using the brightness temperature of the infrared 10.8 μm channel (IR), the difference of brightness temperatures between the infrared 10.8 μm and water-vapor 6.7 μm channels (IR-WV), and the normalized reflectance of the visible channel (VIS) from the COMS satellite, together with rainfall rates from weather radar, for 75 rainy days from April 22 to October 22, 2011 in Korea. In particular, the weather-radar rainfall data for 24 rainy days in 2011 are used to validate the new 2-D and 3-D CRR calibration matrices suited to the Korean peninsula. The 2-D and 3-D calibration matrices provide basic and maximum CRR values (mm h⁻¹): the rain-probability matrix, computed from the number of rainy and non-rainy pixels in the associated 2-D (IR, IR-WV) and 3-D (IR, IR-WV, VIS) matrices, is multiplied by the mean and maximum rainfall-rate matrices, which are computed by dividing the accumulated rainfall rate by the number of rainy pixels and by the product of the maximum rain rate over the calibration period and the number of rain occurrences, respectively. Finally, the new 2-D and 3-D CRR calibration matrices are obtained experimentally from regression analysis of the basic and maximum rainfall-rate matrices. As a result, the area with rainfall rates above 10 mm/h is enlarged in the new matrices, and CRR appears in lower class ranges of the IR brightness temperature versus IR-WV brightness-temperature-difference matrices than in the existing ones. Accuracy and categorical statistics were computed for the CRR events that occurred during the given period. The mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) of the new 2-D and 3-D CRR calibrations were smaller than those of the existing ones; the false alarm ratio decreased, the probability of detection increased slightly, and critical success index scores improved. To account for strong rainfall in events such as thunderstorms and typhoons, a moisture correction factor is applied. This factor is defined as the product of the total precipitable water and the relative humidity (PW·RH), a mean value between the surface and the 500 hPa level, obtained from a numerical model or COMS retrieval data. In this study, when the IR cloud-top brightness temperature is lower than 210 K and the relative humidity is greater than 40%, the moisture correction factor is empirically scaled from 1.0 to 2.0 based on the PW·RH value. Consequently, applying this factor to the new 2-D and 3-D CRR calibrations yields smaller ME, MAE, and RMSE than the new calibrations alone.
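The moisture correction factor described above can be sketched as follows. Only the 210 K and 40% thresholds and the 1.0-2.0 output range come from the abstract; the linear mapping and the `pwrh_lo`/`pwrh_hi` bounds below are hypothetical placeholders for the paper's empirical scaling.

```python
def moisture_correction(tb_ir, rh, pw, pwrh_lo=5.0, pwrh_hi=30.0):
    """Moisture correction factor for CRR. Outside the active regime
    (IR cloud-top >= 210 K or RH <= 40%) the factor is 1.0; inside it,
    PW*RH is mapped linearly onto [1.0, 2.0], clamped at the ends.
    tb_ir in K, rh in percent, pw = total precipitable water."""
    if tb_ir >= 210.0 or rh <= 40.0:
        return 1.0
    pwrh = pw * (rh / 100.0)
    t = (pwrh - pwrh_lo) / (pwrh_hi - pwrh_lo)
    return 1.0 + min(1.0, max(0.0, t))
```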

Simple Camera Calibration Using Neural Networks (신경망을 이용한 간단한 카메라교정)

  • 전정희;김충원
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.3 no.4
    • /
    • pp.867-873
    • /
    • 1999
  • Camera calibration is a procedure that calculates the internal and external parameters of a camera from the known world coordinates of control points. Accurate camera calibration is required for accurate visual measurements. In this paper, we propose a simple and flexible camera calibration method using neural networks that requires no special knowledge of 3D geometry or camera optics. Some applications do not need the values of the internal and external parameters themselves, and the proposed method is very useful for such applications. The proposed calibration also has the advantage of resolving the ill-conditioning that arises when the object plane is nearly parallel to the image plane, a situation frequently met in product inspection. For slightly more accurate calibration, the acquired image is divided into two regions according to the radial distortion of the lens, and a neural network is applied to each region. Experimental results and a comparison with Tsai's algorithm prove the validity of the proposed camera calibration.

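The core idea, learning a direct image-to-world mapping with no explicit camera model, can be sketched with a single-hidden-layer network. For brevity this sketch fixes random hidden weights and solves the output layer in closed form by least squares; the paper trains its networks properly, so this is an illustrative stand-in, not the authors' method. Image coordinates are assumed pre-normalized to roughly [0, 1].

```python
import numpy as np

def fit_calib_net(img_pts, world_pts, n_hidden=50, seed=0):
    """Single-hidden-layer network mapping image (u, v) -> world (x, y).
    Hidden weights are random tanh features; output weights are solved
    by linear least squares. Returns a predict(uv) closure."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (2, n_hidden))
    b = rng.normal(0.0, 1.0, n_hidden)

    def features(uv):
        h = np.tanh(np.asarray(uv, float) @ W + b)
        return np.column_stack([h, np.ones(len(h))])   # add bias column

    beta, *_ = np.linalg.lstsq(features(img_pts),
                               np.asarray(world_pts, float), rcond=None)
    return lambda uv: features(uv) @ beta
```

Because the network absorbs lens distortion along with the projection itself, it sidesteps the near-parallel-plane ill-conditioning the abstract mentions, at the cost of not yielding interpretable camera parameters.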

A 3D Foot Scanner Using Mirrors and Single Camera (거울 및 단일 카메라를 이용한 3차원 발 스캐너)

  • Chung, Seong-Youb;Park, Sang-Kun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.16 no.1
    • /
    • pp.11-20
    • /
    • 2011
  • A structured-beam laser is often used to scan an object and build a 3D model. Multiple cameras are usually needed to see occluded areas, which is the main reason for the high price of such scanners. In this paper, a low-cost 3D foot scanner is developed using one camera and two mirrors. The camera and the two mirrors are located below and above the foot, respectively. The occluded area, the top of the foot, is reflected by the mirrors, so the camera measures 3D point data of the bottom and top of the foot at the same time. The whole foot model is then reconstructed after a symmetric transformation of the data reflected by the mirrors. The reliability of the scan data depends on the accuracy of the parameters between the camera and the laser; a calibration method is also proposed and verified by experiments. The experiments show that the worst-case errors of the system are 2 mm along the x, y, and z directions.
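The symmetric transformation that maps mirror-reflected points back to their true positions is a reflection across the mirror plane. A minimal sketch, assuming the mirror plane is known from calibration:

```python
import numpy as np

def unreflect(points, plane_point, plane_normal):
    """Map points seen via a planar mirror back to their true positions
    by reflecting them across the mirror plane (Householder reflection):
    p' = p - 2 * dist(p, plane) * n."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    p = np.asarray(points, float)
    d = (p - np.asarray(plane_point, float)) @ n   # signed distances
    return p - 2.0 * d[:, None] * n
```

Applying the function twice returns the original points, a quick sanity check that the transform is a true reflection.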

Comparison Study of Experimental Neutron Room Scattering Corrections with Theoretical Corrections in RCL's Calibration Facility at KAERI (한국원자력연구소 중성자교정실에 대한 중성자산란보정인자 결정연구)

  • Yoon, Suk-Chul;Chang, Si-Young;Kim, Jong-Soo;Kim, Jang-Lyul;Kim, Bong-Hwan
    • Journal of Radiation Protection and Research
    • /
    • v.22 no.1
    • /
    • pp.29-33
    • /
    • 1997
  • Neutron room-scattering corrections that should be made when neutron detectors are calibrated with a D₂O-moderated ²⁵²Cf neutron source in the center of a calibration room are considered. Such room-scattering corrections depend on the specific neutron source type, detector type, calibration distance, and calibration-room configuration. Room-scattering corrections for the responses of a thermoluminescence dosimeter and two different types of spherical detectors to the neutron source in the Radiation Calibration Laboratory (RCL) neutron calibration facility at the Korea Atomic Energy Research Institute (KAERI) were experimentally determined and are presented. The measured room-scattering results are then compared with theoretical results calculated by predicting room-scattering effects in terms of parameters related to the specific configuration. Agreement between the measured and calculated scattering corrections is generally within about 10% for the three kinds of detectors in the calibration facility.


Experiment of KOMPSAT-3/3A Absolute Radiometric Calibration Coefficients Estimation Using FLARE Target (FLARE 타겟을 이용한 다목적위성3호/3A호의 절대복사 검보정 계수 산출)

  • Kyoungwook Jin;Dae-Soon Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1389-1399
    • /
    • 2023
  • A KOMPSAT-3/3A (K3/K3A) absolute radiometric calibration study was conducted based on a Field Line-of-sight Automated Radiance Exposure (FLARE) system. FLARE, developed by Labsphere, Inc., adopts the SPecular Array Radiometric Calibration (SPARC) concept: it uses a specular mirror target, which simplifies radiometric calibration by minimizing other sources of diffuse radiative energy. Several targeted measurements of the K3/K3A satellites over a FLARE site were acquired during a field campaign (July 5-15, 2021). Due to bad weather, only two K3 observations were identified as effective samples, and these were used for the study. Absolute radiometric calibration coefficients were computed by combining information from the FLARE and K3 satellite measurements. A comparison between the two FLARE measurements (taken on 7/7 and 7/13) showed very consistent results, with less than 1% difference between them except for the NIR channel. When additional K3/K3A data sets taken in August 2021 were analyzed and compared with the gain coefficients from the metadata currently used by K3/K3A, a large discrepancy was found. More studies are needed to verify the usefulness of the FLARE system for K3/K3A absolute radiometric calibration.
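Vicarious calibration against a known-radiance target generally assumes a linear sensor model, L = gain · DN + offset, and solves for the gain. The sketch below shows that relationship only; the linear model and the zero default offset are assumptions, as the abstract does not give the paper's exact formulation.

```python
def radiometric_gain(target_radiance, mean_dn, offset=0.0):
    """Absolute radiometric gain for one band: the known top-of-sensor
    radiance from the target (e.g. FLARE mirrors) divided by the mean
    digital number the sensor records over that target."""
    return (target_radiance - offset) / mean_dn
```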

A Study on the 3-D Information Abstraction of object using Triangulation System (물체의 3-D 형상 복원을 위한 삼각측량 시스템)

  • Kim, Kuk-Se;Lee, Jeong-Ki;Cho, Ai-Ri;Ba, Il-Ho;Lee, Joon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.409-412
    • /
    • 2003
  • 3-D shapes are used in movies, animation, industrial design, medical services, education, engineering, and so on, but it is not easy to build a 3-D shape from 2-D image information. There are two common methods of restoring a 3-D image from 2-D images: using a laser, and acquiring the 3-D image through stereo vision. Instead of these two methods, which involve many difficulties, this paper studies a simpler approach. We present a simple and efficient method, called direct calibration, which does not require any equations at all. The direct calibration procedure builds a lookup table (LUT) linking image and 3-D coordinates using a real 3-D triangulation system. The LUT is built by measuring the image coordinates of a grid of known 3-D points and recording both the image and world coordinates for each point; the depth values of all other visible points are obtained by interpolation.

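The interpolation step of such a direct-calibration LUT can be sketched as bilinear interpolation over a regular grid of measured depths. The regular-grid layout and the `cell` spacing parameter are assumptions for illustration; the paper's grid geometry is not specified in the abstract.

```python
import numpy as np

def lut_depth(lut, u, v, cell=1.0):
    """Bilinearly interpolate a depth value at image point (u, v) from a
    regular LUT, where lut[i, j] holds the measured depth at pixel
    (u = j * cell, v = i * cell)."""
    x, y = u / cell, v / cell
    i0, j0 = int(y), int(x)          # cell containing (u, v)
    fy, fx = y - i0, x - j0          # fractional position inside the cell
    return (lut[i0, j0] * (1 - fx) * (1 - fy)
            + lut[i0, j0 + 1] * fx * (1 - fy)
            + lut[i0 + 1, j0] * (1 - fx) * fy
            + lut[i0 + 1, j0 + 1] * fx * fy)
```

This is why the method "requires no equations": all camera and triangulation geometry is baked into the measured table, and runtime reduces to a table lookup plus interpolation.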