• Title/Summary/Keyword: multi-camera calibration

82 search results

Scaling attack for Camera-Lidar calibration model (카메라-라이다 정합 모델에 대한 스케일링 공격)

  • Yi-JI IM;Dae-Seon Choi
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.298-300 / 2023
  • Autonomous driving and robot navigation systems are mostly designed around multi-sensor fusion (MSF) to improve object recognition performance, so registering the information coming from each sensor is a prerequisite for an accurate MSF algorithm. Various prior studies have attacked 2D data; since autonomous driving must handle 3D data, we carried out a 3D data attack not addressed in prior work. In this study, we propose an attack method that degrades the accuracy of a camera-LiDAR registration model using a scaling attack. The proposed method applies the scaling attack to the input LiDAR point cloud, attacking at the downscaling stage. Experimental results show that attacking the input data increased the mean squared translation error by more than 56% and the mean quaternion angle error by more than 98% compared to before the attack. Applying the attack per downscaling size and per algorithm, we confirmed that the attack was most effective when downscaling to a size of 10×20 with the lanczos4 algorithm.
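The downscaling stage exploited above can be illustrated with a minimal NumPy sketch (not the paper's code; the image shape, factor, and values are assumptions): perturbing only the pixels that survive nearest-neighbor downsampling leaves the full-resolution input almost unchanged while strongly altering what the downstream model sees.

```python
import numpy as np

def downscale_nearest(img, factor):
    """Naive nearest-neighbor downscaling: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

# A clean "range image" (stand-in for a projected LiDAR point cloud).
clean = np.ones((40, 80))

# Scaling-attack idea: modify ONLY the pixels that survive downsampling.
# The full-resolution image barely changes on average, but the downscaled
# image (what the registration model actually sees) changes a lot.
attacked = clean.copy()
attacked[::4, ::4] = 10.0  # perturb 1/16 of the pixels

full_res_change = np.abs(attacked - clean).mean()
downscaled_change = np.abs(
    downscale_nearest(attacked, 4) - downscale_nearest(clean, 4)
).mean()

print(full_res_change)    # small: most pixels untouched
print(downscaled_change)  # large: every surviving pixel was perturbed
```

Smoother kernels such as lanczos4 weight several source pixels per output pixel, but the same mismatch between full-resolution appearance and downscaled content is what the attack exploits.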

A New Illumination Compensation Method based on Color Optimization Function for Generating 3D Volumetric Model (3차원 체적 모델의 생성을 위한 색상 최적화 함수 기반의 조명 보상 기법)

  • Park, Byung-Seo;Kim, Kyung-Jin;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.25 no.4 / pp.598-608 / 2020
  • In this paper, we propose a color correction technique for images acquired through a multi-view camera system used to capture a 3D model. It is assumed that the 3D volume is captured indoors and that the position and intensity of the lighting are constant over time. Eight multi-view cameras are used, converging toward the center of the space, so even under constant lighting the intensity and angle of the light entering each camera may differ. Therefore, a color optimization function is applied to a color correction chart captured by all cameras, and a color conversion matrix defining the relationship between the eight resulting images is calculated. Using this matrix, the images from all cameras are corrected against the standard color correction chart. The proposed method minimizes the color difference between cameras when imaging a 3D object with eight cameras, and experiments show that the color difference between images is reduced when they are restored to a 3D model.
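The color conversion matrix described above can be estimated with ordinary least squares. A minimal sketch under assumed values (a 24-patch chart and a made-up camera response; this is not the paper's optimization function):

```python
import numpy as np

# Hypothetical chart readings: rows are color patches, columns are R, G, B.
# `reference` is the standard chart; `measured` is one camera's reading.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, (24, 3))       # 24-patch chart (assumption)
true_M = np.array([[0.9, 0.05, 0.0],
                   [0.0, 1.1,  0.05],
                   [0.05, 0.0, 0.95]])
measured = reference @ true_M.T              # simulated camera response

# Least-squares 3x3 color conversion matrix: measured @ M ≈ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M

print(np.abs(corrected - reference).max())   # residual color error
```

In practice one matrix is fitted per camera, so all eight cameras are pulled toward the same chart-defined color space.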

Multiple Camera-Based Correspondence of Ground Foot for Human Motion Tracking (사람의 움직임 추적을 위한 다중 카메라 기반의 지면 위 발의 대응)

  • Seo, Dong-Wook;Chae, Hyun-Uk;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.14 no.8 / pp.848-855 / 2008
  • In this paper, we describe correspondence among multiple images taken by multiple cameras. Correspondence among multiple views is an interesting problem that often appears in applications such as visual surveillance and gesture recognition. We use the principal axis and a ground-plane homography to estimate the foot position of a person. The principal axis belongs to the silhouette-based region of the person, obtained by subtracting predetermined multiple background models from the current image containing the moving person. To calculate the ground-plane homography, we use landmarks on the ground plane in 3D space; the homography thus relates common ground-plane points between different views. Since a person's foot occupies a single position in 3D space, we represent it as an intersection: the point where the principal axis in one image crosses the ground plane transformed from another image. The position of this intersection differs depending on the camera view, so we establish a correspondence between the intersection in the current image and the intersection transformed from the other image by the homography. Corresponding points are confirmed when they lie within a short distance measured in the top-view plane, and a person is then tracked by these corresponding points on the ground plane. Experimental results show that the proposed algorithm achieves almost 90% person-detection accuracy for correspondence-based tracking.
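Transferring a foot point between views with the ground-plane homography works as sketched below; the matrix, coordinates, and distance threshold are hypothetical illustrations, not values from the paper:

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2D ground-plane point through a 3x3 homography H."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]

# Hypothetical homography between two camera views of the ground plane.
H = np.array([[1.2,   0.1, 5.0],
              [0.0,   0.9, -3.0],
              [0.001, 0.0, 1.0]])

foot_view1 = (100.0, 200.0)
foot_view2 = apply_homography(H, foot_view1)

# Correspondence check from the paper's idea: the transferred intersection
# and the intersection detected in the other view should be close.
detected_view2 = foot_view2 + np.array([0.5, -0.4])  # small detection noise
assert np.linalg.norm(foot_view2 - detected_view2) < 2.0  # within threshold
print(foot_view2)
```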

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.466-468 / 2022
  • 3D reconstruction is an important issue in many applications such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, depth maps can be acquired by a stereo camera and a time-of-flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D data. First, we applied a general multi-camera calibration technique that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. The fused data were compared with ground-truth data reconstructed using an RTC360 sensor, and Geomagic Wrap was used to analyze the average RMSE between the two datasets. The proposed procedure was implemented and tested with real-world data.

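A minimal sketch of the RMSE comparison step, assuming already-aligned clouds with point-to-point correspondence (the paper uses Geomagic Wrap for this analysis; all coordinates here are made up):

```python
import numpy as np

def point_cloud_rmse(reconstructed, ground_truth):
    """RMSE between corresponding 3D points. Assumes the two clouds are
    aligned and ordered identically; a real pipeline would first register
    them and match nearest neighbors."""
    d = np.linalg.norm(reconstructed - ground_truth, axis=1)
    return np.sqrt(np.mean(d ** 2))

# Ground-truth points (stand-in for the RTC360 scan) and a fused
# reconstruction with small per-point errors.
gt = np.array([[0.0, 0.0, 1.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 2.0]])
fused = gt + np.array([[0.01, 0.00, -0.02],
                       [0.00, 0.02,  0.00],
                       [-0.01, 0.00, 0.01]])

rmse = point_cloud_rmse(fused, gt)
print(rmse)  # average 3D error in the same units as the coordinates
```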

A Study on the Integrated System Implementation of Close Range Digital Photogrammetry Procedures (근거리 수치사진측량 과정의 단일 통합환경 구축에 관한 연구)

  • Yeu, Bock-Mo;Lee, Suk-Kun;Choi, Song-Wook;Kim, Eui-Myoung
    • Journal of Korean Society for Geospatial Information Science / v.7 no.1 s.13 / pp.53-63 / 1999
  • For close-range digital photogrammetry, multi-step procedures should be embodied in an integrated system, but such a system is hard to construct through conventional procedural processing. Using object-oriented programming (OOP), photogrammetric processing can be organized into classes for the corresponding subjects, which makes it easy both to construct an integrated system for digital photogrammetry and to add newly developed classes. In this study, a 3-dimensional mathematical model is developed for immediate calibration of a CCD camera whose focal distance varies with the distance to the object. Classes for image input and output are also created to carry out the close-range digital photogrammetric procedures with OOP. Image matching, coordinate transformation, direct linear transformation, and bundle adjustment are performed by producing classes corresponding to each stage of data processing. The bundle adjustment, which adds principal point coordinates and a focal length term for the non-photogrammetric CCD camera, is found to increase both the usability of the CCD camera and the accuracy of object positioning. In conclusion, classes and their hierarchies for digital photogrammetry are designed to manage multi-step procedures using OOP, and the close-range digital photogrammetric process is implemented with a CCD camera in an integrated system.

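The direct linear transformation step mentioned above can be sketched as follows: given six or more 3D-2D correspondences, the 3x4 projection matrix is recovered from the null space of a linear system. The simulated camera and points below are assumptions for illustration, not data from the paper:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Direct linear transformation: estimate a 3x4 projection matrix P
    from n >= 6 3D points X (n,3) and their 2D images x (n,2)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        rows.append(p + [0.0] * 4 + [-u * c for c in p])
        rows.append([0.0] * 4 + p + [-v * c for c in p])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points through P and dehomogenize."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Simulated camera (assumed intrinsics/pose) and random object points.
rng = np.random.default_rng(1)
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0,   0.0,   1.0,  2.0]])
X = rng.uniform(-1, 1, (10, 3))
x = project(P_true, X)

P_est = dlt_projection_matrix(X, x)
err = np.abs(project(P_est, X) - x).max()
print(err)  # reprojection error, near zero for noiseless data
```

Bundle adjustment then refines such a linear estimate (plus principal point and focal length) by nonlinear least squares over all images.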

Analysis of the Accuracy of the UAV Photogrammetric Method using Digital Camera (디지털 카메라를 이용한 무인항공 사진측량의 정확도 분석)

  • Jung, Sung-Heuk;Lim, Hyeong-Min;Lee, Jae-Kee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.6 / pp.741-747 / 2009
  • For the construction of 3D virtual city models, airborne digital cameras, laser scanners, multi-oblique photograph systems, and other devices are currently used. With such advanced techniques, precise 3D spatial information can be collected and high-quality 3D city models can be built over considerably large areas. The 3D spatial information must provide up-to-date data that quickly reflects changes due to urban development. In this study, a UAV photogrammetric method using a low-cost UAV and digital camera was proposed to acquire and update 3D spatial information effectively over small areas where information changes continuously. In the proposed method, the elements of interior orientation were acquired through camera calibration, vertical and oblique photographs were taken at 9 points, and 3D drawing of ground control points and buildings was performed using 20 of the captured images. This study also analyzed the accuracy of the proposed method against ground survey data and a digital map in order to examine whether the method can be used for on-demand 3D spatial information updates over relatively small areas.
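One quick calculation behind such accuracy analyses is the ground sample distance implied by flying height and camera geometry; the numbers below are illustrative assumptions, not the study's parameters:

```python
# Ground sample distance (GSD): the ground footprint of one image pixel,
# which bounds the achievable mapping accuracy of UAV photogrammetry.
# All values are hypothetical.
pixel_size_m = 6.4e-6    # 6.4 um sensor pixel (assumption)
focal_length_m = 0.035   # 35 mm lens (assumption)
altitude_m = 200.0       # flying height above ground (assumption)

gsd_m = pixel_size_m * altitude_m / focal_length_m
print(round(gsd_m * 100, 2), "cm/pixel")
```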

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering / v.41 no.2 / pp.126-137 / 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared with different cropping treatments: soybean single-cropping and soybean with barley cover-cropping, each with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground with 2.4 GHz radio-frequency communication. For image pre-processing, the acquired images were pre-treated and georeferenced using a fisheye distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images, converting the RGB digital number values into RGB reflectance via linear regression. Excess Green (ExG) vegetation indices for each of the test plots were compared with the M-statistic method to quantitatively evaluate the greenness of soybean fields under the different cropping systems. Results: The reflectance calibration methods showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of a linear regression fit for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed with soybean growth stage, showing clear differences among the test plots with different cropping treatments in the early season of < 60 days after sowing (DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early season of < 41 DAS, with values of M > 1.
Conclusion: Therefore, multi-temporal images obtained with a UAV and an RGB camera can be applied to quantify overall vegetation fraction and crop growth status, and this information can help determine proper treatments for the vegetation fraction.
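The Excess Green index used above is a simple per-pixel computation on chromatic coordinates. A minimal sketch with made-up pixel values:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromatic coordinates,
    where r, g, b are channel values normalized by their per-pixel sum."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

# Green-dominant vegetation pixel vs. bare-soil pixel (assumed values).
veg = np.array([[60, 160, 50]])
soil = np.array([[120, 100, 80]])
print(excess_green(veg), excess_green(soil))  # vegetation scores higher
```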

Study on Reflectance and NDVI of Aerial Images using a Fixed-Wing UAV "Ebee"

  • Lee, Kyung-Do;Lee, Ye-Eun;Park, Chan-Won;Hong, Suk-Young;Na, Sang-Il
    • Korean Journal of Soil Science and Fertilizer / v.49 no.6 / pp.731-742 / 2016
  • Recent advances in UAV (Unmanned Aerial Vehicle) technology offer new opportunities for assessing crop conditions using UAV imagery. The objective of this study was to assess whether reflectance and NDVI derived from consumer-grade cameras mounted on UAVs are useful for crop condition monitoring. This study was conducted using a fixed-wing UAV (Ebee) with a Canon S110 camera from March 2015 to March 2016 in the experimental field of the National Institute of Agricultural Sciences. Results were compared with ground-based recordings obtained from consumer-grade cameras and ground multi-spectral sensors. The relationship between raw digital numbers (DNs) of the UAV images and measured calibration tarp reflectance was quadratic. Surface reflectance (lawn grass, stairs, and a soybean cultivation area) obtained from UAV images was not similar to the reflectance measured by ground-based sensors, but NDVI based on UAV imagery was similar to NDVI calculated from ground-based sensors.
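NDVI as compared above is computed per pixel from near-infrared and red reflectance. A minimal sketch with assumed reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed element-wise."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Hypothetical reflectance values: dense vegetation vs. bare soil.
nir = np.array([0.45, 0.30])
red = np.array([0.05, 0.25])
v = ndvi(nir, red)
print(v)  # high for vegetation, near zero for soil
```

Because NDVI is a ratio, consistent band-to-band errors partly cancel, which is one plausible reason UAV-derived NDVI matched ground sensors better than raw reflectance did.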

Geometric Correction for Uneven Quadric Projection Surfaces Using Recursive Subdivision of Bézier Patches

  • Ahmed, Atif;Hafiz, Rehan;Khan, Muhammad Murtaza;Cho, Yongju;Cha, Jihun
    • ETRI Journal / v.35 no.6 / pp.1115-1125 / 2013
  • This paper presents a scheme for geometric correction of projected content for planar and quadratic projection surfaces. The scheme does not require the projection surface to be perfectly quadratic or planar and is therefore suitable for uneven low-cost commercial and home projection surfaces. An approach based on the recursive subdivision of second-order Bézier patches is proposed for the estimation of projection distortion owing to surface imperfections. Unlike existing schemes, the proposed scheme is completely automatic, requires no prior knowledge of the projection surface, and uses a single uncalibrated camera without requiring any physical markers on the projection surface. Furthermore, the scheme is scalable for geometric calibration of multi-projector setups. The efficacy of the proposed scheme is demonstrated using simulations and via practical experiments on various surfaces. A relative distortion error metric is also introduced that provides a quantitative measure of the suppression of geometric distortions, which occurs as the result of an imperfect projection surface.
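A second-order Bézier patch of the kind subdivided in this scheme is defined by a 3x3 control grid and evaluated by de Casteljau's algorithm in each parameter direction. A minimal sketch with a hypothetical control grid:

```python
import numpy as np

def bezier2(p0, p1, p2, t):
    """De Casteljau evaluation of a quadratic (second-order) Bezier curve."""
    a = (1 - t) * p0 + t * p1
    b = (1 - t) * p1 + t * p2
    return (1 - t) * a + t * b

def bezier_patch2(ctrl, u, v):
    """Second-order Bezier patch: 3x3 grid of control points, evaluated
    at parameters (u, v). Recursive subdivision refines such patches
    where they fit the observed surface poorly."""
    rows = [bezier2(ctrl[i, 0], ctrl[i, 1], ctrl[i, 2], u) for i in range(3)]
    return bezier2(rows[0], rows[1], rows[2], v)

# Hypothetical control grid approximating a gently bulging screen (x, y, z).
ctrl = np.array([[[0, 0, 0.0], [1, 0, 0.2], [2, 0, 0.0]],
                 [[0, 1, 0.2], [1, 1, 0.5], [2, 1, 0.2]],
                 [[0, 2, 0.0], [1, 2, 0.2], [2, 2, 0.0]]], dtype=float)

center = bezier_patch2(ctrl, 0.5, 0.5)
print(center)  # patch point at the parametric center
```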

Correction of Prompt Gamma Distribution for Improving Accuracy of Beam Range Determination in Inhomogeneous Phantom

  • Park, Jong Hoon;Kim, Sung Hun;Ku, Youngmo;Lee, Hyun Su;Kim, Young-su;Kim, Chan Hyeong;Shin, Dong Ho;Lee, Se Byeong;Jeong, Jong Hwi
    • Progress in Medical Physics / v.28 no.4 / pp.207-217 / 2017
  • For effective patient treatment in proton therapy, it is important to accurately measure the beam range. To this end, various researchers determine the beam range by measuring the prompt gammas generated during nuclear reactions of protons with materials. However, the accuracy of beam range determination can be degraded in heterogeneous phantoms, because prompt-gamma production differs with the properties of the material. In this research, to improve beam range determination in a heterogeneous phantom, we derived a formula to correct the prompt-gamma distribution using the ratios of prompt-gamma production, stopping power, and density obtained for each material. Prompt-gamma distributions were then acquired with a multi-slit prompt-gamma camera on various kinds of heterogeneous phantoms using a Geant4 Monte Carlo simulation, and the derived formula was applied to the distributions. For a phantom with bone-equivalent material embedded in soft tissue-equivalent material, the determined ranges were relatively accurate both before and after correction. For a phantom with lung-equivalent material embedded in soft tissue-equivalent material, the maximum error before correction was as large as 18.7 mm; after applying the correction method, the accuracy improved significantly, to a maximum error of 4.1 mm. Moreover, for a phantom constructed from CT data, the beam range could generally be determined within an error of 2.5 mm after applying the correction method. The simulation results confirm the potential to determine the beam range with high accuracy in heterogeneous phantoms using the proposed correction method. In the future, these methods will be verified by experiments using a therapeutic proton beam.
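The correction idea can be caricatured as rescaling each depth bin of the measured distribution by material-dependent ratios relative to soft tissue. This sketch uses only a production ratio and entirely made-up numbers; the paper's actual formula also involves stopping power and density ratios:

```python
# All numbers below are illustrative assumptions, not values from the paper.
soft_tissue = {"production": 1.00}  # reference material
lung = {"production": 0.30}         # emits fewer prompt gammas per proton

def corrected_counts(counts, material, reference):
    """Illustrative correction: rescale measured prompt-gamma counts in a
    depth bin by the reference-to-material production ratio, so bins in
    different materials become comparable for range determination."""
    return counts * reference["production"] / material["production"]

# A depth bin inside lung-equivalent material, rescaled toward the level
# it would show in soft tissue.
corrected = corrected_counts(30.0, lung, soft_tissue)
print(corrected)
```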