• Title/Summary/Keyword: multi-camera calibration

82 search results

Localization of Mobile Robot In Unstructured Environment using Auto-Calibration Algorithm (Auto-Calibration을 이용한 Unstructured Environment에서의 실내 위치추정 기법)

  • Eom, We-Sub;Seo, Dae-Geun;Park, Jae-Hyun;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.15 no.2 / pp.211-217 / 2009
  • This paper proposes a way to expand the usable area of beacon-based localization. We developed an auto-calibration algorithm that estimates the position of a beacon attached at an arbitrary location by using the information from the existing, already-surveyed beacons. This overcomes the limitation that a mobile robot can localize itself only within the area covered by pre-installed beacons, since beacons cannot be installed at accurately known positions when passing through a dangerous or unknown zone. With the auto-calibration algorithm, the robot can gradually move into an unknown zone and later determine its own location once it reaches a safe zone. Localization is essential for operating a mobile robot and must guarantee a certain degree of reliability. Mobile robots are generally designed to work well when the surroundings are well structured, and localization techniques using cameras, lasers, and beacons are well developed; owing to sensor characteristics, however, an environment may be dark, subject to radio interference, or unsuitable for installing beacons. The effectiveness of the proposed method is demonstrated through experiments.
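The auto-calibration idea above rests on estimating a newly attached beacon's position from ranges to already-surveyed beacons. A minimal sketch of the underlying range-based position estimate (plain linearized trilateration, not the paper's full algorithm; the beacon coordinates and ranges below are illustrative):

```python
import numpy as np

def trilaterate_2d(beacons, distances):
    """Estimate a 2D position from >= 3 known beacon positions and
    measured ranges, via the standard linearized least-squares solution.
    beacons: (N, 2) array of known beacon coordinates
    distances: (N,) array of measured ranges to the unknown point
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = beacons[0]
    # Subtract the first range equation from the rest to remove the
    # quadratic terms, leaving a linear system A @ p = b.
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

The same solver works in reverse: with the robot's poses known, ranges measured from several poses to the new beacon locate that beacon, which is the essence of auto-calibrating an arbitrarily placed beacon.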

Parallel Multi-task Cascade Convolution Neural Network Optimization Algorithm for Real-time Dynamic Face Recognition

  • Jiang, Bin;Ren, Qiang;Dai, Fei;Zhou, Tian;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.4117-4135 / 2020
  • Owing to viewing angle, illumination, and scene diversity, real-time dynamic face detection and recognition in unconstrained environments is far from trivial. In this study, we exploit the intrinsic correlation between detection and calibration, using a multi-task cascaded convolutional neural network (MTCNN) to improve the efficiency of face recognition. The output of each core network is mapped in parallel to a compact Euclidean space in which distance represents the similarity of facial features, so that a target face can be identified as quickly as possible without waiting for all network iterations to complete. Even after the target face's angle and illumination change, the correlation between recognition results is well preserved. In a practical application scenario, we use a multi-camera real-time monitoring system to match and recognize faces in successive frames acquired from different angles. The effectiveness of the method was verified by several real-time monitoring experiments, with good results.
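The Euclidean-embedding matching the abstract describes can be sketched as a nearest-neighbour search with a rejection threshold (the threshold value and embedding dimensionality here are assumptions for illustration, not values from the paper):

```python
import numpy as np

def match_face(query, gallery, threshold=1.0):
    """Match a query face embedding against a gallery of embeddings.
    The identity is the nearest gallery entry if its Euclidean distance
    falls below the threshold; otherwise the face is unknown (-1).
    query: (D,) embedding; gallery: (N, D) enrolled embeddings."""
    dists = np.linalg.norm(np.asarray(gallery) - np.asarray(query), axis=1)
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return best, float(dists[best])
    return -1, float(dists[best])
```

Because distance directly encodes similarity, each cascade stage can attempt this match as soon as its embedding is available, which is what lets recognition finish before all network iterations complete.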

Applicability Assessment of Disaster Rapid Mapping: Focused on Fusion of Multi-sensing Data Derived from UAVs and Disaster Investigation Vehicle (재난조사 특수차량과 드론의 다중센서 자료융합을 통한 재난 긴급 맵핑의 활용성 평가)

  • Kim, Seongsam;Park, Jesung;Shin, Dongyoon;Yoo, Suhong;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.35 no.5_2 / pp.841-850 / 2019
  • The purpose of this study is to strengthen rapid-mapping capability for disasters by improving mapping positioning accuracy and by fusing multi-sensing point clouds derived from Unmanned Aerial Vehicles (UAVs) and a disaster investigation vehicle. Positioning accuracy was evaluated for two drone-mapping procedures in Agisoft PhotoScan: 1) conventional geo-referencing with self-calibration, and 2) the proposed geo-referencing with an optimized camera model, using accurate fixed Interior Orientation Parameters (IOPs) derived from an indoor camera-calibration test and bundle adjustment. The analysis showed that the positioning RMS error improved from 2-3 m to 0.11-0.28 m horizontally and from 2.85 m to 0.45 m vertically. In addition, the proposed fusion of multi-sensing point clouds with height constraints reduced the point-matching error to below about 0.07 m. The proposed approach will therefore enable effective and timely generation of ortho-imagery and high-resolution three-dimensional geographic data for national disaster management.
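The horizontal and vertical RMS errors quoted above are computed from check points in the usual way; a small sketch (assuming (N, 3) arrays of estimated and surveyed coordinates in metres):

```python
import numpy as np

def rmse_components(est, truth):
    """Horizontal and vertical RMSE of mapped points against check points.
    est, truth: (N, 3) arrays of X, Y, Z coordinates (metres)."""
    err = np.asarray(est, dtype=float) - np.asarray(truth, dtype=float)
    horiz = np.sqrt(np.mean(err[:, 0] ** 2 + err[:, 1] ** 2))  # XY plane
    vert = np.sqrt(np.mean(err[:, 2] ** 2))                    # height
    return horiz, vert
```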

Volume Calculation for Filling Up of Rubbish Using Stereo Camera and Uniform Mesh (스테레오 카메라와 균일 매시를 이용한 매립지의 환경감시를 위한 체적 계산 알고리즘)

  • Lee, Young-Dae;Cho, Sung-Youn;Kim, Kyung;Lee, Dong-Gyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.3 / pp.15-22 / 2012
  • Constructing a safe and clean urban environment requires identifying rubbish waste and measuring its volume accurately. In this paper, we developed an algorithm that computes waste volume using a stereo camera, to improve environmental monitoring of a waste repository. We first computed the distortion parameters of the stereo camera and then obtained a point cloud of the object surface by measuring the target object. Feeding this point cloud into the volume-calculation algorithm yields the waste volume of the target object. For this purpose, we propose two volume-calculation algorithms based on uniform meshing. The difference between volumes measured at two times, e.g. today's and yesterday's, gives the newly deposited waste volume; by collecting volume reports weekly, monthly, and yearly, quantitative statistics of the repository's change can be obtained.
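A minimal version of the uniform-mesh volume computation described above, binning the surface point cloud into an XY grid and summing per-cell prism volumes (the cell size and base height are assumed parameters, and the paper's two specific algorithms may differ in detail):

```python
import numpy as np

def volume_from_points(points, cell=0.5, base=0.0):
    """Approximate the volume above a base plane from a surface point
    cloud: bin points into a uniform XY grid, then sum
    (mean cell height - base) * cell area over occupied cells.
    points: (N, 3) array of surface samples from the stereo camera."""
    pts = np.asarray(points, dtype=float)
    ix = np.floor((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
    iy = np.floor((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)
    volume = 0.0
    for cx, cy in set(zip(ix, iy)):
        mask = (ix == cx) & (iy == cy)
        volume += (pts[mask, 2].mean() - base) * cell * cell
    return volume
```

Differencing the volumes from two successive measurements, as the abstract suggests, gives the amount deposited in between.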

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo;Ham, Sangwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.577-589 / 2021
  • Drone mapping systems can be applied to many fields such as disaster damage investigation, environmental monitoring, and construction process monitoring. Integrating the individual sensors attached to a drone used to require complicated procedures, including time synchronization. Recently, a variety of composite sensors consisting of visual sensors and GPS/INS have been released; they integrate the multi-sensory data internally and provide geotagged image files to users. Before composite sensors are used in drone mapping systems, however, their mapping accuracy should be examined. In this study, we analyzed the mapping accuracy of a composite sensor, focusing on the data-acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points (GCPs): with 2 GCPs, the total RMSE was reduced by about 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracy with and without pre-calibration. When a few ground control points were used, pre-calibration did not affect mapping accuracy; when the image sequences form weak geometry, however, pre-calibration can be essential for reducing possible mapping errors, and in the absence of ground control points it can also improve them. Based on this study, we expect future drone mapping systems using composite sensors to streamline the survey and calibration process according to the data-acquisition circumstances.

Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar

  • Jang, In-Su;Kim, Jae Woo;You, Ju-Yeon;Kim, Jin Seo
    • ETRI Journal / v.35 no.6 / pp.969-979 / 2013
  • Various simulation applications for hair, clothing, and makeup of a 3D avatar can provide useful information to users before they select a hairstyle, clothes, or cosmetics. To enhance their realism, the shapes, textures, and colors of the avatars should be similar to those found in the real world. For more realistic 3D avatar color reproduction, this paper proposes a spectrum-based color reproduction algorithm and a color management process for its implementation. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of skin samples before and after applying the makeup. To implement the model in a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
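The final rendering step above converts an estimated spectrum into RGB. A minimal sketch of that spectral-to-RGB conversion, assuming pre-sampled CIE colour-matching functions and the standard XYZ-to-linear-sRGB matrix (the paper's gamut mapping and display characterization are omitted):

```python
import numpy as np

# Linear XYZ -> linear sRGB matrix (IEC 61966-2-1, D65 white point).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def spectrum_to_rgb(spectrum, cmf, d_lambda=10.0):
    """Integrate a spectral distribution against colour-matching
    functions to get XYZ, then convert to linear sRGB.
    spectrum: (N,) sampled spectral values at N wavelengths
    cmf: (N, 3) sampled x-bar, y-bar, z-bar colour-matching functions
    d_lambda: wavelength sampling step (nm)"""
    xyz = (np.asarray(cmf, dtype=float).T
           @ np.asarray(spectrum, dtype=float)) * d_lambda
    rgb = XYZ_TO_SRGB @ xyz
    return np.clip(rgb, 0.0, None)  # clip out-of-gamut negatives
```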

Development of a Robot's Visual System for Measuring Distance and Width of Object Algorism (로봇의 시각시스템을 위한 물체의 거리 및 크기측정 알고리즘 개발)

  • Kim, Hoi-In;Kim, Gab-Soon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.88-92 / 2011
  • This paper describes the development of a visual system for robots and of an image-processing algorithm that measures the size of an object and the distance from the robot to the object. Robot visual systems usually carry a camera for these measurements, but they cannot measure size and distance accurately when the system's location changes or the object is not on the ground. In this paper, we therefore developed a robot visual system that measures the size of an object and the distance to it using two cameras and a two-degree-of-freedom robot mechanism, developed the corresponding image-processing algorithm, and finally carried out characterization tests of the developed system. The results indicate that the developed system can accurately measure the size of an object and the distance to it.
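Two-camera distance and size measurement rests on pinhole-model triangulation; a sketch under standard rectified-stereo assumptions (focal length in pixels, baseline in metres; the paper's two-degree-of-freedom mechanism is not modelled here):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a calibrated, rectified stereo pair:
    Z = f * B / d, where d is the horizontal disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def object_width(width_px, depth_m, focal_px):
    """Metric width of an object from its image width via the
    pinhole model: W = w * Z / f."""
    return width_px * depth_m / focal_px
```

With depth recovered from disparity, the object's pixel extent converts directly to a metric size, which is the core of the size-and-distance measurement the abstract describes.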

Speeding up the KLT Tracker for Real-time Image Georeferencing using GPS/INS Data

  • Tanathong, Supannee;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.26 no.6 / pp.629-644 / 2010
  • A real-time image georeferencing system requires all of its inputs in real-time. The intrinsic camera parameters can be identified in advance through camera calibration, and the other control information can be derived instantaneously from real-time GPS/INS data. The bottleneck is tie-point acquisition: manual operation is clearly an obstacle for a real-time system, and existing extraction methods are not fast enough. In this paper, we present a fast, automated image-matching technique based on the KLT tracker to obtain a set of tie-points in real-time. The proposed method accelerates the KLT tracker by supplying initial guessed tie-points computed from the GPS/INS data. The KLT tracker originally works effectively only when the displacement between tie-points is small; to obtain an automated solution under large displacement, this paper suggests an appropriate number of depth levels for multi-resolution tracking, using knowledge of the uncertainties of the GPS/INS measurements. Experimental results show that the suggested number of depth levels is promising and that the proposed method obtains tie-points 13% faster than the ordinary KLT with no loss of accuracy. This suggests that the proposed algorithm can be effectively integrated into real-time image georeferencing for developing real-time surveillance applications.
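Choosing the number of pyramid depth levels from the predicted displacement can be sketched as follows (the 15-pixel tracking window and the halving-per-level rule are standard KLT pyramid assumptions, not the paper's exact formula):

```python
def pyramid_levels(max_disp_px, window_px=15):
    """Smallest number of pyramid levels such that the predicted
    displacement, halved once per level, fits inside the tracking
    window at the coarsest level."""
    levels = 0
    while max_disp_px > window_px:
        max_disp_px /= 2.0
        levels += 1
    return levels
```

In the proposed scheme, `max_disp_px` would come from the GPS/INS-predicted tie-point displacement plus its uncertainty, so the tracker uses only as many levels as the motion actually requires.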

Illumination Mismatch Compensation Algorithm based on Layered Histogram Matching by Using Depth Information (깊이 정보에 따른 레이어별 히스토그램 매칭을 이용한 조명 불일치 보상 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.8C / pp.651-660 / 2010
  • In this paper, we implement an efficient histogram-based prefiltering method to compensate for illumination mismatches between neighboring views. In multi-view video, such illumination disharmony arises primarily from differences in camera location and orientation and from imperfect camera calibration, and it degrades the performance of multi-view video coding (MVC). Histogram matching can compensate for these differences in a prefiltering step: once all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching, the coding efficiency of MVC improves. However, frames of a multi-view sequence generally consist of several regions whose color compositions and histogram distributions are mutually independent, and the location and depth of objects can differ between frames captured by different cameras. We therefore propose a new algorithm that first partitions an image into layers according to its depth information and then performs histogram matching for each region individually. Experimental results show that the proposed algorithm improves the compression ratio compared with conventional image-based algorithms.
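Classic CDF-based histogram matching, which the proposed method applies per depth layer rather than to the whole image, can be sketched as (grey-scale only, for brevity):

```python
import numpy as np

def match_histogram(src, ref):
    """Map src's grey-level distribution onto ref's by matching
    cumulative distribution functions (CDFs)."""
    src = np.asarray(src).ravel()
    ref = np.asarray(ref).ravel()
    s_vals, s_idx, s_cnt = np.unique(src, return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    # For each source level, find the reference level with the same CDF.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx]
```

In the layered scheme, each depth layer of the source view would be matched against the corresponding layer of the reference view, instead of matching the two full frames at once.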

Development of 3-dimensional measuring robot cell (3차원 측정 로보트 셀 개발)

  • Park, Kang;Cho, Koung-Rae;Shin, Hyun-Oh;Kim, Mun-Sang
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1991.10a / pp.1139-1143 / 1991
  • Using industrial robots and sensors, we developed an in-line car-body inspection system that offers high flexibility and sufficient accuracy. The Car Body Inspection (CBI) cell consists of two industrial robots, two corresponding carriages, a camera vision system, a process computer with multi-tasking ability, and several LDSs. Since industrial robots guarantee sufficient repeatability, the CBI cell adopts relative measurement instead of absolute measurement: by comparing the measured data with reference data, the dimensional errors at the corresponding points can be calculated. The length of the robot arms changes with ambient temperature, which affects measuring accuracy. To compensate for this error, a robot-arm calibration process was implemented: by measuring a reference jig, the differential changes of the robot arms due to temperature fluctuation can be calculated and compensated.
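The reference-jig compensation described above amounts to deriving a correction from a feature of known size and applying it to subsequent measurements. A toy sketch (a single uniform scale factor is an assumption made here for illustration; the actual thermal model of a robot arm is more involved):

```python
def thermal_scale(measured_ref_len, nominal_ref_len):
    """Scale correction from measuring a reference jig of known length:
    if thermal expansion stretched the arm, measurements shrink back."""
    return nominal_ref_len / measured_ref_len

def compensate(point, scale):
    """Apply the scale correction to a measured coordinate tuple."""
    return [c * scale for c in point]
```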
