• Title/Summary/Keyword: Ground-based camera

A Vision-based Position Estimation Method Using a Horizon (Vision-based Position Estimation Method Using the Horizon, and Its Position Estimation Error)

  • Shin, Jong-Jin; Nam, Hwa-Jin; Kim, Byung-Ju
    • Journal of the Korea Institute of Military Science and Technology / v.15 no.2 / pp.169-176 / 2012
  • GPS (Global Positioning System) is widely used for position estimation of aerial vehicles. However, GPS may not be available due to hostile jamming or strategic reasons. A vision-based position estimation method can be effective when GPS does not work properly. In mountainous areas without any man-made landmarks, the horizon is a good feature for estimating the position of an aerial vehicle. In this paper, we present a new method to estimate the position of an aerial vehicle equipped with a forward-looking infrared camera. It is assumed that an INS (Inertial Navigation System) provides the attitudes of the aerial vehicle and the camera. The horizon extracted from an infrared image is compared with horizon models generated from a DEM (Digital Elevation Map). Because of the camera's narrow field of view, two images with different camera views are utilized to estimate a position. The algorithm is tested using real infrared images acquired on the ground. The experimental results show that the method can be used for estimating the position of an aerial vehicle.
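
A minimal sketch of the horizon-matching idea the abstract describes: candidate positions on the DEM are scored by how well their rendered horizon profiles match the horizon extracted from the infrared image. The function names and the SSD score below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def match_horizon(image_horizon, candidate_positions, render_horizon):
    """Return the candidate position whose DEM-rendered horizon best
    matches the horizon profile extracted from the image.

    image_horizon: 1-D array of horizon elevation angles per image column.
    render_horizon: callable(pos) -> 1-D array of the same length,
                    generated from the DEM for a candidate position.
    """
    best_pos, best_cost = None, np.inf
    for pos in candidate_positions:
        model = render_horizon(pos)                  # DEM-based horizon model
        cost = np.sum((image_horizon - model) ** 2)  # SSD similarity score
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos
```

In practice two such matches from the two camera views would be intersected to fix the position, as the abstract notes.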

A Monocular Vision Based Technique for Estimating Direction of 3D Parallel Lines and Its Application to Measurement of Pallets

  • Kim, Minhwan; Byun, Sungmin; Kim, Jin
    • Journal of Korea Multimedia Society / v.21 no.11 / pp.1254-1262 / 2018
  • Parallel lines appear frequently in real life and are useful for analyzing the structure of objects or buildings. In this paper, a vision-based technique for estimating the three-dimensional direction of parallel lines is suggested, which uses a calibrated camera and is applicable to images captured from that camera. The correctness of the technique is described and discussed theoretically. The technique is well suited to measuring the orientation of a pallet in a warehouse, because a pair of parallel lines is reliably detected in the front plane of the pallet. It thereby enables a forklift with a well-calibrated camera to engage the pallet automatically; such a forklift can engage a pallet on a storage rack as well as one on the ground. Usefulness of the suggested technique for other applications is also discussed. We conducted experiments measuring a real commercial pallet at various orientations and distances and found that the technique works correctly and accurately.
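
The geometric core of such a technique can be sketched as follows: the images of parallel 3D lines meet at a vanishing point v, and with a calibrated camera (intrinsic matrix K) the lines' 3D direction is d ~ K^-1 v. A minimal illustration, with assumed intrinsics and line coefficients:

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two image lines (homogeneous coordinates)."""
    return np.cross(l1, l2)

def direction_from_vanishing_point(K, l1, l2):
    v = line_intersection(l1, l2)     # vanishing point (homogeneous)
    d = np.linalg.inv(K) @ v          # back-project through the intrinsics
    return d / np.linalg.norm(d)      # unit 3D direction (sign ambiguous)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])   # assumed calibration
# Two image lines (a, b, c) with ax + by + c = 0, e.g. the top and
# bottom edges of a pallet's front face:
l1 = np.array([0.02, 1.0, -250.0])
l2 = np.array([0.05, 1.0, -400.0])
print(direction_from_vanishing_point(K, l1, l2))
```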

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan; Guan, Tao; Luo, Yawei; Wang, Yuesong; Chen, Zhuo; Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing image-level constraints between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps EpiLoc achieve state-of-the-art results in end-to-end camera localization.
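
A hedged sketch of the pixel-level relation such a constraint builds on: matched pixels x1, x2 in two views must satisfy x2^T F x1 = 0, where F is the fundamental matrix assembled from the relative pose between the two (predicted) cameras. Shapes and names below are assumptions for illustration; a training loss would penalize, e.g., the mean squared residual.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x of a 3-vector t."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def epipolar_residuals(K, R, t, x1, x2):
    """Algebraic epipolar errors for N pixel matches.

    R, t: relative rotation/translation from view 1 to view 2
          (in practice derived from the network's pose predictions).
    x1, x2: (N, 2) arrays of matched pixel coordinates.
    """
    E = skew(t) @ R                               # essential matrix
    Kinv = np.linalg.inv(K)
    F = Kinv.T @ E @ Kinv                         # fundamental matrix
    h1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous pixels
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    return np.einsum('ni,ij,nj->n', h2, F, h1)    # x2^T F x1 per match
```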

Vision-Based Collision-Free Formation Control of Multi-UGVs using a Camera on UAV

  • Choi, Francis Byonghwa; Ha, Changsu; Lee, Dongjun
    • Journal of the Korean Society for Precision Engineering / v.30 no.1 / pp.53-58 / 2013
  • In this paper, we present a framework for collision avoidance of UGVs via vision-based control. On the image plane of a perspective camera rigidly attached to a UAV in stationary hover, the image features of the UGVs are controlled by our framework so that they proceed to desired locations while avoiding collisions. The UGVs are modeled as unicycle wheeled mobile robots with nonholonomic constraints, and they follow the image features' movement on the ground plane using a low-level controller. We use a potential function method to guarantee collision avoidance and show its stability. Simulation results are presented to validate the capability and stability of the proposed framework.
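
A minimal sketch of the potential function method named above, under assumed gains and radii: each image feature descends an attractive potential toward its goal while a repulsive potential pushes features apart inside a safety radius.

```python
import numpy as np

def potential_step(p, goals, k_att=1.0, k_rep=0.5, d0=0.3, dt=0.05):
    """One gradient-descent step for N feature points p, shape (N, 2)."""
    vel = -k_att * (p - goals)                 # attractive gradient
    for i in range(len(p)):
        for j in range(len(p)):
            if i == j:
                continue
            diff = p[i] - p[j]
            d = np.linalg.norm(diff)
            if d < d0:                         # inside influence radius
                # descent on 0.5 * k_rep * (1/d - 1/d0)^2 pushes i away from j
                vel[i] += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return p + dt * vel

p = np.array([[0.0, 0.0], [0.2, 0.1]])         # two feature points
goals = np.array([[1.0, 1.0], [0.0, 1.0]])
for _ in range(200):
    p = potential_step(p, goals)
```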

3D Information-based Visualization System for Real-Time Teleoperation of Unmanned Ground Vehicles

  • Jang, Ga-Ram; Bae, Ji-Hun; Lee, Dong-Hyuk; Park, Jae-Han
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.220-229 / 2018
  • In disaster scenarios, such as an earthquake site or a nuclear radiation exposure area, sending human crews entails huge risks. Many robotics researchers have therefore studied sending UGVs to replace human crews in dangerous environments. So far, two-dimensional camera information has been widely used for UGV teleoperation; recently, teleoperation based on three-dimensional information has been attempted to compensate for the limitations of camera-based teleoperation. In this paper, 3D maps of indoor and outdoor environments reconstructed in real time are utilized for UGV teleoperation. Furthermore, we apply LTE communication technology to ensure the stability of the teleoperation even in degraded communication environments. The proposed teleoperation system was applied to explosive disposal missions, and its feasibility was verified by completing those missions using the UGV together with the Explosive Ordnance Disposal (EOD) team of the Busan Port Security Corporation.

Application of UAV-based RGB Images for the Growth Estimation of Vegetable Crops

  • Kim, Dong-Wook; Jung, Sang-Jin; Kwon, Young-Seok; Kim, Hak-Jin
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2017.04a / pp.45-45 / 2017
  • On-site monitoring of vegetable growth parameters, such as leaf length, leaf area, and fresh weight, in an agricultural field can provide useful information for farmers to establish farm management strategies suitable for optimum production of vegetables. Unmanned Aerial Vehicles (UAVs) are currently gaining growing interest for agricultural applications. This study reports on validation testing of previously developed vegetable growth estimation models based on UAV RGB images for white radish and Chinese cabbage. The specific objective was to investigate the potential of the UAV-based RGB camera system for effectively quantifying temporal and spatial variability in the growth status of white radish and Chinese cabbage in a field. RGB images were acquired during an automated flight mission with a multi-rotor UAV equipped with a low-cost RGB camera automatically tracking a predefined path. The acquired images were first geo-located using the flight log data saved by the UAV and then mosaicked using commercial image processing software. Otsu-threshold-based crop coverage and DSM-based crop height were used as the two predictor variables of the previously developed multiple linear regression models to estimate the growth parameters of the vegetables. The predictive capability of the UAV sensing system for estimating the growth parameters of the two vegetables was evaluated quantitatively by comparison with ground truth data. There were highly linear relationships between the actual and estimated leaf lengths, widths, and fresh weights, with coefficients of determination up to 0.7. However, the slopes between the ground-truth and estimated values were lower than 0.5, thereby requiring a site-specific normalization method.
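
The two predictors the abstract names can be sketched roughly as follows; the vegetation index, coefficients, and thresholding details are placeholders, not the study's fitted model.

```python
import cv2
import numpy as np

def crop_coverage(exg):
    """Fraction of vegetated pixels via Otsu's threshold on an
    excess-green (ExG) index image rescaled to 8-bit."""
    img8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return float(np.count_nonzero(mask)) / mask.size

def crop_height(dsm, ground_level):
    """Mean canopy height: DSM elevation minus bare-ground elevation."""
    return float(np.mean(dsm - ground_level))

def estimate_growth(coverage, height, b=(0.0, 1.0, 1.0)):
    """Multiple linear regression with placeholder coefficients b."""
    return b[0] + b[1] * coverage + b[2] * height
```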

Video-based Height Measurements of Multiple Moving Objects

  • Jiang, Mingxin; Wang, Hongyu; Qiu, Tianshuang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.9 / pp.3196-3210 / 2014
  • This paper presents a novel video metrology approach based on robust tracking. From videos acquired by an uncalibrated stationary camera, the foreground likelihood map is obtained using the Codebook background modeling algorithm, and multiple moving objects are tracked by a combined tracking algorithm. We then compute the vanishing line of the ground plane and the vertical vanishing point of the scene, and extract the head and feet feature points in each frame of the video sequences. Finally, we apply a single-view mensuration algorithm to each frame to obtain height measurements and fuse the multi-frame measurements using the RANSAC algorithm. Compared with other popular methods, the proposed algorithm does not require camera calibration and can track multiple moving objects when occlusion occurs. It therefore reduces computational complexity and improves measurement accuracy simultaneously. The experimental results demonstrate that our method is effective and robust to occlusion.
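
The single-view mensuration step can be sketched with the standard cross-ratio formula from single-view metrology (the textbook form, not necessarily this paper's exact implementation): with the ground plane's vanishing line l, the vertical vanishing point v, and an object's feet point b and head point t in homogeneous pixel coordinates, metric height follows once one reference height fixes the global scale.

```python
import numpy as np

def height_ratio(l, v, b, t):
    """Quantity proportional to metric height:
    |b x t| / (|l . b| * |v x t|), all inputs homogeneous 3-vectors."""
    return np.linalg.norm(np.cross(b, t)) / (
        abs(np.dot(l, b)) * np.linalg.norm(np.cross(v, t)))

def metric_height(l, v, b, t, b_ref, t_ref, ref_height):
    """Calibrate the global scale with one object of known height."""
    return ref_height * height_ratio(l, v, b, t) / height_ratio(l, v, b_ref, t_ref)
```

The ratio is invariant to the homogeneous scaling of the points and of l and v, which is why no camera calibration is needed.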

Vision-based Ground Test for Active Debris Removal

  • Lim, Seong-Min; Kim, Hae-Dong; Seong, Jae-Dong
    • Journal of Astronomy and Space Sciences / v.30 no.4 / pp.279-290 / 2013
  • Due to continuous space development, the number of space objects, including space debris, in Earth orbit has increased, and accordingly, difficulties for space development and activities are expected in the near future. In this study, among the stages of space debris removal, we describe the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state, together with the ground test results. For vision-based object tracking, the fast and robust CAM-shift algorithm was combined with a Kalman filter. A stereo camera was used to measure the distance to the tracked object. For a low-cost space environment simulation test bed, a sun simulator was used, and a two-dimensional mobile robot served as the approach platform. The tracking status was examined while changing the position of the sun simulator; the results indicated that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests are proposed.
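
A hedged sketch of this kind of pipeline using OpenCV's stock CAMShift and Kalman filter (parameter values are illustrative assumptions): CAM-shift tracks the target in a back-projected color histogram image, the Kalman filter smooths the track, and the stereo camera yields range as Z = f * B / disparity.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)   # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track_step(back_projection, window):
    """One CAM-shift + Kalman update; returns smoothed center and window."""
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rot_rect, window = cv2.CamShift(back_projection, window, crit)
    kf.predict()
    center = np.array(rot_rect[0], np.float32).reshape(2, 1)
    est = kf.correct(center)
    return est[:2].ravel(), window

def stereo_range(f_px, baseline_m, disparity_px):
    """Pinhole stereo range: Z = f * B / d."""
    return f_px * baseline_m / disparity_px
```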

TELEMETRY TIMING ANALYSIS FOR IMAGE RECONSTRUCTION OF KOMPSAT SPACECRAFT

  • Lee, Jin-Ho; Chang, Young-Keun
    • Journal of Astronomy and Space Sciences / v.17 no.1 / pp.117-122 / 2000
  • The KOMPSAT (Korea Multi-Purpose Satellite) has two optical imaging instruments, the EOC (Electro-Optical Camera) and the OSMI (Ocean Scanning Multispectral Imager). The image data of these instruments are transmitted to the ground station and restored correctly after post-processing with the telemetry data transferred from the KOMPSAT spacecraft. The major timing information of KOMPSAT is the OBT (On-Board Time), which is formatted by the spacecraft's on-board computer based on the 1 Hz sync pulse from the on-board GPS receiver. The OBT is transmitted to the ground station with the spacecraft's housekeeping telemetry, while it is distributed to the instruments via the 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data is directly related to the instruments' image data and must be well understood to obtain more accurate images. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis, for the correct restoration of EOC and OSMI image data at the ground station.
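
Purely as an illustration of the OBT-to-ground-time correlation this kind of analysis rests on (not the KOMPSAT flight software; the tick size below is an invented assumption), an OBT stamp can be mapped to UTC via the latest (OBT, UTC) pair latched at a 1 Hz sync pulse:

```python
def obt_to_utc(obt, obt_at_pulse, utc_at_pulse, tick_seconds=1.0 / 256):
    """Map an OBT counter value to UTC seconds using the most recent
    (OBT, UTC) pair latched at the 1 Hz GPS sync pulse.
    The tick size is an assumed placeholder, not the KOMPSAT value."""
    return utc_at_pulse + (obt - obt_at_pulse) * tick_seconds

# e.g. an image line stamped at OBT 123456, with the last pulse latched
# at (OBT 123200, UTC 1.0e9 s):
line_utc = obt_to_utc(123456, 123200, 1.0e9)
```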

Rigorous Modeling of the First Generation of the Reconnaissance Satellite Imagery

  • Shin, Sung-Woong; Schenk, Tony
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.223-233 / 2008
  • In the mid-90s, the U.S. government released images acquired by the first generation of photo reconnaissance satellite missions between 1960 and 1972. The Declassified Intelligence Satellite Photographs (DISP) from the Corona mission are of high quality, with an astounding ground resolution of about 2 m. The KH-4A panoramic camera system employed a scan angle of 70° that produces film strips with a dimension of 55 mm × 757 mm. Since GPS/INS did not exist at the time of data acquisition, the exterior orientation must be established in the traditional way, using control information and the interior orientation of the camera. Detailed information about the camera is not available, however. For reconstructing points in object space from DISP imagery to an accuracy comparable to its high resolution (a few meters), a precise camera model is essential. This paper is concerned with the derivation of a rigorous mathematical model for the KH-4A/B panoramic camera. The proposed model is compared with generic sensor models, such as affine transformations and rational functions. The paper concludes with experimental results concerning the precision of reconstructed points in object space. The rigorous mathematical model for the KH-4A panoramic camera system is based on extended collinearity equations, assuming that the satellite trajectory during one scan is smooth and the attitude remains unchanged. As a result, the collinearity equations express the perspective center as a function of the scan time. With the known satellite velocity, this translates into an along-track shift. Therefore, the exterior orientation contains seven parameters to be estimated. The reconstruction of object points can then be performed with the exterior orientation parameters, either by intersecting bundle rays with a known surface or by using the stereoscopic KH-4A arrangement with fore and aft cameras mounted at an angle of 30°.
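
A sketch of the time-dependent collinearity model the abstract outlines, under its stated assumptions (smooth trajectory and fixed attitude during one scan): the perspective center drifts along-track with the known satellite velocity, so the image coordinates become functions of the scan time t. Here the r_ij are entries of the fixed rotation matrix, f is the focal length, and (X0, Y0, Z0) is the perspective center at the start of the scan; the exact parameterization in the paper may differ.

```latex
\begin{align*}
  X_0(t) &= X_0 + v_X\,t, \qquad
  Y_0(t)  = Y_0 + v_Y\,t, \qquad
  Z_0(t)  = Z_0 + v_Z\,t, \\[2pt]
  x(t) &= -f\,
    \frac{r_{11}\,(X - X_0(t)) + r_{21}\,(Y - Y_0(t)) + r_{31}\,(Z - Z_0(t))}
         {r_{13}\,(X - X_0(t)) + r_{23}\,(Y - Y_0(t)) + r_{33}\,(Z - Z_0(t))}, \\[2pt]
  y(t) &= -f\,
    \frac{r_{12}\,(X - X_0(t)) + r_{22}\,(Y - Y_0(t)) + r_{32}\,(Z - Z_0(t))}
         {r_{13}\,(X - X_0(t)) + r_{23}\,(Y - Y_0(t)) + r_{33}\,(Z - Z_0(t))}
\end{align*}
```

With the velocity known, the along-track motion adds one scan-timing parameter to the three position and three attitude unknowns, giving the seven exterior orientation parameters the abstract mentions.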