• Title/Summary/Keyword: Multi-Sensor Image


A Study on Aerial Triangulation from Multi-Sensor Imagery

  • Lee, Young-Ran;Habib, Ayman;Kim, Kyung-Ok
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.3
    • /
    • pp.255-261
    • /
    • 2003
  • Recently, an enormous volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining imagery from diverse sources is an important requirement in many applications such as data fusion, city modeling, and object recognition. Aerial triangulation is the procedure for reconstructing object space from imagery. However, because different kinds of imagery have their own sensor models, characteristics, and resolutions, the previous approach to aerial triangulation (or georeferencing) processed each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, pushbroom, and whiskbroom cameras. The limitations of pushbroom or whiskbroom sensor models can be compensated for by combined triangulation with other sensors. Experiments conducted in this study show that the object space reconstructed from multi-sensor triangulation is more accurate than that from a single sensor model.
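The core step of any aerial triangulation, single- or multi-sensor, is intersecting image rays to reconstruct object-space points. A minimal sketch of that step (not the paper's full combined adjustment) using standard linear triangulation of one point seen by two toy pin-hole cameras:

```python
import numpy as np

def triangulate(projections, pixels):
    """Linear triangulation of one object point from >= 2 images.
    Each row pair comes from the collinearity condition x ~ P @ X
    written as a homogeneous linear system A X = 0."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy pin-hole cameras observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 m in X
X_true = np.array([1.0, 2.0, 10.0])
pix = [project(P1, X_true), project(P2, X_true)]
X_hat = triangulate([P1, P2], pix)
```

With more images (and heterogeneous sensor models), more row pairs are stacked into the same system, which is the intuition behind combining sensors in one adjustment.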

External Light Evasion Method for Large Multi-touch Screens

  • Park, Young-Jin;Lyu, Hong-Kun;Lee, Sang-Kook;Cho, Hui-Sup
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.4
    • /
    • pp.226-233
    • /
    • 2014
  • This paper presents an external-light evasion method that rectifies misrecognition caused by external lighting. The fundamental concept is to take the difference between two images and eliminate desynchronized external light by synchronizing the image sensor with the inner light source of the optical touch screen. A range of artificial indoor light sources and natural sunlight are assessed. The proposed system synchronizes the light-source drive signal with the Vertical Synchronization (VSYNC) signal of the image sensor. It can therefore retain light that is synchronized with the acquired image and remove external light that does not come from the inner light source. A subtraction operation is used to find the differences, and the absolute value of the result is utilized, so the order of the operands is irrelevant. The resulting image, which displays only the touched blob on the touch screen, is then processed for coordinate recognition and supplied to a coordinate-extraction algorithm.
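The synchronized-subtraction idea can be sketched in a few lines; the frames, threshold, and blob here are toy values, not the paper's hardware pipeline:

```python
import numpy as np

def touch_blob(lit_frame, unlit_frame, threshold=30):
    """Absolute difference between a frame captured with the inner light
    on and one with it off; steady external light cancels out, leaving
    only reflections produced by the synchronized inner light source."""
    diff = np.abs(lit_frame.astype(np.int16) - unlit_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy 8x8 frames: constant ambient light plus a bright touch reflection.
ambient = np.full((8, 8), 120, dtype=np.uint8)
lit = ambient.copy()
lit[3:5, 3:5] = 220          # finger reflection under the inner light
mask = touch_blob(lit, ambient)
```

Because the absolute value is taken, swapping `lit_frame` and `unlit_frame` yields the same mask, matching the order-irrelevance noted in the abstract.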

Auto-parking Controller of Omnidirectional Mobile Robot Using Image Localization Sensor and Ultrasonic Sensors (영상위치센서와 초음파센서를 사용한 전 방향 이동로봇의 자동주차 제어기)

  • Yun, Him Chan;Park, Tae Hyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.6
    • /
    • pp.571-576
    • /
    • 2015
  • This paper proposes an auto-parking controller for omnidirectional mobile robots. The controller uses a multi-sensor system comprising ultrasonic sensors and a camera. The robot's ultrasonic sensors detect the distance between the robot and each wall of the parking lot, while the camera estimates the global position of the robot by capturing images of artificial landmarks. To improve the accuracy of position estimation, we applied an extended Kalman filter with an adaptive fuzzy controller. We also developed a fuzzy control system to reduce the settling time of parking. Experimental results are presented to verify the usefulness of the proposed controller.
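The paper's estimator is an extended Kalman filter with an adaptive fuzzy gain; as a minimal illustration of the underlying fusion step only, inverse-variance weighting of two independent position measurements (the steady-state form of a Kalman update) looks like this, with toy numbers:

```python
def fuse(z_cam, var_cam, z_us, var_us):
    """Optimal linear fusion of two independent measurements of the same
    quantity: weight each by the inverse of its variance."""
    w = var_us / (var_cam + var_us)
    estimate = w * z_cam + (1.0 - w) * z_us
    variance = (var_cam * var_us) / (var_cam + var_us)
    return estimate, variance

# Camera landmark fix: 1.00 m (var 0.04); ultrasonic range: 1.20 m (var 0.01).
est, var = fuse(1.00, 0.04, 1.20, 0.01)
```

The fused variance is always smaller than either input variance, which is the quantitative reason for combining the camera and ultrasonic readings rather than trusting one alone.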

Development of Multi-sensor Image Fusion software(InFusion) for Value-added applications (고부가 활용을 위한 이종영상 융합 소프트웨어(InFusion) 개발)

  • Choi, Myung-jin;Chung, Inhyup;Ko, Hyeong Ghun;Jang, Sumin
    • Journal of Satellite, Information and Communications
    • /
    • v.12 no.3
    • /
    • pp.15-21
    • /
    • 2017
  • Following the successful launch of KOMPSAT-3 in May 2012, KOMPSAT-5 in August 2013 and KOMPSAT-3A in March 2015 were also launched successfully, enabling the integrated operation of optical, radar, and thermal infrared sensors in Korea. This established a foundation for exploiting the characteristics of each sensor. To overcome the limits on the range of applications and the accuracy achievable with a single sensor, multi-sensor image fusion techniques have been developed that take advantage of multiple sensors and complement each other. In this paper, we introduce the development of software (InFusion) for multi-sensor image fusion and value-added product generation using the KOMPSAT series. We first describe the characteristics of each sensor and the necessity of fusion software development, and then describe the entire development process. The work aims to increase the data utilization of the KOMPSAT series and to demonstrate the quality of domestic software through the creation of high value-added products.
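As one simple example of the kind of fusion product such software generates (not InFusion's actual algorithm), the Brovey transform pan-sharpens multispectral bands using a higher-resolution intensity image:

```python
import numpy as np

def brovey(pan, ms):
    """Brovey transform pan-sharpening: scale each multispectral band by
    the ratio of the panchromatic intensity to the band-mean intensity."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + 1e-12))  # epsilon avoids divide-by-zero

# 2x2 toy scene: 3 identical multispectral bands, higher-contrast pan band.
ms = np.array([[[0.2, 0.4], [0.6, 0.8]]] * 3)   # shape (bands, rows, cols)
pan = np.array([[0.3, 0.3], [0.9, 0.9]])
sharp = brovey(pan, ms)
```

In this degenerate toy case every band equals the intensity, so each sharpened band simply takes on the pan values; on real data the spectral ratios between bands are preserved while spatial detail comes from the pan image.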

Multi-Image RPCs Sensor Modeling of High-Resolution Satellite Images Without GCPs (고해상도 위성영상 무기준점 기반 다중영상 센서 모델링)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.533-540
    • /
    • 2021
  • High-resolution satellite images have high potential for acquiring geospatial information over inaccessible areas such as Antarctica. Reference data are often required to increase the positional accuracy of satellite data, but such data are unavailable in many inland areas of Antarctica. Therefore, this paper presents a multi-image RPC (Rational Polynomial Coefficients) sensor modeling method that requires no ground control or reference data. Conjugate points between the multiple images are extracted and used for the multi-image sensor modeling. An experiment on Kompsat-3A imagery showed that, while no significant overall accuracy increase was observed, the approach has potential to suppress maximum errors, especially vertical errors.
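An RPC model maps normalized ground coordinates to normalized image coordinates as a ratio of two cubic polynomials. A minimal evaluation sketch follows; the 20-term ordering shown is the common RPC00B convention, but the metadata of a given product governs the actual order:

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20 cubic terms of the RPC model in normalized latitude (P),
    longitude (L), and height (H), in RPC00B-style order."""
    return np.array([
        1.0, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H,
        L*L*H, P*P*H, H**3,
    ])

def rpc_project(num, den, P, L, H):
    """One normalized image coordinate (line or sample) as a ratio of two
    cubic polynomials evaluated at a normalized ground point."""
    t = rpc_terms(P, L, H)
    return (num @ t) / (den @ t)

# Hypothetical degenerate coefficients: coordinate proportional to longitude.
num = np.zeros(20); num[1] = 1.0
den = np.zeros(20); den[0] = 1.0
r = rpc_project(num, den, P=0.2, L=0.4, H=0.1)
```

In the GCP-free setting described above, conjugate points constrain bias-compensation terms added to such RPC projections across the overlapping images.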

Feature Matching using Variable Circular Template for Multi-resolution Image Registration (다중 해상도 영상 등록을 위한 가변 원형 템플릿을 이용한 특징 정합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_3
    • /
    • pp.1351-1367
    • /
    • 2018
  • Image registration is an essential process for image fusion, change detection, and time-series analysis using multi-sensor images. For this purpose, we need to accurately detect the differences in scale and rotation between multi-sensor images with different spatial resolutions. In this paper, we propose a new feature matching method using a variable circular template for registration between multi-resolution images. The proposed method creates a circular template centered on a feature point in the coarse-scale image and a variable circular template in the fine-scale image. After adjusting the scale of the variable circular template, we rotate it through each predefined angle and compute the mutual information between the two circular templates; we then take the scale, rotation angle, and center location of the variable circular template in the fine-scale image at which the mutual information is maximized. The proposed method was tested using Kompsat-2, Kompsat-3, and Kompsat-3A images with different spatial resolutions. The experimental results showed that the error in scale factor, the error in rotation angle, and the localization error of the control points were less than 0.004, 0.3°, and one pixel, respectively.
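The similarity measure used here, mutual information, can be computed from the joint grey-level histogram of two equally sized patches. A minimal sketch with toy patches (the bin count and patch size are illustrative, not the paper's settings):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information of two patches from their joint histogram:
    MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ) over non-empty bins."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
noise = rng.random((32, 32))
mi_self = mutual_information(patch, patch)   # maximal: patch vs itself
mi_cross = mutual_information(patch, noise)  # near zero: independent data
```

Because MI depends only on the joint intensity statistics, it tolerates the differing radiometry of multi-sensor images better than direct correlation, which is why the matching search maximizes it over scale and rotation.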

RADIOMETRIC CALIBRATION OF OSMI IMAGERY USING SOLAR CALIBRATION (SOLAR CALIBRATION을 이용한 OSMI 영상자료의 복사 보정)

  • Lee, Dong-Han;Kim, Yong-Seung
    • Journal of Astronomy and Space Sciences
    • /
    • v.17 no.2
    • /
    • pp.295-308
    • /
    • 2000
  • OSMI (Ocean Scanning Multi-spectral Imager) raw image data (Level 0) were acquired and radiometrically corrected. We applied two methods to the radiometric correction of the OSMI raw image data: using solar and dark calibration data from the OSMI sensor, and comparing with SeaWiFS data. First, we obtained gain and offset values for each pixel and each band by comparing the solar and dark calibration data with the solar input radiance values calculated from the transmittance, the BRDF (Bidirectional Reflectance Distribution Function), and the solar incidence angles (β, θ) of the OSMI sensor. Applying this calibration to the OSMI raw image data produced two anomalous results: corrected image values lower than expected, and a Venetian-blind effect in the corrected imagery. Second, we obtained reasonable results by comparing the OSMI raw image data with SeaWiFS data, and in doing so identified a new problem with the OSMI sensor.

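The per-pixel calibration described above is a linear mapping from digital numbers to radiance. A minimal sketch with hypothetical gain and offset values (the actual OSMI coefficients come from the solar and dark calibration data):

```python
import numpy as np

def radiometric_correct(dn, gain, offset):
    """Per-pixel linear calibration: radiance = gain * DN + offset.
    gain and offset have the same shape as the image so that each
    detector element carries its own coefficients."""
    return gain * dn.astype(np.float64) + offset

dn = np.array([[100, 200], [300, 400]])   # raw digital numbers (Level 0)
gain = np.full((2, 2), 0.05)              # hypothetical per-pixel gains
offset = np.full((2, 2), -1.0)            # hypothetical dark offsets
rad = radiometric_correct(dn, gain, offset)
```

Striping artifacts like the Venetian-blind effect mentioned above typically appear when per-detector gains in such a model are inconsistent across the array.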

Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1125-1139
    • /
    • 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the GPS/IMU (global positioning system/inertial measurement unit) sensor and an observation sensor such as a hyperspectral sensor or a lidar sensor. Because of this physical offset, misalignments between images can occur along the flight direction. In particular, in a multi-sensor system, observation sensors must be swapped regularly, and acquiring a new calibration parameter each time is costly. In this study, we establish a precise sensor model equation that applies to multiple sensors in common and propose an independent physical-offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical-offset estimation is established by extracting correspondences between ground control points and the observed sensor data. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions at different latitudes and longitudes (Jeonju, Incheon, Alaska, Norway) were compared to analyze the effects of the calibration parameters. We confirmed that misalignments between images were corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed for the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. Furthermore, the relative position accuracy for specific points between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method and that multi-sensor fusion is feasible. From this study, we expect that a flexible multi-sensor platform can be operated economically through the independent parameter estimation method.
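The direct-georeferencing step with a lever-arm (physical-offset) correction can be sketched as follows; the rotation, offset, and range values are toy numbers, not the paper's estimated parameters:

```python
import numpy as np

def direct_georeference(p_gps, R_body_to_world, lever_arm, ray_body, range_m):
    """Ground point = GPS/IMU position + rotated physical offset (lever arm)
    + rotated observation ray scaled by the measured range."""
    sensor_pos = p_gps + R_body_to_world @ lever_arm
    return sensor_pos + range_m * (R_body_to_world @ ray_body)

# Toy case: level flight (identity attitude), sensor offset from the GPS.
p_gps = np.array([100.0, 50.0, 30.0])   # GPS/IMU position in world frame
R = np.eye(3)                           # body-to-world rotation (level)
lever = np.array([-0.2, 0.0, -0.1])     # physical offset in the body frame
ray = np.array([0.0, 0.0, -1.0])        # nadir-looking ray in body frame
ground = direct_georeference(p_gps, R, lever, ray, range_m=29.9)
```

Estimating `lever` (and any boresight rotation) from ground-control correspondences, instead of measuring it per sensor swap, is the cost saving the abstract describes.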

A Study on the Development of Zigbee Wireless Image Transmission and Monitoring System (지그비 무선 이미지 전송 및 모니터링 시스템 개발에 대한 연구)

  • Roh, Jae-sung;Kim, Sang-il;Oh, Kyu-tae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.631-634
    • /
    • 2009
  • Recent advances in wireless communication, electronics, MEMS devices, sensors, and battery technology have made it possible to manufacture low-cost, low-power, multi-function tiny sensor nodes. A large number of tiny sensor nodes form a sensor network through wireless communication. Sensor networks represent a significant improvement over traditional sensors, and Zigbee wireless image transmission has become a research topic in industrial and scientific fields. In this paper, we design a Zigbee wireless image sensor node and a multimedia monitoring server system. The node consists of an embedded processor, memory, a CMOS image sensor, an image acquisition and processing unit, a Zigbee RF module, and a power supply unit, and it communicates with a remote monitoring server system. In the future, we will further improve the Zigbee wireless image sensor node and monitoring server system; an energy-efficient Zigbee wireless image transmission protocol and interworking with mobile networks will also be our focus.

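Because an IEEE 802.15.4 frame carries only a small payload, a Zigbee image node must fragment each captured image for transmission. A minimal sketch of fragmentation and reassembly (the 80-byte payload size is an assumption, and a real protocol would add acknowledgement and loss handling):

```python
def chunk_image(payload: bytes, mtu: int = 80):
    """Split an image byte stream into Zigbee-sized fragments, each tagged
    with a sequence number so the monitoring server can reassemble them.
    80 bytes approximates the usable 802.15.4 payload after headers."""
    return [(seq, payload[i:i + mtu])
            for seq, i in enumerate(range(0, len(payload), mtu))]

def reassemble(fragments):
    """Sort by sequence number and concatenate the fragment payloads."""
    return b"".join(data for _, data in sorted(fragments))

img = bytes(range(256)) * 2          # 512-byte stand-in for a JPEG frame
frags = chunk_image(img)
```

Sequence numbering makes reassembly order-independent, which matters on a lossy low-rate link where retransmitted fragments arrive out of order.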