• Title/Summary/Keyword: Multi-sensor images


Assessing Spatial Uncertainty Distributions in Classification of Remote Sensing Imagery using Spatial Statistics (공간 통계를 이용한 원격탐사 화상 분류의 공간적 불확실성 분포 추정)

  • Park No-Wook;Chi Kwang-Hoon;Kwon Byung-Doo
    • Korean Journal of Remote Sensing / v.20 no.6 / pp.383-396 / 2004
  • The application of spatial statistics to obtaining spatial uncertainty distributions in the classification of remote sensing images is investigated in this paper. Two quantitative methods are presented for describing two kinds of uncertainty: one related to class assignment and the other related to the reference samples. Three quantitative indices are addressed for the first category of uncertainty. Geostatistical simulation is applied both to integrate the exhaustive classification results with the sparse reference samples and to obtain the spatial uncertainty or accuracy distributions associated with those reference samples. To illustrate the proposed methods and discuss operational issues, an experiment was carried out on a multi-sensor remote sensing data set for supervised land-cover classification. The experimental results showed that the two quantitative methods could provide additional information for interpreting and evaluating the classification results, although more experiments are needed to verify the presented methods.
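
The abstract does not name the three class-assignment uncertainty indices; purely as an illustrative sketch, three measures commonly derived from per-pixel class posterior probabilities (maximum probability, probability margin, and entropy) can be computed as below. The array layout and the measures themselves are assumptions, not the paper's definitions.

```python
import numpy as np

def class_assignment_uncertainty(post_prob):
    """Per-pixel uncertainty measures from class posterior probabilities.

    post_prob: array of shape (n_classes, rows, cols) whose values sum to 1
    along the class axis (e.g. MLC or softmax outputs).
    """
    sorted_p = np.sort(post_prob, axis=0)          # ascending along the class axis
    max_prob = sorted_p[-1]                        # probability of the chosen class
    margin = sorted_p[-1] - sorted_p[-2]           # gap to the runner-up class
    entropy = -np.sum(post_prob * np.log(post_prob + 1e-12), axis=0)
    return 1.0 - max_prob, 1.0 - margin, entropy   # higher = more uncertain

# Example: a 3-class posterior map for a 2x2 image
p = np.array([[[0.7, 0.4], [0.2, 0.34]],
              [[0.2, 0.3], [0.5, 0.33]],
              [[0.1, 0.3], [0.3, 0.33]]])
u_max, u_margin, u_ent = class_assignment_uncertainty(p)
```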

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance (그레디언트 및 분산을 이용한 웨이블릿 기반의 광학 및 레이더 영상 융합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.581-591 / 2010
  • In this paper, we propose a new wavelet-based image fusion algorithm, which has advantages in both the frequency and spatial domains for signal analysis. The developed algorithm compares the ratio of the SAR image signal to the optical image signal and assigns the SAR image signal to the fused image if the ratio is larger than a predefined threshold value. If the ratio is smaller than the threshold, the fused image signal is determined by a weighted sum of the optical and SAR image signals. The fusion rules consider the ratio of the SAR image signal to the optical image signal, the image gradient, and the local variance of each image signal. We evaluated the proposed algorithm using Ikonos and TerraSAR-X satellite images. The proposed method showed better performance, in terms of entropy, image clarity, spatial frequency, and speckle index, than conventional methods that take only relatively strong SAR image signals into the fused image.
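
As a rough illustration of the ratio-based fusion rule described above (not the paper's implementation), the following sketch applies a SAR/optical ratio test to wavelet coefficients using PyWavelets; the threshold and the fixed weights are placeholders, and the gradient/variance weighting mentioned in the abstract is omitted for brevity.

```python
import numpy as np
import pywt

def fuse_wavelet(optical, sar, threshold=1.5, w_opt=0.6, w_sar=0.4, wavelet="db2"):
    """Hedged sketch of a ratio-based wavelet fusion rule (illustrative weights only).

    Where the SAR/optical coefficient ratio exceeds the threshold, the SAR
    coefficient is kept; otherwise a weighted sum of both is used.
    """
    c_opt = pywt.wavedec2(optical.astype(float), wavelet, level=2)
    c_sar = pywt.wavedec2(sar.astype(float), wavelet, level=2)

    def fuse(a, b):
        ratio = np.abs(b) / (np.abs(a) + 1e-12)        # SAR / optical
        return np.where(ratio > threshold, b, w_opt * a + w_sar * b)

    fused = [fuse(c_opt[0], c_sar[0])]                  # approximation coefficients
    for (oh, ov, od), (sh, sv, sd) in zip(c_opt[1:], c_sar[1:]):
        fused.append((fuse(oh, sh), fuse(ov, sv), fuse(od, sd)))
    return pywt.waverec2(fused, wavelet)
```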

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems / v.14 no.6 / pp.1445-1456 / 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide an unmanned ground vehicle (UGV) with driving-awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules, namely, multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the computer graphics and image processing algorithms in parallel.
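
A minimal sketch of the LiDAR-to-image mapping step mentioned for the 3D reconstruction module, assuming a pre-calibrated 3×4 projection matrix; the calibration itself and the GPU parallelization are not shown, and the function name is illustrative.

```python
import numpy as np

def project_lidar_to_image(points_xyz, P):
    """Project Nx3 LiDAR points into pixel coordinates with a 3x4 projection matrix P.

    P is assumed to combine the camera intrinsics with the LiDAR-to-camera
    extrinsics (P = K [R | t]); estimating it is outside this sketch.
    """
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # to homogeneous
    uvw = pts_h @ P.T                                    # (N, 3) homogeneous pixel coords
    in_front = uvw[:, 2] > 0                             # keep points in front of the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]          # perspective divide
    return uv, in_front
```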

A grid-line suppression technique based on the nonsubsampled contourlet transform in digital radiography

  • Namwoo Kim;Taeyoung Um;Hyun Tae Leem;Bon Tack Koo;Kyuseok Kim;Kyu Bom Kim
    • Nuclear Engineering and Technology / v.55 no.2 / pp.655-668 / 2023
  • In radiography, an antiscatter grid is a well-known device for eliminating unwanted x-ray scatter. We investigate a new stationary grid artifact suppression method based on the nonsubsampled contourlet transform (NSCT) incorporated with Gaussian band-pass filtering. The proposed method extracts the Moiré components while minimizing the loss of image information, applying prior knowledge of the Moiré component positions in the multi-decomposition sub-band images. We implemented the proposed algorithm and performed a simulation and an experiment to demonstrate its viability. The experiment used an x-ray tube (M-113T, Varian, focal spot size: 0.1 mm), a flat-panel detector (ROSE-M Sensor, Aspenstate, pixel dimension: 3032 × 3800 pixels, pixel size: 0.076 mm), and carbon graphite-interspaced grids (JPI Healthcare, 18 cm × 24 cm, line density: 103 LP/inch and 150 LP/inch, ratio: 5:1, focal distance: 65 cm). Our results indicate that the proposed method successfully suppressed grid artifacts without reducing the spatial resolution or causing negative side effects. Consequently, we anticipate that the proposed method can improve image acquisition in stationary grid x-ray systems as well as in extended x-ray imaging.
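
The NSCT decomposition itself is not reproduced here; as a simplified stand-in, the sketch below attenuates a narrow band of vertical spatial frequencies with a Gaussian band-reject filter, which conveys the basic idea of suppressing a periodic grid-line (Moiré) component. The filter parameters are placeholders, not values from the paper.

```python
import numpy as np

def suppress_periodic_lines(image, center_freq, bandwidth):
    """Simplified stand-in for grid-line suppression: attenuate a narrow band of
    vertical spatial frequencies with a Gaussian notch.

    center_freq and bandwidth are in cycles/pixel; the paper's NSCT-based
    multi-scale decomposition is not reproduced here.
    """
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]                   # vertical spatial frequencies
    spectrum = np.fft.fft2(image.astype(float))
    # Gaussian band-reject response centred on +/- center_freq
    reject = np.exp(-((np.abs(fy) - center_freq) ** 2) / (2 * bandwidth ** 2))
    spectrum *= (1.0 - reject)                           # suppress the grid-line band
    return np.real(np.fft.ifft2(spectrum))
```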

The evaluation of Spectral Vegetation Indices for Classification of Nutritional Deficiency in Rice Using Machine Learning Method

  • Jaekyeong Baek;Wan-Gyu Sang;Dongwon Kwon;Sungyul Chanag;Hyeojin Bak;Ho-young Ban;Jung-Il Cho
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.88-88 / 2022
  • Detection of stress responses in crops is important for diagnosing crop growth and evaluating yield. Multi-spectral sensors are known to be effective for evaluating stress caused by nutrient and moisture deficiencies or by biological agents such as weeds and diseases. In this experiment, multispectral images were therefore taken by an unmanned aerial vehicle (UAV) under field conditions. The experiment was conducted in the long-term fertilizer field at the National Institute of Crop Science, and the experimental area was divided into plots with different NPK status (Control, N-deficiency, P-deficiency, K-deficiency, Non-fertilizer). A total of 11 vegetation indices were computed from RGB and NIR reflectance values using Python. Variations in nutrient content in plants affect the amount of light reflected or absorbed in each wavelength band. The objective of this experiment was therefore to evaluate vegetation indices derived from multispectral reflectance data as input to a machine learning algorithm for the classification of nutritional deficiency in rice. A RandomForest model was used as a representative ensemble model, and its parameters were adjusted through hyperparameter tuning with RandomizedSearchCV. As a result, the training accuracy was 0.95 and the test accuracy was 0.80, and IPCA, NDRE, and EVI were among the top three indices by feature importance. Precision, recall, and f1-score, which are indicators for evaluating the performance of the classification model, ranged from 0.7 to 0.9 for each class.
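
A minimal sketch of the index-plus-RandomForest workflow described above, assuming scikit-learn; the index formulas shown are common definitions (only three of the eleven), the hyperparameter grid is illustrative, and the data are random placeholders rather than the study's UAV measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

def vegetation_indices(r, g, b, nir):
    """Three commonly used indices; the paper's full set of 11 is not listed here."""
    ndvi = (nir - r) / (nir + r + 1e-12)
    gndvi = (nir - g) / (nir + g + 1e-12)
    evi = 2.5 * (nir - r) / (nir + 6 * r - 7.5 * b + 1 + 1e-12)
    return np.stack([ndvi, gndvi, evi], axis=-1)

# X: per-sample index values, y: NPK treatment labels (random placeholder data)
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = rng.integers(0, 5, 500)   # Control, N-, P-, K-deficiency, Non-fertilizer

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300, 500], "max_depth": [None, 5, 10]},
    n_iter=5, cv=3, random_state=0)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```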


Simulation Approach for the Tracing the Marine Pollution Using Multi-Remote Sensing Data (다중 원격탐사 자료를 활용한 해양 오염 추적 모의 실험 방안에 대한 연구)

  • Kim, Keunyong;Kim, Euihyun;Choi, Jun Myoung;Shin, Jisun;Kim, Wonkook;Lee, Kwang-Jae;Son, Young Baek;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.36 no.2_2 / pp.249-261 / 2020
  • Coastal monitoring using multiple platforms and sensors is a very important tool for accurately understanding changes in the offshore marine environment and disasters with high temporal and spatial resolution. However, integrated observation studies using multiple platforms and sensors are insufficient, and the efficiency and limitations of such convergence have not been evaluated. In this study, we aimed to suggest an integrated observation method with multiple remote sensing platforms and sensors, and to diagnose its utility and limitations. Integrated in situ surveys were conducted using Rhodamine WT (RWT) fluorescent dye to simulate various marine disasters. In September 2019, the distribution and movement of RWT dye patches were detected using satellite (Kompsat-2/3/3A, Landsat-8 OLI, Sentinel-3 OLCI and GOCI), unmanned aircraft (Mavic 2 Pro and Inspire 2), and manned aircraft platforms after injecting the fluorescent dye into the waters of the South Sea-Yeosu Sea. The initial patch size of the RWT dye was 2,600 ㎡, and it spread to 62,000 ㎡ about 138 minutes later. The RWT patches gradually moved southwestward from the point where they were first released, similar to the pattern of the tidal current flowing southwest as the tide gradually decreased. Unmanned aerial vehicle (UAV) images showed the highest spatial and temporal resolution, but the coverage area was the narrowest. In the case of satellite images, the coverage area was wide, but there were some limitations compared to other platforms in terms of operability due to the long revisit cycle. For Sentinel-3 OLCI and GOCI, the spectral resolution and signal-to-noise ratio (SNR) were the highest, but detection of small fluorescent dye patches was limited in terms of spatial resolution. In the case of the hyperspectral sensor mounted on the manned aircraft, the spectral resolution was the highest, but it was also somewhat limited in terms of operability. From this simulation approach, we were able to confirm that multi-platform integrated observation can significantly improve temporal, spatial, and spectral resolution. In the future, if these results are linked to coastal numerical models, it will be possible to predict the transport and diffusion of contaminants, and they are expected to contribute to improving model accuracy when used as input and verification data for the numerical models.

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1509-1521 / 2020
  • Experiments for validating the surface reflectance produced from Korea Multi-Purpose Satellite (KOMPSAT-3A) imagery were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometric reflectance measurements. The atmospheric reflectance and surface reflectance products were generated using an extension program of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets from 2016, 2017, and 2018 were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, which define the gain and offset applied in the absolute atmospheric correction. The results showed that the reflectance products from ver. 1.4 matched the RadCalNet BTCN data relatively well compared to those from ver. 1.5. In addition, the reflectance products obtained from Landsat-8 using the USGS LaSRC algorithm and from Sentinel-2B images using the SNAP Sen2Cor program were used to quantitatively verify the differences from those of KOMPSAT-3A. Based on the RadCalNet BTCN data, the differences in KOMPSAT-3A surface reflectance were highly consistent, ranging from -0.031 to 0.034 for the B band, -0.001 to 0.055 for the G band, -0.072 to 0.037 for the R band, and -0.060 to 0.022 for the NIR band. The surface reflectance of KOMPSAT-3A also reached an accuracy level suitable for further applications, compared to those of Landsat-8 and Sentinel-2B images. The results of this study are meaningful in confirming the applicability of Analysis Ready Data (ARD) for surface reflectance from high-resolution satellites.
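
The gain/offset-based absolute correction mentioned above follows the standard DN-to-TOA-reflectance conversion; a hedged sketch is given below with placeholder coefficients (the actual KOMPSAT-3A ver. 1.4/1.5 gain, offset, and ESUN values are not reproduced here, and the atmospheric correction to surface reflectance is a further step not shown).

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elev_deg, earth_sun_dist=1.0):
    """Standard DN -> radiance -> TOA reflectance conversion (placeholder coefficients).

    radiance    = gain * DN + offset                      [W m^-2 sr^-1 um^-1]
    reflectance = pi * L * d^2 / (ESUN * cos(theta_sz)),  theta_sz = solar zenith angle
    """
    radiance = gain * dn.astype(float) + offset
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * earth_sun_dist ** 2 / (esun * np.cos(sun_zenith))
```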

Hierarchical Land Cover Classification using IKONOS and AIRSAR Images (IKONOS와 AIRSAR 영상을 이용한 계층적 토지 피복 분류)

  • Yeom, Jun-Ho;Lee, Jeong-Ho;Kim, Duk-Jin;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.27 no.4 / pp.435-444 / 2011
  • A land cover map derived from the spectral features of high-resolution optical images suffers from low spectral resolution and heterogeneity within the same land cover class. As a result, areas belonging to the same land cover class, especially vegetation, can be split into several different classes. To overcome these problems, a detailed vegetation classification is applied to integrated optical satellite and SAR (Synthetic Aperture Radar) data within the vegetation area obtained from a pre-classification of the optical image. The pre-classification and the vegetation classification were performed with the MLC (Maximum Likelihood Classification) method. The hierarchical land cover classification is produced by fusing the detailed vegetation classes with the non-vegetation classes of the pre-classification. We verified that the proposed method has higher accuracy not only than general SAR data and GLCM (Gray Level Co-occurrence Matrix) texture integration methods but also than the hierarchical GLCM integration method. In particular, the proposed method achieves high accuracy for both vegetation and non-vegetation classification.
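
One of the comparison methods integrates GLCM texture features; a minimal sketch of computing a few standard GLCM properties with scikit-image is shown below. The quantization level, distances, and angles are illustrative, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_texture(patch, distances=(1,), angles=(0, np.pi / 2), levels=64):
    """Contrast, homogeneity, and energy texture measures for one image patch."""
    # Quantize the patch to `levels` gray levels before building the co-occurrence matrix
    q = np.digitize(patch, np.linspace(patch.min(), patch.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances, angles,
                        levels=levels, symmetric=True, normed=True)
    return {
        "contrast": graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy": graycoprops(glcm, "energy").mean(),
    }
```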

Generation of Mosaic Image using Aerial Oblique Images (경사사진을 이용한 모자이크 영상 제작)

  • Seo, Sang Il;Park, Byung-Wook;Lee, Byoung Kil;Kim, Jong In
    • Journal of Korean Society for Geospatial Information Science / v.22 no.3 / pp.145-154 / 2014
  • As the road network becomes more complex and extensive, delays in restoring damaged roads, excessive costs of information collection, and limitations in acquiring road damage information cause inconvenience. Recently, road-centric spatial information has been gathered using mobile multi-sensor systems for road inventories. However, expensive MMS (Mobile Mapping System) equipment incurs high maintenance costs from the start and requires a lot of time for data processing. Therefore, research is needed on continuous maintenance that collects and displays damage information on a digital map using a low-cost mobile camera system. In this research, we aim to develop techniques for producing mosaics with a regular ground sample distance from successive images taken by an oblique camera mounted on a vehicle. To do this, a mosaic image is generated by estimating the homography of high-resolution oblique images, and the ground sample distance and appropriate overlap are analyzed using high-resolution aerial oblique images that contain a resolution target. Based on this, we propose the appropriate overlap and exposure interval for a mobile road inventory system.
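
A minimal sketch of homography-based mosaicking of successive images, assuming OpenCV; the feature type, match count, and canvas size are illustrative, and the GSD normalization and seam handling discussed in the paper are omitted.

```python
import cv2
import numpy as np

def stitch_pair(base, new, detector=None):
    """Estimate the homography from `new` to `base` with matched features and warp it.

    A minimal sketch of homography-based mosaicking; real oblique-image mosaics
    also need GSD normalization and seamline blending, which are omitted here.
    """
    detector = detector or cv2.ORB_create(4000)
    k1, d1 = detector.detectAndCompute(base, None)
    k2, d2 = detector.detectAndCompute(new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)    # robust estimation
    h, w = base.shape[:2]
    warped = cv2.warpPerspective(new, H, (w * 2, h))         # canvas wide enough for overlap
    warped[:h, :w] = base                                    # paste base image (no blending)
    return warped
```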

A Comparison of Pan-sharpening Algorithms for GK-2A Satellite Imagery (천리안위성 2A호 위성영상을 위한 영상융합기법의 비교평가)

  • Lee, Soobong;Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.275-292 / 2022
  • In order to detect climate changes using satellite imagery, the GCOS (Global Climate Observing System) defines requirements such as spatio-temporal resolution, stability over time, and uncertainty. Due to limitations of the GK-2A sensor performance, the level-2 products cannot satisfy these requirements, especially for spatial resolution. In this paper, we sought the optimal pan-sharpening algorithm for GK-2A products. Six pan-sharpening methods belonging to CS (Component Substitution), MRA (Multi-Resolution Analysis), VO (Variational Optimization), and DL (Deep Learning) were used. In the case of DL, the synthesis-property-based method was used to generate the training dataset. In the synthesis property, the pan-sharpening model is applied to PAN (Panchromatic) and MS (Multispectral) images with reduced spatial resolution, and the fused image is compared with the original MS image. With the synthesis-property-based method, a fused image of the quality desired by the user can be produced only when the geometric characteristics of the reduced-resolution PAN and the MS image are similar. However, since some dissimilarity exists, RD (Random Down-sampling) was additionally used to minimize it. Among the pan-sharpening methods, PSGAN was applied with RD (PSGAN_RD). The fused images were qualitatively and quantitatively validated with the consistency property and the synthesis property. The GSA algorithm performed well on the evaluation indices representing spatial characteristics, while PSGAN_RD showed the best agreement with the original MS image in terms of spectral characteristics. Therefore, considering both the spatial and spectral characteristics of the fused images, we found that PSGAN_RD is suitable for GK-2A products.
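
A hedged sketch of the synthesis-property evaluation loop described above (Wald's protocol): both inputs are degraded, fused at the reduced scale, and the result is compared against the original MS image. The `pansharpen` callable is a placeholder for any of the compared methods, and per-band RMSE is a simple stand-in for the paper's quality indices.

```python
import numpy as np
from scipy.ndimage import zoom

def synthesis_property_check(pan, ms, pansharpen, ratio=4):
    """Wald's synthesis-property evaluation: fuse degraded inputs, compare with original MS.

    pan: 2-D panchromatic band; ms: stack of shape (bands, h, w) at 1/ratio of pan's
    resolution. `pansharpen(pan, ms)` must return a fused stack at pan's resolution.
    """
    pan_low = zoom(pan, 1.0 / ratio, order=1)                       # degrade PAN to MS scale
    ms_low = np.stack([zoom(b, 1.0 / ratio, order=1) for b in ms])  # degrade MS one step further
    fused = pansharpen(pan_low, ms_low)                             # fused image at the MS scale
    rmse = np.sqrt(np.mean((fused - ms) ** 2, axis=(1, 2)))         # compare with original MS
    return rmse
```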