• Title/Abstract/Keyword: Spatial fusion

329 search results

Biorthogonal Wavelets-based Landsat 7 Image Fusion

  • Choi, Myung-Jin;Kim, Moon-Gyu;Kim, Tae-Jung;Kim, Rae-Young
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.724-726
    • /
    • 2003
  • Currently available image fusion methods are not efficient for fusing Landsat 7 images; significant color distortion is one of the major problems. In this paper, we fused Landsat 7 imagery using the well-known wavelet-based method for data fusion between high-resolution panchromatic and low-resolution multispectral satellite images. Based on the experimental results obtained in this study, we analyzed some causes of the color distortion. A new approach using a biorthogonal-wavelet-based fusion method is then presented. The new method achieves an optimal fusion result: the same spectral resolution as the multispectral image and the same spatial resolution as the panchromatic image, with minimal artifacts.
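The substitution scheme underlying this kind of wavelet fusion can be sketched in a minimal form. The sketch below uses a single-level Haar transform in NumPy purely for illustration (the paper uses biorthogonal wavelets and a deeper decomposition); all function names are assumptions, not the authors' code:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform of an even-sized array."""
    p, q = img[0::2, 0::2], img[0::2, 1::2]
    r, s = img[1::2, 0::2], img[1::2, 1::2]
    ll = (p + q + r + s) / 2.0   # approximation (low-pass)
    lh = (p - q + r - s) / 2.0   # horizontal detail
    hl = (p + q - r - s) / 2.0   # vertical detail
    hh = (p - q - r + s) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def wavelet_fuse(pan, ms_band):
    """Replace the pan approximation with the coarser MS band.

    pan: (2h, 2w) panchromatic image; ms_band: (h, w) multispectral band.
    """
    _, lh, hl, hh = haar_dwt2(pan)
    # the Haar LL equals 2x the 2x2 block mean, so scale the MS band to match
    return haar_idwt2(2.0 * ms_band, lh, hl, hh)
```

The fused result keeps the MS band's radiometry at the coarse scale while inheriting the pan image's detail coefficients, which is exactly the trade-off the abstract describes.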


Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method (2단계 분광혼합기법 기반의 하이퍼스펙트럴 영상융합 알고리즘)

  • Choi, Jae-Wan;Kim, Dae-Sung;Lee, Byoung-Kil;Yu, Ki-Yun;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.4
    • /
    • pp.295-304
    • /
    • 2006
  • Image fusion is defined as making a new image by merging two or more images with special algorithms. In remote sensing, it typically means fusing a low-resolution multispectral image with a high-resolution panchromatic image. Generally, hyperspectral image fusion is accomplished with either a multispectral fusion technique or a spectral unmixing model. However, the former may distort spectral information, while the latter needs endmember or other additional data and does not preserve spatial information well. This study proposes a new algorithm based on a two-stage spectral unmixing model that preserves the hyperspectral image's spectral information. The proposed fusion technique was implemented and tested using Hyperion and ALI images, and is shown to preserve more spatial/spectral information than the PCA/GS fusion algorithms.
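The linear-mixing machinery behind unmixing-based fusion can be sketched minimally. This is a single-stage version with known endmembers, not the paper's two-stage method; names and shapes are illustrative assumptions:

```python
import numpy as np

def unmix(pixels, endmembers):
    """Least-squares abundances under the linear mixing model.

    pixels: (n, bands); endmembers: (k, bands). Returns (n, k) abundances,
    clipped to be non-negative and renormalised to sum to one.
    """
    a, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    a = np.clip(a.T, 0.0, None)
    return a / a.sum(axis=1, keepdims=True)

def reconstruct(abundances, endmembers):
    """Synthesise spectra from abundances (the sharpening step: abundances
    estimated or refined at high resolution yield high-resolution spectra)."""
    return abundances @ endmembers
```

In a full pipeline the abundances would be re-estimated at the finer spatial grid before reconstruction, which is where the spatial detail enters.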

Combining Geostatistical Indicator Kriging with Bayesian Approach for Supervised Classification

  • Park, No-Wook;Chi, Kwang-Hoon;Moon, Wooil-M.;Kwon, Byung-Doo
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.382-387
    • /
    • 2002
  • In this paper, we propose a geostatistical approach incorporated into the Bayesian data fusion technique for supervised classification of multi-sensor remote sensing data. Traditional spectral-based classification cannot account for spatial information and may produce unrealistic classification results. To obtain accurate spatial/contextual information, indicator kriging, which estimates the probability of occurrence of classes on the basis of surrounding observations, is incorporated into the Bayesian framework. This approach has the merit of incorporating both spectral and spatial information, and improves the confidence level in the final data fusion task. To illustrate the proposed scheme, supervised classification of a multi-sensor remote sensing test data set was carried out.
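The combination step can be sketched as follows. Equal-weight neighbour frequencies stand in for the actual indicator-kriging weights, and all function names are illustrative, not the authors':

```python
import numpy as np

def indicator_probability(neighbor_labels, n_classes):
    """Naive stand-in for indicator kriging: class frequency among
    neighbouring training samples (equal weights instead of kriging weights)."""
    counts = np.bincount(neighbor_labels, minlength=n_classes)
    return counts / counts.sum()

def bayesian_fuse(spectral_likelihood, spatial_prior):
    """Combine per-class spectral likelihoods with the spatially derived
    per-class probabilities, pixel-wise, via Bayes' rule.

    Both inputs: (n_pixels, n_classes). Returns posteriors of the same shape.
    """
    post = spectral_likelihood * spatial_prior
    return post / post.sum(axis=1, keepdims=True)
```

When the spectral likelihoods are uninformative, the spatial term dominates the posterior, which is the behaviour that suppresses isolated, spectrally ambiguous misclassifications.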


Analysis of the Increase of Matching Points for Accuracy Improvement in 3D Reconstruction Using Stereo CCTV Image Data

  • Moon, Kwang-il;Pyeon, MuWook;Eo, YangDam;Kim, JongHwa;Moon, Sujung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.2
    • /
    • pp.75-80
    • /
    • 2017
  • Recently, there has been growing interest in spatial data that combines information and communication technology with smart cities. High-precision LiDAR (Light Detection and Ranging) equipment is mainly used to collect three-dimensional spatial data, and the acquired data is also used to model geographic features and to manage plant construction and cultural heritage sites that require precision. LiDAR equipment can collect precise data, but it has limitations because it is expensive and takes a long time to collect data. In the field of computer vision, on the other hand, research is being conducted on methods of acquiring image data and performing 3D reconstruction from it without expensive equipment. Thus, precise 3D spatial data can be built efficiently by collecting and processing image data from CCTVs installed as infrastructure in smart cities. However, this method can have accuracy problems compared with the existing equipment. In this study, experiments were conducted, and the results analyzed, on increasing the number of extracted matching points by applying feature-based and area-based methods, in order to improve the precision of 3D spatial data built from stereo CCTV imagery. The SIFT and PATCH algorithms were used to extract matching points. If precise 3D reconstruction from stereo CCTV images is possible, 3D spatial data can be collected with low-cost equipment, and data can be collected and built in real time, because image data is easily acquired over the Web from smartphones and drones.
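The area-based matching step can be sketched minimally as normalised cross-correlation along the same image row (a stand-in for an epipolar search; the paper's SIFT/PATCH pipelines are considerably more involved, and these function names are assumptions):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_row(left, right, row, col, half=2):
    """Area-based matching: slide a (2*half+1)^2 window along the same
    row of the right image and return the column with the highest NCC."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = -1, -np.inf
    for c in range(half, right.shape[1] - half):
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(patch, cand)
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score
```

A dense disparity map follows by repeating this per pixel; feature-based methods such as SIFT instead match sparse, distinctive keypoints and are typically combined with such area-based refinement.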

Fusion of Sonar and Laser Sensor for Mobile Robot Environment Recognition

  • Kim, Kyung-Hoon;Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.91.3-91
    • /
    • 2001
  • A sensor fusion scheme for mobile robot environment recognition that incorporates range data and contour data is proposed. The ultrasonic sensor provides a coarse spatial description, but guarantees, with relatively high belief, open space with no obstacle within the sonic cone. The laser structured-light system provides a detailed contour description of the environment, but is prone to light noise and easily affected by surface reflectivity. The overall fusion process is composed of two stages, noise elimination and belief updates, with Dempster-Shafer evidential reasoning applied at each stage. Open-space estimation from sonar range measurements eliminates noisy lines from the laser sensor. Comparing actual sonar data to simulated sonar data enables ...
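Dempster's rule of combination, applied at both stages, can be implemented compactly. The masses and the two-hypothesis frame below (occupied/free) are illustrative, not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over a common frame of discernment.

    m1, m2: dicts mapping frozenset hypotheses to mass. Returns the combined
    masses, normalised by 1 - K, where K is the total conflicting mass.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}
```

Mass assigned to the full frame models "don't know", which is how a sensor with low confidence avoids vetoing the other sensor's evidence.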


Engineering geoscience in Korea - from mining to fusion technology

  • Hyun, Byung-Koo
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings
    • /
    • 2003.11a
    • /
    • pp.3-6
    • /
    • 2003
  • Fusion technology is key to maximizing the innovative potential of geoscience for the many challenging issues today that require an integrated, multi-disciplinary approach. Successful advances in fusion technology can be achieved when interdisciplinary cooperation is firmly established. To firmly establish this still-feeble context of interdisciplinarity, it is urgent to continuously develop geoscientific models and a systematic infrastructure for interdisciplinary cooperation, such as a well-prepared geospatial database and a knowledge-base network that can support multilateral cooperation among disciplines and multi-phase international cooperation.


Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features of the video: the spatial network extracts spatial information features from each static expression frame, while the temporal network extracts dynamic information features from the optical flow across multiple expression frames. A multiplicative fusion is performed on the spatiotemporal features learned by the two networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show recognition rates as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
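The multiplicative fusion step itself is simple enough to sketch; this is a minimal stand-alone version (the normalisation choice is an assumption, and the classifier that follows is omitted):

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat, eps=1e-12):
    """Element-wise product of the spatial and temporal CNN feature
    vectors, followed by L2 normalisation, before classification."""
    fused = spatial_feat * temporal_feat
    return fused / (np.linalg.norm(fused) + eps)
```

Compared with concatenation, the element-wise product keeps the fused dimensionality equal to a single stream's and emphasises features that both streams activate on.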

The Fusion of LiDAR Data and High-Resolution Imagery for Precise Monitoring in Urban Areas (도심의 정밀 모니터링을 위한 LiDAR 자료와 고해상영상의 융합)

  • 강준묵;강영미;이형석
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2004.04a
    • /
    • pp.383-388
    • /
    • 2004
  • Fusion of data from different kinds of sensors combines data obtained by independent technologies, and is an important technology for constructing 3D spatial information. In particular, information can be realized in various ways: fusion of LiDAR with a mobile scanning system and a digital map, fusion of LiDAR data with high-resolution imagery, and so on. This study generates a unified DEM and a digital orthoimage by fusing LiDAR data with high-resolution imagery, and uses them to precisely monitor topography, buildings, trees, etc. in urban areas. Using LiDAR data alone is problematic because it requires manual linearization and subjective reconstruction.
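One building block of such fusion, gridding the LiDAR point cloud into a DEM that the orthoimage can be draped on, can be sketched as follows (an axis-aligned grid with per-cell height averaging is assumed; the names are illustrative):

```python
import numpy as np

def points_to_dem(points, cell, ncols, nrows):
    """Bin LiDAR points (x, y, z) into a regular grid and average the
    heights per cell; cells with no points stay NaN.

    points: (n, 3) array; cell: cell size in the same units as x/y.
    """
    dem_sum = np.zeros((nrows, ncols))
    dem_cnt = np.zeros((nrows, ncols))
    col = (points[:, 0] // cell).astype(int)
    row = (points[:, 1] // cell).astype(int)
    ok = (col >= 0) & (col < ncols) & (row >= 0) & (row < nrows)
    np.add.at(dem_sum, (row[ok], col[ok]), points[ok, 2])  # unbuffered accumulate
    np.add.at(dem_cnt, (row[ok], col[ok]), 1.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(dem_cnt > 0, dem_sum / dem_cnt, np.nan)
```

Orthorectification would then resample the high-resolution image onto this grid using the per-cell heights, which is the fusion product the abstract describes.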


A Case Study of Land-cover Classification Based on Multi-resolution Data Fusion of MODIS and Landsat Satellite Images (MODIS 및 Landsat 위성영상의 다중 해상도 자료 융합 기반 토지 피복 분류의 사례 연구)

  • Kim, Yeseul
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1035-1046
    • /
    • 2022
  • This study evaluated the applicability of multi-resolution data fusion to land-cover classification. A spatial time-series geostatistical deconvolution/fusion model (STGDFM) was applied as the multi-resolution data fusion model. The study area was agricultural land in the state of Iowa, United States. Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat satellite images were used as input data, considering the landscape of the study area. Synthetic Landsat images were generated for the dates missing from the Landsat series by applying STGDFM, and land-cover classification was then performed using both the acquired Landsat images and the STGDFM fusion results as input. To evaluate the applicability of multi-resolution data fusion, classification results using only Landsat images were compared with those using both Landsat images and the fusion results. In the classification using only Landsat images, mixed patterns were prominent in the corn and soybean cultivation areas, the main land-cover types in the study area, and mixing between vegetation classes such as hay, grain, and grass areas was also large. In contrast, in the classification using both Landsat images and the fusion results, these mixed patterns were greatly alleviated, and the classification accuracy improved by about 20%p. The missing Landsat images could be compensated for by reflecting the time-series spectral information of the MODIS images in the fusion results through STGDFM. This study confirmed that multi-resolution data fusion can be effectively applied to land-cover classification.
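STGDFM itself is beyond a short sketch, but the core idea of predicting a missing fine-resolution image from coarse-resolution temporal change can be illustrated with a much simpler stand-in (a STARFM-like temporal-difference predictor; this is not the paper's model):

```python
import numpy as np

def predict_fine(fine_t1, coarse_t1, coarse_t2, scale):
    """Predict the fine image at t2 as the fine image at t1 plus the
    coarse temporal change, upsampled by pixel replication.

    fine_t1: (H, W); coarse_t1, coarse_t2: (H // scale, W // scale).
    """
    change = np.kron(coarse_t2 - coarse_t1, np.ones((scale, scale)))
    return fine_t1 + change
```

The fine image contributes spatial detail, the coarse pair contributes the temporal signal; geostatistical models such as STGDFM replace the pixel-replication upsampling with an explicit deconvolution of the coarse change.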

Potential for Image Fusion Quality Improvement through Shadow Effects Correction (그림자효과 보정을 통한 영상융합 품질 향상 가능성)

  • 손홍규;윤공현
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2003.10a
    • /
    • pp.397-402
    • /
    • 2003
  • This study aims to improve the quality of image fusion results through shadow-effect correction. A shadow-effect correction algorithm is proposed, and visual comparisons were made to assess the quality of the image fusion results. The following four steps were performed to improve the image fusion quality. First, the shadow regions of the satellite image are precisely located. Second, segmentation of context regions is implemented manually for accurate correction. Third, a correction factor is calculated by comparing each shadowed context region with the corresponding non-shadow context region. Finally, image fusion is implemented using the corrected images. The results presented here help to accurately extract and interpret geospatial information from satellite imagery.
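The correction-factor step can be sketched as a single multiplicative gain estimated from a matched sunlit region. A simple mean-ratio gain is assumed here (the paper's factor and segmentation are more elaborate), and the names are illustrative:

```python
import numpy as np

def correct_shadow(image, shadow_mask, reference_mask):
    """Brighten shadow pixels with a gain computed by comparing the
    shadowed context region to a comparable sunlit (reference) region.

    image: 2-D array; shadow_mask, reference_mask: boolean arrays.
    """
    gain = image[reference_mask].mean() / image[shadow_mask].mean()
    out = image.astype(float).copy()
    out[shadow_mask] *= gain  # radiometric correction of the shadow region
    return out
```

Applying the fusion algorithm to the corrected image then avoids the dark, low-contrast artifacts that shadows would otherwise propagate into the fused product.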
