• Title/Summary/Keyword: Spatial fusion

LFFCNN: Multi-focus Image Synthesis in Light Field Camera

  • Hyeong-Sik Kim; Ga-Bin Nam; Young-Seop Kim
    • Journal of the Semiconductor & Display Technology, v.22 no.3, pp.149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only fuses a stack of images with different focal planes into a single all-in-focus image but also offers more efficient and robust focus fusion than existing methods.
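
A minimal sketch of the SPP idea named above: pooling a feature map over several fixed output grids yields a fixed-length descriptor regardless of input resolution, which is what lets a fusion network handle images of various scales. The pooling levels and channel count are illustrative assumptions, not the published LFFCNN configuration.

```python
# Spatial Pyramid Pooling sketch (PyTorch); levels/channels are assumptions.
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Pools the input at several grid sizes and concatenates the results,
    producing a fixed-length vector for any input height and width."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(s) for s in levels)

    def forward(self, x):                       # x: (N, C, H, W), any H, W
        n = x.size(0)
        return torch.cat([p(x).view(n, -1) for p in self.pools], dim=1)

feats = torch.randn(1, 64, 37, 53)              # deliberately odd spatial size
print(SPP()(feats).shape)                       # torch.Size([1, 1344]) = 64*(1+4+16)
```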


Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng; Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.1, pp.60-79, 2022
  • U-Net-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, as the network deepens, the resolution gradually decreases and the loss of spatial information increases. Fusing only adjacent layers is not enough to compensate for the lost spatial information, which results in errors along the segmentation boundary and lowers segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the corresponding upper coding unit, integrating spatial and semantic information to preserve segmentation quality; the robustness and generalization of the model are further improved by combining the atrous spatial pyramid pooling (ASPP) module and a channel attention module (CAM). Extensive experiments on the common ISIC2016 and ISIC2017 datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
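
Since the abstract names ASPP as one of its two auxiliary modules, here is a hedged sketch of a generic atrous spatial pyramid pooling block: parallel dilated convolutions observe the same input at several receptive-field sizes, and their outputs are fused by a 1x1 projection. Dilation rates and channel counts are placeholders, not the paper's configuration.

```python
# Generic ASPP block sketch (PyTorch); rates and widths are assumptions.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding == dilation keeps the 3x3 branches size-preserving
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 32, 32)
print(ASPP(256, 64)(x).shape)                   # torch.Size([1, 64, 32, 32])
```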

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil; Na, Young Woo; Kwon, Oh Seob; Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.3, pp.135-152, 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions for quantitative land management using aerial multi-sensor platforms, but most of these data are used only for the purpose of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, offer high classification accuracy, but it is difficult to classify the land cover state accurately because only the visible and near-infrared wavelengths are acquired and the spatial resolution is low. Therefore, research is needed on improving the accuracy of land cover classification by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, the number of accumulated bands, and the spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and land cover classification accuracy calculated while incrementally varying these fusion variables. Optimal fusion variables for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
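
A hedged numpy sketch of the band accumulation idea combined with a simple fusion-rate weighting: co-registered rasters from the three sensors are stacked along the spectral axis into one feature cube for a pixel-wise classifier. All array names, shapes, and the 0.7 rate are illustrative assumptions, not the study's derived optimum.

```python
# Band accumulation with a fusion-rate weight (numpy); values are assumptions.
import numpy as np

hyper = np.random.rand(127, 100, 100)   # (bands, rows, cols), co-registered
multi = np.random.rand(4, 100, 100)     # multispectral bands
laser = np.random.rand(1, 100, 100)     # rasterized aerial-laser height

fusion_rate = 0.7                       # hypothetical per-sensor weight
fused = np.concatenate([fusion_rate * hyper,
                        (1.0 - fusion_rate) * multi,
                        laser], axis=0)
print(fused.shape)                      # (132, 100, 100) accumulated cube
```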

Spectral Quality Enhancement of Pan-Sharpened Satellite Image by Using Modified Induction Technique

  • Choi, Jae-Wan; Kim, Hyung-Tae
    • Journal of Korean Society for Geospatial Information Science, v.16 no.3, pp.15-20, 2008
  • High-spatial-resolution remote sensing satellites (IKONOS-2, QuickBird, and KOMPSAT-2) provide low-spatial-resolution multispectral images and high-spatial-resolution panchromatic images. Image fusion, or pan-sharpening, is important because it enables a satellite image to serve various applications, such as visualization and feature extraction, by combining images with different spectral and spatial resolutions. Although many image fusion algorithms have been proposed, most methods cannot preserve the spectral information of the original multispectral image after fusion. To solve this problem, a modified induction technique that reduces the spectral distortion of the fused image is developed. The spectral distortion is adjusted by comparing the spatially degraded pan-sharpened image with the original multispectral image, and our algorithm is evaluated on QuickBird satellite imagery. In the experiments, pan-sharpened images produced by various methods show reduced spectral distortion when our algorithm is applied to the fused images.
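
A hedged sketch of the induction step the abstract describes: degrade the pan-sharpened image to the MS resolution, take the spectral offset against the original MS bands, and inject the upsampled correction back into the fused image. scipy.ndimage.zoom with bilinear resampling is a stand-in; the paper's exact degradation and induction operators may differ.

```python
# Spectral-induction correction sketch (numpy/scipy); operators are assumptions.
import numpy as np
from scipy.ndimage import zoom

def induce(fused, ms, ratio=4):
    # fused: (B, H, W) pan-sharpened; ms: (B, H//ratio, W//ratio) original MS
    degraded = zoom(fused, (1, 1 / ratio, 1 / ratio), order=1)
    correction = ms - degraded            # residual spectral distortion
    return fused + zoom(correction, (1, ratio, ratio), order=1)

fused = np.random.rand(4, 256, 256)
ms = np.random.rand(4, 64, 64)
print(induce(fused, ms).shape)            # (4, 256, 256)
```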


On Mathematical Representation and Integration Theory for GIS Application of Remote Sensing and Geological Data

  • Moon, Woo-Il M.
    • Korean Journal of Remote Sensing, v.10 no.2, pp.37-48, 1994
  • In spatial information processing, particularly in non-renewable resource exploration, spatial data sets, including remote sensing, geophysical, and geochemical data, have to be geocoded onto a reference map and integrated for final analysis and interpretation. Applying a computer-based GIS (Geographical Information System or Geological Information System) at some point in the spatial data integration/fusion processing is now a logical and essential step. It should, however, be pointed out that the basic concepts of GIS-based spatial data fusion were developed with insufficient mathematical understanding of the spatial characteristics or quantitative modeling framework of the data. Furthermore, many remote sensing and geological data sets available to exploration projects are spatially incomplete in coverage and introduce spatially uneven information distribution. In addition, the spectral information of many spatial data sets is often imprecise due to digital rescaling. Direct application of GIS systems to spatial data fusion can therefore produce seriously erroneous final results. To resolve this problem, some important mathematical information representation techniques are briefly reviewed and discussed in this paper, with consideration of the spatial and spectral characteristics of common remote sensing and exploration data. They include the basic probabilistic approach, the evidential belief function approach (Dempster-Shafer method), and the fuzzy logic approach. Even though the basic concepts of these three approaches differ, proper application of the techniques and careful interpretation of the final results are expected to yield acceptable conclusions in each case. Actual tests with real data (Moon, 1990a; An et al., 1991, 1992, 1993) have shown that the methods discussed in this paper consistently provide more accurate final results than most direct applications of GIS techniques.
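
Of the three approaches the paper reviews, the evidential belief function approach is the easiest to show compactly. Below is a sketch of Dempster's rule of combination over a two-hypothesis frame; the frame labels and mass values are illustrative, not taken from the paper.

```python
# Dempster's rule of combination for two evidence sources (illustrative masses).
from itertools import product

D, B, Theta = frozenset("D"), frozenset("B"), frozenset("DB")  # deposit / barren / ignorance

def combine(m1, m2):
    out, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + x * y
        else:
            conflict += x * y                 # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in out.items()}

m_geochem = {D: 0.6, B: 0.1, Theta: 0.3}      # evidence from one data layer
m_remote  = {D: 0.4, B: 0.2, Theta: 0.4}      # evidence from another layer
print(combine(m_geochem, m_remote))
```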

Spatial Downscaling of MODIS Land Surface Temperature: Recent Research Trends, Challenges, and Future Directions

  • Yoo, Cheolhee; Im, Jungho; Park, Sumin; Cho, Dongjin
    • Korean Journal of Remote Sensing, v.36 no.4, pp.609-626, 2020
  • Satellite-based land surface temperature (LST) has been used as one of the major parameters in various climate and environmental models. In particular, Moderate Resolution Imaging Spectroradiometer (MODIS) LST is the most widely used satellite-based LST product due to its spatiotemporal coverage (1 km spatial and sub-daily temporal resolution) and longevity (> 20 years). However, there is an increasing demand for LST products with finer spatial resolution (e.g., 10-250 m) over regions such as urban areas. Therefore, various methods have been proposed to produce MODIS-like LST at resolutions finer than 250 m (e.g., 100 m). The purpose of this review is to provide a comprehensive overview of recent research trends and challenges in the downscaling of MODIS LST. Based on a survey of the literature from the past decade, the downscaling techniques, classified into three groups (kernel-driven, fusion-based, and combinations of the two), were reviewed with their pros and cons. Then, five open issues and challenges were discussed: uncertainty in LST retrievals, low thermal contrast, the nonlinearity of LST temporal change, cloud contamination, and model generalization. Future research directions for LST downscaling were finally provided.
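
As an illustration of the kernel-driven group the review describes, here is a hedged sketch of a regression-based downscaling scheme: fit LST against a fine-scale predictor aggregated to the coarse grid, predict at the fine scale, then back-inject the coarse residuals. The 10x scale factor, NDVI predictor, and linear regressor are assumptions for demonstration.

```python
# Kernel-driven (regression) LST downscaling sketch; setup is an assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

scale = 10
lst_coarse = np.random.rand(50, 50)            # 1 km MODIS-like LST
ndvi_fine = np.random.rand(500, 500)           # 100 m predictor

# Block-average the predictor to the coarse grid and fit the kernel model.
ndvi_coarse = ndvi_fine.reshape(50, scale, 50, scale).mean(axis=(1, 3))
model = LinearRegression().fit(ndvi_coarse.reshape(-1, 1), lst_coarse.ravel())

# Predict at fine scale, then add upsampled residuals to stay consistent
# with the observed coarse-scale LST.
lst_fine = model.predict(ndvi_fine.reshape(-1, 1)).reshape(500, 500)
residual = lst_coarse - model.predict(ndvi_coarse.reshape(-1, 1)).reshape(50, 50)
lst_fine += np.kron(residual, np.ones((scale, scale)))
print(lst_fine.shape)                          # (500, 500)
```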

Using the fusion of spatial and temporal features for malicious video classification

  • Jeon, Jae-Hyun; Kim, Se-Min; Han, Seung-Wan; Ro, Yong-Man
    • The KIPS Transactions: Part B, v.18B no.6, pp.365-374, 2011
  • Recently, malicious video classification and filtering techniques are of practical interest, as one can easily access malicious multimedia content through the Internet, IPTV, online social networks, etc. Considerable research effort has been devoted to developing malicious video classification and filtering systems. However, malicious video classification and filtering are still not mature in terms of reliable classification/filtering performance. In particular, most conventional approaches have been limited to using only spatial features (such as the ratio of skin regions and bags of visual words) for malicious image classification, which has restricted the achievable classification and filtering performance. To overcome this limitation, we propose a new malicious video classification framework that takes advantage of both spatial and temporal features, which are readily extracted from a sequence of video frames. In particular, we develop effective temporal features based on motion periodicity and temporal correlation. In addition, to identify the best data fusion approach for combining the spatial and temporal features, representative data fusion approaches are applied to the proposed framework. To demonstrate the effectiveness of our method, we collected 200 sexual intercourse videos and 200 non-sexual intercourse videos. Experimental results show that the proposed method increases classification accuracy for sexual intercourse videos by 3.75 percentage points (from 92.25% to 96%). Further, based on our experimental results, the feature-level fusion approach (fusing spatial and temporal features) is found to achieve the best classification accuracy.
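
A hedged sketch of the feature-level fusion variant the experiments favored: spatial and temporal descriptors are concatenated into one vector, and a single classifier decides on the fused representation. The toy extractors below are placeholders for the paper's skin-ratio, bag-of-visual-words, and motion-periodicity features.

```python
# Feature-level fusion of spatial and temporal descriptors (toy extractors).
import numpy as np
from sklearn.svm import SVC

def spatial_features(frames):     # stand-in for skin ratio / visual words
    return np.array([frames.mean(), frames.std()])

def temporal_features(frames):    # stand-in for periodicity / temporal correlation
    diffs = np.abs(np.diff(frames, axis=0))
    return np.array([diffs.mean(), diffs.std()])

videos = [np.random.rand(30, 64, 64) for _ in range(20)]   # toy frame stacks
labels = np.random.randint(0, 2, 20)

X = np.stack([np.concatenate([spatial_features(v), temporal_features(v)])
              for v in videos])
clf = SVC().fit(X, labels)        # one decision over the fused feature vector
print(clf.predict(X[:3]))
```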

Image Fusion Framework for Enhancing Spatial Resolution of Satellite Image using Structure-Texture Decomposition

  • Yoo, Daehoon
    • Journal of the Korea Computer Graphics Society, v.25 no.3, pp.21-29, 2019
  • This paper proposes a novel image fusion framework for satellite imagery that enhances the spatial resolution of the image via structure-texture decomposition. The resolution of satellite imagery depends on the sensor: for example, panchromatic images have high spatial resolution but only a single gray band, whereas multi-spectral images have low spatial resolution but multiple bands. To enhance the spatial resolution of low-resolution images, such as multi-spectral or infrared images, the proposed framework combines the structures from the low-resolution image with the textures from the high-resolution image. To improve the spatial quality of structural edges, the structure image from the low-resolution image is guided-filtered with the structure image from the high-resolution image as the guidance image. The combination step is performed by pixel-wise addition of the filtered structure image and the texture image. Quantitative and qualitative evaluations demonstrate that the proposed method preserves the spectral and spatial fidelity of the input images.
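
A hedged numpy sketch of the pipeline as the abstract lays it out: decompose each image into structure (smooth base) and texture (residual), guided-filter the low-resolution structure with the high-resolution structure as guidance, then add the high-resolution texture pixel-wise. The Gaussian decomposition, filter radius, and epsilon are assumptions; the paper's decomposition operator may differ.

```python
# Structure-texture fusion with a box guided filter (numpy/scipy sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    mI, mp = uniform_filter(guide, r), uniform_filter(src, r)
    a = (uniform_filter(guide * src, r) - mI * mp) / \
        (uniform_filter(guide * guide, r) - mI * mI + eps)
    b = mp - a * mI
    return uniform_filter(a, r) * guide + uniform_filter(b, r)

def fuse(low_up, high):           # both (H, W); low_up upsampled to high's grid
    s_low, s_high = gaussian_filter(low_up, 3), gaussian_filter(high, 3)
    filtered = guided_filter(s_high, s_low)    # sharpen LR structure edges
    return filtered + (high - s_high)          # add HR texture pixel-wise

print(fuse(np.random.rand(256, 256), np.random.rand(256, 256)).shape)
```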

Modified à trous Algorithm based Wavelet Pan-sharpening Method Using IKONOS Image

  • Kim, Yong Hyun; Choi, Jae Wan; Kim, Hye Jin; Kim, Yong Il
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.2D, pp.305-309, 2009
  • The object of image fusion is to integrate information from multiple images of the same scene. In satellite image fusion, many methods have been proposed for combining a high-resolution panchromatic (PAN) image with low-resolution multispectral (MS) images, and it is very important to preserve both the spatial detail and the spectral information in the fusion result. Image fusion based on the wavelet transform shows good results compared with other fusion methods in preserving spectral information. This study proposes a modified à trous algorithm-based wavelet image fusion method using IKONOS imagery. Based on experiments with an IKONOS image, we confirmed that the proposed method was more effective at preserving spatial detail and spectral information than existing fusion methods based on the à trous algorithm.
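
A hedged sketch of the additive à trous baseline this paper modifies: the PAN image is repeatedly smoothed with an increasingly dilated B3-spline kernel, and the detail planes (differences between successive levels) are added to each upsampled MS band. The paper's specific modification is not reproduced here.

```python
# Additive a trous wavelet pan-sharpening baseline (numpy/scipy sketch).
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1]) / 16.0          # B3-spline scaling kernel

def atrous_details(pan, levels=2):
    planes, current = [], pan
    for j in range(levels):
        k = np.zeros(4 * 2 ** j + 1)
        k[:: 2 ** j] = B3                      # dilate kernel with 2^j - 1 holes
        smooth = convolve1d(convolve1d(current, k, axis=0), k, axis=1)
        planes.append(current - smooth)        # wavelet (detail) plane
        current = smooth
    return planes

def pansharpen(ms_up, pan):                    # ms_up: (B, H, W) upsampled MS
    return ms_up + sum(atrous_details(pan))    # inject PAN detail into each band

print(pansharpen(np.random.rand(4, 256, 256), np.random.rand(256, 256)).shape)
```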

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping; Qin, Zheng; Wang, Guolong; Zhang, Huidi; Huang, Kai; Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.5, pp.2253-2272, 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or in a transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model has the advantages of efficiency and robustness, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance under both subjective visual perception and objective assessment metrics.
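
A hedged PyTorch sketch of the architecture's overall shape: two convolutional streams encode the differently focused inputs, a fusion layer concatenates and mixes them, and a deconvolutional layer reconstructs the clean image. Depths, widths, and kernel sizes are placeholders, not the published configuration.

```python
# Two-stream fully convolutional fusion network sketch (placeholder sizes).
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        stream = lambda: nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.s1, self.s2 = stream(), stream()   # one encoder per focus image
        self.fuse = nn.Conv2d(2 * ch, ch, 1)    # fusion layer (1x1 mixing)
        self.decode = nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1)

    def forward(self, a, b):
        f = self.fuse(torch.cat([self.s1(a), self.s2(b)], dim=1))
        return self.decode(f)                   # all-in-focus estimate

x = torch.randn(1, 1, 128, 128)
print(TwoStreamFusion()(x, x).shape)            # torch.Size([1, 1, 128, 128])
```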