An Adaptive Weighted Regression and Guided Filter Hybrid Method for Hyperspectral Pansharpening

  • Dong, Wenqian (State Key Lab. of Integrated Service Networks, Xidian University)
  • Xiao, Song (State Key Lab. of Integrated Service Networks, Xidian University)
  • Received : 2017.11.05
  • Accepted : 2018.05.03
  • Published : 2019.01.31

Abstract

The goal of hyperspectral pansharpening is to combine a hyperspectral image (HSI) with a panchromatic image (PANI) of the same scene to obtain a single fused image. In this paper, a new hyperspectral pansharpening approach based on adaptive weighted regression and guided filtering is proposed. First, the intensity information (INT) of the HSI is obtained by the adaptive weighted regression algorithm. In particular, the optimization problem is solved in closed form, which reduces the computational cost. Then, a new way of extracting sufficient spatial information from the PANI and INT by guided filtering is proposed. Finally, the fused HSI is obtained by adding the extracted spatial information to the interpolated HSI. Experimental results demonstrate that the proposed approach preserves spectral information and enhances spatial detail better than other state-of-the-art approaches, in terms of both visual interpretation and objective fusion metrics.

1. Introduction

 With the rapid development of remote sensing satellite science and technology, a variety of remote sensing sensors have acquired large numbers of remote sensing images, which are widely used in weather forecasting, geological survey, marine environmental monitoring and other fields. However, due to the mutual restriction between spatial resolution and spectral resolution in images acquired by optical sensors, it is difficult to obtain images with both high spectral resolution and high spatial resolution while maintaining an acceptable SNR. Existing remote sensing satellites can only provide hyperspectral images (HSIs) or multispectral images (MSIs) with low spatial resolution but high spectral resolution, and panchromatic images (PANIs) with little spectral information but rich spatial information. The HSI and PANI alone therefore often fail to meet the requirements of high spatial and spectral resolution in applications such as remote sensing image classification and target detection. Fusing the PANI and HSI, also called pansharpening, is thus of great research significance for obtaining remote sensing images that simultaneously have high spatial resolution and comprehensive spectral information [1-3]. In recent years, hyperspectral pansharpening has received considerable attention from researchers and developed rapidly.

 In order to fuse the PANI and HSI and enhance the spatial resolution of the HSI, many approaches have been presented. Component substitution (CS), e.g., Gram-Schmidt (GS) [4], intensity-hue-saturation (IHS) [5-6], principal component analysis (PCA) [7-9], and Gram-Schmidt Adaptive (GSA) [10], is one family of fusion algorithms. CS methods project the HSI into another space to separate the spectral component from the spatial component, replace the obtained spatial component with the PANI, and generate the final fused HSI by inverse transformation [3]. Although they have good spatial fidelity [9-10], CS approaches introduce significant spectral distortion [11]. Another family is multiresolution analysis (MRA), e.g., smoothing filter-based intensity modulation (SFIM) [12], MTF-Generalized Laplacian Pyramid (MTG) [13], and MTG with High Pass Modulation (MGH) [14]. In the MRA class, an appropriate spatial filter is applied to the PANI to generate spatial details, and the obtained spatial component is added to the HSI to obtain the fused HSI. Although MRA approaches have the advantage of spectral and temporal consistency [15], the design of the spatial filter is complicated and the computational burden is large. Model-based approaches and hybrid approaches are two further classes of fusion algorithms [3]. Model-based methods include matrix factorization algorithms such as coupled nonnegative matrix factorization (CNMF) [16]; these achieve superior fusion quality but require a large amount of computation [17]. A hybrid approach combines two pansharpening algorithms to produce a new one; guided filter PCA (GFPCA) is a typical representative [18]. GFPCA effectively reduces spectral distortion, but it generates some blur due to insufficient spatial detail [11]. Many multi-modality image fusion [19] and infrared image fusion [20] methods can also be transplanted to hyperspectral remote sensing images, and usually obtain excellent experimental results.

 This paper proposes a novel fusion approach based on adaptive weighted regression and the guided filter. Compared with traditional approaches, the proposed approach has the following novelties. First, local similarity between the source images is an important issue when extracting spatial information. The guided filter, a local image filter, has the potential to minimize the overlap of spatial detail between the intensity information (INT) and the PANI, so spectral distortion can be greatly reduced. Second, the guided filter is used to obtain the variation tendency of the INT and the PANI in turn, with the INT and the PANI serving as guidance images respectively. In this way, the spectral distortion problem is clearly weakened, and the extracted spatial information depends not only on the HSI but also on the PANI, which ensures data dependence. Third, the INT image, which represents the approximate spatial layer of the HSI, is acquired via the adaptive weighted regression algorithm [10] instead of simple linear weighting, and the weights are obtained by solving an optimization formula in closed form to decrease the computation time. Comparative analyses indicate that the presented approach outperforms other state-of-the-art approaches.

 The remainder of the paper is organized as follows. Section 2 explains the proposed fusion approach using the adaptive weighted regression method and guided filter. Experiments and their analyses are presented in Section 3. Section 4 contains the conclusions.

2. Proposed method

 Fig. 1 displays the schematic diagram of the proposed method. Synthetic intensity information (INT) is first obtained via the adaptive weighted regression algorithm, where an optimization formula is solved to obtain the weighting coefficients. Then, detail information is acquired by applying the guided filter with the PANI and INT serving as guidance images. Finally, the fused HSI is generated by adding the acquired spatial information to the interpolated HSI. The detailed steps of the proposed approach are described as follows.

Fig. 1. Flowchart of the proposed approach.

2.1 Obtaining intensity information (INT) with adaptive weighted regression

 Since the original HSI and PANI have different sizes, the HSI is upsampled so that both have the same size. The adaptive weighted regression method is then used to obtain the INT image of the HSI, which denotes an approximate spatial component of the HSI:

\(INT=\sum_{i=1}^{m} \lambda_{i}\, HSU_{i}\)       (1)

 where \(HSU\) is the interpolated HSI, \(m\) is the number of bands of the HSI, \(\lambda_i\) are the weighting parameters, and \(HSU_i\) is the \(i\)th band of \(HSU\). To decrease spectral distortion, the optimal set of weights \(\{\lambda_i\}_{i=1,\ldots,m}\) can be obtained by solving the following optimization formula:

\(\min_{\lambda_{1}, \ldots, \lambda_{m}}\left\|PAN-\sum_{i=1}^{m} \lambda_{i}\, HSU_{i}\right\|^{2}\)       (2)

where \(PAN\) denotes the PANI. We employ the least squares method to solve the above optimization function:

\(J(\lambda)=\|PAN-HSU\,\lambda\|^{2}=(PAN-HSU\,\lambda)^{T}(PAN-HSU\,\lambda)\)       (3)

where \(\lambda=[\lambda_1,\lambda_2,\ldots,\lambda_m]^T\). Taking the derivative of Equation (3) with respect to \(\lambda\) and setting it to zero gives

\(\left(HSU^{T} \times HSU\right) \times \lambda-HSU^{T} \times PAN=0\)       (4)

 The weight vector \(\lambda\) can be obtained by solving Equation (4):

\(\lambda=\left(HSU^{T} \times HSU\right)^{-1} \times HSU^{T} \times PAN\)       (5)

where \(HSU \in R^{n \times m}\) and \(PAN \in R^{n \times 1}\) denote the interpolated HSI and the PANI, each column of \(HSU\) corresponding to one band, and \(n\) is the total number of pixels in one band. Thus the optimization problem (2) has a closed-form solution; since no iterative optimization steps are required, the amount of calculation is reduced.
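 For concreteness, the closed-form solution of Eqs. (2)-(5) can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming assumptions (one band per column of `hsu`, the PANI flattened into `pan`), not the authors' code:

```python
import numpy as np

def intensity_by_regression(hsu: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Sketch of Eqs. (1)-(5): hsu is the interpolated HSI of shape (n, m),
    one band per column; pan is the PANI flattened to shape (n,)."""
    # Least squares solves min_lambda ||pan - hsu @ lam||^2 (Eq. (2));
    # lstsq is a numerically stabler route to the closed form
    # lam = (HSU^T HSU)^{-1} HSU^T PAN of Eq. (5).
    lam, *_ = np.linalg.lstsq(hsu, pan, rcond=None)
    return hsu @ lam  # Eq. (1): INT = sum_i lambda_i * HSU_i
```

Because the weights come from a single linear solve rather than iterative optimization, the cost reduces to one \(m \times m\) system, consistent with the reduced-calculation claim above.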

2.2 Extracting spatial details with guided filter

 To extract sufficient spatial information from the PANI and INT, a novel guided filter strategy is proposed. The guided filter [21] has been successfully applied in several image processing fields, including flash/no-flash de-noising, compression, and so on [22-23]. It is an effective edge-preserving filter with a fast implementation. In this paper, a guided filter is first utilized to extract the spatial detail differences between the PANI and INT. Considering that these differences cannot represent the complete detail information, a guided filter is then utilized to capture a spatial structure component from the PANI as supplementary detail. As shown in Fig. 1, the detail information is extracted in the following three steps (a code sketch of the guided filter itself is given after the steps).

 1) A guided image filter is first applied to the PANI with the INT serving as the guidance image. It is assumed that the filtering output image GI is a linear transform of the guidance image INT in a square window \(\Omega_{k}\):

\(GI_{i}=a_{k}\, INT_{i}+b_{k}, \quad \forall i \in \Omega_{k}\)       (6)

where \(INT_i\) and \(GI_i\) represent the \(i^{th}\) pixel intensity of INT and GI. The size of the square window \(\Omega_k\) is \((2r+1) \times (2r+1)\), where \(r\) is an integer. \(a_k\) and \(b_k\) are constant in the square window; they are obtained by minimizing a cost function, the squared difference between the PANI and GI:

\(E\left(a_{k}, b_{k}\right)=\sum_{i \in \Omega_{k}}\left(\left(a_{k}\, INT_{i}+b_{k}-PAN_{i}\right)^{2}+\psi a_{k}^{2}\right)\)       (7)

where \(\psi\) denotes the regularization parameter. The parameters \(a_k\) and \(b_k\) are calculated via linear regression [24]:

\(a_{k}=\frac{\frac{1}{|\Omega|} \sum_{i\in\Omega_{k}} INT_{i}\, PAN_{i}-\mu_{k} \overline{PAN_{k}}}{\theta_{k}^{2}+\psi}\)       (8)

\(b_{k}=\overline{PAN_{k}}-a_{k} \mu_{k}\)       (9)

where \(\mu_k\) and \(\theta_k^2\) are the mean and variance of INT in \(\Omega_k\), \(|\Omega|\) is the number of pixels in \(\Omega_k\), and \(\overline{PAN_{k}}\) is the mean of the PANI in \(\Omega_k\). All overlapping windows \(\Omega_k\) that cover pixel \(i\) have different values of \((a_k,b_k)\), so the value of the output \(GI_i\) varies with the window in which \((a_k,b_k)\) are calculated. Thus, the values of \(a_k\) and \(b_k\) are averaged over all the overlapping windows, and the final guided filtered output image is given as follows:

\(GI_{i}=\overline{a_{i}}\, INT_{i}+\overline{b_{i}}\)       (10)

where \(\overline a_i=\frac{1}{|\Omega|}\sum_{k\in\Omega_i}a_k\) and \(\overline b_i=\frac{1}{|\Omega|}\sum_{k\in\Omega_i}b_k\).

 For clarity, we represent this filtering process as:

\(GI=f\left(PAN, INT, \gamma_{1}, \varepsilon_{1}\right)\)       (11)

where \(f\) denotes the guided filter function, \(\gamma_1\) is the filter size, and \(\varepsilon_1\) is the blur degree. Since the guidance image INT contains fewer details, we employ a relatively small \(\gamma_1\) and \(\varepsilon_1\). GI acquires the spatial information of the HSI, since the guided filter transfers the spatial information of the guidance image to the output image. Therefore, the spatial detail differences between the HSI and PANI can be easily acquired by subtracting \(GI\) from the PANI:

\(SD=PAN-GI\)       (12)

where \(SD\) denotes the detail differences between the PANI and INT.

 2) In order to obtain enough spatial information and maintain consistency, a guided filter is applied to the INT with the PANI serving as the guidance image. The details captured from the PANI serve as the supplementary information. Similar to the detail extraction process above, this procedure can be described as:

\(GP=f\left(INT, PAN, \gamma_{2}, \varepsilon_{2}\right)\)       (13)

where GP represents the guided filtered output, \(\gamma_2\) represents the filter size, and \(\varepsilon_2\) represents the blur degree. Here, we employ a relatively large \(\gamma_2\) and \(\varepsilon_2\) since the guidance PANI contains an enormous amount of spatial detail.

 3) Then, the complete spatial information is obtained:

\(T_{s}=\beta_{1} S D+\beta_{2} G P\)       (14)

where \(\beta_1\) and \(\beta_2\) are tradeoff parameters and \(T_s\) is the total spatial detail. The tradeoff parameters \(\beta_1\) and \(\beta_2\) \((0<\beta_1,\beta_2<1)\) control the amount of detail injected into the HSU image and directly influence the fusion performance. SD, the detail difference between the PANI and HSI, should be injected in larger proportion, while GP, the supplementary detail, should be injected in smaller proportion. So \(\beta_1\) takes a larger value close to 1, while \(\beta_2\) takes a smaller value close to 0.
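 The guided filter of Eqs. (6)-(10) used in steps 1) and 2) can be realized with box (mean) filters, as in [21]. The sketch below is an illustration under stated assumptions, not the authors' implementation: it treats the filter size \(\gamma\) as the window radius \(r\) and uses SciPy's uniform filter for the window means.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p: np.ndarray, guide: np.ndarray, r: int, eps: float) -> np.ndarray:
    """Filter image p using `guide` as the guidance image.
    r: window radius, window size (2r+1)x(2r+1); eps: regularizer psi."""
    p = np.asarray(p, dtype=float)
    guide = np.asarray(guide, dtype=float)
    win = 2 * r + 1
    mean_g = uniform_filter(guide, win)                          # mu_k
    mean_p = uniform_filter(p, win)                              # window mean of p
    cov_gp = uniform_filter(guide * p, win) - mean_g * mean_p
    var_g = uniform_filter(guide * guide, win) - mean_g ** 2     # theta_k^2
    a = cov_gp / (var_g + eps)                                   # Eq. (8)
    b = mean_p - a * mean_g                                      # Eq. (9)
    # Average (a_k, b_k) over all windows covering each pixel, Eq. (10)
    return uniform_filter(a, win) * guide + uniform_filter(b, win)
```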

2.3 Generating the fused HSI

 The extracted complete spatial information is finally injected into the upsampled HSI to obtain the fused HSI:

\(F_{i}=H S U_{i}+T_{s}\)       (15)

where \(F\) is the fused HSI and \(F_i\) is the \(i^{th}\) band of \(F\).
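 Putting the pieces together, Eqs. (11)-(15) reduce to two guided-filter passes and a weighted sum. The following sketch reuses the two helper functions above; the parameter defaults echo Section 3, but the function and variable names, and the reading of \(\gamma\) as a radius, are assumptions for illustration:

```python
import numpy as np

def pansharpen(hsu: np.ndarray, pan: np.ndarray, r1: int = 15, r2: int = 58,
               eps: float = 1e-6, beta1: float = 0.8, beta2: float = 0.02) -> np.ndarray:
    """hsu: interpolated HSI of shape (H, W, m); pan: PANI of shape (H, W)."""
    h, w, m = hsu.shape
    intensity = intensity_by_regression(hsu.reshape(-1, m), pan.ravel()).reshape(h, w)
    gi = guided_filter(pan, intensity, r1, eps)   # Eq. (11): INT guides PANI
    sd = pan - gi                                 # Eq. (12): detail difference
    gp = guided_filter(intensity, pan, r2, eps)   # Eq. (13): PANI guides INT
    ts = beta1 * sd + beta2 * gp                  # Eq. (14): total details
    return hsu + ts[:, :, None]                   # Eq. (15): band-wise injection
```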

3. Experimental Results and Analysis

 The performance of several comparison algorithms and our method is evaluated by conducting experiments on three datasets captured by different sensors. In addition to comparing subjective performance on the visual renderings, quality evaluation indexes are used to objectively evaluate the performance of the different methods. For the subjective evaluation, spatial quality can be judged visually, but slight color changes are hard to notice by subjective evaluation alone; the quality evaluation indexes allow an objective comparison of the spatial and spectral quality of the different methods. In order to verify the effectiveness of the proposed method, the results of the proposed method and several representative pansharpening methods, namely the Gram-Schmidt (GS) [4], GS Adaptive (GSA) [10], principal component analysis (PCA) [7-9], guided filter PCA (GFPCA) [18], coupled nonnegative matrix factorization (CNMF) [16] and smoothing filter-based intensity modulation (SFIM) [12] methods, are compared in this section. In the experiments, the filter sizes and the blur degrees of the guided filter are set to \(\gamma_1=15\), \(\gamma_2=58\), and \(\varepsilon_1=\varepsilon_2=10^{-6}\).

3.1 Quality measures

 For quantitative comparison, cross correlation (CC) [25], spectral angle mapper (SAM) [3], erreur relative globale adimensionnelle de synthèse (ERGAS) [26] and root mean squared error (RMSE) [3] are utilized to assess the performance of the different methods.

 Below, we provide the definitions of these indexes, operating on the fused image \(FU\in R^{m\times n}\) and the reference image \(R\in R^{m\times n}\), where \(m\) denotes the number of bands and \(n\) the number of pixels. In the definitions, \(FU_k\) and \(R_k\) denote the \(k^{th}\) columns of \(FU\) and \(R\), respectively; one column of the matrix corresponds to one spectral vector of the HSI. \(FU^i\) and \(R^i\) denote the \(i^{th}\) rows of \(FU\) and \(R\), respectively; each band of the HSI is represented as a row of the matrix. The matrices \(P,Q\in R^{1 \times n}\) represent two generic single-band images, and \(P_l\) is the \(l\)th pixel of \(P\).

 (1) cross correlation (CC)

 CC measures the degree of geometric distortion between the results obtained by the different methods and the reference HSI. The smaller the geometric distortion between the two images, the closer the CC is to 1.

\(CC(FU, R)=\frac{1}{m} \sum_{i=1}^{m} CCS\left(FU^{i}, R^{i}\right)\)       (16)

where CCS is the cross correlation between two generic single-band images, formulated as

\(CCS(P, Q)=\frac{\sum_{l=1}^{n}\left(P_{l}-\mu_{P}\right)\left(Q_{l}-\mu_{Q}\right)}{\sqrt{\sum_{l=1}^{n}\left(P_{l}-\mu_{P}\right)^{2} \sum_{l=1}^{n}\left(Q_{l}-\mu_{Q}\right)^{2}}}\)       (17)

where \(\mu_{P}=(1/n) \sum_{l=1}^{n} P_{l}\) is the sample mean of \(P\), and \(\mu_Q\) is defined analogously.

 (2) Spectral angle mapper (SAM)

 SAM measures the average angular change over all the spectral vectors. The closer the value of SAM is to 0, the less the spectral distortion. It is defined as

\(SAM(FU, R)=\frac{1}{n} \sum_{k=1}^{n} SAM\left(FU_{k}, R_{k}\right)\)       (18)

where, given two spectral vectors \(M, N\in R^m\),

\(\operatorname{SAM}(M, N)=\arccos \left(\frac{\langle M, N\rangle}{\|M\|_{2}\|N\|_{2}}\right)\)       (19)

where \(\langle M, N\rangle=M^{T} N\) is the inner product of \(M\) and \(N\). SAM is a spectral index: the smaller SAM is, the more similar the spectra of the pansharpening result are to those of the reference image.

 (3) Root mean squared error (RMSE)

 The RMSE is a global index reflecting the difference between FU and R. It is formulated as follows:

\(\operatorname{RMSE}(FU, R)=\frac{\|FU-R\|_{F}}{\sqrt{n \cdot m}}\)       (20)

where \(\|FU-R\|_F=\sqrt{\operatorname{trace}((FU-R)^T(FU-R))}\) is the Frobenius norm. A lower value of RMSE indicates a better pansharpened image.

 (4) Erreur relative globale adimensionnelle de synthèse(ERGAS)

 The ERGAS is a normalized version of RMSE. It is expressed as

\(ERGAS(FU, R)=100\, r \sqrt{\frac{1}{m} \sum_{j=1}^{m}\left(\frac{RMSE_{j}}{\mu_{j}}\right)^{2}}\)       (21)

where \(r\) denotes the ratio between the spatial resolutions of the PANI and HSI, formulated as

\(r=\frac{\text{spatial resolution of PAN}}{\text{spatial resolution of HS}}\)       (22)

where \(RMSE_j=\|FU^j-R^j\|_F/\sqrt{n}\), and \(\mu_j\) is the sample mean of the \(j^{th}\) band of \(R\).
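 Under the matrix layout of Section 3.1 (bands as rows, pixels as columns), the four indexes are a few lines each in NumPy. The sketch below is for illustration only and assumes both images are co-registered float arrays of shape (m, n):

```python
import numpy as np

def cc(fu: np.ndarray, r: np.ndarray) -> float:
    """Eqs. (16)-(17): mean per-band cross correlation."""
    fu_c = fu - fu.mean(axis=1, keepdims=True)
    r_c = r - r.mean(axis=1, keepdims=True)
    num = (fu_c * r_c).sum(axis=1)
    den = np.sqrt((fu_c ** 2).sum(axis=1) * (r_c ** 2).sum(axis=1))
    return float((num / den).mean())

def sam(fu: np.ndarray, r: np.ndarray) -> float:
    """Eqs. (18)-(19): mean spectral angle over the pixel spectra (columns)."""
    inner = (fu * r).sum(axis=0)
    norms = np.linalg.norm(fu, axis=0) * np.linalg.norm(r, axis=0)
    return float(np.arccos(np.clip(inner / norms, -1.0, 1.0)).mean())

def rmse(fu: np.ndarray, r: np.ndarray) -> float:
    """Eq. (20): Frobenius norm of the error, normalized by sqrt(n*m)."""
    return float(np.linalg.norm(fu - r) / np.sqrt(fu.size))

def ergas(fu: np.ndarray, r: np.ndarray, ratio: float) -> float:
    """Eqs. (21)-(22): ratio = PAN resolution / HS resolution."""
    n = fu.shape[1]
    rmse_j = np.linalg.norm(fu - r, axis=1) / np.sqrt(n)  # per-band RMSE
    mu_j = r.mean(axis=1)                                  # per-band mean of R
    return float(100.0 * ratio * np.sqrt(((rmse_j / mu_j) ** 2).mean()))
```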

3.2 Analysis of the influence of parameters β1 and β2

 In the experiments, β1 and β2 are two parameters that control the amount of injected detail and directly influence the fusion result. In order to analyze the influence of the parameters β1 and β2 (0 ≤ β1, β2 ≤ 1) on the results more objectively and directly, we carry out an experimental analysis on the Reflective Optics System Imaging Spectrometer (ROSIS) dataset, varying β1 and β2. Considering that the difference in spatial information between the PANI and INT image takes a high proportion of the total spatial detail, β1 is set to a relatively large value. Since the spatial detail captured from the PANI serves as supplementary information, β2 should be set to a small value.

 As shown in Table 1, we first fix β2 at 0.05. According to the quality indexes, the performance of the proposed approach improves markedly as β1 increases from 0.2 to 0.8, but degrades when β1 equals 1. We conclude that the proposed algorithm performs best when β1 = 0.8. Fixing β1 = 0.8, the values of β2 are then varied between 0 and 0.08. As shown in Table 2, our method obtains the best objective indexes when β2 = 0.02. We have performed the same experimental analysis on images acquired by a variety of different sensors and found that β1 = 0.8, β2 = 0.02 also give the best performance there. Therefore, β1 = 0.8 and β2 = 0.02 are set as the default parameters of the proposed method.

Table 1. Objective performance of the proposed method with different β1 settings (β2 = 0.05).

Table 2. Objective performance of the proposed method with different β2 settings (β1 = 0.8).
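 The parameter analysis above amounts to a coarse grid search. A hypothetical sketch, assuming the pansharpen and rmse helpers from the earlier sections and a reference cube ref of shape (H, W, m), might read:

```python
import numpy as np

# Hypothetical sweep mirroring Table 1: beta2 fixed at 0.05, beta1 varied.
scores = {}
for beta1 in (0.2, 0.4, 0.6, 0.8, 1.0):
    fused = pansharpen(hsu, pan, beta1=beta1, beta2=0.05)
    # Flatten to the (m, n) layout expected by the metric helpers.
    m = fused.shape[-1]
    scores[beta1] = rmse(fused.reshape(-1, m).T, ref.reshape(-1, m).T)
best_beta1 = min(scores, key=scores.get)  # 0.8 in the paper's experiments
```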

3.3 Experimental results with the synthetic dataset

 To verify the validity of the algorithm, we experimented separately on synthetic and real datasets. The first set of synthetic data was collected by ROSIS [1] and is denoted as the PaviaU dataset. The collected HSI covers the spectral range 0.4-0.9 μm, and each HSI used in the experiment contains 103 bands. The results for the PaviaU dataset are presented in Fig. 2; the reference HSI is displayed in Fig. 2(a). The simulated HSI and PANI used for comparing the different pansharpening methods are generated by Wald's protocol [27]. The dimensions of the simulated HSI and PANI are 41x35 and 205x175, respectively.

Fig. 2. Subjective visual comparison of PaviaU dataset (a) Reference HSI. (b) Simulated PANI. (c) Interpolated HSI. (d) GS. (e) GSA. (f) PCA. (g) GFPCA. (h) CNMF. (i) SFIM. (j) Proposed.

 The subjective visual results obtained by the different comparison methods are presented in Fig. 2(d)-(j). Carefully analyzing and comparing the experimental results with the interpolated HSI displayed in Fig. 2(c), we can conclude that each pansharpening method improves the quality of the HSI to varying degrees. The GS method maintains the spectral information of the original HSI to the maximum extent, but careful observation of its result in Fig. 2(d) shows that the edge information of some buildings and vegetation is ambiguous. The GSA approach improves the spatial information slightly; however, its result shown in Fig. 2(e) suffers from severe spectral distortion. Similar to the GS method, the PCA method preserves the spectral information of the original HSI well, but the image texture is blurred because the edge texture information of the target objects is missing (Fig. 2(f)). Although the GFPCA method is simple, its result shown in Fig. 2(g) is unsatisfactory in both spectral preservation and edge texture saturation. Fig. 2(h) shows that the CNMF method injects too much spatial detail into the target objects, making the target edges look too sharp; in addition, the spectral distortion of the CNMF method on this dataset is obvious. Fig. 2(i) and Fig. 2(j) show that both SFIM and the proposed method preserve the spectral information of the original HSI well. However, after careful comparison, it is not difficult to find that the image obtained by our method has clearer texture, richer edge information and better target details than the result of SFIM.

Table 3. Quality metrics for Fig. 2 (β1 = 0.8, β2 = 0.02).

 Table 3 presents the objective quality evaluation indexes of the different methods on the PaviaU dataset. We consider the four metrics together to evaluate spectral and spatial quality; for convenience of comparison, the optimal value for each index is marked in bold. The proposed method achieves good performance on all four quality metrics, ranking first for CC, RMSE, and ERGAS. The maximum CC value indicates that the spatial distortion of the proposed method's result is minimal, and the smallest RMSE and ERGAS values indicate that our method performs best overall. The SAM of our method is slightly higher than that of the SFIM method; the results displayed in Fig. 2(i) and Fig. 2(j) both preserve the spectral information well, but the quantitative analysis shows that the proposed method provides better spatial quality than SFIM. These outstanding results demonstrate that our algorithm provides a superior fused result compared to the other six methods.

 In order to compare the spectral preservation and spatial injection properties of the different methods, Fig. 3 shows the per-band RMSE evaluation for each algorithm. A lower RMSE indicates a better pansharpened result. It can be observed that the proposed method has the lowest RMSE on almost every band, which validates that the proposed method obtains excellent spectral preservation and spatial injection properties.

Fig. 3. RMSE evaluation of each band of different methods.

 In order to compare the spectral preservation properties of the different methods more intuitively, the spectral curve corresponding to the pixel marked in yellow in Fig. 2(a) is extracted, and the difference between the spectral vector of each method's result at that point and the spectral vector of the reference image is shown in Fig. 4. A dotted line is drawn as the reference line for clarity. The spectral radiance difference vector of the GFPCA algorithm deviates considerably from the dotted line. The proposed method is closest to the dotted line as a whole and has the smallest fluctuation range among the comparison methods, which validates its outstanding spectral preservation performance.

Fig. 4. Comparison of spectral radiance difference vectors obtained by different methods at one spatial location which is marked in Fig. 2(a).

 The Moffett Field dataset [1], provided by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), is utilized in the second experiment to demonstrate the potential of the proposed method on synthetic datasets. The HSIs used in this experiment consist of 176 bands in the spectral range 0.4-2.5 μm. The fused images of the Moffett Field dataset are presented in Fig. 5. As in the first experiment, the synthetic PANI and HSI are produced by Wald's protocol [27]. The sizes of the HSI and PANI are 37x79 and 185x395, respectively.

Fig. 5. Subjective visual comparison of Moffett field dataset (a) Reference HSI. (b) Interpolated HSI. (c) Simulated PANI. (d) GS. (e) GSA. (f) PCA. (g) GFPCA. (h) CNMF. (i) SFIM. (j) Proposed.

 Fig. 5 shows the reference HSI, the synthetic HSI and PANI, and the results of the different algorithms. In order to compare the advantages and disadvantages of the different algorithms, we analyze the differences in spectral preservation and edge texture detail of their results. Fig. 5(d)-(f) show that the colors of both foreground targets and background information in the fused images obtained by the GS, GSA, and PCA algorithms are darker than those of the reference. Apart from the white region of the urban area, the spectral information of the other regions differs greatly from that of the original HSI; in particular, the spectral distortion of the PCA method is serious in the light regions. The spatial quality of the GS and GSA results in urban areas is very good, without edge blur or halo. The performance of the GFPCA method on the Moffett Field dataset is clearly very poor: from Fig. 5(g), it can be observed that the color distortion is serious and the texture information is fuzzy; in particular, the ground object information in urban areas cannot be distinguished. Fig. 5(h)-(j) show that the CNMF, SFIM and proposed algorithms preserve the spectral information of the original HSI to the greatest extent; the colors of the three images are very close to the reference image in both rural and urban areas. However, it is not difficult to find that the edge information of ground objects in Fig. 5(h) and Fig. 5(i), obtained by CNMF and SFIM, is fuzzy, especially the texture of buildings in urban areas and river edges in rural areas. In contrast, the proposed method shows great potential in spectral preservation and injects an appropriate amount of spatial information: the fused image shown in Fig. 5(j) is neither blurred by too little texture information nor over-sharpened by too much injected spatial detail.

Table 4. Quality metrics for Fig. 5 (β1 = 0.8, β2 = 0.02).

 Table 4 lists the objective quality evaluation indexes of the different methods on the Moffett Field dataset. Considering the four metrics together, the proposed algorithm yields the best values for all the quality indexes: the CC value is closest to 1, and the values of SAM, RMSE and ERGAS are the smallest. The SFIM algorithm ranks second. The quantitative analysis indicates that the proposed algorithm achieves better color fidelity and spatial quality than the other six fusion methods.

Fig. 6. RMSE evaluation of each band of different methods.

 To further compare the fusion performance of each method, the per-band RMSE versus band number is shown in Fig. 6. The RMSE curves of the proposed algorithm and SFIM almost coincide over bands 35-95; however, over bands 95-176, the RMSE values of our algorithm are significantly lower than those of the SFIM method. This comparison confirms the outstanding spectral preservation and spatial injection properties of the proposed method.

Fig. 7. Comparison of spectral radiance difference vectors obtained by different methods at one spatial location which is marked in Fig. 5(a).

 As in the first experiment, one random spatial location, marked in red in Fig. 5(a), is chosen to verify the spectral fidelity of the different pansharpening algorithms. Fig. 7 compares the spectral radiance difference vectors between the reference HSI and the result of each method. It can be clearly observed that the spectral radiance difference vector of the proposed algorithm almost coincides with the benchmark, which validates that the proposed algorithm preserves the spectral information of the original HSI to the greatest extent among the compared methods.

3.4 Experimental results with the real dataset

 The first two experiments were carried out on synthetic datasets. To show that the method is also applicable to real data, we selected the Hyperion dataset [1] for the third experiment. The Hyperion dataset was acquired by the EO-1 spacecraft, which provides PANI at 10-m resolution and HSI at 30-m resolution. The sizes of the PANI and HSI are 216x174 and 72x58, respectively. The HSI used in the experiment contains 128 bands across the spectral range 0.4-2.5 μm.

 Fig. 8 exhibits a pair of Hyperion images and the results of the seven comparison algorithms. As before, the edge and texture information of the target objects in Fig. 8(d) (GS) and Fig. 8(f) (PCA) is sufficient, but the spectral fidelity is poor. It can be clearly observed that the CNMF (Fig. 8(h)), GSA (Fig. 8(e)), SFIM (Fig. 8(i)) and proposed (Fig. 8(j)) algorithms generate results whose colors match well with the original HS image; however, the CNMF, GSA and SFIM methods exhibit spatial distortion to different degrees. From Fig. 8(g), we can observe that the result of the GFPCA algorithm is seriously blurred, since the edge and texture information of the target objects is insufficient. From the observation and analysis of these results, it can be concluded that the performance of the proposed algorithm is superior to that of the other comparison algorithms: the landscape structure is more salient and the spectral fidelity is better preserved in our pansharpening result than in those of the other six methods.

Fig. 8. Subjective visual comparison of Hyperion dataset (a) Low resolution HSI. (b) PANI. (c) Interpolated HSI. (d) GS. (e) GSA. (f) PCA. (g) GFPCA. (h) CNMF. (i) SFIM. (j) Proposed.

 Similar to the previous experiments, two pixel locations in Fig. 8(b) were randomly selected and marked in red to compare the spectral preservation performance of the different algorithms intuitively. The spectral radiance difference vectors between the reference HSI and the result of each algorithm are shown in Fig. 9, with the dotted line again serving as the benchmark. The spectral vector difference curves of the proposed method in Fig. 9(a) and Fig. 9(b) are the closest to the benchmark line; in particular, the curve in Fig. 9(a) almost coincides with the benchmark line and fluctuates less than the other curves, which further demonstrates the outstanding spectral fidelity of our method.

Fig. 9. Comparison of spectral radiance difference vectors obtained by different methods at two spatial locations which are marked in Fig. 8(b).

4. Conclusion

 In this paper, we proposed a simple and effective hyperspectral pansharpening algorithm by combining the advantages of adaptive weighted regression and the guided filter. The adaptive weighted regression method effectively maintains the spatial information, whereas the guided filter behaves well near edges. More importantly, the proposed algorithm simultaneously considers the characteristics of the PANI and HSI, and reduces distortion by using the guided filter with the PANI and the INT image serving as guidance images respectively, so that the spatial information is extracted from the PANI as well as the HSI. Experiments performed on two synthetic HS datasets and one real HS dataset show that the proposed algorithm is more effective in improving edge and texture information while preserving spectral information than the compared algorithms. In the future, adaptive selection of the coefficients (β1, β2) could be further researched.

References

  1. A. Mookambiga, and V. Gomathi, "Comprehensive review on fusion techniques for spatial information enhancement in hyperspectral imagery," Multidimensional Systems and Signal Processing, vol. 27, no. 4, pp. 863-889, 2016. https://doi.org/10.1007/s11045-016-0415-2
  2. Z. L. Jing, H. Pan, and G. Xiao, "Application to Environmental Surveillance: Dynamic Image Estimation Fusion and Optimal Remote Sensing with Fuzzy Integral," Intelligent Environmental Sensing, vol. 13, pp. 159-189, 2015. https://doi.org/10.1007/978-3-319-12892-4_7
  3. L. Loncan, L. B. Almeida, J. M. Bioucas-Dias, X. Briottet, J. Chanussot, N. Dobigeon, S. Fabre, W. Z. Liao, G. A. Licciardi, M. Simoes, J. Tourneret, M. A. Veganzones, G. Vivone, Q. Wei, and N. Yokoya, "Hyperspectral pansharpening: a review," IEEE Geoscience and Remote Sensing Magazine, vol. 3, no. 3, pp. 27-46, 2015. https://doi.org/10.1109/MGRS.2015.2440094
  4. C. Laben, and B. Brower, "Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening," U.S. Patent 6 011 875, Jan. 4, 2000.
  5. W. Carper, T. M. Lillesand, and P. W. Kiefer, "The use of Intensity-Hue-Saturation transformations for merging SPOT panchromatic and multispectral image data," Photogrammetric Engineering and Remote Sensing, vol. 56, no. 4, pp. 459-467, 1990.
  6. T. M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, "A new look at IHS-like image fusion methods," Information Fusion, vol. 2, no. 3, pp. 117-186, 2001.
  7. P. S. Chavez, and A. Y. Kwarteng, "Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis," Photogrammetric Engineering and Remote Sensing, vol. 55, no. 3, pp. 339-348, 1989.
  8. V. Shettigara, "A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set," Photogrammetric Engineering and Remote Sensing, vol. 58, no. 5, 561-567, 1992.
  9. V. P. Shah, N. Younan, and R. L. King, "An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1323-1335, 2008. https://doi.org/10.1109/TGRS.2008.916211
  10. B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS+pan data," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3230-3239, 2007. https://doi.org/10.1109/TGRS.2007.901007
  11. C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301-1312, 2008. https://doi.org/10.1109/TGRS.2007.912448
  12. J. G. Liu, "Smoothing filter based intensity modulation: A spectral preserve image fusion technique for improving spatial details," International Journal of Remote Sensing, vol. 21, no. 18, pp. 3461-3472, 2000. https://doi.org/10.1080/014311600750037499
  13. B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, "MTF-tailored multiscale fusion of high-resolution MS and pan imagery," Photogrammetric Engineering and Remote Sensing, vol. 72, no. 5, pp. 591-596, 2006. https://doi.org/10.14358/PERS.72.5.591
  14. G. Vivone, R. Restaino, M. D. Mura, G. Licciardi, and J. Chanussot, "Contrast and error-based fusion schemes for multispectral image pansharpening," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 5, pp. 930-934, 2014. https://doi.org/10.1109/LGRS.2013.2281996
  15. S. Baronti, B. Aiazzi, M. Selva, A. Garzelli, and L. Alparone, "A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery," IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 3, pp. 446-453, 2011. https://doi.org/10.1109/JSTSP.2011.2104938
  16. N. Yokoya, T. Yairi, and A. Iwasaki, "Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 2, pp. 528-537, 2012. https://doi.org/10.1109/TGRS.2011.2161320
  17. R. C. Hardie, M. T. Eismann, and G. L. Wilson, "MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor," IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1174-1184, 2004. https://doi.org/10.1109/TIP.2004.829779
  18. W. Liao, X. Huang, F. Coillie, S. Gautama, A. Pizurica, W. Philips, H. Liu, T. Zhu, M. Shimoni, G. Moser, and D. Tuia, "Processing of multiresolution thermal hyperspectral and digital color data: Outcome of the 2014 IEEE GRSS data fusion contest," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2984-2996, 2015. https://doi.org/10.1109/JSTARS.2015.2420582
  19. B. Jin, Z. L. Jing, and R. Pan, "Multi-modality Image Fusion via Generalized Riesz-wavelet Transformation," KSII Transactions on Internet and Information Systems, vol. 8, no. 11, pp. 4118-4136, 2014. https://doi.org/10.3837/tiis.2014.11.026
  20. H. Pan, Z. L. Jing, L. F. Qiao, and M. Z. Li, "Visible and infrared image fusion using L0-generalized total variation model," Science China Information Sciences, vol. 61, no. 4, 2018.
  21. K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, 2013. https://doi.org/10.1109/TPAMI.2012.213
  22. Z. Li, and J. Zheng, "Single Image De-Hazing Using Globally Guided Image Filtering," IEEE Transactions on Image Processing, vol. 30, no. 2, pp. 228-242, 2008.
  23. K. He, J. Sun, and X. Tang, "Single Image Haze Removal Using Dark Channel Prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, 2011. https://doi.org/10.1109/TPAMI.2010.168
  24. T. Hastie, R. Tibshirani, and J.H. Friedman, The Elements of Statistical Learning, Springer, 2003.
  25. X. X. Zhu, and R. Bamler, ''A sparse image fusion algorithm with application to pan-sharpening,'' IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 5, pp. 2827-2836, 2013. https://doi.org/10.1109/TGRS.2012.2213604
  26. L. Wald, Data Fusion: Definitions and Architectures-Fusion of Images of Different Spatial Resolutions, Les Presses de l'Ecole des Mines, 2002.
  27. L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," Photogrammetric Engineering and Remote Sensing, vol. 63, no. 6, pp. 691-699, 1997.