Deterministic Bipolar Compressed Sensing Matrices from Binary Sequence Family

  • Lu, Cunbo (Institute of Applied Physics and Computational Mathematics) ;
  • Chen, Wengu (Institute of Applied Physics and Computational Mathematics) ;
  • Xu, Haibo (Institute of Applied Physics and Computational Mathematics)
  • Received : 2019.09.25
  • Accepted : 2020.04.01
  • Published : 2020.06.30

Abstract

For compressed sensing (CS) applications, it is important to construct deterministic measurement matrices with good practical features, including good sensing performance, low memory cost, low computational complexity and easy hardware implementation. In this paper, a deterministic construction method for bipolar measurement matrices is presented based on a binary sequence family (BSF). This method is of interest for sparse signal recovery and block CS of images. Coherence is an important tool to describe and compare the performance of various sensing matrices, and lower coherence implies higher reconstruction accuracy. The coherence of the proposed measurement matrices is analyzed and shown to be smaller than that of the corresponding Gaussian and Bernoulli random matrices. Simulation experiments show that the proposed matrices outperform the corresponding Gaussian, Bernoulli, binary and chaotic bipolar matrices in reconstruction accuracy. Meanwhile, the proposed matrices reduce the reconstruction time compared with their Gaussian counterpart. Moreover, the proposed matrices are very efficient in terms of sensing performance, memory, complexity and hardware realization, which is beneficial to practical CS.

1. Introduction

Different from the Nyquist sampling theorem, compressed sensing (CS) is a revolutionary signal sampling framework proposed by Candès, Romberg, Tao and Donoho in 2006 [1,2]. It improves sampling efficiency by sampling sparse signals at a rate far lower than the Nyquist rate. Its core idea is to use a measurement matrix to project an original high-dimensional sparse signal onto a lower-dimensional space. By exploiting the sparsity property, the original high-dimensional sparse signal can be reconstructed accurately from the lower-dimensional measurement vector with high probability by solving an optimization problem. The idea of CS has attracted extensive attention and has been applied to various research areas, such as signal processing, big data, wireless networks, image encryption and computed tomography.

The process of CS can be viewed as having two stages: data sampling and signal recovery. In CS theory, the design of the measurement matrix plays an important role. In data sampling, if the reconstruction accuracy remains unchanged, a better measurement matrix results in a smaller number of measurements. In signal recovery, if the number of measurements remains unchanged, a better measurement matrix results in higher reconstruction accuracy. The properties of the measurement matrix decide whether or not all the significant information of the original signal is captured and preserved by the projected measurements during the dimensionality reduction. The Restricted Isometry Property (RIP) is an important criterion proposed by Candès and Tao [3]. As long as the measurement matrix satisfies the RIP, the original signal can be reconstructed accurately from the lower-dimensional measurement vector with high probability. Coherence is another important criterion for constructing CS matrices. Bourgain et al. [4] related the coherence and the RIP: low coherence implies the RIP. The RIP [5-9] and coherence [10-22] are both important tools to analyze the properties of measurement matrices. In this paper, coherence is adopted to analyze the properties of the proposed measurement matrices, because it is easier to compute.

Existing measurement matrices can be classified into two categories: random measurement matrices and deterministic measurement matrices. In scientific research, the most widely used measurement matrices are random matrices, such as Gaussian or Bernoulli ones. However, in random matrices, the value of every element is independent and identically distributed (i.i.d.) from a certain probability distribution, so randomness is inherent. In the generation process of a random matrix, all elements must be stored and the process is repeated when a new realization is needed, which costs considerable storage resources. Random number generation also has very high hardware requirements, thus limiting practical CS applications. These deficiencies can be overcome by deterministic measurement matrices, where all elements are precomputed and deterministic. Compared with random matrices, deterministic matrices get rid of the randomness. In the generation process of a deterministic matrix, the computation of every element may require many complex mathematical operations, but all elements can be precomputed and generated on the fly only once, thus providing storage efficiency. In recent years, many researchers have utilized various techniques to construct deterministic measurement matrices. In [10], Li and Ge constructed deterministic measurement matrices based on near orthogonal systems. In [11], Zeng et al. introduced a deterministic construction named TSCM, which combines an orthonormal matrix and a chaotic-based Toeplitz one. In [12], Zhang et al. constructed a class of sparse binary deterministic measurement matrices by using protograph low-density parity-check (LDPC) codes. In [13], Zhang et al. presented deterministic bipolar measurement matrices arising from Legendre sequences. In [14], Tian et al. proposed a deterministic construction for orthogonal-gradient measurement matrices based on equiangular tight frame theory. In [15], Naidu et al. constructed deterministic measurement matrices based on Euler Squares. In [16], Sasmal et al. proposed a specialized composition rule based on the properties of existing binary matrices to produce optimal deterministic binary CS matrices. In [17], Naidu et al. related the construction of deterministic measurement matrices to extremal set theory. In [18], Lu et al. studied the optimal construction of deterministic binary CS matrices with arbitrarily given size by using ideas from bipartite graphs. In [19,20], Gan et al. constructed deterministic measurement matrices based on Chebyshev chaotic sequences and topologically conjugate chaotic systems, respectively. In [21], Liu et al. proposed deterministic measurement matrices based on Bose balanced incomplete block designs and used an embedding operation to develop more flexibility. In [22], Wang et al. provided deterministic CS matrices based on optimal codebooks and specific codes. In [23], Liu et al. studied deterministic binary LDPC measurement matrices from complete protographs. In [24], Wang et al. constructed deterministic measurement matrices from second-order Reed-Muller sequences. In [25], Hsieh et al. designed a deterministic measurement matrix inspired by the sparse fast Fourier transform. In [26], Fardad et al. designed low-complexity hardware for generating a deterministic measurement matrix based on the Euclidean geometry LDPC code construction.

In this paper, based on the binary sequence family (BSF) in [27], we construct a class of deterministic bipolar measurement matrices named BSFDBM. A trace representative function is first chosen to produce the BSF. Then, by numeric conversion, the BSF is converted to the corresponding bipolar sequence family. This process is repeated with another primitive field element to generate a second bipolar sequence family. By putting all sequences of the two bipolar sequence families together as column vectors, the proposed BSFDBM matrix is finally obtained.

The linear feedback shift register (LFSR) implementation of BSFDBM matrices is also given and the proposed BSFDBM matrices are proved to have smaller coherence than the corresponding Gaussian and Bernoulli random matrices. Moreover, the corresponding practical features of BSFDBM are analyzed and compared. Simulation experiments show that the proposed BSFDBM matrices outperform their Gaussian, Bernoulli, binary [18] and chaotic bipolar [9] counterparts in reconstruction accuracy with respect to one-dimensional sparse signals and different kinds of images. Meanwhile, the proposed BSFDBM matrices can reduce the reconstruction time compared with their Gaussian counterpart.

The remainder of this paper is organized as follows. Section 2 introduces the basic theory about CS and finite field. Section 3 presents deterministic construction procedure of BSFDBM matrices and related LFSR implementation. Section 4 uses the coherence to analyze the proposed BSFDBM matrices and compares the practical features of BSFDBM with other constructions. Numerical simulations are given to investigate the performance of proposed BSFDBM matrices in Section 5. Finally, Section 6 concludes this paper.

2. Preliminaries

2.1 Compressed Sensing

Suppose \(\mathbf{x}=\left\{x_{i}\right\}_{i=1}^{N} \in \mathbf{R}^{N}\) is a k-sparse original signal, where \(\|\mathbf{x}\|_{0}=\left|\left\{i \mid x_{i} \neq 0\right\}\right| \leq k\). The observation signal \(\mathbf{y} \in \mathbf{R}^{M}\) is obtained from its lower-dimensional linear projection, where \(M \ll N\). The mathematical relationship between \(\mathbf{x} \in \mathbf{R}^{N}\) and \(\mathbf{y} \in \mathbf{R}^{M}\) can be expressed as \(\mathbf{y}=\mathbf{A}\mathbf{x}\), where \(\mathbf{A} \in \mathbf{R}^{M \times N}\) is called the measurement matrix. For CS, this linear projection process is also the data sampling process, whereas the process of signal recovery is nonlinear. The original signal \(\mathbf{x} \in \mathbf{R}^{N}\) can be reconstructed accurately by solving the \(\ell_0\) and \(\ell_1\) minimization problems given by (1) and (2), respectively, where \(\|\mathbf{x}\|_{1}=\sum_{i=1}^{N}\left|x_{i}\right|\).

\(\min _{\mathbf{x}}\|\mathbf{x}\|_{0} \text { subject to } \mathbf{y}=\mathbf{A x},\)       (1)

\(\min _{\mathbf{x}}\|\mathbf{x}\|_{1} \text { subject to } \mathbf{y}=\mathbf{A} \mathbf{x},\)       (2)

In problems (1) and (2), the sparsest estimate of x can be obtained by the orthogonal matching pursuit (OMP) algorithm [28] and the basis pursuit (BP) algorithm [29], respectively. The RIP plays an important role in CS [2,15], because it establishes the equivalence between problems (1) and (2).
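As a concrete illustration of the greedy route to problem (1), the following minimal OMP sketch (our own, not taken from [28]) recovers a k-sparse x from y = Ax with NumPy; it assumes the columns of A have equal norm, as is the case for bipolar matrices.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: estimate a k-sparse x from y = A @ x."""
    M, N = A.shape
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        # (all columns of a bipolar matrix have equal norm, so no normalization)
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected columns, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat

# small usage check with a random bipolar placeholder matrix
rng = np.random.default_rng(0)
A = rng.choice([-1.0, 1.0], size=(127, 256))
x = np.zeros(256)
x[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
print(np.linalg.norm(omp(A, A @ x, 10) - x) < 1e-6)
```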

Coherence is another important criterion for constructing CS matrices.

Definition 2.1 Suppose \(\mathbf{a}_{1}, \mathbf{a}_{2}, \cdots, \mathbf{a}_{N}\) are the column vectors of matrix A; then its coherence µ(A) is defined as

\(\mu(\mathbf{A})=\max _{1 \leq i \neq j \leq N} \frac{\left|\left\langle\mathbf{a}_{i}, \mathbf{a}_{j}\right\rangle\right|}{\left\|\mathbf{a}_{i}\right\|_{2} \cdot\left\|\mathbf{a}_{j}\right\|_{2}},\)       (3)

where \(\left\langle\mathbf{a}_{i}, \mathbf{a}_{j}\right\rangle=\mathbf{a}_{i}^{T} \mathbf{a}_{j}.\)

As seen in [12,30], if \(k<\frac{1}{2}\left[1+\frac{1}{\mu(\mathbf{A})}\right]\), any k-sparse signal x can be reconstructed accurately from its lower-dimensional linear measurement vector y = Ax via the OMP or BP algorithm. Thus, for the design of a measurement matrix A, the upper bound on the sparsity k of the reconstructed signal can be increased by decreasing the coherence µ(A), which means an increase in reconstruction accuracy. To reconstruct the original signal with higher accuracy, the coherence µ(A) should therefore be made as small as possible.
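The following short sketch (ours) evaluates Definition 2.1 and the sparsity bound above for an arbitrary matrix; the 255 × 512 Bernoulli matrix used here is only a placeholder.

```python
import numpy as np

def coherence(A):
    """mu(A) from Definition 2.1: the largest normalized inner product
    between two distinct columns of A."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(A.T @ A)                                # |<a_i, a_j>| for all pairs
    np.fill_diagonal(G, 0.0)                           # exclude i == j
    return float(G.max())

rng = np.random.default_rng(0)
B = rng.choice([-1.0, 1.0], size=(255, 512))           # Bernoulli placeholder
mu = coherence(B)
print(f"mu = {mu:.3f}, guaranteed recoverable sparsity k < {0.5 * (1 + 1 / mu):.1f}")
```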

2.2 Finite Field

Definition 2.2 Let β be a primitive element of the finite field GF(q) with q elements; then every element of GF(q) can be expressed as 0 or a power of β, that is, \(GF(q)=\{0, \beta^{0}=1, \beta, \ldots, \beta^{q-2}\}\).

Among \(\{0,1, \beta, \ldots, \beta^{q-2}\}\), the multiplicative group, denoted as GF(q)*, consists of the q − 1 nonzero elements. For convenience of description, \(GF(q)=\{0, \beta^{0}=1, \beta, \ldots, \beta^{q-2}\}\) is simply expressed as {0, 1, ⋅⋅⋅, q − 1}.

Note that in the above definition, the element structure of the finite field depends on the choice of the primitive element. For the finite field GF(q), if another primitive element γ is chosen, we obtain a new element structure for GF(q).

Definition 2.3 Suppose that m and n are two positive integers, where m is a factor of n. The trace function from \(GF(2^n)\) to \(GF(2^m)\), denoted as \(\operatorname{Tr}_{m}^{n}(x)\), is

\(\operatorname{Tr}_{m}^{n}(x)=x+x^{2^{m}}+x^{2^{2 m}}+\cdots+x^{2^{(n / m-1) m}}, \quad x \in G F\left(2^{n}\right).\)       (4)

When m =1, GF(2m) = GF(2) = {0,1} . For describing convenience, \(\operatorname{Tr}_{1}^{n}(x)\) is simply denoted as Tr(x) .
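As a small worked example (our illustration, assuming the primitive polynomial \(g(x)=x^{3}+x+1\)): for n = 3 and m = 1, \(\operatorname{Tr}(x)=x+x^{2}+x^{4}\). Since \(\beta^{3}=\beta+1\) and hence \(\beta^{4}=\beta^{2}+\beta\), we get \(\operatorname{Tr}(\beta)=\beta+\beta^{2}+\left(\beta^{2}+\beta\right)=0\), while \(\operatorname{Tr}(1)=1+1+1=1\); the trace always lands in GF(2).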

Definition 2.4 Let β be a primitive element of the finite field \(GF(2^n)\) whose primitive polynomial over GF(2) of degree n is g(x). All conjugate elements of \(\beta^{i}\) with respect to GF(2) are elements of the set \(\left\{\beta^{i 2^{j}}\right\}_{j=0}^{n-1}\). Let d(i) be the smallest positive integer such that \(\beta^{i 2^{d(i)}}=\beta^{i}\). Then, by standard results on finite fields, \(\left\{\beta^{i 2^{j}}\right\}_{j=d(i)}^{n-1} \subset\left\{\beta^{i 2^{j}}\right\}_{j=0}^{d(i)-1}\). Therefore, d(i) is the number of distinct conjugate elements of \(\beta^{i}\), and all conjugate elements of \(\beta^{i}\) can be expressed as \(\left\{\beta^{i 2^{j}}\right\}_{j=0}^{d(i)-1}\), that is, \(\left\{\beta^{i 2^{0}}=\beta^{i}, \beta^{i 2}, \cdots, \beta^{i 2^{d(i)-1}}\right\}\). The minimal polynomial of \(\beta^{i}\) over GF(2), denoted as \(g_{i}(x)\), is

\(g_{i}(x)=\prod_{j=0}^{d(i)-1}\left(x-\beta^{i 2^{j}}\right).\)       (5)

3. Construction and Implementation of BSFDBM

3.1 Construction of BSFDBM

The proposed BSFDBM matrices are a class of \((2^n-1) \times 2^{n+1}\) deterministic bipolar matrices composed of {1, −1} elements, where n ≥ 3. The concrete construction procedure of BSFDBM matrices is as follows:

Step-1: According to the given signal length \(N=2^{n+1}\), determine whether n is odd or even and then choose the trace representative function (6) or (7) given by [27]. If n is odd, let n = 2l + 1 and choose (6); if n is even, let n = 2l and choose (7), where \(x \in GF(2^n)^{*}\), \(\lambda \in GF(2^n)\).

\(f_{\lambda}(x)=\operatorname{Tr}(\lambda x)+\sum_{i=1}^{l} \operatorname{Tr}\left(x^{1+2^{i}}\right)\)       (6)

\(f_{\lambda}(x)=\operatorname{Tr}(\lambda x)+\sum_{i=1}^{l-1} \operatorname{Tr}\left(x^{1+2^{i}}\right)+\operatorname{Tr}_{1}^{l}\left(x^{1+2^{l}}\right)\)       (7)

Step-2: Select a primitive field element β for \(GF(2^n)\). Let \(b_{t}^{\lambda}=f_{\lambda}\left(\beta^{t}\right)\), where \(t \in\left\{0,1, \cdots, 2^{n}-2\right\}\) and \(\lambda \in GF(2^n)\). The sequence \(\left\{b_{t}^{\lambda}\right\}_{t=0}^{2^{n}-2}=\left\{f_{\lambda}\left(\beta^{t}\right)\right\}_{t=0}^{2^{n}-2}\), denoted as \(\mathbf{b}^{\lambda}\), is a binary pseudo-random sequence of period \(2^{n}-1\). The binary sequence set \(\{\mathbf{b}^{\lambda} \mid \lambda \in GF(2^n)\}\) constitutes the BSF in [27]. By inputting every element of the binary sequence \(\mathbf{b}^{\lambda}=\left\{b_{t}^{\lambda}\right\}_{t=0}^{2^{n}-2}\) into the numeric conversion function (8), we obtain the associated bipolar pseudo-random sequence \(\mathbf{c}^{\lambda}=\left\{c_{t}^{\lambda}\right\}_{t=0}^{2^{n}-2}\).

\(c_{t}^{\lambda}=(-1)^{b_{t}^{\lambda}}=\left\{\begin{array}{l} 1, \quad b_{t}^{\lambda}=0 \\ -1, b_{t}^{\lambda}=1 \end{array}\right.\)       (8)

For a given parameter λ, the bipolar sequence \(\mathbf{c}^{\lambda}=\left\{c_{t}^{\lambda}\right\}_{t=0}^{2^{n}-2}\) is deterministic. By putting together all sequences of \(\{\mathbf{c}^{\lambda} \mid \lambda \in GF(2^n)\}\) as column vectors, we obtain a \((2^n-1) \times 2^{n}\) matrix \(\mathbf{A}_{1}\), which is given by

\(\mathbf{A}_{1}=\left[\begin{array}{cccc} c_{0}^{0} & c_{0}^{1} & \cdots & c_{0}^{2^{n}-1} \\ c_{1}^{0} & c_{1}^{1} & \cdots & c_{1}^{2^{n}-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_{2^{n}-2}^{0} & c_{2^{n}-2}^{1} & \cdots & c_{2^{n}-2}^{2^{n}-1} \end{array}\right]\)       (9)

Step-3: Select another primitive field element γ for \(GF(2^n)\). Let \(d_{t}^{\lambda}=f_{\lambda}\left(\gamma^{t}\right)\), where \(t \in\left\{0,1, \cdots, 2^{n}-2\right\}\) and \(\lambda \in GF(2^n)\). Repeat the process of Step-2 to obtain the corresponding bipolar sequence family \(\left\{\mathbf{h}^{\lambda}=\left\{h_{t}^{\lambda}\right\}_{t=0}^{2^{n}-2} \mid \lambda \in G F\left(2^{n}\right)\right\}\) and the corresponding matrix \(\mathbf{A}_{2} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\). The matrix \(\mathbf{A}_{2}\) has the following form

\(\mathbf{A}_{2}=\left[\begin{array}{cccc} h_{0}^{0} & h_{0}^{1} & \cdots & h_{0}^{2^{n}-1} \\ h_{1}^{0} & h_{1}^{1} & \cdots & h_{1}^{2^{n}-1} \\ \vdots & \vdots & \ddots & \vdots \\ h_{2^{n}-2}^{0} & h_{2^{n}-2}^{1} & \cdots & h_{2^{n}-2}^{2^{n}-1} \end{array}\right]\)        (10)

Step-4: Concatenate the above two matrices \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) and \(\mathbf{A}_{2} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) in column extension form to obtain the proposed BSFDBM matrix A of size \((2^n-1) \times 2^{n+1}\). It has the following form

\(\mathbf{A}=\left[\begin{array}{ll} \mathbf{A}_{1} & \mathbf{A}_{2} \end{array}\right]\\ =\left[\begin{array}{cccc|cccc} c_{0}^{0} & c_{0}^{1} & \cdots & c_{0}^{2^{n}-1} & h_{0}^{0} & h_{0}^{1} & \cdots & h_{0}^{2^{n}-1} \\ c_{1}^{0} & c_{1}^{1} & \cdots & c_{1}^{2^{n}-1} & h_{1}^{0} & h_{1}^{1} & \cdots & h_{1}^{2^{n}-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{2^{n}-2}^{0} & c_{2^{n}-2}^{1} & \cdots & c_{2^{n}-2}^{2^{n}-1} & h_{2^{n}-2}^{0} & h_{2^{n}-2}^{1} & \cdots & h_{2^{n}-2}^{2^{n}-1} \end{array}\right]\)       (11)

From the construction, it is seen that the BSFDBM matrices have sampling rate \((2^n-1)/2^{n+1} \approx 0.5\). The BSFDBM matrix \(\mathbf{A} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) is composed of two submatrices \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) and \(\mathbf{A}_{2} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\), each of which corresponds to a bipolar sequence family. The two bipolar sequence families have similar generation processes, differing only in the choice of the primitive field element. Without loss of generality, in the following section, we present the implementation of the bipolar sequence family corresponding to \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\).
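To make Steps 1-4 concrete, the following self-contained Python sketch (our illustration, not the authors' code) builds a small BSFDBM matrix for odd n = 3, so l = 1 and \(f_{\lambda}(x)=\operatorname{Tr}(\lambda x)+\operatorname{Tr}\left(x^{3}\right)\). The primitive polynomial \(x^{3}+x+1\) and the second primitive element \(\gamma=\beta^{3}\) are our own choices for the demo.

```python
import numpy as np

N_EXP = 3                       # n (odd), so l = 1
PRIM_POLY = 0b1011              # g(x) = x^3 + x + 1, primitive over GF(2)
FIELD_SIZE = 1 << N_EXP         # 2^n

def gf_mul(a, b):
    """Multiply two GF(2^n) elements (bits = polynomial coefficients) modulo g(x)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & FIELD_SIZE:      # degree reached n: reduce by the primitive polynomial
            a ^= PRIM_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def trace(x):
    """Tr: GF(2^n) -> GF(2), Tr(x) = x + x^2 + ... + x^(2^(n-1)); the result is 0 or 1."""
    s, t = 0, x
    for _ in range(N_EXP):
        s ^= t
        t = gf_mul(t, t)
    return s

def f(lam, x):
    """Trace representative function (6) for odd n = 2l + 1."""
    val = trace(gf_mul(lam, x))
    for i in range(1, (N_EXP - 1) // 2 + 1):
        val ^= trace(gf_pow(x, 1 + (1 << i)))
    return val

def submatrix(alpha):
    """(2^n - 1) x 2^n bipolar matrix with entry (t, lambda) = (-1)^{f_lambda(alpha^t)}, as in (8)-(9)."""
    A = np.empty((FIELD_SIZE - 1, FIELD_SIZE), dtype=int)
    for lam in range(FIELD_SIZE):
        for t in range(FIELD_SIZE - 1):
            A[t, lam] = 1 if f(lam, gf_pow(alpha, t)) == 0 else -1
    return A

beta = 0b10                     # the element "x" is primitive since g(x) is primitive
gamma = gf_pow(beta, 3)         # a second primitive element (demo choice: beta^3)
A = np.hstack([submatrix(beta), submatrix(gamma)])   # Step-4: the (2^n - 1) x 2^(n+1) BSFDBM matrix
print(A.shape)                  # (7, 16)
```

Scaling up only requires changing N_EXP and PRIM_POLY (e.g., a degree-8 primitive polynomial for the 255 × 512 matrices of Section 5) and a correspondingly chosen second primitive element.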

Remark 1: For the BSFDBM matrix, the row size is the period of the associated binary sequence, and the column size is twice the family size of the associated BSF. For the BSFDBM matrix, some columns can be discarded to vary the sampling rate.

3.2 LFSR Implementation of BSFDBM

In this section, we present the implementation of the sequences in the bipolar sequence family corresponding to \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\).

For odd n = 2l + 1, any column vector of \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) can be obtained by first adding (l + 1) m-sequences and then converting the resulting sum sequence using the numeric conversion in (8). Hence, \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) is easy to implement by summing LFSR outputs and using a numeric converter.

For odd n = 2l + 1, (l + 1) n-stage LFSRs are required to implement the sequences in the BSF corresponding to \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\). The (l + 1) LFSRs have different characteristic polynomials for generating cyclically distinct m-sequences. Let the finite field \(GF(2^n)\) be generated by a primitive element β satisfying the primitive polynomial g(x) over GF(2) of degree n. For 1 ≤ i ≤ l, compute \(g_{i}(x)\), the minimal polynomial of \(\beta^{1+2^{i}}\) over GF(2). For LFSR 0, the characteristic polynomial is set to g(x) and the initial state can be arbitrary, including 0. For LFSR i with 1 ≤ i ≤ l, the characteristic polynomial is set to \(g_{i}(x)\) and the initial state is given by \(\left\{\operatorname{Tr}\left(\beta^{\left(1+2^{i}\right) j}\right)\right\}_{j=0}^{n-1}\), which is fixed. According to the different initial states of n-stage LFSR 0, \(2^{n}\) cyclically distinct binary sequences are generated to constitute the required BSF. The related LFSR implementation of \(\mathbf{A}_{1}\) is shown in Fig. 1.


Fig. 1. LFSR implementation of the corresponding matrix \(\mathbf{A}_{1}\)

For even n = 2l , the BSF corresponding to \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) can be implemented similarly to that case of odd n , except that the size of LFSR l is n/2.

From above implementation, it can be seen that the proposed BSFDBM matrices are very efficient for hardware realization, which is extremely easy via LFSR structures.
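For concreteness, here is a minimal software sketch (our illustration, not the authors' hardware design) of the structure in Fig. 1 for odd n = 3, l = 1: two 3-stage LFSRs are run, their outputs are XOR-summed, and the numeric conversion (8) yields a bipolar column sequence. The characteristic polynomials \(x^{3}+x+1\) (LFSR 0) and \(x^{3}+x^{2}+1\) (LFSR 1) and the initial states used here are choices made for this demo.

```python
def lfsr(coeff_exponents, state, length):
    """Run an n-stage LFSR with recurrence s[k+n] = XOR_{e} s[k+e].

    coeff_exponents lists the exponents e < n whose coefficients in the
    characteristic polynomial x^n + sum_e x^e equal 1; state holds
    (s[0], ..., s[n-1]).
    """
    state = list(state)
    out = []
    for _ in range(length):
        out.append(state[0])
        feedback = 0
        for e in coeff_exponents:
            feedback ^= state[e]
        state = state[1:] + [feedback]
    return out

n = 3
period = 2 ** n - 1
seq0 = lfsr([1, 0], [0, 0, 1], period)   # LFSR 0: g(x) = x^3 + x + 1, demo initial state
seq1 = lfsr([2, 0], [1, 1, 1], period)   # LFSR 1: g_1(x) = x^3 + x^2 + 1, demo initial state
summed = [a ^ b for a, b in zip(seq0, seq1)]       # binary column sequence
bipolar = [1 if b == 0 else -1 for b in summed]    # numeric conversion (8)
print(bipolar)
```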

4. Performance Analysis

4.1 Coherence Analysis

Coherence is an important criterion to describe the property of matrices. For CS matrices, decreasing the coherence leads to an increase in reconstruction accuracy. This section first gives the coherence of the proposed BSFDBM matrices and then shows that the coherence of BSFDBM matrices is smaller than that of the corresponding Gaussian and Bernoulli random matrices.

In order to analyze the coherence of the BSFDBM matrices, we first introduce the following two definitions and one lemma [27].

Definition 4.1 For two binary sequences \(\mathbf{a}=\left(a_{0}, a_{1}, \ldots, a_{v-1}\right)\) and \(\mathbf{b}=\left(b_{0}, b_{1}, \ldots, b_{v-1}\right)\) of period v, the cross-correlation of a and b is defined as \(C_{\mathbf{a}, \mathbf{b}}(\tau)=\sum_{i=0}^{v-1}(-1)^{a_{i}+b_{i+\tau}}\) for 0 ≤ τ ≤ v − 1, where i + τ is computed modulo v. If a and b are cyclically equivalent, \(C_{\mathbf{a}, \mathbf{b}}(\tau)\) is the auto-correlation of a or b.

Definition 4.2 Let \(S=\left\{\mathbf{s}^{(0)}, \mathbf{s}^{(1)}, \ldots, \mathbf{s}^{(r-1)}\right\}\) be a set of r cyclically distinct binary sequences of period v. Define \(C_{\max }=\max \left|C_{\mathbf{s}^{(i)}, \mathbf{s}^{(j)}}(\tau)\right|\) for 0 ≤ τ ≤ v − 1 and 0 ≤ i, j ≤ r − 1, where τ ≠ 0 if i = j. Obviously, \(C_{\max }\) is the maximum value among all auto- and cross-correlations of the sequences in S. \(C_{\max }\) is also called the maximum correlation magnitude of S.

Lemma 4.1 For odd n, the cross-correlation of any two binary sequences a and b in the BSF given by (6) is \(C_{\mathbf{a}, \mathbf{b}}(\tau) \in\left\{-1,-1 \pm 2^{(n+1) / 2}\right\}\) and the maximum correlation \(C_{\max }\) is \(1+2^{(n+1) / 2}\).

For even n, the cross-correlation of any two binary sequences a and b in the BSF given by (7) is \(C_{\mathbf{a}, \mathbf{b}}(\tau) \in\left\{-1,-1 \pm 2^{n / 2},-1 \pm 2^{n / 2+1}\right\}\) and the maximum correlation \(C_{\max }\) is \(1+2^{n / 2+1}\).

Theorem 4.1 Let A be a \((2^n-1) \times 2^{n+1}\) (n ≥ 3) BSFDBM matrix. If n is odd, \(\mu(\mathbf{A})=\frac{1+2^{(n+1) / 2}}{2^{n}-1}\); if n is even, \(\mu(\mathbf{A})=\frac{1+2^{n / 2+1}}{2^{n}-1}\).

Proof: For matrix \(\mathbf{A} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\), we have

\(\mu(\mathbf{A})=\max _{1 \leq i \neq j \leq 2^{n+1}} \frac{\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|}{\left\|\mathbf{A}^{i}\right\|_{2} \cdot\left\|\mathbf{A}^{j}\right\|_{2}}\)       (12)

where \(\mathbf{A}^{i}\) is the i-th column of A. Note that \(\mathbf{A}^{i}\) and \(\mathbf{A}^{j}\) are bipolar sequences of length \(2^{n}-1\). Thus

\(\left\|\mathbf{A}^{i}\right\|_{2}=\left\|\mathbf{A}^{j}\right\|_{2}=\left(2^{n}-1\right)^{1 / 2}\)       (13)

As seen in Section 3, the BSFDBM matrix \(\mathbf{A}=\left[\mathbf{A}_{1} \mid \mathbf{A}_{2}\right]\) is composed of two submatrices \(\mathbf{A}_{1} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) and \(\mathbf{A}_{2} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\), each of which corresponds to a primitive field element.

For s = 1, 2, \(\mathbf{A}_{s} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n}}\) has a BSF \(\{\mathbf{b}^{\lambda}(s) \mid \lambda \in GF(2^n)\}\) and an associated bipolar sequence family \(\{\mathbf{c}^{\lambda}(s) \mid \lambda \in GF(2^n)\}\). The i-th column of \(\mathbf{A}_{s}\), denoted as \(\mathbf{A}_{s}^{i}\), is the bipolar sequence \(\mathbf{c}^{i}(s)\) in \(\{\mathbf{c}^{\lambda}(s) \mid \lambda \in GF(2^n)\}\).

Depending on i and j, the calculation of \(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|\) can be classified into two cases.

Case 1: \(1 \leq i, j \leq 2^{n}\) or \(2^{n}+1 \leq i, j \leq 2^{n+1}\).

In this case, \(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|=\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{1}^{j}\right\rangle\right|=\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{2}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|.\)

Assume that \(\mathbf{b}^{i}(s)=\left\{b_{t}^{i}(s)\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{b}^{j}(s)=\left\{b_{t}^{j}(s)\right\}_{t=0}^{2^{n}-2}\) are any two binary sequences in \(\{\mathbf{b}^{\lambda}(s) \mid \lambda \in GF(2^n)\}\). From (8), the associated two bipolar sequences \(\mathbf{c}^{i}(s)=\left\{c_{t}^{i}(s)\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{c}^{j}(s)=\left\{c_{t}^{j}(s)\right\}_{t=0}^{2^{n}-2}\) are obtained.

We have \(\left\langle\mathbf{A}_{s}^{i}, \mathbf{A}_{s}^{j}\right\rangle=\left\langle\mathbf{c}^{i}(s), \mathbf{c}^{j}(s)\right\rangle=\sum_{t=0}^{2^{n}-2} c_{t}^{i}(s) c_{t}^{j}(s)=\sum_{t=0}^{2^{n}-2}(-1)^{b_{t}^{i}(s)+b_{t}^{j}(s)}=C_{\mathbf{b}^{i}(s), \mathbf{b}^{j}(s)}(0)\) for s = 1, 2.

Based on Lemma 4.1, we derive that if n is odd,

\(\begin{aligned} \max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{1}^{j}\right\rangle\right| &=\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{2}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right| \\ &=\max _{1 \leq i \neq j \leq 2^{n}}\left|C_{\mathbf{b}^{i}(2), \mathbf{b}^{j}(2)}(0)\right| \\ &=\max \left\{|-1|,\left|-1 \pm 2^{(n+1) / 2}\right|\right\} \\ &=1+2^{(n+1) / 2} \end{aligned}\)       (14)

By a derivation similar to that for odd n, for even n we have

\(\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{1}^{j}\right\rangle\right|=\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{2}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|=1+2^{n / 2+1}\)       (15)

Case 2: \(1 \leq i \leq 2^{n}\) and \(2^{n}+1 \leq j \leq 2^{n+1}\).

In this case, \(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|=\max _{1 \leq i, j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|\).

Assume that \(\mathbf{b}^{i}(1)=\left\{b_{t}^{i}(1)\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{b}^{j}(2)=\left\{b_{t}^{j}(2)\right\}_{t=0}^{2^{n}-2}\) are any binary sequences of \(\{\mathbf{b}^{\lambda}(1) \mid \lambda \in GF(2^n)\}\) and \(\{\mathbf{b}^{\lambda}(2) \mid \lambda \in GF(2^n)\}\), respectively. From (8), the associated two bipolar sequences \(\mathbf{c}^{i}(1)=\left\{c_{t}^{i}(1)\right\}_{t=0}^{2^{n}-2}\) and \(\mathbf{c}^{j}(2)=\left\{c_{t}^{j}(2)\right\}_{t=0}^{2^{n}-2}\) are obtained. We have \(\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{2}^{j}\right\rangle=\left\langle\mathbf{c}^{i}(1), \mathbf{c}^{j}(2)\right\rangle=\sum_{t=0}^{2^{n}-2} c_{t}^{i}(1) c_{t}^{j}(2)=\sum_{t=0}^{2^{n}-2}(-1)^{b_{t}^{i}(1)+b_{t}^{j}(2)}=C_{\mathbf{b}^{i}(1), \mathbf{b}^{j}(2)}(0)\).

The sequence families \(\{\mathbf{c}^{\lambda}(1) \mid \lambda \in GF(2^n)\}\) and \(\{\mathbf{c}^{\lambda}(2) \mid \lambda \in GF(2^n)\}\) have similar generation processes, differing only in the choice of the primitive field element. By using results from finite fields, the two bipolar sequence families \(\{\mathbf{c}^{\lambda}(1) \mid \lambda \in GF(2^n)\}\) and \(\{\mathbf{c}^{\lambda}(2) \mid \lambda \in GF(2^n)\}\) are cyclically equivalent. Correspondingly, \(\{\mathbf{b}^{\lambda}(1) \mid \lambda \in GF(2^n)\}\) and \(\{\mathbf{b}^{\lambda}(2) \mid \lambda \in GF(2^n)\}\) are cyclically equivalent. Therefore, for \(\mathbf{b}^{j}(2) \in\{\mathbf{b}^{\lambda}(2) \mid \lambda \in GF(2^n)\}\), there must exist a corresponding cyclically equivalent sequence \(\mathbf{b}^{l}(1) \in\{\mathbf{b}^{\lambda}(1) \mid \lambda \in GF(2^n)\}\). Thus, there exists an integer τ such that \(C_{\mathbf{b}^{i}(1), \mathbf{b}^{j}(2)}(0)=C_{\mathbf{b}^{i}(1), \mathbf{b}^{l}(1)}(\tau)\) holds. According to Definition 4.1, τ denotes a cyclic phase shift of the sequence. Based on Lemma 4.1, we derive that if n is odd,

\(\max _{1 \leq i, j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|=\max _{1 \leq i, j \leq 2^{n}}\left|C_{\mathbf{b}^{i}(1), \mathbf{b}^{j}(2)}(0)\right|=\max _{1 \leq i, j \leq 2^{n}}\left|C_{\mathbf{b}^{i}(1), \mathbf{b}^{l}(1)}(\tau)\right|=1+2^{(n+1) / 2}.\)       (16)

By a similar derivation for even n, we have \(\max _{1 \leq i, j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|=1+2^{n / 2+1}.\) Combining Cases 1 and 2, we conclude that if n is odd,

\(\begin{aligned} & \max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right| \\ =& \max \left\{\max _{1 \leq i \neq j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{1}^{j}\right\rangle\right|, \max _{1 \leq i, j \leq 2^{n}}\left|\left\langle\mathbf{A}_{1}^{i}, \mathbf{A}_{2}^{j}\right\rangle\right|\right\} \\ =& \max \left\{1+2^{(n+1) / 2}, 1+2^{(n+1) / 2}\right\} \\ =& 1+2^{(n+1) / 2} \end{aligned}\)       (17)

Similarly, if n is even, \(\max _{1 \leq i \neq j \leq 2^{n+1}}\left|\left\langle\mathbf{A}^{i}, \mathbf{A}^{j}\right\rangle\right|=1+2^{n / 2+1}\).

Theorem 4.1 is proved after substituting the above conclusions and (13) into (12).
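As a numerical illustration (our computation from Theorem 4.1): for odd n = 7, the 127 × 256 BSFDBM matrix has \(\mu(\mathbf{A})=\left(1+2^{4}\right) /\left(2^{7}-1\right)=17 / 127 \approx 0.134\), and for even n = 8, the 255 × 512 matrix has \(\mu(\mathbf{A})=\left(1+2^{5}\right) /\left(2^{8}-1\right)=33 / 255 \approx 0.129\); these are exactly the two matrix sizes used in the simulations of Section 5.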

To compare the coherence of the BSFDBM matrices with the Gaussian and Bernoulli matrices, we need the following two lemmas [31].

Lemma 4.2 Let \(\left\{x_{i}\right\}_{i=1}^{p}\) and \(\left\{y_{i}\right\}_{i=1}^{p}\) be sequences of i.i.d. zero-mean Gaussian random variables with variance σ2.  Then \(\operatorname{Pr}\left(\left|\sum_{i=1}^{p} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left(-\frac{t^{2}}{4 \sigma^{2}\left(p \sigma^{2}+t / 2\right)}\right)\).

Lemma 4.3 Let \(\left\{x_{i}\right\}_{i=1}^{p}\) and \(\left\{y_{i}\right\}_{i=1}^{p}\) be sequences of i.i.d. zero-mean bounded random variables which satisfy \(\left|x_{i}\right| \leq a\) and \(\left|x_{i} y_{i}\right| \leq a^{2}\). Then \(\operatorname{Pr}\left(\left|\sum_{i=1}^{p} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left(-\frac{t^{2}}{2 p a^{4}}\right).\)

Theorem 4.2 For a \((2^n-1) \times 2^{n+1}\) (n ≥ 3) BSFDBM matrix A and its Gaussian counterpart B, µ(A) < µ(B) holds.

Proof: Let \(\mathbf{b}_{i}\) be the i-th column vector of the matrix \(\mathbf{B} \in \mathbf{R}^{\left(2^{n}-1\right) \times 2^{n+1}}\) for \(1 \leq i \leq 2^{n+1}\).

Without loss of generality, we prove the theorem in the case of even n.

Suppose that \(\left\{x_{i}\right\}_{i=1}^{2^{n}-1}\) and \(\left\{y_{i}\right\}_{i=1}^{2^{n}-1}\) are any two column vectors of matrix B. Based on Lemma 4.2 with \(p=2^{n}-1\), \(t>\frac{1+2^{n / 2+1}}{2^{n}-1}\) and \(\sigma^{2}=\frac{1}{2^{n}-1}\), we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right) \leq 2 \exp \left\{-\frac{\left(2^{n}-1\right) t^{2}}{4+2 t}\right\}.\)       (18)

Let \(z(n, t)=2 \exp \left\{-\frac{\left(2^{n}-1\right) t^{2}}{4+2 t}\right\}\). It is easy to see that z(n, t) increases as n decreases. Thus, we have z(n, t) ≤ z(4, t). We can further derive that

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right) \leq z(4, t)=2 \exp \left\{-\frac{15 t^{2}}{4+2 t}\right\}.\)       (19)

We observe that z(4, t) increases as t decreases. Thus, we have \(z(4, t)<z\left(4, \frac{1+2^{n / 2+1}}{2^{n}-1}\right)\). Let \(z_{1}(n)=z\left(4, \frac{1+2^{n / 2+1}}{2^{n}-1}\right)\). Obviously, \(z_{1}(n)\) increases as n decreases.

Thus, we have z(4,t) < z1(n) < z1(4) ≈ 2exp(-1.0385). Further, we have

\(\operatorname{Pr}\left(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right)<z_{1}(4) \approx 2 \exp (-1.0385) \approx 0.708.\)       (20)

According to Definition 2.1, \(\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right|\) can characterize the coherence µ(B) of matrix B. Let \(S=\left\{\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{2^{n+1}}\right\}\). We have

\(\mu(\mathbf{B})=\max _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left\{\sum_{i=1}^{2^{n}-1} x_{i} y_{i} \mid\left\{x_{i}\right\} \subset S,\left\{y_{i}\right\} \subset S \backslash\left\{x_{i}\right\}\right\}.\)       (21)

Further, we have

\(\operatorname{Pr}\left(\min _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \geq t\right)<0.708^{\frac{|S|(|S|-1)}{2}}=0.708^{2^{n}\left(2^{n+1}-1\right)}.\)       (22)

Let \(\delta_{b}(n)=0.708^{2^{n}\left(2^{n+1}-1\right)}\) with n ≥ 4. Obviously, we can further derive that

\(\operatorname{Pr}\left(\min _{\left\{x_{i}\right\},\left\{y_{i}\right\}}\left|\sum_{i=1}^{2^{n}-1} x_{i} y_{i}\right| \leq t\right) \geq 1-\delta_{b}(n) \approx 1.\)       (23)

Hence, \(\mu(\mathbf{B})=\max _{\left\{x_{i}\right\},\left\{y_{i}\right\}} \sum_{i=1}^{2^{n}-1} x_{i} y_{i} \geq t\). By \(t>\frac{1+2^{n / 2+1}}{2^{n}-1}\) and \(\mu(\mathbf{A})=\frac{1+2^{n / 2+1}}{2^{n}-1}\), we have µ(B) > µ(A).

Similarly, we can obtain the same conclusion in the case of odd n. Thus, we complete the proof of Theorem 4.2.

By a derivation similar to that of Theorem 4.2, we can obtain the following corollary based on Lemma 4.3.

Corollary 4.1 For a \((2^n-1) \times 2^{n+1}\) (n ≥ 3) BSFDBM matrix A and its Bernoulli counterpart D with elements {1, −1}, µ(A) < µ(D) holds.

Remark 2: Theorem 4.2 and Corollary 4.1 can demonstrate that the BSFDBM matrices outperform their Gaussian and Bernoulli counterparts in reconstruction accuracy.

Based on the above work, a novel framework has been presented for constructing bipolar measurement matrices via BSF. Furthermore, through the coherence analysis, we have shown that the BSFDBM matrices are proper candidates for CS matrices, just as the Gaussian and Bernoulli matrices are. In fact, by this framework, more bipolar matrices can be explored from many other binary sequence families in Table 1.

Table 1. Comparison of different families of binary sequences


Remark 3: We can replace the trace representative functions (6) and (7) by functions (24) and (25) given in [27], respectively, where \(x \in GF(2^n)^{*}\), \(\lambda_{0}, \lambda_{1} \in GF(2^n)\).

\(f_{\lambda_{0}, \lambda_{1}}(x)=\operatorname{Tr}\left(\lambda_{0} x\right)+\operatorname{Tr}\left(\lambda_{1} x^{3}\right)+\sum_{i=2}^{l} \operatorname{Tr}\left(x^{1+2^{i}}\right)\)       (24)

\(f_{\lambda_{0}, \lambda_{1}}(x)=\operatorname{Tr}\left(\lambda_{0} x\right)+\operatorname{Tr}\left(\lambda_{1} x^{3}\right)+\sum_{i=2}^{l-1} \operatorname{Tr}\left(x^{1+2^{i}}\right)+\operatorname{Tr}_{1}^{l}\left(x^{1+2^{l}}\right)\)       (25)

The BSFDBM matrix A then becomes a new bipolar measurement matrix of size \((2^n-1) \times 2^{n+1}\), where n ≥ 5. If n is odd, it has coherence \(\mu(\mathbf{A})=\frac{1+2^{(n+3) / 2}}{2^{n}-1}\); if n is even, \(\mu(\mathbf{A})=\frac{1+2^{n / 2+2}}{2^{n}-1}\). Without giving complete details, we point out that this kind of bipolar matrix can also serve as a CS matrix.

Remark 4: For the binary sequence families in Table 1, we can apply our method to these sequence families and obtain a large family of bipolar CS matrices. The minimum sampling rate equals the period of the binary sequence divided by twice the family size.

4.2 Benefit of BSFDBM

Based on the above coherence analysis, the low coherence of BSFDBM ensures its good sensing performance. However, for a practical CS matrix, we should also consider other practical features, including memory cost, computational complexity and hardware realization. Here, we analyze and compare the practical features of BSFDBM with its counterparts (Gaussian and Bernoulli random matrices, deterministic binary matrices [18] and CsPM [9]) in Table 2. Among them, the binary matrix [18] is obtained by using the idea from bipartite graphs with column degree \(d=\operatorname{ceil}(\sqrt{M})\). CsPM is the Chebyshev chaotic bipolar matrix [9]. Note that the operator 'ceil' rounds the elements to the nearest integers towards infinity. For a fair comparison, let M × N be the matrix size and B be the number of bits required to store every decimal element.

Table 2. The comparison of practical feature of BSFDBM


Table 2 shows that the BSFDBM has the following practical advantages:

(1) Low memory cost: The BSFDBM consists of elements +1 and -1. Therefore, BSFDBM requires MN bits to store all elements. Compared with its random Gaussian counterpart, BSFDBM reduces the memory requirement, thus providing storage efficiency (a worked example follows this list). This feature makes BSFDBM beneficial to practical resource-limited CS applications, such as wireless body networks;

(2) Low computational complexity: Since the proposed BSFDBM matrix is bipolar, it supports multiplier-less operation and fast data acquisition and recovery. For data acquisition and recovery, the arithmetic operations of BSFDBM are addition and subtraction, whereas the random Gaussian construction demands addition, subtraction and multiplication.

(3) Hardware-friendly realization: As seen in Section 3.2, the implementation of BSFDBM is extremely easy by means of LFSR structures, thus providing hardware-friendly realization. However, in the random constructions (Gaussian and Bernoulli), random number generation has very high hardware requirements, which is not hardware-friendly.
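As a rough worked example of item (1) (our own, assuming B = 64 bits of floating-point storage per Gaussian entry): for the 255 × 512 size used in Section 5, a Gaussian matrix needs 255 · 512 · 64 ≈ 8.36 × 10^6 bits (about 1 MB), whereas the bipolar BSFDBM needs only 255 · 512 ≈ 1.31 × 10^5 bits (about 16 KB), i.e., a B-fold (here 64-fold) reduction.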

From the above analysis, it can be clearly seen that, compared to the random constructions (Gaussian and Bernoulli), the proposed BSFDBM offers a good tradeoff among sensing performance, memory cost, computational complexity and hardware realization.

Remark 5: From Table 2, we can see that the proposed BSFDBM has practical features (memory cost, computational complexity and hardware realization) comparable to those of the binary matrices [18] and CsPM [9]. In the following section, we will compare the sensing performance of the proposed BSFDBM matrices with that of the binary matrices [18] and CsPM [9] via numerical simulations.

5. Simulation and Results

In this section, the performance of the proposed BSFDBM matrices is investigated through numerical simulations with sparse signals and images. Here, the compared matrices are Gaussian random, Bernoulli random, deterministic binary matrices [18] and CsPM [9]. For the Gaussian matrix, the value of every element is i.i.d. from the standard normal distribution N(0,1). For the Bernoulli matrix, the value of every element is -1 or 1 with equal probability. For signal recovery, the OMP algorithm is used to profit from the lower coherence of the CS matrices.

5.1 BSFDBM for Sparse Signals

The reconstruction accuracy of BSFDBM matrices is compared with that of the corresponding Gaussian, Bernoulli, binary and CsPM matrices in noiseless and noisy scenarios. Two types of BSFDBM matrices of size \((2^n-1) \times 2^{n+1}\) are generated: (i) BSFDBM matrices of size 255×512 for even n (n = 8); (ii) BSFDBM matrices of size 127×256 for odd n (n = 7).

In the simulation, k-sparse \(2^{n+1} \times 1\) original signals x are considered, with the k nonzero locations chosen uniformly at random and the corresponding k nonzero values drawn from N(0,1). For each sparsity level k, 1000 trials are averaged to obtain the corresponding numerical result. Let \(\mathbf{x}_{R}\) be the reconstructed solution from OMP. In the noiseless scenario, if \(\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}<10^{-6}\), the reconstruction trial is declared successful. The percentage of successful reconstruction trials is taken as the successful reconstruction probability. In the noisy scenario, additive Gaussian noise e is added to the signal x, where the signal-to-noise ratio (SNR) is 30 dB. The reconstruction SNR is defined as \(S N R(\mathbf{x})=20 \cdot \log _{10}\left(\|\mathbf{x}\|_{2} /\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}\right) \mathrm{dB}\).
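The protocol above can be sketched in a few lines (our illustration, not the authors' script); the Bernoulli matrix below is a placeholder for the 255 × 512 BSFDBM matrix of Section 3, and scikit-learn's OrthogonalMatchingPursuit is assumed as the OMP solver.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def success_probability(A, k, trials=200, seed=1):
    """Fraction of trials in which OMP recovers a k-sparse signal from y = A x."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    successes = 0
    for _ in range(trials):
        x = np.zeros(N)
        support = rng.choice(N, size=k, replace=False)   # k random nonzero locations
        x[support] = rng.standard_normal(k)              # nonzero values from N(0, 1)
        y = A @ x
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(A, y)
        if np.linalg.norm(x - omp.coef_) < 1e-6:         # success criterion of the text
            successes += 1
    return successes / trials

rng = np.random.default_rng(0)
A = rng.choice([-1.0, 1.0], size=(255, 512))             # placeholder measurement matrix
print(success_probability(A, k=40))
```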

Noiseless scenario: For matrices of size 255×512, Fig. 2(a) shows the probability of successful reconstruction of k-sparse 512×1 signals, where 40 ≤ k ≤ 138. For matrices of size 127×256, Fig. 2(b) shows the probability of successful reconstruction of k-sparse 256×1 signals, where 15 ≤ k ≤ 85.


Fig. 2. Comparison of the successful reconstruction probability of noiseless sparse signals.

(a) Matrices of size 255×512, (b) Matrices of size 127×256

It can be seen from Fig. 2 that the reconstruction accuracy of BSFDBM matrix is superior to the Gaussian, Bernoulli, binary and CsPM matrices.

Noisy scenario: For matrices of size 255×512, Fig. 3(a) shows the reconstruction SNR of k-sparse 512×1 signals, where 40 ≤ k ≤ 138. For matrices of size 127×256, Fig. 3(b) shows the reconstruction SNR of k-sparse 256×1 signals, where 15 ≤ k ≤ 85.


Fig. 3. Comparison of the reconstruction SNR of noisy sparse signals.

(a) Matrices of size 255×512, (b) Matrices of size 127×256

It can be seen from Fig. 3 that, for all sparsity levels, the BSFDBM matrix achieves a higher reconstruction SNR than its Gaussian, Bernoulli, binary and CsPM counterparts. This illustrates that the BSFDBM matrix is more robust to noise than the other constructions considered.

From the above two scenarios, it is observed that, in both noiseless and noisy settings, the proposed BSFDBM matrices provide better reconstruction accuracy than the corresponding Gaussian, Bernoulli, binary and CsPM matrices.

5.2 BSFDBM for Image Signals

In this part, we compare the image reconstruction performance of BSFDBM matrices with that of the corresponding Gaussian, Bernoulli, binary and CsPM matrices via the block CS algorithm. As shown in Fig. 4, the test images consist of five grayscale images and five color images. The five grayscale test images are “boat” of size 256×256, “fruits” of size 256×256, “liftingbody” of size 512×512, “phantom” of size 256×256 and “cameraman” of size 256×256, whereas the five color test images are “concordaerial” of size 2036×3060×3, “bone1” of size 837×1242×3, “bone2” of size 692×631×3, “lighthouse” of size 640×480×3 and “Saturn” of size 1500×1200×3. The sparsifying basis for these test images is the Daubechies 9/7 discrete wavelet transform (DWT). Considering the feature of the Daubechies 9/7 DWT and the tradeoff between reconstruction time and accuracy, the block sizes 32×16 and 32×32 are selected, each of which corresponds to one type of BSFDBM matrix. To characterize the image reconstruction performance, the peak signal-to-noise ratio (PSNR) is used as the evaluation criterion. For a two-dimensional image signal x of size m×n with \(\mathbf{x}_{R}\) being the reconstructed signal, the PSNR is defined as \(\operatorname{PSNR}(\mathbf{x})=10 \cdot \log _{10}\left(255^{2} /\left(\left\|\mathbf{x}-\mathbf{x}_{R}\right\|_{2}^{2} / m / n\right)\right) \mathrm{dB}\).


Fig. 4. Test images. (a) Boat, (b) Fruits, (c) Liftingbody, (d) Phantom, (e) Cameraman, (f) Concordaerial, (g) Bone1, (h) Bone2, (i) Lighthouse, (j) Saturn

Note that for a three-dimensional color image x, we first convert the signal x to a two-dimensional grayscale signal \(\mathbf{x}^{F}\) by concatenating the R, G, B components in order in column extension form. The resulting signal \(\mathbf{x}^{F}\) and the corresponding reconstructed signal \(\mathbf{x}_{R}^{F}\) are used to calculate PSNR(\(\mathbf{x}^{F}\)).
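A small helper pair (our sketch) for the PSNR criterion and the color flattening just described; the function names are our own.

```python
import numpy as np

def flatten_color(img):
    """Concatenate the R, G, B planes of an (m, n, 3) image side by side (column extension)."""
    return np.hstack([img[:, :, c] for c in range(img.shape[2])])

def psnr(x, x_rec):
    """PSNR(x) = 10*log10(255^2 / MSE) in dB for 8-bit images."""
    mse = np.mean((np.asarray(x, float) - np.asarray(x_rec, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```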

Tables 3 and 4 show the reconstruction PSNR of various test images with block sizes 32×16 and 32×32, respectively.

Table 3. The reconstruction PSNR (dB) of various test images with block size 32×16


Table 4. The reconstruction PSNR (dB) of various test images with block size 32×32


From Tables 3 and 4, it can be seen that for all grayscale and color images, the BSFDBM matrix has the highest reconstruction PSNR among the five matrices. Moreover, the reconstruction PSNR increases as the block size increases.

In the following, we further show the visual results via the “bone2” image reconstruction. Fig. 5 shows the reconstructions with block size 32×16.


Fig. 5. Reconstructions with block size 32×16. (a) BSFDBM, (b) Gaussian, (c) Bernoulli, (d) binary, (e) CsPM

From Fig. 5, it can be seen that the BSFDBM matrix provides competitive visualization performance when compared to the Gaussian, Bernoulli, binary and CsPM matrices.

5.3 Computational Complexity of BSFDBM

Table 2 shows that multiplier-less operation is supported by the BSFDBM, Bernoulli, binary and CsPM matrices, whereas the Gaussian matrix does not support multiplier-less operation. To compare the computational complexity of the five matrices, we record the reconstruction time, which characterizes the computational complexity. Table 5 shows the reconstruction time in seconds for the images in Fig. 4 with block size 32×16.

Table 5. The reconstruction time (second) of various test images with block size 32×16


Table 5 shows that, among all these matrices, the Gaussian matrix has the longest reconstruction time. This is because the additional multiplication operations introduce more reconstruction time. The numerical results show that the BSFDBM can reduce the computational complexity compared with its Gaussian counterpart. In particular, the complexity gain is large when reconstructing the color images “concordaerial”, “bone1” and “Saturn”, which corresponds to a large-scale data scenario. In addition, there is not much difference in reconstruction time among the BSFDBM, Bernoulli, binary and CsPM matrices.

Numerical simulations with sparse signals and images show that the reconstruction accuracy of BSFDBM matrices is superior to that of the Gaussian, Bernoulli, binary and CsPM matrices, which coincides with the conclusions of Theorem 4.2 and Corollary 4.1. The BSFDBM matrices can also reduce the reconstruction time compared with their Gaussian counterpart. Consequently, the designed BSFDBM matrices inspired by BSF possess the practical features of good sensing performance, low memory cost, low computational complexity and easy hardware implementation. These features make the proposed BSFDBM matrices applicable to practical CS scenarios, including sparse signal recovery and block CS of images.

6. Conclusion

Based on BSF, this paper proposes a novel method of constructing bipolar measurement matrices named BSFDBM and gives the related LFSR implementation. For BSFDBM matrices, the coherence is given in different situations and proved, via theoretical derivation, to be smaller than that of the corresponding Gaussian and Bernoulli matrices. Moreover, the corresponding practical features of BSFDBM are analyzed and compared. Simulation experiments show that the BSFDBM matrices outperform their Gaussian, Bernoulli, binary and chaotic bipolar counterparts in reconstruction accuracy. The BSFDBM matrices can also reduce the computational complexity compared with their Gaussian counterpart. The BSFDBM matrices are very efficient in terms of sensing performance, memory, complexity and hardware realization, which is beneficial to practical CS.

References

  1. Emmanuel J. Candès, Justin Romberg and Terence Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, February, 2006. https://doi.org/10.1109/TIT.2005.862083
  2. David L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, April, 2006. https://doi.org/10.1109/TIT.2006.871582
  3. Emmanuel J. Candès and Terence Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, December, 2005. https://doi.org/10.1109/TIT.2005.858979
  4. Jean Bourgain, Stephen Dilworth, Kevin Ford, Sergei Konyagin and Denka Kutzarova, "Explicit constructions of RIP matrices and related problems," Duke Mathematical Journal, vol. 159, no. 1, pp. 145-185, 2011. https://doi.org/10.1215/00127094-1384809
  5. Hongping Gan, Zhi Li, Jian Li, Xi Wang and Zhengfu Cheng, "Compressive sensing using chaotic sequence based on chebyshev map," Nonlinear Dynamics, vol. 78, no. 4, pp. 2429-2438, December, 2014. https://doi.org/10.1007/s11071-014-1600-1
  6. Juan Castorena and Charles D. Creusere, "The restricted isometry property for banded random matrices," IEEE Transactions on Signal Processing, vol. 62, no. 19, pp. 5073-5084, October, 2014. https://doi.org/10.1109/TSP.2014.2345350
  7. Hongping Gan, Song Xiao and Yimin Zhao, "A novel secure data transmission scheme using chaotic compressed sensing," IEEE Access, vol. 6, pp. 4587-4598, February, 2018. https://doi.org/10.1109/access.2017.2780323
  8. Mahsa Lotfi and Mathukumalli Vidyasagar, "A fast noniterative algorithm for compressive sensing using binary measurement matrices," IEEE Transactions on Signal Processing, vol. 66, no. 15, pp. 4079-4089, May, 2018. https://doi.org/10.1109/tsp.2018.2841881
  9. Hongping Gan, Song Xiao, Tao Zhang and Feng Liu, "Bipolar measurement matrix using chaotic sequence," Communications in Nonlinear Science and Numerical Simulation, vol. 72, pp. 139-151, June, 2019. https://doi.org/10.1016/j.cnsns.2018.12.012
  10. Shuxing Li and Gennian Ge, "Deterministic sensing matrices arising from near orthogonal systems," IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2291-2302, April, 2014. https://doi.org/10.1109/TIT.2014.2303973
  11. Li Zeng, Xiongwei Zhang, Liang Chen, Tieyong Cao and Jibin Yang, "Deterministic construction of toeplitzed structurally chaotic matrix for compressed sensing," Circuits, Systems, and Signal Processing, vol. 34, no. 3, pp. 797-813, March, 2015. https://doi.org/10.1007/s00034-014-9873-7
  12. Jun Zhang, Guojun Han and Yi Fang, "Deterministic construction of compressed sensing matrices from protograph LDPC codes," IEEE Signal Processing Letters, vol. 22, no. 11, pp. 1960-1964, November, 2015. https://doi.org/10.1109/LSP.2015.2447934
  13. Guohua Zhang, Rudolf Mathar and Quan Zhou, "Deterministic bipolar measurement matrices with flexible sizes from Legendre sequence," Electronics Letters, vol. 52, no. 11, pp. 928-930, May, 2016. https://doi.org/10.1049/el.2016.0765
  14. Tian Shujuan, Fan Xiaoping, Li Zhetao, Pan Tian, Choi Youngjune and Sekiya Hiroo, "Orthogonal-gradient measurement matrix construction algorithm," Chinese Journal of Electronics, vol. 25, no. 1, pp. 81-87, January, 2016. https://doi.org/10.1049/cje.2016.01.013
  15. R. Ramu Naidu, Phanindra Jampana and C. S. Sastry, "Deterministic compressed sensing matrices: construction via Euler Squares and applications," IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3566-3575, July, 2016. https://doi.org/10.1109/TSP.2016.2550020
  16. Pradip Sasmal, R. Ramu Naidu, Challa S. Sastry and Phanindra Jampana, "Composition of binary compressed sensing matrices," IEEE Signal Processing Letters, vol. 23, no.8, pp. 1096-1100, August, 2016. https://doi.org/10.1109/LSP.2016.2585181
  17. R. Ramu Naidu and Chandra R. Murthy, "Construction of binary sensing matrices using extremal set theory," IEEE Signal Processing Letters, vol. 24, no. 2, pp. 211-215, February, 2017. https://doi.org/10.1109/LSP.2016.2638426
  18. Weizhi Lu, Tao Dai and Shu-Tao Xia, "Binary matrices for compressed sensing," IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 77-85, January, 2018. https://doi.org/10.1109/TSP.2017.2757915
  19. Hongping Gan, Song Xiao, Yimin Zhao and Xiao Xue, "Construction of efficient and structural chaotic sensing matrix for compressive sensing," Signal Processing: Image Communication, vol. 68, pp. 129-137, October, 2018. https://doi.org/10.1016/j.image.2018.06.004
  20. Hongping Gan, Song Xiao and Yimin Zhao, "A large class of chaotic sensing matrices for compressed sensing," Signal Processing, vol. 149, pp. 193-203, August, 2018. https://doi.org/10.1016/j.sigpro.2018.03.014
  21. Liu Haiqiang, Yin Jihang, Hua Gang, Yin Hongsheng and Zhu Aichun, "Deterministic construction of measurement matrices based on Bose balanced incomplete block designs," IEEE Access, vol. 6, pp. 21710-21718, April, 2018. https://doi.org/10.1109/access.2018.2824329
  22. Gang Wang, Min-Yao Niu and Fang-Wei Fu, "Deterministic constructions of compressed sensing matrices based on optimal codebooks and codes," Applied Mathematics and Computation, vol. 343, pp. 128-136, February, 2019. https://doi.org/10.1016/j.amc.2018.09.042
  23. Haiyang Liu, Hao Zhang and Lianrong Ma, "On the spark of binary LDPC measurement matrices from complete protographs," IEEE Signal Processing Letters, vol. 24, no. 11, pp. 1616-1620, November, 2017. https://doi.org/10.1109/LSP.2017.2749043
  24. Jue Wang, Zhaoyang Zhang, Xianbin Wang, Hong Wang and Chunxu Jiao, "A low-complexity reconstruction algorithm for compressed sensing using Reed-Muller sequences," in Proc. of IEEE Int. Conf. on Communications, pp. 1-6, May 20-24, 2018.
  25. Sung-Hsien Hsieh, Chun-Shien Lu and Soo-Chang Pei, "Compressive sensing matrix design for fast encoding and decoding via sparse FFT," IEEE Signal Processing Letters, vol. 25, no. 4, pp. 591-595, April, 2018. https://doi.org/10.1109/LSP.2018.2809693
  26. Mohammad Fardad, Sayed Masoud Sayedi and Ehsan Yazdian, "A low-complexity hardware for deterministic compressive sensing reconstruction," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65, no. 10, pp. 3349-3361, October, 2018. https://doi.org/10.1109/tcsi.2018.2803627
  27. Nam Yul Yu and Guang Gong, "A new binary sequence family with low correlation and large size," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1624-1636, April, 2006. https://doi.org/10.1109/TIT.2006.871062
  28. Joel A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, October, 2004. https://doi.org/10.1109/TIT.2004.834793
  29. Scott Shaobing Chen, David L. Donoho and Michael A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33-61, August, 1998. https://doi.org/10.1137/S1064827596304010
  30. David L. Donoho and Michael Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization," Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197-2202, March, 2003.
  31. Jarvis Haupt, Waheed U. Bajwa, Gil Raz and Robert Nowak, "Toeplitz compressed sensing matrices with applications to sparse channel estimation," IEEE Transactions on Information Theory, vol. 56, no. 11, pp. 5862-5875, November, 2010. https://doi.org/10.1109/TIT.2010.2070191
  32. Robert Gold, "Maximal recursive sequences with 3-valued recursive cross-correlation functions," IEEE Transactions on Information Theory, vol. 14, no. 1, pp. 154-156, January, 1968. https://doi.org/10.1109/TIT.1968.1054106
  33. Udaya Parampalli, "Polyphase and frequency hopping sequences obtained from finite rings," Ph.D. dissertation, Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India, 1992.
  34. Oscar S. Rothaus, "Modified Gold codes," IEEE Transactions on Information Theory, vol. 39, no. 2, pp. 654-656, March, 1993. https://doi.org/10.1109/18.212299
  35. Abhijit. G. Shanbhag, P. Vijay Kumar and Tor Helleseth, "Improved binary codes and sequence families from $Z_4$-linear codes," IEEE Transactions on Information Theory, vol. 42, no. 5, pp. 1582-1587, September, 1996. https://doi.org/10.1109/18.532904