Maximum Likelihood (ML)-Based Quantizer Design for Distributed Systems

  • Kim, Yoon Hak (Department of Electronic Engineering, College of Electronics and Information Engineering, Chosun University)
  • Received : 2015.05.23
  • Accepted : 2015.07.07
  • Published : 2015.08.28

Abstract

We consider the problem of designing independently operating local quantizers at nodes in distributed estimation systems, where many spatially distributed sensor nodes measure a parameter of interest, quantize these measurements, and send the quantized data to a fusion node, which conducts the parameter estimation. Motivated by the discussion that the estimation accuracy can be improved by using the quantized data with a high probability of occurrence, we propose an iterative algorithm with a simple design rule that produces quantizers by searching boundary values with an increased likelihood. We prove that this design rule generates a considerably reduced interval for finding the next boundary values, yielding a low design complexity. We demonstrate through extensive simulations that the proposed algorithm achieves a significant performance gain with respect to traditional quantizer designs. A comparison with the recently published novel algorithms further illustrates the benefit of the proposed technique in terms of performance and design complexity.


I. INTRODUCTION

In distributed estimation systems, many sensor nodes randomly placed at known locations collect sensor readings for a parameter of interest and quantize them before transmission to a fusion node, where the parameter estimation is executed on the basis of the received data. For power-constrained systems such as sensor networks, locally quantizing the measurement at each node is a crucial task, particularly when data exchange between sensor nodes is precluded because of the communication overhead it entails. Thus, the problem of designing locally operating quantizers has been addressed by many researchers, who have developed practically efficient quantization techniques showing significant performance gains with respect to standard designs [1-8]. The generalized Lloyd design framework has been adopted to design independently operating quantizers for distributed systems that minimize global metrics such as the estimation error. However, design difficulties arise because the Lloyd design was developed to minimize a local metric, i.e., the reconstruction error. In particular, the quantization partitions should be constructed so as to minimize the global metric, and encoding the local measurements into such partitions at each node would require the measurements from the other nodes to compute that metric, which defeats local or independent encoding.

To optimize the global metric at each iteration while ensuring independent quantization, novel algorithms have been presented; for example, for distributed detection systems, the distance between the distributions under the two hypotheses was used to obtain a manageable design procedure [2]. A heuristic algorithm that minimizes an upper bound on the probability of error was devised for a distributed detection system [1]. Lam and Reibman [8] derived the necessary conditions for optimal quantization partitions in distributed estimation systems. A weighted cost function (i.e., local + λ × global) was employed in the design process for acoustic sensor networks [6,7] to guarantee a non-increasing cost function without deviating from the regular design framework, which is accomplished by a search for appropriate weights. Further, note that the resulting scalar quantizers should be non-regular in order to reduce the distortion, implying that the same codeword is assigned to several disjoint intervals [9]. It has been shown that regular quantizers can be systematically transformed into non-regular ones to achieve a substantial improvement in the estimation performance [10]. Recent work [4] has focused on designing non-regular quantizers for distributed estimation systems within the Lloyd design framework. As expected, non-regular quantization achieves a significant performance gain at the cost of increased design complexity as compared to regular designs.

In this paper, we focus on one of the estimation techniques, namely maximum likelihood (ML) estimation, for quantizer design and propose an iterative design algorithm that seeks boundary values with an increased likelihood for the construction of quantization partitions. This approach is motivated by the observation that the estimation performance can be significantly improved by using received quantized data with a high probability of occurrence. We present a simple design rule that allows us to determine the interval in which boundary values with an increased likelihood are very likely to be found. We prove that the design rule is guaranteed to generate substantially reduced intervals for the boundary values, facilitating a rapid construction of the quantization partitions. We evaluate the proposed algorithm through extensive experiments, demonstrating a clear design benefit in terms of performance and design complexity as compared to typical designs and recently published novel techniques [4,7].

The rest of this paper is organized as follows: the problem formulation of the quantizer design is provided in Section II. A simple design rule for boundary values is presented in Section III-A, and the proposed iterative design algorithm is summarized in Section III-B and applied to a source localization system in acoustic sensor networks; this application is briefly discussed in Section IV. The simulation results are given in Section V, and the conclusions are presented in Section VI.

 

II. PROBLEM FORMULATION

Assume that there are M sensor nodes deployed at certain known spatial locations, denoted by xi ∈ R², i = 1, …, M, and that each node measures the unknown parameter θ in a sensor field S ⊂ R^N to be estimated. The measurement at node i, denoted by zi, can then be expressed as

zi = fi(θ) + ωi,  i = 1, …, M,

where fi(θ) represents the sensing model employed at node i and ωi denotes the measurement noise, which can be approximated by the normal distribution N(0, σi²). Note that the measurements are assumed to be conditionally independent given the parameter; that is, P(z1, …, zM|θ) = ∏_{i=1}^{M} P(zi|θ). It is also assumed that the i-th node is equipped with an Ri-bit quantizer αi(zi) with dynamic range [bi,1, bi,Li+1] that generates the quantization index Qi ∈ Ii = {1, …, 2^Ri = Li}, where Li denotes the number of quantization levels and {bi,j}, j = 1, …, Li+1, is the set of boundary values that determines the quantization partitions. For example, the quantizer αi generates the j-th quantization index Qi = j whenever zi belongs to the j-th quantization partition [bi,j, bi,j+1). Each node quantizes its measurement and sends the resulting quantization index to a fusion node, which produces the estimate of the parameter on the basis of the quantization indices Q1, …, QM from all nodes.
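To make the encoding step concrete, the following minimal sketch (in Python; the boundary values and the 2-bit example are illustrative and not taken from the paper) shows how a node maps its measurement zi to a quantization index using its own boundary values only.

import numpy as np

def quantize(z_i, boundaries):
    # boundaries holds the sorted values [b_{i,1}, ..., b_{i,L_i+1}] defining
    # L_i partitions; measurements outside the dynamic range fall into the
    # first or last bin.
    j = int(np.searchsorted(boundaries[1:-1], z_i, side="right")) + 1
    return j

# Example with a hypothetical 2-bit quantizer (L_i = 4 partitions)
boundaries = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(quantize(2.5, boundaries))  # -> 3, since 2.5 lies in [b_{i,3}, b_{i,4})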

A. Unconstrained Quantization

Obviously, we can make a better decision with more information. Suppose that a local measurement zi(θ) belongs to the k-th quantization bin at sensor i, i.e., αi(zi(θ)) = k. Since only the measurement zi is available at sensor i, the encoder will send the index of the interval to which zi belongs. Standard regular quantizers apply decision rules based on this local distance (see Fig. 1). However, once we know the indices from the other sensors, we can make a better quantization decision so as to improve the estimation accuracy. Formally, for some parameter θ, we may have

where αi (zi(θ)) = k with the other encoders αm, ∀m ≠ i fixed.

Fig. 1. Standard quantization vs. unconstrained quantization.

This indicates that if the sensor reading zi is instead assigned to the j-th bin at sensor i, the estimation error can be reduced. Such quantization is called unconstrained quantization (UQ), denoted αiU(·), since the quantization is conducted with all the available quantized sensor readings. UQ would be possible if local communication between sensors were allowed at the expense of increased power consumption. However, since we adopt a framework where only communication between the sensors and the fusion node is permitted, UQ is practically impossible to conduct. Here, we propose to design quantizers by exploiting the concept of UQ. Notice that UQ may yield a mapping αiU different from αi, which can be constructed as follows:

Clearly, the set Vij contains the sample measurements for which better estimation accuracy would be achieved if they were assigned to the j-th quantization bin. Ideally, if we could make the mapping αi identical to αiU, we could minimize the estimation error with such a quantizer, although this is impossible in our framework.

In this work, we focus on a quantizer design based on the sets Vij. Note that there may be several ways to exploit Vij in order to design independent quantizers that reduce the estimation error; see [6,7], where a simple distance rule is applied to obtain quantizers, albeit with the possibility of divergence. Here, we seek to design scalar quantizers by presenting a design rule for searching boundary values that are very likely to increase the likelihood and thus reduce the estimation error at each step.
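As a concrete illustration of the sets Vij, the sketch below collects, from a training set, the sample measurements of node i that would be estimated more accurately if they were assigned to bin j. The data layout and the encode/estimate callbacks are illustrative assumptions, not the paper's construction rule.

import numpy as np

def build_uq_sets(train, encode, estimate, i, L_i):
    # V[j] will hold the measurements of node i for which reporting bin j
    # (with the other nodes' indices fixed) gives the smallest estimation error.
    V = {j: [] for j in range(1, L_i + 1)}
    for theta, z in train:                               # z = (z_1, ..., z_M)
        Q = [encode(m, z_m) for m, z_m in enumerate(z)]  # current local indices
        errors = []
        for j in range(1, L_i + 1):                      # try every bin at node i
            Q_try = list(Q)
            Q_try[i] = j
            errors.append(np.linalg.norm(np.asarray(theta) - estimate(Q_try)))
        best = int(np.argmin(errors)) + 1                # bin with the smallest error
        V[best].append(z[i])
    return V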

 

III. QUANTIZER DESIGN ALGORITHM

A. Design Criterion: Maximum Likelihood

The estimation is typically conducted using certain criteria such as the ML or the minimum mean square error (MMSE). Suppose that there are quantized observations Q1, …, QM; we can then build estimators on the basis of these observations. For example, the ML estimator is given by

θ̂_ML = arg max_θ L(θ; Q),

where Q = (Q1, …, QM) denotes the observations sent by the M sensor nodes and the likelihood function L(θ;Q) is identical to the joint probability mass function of Q but is viewed as a function of θ. Similarly, the MMSE estimator is obtained using the conditional mean E(θ|Q). Note that the estimators explicitly use the observations, and thus, the performance will heavily depend upon them. The following questions arise here: What if better observations are received for the estimation? Is there any way of generating better observations for the given estimators so as to achieve good estimation accuracy?
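For reference, the ML estimator above can be computed by a grid search; the following minimal sketch (Python; the grid search, the candidate set, and the numerical guard are implementation choices, and sigma[i] > 0 is assumed) evaluates P(Q|θ) under the Gaussian measurement model of Section II.

import numpy as np
from scipy.stats import norm

def ml_estimate(Q, boundaries, f, sigma, grid):
    # Q[i]          : received index of node i (1-based)
    # boundaries[i] : boundary values [b_{i,1}, ..., b_{i,L_i+1}] of node i
    # f(i, theta)   : sensing model f_i(theta); sigma[i] : noise std of node i
    # grid          : candidate parameter values theta
    best_theta, best_ll = None, -np.inf
    for theta in grid:
        ll = 0.0
        for i, j in enumerate(Q):
            lo, hi = boundaries[i][j - 1], boundaries[i][j]
            # P(Q_i = j | theta): probability mass of N(f_i(theta), sigma_i^2) in the bin
            p = norm.cdf(hi, f(i, theta), sigma[i]) - norm.cdf(lo, f(i, theta), sigma[i])
            ll += np.log(max(p, 1e-12))                  # guard against log(0)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta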

Suppose that there are two quantizers αi and α′i at sensor i, which generate the observations Qi and Q′i, respectively, for a given parameter θ to be estimated. We construct the ML estimators, denoted by θ̂1 and θ̂2, on the basis of the observation vectors Q1 = (Q1, …, Qi, …, QM) and Q2 = (Q1, …, Q′i, …, QM), respectively. Formally, θ̂1 = arg max_θ P(Q1|θ) and θ̂2 = arg max_θ P(Q2|θ).

Then, which estimator would provide a better estimate of θ? If max_θ P(Q1|θ) ≥ max_θ P(Q2|θ), we can expect θ̂1 to be more likely to yield an accurate estimate than θ̂2; equivalently, αi can be said to generate a better observation than α′i. On the basis of this discussion, we show that UQ can be constructed to find better observations, which in turn will be used for designing better quantizers.

In this work, we consider a quantizer design that generates better observations in terms of the ML criterion, since the ML estimator is often used in practice because of its low complexity and reasonable performance. The quantization partition for UQ is then constructed as follows:

Our strategy for quantizer design is to update the boundary values of our quantizer αi whenever there are sample measurements zi(θ) with αi(zi) ≠ αiU(zi), such that the ML-based estimation error in (5) is minimized. In our design, each boundary value is updated in a sequential manner; for example, to update bi,j, we collect the sample measurements zi(θ) with αi(zi) = j − 1 and αiU(zi) = j, or with αi(zi) = j and αiU(zi) = j − 1, and find the boundary value that minimizes the estimation error. We present a design rule and prove that it provides a substantially reduced interval in which boundary values with an increased likelihood are very likely to be found.

Now, we are in a position to prove the theorem that states a simple design rule to determine the search interval for bi,j; this rule is applicable to the other boundary values as well.

Theorem 1. Suppose that the quantization partitions for UQ are constructed by using (5) to generate αiU, and that the samples zi with αi(zi) = j − 1 and αiU(zi) = j (or the samples with αi(zi) = j and αiU(zi) = j − 1) are collected to form the set denoted by Sij− (or Sij+). Let bi,j be the j-th boundary value for αi at the current iteration, and let b′i,j be the updated value of bi,j at the next iteration, obtained from Sij− or Sij+. Then,

where L(θ; Q, bi,j) represents the likelihood function computed by using bi,j, and δ denotes a sufficiently small positive value chosen for Sij− and Sij+, respectively.

Proof. We first prove the case of the samples zi ∈ Sij−. In this case, the quantizer at the next iteration is updated by using the rule presented in (6), and the boundary values are updated accordingly. Then, we have the following:

where Q′i and Qi indicate the quantization indices generated by the updated quantizer α′i and by αi, respectively. Note that the second term in (8) reduces to the likelihood generated by αiU because αiU(zi) = j for zi ∈ Sij−, and (9) follows from the UQ construction given in (5). Similarly, we can prove the case of the samples zi ∈ Sij+. Therefore, we conclude that the boundary values increasing the likelihood function are very likely to be found in the reduced interval given in Theorem 1.

Thus, the theorem states that quantizers can be designed to produce better observations in terms of the ML by searching only the reduced interval for the next boundary value, whereas the typical search interval for a boundary value in a quantizer design would be [bi,j−1, bi,j+1] in the case of a random sequential search framework (see Fig. 2). Formally,

Fig. 2. Sequential search framework: the next boundary value is updated by searching the reduced interval.

In our experiments, we demonstrated that the update rule given in (10) allows us to efficiently design quantizers that reduce the ML-based estimation error at each iteration as compared to a random search for the boundary values.
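The single-boundary update can be sketched as follows (Python). Since the exact interval expressions of Theorem 1 and (10) are not reproduced above, the bracketing of the reduced interval, the δ margin, and the candidate grid below are assumptions based on the description of Theorem 1 and Fig. 2.

import numpy as np

def update_boundary(b_i, j, S_minus, S_plus, avg_likelihood, delta=1e-3, n_cand=50):
    # b_i            : boundary values of sensor i; b_i[j] is the boundary being
    #                  updated (0-based indexing is used here)
    # S_minus        : samples with alpha_i = j-1 but alpha_i^U = j
    # S_plus         : samples with alpha_i = j   but alpha_i^U = j-1
    # avg_likelihood : callable returning the training-data likelihood obtained
    #                  when b_i[j] is replaced by a candidate value
    samples = np.asarray(list(S_minus) + list(S_plus))
    if samples.size == 0:
        return b_i[j]                         # nothing to move: keep the boundary
    # assumed reduced interval: bracket the current boundary and the collected
    # samples, extended by a small delta (cf. Theorem 1 and Fig. 2)
    lo = min(samples.min() - delta, b_i[j])
    hi = max(samples.max() + delta, b_i[j])
    candidates = np.linspace(lo, hi, n_cand)
    best = max(candidates, key=avg_likelihood)
    # accept the move only if it actually increases the likelihood
    return best if avg_likelihood(best) > avg_likelihood(b_i[j]) else b_i[j]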

B. Summary of Proposed Algorithm

Given the number of quantization levels, Li = 2^Ri, at sensor i, the algorithm is summarized below; it is conducted iteratively over all sensors i = 1, …, M until no change occurs in any αi.

Algorithm 1. Iterative quantizer design algorithm at sensor i
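The algorithm body is not reproduced above, so the following Python skeleton is only a sketch of the iteration described in the text. It reuses the update_boundary sketch from Section III-A, and the two callbacks (build_uq_sets for the sets Sij−/Sij+ and make_avg_likelihood for the likelihood of the training data) are assumed helpers rather than functions defined in the paper.

def design_mlq(boundaries, build_uq_sets, make_avg_likelihood):
    # boundaries[i] holds the boundary values of sensor i, initialized with EDQ;
    # the sweep is repeated until no boundary changes for any sensor.
    changed = True
    while changed:
        changed = False
        for i in range(len(boundaries)):                  # sweep over sensors
            for j in range(1, len(boundaries[i]) - 1):    # interior boundaries only
                S_minus, S_plus = build_uq_sets(boundaries, i, j)
                # likelihood of the training data as a function of a candidate b_{i,j}
                avg_likelihood = make_avg_likelihood(boundaries, i, j)
                new_b = update_boundary(boundaries[i], j, S_minus, S_plus, avg_likelihood)
                if new_b != boundaries[i][j]:
                    boundaries[i][j] = new_b
                    changed = True
    return boundaries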

 

IV. APPLICATION OF DESIGN ALGORITHM

We briefly introduce a source localization system in acoustic sensor networks for the evaluation of the proposed algorithm. For collecting the sensor readings at the nodes, an energy decay model was proposed and verified by field experiments in [11] and was also used in [12-14]. In particular, the signal energy measured at sensor i, denoted by zi, can be written as

zi = fi(θ) + wi = gi · a / ‖θ − xi‖^α + wi,    (11)

where θ denotes the source location to be estimated, and the sensing model fi(θ) consists of the gain factor gi of the i-th sensor and an energy decay factor α that is approximately equal to 2. The source signal energy a is assumed to be known in our evaluation, although the signal energy is typically unknown and can be jointly estimated with the source location [14]. The measurement noise term wi can be approximated by a normal distribution N(0, σi²).
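A minimal sketch of this sensing model follows (Python); the parameter values are the nominal ones quoted in Section V, and the sensor position in the example is illustrative.

import numpy as np

def energy_measurement(theta, x_i, a=50.0, g_i=1.0, alpha=2.0, sigma_i=0.0, rng=None):
    # z_i = g_i * a / ||theta - x_i||^alpha + w_i, with w_i ~ N(0, sigma_i^2)
    rng = rng or np.random.default_rng()
    f_i = g_i * a / np.linalg.norm(np.asarray(theta) - np.asarray(x_i)) ** alpha
    w_i = rng.normal(0.0, sigma_i) if sigma_i > 0 else 0.0
    return f_i + w_i

# Example: a source at (3, 4) m measured by a sensor at the origin
print(energy_measurement([3.0, 4.0], [0.0, 0.0]))        # -> 2.0 (= 50 / 5**2)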

 

V. SIMULATION RESULTS

In this section, we first generate a training set under the assumption of a uniform distribution of the source locations, with the model parameters in (11) given by a = 50, α = 2, gi = 1, and σi² = σ² = 0, and then iteratively optimize the quantizers by using the algorithm proposed in Section III-B. In designing our quantizer, denoted the maximum likelihood-based quantizer (MLQ), we use the equally distance-divided quantizer (EDQ) to initialize the quantizers. Note that EDQ was first introduced in [6] and verified to be very efficient for initialization because of its good localization performance in acoustic sensor networks [7]. We also designed typical quantizers such as uniform quantizers and Lloyd quantizers from the same training sets for the sake of comparison. In the experiments, we generated 100 different configurations of M = 5 sensor nodes deployed in a sensor field measuring 10 m × 10 m. For each configuration, we designed quantizers by using the various algorithms and evaluated them on test sets of 1000 source locations generated under the same simulation conditions. For the performance comparison, we computed the average localization error by using ML estimation for fast computation.
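For concreteness, a sketch of the training-data generation under these conditions is given below (Python). The sensor placement, the number of training locations, and the reuse of the energy_measurement sketch from Section IV are illustrative assumptions; the paper does not state the training-set size.

import numpy as np

rng = np.random.default_rng(0)
M, field = 5, 10.0                                        # 5 nodes in a 10 m x 10 m field
sensors = rng.uniform(0.0, field, size=(M, 2))            # one random node configuration
train_thetas = rng.uniform(0.0, field, size=(2000, 2))    # uniform source locations
# noiseless training measurements (sigma_i = 0) with a = 50, alpha = 2, g_i = 1
train = [(theta, [energy_measurement(theta, s, sigma_i=0.0) for s in sensors])
         for theta in train_thetas]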

A. Performance Comparison with Typical Designs

In this experiment, we examined the performance gain achieved by our quantizer with respect to typical designs such as the uniform quantizer (Unif Q) and the Lloyd quantizer (Lloyd Q). Furthermore, we designed quantizers, denoted random search-based quantizers (RSQs), by randomly searching the boundary values without using the proposed rule; the random search was continued iteratively until its design complexity matched that of the proposed algorithm, for a fair comparison. A test set for each configuration was generated to collect the signal energy measurements with σi = 0, and the measurements were then quantized by each of the quantizers for Ri = 2, 3, 4 bits. The average localization error in meters for each rate Ri was computed and is plotted in Fig. 3.

Fig. 3. Comparison of the maximum likelihood-based quantizer (MLQ) with the uniform quantizer, the Lloyd quantizer, and the random search-based quantizer (RSQ). The average localization error in meters is plotted vs. the rate Ri (bits).

It can be easily seen that our design technique yields a significant performance improvement over typical designs because of its design process that generates quantized data with a high probability of occurrence.

B. Performance Comparison with Previous Novel Designs

In this experiment, we compared the proposed algorithm with recently published novel designs such as the distributed optimized quantizer (DOQ) in [4] and the localization-specific quantizer (LSQ) in [7]. Note that DOQ incurs a huge design complexity in achieving its good performance. We evaluated the design algorithms by generating two test sets of 1000 random source locations with σi = 0 and σi = 0.15, respectively. In Fig. 4, the rate–distortion (R–D) performance curves are plotted for comparison; the proposed algorithm is observed to perform well with a reasonable design complexity.

Fig. 4. Comparison of MLQ with recent designs. The average localization error in meters is plotted vs. the total rate (bits) consumed by 5 sensors with σ² = 0 (left) and σ² = 0.15² (right), respectively. LSQ: localization-specific quantizer, DOQ: distributed optimized quantizer, MLQ: maximum likelihood-based quantizer.

C. Performance Evaluation in the Presence of Measurement Noise

We investigated how the design algorithms operate in the presence of measurement noise by generating a test set of 1000 source locations for each configuration with a = 50 and the SNR ranging from 20 dB to 100 dB, obtained by varying σ. Here, the SNR was computed as 10 log10(a²/σ²) and measured at a distance of 1 m from the source. Thus, the noise level that the sensor nodes actually experienced was much more severe; for example, SNR = 40 dB corresponds to SNR ≈ 5.5 dB at each sensor location on average. Note that practical vehicle targets often generate a much higher SNR than 40 dB, and the noise variance is typically σ² = 0.05² (i.e., 60 dB) [11,12]. Fig. 5 illustrates that the proposed design also provides a competitive advantage in a noisy environment.
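As a quick check of this SNR convention (Python; the example sensor-to-source distance is illustrative): the noise standard deviation follows from the reference SNR at 1 m, and the SNR actually seen by a sensor at distance d drops by 20·α·log10(d) dB.

import numpy as np

a, alpha = 50.0, 2.0
snr_ref_db = 40.0                           # SNR measured at 1 m from the source
sigma = a / 10 ** (snr_ref_db / 20)         # from SNR = 10*log10(a^2 / sigma^2): sigma = 0.5
d = 4.0                                     # example sensor-to-source distance in meters
snr_at_sensor = 10 * np.log10((a / d ** alpha) ** 2 / sigma ** 2)
print(sigma, snr_at_sensor)                 # ~15.9 dB received SNR at 4 m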

Fig. 5. Performance evaluation under noisy conditions. The average localization error is plotted vs. the SNR (dB) with M = 5, Ri = 3, and a = 50. LSQ: localization-specific quantizer, MLQ: maximum likelihood-based quantizer, SNR: signal-to-noise ratio.

 

VI. CONCLUSIONS

In this paper, we have proposed an iterative design algorithm that constructs quantization partitions whose average probability of occurrence increases at each step. We have presented a simple design rule that reduces the sequential search interval for the boundary values while maintaining an increased likelihood. By comparison with typical standard designs and previously published novel ones, we have demonstrated that the proposed algorithm offers a noteworthy performance improvement with low design complexity. In future work, we will focus on practical design methodologies for distributed systems by proposing novel distributed clustering techniques.

References

  1. W. A. Hashlamoun and P. K. Varshney, “Near-optimum quantization for signal detection,” IEEE Transactions on Communications, vol. 44, no. 3, pp. 294-297, 1996. https://doi.org/10.1109/26.486322
  2. M. Longo, T. D. Lookabaugh, and R. M. Gray, “Quantization for decentralized hypothesis testing under communication constraints,” IEEE Transactions on Information Theory, vol. 36, no. 2, pp. 241-255, 1990. https://doi.org/10.1109/18.52470
  3. Y. H. Kim, “Functional quantizer design for source localization in sensor networks,” EURASIP Journal on Advances in Signal Processing, vol. 2013, pp. 1-10, 2013. https://doi.org/10.1186/1687-6180-2013-1
  4. Y. H. Kim, “Quantizer design optimized for distributed estimation,” IEICE Transactions on Information and Systems, vol. 97, no. 6, pp. 1639-1643, 2014. https://doi.org/10.1587/transinf.E97.D.1639
  5. Y. H. Kim, “Weighted distance-based quantization for distributed estimation,” Journal of Information and Communication Convergence Engineering, vol. 12, no. 4, pp. 215-220, 2014. https://doi.org/10.6109/jicce.2014.12.4.215
  6. Y. H. Kim and A. Ortega, "Quantizer design for source localization in sensor networks," in Proceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing (ICASSP2005), Philadelphia, PA, pp. 857-860, 2005.
  7. Y. H. Kim and A. Ortega, “Quantizer design for energy-based source localization in sensor networks,” IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5577-5588, 2011. https://doi.org/10.1109/TSP.2011.2163401
  8. W. Lam and A. Reibman, “Design of quantizers for decentralized estimation systems,” IEEE Transactions on Communications, vol. 41, no. 11, pp. 1602-1605, 1993. https://doi.org/10.1109/26.241739
  9. N. Wernersson, J. Karlsson, and M. Skoglund, “Distributed quantization over noisy channels,” IEEE Transactions on Communications, vol. 57, no. 6, pp. 1693-1700, 2009. https://doi.org/10.1109/TCOMM.2009.06.070482
  10. Y. H. Kim and A. Ortega, “Distributed encoding algorithms for source localization in sensor networks,” EURASIP Journal on Advances in Signal Processing, vol. 2010, article ID. 781720, pp. 1-13, 2010.
  11. D. Li and Y. H. Hu, “Energy-based collaborative source localization using acoustic microsensor array,” EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, pp. 321-337, 2003. https://doi.org/10.1155/S1110865703212075
  12. J. Liu, J. Reich, and F. Zhao, “Collaborative in-network processing for target tracking,” EURASIP Journal on Applied Signal Processing, vol. 2003, no. 4, pp. 378-391, 2003. https://doi.org/10.1155/S111086570321204X
  13. A. O. Hero and D. Blatt, "Sensor network source localization via projection onto convex sets (POCS)," in Proceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing (ICASSP2005), Philadelphia, PA, pp. 689-692, 2005.
  14. Y. H. Kim and A. Ortega, "Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings," in Proceedings of IEEE International Conference on Acoustic, Speech, and Signal Processing (ICASSP 2006), Toulouse, France, 2006.
