Evolutionary Algorithm-based Space Diversity for Imperfect Channel Estimation

  • Received : 2013.08.07
  • Accepted : 2014.04.19
  • Published : 2014.05.29

Abstract

In space diversity combining, conventional methods such as maximal ratio combining (MRC), equal gain combining (EGC) and selection combining (SC) are commonly used to improve the output signal-to-noise ratio (SNR), provided that the channel is perfectly estimated at the receiver. In practice, however, channel estimation is often imperfect, which degrades the system performance. In this paper, diversity combining techniques based on two evolutionary algorithms, namely the genetic algorithm (GA) and particle swarm optimization (PSO), are proposed and compared. Numerical results indicate that the proposed methods outperform the conventional MRC, EGC and SC methods when the channel estimation is imperfect, while they show performance similar to that of MRC when the channel is perfectly estimated.

1. Introduction

Diversity is one of the most powerful means of overcoming multipath fading in wireless communications and is used to improve the quality of service (QoS) [1]. Diversity combining is a technique that combines multiple received signals into a single aggregated signal [2]. Such methods lessen the harmful consequences of fading: the principal idea is to extract information from copies of the signal transmitted over several fading channels so as to increase the received signal-to-noise ratio (SNR). The idea of adjusting a series of weighting coefficients applied to spatially separated received copies of the transmitted signal is depicted in Fig. 1. Assuming that the channel is perfectly estimated, conventional diversity techniques such as maximal ratio combining (MRC), equal gain combining (EGC) and selection combining (SC) are extensively used to maximize the SNR of the output signal. The aim of these methods is to find a set of optimal weights at which the output SNR is maximized; MRC, EGC and SC differ in the manner in which the weighting coefficient vector is chosen [3].

Fig. 1. Diversity combining block diagram

The performance of these methods has been comprehensively examined in the literature. If the channel is perfectly estimated at the receiver, MRC can be applied to maximize the output SNR and minimize the bit error rate (BER) [2]. Nevertheless, since channel estimation is often imperfect in practice, the estimation error degrades the system performance. While this problem has long been investigated [3], [4], recent developments in mobile communication systems have renewed interest in understanding and mitigating the effect of imperfect channel estimation on diversity techniques [5-7]. The error performance of MRC in a Rayleigh fading environment with independent and identically distributed (i.i.d.) diversity branches is investigated in [4], [5]. In [6], the SNR distribution is given for similar scenarios. In [7], the error performance of MRC with independent but not identically distributed (i.n.d.) branches is studied. In [8], the MRC technique shows good diversity performance, but it suffers from hardware implementation difficulties as well as increased computational delay. In practical MRC systems, spatial combining cannot be performed optimally due to channel estimation inaccuracies. Thus, resolving the performance degradation caused by channel estimation error remains an open research question [9]. In this paper, iterative Genetic Algorithm- (GA-) and Particle Swarm Optimization- (PSO-) based diversity combining techniques, in which the signals received by the antennas are weighted based on the GA and PSO algorithms, are proposed and compared to existing techniques. It is shown that the proposed diversity combining methods do not require channel estimation and outperform MRC when channel estimation is imperfect, while having almost the same performance as MRC when the channel is perfectly estimated.

 

2. System model

The transmitted symbol s is drawn with equal probability from a binary phase-shift keying (BPSK) constellation, where Es is the average symbol energy. A frequency non-selective, slow fading channel over the length of the transmitted symbol is assumed. We also assume that M diversity branches are employed at the receiver for reception. The received signal at the ith branch is given by

ri = gi s + ni,  i = 1, 2, …, M,   (1)

where gi denotes the complex channel gain, whose real and imaginary parts are uncorrelated and Gaussian-distributed, each with zero mean and variance σg². The noise random variable ni is complex additive white Gaussian noise (AWGN) with zero mean and variance σn². The channel gains gi at two different diversity branches are assumed uncorrelated and identically distributed. It is also assumed that gi and ni are uncorrelated. The receiver then linearly combines the received signals ri with ωi, the weighting coefficient of the ith branch. The output r of the linear diversity combiner is then given by

r = Σ_{i=1..M} ωi ri.   (2)

Conditioned on the set {gi}, the SNR at the output of the combiner (as per the definition in [4]) is expressed as

γs(ω) = Es |Σ_{i=1..M} ωi gi|² / (σn² Σ_{i=1..M} |ωi|²).   (3)

It is observable that the SNR is highly dependent on ωi. Therefore, the optimal solution is the weighting vector ω which maximizes γs(ω) in (3) as an objective function. We assume pi is the estimate of the complex gain gi on the ith diversity branch and ei is the estimation error with zero mean and variance σe² = σg²(1−ρ²), where ρ ∈ [0,1] is the normalized estimation error correlation coefficient. Under the Gaussian-error model, gi and pi are related as gi = pi + ei [10]. According to the diversity combining rule, the combining weights take the values ωi = pi* for MRC, which, by the Cauchy-Schwarz inequality, maximizes (3) if the channel is perfectly estimated (i.e., ρ = 1). Moreover, in the high-ρ region (ρ→1), even a small perturbation in ρ can radically alter the output SNR. However, since channel estimation is often imperfect in practice, MRC is then a suboptimal solution. In the following sections, we describe the GA- and PSO-based combining methods, which obtain the optimal weight vector ω without explicit channel estimation.
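As a minimal sketch, the conditional output SNR in (3) can be evaluated for any weight vector, and the MRC choice ωi = gi* under perfect estimation can be compared against other weightings (the values Es = 1, σn² = 1, M = 4 and the random seed are hypothetical, chosen only for illustration):

```python
import cmath
import random

def output_snr(w, g, Es=1.0, sigma_n2=1.0):
    """Conditional combiner output SNR as in (3):
    gamma_s(w) = Es * |sum_i w_i g_i|^2 / (sigma_n^2 * sum_i |w_i|^2)."""
    num = abs(sum(wi * gi for wi, gi in zip(w, g))) ** 2
    den = sigma_n2 * sum(abs(wi) ** 2 for wi in w)
    return Es * num / den

# Draw M = 4 complex Rayleigh-fading gains (about 0.5 variance per dimension)
random.seed(7)
g = [complex(random.gauss(0.0, 0.707), random.gauss(0.0, 0.707)) for _ in range(4)]

# Under perfect estimation the MRC weights w_i = g_i* maximize (3);
# equal-gain weights (co-phasing only) are shown for comparison.
w_mrc = [gi.conjugate() for gi in g]
w_egc = [cmath.exp(-1j * cmath.phase(gi)) for gi in g]
```

By the Cauchy-Schwarz inequality, `output_snr(w_mrc, g)` is never smaller than the SNR of any other weighting of the same channel realization.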

 

3. Evolutionary algorithms for space diversity

3.1 The Genetic algorithm (GA) based technique

GA is classified as an evolutionary algorithm, i.e., a stochastic search method that mimics natural evolution. It has been used to solve difficult problems, such as non-deterministic problems and machine learning tasks, as well as other applications such as the evaluation of pictures and music. A key advantage of GAs over other methods is their inherent parallelism. GAs move through a predefined search space using individuals, aiming to reach an optimal decision. GA is a population-based method in which the individuals evolve in parallel; the optimal individual is preserved and is eventually obtained from the last population.

The GA starts by randomly generating a set of chromosomes. These chromosomes establish a population (pops). Since the GA models natural processes such as selection, crossover and mutation, it operates on a population of individuals instead of a single individual. A random population of chromosomes is initialized and then evaluated by the fitness function of the particular problem. The population is then checked against the optimization criterion defined by the engineer, and a new population is bred from the previous one if the termination criterion is not met. The new individuals are usually selected according to their fitness values [11]. The fittest chromosomes are selected as parents and undergo mating operations such as crossover and mutation to produce better offspring. The new offspring are then evaluated and become the parents of the next generation if the termination criterion is not satisfied. This cycle is repeated until a certain criterion is met, where each iteration is called a generation.

In our proposed GA method, an initial population of pops candidate solutions is generated randomly and each individual is normalized to satisfy the constraints. Our goal is to find the optimal set of weighting vector values to improve the received SNR. When the predefined maximum number of generations is reached, the GA terminates and returns the weighting vector ω with the highest fitness, which is considered the best solution to the objective function in (3). Assuming M branches and letting ωj be the weighting vector of the jth individual, consisting of ω1, ω2, ω3, …, ωM, the fitness value of the jth individual is defined as

fj = γs(ωj),

where γs denotes the output SNR achieved by the individual. The main steps of the GA are selection, crossover, and mutation. The main concept of selection is to choose the best chromosomes for reproduction through crossover and mutation. In this research, we use the roulette wheel selection method. The probability of selecting the jth individual (chromosome), pj, can be written as

pj = fj / Σ_{k=1..pops} fk,

where fj is the fitness value of the jth chromosome.
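Roulette wheel selection can be sketched as follows (a minimal illustration; the function name and the fitness values in the usage note are hypothetical):

```python
import random

def roulette_select(fitness):
    """Roulette wheel selection: index j is chosen with probability
    p_j = f_j / sum_k f_k (fitness values must be positive)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for j, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return j
    return len(fitness) - 1  # guard against floating-point round-off
```

Over many draws, an individual carrying 90% of the total fitness is selected about 90% of the time, so fitter chromosomes dominate the mating pool without excluding the rest.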

The operational mechanism of the GA is presented next. Fig. 2 shows an explanatory flowchart of the GA mechanism. The algorithm begins by initializing a random population P(t) representing a set of possible solutions maintained within a predefined search space. The population fitness is then evaluated, and the fittest ξ·pops chromosomes are identified and stored as Pξ(t), where ξ is the elitism ratio and pops is the population size. Elitism forces the GA to retain a number of the best chromosomes at every generation, which might otherwise be destroyed by subsequent operations. The main GA operations (selection, crossover, and mutation) are then performed. The new individuals (children) resulting from these operations form the offspring population, denoted C(t). Once C(t) is formed, a new population P(t+1) is constructed by combining C(t) with the fittest chromosomes of the parent population, Pξ(t). The fitness of P(t+1) is evaluated and a stopping criterion is checked to decide whether to terminate the optimization. The stopping criterion might be a predefined number of generations, a predefined fitness threshold, or the absence of noticeable improvement over a predefined number of generations. If the stopping criterion is met, the algorithm terminates and the optimal solution is extracted from P(t+1). Otherwise, P(t) is replaced by P(t+1) and the GA evolutionary processes are repeated. The GA mechanism drives convergence toward better sets of solutions, which ideally include an optimal solution to the problem at hand [12].

Fig. 2. Flowchart of the GA algorithm

In Fig. 3, the crossover and mutation operations of the GA are graphically represented. Two 10-bit chromosomes (parents) undergo a crossover operation to produce two offspring (children). The mutation operation is then invoked to randomly invert arbitrary bits of each child chromosome to obtain the final mutated offspring. Crossover is a means of exploring the search space on a large scale, whereas mutation is a fine-tuning process used to steer the GA's search toward promising regions [13].
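The two operations of Fig. 3 can be sketched on bit strings as follows (the particular bit patterns and the mutation probability pm are illustrative assumptions, not values from the paper):

```python
import random

def crossover(p1, p2, point):
    """Single-point crossover: swap the tails of two equal-length bit strings."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, pm=0.05):
    """Flip each bit independently with mutation probability pm."""
    return "".join(str(1 - int(b)) if random.random() < pm else b for b in chrom)

# Two 10-bit parents, crossed over at the midpoint as in Fig. 3
child1, child2 = crossover("1111100000", "0000011111", 5)
```

With the midpoint cut, `child1` is `"1111111111"` and `child2` is `"0000000000"`; mutation then perturbs a few bits of each child at random.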

Fig. 3. Representation of the GA crossover and mutation operations

The proposed GA-based reception diversity algorithm used to improve the output SNR can be outlined as follows:

Step 1: Randomly produce pops M-digit-long chromosomes, where M is the number of diversity branches at the receiver, and set t = 0.

Step 2: Decode each chromosome in the random population into its corresponding weighting coefficient vector ω = [ω1, ω2, …, ωM]T with ωi ≥ 0, which is used to maximize the SNR at the receiver.

Step 3: Normalize the weighting coefficient vector ω = [ω1, ω2, …, ωM]T so that the constraint is satisfied.

Step 4: Calculate the fitness value of each weighting vector obtained from the previous step, sort the corresponding chromosomes by their fitness values and determine the best [pops·elite] chromosomes, where elite ∈ [0,1] is a parameter specifying the fraction of pops to be copied directly to the next population without alteration.

Step 5: Update t = t + 1 and regenerate [pops·(1−elite)] new chromosomes using the genetic algorithm operations.

Step 6: Build a new population by combining the [pops·(1−elite)] newly regenerated chromosomes with the best [pops·elite] chromosomes found in P(t−1).

Step 7: Repeat Steps 2 and 3 for the new population.

Step 8: Repeat Step 4.

Step 9: If t equals the predefined number of generations (iterations), stop. Otherwise, go to Step 5.
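Steps 1-9 can be sketched as a compact maximization loop. This is an illustrative sketch only: real-valued chromosomes are used instead of the bit strings of Fig. 3, the normalization in Step 3 is assumed to be to unit norm, and all parameter values (pops, gens, elite, pm, the mutation noise) are hypothetical:

```python
import random

def snr(w, g, sigma_n2=1.0):
    # Objective (3) with Es = 1 and real-valued weights
    return abs(sum(wi * gi for wi, gi in zip(w, g))) ** 2 / (
        sigma_n2 * sum(x * x for x in w))

def normalize(w):
    s = sum(x * x for x in w) ** 0.5
    return [x / s for x in w]

def ga_weights(g, pops=30, gens=100, elite=0.2, pm=0.1):
    """Evolve pops weight vectors, keeping pops*elite elites each generation."""
    pop = [normalize([random.random() for _ in g]) for _ in range(pops)]  # Steps 1-3
    n_elite = int(pops * elite)
    for _ in range(gens):
        pop.sort(key=lambda w: snr(w, g), reverse=True)  # Step 4: rank by fitness
        children = []
        while len(children) < pops - n_elite:            # Step 5: breed new chromosomes
            p1, p2 = random.sample(pop[:pops // 2], 2)   # parents from the fitter half
            cut = random.randrange(1, len(g))            # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [abs(x + random.gauss(0.0, 0.05)) if random.random() < pm else x
                     for x in child]                     # mutation
            children.append(normalize(child))            # Steps 7-8: re-apply constraints
        pop = pop[:n_elite] + children                   # Step 6: elitism
    return max(pop, key=lambda w: snr(w, g))             # Step 9: best of last population
```

For real positive gains g the optimum of (3) is reached at w proportional to g, so the loop can be checked against the known maximum Σ gi².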

To obtain the best performance, the GA parameters must be tuned for the specific problem at hand. In this paper, the convergence performance of the GA used to optimize the space diversity has been tested separately, and the resulting values are tabulated in Table 1. For this algorithm, the total number of generations used is gener = 100.

Table 1.Parameter setting for GA

3.2 The particle swarm optimization (PSO) based technique

The PSO algorithm, introduced by Kennedy and Eberhart in 1995 [14], is inspired by the social behavior of bird flocks and fish schools, and has two principal characteristics: position and velocity. In PSO, each candidate solution is referred to as a 'particle'. Information about the best positions found is exchanged among the particles during every iteration. The best position of each particle (pbest) and the best position of the swarm (gbest) are updated as needed. The velocity of each particle is then adjusted based on the particle's experience [15]. After an adequate number of iterations (epochs), the algorithm converges to the optimal solution of the objective function. The flowchart of the PSO algorithm is shown in Fig. 4. The particles are first initialized and their fitness is evaluated to determine the local best solutions, which are then compared with the global best solution, updated via the conditional operation shown. This global best is used to update the particles' velocities and positions. For an S-dimensional search space, the position of the jth particle is represented as Xj = (Xj1, Xj2, …, XjS). Each particle remembers its previous best position Pbest,j = (Pbest,j1, Pbest,j2, …, Pbest,jS), and the best position over all particles in the population is represented as Gbest = (G1, G2, …, GS). The velocity of the jth particle is represented as Vj = (Vj1, Vj2, …, VjS). In each iteration, the fitness of Pbest,j is compared with that of Gbest; if Pbest,j is better, Gbest is updated as Gbest = Pbest,j, otherwise Gbest is maintained. Pbest,j and Gbest are repeatedly used to update the velocities and positions of all the particles in the population. The two basic equations governing PSO, the velocity update and the position update, are given by [16],

where C1 and C2 are the individual and social learning coefficients, respectively, which are predefined by the user, and r1 and r2 are random numbers uniformly distributed in [0,1].

Fig. 4. Flowchart of the particle swarm optimization algorithm

In this section, the goal is to determine a weighting vector ω that maximizes the objective function γs(ω) in (3). It is beneficial to introduce additional constraints to reduce the search space on which the PSO works. The ω used in this work satisfies the condition 0 < ωi < 1. The steps involved in the PSO algorithm to obtain the optimal weighting vector are as follows:

Step 1: Create a population of N particles ω = [ω1, ω2, …, ωM]T, uniformly distributed over (0, 1), and N velocity vectors initially set to zero, v = [0, 0, …, 0]T.

Step 2: Evaluate each particle's position according to the objective function. Update the particle's best position Pbest if the current position is better than its previous best.

Step 3: Determine the best position among all particles’ positions and name it as Gbest.

Step 4: Update the velocity by

vi(j+1) = vi(j) + c1 r1 (Pbest,i − xi(j)) + c2 r2 (Gbest − xi(j)),

where c1 and c2 are the individual and social learning acceleration coefficients, respectively, r1 and r2 are random numbers uniformly distributed in the range 0 to 1, which introduce a stochastic component to the algorithm, and j is the iteration number.

Step 5: Update the particle positions by

xi(j+1) = xi(j) + vi(j+1).

Step 6: Go to Step 2 until stopping criteria are satisfied.
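The steps above can be sketched as follows. This is an illustrative sketch: the inertia weight w_in and all parameter values are assumptions not specified in the text, and positions are clamped to the (0,1) box of the constraint:

```python
import random

def snr(w, g, sigma_n2=1.0):
    # Objective (3) with Es = 1 and real-valued weights
    return abs(sum(wi * gi for wi, gi in zip(w, g))) ** 2 / (
        sigma_n2 * sum(x * x for x in w))

def pso_weights(g, n=30, iters=100, w_in=0.7, c1=1.5, c2=1.5):
    """Evolve n particles toward the weight vector maximizing (3)."""
    M = len(g)
    X = [[random.random() for _ in range(M)] for _ in range(n)]  # Step 1: positions
    V = [[0.0] * M for _ in range(n)]                            # Step 1: zero velocities
    pbest = [x[:] for x in X]
    pfit = [snr(x, g) for x in X]
    best = max(range(n), key=lambda i: pfit[i])
    gbest, gfit = pbest[best][:], pfit[best]                     # Step 3: global best
    for _ in range(iters):
        for i in range(n):
            for d in range(M):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w_in * V[i][d]                        # Step 4: velocity update
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(1.0, max(1e-6, X[i][d] + V[i][d]))  # Step 5 + box constraint
            f = snr(X[i], g)
            if f > pfit[i]:                                      # Step 2: update pbest
                pbest[i], pfit[i] = X[i][:], f
                if f > gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit                                           # Step 6: after all iters
```

Since (3) is invariant to scaling of ω, the (0,1) box does not exclude the optimum; for real positive gains the swarm converges to a vector proportional to g.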

In this paper, the PSO parameters used are tabulated in Table 2.

Table 2.Parameter setting for PSO

 

4. Effective SNR and BER of conventional diversity techniques

In this section, a brief discussion of receiver diversity in a wireless link is presented. Receiver diversity is a space diversity technique in which the receiver is equipped with multiple antennas. The main idea of receiver diversity is to effectively use the information from all the receive antennas to demodulate the data. In the literature, there are several conventional space diversity techniques, such as selection combining (SC), equal gain combining (EGC) and maximal ratio combining (MRC). In this paper, these three diversity techniques are used to benchmark the proposed PSO- and GA-based techniques. In addition, we assume that the channel is a flat Rayleigh fading multipath channel and that the modulation is BPSK. With SC, the receiver selects the antenna with the highest received signal power and ignores the observations from the other antennas [17]. The chosen receive antenna is the one which gives the maximum individual SNR, written as

γSC = max(γ1, γ2, …, γN),

where γi is the instantaneous bit energy to noise ratio at the ith receive antenna in the presence of the channel gain hi, i.e., γi = |hi|² Eb/N0. The bit error rate (BER), or probability of error Pe, with SC is given by [18],

The effective SNR in the EGC method is obtained by coherently accumulating the channel gains over all receiving chains, which can be expressed as [17], [19],

γEGC = (Eb/N0) (Σ_{i=1..N} |hi|)² / N.

The probability of error Pe with EGC is given in closed form in [20].

For MRC, the effective SNR in the N-receive-antenna case is N times the bit energy to noise ratio of the single-antenna case [17], [19]; instantaneously, the combiner output SNR is the sum of the branch SNRs, γMRC = Σ_{i=1..N} γi.
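The three effective-SNR rules can be sketched together (the function names and the default Eb/N0 value are illustrative assumptions):

```python
def branch_snrs(h, ebno=1.0):
    """Instantaneous per-branch SNRs gamma_i = |h_i|^2 * Eb/N0."""
    return [abs(hi) ** 2 * ebno for hi in h]

def sc_snr(h, ebno=1.0):
    """Selection combining: keep only the strongest branch."""
    return max(branch_snrs(h, ebno))

def egc_snr(h, ebno=1.0):
    """Equal gain combining: co-phased, unit-magnitude weights."""
    return (sum(abs(hi) for hi in h) ** 2) * ebno / len(h)

def mrc_snr(h, ebno=1.0):
    """Maximal ratio combining: effective SNR is the sum of branch SNRs."""
    return sum(branch_snrs(h, ebno))
```

By the Cauchy-Schwarz inequality, `mrc_snr` upper-bounds both `egc_snr` and `sc_snr` for every channel realization, consistent with MRC being optimal under perfect estimation.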

The probability of error Pe with MRC is expressed by [18],

Pe = [(1 − μ)/2]^N Σ_{k=0..N−1} C(N−1+k, k) [(1 + μ)/2]^k,

where μ = sqrt(γ̄c / (1 + γ̄c)) and γ̄c is the average SNR per branch.

 

5. Numerical results and discussions

In this section, Monte-Carlo simulation is employed to evaluate the performance of the proposed diversity combining techniques and compare them with the MRC, EGC and SC methods in two different scenarios: with perfect and with imperfect channel estimation. It is assumed that the average symbol energy is Es = 1 and that the channel gain and AWGN variances are σg² = σn² = 0.5 per dimension. The GA and PSO parameters are set as shown in Table 1 and Table 2, respectively. Generally, PSO parameter setting is problem-dependent; therefore, a set-and-test approach is employed in this paper to obtain suitable PSO parameters.
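The imperfect-estimation model of Section 2 (gi = pi + ei with σe² = σg²(1−ρ²)) can be sampled as follows for such a Monte-Carlo run. The split of the remaining variance ρ²σg² into pi is an assumption, chosen so that the per-dimension variances of pi and ei add up to that of gi:

```python
import random

def channel_and_estimate(M, rho, sigma_g2=0.5):
    """Draw M complex gains g_i and estimates p_i with g_i = p_i + e_i and
    var(e_i) = sigma_g2 * (1 - rho^2) per dimension (Gaussian-error model)."""
    sp = (sigma_g2 * rho ** 2) ** 0.5          # per-dimension std of p_i (assumed)
    se = (sigma_g2 * (1.0 - rho ** 2)) ** 0.5  # per-dimension std of e_i
    g, p = [], []
    for _ in range(M):
        pi = complex(random.gauss(0.0, sp), random.gauss(0.0, sp))
        ei = complex(random.gauss(0.0, se), random.gauss(0.0, se))
        p.append(pi)
        g.append(pi + ei)
    return g, p
```

At ρ = 1 the estimate is exact (g = p), and at ρ = 0 the estimate carries no information about the channel, matching the two extremes simulated below.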

Fig. 5(a) compares the normalized output SNR of the PSO-based combining with MRC, EGC and SC for different numbers of diversity branches (M), i.e., the number of receive antennas, when the channel is perfectly estimated (ρ = 1). As expected, MRC provides the best performance when channel estimation is perfect. However, the PSO-based solution demonstrates almost the same SNR gain as MRC without the need for channel estimation, which reduces the receiver complexity. The comparison between the PSO and MRC methods in an imperfect channel estimation environment (ρ = 0) is illustrated in Fig. 5(b). It can be seen that the PSO-based method outperforms MRC when channel estimation is imperfect, providing the best performance among the conventional MRC, EGC and SC methods in this case. The GA-based solution likewise achieves almost the same SNR gain without the need for channel estimation. The improvement can be attributed to the ability of the PSO algorithm to thoroughly explore the search space and evaluate the objective function in (3) so as to maximize the output SNR.

Fig. 5. (a) Comparison of the normalized output SNR of the PSO-based, MRC, EGC and SC methods when channel estimation is perfect. (b) Comparison of the normalized output SNR of the PSO-based, MRC, EGC and SC methods when channel estimation is imperfect.

In Fig. 6, the PSO and GA algorithms are compared with MRC under different imperfect channel estimation conditions (ρ = 0, 0.25, 0.50 and 0.75). Only MRC is included in this simulation since it shows the best performance among the three conventional schemes. As shown, PSO and GA outperform MRC under all the considered imperfect channel estimation conditions. Considering BPSK modulation and imperfect channel estimation, the error performance of the MRC and PSO-based methods for 1, 2 and 3 diversity branches is illustrated in Fig. 7. It is observable that the bit error rate of the PSO-based technique is considerably lower than that of MRC. For instance, with two-branch diversity, MRC requires approximately 2.5 dB higher SNR than the PSO-based method to achieve BER = 10^-4. In addition, as shown, increasing the number of branches results in improved error performance.

Fig. 6. Comparison of the effective SNR of the PSO- and GA-based methods as well as MRC when channel estimation is imperfect, for different values of ρ.

Fig. 7. Error performance of the PSO-based, GA-based and MRC methods for different numbers of diversity branches.

Fig. 8 compares the convergence of the PSO and GA algorithms used in the diversity method. The number of diversity branches (receive antennas) is assumed to be 8. The PSO-based technique is better than the GA-based one both in achievable fitness score (SNR value) and in convergence speed (fewer generations). However, both are able to find the optimal solution, at which the SNR is maximized, within fewer than 50 iterations: PSO finds the best solution in 30 iterations, whereas GA finds it in 44 generations. This is a considerably low number of iterations, which suggests the methods can suit real-time applications. For comparison, the optimization problem in [21] is somewhat similar, as the idea there is also to optimize weighting coefficients so that the probability of detecting a primary user by the cognitive radio network is maximized; the simulation results in [21] show that the PSO finds the optimal solution in about 50 generations when the problem dimension (number of cognitive radio users) is only 6. In our simulation, the optimization problem is harder, being a 20-dimensional problem when the number of receive antennas is 20. Nevertheless, both algorithms find the optimal solutions within fewer than 50 generations, which is fast enough for the computational complexity of the proposed method to meet real-time requirements. Theoretically, compared with GA, PSO is easier to implement and has fewer parameters to adjust [22], [23]. The information sharing mechanisms of the two also differ significantly: in GAs, chromosomes share information with each other, so the whole population moves like one group toward the optimal area, whereas in PSO only the global (or local) best individual passes information to the others. It is a one-way information sharing mechanism, and the evolution only looks for the best solution. Compared with GA, all the particles tend to converge to the best solution quickly in most cases [24]. The performance of PSO is usually much better than that of GA [25]; the slower convergence of GA can be explained by its dependence on the binary encoding of an inherently continuous domain.

Fig. 8. Comparison of the SNR achieved by the PSO and GA methods over 100 iterations.

 

6. Conclusion

One of the most important issues in reception antenna diversity arises when the channel is imperfectly estimated. This defective estimation yields a combiner weighting vector that deteriorates the SNR and BER performance of the system at the receiver. To address the issue, GA- and PSO-based diversity combining methods are proposed to optimize the weighting vector used to combine the received signals at the receiver. Simulation results confirm that the proposed PSO-based method provides a higher output SNR gain than MRC when channel estimation is imperfect, while in a perfect channel estimation environment the proposed method yields as much SNR gain as MRC.

References

  1. N. Kong, "Performance comparison among conventional selection combining, optimum combining and maximal ratio combining," IEEE International Conference on Communications (ICC), pp. 1-6, June, 2009.
  2. J. Proakis and Masoud Salehi, Digital Communications, 5th edition, McGraw-Hill, November 2007.
  3. R. Duan, R. Janti, M. Elmusrati, "Capacity for spectrum sharing cognitive radios with MRC diversity and imperfect channel information from primary user," in Proc. of IEEE GLOBECOM, pp. 1-5, December, 2010.
  4. Y. Ma, R. Schober, S. Pasupathy, "Effect of channel estimation error on MRC diversity in Rician fading channels," IEEE Transactions on Vehicular Technology, vol. 54, no. 6, pp. 2137-2142, November, 2005.
  5. B. R. Tomiuk, N. C. Beaulieu and A. A. Abu-Dayya, "General forms for maximal ratio diversity with weighting errors," IEEE Transactions on Communications, vol. 47, no. 4, pp. 488-492, April, 1999. https://doi.org/10.1109/26.764914
  6. S. Roy and P. Fortier, "Maximal-ratio combining architectures and performance with channel estimation based on a training sequence," IEEE Transactions on Wireless Communications, vol. 3, no. 4, pp. 1154-1164, July, 2004. https://doi.org/10.1109/TWC.2004.828022
  7. J. S. Thompson, "Antenna array performance with channel estimation errors," in Proc. of ITG Workshop on Smart Antennas, pp. 75-78, March, 2004.
  8. R. Annavajjala and L. B. Milstein, "Performance analysis of optimum and suboptimum selection diversity schemes on Rayleigh fading channel with imperfect channel estimation," IEEE Transactions on Vehicular Technology, vol. 56, no. 3, pp. 1119-1130, May, 2007. https://doi.org/10.1109/TVT.2007.895569
  9. Y. Tokgoz and B. D. Rao, "The effect of imperfect channel estimation on the performance of maximum ratio combining in presence of cochannel interference," IEEE Transactions on Vehicular Technology, vol. 55, no. 5, pp. 1527-1534 , September, 2006. https://doi.org/10.1109/TVT.2006.878556
  10. R. You, H. Li and Y. Bar-Ness, "Diversity combining with imperfect channel estimation," IEEE Transactions on Communications, vol. 53, no. 10, pp. 1655-1662, May, 2005. https://doi.org/10.1109/TCOMM.2005.857156
  11. R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, Wiley, New Jersey, 2004.
  12. D. Ashlock, Evolutionary computation for modeling and optimization, Springer, 2006.
  13. M. Gen, R. Cheng and L. Lin, Network models and optimization: multi-objective genetic algorithm approach, Springer, 2008.
  14. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. of IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, November/December, 1995.
  15. V. Kachitvichyanukul, "Comparison of three evolutionary algorithms: GA, PSO, and DE," Industrial Engineering & Management Systems, vol. 11, no. 3, pp. 215-223, September, 2012. https://doi.org/10.7232/iems.2012.11.3.215
  16. J. J. Liang, A. K. Qin, S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol.10, no. 3, pp. 281-295, June, 2006. https://doi.org/10.1109/TEVC.2005.857610
  17. R. Janaswamy, Radiowave Propagation and Smart Antennas for Wireless Communications, Kluwer Academic Publishers, 2000.
  18. J. R. Barry, E. A. Lee and D. G. Messerschmitt, Digital Communication, Kluwer Academic Publishers, September, 2003.
  19. L. C. Godara, Handbook of Antennas for Wireless Communications, CRC Press, 2002.
  20. Q. Zhang, "Probability of error for equal gain combiners over rayleigh channels: some closed form solutions," IEEE Transactions on Communications, vol. 45, no. 3, pp. 270-273, March, 1997. https://doi.org/10.1109/26.558680
  21. S. Zheng, C. Lou and X. Yang, "Cooperative spectrum sensing using particle swarm optimisation," Electronics Letters, vol. 46, no. 22, pp. 1525-1526, October, 2010. https://doi.org/10.1049/el.2010.2115
  22. Y. Q. Qin, D. B. Sun, N. Li and Y. G. Cen, "Path planning for mobile robot using the particle swarm optimization with mutation operator," in Proc. of the 3rd International Conference on Machine Learning, vol. 4, pp. 2473-2478, August, 2004.
  23. Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proc. of the 1999 Congress on Evolutionary Computation (CEC), vol. 3, 1999.
  24. R. C. Eberhart and Y. Shi, "Comparison between genetic algorithms and particle swarm optimization," in Proc. of the 7th International Conference on Evolutionary Programming, pp. 611-616, 1998.
  25. M. Zubair, M. A. S. Choudhry, A. Naveed and I. M. Qureshi, "Particle swarm with soft decision for multiuser detection of synchronous multicarrier CDMA," IEICE Transactions on Communications, vol. E91-B, no. 5, pp. 1640-1643, May, 2008. https://doi.org/10.1093/ietcom/e91-b.5.1640