MEAN-SQUARED ANALYSIS OF THE PARTIAL-UPDATE NLMS ALGORITHM
Stefan Werner, Marcello L. R. de Campos, and Paulo S. R. Diniz
Abstract 
In this paper, we present a mean-squared convergence analysis for the partial-update normalized least-mean-square (PU-NLMS) algorithm, with closed-form expressions for the case of white input signals. The analysis uses order statistics, and the formulas presented here are more accurate than the ones found in the literature for the PU-NLMS algorithm. Simulation results show excellent agreement with the results predicted by the analysis.
Keywords:
Adaptation algorithms, efficient algorithms, partial update, MSE analysis, order statistics, NLMS algorithm.
Resumo
In this article we present the mean-squared convergence analysis of the normalized least-mean-square algorithm with partial update of the data (PU-NLMS) for the case of white input signals. The analysis uses concepts from the statistics of ordered variables, and the formulas presented proved more accurate than other results found in the literature for the PU-NLMS algorithm. Simulation results show excellent agreement with those predicted by the theory.
Palavras-chave:
Adaptive algorithms, efficient algorithms, partial-update algorithms, order statistics, normalized LMS algorithm.
1. INTRODUCTION
When implementing an adaptive-filtering algorithm, the affordable number of coefficients that can be used will depend on the application in question, the adaptation algorithm, and the hardware chosen for the implementation. With the choice of algorithms ranging from the simple least-mean-square (LMS) algorithm to the more complex recursive least-squares (RLS) algorithm, trade-offs between performance criteria, e.g., computational complexity and convergence rate, have to be made. In certain applications, the use of the RLS algorithm is prohibitive due to its high computational complexity, and in such cases we must resort to simpler algorithms. As an example, consider the acoustic echo-cancellation application, where the adaptive filter may require thousands of coefficients [1]. This large number of filter coefficients may impair even the implementation of low-computational-complexity algorithms, such as the normalized least-mean-square (NLMS) algorithm [1].

As an alternative, instead of reducing the filter order, one may choose to update only part of the filter coefficient vector at each time instant. Such algorithms, referred to as partial-update (PU) algorithms, can reduce computational complexity while performing close to their full-update counterparts in terms of convergence rate and final mean-squared error (MSE). In the literature one can find several variants of the LMS and the NLMS algorithms with partial updates [2]-[10], as well as more computationally complex variants based on the affine projection algorithm [10, 11].

The objective of this paper is to analyze the partial-update NLMS (PU-NLMS) algorithm introduced in [10, 11]. The results from our analysis, which is based on order statistics, yield more accurate bounds on the step size and on the prediction of excess MSE when compared to the results presented in [10, 11]. We also clarify the relationship between the PU-NLMS and MMax-NLMS [5, 6] algorithms, whereby we show that the MMax-NLMS algorithm uses an instantaneous estimate of the step size that achieves the fastest convergence in the MSE.

The organization of the paper is as follows. Section 2 reviews and discusses the PU-NLMS algorithm. Section 3 provides an analysis in the mean-squared sense that is novel for this algorithm and allows new insights into its behavior. In Section 4 we validate our analysis of the PU-NLMS algorithm and compare our results with those available in the literature. Conclusions are given in Section 5.

S. Werner is with the Signal Processing Laboratory, Helsinki University of Technology, Espoo, Finland (E-mail: stefan.werner@hut.fi). M. L. R. de Campos and P. S. R. Diniz are with COPPE/EE, Universidade Federal do Rio de Janeiro, RJ, Brazil (E-mails: {campos,diniz}@lps.ufrj.br).
2. THE PU-NLMS ALGORITHM
This section reviews and discusses the partial-update NLMS (PU-NLMS) algorithm proposed in [10, 11]. The objective in PU adaptation is to derive an algorithm that only updates a fraction of the coefficients of the adaptive filter in each iteration. Let us start by partitioning the input-signal vector $\mathbf{x}(k) \in \mathbb{R}^N$ and the adaptive filter vector $\mathbf{w}(k) \in \mathbb{R}^N$ into $B$ blocks of $L = N/B$ coefficients each,

$$\mathbf{x}(k) = [x(k)\ x(k-1)\ \cdots\ x(k-N+1)]^T = [\mathbf{x}_1^T(k)\ \mathbf{x}_2^T(k)\ \cdots\ \mathbf{x}_B^T(k)]^T \quad (1)$$

$$\mathbf{w}(k) = [w_1(k)\ w_2(k)\ \cdots\ w_N(k)]^T = [\mathbf{w}_1^T(k)\ \mathbf{w}_2^T(k)\ \cdots\ \mathbf{w}_B^T(k)]^T \quad (2)$$

Assuming a sequence of desired signals $\{d(k)\}_{k=1}^{\infty}$, we can write the sequence of output errors $\{e(k)\}_{k=1}^{\infty}$ as

$$e(k) = d(k) - \mathbf{w}^T(k)\mathbf{x}(k)$$
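As a quick illustration of the partitioning in (1)-(2) and of the output error, the sketch below builds the $B$ blocks of $L$ coefficients and computes $e(k)$. All names, dimensions, and data values here are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative partitioning per Eqs. (1)-(2): N coefficients split into
# B blocks of L = N/B coefficients each.
N, B = 8, 4
L = N // B
rng = np.random.default_rng(0)

x = rng.standard_normal(N)   # x(k) = [x(k) x(k-1) ... x(k-N+1)]^T
w = rng.standard_normal(N)   # w(k)
d = 0.3                      # an arbitrary desired sample d(k)

x_blocks = x.reshape(B, L)   # rows are x_1(k), ..., x_B(k)
w_blocks = w.reshape(B, L)   # rows are w_1(k), ..., w_B(k)

e = d - w @ x                # output error e(k) = d(k) - w^T(k) x(k)
```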
Our goal is to find an adaptation algorithm which updates $N_B$ blocks out of the $B$ available blocks. Partitioning the filter into blocks of coefficients, $L > 1$, rather than single coefficients, $L = 1$, may at first sight seem to lack any motivation, but it has been shown that choosing $L > 1$ can reduce the computational load and the amount of memory required for the implementation [8]. However, for a given number of coefficients to be updated, choosing $L = 1$ will result in the fastest convergence rate for white input signals. The reason why a slowdown in convergence speed occurs for $L > 1$ is explained at the end of this section.

Let the $N_B$ blocks of coefficients to be updated at time instant $k$ be specified by an index set $\mathcal{I}_{N_B}(k) = \{i_1(k), \ldots, i_{N_B}(k)\}$ with $\{i_j(k)\}_{j=1}^{N_B}$ taken from the set $\{1, \ldots, B\}$. Note that $\mathcal{I}_{N_B}(k)$ depends on the time instant $k$. As a consequence, the $N_B$ blocks of coefficients to be updated can change between consecutive time instants. A question that naturally arises is "Which $N_B$ blocks should be updated?" The answer to this question can be related to the optimization criterion chosen for the algorithm derivation. In the conventional NLMS algorithm, the new coefficient vector can be obtained as the vector $\mathbf{w}(k+1)$ that minimizes the Euclidean distance $\|\mathbf{w}(k+1) - \mathbf{w}(k)\|^2$ subject to the constraint of zero a posteriori error. Applying the same idea to the partial update of vector $\mathbf{w}(k)$, we take the updated vector $\mathbf{w}(k+1)$ as the vector minimizing the Euclidean distance $\|\mathbf{w}(k+1) - \mathbf{w}(k)\|^2$ subject to the constraint of zero a posteriori error, with the additional constraint of updating only $N_B$ blocks of coefficients, where each block contains $L$ coefficients.
For this purpose, we introduce the $N \times N$ block-selection matrix $\mathbf{A}_{\mathcal{I}_{N_B}(k)}$ having $N_B$ identity matrices $\mathbf{I}_{L \times L}$ on its diagonal and zeros elsewhere. The matrix multiplication $\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{w}(k)$ now removes all the blocks that do not belong to the adaptive filter update. Defining the complementary matrix $\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)} = \mathbf{I} - \mathbf{A}_{\mathcal{I}_{N_B}(k)}$ will give $\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\mathbf{w}(k+1) = \tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\mathbf{w}(k)$, which means that only $N_B$ blocks are updated. With this notation the optimization criterion for the partial update can be formulated as

$$\mathbf{w}(k+1) = \arg\min_{\mathbf{w}} \|\mathbf{w} - \mathbf{w}(k)\|^2 \quad \text{subject to} \quad \begin{cases} \mathbf{x}^T(k)\mathbf{w} = d(k) \\ \tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\left(\mathbf{w} - \mathbf{w}(k)\right) = \mathbf{0} \end{cases} \quad (3)$$

Applying the method of Lagrange multipliers (see Appendix I) gives

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \frac{e(k)\,\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)}{\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2}. \quad (4)$$
We see from (4) that only the blocks of coefficients of $\mathbf{w}(k)$ indicated by the index set $\mathcal{I}_{N_B}(k)$ are updated, whereas the remaining blocks are not changed from iteration $k$ to iteration $k+1$.

We now concentrate on the choice of the index set $\mathcal{I}_{N_B}(k)$. Substituting the recursion in (4) into (3), we get the Euclidean distance as

$$E(k) = \|\mathbf{w}(k+1) - \mathbf{w}(k)\|^2 = \frac{e^2(k)}{\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2}$$

For a given value of $e^2(k)$, we can conclude that $E(k)$ achieves its minimum when $\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2$ is maximized.
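This selection rule is easy to check numerically: an exhaustive search over all $N_B$-out-of-$B$ block choices never beats simply picking the $N_B$ largest-norm blocks. A small sketch with illustrative dimensions (none of the names come from the paper):

```python
import itertools
import numpy as np

# Check of the selection rule: among all NB-out-of-B block choices, taking
# the NB blocks of largest squared norm maximizes ||A x||^2, and therefore
# minimizes E(k) for a given e^2(k).
rng = np.random.default_rng(1)
B, L, NB = 6, 2, 2
x_blocks = rng.standard_normal((B, L))
norms2 = np.sum(x_blocks ** 2, axis=1)        # ||x_i(k)||^2, i = 1, ..., B

best_by_search = max(norms2[list(c)].sum()    # exhaustive search over choices
                     for c in itertools.combinations(range(B), NB))
best_by_sort = np.sort(norms2)[-NB:].sum()    # pick the NB largest norms
```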
Table 1. The PU-NLMS Algorithm

    PU-NLMS ALGORITHM
    for each k
    {
        e(k) = d(k) - x^T(k) w(k)
        z = [ ||x_1(k)||^2, ..., ||x_B(k)||^2 ]
        [y, i] = sort[z]                  % y, i: sorted vector and index vector
        i = i(B : -1 : B - N_b + 1)       % N_b largest-norm blocks
        ||Ax||^2 = sum_{j=1}^{N_b} y(B - j + 1)
        for j = 1 to N_b
        {
            I_temp = [(i(j) - 1)L + 1 : i(j)L]
            w(I_temp) = w(I_temp) + mu e(k) x(I_temp) / ||Ax||^2
        }
    }
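A minimal Python realization of one iteration in the spirit of the Table 1 pseudo-code is sketched below. The function name, the default step size, and the small regularizer `eps` in the denominator are our own choices, not part of the original listing:

```python
import numpy as np

def pu_nlms_step(w, x, d, B, NB, mu=0.5, eps=1e-12):
    # One PU-NLMS iteration: update only the NB blocks (of L = N/B
    # coefficients) whose squared norms are largest.
    N = w.size
    L = N // B
    e = d - x @ w                              # e(k) = d(k) - x^T(k) w(k)
    z = np.sum(x.reshape(B, L) ** 2, axis=1)   # ||x_i(k)||^2 for each block
    idx = np.argsort(z)[-NB:]                  # indices of NB largest-norm blocks
    ax2 = np.sum(z[idx])                       # ||A_{I_NB(k)} x(k)||^2
    w = w.copy()
    for b in idx:                              # update only the selected blocks
        sl = slice(b * L, (b + 1) * L)
        w[sl] += mu * e * x[sl] / (ax2 + eps)
    return w, e
```

In a system-identification loop the step is applied once per input sample; with $N_B = B$ it reduces to the conventional NLMS update.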
In other words, we should update the $N_B$ blocks of coefficients of $\mathbf{w}(k)$ with the largest norms $\|\mathbf{x}_i(k)\|^2$, $i = 1, \ldots, B$. In order to control stability, convergence speed, and error in the mean-squared sense, a step size is required, leading to the following final recursion for the PU-NLMS algorithm:

$$\mathbf{w}(k+1) = \mathbf{w}(k) + \mu\,\frac{e(k)\,\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)}{\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2} \quad (5)$$
The pseudo-code for the PU-NLMS algorithm is shown in Table 1. Note that in a practical implementation, the sorting algorithm and the calculation of the largest-norm blocks need to be implemented with care; see [10].

It was mentioned in the beginning of this section that a slowdown in convergence rate will occur for $L > 1$ in the case of white input signals. The reason is that the deviation from the optimal input-signal direction $\mathbf{x}(k)$ increases with $L$. A geometrical interpretation of the PU-NLMS algorithm update is given in Figure 1 for the case of $N = 3$ filter coefficients and $N_B = 1$ block to be updated, where each block contains $L = 1$ coefficient. In the figure, the component $x(k-2)$ is the element of largest magnitude in $\mathbf{x}(k)$; therefore the matrix $\mathbf{A}_{\mathcal{I}_1(k)}$, which specifies the coefficients to update in $\mathbf{w}(k)$, is equal to $\mathbf{A}_{\mathcal{I}_1(k)} = \mathrm{diag}(0\ 0\ 1)$. The solution $\mathbf{w}_{\perp}$ in Figure 1 is the solution obtained by the NLMS algorithm abiding by the orthogonality principle. The angle $\theta$ shown in Figure 1 denotes the angle between the direction of update $\mathbf{A}_{\mathcal{I}_1(k)}\mathbf{x}(k) = [0\ 0\ x(k-2)]^T$ and the input vector $\mathbf{x}(k)$, and is given from standard vector algebra by the relation

$$\cos\theta = \frac{|x(k-2)|}{\sqrt{|x(k)|^2 + |x(k-1)|^2 + |x(k-2)|^2}}.$$

In the general case, with $N_B$ blocks of $L$ coefficients in the update, the angle $\theta$ in $\mathbb{R}^N$ is given by

$$\cos\theta = \frac{\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|}{\|\mathbf{x}(k)\|}.$$

Finally, we note that for a step size $\mu(k) = \bar{\mu}\,\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2/\|\mathbf{x}(k)\|^2$, the PU-NLMS algorithm in (5) with $L = 1$ becomes identical to the MMax-NLMS algorithm of [5]. For $\bar{\mu} = 1$, the solution is the projection of the solution of the NLMS algorithm with unity step size onto the direction of $\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)$, as illustrated in Figure 2. In the next section, where the PU-NLMS algorithm is analyzed, it will be clear that the choice $\mu = \|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2/\|\mathbf{x}(k)\|^2$ corresponds to the instantaneous estimate of the step size giving the fastest convergence.

Figure 1. Geometric illustration of an update in $\mathbb{R}^3$ using $N_B = 1$ block with $L = 1$ coefficient in the partial update, and with $|x(k-2)| > |x(k-1)| > |x(k)|$; the direction of the update is along the vector $[0\ 0\ x(k-2)]^T$, forming an angle $\theta$ with the input vector $\mathbf{x}(k)$.
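The geometric argument can be illustrated numerically: for a fixed budget of $N_B L$ updated coefficients, the average of $\cos^2\theta = \|\mathbf{A}\mathbf{x}\|^2/\|\mathbf{x}\|^2$ shrinks as the block size $L$ grows, which is the source of the convergence slowdown. A Monte Carlo sketch, where $N$ and the $(N_B, L)$ pairs are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 64, 4000

def mean_cos2(NB, L):
    # average of cos^2(theta) = ||A x||^2 / ||x||^2 over white Gaussian inputs,
    # with A selecting the NB largest-norm blocks out of B = N/L
    B = N // L
    acc = 0.0
    for _ in range(trials):
        x = rng.standard_normal(N)
        z = np.sum(x.reshape(B, L) ** 2, axis=1)
        acc += np.sum(np.sort(z)[-NB:]) / np.sum(z)
    return acc / trials

# fixed budget of NB * L = 8 updated coefficients, increasing block size L
cos2 = [mean_cos2(NB, L) for NB, L in [(8, 1), (4, 2), (2, 4), (1, 8)]]
```

The decreasing sequence stays above the no-selection baseline $N_B L/N$, since sorting always favors the strongest directions.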
Figure 2. The solution $\mathbf{w}(k+1)$ is the PU-NLMS algorithm update obtained with a time-varying step size $\mu(k) = \|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2/\|\mathbf{x}(k)\|^2$, or equivalently, the MMax-NLMS algorithm [5] with unity step size.
3. CONVERGENCE ANALYSIS
In this section, the PU-NLMS algorithm [10, 11] is analyzed in the mean-squared sense. New and more accurate bounds on the step size are provided, together with closed-form formulas for the prediction of the excess MSE. For the analysis we adopt a simplified model for the signals $\mathbf{x}(k)$ and $\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)$. The model, described in detail in Appendix II, uses a simplified distribution for the input-signal vector by employing reduced and countable angular orientations for the excitation which are consistent with the first- and second-order statistics of the actual input-signal vector. The model was used for analyzing the NLMS algorithm [12], and was shown to yield accurate results. The model was also successfully used to analyze the quasi-Newton (QN) [14] and the binormalized data-reusing LMS (BNDR-LMS) [15] algorithms.

It is shown in Appendix II that for the PU-NLMS algorithm to be stable in the mean-squared sense, the step size $\mu$ should be bounded as follows:

$$0 < \mu < \frac{2}{\mathrm{E}\left[r^2(k)/\tilde{r}^2(k)\right]} \approx \frac{2\,\mathrm{E}[\tilde{r}^2(k)]}{N\sigma_x^2}$$
where $r^2(k)$ has the same probability distribution as $\|\mathbf{x}(k)\|^2$, which in this particular case is a sample of an independent process with a chi-square distribution with $N$ degrees of freedom, $\mathrm{E}[r^2(k)] = N\sigma_x^2$, and $\tilde{r}^2(k)$ has the same probability distribution as $\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2$,

$$\mathrm{E}[\tilde{r}^2(k)] = \mathrm{E}\left[\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2\right] = \mathrm{E}\left[\sum_{i \in \mathcal{I}_{N_B}(k)} \|\mathbf{x}_i(k)\|^2\right] \quad (6)$$

where $\{\|\mathbf{x}_i(k)\|^2\}_{i=1}^{B}$ is a sample of a process with a chi-square distribution with $L$ degrees of freedom. Because only the $N_B$ blocks with largest norm are considered in the calculation of $\tilde{r}^2(k)$, we need to evaluate the expression in Equation (6) using order statistics. From Appendix III we get the following formula:

$$\mathrm{E}[\tilde{r}^2(k)] = \sum_{j=B+1-N_B}^{B} \frac{B!}{(j-1)!\,(B-j)!} \int_0^{\infty} y\,F_z^{\,j-1}(y)\left(1 - F_z(y)\right)^{B-j} f_z(y)\,dy \quad (7)$$

where $F_z(y)$ and $f_z(y)$ are the cumulative distribution function and the density function, respectively, of a chi-square variable with $L$ degrees of freedom. For given $B$ and $N_B$, Equation (7) can be evaluated numerically.

In general, the expectation in Equation (6) needs to be evaluated numerically. For the special case of $L = 2$, the chi-square distribution is equal to the exponential distribution, and a closed-form solution can be found (see Lemma 1 in Appendix III):
$$\mathrm{E}[\tilde{r}^2_{L=2}(k)] = \sum_{j=B-N_B+1}^{B} \sum_{k=0}^{j-1} \frac{2(-1)^k\,B!}{k!\,(j-1-k)!\,(B-j)!\,(B+1+k-j)^2} \quad (8)$$
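Both (7) and (8) are straightforward to check numerically. The sketch below evaluates (7) for even $L$ (where the chi-square CDF has the closed Erlang form) with a simple trapezoidal rule, and transcribes the closed form (8); the grid limits, dimensions, and function names are our own choices:

```python
import math
import numpy as np

def chi2_pdf(y, L):
    # density of a chi-square variable with L degrees of freedom
    return y ** (L / 2 - 1) * np.exp(-y / 2) / (2 ** (L / 2) * math.gamma(L / 2))

def chi2_cdf_even(y, L):
    # closed-form chi-square CDF for even L (an Erlang distribution)
    return 1.0 - np.exp(-y / 2) * sum((y / 2) ** m / math.factorial(m)
                                      for m in range(L // 2))

def expected_r2_tilde(B, NB, L):
    # Eq. (7): mean of the sum of the NB largest of B i.i.d. chi2_L variables
    y = np.linspace(1e-9, 200.0, 200001)
    F, f = chi2_cdf_even(y, L), chi2_pdf(y, L)
    total = 0.0
    for j in range(B + 1 - NB, B + 1):
        coef = math.factorial(B) / (math.factorial(j - 1) * math.factorial(B - j))
        g = y * F ** (j - 1) * (1.0 - F) ** (B - j) * f
        total += coef * np.sum((g[1:] + g[:-1]) / 2 * np.diff(y))  # trapezoid rule
    return total

def expected_r2_tilde_L2(B, NB):
    # Eq. (8): closed form for L = 2 (blocks are mean-2 exponentials)
    total = 0.0
    for j in range(B - NB + 1, B + 1):
        for k in range(j):
            total += (2 * (-1) ** k * math.factorial(B)
                      / (math.factorial(k) * math.factorial(j - 1 - k)
                         * math.factorial(B - j) * (B + 1 + k - j) ** 2))
    return total
```

For $N_B = B$ both reduce to $B \cdot L$ (the full mean), and for $L = 2$ the two routes agree, which is a useful consistency check on the order-statistics constants.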
It can also be shown that $N_B L\,\sigma_x^2 \leq \mathrm{E}[\tilde{r}^2(k)] \leq N\sigma_x^2$ for i.i.d. input signals (see Lemma 2 in Appendix III). A more pessimistic bound on the step size, $0 \leq \mu \leq 2N_B L/N = 2N_B/B$, was given in [10] as a consequence of the crude approximation $\mathrm{E}[\tilde{r}^2(k)] \approx N_B L\sigma_x^2$. For $L = 1$, an easily calculated bound that does not require the evaluation of Equation (6) is the one combining the pessimistic bound above with the result for $L = 2$. In general, $\mathrm{E}[\tilde{r}^2(k)]$ can also be estimated recursively during the adaptation.

If the step size is chosen within its stability bounds, the final excess MSE after convergence is given by (see Appendix II)
$$\Delta\xi_{\mathrm{exc}} \approx \frac{N\mu\sigma_n^2\sigma_x^2}{2 - \mu\,\mathrm{E}\left[r^2(k)/\tilde{r}^2(k)\right]}\,\mathrm{E}\left[\frac{1}{\tilde{r}^2(k)}\right] \approx \frac{N\mu\sigma_n^2\sigma_x^2}{2\,\mathrm{E}[\tilde{r}^2(k)] - \mu N\sigma_x^2} \quad (9)$$
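To make the right-hand side of Eq. (9) concrete, the sketch below plugs in a Monte Carlo estimate of $\mathrm{E}[\tilde{r}^2(k)]$ for one illustrative configuration; $N$, $N_b$, $\mu$, and the noise power are our own choices, not values prescribed by the paper:

```python
import numpy as np

# Illustrative configuration: N = 64, L = 1, N_b = 8 blocks in the update.
rng = np.random.default_rng(4)
N, B, L, NB = 64, 64, 1, 8
sigma_x2, sigma_n2, mu = 1.0, 1e-3, 0.1

# Monte Carlo estimate of E[r2_tilde(k)]: mean of the sum of the NB largest
# squared block norms of a white Gaussian input vector
z = np.sum(rng.standard_normal((20000, B, L)) ** 2, axis=2)
E_r2_tilde = np.mean(np.sum(np.sort(z, axis=1)[:, -NB:], axis=1))

# Excess MSE predicted by the right-hand side of Eq. (9)
excess_mse = N * mu * sigma_n2 * sigma_x2 / (2 * E_r2_tilde - mu * N * sigma_x2)
```

With these values the prediction stays well below the noise floor $\sigma_n^2$, as expected for a small step size.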
Table 2. Summary of the PU-NLMS Algorithm Analysis

Stability range: $0 < \mu < \dfrac{2\,\mathrm{E}[\tilde{r}^2(k)]}{N\sigma_x^2}$, where $\mathrm{E}[\tilde{r}^2(k)]$ is given by Eq. (6)

Excess MSE: $\Delta\xi_{\mathrm{exc}} \approx \dfrac{N\mu\sigma_n^2\sigma_x^2}{2\,\mathrm{E}[\tilde{r}^2(k)] - \mu N\sigma_x^2}$

Maximum convergence speed: $\mu = \dfrac{\mathrm{E}[\tilde{r}^2(k)]}{N\sigma_x^2}$

Recursive estimation of $\mathrm{E}[\tilde{r}^2(k)]$: $\tilde{r}^2(k) = \alpha\,\tilde{r}^2(k-1) + (1-\alpha)\,\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2$, with $0.9 < \alpha < 1$

Easily calculated bounds:
$L = 1$: $0 < \mu < \max\left[N_b/B,\ \text{Eq. (8)}\right]$
$L = 2$: $0 < \mu < \text{Eq. (8)}$
$L \geq 3$: $0 < \mu < N_b/B$
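The recursive estimator in Table 2 can be sketched as below; $\alpha$, the dimensions, and the iteration count are illustrative choices of ours:

```python
import numpy as np

# Recursive estimator of E[r2_tilde(k)] from Table 2:
# r2_hat <- alpha * r2_hat + (1 - alpha) * ||A_{I_NB(k)} x(k)||^2
rng = np.random.default_rng(5)
N, B, L, NB, alpha = 64, 64, 1, 8, 0.95

r2_hat = 0.0
for _ in range(3000):
    x = rng.standard_normal(N)
    z = np.sum(x.reshape(B, L) ** 2, axis=1)       # squared block norms
    r2_hat = alpha * r2_hat + (1 - alpha) * np.sum(np.sort(z)[-NB:])
```

After a short burn-in the estimate hovers around the order-statistics value (roughly 31.5 for this configuration), so it can replace the numerical integration of Eq. (7) at run time.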
When $N_B L = N$ (full update), Equation (9) is consistent with the results obtained for the conventional NLMS algorithm in [12].

By observing the time evolution of the excess MSE in Equation (16) in Appendix II, one can conclude that the maximum convergence speed is obtained for $\mu = \mathrm{E}\left[r^2(k)/\tilde{r}^2(k)\right]^{-1} \approx \mathrm{E}[\tilde{r}^2(k)]/(N\sigma_x^2)$. Use of larger step sizes will neither increase the convergence rate nor decrease the misadjustment. In other words, in practice step sizes above $\mu_{\max}/2$ will not be used. The same can be said about the NLMS algorithm, for which the maximum value of the step size guaranteeing stability is 2, but only values smaller than or equal to 1 are used.

Table 2 summarizes the results of the analysis of the PU-NLMS algorithm.
4. SIMULATION RESULTS
In this section, our analysis of the PU-NLMS algorithm is validated using a system-identification setup. The number of coefficients in the plant was chosen as $N = 64$, and the input signal was zero-mean Gaussian noise with $\sigma_x^2 = 1$. The signal-to-noise ratio (SNR) was set to 30 dB.

Figure 3 shows the learning curves for the case of block size $L = 1$ using $N_b = 4$, $N_b = 8$, $N_b = 16$, and $N_b = 64$ coefficients in the partial update. The curves were obtained by averaging 500 trials. The step size for each value of $N_b$ was chosen such that convergence to the same level of misadjustment was achieved. The corresponding theoretical learning curves, obtained from evaluating Equation (16) in Appendix II, were also plotted. As can be seen from the figure, the theoretical curves are very close to the simulations.

Figure 3. Learning curves for the PU-NLMS algorithm for $N = 64$, $L = 1$ coefficient in each block, $N_b = 4$, $N_b = 8$, $N_b = 16$, and $N_b = 64$; SNR = 30 dB.

In Figure 4, the number of coefficients in the partial update is kept fixed, $N_b L = 8$, and the number of coefficients in the $N_b$ blocks is varied. As can be seen from the figure, for a given number $N_b L$ of coefficients in the update, the convergence speed decreases with increasing $L$. Figure 5 repeats the previous experiment using the recursive estimation of $\mathrm{E}[\tilde{r}^2(k)]$ in Table 2. The resulting curves are very close to the theoretical ones, validating the use of the recursive formula in a practical scenario where only limited knowledge of $\mathrm{E}[\tilde{r}^2(k)]$ can be assumed.

Figure 6 shows the excess MSE as a function of $\mu$, ranging from $0.05\mu_{\max}$ to $0.8\mu_{\max}$, for different values of $N_b$, where $\mu_{\max}$ is given by Equation (18) in Appendix II. Note that the axis is normalized with respect to the maximum step size $\mu_{\max}$, which is different for each value of $N_b$. The quantity $\mathrm{E}[\tilde{r}^2(k)]$ needed for the calculation of $\mu_{\max}$ was obtained through numerical integration. For $N_b = 4$, $N_b = 8$, and $N_b = 16$ the corresponding values were $\mathrm{E}[\tilde{r}^2(k)] = 20.04$, $\mathrm{E}[\tilde{r}^2(k)] = 31.484$, and $\mathrm{E}[\tilde{r}^2(k)] = 45.794$, respectively. As can be seen from Figure 6, the theoretical results are very close to the simulations within the range of step sizes considered. Using step sizes larger than $0.8\mu_{\max}$ resulted in poor accuracy or caused divergence. This is expected due to the approximations made in the analysis. However, only step sizes in the range $\mu \leq 0.5\mu_{\max}$ are of practical interest, because larger values will neither increase convergence speed nor decrease misadjustment. This fact is illustrated in Figure 7, where the theoretical convergence curves were plotted for different values of $\mu$ using $N_b = 8$ and $N = 64$. Therefore, we may state that our theoretical analysis is able to predict very accurately the excess MSE for the whole range of practical step sizes.

In Figure 8 we compare our results (solid lines) with those provided by [10] (dashed lines). As seen from Figure 8, the results presented in [10] are not accurate even for reasonably high values of $N_b$, whereas Figure 6 shows that our analysis is accurate for a large range of $N_b$. This comes from the fact that in [10] order statistics were not applied in the analysis, resulting in poor estimates of $\mathrm{E}\left[\|\mathbf{A}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k)\|^2\right]$ for most values of $N_b < B$.
Figure 4. Learning curves for the PU-NLMS algorithm for $N = 64$ and $N_b L = 8$ coefficients in the partial update; $N_b = 1$, $N_b = 2$, $N_b = 4$, and $N_b = 8$; SNR = 30 dB.
Figure 5. Learning curves for the PU-NLMS algorithm for $N = 64$ and $N_b L = 8$ coefficients in the partial update, using recursive estimation of $\mathrm{E}[\tilde{r}^2(k)]$ (see Table 2) with $\alpha = 0.95$; $N_b = 1$, $N_b = 2$, $N_b = 4$, and $N_b = 8$; SNR = 30 dB.
5. CONCLUSIONS
This paper studied normalized partial-update adaptation algorithms. A convergence analysis for the conventional partial-update NLMS (PU-NLMS) algorithm was presented, which gave further insight into the algorithm in terms of stability, transient, and steady-state performance. The analysis was validated through simulations showing excellent agreement. New stability bounds were given for the step size that controls the stability, convergence speed, and final excess MSE of the PU-NLMS algorithm. It was shown that the step size giving the fastest convergence can be related to the time-varying step size of the MMax-NLMS algorithm. These results extend and improve in accuracy previous results reported in the literature. The excellent agreement between the theory and the simulations presented here for the PU-NLMS algorithm has advanced significantly the study of order-statistics-based adaptive filtering algorithms.
Figure 6. Excess MSE for the PU-NLMS algorithm versus the step size $\mu$ (normalized by $\mu_{\max}$) for $N = 64$, $L = 1$ coefficient in each block, and $N_b = 4$, $N_b = 8$, and $N_b = 32$ blocks; SNR = 30 dB.
Figure 7. Theoretical learning curves for different choices of step size ($0.4\mu_{\max}$, $0.5\mu_{\max}$, $0.6\mu_{\max}$, and $0.8\mu_{\max}$) in the PU-NLMS algorithm for $N = 64$, $L = 1$, and $N_b = 4$; SNR = 30 dB.
APPENDIX I
The optimization problem in (3) can be solved by the method of Lagrange multipliers, with the following objective function:

$$f(\mathbf{w}, \lambda_1, \boldsymbol{\lambda}_2) = \|\mathbf{w} - \mathbf{w}(k)\|^2 + \lambda_1\left(d(k) - \mathbf{x}^T(k)\mathbf{w}\right) + \boldsymbol{\lambda}_2^T\,\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\left(\mathbf{w} - \mathbf{w}(k)\right) \quad (10)$$

where $\lambda_1$ is a scalar and $\boldsymbol{\lambda}_2$ is an $N \times 1$ vector. Setting the derivative of (10) with respect to the elements of $\mathbf{w}$ equal to zero and solving for the new coefficient vector gives us

$$\mathbf{w} = \mathbf{w}(k) + \frac{\lambda_1}{2}\,\mathbf{x}(k) - \tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\,\frac{\boldsymbol{\lambda}_2}{2} \quad (11)$$

In order to solve for the constraints, multiply Equation (11) by $\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}$ and subtract $\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\mathbf{w}(k)$ from both sides, i.e.,

$$\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\left(\mathbf{w} - \mathbf{w}(k)\right) = \mathbf{0} = \frac{\lambda_1}{2}\,\tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}\mathbf{x}(k) - \tilde{\mathbf{A}}_{\mathcal{I}_{N_B}(k)}^2\,\frac{\boldsymbol{\lambda}_2}{2}$$