A ROBUST ITERATIVE ALGORITHM FOR RECONSTRUCTION FROM REDUNDANT FILTER BANKS

R. Bernardini, R. Rinaldo
Università di Udine, Dipartimento di Ingegneria Elettrica, Gestionale e Meccanica,
Via delle Scienze 208, Udine, Italy.

ABSTRACT

Because of their interactive nature, multimedia streams must be sent over UDP, and suitable countermeasures for minimizing the effect of data loss have to be taken. Among the various proposed techniques, coding with redundant filter banks has been proposed as a means to add robustness to the data stream. In order to avoid the computationally expensive matrix inversion necessary for signal reconstruction in the presence of packet loss, an iterative reconstruction algorithm can be used. Unfortunately, the classical algorithm for iterative reconstruction does not necessarily converge if too many coefficients are lost. In this paper we propose a modified version which converges even in the case of excessive losses.

1. INTRODUCTION

Because of their interactive nature, multimedia streams must be sent over UDP, and suitable countermeasures for minimizing the effect of data loss have to be taken. Most of the proposed approaches make the stream more robust by adding some redundancy to it. An effective way to achieve such a goal in coding applications is to use a redundant filter bank. The problem of reconstructing the original signal from the filter bank output can be easily solved by recognizing that the action of an oversampled filter bank can be interpreted in terms of frames [1]. More precisely, general theorems of frame theory state that the original signal can be obtained by linearly combining a set of suitable signals (known as the dual frame) using as coefficients the outputs of the analysis filter bank.
It can be shown that the dual frame corresponding to an oversampled filter bank has a filter bank structure [1]. If some coefficients are lost, one can reconstruct the signal by using the dual of the "subframe" obtained by deleting from the original frame the functions corresponding to the lost samples. However, the construction of the new dual frame requires a computationally expensive matrix inversion. Because of this, an iterative algorithm for frame reconstruction has been proposed in [2]. Unfortunately, the algorithm of [2] converges only if the received coefficients still correspond to analysis with a complete set of functions, but such a hypothesis is not necessarily satisfied in the case of random losses. In this paper we propose a modified version of the algorithm of [2] which converges even if the subframe is not a frame anymore. It is shown that the modified version converges with the same velocity as the original one.

2. OVERSAMPLED FILTER BANKS AND FRAMES

In the following we will use the notation for 1D signals, but the results are also valid for multidimensional signals and filter banks. A possible way to achieve some resilience against packet losses is to code signal x by means of a redundant filter bank

    y_c[n] = Σ_m x[m] h_c[Mn − m]    (1)

where M is the sampling factor, h_c, c = 1, ..., N, is the impulse response of the c-th channel and N > M is the number of channels. One possible way to obtain a redundant filter bank is by oversampling, but other designs are certainly possible [1]. Eq. (1) can be interpreted as a scalar product between the input sequence and the analysis function φ_k[m] = h_c[Mn − m], with k = c + nN. In operator form, we can write y_k = (Fx)_k ≜ ⟨x, φ_k⟩.
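In finite dimensions, Eq. (1) is simply a per-channel convolution followed by downsampling by M. The sketch below illustrates this with numpy; the specific filters, the input length, and the zero-padding boundary handling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def analysis_bank(x, filters, M):
    """Redundant analysis filter bank of Eq. (1):
    y_c[n] = sum_m x[m] h_c[M n - m], one output sequence per channel."""
    out = []
    for h in filters:
        # full linear convolution: conv[i] = sum_m x[m] h[i - m] ...
        y = np.convolve(x, h)
        # ... then keep every M-th sample, so y_c[n] = conv[M n]
        out.append(y[::M])
    return out

# toy example: N = 3 channels, sampling factor M = 2 (redundancy 3/2)
x = np.array([1.0, 2.0, 3.0, 4.0])
filters = [np.array([1.0, 1.0]),
           np.array([1.0, -1.0]),
           np.array([0.5, 0.5, 0.5])]
y = analysis_bank(x, filters, M=2)
```

Since N > M, the bank produces more coefficients than input samples, which is exactly the redundancy exploited for loss resilience.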
In the case of a redundant filter bank, the functions φ_k constitute a frame, i.e., the set Φ = {φ_k}_k satisfies

    A ‖x‖² ≤ Σ_k |⟨x, φ_k⟩|² ≤ B ‖x‖²    (2)

for some constants A, B > 0 called the frame bounds. In particular, the first inequality in (2) guarantees stable reconstruction of the input from y_k. The problem of the reconstruction of x from y_k is the infinite-dimensional counterpart of an overdetermined linear system y = Fx, where F is a full-rank N × M matrix with N > M. In the filter bank context, the counterparts of y, x and F are, respectively, the sequence of received coefficients ŷ_c[n], the input signal x and the linear map F associated with the analysis filter bank. As a matter of fact, for finite-length inputs and FIR filters, the case we will consider in the following, one can express the filter bank operation as a finite-dimension matrix-vector product [3]. It is well known that the solution x̂ of an overdetermined system of equations can be obtained as x̂ = F†y, where F† = (F^t F)^{−1} F^t is the pseudo-inverse of F [4]. Such a solution has the property of being the minimum-norm input best describing y, i.e., ‖Fx − y‖ is minimum even if y does not belong to the range Im(F) of the matrix F, because, for instance, of quantization of the coefficients.

ICASSP 2004, 0-7803-8484-9/04/$20.00 ©2004 IEEE
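The pseudo-inverse solution is easy to check numerically. A minimal sketch, assuming a random tall full-rank matrix and a small perturbation standing in for quantization noise (sizes and noise level are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 4))                  # overdetermined: 6 equations, 4 unknowns
x = rng.standard_normal(4)
y_hat = F @ x + 0.01 * rng.standard_normal(6)    # coefficients perturbed by "quantization"

# pseudo-inverse solution  x_hat = (F^t F)^{-1} F^t y_hat
x_hat = np.linalg.solve(F.T @ F, F.T @ y_hat)
```

Here x_hat coincides with np.linalg.pinv(F) @ y_hat and with the least-squares solution, and it minimizes ‖F x − ŷ‖ even though the perturbed ŷ no longer lies in Im(F).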
In operator form, the general reconstruction formula uses the pseudo-inverse F† of F, namely

    x̂ = F† ŷ = (F^* F)^{−1} F^* ŷ = (F^* F)^{−1} Σ_k φ_k ŷ_k = Σ_k φ̃_k ŷ_k.    (3)

The reconstructed signal is obtained by linearly combining, with coefficients ŷ_k, the functions φ̃_k = (F^* F)^{−1} φ_k, which are the infinite-dimensional counterpart of the columns of F†. The set Φ̃ = {φ̃_k}_k is called the dual frame of Φ = {φ_k}_k.

3. PROBLEM STATEMENT AND SOLUTION

In this section, in view of the fact that we are concerned with the derivation of an iterative algorithm for the reconstruction of the input given the received coefficients, we will implicitly assume that the input has finite length and that the analysis filters are FIR, so that the operator F is a finite-dimension matrix. In [2], a recursive algorithm for the computation of the pseudo-inverse solution (3) is presented. Starting from a frame Φ = {φ_k}_k with bounds A, B, one can write

    x = (2/(A+B)) F^* F x + (I_d − (2/(A+B)) F^* F) x = (2/(A+B)) F^* F x + R x.

It is possible to show that ‖R‖ < 1 [2], so that we have

    x = (I_d − R)^{−1} (2/(A+B)) F^* F x = (2/(A+B)) Σ_k R^k F^* F x.

In general, for a generic ŷ resulting, for instance, from quantization of the received coefficients y = Fx, it is immediate to verify that [2]

    x̂ = (2/(A+B)) (I_d − R)^{−1} F^* ŷ = lim_{N→∞} (2/(A+B)) Σ_{k=0}^N R^k F^* ŷ    (4)

is the pseudo-inverse solution (3). By simple manipulations of (4), one can express

    x̂^{(N)} = (2/(A+B)) Σ_{k=0}^N R^k F^* ŷ

as a function of x̂^{(N−1)} and derive an iterative algorithm. In Multiple Description coding, some of the coefficients in ŷ can be lost during transmission.
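Before losses enter the picture, the recursion of [2] reduces to x̂^{(N)} = R x̂^{(N−1)} + (2/(A+B)) F^* ŷ. A minimal sketch, using a synthetic finite-dimensional frame built with prescribed singular values so that the bounds A and B are known exactly (this construction is our own, chosen only to make ‖R‖ easy to verify):

```python
import numpy as np

rng = np.random.default_rng(1)
# frame operator with prescribed singular values: frame bounds A = 1, B = 4
U, _ = np.linalg.qr(rng.standard_normal((8, 5)))   # orthonormal columns
F = U * np.array([1.0, 1.2, 1.5, 1.8, 2.0])        # F^t F = diag of squared values
A, B = 1.0, 4.0
mu = 2.0 / (A + B)
R = np.eye(5) - mu * (F.T @ F)                     # ||R|| = (B - A)/(A + B) = 0.6 < 1

y = F @ rng.standard_normal(5)                     # analysis coefficients
x_hat = np.zeros(5)
for _ in range(100):
    # x^(N) = R x^(N-1) + mu F^* y, the recursion behind Eq. (4)
    x_hat = R @ x_hat + mu * (F.T @ y)
```

After 100 steps the geometric factor ‖R‖^N is negligible and x_hat agrees with the pseudo-inverse solution.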
Denoting by ŷ_I the set of received coefficients, we can pretend that they are derived from the analysis of the input x with the operator F_I, obtained by deleting the rows of matrix F with indices in the complement of set I. It may happen, however, that the rows of the resulting matrix F_I are not a frame anymore, i.e., that the rows span a proper subspace of the input space. In this case, the pseudo-inverse solution requires finding a minimum-norm input vector x̂, belonging to the space generated by the rows of F_I, such that ‖F_I x − ŷ_I‖ is minimum. It is easy to show, by direct calculation, that this solution can be obtained as

    x̂ = (2/(A+B)) (I_d − R)† F_I^* ŷ,    R = I_d − (2/(A+B)) F_I^* F_I.    (5)

Note that, in the above expression, the constants A and B can be those of the original frame, before row cancellation in F. It is not difficult to show that ‖R‖ < 1 if the rows of F_I still constitute a frame, possibly with bounds different from A and B, while ‖R‖ ≤ 1 in general. In particular, if the rows of F_I span a proper subset S_I of the input space, we have Rx = x for any input belonging to the space orthogonal to S_I, and ‖R‖ = 1. Unfortunately, I_d − R is not invertible in this case and we cannot use the expansion (4). In the following, we present the main contribution of the paper, i.e., an iterative algorithm for the computation of (I_d − R)† in (5), when ‖R‖ = 1.

4. AN ITERATIVE ALGORITHM FOR (I − R)†

In this section we show an algorithm for computing (I_d − R)†, where R is symmetric and positive definite, which converges even if R has unitary eigenvalues. The first step will be to find a suitable decomposition of R which separates the "good" eigenvalues (whose absolute value is less than one) from the "bad" ones.

Lemma 1.
If R is symmetric, positive definite and ‖R‖ ≤ 1, then there exist (and are unique) two matrices G and H such that

    R = G + H,                   (6a)
    G^n = G,  n > 0,             (6b)
    lim_{n→∞} H^n = 0,           (6c)
    GH = HG = 0.                 (6d)

Before proving Lemma 1, observe that it is easy to verify by induction that if G and H satisfy (6), then

    R^k = G + H^k,  k ≥ 1.    (7)

Proof. Since R is symmetric, one can find a unitary matrix U such that

    R = U^t D U    (8)

where D is diagonal. Define

    (D_1)_{ii}  = 0 if D_{ii} < 1,     1 if D_{ii} = 1;
    (D_{<1})_{ii} = D_{ii} if D_{ii} < 1,  0 if D_{ii} = 1.    (9)

It is easy to verify that G = U^t D_1 U and H = U^t D_{<1} U satisfy conditions (6). In order to show that G and H are unique, observe that from (7) it follows that lim_{k→∞} R^k = G, which implies that G and H = R − G are unique.

Property 1. If R, G and H satisfy (6) as in Lemma 1, then

    (I_d − R)† = Σ_{k=0}^∞ H^k.    (10)

Proof. By exploiting (8) one can write I_d − R = U^t U − U^t D U = U^t (I_d − D) U. It can be easily verified that

    (I_d − R)† = U^t (I_d − D)† U    (11)

where ((I_d − D)†)_{ii} = 0 if D_{ii} = 1 and ((I_d − D)†)_{ii} = 1/(1 − D_{ii}) if D_{ii} < 1. It follows that (I_d − D)† = Σ_{k=0}^∞ (D_{<1})^k. By exploiting such a result in (11), one obtains (I_d − R)† = U^t Σ_{k=0}^∞ (D_{<1})^k U = Σ_{k=0}^∞ H^k.

The final step is to transform equation (10) into an iterative algorithm for the computation of x = (I_d − R)† y. A possible implementation is described by the equations

    a_0 = y,   b_0 = y                        (start)        (12a)
    a_N = R a_{N−1},   b_N = b_{N−1} + a_N    (iteration)    (12b)
    x_N = b_N − N a_N                         (end)          (12c)

Claim 1. If a_N, b_N, y and x_N are as in (12), then lim_{N→∞} x_N = (I_d − R)† y.

Proof.
It is easy to show by induction that a_N = R^N y and b_N = Σ_{k=0}^N R^k y. By exploiting (7), one obtains

    b_N = y + Σ_{k=1}^N R^k y = N G y + Σ_{k=0}^N H^k y,    (13a)
    a_N = R^N y = (G + H^N) y = G y + H^N y.                (13b)

From (12c) and (13) one obtains x_N = Σ_{k=0}^N H^k y − N H^N y. Since ‖H‖ < 1, it follows that lim_{N→∞} x_N = (I_d − R)† y.

By the usual techniques it is possible to find the following upper bound on the approximation error:

    ‖x − x_N‖ ≤ ‖H‖^N ( N + ‖H‖/(1 − ‖H‖) ) ‖y‖.    (14)

Equation (14) shows that the convergence is controlled by the largest eigenvalue of H and is comparable to the convergence of the algorithm of [2].

4.1. Implementation remarks

Note that algorithm (12) is especially suited to the case of a sparse R, since it requires only matrix-vector products, which can be efficiently computed when R is sparse. Moreover, if the frame has been obtained by means of an oversampled filter bank, the operator R = I_d − (2/(A+B)) F_I^* F_I can easily be implemented by means of the original filter bank by observing that

    F_I^* y = Σ_{k∈I} φ_k y_k = Σ_k φ_k (χ_I y)_k = F^* (χ_I y)    (15)

where (χ_I y)_k = y_k if k ∈ I and (χ_I y)_k = 0 if k ∉ I. Remembering that the operator F^* corresponds to a synthesis filter bank having the (time-reversed) frame functions as impulse responses (note that it is not the dual bank), equation (15) shows that F_I^* can be computed by setting to zero the components of y corresponding to the lost coefficients and feeding the result into a synthesis bank. Similarly, F_I can be implemented by running the original analysis filter bank and discarding the values corresponding to the lost coefficients. Overall, the operator R can be implemented as a concatenation of a synthesis and an analysis filter bank.
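Putting Sections 3, 4 and 4.1 together, the sketch below applies R = I_d − (2/(A+B)) F_I^* F_I purely through the original analysis/synthesis pair plus the zeroing mask χ_I, and runs iteration (12) on the vector (2/(A+B)) F_I^* ŷ. The tiny frame, the frame bounds and the loss pattern are our own toy choices, not the paper's image setup. Even though enough rows are deleted that ‖R‖ = 1, the iterates still converge to the pseudo-inverse solution (5).

```python
import numpy as np

# Toy frame of 5 analysis functions in R^3 (rows of F); F^t F = diag(2, 2, 1),
# so the frame bounds are A = 1, B = 2 and mu = 2/(A + B) = 2/3.
F = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
mu = 2.0 / (1.0 + 2.0)

# chi_I: coefficients 2 and 4 are lost; the surviving rows span only a
# proper subspace of R^3, so F_I is not a frame anymore and ||R|| = 1.
kept = np.array([1., 1., 0., 1., 0.])
F_I = F[kept.astype(bool), :]

def apply_R(x):
    """R x = x - mu F_I^* F_I x via Eq. (15): analyze with the ORIGINAL bank (F),
    zero the lost coefficients (chi_I), synthesize with the frame functions (F^*)."""
    return x - mu * (F.T @ (kept * (F @ x)))

x_true = np.array([1.0, 2.0, 3.0])
received = kept * (F @ x_true)           # lost coefficients replaced by zeros

# iteration (12) applied to y = mu F_I^* y_hat = mu F^* (chi_I y_hat)
a = mu * (F.T @ received)                # a_0 = y
b = a.copy()                             # b_0 = y
n_iter = 60
for _ in range(n_iter):
    a = apply_R(a)                       # a_N = R a_{N-1}
    b = b + a                            # b_N = b_{N-1} + a_N
x_hat = b - n_iter * a                   # x_N = b_N - N a_N
```

The result recovers the part of x_true lying in the span of the surviving rows ([1, 2, 0] here: the third coordinate is unrecoverable) and matches the direct pseudo-inverse F_I† applied to the received coefficients; no matrix is ever inverted or even formed explicitly.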
5. EXPERIMENTAL RESULTS

To illustrate the application of the theory presented above, we consider in this section a Multiple Description coding scheme for images. The filter bank has five channels, followed by factor-2 subsampling in the row and column directions. The first four filters have impulse responses h_{ij}[m, n] = δ[m − i] δ[n − j], i = 0, 1, j = 0, 1, while the fifth filter is low-pass and obtained as the separable extension of the well-known Daubechies wavelet filter with length 4 [2]. Thus, four subimages are obtained, with dimension 1/4 of that of the original image and corresponding to its spatial polyphase components. An additional subimage is obtained by low-pass filtering and subsampling by a factor 2 in the row and column directions. The coding scheme has a redundancy of 5/4. It is possible to show that the filter bank corresponds to a frame expansion with bounds A = 1, B = 2.

Starting from a 512 × 512 input image, each of the five 256 × 256 subimages is divided into slices of 8 × 64 pixels, which are sent as packets over the network. Each of the packets is lost independently with probability P_e. At the receiver, the iterative algorithm outlined in Section 4 is applied to the received coefficients to reconstruct the image. Fig. 1.a shows the image "Lenna" after one step of the reconstruction algorithm with a loss probability P_e = 0.02. The areas corresponding to lost packets are clearly visible. In the same figure, we show the reconstructed image after 300 iterations. In Fig. 1.c we show a detail of the reconstructed image after 300 iterations. The detail is positioned below the chin, where packet loss had occurred.

Fig. 1. (a) Recovered image "Lenna" after one step (P_e = 0.02); (b) recovered image "Lenna" after 300 steps (P_e = 0.02); (c) a detail of the reconstructed image after 300 iterations (packet loss positioned below the chin, P_e = 0.02).

Fig. 2. PSNR vs. number of iterations of the iterative algorithm for P_e = 0.01 (solid line) and P_e = 0.02 (dotted line).

Fig. 2 reports the average PSNR, over 10 independent experiments, between the original and reconstructed images as a function of the number of iterations for two values of the probability of error, i.e., P_e = 0.02 and P_e = 0.01. It is seen that the convergence is quite slow and, to have a reasonable complexity, it is important to apply the reconstruction only to image regions affected by losses.

6. CONCLUSIONS

A robust iterative algorithm for reconstruction from redundant filter banks has been presented. The advantage of the presented algorithm, with respect to the usual iterative algorithm for frame reconstruction, is that it converges to the least-squares solution even in the case of excessive losses. It has been shown that the proposed algorithm converges with the same velocity as the original one.

7. REFERENCES

[1] Zoran Cvetković and Martin Vetterli, "Oversampled filter banks," IEEE Transactions on Signal Processing, vol. 46, no. 5, pp. 1245–1255, May 1998.
[2] Ingrid Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
[3] M. Vetterli and J. Kovačević, Wavelets and Subband Coding, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[4] G. Strang, Linear Algebra and its Applications, Academic Press, New York, 1980.