A robust variational approach for simultaneous smoothing and estimation of DTI
Meizhu Liu a,⁎, Baba C. Vemuri b, Rachid Deriche c

a Siemens Corporate Research & Technology, Princeton, NJ 08540, USA
b Dept. of CISE, University of Florida, Gainesville, FL 32611, USA
c Project-Team ATHENA, Inria Sophia Antipolis - Méditerranée, 06902, France

Article history: Accepted 7 November 2012. Available online 17 November 2012.

Keywords: DTI estimation; Variational principle; Non-local means; Total Kullback–Leibler divergence; Diffusion MRI; Limited memory quasi-Newton method

Abstract

Estimating diffusion tensors is an essential step in many applications, such as diffusion tensor image (DTI) registration, segmentation and fiber tractography. Most of the methods proposed in the literature for this task are not simultaneously statistically robust and feature preserving. In this paper, we propose a novel and robust variational framework for simultaneous smoothing and estimation of diffusion tensors from diffusion MRI. Our variational principle makes use of the recently introduced total Kullback–Leibler (tKL) divergence for DTI regularization. tKL is a statistically robust dissimilarity measure for diffusion tensors, and regularization using tKL automatically ensures the symmetric positive definiteness of the tensors. Further, the regularization is weighted by a non-local factor adapted from the conventional non-local means filters. Finally, for the data fidelity, we use the nonlinear least-squares term derived from the Stejskal–Tanner model. We present experimental results depicting the positive performance of our method in comparison to competing methods on synthetic and real data examples.

© 2012 Elsevier Inc. All rights reserved.

Introduction

Diffusion weighted magnetic resonance imaging (MRI) is a very popular imaging technique that has been widely applied in recent times (Jones, 2010).
It uses diffusion sensitizing gradients to non-invasively capture the anisotropic properties of the tissue being imaged. Diffusion tensor imaging (DTI) approximates the diffusivity function by a symmetric positive definite tensor of order two (Basser et al., 1994). DTI is an MRI modality that provides information about the movement of water molecules in a tissue: it describes the diffusion direction of water molecules in the brain, which is associated with the direction of fiber tracts in the white matter. When this movement is hindered by membranes and macromolecules, water diffusion becomes anisotropic. Therefore, in highly structured tissues such as nerve fibers, this anisotropy can be used to characterize the local structure of the tissue. Consequently, many applications are based on the estimated diffusion tensor fields, such as registration (Gur and Sochen, 2007; Jia et al., 2011; Wang et al., 2011; Yang et al., 2008; Yeo et al., 2009), segmentation (Descoteaux et al., 2008; Goh and Vidal, 2008; Hasan et al., 2007; Lenglet et al., 2006; Liu et al., 2007; Motwani et al., 2010; Savadjiev et al., 2008; Vemuri et al., 2011; Wang and Vemuri, 2005), atlas construction (Assemlal et al., 2011; Barmpoutis and Vemuri, 2009; Mori et al., 2008; Xie et al., 2010), anatomy modeling (Faugeras et al., 2004), and fiber tract related applications (Burgela et al., 2006; Durrleman et al., 2011; Lenglet et al., 2009; Mori and van Zijl, 2002; Savadjiev et al., 2008; Wang et al., 2010, 2012; Zhu et al., 2011). All of these latter tasks benefit from the estimation of smooth diffusion tensors.

Estimating the diffusion tensors (DTs) from DWI is a challenging problem, since the DWI data are invariably affected by noise during the acquisition process (Poupon et al., 2008b; Tang et al., 2009; Tristan-Vega and Aja-Fernandez, 2010). Therefore, a robust DTI estimation method that performs feature preserving denoising is desired.
In most of the existing methods, the DTs are estimated from the raw diffusion weighted echo intensity image (DWI). At each voxel of the 3D image lattice, the diffusion signal intensity S is related to the diffusion tensor D ∈ SPD(3)¹ via the Stejskal–Tanner equation (Stejskal and Tanner, 1965)

S = S_0 exp(−b g^T D g),    (1)

where S_0 is the signal intensity without diffusion, b is the b-value and g is the direction of the diffusion sensitizing gradient.

There are various methods in the existing literature to estimate D from S (Barmpoutis et al., 2009a; Batchelor et al., 2005; Chang et al., 2005; Chefd'hotel et al., 2004; Fillard et al., 2007; Hamarneh and Hradsky, 2007; Mangin et al., 2002; Mishra et al., 2006; Niethammer et al., 2006; Pennec et al., 2006; Poupon et al., 2008b; Salvador et al., 2005; Tang et al., 2009; Tristan-Vega and Aja-Fernandez, 2010; Tschumperle and Deriche, 2003, 2005; Vemuri et al., 2001; Wang et al., 2003, 2004). A very early one is direct tensor estimation (Westin et al., 2002), which gives an explicit solution for D and S_0. Though time efficient, it is sensitive to noise because only 7 gradient directions are used to estimate D and S_0. Another method is the minimum recovery error (MRE) estimation, or least squares fitting (Basser et al., 1994), which minimizes the error in recovering the DTs from the DWI. MRE is better than direct estimation because it uses more gradient directions, which increases its reliability.

NeuroImage 67 (2013) 33–41
⁎ Corresponding author. E-mail address: (M. Liu).
¹ SPD(3) represents the space of 3×3 symmetric positive definite matrices.
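To make the forward model of Eq. (1) concrete, the sketch below (Python/NumPy; the function name and the toy tensor values are our own illustrative choices, not the paper's code) evaluates the per-gradient signal attenuation for a diffusion tensor:

```python
import numpy as np

def stejskal_tanner(S0, D, b, gradients):
    """Diffusion signal S_i = S0 * exp(-b * g_i^T D g_i) for each gradient g_i."""
    g = np.asarray(gradients)                  # (n, 3) unit gradient directions
    quad = np.einsum('ij,jk,ik->i', g, D, g)   # g_i^T D g_i for every i
    return S0 * np.exp(-b * quad)

# Toy example: an anisotropic tensor (diffusivities in mm^2/s, b in s/mm^2).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # dominant diffusion along x
g = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
S = stejskal_tanner(S0=5.0, D=D, b=1500.0, gradients=g)
# The signal is attenuated most along the dominant diffusion direction (x).
```

The attenuation is strongest where g aligns with the largest eigenvector of D, which is exactly the anisotropy that tractography later exploits.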
However, it does not smooth the DWI or the DTI, and thus it is susceptible to noise in the input data.

With this in mind, many denoising frameworks (Gilboa et al., 2004; Spira et al., 2007) have been proposed to improve the signal to noise ratio (SNR). Some methods perform denoising on the DWI and then estimate the DTI. Typical approaches to DWI denoising are designed according to the statistical properties of the noise. Most of these approaches assume that the noise follows the Rician distribution (Descoteaux et al., 2008; Koay and Basser, 2006; Landman et al., 2007; Piurica et al., 2003), and when denoising, they use the second order moment of the Rician noise (McGibney and Smith, 1993), maximum likelihood (ML) (Sijbers and den Dekker, 2004) and expectation maximization (EM) approaches (DeVore et al., 2000; Marzetta, 1995), wavelets (Nowak, 1999), anisotropic Wiener filtering (Martin-Fernandez et al., 2007), total variation schemes (McGraw et al., 2004), Markov random fields (Zhang et al., 2001), nonparametric neighborhood statistics techniques like non-local means (NLM) (Awate and Whitaker, 2005; Coupe et al., 2008) and unbiased NLM (Manjon et al., 2008; Wiest-Daessle et al., 2008) algorithms, Perona–Malik-like smoothing (Basu et al., 2006), or the linear minimum mean square error (LMMSE) scheme (Aja-Fernandez et al., 2008).

Alternatively, some methods first estimate the diffusion tensors from the raw DWI and then perform denoising on the tensor field (Moraga et al., 2007). One representative method uses the NLM framework incorporating a log-Euclidean metric (Fillard et al., 2005). The drawback of such two-stage processes is that errors may accumulate from one stage to the other.

Bearing these deficiencies in mind, researchers developed variational framework (VF) based estimation (Chefd'hotel et al., 2004; Tschumperle and Deriche, 2003; Wang et al., 2003, 2004). These approaches take into account the SPD (symmetric positive definite) constraint on the diffusion tensors.
The smoothing in all these approaches involves some kind of weighted averaging over neighborhoods, which defines the smoothing operators resulting from the variational principles. Some of these smoothing operators are locally defined and do not capture the global geometric structure present in the image. Moreover, they are not statistically robust.

To overcome the aforementioned drawbacks, we propose a novel statistically robust variational non-local approach for simultaneous smoothing and tensor estimation from the raw DWI data. This approach combines the variational framework, non-local means and a statistically robust regularizer on the tensor field. The main contributions of this approach are three-fold. First, we use a statistically robust divergence measure, the total Bregman divergence, to regularize the smoothness measure on the tensor field. Combined with the Cholesky decomposition of the diffusion tensors, this automatically ensures the positive definiteness of the estimated diffusion tensors, which overcomes the common problem of many techniques that manually force the tensor to be positive definite, or that resort to the accuracy of finite precision arithmetic, leading to the equivalence between testing for positive definiteness and positive semidefiniteness as in Wang et al. (2003, 2004). Second, it uses an adaptation of the NLM to find the weights for the smoothness regularization terms. This preserves the global structure of the tensor field while denoising. Finally, it achieves simultaneous denoising and DTI estimation, which avoids the error propagation of the two-stage approach described earlier. Besides, this method can be easily extended to higher order tensor estimation. We will explain these points at length in the rest of the paper.

The rest of the paper is organized as follows. In the Proposed method section, we introduce our proposed method, followed by the empirical validation in the Experiments section. Finally we conclude.
Proposed method

Our proposed integrated variational non-local approach has three components: minimizing the data fidelity error, smoothing over S_0, and smoothing over the tensor field. The proposed model is given by the following equation:

min_{S_0, D ∈ SPD} E(S_0, D) = (1 − α − β) ∫_Ω Σ_{i=1}^{n} (S_i − S_0 exp(−b g_i^T D g_i))² dx
    + α ∫_Ω ∫_{V(x)} w_1(x, y) (S_0 − S_0(y))² dy dx
    + β ∫_Ω ∫_{V(x)} w_2(x, y) δ(D, D(y)) dy dx,    (2)

where Ω is the domain of the image, n is the number of diffusion gradients, V(x) is the search window at voxel x, and δ(D, D(y)) is the total Kullback–Leibler (tKL) divergence (Vemuri et al., 2011) between tensors D and D(y), which will be explained in detail later. The first term captures the non-linear data fitting error; the second and third terms are smoothness constraints on S_0 and D. α and β are constants balancing the fitting error and the smoothness.² w_1(x, y) and w_2(x, y) are the regularization weights for S_0 and D. Since S_0 and S are linearly related, while D and S are "logarithmically" related, we use different methods to calculate w_1(x, y) and w_2(x, y). Note that S_i, S_0 and D by default represent the values at voxel x, unless specified otherwise.

The discrete form of Eq. (2) is

min_{S_0, D ∈ SPD} E(S_0, D) = (1 − α − β) Σ_{x∈Ω} Σ_{i=1}^{n} (S_i − S_0 exp(−b g_i^T D g_i))²
    + α Σ_{x∈Ω} Σ_{y∈V(x)} w_1(x, y) (S_0 − S_0(y))²
    + β Σ_{x∈Ω} Σ_{y∈V(x)} w_2(x, y) δ(D, D(y)).    (3)

Since DTI estimation problems are most often posed in the discrete setting, we focus on the discrete case in this work.

Computation of the weights w_1(x, y) and w_2(x, y)

w_1(x, y) and w_2(x, y) are the regularization weights of the smoothness terms.
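The per-voxel structure of the discrete energy in Eq. (3) can be sketched as follows (hypothetical function and argument names; the weights discussed in this subsection enter as precomputed arrays, and the divergence δ is passed in as a callable, so a simple placeholder can stand in for the tKL term):

```python
import numpy as np

def voxel_energy(S, S0, D, b, g, nbrs_S0, nbrs_D, w1, w2, alpha, beta, delta):
    """One voxel's contribution to the discrete energy of Eq. (3):
    data fidelity + weighted smoothness on S0 + weighted smoothness on D."""
    pred = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D, g))
    fidelity = np.sum((S - pred) ** 2)                        # data term
    smooth_S0 = sum(w * (S0 - s) ** 2 for w, s in zip(w1, nbrs_S0))
    smooth_D = sum(w * delta(D, Dy) for w, Dy in zip(w2, nbrs_D))
    return (1 - alpha - beta) * fidelity + alpha * smooth_S0 + beta * smooth_D

# Toy check: noise-free data and identical neighbours give zero energy.
frob = lambda P, Q: np.sum((P - Q) ** 2)     # stand-in divergence, NOT the tKL
D = np.diag([1.5e-3, 0.5e-3, 0.5e-3])
g = np.eye(3)
S = 5.0 * np.exp(-1500.0 * np.einsum('ij,jk,ik->i', g, D, g))
E = voxel_energy(S, 5.0, D, 1500.0, g, [5.0], [D], [1.0], [1.0], 0.1, 0.4, frob)
```

With noise-free signals the fidelity term vanishes, and with identical neighbours both smoothness terms vanish, so E is zero; any perturbation of S, S_0 or D makes it strictly positive.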
If w_1(x, y) is large, it requires S_0 and S_0(y) to be similar. Similarly, if w_2(x, y) is large, it requires D and D(y) to be similar. Usually, one requires the S_0's and D's to be respectively similar only if the corresponding diffusion signals are similar. We therefore compute w_1(x, y) and w_2(x, y) according to the statistical properties of the diffusion weighted signals. It has been recognized that the diffusion signal S follows the Rician distribution (Descoteaux et al., 2008; Koay and Basser, 2006; Piurica et al., 2003), i.e.,

p(S; S̄, σ²) = (S / σ²) exp(−(S² + S̄²) / (2σ²)) I_0(S̄ S / σ²),    (4)

where S̄ is the signal without noise and σ² is the variance of the Rician noise. Since S_0 is linearly related to S and D is "logarithmically" related to S, we set the regularization weights for S_0 and D to be

w_1(x, y) = (1 / Z_1(x)) exp(−‖S(N(x)) − S(N(y))‖² / (h σ²)),    (5)

w_2(x, y) = (1 / Z_2(x)) exp(−‖log S(N(x)) − log S(N(y))‖² / (h σ²)),    (6)

where Z_1 and Z_2 are normalizers, h is the filtering parameter (Coupe et al., 2006), and σ is the standard deviation of the noise, which is estimated using the first mode of the background (Aja-Fernandez et al., 2009; Tristan-Vega and Aja-Fernandez, 2010). N(x) and N(y) denote the neighborhoods of x and y respectively. The neighborhood of x can be viewed as the voxels around x, or a square centered at x with a user defined radius.

² Usually, the noisier the image, the larger α and β should be, and vice versa. The noise in the images can be estimated using any of the popular methods described in Aja-Fernandez et al. (2009) and Tristan-Vega and Aja-Fernandez (2010).
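A small sketch of the patch-based weights in Eqs. (5) and (6) (illustrative names; `log_domain=True` corresponds to w_2, where the logarithm of the signal is compared, and normalization by Z(x) amounts to dividing by the sum of weights over the search window):

```python
import numpy as np

def nlm_weight(patch_x, patch_y, h, sigma, log_domain=False):
    """Unnormalized NLM weight exp(-||P(x) - P(y)||^2 / (h * sigma^2));
    dividing by Z(x) = sum over the search window yields w1 or w2."""
    px, py = ((np.log(patch_x), np.log(patch_y)) if log_domain
              else (patch_x, patch_y))
    return np.exp(-np.sum((px - py) ** 2) / (h * sigma ** 2))

# Similar patches get a weight near 1, dissimilar patches decay toward 0.
p = np.array([5.0, 4.9, 5.1, 5.0])
w_same = nlm_weight(p, p + 0.01, h=1.0, sigma=0.5)
w_diff = nlm_weight(p, p + 3.0, h=1.0, sigma=0.5)
w_log = nlm_weight(p, p, h=1.0, sigma=0.5, log_domain=True)
```

The exponential kernel makes the averaging selective: only voxels whose whole neighborhood resembles N(x) contribute appreciably, which is what preserves edges and other global structure.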
Furthermore,

‖S(N(x)) − S(N(y))‖² = Σ_{j=1}^{m} (S(μ_j) − S(ν_j))², and
‖log S(N(x)) − log S(N(y))‖² = Σ_{j=1}^{m} (log S(μ_j) − log S(ν_j))²,

where μ_j and ν_j are the jth voxels in the neighborhoods N(x) and N(y) respectively, and m is the number of voxels in each neighborhood. From Eq. (5), we can see that if the signal intensities of two voxels are similar, w_1(x, y) and w_2(x, y) are large. Consequently, according to Eq. (3), S_0 and S_0(y), and D and D(y), should then be similar respectively.

NLM is known for its high accuracy and high computational complexity. To address the computational load, we use two strategies. One is to decrease the number of computations by pre-selecting voxels in the search window, and the other is to make use of parallel computing. Concretely, we prefilter out the voxels in the search window whose diffusion weighted signal intensities are not similar to those of the voxel under consideration. This is specified as

w_1(x, y) = (1 / Z_1(x)) exp(−‖S(N(x)) − S(N(y))‖² / (h σ²)), if ‖S(N(x))‖² / ‖S(N(y))‖² ∈ [τ_1, τ_2]; 0, otherwise;

w_2(x, y) = (1 / Z_2(x)) exp(−‖log S(N(x)) − log S(N(y))‖² / (h σ²)), if ‖S(N(x))‖² / ‖S(N(y))‖² ∈ [τ_1, τ_2]; 0, otherwise.

τ_1 and τ_2 are the thresholds for prefiltering. We set τ_1 = 0.1 and τ_2 = 10 in our experiments.

In the context of parallel computing, we divide the computations into smaller parts and assign them to several processors. Since the smaller parts of the NLM computation are uncorrelated, parallel computing improves the efficiency considerably.
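The prefiltering test on the patch-energy ratio can be sketched as follows (τ values from the paper; the function name is ours):

```python
def keep_candidate(S_nbr_x, S_nbr_y, tau1=0.1, tau2=10.0):
    """Return True when ||S(N(x))||^2 / ||S(N(y))||^2 lies in [tau1, tau2].
    Otherwise the candidate's weight is set to zero and the (expensive)
    patch-distance computation is skipped entirely."""
    ratio = sum(s * s for s in S_nbr_x) / sum(s * s for s in S_nbr_y)
    return tau1 <= ratio <= tau2

# A grossly brighter neighborhood is rejected before any patch distance is computed.
accept = keep_candidate([5.0, 4.9, 5.1], [5.1, 5.0, 4.8])
reject = keep_candidate([5.0, 4.9, 5.1], [100.0, 98.0, 102.0])
```

Since the ratio needs only the precomputed patch energies, this cheap test prunes most dissimilar candidates before the quadratic-cost patch comparison runs.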
In our case, we divide the volumes into 8 subvolumes, assign each subvolume to one processor, and use a desktop with 8 processors. This multi-threading greatly enhances the efficiency.

Computation of the tKL divergence

The tKL divergence is a special case of the recently proposed total Bregman divergence (tBD) (Liu et al., 2010; Vemuri et al., 2011). This divergence measure is based on the orthogonal distance between the convex generating function of the divergence and its tangent approximation at the second argument of the divergence. The total Bregman divergence δ_f associated with a real valued, strictly convex and differentiable function f defined on a convex set X, between points x, y ∈ X, is defined as

δ_f(x, y) = (f(x) − f(y) − ⟨x − y, ∇f(y)⟩) / √(1 + ‖∇f(y)‖²),    (7)

where ⟨·,·⟩ is the inner product and, generally, ‖∇f(y)‖² = ⟨∇f(y), ∇f(y)⟩. tBD has been proven to be intrinsically robust to noise and outliers. Furthermore, it yields a closed form formula for computing the median (an ℓ1-norm average) of a set of symmetric positive definite tensors. When f(x) = x log x and X is the set of probability density functions (pdfs), Eq. (7) becomes the tKL divergence,

δ_f(x, y) = ∫ x log(x/y) / √(1 + ∫ y (1 + log y)²).    (8)

Motivated by an earlier use of the tKL divergence as a dissimilarity measure between DTs for DTI segmentation (Vemuri et al., 2011), we use tKL to measure the dissimilarity between tensors and apply it to the DTI regularization.
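For order-2 SPD tensors, viewed as covariances of zero-mean Gaussians, the tKL divergence and the associated t-center admit closed forms, which are derived in the following paragraphs. As a sketch transcribing those expressions (Python/NumPy; function names are ours, l = 3 for DTI), assuming the constants c_1, c_2 as stated in the text:

```python
import numpy as np

def _c1_c2(l):
    # Constants in the closed-form tKL for l x l SPD tensors (as in the text).
    c1 = 3 * l / 4 + l**2 * np.log(2 * np.pi) / 2 + l * np.log(2 * np.pi)**2 / 4
    c2 = l * (1 + np.log(2 * np.pi)) / 2
    return c1, c2

def tkl(P, Q):
    """Closed-form tKL divergence between zero-mean Gaussians N(0, P), N(0, Q)."""
    l = P.shape[0]
    c1, c2 = _c1_c2(l)
    _, ldP = np.linalg.slogdet(P)   # log det, stable for tiny diffusivities
    _, ldQ = np.linalg.slogdet(Q)
    num = (ldQ - ldP + np.trace(np.linalg.solve(Q, P)) - l) / 2
    return num / np.sqrt(c1 + ldQ**2 / 4 - c2 * ldQ)

def t_center(tensors):
    """t-center (Eq. (10)): weighted harmonic mean with SL(n)-invariant weights."""
    c1, c2 = _c1_c2(tensors[0].shape[0])
    lds = [np.linalg.slogdet(Q)[1] for Q in tensors]
    a = np.array([1.0 / (2 * np.sqrt(c1 + ld**2 / 4 - c2 * ld)) for ld in lds])
    a /= a.sum()
    return np.linalg.inv(sum(ai * np.linalg.inv(Q) for ai, Q in zip(a, tensors)))

# Typical diffusion tensors (eigenvalues ~1e-3 mm^2/s) and an SL(3) shear.
P = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
Q = np.diag([1.0e-3, 1.0e-3, 1.0e-3])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])     # det(A) = 1, i.e. A in SL(3)
```

For the small determinants typical of diffusion tensors, log det Q is strongly negative, so the argument of the square root stays positive; the weights a_i depend on Q_i only through det Q_i, which is what makes them SL(n) invariant.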
It has been shown that the tKL divergence based ℓ1-norm average, termed the t-center (Vemuri et al., 2011), is invariant to special linear group transformations (denoted SL(n)).³ This is detailed in the following.

Order-2 SPD tensors can be viewed as covariance matrices of zero mean Gaussian pdfs (Wang et al., 2004). Let P, Q ∈ SPD(l); then their corresponding pdfs are

p(t; P) = (2π)^{−l/2} (det P)^{−1/2} exp(−t^T P^{−1} t / 2),
q(t; Q) = (2π)^{−l/2} (det Q)^{−1/2} exp(−t^T Q^{−1} t / 2),

and the tKL divergence between them is given explicitly by

δ(P, Q) = ∫ p log(p/q) dt / √(1 + ∫ (1 + log q)² q dt)
        = [log det(P^{−1} Q) + tr(Q^{−1} P) − l] / [2 √(c_1 + (log det Q)²/4 − c_2 log det Q)],

where c_1 = 3l/4 + l² log(2π)/2 + l (log 2π)²/4 and c_2 = l (1 + log 2π)/2.

Moreover, the partial minimization of the third term in Eq. (3),

min_D Σ_{y∈V(x)} δ(D, D(y)),

leads to the t-center of the set {D(y)}. The t-center has been well studied in Vemuri et al. (2011). Given a set of tensors {Q_i}, the t-center P* minimizes the ℓ1-norm divergence to all the tensors, i.e.,

P* = arg min_P Σ_i δ(P, Q_i),    (9)

and P* is explicitly expressed as

P* = (Σ_i (a_i / Σ_j a_j) Q_i^{−1})^{−1},  with  a_i = (2 √(c_1 + (log det Q_i)²/4 − c_2 log det Q_i))^{−1}.    (10)

The t-center of a set of DTs is thus a weighted harmonic mean, available in closed form. Moreover, the weights are invariant to SL(n) transformations, i.e., a_i(Q_i) = a_i(A^T Q_i A) for all A ∈ SL(n). After such a transformation the t-center becomes

P̂ = (Σ_i a_i (A^T Q_i A)^{−1})^{−1} = A^T P* A.    (11)

This means that if {Q_i}_{i=1}^{m} are transformed by some member of SL(n), the t-center undergoes the same transformation. It was also found that the t-center is robust to noise, in that a noisier tensor receives a smaller weight (Vemuri et al., 2011). These properties make the t-center an appropriate tool for DTI applications.

The SPD constraint

It is known that if a matrix D ∈ SPD, there exists a unique lower triangular matrix L with all its diagonal values positive such that D = L L^T (Golub and Loan, 1996). This is the well known Cholesky factorization theorem. Wang et al. (2003, 2004) were the first to use Cholesky factorization to enforce the positive definiteness condition on the smooth diffusion tensors estimated from the DWI data. They use the argument that testing for positive definiteness is equivalent to testing for positive semidefiniteness under finite precision arithmetic, and hence their cost function minimization is set on the space of positive semidefinite matrices, which is a closed set that facilitates the existence of a solution within that space. Unlike Wang et al. (2003, 2004), we use the Cholesky decomposition together with the tKL divergence to regularize the smoothness of the tensor field, and this automatically ensures that the diagonal values of L are positive.

³ An n×n matrix A ∈ SL(n) satisfies det(A) = 1.
This argument is validated as follows. Substituting D = L L^T into Eq. (9), we get

δ(L, L(y)) = [Σ_{i=1}^{3} (log L_ii(y) − log L_ii) + tr(L^{−T}(y) L^{−1}(y) L L^T)/2 − 1.5] / √(c_1 + (Σ_{i=1}^{3} log L_ii(y))²/4 − c_2 Σ_{i=1}^{3} log L_ii(y)).    (12)

Because of the "log" function in this computation, Eq. (12) automatically requires the L_ii to be positive; therefore we do not need to manually force the tensor to be SPD. The detailed explanation is given in Appendix A.

Numerical solution

In this section, we present the numerical solution of the variational principle (3). The partial derivatives of Eq. (3) with respect to S_0 and L can be computed explicitly and are

∂E/∂S_0 = −2(1 − α − β) Σ_{i=1}^{n} v_i − 2α Σ_{y∈V(x)} w_1(x, y) (S_0 − S_0(y)),

∂E/∂L = 4(1 − α − β) Σ_{i=1}^{n} b S_0 v_i L^T g_i g_i^T − 2β Σ_{y∈V(x)} w_2(x, y) (L^{−1} − L^T L^{−T}(y) L^{−1}(y)) / √(c_1 + (Σ_{i=1}^{3} log L_ii(y))²/4 − c_2 Σ_{i=1}^{3} log L_ii(y)),    (13)

where

v_i = (S_i − S_0 exp(−b g_i^T L L^T g_i)) exp(−b g_i^T L L^T g_i).    (14)

To solve Eq. (13), we use the limited memory quasi-Newton method described in Nocedal and Wright (2000).
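As a concrete illustration of this optimization strategy (a sketch with our own names, not the paper's code), the snippet below fits a single voxel's tensor with SciPy's L-BFGS-B implementation, parameterizing D = L L^T through its Cholesky factor and using only the data-fidelity term of Eq. (3); the full method additionally carries the two smoothness terms and runs over the whole volume:

```python
import numpy as np
from scipy.optimize import minimize

def chol(p):
    """Lower-triangular L from the 6 free parameters (row-major fill)."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = p
    return L

def fit_voxel(S, S0, b, g):
    """Least-squares fit of D = L L^T at one voxel via L-BFGS-B
    (data-fidelity term only; gradients approximated numerically)."""
    def cost(p):
        D = chol(p) @ chol(p).T
        pred = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D, g))
        return np.sum((S - pred) ** 2)
    p0 = np.array([0.03, 0.0, 0.03, 0.0, 0.0, 0.03])  # ~1e-3 diffusivity start
    L = chol(minimize(cost, p0, method='L-BFGS-B').x)
    return L @ L.T                                     # SPD by construction

# Recover a known tensor from noise-free signals along 6 directions.
D_true = np.diag([1.5e-3, 0.5e-3, 0.5e-3])
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
g /= np.linalg.norm(g, axis=1, keepdims=True)
S = 5.0 * np.exp(-1500.0 * np.einsum('ij,jk,ik->i', g, D_true, g))
D_hat = fit_voxel(S, 5.0, 1500.0, g)
```

Because the optimizer works on the unconstrained Cholesky parameters, every iterate L L^T is symmetric positive semidefinite automatically, with no projection step.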
This method is useful for solving large problems with many variables, as in our case. It maintains simple and compact approximations of the Hessian matrices, requiring, as the name suggests, modest storage, besides yielding a linear rate of convergence. Specifically, we use the limited memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method (Nocedal and Wright, 2000) to construct the Hessian approximation.

Fig. 1. (a) Ground truth synthetic DTI field, (b) the original DWI corresponding to g_1, (c) the Rician noise affected DWI, and the DTI estimated using (d) MRE, (e) VF, (f) LMMSE, (g) NLM, and (h) the proposed technique.

Experiments

We evaluated our method on synthetic datasets with various levels of noise, and on real datasets, including rat spinal cord datasets and human brain datasets. Based on the tensor fields estimated using the technique presented in this paper, we achieved DTI segmentation for the rat spinal cord datasets, and some preliminary fiber tracking on human brain datasets. However, since the main thrust of this paper is the estimation of smooth diffusion tensor fields, the segmentation and fiber tracking results are not presented here.

We compared our method with other state-of-the-art techniques including VF (Tschumperle and Deriche, 2003), NLM (Wiest-Daessle et al., 2008) and LMMSE (Tristan-Vega and Aja-Fernandez, 2010). We also present the MRE method for comparison, since several software packages (3DSlicer 3.6 and fanDTasia (Barmpoutis et al., 2009b)) use this technique due to its simplicity. We implemented VF ourselves since we did not find any open source versions on the web. For LMMSE, we used the implementation in 3DSlicer 3.6. For NLM, we used an existing code for DWI denoising together with our own implementation of least squares fitting to estimate the DTI from the denoised DWI.
To ensure fairness, we tuned all the parameters of each method for every experiment, and chose the set of parameters yielding the best results. The visual and numerical results show that our method yields better results than the competing methods.

DTI estimation from synthetic datasets

There are two groups of synthetic datasets. The first one is a 16×16 DTI with two homogeneous regions, as shown in Fig. 1(a). Each region is a repetition of a single tensor; the two tensors are D_1 = [3.3, 1.8, 1.3, 0, 0, 1.2]′ and D_2 = [3, 2.2, 3, −1, 0, 0]′.⁶ To generate the DWI based on this DTI, we let S_0 = 5, b = 1500 s/mm², and g be 22 uniformly-spaced directions on the unit sphere starting from (1,0,0). Substituting the DTI, S_0, b and g into the Stejskal–Tanner equation, we generate a 16×16×22 DWI S. One representative slice of S is shown in Fig. 1(b). Then, following the method proposed in Koay and Basser (2006), we add Rician noise to S and get S̃, using the formula S̃(x) = √((S(x) + n_r)² + n_i²), where n_r, n_i ∼ N(0, σ²). By varying σ, we get different levels of noise and therefore a wide range of SNR (SNR = mean(S)/σ).

In our experiments, we set α = 0.1 and β = 0.4. The search window size is set to 25 and the neighborhood size to 9. Fig. 1(c) shows the slice in Fig. 1(b) after adding noise (SNR = 60). The DTI estimated using MRE, VF, NLM, LMMSE and the proposed method are shown in Fig. 1. The figure visually depicts that our method can estimate the tensor field more accurately.

To quantitatively assess the proposed variational unified model, we determine the accuracy of the computed principal eigenvectors of the tensors. Let θ̄ be the average angle between the principal eigenvectors of the estimated tensor field and the original known tensor field.

⁶ D is written as [D11, D22, D33, D12, D23, D13].
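The Rician corruption used above to generate the noisy synthetic DWI can be sketched as follows (seeded RNG for reproducibility; the function name is ours):

```python
import numpy as np

def add_rician_noise(S, sigma, seed=0):
    """S_noisy = sqrt((S + n_r)^2 + n_i^2) with n_r, n_i ~ N(0, sigma^2),
    following the noise model used for the synthetic experiments."""
    rng = np.random.default_rng(seed)
    n_r = rng.normal(0.0, sigma, np.shape(S))
    n_i = rng.normal(0.0, sigma, np.shape(S))
    return np.sqrt((np.asarray(S) + n_r) ** 2 + n_i ** 2)

# SNR = mean(S)/sigma; e.g. sigma = mean(S)/60 reproduces the SNR = 60 setting.
clean = np.full(1000, 5.0)
noisy = add_rician_noise(clean, sigma=5.0 / 60.0)
```

Note that the magnitude operation makes the noise non-additive and non-zero-mean, which is precisely why Rician-aware estimators outperform plain least squares at low SNR.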
Besides, we compare the difference, denoted S̄_0, between the estimated and ground truth S_0. The results are shown in Table 1, from which it is evident that our method outperforms the others, and the margin is more evident at higher noise levels. Even though the accuracy of NLM and our proposed method is very similar at high SNR, our method is much more computationally efficient than NLM. The average CPU time taken to converge for our method on a desktop computer with an Intel 8 Core 2.8 GHz CPU, 24 GB of memory, GNU Linux and MATLAB (Version 2010a) is 3.51 s, whereas NLM requires 5.26 s (note that both methods are executed using multi-core processors).

We also evaluated the importance of the two regularization terms separately. α = 0 means removing the regularization on S_0, while β = 0 means removing the regularization on D. For these two cases, we evaluate θ̄ and S̄_0, and the results are shown in the last two columns of Table 1. The results show that removing either regularization term increases the DTI estimation error. This implies that both regularization terms are necessary in order to obtain accurate estimation results.

We also evaluated our method on the 64×64 fibercup dataset (Fillard et al., 2011; Poupon et al., 2008a) with a voxel size of 3×3×3 mm³, a b-value of 1500 s/mm² and 130 gradient directions. For the parameter settings in the proposed model, we chose α = 0.1 and β = 0.4. The search window size is 9 and the neighborhood size is 4. We show the estimated S_0, D11, D12, D13, D22, D23, D33, FA and the visualization of the estimated DTI using fanDTasia (Barmpoutis et al., 2009a) in Fig. 2. The results depict that the proposed method gives a well smoothed and feature preserving tensor field.
DTI estimation from real datasets

We also performed DTI estimation on a 100×80×32×52 3D rat brain DWI. The data was acquired using a PGSE technique with TR = 1.5 s, TE = 28.3 ms, bandwidth = 35 kHz, and 52 diffusion weighted images with a b-value of 1334 s/mm².

We compared several other methods on this DTI estimation; however, to save space, we only show the results of MRE, LMMSE and our proposed method. We present D11, D22, D33, S_0, FA, and mean trace for each estimated result. The DTI estimation results of MRE, LMMSE and our proposed method are shown in Figs. 3, 4 and 5 respectively.

We used a human brain DWI dataset (256×256×72) provided by Alfred Anwander of the Max Planck Institute for Human Neuroscience (Makuuchi et al., 2009). The DWIs were acquired with a whole-body 3 T Magnetom TRIO scanner (Siemens Medical Solutions).

Table 1. Error in estimated DTI and S_0, using different methods, from synthetic DWI with different levels of noise.

SNR  Error  MRE          VF           LMMSE        NLM          Proposed     α=0          β=0
50   θ̄      0.204±0.064  0.107±0.038  0.096±0.060  0.085±0.054  0.082±0.045  0.112±0.049  0.179±0.062
     S̄_0    0.269±0.138  0.217±0.106  0.208±0.080  0.191±0.112  0.133±0.103  0.171±0.098  0.224±0.082
40   θ̄      0.550±0.351  0.275±0.312  0.203±0.304  0.195±0.301  0.108±0.102  0.169±0.192  0.539±0.348
     S̄_0    0.584±0.354  0.278±0.359  0.215±0.320  0.216±0.328  0.140±0.226  0.218±0.225  0.542±0.355
30   θ̄      0.588±0.359  0.434±0.394  0.230±0.363  0.217±0.348  0.144±0.130  0.214±0.251  0.570±0.359
     S̄_0    0.753±0.441  0.485±0.370  0.282±0.401  0.275±0.352  0.173±0.231  0.286±0.432  0.607±0.451
15   θ̄      0.727±0.561  0.504±0.388  0.396±0.426  0.384±0.422  0.218±0.252  0.372±0.283  0.711±0.577
     S̄_0    0.968±0.570  0.622±0.517  0.479±0.482  0.478±0.457  0.235±0.272  0.481±0.474  0.921±0.580
7    θ̄      1.092±0.595  0.688±0.656  0.460±0.526  0.519±0.469  0.265±0.276  0.457±0.472  1.001±0.591
     S̄_0    1.334±1.094  0.827±0.680  0.543±0.583  0.562±0.554  0.273±0.283  0.592±0.57   1.306±1.095