MATEMATIKA, 2007, Volume 23, Number 1, 1–8
© Department of Mathematics, UTM

A Matrix Variance Inequality for k-Functions

¹Norhayati Rosli & ²Wan Muhamad Amir W Ahmad

¹Fakulti Kejuruteraan Kimia & Sumber Asli, Kolej Universiti Kejuruteraan & Teknologi Malaysia, 25000 Kuantan, Pahang, Malaysia
²Department of Mathematics, Faculty of Science and Technology, Kolej Universiti Sains & Teknologi Malaysia, Mengabang Telipot, 21030 Kuala Terengganu, Terengganu, Malaysia
e-mail: ¹norhayati@kuktem.edu.my

Abstract  In this paper a course of solving a variational problem is considered. [2] obtained what appears to be a specialized inequality for a variance, namely that for a standard normal variable $X$, $\operatorname{Var}[g(X)] \le E[g'(X)]^2$. Both the simplicity and the usefulness of this inequality have generated a plethora of extensions, as well as alternative proofs. [5] focused on a result for two functions under the normal and gamma distributions, and stated the corresponding result for the normal distribution with $k$ functions without proof; that proof is presented here. This paper also extends the result of [5] to $k$ functions for the gamma distribution.

Keywords  Normal Distribution, Gamma Distribution, Laguerre Family, Hermite Polynomials

1 Introduction

In solving a variational problem, [2] obtained what appears to be a specialized inequality for a variance. Let $X$ be normally distributed with density $\phi(x)$, mean 0 and variance 1. If $g$ is absolutely continuous and $g(X)$ has finite variance, then

$$E[g'(X)]^2 \ge \operatorname{Var}[g(X)]. \tag{1}$$

Equality in (1) is achieved for linear functions. This inequality had arisen earlier, especially because of its use in variational problems. Many papers deal with inequality (1), and in most cases they treat a single function. The random variables of interest, however, may have multivariate distributions, so we present a study of the matrix variance inequality for the normal and gamma distributions with $k$ functions.

2 Literature Review

Chernoff's proof is based on expanding $g(X)$ in orthonormalized Hermite polynomials with respect to the normal density,

$$g(X) = a_0 + a_1 H_1(X) + a_2 H_2(X) + \cdots \tag{2}$$

with probability 1, where

$$E[H_i(X)] = 0, \qquad E[H_i(X) H_j(X)] = \delta_{ij}, \tag{3}$$

$$\frac{dH_i(x)}{dx} = \sqrt{i}\, H_{i-1}(x), \qquad a_i = E[g(X) H_i(X)], \tag{4}$$

and

$$\operatorname{Var}[g(X)] = a_1^2 + a_2^2 + \cdots + a_n^2 + \cdots, \tag{5}$$

$$g'(X) = a_1 + \sqrt{2}\, a_2 H_1(X) + \cdots + \sqrt{n}\, a_n H_{n-1}(X) + R'_n(X).$$

So if $g'(X)$ has a second moment,

$$E[g'(X)]^2 = \sum_i i\, a_i^2 \ge \operatorname{Var}[g(X)], \tag{6}$$

and if $g'(X)$ has no second moment, then $\sum_i i\, a_i^2$ is infinite. For a log-concave density $\exp[-\phi(x)]$, [4] proved that

$$\operatorname{Var}[g(X)] \le E\!\left[\frac{[g'(X)]^2}{\phi''(X)}\right], \tag{7}$$

and for the normal density (7) reduces to (1).

[1] shows that

$$E[g'(X)] = E[X g(X)], \tag{8}$$

which has a similar flavor to (1). Stein's proof is essentially based on integration by parts, but (8) can also be proved using Hermite polynomials. [6] and [7] provide alternative proofs based on the Cauchy–Schwarz inequality; [6] uses the fact that the normal density satisfies $\phi'(x) = -x\phi(x)$. [7] extends (1) to the case where $X_1, \ldots, X_k$ are independent $N(0,1)$ random variables and $g$ is defined on $\mathbb{R}^k$; then

$$\operatorname{Var}[g(X)] \le E[g_1(X)]^2 + \cdots + E[g_k(X)]^2,$$

where $g_i(x) = \partial g(x)/\partial x_i$ and $X = (X_1, \ldots, X_k)$.
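Both (1) and this multivariate extension are easy to probe numerically. The following minimal Monte Carlo sketch is an added illustration, not part of the original paper; the test functions ($\sin x$ in the univariate case and $\sin x_1 + \cos x_2 + x_3^2/4$ for $k = 3$) are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Univariate check of (1): g(x) = sin(x), so g'(x) = cos(x).
x = rng.standard_normal(n)
print(np.var(np.sin(x)), "<=", np.mean(np.cos(x) ** 2))   # ~0.43 <= ~0.57

# Multivariate check (k = 3): g(x) = sin(x1) + cos(x2) + x3^2 / 4,
# with partial derivatives cos(x1), -sin(x2), x3 / 2.
xs = rng.standard_normal((n, 3))
g = np.sin(xs[:, 0]) + np.cos(xs[:, 1]) + xs[:, 2] ** 2 / 4
rhs = (np.mean(np.cos(xs[:, 0]) ** 2)
       + np.mean(np.sin(xs[:, 1]) ** 2)
       + np.mean((xs[:, 2] / 2) ** 2))
print(np.var(g), "<=", rhs)                               # ~0.76 <= ~1.25

Neither test function is linear, so both inequalities hold strictly and the printed gaps are clearly visible.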
[10] provides other extensions and, in particular, lower bounds to go with (1):

$$[E g'(X)]^2 \le \operatorname{Var}[g(X)] \le E[g'(X)]^2,$$

$$[E g'(X)]^2 + \tfrac{1}{2}[E g''(X)]^2 \le \operatorname{Var}[g(X)].$$

[11] improved the bounds for families other than the normal. They also obtain, for the normal distribution, the lower bound

$$\sum_{k=1}^{n} \frac{[E G^{(k)}(X)]^2}{k!} \le \operatorname{Var}[G(X)]$$

with $n = 2$, which coincides with the second display above. [4] discussed a generalization to operators, using a complete orthonormal system. [8] obtains an inequality similar to (1) by considering the double exponential distribution with density $\exp(-|x|)/2$.

3 Characterizations

In this paper we provide a proof of the matrix variance inequality for the normal and gamma distributions with $k$ functions. The matrix variance inequality for the normal distribution was stated by [5] without proof; we supply the proof here and extend the result to the gamma distribution with $k$ functions. We use the proposition obtained in [2]. This proposition was later studied by [5], who proved the matrix variance inequality of the normal and gamma distributions for two functions.

Proposition.  Let $X$ be a $N(0,1)$ random variable and $g_1, \ldots, g_k$ absolutely continuous functions with finite variances. Let $H = (h_{ij})$ and $C = (c_{ij})$ be $k \times k$ matrices defined by

$$h_{ij} = E[g_i'(X) g_j'(X)], \qquad c_{ij} = \operatorname{Cov}[g_i(X), g_j(X)]. \tag{9}$$

Then $H \ge C$ in the Loewner ordering, i.e., $H - C$ is nonnegative definite.

The proof of this proposition is given in Section 4.
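The proposition invites a numerical sanity check before the proof. The sketch below is an added illustration, not from the paper; the three test functions are arbitrary absolutely continuous choices. It estimates $H$ and $C$ by Monte Carlo for $k = 3$ and confirms that the eigenvalues of $H - C$ are nonnegative, up to sampling error.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000_000)

# Three arbitrary test functions and their derivatives.
gs  = [np.sin(x), np.tanh(x), x + 0.1 * x ** 2]
dgs = [np.cos(x), 1.0 / np.cosh(x) ** 2, 1.0 + 0.2 * x]

k = len(gs)
# h_ij = E[g_i'(X) g_j'(X)] estimated by sample means.
H = np.array([[np.mean(dgs[i] * dgs[j]) for j in range(k)] for i in range(k)])
# c_ij = Cov[g_i(X), g_j(X)]: np.cov treats each row as one variable.
C = np.cov(np.vstack(gs))

# All eigenvalues of H - C should be >= 0 (up to Monte Carlo error).
print(np.linalg.eigvalsh(H - C))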
4 A Matrix Variance Inequality for the Normal Distribution

We now prove the matrix variance inequality for $k$ functions of a normally distributed variable. To prove the proposition, we expand $g_1(X), g_2(X), \ldots, g_k(X)$ in orthonormalized Hermite polynomials:

$$g_1(X) = a_0 + a_1 H_1(X) + a_2 H_2(X) + \cdots$$
$$g_2(X) = b_0 + b_1 H_1(X) + b_2 H_2(X) + \cdots$$
$$\vdots$$
$$g_k(X) = u_0 + u_1 H_1(X) + u_2 H_2(X) + \cdots \tag{10}$$

with probability 1, where, as in (3) and (4),

$$E[H_i(X)] = 0, \qquad E[H_i(X) H_j(X)] = \delta_{ij}, \qquad \frac{dH_i(x)}{dx} = \sqrt{i}\, H_{i-1}(x), \tag{11}$$

and

$$a_i = E[g_1(X) H_i(X)], \quad b_i = E[g_2(X) H_i(X)], \quad \ldots, \quad u_i = E[g_k(X) H_i(X)].$$

Then, from (5), we have

$$\operatorname{Var}[g_1(X)] = \sum_{i=1}^{\infty} a_i^2, \quad \operatorname{Var}[g_2(X)] = \sum_{i=1}^{\infty} b_i^2, \quad \ldots, \quad \operatorname{Var}[g_k(X)] = \sum_{i=1}^{\infty} u_i^2, \tag{12}$$

$$\operatorname{Cov}[g_1(X), g_2(X)] = \sum_{i=1}^{\infty} a_i b_i, \quad \ldots, \quad \operatorname{Cov}[g_1(X), g_k(X)] = \sum_{i=1}^{\infty} a_i u_i, \tag{13}$$

$$\operatorname{Cov}[g_2(X), g_k(X)] = \sum_{i=1}^{\infty} b_i u_i. \tag{14}$$

Hence $H$ and $C$ are the $k \times k$ matrices

$$H = \begin{pmatrix} \sum i a_i^2 & \sum i a_i b_i & \cdots & \sum i a_i u_i \\ \sum i a_i b_i & \sum i b_i^2 & \cdots & \sum i b_i u_i \\ \vdots & \vdots & \ddots & \vdots \\ \sum i a_i u_i & \sum i b_i u_i & \cdots & \sum i u_i^2 \end{pmatrix} \tag{15}$$

and

$$C = \begin{pmatrix} \sum a_i^2 & \sum a_i b_i & \cdots & \sum a_i u_i \\ \sum a_i b_i & \sum b_i^2 & \cdots & \sum b_i u_i \\ \vdots & \vdots & \ddots & \vdots \\ \sum a_i u_i & \sum b_i u_i & \cdots & \sum u_i^2 \end{pmatrix}, \tag{16}$$

respectively. Then

$$H - C = \begin{pmatrix} \sum (i-1) a_i^2 & \sum (i-1) a_i b_i & \cdots & \sum (i-1) a_i u_i \\ \sum (i-1) a_i b_i & \sum (i-1) b_i^2 & \cdots & \sum (i-1) b_i u_i \\ \vdots & \vdots & \ddots & \vdots \\ \sum (i-1) a_i u_i & \sum (i-1) b_i u_i & \cdots & \sum (i-1) u_i^2 \end{pmatrix}. \tag{17}$$

Let

$$\alpha_i = \sqrt{i-1}\, a_i, \quad \beta_i = \sqrt{i-1}\, b_i, \quad \ldots, \quad \gamma_i = \sqrt{i-1}\, u_i,$$

and collect these coefficients into the row vectors

$$\tau_{(1)} = (\alpha_1, \alpha_2, \alpha_3, \ldots), \quad \tau_{(2)} = (\beta_1, \beta_2, \beta_3, \ldots), \quad \ldots, \quad \tau_{(k)} = (\gamma_1, \gamma_2, \gamma_3, \ldots). \tag{18}$$

Every entry of (17) then factorizes; for example, $\sum (i-1) a_i b_i = \sum (\sqrt{i-1}\, a_i)(\sqrt{i-1}\, b_i) = \sum \alpha_i \beta_i = \tau_{(1)} \tau'_{(2)}$, where the prime denotes transposition. Consequently we can rewrite (17) as

$$H - C = \begin{pmatrix} \tau_{(1)}\tau'_{(1)} & \tau_{(1)}\tau'_{(2)} & \cdots & \tau_{(1)}\tau'_{(k)} \\ \tau_{(2)}\tau'_{(1)} & \tau_{(2)}\tau'_{(2)} & \cdots & \tau_{(2)}\tau'_{(k)} \\ \vdots & \vdots & \ddots & \vdots \\ \tau_{(k)}\tau'_{(1)} & \tau_{(k)}\tau'_{(2)} & \cdots & \tau_{(k)}\tau'_{(k)} \end{pmatrix} = \begin{pmatrix} \tau_{(1)} \\ \tau_{(2)} \\ \vdots \\ \tau_{(k)} \end{pmatrix} \begin{pmatrix} \tau'_{(1)} & \tau'_{(2)} & \cdots & \tau'_{(k)} \end{pmatrix} \ge 0, \tag{19}$$

since any matrix of the form $TT'$ satisfies $v' T T' v = \lVert T'v \rVert^2 \ge 0$ for every $v \in \mathbb{R}^k$.

5 A Matrix Inequality for the Gamma Distribution

The gamma density function is defined by

$$f(x) = \frac{x^{\alpha} e^{-x}}{\Gamma(\alpha+1)}, \qquad \alpha > -1.$$

[9] uses the Laguerre family of orthogonal polynomials to obtain the inequality

$$\operatorname{Var}[g(X)] \le E\{X [g'(X)]^2\}, \tag{20}$$

with equality if and only if $g(x)$ is linear. The key features of the Laguerre family are

$$E[L_n^{(\alpha)}(X)\, L_m^{(\alpha)}(X)] = \binom{n+\alpha}{n} \delta_{nm}, \qquad \frac{dL_n^{(\alpha)}(x)}{dx} = -L_{n-1}^{(\alpha+1)}(x). \tag{21}$$

Let

$$g_1(x) = \sum_n a_n L_n^{(\alpha)}(x), \quad g_2(x) = \sum_n b_n L_n^{(\alpha)}(x), \quad g_3(x) = \sum_n c_n L_n^{(\alpha)}(x), \quad \ldots, \quad g_k(x) = \sum_n u_n L_n^{(\alpha)}(x). \tag{22}$$
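Inequality (20) can be probed numerically in the same spirit as the normal case. The sketch below is an added illustration, not from the paper; the shape $\alpha = 1.5$ and the test function $g(x) = \sqrt{x}$ are arbitrary choices, and $X$ is drawn from the density $f$ above, i.e. a Gamma$(\alpha+1, 1)$ variable.

import numpy as np

rng = np.random.default_rng(2)
alpha = 1.5                                  # any alpha > -1
x = rng.gamma(shape=alpha + 1.0, size=1_000_000)

g, dg = np.sqrt(x), 0.5 / np.sqrt(x)         # test function and its derivative
print(np.var(g), "<=", np.mean(x * dg ** 2)) # ~0.236 <= 0.25

For this $g$, $E\{X[g'(X)]^2\} = E[X \cdot 1/(4X)] = 1/4$ exactly, while $\operatorname{Var}[\sqrt{X}] \approx 0.236$, so the bound is fairly tight here.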