Bayesian Prediction Intervals for Future Order Statistics From the Generalized Exponential Distribution

JIRSS (2007), Vol. 6, No. 1, pp. 17-30

Bayesian Prediction Intervals for Future Order Statistics from the Generalized Exponential Distribution

Ahmad A. Alamm 1, Mohammad Z. Raqab 1, Mohamed T. Madi 2

1 Department of Mathematics, University of Jordan, Amman 11942, Jordan. (a2a2aa@yahoo.co.uk, mraqab@ju.edu.jo)
2 Department of Statistics, UAE University, Al Ain, United Arab Emirates. (mmadi@uaeu.ac.ae)

Abstract. Let X_1, X_2, ..., X_r be the first r order statistics from a sample of size n from the generalized exponential distribution with shape parameter θ. In this paper, we consider a Bayesian approach to predicting future order statistics based on the observed ordered data. The predictive densities are obtained and used to determine prediction intervals for unobserved order statistics under one-sample and two-sample prediction plans. A numerical study is conducted to illustrate the prediction procedures.

1 Introduction

Let X_1, X_2, ..., X_n denote the order statistics of a sample of size n from the generalized exponential (GE) distribution with probability density function (pdf)

\[ f(x; \theta) = \theta \,(1 - e^{-x})^{\theta - 1} e^{-x}, \quad x > 0,\ \theta > 0, \tag{1} \]

and cumulative distribution function (cdf)

\[ F(x; \theta) = (1 - e^{-x})^{\theta}, \quad x > 0,\ \theta > 0, \tag{2} \]

where θ is a shape parameter. When θ = 1, the GE distribution reduces to the standard exponential distribution. When θ is a positive integer, the GE distribution is the distribution of the maximum of a sample of size θ from the standard exponential distribution. The GE distribution has a unique mode, and its median is −ln(1 − (0.5)^{1/θ}), where ln denotes the natural logarithm. Gupta and Kundu (1999) used this distribution for analyzing skewed data. Gupta and Kundu (2001a) showed that the GE distribution can serve as a good alternative to the gamma or Weibull models; they observed that, in terms of the hazard function, it has more similarities to the gamma family than to the Weibull family.
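As a quick numerical check of (1), (2), and the stated median, the following sketch evaluates the pdf and cdf and verifies that the cdf at the median equals one half; the value of θ is purely illustrative.

```python
import math

# Illustrative shape parameter (not from the paper).
theta = 2.5

def ge_pdf(x):
    """GE pdf (1): theta * (1 - e^{-x})^{theta-1} * e^{-x}."""
    return theta * (1 - math.exp(-x)) ** (theta - 1) * math.exp(-x)

def ge_cdf(x):
    """GE cdf (2): (1 - e^{-x})^theta."""
    return (1 - math.exp(-x)) ** theta

# Stated median: -ln(1 - 0.5^(1/theta)).
median = -math.log(1 - 0.5 ** (1 / theta))

# The pdf integrated from 0 to the median should also give about 1/2.
h = 1e-4
area = sum(ge_pdf(i * h) * h for i in range(1, int(median / h)))
```

Both checks confirm the closed-form cdf, which is the property that makes the GE distribution computationally more convenient than the gamma.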
The GE distribution has an increasing hazard function if θ > 1 and a decreasing hazard function if θ < 1, and its density varies significantly with the shape parameter. It can therefore also be used in situations where the course of a disease is such that mortality reaches a peak after some finite period and then slowly declines. For example, in a study of the curability of breast cancer, Langlands et al. (1979) found that the peak of mortality occurred after three years. It is therefore important to analyze such data sets with appropriate models such as the gamma, Weibull, or GE distributions. The GE distribution has many properties quite similar to those of the gamma distribution, but it has a distribution function similar to that of the Weibull distribution, which can be computed in closed form. The GE family has likelihood ratio ordering on the shape parameter, so it is possible to construct a uniformly most powerful test for one-sided hypotheses on the shape parameter when the scale and location parameters are known. Gupta and Kundu (2003) used the ratio of maximized likelihoods to discriminate between the Weibull and GE distributions. Raqab and Ahsanullah (2001) and Raqab (2002) obtained estimates of the location and scale parameters of the GE distribution based on order statistics and record values, respectively. Recently, Raqab and Madi (2005) used importance sampling techniques in Bayesian estimation and prediction for the GE distribution.

Let X_1, X_2, ..., X_n be the order statistics of a sample of size n from the GE distribution, and let X = (X_1, X_2, ..., X_r), r ≤ n, be the censored sample. Prediction problems arise naturally in the context of order statistics. Two prediction scenarios are considered. First, given the r observed order statistics x_1 ≤ x_2 ≤ ... ≤ x_r, we predict the remaining order statistics x_{r+1}, x_{r+2}, ..., x_n; this is referred to as one-sample prediction.
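The two hazard regimes are easy to verify numerically. The sketch below evaluates h(x) = f(x)/(1 − F(x)) on a grid for one θ above 1 and one below 1 (both values illustrative) and confirms the stated monotonicity.

```python
import math

def hazard(x, theta):
    """GE hazard function h(x) = f(x) / (1 - F(x)) from (1) and (2)."""
    f = theta * (1 - math.exp(-x)) ** (theta - 1) * math.exp(-x)
    return f / (1 - (1 - math.exp(-x)) ** theta)

grid = [0.2 * i for i in range(1, 30)]
incr = [hazard(x, 2.0) for x in grid]   # theta > 1: increasing hazard
decr = [hazard(x, 0.5) for x in grid]   # theta < 1: decreasing hazard
```

In both regimes the hazard tends to 1 as x grows, the hazard of the underlying standard exponential.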
The second scenario, known as two-sample prediction, consists of predicting the first m order statistics in a future sample. Prediction intervals for different statistics of future observations have been discussed in the literature. Ahsanullah (1980) developed the best linear unbiased predictors (BLUPs) of future record statistics from the exponential distribution. Raqab (1997) obtained modified maximum likelihood predictors of future order statistics from normal samples. Other prediction problems can be found in Lawless (1973), Kaminsky and Nelson (1974), Evans and Ragab (1983), and Sartawi and Abu-Salih (1991).

In the context of prediction, we say that (L(X), U(X)) is a 100(1 − α)% prediction interval for a future random variable Y if

\[ P(L(X) < Y < U(X)) = 1 - \alpha, \]

where L(X) and U(X) are the lower and upper prediction limits for the random variable Y, and 1 − α is called the prediction confidence coefficient.

In this paper, we use Bayesian statistical analysis to predict future order statistics from the GE distribution on the basis of some ordered data. In Section 2, we obtain prediction intervals for order statistics under a one-sample prediction plan. In Section 3, we present prediction intervals for future ordered data under a two-sample prediction plan. Section 4 illustrates the proposed methods using a simulated data set and different choices of prior parameters.

2 Bayesian Prediction Interval for the (r + s)th Order Statistic: One-Sample Case

The likelihood for θ given the type II censored sample X = (X_1, X_2, ..., X_r) is

\[ f(\mathbf{x} \mid \theta) = \frac{n!}{(n-r)!}\, \theta^{r} \prod_{i=1}^{r} e^{-x_i} \prod_{i=1}^{r} (1 - e^{-x_i})^{\theta - 1} \left[ 1 - (1 - e^{-x_r})^{\theta} \right]^{n-r}, \quad 0 \le x_1 \le \dots \le x_r,\ \theta > 0. \tag{3} \]

We assume that θ follows a gamma prior distribution with density

\[ \pi(\theta) = \frac{b^{a}}{\Gamma(a)}\, \theta^{a-1} e^{-b\theta}, \quad \theta > 0. \tag{4} \]
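Before turning to the posterior, it may help to see the censored likelihood (3) in use. A minimal sketch, with a hypothetical censored sample (not the paper's data): since the derivative of the log-likelihood in θ is strictly decreasing, the log-likelihood is unimodal and a golden-section search recovers the maximum likelihood estimate.

```python
import math

# Hypothetical type II censored sample: r = 5 observed out of n = 8.
x = [0.3, 0.6, 1.1, 1.5, 2.0]
n, r = 8, len(x)

def loglik(theta):
    """Log of (3), up to the additive constant ln(n!/(n-r)!)."""
    w = 1 - math.exp(-x[-1])
    return (r * math.log(theta)
            + sum(-xi for xi in x)
            + (theta - 1) * sum(math.log(1 - math.exp(-xi)) for xi in x)
            + (n - r) * math.log(1 - w ** theta))

# Golden-section search for the MLE of theta (loglik is unimodal).
lo, hi = 1e-6, 50.0
for _ in range(200):
    m1, m2 = lo + 0.382 * (hi - lo), lo + 0.618 * (hi - lo)
    if loglik(m1) < loglik(m2):
        lo = m1
    else:
        hi = m2
theta_hat = 0.5 * (lo + hi)
```

The gamma prior (4) then combines with this likelihood in the usual way; the θ-dependent factors of (3) are exactly those that appear in the posterior below.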
From (3) and (4), we get the posterior density of θ as

\[ \pi(\theta \mid \mathbf{x}) = \frac{ \theta^{a+r-1} e^{-b\theta}\, v^{\theta} (1 - \omega^{\theta})^{n-r} }{ \Gamma(a+r) \sum_{k=0}^{n-r} \binom{n-r}{k} (-1)^{k} \left( b - \ln(\omega^{k} v) \right)^{-(a+r)} }, \tag{5} \]

where

\[ v = \prod_{i=1}^{r} (1 - e^{-x_i}) \quad \text{and} \quad \omega = 1 - e^{-x_r}. \]

Let X_{r+1}, X_{r+2}, ..., X_n be the future remaining order statistics. The extended likelihood function is

\[ f(x_1, x_2, \dots, x_{r+s}, \dots, x_n \mid \theta) = n!\, f(x_1 \mid \theta) \cdots f(x_{r+s} \mid \theta) \cdots f(x_n \mid \theta), \quad 0 \le x_1 \le \dots \le x_r \le \dots \le x_n. \tag{6} \]

Integrating (6) with respect to x_{r+1}, ..., x_{r+s−1}, x_{r+s+1}, ..., x_n, we have

\[ f(x_1, \dots, x_r, x_{r+s} \mid \theta) = \frac{n!}{(s-1)!\,(n-r-s)!} \prod_{i=1}^{r} f(x_i)\, \left[ F(x_{r+s}) - F(x_r) \right]^{s-1} \left[ 1 - F(x_{r+s}) \right]^{n-r-s} f(x_{r+s}). \tag{7} \]

Using (1) and (2) and expanding [F(x_{r+s}) − F(x_r)]^{s−1} binomially, we obtain

\[ f(x_1, \dots, x_r, x_{r+s} \mid \theta) = \frac{n!}{(s-1)!\,(n-r-s)!}\, \theta^{r+1} \prod_{i=1}^{r} \left[ e^{-x_i} (1 - e^{-x_i})^{\theta-1} \right] e^{-x_{r+s}} (1 - e^{-x_{r+s}})^{\theta-1} \left[ 1 - (1 - e^{-x_{r+s}})^{\theta} \right]^{n-r-s} \sum_{i=0}^{s-1} (-1)^{i} \binom{s-1}{i} (1 - e^{-x_r})^{\theta i} (1 - e^{-x_{r+s}})^{\theta(s-i-1)}. \tag{8} \]

It follows from (3) and (8) that

\[ f(x_{r+s} \mid \theta, \mathbf{x}) = s \binom{n-r}{s} \theta\, e^{-x_{r+s}} (1 - e^{-x_{r+s}})^{\theta-1} \left[ 1 - (1 - e^{-x_{r+s}})^{\theta} \right]^{n-r-s} (1 - \omega^{\theta})^{r-n} \sum_{i=0}^{s-1} (-1)^{i} \binom{s-1}{i} \omega^{\theta i} (1 - e^{-x_{r+s}})^{\theta(s-i-1)}. \tag{9} \]

Forming the product of f(x_{r+s} | θ, x) and the posterior density of θ given in (5) and integrating out θ, it may be shown that, for 1 ≤ s ≤ n − r, the predictive density function of x_{r+s} given x is

\[ p(x_{r+s} \mid \mathbf{x}) = \frac{ s \binom{n-r}{s} e^{-x_{r+s}} \sum_{i=0}^{s-1} \sum_{j=0}^{n-r-s} (-1)^{i+j} \binom{s-1}{i} \binom{n-r-s}{j} W_{ij} }{ \Gamma(a+r) \sum_{k=0}^{n-r} \binom{n-r}{k} (-1)^{k} \left( b - \ln(\omega^{k} v) \right)^{-(a+r)} }, \quad x_{r+s} > x_r, \tag{10} \]

where

\[ W_{ij} = \frac{ \Gamma(a+r+1) }{ (1 - e^{-x_{r+s}}) \left[ b - \ln\!\left( v\, \omega^{i} (1 - e^{-x_{r+s}})^{\,j+s-i} \right) \right]^{a+r+1} }. \]

The predictive density p(x_{r+s} | x) can be used to find the prediction bounds on x_{r+s}.
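Since the normalizing constant of the posterior (5) comes from a finite binomial expansion, it is easy to sanity-check. The sketch below (with a hypothetical censored sample and illustrative hyperparameters a, b, not the paper's) compares the closed-form constant against direct numerical integration of the unnormalized posterior.

```python
import math

# Hypothetical censored sample (r = 5 of n = 8) and illustrative prior.
x = [0.3, 0.6, 1.1, 1.5, 2.0]
n, r = 8, len(x)
a, b = 2.0, 1.0

v = math.prod(1 - math.exp(-xi) for xi in x)
w = 1 - math.exp(-x[-1])          # omega = 1 - exp(-x_r)

# Closed form: Gamma(a+r) * sum_k C(n-r,k) (-1)^k (b - ln(w^k v))^{-(a+r)}
norm_closed = math.gamma(a + r) * sum(
    math.comb(n - r, k) * (-1) ** k
    * (b - math.log(w ** k * v)) ** -(a + r)
    for k in range(n - r + 1))

def unnormalized(theta):
    """Numerator of (5): theta^{a+r-1} e^{-b theta} v^theta (1-w^theta)^{n-r}."""
    return (theta ** (a + r - 1) * math.exp(-b * theta)
            * v ** theta * (1 - w ** theta) ** (n - r))

# Riemann-sum integration over theta in (0, 40], step 0.001.
h = 1e-3
norm_numeric = sum(unnormalized(i * h) * h for i in range(1, 40000))
```

The agreement confirms that the alternating sum in the denominator of (5) is exactly the gamma-integral of the numerator, term by term of the binomial expansion of (1 − ω^θ)^{n−r}.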
Note that

\[ P(X_{r+s} \ge y \mid \mathbf{x}) = \frac{ s \binom{n-r}{s} }{T} \sum_{i=0}^{s-1} \sum_{j=0}^{n-r-s} \frac{ (-1)^{i+j} \binom{s-1}{i} \binom{n-r-s}{j} }{ (j-i+s) } \left[ b - \ln(\omega^{i} v) \right]^{-(a+r)} \left\{ 1 - \left[ 1 - \frac{ (j-i+s) \ln(1 - e^{-y}) }{ b - \ln(\omega^{i} v) } \right]^{-(a+r)} \right\}, \tag{11} \]

where

\[ T = \sum_{k=0}^{n-r} \binom{n-r}{k} (-1)^{k} \left( b - \ln(\omega^{k} v) \right)^{-(a+r)}. \]

Let L(x) and U(x) be the lower and upper bounds of a 100(1 − α)% prediction interval, respectively. Then the 100(1 − α)% prediction bounds can be obtained by equating (11) to 1 − α/2 for the lower limit and to α/2 for the upper limit.
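The bounds can be computed by solving (11) numerically. The following sketch (again with a hypothetical censored sample and illustrative hyperparameters, not the paper's simulation) evaluates the closed-form survival probability and finds a 95% one-sample prediction interval for X_{r+s} by bisection; since (11) equals 1 at y = x_r and decreases to 0, the bisection bracket is well defined.

```python
import math

# Hypothetical data: predict X_{r+s} = X_7 from r = 5 observed of n = 8.
x = [0.3, 0.6, 1.1, 1.5, 2.0]
n, r, s = 8, len(x), 2
a, b = 2.0, 1.0                   # illustrative gamma prior parameters
alpha = 0.05

v = math.prod(1 - math.exp(-xi) for xi in x)
w = 1 - math.exp(-x[-1])          # omega = 1 - exp(-x_r)

T = sum(math.comb(n - r, k) * (-1) ** k
        * (b - math.log(w ** k * v)) ** -(a + r)
        for k in range(n - r + 1))

def survival(y):
    """P(X_{r+s} >= y | x) from the closed form (11)."""
    u = math.log(1 - math.exp(-y))
    total = 0.0
    for i in range(s):
        c = b - math.log(w ** i * v)
        for j in range(n - r - s + 1):
            m = j + s - i
            total += ((-1) ** (i + j) * math.comb(s - 1, i)
                      * math.comb(n - r - s, j) * c ** -(a + r) / m
                      * (1 - (1 - m * u / c) ** -(a + r)))
    return s * math.comb(n - r, s) * total / T

def bound(p):
    """Solve survival(y) = p by bisection on (x_r, x_r + 40)."""
    lo, hi = x[-1], x[-1] + 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if survival(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L, U = bound(1 - alpha / 2), bound(alpha / 2)   # 95% prediction interval
```

The check survival(x_r) = 1 is a useful diagnostic: the predictive density (10) is supported on (x_r, ∞), so any implementation error in the double sum shows up immediately there.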
