Disaggregation procedures for stochastic hydrology based on nonparametric density estimation
David G. Tarboton, Ashish Sharma,¹ and Upmanu Lall
Utah Water Research Laboratory, Utah State University, Logan
Abstract. Synthetic simulation of streamflow sequences is important for the analysis of water supply reliability. Disaggregation models are an important component of the stochastic streamflow generation methodology. They provide the ability to simulate multiseason and multisite streamflow sequences that preserve statistical properties at multiple timescales or space scales. In recent papers we have suggested the use of nonparametric methods for streamflow simulation. These methods provide the capability to model time series dependence without a priori assumptions as to the probability distribution of streamflow. They remain faithful to the data and can approximate linear or nonlinear dependence. In this paper we extend the use of nonparametric methods to disaggregation models. We show how a kernel density estimate of the joint distribution of disaggregate flow variables can form the basis for conditional simulation based on an input aggregate flow variable. This methodology preserves summability of the disaggregate flows to the input aggregate flow. We show through applications to synthetic data and streamflow from the San Juan River in New Mexico how this conditional simulation procedure preserves a variety of statistical attributes.
1. Introduction
A goal of stochastic hydrology is to generate synthetic streamflow sequences that are statistically similar to observed streamflow records. Such synthetic streamflow sequences are useful for analyzing reservoir operation and stream management policies. Often, multiple reservoir sites and stream sections need to be considered as part of a system operation plan, and the operating horizon may extend from a few days to several years. For proper system operation it is important that the streamflow sequences generated for the different sites and/or time periods be "compatible." Practically, this suggests that (1) the flow recorded at a downstream gage be represented as the sum of the tributary flows and channel losses/gains, (2) the annual flow represent a sum of the monthly flows, (3) the monthly fractions of flows in wet/dry years be representative of wet/dry years, respectively, and (4) the relative delay between the rise and fall of streams in the basin be reproduced. Statistically, this implies that the joint probability distribution of the flow sequences at the different sites and time periods needs to be preserved. As the number of sites/time periods increases, this entails the estimation/specification of a high dimensional density function from a relatively small number of data points. Recognizing this problem, a significant body of hydrologic literature evolved on disaggregation models [Harms and Campbell, 1967; Valencia and Schaake, 1972; Mejia and Rousselle, 1976; Curry and Bras, 1978; Lane, 1979; Salas et al., 1980; Svanidze, 1980; Stedinger and Vogel, 1984; Bras and Rodriguez-Iturbe, 1985; Stedinger et al., 1985; Grygier and Stedinger, 1988; Santos and Salas, 1992]. The essence of these models is to develop a staging framework [e.g., Santos and Salas, 1992], where flow sequences are generated at a given level of aggregation and then disaggregated into component flows (e.g., seasonal from annual, or monthly from seasonal). At each stage a low dimensional estimation problem is solved. Summability of disaggregated flows and their mutual correlation structure (after some transformation) is preserved. Historically, a parametric structure has been used for this process. In this paper we present a nonparametric approach to the disaggregation of streamflow.

Disaggregation is the simulation of the components of a vector of disaggregated variables X given (i.e., conditional on) an aggregate variable Z. The problem is posed in terms of sampling from the conditional probability density function
f(X | Z) = f(X, Z) / ∫ f(X, Z) dX    (1)

In this equation f(X, Z) is the joint probability density function of the vector X of disaggregate variables (monthly or tributary streamflows) and Z the aggregate variable (annual or main stem streamflow) obtained from an aggregate model at each aggregate time step. The denominator in (1) above is the marginal probability density function of the aggregate variable Z, derived by integrating the joint distribution over all the components of X. Specifically, we consider a d-dimensional vector X = (X_1, X_2, ..., X_d)^T with aggregate variable Z = X_1 + X_2 + ... + X_d. The superscript T denotes transpose. Vectors are taken to be column vectors. The model is estimated from n observations of X and Z, denoted x_i and z_i. The components of x_i are the historical disaggregate components, such as monthly, seasonal, or tributary flows that comprise the historical aggregate z_i. We use kernel density estimation techniques to estimate the joint and conditional densities in (1). These methods are data adaptive; that is, they use the historical data (the historical aggregate and component time series) to define the probability densities. Assumptions as to the form
¹Now at Department of Water Engineering, School of Civil Engineering, University of New South Wales, Sydney, Australia.

Copyright 1998 by the American Geophysical Union. Paper number 97WR02429. 0043-1397/98/97WR-02429$09.00

WATER RESOURCES RESEARCH, VOL. 34, NO. 1, PAGES 107–119, JANUARY 1998
of dependence (e.g., linear or nonlinear) or to the probability density function (e.g., Gaussian) are avoided.

Historically, disaggregation approaches to streamflow synthesis have involved some variant of a linear model of the form

X_t = A Z_t + B V_t    (2)

Here X_t is the vector of disaggregate variables at time t, Z_t is the aggregate variable, and V_t is a vector of independent random innovations, usually drawn from a Gaussian distribution. A and B are parameter matrices. A is chosen or estimated to reproduce the correlation between aggregate and disaggregate flows. B is estimated to reproduce the correlation between individual disaggregate components. The many model variants in the literature make different assumptions as to the structure and sparsity of these matrices and which correlations the model should be made to reproduce directly. They also consider a variety of normalizing transformations applied to the data, prior to use of (2), to account for the fact that monthly streamflow data are seldom normally distributed. In these models summability, the fact that disaggregate variables should add up to the aggregate quantity, has also been an issue. It can be shown [see Bras and Rodriguez-Iturbe, 1985, p. 148] with a model of the form of (2) that summability of the disaggregate variables to the aggregate variables is guaranteed. However, when a normalizing transformation is used, or when various elements of the matrices are taken as zero during simplification, summability is lost. In these cases investigators [e.g., Grygier and Stedinger, 1988] have suggested empirical adjustment procedures to restore summability.

The key idea to recognize from these models, exemplified by (2), is that they provide a mathematical framework where a joint distribution of disaggregate and aggregate variables is specified. However, the specified model structure is parametric. It is imposed by the form of (2) and the normalizing transformations applied to the data to represent marginal distributions.

Some of the drawbacks of the parametric approach are the following.

1. Since (2) involves linear combinations of random variables, it is mainly compatible with Gaussian distributions. Where the marginal distribution of the streamflow variables involved is not Gaussian (e.g., perhaps there is significant skewness), normalizing transformations are required for each streamflow component. Equation (2) would then be applied to the normalized flow variables. It is difficult to find a general normalizing transformation and retain statistical properties of the streamflow process in the untransformed multivariable space.

2. The linear nature of (2) limits it from representing any nonlinearity in the dependence structure between variables, except through the normalizing transformation used. Given the current recognition of the importance of nonlinearity in many physical processes [e.g., Tong, 1990; Schertzer and Lovejoy, 1991], we prefer at the outset not to preclude or limit the representation of nonlinearity.

Following in the spirit of our recent work [Lall and Sharma, 1996; Sharma et al., 1997], the purpose of this paper is to develop a nonparametric disaggregation methodology. The necessary joint probability density functions are estimated directly from the historic data using kernel density estimates. These methods circumvent the drawbacks of the parametric methods that were listed. The methods are data driven and relatively automatic, so nonlinear dependence will be incorporated to the extent suggested by the data. Difficult subjective choices as to appropriate marginal distributions and normalizing transformations are avoided.

This paper is organized as follows. First the multivariate kernel density estimator used in the disaggregation model is presented. This is followed by a description of our nonparametric disaggregation approach. The performance of the nonparametric disaggregation procedure is then evaluated by applications to synthetic data from a known nonlinear model and to streamflow from the San Juan River near Archuleta, New Mexico, United States. Results from our approach are compared to those from SPIGOT [Grygier and Stedinger, 1990], a popular disaggregation software package based on linearizing transformations of the historical streamflow time series.
2. Kernel Density Estimation
Kernel density estimation entails a weighted moving average of the empirical frequency distribution of the data. Most nonparametric density estimators can be expressed as kernel density estimators [Scott, 1992, p. 125]. In this paper we use multivariate kernel density estimators with Gaussian kernels and bandwidth selected using least squares cross validation [e.g., Scott, 1992, p. 160]. This bandwidth selection method is one of many available methods. Its performance was compared with various cross-validation estimators for samples of sizes typically encountered in hydrology using a simulation study [Sharma, 1996]. Our methodology is intended to be generic and should work with any bandwidth and kernel density estimation method. Procedures for bandwidth and kernel selection are an area of active research in the nonparametric statistics community, and as better methods become available they can be easily incorporated into our model. For a review of hydrologic applications of kernel density and distribution function estimators, readers are referred to Lall [1995]. Silverman [1986] and Scott [1992] provide good introductory texts.

A multivariate Gaussian kernel density estimate for a d-dimensional vector x can be written as
f̂(x) = (1/n) Σ_{i=1}^{n} [1 / ((2π)^{d/2} det(H)^{1/2})] exp[−(x − x_i)^T H^{−1} (x − x_i) / 2]    (3)

where det( ) denotes determinant, n is the number of observed vectors x_i, and H is a symmetric positive definite d × d bandwidth matrix [Wand and Jones, 1994]. This density estimate is formed by adding multivariate Gaussian kernels with a covariance matrix H centered at each observation x_i. A useful specification of the bandwidth matrix H is

H = λ²S    (4)

Here S is the sample covariance matrix of the data, and λ² prescribes the bandwidth relative to this estimate of scale. These are parameters of the model that are estimated from the data. The procedure of scaling the bandwidth matrix proportional to the covariance matrix (equation (4)) is called "sphering" [Fukunaga, 1972] and ensures that all kernels are oriented along the estimated principal components of the covariance matrix.

The choice of the bandwidth, λ, is an important issue in kernel density estimation. A small value of λ can result in a density estimate that appears "rough" and has a high variance. On the other hand, too high a bandwidth results in an "oversmoothed" density estimate with modes and asymmetries smoothed out. Such an estimate has low variance but is more biased with respect to the underlying density. This bias-variance trade-off [Silverman, 1986, section 3.3.1] plays an important role in the choice of λ.

Several methods have been proposed to estimate the "optimal" bandwidth for a given data set. Least squares cross validation (LSCV) [Silverman, 1986, pp. 48–52] is one such method that is based on minimizing an estimate of the integrated square error of the kernel density estimate.
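A direct transcription of the estimator (3) with the sphered bandwidth matrix of (4) may help make the notation concrete. This is an illustrative sketch (the synthetic data and the bandwidth λ = 0.6 are arbitrary choices), not the authors' code; the grid check simply confirms the estimate integrates to approximately one.

```python
import numpy as np

def kde_gaussian(x, data, lam):
    """Multivariate Gaussian kernel density estimate (eq. (3)) with the
    'sphering' bandwidth matrix H = lam**2 * S of eq. (4)."""
    n, d = data.shape
    S = np.cov(data, rowvar=False)             # sample covariance matrix
    H = lam**2 * S
    Hinv = np.linalg.inv(H)
    norm = (2 * np.pi) ** (d / 2) * np.linalg.det(H) ** 0.5
    diffs = x - data                            # (n, d) differences x - x_i
    # Quadratic form (x - x_i)^T H^{-1} (x - x_i) for every observation i
    q = np.einsum('ij,jk,ik->i', diffs, Hinv, diffs)
    return np.exp(-q / 2).sum() / (n * norm)

# Toy check in d = 2: a Riemann sum of the estimate over a wide grid
# should be close to 1, since each kernel integrates to 1.
rng = np.random.default_rng(1)
data = rng.standard_normal((50, 2))
grid = np.linspace(-5, 5, 61)
dx = grid[1] - grid[0]
mass = sum(kde_gaussian(np.array([u, v]), data, 0.6)
           for u in grid for v in grid) * dx**2
```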
Sain et al. [1994] provide an expression for the LSCV score in any dimension with multivariate Gaussian kernel functions and H a diagonal matrix. Adamowski and Feluch [1991] provide a similar expression for the bivariate case with Gaussian kernels. Here we generalize these results for use with the multivariate density estimator (3), which allows off-diagonal terms in H:

LSCV(H) = [1 / (2^d π^{d/2} n det(H)^{1/2})] {1 + (1/n) Σ_{i=1}^{n} Σ_{j≠i} [exp(−L_ij/4) − 2^{d/2+1} exp(−L_ij/2)]}    (5)

where

L_ij = (x_i − x_j)^T H^{−1} (x_i − x_j)    (6)

We use numerical minimization of (5) over the single parameter λ with bandwidth matrix from (4) to estimate all the necessary probability density functions. We recognize that LSCV bandwidth estimation is occasionally degenerate, and so on the basis of suggestions by Silverman [1986, p. 52] and the upper bound given by Scott [1992, p. 181], we restrict our search to the range between 0.25 and 1.1 times the mean square error Gaussian reference bandwidth. This is the bandwidth that would be optimal if the data were from a Gaussian distribution.
3. Nonparametric Disaggregation Model, NPD
In this section a d-dimensional disaggregation model (denoted NPD) is developed. The model can be used to simulate d-dimensional disaggregate vectors X_t based on an input aggregate series Z_t. Z_t can be obtained from any suitable model for the aggregate streamflow series; however, we recommend a nonparametric model such as those described by Sharma et al. [1997] or Lall and Sharma [1996]. Since the same procedure is applied for each time step, from here on the subscript t on X_t is dropped to save notation.

Disaggregation is posed in terms of resampling from the conditional density function of (1). We need a model that, given Z, provides realizations of X. To use (1), an estimate of the d + 1 dimensional joint density function f(X_1, X_2, ..., X_d, Z) is required. However, because of summability, this has all its mass on the d-dimensional hyperplane defined by

X_1 + X_2 + ··· + X_d = Z    (7)

This probability density can then be represented as
f(X_1, X_2, ···, X_d, Z) = f(X_1, X_2, ···, X_d) δ(Z − X_1 − X_2 − ··· − X_d)    (8)

where δ( ) is the Dirac delta function. The Dirac delta function is a density function that integrates to one with all its mass concentrated at the origin. Kernel density estimation is used to estimate f(X_1, X_2, ..., X_d) based on the data. The conditional density function is then
f(X_1, X_2, ···, X_d | Z) = δ(Z − X_1 − X_2 − ··· − X_d) f(X_1, X_2, ···, X_d) / ∫_{plane X_1+X_2+···+X_d=Z} f(X_1, X_2, ···, X_d) dA    (9)

For a particular Z this conditional density function can be visualized geometrically as the probability density on a d − 1 dimensional hyperplane slice through the d-dimensional density f(X_1, X_2, ···, X_d), the hyperplane being defined by X_1 + X_2 + ··· + X_d = Z. This is illustrated in Figure 1 for d = 2. There are really only d − 1 degrees of freedom in the conditional simulation. The conditional probability density function (pdf) in (9) can then be specified through a coordinate rotation of the vector X = (X_1, X_2, ..., X_d)^T into a new vector Y = (Y_1, Y_2, ..., Y_d)^T whose last coordinate is aligned perpendicular to the hyperplane defined by (7). Gram-Schmidt orthonormalization [e.g., Lang, 1970, p. 138] is used to determine this rotation.

The appendix gives the derivation of the rotation matrix R such that

Y = RX    (10)

R has the property that R^T = R^{−1} (see appendix). With this rotation the last coordinate of Y, Y_d, is in fact a rescaling of Z, denoted Z′:

Y_d = Z/√d = Z′    (11)
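One way to construct such a rotation, consistent with (7), (10), and (11), is to require that the last row of R be the unit normal (1, ..., 1)/√d to the summability hyperplane and to complete the remaining rows by orthonormalization. The paper's appendix derives R by Gram-Schmidt; the sketch below emulates that with a QR factorization, which performs the same orthonormalization numerically.

```python
import numpy as np

def rotation_matrix(d):
    """Orthonormal rotation R (eq. (10)) whose last row is (1,...,1)/sqrt(d),
    so that Y_d = (X_1 + ... + X_d)/sqrt(d) = Z/sqrt(d) = Z' (eq. (11)).
    QR factorization orthonormalizes a basis whose first vector is the
    normal to the summability hyperplane of eq. (7)."""
    normal = np.ones((d, 1)) / np.sqrt(d)
    Q, _ = np.linalg.qr(np.hstack([normal, np.eye(d)[:, :d - 1]]))
    Q[:, 0] *= np.sign(Q[0, 0])           # fix sign so the normal is +1/sqrt(d)
    # Rows: the d-1 in-plane directions, then the normal as last coordinate.
    return np.vstack([Q[:, 1:].T, Q[:, 0]])

R = rotation_matrix(3)
X = np.array([2.0, 3.0, 5.0])             # aggregate Z = 10
Y = R @ X
# R is orthonormal (R^T = R^{-1}) and Y_d is the rescaled aggregate Z/sqrt(d).
```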
We also denote the first d − 1 components of Y as U^T = (Y_1, Y_2, ..., Y_{d−1})^T. These reflect the true d − 1 degrees of freedom in the conditional simulation. With this, Y = (U^T, Z′)^T.

Figure 1. Illustration of a conditional density estimate f̂(X_1, X_2 | Z) with Z = X_1 + X_2 as a slice through the joint density function. For clarity this illustration uses only three data points, shown as dots in the (X_1, X_2) plane. Since the joint density estimate is formed by adding bivariate kernels, the conditional density is estimated as a sum of kernel slices.

Now we actually resample from f(U | Z′) = f(Y_1, Y_2, ..., Y_{d−1} | Z′) and recover the disaggregate components of X by back rotation. The kernel density estimate f̂(U | Z′) is obtained by applying (3) in rotated coordinates. Substituting X = R^T Y into (3) with bandwidth matrix H from (4), one obtains

f̂(Y) = (1/n) Σ_{i=1}^{n} [1 / ((2π)^{d/2} λ^d det(S)^{1/2})] exp[−(Y − y_i)^T R S^{−1} R^T (Y − y_i) / (2λ²)]    (12)

Now recognize that R S^{−1} R^T = (R S R^T)^{−1} = S_y^{−1} represents a rotation of the covariance matrix S into S_y. Also det(S_y) = det(S). The resulting density estimate is therefore the same no matter whether the original or rotated coordinates are used. The conditional density function we resample from is
f̂(U | Z′) = f̂(Y_1, Y_2, ···, Y_{d−1} | Z′) = f̂(U, Z′) / ∫ f̂(U, Z′) dU    (13)

where f̂(U, Z′) = f̂(Y) is obtained from (12). Recalling that U denotes (Y_1, Y_2, ..., Y_{d−1})^T, the vector Y without the last component, the covariance matrix S_y is partitioned as follows:

S_y = [ S_u     S_uz ]
      [ S_uz^T  S_z  ]    (14)

S_u is the (d − 1) × (d − 1) covariance matrix of U, S_z is the 1 × 1 variance of Z′, and S_uz is a vector of cross covariances between each component of U and Z′. Substituting (12) in (13) we obtain
f̂(U | Z′) = [1 / ((2πλ²)^{(d−1)/2} det(S*)^{1/2})] Σ_{i=1}^{n} w_i exp[−(U − b_i)^T (S*)^{−1} (U − b_i) / (2λ²)]    (15)

where

w_i = exp[−(Z′ − z′_i)² / (2λ²S_z)] / Σ_{j=1}^{n} exp[−(Z′ − z′_j)² / (2λ²S_z)]    (16)

S* = S_u − S_uz S_z^{−1} S_uz^T    (17)

b_i = u_i + S_uz S_z^{−1} (Z′ − z′_i)    (18)

Here u_i and z′_i denote the first d − 1 components and the last component, respectively, of the rotated observation y_i = R x_i. The conditional density function f̂(U | Z′) can therefore be seen as a weighted sum of n Gaussian density functions, each with mean b_i and covariance λ²S*. Equation (16) shows that the weight w_i, which controls the contribution of point i to the conditional density estimate, depends on the distance of z′_i from the conditioning value Z′. Observations that lie closer to the conditioning value (i.e., where (Z′ − z′_i) is small) receive greater weight. The weights are normalized to add to unity.

Resampling from (15) proceeds as follows.

Preprocessing:
1. Compute the sample covariance matrix S from the data x_i.
2. Solve for λ by numerically minimizing (5) with H from (4), using 0.25 and 1.1 times λ_ref,

λ_ref = [4/(d + 2)]^{1/(d+4)} n^{−1/(d+4)}    (19)

which is the mean square error Gaussian reference bandwidth, to bracket the search.
3. Compute R, S_z, and S* from (A2), (14), and (17).
4. Use singular value decomposition to obtain B such that BB^T = S*.

At each time step:
5. Given Z from the aggregate model at each time step, first calculate the weight w_i associated with each observation, using (16).
6. Pick a point i with probability w_i.
7. Generate a d − 1 dimensional unit Gaussian vector V. Each component in V is independent N(0, 1).
8. The simulated U is obtained from U = b_i + λBV.
9. Augment this to obtain Y, Y = (U^T, Z′)^T.
10. Rotate back to the original coordinate space, X = R^T Y.

Steps 5–10 are repeated for each aggregate time step.

A complication can arise because the Gaussian kernels used in the kernel density estimate have infinite support. Thus they assign some (hopefully small) probability to regions of the domain where streamflow is negative (i.e., invalid or out of bounds). This leakage of probability across boundaries is a problem associated with kernel density estimates based on kernels with infinite support. Kernel density estimates also suffer from problems of bias near the boundaries. Here we address the leakage by checking the flows for validity (positiveness) and, if they are invalid, repeating steps 7–10 for a given time step. That is, we regenerate a new vector V and try again. This amounts to cutting the portion of each kernel that is out of bounds and renormalizing that kernel to have the appropriate mass over the within-bounds domain. We record how often this is done, as frequent boundary normalization is symptomatic of substantial boundary leakage. Alternative approaches that use special boundary kernels [Hall and Wehrly, 1991; Wand et al., 1991; Djojosugito and Speckman, 1992; Jones, 1993] or work with log-transformed data could be used in cases where this method for handling the boundaries is found to be unsatisfactory.
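The preprocessing and per-time-step resampling above can be condensed into a short self-contained sketch. Everything below is illustrative: the synthetic "historical" record, d = 3, λ = 0.5, and the use of a Cholesky factor for B (any B with BB^T = S* works) are assumptions, and the positivity check of step 7 is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, lam = 3, 200, 0.5

def rotation_matrix(d):
    """R of eq. (10): last row is (1,...,1)/sqrt(d), rest orthonormal."""
    normal = np.ones((d, 1)) / np.sqrt(d)
    Q, _ = np.linalg.qr(np.hstack([normal, np.eye(d)[:, :d - 1]]))
    Q[:, 0] *= np.sign(Q[0, 0])
    return np.vstack([Q[:, 1:].T, Q[:, 0]])

# Synthetic positive, correlated "historical" disaggregate flows x_i.
x = np.exp(rng.multivariate_normal(np.zeros(d), 0.2 * np.eye(d) + 0.1, size=n))

# Preprocessing (steps 1-4, with the bandwidth lam assumed given).
R = rotation_matrix(d)
y = x @ R.T                                   # rotated observations y_i = R x_i
S_y = np.cov(y, rowvar=False)
S_u, S_uz, S_z = S_y[:-1, :-1], S_y[:-1, -1], S_y[-1, -1]   # eq. (14)
S_star = S_u - np.outer(S_uz, S_uz) / S_z                    # eq. (17)
B = np.linalg.cholesky(S_star)                # any B with B B^T = S*

def disaggregate(Z):
    """Steps 5-10 for one aggregate value Z (positivity check omitted)."""
    Zp = Z / np.sqrt(d)                        # rescaled aggregate, eq. (11)
    w = np.exp(-(Zp - y[:, -1])**2 / (2 * lam**2 * S_z))     # eq. (16)
    w /= w.sum()
    i = rng.choice(n, p=w)                     # step 6: pick kernel i
    b = y[i, :-1] + S_uz / S_z * (Zp - y[i, -1])             # eq. (18)
    U = b + lam * B @ rng.standard_normal(d - 1)             # step 8
    return R.T @ np.append(U, Zp)              # steps 9-10: back-rotate

X = disaggregate(5.0)
```

Because the last row of R is proportional to (1, ..., 1), the back rotation reproduces the aggregate exactly, so summability holds without any empirical adjustment step.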
4. Model Evaluation
This section explores the use and effectiveness of the NPD approach. It is first applied to data from a specified bimodal distribution. This tests the model's ability to maintain distributional characteristics such as nonlinearity and bimodality. It is then applied to simulate monthly streamflow in the San Juan River.

To provide a point of reference, we also generate results using SPIGOT [Grygier and Stedinger, 1988, 1990]. SPIGOT is a parametric synthetic streamflow generation package that includes an annual streamflow generation module and, for annual to monthly disaggregation, the condensed model described by Grygier and Stedinger [1988, 1990]. SPIGOT's autoregressive model of order 1 (AR1) was used to generate the annual streamflow. SPIGOT first transforms the historical annual and monthly (or seasonal) flows to Gaussian using four choices for the marginal probability densities. These are (1) Gaussian, (2) two-parameter lognormal, (3) three-parameter lognormal, and (4) an approximate three-parameter gamma using the Wilson-Hilferty transformation [Loucks et al., 1981, p. 286]. The parameters for each distribution are estimated by matching moments and the best-fitting distribution chosen by measuring the correlation of observations to the fitted distribution quantiles (Filliben's correlation statistic [Grygier and Stedinger, 1990]).

The next subsection describes the tests for the synthetic data from a specified distribution. This is followed by the San Juan River application.
4.1. Test With Synthetic Data
Here we describe a Monte Carlo investigation to test the ability of the NPD approach to approximate a specified underlying distribution. Our test distribution, illustrated in Figure 2, is based on distribution J of Wand and Jones [1993]. It consists of a mixture of three bivariate Gaussians having different weights α_i, stated as

f = Σ_{i=1}^{3} α_i N(μ_i, Σ_i)    (20)

where N(μ_i, Σ_i) denotes a Gaussian distribution with mean μ_i and a covariance matrix Σ_i. Individual weights, means, and covariances are shown in Table 1. Simulation from this mixed distribution is achieved by picking one of the three Gaussian distributions with probability α_i, then simulating a value from that distribution.

We simulated 101 bivariate samples, each consisting of 80 data pairs from this distribution. One sample is designated as the "calibration" sample and is used to calibrate the NPD and SPIGOT models. In the case of NPD this involves estimating the sample covariance and bandwidth parameter λ (based on minimizing the LSCV score as described in the previous section). Calibration of SPIGOT involves selection of the best marginal density transformation based on Filliben's correlation statistic and estimation of the coefficients in the condensed disaggregation model. The remaining 100 samples are used to form 100 aggregate test realizations by adding the components, Z = X_1 + X_2. These 100 aggregate test realizations are input to both NPD and SPIGOT to generate 100 disaggregate realizations from both models. These disaggregate series are designated "test" samples and serve as a basis to test how closely the model reproduces statistics of the specified true distribution and of the calibration sample.

SPIGOT was modified to accept the same aggregate flows as the NPD model. Boundary corrections (discussed in the previous section for the NPD approach and specified as an option in the SPIGOT software) were not imposed on either model.

To evaluate the reproduction of marginal distributions by each model, we applied a univariate kernel density estimate to each of the 100 disaggregated samples. Figure 3 illustrates
Figure 3. Marginal distribution of synthetic distribution variable X_1 for the calibration and disaggregation samples from SPIGOT and NPD. The true marginal density is obtained by integrating the pdf in Figure 2. The calibration pdf is estimated by integrating the sample joint density of variables X_1 and X_2 (a parametric distribution in the case of SPIGOT and a kernel density estimate in the case of NPD). The boxes show the ranges of the univariate kernel density estimates applied to the 100 disaggregation samples with a common bandwidth chosen as the median among the set of optimal LSCV bandwidths for each sample. The univariate KDE in the NPD case is a univariate density estimate based on the calibration data with the same bandwidth as for the box plots. The dots above the x axis represent the calibration sample data points.
Figure 2. Bivariate distribution used in the synthetic example to test the disaggregation approach. This is a mixture of the three bivariate Gaussian density functions described in Table 1.
Table 1. Parameters of the Test Distribution

Gaussian Density i | α_i | μ_i        | Σ_i
1                  | 0.4 | (1.3, 2.5) | [0.36, 0.252; 0.252, 0.36]
2                  | 0.4 | (3.7, 2.5) | [0.36, 0.252; 0.252, 0.36]
3                  | 0.2 | (2.5, 2.5) | [0.36, −0.252; −0.252, 0.36]
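Simulation from the mixture (20) with the Table 1 parameters can be sketched as follows. Note one assumption: the negative cross covariance for the third component is a reconstruction of signs lost in the extracted table.

```python
import numpy as np

rng = np.random.default_rng(11)

# Table 1 parameters: pick component i with probability alpha_i, then
# draw from N(mu_i, Sigma_i), as described for eq. (20).
alphas = np.array([0.4, 0.4, 0.2])
mus = np.array([[1.3, 2.5], [3.7, 2.5], [2.5, 2.5]])
base = np.array([[0.36, 0.252], [0.252, 0.36]])
covs = [base, base, base * np.array([[1, -1], [-1, 1]])]  # component 3:
# cross covariance sign flipped (assumed reconstruction)

def sample_mixture(size):
    comps = rng.choice(3, size=size, p=alphas)
    return np.stack([rng.multivariate_normal(mus[c], covs[c]) for c in comps])

sample = sample_mixture(80)          # one calibration-sized sample of 80 pairs
mean = alphas @ mus                  # mixture mean: sum_i alpha_i * mu_i
```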
