A CORRELATION TENSOR-BASED MODEL FOR TIME VARIANT FREQUENCY SELECTIVE MIMO CHANNELS

Martin Weis, Giovanni Del Galdo, and Martin Haardt

Ilmenau University of Technology - Communications Research Laboratory
P.O. Box 100565, 98684 Ilmenau, Germany
martin.weis@stud.tu-ilmenau.de, {giovanni.delgaldo, martin.haardt}@tu-ilmenau.de
ABSTRACT
In this contribution we present a new analytical channel model for frequency selective, time variant MIMO systems. The model is based on a correlation tensor, which allows a natural description of multi-dimensional signals. By applying the Higher Order Singular Value Decomposition (HOSVD), we gain a better insight into the multi-dimensional eigenstructure of the channel. Applications of the model include the denoising of measured channels and the possibility to generate new synthetic channels displaying a given correlation in time, frequency, and space. The proposed model possesses advantages over existing 2-dimensional eigenmode-based channel models. In contrast to them, the tensor-based model can cope with frequency and time selectivity in a natural way.
1. INTRODUCTION
Multiple Input Multiple Output (MIMO) schemes offer the chance to fulfill the challenging requirements for future communication systems, as higher data rates can be achieved by exploiting the spatial dimension. To investigate, design, and test new techniques, it is crucial to use realistic channel models.

We propose a tensor-based analytical channel model which, in contrast to traditional models, can cope with non-stationary time and frequency selective channels. The latter are particularly relevant for wireless communications. We represent the frequency selective, time variant MIMO channel as a 4-dimensional tensor H ∈ C^(M_R × M_T × N_f × N_t), where M_R and M_T are the numbers of antennas at the receiver and the transmitter, whereas N_f and N_t are the numbers of samples taken in frequency and time, respectively.

To visualize the spatial structure of the channel, eigenmode-based models have been introduced, such as [1, 2]. However, these models use a 2-dimensional correlation matrix which considers one dimension only. Alternatively, by a cumbersome stacking of the channel coefficients, as in [2], it is possible to consider more dimensions. However, following this approach it is not possible to investigate the eigenmodes of different dimensions separately, whereas the proposed tensor-based channel model allows this.

In [3], a tensor-based channel model was introduced. The latter is, however, a tensor extension of [1], and therefore assumes a Kronecker-like structure of the eigenmodes. In this paper, we introduce a more general tensor-based channel model, which truly captures the nature of MIMO channels. The generalized Higher Order Singular Value Decomposition (HOSVD) [4] gives us the possibility to analyze the eigenstructure of the channel along more dimensions, i.e., along space and frequency.

The paper is organized as follows: Section 2 gives a brief introduction to the relevant tensor algebra, which is needed to understand the proposed model. Section 3 introduces the tensor-based channel model and its applications. Moreover, this section shows the applicability and validity of the model on channel measurements. In Section 4 the conclusions are drawn.
2. BASIC TENSOR CALCULUS

2.1. Notation
To facilitate the distinction between scalars, vectors, matrices, and higher-order tensors, we use the following notation: scalars are denoted by lower-case italic letters (a, b, ...), vectors by boldface lower-case italic letters (a, b, ...), matrices by boldface upper-case letters (A, B, ...), and tensors are denoted as upper-case, boldface, calligraphic letters (A, B, ...). This notation is consistently used for lower-order parts of a given structure. For example, the entry with row index i and column index j in a matrix A is symbolized by a_{i,j}. Furthermore, the i-th column vector of A is denoted as a_i. As indices, mainly the letters i, j, k, and n are used. The upper bounds for these indices are given by the upper-case letters I, J, K, and N, unless stated otherwise.
2.2. n-mode vectors and tensor unfoldings
In the (2-dimensional) matrix case we distinguish between row vectors and column vectors. As a generalization of this idea, we build the n-mode vectors {a} of an N-th order tensor A ∈ C^(I_1 × I_2 × ··· × I_N) by varying the index i_n of the elements {a_{i_1,...,i_n,...,i_N}} while keeping the other indices fixed. In Figure 1, this is shown for a 3-dimensional tensor. Please note that in general there are (I_1 · I_2 ··· I_{n−1} · I_{n+1} ··· I_N) such vectors. In the 2-dimensional case the column vectors are equal to the 1-mode vectors, and the row vectors are equal to the 2-mode vectors.

Fig. 1. Mode 1, 2, and 3 vectors of a 3-dimensional tensor.

The n-th unfolding matrix A_(n) ∈ C^(I_n × (I_1 I_2 ··· I_{n−1} I_{n+1} ··· I_N)) is the matrix consisting of all n-mode vectors. In [4], the ordering of the n-mode vectors was defined in a cyclic way. In contrast to the definition in [4], we define the n-th unfolding matrix as follows:

A_[n] = {a_{j,k}} ∈ C^(I_n × (I_1 I_2 ··· I_{n−1} I_{n+1} I_{n+2} ··· I_N)),

with j = i_n and

k = 1 + Σ_{l=1, l≠n}^{N} (i_l − 1) · Π_{q=1, q≠n}^{l−1} I_q.

This definition assures that the indices of the n-mode vectors vary faster in the following ascending order

i_1, i_2, ..., i_{n−1}, i_{n+1}, ..., i_N.   (1)

This ordering becomes particularly important for our later derivations, especially for equation (23). Please note that this unfolding definition is also consistent with the MATLAB command reshape. Therefore, we will refer to this unfolding as the MATLAB-like unfolding.
2.3. Tensor operations
2.3.1. The n-mode product
To perform a generalized Higher Order Singular Value Decomposition (HOSVD), it is necessary to transform the n-mode vector space of a tensor. This can be done with the n-mode product between a tensor and a matrix. Let us assume a tensor A = {a_{i_1,i_2,...,i_N}} ∈ C^(I_1 × I_2 × ··· × I_N) and a matrix U ∈ C^(J_n × I_n). Then the n-mode product, denoted by A ×_n U, is a (I_1 × I_2 × ··· × I_{n−1} × J_n × I_{n+1} × ··· × I_N) tensor, whose entries are given by

(A ×_n U)_{i_1,i_2,...,i_{n−1},j_n,i_{n+1},...,i_N} = Σ_{i_n=1}^{I_n} a_{i_1,i_2,...,i_{n−1},i_n,i_{n+1},...,i_N} · u_{j_n,i_n},   (2)

for all possible values of the indices. With the help of the unfolding definition from above we can write the n-mode product also in terms of matrix operations. Then, the n-th unfolding of the resulting tensor B can be calculated as

B_[n] = U · A_[n].   (3)
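A minimal NumPy sketch of the n-mode product defined in equations (2) and (3) (the function name is our own, not from the paper):

```python
import numpy as np

def mode_n_product(A, U, n):
    """n-mode product A x_n U of eq. (2): contract the n-th index of A
    against the second index of U, and move the new axis (of size J_n)
    back to position n."""
    return np.moveaxis(np.tensordot(A, U, axes=(n, 1)), -1, n)
```

The result agrees entry-by-entry with the summation in (2).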
2.3.2. The outer product
We now define the outer product between two tensors. Assume an N-th order tensor A and a K-th order tensor B. Then the outer product, denoted as (A ∘ B), is an (N + K)-th order tensor whose entries are given by

(A ∘ B)_{i_1,i_2,...,i_N,j_1,j_2,...,j_K} = a_{i_1,i_2,...,i_N} · b_{j_1,j_2,...,j_K},

for all possible values of the indices. Therefore, the outer product creates a tensor with all combinations of possible pairwise element-products.
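In NumPy the tensor outer product above is available directly (a sketch of our own; the wrapper name is not from the paper):

```python
import numpy as np

def outer(A, B):
    """Outer product (A ∘ B): entries are all pairwise products
    a_{i_1..i_N} * b_{j_1..j_K}; the result has order N + K."""
    return np.multiply.outer(A, B)
```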
2.3.3. The n-mode inner product
The n-mode inner product is denoted as A = B •_n C. The resulting tensor A has order N + K − 2, where N and K are the orders of B ∈ C^(I_1 × ··· × I_N) and C ∈ C^(J_1 × ··· × J_K), respectively. It is related to the outer product and implies an additional summation over the n-th dimension of both tensors. Therefore, we define the n-mode inner product as

A = Σ_{l=1}^{I_n} B_{i_n=l} ∘ C_{j_n=l},   (4)

where B_{i_n=l} is the (N − 1)-th order subtensor of B which we obtain when we set the index along the dimension n equal to l. The tensor C_{j_n=l} is defined in an analogous way. Please note that the tensors B and C must be of the same size along the n-th dimension, and therefore I_n = J_n.
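Equation (4) is a single tensor contraction, which can be sketched in NumPy as follows (function name of our own choosing):

```python
import numpy as np

def mode_n_inner(B, C, n):
    """n-mode inner product B •_n C of eq. (4): an outer product with an
    additional summation over the shared n-th dimension (I_n = J_n).
    The remaining axes of B come first, then the remaining axes of C."""
    return np.tensordot(B, C, axes=(n, n))
```

For two matrices (N = K = 2) and n = 2 this reduces to B · C^T, which is a quick sanity check on the definition.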
2.3.4. The vec(·) operator for tensors
The vec(·) operator stacks all elements of a tensor into a vector. Thereby the indices i_n of an N-dimensional tensor A vary in the following ascending order i_1, i_2, ..., i_{N−1}, i_N. Please note that the unfolding definition in Section 2.2 ensures that the vec(·) operation for an N-dimensional tensor is equal to the transpose of its (N + 1)-th unfolding

vec(A) = (A_[N+1])^T.   (5)
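With the MATLAB-like convention, vec(·) is simply a column-major linearization, sketched below (our own illustration):

```python
import numpy as np

def vec(A):
    """vec(·) operator: stack all entries with i_1 varying fastest,
    i.e., column-major (MATLAB-style) linearization."""
    return np.reshape(A, -1, order='F')
```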
2.4. Higher Order Singular Value Decomposition
Every N-th order complex tensor A ∈ C^(I_1 × I_2 × ··· × I_N) can be decomposed into the form

A = S ×_1 U^(1) ×_2 U^(2) ··· ×_N U^(N),   (6)

in which the matrices of the n-mode singular vectors U^(n) = [u_1^(n), u_2^(n), ..., u_{I_n}^(n)] ∈ C^(I_n × I_n) are unitary, and the core tensor S ∈ C^(I_1 × I_2 × ··· × I_N) is a tensor of the same size as A. The basis matrices U^(n) contain the left singular vectors u_1^(n), u_2^(n), ..., u_{I_n}^(n) of the matrix unfoldings A_[n]. The core tensor S can be calculated with the equation

S = A ×_1 (U^(1))^H ×_2 (U^(2))^H ··· ×_N (U^(N))^H,   (7)

where (·)^H denotes the Hermitian transpose. The core tensor fulfills some special properties, especially the property of all-orthogonality, which means that the rows of all unfolding matrices S_[n] are orthogonal, cf. [4].

Fig. 2. Definition of the channel tensor. The 2-dimensional subtensor H(f_0, t_0) ∈ C^(M_R × M_T) and the 3-dimensional subtensor H(t_1) ∈ C^(M_R × M_T × N_f) are depicted.

With the help of the Kronecker product it is possible to write equation (6) in terms of matrix operations:

A_[n] = U^(n) · S_[n] · (U^(N) ⊗ ··· ⊗ U^(n+1) ⊗ U^(n−1) ⊗ ··· ⊗ U^(1))^T.   (8)

Please note that this formula is only valid for the MATLAB-like unfolding defined in Section 2.2.
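The decomposition (6)-(7) can be sketched in NumPy as follows (our own illustration; the helper functions restate the definitions of Sections 2.2 and 2.3.1):

```python
import numpy as np

def unfold(A, n):
    # MATLAB-like n-th unfolding (Section 2.2)
    return np.reshape(np.moveaxis(A, n, 0), (A.shape[n], -1), order='F')

def mode_n_product(A, U, n):
    # n-mode product A x_n U (Section 2.3.1)
    return np.moveaxis(np.tensordot(A, U, axes=(n, 1)), -1, n)

def hosvd(A):
    """HOSVD, eq. (6): A = S x_1 U(1) ... x_N U(N). Each U(n) holds the
    left singular vectors of the unfolding A_[n]; the core tensor S is
    obtained via eq. (7)."""
    U = [np.linalg.svd(unfold(A, n))[0] for n in range(A.ndim)]
    S = A
    for n, Un in enumerate(U):
        S = mode_n_product(S, Un.conj().T, n)
    return S, U
```

Since each U^(n) is unitary, applying the factors back to the core tensor reconstructs A exactly.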
3. TENSOR CHANNEL MODEL
As already mentioned, we represent the channel coefficients in form of the tensor

H ∈ C^(M_R × M_T × N_f × N_t),   (9)

where M_R and M_T are the numbers of antennas at the receiver and the transmitter, and N_f and N_t are the numbers of samples taken in frequency and time, respectively. Please note that the frequency domain of the channel is connected to its delay time τ via a Fourier Transform.

Similarly to [3, 5], we now define the channel correlation tensor as

R = E{H(t) ∘ H(t)^*} ∈ C^(M_R × M_T × N_f × M_R × M_T × N_f),   (10)

where H(t) ∈ C^(M_R × M_T × N_f) is the frequency selective MIMO channel at time snapshot t.

Assuming that the channel is block-wise stationary in time, we define an averaging window of size T_W, so that the channel within the k-th window, denoted with H_k, can be assumed stationary (see Figure 3). The over-all channel tensor H is then defined as

H = H_1 ⊔_4 H_2 ⊔_4 ··· ⊔_4 H_{⌊N_t/T_W⌋},   (11)

where ⊔_4 denotes the concatenation of the tensors H_k along the 4-th dimension, as introduced in [5]. We compute an estimate of the k-th correlation tensor by averaging in time, as

R_k ≈ R̂_k = (1/T_W) Σ_{n=1}^{T_W} (H_k)_{i_4=n} ∘ (H_k)^*_{i_4=n} = (1/T_W) H_k •_4 H_k^*,   (12)
where •_4 denotes the 4-mode inner product. Please note that n is the time index within the k-th window.

Fig. 3. Definition of the non-overlapping stationarity windows. Each window H_k ∈ C^(M_R × M_T × N_f × T_W) has T_W time samples. In the other dimensions, each window is of the same size as H.

To get insight into the spatial structure of the channel, we decompose this correlation tensor via the following HOSVD

R̂_k = S_k ×_1 U_k^(1) ×_2 U_k^(2) ×_3 U_k^(3) ×_4 U_k^(4) ×_5 U_k^(5) ×_6 U_k^(6).   (13)

The matrices U_k^(n) contain the left singular vectors of the n-th unfolding matrices (R̂_k)_[n] of R̂_k. The symmetries of the correlation tensor are also reflected by its HOSVD. Therefore, we can choose a HOSVD such that the following equations hold

U_k^(1) = (U_k^(4))^*,  U_k^(2) = (U_k^(5))^*,  U_k^(3) = (U_k^(6))^*.   (14)

In this case, equation (13) can be simplified to

R̂_k = S_k ×_1 U_k^(1) ×_2 U_k^(2) ×_3 U_k^(3) ×_4 (U_k^(1))^* ×_5 (U_k^(2))^* ×_6 (U_k^(3))^*,   (15)

and the core tensor S_k, according to Section 2.4, can be calculated via

S_k = R̂_k ×_1 (U_k^(1))^H ×_2 (U_k^(2))^H ×_3 (U_k^(3))^H ×_4 (U_k^(1))^T ×_5 (U_k^(2))^T ×_6 (U_k^(3))^T.   (16)

The proposed channel model consists in computing the correlation tensors R̂_k for all windows, i.e., ∀ k = 1, ..., ⌊N_t/T_W⌋, as they describe exhaustively the correlation properties of the channel, seen as a temporally block stationary stochastic process.

In the following we give a brief description of two applications of the proposed channel model, namely the generation of new channel realizations and the subspace-based denoising of a channel measurement. Then we apply the correlation tensor-based channel model to channel measurements, and compare its performance with the tensor-based channel model presented in [3].
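The windowed correlation estimate of equation (12) can be sketched in NumPy as follows (our own illustration; the function name and the assumption that Hk holds one stationarity window of shape (M_R, M_T, N_f, T_W) are ours):

```python
import numpy as np

def correlation_tensor(Hk):
    """Estimate of eq. (12): R_k ≈ (1/T_W) H_k •_4 H_k*, where Hk has
    shape (M_R, M_T, N_f, T_W). The 4-mode inner product contracts the
    time axis, yielding a 6-dimensional correlation tensor."""
    Tw = Hk.shape[3]
    return np.tensordot(Hk, Hk.conj(), axes=(3, 3)) / Tw
```

The Hermitian-like symmetry exploited in equation (14) can be verified directly on the result: swapping the first and last three indices and conjugating leaves the tensor unchanged.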
3.1. Channel Synthesis
In the 2-dimensional case [1, 2], the joint spatial correlation matrix is defined as

R_k = E{vec(H_k) · vec(H_k)^H}.   (17)

Similarly to (12), we can compute an estimate of R_k, denoted by R̂_k, with

R̂_k = (1/T_W) Σ_{n=1}^{T_W} vec((H_k)_{i_4=n}) · vec((H_k)_{i_4=n})^H.   (18)

From the information given in the correlation matrix R̂_k, it is possible to construct a new random synthetic channel H̃_k, displaying the same spatio-frequency correlation as H_k(t), via

vec(H̃_k) = X_k · g,   (19)

where the entries of the vector g are i.i.d. zero mean complex Gaussian random numbers with unit variance, and

X_k = R̂_k^(1/2).   (20)

The matrix X_k is computed via a 2-dimensional eigenvalue decomposition of R̂_k,

R̂_k = U_k · Σ_k · U_k^H,   (21)

where the matrix U_k contains all eigenvectors of R̂_k and Σ_k is the diagonal matrix containing the eigenvalues. Then we define X_k as follows

R̂_k^(1/2) = U_k · Σ_k^(1/2) = X_k.   (22)

By computing the matrix X_k we can generate new random synthetic channels by means of equation (19). In the following we show that it is also possible to compute the matrix X_k from the HOSVD of the correlation tensor R̂_k.

With the help of the MATLAB-like unfolding from Section 2, the connection between the estimated 2D correlation matrix R̂_k and the estimated correlation tensor R̂_k is given by

vec(R̂_k) = (R̂_k)^T_[3] = (R̂_k)^T_[7] = vec(R̂_k),   (23)

where the first two expressions refer to the (2nd-order) correlation matrix and the last two to the (6th-order) correlation tensor. To compute the matrix X_k, we can either compute an SVD of the matrix R̂_k, or apply a HOSVD on the tensor R̂_k. This relation between the 2-dimensional model and the correlation tensor-based model follows from the derivation below. For the sake of simplicity we first define the following unitary matrix:

U_k^(e) = U_k^(3) ⊗ U_k^(2) ⊗ U_k^(1).   (24)

With the Kronecker version of the HOSVD (8), it follows from the latter relation that

vec(R̂_k) = vec(R̂_k) = (R̂_k)^T_[7] = ((U_k^(e))^* ⊗ U_k^(e)) · (S_k)^T_[7] = vec(U_k^(e) · unvec_{I×I}((S_k)^T_[7]) · (U_k^(e))^H).

Here unvec_{I×I}((S_k)^T_[7]) denotes the inverse vec(·) operation applied to the 7-th unfolding of the core tensor S_k. Therefore, unvec_{I×I}((S_k)^T_[7]) is a square matrix of the same size as R̂_k, with I = M_R M_T N_f. The correlation matrix R̂_k can be calculated from the HOSVD of the tensor R̂_k via

R̂_k = U_k^(e) · unvec_{I×I}((S_k)^T_[7]) · (U_k^(e))^H.   (25)

Please note that unvec_{I×I}((S_k)^T_[7]) is not a diagonal matrix. Therefore, we have to perform an additional SVD for the calculation of X_k, as follows:

unvec_{I×I}((S_k)^T_[7]) = V_k · Σ̃_k · V_k^H.

Now the matrix X_k can be calculated as

X_k = (U_k^(3) ⊗ U_k^(2) ⊗ U_k^(1)) · V_k · Σ̃_k^(1/2).   (26)

Especially in cases where Σ̃_k is of low rank, it is computationally cheaper to calculate X_k directly from the tensor R̂_k using equation (26).
3.2. Denoising a Measured Channel
In order to reduce the noise in measured channels, we extend an idea proposed in [2] to the correlation tensor-based channel model. For every window k, as defined above, we construct a tensor Z_k(t), calculated for every time snapshot t, using the following equation

Z_k(t) = H_k(t) ×_1 (U_k^(1))^H ×_2 (U_k^(2))^H ×_3 (U_k^(3))^H,   (27)

where the matrices U_k^(n) are computed from the correlation tensor R̂_k given in (12). With the help of the tensor Z_k(t), the channel can be reconstructed exactly via the synthesis equation

H_k(t) = Z_k(t) ×_1 U_k^(1) ×_2 U_k^(2) ×_3 U_k^(3).   (28)

Denoising the measurement tensor H_k(t) is possible by simply considering only the first L_n singular vectors of U_k^(n), corresponding to the L_n largest singular values of R̂_k. L_n should be determined with the help of the singular value spectra of the HOSVD. This is similar to the well known low-rank approximation of a matrix via the 2D SVD. Thereby, we assume that the omitted singular vectors span the noise space. Thus, we obtain the tensor Z'_k(t) ∈ C^(L_1 × L_2 × L_3) and

(H_k)_denoised(t) = Z'_k(t) ×_1 U_k^(1)[L_1] ×_2 U_k^(2)[L_2] ×_3 U_k^(3)[L_3],   (29)

where U_k^(n)[L_n] indicates the matrix containing the first L_n singular vectors along the n-th dimension and for the k-th window.
Fig. 4. Block diagram of the performance test for the two denoising approaches: a synthetic channel corrupted by noise n(t) is denoised with both the matrix-based and the tensor-based approach, and the squared errors e_matrix and e_tensor are compared.
Fig. 5. Reconstruction errors e_matrix and e_tensor versus the Signal to Noise Ratio [dB] for two synthetic noisy channels: a rich multi-path channel H_1 and a 2-path channel H_2. The tensor-based denoising outperforms the 2D approach for richer channels. The green curve represents the error for the noisy channels.

Figure 5 shows the reconstruction error for two noisy synthetic channels, namely H_1 and H_2, and two denoising approaches. The channels are created with the IlmProp, a flexible geometry based channel model capable of generating frequency selective, time variant, multi-user MIMO channels displaying realistic correlation in frequency, time, space, and between users, cf. [6]. Figures 6 and 7 show the geometries of the synthetic channels. The first channel H_1 is richer in multi-path components than the second H_2. The reconstruction error (power) is defined as the Euclidean distance between the noiseless channel and the reconstructed (denoised) channel

e_x = Σ_{n=1}^{N_f M_R M_T} |vec(H_synthetic)_n − vec(H_denoised)_n|²,   (30)

for x = {matrix, tensor}, as also depicted in Figure 4. In Figure 5, the thick lines represent the error obtained via the proposed tensor-based method. The thin lines show the reconstruction error achieved by applying a low-rank approximation directly on R̂. We can observe that the tensor-based approach leads to a better subspace estimate, because of the additional singular vectors in the frequency direction. This translates to a lower reconstruction error. The gain with respect to the 2D approach becomes more relevant for richer channels.

Fig. 6. Synthetic IlmProp scenario characterized by rich scattering (H_1).

Fig. 7. Synthetic IlmProp scenario characterized by 2 paths (H_2).
3.3. Validation on Measurements
Next, we apply the proposed correlation tensor-based model on measurements gathered at the main train station in Munich, Germany. The multi-dimensional RUSK MIMO channel sounder employed a 16 × 8 antenna architecture with a 16-element uniform circular array (UCA) at the transmitter and an 8-element uniform linear array (ULA) at the receiver. The antenna spacing at both arrays was about λ/2. The bandwidth was 120 MHz at a carrier frequency of 5.2 GHz. The frequency spacing was 312.5 kHz, which yields a total of 385 frequency bins. The receiver measured a complete channel response every 18.432 ms. A total of 9104 time snapshots were taken. The mobile terminal was moving in a Non Line-Of-Sight (NLOS) regime. The environment was particularly rich in multi-path components.

For the calculation of the channel model we consider only 25 adjacent frequency bins around the center frequency, thus spanning a bandwidth of 7.5 MHz. We divide the channel into windows of T_W = 25 samples in the time domain, as in Figure 3.

We first consider 10 adjacent time windows of the measured channel. To assess the behavior of the channel in the frequency domain, we compare the proposed correla-
