A new video segmentation method of moving objects based on blob-level knowledge

Enrique J. Carmona*, Javier Martínez-Cantos, José Mira

Department of Artificial Intelligence, ETSI Informática, UNED, Juan del Rosal 16, 28040, Madrid, Spain

Received 1 December 2006; received in revised form 2 October 2007; available online 13 October 2007. Communicated by G. Sanniti di Baja
Abstract
Variants of the background subtraction method are broadly used for the detection of moving objects in video sequences in different applications. In this work we propose a new approach to the background subtraction method which operates in the colour space and manages the colour information in the segmentation process to detect and eliminate noise. This new method is combined with blob-level knowledge associated with different types of blobs that may appear in the foreground. The idea is to process each pixel differently according to the category to which it belongs: real moving objects, shadows, ghosts, reflections, fluctuation or background noise. Thus, the foreground resulting from processing each image frame is refined selectively, applying at each instant the appropriate operator according to the type of noise blob we wish to eliminate. The approach proposed is adaptive, because it allows both the background model and the threshold model to be updated. On the one hand, the results obtained confirm the robustness of the proposed method in a wide range of different sequences and, on the other hand, these results underline the importance of handling three colour components in the segmentation process rather than just one grey-level component.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Background subtraction; Reflection detection; Shadow detection; Ghost detection; Permanence memory; Blob-level knowledge
1. Introduction
The detection of moving objects in video sequences is the first relevant step in the extraction of information in many computer vision applications including, for example, semantic video annotation, pattern recognition, video surveillance, traffic monitoring and people tracking. The quality of the results obtained by applying this stage is very important. The more reliable the shape and position of moving objects, the more reliable their identification is. In turn, this will guarantee greater success in subsequent tasks related to tracking and classification. Therefore, the crucial issues related to automatic video segmentation are to separate moving objects from the background and to obtain accurate boundaries for this kind of object. There are different methods for detecting moving objects based, for example, on statistical methods (Horprasert et al., 1999; Lee, 2005; Stauffer and Grimson, 1999), fuzzy logic (Jadon et al., 2001), the subtraction of consecutive frames (Lipton et al., 1998), optical flow (Wang et al., 2003), genetic algorithms (Kim and Park, 2006; Carmona et al., 2006; Martínez-Cantos et al., 2006) or hybrid approaches (Collins et al., 2000; Dedeoglu, 2004) that combine some of the aforementioned techniques. Nevertheless, one of the most frequently used approaches with a fixed camera is based on the background subtraction method and its multiple variants (Wren et al., 1997; Haritaoglu et al., 2000; Stauffer and Grimson, 2000; McKenna et al., 2000; Kim and Kim, 2003; Cucchiara et al., 2003; Xu et al., 2005; Leone and Distante, 2007), because of its speed and ease of implementation. Basically, this method enables moving regions to be detected by subtracting, pixel by pixel, the current image from a background model taken as a reference.

The outputs produced by the detection algorithms mentioned above, especially if working with real scenes, generally contain noise. The causes of noise are primarily the intrinsic noise of the camera itself, undesired reflections, objects whose total or partial colour coincides with the background, and the existence of sudden shadows and changes (artificial or natural) in lighting. The total effect of these factors may be two-fold: first, areas that do not belong to moving objects may be incorporated into the foreground (foreground noise) and, secondly, certain areas belonging to moving objects may no longer appear in the foreground (background noise). Specifically, in the set of approaches that use the background subtraction method to do segmentation, different proposals exist to detect and eliminate this noise. Thus, proposals exist that address the problem partially, for example, by only detecting the shadows (Xu et al., 2005; Leone et al., 2006; Leone and Distante, 2007). Other proposals, however, attack the problem globally, i.e., trying to differentiate and classify moving object blobs and different types of noise blobs (Cucchiara et al., 2003).

In this work we propose an adaptive segmentation method based on a new variant of the background subtraction method which uses relevant information from different types of blobs that may appear in the foreground as a result of processing the current frame. This information is closely related to different regions of interest that appear in the colour space by comparing the angle and module of the vector associated with each point of the image and the corresponding vector from the background model. Thus, it is possible to characterise and classify each point of the image according to the region to which it belongs: real moving object, shadow, ghost, reflection, fluctuation or background noise. The final aim is to use this new information based on blob-level knowledge to refine the foreground and update the background model, thereby achieving maximum precision during the segmentation. At all times, we will suppose that we are working with colour video sequences obtained from a fixed standard camera and, without loss of generality, that we are working in the standard RGB colour space. The method could be applied in any other colour space.

The rest of this article is organised as follows: Section 2 characterises different types of foreground blobs. Then, the segmentation method that we propose, called the truncated cone method (TCM), is described and broken down into its different stages. First (Section 3), a transformation of the representation space of the problem is done, passing from a three-dimensional space (colour space) to a two-dimensional one (angle-module space). This enables us to define a rule, called the angle-module rule, whose application produces a foreground map as an initial approach to the segmentation result. Second (Section 4), this new representation space allows us not only to characterise the different noise blobs in a more operative way than in Section 2, but also to define a simple set of operators to eliminate them from the foreground. The arrangement of all these elements (rules and operators) in a suitable order forms the TCM (Section 5). Section 6 shows the results of the different experiments performed, as well as their discussion. Finally, the conclusions of this work are presented in Section 7.

0167-8655/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2007.10.007
* Corresponding author. Tel.: +34 91 398 73 01; fax: +34 91 398 88 95.
E-mail addresses: ecarmona@dia.uned.es (E.J. Carmona), javiermc@info-ab.uclm.es (J. Martínez-Cantos), jmira@dia.uned.es (J. Mira).
Pattern Recognition Letters 29 (2008) 272–285. Available online at www.sciencedirect.com; www.elsevier.com/locate/patrec
2. Blob characterisation: An initial approach
In this section, we make a first approach to the characterisation of the different types of blobs that may appear in the foreground. Later, in Section 4, we will refine this approach. Other proposals exist (Cucchiara et al., 2003), but ours consists of a set of entities and relations (see Fig. 1) that are initially defined by comparing the current image and background model intensity levels:
• Blob: set of connected points.
• Foreground: binary image obtained from comparing the current image with the background model and applying a threshold value. In this image, theoretically, only the points associated with real moving objects appear.
• Moving visual object (MVO): foreground blob, $b_{MVO}$, associated with a real moving object.
• Foreground noise (FN): blob that erroneously appears in the foreground but does not correspond to any real moving object.
• Background noise (BN): blob that erroneously does not appear in the foreground (virtual blob) but corresponds to some real moving object. This type of noise usually appears when the colour of a part of a real moving object is similar to that of the background area located in the same position and the threshold used is not sufficiently tuned to segment correctly. Therefore, in a first approach, this type of noise is characterised because each and every point, p, in this type of virtual blob, $b_{BN}$, has the property $|I_t(x,y) - B_t(x,y)| < \delta,\ \forall (x,y),\ p(x,y) \in b_{BN}$, where $\delta$ is the segmentation threshold.
Fig. 1. Classification of types of foreground blobs.
• Shadow: a type of foreground noise. It is associated with any zone of the image covered by a real shadow. In a first approach, it is characterised because each and every point in this type of blob, $b_{Sh}$, has the property $[I_t(x,y) - B_t(x,y)] < 0,\ \forall (x,y),\ p(x,y) \in b_{Sh}$.
• Reflection: a type of foreground noise. It is associated with any zone of the image enhanced by a real reflection. In a first approach, it is characterised because each and every point in this type of blob, $b_{Rf}$, has the property $[I_t(x,y) - B_t(x,y)] > 0,\ \forall (x,y),\ p(x,y) \in b_{Rf}$.
• Ghost: a type of foreground noise. It is associated with the final position of a moving object that stops, or the initial position of a stationary object that starts moving. In both instances, the difference $|I_t(x,y) - B_t(x,y)|$ is sufficiently great to make foreground blobs appear associated with the final or initial positions mentioned.
• Fluctuation: a type of foreground noise. With this term, we refer to small variations in lighting that are caught between two consecutive frames. These variations can be produced because the optical sensors of a video camera, even in the absence of changes in lighting, do not register the received light intensity values in an exactly constant manner, and/or because the very source of light (artificial or natural) can be subject to slight oscillations. Thus, the pixels in this type of blob, $b_{Fl}$, have the property $|I_t(x,y) - I_{t-1}(x,y)| \approx 0,\ \forall (x,y),\ p(x,y) \in b_{Fl}$, and therefore, in a first approach, $|I_t(x,y) - B_t(x,y)| \approx 0,\ \forall (x,y),\ p(x,y) \in b_{Fl}$. In other words, the difference is close to zero but not rigorously null.

It is important to underline that all the properties mentioned previously establish necessary but not sufficient conditions. For example, if $|I_t(x,y) - B_t(x,y)| < \delta$, then the point (x,y) could belong to background noise or to fluctuation. This circumstance will have to be considered at the time of classifying each image point into one or several blob classes. In any case, the aim is to use the characterisation of each of the entities defined above to facilitate identification of each type of blob in the scene and, ultimately, to do a more precise segmentation and a more effective updating of the background model.
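The first-approach, grey-level tests above can be expressed as boolean masks over a frame. The following is a minimal sketch (the function name and array layout are ours, not the paper's), which makes explicit that the $|I_t - B_t| < \delta$ condition is ambiguous between background noise and fluctuation:

```python
import numpy as np

# First-approach per-pixel tests from Section 2 (grey-level sketch).
# I_t, B_t: current frame and background model; delta: segmentation threshold.
def first_approach_masks(I_t, B_t, delta):
    diff = I_t.astype(np.float64) - B_t.astype(np.float64)
    return {
        # |I_t - B_t| < delta: candidate background noise OR fluctuation
        "bn_or_fl": np.abs(diff) < delta,
        # (I_t - B_t) < 0: candidate shadow points
        "shadow": diff < 0,
        # (I_t - B_t) > 0: candidate reflection points
        "reflection": diff > 0,
    }
```

These are necessary conditions only; deciding which class a point actually belongs to requires the blob-level reasoning developed in Section 4.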
3. Segmentation: Angle-module rule
We present here a transformation of the three-dimensional colour representation space to a new two-dimensional space, called the angle-module space, that will be the reference frame of the segmentation method that we propose. The new resulting space allows us to define a segmentation rule whose application produces an initial approach to the final foreground map and constitutes the first stage of the truncated cone method. Thus, for each position, (x,y), of a pixel of a given image frame and at a moment of time, t, the relation existing between the image RGB vector associated with this position, $I^t_{(x,y)}(r,g,b)$, and the background RGB vector, $B^t_{(x,y)}(r,g,b)$, can be characterised by the value of the angle that they form, $\theta^t_{(x,y)}$, and the magnitude of the difference of their modules in absolute value, $\Delta^t_{mod}(x,y)$. From here onwards, to simplify the notation, we will use $M_t(x,y)$ to refer to the RGB vector associated with the point (x,y) pertaining to map M at moment t.

At each instant of time, the difference in modules, in absolute value and matrix form, $\Delta^t_{mod}$, is computed as (1), where $|I_t|$ and $|B_t|$ are the image and background module matrices, respectively. The angle matrix, $\Theta_t$, is calculated from (4), using the two alternative definitions of the scalar product, see Eqs. (2) and (3). In (2)–(4), the operator $A \circ B$ denotes the product of the two matrices A and B, element by element, the operator $A / B$ denotes the quotient between matrices A and B, element by element, and $\epsilon$ represents a very small value which avoids a possible division by zero:

$\Delta^t_{mod} = \mathrm{abs}(|I_t| - |B_t|)$   (1)

$I_t \cdot B_t = |I_t| \circ |B_t| \circ \cos(\Theta_t)$   (2)

$I_t \cdot B_t = I_{R,t} \circ B_{R,t} + I_{G,t} \circ B_{G,t} + I_{B,t} \circ B_{B,t}$   (3)

$\Theta_t = \arccos\big( (I_t \cdot B_t) \,/\, (|I_t| \circ |B_t| + \epsilon) \big)$   (4)
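Eqs. (1)–(4) can be sketched directly with array operations. A minimal illustration, assuming H x W x 3 float arrays and angles reported in degrees (the function name and layout are our assumptions, not fixed by the paper):

```python
import numpy as np

# Sketch of Eqs. (1)-(4): per-pixel module difference and angle between
# the image RGB vectors I_t and the background RGB vectors B_t.
def angle_module_space(I_t, B_t, eps=1e-6):
    I = I_t.astype(np.float64)
    B = B_t.astype(np.float64)
    mod_I = np.linalg.norm(I, axis=2)          # |I_t|
    mod_B = np.linalg.norm(B, axis=2)          # |B_t|
    delta_mod = np.abs(mod_I - mod_B)          # Eq. (1)
    dot = np.sum(I * B, axis=2)                # Eq. (3), element-wise scalar product
    cos_theta = dot / (mod_I * mod_B + eps)    # Eq. (4), eps avoids division by zero
    # clip guards against rounding slightly outside arccos's [-1, 1] domain
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return theta, delta_mod
```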
By adequately comparing the respective values of these two matrices, $\Theta_t$ and $\Delta^t_{mod}$, some interesting relations can be obtained. The idea is the following: for each point, (x,y), of the image, a revolution cone can be built in the RGB space (see Fig. 2) by using as an axis the straight line that contains the point vector, $(r_B, g_B, b_B)$, associated with position (x,y) of the background, B(x,y), and as a generatrix another straight line which, passing through the origin, forms an angle $\omega$ with the previous one. If we now trace three planes perpendicular to the vector B(x,y), one containing the point $(r_B, g_B, b_B)$ and the other two situated above and below this, at a distance h, they will delimit, along with the cone surface, two regions of interest: a truncated cone located on the upper part of the intermediate plane and another on the lower part.

Fig. 2. Truncated cones associated with a background point, (x,y), in the RGB space.

Once the size of the truncated cone region is determined by means of, for example, $[\omega_0, h_0]$, the point $(r_I, g_I, b_I)$, associated with the position (x,y) in the image, I(x,y), will belong to the volume delimited by this region if and only if the two conditions, (5) and (6), are verified simultaneously, where $\omega_0$ and $h_0$ are threshold values:

$\Theta(x,y) \leq \omega_0$   (5)

$\Delta_{mod}(x,y) \leq h_0$   (6)
The advantage of using the angle-module space is twofold: on the one hand, it reduces the dimensionality, transforming the segmentation problem from a three-dimensional RGB space to a two-dimensional space and, on the other hand, as will be seen in the following section, it facilitates the characterisation of the different types of blobs that may appear in the foreground, according to the fulfilment or non-fulfilment of these two conditions. In particular, it is quite intuitive that if we choose sufficiently small $\omega_0$ and $h_0$, it is possible to establish as a condition of movement that every point (r,g,b) associated with the position (x,y) of the image, I(x,y), which is outside the truncated cone region defined by conditions (5) and (6), will belong to a real moving object. For this to happen, one of the two conditions will not be fulfilled. Thus, it is possible to define a rule, (7), to calculate approximately the first foreground map:

$F_t(x,y) = \begin{cases} 1 & \text{if } \Theta_t(x,y) > \omega_0 \text{ or } \Delta^t_{mod}(x,y) > h_0 \\ 0 & \text{otherwise} \end{cases}$   (7)
Really, in the method that we propose, the angle threshold values, $\omega_i$, and the module threshold values, $h_i$, are calculated according to the position (x,y) and the instant t, giving $\Omega_t(x,y)$ and $H_t(x,y)$, respectively. Consequently, Eq. (7) is transformed into a new rule, (8), which has been denominated the angle-module rule:

$F_t(x,y) = \begin{cases} 1 & \text{if } \Theta_t(x,y) > \Omega_t(x,y) \text{ or } \Delta^t_{mod}(x,y) > H_t(x,y) \\ 0 & \text{otherwise} \end{cases}$   (8)
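With per-pixel threshold matrices in hand, the angle-module rule is a single vectorised comparison. A minimal sketch (names are ours; the angle and module-difference matrices are assumed to come from Eqs. (1)–(4)):

```python
import numpy as np

# Sketch of the angle-module rule, Eq. (8): a pixel is foreground when its
# angle exceeds the per-pixel angle threshold Omega_t OR its module
# difference exceeds the per-pixel module threshold H_t.
def angle_module_rule(theta_t, delta_mod_t, omega_t, h_t):
    # All inputs are H x W matrices; the output is a binary foreground map.
    return ((theta_t > omega_t) | (delta_mod_t > h_t)).astype(np.uint8)
```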
Both the background model and the threshold matrices represent statistical properties of the pixel intensities observed in the image sequences from earlier moments, $\{I_k(x,y)\}$ for $k < t$. $B_1(x,y)$ is initialised with the first image (in which it is supposed that there are no moving objects), in other words, $B_1(x,y) = I_1(x,y)$, and $H_1(x,y)$ and $\Omega_1(x,y)$ are initialised with some predetermined value different from zero. The literature offers several ways of updating this type of matrix over time (Collins et al., 2000; Kim and Kim, 2003; Dedeoglu, 2004; Leone and Distante, 2007). Here, we propose the approach expressed in (9)–(11), where $\alpha_B \in [0.0, 1.0]$ is a learning constant that specifies how much information from the incoming image is transferred to the background; $\alpha_\Omega, \beta_\Omega \in [0.0, 1.0]$ are learning constants that specify how much information from the angle matrix, weighted by the value of $\gamma_{\Omega i}$, $i = \{1,2\}$, is transferred to the angle threshold matrix; and $\alpha_H, \beta_H \in [0.0, 1.0]$ are learning constants that specify how much information from the matrix of module differences, weighted by the value of $\gamma_{Hi}$, $i = \{1,2\}$, is transferred to the module threshold matrix. The values for $\alpha_i$, $i \in \{B, \Omega, H\}$, $\beta_j$, $j \in \{\Omega, H\}$, $\gamma_{\Omega k}$ and $\gamma_{Hk}$, $k = \{1,2\}$, are adjusted according to experience, based on the type of scene and the objectives to achieve at later stages in the segmentation process:

$B_{t+1}(x,y) = \begin{cases} \alpha_B B_t(x,y) + (1-\alpha_B)\, I_t(x,y) & \text{if } (x,y) \notin F_t(x,y) \\ B_t(x,y) & \text{if } (x,y) \in F_t(x,y) \end{cases}$   (9)

$\Omega_{t+1}(x,y) = \begin{cases} \alpha_\Omega\, \Omega_t(x,y) + (1-\alpha_\Omega)\, \gamma_{\Omega 1} \Theta_t(x,y) & \text{if } (x,y) \notin F_t(x,y) \\ \beta_\Omega\, \Omega_t(x,y) + (1-\beta_\Omega)\, \gamma_{\Omega 2} \Theta_t(x,y) & \text{if } (x,y) \in F_t(x,y) \end{cases}$   (10)

$H_{t+1}(x,y) = \begin{cases} \alpha_H H_t(x,y) + (1-\alpha_H)\, \gamma_{H1} \Delta^t_{mod}(x,y) & \text{if } (x,y) \notin F_t(x,y) \\ \beta_H H_t(x,y) + (1-\beta_H)\, \gamma_{H2} \Delta^t_{mod}(x,y) & \text{if } (x,y) \in F_t(x,y) \end{cases}$   (11)
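The selective updates (9)–(11) can be sketched with masked running averages. The default constants below are illustrative placeholders, not values prescribed by the paper, and the function name is ours:

```python
import numpy as np

# Sketch of Eqs. (9)-(11). fg is the binary foreground map F_t;
# bg (H x W x 3), omega and h (H x W) are the background model and the
# per-pixel angle/module threshold matrices. Returns the updated arrays.
def update_models(bg, omega, h, I_t, theta_t, delta_mod_t, fg,
                  a_b=0.9, a_o=0.9, b_o=0.9, a_h=0.9, b_h=0.9,
                  g_o1=1.0, g_o2=1.0, g_h1=1.0, g_h2=1.0):
    fgm = fg.astype(bool)                      # pixels inside F_t
    # Eq. (9): the background is only updated for non-foreground pixels.
    new_bg = np.where(fgm[..., None], bg, a_b * bg + (1 - a_b) * I_t)
    # Eq. (10): angle threshold, different constants inside/outside F_t.
    new_omega = np.where(fgm,
                         b_o * omega + (1 - b_o) * g_o2 * theta_t,
                         a_o * omega + (1 - a_o) * g_o1 * theta_t)
    # Eq. (11): module threshold, same scheme with the module differences.
    new_h = np.where(fgm,
                     b_h * h + (1 - b_h) * g_h2 * delta_mod_t,
                     a_h * h + (1 - a_h) * g_h1 * delta_mod_t)
    return new_bg, new_omega, new_h
```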
It should be stressed that the $B_t(x,y)$ value is only updated for non-foreground points. This could cause the appearance of foreground noise from ghost blobs (Dedeoglu, 2004). The solution to this problem is considered in the following section.
4. Noise detection
The aim of this section is to describe the characteristics of some notable regions that may be defined in the angle-module space, by combining the fulfilment or non-fulfilment of conditions (5) and (6), and properly choosing the values $\omega_0$ and $h_0$ from some domain knowledge heuristic. The result is the characterisation of the different types of noise blobs according to whether or not they belong to one of these regions, allowing a more operative definition than the one in Section 2. Finally, to a great extent, this characterisation will facilitate the construction of filtering operators that will be applied selectively to eliminate each type of noise blob.
4.1. Detecting shadows
One of the major causes of foreground noise is the shadows that objects project, with highly undesirable effects (Horprasert et al., 1999; Salvador et al., 2004). Therefore, it is necessary to use a method to eliminate this type of noise because, otherwise, failure at a possible subsequent stage of tracking and/or classification is almost certain.

To characterise this type of noise, we make use of the fact that, for each pixel belonging to a shadow region, the associated image RGB vector is approximately in the same direction as the background RGB vector, and the RGB vector module of the image pixel is always slightly less than the corresponding vector module of the background pixel (Horprasert et al., 1999). This means that all shadow noise image points will be confined in the lower truncated cone region of the RGB space. Thus, initially, the shadow map, $Sh_t(x,y)$, is given by Eq. (12), where $\phi_{sh}$ and $h_{sh}$ are thresholds:

$Sh_t(x,y) = \begin{cases} 1 & \text{if } \Theta_t(x,y) \leq \phi_{sh} \text{ and } |B_t(x,y)| \geq |I_t(x,y)| \geq |B_t(x,y)| - h_{sh} \\ 0 & \text{otherwise} \end{cases}$   (12)
The shadow map thus defined also contains all those MVO pixels whose colour is confined in this small region, and their elimination from the foreground produces background noise. In order to minimise the inclusion of this type of point in the shadow map, a second definition, (13), is proposed, where $h_{sh_i}$ and $\phi_{sh_i}$, $i = \{1,2\}$, are thresholds that fulfil $h_{sh1} > h_{sh2}$, $h_{sh1}, h_{sh2} \in [0.0, 1.0]$ and $\phi_{sh1} < \phi_{sh2}$. This allows greater flexibility to delimit the shadow region (see Fig. 3a). Experimentally, it is observed that good results are obtained with $\phi_{sh1}, \phi_{sh2} \in [0.0, 6.0]$ and $h_{sh1}, h_{sh2} \in [0.5, 1.0]$. Instead of working with absolute module threshold values, as proposed in (12), in the new definition of the shadow map, percentage values are calculated relative to the module of the background RGB vector. It is a way of normalising with respect to the module size. From a practical point of view, this allows better results to be obtained:

$Sh_t(x,y) = \begin{cases} 1 & \text{if } \phi_{sh1} \leq \Theta_t(x,y) \leq \phi_{sh2} \text{ and } h_{sh1}|B_t(x,y)| \geq |I_t(x,y)| \geq h_{sh2}|B_t(x,y)| \\ 0 & \text{otherwise} \end{cases}$   (13)
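The normalised shadow map (13) can be sketched as follows. The default thresholds here are illustrative values that we picked inside the experimentally reported ranges (angles assumed in degrees); they are not the paper's tuned settings:

```python
import numpy as np

# Sketch of the shadow map, Eq. (13): a pixel is shadow when its angle to
# the background vector is small and its module lies between the fractions
# h_sh2 and h_sh1 of the background module (h_sh1 > h_sh2).
def shadow_map(theta_t, mod_I, mod_B, phi_sh1=0.0, phi_sh2=6.0,
               h_sh1=0.95, h_sh2=0.6):
    angle_ok = (phi_sh1 <= theta_t) & (theta_t <= phi_sh2)
    module_ok = (h_sh1 * mod_B >= mod_I) & (mod_I >= h_sh2 * mod_B)
    return (angle_ok & module_ok).astype(np.uint8)
```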
4.2. Detecting reﬂections
To characterise this type of noise, we apply the opposite reasoning to the one used for shadows. Thus, for each reflection pixel, the associated image RGB vector is approximately in the same direction as the background RGB vector, and the RGB vector module of the image pixel is always slightly greater than the corresponding vector module of the background pixel. This means that all reflection noise image points will be confined in the upper truncated cone region of the RGB space (see Fig. 3b). Therefore, the reflection map, $Rf_t(x,y)$, is given by (14), where $h_{rf_i}$ and $\phi_{rf_i}$, $i = \{1,2\}$, are thresholds that fulfil $h_{rf1} > h_{rf2}$, $h_{rf1}, h_{rf2} \in [1, \infty)$ and $\phi_{rf1} < \phi_{rf2}$. Experimentally, it is observed that the best results are obtained with $\phi_{rf1}, \phi_{rf2} \in [0.0, 6.0]$ and $h_{rf1}, h_{rf2} \in [1.0, 2.0]$:

$Rf_t(x,y) = \begin{cases} 1 & \text{if } \phi_{rf1} \leq \Theta_t(x,y) \leq \phi_{rf2} \text{ and } h_{rf1}|B_t(x,y)| \geq |I_t(x,y)| \geq h_{rf2}|B_t(x,y)| \\ 0 & \text{otherwise} \end{cases}$   (14)
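The reflection map (14) mirrors the shadow map with factors above 1. Again the default thresholds are our illustrative picks from the reported ranges, not tuned values:

```python
import numpy as np

# Sketch of the reflection map, Eq. (14): same angular test as the shadow
# map, but the image module must now EXCEED the background module by a
# factor between h_rf2 and h_rf1 (both >= 1, h_rf1 > h_rf2).
def reflection_map(theta_t, mod_I, mod_B, phi_rf1=0.0, phi_rf2=6.0,
                   h_rf1=1.5, h_rf2=1.05):
    angle_ok = (phi_rf1 <= theta_t) & (theta_t <= phi_rf2)
    module_ok = (h_rf1 * mod_B >= mod_I) & (mod_I >= h_rf2 * mod_B)
    return (angle_ok & module_ok).astype(np.uint8)
```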
4.3. Detecting ﬂuctuations
To characterise this type of noise in the context of our approach, it is necessary to consider that fluctuation will be associated with small variations in the module differences between $|I_t|$ and $|B_t|$, and small variations in the angles, $\Theta_t$. Consequently, all points from the fluctuation noise image will be confined in the upper and lower truncated cone regions of the RGB space, with very small h and $\omega$ (see Fig. 3c). Thus, the fluctuation map, $Fl_t(x,y)$, is given by (15), where $h_{fl}$ and $\phi_{fl}$ are now absolute thresholds of small value:

$Fl_t(x,y) = \begin{cases} 1 & \text{if } \Theta_t(x,y) \leq \phi_{fl} \text{ and } \mathrm{abs}(|I_t(x,y)| - |B_t(x,y)|) \leq h_{fl} \\ 0 & \text{otherwise} \end{cases}$   (15)
Nevertheless, note that the condition that makes it possible to delimit the fluctuation noise region, characterised by Eq. (15), is exactly the opposite of the condition expressed by the angle-module rule, (8). This means that the segmentation obtained initially by applying just the angle-module rule excludes all the fluctuation noise points and, therefore, it is not necessary to use any operator to eliminate this type of noise.
4.4. Detecting ghosts
The elimination of ghosts could have been done (Dedeoglu, 2004) if we had updated the background of all the fore-

Fig. 3. Noise blob characterisation based on the truncated cone method: (a) shadow region, (b) reflection region, (c) fluctuation region.
