A SLAM algorithm for indoor mobile robot localization using an Extended Kalman Filter and a segment based environment mapping
Luigi D’Alfonso, Andrea Griffo, Pietro Muraca, Paolo Pugliese
Abstract
—The present paper addresses the Simultaneous Localization And Mapping (SLAM) problem for a mobile robot in an unknown indoor environment. A set of segments is used to model the environment surrounding the robot, and the segments' starting and ending points are used as SLAM landmarks. A segment based mapping algorithm is proposed and used along with an Extended Kalman filter driven by measurements taken by ultrasonic sensors located on the robot. The proposed SLAM algorithm has been tested in both simulated and real experiments, yielding encouraging estimation and mapping results.
I. INTRODUCTION
Smith and Cheeseman [1] introduced the Simultaneous Localization And Mapping (SLAM) problem in the 1980s, and since then the problem has received considerable attention. The SLAM problem involves the joint estimation of a mobile robot pose (position and orientation) and of the environment where the robot moves. To solve the SLAM problem, the robot needs to build an environment map and to localize itself within this map. In mobile robotics the SLAM problem is considered a chicken-and-egg problem due to the strong correlation between the localization task and the mapping task: to know where the robot is, a good environment map is needed, but to build a reliable environment map, the robot's position and orientation must be estimated as accurately as possible. The literature offers various solutions to the SLAM problem [2], [3], [4], [5], [6] using various sensor types, e.g. wheel encoders, laser range sensors, sonar range sensors and cameras. Each SLAM algorithm is strongly influenced by the sensor types used: the same algorithm can be very efficient with some sensors but yield poor performance with different ones. Thanks to its sensors, the robot can observe (sometimes only partially and inexactly) itself and the world where it moves. Once the sensors' measurements have been acquired, depending on the measurement typology, various algorithms can be used to obtain the robot pose and to reconstruct the robot environment. For instance, laser sensor measurements can be fused through an Extended Kalman filter (EKF) as shown in [2], while camera measurements can be used along with a Rao-Blackwellised particle filter (RBPF) to solve the SLAM problem, see [4]. Laser sensors and cameras probably represent the best way to face the SLAM problem, but they have some drawbacks. Laser sensors can be very expensive, while cameras provide a large amount of information that may be difficult and computationally onerous to extract from the camera image. The SLAM problem can also be solved using ultrasonic sensors. These sensors are less expensive than laser scanners and range cameras, their use is computationally cheap, and they work well also in dark or transparent environments, where cameras usually fail.
Authors are with DIMES, Università della Calabria, Rende, Italy. Email: {ldalfonso,muraca,pugliese}@dimes.unical.it, and.griffo@gmail.com
Many works on the use of ultrasonic sensors in SLAM applications can be found in the literature. In [6] Tardos et al. describe a technique to face the SLAM problem using standard sonar sensors and the Hough transform [9] to detect corners, edges and walls in the environment starting from the acquired sonar data. In [7] a pose-based algorithm to solve the SLAM problem for an autonomous underwater vehicle is proposed: a probabilistic scan matching technique using range scans gathered from a mechanical scanning imaging sonar is used along with two Extended Kalman filters, one estimating the local path traveled by the robot and the other estimating and keeping the registered SLAM landmarks. Very often SLAM algorithms assume at least some information about the environment, or they model the environment in a very approximate way. For example, in [8] the authors assume the robot is placed in an environment modeled as a set of orthogonal-parallel lines. On the contrary, in the present paper we propose a solution to the SLAM problem in an unknown environment using noisy sonar measurements and a very simple but effective structure for the environment map. The proposed solution uses a segment based model for the environment surrounding the robot and solves the SLAM problem through an Extended Kalman filter suitably adapted to the above environment model. In the SLAM context it is mandatory to satisfy time constraints, since the information provided by SLAM algorithms is typically used to compute the output of a control law designed, for example, to make the robot follow a given trajectory or complete a given task. The proposed SLAM algorithm has been developed looking for a very simple but efficient strategy in terms of computational requirements.
It will be shown, through simulations and experimental tests, that the resulting algorithm can be suitably adapted to satisfy time constraints while ensuring good estimation and mapping results. The paper is organized as follows: in Section II the problem statement is described; in Section III the proposed SLAM algorithm is discussed; in Section IV experimental and numerical results are shown; and in Section V conclusions are drawn and possible extensions for future investigations are proposed.

II. PROBLEM STATEMENT
Assume a mobile robot is placed in an unknown environment and is equipped with on-board ultrasonic sensors able to provide the distance of the robot from the environment bounds. Since there is no a priori knowledge of the environment, a model for the environment boundaries is required in order to derive the output equation related to the on-board sonar sensors. In the following subsections the robot and the environment models are described.
A. Robot model
Consider a battery-powered mobile robot with two independent driving wheels and a castor wheel: a concrete example is the Khepera III robot [10], which has been used in the experiments described in Section IV. For such a robot, an approximated, discrete-time model is [11]:
x^1_{k+1} = x^1_k + V_k T cos(θ_{k+1}) + w^1_k
x^2_{k+1} = x^2_k + V_k T sin(θ_{k+1}) + w^2_k
θ_{k+1} = θ_k + Δ_k + w^θ_k,    (1)
where:
• (x^1_k, x^2_k) is the robot position and θ_k is the angle between the robot axle and the x^1 axis, at time t_k;
• V_k = r(ω_{lk} + ω_{rk})/2 is the robot linear velocity and ω_{lk}, ω_{rk} are the wheels' angular velocities;
• w^1_k, w^2_k, w^θ_k are zero-mean uncorrelated Gaussian noises;
• r is the wheels' radius and d is the distance between the two active wheels;
• Δ_k = r(ω_{lk} − ω_{rk})T/d is the rotation within the sampling period T = t_k − t_{k−1}.

The model input variables are the wheels' angular velocities ω_{lk} and ω_{rk}; they have been precomputed so that the robot follows the desired trajectories. The model state variables are x_k = [x^1_k  x^2_k  θ_k]^T. The Gaussian disturbances take into account unmodeled dynamics, friction, wheel slipping and, if present, external disturbances such as wind.
B. Environment model
The robot is assumed to be equipped with n_S = 5 ultrasonic sensors, placed as shown in Figure 1. The environment is modeled as a set of segments, each of which intersects the environment boundaries in at least one point (Figure 1).
Fig. 1. Real environment (red line), segment based modeled environment (blue line), robot on-board sensor positions.
Each sensor S_i provides, at each step k, the distance from the robot center, denoted by p = (x^1_k, x^2_k), to one point on the environment boundaries, denoted by p̃_i = (x̃^1, x̃^2). Considering the model used for the environment, the measurement provided by the sensor S_i is approximated, as shown in Figure 1 for the case S_i = S_3, by the distance from p to the intersection point, p_i = (x^1, x^2), between the axis of the sensor S_i and one of the environment modeling segments. Writing the axis of the sensor S_i as x^2 = a_i x^1 + q_i and denoting the line supporting the modeling segment in front of the sensor as x^2 = c_i x^1 + s_i, the coordinates of the intersection point p_i are given by

x^1 = (s_i − q_i)/(a_i − c_i),   x^2 = (a_i s_i − c_i q_i)/(a_i − c_i),    (2)

and the measurement, y_i, from sensor S_i, is modeled as

y_i = sqrt((x^1_k − x̃^1)^2 + (x^2_k − x̃^2)^2)

and approximated by

y_i ≈ sqrt((x^1_k − x^1)^2 + (x^2_k − x^2)^2).    (3)
Since the robot structure is given, the orientation α_i of each sensor S_i with respect to the robot axis (placed on the third sensor axis, orthogonal to the wheel axle) is known, and the axis of the sensor S_i is therefore given by:

a_i = tan(θ_k + α_i),   q_i = x^2_k − a_i x^1_k.    (4)

Substituting (4) into (3), an observation function depending on the robot state and on the segment parameters (s_i, c_i) is obtained:

y_i ≈ h((x^1_k, x^2_k, θ_k), (s_i, c_i)),   i = 1, ..., n_S.
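Equations (2)-(4) amount to a line-line intersection followed by a Euclidean distance. A minimal sketch (symbol names follow the paper; the numeric example at the bottom, a wall on the line x^2 = 1, is made up for illustration):

```python
import math

def predicted_range(x, alpha_i, c_i, s_i):
    """Predicted sonar range per eqs. (2)-(4): intersect the sensor axis
    with the modeling-segment line x2 = c_i*x1 + s_i and return the
    distance from the robot position.
    """
    x1k, x2k, theta = x
    a_i = math.tan(theta + alpha_i)        # sensor axis slope, eq. (4)
    q_i = x2k - a_i * x1k                  # sensor axis intercept, eq. (4)
    # intersection point p_i, eq. (2); a_i == c_i would mean parallel lines
    x1 = (s_i - q_i) / (a_i - c_i)
    x2 = (a_i * s_i - c_i * q_i) / (a_i - c_i)
    return math.hypot(x1k - x1, x2k - x2)  # approximated measurement, eq. (3)

# robot at the origin with theta = 0, sensor tilted by pi/4, wall x2 = 1:
y3 = predicted_range((0.0, 0.0, 0.0), math.pi / 4, 0.0, 1.0)  # ≈ sqrt(2)
```

Note the degenerate case a_i = c_i (sensor axis parallel to the segment), which an implementation would have to guard against.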
These relationships define the output equation of the robot model and are written in the more compact form

y_k ≈ h(x_k, (s_k, c_k)) + v_k,    (5)
where v_k is a Gaussian noise, assumed uncorrelated with w_k = [w^1_k  w^2_k  w^θ_k]^T. The vectors s_k and c_k contain the parameters (s_i, c_i) of the segments intercepted by the sensors S_i, i = 1, ..., n_S, at step k. The goal of the present work is to estimate the position and orientation of the robot, x_k = [x^1_k  x^2_k  θ_k]^T, and to simultaneously find a segment based approximation of the environment surrounding the robot. To achieve this goal, a segment based SLAM algorithm is described in the next Section.

III. SEGMENT BASED SLAM

Bailey et al. [12], [13] divide a SLAM algorithm into 5 main parts: landmark extraction, data association, state estimation, state update and landmark update. Using the sonar measurements and the current robot pose estimate, the data association and landmark extraction processes are performed and yield the current environment mapping. Starting from this mapping and from the model inputs, the state estimation, state update and landmark update processes provide the robot pose and update the environment map.
A. Landmark extraction and Data association
Starting from the segment based model for the environment surrounding the robot, the proposed segment based SLAM algorithm uses the starting and ending points of each modeling segment as landmarks. The main idea behind the landmark extraction and data association algorithm is to use the currently acquired measurements to update the current environment mapping, by finding new modeling segments of the surrounding environment or by improving the previously obtained ones. The following notation will be used:
• E_k = [p_{k,1}, p_{k,2}, ..., p_{k,n_k}] is an array of n_k sorted points. In the following, an array of points is considered sorted if each couple (p_{k,i}, p_{k,i+1}) of consecutive points represents the starting and ending points of one of the environment modeling segments.
• 𝓔_k is a set containing all the points in E_k.
• Π_k is the set of currently acquired environment points. Using the state estimate x̂_k along with the measurements y_k provided by the on-board ultrasonic sensors, an approximation of the environment points detected by the sensors can be obtained as b_{ik}, i = 1, ..., n_S:

b_{ik} = [x̂^1_k + y_{i,k} cos(θ̂_k + α_i),  x̂^2_k + y_{i,k} sin(θ̂_k + α_i)],    (6)

where y_{i,k} is the measurement provided by sensor S_i.
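The back-projection (6) of sonar ranges into candidate environment points can be sketched as follows; the five sensor angles in the example are assumed for illustration, since the paper specifies the sensor placement only through Figure 1:

```python
import math

def backproject(x_hat, alpha, y_meas):
    """Back-project sonar ranges into environment points b_ik, eq. (6).

    x_hat = estimated pose [x1, x2, theta]; alpha[i] = orientation of
    sensor S_i w.r.t. the robot axis; y_meas[i] = range from sensor S_i.
    Returns the set Pi_k of currently acquired environment points.
    """
    x1, x2, th = x_hat
    return [(x1 + y * math.cos(th + a), x2 + y * math.sin(th + a))
            for a, y in zip(alpha, y_meas)]

# pose at the origin, heading 0; a 5-sensor fan (angles are an assumption)
pts = backproject([0.0, 0.0, 0.0],
                  [-math.pi/2, -math.pi/4, 0.0, math.pi/4, math.pi/2],
                  [1.0, 1.0, 1.0, 1.0, 1.0])
```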
The goal of the landmark extraction and data association process is to find E_k starting from the previous environment mapping E_{k−1}, the predicted robot state x̂_{k|k−1} and the acquired measurements y_k. To accomplish this task the main idea is to compute a new landmark related to each point in Π_k. The obtained n_S landmarks are then suitably added to E_{k−1}, yielding E_k. More precisely, at each step k, given the prediction x̂_{k|k−1} and the acquired measurements y_k, the set Π_k is computed. For each point b_{ik} ∈ Π_k, the set of points close to b_{ik}, within a neighborhood δ, is obtained as

Ω_i = {p ∈ 𝓔_{k−1} ∪ Π_k : ‖b_{ik} − p‖ ≤ δ},

where δ > 0 is one of the algorithm parameters. For each set Ω_i a virtual point q_i is obtained by computing the weighted mean of the points contained in the set (see Figure 2(a)). More precisely, the set Ω_i can contain currently acquired points or virtual points obtained in the previous steps. The new virtual point q_i is obtained as

q_i = g_x(x̂_{k|k−1}, y_k) = (1/M) Σ_{p_i ∈ Ω_i} m_i p_i,   M = Σ_{i=1}^{|Ω_i|} m_i,    (7)

where |Ω_i| is the number of elements contained in Ω_i; m_i = 1 if p_i is one of the currently acquired points (p_i ∈ Π_k), while if p_i is one of the previously obtained virtual points, computed using n_i points, then m_i = n_i.
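The neighborhood search and the weighted mean (7) can be sketched as follows; the candidate representation as (point, weight) pairs is an implementation assumption:

```python
import math

def virtual_point(b_ik, candidates, delta):
    """Weighted-mean virtual point, eq. (7).

    candidates: list of (point, weight) pairs, where the weight m_i is 1
    for a freshly acquired point and n_i for a virtual point previously
    built from n_i points. Returns (q_i, M), or None when the
    neighborhood Omega_i is empty.
    """
    omega = [(p, m) for p, m in candidates
             if math.dist(b_ik, p) <= delta]          # Omega_i
    if not omega:
        return None
    M = sum(m for _, m in omega)                      # total weight M
    q = tuple(sum(m * p[d] for p, m in omega) / M for d in (0, 1))
    return q, M

# one old virtual point built from 3 points and one new acquisition:
# the old point pulls the mean towards itself with weight 3
q, M = virtual_point((0.0, 0.0), [((0.04, 0.0), 3), ((-0.04, 0.0), 1)], 0.1)
```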
Note that the proposed weighted mean is a function of the robot predicted pose x̂_{k|k−1} and of the currently acquired measurements y_k, since each point in Π_k is computed using (6); the function g_x(x̂_{k|k−1}, y_k) is thus obtained by substituting (6) into (7). The proposed weighted mean is such that the larger the number of points represented by a previously computed virtual point, the larger the influence of that point on the new virtual point q_i. In the end, the number of points described by the new virtual point q_i is M, and this number is associated with the new virtual point for future algorithm executions. For each set Ω_i there is a new virtual point q_i, and each of these points is associated with a number of mapped points M. Once all the new virtual points have been obtained, each set of points Ω_i can be removed from 𝓔_{k−1}. Therefore, the new array E_{k−1} can be computed as the sorted array (following the previously described sorting definition) containing the points in 𝓔_{k−1} = 𝓔̃_{k−1}, where 𝓔̃_{k−1} = 𝓔_{k−1} \ (∪_{i=1}^{n_S} Ω_i). Figure 2(a) shows E_{k−1} before the Ω_i deletion process, while Figure 2(b) shows the resulting E_{k−1} after the Ω_i deletion process. As shown in these figures, after the Ω_i deletion process all the segments whose starting and ending points lie in Ω_i are removed.

At this point, the new virtual points q_i, i = 1, ..., n_S, have to be inserted into the vector E_{k−1}. As previously remarked, this vector contains the sorted sequence of starting and ending points of each of the environment modeling segments. To insert a new virtual point q_i into E_{k−1} without violating this ordering, three main steps are performed. First, the line r between the robot position and the virtual point is computed. Second, the subvector e_{k−1} of E_{k−1}, containing all the segments in E_{k−1} intercepted by r and in front of the robot, is obtained. Finally, the segment l̂_{k−1} contained in e_{k−1} and closest to the robot along the line r is computed; let p_{k−1,j}, p_{k−1,j+1} be its starting and ending points respectively (see Figure 2(b)). To ensure the array E_{k−1} remains sorted after the point q_i is added, this point has to be inserted between the starting point and the ending point of l̂_{k−1}. Therefore, when a new virtual point q_i is inserted into E_{k−1}, one of the previously obtained segments is deleted and substituted by two new segments, the first one ending in q_i and the second one starting from q_i. Figures 2(c) and 2(d) show an example of virtual point insertion into the landmarks array E_{k−1}; in particular, Figure 2(d) shows the resulting environment mapping after the new point insertion. Note that after the Ω_i deletion process, the resulting intermediate mapping (black line in Figure 2(b)) is usually worse than the previous one (black line in Figure 2(a)), but after adding the obtained virtual point q_i, the final mapping (black and green line in Figure 2(d)) is better and contains a lower number of segments than the starting mapping (black line in Figure 2(a)). Moreover, comparing the segments related to the points contained in Ω_i (orange points in Figure 2(a)) with the segments related to q_i (Figure 2(d)), the latter are less influenced by noise than the former, thanks to equation (7), which can be seen as a noise filtering operation.

As a drawback of the described insertion strategy, segments with very short length can appear after the new virtual point insertion; moreover, starting from a single segment, two new segments are always obtained and, as a consequence, the number of segments can increase step by step, with a resulting high computational cost for the algorithm. To face these problems, a minimum acceptable segment length, σ > 0, has been defined: if both new segments have a length less than or equal to σ, they are discarded. Let E_{k−1} = [p_{k−1,1}, ..., p_{k−1,j}, p_{k−1,j+1}, ..., p_{k−1,n_{k−1}}] be the array containing the landmarks, and let q_i be a virtual point to insert into this array. If ‖p_{k−1,j} − q_i‖ > σ or ‖q_i − p_{k−1,j+1}‖ > σ, then the resulting sorted array will be

E_{k−1} = [p_{k−1,1}, ..., p_{k−1,j}, q_i, p_{k−1,j+1}, ..., p_{k−1,n_{k−1}}],

where the points p_{k−1,j}, p_{k−1,j+1} are computed using the previously described strategy. The related set of elements will be 𝓔_{k−1} = 𝓔̃_{k−1}, where 𝓔̃_{k−1} = 𝓔_{k−1} ∪ {q_i}. Otherwise, if the condition on the minimum segment length is not satisfied, the new virtual point is not added and the array E_{k−1} and the set 𝓔_{k−1} do not change. Note that the parameter σ has to be chosen carefully, depending on the desired performance. If σ → 0 then very short segments are accepted, resulting in a more precise environment estimation but also in a high computational cost. On the contrary, if a large σ value is chosen, the algorithm computational cost will be low, but the resulting estimation and mapping performance can be very poor. After all the new virtual points have been considered, the obtained array E_{k−1} and its related set of points 𝓔_{k−1} can be taken as the updated environment map, thus E_k = E_{k−1} and 𝓔_k = 𝓔_{k−1}.
The entire landmark extraction and data association process will be denoted in the following sections as

E_k = L(E_{k−1}, y_k, x̂_{k|k−1}).
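The order-preserving insertion with the minimum-length test can be sketched as follows; the index j of the closest intercepted segment l̂ is assumed to be already known (its computation along the line r is described above):

```python
import math

def insert_virtual_point(E, j, q, sigma):
    """Insert virtual point q between E[j] and E[j+1] (the starting and
    ending points of the closest intercepted segment), keeping the array
    sorted in the sense defined above.

    The insertion is rejected when both resulting segments would be no
    longer than the minimum acceptable length sigma. Returns the array.
    """
    if math.dist(E[j], q) > sigma or math.dist(q, E[j + 1]) > sigma:
        return E[:j + 1] + [q] + E[j + 1:]   # split segment (E[j], E[j+1])
    return E                                  # both pieces too short: discard q

E = [(0.0, 0.0), (1.0, 0.0)]
E2 = insert_virtual_point(E, 0, (0.5, 0.1), sigma=0.08)  # accepted
E3 = insert_virtual_point(E, 0, (0.5, 0.1), sigma=2.0)   # rejected
```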
B. State estimation, State and Landmark Update
The state estimation and the state and landmark update problems can be solved using an Extended Kalman filter.
Fig. 2. (a): virtual point q_i computation, the orange points and the red point are in Ω_i; (b): segment l̂_k computation; (c) and (d): virtual point insertion.
Given a nonlinear stochastic system, observed through noisy measurements and represented by the equations

x_{k+1} = f(x_k, u_k) + w_k
y_k = h(x_k) + v_k,    (8)

where x_k is the state to be estimated, y_k is the measured output, and w_k and v_k are the process and measurement noises, the Extended Kalman filter (EKF) is based on the linearization of the nonlinear maps (f, h) of (8) around the estimated trajectory. The filter is based on the assumption that the initial state, the measurement noise and the process noise are Gaussian and mutually uncorrelated. From the computational point of view the EKF is simply a time-varying Kalman filter where the dynamic and output matrices are given by

A_k = ∂f(x_k, u_k)/∂x_k |_{x_k = x̂_{k|k}},   C_k = ∂h(x_k)/∂x_k |_{x_k = x̂_{k|k−1}},    (9)

and its output is a sequence of state estimates x̂_{k|k} and estimation error covariance matrices P_{k|k}.
Ignoring s_k and c_k, using model (1) as the state update function f(·) and equation (5) as the model output equation, the described framework falls within (8) on defining

x_k = [x^1_k  x^2_k  θ_k]^T,   u_k = [ω_{lk}  ω_{rk}]^T,   w_k = [w^1_k  w^2_k  w^θ_k]^T.

In the following we indicate with W and V the covariance matrices of w_k and v_k respectively; these matrices are assumed to be known a priori. To solve the SLAM problem, the Extended Kalman filter has to be suitably adapted to provide an estimate of the robot pose along with an estimate of the environment modeling segments and of their starting and ending points. To this end, the following augmented state is defined:
X_k = [x_k^T, p_{k,1}, p_{k,2}, ..., p_{k,n_k}]^T,

containing the robot state x_k and all the starting and ending points of the environment modeling segments. Note that each point p_{k,i} is represented by its x^1 and x^2 coordinates, and thus the dimension of the augmented state is 3 + 2 n_k; this dimension is time varying. The starting and ending points of each segment are considered as landmarks, and their positions are obviously constant whatever the control input. Moreover, the landmarks are assumed not to be influenced by any process noise; therefore the state update function, the process noise covariance matrix and the dynamic matrix related to the augmented state will be
X_{k+1} = [x_{k+1}; p_{k+1,1}; ...; p_{k+1,n_k}] = F([x_k; p_{k,1}; ...; p_{k,n_k}], u_k) = [f(x_k, u_k) + w_k; p_{k,1}; ...; p_{k,n_k}],

W̄ = [W  ∅; ∅  ∅_{2×n_k}],   Ā_k = [A_k  ∅; ∅  I_{2×n_k}],

where I_q and ∅_q are the identity matrix of order q and the null matrix with q × q elements, respectively.

The output equation (5) is a function of both the robot state and the environment modeling segments. Since the parameters (s, c) of each segment can be related to its starting and ending points, the output function can be seen as a function of the augmented state X_k:

y_k = h(x_k, (s_k, c_k)) + v_k = h(X_k) + v_k.

The output matrix will be
C̄_k = [ C_{1,k}    ∂h_1/∂p_{k,1}     ...  ∂h_1/∂p_{k,n_k}
        ...
        C_{n_S,k}  ∂h_{n_S}/∂p_{k,1}  ...  ∂h_{n_S}/∂p_{k,n_k} ],

where n_S is the number of robot sensors, C_{i,k} is the i-th row of the C_k matrix, h_i is the component of h(X_k) related to the i-th sensor, and ∂h_i/∂p_{k,i} is a two-dimensional array containing the partial derivatives of h_i w.r.t. the p_{k,i} coordinates. Indicating with x̂_k the estimate of the state x_k and with p̂_{k,i} the estimate of the landmark p_{k,i}, the estimation error covariance matrix related to the augmented state is
P̄_k = [ P_k                P(x̂_k, p̂_{k,1})        ...  P(x̂_k, p̂_{k,n_k})
        P(p̂_{k,1}, x̂_k)    P(p̂_{k,1}, p̂_{k,1})    ...  P(p̂_{k,1}, p̂_{k,n_k})
        ...
        P(p̂_{k,n_k}, x̂_k)  P(p̂_{k,n_k}, p̂_{k,1})  ...  P(p̂_{k,n_k}, p̂_{k,n_k}) ],

where P(a, b) is the estimation error covariance matrix between a and b.

Due to the landmark extraction process, new points can be inserted into the current mapping E_k or old points can be removed from it. Therefore the number of points n_k can change during the SLAM process, and the resulting sizes of the augmented state X_k and of the matrices Ā_k, C̄_k, P̄_k, W̄ are time varying. After the landmark extraction process, for each change in the mapping there is a corresponding change in the matrices Ā_k, C̄_k, P̄_k, W̄ and in the augmented state.
Let E_{k−1} = [p_{k−1,1}, ..., p_{k−1,j}, p_{k−1,j+1}, ..., p_{k−1,n_{k−1}}]^T and assume that L(·) extracts the point p and returns E_k = [p_{k−1,1}, ..., p_{k−1,j}, p, p_{k−1,j+1}, ..., p_{k−1,n_{k−1}}]^T. The augmented state becomes X_k = [x_k, E_k]^T and, as shown in [14], the estimation error covariance matrix related to the augmented state after the new landmark insertion is

P̃_k = G_x P̄_k G_x^T + G_y V G_y^T,   P̄_k = P̃_k.    (10)

The matrix G_x is

G_x = [ I_3        ∅        ...  ∅        ∅          ...  ∅
        ∅          I_{2,1}  ...  ∅        ∅          ...  ∅
        ...
        ∅          ∅        ...  I_{2,j}  ∅          ...  ∅
        ∂g_x/∂x_k  ∅        ...  ∅        ∅          ...  ∅
        ∅          ∅        ...  ∅        I_{2,j+1}  ...  ∅
        ...
        ∅          ∅        ...  ∅        ∅          ...  I_{2,n_k} ],

where I_{2,i} is the identity matrix of order two related to the i-th landmark and ∅ is a null matrix of appropriate size. The matrix G_y is

G_y = [ ∅  ...  ∅  ∂g_x/∂y_k  ∅  ...  ∅ ]^T,

where ∂g_x/∂y_k is the (j+1)-th block row of this matrix. In the above matrices, the function g_x is the landmark extraction function (7). As for landmark elimination, if the landmark extraction process deletes a point p_i from E_k, then the corresponding entries in the augmented state and in the matrix P̄_k have to be removed. Following the above update rules, it is convenient to define the update function

(X*_k, P̄*_k) = U(X_k, P̄_k, E_{k−1}, E_k),

which properly modifies the augmented state and the estimation error covariance matrix according to the variations in the environment mapping from E_{k−1} to E_k. The function outputs are then used as the new augmented state, X_k = X*_k, and as the new estimation error covariance matrix, P̄_k = P̄*_k. Finally, at each time step, using the observation structure (5), the landmark extraction function L(·) and the update function U(·), the resulting segment based SLAM algorithm can be summarized as follows:
Segment based SLAM

X̂_{k+1|k} = F(X̂_{k|k}, u_k)
P̄_{k+1|k} = Ā_k P̄_{k|k} Ā_k^T + W̄
E_{k+1} = L(E_k, y_{k+1}, x̂_{k+1|k})
(X̂*_{k+1|k}, P̄*_{k+1|k}) = U(X̂_{k+1|k}, P̄_{k+1|k}, E_k, E_{k+1})
X̂_{k+1|k} = X̂*_{k+1|k},   P̄_{k+1|k} = P̄*_{k+1|k}
K_{k+1} = P̄_{k+1|k} C̄_{k+1}^T (C̄_{k+1} P̄_{k+1|k} C̄_{k+1}^T + V)^{−1}
X̂_{k+1|k+1} = X̂_{k+1|k} + K_{k+1} (y_{k+1} − h(X̂_{k+1|k}))
P̄_{k+1|k+1} = P̄_{k+1|k} − K_{k+1} C̄_{k+1} P̄_{k+1|k}
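The measurement-update half of the filter (the last three lines above) is a standard EKF correction; a generic sketch, not the authors' implementation, is:

```python
import numpy as np

def ekf_update(X_pred, P_pred, y, h, C, V):
    """EKF measurement update on the augmented state.

    h and C are callables returning the predicted measurement and its
    Jacobian evaluated at X_pred; V is the measurement noise covariance.
    """
    Ck = C(X_pred)
    S = Ck @ P_pred @ Ck.T + V              # innovation covariance
    K = P_pred @ Ck.T @ np.linalg.inv(S)    # Kalman gain K_{k+1}
    X = X_pred + K @ (y - h(X_pred))        # corrected augmented state
    P = P_pred - K @ Ck @ P_pred            # corrected covariance
    return X, P

# tiny linear sanity check: scalar state observed directly, tiny noise,
# so the corrected state moves essentially onto the measurement
X, P = ekf_update(np.array([0.0]), np.eye(1),
                  np.array([1.0]), lambda x: x, lambda x: np.eye(1),
                  np.eye(1) * 1e-9)
```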
The algorithm initial conditions are 𝓔_0 = ∅, an empty array E_0 (n_0 = 0), X̂_{0|0} = [x̂_{0|0}] and P̄_{0|0} = P_{0|0}. The algorithm consists of an Extended Kalman filter used to estimate the augmented state formed by the robot pose and the landmark positions. Note that the landmark extraction function is invoked using only the first three elements of the estimated augmented state, which represent the robot state estimate.

IV. NUMERICAL AND EXPERIMENTAL RESULTS
The performance of the proposed SLAM algorithm has been tested through numerical simulations and real experiments. In both the experimental setup and the numerical simulations the Khepera III robot ([10]) has been used and modeled. The following parameters have been used:
• d = 0.09 m, r = 0.0205 m for the Khepera III robot;
• sampling period T = 1 s;
• V = 0.0004 I_5: a standard deviation of 0.02 m for the measurements provided by the ultrasonic sensors;
• W = diag{0.01^2, 0.01^2, 0.000002}: a standard deviation of 0.01 m on w^1_k and w^2_k, and a standard deviation of 0.1° on w^θ_k, for the robot model;
• P_{0|0} = diag{0.05^2, 0.05^2, 0.000002}: a standard deviation of 0.05 m on the robot initial position estimation error and a standard deviation of 0.1° on the robot initial heading estimation error;
• δ = 0.1 for the landmark extraction and data association algorithm.

Fig. 3. Segment based SLAM simulation result using σ = 0.08.

To evaluate the proposed algorithm performance, three indexes have been defined. The first index is
ε = (1/(k_f + 1)) Σ_{k=0}^{k_f} (‖x_k − x̂_{k|k}‖ / ‖x_k‖) × 100,

where k_f is the number of time steps over which each experiment/simulation is performed. This index represents the percentage relative estimation error between the estimated robot pose and the real one, and can be used to evaluate the localization performance. To evaluate the quality of the environment mapping, a further index
ρ has been used. The index is evaluated as follows. The proposed environment mapping algorithm provides a set of points p_{k,i}, i = 1, ..., n_k, such that each couple (p_{k,i}, p_{k,i+1}) represents a segment mapping the environment surrounding the robot. Given the i-th couple (p_{k,i}, p_{k,i+1}), a set of n_i points, P_{i,j}, j = 1, ..., n_i, equally spaced by an amount equal to 0.01, is computed on the segment between these two points. For each point P_{i,j}, the distance d_{i,j} between this point and its nearest real environment boundary point is computed, yielding the index

ρ = (1/(n_k − 1)) Σ_{i=1}^{n_k−1} ρ_i,   ρ_i = (1/n_i) Σ_{j=1}^{n_i} d_{i,j},

where n_k − 1 is the number of modeling segments contained in E_k.
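The mapping index ρ can be sketched as follows; the function `dist_to_boundary`, which returns the distance from a point to the nearest true boundary point, is assumed available from the known simulation environment:

```python
import math

def rho_index(points, dist_to_boundary, spacing=0.01):
    """Mapping-quality index rho: sample each modeling segment every
    `spacing` meters and average the sampled distances to the true
    environment boundary; then average over the segments.
    """
    rhos = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):   # couples (p_i, p_{i+1})
        length = math.hypot(x2 - x1, y2 - y1)
        n_i = max(int(length / spacing), 1)
        d = [dist_to_boundary((x1 + (x2 - x1) * j / n_i,
                               y1 + (y2 - y1) * j / n_i))
             for j in range(1, n_i + 1)]
        rhos.append(sum(d) / n_i)                         # rho_i
    return sum(rhos) / len(rhos)                          # rho

# a segment lying exactly on the boundary y = 0 yields rho = 0
r = rho_index([(0.0, 0.0), (1.0, 0.0)], lambda p: abs(p[1]))
```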
The third index is

τ = (1/(k_f + 1)) Σ_{k=0}^{k_f} τ_k,

where τ_k is the execution time of the segment based SLAM algorithm at step k. This index represents the average execution time of the SLAM algorithm and has been used to evaluate the time performance of the proposed algorithm. In the simulation setup, the Khepera III robot model has been tested in the environment shown in Figure 3. The angular velocities have been precomputed so that the robot follows the trajectory shown in Figure 3, starting from
(0.5, 0.9) m, with θ_0 = 0 rad, and passing by (3.9, 0.9) m, (3.9, −1.6) m, (3.7, −1.6), (3.7, 0.8) and (0.5, 0.8). The path has been performed in k_f = 500 steps. 150 numerical experiments have been performed, setting x̂_{0|0} = [0.5  0.9  0]^T as the initial condition for the SLAM algorithm. For each simulation, four possible values of the parameter σ have been tested: 0.06, 0.08, 0.1, 0.12. The obtained indexes, averaged over the 150 simulations, are shown in Table I. The execution times τ_k have been computed using MATLAB R2012b, running the proposed algorithm on an Intel Core 2 Duo T7300 CPU.