
Real time simultaneous localization and mapping: towards low-cost multiprocessor embedded systems

Abstract

Simultaneous localization and mapping (SLAM) is widely used by autonomous robots operating in unknown environments. The research community has developed numerous SLAM algorithms in the last 10 years, and several works have presented algorithmic optimizations. However, they have not explored system-wide optimization, from the hardware architecture down to the algorithmic development level. New computing technologies (SIMD coprocessors, DSPs, multi-cores) can greatly accelerate processing but require rethinking the algorithm implementation. This article presents an efficient implementation of the EKF-SLAM algorithm on a multi-processor architecture. The algorithm-architecture adequacy aims to optimize the implementation of the SLAM algorithm on a low-cost, heterogeneous architecture (an ARM processor with an SIMD coprocessor and a DSP core). Experiments were conducted with an instrumented platform. The results aim to demonstrate that an optimized implementation of the algorithm, resulting from an optimization methodology, can help to design embedded systems with low-cost multiprocessor architectures operating under real-time constraints.

Introduction

Autonomous robots must be able to localize themselves. Simultaneous localization and mapping (SLAM) algorithms aim to build a map of the environment while estimating the robot pose. Much research has been conducted to develop SLAM algorithms, such as the extended Kalman filter for SLAM (EKF-SLAM)[1, 2], FastSLAM[3], GraphSLAM[4] and DP-SLAM[5], which aim to improve consistency, accuracy or robustness. Other algorithms derive from the EKF-SLAM, such as those using the unscented Kalman filter (UKF)[6], which increases localization accuracy compared with the classical EKF algorithm based on a linearized model. Only a few works deal with the implementation of low-cost SLAM embedded systems.

Most SLAM implementations rely on accurate and dense measurements provided by expensive sensors such as laser rangefinders[7] or time-of-flight cameras[8]. High-priced smart sensors are not suitable for integration into most embedded systems aimed at commercial or industrial applications.

Simultaneous localization and mapping systems using low-cost sensors have recently been designed. Abrate et al.[9] provide an implementation of the EKF-SLAM algorithm on a Khepera robot. The robot hosts limited-range, sparse and noisy IR sensors. Experimental results have shown the importance of the sensor characteristics, the extraction of primitives (lines) and data association. Yap and Shelton[10] use cheap, noisy and sparse sonar sensors embedded in a P3-DX robot. To cope with these low-cost sensors, the implemented SLAM algorithm uses a multiscan approach and an orthogonality assumption to map indoor environments.

Classical SLAM algorithms are too computationally intensive to run on an embedded computing unit; they require at least laptop-level performance. Gifford et al.[11] present a low-cost approach to autonomous multi-robot mapping and exploration for unstructured environments. The robot hosts a Gumstix computing unit (600 MHz), 6 IR scanning range arrays, a 3-axis gyroscope and odometers. Running DP-SLAM alone on the Gumstix with 15 particles takes 3 s per update on average; with 25 particles, it takes more than 10 s per update. The authors underlined the difficulty of finding SLAM parameters that fit within the available computing power while preserving real-time processing. Magnenat et al.[12] present a system based on the co-design of a low-cost sensor (a slim rotating scanner), a SLAM algorithm, a computing unit and an optimization methodology. The computing unit is based on an ARM processor (533 MHz) running a FastSLAM 2.0 algorithm[13]. Magnenat et al.[12] use an evolution strategy to find the best configuration of the algorithm and the best setting of its parameters.

As pointed out in[11, 12], the first improvement of a SLAM algorithm is an efficient setting of its various parameters. Other modifications have been investigated to meet real-time constraints; they are necessary because of the low computing power and limited memory resources available on embedded systems. Restricting the number of features in the EKF-SLAM algorithm has been used to decrease the processing time[14]. Schröter et al.[15] focused on reducing the memory footprint of particle-based gridmap SLAM by sharing the map between several particles.

Robust laser-based SLAM navigation has long existed in robot applications, but such systems implement sensors that, in some cases, are more expensive than the final product. Neato Robotics has developed a vacuum cleaner whose navigation system relies on a SLAM algorithm; the approach is based on a low-cost system implementing a purpose-designed laser rangefinder[16].

This article presents an efficient implementation of the EKF-SLAM algorithm on a multi-processor architecture. The approach is based on an algorithm implementation adequate to a defined architecture. The aim is to optimize the implementation of the SLAM algorithm on a low-cost, heterogeneous architecture implementing an SIMD coprocessor (NEON) and a DSP core. The hardware includes several low-cost sensors. Like[17], we chose to use a low-cost camera (exteroceptive sensor) and odometers (proprioceptive sensors). Following[12], we efficiently tune the parameters of the SLAM algorithm. We improve on previous works by proposing an implementation of the EKF-SLAM algorithm adequate to a multiprocessing architecture (ARM processor, SIMD NEON coprocessor, DSP core). The specific features of the NEON coprocessor and the DSP core improve the processing time and the system performance. The results aim to demonstrate that an optimized implementation of the algorithm, resulting from an evaluation methodology, can help to design embedded systems with low-cost multiprocessor architectures operating under real-time constraints.

Section “EKF-SLAM algorithm” introduces the EKF-SLAM algorithm. Section “Multiprocessor architecture and system configuration” presents the embedded multiprocessor architecture and the system configuration. Section “Evaluation methodology and algorithm implementation” details the evaluation methodology, provides a first algorithm implementation and analyzes it in terms of processing time. A hardware–software optimization is proposed and analyzed in Section “Hardware–software optimization and improvements”; it presents SIMD optimizations and DSP parallelization, and a performance comparison is then performed between the optimized and non-optimized instances. Finally, Section “Conclusion” concludes this article.

EKF-SLAM algorithm

Overview

Extended Kalman filter for SLAM estimates a state vector containing both the robot pose and the landmark locations. We consider that the robot is moving on a plane. The algorithm uses 3D points as landmarks. It uses proprioceptive sensors to compute a predicted state vector and then corrects this state using exteroceptive sensors. In this article, we consider a wheeled robot embedding two odometers (attached to each rear wheel) and a camera.

State vector and covariance matrix

With N landmarks, the state vector is defined as:

$$x = \left(x,\ z,\ \theta,\ x_{a_1},\ y_{a_1},\ z_{a_1},\ \ldots,\ x_{a_N},\ y_{a_N},\ z_{a_N}\right)^T \tag{1}$$

where:

  • x,z are the ground coordinates (x-axis, z-axis) of the robot rear axle center. We suppose that the robot is always moving on the ground, so y=0 (no elevation) and y does not appear in Equation (1).

  • θ is the orientation of a local frame attached to the robot with respect to the global frame.

  • $x_{a_1}, y_{a_1}, z_{a_1}, \ldots, x_{a_N}, y_{a_N}, z_{a_N}$ are the 3D coordinates of the N landmarks in the global frame.

The state covariance matrix is defined as:

$$P = \begin{pmatrix}
P_{xx} & P_{xz} & P_{x\theta} & P_{x x_{a_1}} & \cdots & P_{x z_{a_N}} \\
P_{zx} & P_{zz} & P_{z\theta} & P_{z x_{a_1}} & \cdots & P_{z z_{a_N}} \\
P_{\theta x} & P_{\theta z} & P_{\theta\theta} & P_{\theta x_{a_1}} & \cdots & P_{\theta z_{a_N}} \\
P_{x_{a_1} x} & P_{x_{a_1} z} & P_{x_{a_1} \theta} & P_{x_{a_1} x_{a_1}} & \cdots & P_{x_{a_1} z_{a_N}} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
P_{z_{a_N} x} & P_{z_{a_N} z} & P_{z_{a_N} \theta} & P_{z_{a_N} x_{a_1}} & \cdots & P_{z_{a_N} z_{a_N}}
\end{pmatrix} \tag{2}$$

Prediction

The prediction step relies on the measurements of the proprioceptive sensors, the odometers, embedded on our experimental platform. A nonlinear discrete-time state-space model is considered to describe the evolution of the robot configuration x:

$$x_{k|k-1} = f\left(x_{k-1|k-1}, u_k\right) + v_k \tag{3}$$

where $u_k$ is a known two-dimensional control vector, assumed constant between the times indexed by k−1 and k, and $v_k$ is an unknown state perturbation vector that accounts for the model uncertainties. $x_{k-1|k-1}$ represents the state vector at time k−1, $x_{k|k-1}$ the state vector after the prediction step, and $x_{k|k}$ the state vector after the estimation step. The classical evolution model, described in[18], is considered:

$$f\left(x_{k-1|k-1}, \delta s, \delta\theta\right) = \begin{pmatrix}
x_{k-1} + \delta s \cos\left(\theta_{k-1} + \frac{\delta\theta}{2}\right) \\
z_{k-1} + \delta s \sin\left(\theta_{k-1} + \frac{\delta\theta}{2}\right) \\
\theta_{k-1} + \delta\theta \\
x_{a_1,k-1} \\
y_{a_1,k-1} \\
z_{a_1,k-1} \\
\vdots \\
x_{a_N,k-1} \\
y_{a_N,k-1} \\
z_{a_N,k-1}
\end{pmatrix} \tag{4}$$

where $u_k = (\delta s\ \ \delta\theta)$; $\delta s$ is the longitudinal motion and $\delta\theta$ is the rotational motion[19]:

$$\begin{pmatrix} \delta s \\ \delta\theta \end{pmatrix} = g(\varphi_l, \varphi_r) = \begin{pmatrix} \dfrac{w_r\,\delta\varphi_r + w_l\,\delta\varphi_l}{2} \\[2mm] \dfrac{w_r\,\delta\varphi_r - w_l\,\delta\varphi_l}{e} \end{pmatrix} \tag{5}$$

where:

  • $w_r$ and $w_l$ are the radii of the right and left wheels, respectively.

  • e is the length of the rear axle.

  • $\delta\varphi_i = \delta p_i \frac{2\pi}{\rho}$ with $i \in \{r, l\}$ (r = right, l = left), where $\delta p_i$ is the number of encoder steps and $\rho$ is the odometer resolution. $\delta\varphi_i$ is the angular movement of the right/left wheel.

The predicted state covariance matrix is computed as:

$$P_{k|k-1} = \frac{\partial f}{\partial x} P_{k-1|k-1} \frac{\partial f}{\partial x}^T + Q_k \tag{6}$$

where

  • $\dfrac{\partial f}{\partial x} = \begin{pmatrix}
1 & 0 & -\delta s \sin\left(\theta_{k-1|k-1} + \frac{\delta\theta}{2}\right) & 0 & \cdots & 0 \\
0 & 1 & \delta s \cos\left(\theta_{k-1|k-1} + \frac{\delta\theta}{2}\right) & 0 & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & & & & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1
\end{pmatrix}$

  • Q k is the covariance matrix of the process noise.
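As an illustration, the prediction step maps directly onto a few lines of code. The following C++ sketch implements the pose part of Equations (4) and (5); the type and parameter names are ours, not part of any reference implementation:

#include <cmath>

// Robot pose on the ground plane: (x, z) position and heading theta.
struct Pose { float x, z, theta; };

// Prediction step (Equations (4)-(5)), pose part only. dp_r and dp_l are
// the encoder step counts since the last sample; w_r, w_l are the wheel
// radii, e the rear axle length and rho the odometer resolution.
Pose predict(const Pose& p, int dp_r, int dp_l,
             float w_r, float w_l, float e, float rho)
{
    const float two_pi = 6.2831853f;
    float dphi_r = dp_r * two_pi / rho;               // angular wheel movements
    float dphi_l = dp_l * two_pi / rho;
    float ds  = (w_r * dphi_r + w_l * dphi_l) / 2.0f; // longitudinal motion
    float dth = (w_r * dphi_r - w_l * dphi_l) / e;    // rotational motion
    Pose q;
    q.x     = p.x + ds * std::cos(p.theta + dth / 2.0f);
    q.z     = p.z + ds * std::sin(p.theta + dth / 2.0f);
    q.theta = p.theta + dth;
    return q;
}

The covariance update of Equation (6) involves only the robot-related blocks of P, since the Jacobian is the identity over the landmark part of the state.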

Estimation

The estimation of the state is made using the camera, which returns the image position $(u_i, v_i)$ of the i-th landmark.

The innovation and its covariance matrix: The pinhole model is used to project a known landmark position into the image:

$$\begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} = \text{pinhole}\left(x_{a_i}^{cam}, y_{a_i}^{cam}, z_{a_i}^{cam}\right) = \begin{pmatrix} f k_u & s_{uv} & c_u \\ 0 & f k_v & c_v \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{a_i}^{cam} / z_{a_i}^{cam} \\ y_{a_i}^{cam} / z_{a_i}^{cam} \\ 1 \end{pmatrix} \tag{7}$$

where:

  • $(u_i, v_i)$ is the position of the i-th landmark in the image.

  • $(x_{a_i}^{cam}, y_{a_i}^{cam}, z_{a_i}^{cam})$ is the position of the i-th landmark in the camera frame.

  • $f$ is the focal length.

  • $(k_u, k_v)$ is the number of pixels per unit length.

  • $s_{uv}$ is a factor accounting for the skew due to non-rectangular pixels. In our case, we take $s_{uv} = 0$.

Equation (7) can be written as the predicted observation equation for a single landmark:

$$h_i\left(x_{k|k-1}\right) = \begin{pmatrix} u_i \\ v_i \end{pmatrix} = \begin{pmatrix} c_u + f k_u \dfrac{x_{a_i}^{cam}}{z_{a_i}^{cam}} \\[2mm] c_v + f k_v \dfrac{y_{a_i}^{cam}}{z_{a_i}^{cam}} \end{pmatrix} \tag{8}$$

The pose of a landmark in the camera frame is obtained from its pose $(x_{a_i}, y_{a_i}, z_{a_i})$ in the global frame:

$$\begin{pmatrix} x_{a_i}^{cam} \\ y_{a_i}^{cam} \\ z_{a_i}^{cam} \end{pmatrix} = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix} \begin{pmatrix} x_{a_i} - x \\ y_{a_i} \\ z_{a_i} - z \end{pmatrix} - \begin{pmatrix} 0 \\ 0 \\ D \end{pmatrix} \tag{9}$$

where D is the distance between the camera and the robot rear axle center.
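For clarity, a compact C++ sketch of this projection chain (Equations (7)-(9)) follows; the rotation sign convention is our assumption, and all names are illustrative:

#include <cmath>

// Camera intrinsics (s_uv = 0) and the camera-to-axle distance D.
struct Camera { float f_ku, f_kv, c_u, c_v, D; };

// Project a landmark (ax, ay, az), given in the global frame, into the
// image, for a robot at (x, z) with heading theta. Returns false when
// the point lies behind the camera.
bool project(const Camera& c, float x, float z, float theta,
             float ax, float ay, float az, float& u, float& v)
{
    // Equation (9): rotate into the camera frame, then subtract the offset D.
    float xc = std::cos(theta) * (ax - x) - std::sin(theta) * (az - z);
    float yc = ay;
    float zc = std::sin(theta) * (ax - x) + std::cos(theta) * (az - z) - c.D;
    if (zc <= 0.0f) return false;
    // Equation (8): pinhole projection.
    u = c.c_u + c.f_ku * xc / zc;
    v = c.c_v + c.f_kv * yc / zc;
    return true;
}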

During the observation step, the algorithm matches M landmarks (M ≤ N), whose predicted observations are stacked in $h_k = (h_0, \ldots, h_{M-1})^T$.

Thus, the innovation is:

$$Y_k = \hat{z}_k - h_k\left(x_{k|k-1}\right) \tag{10}$$

where $\hat{z}_k$ is the measurement vector for all the M predicted observations.

The innovation covariance $S_k$ is:

$$S_k = H_k P_{k|k-1} H_k^T + R_k \tag{11}$$

where $H_k$ is the Jacobian of $h_k$ and $R_k$ is the observation noise covariance.

State estimation: The state is updated using the classical EKF equations:

$$\begin{aligned} K_k &= P_{k|k-1} H_k^T S_k^{-1} \\ x_{k|k} &= x_{k|k-1} + K_k Y_k \\ P_{k|k} &= \left(I - K_k H_k\right) P_{k|k-1} \end{aligned} \tag{12}$$

Visual landmarks

The landmarks used in the observation equation are extracted from images. Landmark initialization defines the initial coordinates and the initial covariance of the landmarks (also called interest points or features). In[20], we evaluated the processing time of corner detectors such as Harris, Shi-Tomasi and FAST. The Harris and Shi-Tomasi detectors were more time consuming than the FAST detector and did not provide significantly better localization results; consequently, there is no need to implement these more sophisticated algorithms. The FAST (Features from Accelerated Segment Test) corner detector[21] relies on a simple test performed for a pixel p by examining a circle of 16 pixels (a Bresenham circle of radius 3) centered on p. A feature is detected at p if the intensities of at least 12 contiguous pixels are all above or all below the intensity of p by a threshold t. Even though this detector is not highly robust to noise and depends on a threshold, it produces stable landmarks and is computationally very efficient[21].

The FAST detector[21] is related to the wedge-model style of detector evaluated using a circle surrounding a candidate pixel. To optimize the detector processing time, this model is used to build a decision classifier which is applied to the image (Figure 1).

Figure 1. Image (320×240 pixels) of the embedded camera and result of the FAST detector.
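To make the segment test concrete, here is a minimal C++ sketch of the plain (non-learned) FAST test described above; the optimized detector of[21] replaces this scan with a learned decision tree, so this is illustrative only:

#include <cstdint>

// Bresenham circle of radius 3: 16 pixel offsets around the candidate.
static const int OFF[16][2] = {
    {0,-3},{1,-3},{2,-2},{3,-1},{3,0},{3,1},{2,2},{1,3},
    {0,3},{-1,3},{-2,2},{-3,1},{-3,0},{-3,-1},{-2,-2},{-1,-3}
};

// Returns true if at least 12 contiguous circle pixels are all brighter
// or all darker than the center pixel by more than threshold t.
// (x, y) must lie at least 3 pixels away from the image border.
bool fast_corner(const uint8_t* im, int stride, int x, int y, int t)
{
    const int p = im[y * stride + x];
    int c[16];
    for (int k = 0; k < 16; ++k) {
        int v = im[(y + OFF[k][1]) * stride + (x + OFF[k][0])];
        c[k] = (v > p + t) ? 1 : (v < p - t) ? -1 : 0;  // brighter/darker/similar
    }
    for (int s = 0; s < 16; ++s) {          // try every start of a wrapping arc
        if (c[s] == 0) continue;
        int run = 1;
        while (run < 12 && c[(s + run) % 16] == c[s]) ++run;
        if (run >= 12) return true;
    }
    return false;
}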

Matching based on zero-mean sum of squared differences

The EKF-SLAM matches a previously detected feature with a new observation using the zero-mean sum of squared differences (ZMSSD).

The covariance of the projected feature localization defines a search area τ. This area accounts for both the robot localization uncertainty and the landmark localization uncertainty. We use the ZMSSD to find the best candidate point inside τ. For each candidate point $p = (p_x, p_y)$, the value $N_p$ of the weighted ZMSSD is:

$$N_p = w(p_x, p_y) \times \text{ZMSSD} \tag{13}$$

and

$$\text{ZMSSD} = \sum_{i,j} \left( \left(d(i,j) - m_d\right) - \left(im\left(p_x + i - \tfrac{des}{2},\ p_y + j - \tfrac{des}{2}\right) - m_i\right) \right)^2 \tag{14}$$

where:

  • $w(p_x, p_y)$ is the Gaussian weight defined by the landmark covariance.

  • $i \in [0; des-1]$ and $j \in [0; des-1]$, where des is the descriptor size.

  • d is the feature descriptor.

  • $m_d$ and $m_i$ are respectively the means of the pixel values in the descriptor and in the image window.

  • im is the image.

The observation $p_{obs}$ is selected as $p_{obs} = \{p \in \tau \mid N_p = \min(N_{p_j}),\ p_j \in \tau\}$.

The descriptor used to identify the landmark during matching is classically a small image window, from 9×9 to 16×16 pixels, around the interest point. Davison[22] claims that this sort of descriptor is able to serve as a long-term landmark feature.

Landmark initialization based on Davison's method

Landmark initialization consists of defining the initial coordinates and the initial covariance of landmarks (interest points). Various methods exist and can be classified as undelayed or delayed. An undelayed method adds landmarks with only one measurement, whereas a delayed method needs two or more frames. We chose to use the widely spread delayed method proposed by[2], which is both efficient and adequate to implement. Furthermore, the work of Munguia and Grau[23] shows that the delayed method has the same performance as the undelayed one.

In order to compute the 3D depth of a newly detected landmark, following[2], we initialize a 3D line into the map along which the landmark must lie. This line starts at the estimated camera position and heads to infinity along the feature viewing direction. The line is populated with 100 particles which represent depth hypotheses; the prior probability is uniform over the range 0.5 to 15 m. At each subsequent time step, each particle (a feature depth hypothesis) is projected into the image, matched and its probability is re-weighted.

When the ratio of the depth standard deviation to the expected depth falls below a threshold, the distribution is approximated as a Gaussian and the landmark is initialized. The landmark pose $A_i = (x_{a_i}, y_{a_i}, z_{a_i})$ is added to x and the covariance of $A_i$ is added into P.
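A minimal C++ sketch of this depth filter, under our own naming and with the paper's 100 particles over [0.5, 15] m, could be structured as follows (the per-particle likelihoods are assumed to come from the ZMSSD matching of each projected hypothesis):

#include <cmath>
#include <vector>

struct DepthFilter {
    std::vector<float> depth, weight;

    DepthFilter() : depth(100), weight(100, 1.0f / 100.0f) {
        for (int i = 0; i < 100; ++i)                 // uniform prior on [0.5, 15] m
            depth[i] = 0.5f + (15.0f - 0.5f) * i / 99.0f;
    }

    // Re-weight each depth hypothesis by its matching likelihood.
    void reweight(const std::vector<float>& likelihood) {
        float sum = 0.0f;
        for (int i = 0; i < 100; ++i) { weight[i] *= likelihood[i]; sum += weight[i]; }
        for (float& w : weight) w /= sum;             // normalize (sum assumed > 0)
    }

    // True when sigma_depth / depth drops below eps; mean is the depth estimate.
    bool converged(float eps, float& mean) const {
        float m = 0.0f, m2 = 0.0f;
        for (int i = 0; i < 100; ++i) {
            m  += weight[i] * depth[i];
            m2 += weight[i] * depth[i] * depth[i];
        }
        mean = m;
        return std::sqrt(m2 - m * m) / m < eps;
    }
};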

Multiprocessor architecture and system configuration

In order to test and validate the EKF-SLAM algorithm, experiments were conducted with an instrumented mobile robot called Minitruck[24]. The platform was teleoperated during the experiments. For our first evaluation, the experiment consists of operating inside a large corridor of our research lab (see Figure 2).

Figure 2. Minitruck in action, embedding a multi-sensor system.

We have developed a system architecture on top of a multiprocessor board (Gumstix Overo) based on the OMAP3530 chip (see Figure 3). The OMAP chip integrates a RISC processor (ARM Cortex-A8, 500 MHz) with an SIMD NEON coprocessor, a DSP (TMS320C64x+, 430 MHz) and a graphics processing unit (POWERVR SGX). This board communicates with an additional processor for control and data acquisition (ATmega168, 16 MHz).

Figure 3. System architecture.

Multiple sensors (odometers and a camera) are interfaced to this architecture (Figure 2). The variety of sensors enables us to evaluate the SLAM algorithms with different types of sensor data and to take advantage of the complementary information these sensors provide. Our objective is to evaluate the implementation of SLAM algorithms using land vehicles and sensors such as steering encoders and a camera.

The use of wheel and steer encoders is commonplace in robotics and navigation. Simple kinematic motion models can be used to integrate velocity and heading measurements from wheel and steer encoders to provide an estimate of the mobile robot location and orientation. These estimates are regularly subject to considerable errors due to misalignment, offsets and wheel slippage. It is possible to implement basic models to approximate and correct offset and slippage errors on-line, leading to a significant improvement in performance. We chose two HEDS 5540 odometers for our experimental vehicle.

Feature detection in a SLAM application relies on the embedded sensors. We chose to achieve this extraction using a vision sensor (a cheap USB webcam, Philips SPC530NC, delivering 30 fps). We chose to use all available images (30 fps) because point matching is much easier when the inter-frame movement is small. Conventional approaches to vision system design are usually based on general purpose computers interfaced with cameras. New computing technologies (SIMD, DSP, multi-cores) can greatly accelerate algorithm processing, but require rethinking these algorithms to exploit parallelism. This trend pushes parallel computing units to be integrated close to the sensors[25]. We have used a Gumstix processing module based on the OMAP3530 architecture. It is a heterogeneous architecture (an ARM Cortex-A8 500 MHz processor with a power consumption of less than 300 mW, an integrated SIMD NEON coprocessor, a DSP C64x processor and a 3D graphics accelerator) that communicates via a WLAN connection (802.11g).

The WLAN connection is used only to control the speed and direction of the vehicle. In the future, a dedicated algorithm for autonomous navigation will be implemented and the WLAN connection will then be used only for system monitoring. A coprocessor (ATmega168) takes care of data acquisition. It controls the robot speed and direction using two pulse-width modulation (PWM) signals. It decodes the signals coming from the odometers embedded in the rear wheels. It communicates with the main board through an I2C interface, which allows the main processor to retrieve odometer data and to send speed and direction instructions.

To evaluate the designed system, an experiment was conducted in a corridor of our lab. Frames were grabbed at 30 fps with a 320×240 resolution, and odometer data were sampled at 30 Hz. During the experiment, references were periodically drawn on the ground by an embedded marker.

Evaluation methodology and algorithm implementation

Our evaluation methodology is based on the identification of the processing tasks requiring significant computing time. It proceeds in several steps: we first analyze the execution time of the tasks and their dependencies on the algorithm's parameters, and a threshold is fixed for each parameter. The algorithm is then partitioned into functional blocks (FBs) performing defined calculations. Each block is then evaluated to determine its processing time. The FBs that require the most execution time are then optimized to reduce the global processing time.

Algorithm 1 summarizes the main tasks of EKF-SLAM. The algorithm is composed of two processes: prediction and correction. The correction process implements three tasks: matching, estimation and initialization.

Algorithm 1 EKF-SLAM

1: χ   ▷ list of landmarks under initialization
2: Robot pose initialization
3: while localization is required do
4:   DATA ← sensor data acquisition
5:   if DATA = (φ_l, φ_r) then   ▷ odometer data
6:     Prediction:
7:       (δs, δθ) ← g(φ_l, φ_r)   (see Eq. (5))
8:       x_{k|k−1} ← f(x_{k−1|k−1}, δs, δθ)   (see Eq. (4))
9:       P_{k|k−1} ← (∂f/∂x) P_{k−1|k−1} (∂f/∂x)^T + Q_k   (see Eq. (6))
10:  else if DATA = camera then
11:    FAST detector applied on the image
12:    Matching:
13:    for each landmark N_i ∈ x_{k|k−1} do
14:      (u_i, v_i, τ_i) ← pinhole(x_{k|k−1}, N_i)   (see Eq. (8))
15:      if (u_i, v_i) ∈ camera frame then
16:        ẑ_k ← ZMSSD(τ_i, N_i)   (see Eq. (13))
17:        h_k ← (u_i, v_i)
18:        Y_k ← ẑ_k − h_k
19:        H_k ← ∂h_k/∂x |_{x = x_{k|k−1}}
20:      end if
21:    end for
22:    Estimation:
23:      S_k ← H_k P_{k|k−1} H_k^T + R_k   (see Eq. (11))
24:      K_k ← P_{k|k−1} H_k^T S_k^{−1}   (see Eq. (12))
25:      x_{k|k} ← x_{k|k−1} + K_k Y_k   (see Eq. (12))
26:      P_{k|k} ← (I − K_k H_k) P_{k|k−1}   (see Eq. (12))
27:    Initialization:
28:    for each L ∈ χ do   ▷ L: aspiring new landmark
29:      L_obs ← ZMSSD(L)   (see Eq. (13))
30:      Update the particle weights according to L_obs (see[2])
31:      Compute σ_depth, depth
32:      if σ_depth / depth < ε then
33:        Compute L, P_L
34:        append(x_{k|k−1}, L); append(P_{k|k−1}, P_L)
35:        remove(χ, L)
36:      end if
37:    end for
38:    if lack of landmarks then   ▷ see[8]
39:      append(χ, new landmarks)
40:    end if
41:  end if
42: end while

Prediction process

This phase updates the mobile robot position ($x_{k|k-1}$) according to the proprioceptive data acquired from the odometers ($\varphi_l$, $\varphi_r$). The processing time of the prediction process is constant: it just updates the 3D vector containing the robot pose and its 3×3 covariance matrix. During the prediction step, the landmark locations and uncertainties do not change, since landmarks are defined in the global frame.

Correction process

The processing time of the correction process is not constant. The remainder of this section studies the processing time of each task of the process and its dependencies.

Matching task

Each landmark in the state vector must be projected in the camera frame using the pinhole model (see Algorithm 1, matching task). The computing time of these projections depends only on the number of landmarks in the state vector. For each landmark projected on the focal plane, ZMSSD matches an observation. Both the size of the descriptor and the size of the search area τ affect the computing time (see Equation (13)).

The processing time of the matching task depends on several parameters:

  • The number of landmarks in the state vector.

  • The number of visible landmarks on the focal plane.

  • The size of the descriptor.

  • The localization uncertainties of both the mobile robot and the landmarks.

In practice, all the previously defined parameters should be set in order to bound the computing time. The first three parameters can be set by the user. The uncertainty depends on the path followed and cannot be bounded.

Estimation task

The estimation task uses the classical Kalman equations to update both the robot and landmark uncertainties. The estimation task is time-consuming and its processing time depends on:

  • The number of landmarks in the state vector.

  • The number of landmarks observed.

The size of the matrices, and thus the computing cost of the matrix multiplications in Equations (11) and (12), depends on the number of landmarks in the state vector. Moreover, Equation (11) depends on the number of landmarks observed. As for the matching process, these parameters (size of the state vector and number of observations) should be bounded in order to achieve the estimation task in a constant computing time.

Initialization task

For each landmark under initialization, each particle (a feature depth hypothesis) is projected into the image, matched and its probability is re-weighted. If there is a lack of landmarks under initialization, we add aspiring new landmarks. The processing time of the initialization task depends on:

  • The number of landmarks being initialized.

  • The size of the descriptor.

  • The localization uncertainties of both the mobile robot and the landmarks.

The number of landmarks being initialized and the size of the descriptors can be bounded. For each landmark being initialized, we have to update the probability of each localization hypothesis using a matching process. As for the matching task, the computing time depends on the localization uncertainty of the mobile robot and the landmarks.

Thresholds definition

The previous section shows that the computation time of each task of the EKF-SLAM algorithm depends on many variables. For a real-time implementation, it is important to get a constant, or at least a bounded, computation time. To satisfy this constraint we have to:

  • set the maximum number of landmarks in the state vector. The size of the state vector will be fixed. Therefore, no dynamic memory allocation will be needed.

  • set the maximum number of landmarks observed. This keeps the computation time of the estimation task constant using a fixed size matrix multiplication.

  • set the maximum number of landmarks being initialized in order to bound the computation time of the initialization task. Unfortunately, it will not be sufficient to keep the computation time of the initialization task constant due to its internal matching step.

  • bound the computing time induced by the uncertainties. The only solution to get a bounded global processing time is to set a maximum execution time for the matching task. Given the constant processing time of the prediction and estimation tasks, the execution time of both the matching and initialization tasks can be bounded by 33 ms − (t_prediction + t_estimation). We chose to use all available images (30 fps). We therefore set a maximum execution time for the matching task; the algorithm then matches as many landmarks as possible within the bounded time. The initialization task has a dynamic execution time depending on the actual processing time of the matching task and on the number of landmarks being initialized. The lower bound of this dynamic execution still allows at least a minimum number of landmarks to be initialized.

Map management

To keep the size of the state vector constant, we need to delete some landmarks when inserting new ones. The new state vector includes new landmarks (whose initialization has just been completed) and previously used landmarks. Auat Cheein and Carelli[26] propose an efficient method to select landmarks for the estimation task. It is based on evaluating the influence of a given feature on the convergence of the state covariance matrix: the method matches all possible landmarks and computes $(I - K_k H_k)$ from Equation (12). Unfortunately, we cannot implement it exactly as proposed by[26] due to its high computing time. We chose instead to keep, based on the previous estimation step, the landmarks which had the best influence on the convergence of the state covariance matrix: at time k, we select the landmarks with the smallest $(I - K_{k-1} H_{k-1})$.

Functional block partitioning

None of the previously defined tasks has a fixed computing time: their computing time depends on the experiment. To optimize the implementation, we have defined FBs which have a fixed computing time. The computing time of an FB does not depend on the experiment; experiments only affect the number of iterations of some FBs (FB3 to FB9). From the previous algorithm, we have defined 9 FBs, whose runtimes are studied in Table 1 below.

Table 1 Functional block partitioning

Each FB has a fixed computing time, and some FBs can occur more than once per frame (landmark projection, ZMSSD, $H_i$, weight updating, addition of a new landmark).

Processing time evaluation

As an application scenario, the robot moves along a square with 6 m sides; at the end of the trajectory, it returns to its starting position. Using only odometers, the final localization has an error of 1.6 m. With the EKF-SLAM algorithm, the localization is significantly improved: the final error is approximately 0.4 m. This EKF-SLAM includes all viewed landmarks in the state vector. The overall accuracy of the EKF-SLAM depends on the number of landmarks in the state vector and on the matched observations, and the localization accuracy grows monotonically with the number of processed landmarks. However, the size of the state vector and the number of observations must be bounded to achieve a bounded computing time.

The given EKF-SLAM (Algorithm 1) is processed sequentially on the embedded ARM processor operating at 500 MHz (no coprocessor is used). In the following, all times given correspond to times evaluated on the embedded system using the ARM processor. The data acquisition time is constant:

  • The odometer data acquisition is achieved in 0.7 ms (this processing time is due to the I2C communication with the Atmega168 processor).

  • Each image acquisition takes 1.8 ms (due to USB data transfer).

The prediction step does not require significant processing time: it takes only 0.093 ms per iteration. As for the matching task, the estimation task cannot be achieved in a constant processing time; it depends on the total number of landmarks and on the number of matched landmarks. Figure 4 shows the processing time of the estimation task according to the number of landmarks in the state vector, with the task entirely processed on the ARM processor (no use of the coprocessor). Obviously, it is impossible to take into account all the detected landmarks: the computation time would exceed the 33 ms allowed. It is necessary to find a compromise between the number of landmarks and the processing time.

Figure 4. Processing time of the estimation task.

Experimental results

An experiment was conducted to evaluate the processing time of the different blocks of the algorithm (including tasks with unboundable processing time). For this experiment, we set the size of the descriptor to 16×16 pixels and the thresholds as follows:

  • Maximum number of landmarks in the state vector: 25.

  • Maximum number of observed landmarks: 25.

  • Maximum number of landmarks being initialized: 20.
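Because these bounds are known at compile time, they can be hard-coded, which also removes any need for dynamic memory allocation. An illustrative C++ configuration (the names are ours):

// Compile-time bounds derived from the thresholds above.
constexpr int   MAX_LANDMARKS      = 25;    // landmarks kept in the state vector
constexpr int   MAX_OBSERVATIONS   = 25;    // landmarks matched per frame
constexpr int   MAX_INIT_LANDMARKS = 20;    // landmarks under initialization
constexpr int   DESCRIPTOR_SIZE    = 16;    // 16x16-pixel descriptor
constexpr float FRAME_BUDGET_MS    = 33.0f; // real-time budget at 30 fps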

First, we analyze the runtime of the previously defined FBs of the algorithm. We used the integrated cycle counter register (CCNT) of the ARM processor to measure the processing time of each FB. The prediction process (FB1) runs in 0.093 ms. Table 2 summarizes, for the other FBs, the processing time per iteration, the mean number of iterations and the mean processing time per correction process. The estimation task could not be processed in some iterations of the correction process, especially when no landmark was matched.

Table 2 Processing time of the correction process FBs on the main processor (ARM)

The mean processing time per frame is approximately 80.8 ms, which corresponds to the sum of all processing times: prediction process (FB1) and correction process (FB2 to FB9). The processing time of the estimation task (FB6) is approximately 70.5 ms, which represents about 87% of the global processing time. The FAST detector (FB2) takes 3.4 ms, the ZMSSD-M task (FB4) takes 2.63 ms per correction process, and the initialization task (FB7, FB8 and FB9) takes 3.9 ms. These six FBs represent 99.6% of the global processing time. We therefore focused on an efficient implementation of these FBs to enhance the global processing time.

Hardware–software optimization and improvements

OMAP3530 architecture description

The OMAP3530 is a heterogeneous architecture designed by TI (Texas Instruments); it implements an ARM Cortex-A8 500 MHz processor, a NEON coprocessor with SIMD instructions, a DSP C64x processor and a 3D graphics accelerator.

The NEON unit is similar to the MMX and SSE extensions found on x86 processors. It is optimized for single instruction multiple data (SIMD) operations. The NEON unit has two floating-point pipelines, an integer pipeline and a 128-bit load/store/permute pipeline. An efficient implementation on the SIMD NEON architecture improves the processing time. NEON instructions perform “packed SIMD” processing as follows:

  • Registers are considered as vectors of the same data type elements

  • Data types can be signed/unsigned 8-, 16-, 32- or 64-bit integers, or single-precision floating point

  • Instructions perform the same operation on multiple data simultaneously, as shown in Figure 5. The number of simultaneous operations depends on the data type: NEON supports up to 16 simultaneous operations on 8-bit data.

Figure 5. Data processing in a NEON architecture.
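As a toy example of this packed processing, a single NEON instruction adds 16 unsigned 8-bit values at once; a hypothetical C++ wrapper using the standard arm_neon.h intrinsics:

#include <arm_neon.h>
#include <cstdint>

// Adds 16 pairs of 8-bit values in one vaddq_u8 instruction.
void add16(const uint8_t* a, const uint8_t* b, uint8_t* out)
{
    uint8x16_t va = vld1q_u8(a);        // load 16 bytes
    uint8x16_t vb = vld1q_u8(b);
    vst1q_u8(out, vaddq_u8(va, vb));    // 16 additions in parallel
}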

SIMD optimization results

In Algorithm 1, the time-consuming FBs are the estimation block (FB6), the initialization blocks (FB7, FB8 and FB9), the FAST detector block (FB2) and the ZMSSD-M block (FB4). The FAST detector is already an optimized instance using machine learning[21], and has moreover already been implemented on an FPGA-based architecture[27]. We chose to optimize the other FBs. The matching task computes the ZMSSD, an image correlation which performs the same operations (addition, subtraction, multiplication and comparison) on 8-bit data; its computation can be optimized using the SIMD NEON architecture. The estimation task is based on floating-point matrix multiplication and can also be efficiently optimized using the SIMD NEON coprocessor (the ARM Cortex-A8 does not include a floating point unit (FPU)). The initialization FBs are studied in Section “Parallel implementation on a DSP processor”.

ZMSSD (FB4)

The EKF-SLAM matches features using the ZMSSD, computed for each landmark using Equation (14). We chose a descriptor of 16×16 8-bit pixels due to the efficiency of the SIMD NEON architecture in dealing with 128/64-bit vectors.

Basic implementation

The basic implementation of the ZMSSD functional block computes the mean $m_i$ of the pixel values in the image window ($m_d$ can be precalculated when the landmark is detected). The ZMSSD is then computed using two loops (Algorithm 2).

Algorithm 2 Basic ZMSSD

1: m_i ← 0
2: ZMSSD ← 0
3: for each i ∈ [0; des−1], j ∈ [0; des−1] do
4:   m_i ← m_i + im(p_x + i − des/2, p_y + j − des/2)
5: end for
6: m_i ← m_i / (des × des)
7: for each i ∈ [0; des−1], j ∈ [0; des−1] do
8:   ZMSSD ← ZMSSD + ((d(i, j) − m_d) − (im(p_x + i − des/2, p_y + j − des/2) − m_i))^2
9: end for

This implementation takes 12.60 μs on the ARM processor.

Efficient scalar implementation

The second implementation modifies the calculation of the ZMSSD in order to avoid using two loops. Formally, the ZMSSD is written as:

$$\text{ZMSSD} = \sum_{i,j} \left((d - m_d) - (im - m_i)\right)^2 \tag{15}$$

where $d = d(i,j)$ and $im = im\left(p_x + i - \tfrac{des}{2},\ p_y + j - \tfrac{des}{2}\right)$.

By expanding the ZMSSD, we obtain:

$$\text{ZMSSD} = \sum_{i,j} \left( (d - m_d)^2 - 2(d - m_d)(im - m_i) + (im - m_i)^2 \right) \tag{16}$$
$$= \sum_{i,j} \left( d^2 - 2 d\,m_d + m_d^2 - 2 d\,im + 2 d\,m_i + 2 m_d\,im - 2 m_d m_i + im^2 - 2\,im\,m_i + m_i^2 \right) \tag{17}$$

Using $m_d = \frac{\sum_{k,l} d(k,l)}{des \times des}$ and $m_i = \frac{\sum_{k,l} im\left(p_x + k - \frac{des}{2},\ p_y + l - \frac{des}{2}\right)}{des \times des}$, we simplify the sums:

$$\sum_{i,j} m_d = \sum_{i,j} \frac{\sum_{k,l} d(k,l)}{des \times des} \tag{18}$$
$$= \sum_{k,l} d(k,l) \tag{19}$$
$$\sum_{i,j} m_i = \sum_{i,j} \frac{\sum_{k,l} im\left(p_x + k - \frac{des}{2},\ p_y + l - \frac{des}{2}\right)}{des \times des} \tag{20}$$
$$= \sum_{k,l} im\left(p_x + k - \frac{des}{2},\ p_y + l - \frac{des}{2}\right) \tag{21}$$

The equation becomes:

$$\text{ZMSSD} = \left( 2 \sum_{i,j} d \sum_{i,j} im - \Big(\sum_{i,j} d\Big)^2 - \Big(\sum_{i,j} im\Big)^2 \right) \Big/ (des \times des) + \sum_{i,j} d^2 + \sum_{i,j} im^2 - 2 \sum_{i,j} d\,im \tag{22}$$

Using the notation:

  • $Sd = \sum_{i,j} d(i,j)$: the sum of the descriptor pixels (this sum can be precalculated).

  • $Si = \sum_{i,j} im\left(p_x + i - \frac{des}{2},\ p_y + j - \frac{des}{2}\right)$: the sum of the image pixels.

  • $SSi = \sum_{i,j} im\left(p_x + i - \frac{des}{2},\ p_y + j - \frac{des}{2}\right)^2$: the sum of the squared image pixel values.

  • $SSd = \sum_{i,j} d(i,j)^2$: the sum of the squared descriptor pixel values (this sum can be precalculated).

  • $Sdi = \sum_{i,j} d(i,j)\, im\left(p_x + i - \frac{des}{2},\ p_y + j - \frac{des}{2}\right)$: the sum of the products of the descriptor pixels and the image pixels.

The final equation is:

$$\text{ZMSSD} = \left( (2\,Sd \times Si) - Sd^2 - Si^2 \right) / (des \times des) + SSi + SSd - 2\,Sdi \tag{23}$$

The implementation of Algorithm 2 then becomes Algorithm 3:

Algorithm 3 Efficient scalar ZMSSD

1: Si ← 0
2: SSi ← 0
3: Sdi ← 0
4: for each i ∈ [0; des−1], j ∈ [0; des−1] do
5:   Si ← Si + im(p_x + i − des/2, p_y + j − des/2)
6:   SSi ← SSi + im(p_x + i − des/2, p_y + j − des/2) × im(p_x + i − des/2, p_y + j − des/2)
7:   Sdi ← Sdi + im(p_x + i − des/2, p_y + j − des/2) × d(i, j)
8: end for
9: ZMSSD ← ((2 Sd × Si) − Sd^2 − Si^2) / (des × des) + SSi + SSd − 2 Sdi

This instance uses only one loop, which reduces memory accesses. With this implementation, the computing time decreases from 12.60 μs to 11.29 μs.

Vector implementation

The SIMD NEON architecture allows vector processing, performing the same operation on all the elements of a vector. We have implemented a vectorized instance of the ZMSSD functional block as follows (Algorithm 4):

Algorithm 4 SIMD vectorized ZMSSD

1: V8x8 Vimage ← 0   ▷ V8x8: vector of 8 × 8-bit elements
2: V8x8 Vdescriptor ← 0
3: V16x8 VSi ← 0   ▷ V16x8: vector of 8 × 16-bit elements
4: V32x4 VSSi ← 0   ▷ V32x4: vector of 4 × 32-bit elements
5: V32x4 VSdi ← 0
6: for each i ∈ [0; des−1], j = 0 to des−1 step 8 do
7:   Vimage ← load8(im(p_x + i − des/2, p_y + j − des/2))   ▷ load 8 pixels
8:   Vdescriptor ← load8(d(i, j))
9:   VSi ← VSi + Vimage
10:  VSSi ← VSSi + Vimage × Vimage
11:  VSdi ← VSdi + Vimage × Vdescriptor
12: end for
13: Si ← sum(VSi)   ▷ sums the components of a vector
14: SSi ← sum(VSSi)
15: Sdi ← sum(VSdi)
16: ZMSSD ← ((2 Sd × Si) − Sd^2 − Si^2) / 256 + SSi + SSd − 2 Sdi

This instance processes 8 pixels at a time: the SIMD NEON architecture computes eight additions or eight multiplications simultaneously. The processing time of the vector implementation decreases to 1.27 μs.
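For reference, a possible realization of Algorithm 4 with NEON intrinsics is sketched below for des = 16; the function signature and variable names are ours, and widening accumulators (8 → 16 → 32 bits) replace the overflow-prone 8-bit lanes:

#include <arm_neon.h>
#include <cstdint>

// ZMSSD of a 16x16 descriptor against the image window centered on (px, py).
// Sd and SSd are the precalculated descriptor sums of Equation (23).
uint32_t zmssd_neon(const uint8_t* im, int stride, int px, int py,
                    const uint8_t* desc, uint32_t Sd, uint32_t SSd)
{
    uint16x8_t vSi  = vdupq_n_u16(0);   // lane-wise Si accumulator
    uint32x4_t vSSi = vdupq_n_u32(0);   // lane-wise SSi accumulator
    uint32x4_t vSdi = vdupq_n_u32(0);   // lane-wise Sdi accumulator

    for (int i = 0; i < 16; ++i) {
        const uint8_t* row = im + (py + i - 8) * stride + (px - 8);
        for (int j = 0; j < 16; j += 8) {
            uint8x8_t vim = vld1_u8(row + j);            // 8 image pixels
            uint8x8_t vd  = vld1_u8(desc + i * 16 + j);  // 8 descriptor pixels
            vSi  = vaddw_u8(vSi, vim);                       // Si  += im
            vSSi = vpadalq_u16(vSSi, vmull_u8(vim, vim));    // SSi += im*im
            vSdi = vpadalq_u16(vSdi, vmull_u8(vim, vd));     // Sdi += im*d
        }
    }
    // Horizontal reductions of the lane accumulators.
    uint64x2_t s  = vpaddlq_u32(vpaddlq_u16(vSi));
    uint64x2_t ss = vpaddlq_u32(vSSi);
    uint64x2_t sd = vpaddlq_u32(vSdi);
    int64_t Si  = vgetq_lane_u64(s, 0)  + vgetq_lane_u64(s, 1);
    int64_t SSi = vgetq_lane_u64(ss, 0) + vgetq_lane_u64(ss, 1);
    int64_t Sdi = vgetq_lane_u64(sd, 0) + vgetq_lane_u64(sd, 1);

    // Equation (23); the first term can be negative, hence signed arithmetic.
    int64_t t = (2 * (int64_t)Sd * Si - (int64_t)Sd * Sd - Si * Si) / 256;
    return (uint32_t)(t + SSi + SSd - 2 * Sdi);
}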

Computation time results

Table 3 summarizes the processing times of the three implementations of the ZMSSD functional block. The SIMD implementation is approximately 10 times faster than the basic implementation.

Table 3 ZMSSD processing time

Estimation (FB6)

The ARM Cortex-A8 does not integrate an FPU, which is why the processing time of the estimation FB is significant (Figure 4). To optimize the matrix multiplications, we used the EIGEN3 library[28], which provides SIMD NEON optimized functions. Figure 6 presents the processing time of the estimation task implemented on the ARM processor (non-optimized task) and using the SIMD NEON coprocessor (optimized task). The optimized task is approximately eight times faster than the non-optimized one. This gain is due to the lack of an FPU in the Cortex-A8 and to the ability of the NEON unit to evaluate a multiply-accumulate instruction in a single CPU cycle.

Figure 6. Processing time of the estimation task on the ARM and NEON coprocessor.
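A sketch of how the estimation task maps onto EIGEN3 is shown below; compiled with NEON support and single-precision floats, the matrix products are vectorized automatically. The function name is ours:

#include <Eigen/Dense>

using Mat = Eigen::MatrixXf;
using Vec = Eigen::VectorXf;

// Estimation task, Equations (11)-(12): innovation covariance, Kalman
// gain, then state and covariance updates.
void ekf_update(Vec& x, Mat& P, const Mat& H, const Mat& R, const Vec& Y)
{
    Mat S = H * P * H.transpose() + R;                    // Eq. (11)
    Mat K = P * H.transpose() * S.inverse();              // Eq. (12)
    x += K * Y;                                           // Eq. (12)
    P = (Mat::Identity(P.rows(), P.cols()) - K * H) * P;  // Eq. (12)
}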

Parallel implementation on a DSP processor

Digital signal processors (DSPs) are commonly used in vision systems[29]. They integrate a number of resources that enhance image processing versatility, and data sharing allows image processing to proceed in parallel. With DSP-based image processing, it is possible to parallelize the EKF-SLAM algorithm on the multiprocessor architecture (ARM, NEON and DSP processors), which improves the global processing time, especially under real-time constraints. The landmark matching (FB3 to FB5) and robot position estimation (FB6) tasks must be processed sequentially. Fortunately, the initialization tasks (FB7, FB8 and FB9) can run simultaneously with the matching and estimation tasks.

Rethinking the implementation for parallel execution, Algorithm 1 with block partitioning leads to Algorithm 5.

Algorithm 5 Multiprocessed EKF-SLAM

1: Robot pose initialization
2: while localization is required do
3:   if DATA = odometers then
4:     Prediction
5:   else if DATA = camera then
6:     FAST detector
7:     ARM processor: matching and estimation (FB3, 4, 5 and 6)
8:     DSP processor: initialization (FB7, 8 and 9)
9:   end if
10: end while

The architecture of the OMAP3530 interfaces the ARM and DSP processors through a shared memory. Figure 7 shows the data transfer mechanism using a shared DDR memory area. For each acquired image, the ARM processor writes the image (320×240 pixels), the robot position and its uncertainty to the shared memory. The data transfer between the ARM processor and the DSP processor for a 320×240 gray image is done in one millisecond. When the initialization of a landmark is completed, the DSP processor returns the position and the uncertainty of the possible new landmarks.

Figure 7. ARM-DSP interface with a shared memory.
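A plausible layout for this shared area is sketched below; the field names, the handshake flags and the fixed sizes are our assumptions, and cache maintenance and address translation between the two cores are omitted:

#include <cstdint>

// ARM/DSP shared DDR area. The ARM fills the frame and pose and sets
// frame_ready; the DSP writes back initialized landmarks and sets
// landmarks_ready.
struct SharedArea {
    volatile uint32_t frame_ready;        // set by ARM, cleared by DSP
    volatile uint32_t landmarks_ready;    // set by DSP, cleared by ARM
    float    robot_pose[3];               // (x, z, theta) after prediction
    float    pose_cov[9];                 // 3x3 covariance, row-major
    uint8_t  image[240][320];             // 320x240 grayscale frame
    uint32_t n_new_landmarks;             // results returned by the DSP
    float    new_landmarks[20][3];        // up to 20 initialized 3D points
    float    new_landmark_cov[20][9];     // associated 3x3 covariances
};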

Global results

We have improved the EKF-SLAM implementation using the SIMD NEON coprocessor and the DSP processor: the matching and estimation tasks run on the NEON coprocessor and the initialization tasks on the DSP processor. The FAST corner detector is already an optimized algorithm using machine learning[21]. For this last experiment, we set the same thresholds as in Section “Experimental results”.

Table 4 summarizes the processing time per iteration and the mean processing time per frame of each FB. The computing time of the initialization task (blocks 7, 8 and 9) implemented on the DSP processor is approximately 4.0 ms. The DSP processor computes the initialization task while the ARM-NEON processors compute the prediction, FAST detection, matching and estimation tasks.

Table 4 FBs processing times on ARM, NEON and DSP processors

With this implementation, and since the processing time of the initialization task (4.0 ms) is smaller than the sum of the processing times of the matching and estimation tasks (13.0 ms for blocks 3, 4, 5 and 6), the overall computing time reduces to the sum of the processing times of the prediction process (0.093 ms), the FAST detector (3.4 ms) and the matching and estimation tasks (13.0 ms). The mean processing time per frame of the optimized implementation is 17.6 ms (including 1 ms for the ARM/DSP data transfer), whereas the non-optimized implementation has a processing time of 80.85 ms. The optimized processing time represents 22% of the non-optimized one: the processing time has been reduced by 78%.

Conclusion

This article proposed an efficient implementation of the EKF-SLAM algorithm on a multiprocessor architecture. The overall accuracy of the EKF-SLAM depends on the number of landmarks in the state vector and on the matched observations; both are linked to the time allowed to the embedded architecture to compute the robot pose. Based on the application constraints (real-time localization) and an evaluation methodology, we have implemented the algorithm in consideration of the underlying hardware architecture. A runtime analysis showed that six FBs, belonging to the FAST detection, matching, estimation and initialization tasks, represent 99.6% of the global processing time. We have used an optimized instance of the FAST detector. Two FBs (in the matching and estimation tasks) have been optimized on the SIMD NEON architecture, and the initialization task has been parallelized on a DSP processor. This optimization required a modification of the algorithm implementation. Using the optimized implementation, the global processing time was reduced by a factor of 4.7. The results demonstrate that an embedded system (with a low-cost multiprocessor architecture) can operate under real-time constraints if the software implementation is designed carefully. To scale to larger environments, we are going to include a local/global mapping approach as proposed by[30]; the map joining system will be implemented on the GPU coprocessor integrated in the OMAP3530.

Other future developments will center on hardware–software co-design to improve the system performance by implementing a system-on-chip with a field programmable gate array (FPGA). The use of a configurable architecture greatly accelerates the design and validation of a real-time, system-on-chip proof of concept.

References

  1. Dissanayake M, Newman P, Clark S, Durrant-Whyte H, Csorba M: A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17:229-241. 10.1109/70.938381

  2. Davison A, Reid I, Molton N, Stasse O: MonoSLAM: real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29:1052-1067.

  3. Montemerlo M, Thrun S, Koller D, Wegbreit B: FastSLAM: a factored solution to the simultaneous localization and mapping problem. National Conference on Artificial Intelligence, Orlando, Florida, USA, 2002, pp. 593-598.

  4. Folkesson J, Christensen HI: Graphical SLAM: a self-correcting map. IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 2004, pp. 383-390.

  5. Eliazar A, Parr R: DP-SLAM: fast, robust simultaneous localization and mapping without predetermined landmarks. International Joint Conference on Artificial Intelligence, vol. 18, Acapulco, Mexico, 2003, pp. 1135-1142.

  6. Thrun S: Probabilistic robotics. Commun. ACM 2002, 45(3):52-57.

  7. Brenneke C, Wulf O, Wagner B: Using 3D laser range data for SLAM in outdoor environments. IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, USA, 2003, pp. 188-193.

  8. Prusak A, Melnychuk O, Roth H, Schiller I: Pose estimation and map building with a time-of-flight camera for robot navigation. Int. J. Intell. Syst. Technol. Appl. 2008, 5(3):355-364.

  9. Abrate F, Bona B, Indri M: Experimental EKF-based SLAM for mini-rovers with IR sensors only. European Conference on Mobile Robots, Freiburg, Germany, 2007.

  10. Yap T, Shelton C: SLAM in large indoor environments with low-cost, noisy, and sparse sonars. IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pp. 1395-1401.

  11. Gifford C, Webb R, Bley J, Leung D, Calnon M, Makarewicz J, Banz B, Agah A: Low-cost multi-robot exploration and mapping. IEEE International Conference on Technologies for Practical Robot Applications, Woburn, Massachusetts, USA, 2008, pp. 74-79.

  12. Magnenat S, Longchamp V, Bonani M, Rétornaz P, Germano P, Bleuler H, Mondada F: Affordable SLAM through the co-design of hardware and methodology. IEEE International Conference on Robotics and Automation, Anchorage, Alaska, 2010, pp. 5395-5401.

  13. Montemerlo M, Thrun S, Koller D, Wegbreit B: FastSLAM 2.0: an improved particle filtering algorithm for simultaneous localization and mapping that provably converges. International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 2003, pp. 1151-1156.

  14. Rezaei S, Guivant J, Nebot E: Car-like robot path following in large unstructured environments. IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, USA, 2003, pp. 2468-2473.

  15. Schröter C, Böhme H, Gross H: Memory-efficient gridmaps in Rao-Blackwellized particle filters for SLAM using sonar range sensors. European Conference on Mobile Robots, Freiburg, Germany, 2007, pp. 138-143.

  16. Konolige K, Augenbraun J, Donaldson N, Fiebig C, Shah C: A low-cost laser distance sensor. IEEE International Conference on Robotics and Automation, Pasadena, California, USA, 2008, pp. 3002-3008.

  17. Pirjanian P, Karlsson N, Goncalves L, Di Bernardo E: Low-cost visual localization and mapping for consumer robotics. Indust. Robot 2003, 30(2):139-144. 10.1108/01439910310464159

  18. Seignez E, Kieffer M, Lambert A, Walter E, Maurin T: Real-time bounded-error state estimation for vehicle tracking. Int. J. Robot. Res. 2009, 28:34-48. 10.1177/0278364908096237

  19. Siegwart R, Nourbakhsh I: Introduction to Autonomous Mobile Robots. The MIT Press, London, 2004.

  20. Vincke B, Elouardi A, Lambert A: Design and evaluation of an embedded system based SLAM applications. IEEE/SICE International Symposium on System Integration, Sendai, Japan, 2010, pp. 224-229.

  21. Rosten E, Porter R, Drummond T: Faster and better: a machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32:105-119.

  22. Davison A: Real-time simultaneous localisation and mapping with a single camera. IEEE International Conference on Computer Vision, Nice, France, 2003, pp. 1403-1410.

  23. Munguia R, Grau A: Delayed features initialization for inverse depth monocular SLAM. European Conference on Mobile Robots, Freiburg, Germany, 2007, pp. 1-6.

  24. Seignez E, Lambert A, Maurin T: An experimental platform for testing localization algorithms. IEEE International Conference on Information and Communication Technologies: From Theory to Application, Damascus, Syria, 2006, pp. 748-753.

  25. Elouardi A, Bouaziz S, Dupret A, Lacassagne L, Klein JO, Reynaud R: A smart architecture for low-level image computing. International Journal on Computer Science and Applications, 2008, pp. 1-19.

  26. Auat Cheein F, Carelli R: Analysis of different feature selection criteria based on a covariance convergence perspective for a SLAM algorithm. Sensors 2010, 11:62-89. 10.3390/s110100062

  27. Kraft M, Schmidt A, Kasinski A: High-speed image feature detection using FPGA implementation of FAST algorithm. International Conference on Computer Vision Theory and Applications, Funchal, Madeira, Portugal, 2008, pp. 174-179.

  28. Eigen, 2012. http://eigen.tuxfamily.org/

  29. Gunnam K, Hughes D, Junkins J, Kehtarnavaz N: A vision-based DSP embedded navigation sensor. IEEE Sens. J. 2002, 2(5):428-442. 10.1109/JSEN.2002.806212

  30. Piniés P, Tardós J: Large-scale SLAM building conditionally independent local maps: application to monocular vision. IEEE Trans. Robot. 2008, 24(5):1094-1106.


Author information

Correspondence to Bastien Vincke.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Vincke, B., Elouardi, A. & Lambert, A. Real time simultaneous localization and mapping: towards low-cost multiprocessor embedded systems. J Embedded Systems 2012, 5 (2012). https://doi.org/10.1186/1687-3963-2012-5
