
A Literature Study of Indoor Positioning Systems and Map Building Software

Robin Amsters (a), Ali Bin Junaid (a), Samrat Roy (a), Peter Slaets (a), Peter Aerts (b), Maarten Verheyen (b), Eric Demeester (b)

(a) IMP Research Group, KU Leuven, Mechanical Engineering Technology Cluster TC, Campus Groep T Leuven, Belgium

(b) ACRO Research Group, KU Leuven, Department of Mechanical Engineering, Campus Diepenbeek, Wetenschapspark 27, 3590 Diepenbeek, Belgium

July 17, 2017

http://www.acro.be/adusumnavigantium/

Chapter 1

Introduction

Positioning is not a particularly new problem. Mankind has attempted to determine its location for centuries, using instruments such as sextants, clocks, almanacs and maps. One of the largest revolutions in this field is probably the advent of the global positioning system (GPS), which was originally launched in 1973 by the United States military. GPS can provide position information almost anywhere on earth to anyone with a GPS receiver. The position is determined via signals that originate from satellites that orbit the earth. These carry atomic clocks that are synchronised to each other and to ground clocks. The location of the satellites is known very precisely, and is broadcast continuously by them. By using the satellite location, the travel time of the signal and the propagation speed, it is possible to determine the distance from the satellite to the receiver. When the distance to at least 4 satellites is known, the receiver position can be determined in 3 dimensions via multilateration. Besides GPS, a number of other global navigation satellite systems also exist. Examples include the Galileo (Europe) and GLONASS (Russia) systems.

The low cost and global coverage make GPS a very attractive solution. Navigation, geotagging pictures, location based emergency services and more have all become possible since this technology became commercially available. One major drawback, however, is that GPS often does not work in indoor environments, as the signals of the satellites are generally too weak to penetrate the walls of buildings [1]. Additionally, the accuracy is limited to a couple of meters, which is insufficient for a lot of indoor applications. A study performed by the National Exposure Research Laboratory indicates that people spend about 90% of their time indoors [2]. Indoor location information could thus provide many opportunities, which is illustrated by the fact that the indoor location market is increasing in size rapidly. It is estimated that the indoor mapping market will be worth $10 billion by 2020 [3]. Figures 1.1 and 1.2 provide estimates of the growth of the indoor localization industry in more detail. These figures show that indoor location information has economic potential (examples of concrete applications can be found in section 1.1). But since GPS cannot be used for indoor environments, different technologies are required to obtain this information. These systems are usually referred to as indoor positioning systems (IPS). While GPS is often regarded as the de facto standard for outdoor environments, there exists no such standard for indoor environments. Even though indoor localization has received increasing attention from the research community, no single ’best’ system has been established. The wide variety of these environments has prompted an equally wide variety of IPS, each with their own pros and cons. There are multiple factors to consider when

Figure 1.1: Indoor Location Technology and Services Spending [4]

Figure 1.2: Indoor Location-Influenced Spending [4]

selecting an IPS, for example [5]:

• Accuracy

• Availability

• Cost

• Coverage area

• Intrusiveness

• Output data

• Scalability

• Update rate

• etc.

These are all factors that may have to be accounted for depending on the application. Because environments differ so much, the performance of an IPS depends on where it is deployed. An objective, general comparison between IPS is therefore very challenging. Worse still, the research community does not have a single standard when it comes to characterising the performance of novel systems. Some formulations have been proposed [6][7], but these are not always reported. Evaluation of scientific contributions also reveals that ’a high percentage of publications describe their methods of ground truth data gathering poorly at best’ [8]. This makes objective comparison between localization research very challenging. A comparison can therefore only really be made for the same reference environment. An example of such a comparison is given in chapter 5.

1.1 Applications of indoor positioning

Having access to indoor location information opens up a wide array of applications. Examples of such applications include:

• Augmented reality (AR) and virtual reality (VR): AR browsers like Junaio can overlay location-specific information even in indoor environments. This can, for example, be used to create an interactive museum tour: visitors can receive more information about the exhibits they are seeing on their smartphone [9]. Apps like Lowe’s Vision can be used to plan home improvements. This app allows users to see how certain pieces of furniture would look in their house with the help of a smartphone [10].

• Navigation: Just like satellite navigation systems allow us to travel to unknown places easily, indoor location information can be used for navigation inside large buildings. Examples include college campuses, museums, shopping malls, stores, warehouses, airports, train stations, office buildings, etc. [11][12][13].

• Location based advertising: location information allows consumers to receive tailor-made, location-specific advertisements. Research indicates that a disproportionate amount of coupons that were given to customers inside the store were used (in-store handouts represent 3.3% of all coupons that were handed out, whereas they represent 14% of all redeemed coupons [14]). This reflects the openness to influence of buyers at the point of sale. Offline analytics could also be used to visualize the impact of advertising campaigns on in-store sales [15].

• Social networks: the accuracy of location-based social networks such as Foursquare or Gowalla could benefit from indoor localization techniques.

• Indoor robotics: position information in indoor environments opens up opportunities for many indoor robotic applications. Examples include:

– Robotic wheelchairs [16]

– Guidance [17][18]

– Robotic vacuum cleaners (Neato, etc.)

– Autonomous pallet jacks [19]

– Mobile monitoring of conditions (gas, radiation, biohazards, etc.) in dangerous places (factories, waste houses, etc.) when deployment of a stationary system is not possible

• Logistics:

– Package tracking inside warehouses

– Deliveries of small objects in large buildings

– Locating equipment and tools inside a plant. Knowing where equipment is at all times enables a productivity increase, because staff has to spend less time looking for it. It also reduces costs associated with theft and loss of equipment.

• Security:

– Localizing people in high-security areas and checking clearance

– Security systems with automatic mobile patrol capability

• Medical: Cost reduction is necessary in the health care sector, as there is a growing shortage of nurses and doctors as the current population ages. A number of applications for IPS in healthcare have already been proposed, with the aim of reducing costs:

– Asset tracking. In [20], it is estimated that hospitals purchase 10% to 20% more portable equipment than is actually required for operational needs, just so that staff may find it when needed. It is also estimated that a hospital could save up to £130,500 just from the excess purchase of IV pumps. This estimate includes the cost of the positioning system, meaning that it has an immediate return on investment. If this estimate were to incorporate other equipment, the reduced cost of depreciation, maintenance and storage, as well as a less expensive positioning system, the savings could become very large. Another example of the impact of asset tracking in healthcare can be found in a project by OpenRTLS and a Dutch system integrator [21]. In 2014 they completed a case

study on a medium scale RTLS deployment of 25,000 tags in a large hospital in the Netherlands. All relevant costs within a 9 year period were taken into account. In the first phase (asset tracking), savings are estimated at €300,000 to €400,000 annually in this single hospital environment. Asset tracking could potentially even save lives: in some cases, finding important equipment such as crash carts is very time critical.

– Tracking patient flows for throughput management can help diagnose bottlenecks and tailor (and monitor the implementation of) appropriate solutions for problems such as extended waiting times, overcrowding and boarding in outpatient clinics, emergency departments/rooms (ED/ER) and post-anaesthesia care units (PACUs); bumped and late surgeries; and the lack of available routine inpatient and intensive care unit (ICU) beds [20]. Monitoring patient flow or movement (handoffs) between departments can also prove to be valuable. For example, transfer from the ED to another department can be monitored by giving each patient a unique tag to always carry with them. The time spent by patients in each location is logged by an analytic application. By monitoring the time patients spend in various rooms and departments around the hospital, the hospital management can decide whether they need to allocate more staff or equipment at different departments and stages of the patient’s journey.

– Tracking patients with dementia: measures could be taken to restrict their access to certain areas based on their location. This allows them to move more freely on their own.

– Assisted living: RTLS can provide information about residents’ mobility around the home (e.g. by computing the daily distance walked by each resident based on the distances between the sensors he/she passed, which are stored in the computer system). This can be used as an indicator of residents’ overall well-being and in detecting problems, such as when an older resident has not left his/her room or visited a toilet within a pre-set period of time.

– Advanced nurse calling systems: these systems can locate the nearest nurse to a patient requesting help, making their work more efficient.

– RTLS has the potential of improving the productivity of nurses and caregivers, and hence their job satisfaction levels, by reducing many mundane and repetitive tasks that staff encounter on a daily basis. For example, a nurse or a caregiver typically has to manually cancel a call (register that it has been answered), but an RTLS can perform the same task automatically by recognising the nurse’s presence in the room. RTLS can also cut the time staff has to spend checking the status of rooms and beds, and improve patient family/visitor satisfaction by increasing their awareness of the patient’s location.

• Search and rescue: Communications with fire-fighters and rescue workers.

1.2 Classification of indoor positioning systems

When it comes to indoor positioning, one can categorize different systems in a number of different ways. The first way to classify IPS is based on the technology that is used, i.e. the channel that ensures communication between transmitter and receiver. Classification based on technology divides IPS into two main categories: sensor methods and camera methods [22]. Indoor positioning

technologies are described in more detail in chapter 3.

A second way to differentiate different IPS is based on the technique that is used. This refers to the way that the position is estimated based on the available data. Indoor positioning techniques are discussed in chapter 2.

A final way to classify IPS is via the topology of the system. This classification refers to where the position information is obtained and used. It is discussed in more detail in chapter 4.

By combining a technology with a technique and a topology one obtains a positioning system. Many such combinations are possible, although some are more compatible than others. These positioning systems are not always limited to indoor environments, and could also be applied in outdoor environments. Outdoor applications are out of the scope of this discussion, however. An indoor positioning system does not have to be limited to using just one technique or technology; many hybrid approaches exist, and these generally have improved accuracy over using just one technology or technique.

1.3 Map representations

Representing an environment in which a mobile platform can move is a dual of the problem of representing the possible positions of this platform [23]. When choosing a map representation, one must take into account that the precision of the map, the type of represented features and the complexity of the map will influence the performance and computational requirements of the positioning algorithm. Because of these trade-offs, location information can take a few different forms. Accordingly, different map representations are also possible. Some examples of location information types are [24]:

• Physical location: expressed as coordinates on a two or three dimensional map. The most well-known example of this is the degree/minute/second representation.

• Absolute location: the same reference is used for all of the located objects

• Relative location: positions are expressed in a frame of reference that is local to the observer or object

• Symbolic/topological location: this final type of location information is expressed in human language, such as ’in the hall’, ’in the garage’, etc.

Different mapping representations can generally be divided into one of two categories: metric maps and topological maps [25]. Both types of representations have their respective advantages and disadvantages. An overview can be found in Table 1.1.

In metric maps, the environment is represented in terms of geometric relations between the objects and a fixed reference frame. A well known example of a metric map is the occupancy grid. The world is treated as a grid of cells and each cell is marked as occupied (1) or unoccupied (0). The memory required to hold an occupancy grid increases with the spatial area represented and inversely with the cell size. A polygon-based map is another example of a metric map. These represent the vehicle position as $(x, y) \in \mathbb{R}^2$ and the driveable regions or obstacles as polygons,

6 each comprising lists of vertices or edges. This can be a very compact format, but determining collisions may involve testing against a long list of edges.
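As a minimal sketch of the occupancy-grid idea (the grid size, resolution and helper names below are invented for illustration):

```python
import numpy as np

# Occupancy grid for a 20 m x 10 m area at 0.05 m resolution (400 x 200 cells).
# Memory grows with the mapped area and inversely with the cell size:
# halving the cell size quadruples the number of cells.
RESOLUTION = 0.05  # cell size [m]
grid = np.zeros((int(10 / RESOLUTION), int(20 / RESOLUTION)), dtype=np.uint8)

def world_to_cell(x, y):
    """Convert metric coordinates [m] to grid indices (row, col)."""
    return int(y / RESOLUTION), int(x / RESOLUTION)

grid[world_to_cell(3.2, 4.7)] = 1           # mark a cell as occupied (e.g. a wall)
print(grid[world_to_cell(3.2, 4.7)] == 0)   # False: the cell is no longer free
```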

Topological maps, on the other hand, only represent adjacency relations between objects. A set of objects are connected by links (typically denoted by $G(V, E)$). The objects (V) are called vertices or nodes, and the links (E) that connect some pairs of vertices are called edges or arcs. Each edge can have a weight or cost associated with moving from one of its vertices to the other. A sequence of edges from one vertex to another is a path.
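A topological map maps naturally onto a weighted graph structure. The sketch below (room names and edge costs are made up) stores G(V, E) as an adjacency dictionary and computes the cost of the cheapest path with Dijkstra's algorithm, the kind of shortest-path computation referred to in Table 1.1:

```python
import heapq

# Topological map G(V, E): vertices are rooms, weighted edges are traversable
# connections (weight = e.g. travel distance in meters). Names are illustrative.
edges = {
    "hall":    {"kitchen": 4.0, "garage": 7.5},
    "kitchen": {"hall": 4.0, "living": 3.0},
    "garage":  {"hall": 7.5},
    "living":  {"kitchen": 3.0},
}

def shortest_path_cost(graph, start, goal):
    """Dijkstra over the topological graph; returns the total edge cost."""
    queue, seen = [(0.0, start)], set()
    while queue:
        cost, v = heapq.heappop(queue)
        if v == goal:
            return cost
        if v in seen:
            continue
        seen.add(v)
        for neighbour, w in graph[v].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + w, neighbour))
    return float("inf")

print(shortest_path_cost(edges, "garage", "living"))  # 7.5 + 4.0 + 3.0 = 14.5
```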

• Metric maps
  – Advantages: adequate when representing metric information delivered by sensors; necessary in the implementation of metric algorithms (e.g. shortest path)
  – Disadvantages: space consuming; unsuitable for global planning; sensitive to odometry errors

• Topological maps
  – Advantages: can be used effectively for symbolic planning (especially when considering long distances); uses less memory
  – Disadvantages: unsuitable for accurate positioning

Table 1.1: Advantages and disadvantages of different map representations

1.4 Calibration

Most IPS require some sort of calibration procedure. Usually, the goal is to find the location of the fixed anchor beacons, or to build a (signal) map of the environment. For the determination of the location of anchor nodes, multiple methods exist [26]:

1. Using a second positioning system with a higher degree of accuracy (e.g. total stations, tachymeters, etc.): this requires extra (more expensive) hardware and can be a tedious process

2. Auto-localization / Mobile Assisted Positioning / Simultaneous Localization and Tracking (SLAT): a dynamic listener is slowly moved to different locations in a room, thereby collecting ranging data to 4 (or more) beacon nodes that are mounted at the ceiling. Both the mobile and the static node positions are unknown. The inter-beacon distances are sometimes also unavailable. After the auto-localization procedure has been completed, the coordinates of the static beacons are available in a local system. In an over-determined setup, quality indicators can be obtained as well. Some methods (e.g. heuristic methods) have a high computational cost. This is, however, not always a problem, as the auto-localization algorithm is only executed once and not in real-time. [26]
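A toy version of such an auto-localization problem can be posed as one joint non-linear least-squares fit over the unknown beacon and listener positions. All values below are synthetic, and the recovered coordinates are only defined up to a rigid transform of the local frame:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
K, M = 4, 30                                     # 4 static beacons, 30 listener poses
beacons_true = rng.uniform(0, 10, (K, 2))
listener_true = rng.uniform(0, 10, (M, 2))
ranges = np.linalg.norm(listener_true[:, None] - beacons_true[None, :], axis=2)
ranges += rng.normal(0, 0.02, ranges.shape)      # simulated ranging noise [m]

def residuals(params):
    # Unknowns: all beacon positions plus all listener positions except the
    # first one, which is pinned to the origin to anchor the local frame.
    beacons = params[:2 * K].reshape(K, 2)
    listener = np.vstack([[0.0, 0.0], params[2 * K:].reshape(M - 1, 2)])
    predicted = np.linalg.norm(listener[:, None] - beacons[None, :], axis=2)
    return (predicted - ranges).ravel()          # one residual per range measurement

solution = least_squares(residuals, rng.uniform(0, 10, 2 * K + 2 * (M - 1)))
print(solution.cost)                             # small residual -> consistent geometry
```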

Chapter 2

Indoor positioning techniques

We will start our discussion of indoor positioning systems by giving an overview of different positioning techniques. As mentioned in section 1.2, this refers to the method that is used to determine the position based on available data (such as RF signals sent by beacons, or images taken with a camera). This classification gives us two large categories, namely triangulation and scene analysis techniques, as can be seen in Figure 2.1. These will be discussed in sections 2.2 and 2.3 respectively. A smaller, third category can also be distinguished, namely proximity techniques. This is probably the simplest approach to positioning, but generally also the least accurate one. Signals from different transmitters are separated and the receiver is marked as being close to the transmitter with the strongest received signal. It is clear that the accuracy in this case is limited by the number of transmitters that are present in the environment. The implementation of this technique is, however, very simple. A general comparison between the pros and cons of different positioning techniques can be found in Table 2.1.

Figure 2.1: IPS classification according to technique used. Triangulation is subdivided into angulation (AOA) and lateration (TOA, TDOA, RTOF, RSS, POA); the other categories are scene analysis and proximity.

2.1 Proximity techniques

Proximity based localization relies on the presence or signal strength of a wireless transmitter. The receiver is considered to be ’close’ to the transmitter from which the largest signal strength is detected. As such, only symbolic information is provided. Proximity based localization is among the simplest techniques to implement, but only has limited accuracy. By increasing the

Table 2.1: Advantages and disadvantages of positioning techniques

• Proximity
  – Advantages: simplest implementation
  – Disadvantages: low accuracy (unless a lot of transmitters are deployed)

• TOA
  – Advantages: straightforward implementation
  – Disadvantages: requires clock synchronization between all transmitters and receivers; a timestamp must be sent along with the signal; requires LOS

• TDOA
  – Advantages: requires only 2 measurements (for 2D localization); does not require clock synchronisation between the transmitters and receivers
  – Disadvantages: requires clock synchronization between all transmitters; requires the solving of a set of non-linear equations; requires LOS

• RSS
  – Advantages: does not require LOS
  – Disadvantages: requires a good path loss model (these are often environment specific, meaning that they have to be changed when the environment changes)

• RTOF
  – Advantages: less strict time synchronisation requirement

• POA
  – Advantages: no time synchronisation required
  – Disadvantages: requires LOS; phase measurements can be ambiguous

• AOA
  – Advantages: no time synchronisation required; 2D position can be determined from just two measurements (3 in 3D)
  – Disadvantages: directional antennae or antenna arrays are required; long distance measurements are less accurate

deployed number of beacons, accuracy can be improved. This comes with an increase in system cost, however.
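As an illustration, proximity positioning reduces to picking the transmitter with the strongest received signal; the beacon names and RSS values below are made up:

```python
# Received signal strengths [dBm] for three beacons (illustrative values).
rss = {"beacon_hall": -71, "beacon_kitchen": -58, "beacon_garage": -84}

# The receiver is assigned the symbolic location of the strongest beacon.
location = max(rss, key=rss.get)
print(location)  # 'beacon_kitchen'
```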

2.2 Triangulation techniques

Triangulation techniques use the geometric properties of triangles to estimate position. One can differentiate between angulation and lateration approaches. Lateration approaches can be further categorised based on the measuring principle.

2.2.1 Lateration

Multi-lateration techniques estimate the position of an object by measuring its distances from multiple reference points. The individual distances define a curve along which the object can be found (e.g. a circle or hyperbola). The intersection of enough such curves uniquely defines the location.
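As a minimal sketch of lateration before the individual measuring principles are detailed (the anchor coordinates and range values below are invented), the 2D position can be estimated by a non-linear least-squares fit of the measured distances:

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known reference points
measured = np.array([5.02, 6.70, 8.09])                     # noisy ranges [m]

def residuals(p):
    # One residual per anchor: predicted range minus measured range.
    return np.linalg.norm(anchors - p, axis=1) - measured

estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(estimate)  # approximately [4, 3], the position consistent with the ranges
```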

• Time of arrival (TOA): uses the one-way travel time and the signal velocity to estimate the distance between nodes. To uniquely determine the location in two dimensions, three TOA measurements are needed. This approach requires that all clocks in the system (transmitters and receivers) are accurately synchronised. In addition, a timestamp must be included in the transmitted signal in order for the measuring unit to discern the distance the signal has traveled. Usually, the different TOA measurements are used to locate the object on a circle and the intersection of three circles defines the location in two dimensions (4 measurements are required in the 3D case), although other approaches exist, such as the least squares approach [27].

Figure 2.2: TOA/RTOF/POA positioning technique [24]

• Time difference of arrival (TDOA): estimates the distance between reference nodes based on the difference in arrival time of the signal from the mobile node to multiple reference nodes. For each TDOA measurement, the transmitter must lie on a hyperboloid with a constant range difference between the two receivers. To uniquely determine the position in two dimensions, only 2 TDOA measurements are needed. The use of this method also requires that the clocks

of the measuring units are synchronised, but does not impose any requirement on the mobile target. The use of this method requires solving a set of non-linear equations. This can be done through non-linear regression [28], or the equations can be linearized via a Taylor-series expansion and solved with an iterative algorithm [29]. Correlation techniques are usually used to obtain the time difference of the signals [24].

Figure 2.3: TDOA positioning technique [24]

• Received signal strength (RSS): estimates the distance between nodes based on the energy of the received signal. This can be done using a model for the path loss; a minimal sketch of such a model is given after this list. The accuracy of this model will ultimately determine the accuracy of the positioning system as a whole. Scene analysis is also commonly used in combination with RSS measurements (see section 2.3).

• Roundtrip time of flight (RTOF): measures the time-of-flight of the signal traveling from the transmitter to the receiver and back. The time synchronisation requirement is less strict than for TOA. The range measurement mechanism is however the same as that of TOA.

• Received signal phase method/Phase of arrival (POA): uses the phase (or phase difference) of the signal to estimate the range from the transmitter to the receiver. Once this range is obtained, the same positioning algorithms used for TOA can be used.
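The RSS item above depends on a path loss model. A common choice is the log-distance model, sketched below with assumed (environment-specific) parameters:

```python
# Log-distance path loss: RSS(d) = RSS(d0) - 10 * n * log10(d / d0).
RSS_D0 = -40.0  # assumed power at the reference distance d0 = 1 m [dBm]
N = 2.2         # assumed path loss exponent (~2 in free space, higher indoors)

def rss_to_distance(rss_dbm, rss_d0=RSS_D0, n=N, d0=1.0):
    """Invert the path loss model to estimate the transmitter-receiver range [m]."""
    return d0 * 10 ** ((rss_d0 - rss_dbm) / (10 * n))

print(rss_to_distance(-62.0))  # 10.0 m with these parameters
```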

2.2.2 Angulation

Angulation techniques attempt to localize mobile nodes based on measured angles instead of measured distances. Angles between a given (mobile) node and a number of reference nodes are used to estimate the location. This technique is also known as angle of arrival (AOA). Positioning in 2D requires 2 known reference points and 2 angles (3 measuring units are required in 3D). No time synchronisation is required for AOA. The disadvantages include the large and complex hardware that is required (directional antennae or antenna arrays), and the increase in location inaccuracy as the mobile target moves farther from the measuring units.

Figure 2.4: AOA positioning technique [24]
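A minimal 2D angulation sketch: with two reference nodes at known positions and the bearing measured at each, the mobile node lies at the intersection of the two bearing lines (all coordinates and angles below are invented):

```python
import numpy as np

def aoa_fix(p1, theta1, p2, theta2):
    """Intersect two bearing lines; thetas are angles w.r.t. the x-axis [rad]."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1 * d1 == p2 + t2 * d2 for the scalars t1 and t2.
    t = np.linalg.solve(np.column_stack((d1, -d2)), np.subtract(p2, p1))
    return np.asarray(p1, dtype=float) + t[0] * d1

print(aoa_fix([0, 0], np.deg2rad(45), [10, 0], np.deg2rad(135)))  # -> [5. 5.]
```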

2.3 Scene analysis techniques

Scene analysis techniques work by comparing an online measurement to a previously constructed database. This is thus a two stage process, comprising an offline data collection stage and an online position estimation stage. Different algorithms are required for the two stages. Figure 2.5 gives a graphical overview of these techniques.

2.3.1 Offline phase

In the first (offline) stage, features that are location dependent (also known as fingerprints) are collected at a number of known locations. These features are stored in a database. They are usually the received signal strength of some RF signal (e.g. UWB, WLAN, RFID, etc., see chapter 3), but images taken with a camera can also be used as features.

2.3.2 Online phase

During the second (online) stage, measurements are matched to the database to obtain a position estimate. There are different methods for this second stage, examples include:

• Probabilistic methods: these methods consider positioning as a classification problem. Assuming there are n location candidates $L_1, L_2, \dots, L_n$ and s is the observed signal strength vector, the decision rule is formulated as follows:

Choose $L_i$ if $P(L_i \mid s) > P(L_j \mid s)$ for $i, j = 1, 2, \dots, n$, $j \neq i$

These probabilities may be obtained by using for example Bayes’ rule.

• k-nearest-neighbour (kNN): the k closest matches of known locations are obtained from the previously built database using the root mean square error principle. These k location candidates are then averaged, with or without using these errors as weights (a minimal sketch is given after this list).

• Neural networks: during the offline phase, the signal strengths and their locations are used as inputs for training purposes. After this training, appropriate weights are obtained. The neural network can then be used during the online phase.

• Support vector machine (SVM): another machine learning algorithm that can be used for data classification and regression. The machine conceptually implements the following

idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed that can be used for binary classification [30].

• SMP (smallest M-vertex polygon): this technique uses the online RSS values to search for candidate locations in signal space with respect to each signal transmitter separately. M-vertex polygons are formed by choosing at least one candidate from each transmitter (supposing a total of M transmitters). Averaging the coordinates of the vertices of the smallest polygon (the one with the shortest perimeter) gives the location estimate.
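A minimal weighted kNN sketch of the online phase, as referenced in the kNN item above (the fingerprint database and RSS values are invented; each fingerprint holds one RSS value per transmitter):

```python
import numpy as np

db_positions = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)  # known (x, y)
db_rss = np.array([[-40, -70, -65],   # fingerprint recorded at (0, 0)
                   [-55, -45, -70],
                   [-60, -72, -48],
                   [-68, -58, -50]], dtype=float)

def knn_locate(rss, k=3, eps=1e-6):
    """Average the k closest fingerprints, weighted by signal-space distance."""
    dists = np.linalg.norm(db_rss - rss, axis=1)   # RMS-style error per fingerprint
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)         # closer matches weigh more
    return weights @ db_positions[nearest] / weights.sum()

print(knn_locate(np.array([-50.0, -60.0, -60.0])))
```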

Figure 2.5: Taxonomy of positioning techniques [31]

Chapter 3

Indoor positioning technologies

As mentioned earlier, one can differentiate indoor positioning systems based on the technology that is used to obtain the location information; this classification is shown in Figure 3.1. It contains two big categories, namely sensor methods and camera methods, although hybrid methods also exist. With sensor methods, the system is comprised of a positioning device and the infrastructure in a scene. These methods calculate the self-location relative to the infrastructure based on availability, intensity, or latency of the signal. Camera methods use computer vision to determine the location of the device, and can be further split up into methods that use features that are naturally present in the environment, and methods that use artificially placed markers. A general overview of the different technologies can be found in Figure 3.2. These technologies will be discussed in more detail in their respective sections. A graphical representation of different technologies with their respective accuracies and coverage areas can be found in Figure 3.3.

3.1 Sensor technologies

3.1.1 Sound

In contrast to electromagnetic waves such as UWB, sound is a mechanical wave propagating through a medium (usually air, though sound can move through walls and other objects as well). Audible sound can be used for positioning, though usually ultrasonic waves are used instead to make the system less obtrusive [5]. Systems using ultrasonic waves can be classified into two categories, depending on the system architecture. The first are active device systems, which determine the location based on multilateration from three or more ranges to fixed receivers deployed at known locations. Passive device systems consist of a reverse signal flow with multiple static emitters at known locations and one or more mobile passive devices which receive the signal. The distance between two nodes can be estimated via for example TOA or TDOA measurements (see chapter 2). With TDOA, each beacon broadcasts an RF (Radio Frequency) signal together with an ultrasonic pulse in order to trigger nearby receiver nodes. The range r between beacon and listener can then be derived from the difference of arrival times Δt between the RF and the US signal. Similar to the biosonar used by several animals (such as bats), echolocation can also be used for indoor positioning. Sound pulses are emitted by a transmitter into the environment and the returned echoes are used to locate and identify objects or even determine the transmitter’s location. This system does not require body-worn tags, but has an accuracy that is much lower than other US systems (≤ 0.5 m [33]).
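Since the RF trigger travels at the speed of light, it arrives effectively instantaneously compared to the ultrasonic pulse, so the range follows directly from the arrival-time difference (a minimal sketch with an assumed speed of sound):

```python
V_SOUND = 343.0  # speed of sound in air at roughly 20 degrees Celsius [m/s]

def rf_us_range(dt_seconds):
    """Range between beacon and listener from the RF/US arrival-time difference."""
    return V_SOUND * dt_seconds

print(rf_us_range(0.0125))  # a 12.5 ms difference corresponds to ~4.29 m
```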

Figure 3.1: IPS classification according to technology used. Sensor technologies: WLAN, Bluetooth, inertial measurements, RFID (active/passive), UWB (CW, IR, PRNM), infrared (active/passive), artificial light, sound (active/passive, echolocation), magnetic (NFER, FP, PM) and LIDAR. Camera technologies: natural features (with/without database) and markers.

Applications

• Sonitor is a commercially available system that uses ultrasound technology in combination with Wi-Fi and LF for indoor positioning. Applications of the system focus on the healthcare sector [34].

• Cricket: a research project by MIT that provides centimeter level accuracy with low cost, off-the-shelf components [35].

• Marvelmind: a low cost, centimeter level accuracy system. Applications focus more on hobbyists [36].

3.1.2 WLAN

Wireless Local Area Network (WLAN) is a type of wireless computer network that is most often used to provide internet access to wireless devices within a limited area. Most WLANs are based on the IEEE 802.11 standard, such as Wi-Fi. Since their inception, WLANs have become very widespread, and receivers are being built into more and more devices as the internet of things (IoT) gains more ground. Devices such as laptops and smartphones are almost ubiquitous in certain parts of the world, meaning that an IPS can be implemented simply in software. Lateration, fingerprinting and proximity techniques can all be used with WLAN [5]. Modelling the propagation is very challenging, as there are many factors to be considered, some of which are very hard to account for (e.g. the number of people in a room and the number of open/closed doors).

Figure 3.2: Overview of indoor positioning technologies. Coverage refers to ranges of single nodes [5]

Applications

• Skyhook is a software solution that gives developers access to a database containing the locations of over 800 million Wi-Fi access points and cellular towers. Skyhook deployed drivers to survey streets, highways, etc. to build up this database, but users can also submit access points themselves. Accuracy can be as low as 20 m [37].

• Ekahau provides software for site surveying and Wi-Fi planning. The heatmap that is built up by this survey can later be used by fingerprinting algorithms to determine the location of mobile devices. Accuracy is a few meters [38].

3.1.3 RFID

Radio Frequency Identification (RFID) is a rapidly developing technology which uses wireless communication for automatic identification of objects. The location of a tag is usually determined via proximity, although fingerprinting techniques and lateration techniques can also be used. There are two main types of RFID tags:

• Passive tags do not require any battery, as they use some of the energy that is transmitted in the RF signal; this means that they can have an unlimited lifetime. They are also very inexpensive when compared to active tags (¢1-$2 [39]). The main disadvantage of passive tags is that they have a relatively short range (limited to a few meters at most [40]).

Figure 3.3: Current positioning technologies according to their accuracy and coverage area [32]

• Active tags provide a much larger read range, but are a lot more expensive ($15-$100 [40]) and have to be replaced when their internal battery dies.

RFID is most interesting for inventory unit level tracking, as passive tags can be very cheap and some readers can process up to 1000 tags per second. The infrastructure is however rather specialized (not integrated in smartphones) and rather costly. As such, it is much less suitable for consumer applications [39].

Applications

• Navifloor provides an RFID underlay for flooring. These RFID tags can then be picked up by a mobile reader, allowing it to determine its location. Accuracy is limited by the density of the RFID grid, which is 50 cm in the standard case.

• Kimaldi is a manufacturer and wholesaler of a variety of RFID and biometric systems used for access and attendance control. Another application is wander detection of people with memory disorders such as Alzheimer’s via active RFID sensing [41].

3.1.4 Bluetooth

Bluetooth is a standard for wireless communication over relatively short distances. Bluetooth is managed by the Bluetooth Special Interest Group and is thus a proprietary standard (unlike for

example ZigBee). Receivers can be small, cheap and consume relatively little power. This makes Bluetooth an interesting technology for battery powered tags, which can easily be worn by users without too much obtrusion. The technology is also built into almost any modern smartphone, which enables consumer applications. Bluetooth was not intended for positioning, however, which is why proximity is most often used in combination with Bluetooth [39], although other positioning techniques such as fingerprinting or lateration are still possible.

Applications

• Zonith provides an RTLS that combines GPS and Bluetooth data in order to obtain a system that is able to track employees throughout corporate facilities. This location information is for example used for panic button alarms that relay the location of the employee that is in distress [42].

• Estimote offers low cost beacons and an accompanying SDK for app developers to enable location based services for their apps. Examples include an interactive tour at the Guggenheim museum, contextual information in Camp Nou and navigation at Qatar airport [43].

• Senion’s StepInside IPS uses information from the internal sensors of smartphones as well as Bluetooth beacons to determine the location of these phones. Combining this location information with a map allows geofencing of any area, instead of being limited to a specific beacon. Application areas include retail, healthcare and industry [44].

• Quupa uses the angle of arrival technique to determine the location of their tags in 2 or 3 dimensions. An accuracy of 0.5 m can be reached [45].

3.1.5 Ultra wide band (UWB)

Ultra-wideband radios have a bandwidth of more than 500 MHz. Such a large bandwidth improves reliability, as it is more likely that at least some of the transmitted frequencies will go through or around obstacles. This means that UWB is more suitable for environments that are sensitive to multipath propagation. A larger bandwidth also offers a higher resolution, which can improve accuracy. It is generally presented as a good candidate for short-range accurate location estimation [46]. Different positioning techniques have been proposed for use with ultra-wideband radios, such as lateration and fingerprinting. Range measurements for these lateration techniques can in general be obtained in three different ways [5]:

• Continuous waves: the signals used in this technique are sinusoidal waves whose frequency increases linearly with time [47]. The signal is analysed in the time domain at a relatively low time resolution, which makes it unsuitable for real time applications. The technique also requires large antennas, but the range data is very precise.

• Impulse radio: this technique uses extremely short pulses, which allows fast range measurements and thus makes it attractive for real time location determination. Power consumption is also lower than with other techniques.

• Pseudo noise modulation: this can be used for TOA measurements. More processing power is needed for determining the correlations, but the antennas can be smaller than with other techniques.

19 Applications

• Ubisense: Localizes small, active tags with long-life batteries. Deployment of multiple fixed receivers with antenna arrays is required. Accuracy is specified as 15cm in 3D with an operating range of about 50m [48].

• Zebra Enterprise solutions: Accuracy of 30cm under LOS condition is reported at a distance of 100m [49].

• Pozyx: a low cost UWB system that provides decimeter level accuracy in 2D or 3D for Arduino boards. The target audience is more focused on hobbyists [50].

3.1.6 Infrared

Infrared wavelengths are longer than those of visible light, and can no longer be seen by the human eye under most conditions. Positioning methods based on infrared light differ greatly from each other, but can generally be divided into three categories:

• Active beacons: the system has fixed infrared receivers placed at known locations throughout an indoor space and mobile beacons whose positions are unknown. The receivers receive short IR pulses from the mobile beacons and use this to determine the location of these mobile beacons. To achieve meter level accuracy, several receivers are required in each room.

• Infrared imaging using natural (i.e. thermal) radiation/passive infrared localization: infrared sensors can also be used to detect the temperature of persons or objects without the need for active tags. Multiple sensors can be placed around a room to measure the angles relative to the radiation source. Then, via the principle of AOA, the location of the source can be determined.

• Imaging of artificial light sources: the most well known example of this technology is the Microsoft Kinect. It uses continuously projected infrared structured light to capture 3D scene information with an infrared camera. The 3D structure can be computed from the distortion of a pseudo-random pattern of structured IR light dots. People can be tracked simultaneously up to a distance of 3.5 m at a frame rate of 30 Hz. An accuracy of 1 cm at a distance of 2 m has been reported.

Applications

• Stargazer by Hagisonic uses an infrared camera and relies on the deployment of passive landmarks. The camera can detect the relative position of these landmarks by their reflectivity [51].

• Ambiplex’s IR.Loc system uses passive infrared localization to detect heat sources without the need for tags. Applications include access control, surveillance to reduce fire risk and device control by head motion for disabled persons [52].

3.1.7 Visible light

Instead of infrared light, visible light can also be used for positioning. Physically, the visible light spectrum is the portion of the electromagnetic (EM) spectrum that is visible to the human eye. These wavelengths range from roughly 400 nm to 700 nm. Visible light positioning (or VLP for short) is a subset of visible light communication (VLC). The communication is achieved by modulation of the lights. The change in optical power can be picked up by a photodiode or camera, after which the analog signal or images can be converted to a stream of digital data. Generally, LEDs are used for VLC because of their easy control and low cost, though some VLC systems with fluorescent lights have also been developed [53][54].

Visible light is similar to infrared light in many regards; both [55]:

1. are absorbed by dark-coloured objects

2. are diffusely reflected by light-coloured objects

3. are directionally reflected from shiny surfaces

4. can penetrate through transparent barriers (e.g. glass) but not opaque barriers (e.g. walls)

When using visible light over IR, the cost can be shared with that of the lighting infrastructure, lowering the overall cost of the system. Proximity, triangulation, fingerprinting and vision analysis techniques can all be used in combination with visible light. VLP shows great promise but does not have many commercial applications. Most examples are proof-of-concept works or pilot projects.

Applications

• Philips Lighting is arguably the most successful company in the VLP market. Their app enables phone cameras to receive VLC signals and determine their position. A number of pilot projects have been completed in cooperation with large retailers in Europe (e.g. Carrefour and MediaMarkt) [56].

• ByteLight is another big player in the VLC/VLP market, and was the first company to develop the technology for commercial purposes. Its business focused on the United States. Pilot projects were completed with big retailers such as Walmart [57][58].

3.1.8 Magnetic positioning

Indoor position information can also be obtained by analysing the variations in magnetic fields. These fields can be generated by special transmitters, but the earth’s magnetic field can also be used for this purpose. Magnetic positioning systems can generally be classified into a few categories [5]:

• Near-Field Electromagnetic Ranging (NFER): a small antenna is used to derive the phase relation between the electric and magnetic field components of an electromagnetic field. When the receiver is close to the antenna, this phase difference is 90°. As the receiver moves further away, this phase difference decreases, so it can be used to estimate the range from the transmitter to the receiver, which can in turn be used to localize this receiver using lateration techniques. Submeter accuracy has been reached with this approach [59].

21 • Permanent magnet systems: the magnetic field produced by permanent magnets can be used to localize a mobile magnetic sensor.

• Systems using the earth’s magnetic field: this approach detects anomalies in the earth’s magnetic field to determine the position of a mobile receiver. These anomalies can be caused by electrical power systems, metal inside buildings, etc. A magnetometer can detect these variations and use a fingerprinting database to determine its position.

Applications

• IndoorAtlas developed an indoor positioning system based on fluctuations in the earth’s magnetic field. It provides 1-2m accuracy (P90) [60] but a fingerprinting database needs to be built.

• Polhemus offers an indoor positioning system based on alternating current electromagnetics. The system is able to track both objects and people with 6 DOF. Applications include healthcare, military and research [61].

• Q-track states that it is the inventor and sole provider of NFER positioning. They offer multiple tags, ranging from small, lightweight wearable ones to interactive touch-screen devices. Battery life ranges from 2 to 20 days of continuous use [59].

3.1.9 IMES

The main concept of IMES (Indoor MEssaging System) is to transmit the position and floor ID of the transmitter with the same RF signal as GPS. IMES transmits latitude, longitude, height, and floor ID by replacing the ephemeris and clock data in the navigation message of GPS. A single unit of IMES is enough to get the position data, since the position itself is directly transmitted [62].

3.1.10 Inertial measurements

The previous technologies often made use of signals (often RF) that could be analysed to obtain position information. Another approach to positioning is to start from a given initial position, and to integrate all displacements up to the current moment. This process is known as ’dead reckoning’. In pedestrian localisation, inertial measurement units (IMUs) can be used to obtain these displacements; submeter accuracy has been achieved for this application (P95) [63]. A complete IMU consists of a 3-axis accelerometer (for acceleration measurements along its x, y and z-axis), a 3-axis gyroscope (for angular velocity measurements about the three axes) and a 3-axis magnetometer (for measuring changes in magnetic fields along the three axes). Robotics applications generally use encoders for dead reckoning, although IMUs can be used as well, or even in combination with the encoders. The main advantage of dead reckoning is that it can operate independently from any infrastructure, making positioning possible in environments where it is impossible or impractical to deploy such infrastructure. The major disadvantage of this technique is the accumulated error. The error on each individual measurement is typically small, but integrating all the different displacements generally leads to drift. This technique is thus best not used on its own, but can provide valuable information for hybrid approaches. It is most often used to provide rough estimates in between measurements of a more accurate system.
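A minimal planar dead reckoning sketch (assuming forward velocity from encoders and yaw rate from a gyroscope; any measurement bias accumulates into unbounded position drift):

```python
import numpy as np

def dead_reckon(pose, v, omega, dt):
    """Integrate one displacement: pose is (x, y, heading), v [m/s], omega [rad/s]."""
    x, y, theta = pose
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return np.array([x, y, theta])

pose = np.array([0.0, 0.0, 0.0])      # a known initial pose is required
for _ in range(100):                  # 10 s of driving straight at 0.5 m/s
    pose = dead_reckon(pose, v=0.5, omega=0.0, dt=0.1)
print(pose)                           # [5. 0. 0.] in the ideal noise-free case
```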

3.1.11 LIDAR

Light Detection And Ranging (LIDAR) sensors provide dense distance measurements of the environment. These distances are obtained from the travel time of a pulsed laser [64]. LIDARs generally provide very accurate measurements, but can be very costly: prices range from €500¹ all the way to €75,000². Due to their high cost, they are used when high accuracy and dense environmental information are needed, for example in the navigation of autonomous mobile robots. The simplest LIDARs provide a line scan of the environment. More advanced sensors also measure in the vertical plane. Due to the density of the measurements, more processing power is generally required than for other technologies. Techniques to process laser scans fall into the category of scene analysis techniques (see chapter 2).

¹ https://www.slamtec.com/en/Lidar
² http://velodynelidar.com/hdl-64e.html

3.2 Camera technologies

An entirely different way to determine position in an indoor environment is with the use of a camera. Camera methods have become increasingly popular in measurement applications that require sub-mm accuracy, due to advancements in the technology of image sensors and computational capabilities.

3.2.1 Natural feature points

One way of determining position is via the use of natural feature points. These natural features are objects that are already present in the environment and can be recognized and matched to a certain reference database. The key advantage of these methods is that there is no requirement for installation of local infrastructure such as the deployment of sensor beacons. In other words, reference nodes are substituted by a digital reference point list. Accordingly, these systems have the potential for large scale coverage without a significant increase in costs [5].

It is, however, not always necessary to construct a database of features beforehand. Position changes can be detected without an external reference, and as such mobile objects can be tracked without the need for a database of features. This has been used in practice to track people [65] and industrial robots [66].

3.2.2 Artificial features

These methods do require that the environment is altered, as markers have to be placed at known positions. This increases the robustness of the system, for example in changing lighting conditions. Markers also simplify the automatic detection and are easier to distinguish from each other than natural features, because a unique code can be used for each feature. The impact on the environment can be minimized by placing these markers on, for example, the ceiling. If it is not feasible or desirable to physically deploy targets, then it is also possible to project reference points (potentially using infrared to make the system completely unobtrusive).

3.3 Advantages and disadvantages

An overview of the different advantages and disadvantages of different sensor indoor positioning methods can be found in Table 3.1. The advantages and disadvantages of camera methods can be found in Table 3.2.

• WLAN
  – Advantages: low cost (can be implemented simply in software); versatility (a standard smartphone or laptop can be used as a receiver); can cover a relatively large area; no line of sight component is required
  – Disadvantages: accuracy level between 2 m (static objects) and 5 m (moving objects) [67]; time consuming calibration procedure, which has to be repeated when changes occur in the environment

• RFID
  – Advantages: requires no direct contact or line of sight; passive RFID tags are very cheap
  – Disadvantages: time consuming calibration procedure; readability range of passive tags is limited, and thus requires lots of (more expensive) antennas

• UWB
  – Advantages: position accuracy in the centimeter range [46]; requires little power; better resistance to multipath propagation due to the large bandwidth
  – Disadvantages: expensive hardware; can interfere with nearby systems that operate in the same spectrum (e.g. WiMAX and digital TV)

• Bluetooth
  – Advantages: beacons are small and cheap ($5-$30) [39]; standard smartphones can be used as receivers; beacons have a relatively large range (3-50 m)
  – Disadvantages: batteries have to be replaced; building a fingerprinting database is a slow and tedious process; accuracy is relatively low (1 m)

• Inertial measurements
  – Advantages: cheap and small sensors; does not require changes in infrastructure
  – Disadvantages: requires an initial position; accumulated error

• Magnetic positioning
  – Advantages: cheap and small sensors
  – Disadvantages: accuracy is limited to 1-2 m [60]; time consuming calibration procedure, which has to be repeated when large changes occur in the environment

• IMES
  – Advantages: easy to achieve seamless positioning in indoor-outdoor environments
  – Disadvantages: high initial cost of transmitters; accurate positioning is difficult (3-10 m)

• Ultrasonic waves
  – Advantages: position accuracy in the range of mm-cm [68]
  – Disadvantages: requires dedicated receivers with a generally high energy consumption, and large rooms require many of them (the Active Bat system uses 720 receivers and up to 75 mobile tags in an office space of 1000 m² [69]); coverage of one transmitter is rather small (≤10 m); temperature compensation is required for very accurate positioning [5]; multipath propagation decreases accuracy

• Infrared
  – Advantages: very accurate positioning and tracking is possible
  – Disadvantages: high cost [70]; complicated set-up; requires line of sight

• VLP
  – Advantages: uses no extra space; uses no extra power; requires no extra markers in the environment
  – Disadvantages: requires a filter for UV light; needs a line of sight component; needs extra hardware for each lamp

Table 3.1: Advantages and disadvantages of different sensor positioning techniques

• Natural feature points with database
  – Advantages: small error (decimeter range [71])
  – Disadvantages: significant computing time required; the position corresponding to the initial frame must be known

• Natural features without database
  – Advantages: can estimate position even at an unknown location in a spacious field
  – Disadvantages: limited number of available scenes

• Pre-defined markers
  – Advantages: highly stable; easy to estimate the initial position
  – Disadvantages: low flexibility; require that the environment is altered (the impact can be decreased by placing infra-red reflectors at the ceiling)

Table 3.2: Advantages and disadvantages of different camera positioning techniques

Chapter 4

System topologies

The topology of an indoor positioning system refers to where the position information is obtained and used. In general, one can classify an IPS into four different categories based on topology [24]:

• Remote positioning system: the signal transmitter is mobile and several fixed measuring units receive the transmitter’s signal. The results from all measuring units are collected, and the location of the transmitter is computed in a master station. An example of this topology would be asset tracking in a warehouse or hospital. The equipment itself does not need to know where it is located inside the facility. Instead, staff want to access this location information through a centralized system.

• Self-positioning: the measuring unit is mobile and receives the signals of several transmitters in known locations. The unit has the capability to compute its location based on the measured signals. An example of this topology would be an autonomous mobile robot. When the robot is fully autonomous and its position is not required elsewhere, this topology is a good choice.

• Indirect remote positioning: a self-positioning system that sends the measurement result to the remote side. When the position information of a mobile robot has to be known both by the robot and an external observer, this topology makes the most sense. The robot can use its location in order to execute its tasks, and can also be observed by, for example, engineers.

• Indirect self-positioning: a remote positioning system that sends the measurement result to the mobile unit. Indirect self-positioning allows for offloading of computationally intensive tasks, and removes the need to have individual sensing hardware present on each platform. In certain applications, this can significantly reduce the cost of the system.

Chapter 5

IPS comparison

As mentioned earlier, it is difficult to make a comparison between different IPS, as a lot of parameters depend on the environment inside which they are used. An objective comparison can only be made with respect to a certain reference environment. The reference environment in this section is a floor of the ”Sint-Jozefs kliniek Izegem” hospital in Belgium (see Figure 5.1). For this comparison, a number of commercial systems were used. These systems are based on the following technologies (for a detailed description of how these technologies work, see chapter 3):

• Ultrasound

• UWB

• Infrared

• Visible light

• Bluetooth

• RFID

• Other technologies (usually operating in the ISM band, e.g. ZigBee)

For each system the average accuracy, hardware cost, power usage and update rate were compared. There are more factors to consider when selecting an IPS, but these are some of the important ones. The full results in table form can be found in Table 5.1. Graphical comparisons of accuracy, cost and power use can be found in Figures 5.2 and 5.3. Some assumptions had to be made, however, to make this comparison possible:

• Power usage is the combined power use of all the transmitters and receivers

• Only commercial systems were taken into account

• Existing Wi-Fi and light infrastructure: for systems that made use of this infrastructure, only the surplus cost and power usage was taken into account

• 10 receivers need to be localized at the same time

• Location/beacon management software was not taken into account

Figure 5.1: Plan of hospital floor [72]

• Whenever possible, power use was listed/calculated at 1 Hz

• When the receiver is a smartphone, a cost of €200 is assumed per receiver

When looking at Figure 5.2, we see that the optimum for both parameters can be found at the lower left. This represents a system that can provide high accuracy at a low cost. In general, a choice has to be made, and a system will be either low cost or accurate. This can be seen on the graph by the fact that most of the shapes (which represent the general range of accuracies and costs for a particular technology) are located along the axes. There is one notable exception though, namely sound-based systems. Indeed, the Marvelmind IPS can provide centimeter level accuracy at surprisingly low hardware cost. The trade-off is however immediately visible in Figure 5.3. Here we see that sound based systems use considerably more power than other technologies with comparable accuracy (e.g. UWB, IR). This increases their overall costs, especially for a battery powered solution such as the one made by Marvelmind Robotics, making it less attractive overall as a solution.

This comparison illustrates the statement made previously that there is no single ’best’ IPS. Rather, one has to look at the application and compare different systems for that specific environment.

Figure 5.2: Comparison of accuracy and (hardware) cost of commercial IPS

Figure 5.3: Comparison of accuracy and power usage of commercial IPS

Technology             Average accuracy [m]  Cost of system [€]  Power usage (system) [W]  Update rate low [Hz]  Update rate high [Hz]

Sound
  Sonitor              0.30                  -                   -                         0.3                   1
  Marvelmind           0.02                  2,750.23            601.70                    0.5                   24

UWB
  OpenRTLS             0.15                  10,537.00           17.00                     0.02                  40
  Zebra (DartTag)      0.30                  24,042.77           2,000.15                  0.01                  200
  Kio RTLS             0.30                  20,473.60           26.75                     1                     4
  Ubisense             0.15                  34,275.00           144.62                    1                     30

Infrared
  Stargazer            0.10                  11,991.68           15.00                     20                    -
  Northstar (Robotino) 0.05                  13,446.54           -                         -                     -

Bluetooth
  Quupa                0.50                  2,028.00            20.70                     0.1                   100
  Senion               2.00                  4,285.00            0.50                      -                     -
  Estimote             2.93                  2,301.62            1.64                      0.33                  0.40

RFID
  Zebra (WhereTag)     2.00                  9,850.00            48.50                     0.00                  1.00
  Ekahau               2.50                  6,117.90            0.06                      -                     -

ISM band
  Zolertia             3.00                  1,397.36            1.40                      0.6                   2
  Purelink             2.00                  5,573.41            30.00                     0.50                  1.00

VLP
  Bytelight            1.00                  2,036.56            4.80                      -                     -

Table 5.1: Comparison of commercial IPS

Chapter 6

The SLAM problem

The SLAM (Simultaneous Localization and Mapping) problem is the problem of building a map of the environment while simultaneously determining the robot’s position relative to this map [73]. This problem is also referred to as Concurrent Mapping and Localization (CML). A mobile robot moves through an unknown environment, starting at a known location (mostly given by coordinates {0, 0, 0}). Its motion is uncertain due to the robot’s model and sensor noise. As it roams, the robot can observe the environment using some form of measurement. To compute the map and the robot’s location within this map, the uncertainty of the data must be taken into account.

Figure 6.1: Graphical model of the SLAM problem that characterizes the evolution of controls, states and measurements.

Figure 6.1 shows that the state of the robot $x_t$ is generated from the previous state $x_{t-1}$. Therefore it makes sense to specify the probability distribution from which $x_t$ is generated. Hence all past states, measurements and controls are taken into account. The evolution of state might be given by a probability distribution of the following form:

$p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t})$

with $x_{0:t-1}$ all the previous poses, $z_{1:t-1}$ all the previous observations and $u_{1:t}$ all the previously executed control actions, including the current one. We assume here that the robot executes a control first and then takes a measurement of the environment m.

But if the state $x$ is complete, then it is a sufficient summary of all that happened in previous time steps. Therefore $x_{t-1}$ is a sufficient statistic of all previous controls and measurements up to the current point in time. This leads to the following insight:

$p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$   (6.1)

The same applies to the model of the process by which measurements are generated. If $x$ is complete, we have an important conditional independence:

$p(z_t \mid x_t, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)$   (6.2)

This states that $x_t$ is sufficient to predict the measurement $z_t$. These insights are important for the following paragraphs.

A key concept in probabilistic robotics is that of belief, which reflects the robot's internal knowledge about the state of the environment. For example, a robot usually cannot know its pose exactly, because it is not directly measurable; it must derive it from data. This belief is represented through conditional probability distributions. A belief distribution assigns a probability (or density value) to each possible hypothesis with regard to the true state. Belief distributions are posterior probabilities over state variables, conditioned on the available data. Belief is denoted as:

$\mathrm{bel}(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$   (6.3)

This belief, however, is computed after incorporating the measurement $z_t$. It will prove useful to also calculate a posterior before taking the measurement into account. This predicted belief is denoted as:

$\overline{\mathrm{bel}}(x_t) = p(x_t \mid z_{1:t-1}, u_{1:t})$   (6.4)

The Bayes Filter

This is the most general algorithm for calculating beliefs. It consists of two major steps: a prediction step and a correction step. The prediction step is the calculation of the belief $\overline{\mathrm{bel}}(x_t)$. The correction step takes into account the measurement $z_t$, which leads to the belief $\mathrm{bel}(x_t)$.

Algorithm 1 Bayes Filter

1. Algorithm Bayes_Filter($\mathrm{bel}(x_{t-1})$, $u_t$, $z_t$):

2. for all $x_t$ do

3. $\overline{\mathrm{bel}}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, \mathrm{bel}(x_{t-1})\, dx_{t-1}$

4. $\mathrm{bel}(x_t) = \eta\, p(z_t \mid x_t)\, \overline{\mathrm{bel}}(x_t)$

5. endfor

6. return $\mathrm{bel}(x_t)$

The basic Bayes filter in pseudo-algorithmic form is depicted in Algorithm 1. The attentive reader may have noticed that the algorithm's formulas for calculating the beliefs do not match the prior equations; they can be derived from those equations by applying Bayes' rule:

$p(x_t \mid z_{1:t}, u_{1:t}) = \dfrac{p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t})}{p(z_t \mid z_{1:t-1}, u_{1:t})}$   (6.5)

$p(x_t \mid z_{1:t}, u_{1:t}) = \eta\, p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t})$   (6.6)

Because of conditional independence, this equation can be simplified using equation 6.2. This leads to:

$p(x_t \mid z_{1:t}, u_{1:t}) = \eta\, p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1}, u_{1:t})$   (6.7)

and when we substitute equation 6.4 we obtain:

$\mathrm{bel}(x_t) = \eta\, p(z_t \mid x_t)\, \overline{\mathrm{bel}}(x_t)$   (6.8)

This equation is implemented in the Bayes filter algorithm. Next we expand the term $\overline{\mathrm{bel}}(x_t)$, exploiting the law of total probability for the continuous case:

$p(x) = \int p(x \mid y)\, p(y)\, dy$   (6.9)

Expanding the predicted belief with equation 6.9 yields:

$\overline{\mathrm{bel}}(x_t) = p(x_t \mid z_{1:t-1}, u_{1:t})$   (6.10)

$\overline{\mathrm{bel}}(x_t) = \int p(x_t \mid x_{t-1}, z_{1:t-1}, u_{1:t})\, p(x_{t-1} \mid z_{1:t-1}, u_{1:t})\, dx_{t-1}$   (6.11)

Once again we exploit the assumption that our state is complete, so that equation 6.1 applies. The second probability can be written as $\mathrm{bel}(x_{t-1})$, referring to equation 6.3:

$\overline{\mathrm{bel}}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, p(x_{t-1} \mid z_{1:t-1}, u_{1:t})\, dx_{t-1}$   (6.12)

$\overline{\mathrm{bel}}(x_t) = \int p(x_t \mid x_{t-1}, u_t)\, \mathrm{bel}(x_{t-1})\, dx_{t-1}$   (6.13)

These equations constitute the Bayes filter, which is the most general algorithm for calculating beliefs. It is the basis for tackling the SLAM problem, and many algorithms build on this method for calculating the probabilities.
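To make the recursion concrete, here is a minimal Python sketch of the Bayes filter for a discrete state space. The scenario (a robot on a circular corridor of ten cells with a door detector) and all model probabilities are illustrative assumptions, not taken from the literature discussed here.

import numpy as np

N = 10                      # number of discrete states (cells on a circular corridor)
belief = np.ones(N) / N     # uniform prior bel(x_0)

def predict(belief, u):
    # Prediction step (line 3 of Algorithm 1), as a sum instead of an integral.
    # Assumed motion model: move u cells; 80% exact, 10% undershoot/overshoot.
    bel_bar = np.zeros(N)
    for x in range(N):
        bel_bar[(x + u) % N] += 0.8 * belief[x]
        bel_bar[(x + u - 1) % N] += 0.1 * belief[x]
        bel_bar[(x + u + 1) % N] += 0.1 * belief[x]
    return bel_bar

def correct(bel_bar, z, doors):
    # Correction step (line 4 of Algorithm 1).
    # Assumed sensor model: a door detector that is right 90% of the time.
    likelihood = np.array([0.9 if (x in doors) == z else 0.1 for x in range(N)])
    bel = likelihood * bel_bar
    return bel / bel.sum()  # normalization by eta

doors = [0, 3, 7]
belief = correct(predict(belief, u=1), z=True, doors=doors)

Note how the integral of line 3 becomes a sum because the state space is discrete; the structure of the two steps is otherwise identical to Algorithm 1.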

Chapter 7

Solution to the SLAM problem

There are various methods for robotic map building. Two of the most common are the Kalman filter and the particle filter. These techniques are probabilistic approaches based on the mathematical derivation of the Bayes filter, and they tackle the problem of uncertainty and sensor noise. Particle filters are more commonly used as a solution to the SLAM problem, but we explain both because Kalman filters are sometimes used in combination with particle filters.

7.0.1 Kalman filters

The Kalman filters are an application of the Bayes filter. Like the Bayes filter, the Kalman filter has two major steps, a prediction step and a correction step, which correspond to the beliefs of the Bayes filter. The prediction step corresponds to $\overline{\mathrm{bel}}(x_t)$. The correction step, also called the measurement update, corresponds to the calculation of $\mathrm{bel}(x_t)$.

The Kalman filters represent the belief $\mathrm{bel}(x_t)$ at time $t$ by the mean $\mu_t$ and the covariance $\Sigma_t$. These filters form the basis for several algorithms. They assume that the beliefs are represented by Gaussian distributions, so that the posteriors are Gaussian, and they assume linear system dynamics.

Algorithm 2 Kalman Filter

1. Algorithm Kalman_Filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):

2. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$

3. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$

4. $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$

5. $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$

6. $\Sigma_t = (I - K_t C_t)\, \bar{\Sigma}_t$

7. return $\mu_t$, $\Sigma_t$

Algorithm 2 lists the Kalman filter. Lines 2 and 3 are the prediction step; lines 4 to 6 are the correction step of the algorithm.

For calculating the means and covariances, matrices are used. $A_t$ is the state transition matrix: it takes into account how the world changes when the robot does not execute a movement, in other words the influence of the world on the state of the robot; it corresponds to the state vector $x_t$. $B_t$ is the control matrix: it implements the robot's physics (acceleration, deviation, ...) and corresponds to the control vector $u_t$. Matrix $R_t$ models the noise of the robot's motion.

For the correction step, the Kalman gain $K_t$ is calculated. It specifies the degree to which the measurement is incorporated into the new state estimate; in other words, how certain the observation is. Matrix $C_t$ maps the state to the measurement vector $z_t$, and $Q_t$ models the noise of the observation. Matrix $I$ is the identity matrix, which follows from the mathematical derivation of the Kalman filter. For this derivation and more insight into the Kalman filter, we refer to the book Probabilistic Robotics [74].
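As an illustration of Algorithm 2, a minimal numpy sketch for a constant-velocity system with state [position, velocity] follows; the matrices and noise values are assumptions chosen for the example, not prescribed values.

import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction step (lines 2-3): propagate mean and covariance.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction step (lines 4-6): compute the Kalman gain and fuse the measurement.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Assumed constant-velocity model: state = [position, velocity].
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # state transition matrix
B = np.array([[0.5 * dt**2], [dt]])       # control matrix (acceleration input)
C = np.array([[1.0, 0.0]])                # only the position is measured
R = 0.01 * np.eye(2)                      # assumed motion noise
Q = np.array([[0.5]])                     # assumed measurement noise

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = kalman_filter(mu, Sigma, u=np.array([0.1]), z=np.array([0.2]),
                          A=A, B=B, C=C, R=R, Q=Q)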

A great disadvantage of the Kalman filter is that it assumes everything to be Gaussian, that observations are linear functions of the state, and that the next state is a linear function of the previous state. Unfortunately, state transitions and measurements are seldom linear in the real world. This has led to a variety of algorithms that deal with the linearisation of these non-linear functions. These algorithms are briefly illustrated below; for more information we again refer to the book Probabilistic Robotics [74].

Extended Kalman Filter (EKF)

This filter performs a local linearisation of the non-linear functions $g$ and $h$, which correspond to the state prediction and the measurement prediction of the Kalman filter. Where the Kalman filter uses matrices for calculating the mean and covariance, the EKF uses non-linear functions. The linearisation is performed via Taylor expansion: the non-linear function is approximated by a linear function that is tangent to it at the mean of the Gaussian. Figure 7.1 illustrates the Taylor expansion. It constructs a linear approximation to a function $g$ from $g$'s value and slope. This slope is given by the partial derivative:

$g'(u_t, x_{t-1}) := \dfrac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}}$   (7.1)

$g'(u_t, x_{t-1}) =: G_t$   (7.2)

The same goes for the measurement function $h$. The most likely state for a Gaussian is the mean of the posterior, $\mu_{t-1}$. Thus $g$ is approximated by its value at $\mu_{t-1}$ and $u_t$, and the linear extrapolation is achieved by a term proportional to the gradient of $g$ at $\mu_{t-1}$ and $u_t$:

$g(u_t, x_{t-1}) = g(u_t, \mu_{t-1}) + G_t (x_{t-1} - \mu_{t-1})$   (7.3)

The same applies for the measurement function $h$, whose Taylor expansion is developed around $\bar{\mu}_t$:

$h(x_t) = h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t)$   (7.4)

Algorithm 3 Extended Kalman Filter

1. Algorithm Extended_Kalman_Filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):

2. $\bar{\mu}_t = g(u_t, \mu_{t-1})$

3. $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t$

4. $K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}$

5. $\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))$

6. $\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t$

7. return $\mu_t$, $\Sigma_t$

Figure 7.1: Linearisation via Taylor Expansion

The matrices G and H are the Jacobians of their respective non-linear functions g and h. The value of these Jacobians depends on ut and µt´1, hence it differs for different points in time.

This algorithm is an extension of the Kalman filter that performs local linearisation to handle non-linearities. It works well in practice for moderate non-linearities and uncertainty, but to construct large maps the EKF becomes computationally intractable.
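To make the linearisation of equations 7.1 to 7.3 concrete, the following sketch implements the EKF prediction step for an assumed unicycle motion model g, with the Jacobian G_t derived by hand; the model and noise values are our illustration, not part of the original formulation.

import numpy as np

def g(u, x):
    # Assumed unicycle motion model: state x = [px, py, theta], control u = [v, w].
    v, w = u
    px, py, th = x
    return np.array([px + v * np.cos(th), py + v * np.sin(th), th + w])

def G_jacobian(u, x):
    # Jacobian G_t = dg/dx evaluated at the mean (equations 7.1 and 7.2).
    v, _ = u
    th = x[2]
    return np.array([[1.0, 0.0, -v * np.sin(th)],
                     [0.0, 1.0,  v * np.cos(th)],
                     [0.0, 0.0,  1.0]])

def ekf_predict(mu, Sigma, u, R):
    # EKF prediction (Algorithm 3, lines 2-3): non-linear mean, linearised covariance.
    mu_bar = g(u, mu)
    G = G_jacobian(u, mu)
    Sigma_bar = G @ Sigma @ G.T + R
    return mu_bar, Sigma_bar

mu_bar, Sigma_bar = ekf_predict(np.zeros(3), 0.1 * np.eye(3),
                                u=np.array([1.0, 0.1]), R=0.01 * np.eye(3))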

Unscented Kalman Filter (UKF)

The EKF uses a Taylor expansion to locally linearise non-linear functions. The UKF instead uses a linearisation called the unscented transform: it selects so-called sigma points and propagates them through the non-linear function, and from these transformed and weighted sigma points a Gaussian is reconstructed.

Figure 7.2: Graphical representation of the unscented transform: weighted sigma points drawn from a Gaussian distribution are propagated through the non-linear function $g$, after which a new Gaussian is reconstructed from the transformed, weighted points.

Figure 7.2 depicts the workings of the unscented transform. Each sigma point defined in the Gaussian is assigned an importance weight. These sigma points are propagated through the non-linear function, after which a new Gaussian is formed, taking into account the importance weight of each sigma point. This way of linearising the function to keep everything Gaussian has the advantage that no Jacobian needs to be calculated. Although this method is a better approximation than the EKF for non-linear models, the differences are rather small, and the EKF is still used more than the UKF. For the interested reader, Algorithm 4 lists the Unscented Kalman filter, wherein $\mathcal{X}_t$ is the set of sigma points used to linearise the functions $g$ and $h$. We kept this explanation fairly short; if the reader wishes to learn more about the Unscented Kalman filter, we recommend the book Probabilistic Robotics [74].
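The following numpy sketch illustrates the unscented transform of Figure 7.2 using the standard scaled sigma-point scheme; the parameters alpha, beta and kappa, and the example polar-to-Cartesian function, are assumptions for illustration.

import numpy as np

def unscented_transform(mu, Sigma, g, alpha=1.0, beta=2.0, kappa=1.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    gamma = np.sqrt(n + lam)
    S = np.linalg.cholesky(Sigma)          # matrix square root: Sigma = S @ S.T
    # 2n+1 sigma points: the mean and the mean +/- the scaled columns of S.
    X = np.vstack([mu[None, :], mu + gamma * S.T, mu - gamma * S.T])
    # Importance weights for the mean (wm) and the covariance (wc).
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    # Propagate every sigma point through the non-linear function.
    Y = np.array([g(x) for x in X])
    # Reconstruct a Gaussian from the transformed, weighted sigma points.
    mu_y = wm @ Y
    d = Y - mu_y
    Sigma_y = (wc[:, None] * d).T @ d
    return mu_y, Sigma_y

# Assumed example: push a Gaussian in (range, bearing) through polar-to-Cartesian.
g = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
mu_y, Sigma_y = unscented_transform(np.array([1.0, 0.5]), 0.01 * np.eye(2), g)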

Extended Information Filter (EIF)

The last Kalman filter we discuss is the Extended Information filter. Just like the Kalman filter, the Information filter represents the belief by a Gaussian. The difference lies in the way the Gaussian is represented: the Kalman filter represents it by its moments (mean and covariance), while the information filters represent the Gaussian in its canonical parameterization. The canonical parameterization of a Gaussian is given by the information matrix $\Omega$ and the information vector $\xi$, obtained from the moments as:

$\Omega = \Sigma^{-1}$   (7.5)

$\xi = \Sigma^{-1} \mu$   (7.6)

The advantage of this form is the difference in update equations, which has an impact on the computational complexity of the algorithm: what is computationally complex in moment form becomes simple in the canonical form, and vice versa.

Algorithm 4 Unscented Kalman Filter

1. Algorithm Unscented_Kalman_Filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):

2. $\mathcal{X}_{t-1} = (\mu_{t-1} \quad \mu_{t-1} + \gamma\sqrt{\Sigma_{t-1}} \quad \mu_{t-1} - \gamma\sqrt{\Sigma_{t-1}})$

3. $\bar{\mathcal{X}}_t^{*} = g(u_t, \mathcal{X}_{t-1})$

4. $\bar{\mu}_t = \sum_{i=0}^{2n} w_m^{[i]}\, \bar{\mathcal{X}}_t^{*[i]}$

5. $\bar{\Sigma}_t = \sum_{i=0}^{2n} w_c^{[i]}\, (\bar{\mathcal{X}}_t^{*[i]} - \bar{\mu}_t)(\bar{\mathcal{X}}_t^{*[i]} - \bar{\mu}_t)^T + R_t$

6. $\bar{\mathcal{X}}_t = (\bar{\mu}_t \quad \bar{\mu}_t + \gamma\sqrt{\bar{\Sigma}_t} \quad \bar{\mu}_t - \gamma\sqrt{\bar{\Sigma}_t})$

7. $\bar{\mathcal{Z}}_t = h(\bar{\mathcal{X}}_t)$

8. $\hat{z}_t = \sum_{i=0}^{2n} w_m^{[i]}\, \bar{\mathcal{Z}}_t^{[i]}$

9. $S_t = \sum_{i=0}^{2n} w_c^{[i]}\, (\bar{\mathcal{Z}}_t^{[i]} - \hat{z}_t)(\bar{\mathcal{Z}}_t^{[i]} - \hat{z}_t)^T + Q_t$

10. $\bar{\Sigma}_t^{x,z} = \sum_{i=0}^{2n} w_c^{[i]}\, (\bar{\mathcal{X}}_t^{[i]} - \bar{\mu}_t)(\bar{\mathcal{Z}}_t^{[i]} - \hat{z}_t)^T$

11. $K_t = \bar{\Sigma}_t^{x,z}\, S_t^{-1}$

12. $\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$

13. $\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T$

14. return $\mu_t$, $\Sigma_t$

Algorithm 5 is the extended Kalman filter rewritten in canonical form. For more information about this algorithm, we recommend the book Probabilistic Robotics [74].

Algorithm 5 Extended Information Filter

1. Algorithm Extended_Information_Filter($\xi_{t-1}$, $\Omega_{t-1}$, $u_t$, $z_t$):

2. $\mu_{t-1} = \Omega_{t-1}^{-1}\, \xi_{t-1}$

3. $\bar{\Omega}_t = (G_t \Omega_{t-1}^{-1} G_t^T + R_t)^{-1}$

4. $\bar{\xi}_t = \bar{\Omega}_t\, g(u_t, \mu_{t-1})$

5. $\bar{\mu}_t = g(u_t, \mu_{t-1})$

6. $\Omega_t = \bar{\Omega}_t + H_t^T Q_t^{-1} H_t$

7. $\xi_t = \bar{\xi}_t + H_t^T Q_t^{-1} [z_t - h(\bar{\mu}_t) + H_t \bar{\mu}_t]$

8. return $\xi_t$, $\Omega_t$

The Extended Information filter has another derivative, the Sparse Extended Information filter. This filter needs less memory and has a lower computational load, but it approximates more of the data, which leads to a loss of accuracy. It is not discussed further here because it has no added value for this document; should the reader be interested, the previously mentioned book has a chapter dedicated to this topic.
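A small numpy sketch (our illustration) of equations 7.5 and 7.6, converting a Gaussian between moment form and canonical form and back:

import numpy as np

def to_canonical(mu, Sigma):
    # Equations 7.5 and 7.6: Omega = Sigma^-1, xi = Sigma^-1 mu.
    Omega = np.linalg.inv(Sigma)
    return Omega @ mu, Omega

def to_moments(xi, Omega):
    # The inverse mapping: Sigma = Omega^-1, mu = Omega^-1 xi.
    Sigma = np.linalg.inv(Omega)
    return Sigma @ xi, Sigma

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
xi, Omega = to_canonical(mu, Sigma)
mu_back, Sigma_back = to_moments(xi, Omega)
assert np.allclose(mu_back, mu) and np.allclose(Sigma_back, Sigma)

The conversion itself is a matrix inversion either way, which is exactly why neither form is universally cheaper: the measurement update (lines 6 and 7 of Algorithm 5) is a simple addition in canonical form, while the prediction is simple in moment form.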

Particle filters

Particle filters are among the most popular filters used in robotics. This implementation of the Bayes filter represents the posterior belief by multiple samples. A sample corresponds to a state the robot might be in; a finite number of samples, known as particles, are drawn from a proposal distribution, and together they represent where the robot might be. The particles are assigned weights, since some particles have a higher likelihood of being the actual state of the robot. This allows for multi-modal beliefs, which was not possible with the Kalman filter. The particle filter consists of three main steps:

1. Sampling the particles using a proposal distribution.

2. Computing the importance weight of each particle.

3. Resampling: drawing samples from the previous set in proportion to their weights and using them again.

This particle filter localisation method is also called Monte-Carlo localization (MCL) and is now the gold standard for mobile robot localization.

Algorithm 6 Particle Filter

1. Algorithm Particle_Filter($\mathcal{X}_{t-1}$, $u_t$, $z_t$):

2. $\bar{\mathcal{X}}_t = \mathcal{X}_t = \emptyset$

3. for $j = 1$ to $J$ do

4. sample $x_t^{[j]} \sim p(x_t \mid x_{t-1}^{[j]}, u_t)$

5. $w_t^{[j]} = p(z_t \mid x_t^{[j]})$

6. $\bar{\mathcal{X}}_t = \bar{\mathcal{X}}_t + \langle x_t^{[j]}, w_t^{[j]} \rangle$

7. endfor

8. for $j = 1$ to $J$ do

9. draw $i \in \{1, \dots, J\}$ with probability $\propto w_t^{[i]}$

10. add $x_t^{[i]}$ to $\mathcal{X}_t$

11. endfor

12. return $\mathcal{X}_t$

The first step consists of drawing samples, mostly from the odometry model of the robot. Thus a particle $x_t^{[j]}$ is drawn from the distribution $p(x_t \mid x_{t-1}^{[j]}, u_t)$, as in line 4 of Algorithm 6. This step corresponds to the prediction step, which was also based on the motion model.

After a sample is drawn, a weight $w_t^{[j]}$ is computed. This is the second step of the particle filter. The weight takes into account the observation model; this corresponds to the correction step and is represented in line 5 of the algorithm. After all samples are drawn and assigned a weight, they are stored in a set $\bar{\mathcal{X}}_t$. The last step is the resampling step. Some particles have a low likelihood of being the state of the robot, and these unlikely samples are replaced by more likely ones. This avoids spending many samples on unlikely states, and since we have a finite number of samples, the resampling step makes sure there is room for new states. The samples that are not discarded are used again in the next step.
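A minimal numpy sketch of Algorithm 6 for a one-dimensional state follows; the Gaussian motion and measurement models and their noise levels are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
J = 100                                    # number of particles

def particle_filter(X, u, z):
    # Step 1 (line 4): sample from the motion model p(x_t | x_{t-1}, u_t).
    X_pred = X + u + rng.normal(0.0, 0.1, size=J)
    # Step 2 (line 5): importance weights from the observation model p(z_t | x_t).
    w = np.exp(-0.5 * ((z - X_pred) / 0.2) ** 2)
    w /= w.sum()
    # Step 3 (lines 8-11): resample with probability proportional to the weights.
    return X_pred[rng.choice(J, size=J, p=w)]

X = rng.normal(0.0, 1.0, size=J)           # initial particle set
X = particle_filter(X, u=0.5, z=0.6)       # one predict-weight-resample cycle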

Rao-Blackwellized particle filter (RBPF)

This iteration of the particle filter has become the most widely used tool to solve the SLAM problem. The key idea of the RBPF is that each individual particle carries its own map of the environment. The RBPF for SLAM thus estimates the joint posterior over the poses of the robot and the map, given the observations and movements. It does so by using the factorization given by equation 7.7:

$p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = p(x_{1:t} \mid z_{1:t}, u_{1:t-1}) \cdot p(m \mid x_{1:t}, z_{1:t})$   (7.7)

In this equation $p(x_{1:t} \mid z_{1:t}, u_{1:t-1})$ represents the path posterior and $p(m \mid x_{1:t}, z_{1:t})$ represents the map posterior. Estimating the path posterior is similar to the localisation problem (MCL): only the trajectory of the robot must be estimated, which is performed using a particle filter. The map posterior can then be computed efficiently, since the poses $x_{1:t}$ are known when estimating the map $m$. As a consequence of this factorisation, a Rao-Blackwellized particle filter for SLAM maintains an individual map for each sample and updates this map based on the trajectory estimate of that sample, using 'mapping with known poses' [75].
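To illustrate the 'mapping with known poses' step that each particle performs on its own map, the following sketch uses the standard occupancy grid log-odds update (our illustrative assumption; actual RBPF implementations use more elaborate sensor models): given a known pose and one range beam, the cells along the beam are marked free and the endpoint cell occupied.

import numpy as np

L_OCC, L_FREE = 0.85, -0.4                 # assumed log-odds increments

def update_map(log_odds, pose, beam_angle, beam_range, resolution=0.1):
    # One beam of 'mapping with known poses': free space along the ray,
    # occupied space at the measured endpoint.
    x, y, theta = pose
    n_steps = int(beam_range / resolution)
    for k in range(n_steps + 1):
        r = k * resolution
        cx = int((x + r * np.cos(theta + beam_angle)) / resolution)
        cy = int((y + r * np.sin(theta + beam_angle)) / resolution)
        if 0 <= cx < log_odds.shape[0] and 0 <= cy < log_odds.shape[1]:
            log_odds[cx, cy] += L_OCC if k == n_steps else L_FREE
    return log_odds

grid = np.zeros((100, 100))                          # log-odds map of a 10 m x 10 m area
grid = update_map(grid, pose=(5.0, 5.0, 0.0), beam_angle=0.0, beam_range=2.0)
p_occupied = 1.0 - 1.0 / (1.0 + np.exp(grid))        # log odds back to probabilities

Within a RBPF, every particle $j$ would apply this update to its own grid, using its own pose estimate $x_t^{[j]}$.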

One of the first efficient algorithms to use this technique is FastSLAM [76], which was a significant step in the field of Simultaneous Localization and Mapping. Many other variations are based on the FastSLAM algorithm; one clear variation is UFastSLAM, which is explained later in this document.

Chapter 8

Inventory

This chapter is an inventory of a number of SLAM (simultaneous localisation and mapping) algorithms. These algorithms are divided into two subcategories: the first mainly uses 2D laser scanner data and odometry to build a map and find the position of the robot, while the second uses vision sensors instead of 2D laser scanner data. These SLAM algorithms are all open source and can be downloaded freely. Furthermore, this chapter provides a general description of each algorithm's workings, advantages and disadvantages.

The selection of algorithms was made from an open source website that gives a general description of different SLAM algorithms. Eventually, six algorithms using 2D laser scanner data were chosen, and another six using vision sensors:

SLAM algorithms using 2D laser scanner data

• DP-SLAM

• Gmapping

• GridSLAM

• RoboMap

• TinySLAM

• UFastSLAM

SLAM algorithms using vision sensors

• EKFmonocularSLAM

• ORB-SLAM

• RatSLAM

• SeqSLAM

• RGB-DSLAM

• LSD-SLAM

In addition to these descriptions, the issues encountered in getting these programs operational are mentioned, along with how to solve them.

DP-SLAM

DP-SLAM is an algorithm provided by Duke University. It can be downloaded from several websites1 [77]. Its preferred operating system is Linux, but further in this text a description is given of how to run it on Windows. The algorithm was first introduced in 2003; Austin Eliazar and Ronald Parr are its creators and the authors of its paper [78].

What is DP-SLAM

DP-SLAM aims to achieve simultaneous localization and mapping without the use of landmarks. It maintains a joint probability distribution over maps and robot poses by using a particle filter. This technique allows the program to maintain uncertainty about the map over multiple time steps until these uncertainties can be resolved [77].

'DP' stands for distributed particle: DP-SLAM uses a new map representation in which a particle filter represents both robot poses and possible map configurations. It makes no assumptions about the environment [78].

The program uses raw laser range data and odometry to create grid maps, which are output as png files. The download already includes a dataset; this log file can easily be replaced.

The algorithm creates two sorts of png files as output: hmap.png and lmap.png. The hmap is created after every iteration of the high level mapping process; the lmap is created after each independent run of the low level mapping. A map of the environment is created during the run and supplied to the high level mapper.

Using DP-SLAM

The program is written to run on Linux. We encountered some difficulty getting DP-SLAM operational, but these difficulties were overcome. There is also the possibility of running the program on a Windows operating system. For this, a Linux command terminal needs to be installed; we recommend Cygwin2, a collection of GNU and open source tools which provide functionality similar to Linux. With this program it is possible to run native Linux code and applications on Windows. A rebuild of the application from source is required.

Linux

This is the standard OS (operating system) for this algorithm. There are many versions of Linux; for this research, Ubuntu was used. While trying to run the program, a segmentation fault occurred, meaning that the algorithm tried to write to memory it had no access to. This can easily be avoided by making sure the workspace has enough RAM available for the program.

1www.openslam.org
2https://www.cygwin.com

Figure 8.1: Last generated files of loop5.log. (a) hmap13: complete map; (b) lmap12: last low level map

• Download DP-SLAM and unpack the file

• Open the Linux Terminal

• Using the terminal, go to the DP-SLAM folder, enter 'make' and execute it. This will generate an application for the algorithm.

• Enter './slam -p <name of log file>'. When using the supplied dataset, enter './slam -p loop5.log'.

The program will run and create png files. These files are saved in the same folder as the program itself. The last created file (hmap13) is the total map created by the program.

Windows

As previously mentioned, Cygwin must be installed. For the installation, we refer to an installation video [79]. Make sure that the 'Devel' tools are installed as well.

• Install Cygwin

• Download DP-SLAM

• Save the downloaded file in the 'home' folder Cygwin created for itself and unpack it

• Open the Cygwin terminal

• Using the terminal, go to the DP-SLAM folder and enter 'make'. This will generate an application for the algorithm.

• Enter './slam -p <name of log file>'. When using the supplied dataset, enter './slam -p loop5.log'.

The program will run and create png files. These files are saved in the same folder as the program itself. The last created file (hmap13) is the total map created by the program.

Employment DP-SLAM

While it seems that this algorithm can create fairly accurate maps, paper [80] states that this algorithm's data forms a blob with some estimated walls. The authors also claim that DP-SLAM runs slowly, which can be the case when using it in real time due to its high RAM requirements; the creators of the software acknowledge this problem. Although it is not officially supported, being open source, [81] states that the algorithm is very reliable, and that its accuracy and ease of use are acceptable.

Gmapping

Gmapping (or grid mapping) is one of the best performing open source SLAM algorithms. Giorgio Grisetti, Cyrill Stachniss and Wolfram Burgard created and improved this program.

What is Gmapping

Gmapping is a program that uses raw laser range data and odometry to build a map of the environment and locate the robot in it. It does this using a Rao-Blackwellized particle filter, in which each particle carries an individual map of the environment. A great number of particles would be costly to compute; therefore, Gmapping's approach is to compute an accurate proposal distribution, taking into account the movement of the robot and its most recent observation. A scan matcher is used to find the most likely pose by matching the current observation against the map constructed so far. To compute the proposal distribution, the algorithm samples around the most likely position reported by the scan matcher.

Furthermore, the algorithm selectively carries out resampling operations, which seriously reduces the problem of particle depletion [82]. To efficiently draw the next generation of samples, the algorithm first uses the scan matcher to determine the meaningful area of the observation likelihood function. To decide whether or not a resampling is needed, the effective sample size is taken into account. If no resampling operation is needed, the complexity of integrating a single observation depends only linearly on the number of particles. When resampling is required, the complexity increases, and the size of the map needs to be taken into account, because each particle stores its own map, which needs to be copied.

Gmapping can be downloaded from openslam.org. The site states that the software requirements are Linux/Unix, GCC 3.3/4.0 and the latest version of CARMEN, but over the years versions of Gmapping have been created for other software packages like ROS and MATLAB.

Using Gmapping

As previously mentioned, the program is written for Linux, but for our research we used a MATLAB version and a ROS (Robot Operating System) version, with MATLAB running on Windows and ROS running on Ubuntu (Linux).

Windows The Windows version of this algorithm was downloaded from Github. MATLAB needs to be installed in order to run this program.

• Download the algorithm from GitHub3.

• Unpack the file.

• Set the path in MATLAB to this location.

• Type 'main' in the command window and press enter.

3https://github.com/Allopart/rbpf-gmapping

Figure 8.2: Creating an occupancy grid map using Gmapping simulated with TurtleBot (ROS)

Note that this version is a simulation model in MATLAB. It contains the general code and workings of Gmapping, but it uses no recorded dataset to create its map: the map is predefined and a robot simulator is included in the simulation. As the map and the robot's movements are predefined, the algorithm builds up a map and also displays different graphs which make it possible to evaluate it. For implementation or the use of a dataset, Linux is a better option.

Linux

ROS (Robot Operating System) features a Gmapping package. The website www.wiki.ros.org/gmapping contains a detailed explanation of how to get the algorithm operational.

• Create or download a suitable rosbag4

• Open a first terminal and enter ’rosmake gmapping’.

• Bring up the master by entering ’roscore’.

• Set the time parameter by entering 'rosparam set use_sim_time true'.

• Bring up the algorithm: 'rosrun gmapping slam_gmapping scan:=base_scan'.

• Run the rosbag by entering in a new terminal: 'rosbag play --clock <name of the bag>'.

• Save the map to a pgm file by entering: 'rosrun map_server map_saver -f <name of the map>'.

4Recorded ROS topics. This is the dataset needed for the algorithm.

For more information, visit wiki.ros.org/slam_gmapping/Tutorials/MappingFromLoggedData. It is also possible to simulate the algorithm using TurtleBot: this robot simulator can be directly linked to the Gmapping algorithm to create a real time simulation.

8.0.1 Employment Gmapping

This algorithm is one of the most accurate SLAM algorithms. By improving the proposal distribution and using adaptive resampling, it creates better estimates, so fewer particles are needed. By only resampling when needed (the adaptive resampling technique), the filter keeps a variety of samples in the particle set, which reduces the particle depletion problem. The complexity of the resampling step could be reduced further by using another representation of the map: the authors of [82] state that a more intelligent map representation, as done in DP-SLAM, can reduce this complexity. However, because of the adaptive resampling strategy, the resampling steps are not the dominant part of the algorithm.

GridSLAM

This SLAM algorithm is an easy to use and understand Rao-Blackwellized particle filter. The authors are Dirk Haehnel, Dieter Fox, Wolfram Burgard and Sebastian Thrun.

What is GridSLAM

GridSLAM is an algorithm that combines Rao-Blackwellized particle filtering and scan matching. The scan matching minimizes odometric errors during mapping, and a probabilistic model of the residual errors of this process is used in the resampling steps. This significantly reduces the number of particles required, which improves the computation time; it also reduces the particle depletion problem, which would otherwise prevent the robot from closing large loops [83]. The scan matching routine is used to transform sequences of laser measurements into odometry measurements. The corrected odometry and the remaining laser scans are then used for the map estimation in the particle filter. The lower variance in the corrected odometry reduces the number of necessary resampling steps and in this way decreases the particle depletion problem. Like most algorithms, this program takes raw laser range data and (scan-matched) odometry and creates a grid map.

Figure 8.3: Map created with GridSLAM. Yellow depicts a high probability of being occupied. Blue is the lowest probability of being occupied.

Using GridSLAM

GridSLAM was originally created for Linux using the CARMEN toolkit. Over the years, developers have created a version for MATLAB; this version was downloaded from GitHub and runs on Windows.

Windows

• Download GridSLAM from GitHub5.

• Unpack the file.

• Set the path in MATLAB to this location.

• Enter 'demo_rbkfslam' in the command window and press enter.

• The program will run with an included dataset.

The dataset can be changed to any MATLAB dataset.

Employment GridSLAM

The use of the scan matching routine reduces the number of resampling steps, which decreases the particle depletion problem. Although this algorithm sounds promising, the number of particles has a great influence on the computation time. When using 100 samples, real time mapping is possible; in another example, the authors of the original paper [83] used 500 particles, which led to a computation time of several hours.

5Location of GridSLAM: https://github.com/HeSunPU/grid_slam

RoboMAP

Although this program is not a SLAM algorithm itself, we include it in this research. The application provides a number of algorithms to simulate data and to create a map using laser range data. The aim of RoboMAP (Studio) is off-line work with data from a 2D laser scanner (2D LS), mainly the SICK PLS100. The program also includes a powerful 2D LS simulator able to simulate 2D LS data.

What is RoboMAP

This application is a smart tool for processing various data types in the field of robotics. It works with vector data only. It is an analytical tool for estimating the possibilities of robot navigation using a 2D LS. The 2D laser sensor data can come from a SICK laser range finder or the provided simulator.

RoboMAP contains several modules for simultaneous localisation and mapping. The basic module is GeometricSLAM. It uses geometric primitives and is suitable for small indoor office environments. The SLAM algorithm implemented in this application is very sensitive to a lack of beams in the scanning process: if too many beams did not reflect, there is a high possibility of the algorithm failing. The key idea is the use of geometric primitives or 'elements' for representing the environment rather than an occupancy grid. The standard structure is FastSLAM, which uses a Rao-Blackwellized particle filter; this means that every particle contains its own map. Although this method of representing the map has its advantages, it consumes a large amount of memory and is difficult to combine with different occupancy grids [84].

Using RoboMAP

This application is open source, which means it can be freely downloaded from its website [85]; the website includes a large amount of information regarding the program. The application uses MATLAB to visualise the data, so MATLAB needs to be installed in order to use RoboMAP. It has two versions, a 32-bit version and a 64-bit version, and it runs on most Windows versions from Windows 2000 to Windows 8. A disadvantage is that the program works with its own data file format, which means that every dataset that is obtained needs to be converted. The program has a subprogram to achieve this, but it is an extra step.

Windows

Following the next steps, RoboMAP (Studio) can be run on a Windows operating system. Take note that MATLAB is needed to run it.

• Download and install MATLAB.

• Download and install RoboMAP.

• Start the program.

• Go to 'MapBuilding' and select 'geometricSLAM'.

• Open a PLB file. An example can be found in the folder 'examples' under 'RoboMAP'.

• The program will build up the map.

In order to use another dataset, the program provides a converter to convert it to a PLB file. This converter can be found on the left side of the screen when opening 'geometricSLAM'.

Employment RoboMAP

This tool is an easy way to evaluate and work with laser range data. It consists of numerous algorithms, but we are only interested in the SLAM algorithm. Although the key idea of using geometric shapes rather than an occupancy grid has its advantages, the algorithm is not usable in large spaces: it works best in small office spaces and cannot cope with missing laser beams in the data, which is the main reason why it fails in large spaces.

Figure 8.4: RoboMAP(Studio) using 2D laser sensor data to create a vector map

TinySLAM

TinySLAM is the algorithm with the fewest lines of code in this list: less than 200 lines of C code. It is not only named TinySLAM; BreezySLAM and CoreSLAM are other appellations for the same algorithm. The authors of TinySLAM, downloaded from openslam.org, are Bruno Steux and Oussama El Hamzaoui.

What is TinySLAM

TinySLAM is a SLAM algorithm developed to be as simple as possible. It uses a particle filter based localization subsystem, and uses raw laser range data and odometry to create a grid map and find the position of the robot. A notable difference is that the algorithm immediately commits to a pose of the robot. Other algorithms using particle filters usually maintain a probability distribution over multiple points where the robot might be; when more data is available, the program can then decide where the robot actually is and choose one hypothesis. TinySLAM does not do this: it immediately selects the hypothesis with the highest probability and computes the map from that point. In other words, other particle filter SLAM algorithms maintain multiple maps to overcome uncertainties, whereas TinySLAM has only one map.

The previously mentioned website states that the program is written for Linux and Windows. When using the program on Linux, a compiler is needed, namely GCC 3.3/4.0.x.

Using TinySLAM

For Windows, MATLAB is needed. The code was downloaded from GitHub6. The program, however, will not work directly; some changes need to be made. The created classes have attributes, one of which is 'Access'. In some classes this attribute was assigned another class, which the program does not allow. To remedy this, 'Access' was set to 'public'. This needed to be done in the classes 'scan' and 'map'.

Windows

Following the next steps, the TinySLAM program can be run on a Windows operating system. Take note that MATLAB is needed to run it.

• Download TinySLAM (BreezySLAM) from GitHub.

• Place a logfile in the 'MATLAB' folder. Two are provided with the download ('exp1' and 'exp2').

• Open MATLAB and set the path to the downloaded file.

• Open the previously mentioned classes and set the attribute 'Access' to 'public'. Save the files.

• Go to the command window and type 'logdemo(<name of logfile>, true, 9999)'.

• Press enter and the program will run.

6https://github.com/simondlevy/BreezySLAM/tree/master/matlab

Figure 8.5: TinySLAM: map of exp1 logfile

Linux

Running TinySLAM on Linux is also a possibility. When running MATLAB on Linux, the steps are the same as for Windows. There is a YouTube tutorial on how to run TinySLAM, created by Simon D. Levy7.

Employment TinySLAM

The use of only one map is a disadvantage in long corridors, where the robot can get lost. The goal of TinySLAM is to close small loops, exploring laboratories rather than corridors [86]. Another disadvantage of TinySLAM is that it does not render points that it is uncertain about [80]. TinySLAM is a fast algorithm and is therefore suitable for real time mapping [86][81][80].

TinySLAM code

The original code is written in the C language. Because the algorithm is so compact, we decided to include the whole code in this document (Figure 8.6). Keep in mind that the given code is written to be implemented on a physical robot; the MATLAB code described in 'Using TinySLAM' is created to work with logfiles.

7https://www.youtube.com/watch?v=uKfK41cpjts

Figure 8.6: TinySLAM/CoreSLAM code in C language

UFastSLAM

UFastSLAM tries to overcome the limitations of the Rao-Blackwellized particle filter (RBPF) and FastSLAM: the derivation of Jacobian matrices and the linear approximation of non-linear functions, which can make the filter inconsistent [87].

What is UFastSLAM

UFastSLAM computes the proposal distribution through the measurement updates of the unscented filter in the sampling step of the particle filter. When multiple features are observed at the same time, the sigma points of the unscented filter are updated in every feature update step for accurate vehicle pose estimation. The algorithm also updates each feature state with the unscented filter, without accumulating linear approximation errors and without calculating the Jacobian matrix of the observation model. The use of the scaled unscented transformation (SUT) removes the impact of an inaccurately linearised transformation from polar to Cartesian coordinates in the feature initialization [87].

Figure 8.7: UFastSLAM algorithm. (a) MATLAB simulated map; (b) comparison of the created map with a satellite image

Using UFastSLAM

Like all previous algorithms, UFastSLAM is open source and can be downloaded from openslam.org. The algorithm is created for MATLAB on Windows. It creates feature maps of the environment using laser range data and odometry. The package also contains a dataset, recorded in Victoria Park, which is loaded when running the program. Figure 8.7a shows the resulting feature map and the robot's path; Figure 8.7b visually compares the created map with an aerial view of Victoria Park.

Windows

Getting the algorithm working on Windows is quite simple. Take note that this code was run on MATLAB 2011.

• Download UFastSLAM from openslam.org

• Open MATLAB and set its path to the UFastSLAM folder.

• Type in the command window ’uslam’ and press enter.

• The program visualises the measurements and positions.

Another dataset can be implemented by replacing the Victoria Park dataset.

Employment UFastSLAM

This SLAM algorithm was designed as an evolution of the FastSLAM algorithm. The authors of [87] state that, when measurement noise increases, UFastSLAM's estimation error increases much more slowly than that of FastSLAM, making UFastSLAM more robust to high sensor noise. Furthermore, UFastSLAM needs only one fifth of the particles FastSLAM needs to achieve the same estimation error, at a computational cost that is only slightly higher than that of FastSLAM.

EKFmonocularSLAM

EKFmonocularSLAM uses the filtered approach (instead of the keyframe-based approach) and tries to overcome the limitations of the core single Extended Kalman Filter SLAM technique used for real time monocular camera tracking, by using inverse depth point parametrization and 1-point RANSAC.

What is EKFmonocularSLAM

The algorithm takes as input a monocular image sequence and its camera calibration, and outputs the estimated camera motion and a sparse map of salient point features, as shown in Figure 8.8.

Figure 8.8: EKFmonocularSLAM. (a) Camera image with point features. Thick red: low innovation inliers; thin red: high innovation inliers; magenta: rejected by 1-point RANSAC; blue: no match found by cross-correlation. (b) Camera motion (black) and scene in top view

EKFmonocularSLAM is an online visual method which uses a probabilistic filtering approach to sequentially update estimates of the positions of features (the map) and the current location of the camera. These SLAM methods have different strengths and weaknesses compared to visual odometry: they are able to build consistent and drift-free global maps, but with a bounded number of mapped features. One of the biggest limitations of previous monocular SLAM algorithms was that they could only make use of features which were close to the camera relative to its distance of translation, and therefore exhibited significant parallax during motion; the problem lay in initialising the uncertain depth estimates of distant features. EKFmonocularSLAM copes with this well-known problem using inverse depth point parametrization and 1-point RANSAC. To summarize, when taking only filtering methods into account, this algorithm performs well and copes with uncertain depth estimation as described above, but compared to the latest developments in monocular SLAM, the newer keyframe-based systems (for example ORB-SLAM) perform better. One of the major drawbacks of the filtered approach is that every frame is processed to jointly estimate the map feature locations and the camera pose, which wastes a lot of computation on consecutive frames with little new information and accumulates linearization errors.

Inverse depth point parametrization. This point model suits the projective nature of the camera sensor, allowing very distant points, naturally captured by a camera, to be coded in the SLAM map. It is also able to code infinite depth uncertainty, allowing undelayed initialization of point features from the first frame in which they were seen. At any step, the measurement equation shows the high degree of linearity demanded by the EKF.

1-point RANSAC. Robust data association has proven to be a key issue in any SLAM algorithm. 1-point RANSAC is an algorithm based on traditional random sampling but adapted to the EKF. Specifically, it exploits probabilistic information from the EKF to greatly improve the efficiency over standard RANSAC.

Using EKFmonocularSLAM

EKFmonocularSLAM is also open source and can be downloaded from openslam.org. The algorithm is created for MATLAB on Windows. It works as-is on MATLAB 2011, but needs to be adjusted to work on newer versions. It creates feature maps of the environment and the location of the camera from a monocular sequence. The package also contains a monocular sequence from an indoor environment to test with.

Windows

Getting the algorithm working on Windows is quite simple. Take note that this code was run on MATLAB 2011; it was not directly compatible with MATLAB 2016.

• Download EKFmonocularSLAM from openslam.org using an SVN client such as TortoiseSVN (https://tortoisesvn.net/)

• Unzip the monocular sequence ('ic.tgz') located in /trunk/sequences/

• Open MATLAB and set its path to the EKFmonocularSLAM folder.

• Run monoslam.m in the command window.

• The program visualises the sequence with detected features and also the feature map with the calculated camera path.

Another monocular sequence can be used by replacing the images in the /trunk/sequences/ic/ folder. Keep in mind that when using images taken with another camera, you need to perform a camera calibration to set the correct camera parameters for the algorithm. Camera parameters can be obtained using the MATLAB camera calibration toolbox. They then have to be set in the function initialize_cam; note that initialize_cam does not use the same naming convention as the output of the camera calibration tool, so the names need to be changed. See http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html for more info.

ORB-SLAM

Compared to the previous SLAM algorithm (EKFmonocularSLAM), ORB-SLAM is a keyframe/feature-based monocular SLAM algorithm that operates in real time in large indoor and outdoor environments. The system is robust against motion blur (fast rotation and acceleration), allows loop closing and relocalization, and includes automatic initialization.

What is ORB-SLAM

The algorithm takes monocular monochrome or color images as input and outputs a sparse map of 3D points, along with the keyframe poses and the covisibility graph. ORB-SLAM is based on the most representative keyframe-based SLAM system, PTAM by Klein and Murray, who first introduced the idea of splitting camera tracking and mapping into parallel threads instead of processing every frame as in the filtered method. The tracking thread is responsible for estimating the camera pose assuming the map is fixed; the mapping thread is responsible for providing the map and can take a lot of time per keyframe. The tracking thread sends new keyframes to the mapping thread, and the mapping thread sends a map of point features to the tracking thread. The map is a cloud of point features and is made from the set of keyframes (similar to the input of multi-view reconstruction, but not all available from the start); every keyframe has a known pose, except the first two, for which a stereo initialisation needs to be performed (any stereo algorithm can be used). Not every frame can be a keyframe: properties of a good keyframe are that it is not in the same place as the previous ones, that it is not motion blurred, and that the tracking is really confident of the position estimate. To handle the problem of motion blur (in motion blurred images point features cannot be detected correctly), edges are added: motion blur occurs when moving fast, but because motion blur is actually one-dimensional, the horizontal or vertical edges can still be used to help tracking (Figure 8.9). In addition, a frame-to-frame rotation estimator is implemented to handle rapid accelerations (large velocities are a problem, but so are large accelerations). PTAM does not detect large loops, and its relocalization is also not robust.

Figure 8.9: Edge extraction on a motion blurred image

That is why ORB-SLAM uses three threads instead of two, all running in parallel. The additional third thread is in charge of loop closing: it takes the last keyframe processed by the local mapping and tries to detect and close loops. The accuracy of the system is typically below 1 cm in small indoor scenarios and a few meters in large outdoor scenarios (once the scale is aligned with the ground truth).

Using ORB-SLAM

ORB-SLAM is open source and can be downloaded from openslam.org. The algorithm was tested on Ubuntu 12.04 and is compatible with ROS Fuerte, Groovy and Hydro.

Ubuntu 12.04

Prerequisites

• Install the Boost library (used to launch the different threads of the SLAM system): 'sudo apt-get install libboost-all-dev'

• It is recommended to install the full desktop version of ROS Fuerte.

• g2o dependencies (Eigen3, BLAS, LAPACK, CHOLMOD): to compile g2o, CHOLMOD, BLAS, LAPACK and Eigen3 need to be installed.

– sudo apt-get install libsuitesparse-dev
– sudo apt-get install libblas-dev
– sudo apt-get install liblapack-dev
– sudo apt-get install libeigen3-dev

• Install the DBoW2 library

Installation

• Make sure you have installed ROS and all library dependencies (Boost, Eigen3, CHOLMOD, BLAS, LAPACK).

• In your ROS package path (check your environment variable ROS_PACKAGE_PATH) clone this repository: git clone https://github.com/raulmur/ORB_SLAM.git ORB_SLAM

• Build g2o: go into Thirdparty/g2o/ and execute:

– mkdir build
– cd build
– cmake .. -DCMAKE_BUILD_TYPE=Release
– make

• Build DBoW2: go into Thirdparty/DBoW2/ and execute:

– mkdir build
– cd build
– cmake .. -DCMAKE_BUILD_TYPE=Release
– make

• Build ORB_SLAM: go into the ORB_SLAM root and execute:

– mkdir build
– cd build
– cmake .. -DROS_BUILD_TYPE=Release
– make

Check the readme file inside the trunk on how to use ORB-SLAM.

OpenRatSLAM

RatSLAM is an appearance-based topological SLAM system based on the neural processes underlying navigation in the rodent brain, capable of operating with low resolution monocular image data. OpenRatSLAM is an open source, easily reconfigurable, modular version of RatSLAM, integrated with ROS and capable of online and offline operation. OpenRatSLAM takes monocular images and odometry (wheel sensors) as standard ROS messages and calculates a topological map (no feature map is calculated, in contrast to the previous vision SLAM algorithms). It is also possible to use only camera images as input: the rotation and translation speed is then calculated from the intensity profiles of the images and how the images are shifted relative to one another.

What is OpenRatSLAM

OpenRatSLAM is based on RatSLAM, a SLAM system based on computational models of the navigational processes in the hippocampus of a rat brain. The final system is an evolution over a course of more than 10 years; during this time three versions have been developed, and we discuss the final version here. This system consists of three major modules: 'pose cells', 'local view cells' and 'experience map' (see Figure 8.10).

Figure 8.10: RatSLAM final version

The pose cells are a Continuous Attractor Network (CAN) of units whose cells encode the place and heading corresponding to the three-dimensional pose of a ground-based robot, (x, y, θ). The most stable state is a cluster of activated units (an activity packet), and the centroid of this activity packet is the robot's best internal estimate of its current pose. The pose cells get input from the landmark cues (local view cells), but also from the self-motion cues, which correspond to the odometry of the robot. As mentioned, in OpenRatSLAM it is also possible not to use the robot's odometry, but to calculate the translation and rotation speeds using just the camera images. The local view cells (landmark cues) are an expandable array of units, each of which represents a unique visual scene in the world (an image in which new landmarks are detected). When a new visual scene is detected, a local view cell is created and associated with the raw pixel data in that scene. In addition, a link is learnt between this local view cell and the centroid of the activity packet in the pose cells at that time. So for every new scene, not only the pixel data but also the corresponding estimated position and orientation is linked, so the algorithm knows the exact position from which the landmarks were seen. The experience map is a graphical map (Figure 8.11) that maintains a unique estimate of the robot's pose by combining information from the pose cells and the local view cells (landmark cues). The experience map is made up of nodes or experiences (black circles) representing distinct places and scenes in the environment. While the robot moves around, it forms more and more of these experience nodes, which are then connected by transitions representing the movement between them.

Figure 8.11: RatSLAM experience map

Using OpenRatSLAM

OpenRatSLAM is open source and can be downloaded from openslam.org. The algorithm is integrated with ROS; monocular images (and optionally odometry) are used as input data.

Using Ubuntu 12.04 and ROS Groovy

Dependencies

OpenRatSLAM depends on the ROS packages opencv2 and topological_nav_msgs, and also on the 3D graphics library Irrlicht.

• Install Irrlicht: sudo apt-get install libirrlicht-dev

• Install OpenCV: sudo apt-get install libopencv-dev

Build instructions

• Set up a ROS catkin workspace:

– source /opt/ros/groovy/setup.bash
– mkdir -p ~/catkin_ws/src
– cd ~/catkin_ws/src
– catkin_init_workspace
– cd ..
– catkin_make
– source devel/setup.bash

• Check out the source into ~/catkin_ws/src:

– cd ~/catkin_ws/src
– git clone https://github.com/davidmball/ratslam.git ratslam_ros
– cd ratslam_ros
– git checkout ratslam_ros

• Build OpenRatSLAM:

– cd ~/catkin_ws
– catkin_make

OpenSeqSLAM

OpenSeqSLAM is an open source MATLAB implementation of the original SeqSLAM algorithm. SeqSLAM performs place recognition by matching sequences of images. OpenSeqSLAM takes only images as input and outputs the location.

What is OpenSeqSLAM

OpenSeqSLAM is a visual navigation technique. The algorithm calculates the best candidate matching location within every local sequence of images; localization is then achieved by recognizing coherent sequences of these local best matches. Instead of finding one global candidate match, the matching algorithm is forced to find strong candidates within every local section of the route ever traversed, making sure that every local sequence contains a candidate match. This produces a lot of false positives, so a second step then matches spatially coherent sequences of the local best matches. Over time, each frame is compared to all previously learnt frames to form a difference matrix (Figure 8.12).

Figure 8.12: Difference matrix

A strong candidate match corresponds to an image comparison with a low difference score. Local contrast enhancement finds the best candidate match frames in every local sequence; a search then finds coherent sequences of local best matches (the dark diagonal line). Trajectories with high matching scores are identified as possible localization hypotheses. When running the algorithm, the images are not fed to it directly: two pre-processing steps are performed first. To provide a little extra invariance to illumination change, patch normalization and downsampling of the images are used (Figure 8.13). The SeqSLAM algorithm also needs an image comparison method, for which the sum of absolute differences (SAD) is used: the corresponding pixels of two downsampled images are compared, the absolute intensity differences are calculated, and the mean over the entire image gives one difference score. Calculating the SAD of the current image against all past images yields the difference matrix of Figure 8.12.

Figure 8.13: Downsampling and patch normalization
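The two pre-processing steps and the SAD comparison can be sketched in a few lines of Python (an illustration with assumed parameter values, not OpenSeqSLAM's actual code):

import numpy as np

def preprocess(img, size=(32, 64), patch=8):
    # Downsample by block averaging, then patch-normalize (zero mean, unit
    # variance per local patch), as in the SeqSLAM pre-processing.
    h, w = size
    bh, bw = img.shape[0] // h, img.shape[1] // w
    small = img[:h * bh, :w * bw].reshape(h, bh, w, bw).mean(axis=(1, 3))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = small[i:i + patch, j:j + patch]
            small[i:i + patch, j:j + patch] = (p - p.mean()) / (p.std() + 1e-6)
    return small

def sad(a, b):
    # Sum of absolute differences, averaged over the image: one matrix entry.
    return np.abs(a - b).mean()

imgs = [np.random.rand(240, 320) for _ in range(5)]   # stand-in image sequence
processed = [preprocess(im) for im in imgs]
D = np.array([[sad(a, b) for b in processed] for a in processed])

Each entry D[i, j] is the mean absolute difference between processed frames i and j; coherent dark diagonals in D correspond to matched routes.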

Using OpenSeqSLAM

OpenSeqSLAM is open source and can be downloaded from openslam.org. The algorithm is created for MATLAB, and only images are used as input data.

Using Ubuntu and MATLAB

• Download the source code using an SVN client (such as TortoiseSVN)

• Go to the norland directory and download the dataset using:

– cd datasets/norland
– ./getDataset.bash

• Run the demo using demo.m from within the matlab directory.

RGBDSLAM

RGB-DSLAM is open source. The system estimates the trajectory of a hand-held Kinect and generates a dense 3D model of the environment. It is possible to use three different feature descriptors (SIFT, SURF and ORB), which are used to match pairs of acquired images. RANSAC is used to estimate the 3D transformation between them, and finally the resulting pose graph is optimized.

What is RGBDSLAM

For RGBDSLAM a Kinect-style camera is used. The Kinect is a sensor that provides color images and depth maps at the same time; the visual features of the color images, combined with the 3D depth sensing, make it possible to create a dense 3D environment. First, visual features are extracted from the incoming color images and matched against the features from the previous images. For the computation of the pairwise relations, the system relies on OpenCV for the detection, description and matching of the various feature types, namely SURF, SIFT and ORB. By evaluating the depth images at the locations of the feature points, a set of point-wise 3D correspondences between any two frames is created. When the correspondences between two frames are known, the relative transformation can be estimated using RANSAC. The pairwise feature matching and the pairwise 6D transformation estimation using RANSAC are all performed in the frontend of the system. Finally, a globally consistent trajectory is created by optimizing the pose graph using the g2o framework, an easily extensible graph optimizer that can be applied to a wide range of problems. The output of the algorithm is a globally consistent 3D model of the perceived environment, represented as a colored point cloud; the Octomap library is used to generate a volumetric representation of the environment (see Figure 8.14).

Figure 8.14: 3D model/voxelgrid

To create this 3D model, the trajectory is used in combination with the original data to construct a representation of the environment. Projecting all point measurements into a global 3D coordinate system leads to a straightforward point-based representation. The authors claim that the system provides a camera pose with an average root mean square error of 9.7 cm and 3.95° in a typical office environment, and can handle high speed sequences with average velocities of up to 50 deg/s and 0.43 m/s. The average frame processing time is 0.35 s.
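To illustrate the frontend's pairwise feature matching, the following OpenCV sketch (our illustration, not the project's actual code; the file names are placeholders) detects and matches ORB features between two frames; the matched pixels, looked up in the depth images, would give the 3D-3D correspondences from which RANSAC estimates the 6D transformation.

import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # two consecutive frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)  # (placeholder paths)

orb = cv2.ORB_create()                                 # ORB detector/descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (ORB descriptors are binary).
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Each match pairs a pixel in frame 1 with a pixel in frame 2; evaluating the
# depth images at these pixels yields the 3D-3D correspondences used by RANSAC.
print(len(matches), "putative correspondences")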

73 Using RGBDSLAM RGBDSLAM is open Source and can be downloaded from openslam.org or you can use the RGBD- SLAM pages on the ROS wiki. The algorithm is available for UBUNTU for ROS Feurte but there is also another version available for ROS Hydro and Indigo and one version for not ROS users. Details on using the algorithm can be found on the ROS wiki pages (http://wiki.ros.org/rgbdslam).

Using Ubuntu and ROS Fuerte

• Download RGBDSLAM to your ROS workspace.

– svn co http://alufr-ros-pkg.googlecode.com/svn/trunk/rgbdslam_freiburg

• ROS needs to be aware of the folder into which you downloaded rgbdslam_freiburg! Make sure that the folder containing rgbdslam_freiburg is in your ROS_PACKAGE_PATH. If properly configured, “roscd rgbdslam” will take you to the rgbdslam package directory.

• Install rosdep first and then execute:

– rosdep update
– rosdep install rgbdslam_freiburg

• Compile RGBDSLAM:

– rosmake rgbdslam_freiburg

• If ROS dependencies are still not met for some reason, you might need to install the following packages:

– sudo apt-get install libglew1.5-dev libdevil-dev libsuitesparse-dev

LSD-SLAM

Large-Scale Direct Monocular SLAM (LSD-SLAM) is open source and runs in real time on a CPU. The algorithm locally tracks the motion of the camera and builds consistent large-scale maps of the environment using only camera images.

What is LSD-SLAM

The algorithm consists of three components: tracking, depth map estimation and map optimization. The tracking component continuously tracks new camera images: it estimates their rigid body pose with respect to the current keyframe, using the pose of the previous frame as initialization. Depth map estimation uses tracked frames to refine or replace the current keyframe. If the camera has moved too far, a new keyframe is initialized by projecting points from existing, close-by keyframes onto it; so once a new frame is chosen to become a keyframe, its depth map is initialized by projecting points from the previous keyframe onto it (depth map creation). After depth map creation, the map is further refined using tracked images that do not become keyframes. Once a keyframe is replaced as tracking reference, it is incorporated into the global map by the map optimization component. The global map is represented as a pose graph consisting of keyframes as vertices with 3D similarity transforms as edges, incorporating the changing scale of the environment and allowing accumulated scale drift to be detected and corrected. To detect loop closures and scale drift, a similarity transform to close-by existing keyframes is estimated using scale-aware, direct image alignment. An example of a map represented as a point cloud is displayed in figure 8.15.

Figure 8.15: Map displayed as a pointcloud
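
As a purely illustrative data-structure sketch (not LSD-SLAM's own representation), the pose graph just described can be thought of as keyframe vertices connected by Sim(3) edges; the extra scale factor in the edges is what allows the graph to absorb and correct scale drift:

    import numpy as np

    class Sim3:
        # Similarity transform: x -> s * (R @ x) + t (scale, rotation, translation)
        def __init__(self, s=1.0, R=None, t=None):
            self.s = s
            self.R = np.eye(3) if R is None else R
            self.t = np.zeros(3) if t is None else t

        def compose(self, other):
            # Apply 'other' first, then 'self'
            return Sim3(self.s * other.s,
                        self.R @ other.R,
                        self.s * (self.R @ other.t) + self.t)

    class PoseGraph:
        # Keyframes as vertices, Sim(3) constraints as edges (sketch only)
        def __init__(self):
            self.keyframes = {}  # keyframe id -> Sim3 pose
            self.edges = []      # (id_i, id_j, Sim3 measurement between them)

        def add_keyframe(self, kf_id, pose):
            self.keyframes[kf_id] = pose

        def add_edge(self, i, j, measurement):
            # A loop closure is simply an edge between non-consecutive keyframes;
            # a real system would re-optimize the whole graph after adding it
            self.edges.append((i, j, measurement))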

Using LSD-SLAM

LSD-SLAM is open source and can be downloaded from GitHub (https://github.com/tum-vision/lsd_slam). The algorithm is available for Ubuntu 12.04 in combination with ROS Fuerte, or Ubuntu 14.04 with ROS Indigo. We tested the algorithm on Ubuntu 14.04 and ROS Indigo. A number of datasets are available on https://vision.in.tum.de/research/vslam/lsdslam?redirect=1 as ROS bag files. Details on how to parametrize and use the algorithm can also be found on the GitHub page.

Installation using Ubuntu 14.04 and ROS Indigo

• First create a rosbuild workspace:

– sudo apt-get install python-rosinstall
– mkdir ~/rosbuild_ws
– cd ~/rosbuild_ws
– rosws init . /opt/ros/indigo
– mkdir package_dir
– rosws set ~/rosbuild_ws/package_dir -t .
– echo "source ~/rosbuild_ws/setup.bash" >> ~/.bashrc
– bash
– cd package_dir

• If not already installed, install the following system dependencies:

– sudo apt-get install ros-indigo-libg2o ros-indigo-cv-bridge liblapack-dev libblas-dev freeglut3-dev libqglviewer-dev libsuitesparse-dev libx11-dev

• Clone the repository from GitHub into your ROS package path:

– git clone https://github.com/tum-vision/lsd_slam.git lsd_slam

• Compile the two packages:

– rosmake lsd_slam

• There is also the possibility to use OpenFabMap for large loop-closure detection, but we currently did not get it operational. Once it works, the details will be added to this inventory.

Bibliography

[1] L. Mainetti, L. Patrono, and I. Sergi. “A survey on indoor positioning systems”. In: Software, Telecommunications and Computer Networks (SoftCOM), 2014 22nd International Conference on. 2014, pp. 111–120. doi: 10.1109/SOFTCOM.2014.7039067.
[2] Neil E Klepeis et al. “The National Human Activity Pattern Survey (NHAPS): a resource for assessing exposure to environmental pollutants”. In: Journal of exposure analysis and environmental epidemiology 11.3 (2001), pp. 231–252.
[3] P Connolly and D Bonte. Indoor Location in Retail: Where is the Money? Tech. rep. ABI Research Report, 2016.
[4] G. Sterling and D. Top. Mapping the Indoor Marketing Opportunity. Tech. rep. Opus Research, 2014.
[5] R Mautz. “Indoor Positioning Technologies”. PhD thesis. ETH Zurich, 2012.
[6] Stephan P. et al. “Evaluation of Indoor Positioning Technologies under Industrial Application Conditions in the SmartFactoryKL Based on EN ISO 9283”. English. In: Proceedings of the 13th IFAC Symposium on Information Control Problems in Manufacturing. Vol. 42. Moscow, Russia, June 3-5 2009, pp. 870–875. doi: 10.3182/20090603-3-RU-2001.0294.
[7] Carlos Martínez de la Osa et al. “Positioning Evaluation and Ground Truth Definition for Real Life Use Cases”. In: Indoor Positioning and Indoor Navigation (IPIN), 2016 International Conference on. IEEE, 2016, pp. 1–7. (Visited on 05/16/2017).
[8] Stephan Adler et al. “A Survey of Experimental Evaluation in Indoor Localization Research”. In: Indoor Positioning and Indoor Navigation (IPIN), 2015 International Conference on. IEEE, 2015, pp. 1–10. (Visited on 05/16/2017).
[9] Lester Madden. Professional augmented reality browsers for smartphones: programming for junaio, layar and wikitude. John Wiley & Sons, 2011.
[10] Lowe's Vision. English. Lowe's Companies, Inc. 2016. url: https://play.google.com/store/apps/details?id=com.lowes.vision.
[11] Demetrios Zeinalipour-Yazti et al. “Internet-based Indoor Navigation Services”. In: ().
[12] Santiago Diez Martinez. “CampusGuiden: Indoor Positioning, Data Analysis and Novel Insights”. In: (2013).
[13] Tango. English. Google. 2016. url: https://get.google.com/tango/.
[14] C. Brown. Annual topline view cpg coupon facts for year-end 2014. Tech. rep. NCH Marketing Services, Inc, 2015.

[15] G. Biczók et al. “Navigating MazeMap: Indoor human mobility, spatio-logical ties and future potential”. In: Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on. 2014, pp. 266–271. doi: 10.1109/PerComW.2014.6815215.
[16] Holly A. Yanco. Wheelesley: A robotic wheelchair system: Indoor navigation and user interface. Ed. by Vibhu O. Mittal et al. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998, pp. 256–268. isbn: 978-3-540-68678-1. doi: 10.1007/BFb0055983. url: http://dx.doi.org/10.1007/BFb0055983.
[17] Rudolph Triebel et al. SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports. Ed. by S. David Wettergreen and D. Timothy Barfoot. Cham: Springer International Publishing, 2016, pp. 607–622. isbn: 978-3-319-27702-8. doi: 10.1007/978-3-319-27702-8_40. url: http://dx.doi.org/10.1007/978-3-319-27702-8_40.
[18] Wolfram Burgard et al. “Experiences with an interactive museum tour-guide robot”. In: Artificial Intelligence 114.1 (1999), pp. 3–55. issn: 0004-3702. doi: http://dx.doi.org/10.1016/S0004-3702(99)00070-3. url: http://www.sciencedirect.com/science/article/pii/S0004370299000703.
[19] Automatisch geleide voertuigen en AGV systemen. Automatisch geleide voertuigen (AGVs). Dutch. Egemin Automation. 2016. url: http://www.egemin-automation.be/nl/automation/logistieke-automatisering_ha-oplossingen/agv-systemen.
[20] Maged N. Kamel Boulos and Geoff Berry. “Real-time locating systems (RTLS) in healthcare: a condensed primer”. In: International Journal of Health Geographics 11.1 (2012), pp. 1–8. issn: 1476-072X. doi: 10.1186/1476-072X-11-25. url: http://dx.doi.org/10.1186/1476-072X-11-25.
[21] OpenRTLS. UWB facts and figures. English. OpenRTLS. 2016. url: https://openrtls.com/.
[22] Y. Nakazawa et al. “Indoor positioning using a high-speed, fish-eye lens-equipped camera in Visible Light Communication”. In: Indoor Positioning and Indoor Navigation (IPIN), 2013 International Conference on. 2013, pp. 1–8. doi: 10.1109/IPIN.2013.6817855.
[23] Roland Siegwart. Introduction to Autonomous Mobile Robots. Vol. 53. 9. 2011, p. 472. isbn: 9788578110796. doi: 10.1017/CBO9781107415324.004. arXiv: arXiv:1011.1669v3.
[24] Hui Liu et al. “Survey of wireless indoor positioning techniques and systems”. In: IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 37.6 (2007), pp. 1067–1080.
[25] S Panzieri et al. “A low cost vision based localization system for mobile robots”. In: target 4 (2001), p. 5.
[26] Rainer Mautz, Washington Y Ochieng, et al. “A robust indoor positioning and auto-localisation algorithm”. In: Positioning 1.11 (2007).
[27] M. Kanaan and K. Pahlavan. “A comparison of wireless geolocation algorithms in the indoor environment”. In: Wireless Communications and Networking Conference, 2004. WCNC. 2004 IEEE. Vol. 1. 2004, 177–182 Vol.1. doi: 10.1109/WCNC.2004.1311539.
[28] C. Drane, M. Macnaughtan, and C. Scott. “Positioning GSM telephones”. In: IEEE Communications Magazine 36.4 (1998), pp. 46–54, 59. issn: 0163-6804. doi: 10.1109/35.667413.

[29] D. J. Torrieri. “Statistical Theory of Passive Location Systems”. In: IEEE Transactions on Aerospace and Electronic Systems AES-20.2 (1984), pp. 183–198. issn: 0018-9251. doi: 10.1109/TAES.1984.310439.
[30] Corinna Cortes and Vladimir Vapnik. “Support-vector networks”. In: Machine Learning 20.3 (1995), pp. 273–297. issn: 1573-0565. doi: 10.1007/BF00994018. url: http://dx.doi.org/10.1007/BF00994018.
[31] Wan Mohd Bejuri et al. “Ubiquitous positioning: A taxonomy for location determination on mobile navigation system”. In: arXiv preprint arXiv:1103.5035 (2011).
[32] Rainer Mautz. “Overview of current indoor positioning systems”. In: Geodezija ir Kartografija 35.1 (2009), pp. 18–22. doi: 10.3846/1392-1541.2009.35.18-22. url: http://www.tandfonline.com/doi/abs/10.3846/1392-1541.2009.35.18-22.
[33] E. A. Wan and A. S. Paul. “A tag-free solution to unobtrusive indoor tracking using wall-mounted ultrasonic transducers”. In: Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on. 2010, pp. 1–10. doi: 10.1109/IPIN.2010.5648178.
[34] Sonitor RTLS technologies. English. Sonitor RTLS technologies. 2015. url: http://www.sonitor.com/.
[35] Nissanka Bodhi Priyantha. “The cricket indoor location system”. PhD thesis. Massachusetts Institute of Technology, 2005.
[36] Marvelmind Robotics. English. Marvelmind Robotics. 2016. url: http://www.marvelmind.com/.
[37] Skyhook Wireless. English. Skyhook Wireless. 2017. url: http://www.skyhookwireless.com/.
[38] Wi-Fi Design, WLAN planning and survey tools, Wi-Fi spectrum tools. English. Ekahau. 2015. url: http://www.ekahau.com/.
[39] Indoor location technologies compared: Beacons. 2015. url: http://lighthouse.io/indoor-location-technologies-compared/beacons/.
[40] SL Ting et al. “The study on using passive RFID tags for indoor positioning”. In: International journal of engineering business management 3.1 (2011), pp. 9–15.
[41] Wanderers management based on Active RFID technology. English. Kimaldi. url: http://www.kimaldi.com/kimaldi_eng/solutions/wanderer_management/wanderers_management_based_on_active_rfid_technology.
[42] ZONITH. English. Zonith. 2016. url: http://zonith.com/.
[43] Estimote Beacons - Real world context for your apps. English. Estimote. 2016. url: http://estimote.com/.
[44] What is Indoor Positioning Systems. English. Senion. 2017. url: https://senion.com/indoor-positioning-system/.
[45] Quuppa - Do more with location. English. Quuppa. 2017. url: http://quuppa.com/.
[46] S. Gezici et al. “Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks”. In: IEEE Signal Processing Magazine 22.4 (2005), pp. 70–84. issn: 1053-5888. doi: 10.1109/MSP.2005.1458289.

[47] M. Heim et al. “Frequency modulated continuous wave ultra-wideband radar-based monitoring system for extending independent living”. In: 2014 IEEE Healthcare Innovation Conference (HIC). 2014, pp. 211–214. doi: 10.1109/HIC.2014.7038912.
[48] AngleID. English. Ubisense. 2017. url: https://ubisense.net/en/products/angleid.
[49] RFID Solutions for Warehouses and DCs. English. ZIH Corp. 2016. url: https://www.zebra.com/us/en/solutions/warehouse-management-solutions/rfid-solutions-warehouses-dcs.html.
[50] Pozyx - centimeter positioning for arduino. English. Pozyx Labs. 2017. url: https://www.pozyx.io/.
[51] Hagisonic. English. Hagisonic. 2010. url: http://www.hagisonic.com/.
[52] Ambiplex passive infrared localization systems. English. Ambiplex GmbH & Co. KG. 2009. url: http://www.ambiplex.com/index.php?id=4&L=1.
[53] Trong-Hop Do and Myungsik Yoo. “An in-Depth Survey of Visible Light Communication Based Positioning Systems”. In: Sensors 16.5 (2016), p. 678. issn: 1424-8220. doi: 10.3390/s16050678. url: http://www.mdpi.com/1424-8220/16/5/678.
[54] Xiaohan Liu, Hideo Makino, and Yoshinobu Maeda. “Basic study on indoor location estimation using visible light communication platform”. In: 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE. 2008, pp. 2377–2380.
[55] J. M. Kahn and J. R. Barry. “Wireless infrared communications”. In: Proceedings of the IEEE 85.2 (1997), pp. 265–298. issn: 0018-9219. doi: 10.1109/5.554222.
[56] Philips Lighting. LED Based Indoor Positioning System. url: http://www.lighting.philips.com/main/systems/themes/led-based-indoor-positioning (visited on 10/10/2017).
[57] Acuity. Indoor Positioning Systems. url: http://www.acuitybrands.com/solutions/internet-of-things/indoor-positioning-systems (visited on 10/10/2017).
[58] Maury Wright. “Acuity acquires indoor-location-services specialist ByteLight”. In: LEDs Magazine (2015). url: http://www.ledsmagazine.com/articles/2015/04/acuity-acquires-indoor-location-service-specialist-bytelight.html.
[59] Q-Track - Pick a Tag. English. Q-Track. 2017. url: http://q-track.com/pick-a-tag/.
[60] Supreeth Sudhakaran. “Streetfight Moves Indoors To Map the Last Three Feet”. In: Geospatial World (2014).
[61] Polhemus Electromagnetics. English. Polhemus. 2017. url: http://polhemus.com/applications/electromagnetics/.
[62] Dinesh Manandhar and Hideyuki Torimoto. Opening Up Indoors: Japan's Indoor Messaging System, IMES. English. GNSS Technologies, Inc. url: http://gpsworld.com/wirelessindoor-positioningopening-up-indoors-11603/.
[63] Oliver Woodman and Robert Harle. “Pedestrian localisation for indoor environments”. In: Proceedings of the 10th international conference on Ubiquitous computing. ACM. 2008, pp. 114–123.

[64] Jamie Carter et al. Lidar 101: An Introduction to Lidar Technology, Data, and Applications. Tech. rep. (NOAA) Coastal Services Center, Nov. 2012, p. 76. doi: 10.5194/isprsarchives-XL-7-W3-1257-2015. url: https://coast.noaa.gov/data/digitalcoast/pdf/lidar-101.pdf.
[65] Juergen Brugger, Danick Briand, and F. Tappero. “Proceedings of the Eurosensors XXIII conference: Low-cost optical-based indoor tracking device for detection and mitigation of NLOS effects”. In: Procedia Chemistry 1.1 (2009), pp. 497–500. issn: 1876-6196. doi: http://dx.doi.org/10.1016/j.proche.2009.07.124. url: http://www.sciencedirect.com/science/article/pii/S1876619609001259.
[66] F. Boochs et al. “Increasing the accuracy of untaught robot positions by means of a multi-camera system”. In: Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on. 2010, pp. 1–9. doi: 10.1109/IPIN.2010.5646261.
[67] Shih Hau Fang and Tsung Nan Lin. “Accurate WLAN indoor localization based on RSS fluctuations modeling”. In: WISP 2009 - 6th IEEE International Symposium on Intelligent Signal Processing - Proceedings (2009), pp. 27–30. doi: 10.1109/WISP.2009.5286581.
[68] Jorge Juan Robles. “Indoor localization based on wireless sensor networks”. In: AEU - International Journal of Electronics and Communications 68.7 (2014), pp. 578–580. issn: 1434-8411. doi: http://dx.doi.org/10.1016/j.aeue.2014.04.004. url: http://www.sciencedirect.com/science/article/pii/S1434841114001009.
[69] A. Ward, A. Jones, and A. Hopper. “A new location technique for the active office”. In: IEEE Personal Communications 4.5 (1997), pp. 42–47. issn: 1070-9916. doi: 10.1109/98.626982.
[70] C. Depenthal. “Path tracking with IGPS”. In: Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on. 2010, pp. 1–6. doi: 10.1109/IPIN.2010.5647501.
[71] Motoko Oe, Tomokazu Sato, and Naokazu Yokoya. Estimating Camera Position and Posture by Using Feature Landmark Database. Ed. by Heikki Kalviainen, Jussi Parkkinen, and Arto Kaarna. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 171–181. isbn: 978-3-540-31566-7. doi: 10.1007/11499145_19. url: http://dx.doi.org/10.1007/11499145_19.
[72] Tom Van Haute et al. “Performance analysis of multiple Indoor Positioning Systems in a healthcare environment”. In: International Journal of Health Geographics 15.1 (2016), pp. 1–15. issn: 1476-072X. doi: 10.1186/s12942-016-0034-z. url: http://dx.doi.org/10.1186/s12942-016-0034-z.
[73] Margaret E Jefferies and Wai-Kiang Yeap. Robotics and cognitive approaches to spatial mapping. Springer, 2008.
[74] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT Press, 2005.
[75] Giorgio Grisetti et al. “Fast and accurate SLAM with Rao–Blackwellized particle filters”. In: Robotics and Autonomous Systems 55.1 (2007), pp. 30–38.
[76] Sebastian Thrun et al. “FastSLAM: An efficient solution to the simultaneous localization and mapping problem with unknown data association”. In: Journal of Machine Learning Research 4.3 (2004), pp. 380–407.
[77] DP-SLAM. 2017. url: https://users.cs.duke.edu/~parr/dpslam.
[78] Austin Eliazar and Ronald Parr. “DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks”. In: IJCAI. Vol. 3. 2003, pp. 1135–1142.

[79] How to run Linux commands on Windows using Cygwin. url: https://www.youtube.com/watch?v=uTeH7vm8JZU.
[80] Sterling Peet, Robert Grosse, and Jack Morgan. Project 6 SLAM Proposal. Report. url: http://kuriosly.com/wp-content/uploads/2013/04/SLAMfinal.pdf.
[81] Robert Ouellette and Kotaro Hirasawa. “A comparison of SLAM implementations for indoor mobile robots”. In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on. IEEE. 2007, pp. 1479–1484.
[82] Giorgio Grisetti, Cyrill Stachniss, and Wolfram Burgard. “Improved techniques for grid mapping with Rao-Blackwellized particle filters”. In: IEEE Transactions on Robotics 23.1 (2007), pp. 34–46.
[83] Dirk Hahnel et al. “An efficient FastSLAM algorithm for generating maps of large-scale cyclic environments from raw laser range measurements”. In: Intelligent Robots and Systems, 2003 (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on. Vol. 1. IEEE. 2003, pp. 206–211.
[84] Subhrajit Bhattacharya. Fast SLAM using a geometric approach and some unconventional methods. Report. url: https://www.math.upenn.edu/~subhrabh/file_cache/MEAM620_FinalReport_Subhrajit_34229.pdf.
[85] Jerry Moravec. RoboMAP Studio. 2011. url: http://robomap.4fan.cz/.
[86] Bruno Steux and Oussama El Hamzaoui. “tinySLAM: A SLAM algorithm in less than 200 lines C-language program”. In: Control Automation Robotics & Vision (ICARCV), 2010 11th International Conference on. IEEE. 2010, pp. 1975–1979.
[87] Chanki Kim, Rathinasamy Sakthivel, and Wan Kyun Chung. “Unscented FastSLAM: A robust and efficient solution to the SLAM problem”. In: IEEE Transactions on Robotics 24.4 (2008), pp. 808–820.
