Line of Sight Vector Estimation using UWB for Augmented Reality Based Indoor Location Monitoring System

  • Chun, Sebum (Satellites Navigation Team, Korea Aerospace Research Institute) ;
  • Seo, Jae-Hee (Satellites Navigation Team, Korea Aerospace Research Institute) ;
  • Lee, Sangwoo (Satellites Navigation Team, Korea Aerospace Research Institute) ;
  • Heo, Moon-Beom (Satellites Navigation Team, Korea Aerospace Research Institute)
  • Received : 2016.08.01
  • Accepted : 2016.08.16
  • Published : 2016.09.15

Abstract

A variety of indoor positioning methods have been developed to ensure the safety of emergency responders, such as firefighters, who work in dangerous situations. However, since most systems display the locations of rescue workers on two-dimensional (2D) maps, it is difficult for a commander outside the building to intuitively recognize where rescuers are inside. An augmented reality (AR)-based indoor location monitoring system can overlay the locations of rescuers inside the building on the commander's view of the building, supporting intuitive recognition of their positions. AR-based indoor positioning monitoring requires a technique for estimating the observer's line of sight vector. In the present study, a line of sight vector estimation technique using the ultra-wideband (UWB) transceivers installed indoors for location tracking is presented.

1. INTRODUCTION

Indoor rescue activities during emergencies such as fires are always dangerous, and a considerable number of fatal accidents occur during rescue operations. Analysis of accident cases showed that up to 40% of fatalities could have been avoided if simple status information had been provided (Choi et al. 2012).

There have been various studies on indoor location tracking (Dardari et al. 2015). However, environments in which rescuer locations must be tracked have different requirements from those of commercial location tracking services (Nilsson et al. 2014). The rescuer indoor tracking system (RITS) is an indoor localization system that does not rely on pre-installed infrastructure (Chun et al. 2014). The RITS localizes rescuers indoors by integrating ultra-wideband (UWB)-based indoor radio navigation with pedestrian dead reckoning (PDR). Figs. 1 and 2 show the overview and wearing examples of the RITS.

Fig. 1. Overview of the RITS.

Fig. 2. Example of wearing (left: communication equipment, middle: PDR sensor, right: example of overall wearing).

Regardless of the indoor localization method, locations are generally presented on 2D or three-dimensional (3D) maps. Map-based display has disadvantages: it requires digitized maps, and matching a location marked on a map to the actual building is not intuitive. Moreover, disaster sites are likely to be unfamiliar to rescuers, and damage can grow when chaotic conditions lead to wrong decisions.

In the present study, augmented reality (AR) was applied to indoor position monitoring to help commanders understand the deployment of rescuers at disaster sites. In contrast to virtual reality (VR), which creates objects that do not exist in reality, AR displays virtual objects to the observer in addition to the real objects that do. Observers can intuitively understand the relationship between an added virtual object and a real object, enabling quick decisions. However, implementing AR requires the relative position of the observer and the target object as well as an estimate of the observer's line of sight vector. Although various studies have addressed line of sight vector estimation, most results cannot be applied at disaster sites (Persa 2006).

In the present study, a technique for estimating the line of sight vector, which is essential for implementing AR, was studied for application to monitoring indoor position tracking information. The line of sight vector is estimated using a UWB array antenna mounted on the observer's head mounted display (HMD) and the UWB transceivers deployed in advance for indoor location tracking.

This paper is organized as follows. Section 2 discusses technologies for implementing AR, and Section 3 describes the AR-based indoor positioning monitoring system. Sections 4 and 5 present the UWB-based line of sight vector estimation technique and its performance evaluation results.

2. IMPLEMENTATION OF AR

2.1 Implementation Methods of AR

Two methods can be used to implement AR, depending on how target objects are recognized: an image-based method and a sensor-based method.

2.1.1 Image-based method

The image-based method implements AR using information obtained by processing images acquired from cameras, and it can be divided into marker and markerless modes. The marker mode attaches an easily recognizable marker to the target object and recognizes the relative distance and coordinates between the observer and the target object from it. Its image recognition success rate is high, so AR can be implemented stably, and an AR scene can be set up by an operation as simple as attaching a marker. Thanks to this low cost and stability, the marker mode has been widely applied in areas such as advertising and educational content. However, since markers must be attached, it is inconvenient for providing AR services over large spaces. The distance and angle between marker and camera are also limited, because the marker must be recognized for AR to be implemented. Fig. 3 shows a marker-based AR implementation (Multidots 2015).

Fig. 3. Examples of marker (left) used in the marker mode and AR implementation (right).

The markerless mode, on the other hand, implements AR without markers by recognizing the target object directly through image processing. It requires a model of the target object's geometric configuration, from which the relative location and posture between the target object and the observer can be identified. The markerless mode has the advantage of implementing AR without attaching markers to target objects, but its reliability is lower than the marker mode's because it must recognize arbitrary target objects. Although the markerless mode has attracted much attention in recent years thanks to improved image processing, it remains at the research stage due to the difficulty of real-time processing and its high failure rate. Fig. 4 shows an example of the markerless mode (Murphy-Chutorian & Trivedi 2010, Lim et al. 2014).

Fig. 4. Example of AR implementation using markerless mode (Bonsor 2016, Jlapoutre 2011).

AR implementation using image-based technology is not easy in real-time, dynamic environments for either the marker or the markerless mode, since the camera's field of view is narrow and the time delay for image processing is long. The advancement of artificial intelligence in recent years has opened new possibilities for image-based AR.

2.1.2 Sensor-based method

The sensor-based method implements AR using the observer's location and line of sight vector together with the target object's location, without using image information; it is also called location-based or position-based AR. It can implement AR as long as the target object's location is known in advance, even when no marker is attached to the target object and no configuration model is available. As a result, it offers a higher bandwidth than the image-based method, and the observation range is not limited. Furthermore, AR environments can be constructed over large spaces without difficulty as long as a database of target object locations is maintained. However, precise AR environments cannot yet be implemented because of the performance limits of the sensors (for estimating location and line of sight vector) that can be mounted in mobile devices. Fig. 5 shows an example of sensor-based AR implementation (Multidots 2015).

Fig. 5. Example of sensor-based AR implementation.

2.2 Estimation on Line of Sight Vector

The most important information in AR implementation is the line of sight vector. In particular, since AR must maintain relative accuracy with respect to objects that actually exist, the accuracy required by AR differs from that of VR, which can be implemented merely by estimating changes in the line of sight. Line of sight vector estimation methods can be classified as follows:

2.2.1 Mechanical method

The mechanical method estimates the line of sight vector by mounting sensors that measure rotation on the joints of an apparatus physically connected to the HMD and measuring its movement. This method obtains the line of sight vector with high accuracy, but it severely limits the range of movement and makes installation difficult, so it is rarely used in practice. Fig. 6 shows an example of the mechanical method.

Fig. 6. Mechanical estimation method of line of sight vector.

2.2.2 Electromagnetic method

The electromagnetic method uses the current induced in a coil as it moves inside a constant magnetic field. It is characterized by no error divergence over time. However, it has low accuracy, requires accurate mapping of the magnetic field in the target space, and is highly sensitive to external noise, so it is rarely used.

2.2.3 Inertial sensor method

An inertial sensor measures acceleration and angular velocity without contact with external objects. Changes in posture and location can be estimated by integrating the measured values. However, errors accumulate over time because of the error and noise components contained in the inertial measurements.

To limit this error accumulation, a high-performance inertial sensor should be used, but within the limited space available the price rises exponentially with performance, which restricts the applicable sensor specifications. Consequently, most inertial sensor-based line of sight estimation methods fuse a variety of sensors to overcome this performance limitation.

2.2.4 Optical sensor method

The optical sensor method is used in aircraft HMDs: the line of sight vector is estimated from signals exchanged between fixed receivers installed inside the cockpit and transmitters attached to the HMD. In recent years, methods using cameras rather than infrared sensors have been developed as image processing technology has advanced. However, such methods are applicable only in confined indoor spaces such as cockpits and respond slowly. They are also sensitive to external lighting conditions, and infrared sensors are affected by sunlight.

More recently, technologies that estimate the line of sight vector from camera images have been developed with the advancement of image recognition. However, their application to AR is still at an early stage: most methods recognize targets by installing markers on them, and even when a markerless mode is used to recognize general objects, stability degrades and a large amount of computation is needed.

2.2.5 Hybrid method

The marker and markerless modes are limited in practice by the burden of image processing, low reliability, and low bandwidth. The inertial sensor method, in turn, cannot support precise AR because of error divergence over time and low precision. The hybrid method integrates image and inertial sensors to overcome the drawbacks of both: when image recognition succeeds, AR is implemented from the image; when there is motion or image recognition fails, information from the inertial sensors is used instead. Since the divergence of an inertial sensor is negligible over short intervals, reliable AR can be implemented with relatively low image processing capability.

3. AR-BASED INDOOR LOCATION MONITORING SYSTEM

3.1 Rescuer Indoor Tracking System

The RITS is a technology for tracking the locations of rescuers who enter disaster sites. It overcomes the shortcomings of indoor radio navigation, which requires pre-installed infrastructure, and of dead reckoning, whose errors diverge over time. The RITS combines pedestrian dead reckoning with indoor radio navigation: rescuer locations are obtained primarily from PDR, and fixed transmitters and receivers are installed at appropriate points to suppress error divergence. Fig. 7 shows the deployment process of the RITS.

Fig. 7. Deployment process of the RITS.

Rescuers entering the building each carry a certain number of fixed transmitters (①). Two fixed transmitters are installed at the main entrance at a preset interval and serve as the reference axis for location tracking (②). With only two fixed transmitters, the rescuer's location solution is bimodal, but the true location can be identified after the rescuer moves a certain distance, based on the PDR results. After that, an additional fixed transmitter is installed at the corner of the main entry path (③). Each newly installed fixed transmitter determines its own location via range-based simultaneous localization and mapping, and the determined location is shared with all rescuers and then used for location tracking. The specifications of the RITS developed by our research team are summarized in Table 1.

Table 1. Main specifications of the RITS.

Specification             Value
Position accuracy         Horizontal: < 5 m; Vertical: identification of inter-story
Service area              50 × 50 m (taking three fixed transmitters per rescuer)
Position update period    < 1 s
Indoor radio navigation   UWB (IEEE 802.15.4a)
Communication             ISM band (400 MHz bandwidth), ad-hoc network using fixed transmitters

3.2 AR-based Indoor Location Monitoring

Even if the location of a rescuer indoors is successfully tracked by deploying the RITS and the location information is delivered to external observers, a 2D map is generally used to display it. AR instead projects the location of the target rescuer onto the building itself when an observer, such as a site commander who must monitor rescuer locations, views the target building from outside. In this way, the rescuer's location can be recognized intuitively, and location information can be provided without digitized maps. Fig. 8 shows an implementation example of the AR-based indoor positioning monitoring system.

Fig. 8. Example of implementation of the AR-based indoor positioning monitoring system.

A configuration of the indoor positioning monitoring system using AR is shown in Fig. 9. A rescuer entering the building is tracked in real time using the RITS (①). The location of the external observer is simultaneously calculated using the deployed fixed transmitters (②). The relative position vector between the indoor rescuer and the external observer is estimated from these calculated locations (③). Finally, the observer's current line of sight vector is estimated, and AR is implemented using it (④).

Fig. 9. AR-based indoor positioning monitoring.

The one piece of information the existing RITS cannot provide is the observer's line of sight vector. In the present study, a technique for estimating the line of sight vector was developed using the fixed transmitters already deployed for the RITS.

4. ESTIMATION OF LINE OF SIGHT VECTOR BASED ON UWB

A relative position vector between the target object and the observer, together with the observer's line of sight vector, is needed to implement AR. The RITS provides the relative vector but not the line of sight vector. In the present study, the line of sight vector was estimated using the UWB-based fixed transmitters employed in the RITS, with an inertial sensor used to cope with fast changes in the line of sight vector and with tilting. Since the small-angle assumption cannot be made because of the low UWB update rate, the observation model is nonlinear. Furthermore, since no initial yaw information is assumed, the yaw distribution is uniform over all directions. A particle filter was therefore applied to handle the nonlinear model and the non-Gaussian error distribution.

4.1 System Configuration

The observer's line of sight vector for AR implementation is estimated using the UWB array antenna mounted on the HMD together with inertial sensors. Fig. 10 shows the HMD configuration worn by the observer.

Fig. 10. HMD configuration to implement AR.

Fig. 11 shows the array of UWB antennas mounted on the HMD together with the antennas mounted on the fixed transmitters. The AR implementation assumes that, at initial operation, at least two of the fixed transmitters installed for the RITS are observable near the entrance. The black-filled circle in the UWB antenna array marks the arbitrarily designated reference antenna.

Fig. 11. UWB antenna array.

4.2 System Model

4.2.1 State vector

Increasing the dimension of the state vector in a particle filter increases the minimum number of particles required for stable convergence (Djurić & Bugallo 2013). In particular, since the line of sight vector must be estimated in real time on an embedded processor, the number of particles must be kept small to reduce computation, which in turn requires reducing the dimension of the state vector. In the present study, the location of the reference antenna and the known geometric layout of the antenna array were used to reduce the state vector.

First, one antenna in the array is designated as the reference antenna, whose location can be determined using the fixed antennas. If two fixed antennas are observed, the location distribution is bimodal; if three or more are observed, it is unimodal. Fig. 12 shows the reference antenna location estimated when there are two fixed antennas. Because of ranging error, each candidate region appears as a band of finite thickness, and the shaded area is the estimated location region of the reference antenna given two fixed antennas. The cross-section along line A-B in Fig. 12 shows two peaks (Group A and Group B) where the particle presence probability is high. A mean particle location can be computed for each group, so the reference antenna location can be reduced from a 3D vector to a one-dimensional group index, defined as a simple number such as 1, 2, or 3.

Fig. 12. Bimodal distribution of the estimated reference antenna locations.
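The bimodal candidate pair of Fig. 12 can be illustrated with a short numerical sketch. The snippet below, a minimal example assuming a 2D horizontal plane and the hypothetical helper name two_anchor_candidates (not the authors' implementation), intersects the two range circles around the fixed antennas to produce the Group A / Group B locations.

```python
import numpy as np

def two_anchor_candidates(p1, p2, r1, r2):
    """Intersect two range circles (2D) centered at fixed antennas p1, p2.

    Returns the two candidate reference-antenna locations that produce
    the bimodal distribution described in the text, or None if the
    circles do not intersect (e.g. inconsistent range measurements).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.linalg.norm(p2 - p1)                   # anchor baseline length
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None                               # no intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)          # foot point along baseline
    h = np.sqrt(max(r1**2 - a**2, 0.0))           # offset from baseline
    mid = p1 + a * (p2 - p1) / d
    perp = np.array([-(p2 - p1)[1], (p2 - p1)[0]]) / d
    return mid + h * perp, mid - h * perp         # Group A / Group B

# Example: anchors 4 m apart, measured ranges 5 m and 4 m
cand_a, cand_b = two_anchor_candidates([0, 0], [4, 0], 5.0, 4.0)
```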

The geometric layout of the antenna array is known in advance, and its tilting can be obtained from the inertial sensor. Thus, the locations of the antennas other than the reference antenna can be expressed as a function of yaw alone, and the reduced state vector is summarized in Table 2.

Table 2. Reduced state vector.

State vector                         Dimension    Note
Location of the reference antenna    1            Group index
Yaw                                  1            –

4.2.2 System model

The UWB array antenna provides the line of sight vector with long-term precision, whereas the inertial sensor provides it with short-term precision. The inertial sensor is therefore used for time propagation, and also for tilt compensation of the array antenna. Eq. (1) is the system model used to estimate the line of sight vector.

\(\left[\begin{matrix}{index}_{k+1}\\\psi_{k+1}\\\end{matrix}\right]=\left[\begin{matrix}{index}_k\\\psi_k+d\psi_k\\\end{matrix}\right]+w_k\)                                                                                              (1)

where,
\(index\) : group index (1,2,3…)
\(\psi\) : yaw angle
\(d\psi \) : incremental yaw angle from time \(k\) to \(k+1\)
\(w\) : process noise
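As an illustration of Eq. (1), the sketch below propagates a particle set one step forward: the group index is carried over unchanged, and each particle's yaw is advanced by the inertial increment dψ plus process noise. The function name, array layout, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def propagate(particles, d_psi, yaw_noise_std=np.deg2rad(0.5)):
    """Time propagation of Eq. (1).

    particles: array of shape (N, 2) with columns [group_index, yaw].
    d_psi: incremental yaw from the inertial sensor over one step.
    The group index is unchanged; yaw advances by d_psi plus noise.
    """
    out = particles.copy()
    out[:, 1] += d_psi + np.random.normal(0.0, yaw_noise_std, len(out))
    out[:, 1] = np.mod(out[:, 1] + np.pi, 2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return out
```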

4.2.3 Observation model

The measured distances between the array antenna components and the fixed antennas are used to estimate the line of sight vector, with the tilting information provided by the inertial sensor used as a known input. Eq. (2) is the observation model (Blakelock 1991, Titterton & Weston 2009).

\(\left[ \begin{matrix}R_0^0 \\ R_0^1 \\ R_1^0 \\ R_1^1 \\R_2^0 \\R_2^1 \\⋮ \end{matrix} \right ] = \left[ \begin{matrix} |\vec{x}^0-\vec{x}_0| \\ |\vec{x}^1-\vec{x}_0| \\ |\vec{x}^0-\vec{x}_1| \\ |\vec{x}^1-\vec{x}_1| \\ |\vec{x}^0-\vec{x}_2| \\ |\vec{x}^1-\vec{x}_2| \\ ⋮ \end{matrix} \right ] +v_k\)                                                                                            (2)

\({\vec{x}}_a=C\cdot{\vec{l}}_a\)

\(C=\left[\begin{matrix}\cos{\theta}\cos{d\psi}&-\cos{\phi}\sin{d\psi}+\sin{\phi}\sin{\theta}\cos{d\psi}&\sin{\phi}\sin{d\psi}+\cos{\phi}\sin{\theta}\cos{d\psi}\\\cos{\theta}\sin{d\psi}&\cos{\phi}\cos{d\psi}+\sin{\phi}\sin{\theta}\sin{d\psi}&-\sin{\phi}\cos{d\psi}+\cos{\phi}\sin{\theta}\sin{d\psi}\\-\sin{\theta}&\sin{\phi}\cos{\theta}&\cos{\phi}\cos{\theta}\\\end{matrix}\right]\)

where,
\(R_a^b\): range measurement between array antenna component \(a\) and fixed antenna \(b\)
\({\vec{x}}^a\): location vector of fixed antenna \(a\)
\({\vec{x}}_a\): location vector of array antenna component \(a\)
\(\phi,\ \theta\) : tilting angle (roll, pitch) from inertial sensor
\(v\) : measurement noise
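A minimal sketch of the observation model follows: it builds the direction cosine matrix C of Eq. (2) from the inertial roll/pitch and a yaw hypothesis, then generates the predicted ranges between every array component and every fixed antenna. The function names, and the convention that each component position is the reference antenna location plus C·l_a, are assumptions made for illustration.

```python
import numpy as np

def dcm(phi, theta, d_psi):
    """Direction cosine matrix C of Eq. (2), from roll/pitch (phi, theta)
    supplied by the inertial sensor and the yaw hypothesis d_psi."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(d_psi), np.sin(d_psi)
    return np.array([
        [cth * cps, -cph * sps + sph * sth * cps,  sph * sps + cph * sth * cps],
        [cth * sps,  cph * cps + sph * sth * sps, -sph * cps + cph * sth * sps],
        [-sth,       sph * cth,                    cph * cth]])

def predicted_ranges(ref_pos, yaw, phi, theta, lever_arms, fixed_antennas):
    """Predicted range between every array antenna component and every
    fixed antenna, following the observation model of Eq. (2).

    lever_arms: body-frame offsets l_a of each component from the
    reference antenna; fixed_antennas: known fixed-antenna positions.
    """
    C = dcm(phi, theta, yaw)
    # x_a = ref + C . l_a for every component (assumed convention)
    comps = np.asarray(ref_pos) + (C @ np.asarray(lever_arms).T).T
    return np.array([np.linalg.norm(fa - xa)
                     for xa in comps for fa in np.asarray(fixed_antennas)])
```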

4.3 Generation of Particles

Once the location of the reference antenna is determined, yaw candidates can be created around it. As mentioned above, since the geometric layout of the antennas is already known, particles can be created from the reference antenna location and the yaw error range. Because no prior yaw information is available from the UWB array antenna, the candidate distribution was created uniformly over all directions. Fig. 13 shows an example of candidate creation.

Fig. 13. Created array antenna candidates (when the number of fixed antennas is two).
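The uniform initialization can be sketched as follows; the function name, array layout, and equal-weight choice are illustrative assumptions.

```python
import numpy as np

def create_particles(n, group_means):
    """Create the initial particle set: because no prior yaw information
    is available, yaw candidates are drawn uniformly over all directions,
    and each particle is assigned one of the reference-antenna location
    groups (the bimodal candidates of Fig. 12).

    group_means: list of mean locations per group, indexed by group index.
    """
    idx = np.random.randint(0, len(group_means), n)   # group index per particle
    yaw = np.random.uniform(-np.pi, np.pi, n)         # uniform over all directions
    weights = np.full(n, 1.0 / n)                     # equal initial weights
    return np.column_stack([idx, yaw]), weights
```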

4.4 Selection of Optimum Particles

The locations of all array antenna components follow from the location and yaw of the reference antenna, from which the distances to the fixed antennas can be calculated. The optimum particle is the one that minimizes the error between the calculated and measured distances. The ambiguity in the reference antenna location is also resolved at this stage, because candidates adjacent to the optimum particle receive high weights.

The optimum candidates are selected through the measurement update procedure of the particle filter, with the weight of each candidate calculated as in Eq. (3). Low-weight candidates are then eliminated through resampling (Doucet et al. 2001, Ristic et al. 2004). Time propagation of the particles that survive resampling is performed using the inertial sensor measurements. Since the inertial measurements are used only over short intervals, the effect of inertial sensor divergence is negligible.

\(w_i=\frac{1}{\left(2\pi\right)^{n/2}\left|R\right|^{1/2}} \exp\left(-\frac{1}{2}r_i^TR^{-1}r_i\right)\)                                                                                                       (3)

where,
\(w_i\): particle weight of the \(i\)-th candidate
\(R\): error covariance
\(r_i\): residual vector between measured range and generated range
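A compact sketch of the measurement update and resampling steps is given below, assuming the Gaussian likelihood of Eq. (3) (with the normalization written using the determinant of R) and systematic resampling; the paper does not specify the resampling variant, so this is one common choice rather than the authors' exact procedure.

```python
import numpy as np

def update_weights(residuals, R):
    """Measurement update of Eq. (3): the weight of each candidate is the
    Gaussian likelihood of its residual vector r_i with covariance R.

    residuals: shape (N, n) residuals between measured and generated ranges.
    """
    Rinv = np.linalg.inv(R)
    n = residuals.shape[1]
    norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(R)))
    # r_i^T R^{-1} r_i for every candidate at once
    quad = np.einsum('ij,jk,ik->i', residuals, Rinv, residuals)
    w = norm * np.exp(-0.5 * quad)
    return w / np.sum(w)                              # normalize

def resample(particles, weights):
    """Systematic resampling: low-weight candidates are eliminated and
    high-weight ones duplicated (cf. Doucet et al. 2001)."""
    n = len(weights)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx].copy(), np.full(n, 1.0 / n)
```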

The line of sight vector estimation procedure using the particle filter is summarized in Fig. 14.

Fig. 14. Estimation procedure of line of sight vector.

5. PERFORMANCE EVALUATION ON THE ESTIMATION OF LINE OF SIGHT VECTOR

5.1 Equipment Configuration and Procedure for Performance Evaluation

For the UWB transceivers used in line of sight vector estimation, DecaWave DW1000 chips were employed, and distance measurements were acquired using the two-way ranging method (Decawave 2015). Fig. 15 shows the UWB transceiver board used for each antenna, and Table 3 summarizes the main performance of the DW1000 chip.

Fig. 15. VK-1000.
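For reference, single-sided two-way ranging reduces to halving the difference between the measured round-trip time and the responder's reply delay. The sketch below shows only this basic relation; clock-drift corrections used in practice with the DW1000 (cf. Decawave APS013) are omitted, and the numbers are illustrative.

```python
# Speed of light in m/s
C_LIGHT = 299_792_458.0

def twr_distance(t_round, t_reply):
    """Single-sided two-way ranging: the initiator measures the round-trip
    time t_round, the responder reports its reply delay t_reply, and the
    one-way time of flight is half the difference. Times in seconds."""
    tof = 0.5 * (t_round - t_reply)   # one-way time of flight
    return C_LIGHT * tof              # distance in meters

# Example: ~20 ns time of flight corresponds to about 6 m
d = twr_distance(t_round=140.040e-9, t_reply=100e-9)  # ~6.0 m
```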

Table 3. Main specifications of DW1000 (UWB transceiver chip).

Specification             Value
Frequency                 3.5 – 6.5 GHz (6 channels)
Transmit power            -14 / -10 dBm
Transmit power density    < -41.3 dBm/MHz
Data rate                 110 / 850 kbit/s, 6.8 Mbit/s

Since the line of sight vector estimation is based on distance measurements, it is affected by the baseline length of the antenna array. A longer baseline between array antenna components reduces the angular error for a given distance error, but makes the array harder for rescuers to wear. The antenna array used in the present study is shown in Fig. 16.

Fig. 16. Geometric configuration of the array antennas.
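The baseline effect can be quantified roughly: a range error e across a baseline of length L rotates the apparent antenna axis by about atan(e/L). The snippet below is an illustrative approximation, not an error model from the paper; it shows how the same 10 cm range error maps to very different yaw errors for short and long baselines.

```python
import numpy as np

def yaw_error_deg(range_error_m, baseline_m):
    """Rough yaw sensitivity to ranging error: a range error across a
    baseline of length L tilts the apparent axis by ~atan(error / L)."""
    return np.degrees(np.arctan2(range_error_m, baseline_m))

# A 10 cm range error over a 0.5 m baseline gives ~11.3 deg,
# while the same error over a 2 m baseline gives ~2.9 deg.
print(yaw_error_deg(0.10, 0.5), yaw_error_deg(0.10, 2.0))
```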

A rotary table was used to measure the absolute yaw angle, and the experiment was started 10 minutes after power-on so that the crystal oscillator could stabilize at room temperature. The overall equipment configuration is shown in Fig. 17, and the experiment scenario is presented in Table 4. Circles in Fig. 17 indicate antenna locations. The distance between the reference antenna and the fixed anchor was set to about 6 m.

Fig. 17. Configuration of the experiment equipment (left: array antenna, right: fixed antenna).

Table 4. Experiment procedure.

Order    Time (min)    Procedure
0        -10 ~ 0       Warm up
1        0 ~ 0.5       –
2        0.5 ~ 1       CCW 30°
3        1 ~ 1.5       –
4        1.5 ~ 2       CW 30°
5        2 ~ 2.5       –

5.2 Results

Fig. 18 shows the estimation result for the reference antenna in the static state. Since only two fixed antennas were used, the distribution of estimated reference antenna locations was bimodal (Group A, Group B). As the dispersion of each group was not significantly large, the mean value of each group was used. If the antenna geometry is unfavorable or the distance between the fixed antennas and the reference antenna is long, the dispersion increases; in that case the mean cannot be used, and the reference antenna location candidates must themselves be treated as particles by increasing the dimension of the state vector, which in turn requires more particles for stable convergence.

Fig. 18. Result of determination of reference antenna locations.

Based on the estimated location of the reference antenna, candidates for the array antenna locations were created. As mentioned above, the tilting value provided by the inertial sensor was used for the array antenna without correction. Fig. 19 shows the created candidates (only 10 are displayed). As described earlier, the antennas other than the reference antenna are determined as a function of yaw and the reference antenna location.

Fig. 19. Created array antenna candidates (only 10 are displayed).

The number of particles was set to 2,000. If the number of particles is insufficient, all candidates may be eliminated during the determination of the reference antenna location, so a sufficient number of particles was secured.

Fig. 20 shows the final line of sight vector estimate after running the particle filter. The line of sight vector was estimated correctly according to the scenario. Table 5 summarizes the error analysis.

Fig. 20. Estimation result of yaw.

Table 5. Estimation error of yaw.

Error                 Value (°)
Maximum offset        1.23
Standard deviation    1.27

6. CONCLUSION

In the present study, a method of applying AR was investigated so that location tracking results of rescuers indoors can be related intuitively to the target building being observed. As part of this, a line of sight vector estimation technique, the core of the AR implementation, was investigated, and a method employing UWB transceivers was adopted to interoperate with the previously developed RITS. The results showed that the proposed equipment configuration and method yield line of sight vector estimates sufficiently accurate to implement AR.

ACKNOWLEDGMENTS

The present study was supported by the 2016 seed project fund of the Korea Aerospace Research Institute and we appreciate the support.

References

  1. Blakelock, J. H. 1991, Automatic control of aircraft and missiles, 2nd ed. (New York: John Wiley & Sons)
  2. Bonsor, K. 2016, How Augmented Reality Works [Internet], cited 2016 Feb 19, available from: http://computer.howstuffworks.com/augmented-reality.htm
  3. Choi, K. C., Park, C. S., Lee, J. K., Yang, C. S., Kim, B. U., et al. 2012, Fundamental research of safety technology for firefighter onsite response (final report), Korea Institute of Fire Science & Engineering
  4. Chun, S., Heo, M. B., & Nam, G. W. 2014, Development and evaluation of integration algorithm for personal indoor tracking system without pre-installed infrastructure, in ISGNSS 2014 in conjunction with KGS Conference, Jeju, 21-24 Oct 2014
  5. Dardari, D., Closas, P., & Djurić, P. M. 2015, Indoor tracking: theory, methods, and technologies, IEEE Transactions on Vehicular Technology, 64(4), 1263-1278. http://dx.doi.org/10.1109/TVT.2015.2403868
  6. Decawave 2015, The implementation of two-way ranging with the DW1000, Decawave Application Note APS013
  7. Djurić, P. M. & Bugallo, M. F. 2013, Particle filtering for high-dimensional systems, in 2013 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, 15-18 Dec 2013. http://dx.doi.org/10.1109/CAMSAP.2013.6714080
  8. Doucet, A., Freitas, N. D., & Gordon, N. 2001, Sequential Monte Carlo methods in practice (New York: Springer)
  9. Jlapoutre 2011, Aurasma lights up Dutch newspaper 'De Telegraaf ' with augmented reality [Internet], cited 2011 Dec. 24, available from: https://wttfuture.wordpress.com/2011/12/24/aurasma-lights-up-dutch-newspaper-de-telegraaf-with-augmented-reality/
  10. Lim, J., Kim, H. S., Lee, J. Y., Choi, K. H., Kang, S. J., et al. 2014, Estimation of angular acceleration by a monocular vision sensor, JPNT, 3, 1-10. http://dx.doi.org/10.11003/JPNT.2014.3.1.001
  11. Multidots 2015, Augmented reality [Internet], cited 2015 Apr 6, available from: http://www.multidots.com/augmented-reality/
  12. Nilsson, J. O., Rantakokko, J., Händel, P., Skog, I., Ohlsson, M., et al. 2014, Accurate indoor positioning of firefighters using dual foot-mounted inertial sensors and inter-agent ranging, in 2014 IEEE/ION Position, Location and Navigation Symposium - PLANS 2014. http://dx.doi.org/10.1109/PLANS.2014.6851424
  13. Murphy-Chutorian, E. & Trivedi, M. M. 2010, Head pose estimation and augmented reality tracking: an integrated system and evaluation for monitoring driver awareness, IEEE Transactions on Intelligent Transportation Systems, 11, 300-311. http://dx.doi.org/10.1109/TITS.2010.2044241
  14. Persa, S. F. 2006, Sensor fusion in head pose tracking for augmented reality, PhD Dissertation, TU Delft.
  15. Ristic, B., Arulampalam, S., & Gordon, N. 2004, Beyond the Kalman filter (Boston: Artech house)
  16. Titterton, D. H. & Weston, J. L. 2009, Strapdown inertial navigation technology, 2nd ed. (Reston: AIAA)