Mobile Robot Localization in Geometrically Similar Environment Combining Wi-Fi with Laser SLAM

  • Gengyu Ge (School of Information Engineering, Zunyi Normal University) ;
  • Junke Li (School of Information Engineering, Suqian University) ;
  • Zhong Qin (School of Information Engineering, Zunyi Normal University)
  • Received : 2022.09.15
  • Accepted : 2023.04.10
  • Published : 2023.05.31

Abstract

Localization is a hot research topic in many areas, especially in the mobile robot field. Because the signal of the global positioning system (GPS) is weak indoors, the alternative schemes in an indoor environment include wireless signal transmitting and receiving solutions, laser rangefinders that build a map followed by a re-localization stage, visual positioning methods, etc. Among all wireless signal positioning techniques, Wi-Fi is the most common one. Wi-Fi access points are installed in most indoor areas of human activity, and smart devices equipped with Wi-Fi modules can be seen everywhere. However, the localization of a mobile robot using a Wi-Fi scheme usually lacks orientation information. Besides, the distance error is large because of indoor signal interference. Another research direction, which mainly refers to laser sensors, is to actively detect the environment and achieve positioning. An occupancy grid map is built by using the simultaneous localization and mapping (SLAM) method when the mobile robot enters the indoor environment for the first time. When the robot enters the environment again, it can localize itself according to the known map. Nevertheless, this scheme works effectively only if the areas have salient geometrical features. If the areas have similar scanning structures, such as a long corridor or similar rooms, the traditional methods always fail. To address the weaknesses of the above two methods, this work proposes a coarse-to-fine paradigm and an improved localization algorithm that utilizes Wi-Fi to assist the robot localization in a geometrically similar environment. Firstly, a grid map is built by using laser SLAM. Secondly, a fingerprint database is built in the offline phase. Then, the RSSI values are obtained in the localization stage to get a coarse localization. Finally, an improved particle filter method based on the Wi-Fi signal values is proposed to realize a fine localization. Experimental results show that our approach is effective and robust for both global localization and the kidnapped robot problem. The localization success rate reaches 97.33%, while the traditional method always fails.

Keywords

1. Introduction

Mobile robots have been applied to various scenes that facilitate people’s lives. In an indoor environment, vacuum cleaner robots are used to sweep the floor [1], a guide robot can guide a blind person [2], and disinfection robots and meal delivery robots have been used more frequently during the COVID-19 epidemic [3]. Localization is one of the most important capabilities of an autonomously moving mobile robot and a prerequisite for the subsequent path planning and navigation [4]. If the mobile robot is in an outdoor environment, GPS is the most popular and widely used wireless location technology to solve the positioning problem [5]. However, the GPS signal in indoor areas is weak and often unusable. As an alternative, many signal-based methods have been proposed to replace GPS solutions, for instance, Wi-Fi [6], ultrasonic, radio frequency identification (RFID) [7], ultra-wideband (UWB) [8], wireless sensor networks (WSN) [9], etc. The RSSI (received signal strength indicator) is usually used to calculate the approximate distance between the receiver and the signal transmitter [10]. Although RSSI-based wireless localization is not accurate, it can be used for coarse positioning purposes.

In the era of the mobile internet, Wi-Fi is the most common wireless technology in indoor environments where human beings live and work, such as office rooms, hospitals, hotels, supermarkets and malls, teaching buildings, dormitories, factories, etc. However, the Wi-Fi-based localization result provides only basic position information without orientation. The mobile robot still cannot know where the free areas or obstacles are, and thus cannot navigate or move freely in the environment.

Another commonly used method relies on a 2D laser rangefinder or Lidar, which can obtain distance information about the surroundings. The laser sensor can construct a metric map that is suitable and effective for path planning in the mobile robot navigation process. Before the robot performs the localization task, it is controlled to build a 2D probabilistic occupancy grid map by using the SLAM method. Given the map of an indoor room or area, the localization task can be easily solved by utilizing the Monte Carlo localization method (MCL) [11]. However, when the mobile robot enters a geometrically similar area, as Fig. 1 shows, the distance data scanned from its surroundings by a 2D laser sensor are nearly identical, and the localization task becomes very difficult [12].


Fig. 1. Diagram of localization system.

To make up for the shortcomings of the above two technologies and solve the localization problem in a geometrically similar indoor environment, this paper proposes an alternative approach that integrates the Wi-Fi signal and laser SLAM techniques. We adopt a coarse-to-fine paradigm that uses Wi-Fi signal retrieval to get a coarse localization and the range data to realize a fine localization. The main contributions of this work are as follows:

⦁ An occupancy grid map is built by using laser SLAM techniques. Each cell in the map has one of three possible states: occupied, free, or unknown. Then, the mobile robot is controlled to move to a subarea and collect distance data by using the laser rangefinder sensor. The range data are used to compute the geometric centroid of the local area. These centroid positions are selected as the sampling points, where the Wi-Fi module mounted on the mobile robot receives the RSSI values.

⦁ The mean of several received signal strength measurements is stored in the fingerprint database along with the coordinates of the associated sampling point. A position index that correlates the RSSI values with the position is also stored for later retrieval in the localization phase.

⦁ A coarse-to-fine localization paradigm is used to obtain an accurate pose. The coarse position is obtained by the RSSI value matching method, and the fine localization is obtained by adopting an improved MCL algorithm. The improved MCL method includes a new particle initialization and resampling strategy. Experimental results indicate that our proposed approach achieves a 97.33% successful localization rate.

This paper is organized as follows. Related work on 2D laser-based SLAM, filter-based localization, and Wi-Fi-based localization is discussed in Section 2. Our proposed method is presented in Section 3, which includes the overall framework of the method, building an occupancy grid map, the RSSI-distance fingerprint, the selection strategy of sampling points, and the coarse-to-fine localization. The experiment and discussion are described in Section 4, and Section 5 concludes the paper.

2. Related Work

2.1 Laser-SLAM and Localization

SLAM means that a mobile robot or handheld sensor concurrently perceives the surrounding structures and estimates its own pose. Laser (Lidar) SLAM is mainly used for building a planar grid map or a 3D spatial voxel map of the environment where the mobile robot will perform its tasks. Due to the high cost and practical application requirements, 3D laser Lidar is usually used in outdoor self-driving cars [13] or for 3D reconstruction of buildings [14]. In an indoor environment, a mobile robot has limited height and only needs to know which areas are passable for navigation. In addition, most of the surroundings are structured scenes, and most objects or obstacles are placed on the ground. Consequently, a 2D laser Lidar is sufficient for mobile robot navigation tasks in indoor rooms. Another important reason is that it can build a 2D probabilistic occupancy grid map for the subsequent path planning. The most famous 2D laser Lidar SLAM works are Gmapping [15] and Cartographer [16]; the former is a filter-based scheme and the latter is a graph-optimization scheme. In addition, there are other options such as FastSLAM, Hector SLAM [17], and Karto SLAM [18].

In the mobile robotics field, the localization problem refers to obtaining pose information. Different from the positioning task, which only needs the coordinate of a position, the localization of a mobile robot includes both a position and an orientation. There are three localization categories: global localization, pose tracking, and the kidnapped robot problem [19]. The most difficult one is global localization because no initial pose information is available. The Kalman filter and its extended versions are mainly used for pose tracking and are not suitable for the other two problems [20-22]. In contrast, the particle filter is compatible with all three cases [23]. Monte Carlo localization methods use particles to approximate arbitrary distributions; more importantly, they are suitable for nonlinear and non-Gaussian problems. However, in geometrically similar environments, the MCL method will fail.

2.2 Wi-Fi-based Localization

Wi-Fi works in the 2.4 GHz RF band under the IEEE 802.11b, IEEE 802.11g, and IEEE 802.11n network protocols. Another RF band is 5 GHz under the IEEE 802.11a protocol. When the Wi-Fi signal is used for localization, it generally yields a position coordinate without the orientation of the user or mobile robot. A fingerprint database consisting of RSSI values is commonly adopted for localization [24]. The signal fingerprint is usually collected in an offline stage and used in the subsequent online querying stage [25]. Several related works based on Wi-Fi signals have achieved satisfactory results [26-28].

However, due to environmental effects, the RSSI value is time-varying and unreliable [29]. Besides, the orientation of the mobile robot cannot be directly obtained, and the RSSI information alone is not enough for obstacle avoidance or navigation. Consequently, we integrate laser SLAM and the Wi-Fi RSSI technique to obtain an accurate localization.

3. Proposed Methodology

An overview of the mapping and localization system is shown in Fig. 2. The system mainly consists of three steps: grid map construction, sampling position selection and fingerprint database building, and coarse-to-fine localization. Detailed descriptions are given in the following sections.


Fig. 2. Framework of the mapping and localization system.

3.1 Build an Occupancy Grid Map

Before the mobile robot can perform tasks or navigate autonomously, it utilizes a 2D laser-based SLAM technique to build an occupancy grid map. In general, a 2D laser-based SLAM mapping system needs an encoder as the odometer to estimate the motion information and a laser rangefinder to obtain the measurement data. Fig. 3a shows the pose of the mobile robot in the world (global) coordinate system. The position coordinate is (x, y) and the orientation relative to the xworld axis is θ. Fig. 3b is a diagram of the odometry motion model, whose data are usually measured by an encoder. The center of the black circle at the lower left of Fig. 3b denotes the position at time t−1, and the black arrow denotes the heading direction of the mobile robot. Thus, the pose of the mobile robot at time t−1 can be represented as xt-1 = (xt-1, yt-1, θt-1). Similarly, the upper right part of Fig. 3b shows that the pose at time t is xt = (xt, yt, θt). The dashed line indicates that the displacement of the mobile robot from time t−1 to time t can be decomposed into one translation (δtrans) and two rotations (δrot1 and δrot2). Compared with the velocity motion model, the odometry motion model is more accurate and more suitable for localization and mapping tasks [11].


Fig. 3. Diagram of the mobile robot motion model.

An ideal model expression of the displacement without error is shown in equation (1):

\(\begin{aligned}\left\{\begin{array}{l}\delta_{rot1}=\operatorname{atan2}\left(y_{t}-y_{t-1}, x_{t}-x_{t-1}\right)-\theta_{t-1} \\ \delta_{trans}=\sqrt{\left(x_{t}-x_{t-1}\right)^{2}+\left(y_{t}-y_{t-1}\right)^{2}} \\ \delta_{rot2}=\theta_{t}-\theta_{t-1}-\delta_{rot1}\end{array}\right.\end{aligned}\)       (1)

However, in reality there is considerable noise and measurement error, such as wheel drift and slip. Fortunately, we can describe the model probabilistically. The difference between the estimated model and the ideal model can be expressed by probabilistic sampling, as equation (2) shows:

\(\begin{aligned}\left\{\begin{array}{l}\hat{\delta}_{rot1}=\delta_{rot1}-\operatorname{sample}\left(\alpha_{1} \delta_{rot1}^{2}+\alpha_{2} \delta_{trans}^{2}\right) \\ \hat{\delta}_{trans}=\delta_{trans}-\operatorname{sample}\left(\alpha_{3} \delta_{trans}^{2}+\alpha_{4} \delta_{rot1}^{2}+\alpha_{4} \delta_{rot2}^{2}\right) \\ \hat{\delta}_{rot2}=\delta_{rot2}-\operatorname{sample}\left(\alpha_{1} \delta_{rot2}^{2}+\alpha_{2} \delta_{trans}^{2}\right)\end{array}\right.\end{aligned}\)       (2)

where the variables α1, α2, α3, and α4 are the parameters that determine the motion noise of a specific mobile robot platform, and \(\hat{\delta}_{rot1}\), \(\hat{\delta}_{rot2}\), and \(\hat{\delta}_{trans}\) are the estimated values. Therefore, given the pose at time t−1 and the odometry data, the pose at time t can be computed as follows:

\(\begin{aligned}\left(\begin{array}{c}x_{t} \\ y_{t} \\ \theta_{t}\end{array}\right)=\left(\begin{array}{c}x_{t-1} \\ y_{t-1} \\ \theta_{t-1}\end{array}\right)+\left(\begin{array}{c}\hat{\delta}_{trans} \cos \left(\theta+\hat{\delta}_{rot1}\right) \\ \hat{\delta}_{trans} \sin \left(\theta+\hat{\delta}_{rot1}\right) \\ \hat{\delta}_{rot1}+\hat{\delta}_{rot2}\end{array}\right)\end{aligned}\)       (3)
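As a concrete illustration, the following Python sketch implements the sampled odometry motion model of equations (1)-(3). The function name and the noise parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_odometry_motion(x_prev, odom_prev, odom_curr, alphas, rng=np.random.default_rng()):
    """Sample a new pose from the odometry motion model, equations (1)-(3).

    x_prev    : (x, y, theta) previous pose estimate in the map frame
    odom_prev : (x, y, theta) previous pose reported by the wheel odometry
    odom_curr : (x, y, theta) current pose reported by the wheel odometry
    alphas    : (a1, a2, a3, a4) robot-specific motion-noise parameters
    """
    x1, y1, th1 = odom_prev
    x2, y2, th2 = odom_curr
    a1, a2, a3, a4 = alphas

    # Ideal decomposition of the displacement, equation (1)
    rot1 = np.arctan2(y2 - y1, x2 - x1) - th1
    trans = np.hypot(x2 - x1, y2 - y1)
    rot2 = th2 - th1 - rot1

    # Perturb with zero-mean Gaussian noise whose variance grows with the motion, equation (2)
    rot1_hat = rot1 - rng.normal(0.0, np.sqrt(a1 * rot1**2 + a2 * trans**2))
    trans_hat = trans - rng.normal(0.0, np.sqrt(a3 * trans**2 + a4 * rot1**2 + a4 * rot2**2))
    rot2_hat = rot2 - rng.normal(0.0, np.sqrt(a1 * rot2**2 + a2 * trans**2))

    # Apply the noisy displacement to the previous pose estimate, equation (3)
    x, y, th = x_prev
    return (x + trans_hat * np.cos(th + rot1_hat),
            y + trans_hat * np.sin(th + rot1_hat),
            th + rot1_hat + rot2_hat)
```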

When using a range-based measurement sensor such as a 2D laser Lidar, the observation or measurement function has two common representations: the likelihood field model and the beam rangefinder model. Due to its lack of smoothness, the beam rangefinder model is not commonly used. In this work, we select the likelihood field model to express the observation. Suppose (xsens, ysens) is the position of the laser sensor center in the local coordinate system fixed on the mobile robot platform, and θk,sens is the angle of the k-th sensor beam relative to the heading direction of the mobile robot. Through a trigonometric transformation that maps the measured distance data into the global coordinate system, the endpoint coordinate of each observation zkt can be computed as equation (4) shows.

\(\begin{aligned}\left(\begin{array}{l}x_{z_{t}^{k}} \\ y_{z_{t}^{k}}\end{array}\right)=\left(\begin{array}{l}x_{t} \\ y_{t}\end{array}\right)+\left(\begin{array}{cc}\cos \theta & -\sin \theta \\ \sin \theta & \cos \theta\end{array}\right)\left(\begin{array}{l}x_{\text {sens }} \\ y_{\text {sens }}\end{array}\right)+z_{t}^{k}\left(\begin{array}{l}\cos \left(\theta+\theta_{k, \text { sens }}\right) \\ \sin \left(\theta+\theta_{k, \text { sens }}\right)\end{array}\right)\end{aligned}\)​​​​​​​       (4)

The measurement probability pdist can be calculated from the likelihood field as follows:

\(\begin{aligned}\left\{\begin{array}{l}\text { dist }=\min _{x^{\prime}, y^{\prime}}\left\{\sqrt{\left(x_{z_{t}^{k}}-x^{\prime}\right)^{2}+\left(y_{z_{t}^{k}}-y^{\prime}\right)^{2}} \mid\left\langle x^{\prime}, y^{\prime}\right\rangle \text { occupied in } m\right\} \\ p_{\text {dist }}=p_{\text {dist }} \cdot\left(z_{\text {hit }} \cdot \operatorname{prob}\left(\text { dist }, \sigma_{\text {hit }}\right)+\frac{z_{\text {random }}}{z_{\max }}\right)\end{array}\right.\end{aligned}\)       (5)
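A minimal Python sketch of the likelihood field computation in equations (4) and (5) is shown below. It assumes the laser is mounted at the robot center (so the (xsens, ysens) offset of equation (4) is dropped), takes a precomputed distance field over the grid map, and uses illustrative values for z_hit, z_random, and σ_hit.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.stats import norm

def build_distance_field(grid, resolution, occupied_value=100):
    """Distance (in metres) from every cell to the nearest occupied cell.

    Assumes the ROS-style convention that occupied cells store occupied_value.
    """
    return distance_transform_edt(grid != occupied_value) * resolution

def likelihood_field_weight(pose, ranges, angles, dist_field, resolution, origin,
                            z_hit=0.8, z_random=0.2, z_max=8.0, sigma_hit=0.2):
    """Weight of one pose under the likelihood field model, equations (4)-(5).

    origin is the world coordinate of grid cell (0, 0).
    """
    x, y, theta = pose
    p = 1.0
    for r, a in zip(ranges, angles):
        if r >= z_max:                      # discard max-range (invalid) beams
            continue
        # Project the beam endpoint into the map frame, equation (4)
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        ix = int((ex - origin[0]) / resolution)
        iy = int((ey - origin[1]) / resolution)
        if 0 <= iy < dist_field.shape[0] and 0 <= ix < dist_field.shape[1]:
            dist = dist_field[iy, ix]       # distance to the nearest obstacle
        else:
            dist = z_max
        # Gaussian around the nearest obstacle plus a uniform random term, equation (5)
        p *= z_hit * norm.pdf(dist, 0.0, sigma_hit) + z_random / z_max
    return p
```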

The SLAM technique can be regarded as a state estimation problem that jointly estimates the robot poses and the map. The probabilistic occupancy grid map can be thought of as a combination of many basic grid cells. The grid cells are a discretization of the space, and the map posterior can be factorized as follows:

\(\begin{aligned}p\left(m \mid z_{1: t}, x_{1: t}\right)=\prod_{i} p\left(m_{i} \mid z_{1: t}, x_{1: t}\right)\end{aligned}\)       (6)

The log-odds representation in equation (7) is used to avoid numerical instability for probabilities near 0 or 1.

\(\begin{aligned}\left\{\begin{array}{l}p\left(m_{i} \mid z_{1: t}, x_{1: t}\right)=1-\frac{1}{1+\exp \left\{l_{t, i}\right\}} \\ l_{t, i}=\log \frac{p\left(m_{i} \mid z_{1: t}, x_{1: t}\right)}{1-p\left(m_{i} \mid z_{1: t}, x_{1: t}\right)}\end{array}\right.\end{aligned}\)       (7)
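The per-cell log-odds update behind equations (6) and (7) can be sketched as follows; the class layout and the update constants l_occ and l_free are illustrative assumptions rather than the values used in Gmapping.

```python
import numpy as np

class LogOddsGrid:
    """Per-cell log-odds occupancy update, equations (6) and (7)."""

    def __init__(self, shape, l_occ=0.85, l_free=-0.4, l_prior=0.0):
        self.l = np.full(shape, l_prior)   # log-odds l_{t,i} of every cell
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, hit_cells, free_cells):
        """hit_cells / free_cells: (row, col) indices extracted from one scan."""
        for r, c in hit_cells:             # cells containing beam endpoints
            self.l[r, c] += self.l_occ
        for r, c in free_cells:            # cells traversed by the beams
            self.l[r, c] += self.l_free

    def probability(self):
        # Recover p(m_i | z_{1:t}, x_{1:t}) from the log-odds, equation (7)
        return 1.0 - 1.0 / (1.0 + np.exp(self.l))
```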

The SLAM process can then be expressed as:

\(\begin{aligned}p\left(x_{1: t}, m \mid z_{1: t}, u_{1: t}\right) &=p\left(x_{1: t} \mid z_{1: t}, u_{1: t}\right) \cdot p\left(m \mid x_{1: t}, z_{1: t}, u_{1: t}\right) \\ &=p\left(m \mid x_{1: t}, z_{1: t}\right) \cdot p\left(x_{1: t} \mid z_{1: t}, u_{1: t}\right)\end{aligned}\)       (8)

3.2 Selection strategy of sampling points

The mobile robot only needs to identify the nearest one or more Wi-Fi access points to know its approximate area. Given a coarse localization area, the subsequent fine localization can be solved by the traditional approach. Different from traditional Wi-Fi-based localization methods, the coordinates of the APs are not essential in our proposed approach. In addition, the sampling points are not uniformly distributed throughout the map, which reduces the number of sampling points and the sampling workload.

To achieve this goal, the geometric centroid of the local area is selected as the sampling point. As Fig. 4a shows, the mobile robot with a laser sensor scans its surroundings in a corridor environment. The geometric centroid is calculated by averaging the coordinates of the valid scan endpoints. Denoting the geometric centroid as Pgc = (xgc, ygc), it can be computed as follows:


Fig. 4. The geometric centroid of the local areas. (a) A corridor area. (b) A single room.

\(\begin{aligned}\left\{\begin{array}{l}x_{g c}=\frac{1}{N} \sum_{i=1}^{N} x_{i} \\ y_{g c}=\frac{1}{N} \sum_{i=1}^{N} y_{i}\end{array}\right.\end{aligned}\)       (9)

where (xi, yi) are the coordinates of the i-th laser beam endpoint and N is the total number of beams. If a measured distance value exceeds the maximum measurement range, it is ignored. Although the geometric centroid in a corridor is not unique, the point always falls on the middle line of the corridor. In a specific room, as Fig. 4b shows, the geometric centroid is unique and the mobile robot can move to that position easily. Therefore, we only need to collect RSSI signals at these special locations.
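As a small illustration, the centroid of equation (9) reduces to averaging the valid beam endpoints. The sketch below assumes the endpoints are computed from a known pose and uses an 8 m validity cut-off only as an example value.

```python
import numpy as np

def scan_centroid(pose, ranges, angles, max_range=8.0):
    """Geometric centroid of the locally scanned area, equation (9).

    Beams at or beyond max_range are treated as invalid and ignored.
    """
    x, y, theta = pose
    valid = np.asarray(ranges) < max_range
    r = np.asarray(ranges)[valid]
    a = np.asarray(angles)[valid]
    # Endpoints of the valid beams in the map frame
    ex = x + r * np.cos(theta + a)
    ey = y + r * np.sin(theta + a)
    return ex.mean(), ey.mean()            # (x_gc, y_gc)
```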

3.3 RSSI-distance Fingerprint

The database of RSSI-distance fingerprints is created in the offline phase according to the sampling point selection strategy. Fig. 5 shows the process in which the mobile robot collects signal information at different positions and saves it into a fingerprint database. The blue marks are Wi-Fi access points (APs), which serve as reference signal nodes or signal beacons. Although they are called by different names, their essential function is the same. It is not necessary to know the specific position of each Wi-Fi AP; however, it must be ensured that each room or corridor area has an AP device.


Fig. 5. The diagram of RSSI-distance fingerprint.

Suppose the number of APs is n, the number of sampling points is m, the coordinate of the i-th sampling position is (xi, yi), and rss denotes a measured signal strength value. Let FPi represent the fingerprint vector at the i-th sampling position. Then FPi can be expressed as formula (11) shows:

\(\begin{aligned}F P_{i}=\left(\begin{array}{cccccc}\overline{rss}_{i}^{1} & \overline{rss}_{i}^{2} & \cdots & \overline{rss}_{i}^{j} & \cdots & \overline{rss}_{i}^{n}\end{array}\right)\end{aligned}\)       (11)

where \(\overline{rss}_{i}^{j}\) is the mean signal strength value received from the j-th AP at the i-th sampling position. It should be noted that not all AP signals can be received by the terminal device. At the i-th sampling position, the receiver node can obtain all n RSSI values only if every AP lies within receiving range. If the signal from an AP is weak or cannot be received, its rss is set to zero.

The fingerprint database consists of the m vectors of mean RSS values collected from all sampling points, as formula (12) shows.

\(\begin{aligned}F P=\left[\begin{array}{cccc}\overline{rss}_{1}^{1} & \overline{rss}_{1}^{2} & \cdots & \overline{rss}_{1}^{n} \\ \overline{rss}_{2}^{1} & \overline{rss}_{2}^{2} & \cdots & \overline{rss}_{2}^{n} \\ \vdots & \vdots & \vdots & \vdots \\ \overline{rss}_{m}^{1} & \overline{rss}_{m}^{2} & \cdots & \overline{rss}_{m}^{n}\end{array}\right]\end{aligned}\)       (12)

Because a single measurement is inaccurate, the mean of k measurements is used as the final reference. At a sampling position A, the matrix of k measurements from the n APs is expressed as follows:

\(\begin{aligned}S_{A}=\left[\begin{array}{cccc}rss_{1,1} & rss_{1,2} & \cdots & rss_{1, n} \\ rss_{2,1} & rss_{2,2} & \cdots & rss_{2, n} \\ \vdots & \vdots & \vdots & \vdots \\ rss_{k, 1} & rss_{k, 2} & \cdots & rss_{k, n}\end{array}\right]\end{aligned}\)       (13)

The arithmetic mean signal value from the j-th AP, taken over the k measurements (indexed by l), is computed according to equation (14):

\(\begin{aligned}\overline{rss}_{i}^{j}=\frac{1}{k} \sum_{l=1}^{k} rss_{l, j}\end{aligned}\)       (14)

The next step is to associate the location information with the fingerprints. The coordinate of a sampling position is determined after the mobile robot has built the occupancy grid map and moves through the known areas. According to Section 3.2, the mobile robot reads the scanning data and computes the geometric centroid of the local surrounding area. When the mobile robot reaches the geometric centroid, the position coordinate is recorded as a sampling point. The associated information Geo_indexi is represented by the following formula:

Geo_indexi = {(xi, yi), FPi}       (15)
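One possible way to organize a single fingerprint entry according to equations (11)-(15) is sketched below. The dictionary layout and the helper name build_fingerprint_entry are our own illustrative choices, and the RSSI numbers in the example are purely made up.

```python
import numpy as np

def build_fingerprint_entry(position, rss_matrix):
    """One Geo_index entry, equations (11)-(15).

    position   : (x_i, y_i) of the sampling point (the local geometric centroid)
    rss_matrix : k x n array of k repeated RSSI readings from the n APs;
                 missing readings are stored as 0, as in the text.
    """
    fp = np.asarray(rss_matrix, dtype=float).mean(axis=0)    # equation (14), column-wise mean
    return {"position": tuple(position), "fingerprint": fp}  # equation (15)

# Example: three readings from four APs at one sampling point (illustrative values)
entry = build_fingerprint_entry((2.5, 7.0),
                                [[-41, -63, -78, 0],
                                 [-43, -60, -80, 0],
                                 [-40, -65, -77, 0]])
```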

3.4 Coarse-to-fine Localization

In the online localization phase, the mobile robot obtains its pose by using a coarse-to-fine paradigm. Firstly, the mobile robot scans the surroundings to obtain distance information relative to the surrounding walls or obstacles. Secondly, the mobile robot moves to the middle line of the corridor or to the geometric centroid of a room according to equation (9). Thirdly, the RSSI values from all APs are measured and matched against the previously built fingerprint database. Finally, the fingerprint database and the associated index are searched to find the potential position of the mobile robot.

However, the above search-and-match method is effective only under ideal conditions without noise or error. In practice, the signal suffers from interference and the geometric centroid position cannot be obtained exactly. To solve this problem, we use the KNN algorithm to calculate the most likely estimated position (xest, yest) relative to the sampling points [30].
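The coarse position retrieval can then be sketched as a KNN search over the fingerprint database. For simplicity this sketch uses plain Euclidean distance in signal space, whereas [30] proposes a Spearman-distance variant; it reuses the entry layout assumed in the previous sketch.

```python
import numpy as np

def knn_coarse_position(query_rss, database, k=3):
    """Coarse position estimate from the fingerprint database via K nearest neighbours.

    query_rss : length-n vector of RSSI values measured at the unknown position
    database  : list of entries as returned by build_fingerprint_entry()
    Returns the mean position of the k closest fingerprints, (x_est, y_est).
    """
    q = np.asarray(query_rss, dtype=float)
    # Distance between the query and every stored fingerprint in signal space
    dists = [np.linalg.norm(q - e["fingerprint"]) for e in database]
    nearest = np.argsort(dists)[:k]
    pts = np.array([database[i]["position"] for i in nearest])
    return pts.mean(axis=0)
```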

Fig. 6a and Fig. 7a show the coarse localization process after the mobile robot is powered on or woken up from a system halt; the former is in a corridor and the latter in a room or local area. The scanning distance data are used to compute a geometric centroid position, and the mobile robot then moves to that unique local position. Fig. 6b and Fig. 7b show the initialization of the particle filter. The initial particles are distributed around the estimated position and scattered evenly in a circular area, as represented in equation (16).


Fig. 6. Coarse localization in a corridor. (a) Slight movement to the middle line. (b) Particle initialization in a specific area.


Fig. 7. Coarse localization in a room or local area. (a) Slight movement to the geometric centroid area. (b) Particle initialization in a specific area.

(x - xest)2 + (y - yest)2 = r2       (16)

where r is the radius of the circle. In this work, the weights of the particles are computed from a Gaussian function winitial = P((x, y); μ, Σ), as equation (17) shows:

\(\begin{aligned} w_{\text {initial }} & =\frac{1}{2 \pi|\Sigma|^{\frac{1}{2}}} \exp \left(-\frac{1}{2}(p-\mu)^{T} \Sigma^{-1}(p-\mu)\right) \\ & =\frac{1}{2 \pi \sigma_{x} \sigma_{y}} \exp \left(-\left(\frac{\left(x-x_{e s t}\right)^{2}}{2 \sigma_{x}^{2}}+\frac{\left(y-y_{\text {est }}\right)^{2}}{2 \sigma_{y}^{2}}\right)\right)\end{aligned}\)       (17)

where p = (x, y)T and μ = (xest, yest)T, and the covariance is assumed to be diagonal, Σ = diag(σx2, σy2).
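A small sketch of this initialization is given below, assuming a diagonal covariance and illustrative values for σx, σy, the radius, and the particle count; the square-root trick keeps the particle density uniform over the disc.

```python
import numpy as np

def initialize_particles(x_est, y_est, r=0.3, n=500,
                         sigma_x=0.15, sigma_y=0.15, rng=np.random.default_rng()):
    """Initial particle set around the coarse estimate, equations (16)-(17)."""
    # Uniform sampling inside the circle of radius r centred at (x_est, y_est)
    rho = r * np.sqrt(rng.uniform(0.0, 1.0, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    x = x_est + rho * np.cos(phi)
    y = y_est + rho * np.sin(phi)
    theta = rng.uniform(-np.pi, np.pi, n)          # orientation is unknown at this stage

    # Gaussian weights of equation (17), then normalized as in step 7 of Algorithm 1
    w = (1.0 / (2.0 * np.pi * sigma_x * sigma_y)) * np.exp(
        -((x - x_est) ** 2 / (2.0 * sigma_x ** 2) + (y - y_est) ** 2 / (2.0 * sigma_y ** 2)))
    w /= w.sum()
    return np.stack([x, y, theta], axis=1), w
```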

A detailed algorithm is described in Algorithm 1 to realize a fine localization.

Algorithm 1. Improved MCL Algorithm

Input: Particle set χt−1, control ut (odometry), observation zt (laser data), RSSI values

Output: Particle set χt

1: Initialization, t = 0 , \(\begin{aligned}\bar{\chi}_{t}=\chi_{t}=\varnothing\end{aligned}\)

2: Compute the geometric centroid from the laser scanning data using equation (9)

3: Estimate the coarse position (xest, yest) by Wi-Fi RSSI retrieval

4: for i=1 to N do

5: uniformly sample particle positions in a circular area with center (xest, yest) and radius r

6: draw the initial weights \(\hat{w}_{0}^{[i]} \sim N\left(\mu_{\left(x_{est}, y_{est}\right)}, \Sigma_{x, y}\right)\)

end for

7: Normalize the weights \(w_{0}^{[i]}=\hat{w}_{0}^{[i]} / \sum_{j=1}^{N} \hat{w}_{0}^{[j]}\)

8: for t = 1 to k do

9: for i = 1 to N do

10: \(\hat{x}_{t}^{[i]}\) = sample_motion_model(ut, \(x_{t-1}^{[i]}\)) using equation (3)

11: \(\hat{w}_{t}^{[i]}\) = measurement_model(zt, \(\hat{x}_{t}^{[i]}\), m) using equations (4) and (5)

12: \(\bar{\chi}_{t}=\bar{\chi}_{t} \cup\left\{\left\langle\hat{x}_{t}^{[i]}, \hat{w}_{t}^{[i]}\right\rangle\right\}\)

13: end for

14: if the time interval equals T then

15: go to step 4 and resample the particles using the repeated coarse-to-fine scheme

16: else

17: resample the particles using the ordinary MCL scheme

18: end if

19: end for

20: return χt
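To show how the pieces fit together, the following skeleton strings the earlier sketches into the loop of Algorithm 1. The objects grid_map (with dist_field, resolution, and origin attributes), read_laser, read_odometry, read_rssi, and all parameter values are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

def improved_mcl(grid_map, database, read_laser, read_odometry, read_rssi,
                 steps=200, n_particles=500, reinit_period=50):
    """Sketch of Algorithm 1 assembled from the helper sketches above."""
    # Steps 2-3: move towards the local centroid (not shown) and query the fingerprints
    x_est, y_est = knn_coarse_position(read_rssi(), database)
    # Steps 4-7: particles in a disc around the coarse estimate with Gaussian weights
    particles, weights = initialize_particles(x_est, y_est, n=n_particles)

    odom_prev = read_odometry()
    for t in range(1, steps + 1):                       # steps 8-19
        odom_curr, scan = read_odometry(), read_laser()
        for i in range(n_particles):
            # Step 10: propagate each particle through the odometry motion model
            particles[i] = sample_odometry_motion(particles[i], odom_prev, odom_curr,
                                                  alphas=(0.05, 0.05, 0.1, 0.05))
            # Step 11: re-weight it with the likelihood field measurement model
            weights[i] = likelihood_field_weight(particles[i], scan.ranges, scan.angles,
                                                 grid_map.dist_field,
                                                 grid_map.resolution, grid_map.origin)
        weights /= weights.sum()
        if t % reinit_period == 0:
            # Steps 14-15: periodic coarse-to-fine re-initialization from Wi-Fi
            x_est, y_est = knn_coarse_position(read_rssi(), database)
            particles, weights = initialize_particles(x_est, y_est, n=n_particles)
        else:
            # Step 17: ordinary multinomial resampling of MCL
            idx = np.random.choice(n_particles, n_particles, p=weights)
            particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
        odom_prev = odom_curr
    return particles, weights
```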

4. Experiment and Discussion

Our experimental mobile robot platform is equipped with Mecanum wheels and is shown in Fig. 8a. The microcomputer is a Raspberry Pi 4B, which has a 4-core 1.5 GHz ARM Cortex-A72 CPU and 4 GB of LPDDR4 RAM. The RPLIDAR A2 laser Lidar (SLAMTEC company, Shanghai, China) has an effective measurement radius of 8 to 10 meters in practice. The reference signal nodes are domestic wireless routers, and the Wi-Fi module installed on the robot is an ESP8266. The module supports the standard IEEE 802.11b/g/n protocol, the STA, AP, and STA+AP operating modes, and a complete built-in TCP/IP protocol stack. Fig. 8b is the occupancy grid map of the environment in Fig. 1, built by using the Gmapping SLAM method [15]. There are 12 access points, and 15 sampling points are chosen in the experiment.


Fig. 8. The mobile robot platform and the grid map.

The experiments are conducted using the traditional MCL method without a Wi-Fi signal and the improved MCL method with Wi-Fi assistance, respectively. Because pose tracking (local localization) is straightforward when the initial pose is known, we only report the global localization results.

The first experiment, shown in Fig. 9, is a particle set initialization using the traditional MCL method. Without any auxiliary information, the mobile robot cannot know its initial pose. Therefore, the particle filter method scatters particles evenly throughout the whole map. The red arrows represent the potential mobile robot poses. As shown in Fig. 9, the 10 rooms are highly similar in their two-dimensional geometric structure. Besides, the corridor environment also has symmetrical and similar areas. The subsequent localization process will fail with high probability, as in the result of the previous work [31]. Consequently, a 2D laser sensor alone is not enough for mobile robot localization in indoor environments with geometrically similar structures.


Fig. 9. Initialization of the particles using traditional MCL.

Different from the traditional MCL method, the second experiment was conducted by using our proposed strategy. We placed the mobile robot in the upper left room near the door before performing the localization task. Although the mobile robot can receive all the RSSI values, the measured signal data are noisy and fluctuate. Therefore, in the next step, the scanning distance data obtained by the laser Lidar were used to compute an approximate geometric centroid position. When the mobile robot moved to the target position area, a particle set initialization was performed according to equations (16) and (17). The radius of the target circular area was set to 0.3 m by experience. Fig. 10 shows the particle initialization based on our proposed method.


Fig. 10. Initialization of the particles using proposed method.

Different from the uniform distribution over the whole map, all particles are gathered in a specific local area, namely the room with AP1 and SP1. After several iterations of the fine localization process, the particles become more concentrated, as Fig. 11 shows.


Fig. 11. Several iterations of the initialization of the proposed method.

In our work, the purpose of dispersing the sampling points in this way is to distinguish the signal values, and thus to distinguish different areas of a corridor or different rooms. Even if there is signal interference, the signal strength received from AP1 in the central region of the room is far greater than the value received from the AP in the adjacent room. The reason is easy to understand: the walls act as a barrier.

The strategy of moving to a geometric centroid position before measuring the RSSI values from the APs is meaningful and effective. We ran five tests at each of the 15 sampling positions, for 75 tests in total. The recognized subareas are specific rooms or areas near a doorway, and all of them were correctly identified. If an error within 0.5 meters is allowed, 68 of the initial coarse localization results were successful. When the error range is extended to 1 meter, the number of successes rises to 73. The detailed data are shown in Table 1.

Table 1. Coarse localization result using the proposed strategy


We recorded the errors of all global localization tests and calculated their mean. The result is shown in Table 2, where the localization error is compared with that of the work in [26]. It demonstrates that our method needs the fewest APs to obtain more accurate positioning results.

Table 2. Localization error versus the number of APs


Compared with the solution introduced in [31], which used visual features to assist robot localization in a symmetrical environment, this work has several advantages. The previous work mainly focused on closed areas that were geometrically symmetrical, using a laser rangefinder. At every moment, the robot had four possible orientations within the area enclosed by a square. Consequently, clues that could indicate direction were crucial for the mobile robot. However, a hypothetical condition was that the visual texture features in the four directions were different. That approach cannot handle multiple rooms with similar structure and visual appearance. In contrast, the method in this paper solves the above problems well.

A supplementary experiment using both methods verifies the above observation, and the result is shown in Table 3. Each method is tested 5 times at the sampling positions. In a single room, both methods achieve 100% successful localization. Situation A means that all rooms have a different visual appearance, while situation B means that all rooms have a similar visual appearance. The method of this work achieves 100% successful localization in situation B, while the method of [31] has a low success rate. In the corridor environment, the method of this work also retains its superiority.

Table 3. Localization success rate (%) compared to method [31]


5. Conclusion

In this work, to make up for the shortcomings of Wi-Fi and laser Lidar based SLAM in solving the localization problem in a geometrically similar indoor environment, we propose a coarse-to-fine paradigm. A novel approach is proposed by integrating Wi-Fi and laser SLAM. Firstly, a probabilistic occupancy grid map is built by using laser SLAM techniques. Then, the mobile robot is controlled to move to a subarea and collect distance data by using the laser rangefinder sensor. The range data are used to compute the geometric centroid of the local area. These centroid positions are selected as the sampling points, where the Wi-Fi module mounted on the mobile robot receives the RSSI values. Thirdly, the mean of the several received signal strength measurements is stored in the fingerprint database along with the coordinates of the associated sampling point. A position index correlating the RSSI with the position is also stored for later retrieval in the localization phase. Finally, a coarse localization is realized by using the RSSI value matching method, and a fine localization is obtained by adopting an improved MCL algorithm. Experimental results indicate that our proposed approach achieves a 97.33% successful localization rate, while the traditional MCL method always fails.

This work provides an effective localization solution for a mobile robot in indoor environments covered by Wi-Fi access points. However, the solution does not consider dynamic scenes or areas without Wi-Fi coverage. In the future, we will apply this scheme to more scenarios, such as large stations, factories, logistics parks, etc.

Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62262055, the Youth Project of Guizhou Province Education Department (QJJ[2022] No. 322), the Science and Technology Project of Guizhou ( [2018]1180) and the Key Laboratory Foundation of Guizhou Province Universities (QJJ[2002] No. 059).

References

  1. E. Anil and H. Dogan, "Design and implementation of a cost effective vacuum cleaner robot," Turkish Journal of Engineering, vol. 6, no. 2, pp. 166-177, Apr. 2022. https://doi.org/10.31127/tuje.830282
  2. R. K. Megalingam, S. Vishnu, V. Sasikumar, et al., "Autonomous path guiding robot for visually impaired people," Cognitive Informatics and Soft Computing, Springer, Singapore, pp. 257-266, 2019.
  3. X.V. Wang and L. Wang, "A literature survey of the robotic technologies during the COVID-19 pandemic," Journal of Manufacturing Systems, vol. 60, pp. 823-836, Jul. 2021. https://doi.org/10.1016/j.jmsy.2021.02.005
  4. B.K. Patle, G. B. L, A. Pandey, D. R. K. Parhi, and A. Jagadeesh, "A review: On path planning strategies for navigation of mobile robot," Defence Technology, vol. 15, no. 4, pp. 582-606, Aug. 2019. https://doi.org/10.1016/j.dt.2019.04.011
  5. E. Zhang and N. Masoud, "Increasing GPS localization accuracy with reinforcement learning," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 5, pp. 2615-2626, May 2021. https://doi.org/10.1109/TITS.2020.2972409
  6. B. Huang, R. Yang, B. Jia, W. Li, and G. Mao, "A theoretical analysis on sampling size in WiFi fingerprint-based localization," IEEE Transactions on Vehicular Technology, vol. 70, no. 4, pp. 3599-3608, Mar. 2021. https://doi.org/10.1109/TVT.2021.3066380
  7. A. Motroni, A. Buffi and P. Nepa, "A survey on indoor vehicle localization through RFID technology," IEEE Access, vol. 9, pp. 17921-17942, Jan. 2021. https://doi.org/10.1109/ACCESS.2021.3052316
  8. L. Barbieri, M. Brambilla, A. Trabattoni, S. Mervic, and M. Nicoli, "UWB localization in a smart factory: Augmentation methods and experimental assessment," IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-18, Apr. 2021. https://doi.org/10.1109/TIM.2021.3074403
  9. A. Balakrishnan, K. Ramana, K. Nanmaran, et al., "RSSI based localization and tracking in a spatial network system using wireless sensor networks," Wireless Personal Communications, vol. 123, no.1, pp. 879-915, Oct. 2022. https://doi.org/10.1007/s11277-021-09161-0
  10. N. Chuku and A. Nasipuri, "RSSI-Based localization schemes for wireless sensor networks using outlier detection," Journal of Sensor and Actuator Networks, vol. 10, no. 1, Mar. 2021.
  11. S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artificial intelligence, vol. 128, no. 1-2, pp. 99-141, May. 2001. https://doi.org/10.1016/S0004-3702(01)00069-8
  12. G. Ge, Y. Zhang, W. Wang, et al., "Text-MCL: autonomous mobile robot localization in similar environment using text-level semantic information," Machines, vol. 10, no. 3, pp. 169, Mar. 2022.
  13. Y. Zhang, J. Wang, X. Wang, et al., "Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 12, pp. 3981-3991, Feb. 2018. https://doi.org/10.1109/TITS.2018.2789462
  14. Z. Kang, J. Yang, Z. Yang, et al., "A review of techniques for 3d reconstruction of indoor environments," ISPRS International Journal of Geo-Information, vol. 9, no. 5, pp. 330, May. 2020.
  15. G. Grisetti, C. Stachniss and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, Feb. 2007. https://doi.org/10.1109/TRO.2006.889486
  16. W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D LIDAR SLAM," in Proc. of 2016 IEEE international conference on robotics and automation (ICRA), pp. 1271-1278, Jun. 2016.
  17. S. Kohlbrecher, O. von Stryk, J. Meyer, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation," in Proc. of IEEE International Symposium on Safety, Security, and Rescue Robotics, pp. 155-160, 2011.
  18. I. Deutsch, M. Liu and R. Siegwart, "A framework for multi-robot pose graph SLAM," in Proc. of IEEE International Conference on Real-time Computing and Robotics (RCAR), pp. 567-572, 2016.
  19. M. Betke and L. Gurvits, "Mobile robot localization using landmarks," IEEE transactions on robotics and automation, vol. 13, no. 2, pp. 251-263, Apr. 1997. https://doi.org/10.1109/70.563647
  20. X. Xu, F. Pang, Y. Ran, et al., "An indoor mobile robot positioning algorithm based on adaptive federated Kalman Filter," IEEE Sensors Journal, vol. 21, no. 20, pp. 23098-23107, Aug. 2021. https://doi.org/10.1109/JSEN.2021.3106301
  21. B. Li, Y. Lu and H.R. Karimi, "Adaptive Fading Extended Kalman Filtering for Mobile Robot Localization Using a Doppler-Azimuth Radar," Electronics, vol. 10, no. 20, pp. 2544, Oct. 2021.
  22. M. Sun, Y. Gao, Z. Jiao, et al., "RTS assisted Kalman filtering for robot localization using UWB measurement," Mobile Networks and Applications, pp. 1-10, Apr. 2022.
  23. Q. Zhang, P. Wang and Z. Chen, "An improved particle filter for mobile robot localization based on particle swarm optimization," Expert Systems with Applications, vol. 135, pp. 181-193, Nov. 2019. https://doi.org/10.1016/j.eswa.2019.06.006
  24. F. Alhomayani and M.H. Mahoor, "Deep learning methods for fingerprint-based indoor positioning: a review," Journal of Location Based Services, vol. 14, no. 3, pp. 129-200, Sep. 2020. https://doi.org/10.1080/17489725.2020.1817582
  25. Y. Zhao, Z. Li, B. Hao, et al., "Sensor selection for TDOA-based localization in wireless sensor networks with non-line-of-sight condition," IEEE Transactions on Vehicular Technology, vol. 68, no.10, pp. 9935-9950, Aug. 2019. https://doi.org/10.1109/TVT.2019.2936110
  26. G. Lee, B.C. Moon, S. Lee, et al., "Fusion of the SLAM with Wi-Fi-based positioning methods for mobile robot-based learning data collection, localization, and tracking in indoor spaces," Sensors, vol. 20, no. 18, pp. 5182, Sep. 2020.
  27. G. Li, E. Geng, Z. Ye, et al., "Indoor positioning algorithm based on the improved RSSI distance model," Sensors, vol. 18, no. 9, pp. 2820, Aug. 2018.
  28. V. Bianchi, P. Ciampolini and I. De Munari, "RSSI-based indoor localization and identification for ZigBee wireless sensor networks in smart homes," IEEE Transactions on Instrumentation and Measurement, vol. 68, no.2, pp. 566-575, Feb. 2019. https://doi.org/10.1109/TIM.2018.2851675
  29. D.J. Suroso, M. Arifin and P. Cherntanomwong, "Distance-based Indoor Localization using Empirical Path Loss Model and RSSI in Wireless Sensor Networks," Journal of Robotics and Control (JRC), vol. 1, no.6, pp. 199-207, 2020. https://doi.org/10.18196/jrc.1638
  30. Y. Xie, Y. Wang, A. Nallanathan, et al., "An improved K-nearest-neighbor indoor localization method based on spearman distance," IEEE Signal Processing Letters, vol. 23, no. 3, pp. 351-355, Mar, 2016. https://doi.org/10.1109/LSP.2016.2519607
  31. G. Ge, Y. Zhang, Q. Jiang, et al., "Visual features assisted robot localization in symmetrical environment using laser SLAM," Sensors, vol. 21, no. 5, pp. 1772, 2021.