
Voice Command-based Prediction and Follow of Human Path of Mobile Robots in AI Space

  • Received : 2023.02.20
  • Accepted : 2023.03.31
  • Published : 2023.04.30

Abstract

This research addresses the sound-command-based human-tracking problem for an autonomous cleaning mobile robot in a networked AI Space. To solve the problem, the differences among the traveling times of the sound command to each of three microphones are used to calculate the distance and orientation of the sound source from the cleaning mobile robot, which carries the microphone array. The cross-correlation between two signals is applied to detect the time difference between them, which provides a more reliable and precise value than conventional methods. To generate the tracking direction toward the sound command, fuzzy rules are applied, and the results are used to control the cleaning mobile robot in real time. Finally, the experimental results show that the proposed algorithm works well even though the mobile robot knows little about the environment.

1. Introduction

An AI Space is a space in which many intelligent sensor devices with learning capability are distributed throughout. These devices have sensing, processing, and networking functions, and are termed networked AI devices (NAIDs).

In this research, an AI Space is used to improve human location information. A mobile robot cooperates with multiple intelligent sensors distributed in the environment [1]. The AI Space with networked sensors recognizes the walking human's sound command and the mobile robot's position, and gives control commands to the robot so that it follows the walking human along the shortest-time path [2][3]. To track the walking human's sound command with a mobile robot, a Fuzzy rule-based control algorithm has been proposed and applied. To simulate tracking of a moving sound command, the sound commands are placed along straight, curved, and circular paths, which demonstrates not only the estimation accuracy of the sound command but also the tracking performance of the mobile robot [4].

2. AI Space with Sound Command

As shown in Fig. 1, the AI Space [4] is a space throughout which several intelligent devices are distributed. These intelligent devices have sensing, processing, and networking functions, and they are termed networked AI devices (NAIDs). These devices observe and track the positions and behavior of both humans and robots that coexist in the AI Space in real time. The information acquired by each NAID is shared with the other NAIDs through the network communication system. Based on the accumulated information, the environment as a system is capable of understanding the actions of humans. In order to support humans, the environment/system utilizes machines including computers and robots [5].


Fig. 1 Structure of the AI space with the NAID and sound command

3. Detection and Tracking of Sound Command

To estimate the distance to the sound command, three microphones are installed in a line, as shown in Fig. 2. There is a constant gap (15 cm) between adjacent microphones \(M_1\), \(M_2\), and \(M_3\). The distances from each microphone to the sound command are defined as \(R_1\), \(R_2\), and \(R_3\), which can be estimated using the traveling time of the sound signal.


Fig. 2 Distance measurement principle in 2D space

From Fig. 2, the distances from the sound command to the microphones can be represented as \(R_1\), \(R_2\), and \(R_3\) as follows:

\(\begin{array} { l } { R _ { 1 } = v \cdot t _ { 1 } = x } \\ { R _ { 2 } = v \cdot t _ { 2 } = x + v \cdot \Delta t _ { 12 } } \\ { R _ { 3 } = v \cdot t _ { 3 } = x + v \cdot \Delta t _ { 13 } } \end{array}\)

where \(v\) is the speed of sound, \(t_i\) is the traveling time from the sound command to microphone \(M_i\), and \(\Delta t_{12}\) and \(\Delta t_{13}\) are the arrival-time differences measured relative to the nearest microphone.
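The time differences \(\Delta t_{12}\) and \(\Delta t_{13}\) are obtained from the peak of the cross-correlation between two microphone signals, as noted in the abstract. The sketch below shows the idea in Python; the sampling rate, the synthetic clap signal, and the function name `estimate_delay` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time difference of sig_b relative to sig_a
    by locating the peak of the full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # lag in samples
    return lag / fs                            # delay in seconds

# Example: a synthetic clap arriving ~0.4 ms later at the second microphone
fs = 48_000                                    # sampling rate [Hz] (assumed)
t = np.arange(0, 0.02, 1 / fs)
clap = np.exp(-t * 800) * np.sin(2 * np.pi * 2000 * t)
delay_samples = 19                             # ~0.4 ms at 48 kHz
sig1 = np.concatenate([clap, np.zeros(200)])
sig2 = np.concatenate([np.zeros(delay_samples), clap,
                       np.zeros(200 - delay_samples)])

dt12 = estimate_delay(sig1, sig2, fs)          # ≈ 0.4 ms
v_sound = 343.0                                # speed of sound [m/s]
print(f"Δt12 = {dt12 * 1e3:.2f} ms, "
      f"path difference = {v_sound * dt12 * 100:.1f} cm")
```

With a 15 cm microphone spacing, the recovered path difference \(v \cdot \Delta t_{12}\) stays within the geometric bound, which is a useful sanity check on the correlation peak.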

The states of a sound command can be estimated if the initial state and input are given for the state transition model. Therefore, the states for the next inputs can be estimated by estimating the linear velocity \(v_k\) and angular velocity \(\omega_k\) of the walking human issuing the sound command, using the Kalman filter as a state estimator. From the linear velocity/acceleration and rotational angular velocity/acceleration data, the next states can be approximated by the following first-order equations:


Fig. 3 Moving direction kinematics by sound command

\(v _ { k + n } = \hat { v } _ { k } + \hat { a } _ { l k } n T\)

\(\omega _ { k + n } = \hat { \omega } _ { k } + \hat { a } _ { \omega k } n T\)
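As a minimal illustration, the two prediction equations above translate directly into a function; the argument names and the availability of the Kalman estimates \(\hat{v}_k\), \(\hat{a}_{lk}\), \(\hat{\omega}_k\), and \(\hat{a}_{\omega k}\) are assumptions made for the sketch.

```python
def extrapolate(v_hat, a_lin_hat, w_hat, a_ang_hat, n, T):
    """First-order prediction n steps ahead with sampling period T [s]:
    v_{k+n} = v̂_k + â_lk·n·T,  ω_{k+n} = ω̂_k + â_ωk·n·T."""
    v_next = v_hat + a_lin_hat * n * T
    w_next = w_hat + a_ang_hat * n * T
    return v_next, w_next
```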

In Fig. 3, the result includes possible noise since it is a dynamically varying system, even though the noise is suppressed by the Kalman filter. Therefore, the least-squares estimation method, which has robust anti-noise characteristics, is utilized [6].

\(\hat { E } = ( A ^ { T } A ) ^ { - 1 } A ^ { T } y\)
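As a hedged illustration of the normal-equation estimator \(\hat{E} = (A^T A)^{-1} A^T y\), the sketch below fits a constant-acceleration velocity model \(y = v_0 + a t\) to noisy samples; the model choice, sampling period, and synthetic data are assumptions made only for the example.

```python
import numpy as np

# Noisy velocity samples y_k at times t_k; model y = v0 + a·t (assumed)
T = 0.1                                            # sampling period [s]
t = np.arange(20) * T
rng = np.random.default_rng(0)
y = 12.0 + 3.0 * t + rng.normal(0, 0.5, t.size)    # synthetic measurements

A = np.column_stack([np.ones_like(t), t])          # design matrix
E_hat = np.linalg.solve(A.T @ A, A.T @ y)          # solves (AᵀA)Ê = Aᵀy
v0_hat, a_hat = E_hat                              # ≈ 12.0 and 3.0
```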

From the estimated inputs and using the state transition model, the trajectory of a moving object can be estimated as follows:

\(\hat { x } _ { k + m } = x _ { k } + \sum _ { h = 0 } ^ { m } v ( h ) \operatorname { cos } [ \theta ( h ) ] T\)

\(\hat { y } _ { k + m } = y _ { k } + \sum _ { h = 0 } ^ { m } v ( h ) \operatorname { sin } [ \theta ( h ) ] T\)

\(v ( h ) = \hat { v } _ { k } + \hat { a } _ { l k } h T\)
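A short sketch of the trajectory prediction above follows; it assumes the heading \(\theta(h)\) is integrated from the extrapolated angular velocity \(\omega(h)\), a step the text does not spell out, and all names are illustrative.

```python
import math

def predict_trajectory(x_k, y_k, theta_k,
                       v_hat, a_lin_hat, w_hat, a_ang_hat, m, T):
    """Accumulate v(h)·cos[θ(h)]·T and v(h)·sin[θ(h)]·T for h = 0..m,
    per the prediction equations above."""
    x, y, theta = x_k, y_k, theta_k
    for h in range(m + 1):
        v = v_hat + a_lin_hat * h * T      # v(h) = v̂_k + â_lk·h·T
        w = w_hat + a_ang_hat * h * T      # ω(h), by the same extrapolation
        x += v * math.cos(theta) * T
        y += v * math.sin(theta) * T
        theta += w * T                     # heading integrated from ω(h) (assumed)
    return x, y
```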

4. Following Method for Walking Human

Tracking the sound command requires an intelligent algorithm, since the driving direction and orientation must be changed dynamically according to the estimated position and orientation of the sound command. Deciding how to drive efficiently from the current location to the estimated sound command is a real-time task for the mobile robot. Therefore, in this research, a Fuzzy rule-based algorithm has been adopted for the tracking control [5]. The performance of a Fuzzy model depends on the structure of its rules, and to obtain high performance from a Fuzzy rule-based system, an optimization process is generally required [6]. The Fuzzy rules are given as:

Rule: IF Distance is \(A_{Di}\) and Angle is \(B_{Ai}\) THEN Left-Output is \(X_{Li}\) and Right-Output is \(X_{Ri}\), where \(X_{Li}\) and \(X_{Ri}\) are the displacements of the left and right wheels, respectively, and \(A_{Di}\) and \(B_{Ai}\) are the distance and angle between the mobile robot and the sound command, respectively.


Fig. 4 Fuzzy input and output variables

The tracking direction of the mobile robot toward the sound command is determined by Fuzzy inference using the input variables \(d\), the distance to the sound command, and \(\theta\), the angle to it. The practical range of \(d\) is 0 m to 8 m, and that of \(\theta\) is -90° to 90°.

The tracking direction then provides the input to the left and right motors as a voltage of 0 V to 12 V. For the de-fuzzification that determines the output value, Mamdani's center-of-gravity method has been used with the qualitative linguistic variables shown in Fig. 4, and the input and output Fuzzy rules are summarized in Table 1.
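For illustration, a toy Mamdani controller with triangular membership functions and center-of-gravity de-fuzzification is sketched below; the two rules and the membership shapes are invented for the example and do not reproduce Table 1.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_wheel_voltage(d, theta):
    """Toy two-rule Mamdani inference for one wheel.
    Inputs: distance d in [0, 8] m, angle theta in [-90, 90] deg.
    Output: a voltage in [0, 12] V via center-of-gravity."""
    u = np.linspace(0.0, 12.0, 241)               # output universe [V]

    # Rule 1: IF d is FAR  and theta is LEFT   THEN voltage is HIGH
    w1 = min(tri(d, 4, 8, 12), tri(theta, -135, -45, 0))
    # Rule 2: IF d is NEAR and theta is CENTER THEN voltage is LOW
    w2 = min(tri(d, -4, 0, 4), tri(theta, -45, 0, 45))

    # Clip each rule's output set, aggregate by max, take the centroid
    agg = np.maximum(np.minimum(w1, tri(u, 6, 12, 18)),
                     np.minimum(w2, tri(u, -6, 0, 6)))
    if agg.sum() == 0.0:
        return 0.0
    return float((u * agg).sum() / agg.sum())

print(fuzzy_wheel_voltage(d=6.0, theta=-30.0))    # a HIGH-leaning voltage
```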

The trajectory tracking performance of the mobile robot depends strongly on the curvature of the path and on the velocity. Generally, high-speed motion on a curved path causes large tracking error because of slippage induced by the centripetal force. The slippage also varies greatly depending on the friction between the wheels and the ground [4].

Table 1. Input and output fuzzy rules


5. Experimental Results

To demonstrate the proposed method, we present an example. It is assumed that the velocity limit of the mobile robot is 30 cm/sec and that the initial location of the mobile robot is (190, 100) cm with respect to the reference frame. The velocity and angular velocity of the moving object are as follows:


Fig. 5 Predicted trajectory of the walking human (reference line) and trajectory of the mobile robot (proposed method and NAID-based method)


Fig. 6 Trajectory errors of mobile robot

\(v _ { k } = 30 ( \operatorname { cos } ( 0.01 k ) + 1 ) + \xi _ { v } [ cm / sec ]\)

\(\omega _ { k } = 0.7 \operatorname { sin } ( 0.03 k + \frac { \pi } { 1.5 } ) + \xi _ { \omega } [ rad / sec ]\)

The forward velocity and rotational angular velocity of the moving object are corrupted by Gaussian random variables with variances of 2 and 0.1, respectively. In Fig. 5, "sound+NAID" denotes the tracking result obtained by fusing the NAID location information with the sound command; it is compared with the location recognition result obtained using the NAID alone. Fig. 6 shows the distance error between the mobile robot and the walking human issuing the sound command, together with the error between the estimated velocity and the real velocity.
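The moving object's reference path can be reproduced in simulation by integrating a unicycle model driven by \(v_k\) and \(\omega_k\) above; in the sketch below, the sampling period \(T\) and the object's starting point are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.1                                    # sampling period [s] (assumed)
k = np.arange(1000)

# Velocity commands with Gaussian noise of variance 2 and 0.1
v = 30.0 * (np.cos(0.01 * k) + 1.0) + rng.normal(0, np.sqrt(2.0), k.size)   # [cm/s]
w = 0.7 * np.sin(0.03 * k + np.pi / 1.5) + rng.normal(0, np.sqrt(0.1), k.size)  # [rad/s]

# Integrate the unicycle model to obtain the walking human's path
x, y = np.empty(k.size), np.empty(k.size)
x[0], y[0], th = 190.0, 100.0, 0.0         # start point and heading (assumed)
for i in range(1, k.size):
    th += w[i - 1] * T
    x[i] = x[i - 1] + v[i - 1] * np.cos(th) * T
    y[i] = y[i - 1] + v[i - 1] * np.sin(th) * T
```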

6. Conclusion

In this paper, a sound command was tracked by a cleaning mobile robot carrying an array of microphones that detects the distance and orientation to the sound command. The artificial sound was generated by clapping. The sound signal was received by the three microphones, and arrival-time-difference algorithms were utilized to identify the location and orientation of the sound command. The goal of this research was for the mobile cleaning robot to track the sound command. The proposed method is summarized as follows.

Position estimation of a moving object, such as a walking human, from the sound command by combining the arrival times of the sound at the three microphones.

References

  1. T.S. Jin, J.M. Lee, and H. Hashimoto, "Position estimation of mobile robot using images of moving target in AI Space with distributed sensors," Advanced Robotics, vol.20, no.6, pp.737-762, (2006). https://doi.org/10.1163/156855306777361604
  2. H. Nock, G. Iyengar, and C. Neti, "Speaker localization using audiovisual synchrony: An empirical study, Image and Video Retrieval," Lecture Notes in Computer Science 2728, Springer, pp. 488-499, (2003).
  3. M.J. Er, T.P. Tan, and S.Y. Loh, "Control of a mobile robot using generalized dynamic fuzzy neural networks," Microprocessors and Microsystems, vol.28, no.9, pp. 491-498, (2004). https://doi.org/10.1016/j.micpro.2004.04.002
  4. Chanyoung Ju and Hyoung Il Son, "Autonomous Tracking of Micro-Sized Flying Insects Using UAV: A Preliminary Results," Journal of Korean Society of Industry Convergence, vol.23, no.2, pp.125-137, (2020).
  5. Jung-Seok Kang, Sung-Hoon Noh, Du-Beum Kim, Ho-Yuong Bae, Sang-Hyun Kim, O-Duck Im, and Sung-Hyun Han, "A Study on the Real-Time Path Control of Robot for Transfer Automation of Forging Parts in Manufacturing Process for Smart Factory," Journal of Korean Society of Industry Convergence, vol.22, no.3, pp.281-292, (2019).
  6. Tae-Seok Jin, "LQ control by linear model of Inverted Pendulum Robot for Robust Human Tracking," Journal of Korean Society of Industry Convergence, vol.23, no.1, pp.49-55, (2020).