A Study on Real Time Control of Moving Stuff Action Through Iterative Learning for Mobile-Manipulator System

  • Received : 2019.02.22
  • Accepted : 2019.06.05
  • Published : 2019.07.31

Abstract

This study proposes a new approach to controlling object-moving actions through iterative learning for a dual-arm robot in a smart factory. When a robot moves an object with two arms, not only the position of each hand but also the contact force at the surface of the object must be considered. However, it is not easy to determine every parameter needed to plan the trajectory of the object and to grasp it across a variety of compliant environments. On the other hand, a human knows how to move an object gracefully using the eyes and the feel of the hands, which suggests that a robot can learn position and force from human demonstration and reuse the learned task in a variety of cases. This paper suggests a way to learn a dynamic equation that accounts for both position and path.

1. Introduction

A smart factory integrates product planning, design, production, distribution, and sales with IT technology in order to produce customized products at minimum cost and time. Aiming to strengthen the manufacturing industry, attract talented people, and create quality jobs, the smart factory integrates production processes, procurement logistics, and services. It makes possible productivity improvement, energy saving, a safe production environment, and a flexible production system capable of mixed multi-product production. It is a system in which the manufacturing steps are automated and informatized on the basis of IoT and CPS, and the entire upstream and downstream industry is serviced in real time as if it were one factory. In the past, factory automation was built for mass production within a single factory, whereas the smart factory, often associated with the Fourth Industrial Revolution, connects all factories as if they were one. The Fourth Industrial Revolution is a strategy to cope with the social, technological, economic, ecological, and political changes faced by the German manufacturing industry by integrating ICT. It aims at a smart factory that takes advantage of ICT-related technologies such as the Internet of Things, enterprise software, location information, security, cloud computing, big data, and virtual reality [1].

A smart factory combines ICT with the traditional manufacturing industry so that the facilities and processes of individual factories become intelligent and interconnected, and all production information can be shared and utilized in real time to optimize production and operation. It is a factory equipped with a production system that can be linked with upstream and downstream factories so that cooperative operation can be sustained.

In this study, we propose the design and motion control of an autonomous mobile robot with dual arms for the smart factory, and analyze its reliability through simulation and performance experiments.

To evaluate the stability of the dual-arm robot numerically by simulation, we analyze the optimal posture for each motion using a dual-arm manipulation performance index. When the dual-arm robot interacts with humans, representative postures are defined according to the shape of the object, and an optimal-posture evaluation function for each posture is used to select the posture suited to each situation. With the optimal posture provided, the dual-arm robot's imitation-learning performance for efficient object grasping and moving tasks is verified.

To realize autonomous, unmanned operation in the factory, autonomous driving control based on voice recognition is performed, and the reliability of this unmanned operation is verified through performance experiments.

2. Kinematics analysis of robot system

2.1 Overview

In order for the dual-arm robot to work on a target object, the end effector must be moved to the object. Therefore, the position and orientation of the work object must be representable in the reference coordinate system. The position of a point P seen from coordinate system {A} is as follows.

\(\begin{align}{ }^{A}P=\left[\begin{array}{l}P_{x} \\ P_{y} \\ P_{z}\end{array}\right]\end{align}\)       (1)

This indicates the distance from the origin of coordinate system {A} in the x, y, and z directions. Further, the orientation of the object with respect to the reference coordinate system can be represented by the rotation of a new coordinate system attached to the end effector relative to the reference coordinate system.

That is, it is a relative coordinate system with respect to the reference coordinate system. Generally, the change in orientation of the x, y, and z axes is represented by a 3×3 matrix, called a rotation matrix. The rotation matrix of the relative coordinate system {B} with respect to the reference coordinate system {A} is written as ARB.

The position vector indicating the position of the end effector with respect to the reference coordinate system and the rotation matrix indicating its orientation are combined into a single homogeneous transformation matrix. Using the position vector d = {x y z}T in three-dimensional space and the 3×3 rotation matrix R, the homogeneous matrix is written as the following equation.

\(\begin{align}{ }^{0} H_{1}=\left[\begin{array}{cc}R & d \\ 0 & 1\end{array}\right]\end{align}\)       (2)
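
As a simple illustration of Eq. (2), the sketch below (a minimal example, not from the paper; function and variable names are the author's own) assembles a 4×4 homogeneous transformation matrix from a rotation matrix R and a position vector d using NumPy.

```python
import numpy as np

def homogeneous(R, d):
    """Build the 4x4 homogeneous transform of Eq. (2) from a 3x3 rotation R
    and a position vector d."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(d, dtype=float).ravel()
    return H

# Example: rotation of 90 degrees about z and a translation of (1, 2, 3)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
H = homogeneous(Rz, [1.0, 2.0, 3.0])
print(H)
```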

2.2 Forward Kinematics

Forward kinematics refers to finding the position and orientation of the end effector from an n × 1 joint vector in a manipulator with n degrees of freedom. Generally, it is obtained as the product of homogeneous matrices, as in (2). To determine each homogeneous matrix, the coordinate-axis assignment proposed by Denavit and Hartenberg is commonly used. The coordinate-system assignment rules, known as the D-H notation, are as follows [2].

Set 1. The xi axis is set perpendicular to the zi-1 axis.

Set 2. The xi axis is set so that it intersects the zi-1 axis.

In addition, the process of obtaining a homogeneous matrix using the coordinate axes set forth above is as follows.

Process 1. Rotate the {i - 1} coordinate system about the zi-1 axis by θi to create the {i - 1}z,θ coordinate system.

Process 2. Move along the zi-1 axis by di to create the {i - 1}z,d coordinate system.

Process 3. Move by ai along the x axis of the {i - 1}z,d coordinate system to create the {i - 1}z,a coordinate system.

Process 4. Finally, rotate by αi about the x axis of the {i - 1}z,a coordinate system to create the {i} coordinate system.[3]

If we make the homogeneous matrix according to the above procedure, it is as follows.

\(\begin{align}\begin{array}{l}{ }^{i-1} A_{i}=\operatorname{Rotation}\left(z, \theta_{i}\right) \operatorname{Trans}\left(z, d_{i}\right) \operatorname{Trans}\left(x, a_{i}\right) \operatorname{Rotation}\left(x, \alpha_{i}\right) \\ =\left[\begin{array}{cccc}c \theta_{i} & -s \theta_{i} & 0 & 0 \\ s \theta_{i} & c \theta_{i} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right]\left[\begin{array}{cccc}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & d_{i} \\ 0 & 0 & 0 & 1\end{array}\right]\left[\begin{array}{cccc}1 & 0 & 0 & a_{i} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right]\left[\begin{array}{cccc}1 & 0 & 0 & 0 \\ 0 & c \alpha_{i} & -s \alpha_{i} & 0 \\ 0 & s \alpha_{i} & c \alpha_{i} & 0 \\ 0 & 0 & 0 & 1\end{array}\right]\end{array}\end{align}\)       (3)
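
Under the D-H convention of Eq. (3), each link transform is the product of a rotation about z by θi, a translation along z by di, a translation along x by ai, and a rotation about x by αi. The following sketch evaluates that product in closed form; it is an illustrative helper, not code from the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Link transform (i-1)A_i of Eq. (3):
    Rot(z, theta) * Trans(z, d) * Trans(x, a) * Rot(x, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```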

The kinematic structure of the manipulator used in this paper is shown in Fig. 1. The autonomous mobile robot being developed for this research has dual arms, each with two degrees of freedom in the shoulder, one degree of freedom in the upper arm, one degree of freedom in the elbow, and one degree of freedom in the wrist. The D-H parameters of this dual-arm robot are listed in Table 1.

Fig. 1 Modeling of 5-DOF robot

Table 1 lists the D-H parameters, and the homogeneous transformation matrices of each link obtained from them are as follows.

\(\begin{align}\begin{array}{l} { }^{0} H_{1}=\left[\begin{array}{cccc} c_{1} & 0 & -s_{1} & 0 \\ s_{1} & 0 & c_{1} & 0 \\ 0 & -1 & 0 & 260 \\ 0 & 0 & 0 & 1 \end{array}\right] \quad { }^{1} H_{2}=\left[\begin{array}{cccc} s_{2} & 0 & c_{2} & 0 \\ -c_{2} & 0 & s_{2} & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] \\ { }^{2} H_{3}=\left[\begin{array}{cccc} 0 & -c_{3} & s_{3} & 0 \\ 0 & -s_{3} & -c_{3} & 0 \\ 1 & 0 & 0 & 250 \\ 0 & 0 & 0 & 1 \end{array}\right] \quad { }^{3} H_{4}=\left[\begin{array}{cccc} s_{4} & 0 & c_{4} & 0 \\ -c_{4} & 0 & s_{4} & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\\ { }^{4} H_{5}=\left[\begin{array}{cccc} c_{5} & -s_{5} & 0 & 0 \\ s_{5} & c_{5} & 0 & 0 \\ 0 & 0 & 1 & 200 \\ 0 & 0 & 0 & 1 \end{array}\right] \end{array}\end{align}\)      (4)

Table 1. D-H parameter table(1)

The position and orientation of the end effector can be expressed as 0H5. This is obtained by multiplying the homogeneous matrices of each link in the following sequence.

\(\begin{align}{ }^{0} H_{5}={ }^{0} H_{1}{ }^{1} H_{2}{ }^{2} H_{3}{ }^{3} H_{4}{ }^{4} H_{5}=\left(\begin{array}{cccc}h_{1 x} & h_{2 x} & h_{3 x} & p_{x} \\ h_{1 y} & h_{2 y} & h_{3 y} & p_{y} \\ h_{1 z} & h_{2 z} & h_{3 z} & p_{z} \\ 0 & 0 & 0 & 1\end{array}\right)\end{align}\)       (5)
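
Equation (5) is a straightforward chain of matrix products. The sketch below multiplies a list of per-link homogeneous matrices (for instance, the five matrices of Eq. (4) evaluated at given joint angles) and splits the result into the orientation block and the position vector. Only 0H1 and 4H5 are written out here, following Eq. (4); the remaining links would be built in the same way. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def chain_transforms(link_transforms):
    """Eq. (5): 0H5 = 0H1 @ 1H2 @ 2H3 @ 3H4 @ 4H5.
    Returns the 3x3 orientation block (h components) and the position vector p."""
    H = np.eye(4)
    for Hi in link_transforms:
        H = H @ Hi
    return H[:3, :3], H[:3, 3]

def H_0_1(t1):
    """0H1 of Eq. (4) as a function of the first joint angle."""
    c, s = np.cos(t1), np.sin(t1)
    return np.array([[c,  0, -s,   0],
                     [s,  0,  c,   0],
                     [0, -1,  0, 260],
                     [0,  0,  0,   1]], dtype=float)

def H_4_5(t5):
    """4H5 of Eq. (4) as a function of the last joint angle."""
    c, s = np.cos(t5), np.sin(t5)
    return np.array([[c, -s, 0,   0],
                     [s,  c, 0,   0],
                     [0,  0, 1, 200],
                     [0,  0, 0,   1]], dtype=float)

# 1H2, 2H3 and 3H4 would be defined from Eq. (4) in the same manner and
# passed to chain_transforms([H_0_1(q1), H_1_2(q2), ..., H_4_5(q5)]).
```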

In the above equation, h1x … h3z represent the orientation components and px, py, pz represent the position components.

Using the abbreviations below, the orientation components obtained from the above equation are summarized as follows.

Si = sin(θi)       (6)

Ci = cos(θi)       (7)

h1x = s5(s1c3 - c1c2s3) + c5(c4(s1s3 + c1c2c3) + c1s2s4)       (8)

h1y = -s5(c1c3 + s1c2s3) - c5(c4(c1s3 - s1c2c3) - s1s2s4)       (9)

h1z = -c5(c2s4 - s2c3c4) - s2s3s5       (10)

h2x = c5(s1c3 - c1c2s3) - s5(c4(s1s3 + c1c2c3) + c1s2s4)        (11)

h2y = s5(c4(c1s3 - s1c2c3) - s1s2s4) - c5(c1c3 + s1c2s3)       (12)

h2z = s5(c2s4 - s2c3c4) - s2s3c5       (13)

h3x = -s4(s1s3 + c1c2c3) + c1s2c4       (14)

h3y = s1s2c4 + s4(c1s3 - s1c2c3)       (15)

h3z = -s2c3s4 - c2c4       (16)

The position components are as follows.

px = 200(s4(s1s3 + c1c2c3) + c1s2c4) + 250c1s2       (17)

py = -200(s4(c1s3 - s1c2c3) - s1s2c4) + 250s1s2       (18)

pz = 260 - 250c2 - 200(c2c4 - s2c3s4)       (19)
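
The closed-form expressions (17)-(19) can be evaluated directly for a set of joint angles using the abbreviations of Eqs. (6)-(7). The short sketch below does so; it simply transcribes the equations above and makes no claim beyond them.

```python
import numpy as np

def tool_position(q):
    """Evaluate the position components of Eqs. (17)-(19) for joint angles q (radians).
    Note that theta_5 does not appear in the position expressions."""
    s1, s2, s3, s4 = np.sin(q[0]), np.sin(q[1]), np.sin(q[2]), np.sin(q[3])
    c1, c2, c3, c4 = np.cos(q[0]), np.cos(q[1]), np.cos(q[2]), np.cos(q[3])
    px = 200 * (s4 * (s1 * s3 + c1 * c2 * c3) + c1 * s2 * c4) + 250 * c1 * s2
    py = -200 * (s4 * (c1 * s3 - s1 * c2 * c3) - s1 * s2 * c4) + 250 * s1 * s2
    pz = 260 - 250 * c2 - 200 * (c2 * c4 - s2 * c3 * s4)
    return np.array([px, py, pz])

# Right-arm joint angles used in the text (degrees)
p_right = tool_position(np.radians([50, 20, 40, 43, 40]))
```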

For the right arm, the joint angles from the shoulder to the tool tip are set to 50°, 20°, 40°, 43°, and 40°, and for the left arm to -50°, -20°, -40°, -43°, and -40°. The right side of the figure shows the resulting homogeneous transformation matrices from the shoulder to the elbow and from the shoulder to the tool tip of each arm. The elbow coordinate of the right arm is (155, 168, 80) and its tool-tip coordinate is (170, 340, -92); the elbow coordinate of the left arm is (155, 168, -360) and its tool-tip coordinate is (170, 340, -184).

2.3 Structure design of dual-arm robot

The dual-arm robot used in the actual experiment has two degrees of freedom in the shoulder of each arm, one degree of freedom in the upper arm, one degree of freedom in the elbow, and one degree of freedom in the wrist. A gripper-type robot hand was attached so that an object can be grasped at the tool end.

Fig. 2 shows the robot used for the test.

Fig. 2 Structure of 5-DOF Dual-arm robot

In the detailed design, each link was designed so that the motor itself forms part of the structure, in order to reduce the weight and length of the links while considering mobility and the weight, rigidity, and stability of each joint; the outer housing serves as the member that fixes the motor. To control the manipulator precisely, harmonic gears with a gear ratio of 100:1 were used to minimize joint backlash. Each joint is driven by a DC motor, with a motor of appropriate size chosen according to the position of the joint. For grasping objects, a gripper driven by a single servo motor was made.

The controller consists of one master controller and 13 distributed slave controllers. Since the distributed controllers control individual joints independently, unlike an integrated controller, individual control is possible and the system can be upgraded by adding joints in the future.

PID control, the control method applied to most robots for joint control, has difficulty maintaining control performance under manipulator parameter variation, load variation, and high-precision, high-speed operation. Since accurate and fast control is difficult with PID control alone, a controller that combines PID and fuzzy control is used to overcome this disadvantage. The fuzzy controller is designed by considering the various operating conditions of the system; based on the designed rule base, if-then rules are applied in the fuzzy controller with e corresponding to the difference between the target joint position and the current position, and the final output is determined through a defuzzification process.[4]
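
A minimal sketch of the PID-plus-fuzzy idea is shown below. It is not the authors' controller: the membership functions, rule table, and gains are illustrative assumptions only, intended to show how a fuzzy correction computed from the position error e and its derivative can be added to a conventional PID term and defuzzified by a weighted average.

```python
def fuzzy_pid_step(e, e_prev, integral, dt, kp=2.0, ki=0.1, kd=0.05):
    """One control step: PID output plus a coarse fuzzy correction.
    Gains, membership widths, and rule outputs are illustrative, not from the paper."""
    integral += e * dt
    de = (e - e_prev) / dt

    # Conventional PID part
    u_pid = kp * e + ki * integral + kd * de

    # Coarse fuzzy part: three triangular sets (Negative / Zero / Positive) on e and de
    def membership(x, width=1.0):
        neg = max(min(-x / width, 1.0), 0.0)
        pos = max(min(x / width, 1.0), 0.0)
        zero = max(1.0 - abs(x) / width, 0.0)
        return {"N": neg, "Z": zero, "P": pos}

    me, mde = membership(e), membership(de)
    # If-then rule table: output singletons for each (e, de) combination (assumed values)
    rule_out = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
                ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
                ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0}
    num = sum(min(me[a], mde[b]) * out for (a, b), out in rule_out.items())
    den = sum(min(me[a], mde[b]) for (a, b) in rule_out) or 1.0
    u_fuzzy = num / den  # defuzzification by weighted average of fired rules

    return u_pid + u_fuzzy, integral
```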

Fig. 3 The basic structure of controller

3. Path analysis for moving an object

In the operation of moving an object, the position of the object is the quantity of interest, so it is desirable to learn a representation of the object's position. The position of the robot arms depends on the position of the object, so the arm positions are determined from the object position that is to be learned.

Fig. 4 Force and path in manipulating both hands

The movement path (PO) of the object held by both arms is determined from the movement paths (PHL, PHR) of the two hands as follows.

PO = 0.5(PHL + PHR)       (20)

Since the rotation component (θO) is determined from the direction in which the hands contact the object, as shown in the figure, it can be taken either as the rotation θHL about the z axis or as the rotation θHR about the z axis. Here θ is the rotation component represented in the angle-axis form.

To move the object without dropping it, the force with which the object is held by both hands must also be learned. A force can be converted into a distance by means of a spring model, so the repulsive force acting on the robot through contact can likewise be converted into the distance between the intended hand paths (PHL, PHR) and PO.

f = kD       (21)
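
The two relations above translate directly into code: the object path is the midpoint of the two hand paths (Eq. (20)), and a measured contact force is converted to an equivalent displacement through the spring constant k (Eq. (21)). The sketch below assumes the hand paths are stored as NumPy arrays; the stiffness value is illustrative, not taken from the paper.

```python
import numpy as np

def object_path(p_hl, p_hr):
    """Eq. (20): object path as the midpoint of the left- and right-hand paths."""
    return 0.5 * (np.asarray(p_hl, dtype=float) + np.asarray(p_hr, dtype=float))

def force_to_distance(f, k=500.0):
    """Eq. (21): convert a contact force f into a displacement D via f = k*D.
    The stiffness k is an illustrative value."""
    return np.asarray(f, dtype=float) / k
```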

Since θO is a parameter that does not change once the object is lifted, only PO and D (left, right) need to be learned, and these values are stored in chronological order while a person demonstrates the task.

To learn the position of an object, the task of lifting the object must first be demonstrated. During the demonstration, the kinematics can be solved to compute the position of the object over time. In the expressions above, g is the target location: a six-dimensional vector composed of the target position x, y, z of the object and its target rotation θx, θy, θz, generally taken as the pose at the end of the demonstration. \(y\) and \(\dot{y}\) form a six-dimensional vector indicating the current position and velocity of the object. The current position of the object corresponds to the target position and orientation of the hands, and this value is computed by solving the robot kinematics during the demonstration.

Because Px, Py, Pz, θx, θy, θz are independent of one another, six DMPs can be used to train each component separately.[5]
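
A minimal discrete DMP sketch for a single component (one of Px, Py, Pz, θx, θy, θz) is given below; six such models, trained independently, would reproduce the full pose as described above. The formulation (a second-order transformation system plus a radial-basis forcing term fitted to one demonstration by locally weighted regression) follows the standard DMP literature and is an illustrative reconstruction, not the authors' exact implementation.

```python
import numpy as np

class DMP1D:
    """Single-component discrete dynamic movement primitive (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
        self.n_basis, self.alpha, self.beta, self.alpha_x = n_basis, alpha, beta, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.widths = n_basis ** 1.5 / self.centers
        self.w = np.zeros(n_basis)

    def _basis(self, x):
        return np.exp(-self.widths * (x - self.centers) ** 2)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory y_demo."""
        y_demo = np.asarray(y_demo, dtype=float)
        self.y0, self.g, self.tau = y_demo[0], y_demo[-1], len(y_demo) * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y_demo)) * dt / self.tau)
        # Target forcing term obtained by inverting the transformation system
        f_target = self.tau ** 2 * ydd - self.alpha * (self.beta * (self.g - y_demo) - self.tau * yd)
        psi = np.array([self._basis(xi) for xi in x])          # shape (T, n_basis)
        scale = x * (self.g - self.y0)
        for j in range(self.n_basis):
            num = np.sum(scale * psi[:, j] * f_target)
            den = np.sum(scale ** 2 * psi[:, j]) + 1e-10
            self.w[j] = num / den

    def rollout(self, dt, g=None):
        """Reproduce the learned motion, optionally toward a new goal g."""
        g = self.g if g is None else g
        y, yd, x = self.y0, 0.0, 1.0
        out = []
        for _ in range(int(self.tau / dt)):
            psi = self._basis(x)
            f = np.dot(psi, self.w) / (np.sum(psi) + 1e-10) * x * (g - self.y0)
            ydd = (self.alpha * (self.beta * (g - y) - self.tau * yd) + f) / self.tau ** 2
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x * dt / self.tau
            out.append(y)
        return np.array(out)
```

For example, fitting one model on a recorded Px sequence with `DMP1D().fit(px_demo, dt)` and then calling `rollout(dt)` reproduces the demonstrated motion, while passing a new goal generalizes it to a different target position.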

The object positions recorded during the demonstration can be learned by the method of the previous chapter and stored as wposition. Since the resulting path is the movement path of the object, it must be converted into paths for both arms. The conversion can be calculated by inverting the expressions used previously.

\({ }^{G}T_{O}={ }^{G}T_{R}\,{ }^{R}T_{HL}\,{ }^{HL}T_{O}\)       (22)

\(\left({ }^{HL}T_{O}\right)^{-1}=\left({ }^{G}T_{O}\right)^{-1}{ }^{G}T_{R}\,{ }^{R}T_{HL}\)       (23)

\({ }^{O}T_{HL}=\left({ }^{G}T_{O}\right)^{-1}{ }^{G}T_{R}\,{ }^{R}T_{HL}\)       (24)

\({ }^{G}T_{O}={ }^{G}T_{R}\,{ }^{R}T_{HR}\,{ }^{HR}T_{O}\)       (25)

\(\left({ }^{HR}T_{O}\right)^{-1}=\left({ }^{G}T_{O}\right)^{-1}{ }^{G}T_{R}\,{ }^{R}T_{HR}\)       (26)

\({ }^{O}T_{HR}=\left({ }^{G}T_{O}\right)^{-1}{ }^{G}T_{R}\,{ }^{R}T_{HR}\)       (27)

In Fig. 5, G is the global reference coordinate frame, R the robot frame, HL and HR the left- and right-hand frames, and O the object frame.[6]
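
The frame relations (22)-(27) amount to composing and inverting 4×4 homogeneous transforms. The sketch below (variable names are the author's own) computes the object-to-hand transforms of Eqs. (24) and (27) from the global, robot, hand, and object transforms.

```python
import numpy as np

def inv_h(T):
    """Invert a homogeneous transform analytically: [R d; 0 1]^-1 = [R^T  -R^T d; 0 1]."""
    R, d = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ d
    return Ti

def hand_to_object_frames(G_T_R, R_T_HL, R_T_HR, G_T_O):
    """Eqs. (24) and (27): O_T_HL = (G_T_O)^-1 G_T_R R_T_HL, and likewise for the right hand."""
    O_T_HL = inv_h(G_T_O) @ G_T_R @ R_T_HL
    O_T_HR = inv_h(G_T_O) @ G_T_R @ R_T_HR
    return O_T_HL, O_T_HR
```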

Fig. 5 Coordinate frames of the dual arm for handling of objects

4. Performance experiment

4.1 Simulation test

Fig. 6 shows the coordinate system at the position of each joint of the dual-arm robot, and the coordinates at the bottom show the coordinates of the workspace.[7]

Fig. 6 Name of each joint

The first parameter, the magnitude of the force applied to the object, changes with the change of the encoder values. When the optimal-posture evaluation function is computed, all three parameters take small values in posture (a); therefore, for the hexahedron, posture (a) has the highest figure of merit. The motions of holding the object are shown in Fig. 7. In (a), the grippers grasp the lower and upper parts of the object vertically. In (b), the object is grasped at the lower part of both spheres. In (c), the hands grasp the lower and upper parts of the object, with the grip surfaces held horizontally on the left and right sides. The object has a radius of 15 cm and a weight of 2.5 kg. Fig. 8 shows the manipulability ellipsoids for gripping postures (a), (b), and (c), drawn on the xy, xz, and yz planes; red denotes posture (a), green posture (b), and blue posture (c). The experiment followed the same procedure as the hexahedron grip performance-index evaluation for each posture. First, the change in the force applied to the object was lowest in (a), followed by (c) and (b). The manipulability index decreased in the order (a), (b), (c), and the required power decreased in the order (b), (a), (c). In addition, the value obtained by combining these measures is smaller in the order (a), (b), (c), and posture (a) shows a high performance index.

Fig. 7 Manipulability ellipsoid in xy, xz, yz dimension for box

Fig. 8 Manipulability ellipsoid in xy, xz, yz dimension for ball

4.2 Experiment and remark

We performed experiments on two phases of the task of transferring an object. The first experiment confirms whether, when given almost the same environment as during learning, the learned operation is reproduced in the same way. Fig. 9 shows the results: the object follows the learned path to the target position without being dropped. Fig. 10 shows the scene in which the robot is taught directly, in eight successive steps, the task of holding and transferring a rectangular solid with both arms. Fig. 11 shows the robot autonomously executing the eight stages of the task continuously by performing the eight motions learned from the person. The experiment was carried out 50 times each for carrying a cube, a cuboid, a cylinder, and a rectangular object. Analysis of the training results shows an average success rate of 95%, verifying the feasibility of simple but repetitive work operations in daily life and in manufacturing plants.

Fig. 9 Experimental result graph of learning control

Fig. 10 Learning scene of dual-arm robot

Fig. 11 Experimental results of learning control

5. Conclusion

In this study, we proposed a new technique for the design and motion control of an autonomous mobile robot with dual arms for the realization of an SME-customized smart factory, and analyzed its reliability through performance experiments.

The research contents include kinematic and inverse kinematic analysis of the dual-arm robot, as well as an imitation-learning control method and human-robot interaction so that the dual-arm robot can execute accurate operations in the manufacturing process.

In addition, in the imitation-learning method, efficient work is performed at the production site using both arms, the stability is checked using the analysis of the mobile-manipulation performance index for both arms, and, based on learning the optimal posture for each motion, we proposed a method that can perform optimal tasks.

References

  1. Y. C. Lee, S. J. Lim, S. W. Hwang and C. S. Han, "Object grasp method of the service robot using the robot vision," Conference on Korean Society for Precision Engineering, vol. 3, pp. 7-8, 2008.
  2. N. Vahrenkamp, A. Barski, T. Asfour, and R. Dillmann, "Planning and execution of grasping motions on a humanoid robot," IEEERAS International Conference on Humanoid Robots, 2009.
  3. C. Eitner, Y. Mori, K. Okada, and M. Inaba, "Task and vision based online manipulator trajectory generation for a humanoid robot," 8th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008), pp. 293-298, Dec. 2008.
  4. C. Ott, O. Eiberger, W. Friedl, B. Bauml, U. Hillenbrand, C. Borst, A. Albu-Schaffer, B. Brunner, H. Hirschmuller, S. Kielhofer, R. Konietschke, M. Suppa, T. Wimbock, F. Zacharias, and G. Hirzinger, "A humanoid two-arm system for dexterous manipulation," IEEE-RAS Int. Conf. on Humanoid Robots, pp. 276-283, Dec, 2006.
  5. A. Miller, S. Knoop, H. Christensen, and P. Allen, "Automatic grasp planning using shape primitives", IEEE International Conference on Robotics and Automation, vol. 2, pp.1824-1829, 2003.
  6. X. Zhixing, A. Kasper, J. M. Zoellner, R. Dillmann, "An automatic grasp planning system for service robots," 14th International Conference on Advanced Robotics, Munich, Germany, June, 2009.
  7. N. Vahrenkamp, D. Berenson, T. Asfour, J. Kuffner, and R. Dillmann, "Humanoid motion planning for dual-arm manipulation and re-grasping tasks," Intelligent Robots and Systems, IROS, Oct, 2009.