1. Introduction
Robots are used in various applications such as object handling, parts assembly, and welding for the automation of manufacturing processes [1-2]. Traditionally, they operate under relatively fixed and inflexible working conditions, with working positions set manually through a teach pendant. When working conditions change, the same manual setting of working positions must be repeated, so such systems cannot cope with the variations that can occur on a production line. Robots with sensor feedback from 2D/3D sensors can provide more flexibility [3-6].
Park and Mills [3] proposed a localization system that computes the pose of a sheet metal part after it has been picked up by a robot from an arbitrary location. They used laser-based proximity and edge detectors to detect features on the part. Their approach consists of two stages: in the off-line stage, the mapping between sensor measurements and the robot's mislocation is determined, and in the on-line stage, iterative pose estimation is performed using this mapping. Bone and Capson [4] dealt with fixtureless assembly of automotive parts using 2D and 3D computer vision. They used specially designed programmable grippers for robotic fixtureless assembly. 2D computer vision is used to pick up parts, with the initial pick-up location taught off-line. A 3D sensing system consisting of a camera and two line-generating lasers is used to align the parts for joining. Mario et al. [5] demonstrated automatic assembly of 3D objects by recognizing them with an artificial neural network. First, the object location is found in the assembly zone; then its pose is computed using the neural network and transferred to the robot for object manipulation. Shauri and Nonami [6] proposed the assembly of a bolt and nut using a dual-arm robot. They use a stereo camera system for the recognition of small nuts and bolts in an assembly task. Lisa et al. [7] presented assembly manipulation of small objects by a dual-arm robot using a stereo camera system. Hvilshoj et al. [8] provided a good review of the past, present, and future of autonomous industrial mobile manipulation.
In this paper, we present an algorithm for the automatic manipulation of a tie rod using a robot with a 3D sensing system. The sensing system, consisting of a camera and a slit laser, is used to acquire 3D information. It is attached to the robot arm and is used throughout the entire process of automatic tie rod manipulation.
2. Proposed Approach
Fig. 1 shows the system used in this paper, which consists of a camera, a laser, a nut runner, and a tie rod. The camera and slit laser compose the 3D sensing system. The nut runner fastens and unfastens the stop nut and adjust bolt on the tie rod. To automate this process, we adopt a sensing system consisting of a camera and a slit laser that provides 3D information. It is attached to the nut runner as shown in Fig. 1. The nut runner is used as the end-effector of the robot. We directly use the 3D information for reaching the tie rod and for fastening and unfastening the stop nut and adjust bolt.
Fig. 1. System configuration including robot, nut runner, and tie rod
Before using the 3D sensing system with the robot, hand/eye calibration should be performed. Through this, measurements with respect to the sensor coordinate system can be transformed into the robot coordinate system. Finally, they can be converted into the coordinate system of the tool center point (TCP) on the nut runner using the forward kinematics provided by the robot manufacturer.
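Schematically, a homogeneous point $p^{L}$ measured in the laser (eye) frame is mapped as (notation ours)

$$p^{TCP} = X\,p^{L}, \qquad p^{base} = T_{base \leftarrow TCP}\;p^{TCP},$$

where $X$ is the hand/eye transform estimated in Section 2.1 and $T_{base \leftarrow TCP}$ is given by the forward kinematics.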
Fig. 2 shows the overall flow chart for the automatic manipulation of the tie rod. Hand/eye calibration is done only once, off-line.
Fig. 2. The overall flow chart for the automatic manipulation of tie rod
2.1 Hand/eye calibration using simple structure
Usually, hand/eye calibration is formulated as AX = XB, where A is related to the robot motion, B is related to the sensor motion, and X is the unknown transformation. Many algorithms have been proposed, and they usually use known robot motions to solve this equation [9]. Uncalibrated visual servoing is also a good candidate for handling objects with a robot [10].
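Writing A, B, and X as rotation-translation pairs (R, t), AX = XB decomposes into the standard pair of constraints

$$R_A R_X = R_X R_B, \qquad (R_A - I)\,t_X = R_X\,t_B - t_A,$$

which is typically solved using several known robot/sensor motion pairs [9].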
We present a simple hand/eye calibration procedure that exploits the geometric constraints of our application. Through hand/eye calibration, we obtain the transformation matrix between the tool center point (TCP) on the nut runner and the 3D sensing system. After computing the unknown hand/eye transformation, we can convert 3D information from the sensing system into 3D information with respect to the TCP. In our application, the 3D sensing system can be installed so that its coordinate frame, set at the origin of the laser, has only a small rotation with respect to the TCP, as shown in Fig. 3. We can therefore assume that the rotation between the two coordinate systems is negligible, so the rotation matrix between the TCP and the laser coordinate system is the identity matrix. Only the translation components between the two coordinate systems need to be obtained, and we compute them by processing 3D information from the sensing system. For this, we designed a new L-shaped calibration structure, shown in Fig. 3, with a black pattern attached to simplify the detection of the laser contour in the image. It is inserted into the nut runner and located so that it has only a translation from the origin of the nut runner.
Fig. 3. Hand/eye calibration using proposed simple calibration structure
We use the known dimensions of the calibration structure and the 3D coordinates of points on the laser contour for hand/eye calibration. Fig. 4-(a) shows the original image of the hand/eye calibration structure, where the laser contour appears in red. Fig. 4-(b) shows the result of detecting the laser contour in the image. We detect the laser contour using a two-stage approach that uses color and grey-level information; details can be found in [11]. The 3D information from the camera and slit laser is computed by finding the intersection of the camera ray and the laser scan plane [12].
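The intersection can be written in a standard form (notation ours; see [12] for details): a pixel with homogeneous coordinates $\tilde{u}$ defines the camera ray $p(\lambda) = \lambda K^{-1}\tilde{u}$, where $K$ is the camera intrinsic matrix, and intersecting this ray with the calibrated laser plane $n^{\top}p = d$ gives

$$\lambda = \frac{d}{n^{\top}K^{-1}\tilde{u}}, \qquad p = \lambda K^{-1}\tilde{u}.$$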
Fig. 4. Image processing for hand/eye calibration: (a) original image; (b) result of the detection of laser contour
We use the center point of the laser contour on the calibration structure for hand/eye calibration. We assume that the rotation between the two frames N−xyz (hand frame) and L−xyz (eye frame) is negligible, so the only remaining unknowns between the two frames are the translation components. They can be computed as follows.
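Since the rotation is taken as identity, a point $p^{L}$ measured in the laser frame maps to the TCP frame by a pure translation, $p^{N} = p^{L} + T$. A reconstruction of Eq. (1) consistent with the symbol definitions below (the exact signs depend on the axis conventions) is

$$T_x = S_x - C_x, \qquad T_y = -C_y. \tag{1}$$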
(Cx, Cy) is the center point of the laser contour. Sx is the distance from the TCP to the calibration structure along the x direction; it can be measured manually. (Tx, Ty) are the unknown translation components along the x and y axes. Tz is obtained manually by measuring the distance from the TCP to the origin of the laser frame along the z direction. Through this process, we compute the unknown transformation between the TCP and the 3D sensing system. After hand/eye calibration, we transform measurements from the 3D sensor frame into coordinates with respect to the TCP frame. This allows us to directly manipulate the tie rod using 3D information from the sensing system.
2.2 Rotation estimation of tie rod
The overall procedure for manipulating the tie rod automatically is presented in Fig. 2, and 3D information is used at various stages of it. First, the robot must move to grab the tie rod using the nut runner. At the initial stage, the tie rod can be located with some deviation from the predefined position. This can happen in a real production line due to errors in the positioning system. Therefore, the automatic manipulation process must handle such position errors.
The nut runner's tolerance for grabbing the tie rod is very tight; if the nut runner moves without adjustment, it can hit the tie rod. We therefore adjust the relative pose of the nut runner before moving to the tie rod, so that the TCP is positioned perpendicular to the horizontal axis of the tie rod. We solve this by estimating the relative pose of the TCP with respect to the tie rod. Two measurements are obtained: one at the initial position and another at a position to which the robot is moved horizontally by a predefined value. By analyzing the two 3D profiles on the tie rod, we compute the relative pose of the tie rod with respect to the TCP using Eq. (2). The robot is then rotated by the computed angle so that the nut runner is positioned perpendicular to the horizontal axis of the tie rod. After this, we can approach the tie rod by moving only along the x axis.
Fig. 5 shows the process for estimating the tie rod's rotation angle with respect to the laser frame. For each image, we extract the laser contour on the tie rod's surface and convert it into 3D coordinates. We fit an ellipse to these points and use the center point of the ellipse for the computation of the rotation angle. Using the two center points p1 and p2, the rotation angle is computed as follows.
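Writing the two ellipse centers in the TCP frame as $p_1 = (x_1, y_1, z_1)$ and $p_2 = (x_2, y_2, z_2)$, one consistent form of Eq. (2), assuming x is the approach direction and y the direction of the horizontal motion, is

$$\theta = \arctan\!\left(\frac{x_2 - x_1}{y_2 - y_1}\right). \tag{2}$$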
Fig. 5. The estimation of tie rod’s rotation angle using two measurements.
Fig. 6 shows the result of pose refinement of the robot with respect to the tie rod. The first row shows images acquired at two different robot positions; we can see that the tie rod has an oblique pose relative to the nut runner on the robot. The second row shows the result of extracting laser contours from the images. The third row shows images acquired at two different positions after the robot pose adjustment; the tie rod's horizontal axis has been adjusted to be perpendicular to the nut runner. After adjusting its pose, the robot approaches and grabs the tie rod using 3D information along the x axis. Finally, the robot moves along the horizontal direction of the tie rod to manipulate the stop nut and adjust bolt.
Fig. 6. The result of estimation of the rotation angle of tie rod (first row: original images; second row: result of finding laser contours on the original images; third row: original images after adjusting robot pose)
2.3 Manipulation of stop nut and adjust bolt using nut runner
As shown in Fig. 1, the nut runner is used for fastening and unfastening the bolt and nut on the tie rod. Before manipulating them, we must know their current rotation angles. The nut runner has multiple internal faces that rotate to manipulate the stop nut and adjust bolt. Therefore, it must rotate to match the current rotation angles of the stop nut and adjust bolt before manipulating them. Then, it moves along the horizontal direction of the tie rod to reach them. The stop nut has six faces, each corresponding to 60°. The adjust bolt also has six faces, but it has small faces among them, as shown in Fig. 1.
After adjusting the robot pose using the rotation angle of the tie rod, the nut runner and laser are set perpendicular to the horizontal axis of the tie rod. Therefore, the laser scan plane is perpendicular to the faces of the stop nut and adjust bolt. We compute the current rotation angle of the stop nut by extracting a control point corresponding to a corner point on the stop nut face. In the case of the adjust bolt, we extract two control points on its face. The rotation angles of the stop nut and adjust bolt are computed with respect to a predefined reference position. We use the 3D coordinates of the extracted control points and the known radii of the stop nut and adjust bolt for this computation, as shown in Fig. 7.
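As a sketch of the geometry in Fig. 7 (our reconstruction, not necessarily the exact formula used in the implementation): if the detected corner point lies at a perpendicular offset $y_c$ from the nut's axis and the known corner radius is $r$, the rotation angle relative to the reference position follows as

$$\theta = \arcsin\!\left(\frac{y_c}{r}\right).$$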
Fig. 7. Computation of rotation angle of stop nut.
Fig. 8-(a) shows an original image in which the red laser contour lies on the face of the stop nut. We find multiple candidate control points corresponding to the corners of the stop nut using polygon approximation. We compute the angle at each candidate point using its two neighboring points and select the point with the minimum angle as the control point. In Fig. 8-(b), the found control point is shown as a yellow circle. A similar algorithm is applied for finding control points on the adjust bolt; in this case, we select two control points by comparing the distance between candidate points with the actual distance. Fig. 9-(a) shows the original image of the adjust bolt. Fig. 9-(b) shows the two selected control points as yellow circles. Details, including the detection of laser contours, can be found in our previous work [11].
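The minimum-angle control point selection can be sketched in Python with OpenCV as follows; the function and parameter names are illustrative, and the actual implementation is described in [11].

    import cv2
    import numpy as np

    def find_control_point(contour_pts, epsilon=2.0):
        # contour_pts: Nx2 array of (x, y) pixel coordinates on the laser contour.
        # Simplify the contour with the Douglas-Peucker algorithm.
        curve = contour_pts.reshape(-1, 1, 2).astype(np.float32)
        approx = cv2.approxPolyDP(curve, epsilon, False).reshape(-1, 2)
        if len(approx) < 3:
            return None  # too few vertices to contain a corner
        best_idx, best_angle = None, np.inf
        # Measure the angle at each interior vertex from its two neighbors and
        # keep the sharpest (minimum-angle) vertex as the corner control point.
        for i in range(1, len(approx) - 1):
            v1 = approx[i - 1] - approx[i]
            v2 = approx[i + 1] - approx[i]
            c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
            angle = np.arccos(np.clip(c, -1.0, 1.0))
            if angle < best_angle:
                best_idx, best_angle = i, angle
        return approx[best_idx]

For the adjust bolt, two control points would be selected by additionally checking the pixel distance between candidate corners against the known actual distance, as described above.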
Fig. 8. The result of estimating rotation angle of stop nut: (a) original image; (b) result of detecting a control point.
Fig. 9. The result of estimating rotation angle of adjust bolt: (a) original image; (b) result of detecting two control points.
3. Experimental Results
Experiments were conducted using a 6-DOF industrial robot, and a nut runner was used for the manipulation of the stop nut and adjust bolt on the tie rod. The 3D sensing system, consisting of a camera and a slit laser, is attached to the nut runner as shown in Fig. 1 and was calibrated using our earlier work [13]. Experiments were conducted to investigate the feasibility of the proposed algorithm. The tie rod was positioned at a fixed position as shown in Fig. 1.
We changed the starting position of the robot to reflect the perturbation that can occur in a real production line due to errors in the positioning system. Translation was chosen randomly within ±3 mm and rotation within ±3°. The diameters of the tie rod, stop nut, and adjust bolt are 15.1 mm, 27.3 mm, and 16.25 mm, respectively. The nut runner used in our application allows a tolerance of up to 0.5 mm, so accurate computation of distance and rotation angle is important for the successful operation of the proposed procedure.
Error statistics were obtained from 300 trials. Experiments were conducted under fluorescent light, which is typical of a real production line. We did not use a housing to block out environmental light.
A trial of the presented procedure was counted as a success if the whole process shown in Fig. 2 was completed without failure. Direct evaluation of the accuracy of hand/eye calibration is difficult, so we evaluated it indirectly by checking whether the whole process in Fig. 2 was completed correctly in each trial. If there were considerable errors in hand/eye calibration, failures would occur at some stage in Fig. 2, because the 3D information transformed from the sensing system into TCP coordinates by hand/eye calibration is used directly for the manipulation.
Table 1 shows the performance of the proposed algorithm over 300 trials. The success rate was 90%; further improvements are required for robust operation in a real production line. We analyzed the failure cases in terms of three categories. The first occurred during the computation of the rotation angle of the tie rod with respect to the TCP, mainly due to partial detection of the laser contour in the image. The second and third occurred during the computation of the rotation angles of the stop nut and adjust bolt; in these cases, the laser contour was detected correctly, but the control point was detected at the wrong position on the contour.
Table 1. The performance of proposed algorithm.
Fig. 10 shows failure cases in the computation of the rotation angle of the stop nut. The laser contour on the stop nut was detected successfully, but local disturbances caused by surface irregularities led to wrong detection of the control point on the laser contour. Fig. 11 shows failure cases in the computation of the rotation angle of the adjust bolt; in this case, the polygonal approximation of the laser contour on the adjust bolt caused the problem.
Fig. 10. Some failure cases in the estimation of stop nut’s rotation angle.
Fig. 11. Some failure cases in the estimation of adjust bolt’s rotation angle.
We are considering the following items to improve the performance of the proposed algorithm. First, we are considering higher-resolution images for the whole process. Currently, we use images of 640 × 480 pixels. Higher resolution could give more accurate results in the computation of the rotation angle of the tie rod, though the processing time would increase.
Second, we are considering a method that uses the 3D geometrical shape of the nut and bolt for the computation of their rotation angles. Currently, we use 2D contour information to find control points through polygonal approximation. We expect that a 3D shape-based method could overcome the partial irregularity of the laser contour on the stop nut and adjust bolt.
4. Conclusion
In this paper, an algorithm for the automatic manipulation of a tie rod is presented. A 3D sensing system with a camera and slit laser is used together with a robot to automate the process, and detailed procedures for the automatic manipulation of the tie rod are presented. Experiments were conducted using a robot with the 3D sensing system, with a nut runner used for the manipulation of the bolt and nut on the tie rod. The experimental results show the potential of the presented algorithm, while further research on the robust detection of the laser contour in images is required for robust operation.
References
- A. Hormann, "Development of an Advanced Robot for Autonomous Assembly," in Proceedings of International Conference on Robotics and Automation, pp. 2452-2457, 1991.
- S. Jorg, J. Langwald, J. Stelter, G. Hirzinger, and C. Natale, "Flexible Robot-Assembly using a Multi-Sensor Approach," in Proceedings of International Conference on Robotics and Automation, pp. 3687-3694, 2000.
- E.J. Park and J.K. Mills, "Three-Dimensional Localization of Thin-Walled Sheet Metal Parts for Robotic Assembly," Journal of Robotic Systems, vol. 19, no. 5, pp. 207-217, 2002. https://doi.org/10.1002/rob.10035
- G.M. Bone and D. Capson, "Vision-guided fixtureless assembly of automotive components," Robotics and Computer Integrated Manufacturing, vol. 19, pp. 79-87, 2003. https://doi.org/10.1016/S0736-5845(02)00064-9
- P.C. Mario, L.J. Ismael, R.C. Reyes, and C.C. Jorge, "Machine vision approach for robotic assembly," Assembly Automation, vol. 25, pp. 204-216, 2011.
- R.L.A. Shauri and K. Nonami, "Assembly manipulation of small objects by dual-arm manipulator," Assembly Automation, vol. 31, pp. 263-274, 2011. https://doi.org/10.1108/01445151111150604
- R.L.A. Shauri and K. Nonami, "Assembly manipulation of small objects by dual-arm manipulator," Assembly Automation, vol. 31, pp. 263-274, 2011. https://doi.org/10.1108/01445151111150604
- M. Hvilshoj, S. Bogh, O.S. Nielsen, and O. Madsen, "Autonomous industrial mobile manipulation (AIMM): past, present and future," Industrial Robot: An International Journal, vol. 39, pp. 120-135, 2012. https://doi.org/10.1108/01439911211201582
- K.H. Strobl and G. Hirzinger, "Optimal Hand-Eye Calibration," in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
- G.W. Kim, "Uncalibrated Visual Servoing through the Efficient Estimation of the Image Jacobian for Large Residual," Journal of Electrical Engineering & Technology, vol. 8, no. 2, pp. 385-392, 2013. https://doi.org/10.5370/JEET.2013.8.2.385
- J.E. Ha, "An Image Processing Algorithm for the Automatic Manipulation of Tie Rod," International Journal of Control, Automation, and Systems, vol. 11, no. 5, pp. 984-990, 2013. https://doi.org/10.1007/s12555-012-0545-8
- D. Lanman and G. Taubin, "Build Your Own 3D Scanner: 3D Photography for Beginners," SIGGRAPH 2009 Course Notes, 2009.
- J.E. Ha and K.W. Her, "Calibration of structured light stripe system using plane with slits," Optical Engineering, vol. 52, no. 1, 2013.