http://dx.doi.org/10.7746/jkros.2022.17.4.455

Reinforcement Learning-based Search Trajectory Generation and Stiffness Tuning for Connector Assembly  

Kim, Yong-Geon (Mechanical Engineering, Korea University)
Na, Minwoo (Mechanical Engineering, Korea University)
Song, Jae-Bok (Mechanical Engineering, Korea University)
Publication Information
The Journal of Korea Robotics Society, vol. 17, no. 4, 2022, pp. 455-462
Abstract
Electric connectors such as power connectors have small assembly tolerances and complex shapes, so their assembly is still performed manually. In particular, assembly errors are difficult to overcome, and the error-correction process makes assembly time-consuming, which hinders automation of the task. To address this problem, a reinforcement learning-based assembly strategy using contact states is proposed to perform assembly quickly in an unstructured environment. The method learns to generate a search trajectory that quickly locates the hole based on the contact state obtained from force/torque data, and it also learns the stiffness needed to avoid excessive contact forces during assembly. To verify the proposed method, the power connector assembly process was performed 200 times, achieving a 100% success rate under translation errors within ±4 mm and rotation errors within ±3.5°. Furthermore, the total assembly time was about 2.3 s, including a search time of about 1 s, which is faster than previous methods.
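The strategy described above couples a learned policy with compliant control: force/torque readings are mapped to a contact-state observation, and the policy outputs both search-trajectory parameters and the Cartesian stiffness used during insertion. The following is a minimal sketch of that interface, assuming a spiral search pattern and using illustrative sensor ranges, action bounds, and a random stand-in for the trained actor; none of these names or values come from the paper.

```python
# Hypothetical sketch: an RL policy maps a contact state (estimated from
# force/torque data) to search-trajectory parameters and a Cartesian
# stiffness. All names, scales, and bounds below are illustrative
# assumptions, not the authors' implementation.
import numpy as np

def contact_state(ft: np.ndarray) -> np.ndarray:
    """Normalize a 6D force/torque sample [Fx, Fy, Fz, Tx, Ty, Tz]
    into a unit-scale observation vector for the policy."""
    scale = np.array([20.0, 20.0, 30.0, 2.0, 2.0, 2.0])  # assumed sensor ranges
    return np.clip(ft / scale, -1.0, 1.0)

class SearchPolicy:
    """Stand-in for a trained actor: observation -> action.
    Action = [search radius (m), search speed (m/s), Kx, Ky, Kz (N/m)]."""
    def __init__(self, rng: np.random.Generator):
        # A real implementation would load trained network weights here.
        self.W = rng.normal(scale=0.1, size=(5, 6))

    def act(self, obs: np.ndarray) -> np.ndarray:
        raw = np.tanh(self.W @ obs)  # squash to [-1, 1], as in tanh-Gaussian actors
        lo = np.array([0.001, 0.005, 200.0, 200.0, 100.0])   # assumed action bounds
        hi = np.array([0.004, 0.020, 1500.0, 1500.0, 800.0])
        return lo + 0.5 * (raw + 1.0) * (hi - lo)  # rescale to [lo, hi]

def spiral_search(radius: float, speed: float, t: np.ndarray) -> np.ndarray:
    """Archimedean spiral in the hole plane; the radius grows over time so
    the peg sweeps the translation-error region (here up to a few mm)."""
    r = radius * t / t[-1]
    theta = 2.0 * np.pi * speed * t / max(radius, 1e-6)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

if __name__ == "__main__":
    policy = SearchPolicy(np.random.default_rng(0))
    ft = np.array([3.0, -1.5, 12.0, 0.1, -0.2, 0.05])  # example F/T reading
    action = policy.act(contact_state(ft))
    xy = spiral_search(action[0], action[1], np.linspace(0.0, 1.0, 100))
    print("stiffness (N/m):", action[2:], "first waypoint (m):", xy[0])
```

In practice such an actor would be trained with an off-policy algorithm such as Soft Actor-Critic, whose tanh-squashed output maps naturally onto the bounded action space assumed here, and the stiffness values would feed a Cartesian impedance controller on the robot.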
Keywords
Reinforcement Learning; Robotic Assembly; Assembly Strategy; Connector Assembly