
Object Pose Estimation and Motion Planning for Service Automation System


  • Youngwoo Kwon (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Dongyoung Lee (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Hosun Kang (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Jiwook Choi (Department of Electrical and Electronic Engineering, Pusan National University) ;
  • Inho Lee (Department of Electronics Engineering, Pusan National University)
  • Received : 2023.12.18
  • Accepted : 2024.02.06
  • Published : 2024.05.31

Abstract

Recently, automated solutions using collaborative robots have emerged in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; grasping a wide variety of objects requires a gripper with a high degree of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, a collaborative robot, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The multi-degree-of-freedom gripper grasps these points, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network)-based algorithm and a point cloud to estimate each object's 6D pose. Using the recognized object's 6D pose information, we generate grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and recording barcode-recognition successes and failures to demonstrate the effectiveness of the proposed system.
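One plausible reading of the point-cloud 6D pose-estimation step described above — statistical outlier removal followed by principal component analysis (PCA) on the segmented object cloud, in line with the techniques the paper cites [16, 17] — can be sketched as follows. This is a minimal numpy-only illustration, not the paper's implementation: the function names, the simplified distance-based outlier filter, and the synthetic box-shaped cloud are all assumptions made for the sketch.

```python
import numpy as np

def remove_statistical_outliers(points, std_ratio=2.0):
    """Drop points whose distance from the centroid exceeds
    mean + std_ratio * std (a simplified statistical filter)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[d <= d.mean() + std_ratio * d.std()]

def estimate_pose_pca(points):
    """Return (R, t): a rotation whose columns are the cloud's
    principal axes (major axis first) and the cloud centroid."""
    t = points.mean(axis=0)
    centered = points - t
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    R = eigvecs[:, ::-1]                    # reorder: major axis first
    if np.linalg.det(R) < 0:                # enforce a right-handed frame
        R[:, -1] *= -1
    return R, t

# Synthetic elongated box cloud standing in for a segmented product.
rng = np.random.default_rng(0)
cloud = rng.uniform([-0.10, -0.03, -0.01], [0.10, 0.03, 0.01], size=(2000, 3))
cloud += np.array([0.40, 0.10, 0.05])       # place the object in the scene
filtered = remove_statistical_outliers(cloud)
R, t = estimate_pose_pca(filtered)          # 6D pose: rotation + translation
```

The recovered translation should match the box's center and the first column of `R` should align with the box's long axis; a grasp-point generator could then place fingertip targets along the minor axes of this frame.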

Keywords

Acknowledgement

This paper was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0008473, HRD Program for Industrial Innovation). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1C1C1009989).

References

  1. Y. Sim and S. Jin, "Gripper Design with Adjustable Working Area for Depalletizing Delivery Cardboard box of Various Sizes," Journal of Korea Robotics Society, vol. 18, no. 1, pp. 29-36, Feb., 2023, DOI: 10.7746/jkros.2023.18.1.029.
  2. J. Cho, S. S. Kang, and K. K. Kim, "Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing," Journal of Korea Robotics Society, vol. 14, no. 1, pp. 1-7, Feb., 2019, DOI: 10.7746/jkros.2019.14.1.001.
  3. E.-C. Hwang, "Artificial Intelligence Service Robot Market Trend," Korean Society of Computer Information Conference, vol. 29, no. 1, pp. 111-112, 2021, [Online], https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE10532113.
  4. D. Song, J.-B. Yi, and S.-J. Yi, "Development of an Efficient 3D Object Recognition Algorithm for Robotic Grasping in Cluttered Environments," Journal of Korea Robotics Society, vol. 17, no. 3, pp. 255-263, Aug., 2022, DOI: 10.7746/jkros.2022.17.3.255.
  5. R. Mykhailyshyn, V. Savkiv, F. Duchon, R. Trembach, and I. M. Diahovchenko, "Research of Energy Efficiency of Manipulation of Dimensional Objects with the Use of Pneumatic Gripping Devices," 2019 IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON), Lviv, Ukraine, pp. 527-532, 2019, DOI: 10.1109/ukrcon.2019.8879957.
  6. H. J. Lee, S. Y. Han, and H.-S. Yoon, "A Soft Gripper with Variable Stiffness for Stable Gripping using an Auxetic Structure," Journal of the Korean Society of Manufacturing Process Engineers, vol. 22, no. 9, pp. 96-104, Sept., 2023, DOI: 10.14775/ksmpe.2023.22.09.096.
  7. T. Chen, M. Tippur, S. Wu, V. Kumar, E. Adelson, and P. Agrawal, "Visual dexterity: In-hand dexterous manipulation from depth," arXiv:2211.11744, 2022, DOI: 10.48550/arXiv.2211.11744.
  8. R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 77-85, 2017, DOI: 10.1109/cvpr.2017.16.
  9. Z. Wang, Y. Xu, Q. He, Z. Fang, G. Xu, and J. Fu, "Grasping pose estimation for SCARA robot based on deep learning of point cloud," The International Journal of Advanced Manufacturing Technology, vol. 108, pp. 1217-1231, Apr., 2020, DOI: 10.1007/s00170-020-05257-2.
  10. H.-Y. Lin, S.-C. Liang, and Y.-K. Chen, "Robotic Grasping With Multi-View Image Acquisition and Model-Based Pose Estimation," IEEE Sensors Journal, vol. 21, no. 10, pp. 11870-11878, May, 2021, DOI: 10.1109/jsen.2020.3030791.
  11. Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, "PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes," Robotics: Science and Systems, Jun., 2018, DOI: 10.15607/rss.2018.xiv.019.
  12. B. Tekin, S. N. Sinha, and P. Fua, "Real-Time Seamless Single Shot 6D Object Pose Prediction," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 292-301, 2018, DOI: 10.1109/cvpr.2018.00038.
  13. S. Peng, X. Zhou, Y. Liu, H. Lin, Q. Huang, and H. Bao, "PVNet: Pixel-Wise Voting Network for 6DoF Object Pose Estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 6, pp. 3212-3223, Jun., 2022, DOI: 10.1109/tpami.2020.3047388.
  14. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 779-788, 2016, DOI: 10.1109/cvpr.2016.91.
  15. M. A. Fischler and R. C. Bolles, "Random sample consensus," Communications of the ACM, vol. 24, no. 6, pp. 381-395, Jun., 1981, DOI: 10.1145/358669.358692.
  16. H. Balta, J. Velagic, W. Bosschaerts, G. De Cubber, and B. Siciliano, "Fast Statistical Outlier Removal Based Method for Large 3D Point Clouds of Outdoor Environments," IFAC-PapersOnLine, vol. 51, no. 22, pp. 348-353, 2018, DOI: 10.1016/j.ifacol.2018.11.566.
  17. R. Bro and A. K. Smilde, "Principal component analysis," Anal. Methods, vol. 6, no. 9, pp. 2812-2831, 2014, DOI: 10.1039/c3ay41907j.
  18. Q.-Y. Zhou, J. Park, and V. Koltun, "Open3D: A modern library for 3D data processing," arXiv:1801.09847, 2018, DOI: 10.48550/arXiv.1801.09847.
  19. T. M. Kodinariya and P. Makwana, "Review on determining number of Cluster in K-Means Clustering," International Journal, vol. 1, no. 6, pp. 90-95, Jan., 2013, [Online], https://www.researchgate.net/publication/313554124.
  20. M. Ester, H. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," Knowledge Discovery and Data Mining, vol. 96, no. 34, Aug., 1996, [Online], https://cdn.aaai.org/KDD/1996/KDD96-037.pdf.
  21. S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab, "Model Based Training, Detection and Pose Estimation of Texture-Less 3D Objects in Heavily Cluttered Scenes," Lecture Notes in Computer Science, vol. 7724, pp. 548-562, 2013, DOI: 10.1007/978-3-642-37331-2_42.