
Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation


  • Sungho Song (Department of Computer Science, Kyonggi University) ;
  • Kyungmin Park (Department of Computer Science, Kyonggi University) ;
  • Incheol Kim (Division of AI Computer Engineering, Kyonggi University)
  • Received : 2024.06.18
  • Accepted : 2024.07.10
  • Published : 2024.07.31

Abstract

Open-vocabulary 3D point cloud instance segmentation (OV-3DIS) is a challenging visual task that requires segmenting a 3D scene point cloud into object instances of both the base classes seen during training and novel, unseen classes. In this paper, we propose Open3DME, a novel OV-3DIS model that addresses important design issues and overcomes the limitations of existing approaches. First, to improve the quality of class-agnostic 3D masks, our model employs T3DIS [6], an advanced Transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain semantically text-aligned visual features for each point cloud segment, our model extracts 3D features from the point cloud and 2D features from the corresponding multi-view RGB images, using pretrained OpenScene and CLIP encoders, respectively. Finally, to exploit the 2D and 3D visual features of each point cloud segment in a complementary manner during open-vocabulary label assignment, our model adopts a unique feature ensemble method. We validate Open3DME through quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset, demonstrating significant performance gains.
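
Since the abstract only outlines the pipeline, the sketch below illustrates one plausible reading of the final step: ensembling per-segment 2D and 3D visual features and assigning open-vocabulary labels by similarity to CLIP text embeddings. This is a minimal illustration, not the authors' implementation; the function name assign_open_vocab_labels, the parameter alpha, and the convex-combination ensemble rule are all assumptions made for the example.

```python
# Hypothetical sketch of 2D-3D feature ensemble label assignment.
import torch
import torch.nn.functional as F

def assign_open_vocab_labels(
    feat_2d: torch.Tensor,    # (M, D) per-segment 2D features, e.g. CLIP image
                              # embeddings of multi-view crops of each mask
    feat_3d: torch.Tensor,    # (M, D) per-segment 3D features, e.g. pooled
                              # OpenScene point features inside each mask
    text_emb: torch.Tensor,   # (C, D) CLIP text embeddings of class prompts
    alpha: float = 0.5,       # assumed 2D/3D mixing weight (illustrative)
) -> torch.Tensor:
    """Assign one of C open-vocabulary labels to each of M class-agnostic masks."""
    # L2-normalize so that dot products become cosine similarities, as in CLIP.
    feat_2d = F.normalize(feat_2d, dim=-1)
    feat_3d = F.normalize(feat_3d, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Per-modality segment-to-class similarity matrices, each (M, C).
    sim_2d = feat_2d @ text_emb.T
    sim_3d = feat_3d @ text_emb.T

    # One simple ensemble: a convex combination of the two similarity maps.
    # (The paper's actual ensemble rule is not specified in the abstract.)
    sim = alpha * sim_2d + (1.0 - alpha) * sim_3d

    # Each mask receives the label of its most similar class prompt.
    return sim.argmax(dim=-1)  # (M,)
```

Setting alpha to 1.0 or 0.0 reduces this routine to a 2D-only or 3D-only baseline, which is a natural way to ablate each modality's contribution.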

Acknowledgement

This research was supported by the Kyonggi University Graduate Research Scholarship in 2024.

References

  1. B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, and N. Trigoni, "Learning object bounding boxes for 3D instance segmentation on point clouds," In Proceedings of the Neural Information Processing Systems (NeurIPS), 2019.
  2. S. Liu, S. Yu, S. Wu, H. Chen, and T. Liu, "Learning gaussian instance segmentation in point clouds," arXiv preprint arXiv:2007.09860, 2020.
  3. T. Vu, K. Kim, T. Luu, T. Nguyen, and C. D. Yoo, "SoftGroup for 3D instance segmentation on point clouds," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
  4. Z. Liang, Z. Li, S. Xu, M. Tan, and K. Jia, "Instance segmentation in 3D scenes using semantic superpoint tree networks," In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
  5. J. Schult, F. Engelmann, A. Hermans, O. Litany, S. Tang, and B. Leibe, "Mask3D: Mask transformer for 3D semantic instance segmentation," In Proceedings of the International Conference on Robotics and Automation (ICRA), 2023.
  6. S. Song and I. Kim, "T3DIS: Transformer-based 3D instance segmentation with auxiliary denoising learning," Journal of Institute of Control, Robotics and Systems, Vol. 29, No. 12, pp. 954-965, 2023.
  7. A. Takmaz, E. Fedele, R. Sumner, M. Pollefeys, F. Tombari, and F. Engelmann, "OpenMask3D: Open-vocabulary 3D instance segmentation," In Proceedings of the Neural Information Processing Systems (NeurIPS), 2023.
  8. Z. Huang, X. Wu, X. Chen, H. Zhao, L. Zhu, and J. Lasenby, "OpenIns3D: Snap and lookup for 3D open-vocabulary instance segmentation," arXiv preprint arXiv:2309.00616, 2023.
  9. S. Lu, H. Chang, E. Jing, A. Boularias, and K. Bekris, "OVIR-3D: Open-vocabulary 3D instance retrieval without training on 3D data," In Proceedings of the Conference on Robot Learning (CoRL), 2023.
  10. R. Ding, J. Yang, C. Xue, W. Zhang, S. Bai, and X. Qi, "Lowis3D: Language-driven open-world instance-level 3D scene understanding," arXiv preprint arXiv:2308.00353, 2023.
  11. A. Radford et al., "Learning transferable visual models from natural language supervision," arXiv preprint arXiv:2103.00020, 2021.
  12. C. Jia et al., "Scaling up visual and vision-language representation learning with noisy text supervision," In Proceedings of the International Conference on Machine Learning (ICML), 2021.
  13. G. Ghiasi, X. Gu, Y. Cui, and T. Lin, "Scaling open-vocabulary image segmentation with image-level labels," In Proceedings of the European Conference on Computer Vision (ECCV), 2022.
  14. J. Ding, N. Xue, G. Xia, and D. Dai, "Decoupling zero-shot semantic segmentation," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
  15. F. Liang, B. Wu, X. Dai, K. Li, Y. Zhao, H. Zhang, P. Zhang, P. Vajda, and D. Marculescu, "Open-vocabulary semantic segmentation with mask-adapted CLIP," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  16. J. Qin et al., "FreeSeg: Unified, universal and open-vocabulary image segmentation," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  17. Y. Yang, X. Wu, T. He, H. Zhao, and X. Liu, "SAM3D: Segment anything in 3D scenes," arXiv preprint arXiv:2306.03908, 2023.
  18. A. Kirillov et al., "Segment Anything," In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
  19. R. Chen et al., "CLIP2Scene: Towards label-efficient 3D scene understanding by CLIP," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  20. S. Peng, K. Genova, C. Jiang, A. Tagliasacchi, M. Pollefeys, and T. Funkhouser, "OpenScene: 3D scene understanding with open vocabularies," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  21. X. Zhou, R. Girdhar, A. Joulin, P. Krahenbuhl, and I. Misra, "Detecting twenty-thousand classes using image-level supervision," In Proceedings of the European Conference on Computer Vision (ECCV), 2022.
  22. C. Choy, J. Gwak, and S. Savarese, "4D spatio-temporal ConvNets: Minkowski convolutional neural networks," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  23. X. Wu, Y. Lao, L. Jiang, X. Liu, and H. Zhao, "Point Transformer V2: Grouped vector attention and partition-based pooling," In Proceedings of the Neural Information Processing Systems (NeurIPS), 2022.