• Title/Summary/Keyword: 멀티 뷰포인트 (multi viewpoint)

Search Results: 5

Multi-Modal Cross Attention for 3D Point Cloud Semantic Segmentation (3차원 포인트 클라우드의 의미적 분할을 위한 멀티-모달 교차 주의집중)

  • HyeLim Bae;Incheol Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.660-662 / 2023
  • 3D point cloud semantic segmentation partitions a point cloud into the objects that make up a scene; it requires the visual intelligence needed to understand the 3D structure of an environment and to interact with it. This paper proposes MFNet, a new 3D point cloud semantic segmentation model that exploits 2D visual features extracted from multi-view images together with 3D geometric features extracted from the point cloud. To fuse these heterogeneous 2D visual and 3D geometric features effectively, the proposed model employs a new intermediate fusion strategy and multi-modal cross attention. Experiments on the ScanNetV2 benchmark dataset demonstrate the superiority of the proposed MFNet model.
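The multi-modal cross attention described above can be illustrated with a minimal PyTorch sketch (not the authors' code): 3D point features act as queries attending over 2D multi-view image features, with a residual connection preserving the geometric stream. The class name, layer sizes, and tensor shapes are illustrative assumptions.

```python
# Hedged sketch of a multi-modal cross-attention fusion step: 3D point
# features (queries) attend over 2D multi-view image features (keys/values).
# Dimensions and names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, feat3d: torch.Tensor, feat2d: torch.Tensor) -> torch.Tensor:
        # feat3d: (B, N_points, d_model), feat2d: (B, N_pixels, d_model)
        fused, _ = self.attn(query=feat3d, key=feat2d, value=feat2d)
        return self.norm(feat3d + fused)   # residual keeps the 3D geometric stream

# Random tensors stand in for the outputs of the 3D and 2D encoders.
fusion = CrossModalFusion()
points = torch.randn(2, 1024, 256)   # 3D geometric features
pixels = torch.randn(2, 4096, 256)   # 2D visual features from multi-view images
out = fusion(points, pixels)         # (2, 1024, 256) fused per-point features
```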

Skeleton-based 3D Pointcloud Registration Method (스켈레톤 기반의 3D 포인트 클라우드 정합 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.89-90 / 2021
  • This paper proposes a new technique for calibrating multi-view RGB-D cameras using 3D skeletons. Calibrating multi-view cameras requires consistent feature points, and we use the human skeleton as that feature. Human skeletons can now be obtained easily with state-of-the-art pose estimation algorithms. We therefore propose an RGB-D calibration algorithm that uses the 3D joint coordinates of the skeleton produced by a pose estimation algorithm as its feature points.
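A minimal sketch of the registration step this abstract implies, assuming matched skeleton joints are already available from two RGB-D views: the Kabsch algorithm recovers the rigid transform that maps one camera's joints into the other's coordinate frame. Function and variable names are illustrative, not the authors'.

```python
# Hedged sketch: recover the rigid transform (R, t) aligning camera B to
# camera A from matched 3D skeleton joints, using the Kabsch algorithm.
import numpy as np

def rigid_transform_from_joints(joints_a: np.ndarray, joints_b: np.ndarray):
    """joints_a, joints_b: (J, 3) corresponding joint coordinates from two views."""
    ca, cb = joints_a.mean(axis=0), joints_b.mean(axis=0)        # centroids
    H = (joints_b - cb).T @ (joints_a - ca)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = ca - R @ cb
    return R, t   # R @ p + t maps camera-B joints into camera-A coordinates

# Synthetic check: camera B sees the same joints rotated 30 degrees and shifted.
rng = np.random.default_rng(0)
joints_a = rng.uniform(-1.0, 1.0, size=(15, 3))
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.2, 0.3])
joints_b = (joints_a - t_true) @ R_true        # inverse of p -> R_true @ p + t_true
R, t = rigid_transform_from_joints(joints_a, joints_b)
assert np.allclose(joints_b @ R.T + t, joints_a, atol=1e-8)
```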


Creating Simultaneous Story Arcs Using Constraint Based Narrative Structure (제약 조건 기반 서술구조를 이용한 동시 진행 이야기의 생성)

  • Moon, Sung-Hyun;Kim, Seok-Kyoo;Hong, Euy-Seok;Han, Sang-Yong
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.107-114 / 2010
  • An interactive storytelling system generates a nonlinear story through interaction with its users. In a play or a movie, the audience watches one scene at a time and must wait for the current scene to end before the next one begins. In the real world, however, various events can happen simultaneously at different places, and events performed by characters may dramatically affect the flow of the story. This paper proposes a constraint-based narrative structure for creating such stories, called "Simultaneous Story Arcs", and a "Multi Viewpoint" mechanism for simultaneously steering the direction of the story unfolding at each place.
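As a rough illustration (not taken from the paper), one constraint that keeps simultaneous arcs consistent, such as a character not appearing in two overlapping events at different places, could be checked as in the sketch below; the data model, event names, and the specific constraint are assumptions.

```python
# Hedged sketch: a single consistency constraint over simultaneous story events.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    place: str
    characters: frozenset
    start: int
    end: int

def overlaps(a: Event, b: Event) -> bool:
    return a.start < b.end and b.start < a.end

def violates_constraint(a: Event, b: Event) -> bool:
    # A character may not take part in two overlapping events at different places.
    return overlaps(a, b) and a.place != b.place and bool(a.characters & b.characters)

# Two arcs in different places with disjoint casts may run in parallel.
arc1 = Event("ambush", "forest", frozenset({"knight"}), 0, 10)
arc2 = Event("banquet", "castle", frozenset({"queen"}), 0, 10)
print(violates_constraint(arc1, arc2))   # False -> arcs proceed simultaneously

# The same character in overlapping arcs at different places violates the constraint.
arc3 = Event("escape", "castle", frozenset({"knight"}), 5, 12)
print(violates_constraint(arc1, arc3))   # True -> the planner must replan
```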

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models struggle to fuse multi-modal features sufficiently while preserving both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. This paper therefore proposes MMCA-Net, a novel 3D semantic segmentation model that uses 2D-3D multi-modal features. The proposed model effectively fuses the heterogeneous 2D visual and 3D geometric features through an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. It also extracts context-rich 3D geometric features from input point clouds of irregularly distributed points by adopting PTv2 as its 3D geometric encoder. Quantitative and qualitative experiments on the ScanNetv2 benchmark dataset show that, in terms of mIoU, the proposed model improves on PTv2, which uses only 3D geometric features, by 9.2%, and on MVPNet, which uses 2D-3D multi-modal features, by 12.12%, demonstrating its effectiveness and usefulness.
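For reference, the mIoU metric quoted in this abstract can be computed from per-point predictions as in the sketch below; the tensor shapes and the rule of skipping classes absent from both prediction and ground truth are assumptions about the evaluation setup.

```python
# Hedged sketch of mean intersection-over-union (mIoU) for point-wise labels.
import torch

def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """pred, target: (N,) integer class labels for N points."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Example: 6 points, 3 classes.
pred   = torch.tensor([0, 0, 1, 1, 2, 2])
target = torch.tensor([0, 1, 1, 1, 2, 0])
print(mean_iou(pred, target, num_classes=3))   # (1/3 + 2/3 + 1/2) / 3 = 0.5
```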

Using Skeleton Vector Information and RNN Learning Behavior Recognition Algorithm (스켈레톤 벡터 정보와 RNN 학습을 이용한 행동인식 알고리즘)

  • Kim, Mi-Kyung;Cha, Eui-Young
    • Journal of Broadcast Engineering / v.23 no.5 / pp.598-605 / 2018
  • Behavior recognition is a technology that recognizes human actions from data and can be used, for example, to detect risky behavior in video surveillance systems. Conventional behavior recognition algorithms have relied on 2D camera images, multi-modal sensors, multi-view setups, or 3D equipment. With 2D data alone, recognition rates for actions in 3D space were low, while the other approaches were hampered by complicated equipment configurations and expensive additional hardware. This paper proposes a method of recognizing human behavior from the RGB and depth information of CCTV images alone, without additional equipment. First, a skeleton extraction algorithm is applied to obtain the coordinates of joints and body parts. These coordinates are transformed into vectors, including displacement vectors and relational vectors, and the resulting sequences of vectors are learned with an RNN model. Applying the trained model to various datasets and measuring recognition accuracy shows that performance comparable to existing algorithms that use 3D information can be achieved from 2D information alone.
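A minimal sketch of the pipeline this abstract outlines, assuming 2D joint positions per frame: displacement vectors (between consecutive frames) and relational vectors (between joints) are built from the skeleton, and the resulting sequence is classified with a GRU. Dimensions, the reference joint, and layer choices are illustrative assumptions.

```python
# Hedged sketch: skeleton-derived vectors fed to an RNN action classifier.
import torch
import torch.nn as nn

def skeleton_vectors(joints: torch.Tensor) -> torch.Tensor:
    """joints: (T, J, 2) 2D joint positions over T frames."""
    disp = joints[1:] - joints[:-1]                    # frame-to-frame displacement
    rel = joints[1:] - joints[1:, :1, :]               # each joint relative to joint 0
    return torch.cat([disp, rel], dim=-1).flatten(1)   # (T-1, J*4) per-frame vectors

class ActionRNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 128, num_actions: int = 10):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(seq)         # h: (1, B, hidden), last hidden state
        return self.head(h[-1])      # one action score vector per sequence

# Example: a 30-frame clip with 18 joints.
joints = torch.randn(30, 18, 2)
seq = skeleton_vectors(joints).unsqueeze(0)       # (1, 29, 72)
logits = ActionRNN(in_dim=seq.shape[-1])(seq)     # (1, 10) action logits
```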