• Title/Abstract/Keyword: 3D Model Segmentation

Search results: 148 (processing time: 0.023 s)

관심 객체 분할을 위한 삼차원 능동모양모델 기법 (Three-dimensional Active Shape Model for Object Segmentation)

  • 임성재;호요성
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2006년도 하계종합학술대회 / pp.335-336 / 2006
  • In this paper, we propose an active shape segmentation method for three-dimensional (3-D) medical images based on a method for generating the 3-D shape model. The proposed method builds the shape model using a distance transform and a tetrahedron method for landmarking. After generating the 3-D model, we extend the training and segmentation processes of the 2-D active shape model (ASM) and improve the searching process. The proposed method yields results comparable to those of 2-D ASM and of region-based or contour-based methods. Experimental results demonstrate that the algorithm is effective as a semi-automatic segmentation method for 3-D medical images.
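
The abstract does not include implementation details, but the point-distribution-model machinery that any ASM variant builds on can be sketched briefly; the function names, the SVD-based PCA, and the ±3σ clipping below are generic ASM conventions, not the authors' code.

```python
# Minimal sketch of the point-distribution-model step underlying an ASM
# (2-D or 3-D); names and data layout are illustrative, not from the paper.
import numpy as np

def build_shape_model(shapes):
    """shapes: (N, 3*L) array of N training shapes, each with L 3-D landmarks."""
    mean_shape = shapes.mean(axis=0)
    # PCA on the centered landmark coordinates gives the modes of shape variation.
    _, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
    variances = (s ** 2) / (len(shapes) - 1)
    return mean_shape, vt, variances

def fit_shape(mean_shape, modes, variances, target, k=5, clip=3.0):
    """Project a candidate shape onto the first k modes, clipping each
    coefficient to +/- clip standard deviations to keep the result plausible."""
    b = modes[:k] @ (target - mean_shape)
    b = np.clip(b, -clip * np.sqrt(variances[:k]), clip * np.sqrt(variances[:k]))
    return mean_shape + modes[:k].T @ b
```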

3D Mesh Model Exterior Salient Part Segmentation Using Prominent Feature Points and Marching Plane

  • Hong, Yiyu;Kim, Jongweon
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 3 / pp.1418-1433 / 2019
  • In computer graphics, 3D mesh segmentation is a challenging research field. This paper presents a 3D mesh model segmentation algorithm that focuses on separating exterior salient parts from the original 3D mesh model based on prominent feature points and a marching plane. First, the proposed approach uses multi-dimensional scaling to extract prominent feature points that lie on the tips of the exterior salient parts of a given mesh. Subsequently, a marching plane intersects the 3D mesh, starting its march from each prominent feature point. During the marching process, local cross sections between the marching plane and the 3D mesh are extracted, and their areas are calculated to represent the local volume of the 3D mesh model. Since the boundary of an exterior salient part generally lies where the local volume changes abruptly, we can simply cut at this location with the marching plane to separate the part from the mesh. We evaluated our algorithm on the Princeton Segmentation Benchmark, and the results show that it works well for some categories.
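
The cutting criterion, a sudden jump in cross-section area along the marching direction, can be illustrated with a short sketch; the ratio threshold and function name are assumptions, not values from the paper.

```python
import numpy as np

def find_cut_index(cross_section_areas, ratio_threshold=2.5):
    """Given cross-section areas measured at successive marching-plane
    positions (starting from the tip of a salient part), return the first
    index where the area jumps by more than ratio_threshold, i.e. where the
    salient part is assumed to merge into the main body of the mesh."""
    areas = np.asarray(cross_section_areas, dtype=float)
    for i in range(1, len(areas)):
        if areas[i - 1] > 0 and areas[i] / areas[i - 1] > ratio_threshold:
            return i          # cut with the marching plane at this position
    return None               # no clear boundary found

# Example: areas grow slowly along a finger-like part, then jump at the palm.
print(find_cut_index([1.0, 1.2, 1.3, 1.4, 6.0, 6.2]))  # -> 4
```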

치과용 CT영상의 3차원 Visualization을 위한 Segmentation에 관한 연구 (A Study of Segmentation for 3D Visualization In Dental Computed Tomography image)

  • 민상기;채옥삼
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 추계종합학술대회 논문집(3) / pp.177-180 / 2000
  • CT images are sequential images that provide medical doctors with helpful information for treatment and surgical operations. They are also widely used for the 3D reconstruction of human bones and organs. In 3D reconstruction, the quality of the reconstructed 3D model heavily depends on the segmentation results. In this paper, we propose an algorithm suitable for segmenting the teeth and the maxillofacial bone.
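
The abstract does not describe the algorithm itself, so the sketch below is only generic context: a plain HU-threshold baseline for bone/teeth in CT, kept as the largest connected component. The threshold value and helper name are hypothetical and are not the method proposed in the paper.

```python
import numpy as np
from scipy.ndimage import label

def bone_mask(ct_hu, threshold=300):
    """Generic HU-threshold baseline for bone/teeth in CT, reduced to the
    largest connected component; illustrative context only, not the paper's
    algorithm (the abstract does not detail it)."""
    mask = ct_hu >= threshold
    labeled, n = label(mask)
    if n == 0:
        return mask
    sizes = np.bincount(labeled.ravel())[1:]   # component sizes, skip background
    return labeled == (np.argmax(sizes) + 1)
```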

멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합 (Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images)

  • 배혜림;김인철
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 12, No. 12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects or regions by predicting, for each point, the class label of the object or region it belongs to. Existing 3D semantic segmentation models are limited in that they do not perform feature fusion that fully accounts for the characteristics of the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, this paper proposes MMCA-Net, a new 3D semantic segmentation model that uses 2D-3D multi-modal features. The proposed model effectively fuses the heterogeneous 2D visual and 3D geometric features by applying a mid-fusion strategy and a fusion operation based on multi-modal cross-attention. In addition, by adopting PTv2 as the 3D geometric encoder, it extracts context-rich 3D geometric features from input point clouds in which the points are irregularly distributed. To analyze the performance of the proposed model, we conducted various quantitative and qualitative experiments on the benchmark dataset ScanNetv2. In terms of mIoU, the proposed model improved performance by 9.2% over the PTv2 model, which uses only 3D geometric features, and by 12.12% over the MVPNet model, which uses 2D-3D multi-modal features, demonstrating the effectiveness and usefulness of the proposed model.
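
A minimal sketch of the kind of multi-modal cross-attention fusion described above is given below; the module name, feature dimensions, head count, and residual merge are illustrative assumptions, not the MMCA-Net implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Minimal sketch: 3D point features attend to 2D image features
    (mid-level fusion), and the attended result is merged back residually."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat3d, feat2d):
        # feat3d: (B, N_points, dim), feat2d: (B, N_pixels, dim)
        attended, _ = self.attn(query=feat3d, key=feat2d, value=feat2d)
        return self.norm(feat3d + attended)   # residual fusion

fused = CrossModalFusion()(torch.randn(2, 1024, 256), torch.randn(2, 4096, 256))
print(fused.shape)  # torch.Size([2, 1024, 256])
```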

Deep learning approach to generate 3D civil infrastructure models using drone images

  • Kwon, Ji-Hye;Khudoyarov, Shekhroz;Kim, Namgyu;Heo, Jun-Haeng
    • Smart Structures and Systems / Vol. 30, No. 5 / pp.501-511 / 2022
  • Three-dimensional (3D) models have become crucial for improving civil infrastructure analysis, and they can be used for various purposes such as damage detection, risk estimation, resolving potential safety issues, alarm detection, and structural health monitoring. 3D point cloud data are used not only to build visual models but also to analyze the states of structures and to monitor them using semantic data. This study proposes automating the generation of high-quality 3D point cloud data and removing noise using deep learning algorithms. Large-format aerial images of civil infrastructure, such as cut slopes and dams, captured by drones were used to develop a workflow for automatically generating a 3D point cloud model. The generation of the point cloud was automated through image cropping, downscaling/upscaling, semantic segmentation, generation of segmentation masks, and region extraction algorithms. Compared with generating the point cloud model from raw images, our method effectively improves the quality of the model, removes noise, and reduces the processing time. The results showed that the size of the 3D point cloud model created using the proposed method was significantly reduced; the number of points decreased by 20-50%, and distant points were recognized as noise. This method can be applied to the automatic generation of high-quality 3D point cloud models of civil infrastructure from aerial imagery.
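
One step of such a workflow, applying a semantic-segmentation mask so that background pixels do not enter the point-cloud generation stage, might look roughly like the sketch below; the file names and the binary-mask convention are assumptions.

```python
import numpy as np
from PIL import Image

def apply_structure_mask(image_path, mask_path, out_path):
    """Keep only pixels labeled as structure (mask > 0); background pixels are
    blacked out so the photogrammetry stage does not reconstruct them as noise."""
    image = np.asarray(Image.open(image_path).convert("RGB"))
    mask = np.asarray(Image.open(mask_path).convert("L")) > 0
    masked = image * mask[..., None]          # zero out background pixels
    Image.fromarray(masked.astype(np.uint8)).save(out_path)

# Hypothetical file names, for illustration only:
# apply_structure_mask("slope_0001.jpg", "slope_0001_mask.png", "slope_0001_masked.jpg")
```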

MRI 영상을 이용한 한국인 인체 두부의 FDTD 모델링 (FDTD Modeling of the Korean Human Head using MRI Images)

  • 이재용;명노훈;최명선;오학태;홍수원;김기회
    • 한국전자파학회논문지 / Vol. 11, No. 4 / pp.582-591 / 2000
  • This paper introduces a method for constructing an FDTD (finite-difference time-domain) model of a human head conforming to the Korean standard, so that the effect of mobile phones on the human body can be analyzed with the FDTD method. A head matching the Korean standard was scanned with MRI, and 2D segmentation was performed on the 2D MRI image data using a semi-automatic method. Based on the 2D segmentation data, high-resolution 3D segmentation data with a voxel size of $1mm\times1mm\times1mm$ were produced. Using these high-resolution 3D segmentation data, FDTD models of the head tilted at various angles were constructed to match the posture in which a mobile phone is used.
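
A rough sketch of the last step, tilting the voxelized label volume and expanding tissue labels into material grids for FDTD, is shown below; the label IDs and dielectric values are placeholders, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical tissue table: label -> (relative permittivity, conductivity in S/m).
TISSUE_PROPS = {0: (1.0, 0.0), 1: (41.5, 0.9), 2: (12.5, 0.15)}  # air, skin, bone

def tilt_head_model(labels, angle_deg):
    """Rotate the 1 mm voxel label volume about one axis to mimic how a phone
    is held; order=0 keeps labels integral (nearest-neighbour interpolation)."""
    return rotate(labels, angle_deg, axes=(0, 2), order=0, reshape=False)

def to_fdtd_grids(labels):
    """Expand the label volume into permittivity and conductivity grids."""
    eps = np.zeros(labels.shape)
    sigma = np.zeros(labels.shape)
    for lab, (er, s) in TISSUE_PROPS.items():
        eps[labels == lab] = er
        sigma[labels == lab] = s
    return eps, sigma
```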

Grid 방법을 이용한 측정 점데이터로부터의 CAD모델 생성에 관한 연구 (CAD Model Generation from Point Clouds using 3D Grid Method)

  • 우혁제;강의철;이관행
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2001년도 춘계학술대회 논문집 / pp.435-438 / 2001
  • Reverse engineering technology refers to the process that creates a CAD model of an existing part using measuring devices. Recently, non-contact scanning devices have become more accurate and the speed of data acquisition has increased drastically. However, they generate thousands of points per second and various types of point data, so handling the huge amount and variety of point data becomes a major issue. To generate a CAD model from scanned point data efficiently, these point data should be well arranged through point data handling processes such as data reduction and segmentation. This paper proposes a new point data handling method using 3D grids. The geometric information of a part is extracted from the point cloud data by estimating the normal values of the points. Non-uniform 3D grids for data reduction and segmentation are generated based on this geometric information. Through these data reduction and segmentation processes, it is possible to create CAD models automatically and efficiently. The proposed method is applied to two quadric models and the results are discussed.
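
A minimal sketch of grid-based point reduction, keeping one representative point per occupied cell, is shown below; the paper's grids are non-uniform and normal-driven, whereas the uniform cell size here is purely for illustration.

```python
import numpy as np

def grid_reduce(points, cell=1.0):
    """Keep one representative point (the centroid) per occupied grid cell."""
    idx = np.floor(points / cell).astype(int)          # cell index of each point
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    reduced = np.zeros((inverse.max() + 1, 3))
    for d in range(3):
        reduced[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return reduced

points = np.random.rand(10000, 3) * 50.0   # synthetic scan data
print(grid_reduce(points, cell=5.0).shape)
```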

Accuracy evaluation of liver and tumor auto-segmentation in CT images using 2D CoordConv DeepLab V3+ model in radiotherapy

  • An, Na young;Kang, Young-nam
    • 대한의용생체공학회:의공학회지 / Vol. 43, No. 5 / pp.341-352 / 2022
  • Medical image segmentation is one of the most important tasks in radiation therapy. The liver is among the most difficult organs to segment because it has various shapes and is close to other organs, so automatic segmentation of the liver in computed tomography (CT) images is a difficult task. Tumors also have low contrast with the surrounding tissue, and their shape, location, size, and number vary from patient to patient, so accurate tumor segmentation takes a long time. In this study, we propose an algorithm for automatically segmenting the liver and tumors. The liver and tumors were automatically segmented from CT images using a 2D CoordConv DeepLab V3+ model with a CoordConv layer, which helps delineate the tumor boundaries. For tumors, only cropped liver images were used to improve accuracy. Additionally, to increase segmentation accuracy, augmentation, preprocessing, the loss function, and the hyperparameters were tuned to find optimal values. We compared the DeepLab V3+ model with and without the CoordConv layer to determine whether the layer affects segmentation accuracy. The data sets comprised 131 Liver Tumor Segmentation (LiTS) challenge data sets (100 training, 16 validation, and 15 test sets). The trained models were additionally tested on 15 clinical data sets from Seoul St. Mary's Hospital. The evaluation was compared with previously reported results from two-dimensional deep learning-based models. Without the CoordConv layer, Dice values of 0.965 ± 0.01 for liver segmentation and 0.925 ± 0.04 for tumor segmentation were achieved on the LiTS data set, and 0.927 ± 0.02 and 0.903 ± 0.05, respectively, on the clinical data set. With the CoordConv layer, Dice values of 0.989 ± 0.02 for liver segmentation and 0.937 ± 0.07 for tumor segmentation were achieved on the LiTS data set, and 0.944 ± 0.02 and 0.916 ± 0.18, respectively, on the clinical data set. Thus, the use of CoordConv layers improves segmentation accuracy. The highest recently published values were 0.960 and 0.749 for liver and tumor segmentation, respectively, whereas the algorithm proposed in this study achieved 0.989 and 0.937. The proposed algorithm can play a useful role in treatment planning by improving contouring accuracy and reducing the time required for liver and tumor segmentation. Accurate identification of liver anatomy in medical imaging applications such as surgical planning, as well as radiotherapy, can also aid clinical evaluation of the risks and benefits of liver interventions.
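
The CoordConv idea, appending normalized coordinate channels before a standard convolution, can be sketched as follows; this is the generic CoordConv layer, not the authors' exact DeepLab V3+ integration.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Standard CoordConv: concatenate normalized x/y coordinate channels to the
    input before a regular convolution, so the network can exploit position."""
    def __init__(self, in_ch, out_ch, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

out = CoordConv2d(1, 16, kernel_size=3, padding=1)(torch.randn(2, 1, 256, 256))
print(out.shape)  # torch.Size([2, 16, 256, 256])
```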

효율적인 개방형 어휘 3차원 개체 분할을 위한 클래스-독립적인 3차원 마스크 제안과 2차원-3차원 시각적 특징 앙상블 (Class-Agnostic 3D Mask Proposal and 2D-3D Visual Feature Ensemble for Efficient Open-Vocabulary 3D Instance Segmentation)

  • 송성호;박경민;김인철
    • 정보처리학회 논문지 / Vol. 13, No. 7 / pp.335-347 / 2024
  • Open-vocabulary 3D point cloud instance segmentation is a challenging visual task that must segment a 3D scene point cloud not only into instances of the base classes seen during training but also into instances of novel classes. To overcome the limitations of existing models with respect to key design issues, this paper proposes Open3DME, a new open-vocabulary 3D instance segmentation model. First, to improve the quality of class-agnostic 3D masks, the proposed model adopts T3DIS [6], a transformer-based 3D point cloud instance segmentation model, as its mask proposal module. Second, to obtain visual features that are semantically aligned with text for each point segment, the proposed model applies the pretrained OpenScene encoder and CLIP encoder to extract 3D and 2D features from the point cloud and the multi-view RGB images, respectively. Finally, the proposed model applies a feature ensemble technique so that the 2D and 3D visual features extracted for each point cloud segment are used together in a complementary manner during open-vocabulary label assignment. We demonstrate the superior performance of the proposed model through various quantitative and qualitative experiments on the ScanNet-V2 benchmark dataset.
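
The final label-assignment step, ensembling 2D and 3D per-segment features and matching them against CLIP text embeddings, might look roughly like the sketch below; the simple weighted-average ensemble and the variable names are assumptions.

```python
import torch
import torch.nn.functional as F

def assign_open_vocab_labels(feat2d, feat3d, text_emb, alpha=0.5):
    """feat2d, feat3d: (M, D) per-segment visual features from the 2D (CLIP)
    and 3D (OpenScene-style) branches; text_emb: (C, D) CLIP text embeddings
    of the class prompts. Returns one class index per segment."""
    fused = alpha * F.normalize(feat2d, dim=-1) + (1 - alpha) * F.normalize(feat3d, dim=-1)
    sim = F.normalize(fused, dim=-1) @ F.normalize(text_emb, dim=-1).T  # cosine similarity
    return sim.argmax(dim=-1)

labels = assign_open_vocab_labels(torch.randn(10, 512), torch.randn(10, 512), torch.randn(20, 512))
print(labels.shape)  # torch.Size([10])
```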

다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정 (2D-3D Pose Estimation using Multi-view Object Co-segmentation)

  • 김성흠;복윤수;권인소
    • 로봇학회논문지 / Vol. 12, No. 1 / pp.33-41 / 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation across views and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap of regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method with various examples.
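
A toy version of such a region-plus-boundary overlap energy is sketched below; the equal default weighting and the crude shift-based boundary map are assumptions for illustration.

```python
import numpy as np

def overlap_energy(coseg_masks, projected_masks, w_region=1.0, w_boundary=1.0):
    """Sum of a region-overlap (IoU) term and a boundary-overlap term over all
    views. coseg_masks / projected_masks: lists of binary (H, W) arrays, one per view."""
    energy = 0.0
    for seg, proj in zip(coseg_masks, projected_masks):
        seg = np.asarray(seg, dtype=bool)
        proj = np.asarray(proj, dtype=bool)
        inter = np.logical_and(seg, proj).sum()
        union = np.logical_or(seg, proj).sum()
        region = inter / union if union else 0.0
        # Crude boundary map: pixels whose label changes under a one-pixel shift.
        seg_edge = (seg ^ np.roll(seg, 1, axis=0)) | (seg ^ np.roll(seg, 1, axis=1))
        proj_edge = (proj ^ np.roll(proj, 1, axis=0)) | (proj ^ np.roll(proj, 1, axis=1))
        boundary = np.logical_and(seg_edge, proj_edge).sum() / max(seg_edge.sum(), 1)
        energy += w_region * region + w_boundary * boundary
    return energy
```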