• Title/Abstract/Keyword: 3D Point Data

Daubechies D4 필터를 사용한 시간가변(time-varying) 볼륨 데이터의 압축 (Compression of time-varying volume data using Daubechies D4 filter)

  • 허영주;이중연;구기범
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2007년도 학술대회 1부 / pp.982-987 / 2007
  • The need for compression of volume data has grown with increasing data sizes and network use. Many compression schemes exist, and one can be chosen according to the data type, application field, user preference, and so on. However, the amount of data produced by application scientists has grown enormously, and most scientific data takes the form of 3D volumes. For 2D images and moving pictures, many standards are established and widely used, but for 3D volume data, especially time-varying volume data, applicable compression schemes are hard to find. In this paper, we present a compression scheme for encoding time-varying volume data, aimed at visualization. The scheme adopts MPEG's I- and P-frame concept to raise the compression ratio, and transforms the volume data with the Daubechies D4 filter before encoding, so that image quality is better than with other wavelet-based compression schemes. It encodes time-varying volume data composed of single-precision floating-point values, provides random access to reconstruction units, and can compress large time-varying volume data by exploiting the correlation between frames while preserving image quality.

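As a rough, hedged sketch of the kind of transform this scheme applies (not the authors' implementation), the Python below performs one decimated Daubechies D4 analysis level on a 1D signal with periodic extension; applying the same step separably along each axis gives the 3D transform of a volume frame. Function names and boundary handling are our assumptions.

```python
import numpy as np

# Daubechies D4 analysis filters (4 taps). H is the low-pass (scaling) filter,
# G the matching high-pass (wavelet) filter.
_S3 = np.sqrt(3.0)
H = np.array([1 + _S3, 3 + _S3, 3 - _S3, 1 - _S3]) / (4 * np.sqrt(2.0))
G = np.array([H[3], -H[2], H[1], -H[0]])

def d4_analysis_1d(signal):
    """One decimated D4 analysis level with periodic extension.

    Returns (approximation, detail) arrays of half the input length.
    """
    x = np.asarray(signal, dtype=np.float64)
    n = x.size
    assert n % 2 == 0, "signal length must be even"
    approx = np.empty(n // 2)
    detail = np.empty(n // 2)
    for i in range(n // 2):
        window = x[[(2 * i + k) % n for k in range(4)]]
        approx[i] = H @ window
        detail[i] = G @ window
    return approx, detail

# For a volume frame, this 1D step would be applied along the x, y and z axes
# in turn (separable 3D transform) before quantization and entropy coding.
```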

Depth 정보를 이용한 Texturing 의 View Selection 알고리즘 (View Selection Algorithm for Texturing Using Depth Maps)

  • 한현덕;한종기
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2022년도 하계학술대회 / pp.1207-1210 / 2022
  • Structure-from-Motion (SfM), which estimates camera poses from 2D images, and Multi-view Stereo (MVS), which estimates dense depth maps, make it possible to obtain 3D data such as point clouds from 2D images. Such 3D data are a key ingredient of content for VR, AR, and the metaverse. To be used in these fields, a point cloud is usually converted into a mesh and then textured. Existing texturing methods use only color information to remove outlier images for each mesh face. Color-based outlier removal works well when enough images cover a face and when the outliers come from moving objects, but it performs poorly when few images are available or when the outliers are caused by inaccurate camera parameters. This paper proposes a view selection method for texturing that additionally uses depth information to overcome these weaknesses of the existing approach.

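As a rough illustration of how depth can enter view selection, the sketch below tests whether a mesh face's depth in a candidate view agrees with that view's MVS depth map; the projection model, function name, and tolerance are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def depth_consistent(face_centroid, K, R, t, depth_map, tol=0.05):
    """Depth test for view selection in texturing (illustrative sketch).

    face_centroid: 3D point (3,); K: 3x3 intrinsics; R, t: world-to-camera pose;
    depth_map: per-pixel depth estimated by MVS for this view.
    Returns True if the face appears consistent (visible) in this view.
    """
    p_cam = R @ face_centroid + t            # transform into the camera frame
    if p_cam[2] <= 0:                        # behind the camera
        return False
    uv = K @ (p_cam / p_cam[2])              # perspective projection to pixels
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = depth_map.shape
    if not (0 <= u < w and 0 <= v < h):
        return False
    d_mvs = depth_map[v, u]
    # Keep the view only if the face's rendered depth agrees with the MVS depth;
    # a large gap indicates occlusion or an inconsistent (outlier) view.
    return abs(p_cam[2] - d_mvs) < tol * d_mvs
```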

GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM (Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration)

  • 이동화;김형진;명현
    • 제어로봇시스템학회논문지 / Vol. 19, No. 5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud-based map.
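A hedged sketch of the two geometric building blocks such a pipeline relies on: back-projecting 2D features with their depths into 3D via the pinhole model, and estimating a rigid transform from matched 3D points (the step a 3D-RANSAC loop would repeat on minimal samples). Names and the intrinsics layout are assumptions, not the paper's code.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel feature (u, v) with measured depth into a 3D point
    in the camera frame using the pinhole model."""
    z = depth
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t (Kabsch/SVD).
    Inside a RANSAC loop this would be fit to random 3-point samples and scored
    by the number of inlier correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    Hm = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(Hm)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```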

CT Based 3-Dimensional Treatment Planning of Intracavitary Brachytherapy for Cancer of the Cervix : Comparison between Dose-Volume Histograms and ICRU Point Doses to the Rectum and Bladder

  • Hashim, Natasha;Jamalludin, Zulaikha;Ung, Ngie Min;Ho, Gwo Fuang;Malik, Rozita Abdul;Ee Phua, Vincent Chee
    • Asian Pacific Journal of Cancer Prevention / Vol. 15, No. 13 / pp.5259-5264 / 2014
  • Background: CT-based brachytherapy allows 3-dimensional (3D) assessment of organ-at-risk (OAR) doses with dose-volume histograms (DVHs). The purpose of this study was to compare computed tomography (CT) based volumetric calculations and International Commission on Radiation Units and Measurements (ICRU) reference-point estimates of radiation doses to the bladder and rectum in patients with carcinoma of the cervix treated with high-dose-rate (HDR) intracavitary brachytherapy (ICBT). Materials and Methods: Between March 2011 and May 2012, 20 patients were treated with 55 fractions of brachytherapy using tandem and ovoids and underwent post-implant CT scans. The external beam radiotherapy (EBRT) dose was 48.6 Gy in 27 fractions. HDR brachytherapy was delivered to a dose of 21 Gy in three fractions. The ICRU bladder and rectum point doses, along with 4 additional rectal points, were recorded. The maximum dose ($D_{Max}$) to the rectum was the highest recorded dose at one of these five points. Using the HDRplus 2.6 brachytherapy treatment planning system, the bladder and rectum were retrospectively contoured on the 55 CT datasets. The DVHs for rectum and bladder were calculated, and the minimum dose to the most irradiated 2 cc of rectum and bladder ($D_{2cc}$) was recorded for each fraction. The mean $D_{2cc}$ of the rectum was compared to the mean ICRU rectal point dose and the mean rectal $D_{Max}$ using Student's t-test; the mean $D_{2cc}$ of the bladder was compared with the mean ICRU bladder point dose using the same test. The total dose, combining EBRT and HDR brachytherapy, was biologically normalized to the conventional 2 Gy/fraction using the linear-quadratic model ($\alpha/\beta$ of 10 Gy for the target, 3 Gy for organs at risk). Results: The total prescribed dose was 77.5 Gy$_{\alpha/\beta=10}$. The mean dose to the rectum was $4.58 \pm 1.22$ Gy for $D_{2cc}$, $3.76 \pm 0.65$ Gy at $D_{ICRU}$, and $4.75 \pm 1.01$ Gy at $D_{Max}$. The mean rectal $D_{2cc}$ differed significantly from the mean dose at the ICRU reference point (p<0.005); the mean difference was 0.82 Gy (0.48-1.19 Gy). The mean EQD2 was $68.52 \pm 7.24$ Gy$_{\alpha/\beta=3}$ for $D_{2cc}$, $61.71 \pm 2.77$ Gy$_{\alpha/\beta=3}$ at $D_{ICRU}$, and $69.24 \pm 6.02$ Gy$_{\alpha/\beta=3}$ at $D_{Max}$. Across all fractions, the mean ratio of rectal $D_{2cc}$ to rectal $D_{ICRU}$ was 1.25, and the mean ratio of rectal $D_{2cc}$ to rectal $D_{Max}$ was 0.98. The mean dose to the bladder was $6.00 \pm 1.90$ Gy for $D_{2cc}$ and $5.10 \pm 2.03$ Gy at $D_{ICRU}$. The mean bladder $D_{2cc}$ did not differ significantly from the mean dose at the ICRU reference point (p=0.307); the mean difference was 0.90 Gy (0.49-1.25 Gy). The mean EQD2 was $81.85 \pm 13.03$ Gy$_{\alpha/\beta=3}$ for $D_{2cc}$ and $74.11 \pm 19.39$ Gy$_{\alpha/\beta=3}$ at $D_{ICRU}$. The mean ratio of bladder $D_{2cc}$ to bladder $D_{ICRU}$ was 1.24. In the majority of applications, the maximum dose point was not the ICRU point. On average, the rectum received 77% and the bladder 92% of the prescribed dose. Conclusions: OAR doses assessed by DVH criteria were higher than ICRU point doses. Our data suggest that the ICRU bladder point may be a reasonable surrogate for the bladder $D_{2cc}$, and the rectal $D_{Max}$ for the rectal $D_{2cc}$. However, the ICRU rectal point does not appear to be a reasonable surrogate for the rectal $D_{2cc}$.
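For reference, the biological normalization mentioned in Materials and Methods is the standard linear-quadratic EQD2 conversion: a course of $n$ fractions of dose $d$ per fraction is converted as

$$\mathrm{EQD2} = n\,d\,\frac{d + \alpha/\beta}{2\,\mathrm{Gy} + \alpha/\beta},$$

with $\alpha/\beta = 10$ Gy for the target and 3 Gy for the rectum and bladder. As a check against the figures above, $48.6 \cdot \frac{1.8 + 10}{12} + 21 \cdot \frac{7 + 10}{12} \approx 47.8 + 29.8 \approx 77.5$ Gy$_{\alpha/\beta=10}$, matching the stated total prescribed dose.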

3차원 표면에서의 시계열 분석결과 표출방법 (Method of Displaying Time Series Analysis Results on a Three-Dimensional Surface)

  • 이봉준;박철희
    • 한국지리정보학회지 / Vol. 27, No. 1 / pp.1-11 / 2024
  • Much of the data measured today is collected on the ground surface, but because the elevation of each measurement point is not kept as part of the base data, difficulties arise when the data is to be used in a three-dimensional geographic information system. To display large amounts of data on the terrain surface, various methods can be used, such as drawing points on the surface using terrain information or extracting the elevation of each measurement point to generate polygons. Among the various ways of representing such data, this study explores data construction and display methods for improving the visualization performance of time-series measurement data rendered on the terrain surface, and examines the procedures along with their advantages and disadvantages.
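One concrete way to attach the missing elevation to ground-based measurement points, as discussed above, is to sample a terrain grid (DEM) bilinearly at each point's planimetric position; the sketch below is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

def sample_dem(dem, x, y, x0, y0, cell):
    """Bilinear elevation lookup for a measurement point (x, y).

    dem: 2D elevation grid indexed as dem[row, col]; (x0, y0): world coordinates
    of the grid origin; cell: grid spacing. Bounds checking is omitted for brevity.
    """
    cx, cy = (x - x0) / cell, (y - y0) / cell
    j, i = int(np.floor(cx)), int(np.floor(cy))
    fx, fy = cx - j, cy - i
    top = dem[i, j] * (1 - fx) + dem[i, j + 1] * fx
    bot = dem[i + 1, j] * (1 - fx) + dem[i + 1, j + 1] * fx
    return top * (1 - fy) + bot * fy
```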

3차원 스캔 데이터를 이용하여 임의의 신체 치수에 대응하는 인체 형상 모델 생성 방법 (Synthesis of Human Body Shape for Given Body Sizes using 3D Body Scan Data)

  • 장태호;백승엽;이건우
    • 한국CDE학회논문집 / Vol. 14, No. 6 / pp.364-373 / 2009
  • In this paper, we propose a method for constructing a parameterized human body model with any required body sizes from 3D scan data. Advances in 3D scanning technology provide detailed human body data from which precise human models can be generated, and much research in this field is based on 3D scan data. However, previous approaches have limitations: they require too much time for hole-filling or for computing the model parameterization, and they often omit a verification step. To address these problems, we first select 125 suitable 3D scans from the 5th Korean body size survey of Size Korea according to age, height, and weight. We then post-process the data (feature-point setting, RBF interpolation, and alignment) to parameterize the human model, and apply principal component analysis to the post-processed data to obtain the dominant shape parameters. These steps reduce processing time without loss of accuracy. Finally, we compare the results with the statistical data of Size Korea to verify our parameterized human model.
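The PCA step described above can be sketched as follows; the data layout (one row of stacked vertex coordinates per subject, already put in correspondence by the feature-point/RBF alignment) and the function names are assumptions for illustration.

```python
import numpy as np

def fit_shape_space(aligned_scans, n_components=10):
    """PCA shape space from aligned, corresponded body scans.

    aligned_scans: (n_subjects, n_vertices * 3) matrix of stacked vertex
    coordinates. Returns the mean shape, dominant shape modes, and the
    per-subject coefficients.
    """
    mean = aligned_scans.mean(axis=0)
    X = aligned_scans - mean
    # SVD of the centred data gives the principal shape components.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_components]                          # (n_components, n_vertices*3)
    weights = U[:, :n_components] * S[:n_components]   # coefficients per subject
    return mean, modes, weights

def synthesize(mean, modes, coeffs):
    """Reconstruct a body shape from shape parameters. Mapping target body
    sizes to these coefficients, as the paper does, is omitted here."""
    return mean + coeffs @ modes
```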

Development of an Automation Tool for the Three-Dimensional Finite Element Analysis of Machine Tool Spindles

  • Choi, Jin-Woo
    • 한국생산제조학회지 / Vol. 24, No. 2 / pp.166-171 / 2015
  • In this study, an automation tool was developed for rapid evaluation of machine tool spindle designs through automated three-dimensional finite element analysis (3D FEA) with solid elements. The tool performs FEA from minimal input: the point coordinates defining the cross-section of the spindle shaft and the bearing positions. Using object-oriented programming techniques, the tool was implemented in the programming environment of a CAD system so that it can make use of the system's objects. Its modules use these objects to generate the geometric model from the point data and then convert it into an FE model of 3D solid elements at the workbenches of the CAD system. Graphical user interfaces were developed to allow users to interact with the tool. The tool helps identify a near-optimal spindle design, for example with respect to stiffness, through multiple design changes followed by FEAs.
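The "minimum data" the tool consumes could be held in a container like the hypothetical sketch below (field names and layout are assumptions, not the tool's actual API): section points along the shaft plus bearing positions, from which the geometric and FE models are generated.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SpindleInput:
    """Hypothetical minimal input: points tracing the shaft section profile
    (axial position z, radius r) plus the axial positions of the bearings."""
    section_profile: List[Tuple[float, float]]  # (z, r) pairs along the shaft axis
    bearing_positions: List[float]              # axial positions of bearing centers

    def axial_length(self) -> float:
        zs = [z for z, _ in self.section_profile]
        return max(zs) - min(zs)
```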

Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal / Vol. 34, No. 5 / pp.791-794 / 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.
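A hedged illustration of the feature-preserving idea described above: points near detected facial features are always kept, while the rest are randomly subsampled. The radius, keep ratio, and use of a KD-tree are assumptions, not the paper's adaptive scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_preserving_downsample(points, feature_points, keep_ratio=0.2,
                                  feature_radius=5.0, rng=None):
    """Keep every point within feature_radius of a detected facial feature and
    randomly retain a fraction keep_ratio of the remaining points."""
    rng = np.random.default_rng() if rng is None else rng
    dists, _ = cKDTree(feature_points).query(points, k=1)
    near = dists < feature_radius
    far_idx = np.flatnonzero(~near)
    kept_far = rng.choice(far_idx, size=int(keep_ratio * far_idx.size),
                          replace=False)
    keep = np.concatenate([np.flatnonzero(near), kept_far])
    return points[keep]
```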

MMS로부터 취득된 LiDAR 점군데이터의 반사강도 영상과 UAV 영상의 정합을 위한 특징점 기반 매칭 기법 연구 (Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV)

  • 최윤조;;홍승환;손홍규
    • 한국측량학회지 / Vol. 37, No. 6 / pp.453-464 / 2019
  • As demand for 3D spatial information has grown, building data quickly and accurately has become increasingly important. Many studies have registered UAV (Unmanned Aerial Vehicle) images against LiDAR (Light Detection and Ranging) data, which allows precise 3D data construction, but studies that use intensity images derived from LiDAR point clouds acquired by an MMS (Mobile Mapping System) remain scarce. In this study, we therefore compared and analyzed nine feature-based matching algorithms for registering UAV images against intensity images converted from MMS LiDAR point clouds. The analysis shows that SIFT (Scale Invariant Feature Transform) consistently achieved high matching accuracy and extracted a sufficient number of tie points in a variety of road environments. In terms of registration accuracy, SIFT achieved an accuracy of about 10 pixels except in cases with low overlap or repeated identical patterns, which is a reasonable result considering the distortion caused by the UAV attitude at the time of image acquisition. The results of this study are therefore expected to serve as a basis for future research on 3D registration of LiDAR point clouds and UAV imagery.
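For reference, a minimal OpenCV-style sketch of SIFT matching with Lowe's ratio test between a LiDAR intensity image and a UAV image; the parameter values and any geometric verification used in the paper are not reproduced here.

```python
import cv2

def match_sift(intensity_img, uav_img, ratio=0.75):
    """SIFT keypoints and descriptors on both images, brute-force matching with
    Lowe's ratio test. Inputs are expected as 8-bit grayscale images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(intensity_img, None)
    kp2, des2 = sift.detectAndCompute(uav_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp1, kp2, good
```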

근사 함수를 이용한 Point-Based Simplification (Point-Based Simplification Using Moving-Least-Squares)

  • 조현철;배진석;김창헌
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2004년도 추계학술대회 논문집 / pp.1312-1314 / 2004
  • This paper proposes a new simplification algorithm that simplifies a polygonal mesh reconstructed from a 3D point set while taking the original point set into account. Previous methods compute the error using only mesh information, which lets the difference between the original and the simplified model grow as simplification proceeds. The proposed method simplifies the reconstructed model using the original point data, so the simplified model stays close to the original. We show several simplification results to demonstrate the usability of our method.

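A rough way to quantify error against the original point set, as the abstract argues for, is sketched below using nearest-vertex distances; the paper instead evaluates the simplified model against a moving-least-squares surface fit, which this sketch does not implement.

```python
import numpy as np
from scipy.spatial import cKDTree

def simplification_error(original_points, simplified_vertices):
    """Distance from every original scan point to its nearest vertex of the
    simplified model; returns the mean and maximum deviation."""
    dists, _ = cKDTree(simplified_vertices).query(original_points)
    return float(dists.mean()), float(dists.max())
```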