• Title/Summary/Keyword: The 3D image


Comparison of the observer reliability of cranial anatomic landmarks based on cephalometric radiograph and three-dimensional computed tomography scans (삼차원 전산화단층촬영사진과 측모두부 방사선규격사진의 계측자에 따른 계측오차에 대한 비교분석)

  • Kim, Jae-Young;Lee, Dong-Keun;Lee, Sang-Han
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.36 no.4 / pp.262-269 / 2010
  • Introduction: Accurate diagnosis and treatment planning are very important in orthognathic surgery; a small error in diagnosis can cause postoperative functional and esthetic problems. Conventional two-dimensional (2D) cephalometric analysis has a high likelihood of error due to its intrinsic and extrinsic problems. A cephalogram can also be inaccurate because of the limited anatomic points, superimposition of the image, and the considerable time and effort required. Recently, improvements in technology and the popularization of computed tomography (CT) have made 3D computer-based cephalometric analysis available, which complements traditional analysis in many ways. However, the results are affected by the experience and subjectivity of the investigator. Materials and Methods: The effects of sources of human error in 2D cephalometric analysis and 3D CT cephalometric analysis were compared using the Simplant CMF program. Patients who had undergone CT and anteroposterior and lateral cephalograms between January 2008 and June 2009 were investigated. Results: 1. In the 3D and 2D images, 10 out of 93 variables (10.4%) and 11 out of 44 variables (25%), respectively, showed a significant difference. 2. The landmarks that showed a significant difference in the 2D images were points that are frequently superimposed anatomically. 3. The Go, Po, and Orb landmarks, which showed a significant difference in the 3D images, are artificial points defined for analysis of the 2D image; under their current definitions, these points are not reproducible in the 3D image. Conclusion: In general, 3D CT images provide more precise identification of the traditional cephalometric landmarks. The greater variability of certain landmarks in the mediolateral direction is probably related to the inadequate definition of those landmarks in the third dimension.

A Study on Process of Creating 3D Models Using the Application of Artificial Intelligence Technology

  • Jiayuan Liang;Xinyi Shan;Jeanhun Chung
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.346-351 / 2023
  • With the rapid development of Artificial Intelligence (AI) technology, there is an increasing variety of methods for creating 3D models. These include innovations such as text-only generation, 2D images to 3D models, and combining images with cue words. Each of these methods has unique advantages, opening up new possibilities in the field of 3D modeling. The purpose of this study is to explore and summarize these methods in-depth, providing researchers and practitioners with a comprehensive perspective to understand the potential value of these methods in practical applications. Through a comprehensive analysis of pure text generation, 2D images to 3D models, and images with cue words, we will reveal the advantages and disadvantages of the various methods, as well as their applicability in different scenarios. Ultimately, this study aims to provide a useful reference for the future direction of AI modeling and to promote the innovation and progress of 3D model generation technology.

Reconstruction of Collagen Using Tensor-Voting & Graph-Cuts

  • Park, Doyoung
    • Journal of Advanced Information Technology and Convergence / v.9 no.1 / pp.89-102 / 2019
  • Collagen can be used to build artificial skin replacements for the treatment of burns, in the reconstruction of bone, and in research on cell behavior and cellular interaction. The strength of collagen in connective tissue rests on the characteristics of collagen fibers. 3D confocal imaging of collagen fibers enables the characterization of their spatial distribution as related to their function. However, the image stacks acquired with a confocal laser-scanning microscope do not clearly show the collagen architecture in 3D. Therefore, we developed a new method to reconstruct, visualize, and characterize collagen fibers from fluorescence confocal images. First, we exploit the tensor voting framework to extract sparse, reliable information about the collagen structure in a 3D image, and thereby denoise and filter the acquired image stack. We then segment the collagen fibers by defining an energy term based on the Hessian matrix. This energy term is minimized by a min-cut/max-flow algorithm that allows adaptive regularization. We demonstrate the efficacy of our method by visualizing collagen reconstructed from a specific 3D image stack.
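The Hessian-based energy term described above exploits the fact that, at a bright tubular structure, the two cross-sectional Hessian eigenvalues are strongly negative while the along-fiber eigenvalue is near zero. A minimal numpy sketch of that eigenvalue test on a synthetic fiber (an illustration of the principle only, not the paper's tensor-voting or graph-cut steps):

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Eigenvalues of the Hessian at every voxel of a 3D volume,
    sorted ascending; computed with finite differences."""
    grads = np.gradient(vol)                      # first derivatives
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])            # second derivatives
        for j in range(3):
            H[..., i, j] = second[j]
    return np.linalg.eigvalsh(H)                  # ascending order

# Synthetic stack: one bright "fiber" running along the first axis.
vol = np.zeros((16, 16, 16))
vol[:, 8, 8] = 1.0

lam = hessian_eigenvalues(vol)
# On a bright tube, the two cross-sectional eigenvalues are strongly
# negative while the along-fiber eigenvalue stays near zero.
fiberness = ((lam[..., 0] < -0.1) & (lam[..., 1] < -0.1)
             & (np.abs(lam[..., 2]) < 0.05)).astype(int)
print(fiberness[8, 8, 8])   # → 1 (voxel on the fiber)
```

In the paper this kind of per-voxel fiber measure feeds the energy term that the min-cut/max-flow step then minimizes.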

The Value of Three-Dimensional Reconstructions of MRI Imaging using Maximum Intensity Projection Technique (유방 MRI의 최대강도투사 기법에 의한 3차원 재구성 영상의 유용성)

  • Cho, Jae-Hwan;Lee, Hae-Kag;Hong, In-Sik;Kim, Hyun-Joo;Jang, Hyun-Cheol;Park, Cheol-Soo;Park, Tae-Nam
    • Journal of Digital Contents Society / v.12 no.2 / pp.157-164 / 2011
  • The purpose of this study was to examine the usefulness of 3D reconstruction images in breast MRI by performing a quantitative comparative analysis in patients diagnosed with ductal carcinoma in situ (DCIS). On a 3.0T MR scanner, subtraction images and 3D reconstruction images were obtained from 20 patients histologically diagnosed with DCIS. The quantitative image analysis showed the following: the 3D reconstruction images had a higher SNR in the lesion, ductal, and fat areas than the subtraction images, while the CNR of the lesion area did not differ significantly between the subtraction images and the 3D reconstruction images.
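A maximum intensity projection keeps the brightest voxel along each projection ray, which is why a small enhancing lesion survives the projection while uncorrelated noise largely does not. A toy numpy sketch of the MIP and a simple SNR estimate (synthetic data, not the study's MR images; the ROI placement is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subtraction stack: 8 slices of background noise with one
# bright "lesion" voxel in slice 3.
stack = rng.normal(0.0, 1.0, size=(8, 64, 64))
stack[3, 32, 32] += 50.0

# Maximum intensity projection: brightest voxel along each ray.
mip = stack.max(axis=0)

# Simple SNR estimate: lesion signal over background noise spread,
# using a corner region assumed to be lesion-free.
background = mip[:16, :16]
snr = (mip[32, 32] - background.mean()) / background.std()
print(snr > 10)   # the projected lesion stands far above the noise
```

The study's SNR/CNR figures were of course measured on real subtraction and reconstruction images; this only illustrates why the projection favors bright lesions.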

Study on Co-Simulation Method of Dynamics and Guidance Algorithms for Strap-Down Image Tracker Using Unity3D (Unity3D를 이용한 스트랩 다운 영상 추적기의 동역학 및 유도 법칙 알고리즘의 상호-시뮬레이션 방법에 관한 연구)

  • Marin, Mikael;Kim, Taeho;Bang, Hyochoong;Cho, Hanjin;Cho, Youngki;Choi, Yonghoon
    • Journal of the Korean Society for Aeronautical &amp; Space Sciences / v.46 no.11 / pp.911-920 / 2018
  • In this study, we tracked the angle between a guided weapon and its target using a strap-down image seeker, and constructed a test bed that can simulate this visually. This paper describes a method for maintaining a well-distributed, high-performance feature set when implementing a sparse feature tracking algorithm, such as the Lucas-Kanade optical flow algorithm, for target tracking from image information. We extend the feature tracking problem to the concept of feature management. To realize this, we constructed the visual environment with the Unity3D engine and developed the image processing simulation with OpenCV. For the co-simulation, the dynamic system was modeled in Matlab Simulink, the visual environment was built in Unity3D, and the computer vision processing was performed with OpenCV.
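The feature-management idea above, replenishing the tracked feature set whenever tracking losses thin it out, can be sketched independently of the imaging pipeline. The detector and tracker below are random stand-ins (assumptions, not the paper's Unity3D/OpenCV implementation) so that the replenishment logic itself stays visible:

```python
import random

MIN_FEATURES = 50

def detect_features(n):
    """Stand-in for a corner detector (e.g. goodFeaturesToTrack)."""
    return [(random.random(), random.random()) for _ in range(n)]

def track_features(features, survival=0.8):
    """Stand-in for Lucas-Kanade tracking: some features are lost
    each frame (occlusion, leaving the image, bad matches)."""
    return [p for p in features if random.random() < survival]

random.seed(1)
features = detect_features(100)
for frame in range(20):
    features = track_features(features)
    if len(features) < MIN_FEATURES:
        # Replenish: re-detect enough new features to restore margin.
        features += detect_features(MIN_FEATURES - len(features) + 50)

print(len(features) >= MIN_FEATURES)   # → True
```

In a real tracker the re-detection would also be masked away from existing features so the distribution over the image stays even, which is the distribution-maintenance aspect the paper emphasizes.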

Clinical Application of Three-Dimensional Reconstruction in Shoulder Surgeries

  • Kim, Sung-Hwan;Ha, Seung-Joo
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.67-70 / 2014
  • 3D medical image reconstruction using computer simulation technology has been used to study anatomical features and biomechanical characteristics as computer hardware and software have advanced. In orthopaedics in particular, 3D image reconstruction has proven useful both for improving surgical technique and for deepening knowledge of shoulder joint anatomy. The purpose of this article is to introduce the use of 3D image technology in shoulder surgeries.

Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo;Yoo, Kook-Yeol;Park, Ju H.;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.1 / pp.1-9 / 2019
  • In this paper, we propose a method for generating a 2D gray image from LiDAR 3D reflection intensity. The proposed method uses a Fully Convolutional Network (FCN) to generate the gray image from the 2D reflection intensity projected from the LiDAR 3D intensity. Both the encoder and the decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer, and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI data set, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method improves the peak signal-to-noise ratio by 8.56 dB and the structural similarity index measure by 0.33 compared with conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method could be used as an assistance tool in night-time driving systems for autonomous vehicles.
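The 8.56 dB gain above is measured in peak signal-to-noise ratio, which is computed from the mean squared error against a reference image. A small numpy sketch of the metric itself (synthetic images, not the KITTI evaluation):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# A degraded version: reference plus Gaussian noise of sigma 5.
noisy = ref + rng.normal(0.0, 5.0, size=ref.shape)
print(round(psnr(ref, noisy), 1))   # roughly 34 dB for sigma = 5
```

A higher PSNR means the generated gray image is closer, pixel for pixel, to the camera reference; SSIM complements it by comparing local structure rather than raw differences.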

Recent Trends of Weakly-supervised Deep Learning for Monocular 3D Reconstruction (단일 영상 기반 3차원 복원을 위한 약교사 인공지능 기술 동향)

  • Kim, Seungryong
    • Journal of Broadcast Engineering / v.26 no.1 / pp.70-78 / 2021
  • Estimating 3D information from a single image is one of the essential problems in numerous applications. Since a 2D image can inherently originate from an infinite number of different 3D scenes, 3D reconstruction from a single image is notoriously challenging. This challenge has been addressed by recent deep convolutional neural networks (CNNs), which model the mapping function between a 2D image and its 3D information. However, training such deep CNNs demands massive training data, which is difficult or even impossible to obtain. Recent trends thus aim at deep learning techniques that can be trained in a weakly-supervised manner, using only meta-data and without relying on ground-truth depth data. In this article, we introduce recent developments in weakly-supervised deep learning techniques, categorized into scene 3D reconstruction and object 3D reconstruction, and discuss their limitations and further directions.
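A common way such methods avoid ground-truth depth is a view-synthesis (photometric) loss: a second view is warped into the target frame using the predicted depth or disparity, and the pixel difference replaces the missing supervision. A toy numpy sketch of that loss for the horizontal-disparity (stereo) case, with hypothetical names and integer disparities for simplicity:

```python
import numpy as np

def warp_with_disparity(src, disp):
    """Shift each pixel of `src` horizontally by its integer disparity."""
    h, w = src.shape
    cols = np.arange(w)
    sample = np.clip(cols - disp, 0, w - 1)   # per-pixel source column
    return src[np.arange(h)[:, None], sample]

def photometric_loss(target, src, disp):
    """Mean absolute error between the target and the warped view."""
    return np.abs(target - warp_with_disparity(src, disp)).mean()

# Toy pair: the "right" image is the "left" image shifted by 3 pixels,
# i.e. the true disparity is 3 everywhere.
left = np.tile(np.arange(32, dtype=float), (8, 1))
true_disp = np.full((8, 32), 3, dtype=int)
right = warp_with_disparity(left, true_disp)

good = photometric_loss(right, left, true_disp)
bad = photometric_loss(right, left, np.zeros((8, 32), dtype=int))
print(good < bad)   # the correct disparity gives lower photometric error
```

Real methods use sub-pixel bilinear sampling and robust losses (e.g. SSIM-weighted L1), but the supervisory signal is this same reconstruction error.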

Evaluation of Radioactivity Concentration According to Radioactivity Uptake on Image Acquisition of PET/CT 2D and 3D (PET/CT 2D와 3D 영상 획득에서 방사능 집적에 따른 방사능 농도의 평가)

  • Park, Sun-Myung;Hong, Gun-Chul;Lee, Hyuk;Kim, Ki;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.111-114 / 2010
  • Purpose: There has been recent interest in how radioactivity uptake and the image acquisition mode affect measured radioactivity concentration. The degree of uptake is strongly affected by many factors, including the 18F-FDG injection volume, tumor size, and blood glucose level. We therefore investigated how the radioactivity uptake in a target influences 2D and 3D image analysis, and evaluated the resulting radioactivity concentrations. This study shows the relationship between radioactivity uptake and 2D/3D image acquisition with respect to radioactivity concentration. Materials and Methods: Images were acquired in 2D and 3D modes using a 1994 NEMA PET phantom on a GE Discovery STe 16 PET/CT scanner (GE, USA), with background-to-hot-sphere radioactivity concentration ratios of 1:2, 1:4, 1:8, 1:10, 1:20, and 1:30. CT attenuation correction was applied, and the acquisition time was 10 minutes. For reconstruction, an iterative method with 2 iterations and 20 subsets was applied to both the 2D and 3D data. For image analysis, identical ROIs were placed at the center of the hot sphere and in the background, and the radioactivity counts of each were measured and comparatively analyzed. Results: The measured ratios of hot-sphere to background radioactivity concentration in the ROIs were 1:1.93, 1:3.86, 1:7.79, 1:8.04, 1:18.72, and 1:26.90 in 2D, and 1:1.95, 1:3.71, 1:7.10, 1:7.49, 1:15.10, and 1:23.24 in 3D. The percentage differences were 3.50%, 3.47%, 8.12%, 8.02%, 10.58%, and 11.06% in 2D (minimum 3.47%, maximum 11.06%), and 3.66%, 4.80%, 8.38%, 23.92%, 23.86%, and 22.69% in 3D. Conclusion: The difference between nominal and measured concentration increases significantly as the radioactivity concentration rises, and 2D images are less affected by this change than 3D images. Therefore, when a patient is examined at follow-up with a different acquisition mode, the scan should be conducted with the understanding that these factors can affect the quantitative analysis, and the differences should be taken into account at reading.
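The measured ratios above come from comparing mean ROI counts in the hot sphere and in the background. A synthetic sketch of that ROI-ratio measurement (Poisson counts and a 1:4 nominal ratio are assumptions for illustration, not the phantom data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic slice: Poisson background counts around 100, with a square
# "hot" region at 4x the background activity.
image = rng.poisson(100, size=(64, 64)).astype(float)
image[20:30, 20:30] = rng.poisson(400, size=(10, 10))

# Equal-sized ROIs: one inside the hot region, one in the background.
hot_roi = image[22:28, 22:28].mean()
bkg_roi = image[40:60, 40:60].mean()
print(round(hot_roi / bkg_roi, 1))   # close to the nominal 4.0
```

In the phantom study the recovered ratio falls short of the nominal one (partial-volume and scatter effects grow with concentration), which is exactly the deviation the percentage differences quantify.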


Analysis of image distortion in 3D integral imaging display (집적결상된 3차원 영상의 중복 및 누락 왜곡에 대한 연구)

  • 서장일;차성도;신승호
    • Korean Journal of Optics and Photonics / v.15 no.3 / pp.234-240 / 2004
  • In an integral imaging system for 3D display, we investigated the image distortions, such as duplication and omission, that appear in the reconstructed image. We also discuss the quantitative conditions that minimize the distortion in terms of several fundamental variables, and present experimental results that support the quantitative analysis of the distortion.