• Title/Abstract/Keywords: 3D images

Search results: 3,524 items (processing time: 0.031 s)

Recognition of partially occluded 3-D targets from computationally reconstructed integral images

  • Lee, Keong-Jin; Li, Gen; Lee, Guen-Sik; Hwang, Dong-Choon; Kim, Eun-Soo
    • 한국정보디스플레이학회:학술대회논문집 / 한국정보디스플레이학회 2008년도 International Meeting on Information Display / pp.761-762 / 2008
  • In this paper, a novel approach for robust recognition of partially occluded 3-D target objects from computationally reconstructed integral images is proposed. Occluding-object noise is selectively removed from the picked-up elemental images, which improves the performance of the proposed integral-imaging-based 3-D target recognition system.

3D Visualization for Extremely Dark Scenes Using Merging Reconstruction and Maximum Likelihood Estimation

  • Lee, Jaehoon; Cho, Myungjin; Lee, Min-Chul
    • Journal of information and communication convergence engineering / Vol. 19, No. 2 / pp.102-107 / 2021
  • In this paper, we propose a new three-dimensional (3D) photon-counting integral imaging reconstruction method using a merging reconstruction process and maximum likelihood estimation (MLE). The conventional 3D photon-counting reconstruction method extracts photons from elemental images using a Poisson random process and estimates the scene using statistical methods such as MLE. However, it can reduce the photon levels because of an average overlapping calculation. Thus, it may not visualize 3D objects in severely low light environments. In addition, it may not generate high-quality reconstructed 3D images when the number of elemental images is insufficient. To solve these problems, we propose a new 3D photon-counting merging reconstruction method using MLE. It can visualize 3D objects without photon-level loss through a proposed overlapping calculation during the reconstruction process. We confirmed the image quality of our proposed method by performing optical experiments.
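
For context, the following is a minimal Python sketch, under assumed inputs and not the authors' code, of the conventional photon-counting reconstruction the abstract contrasts against: each elemental image is reduced to Poisson photon counts, the depth-shifted counts are overlapped, and the scene is estimated by the per-pixel average (the Poisson maximum-likelihood estimate), which is the step whose photon-level loss the proposed merging reconstruction avoids.

```python
import numpy as np

# Minimal sketch (assumed, not the authors' code) of conventional 3D
# photon-counting integral-imaging reconstruction with Poisson MLE.

def photon_count(elemental_image, expected_photons=1000):
    """Simulate photon-limited capture: Poisson counts from a normalized image."""
    prob = elemental_image / elemental_image.sum()
    return np.random.poisson(expected_photons * prob)

def reconstruct_mle(elemental_images, shifts):
    """Overlap depth-shifted photon-count images and average them per pixel.
    The per-pixel mean is the Poisson maximum-likelihood estimate; this
    averaging is the step the proposed merging reconstruction replaces."""
    acc = np.zeros(elemental_images[0].shape, dtype=float)
    for img, (dy, dx) in zip(elemental_images, shifts):
        counts = photon_count(img)
        acc += np.roll(counts, (dy, dx), axis=(0, 1))  # shift toward the target depth plane
    return acc / len(elemental_images)
```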

2D 이미지를 이용한 3D 공간상의 자연현상 라이브러리 구축 (Construction of Library for 3D Natural Phenomena Using 2D Images)

  • 김종찬; 김종성; 김응곤; 김치용
    • 디지털콘텐츠학회 논문지 / Vol. 9, No. 3 / pp.461-470 / 2008
  • Natural phenomena can be rendered with imaging technology either by computer simulation of the phenomena or by script-based generation of the desired output images. Most practitioners obtain such output images using large amounts of data and sophisticated mathematics, which makes acquiring them costly and time-consuming. In this paper, instead of complex equations, programming, or filming, we render a natural phenomenon in 3D space using only 2D images: fog, one of the fluid natural phenomena, is expressed easily, and a natural-phenomena library is built that can efficiently handle background processing when producing Oriental-style paintings in 3D space.

Precision comparison of 3D photogrammetry scans according to the number and resolution of images

  • Park, JaeWook; Kim, YunJung; Kim, Lyoung Hui; Kwon, SoonChul; Lee, SeungHyun
    • International journal of advanced smart convergence / Vol. 10, No. 2 / pp.108-122 / 2021
  • With the development of 3D graphics software and faster computer hardware, realistic imagery can now be produced not only for film visual effects but also for console games. In the production of such realistic 3D models, 3D scanning is increasingly used because it yields hyper-realistic results with relatively little effort. Among the various 3D scanning methods, photogrammetry requires only a camera and no additional hardware, so demand for it is growing rapidly. Most 3D artists shoot as many images as possible, for example with a video camera, and then run the computation on all of them, so photogrammetry is regarded as a task that demands large amounts of memory and long processing times. However, research on how to obtain precise results with 3D photogrammetry scans is insufficient, and relying on a large number of photographs increases production time and data volume while reducing productivity. In this study, point cloud data were generated while varying the number and resolution of the photographic images, and an experiment was conducted to compare them with the original data. Precision was then measured using the average distance and standard deviation of each vertex of the point cloud. By comparing and analyzing the differences in precision of 3D photogrammetry scans according to the number and resolution of images, this paper offers 3D artists guidance for obtaining the most precise and efficient results.
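
A minimal sketch of the kind of precision comparison described above, under an assumed point-cloud representation rather than the paper's actual evaluation code: precision is summarized by the mean and standard deviation of per-vertex nearest-neighbor distances between a test cloud and a reference cloud.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_precision(test_points, reference_points):
    """test_points: (N, 3) vertices of the cloud under test;
    reference_points: (M, 3) vertices of the reference cloud.
    Returns the mean and standard deviation of each test vertex's
    distance to its nearest reference vertex."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points)
    return float(distances.mean()), float(distances.std())

# Example: a cloud built from fewer or lower-resolution photos, compared
# against the cloud built from the full image set.
# mean_d, std_d = cloud_precision(reduced_cloud, full_cloud)
```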

Optimization of block-matching and 3D filtering (BM3D) algorithm in brain SPECT imaging using fan beam collimator: Phantom study

  • Do, Yongho; Cho, Youngkwon; Kang, Seong-Hyeon; Lee, Youngjin
    • Nuclear Engineering and Technology / Vol. 54, No. 9 / pp.3403-3414 / 2022
  • The purpose of this study is to model and optimize the block-matching and 3D filtering (BM3D) algorithm and to evaluate its applicability in brain single-photon emission computed tomography (SPECT) images using a fan beam collimator. For quantitative evaluation of the noise level, the coefficient of variation (COV) and contrast-to-noise ratio (CNR) were used, and finally, a no-reference-based evaluation parameter was used for optimization of the BM3D algorithm in the brain SPECT images. As a result, optimized results were derived when the sigma values of the BM3D algorithm were 0.15, 0.2, and 0.25 in brain SPECT images acquired for 5, 10, and 15 s, respectively. In addition, when the sigma value of the optimized BM3D algorithm was applied, superior results were obtained compared with conventional filtering methods. In particular, we confirmed that the COV and CNR of the images obtained using the BM3D algorithm were improved by 2.40 and 2.33 times, respectively, compared with the original image. In conclusion, the usefulness of the optimized BM3D algorithm in brain SPECT images using a fan beam collimator has been proven, and based on the results, it is expected that its application in various nuclear medicine examinations will be possible.
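
For reference, the two image-quality metrics named in the abstract can be computed from regions of interest roughly as follows; this is a generic ROI-based sketch with assumed definitions, not the study's evaluation code.

```python
import numpy as np

def cov(uniform_roi):
    """Coefficient of variation of a uniform region of interest (lower = less noise)."""
    return uniform_roi.std() / uniform_roi.mean()

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# The optimization in the abstract amounts to sweeping candidate sigma values,
# denoising with BM3D at each sigma, and comparing COV/CNR (plus a no-reference
# metric) of the results, e.g. best = max(sigmas, key=score_of_denoised) with a
# hypothetical score_of_denoised(sigma) wrapper around the BM3D filter.
```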

3D 초음파 영상에서 방광 내 잔뇨량 추정을 위한 새로운 알고리즘 (A New Algorithm to Estimate Urine Volume from 3D Ultrasound Bladder Images)

  • 조태식; 이수열; 조민형
    • 대한의용생체공학회:의공학회지 / Vol. 37, No. 1 / pp.31-38 / 2016
  • For patients with bladder dysfunction, measuring the urine volume inside the bladder is critical to avoid bladder failure. Low-resolution 3D ultrasound images are widely used for this measurement. However, urine volume estimation from 3D ultrasound images is prone to large errors and inconsistency because of the low spatial resolution and low signal-to-noise ratio of ultrasound images. We developed a new, robust volume estimation algorithm that is not computationally expensive and tested it on a lab-built ultrasound bladder phantom and on volunteers. The average error of the human bladder volume estimation was 5.9%, better than that of a commercial machine.

Hologram Generation of 3D Objects Using Multiple Orthographic View Images

  • Kim, Min-Su; Baasantseren, Ganbat; Kim, Nam; Park, Jae-Hyeung
    • Journal of the Optical Society of Korea / Vol. 12, No. 4 / pp.269-274 / 2008
  • We propose a new synthesis method for the hologram of 3D objects using incoherent multiple orthographic view images. The 3D objects are captured, and their multiple orthographic view images are generated from the captured data. Each orthographic view image is numerically multiplied by the plane wave propagating in the direction of the corresponding view angle and integrated to form a point in the hologram plane. By repeating this process for all orthographic view images, we can generate the Fourier hologram of the 3D objects.
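
A minimal numerical sketch of the synthesis step described above, with assumed sampling and wavelength values rather than the authors' implementation: each orthographic view is multiplied by a plane wave tilted along its view direction and integrated to yield one complex sample of the Fourier hologram.

```python
import numpy as np

def fourier_hologram(views, angles, wavelength=633e-9, pixel_pitch=10e-6):
    """views: list of (H, W) orthographic view images.
    angles: matching list of (theta_x, theta_y) view angles in radians.
    Returns one complex hologram sample per view direction."""
    k = 2.0 * np.pi / wavelength
    h, w = views[0].shape
    y, x = np.mgrid[0:h, 0:w] * pixel_pitch            # pixel coordinates in meters
    samples = np.zeros(len(views), dtype=complex)
    for i, (view, (tx, ty)) in enumerate(zip(views, angles)):
        plane_wave = np.exp(1j * k * (x * np.sin(tx) + y * np.sin(ty)))
        samples[i] = (view * plane_wave).sum()          # integrate to one hologram point
    return samples  # arrange on the 2D grid of view directions to form the hologram
```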

3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템 (A Facial Animation System Using 3D Scanned Data)

  • 구본관; 정철희; 이재윤; 조선영; 이명원
    • 정보처리학회논문지A / Vol. 17A, No. 6 / pp.281-288 / 2010
  • This paper describes the development of a system that generates high-quality 3D face models and morphing animations from 3D facial scan data and photographic images. The system consists of a facial feature-point input tool, a facial texture-mapping interface, and a 3D face-morphing interface. The facial feature-point input tool is an auxiliary tool for 3D texture mapping and morphing animation; the entered feature points are used for texture mapping and for defining the morphing regions between two arbitrary faces. Texture mapping projects photographic images taken from three directions onto the facial geometry acquired with a 3D scanner. 3D face morphing partitions the face into regions around the feature points obtained from the input tool and performs region-to-region mapping between two arbitrary faces. With this system, users can obtain realistic 3D face models by texture-mapping photographs onto face mesh data acquired from a 3D scanner through an interactive interface, without any additional programming, and can easily create morphing animations between arbitrary face models.
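
A generic vertex-blending sketch of the morphing idea described above; the mesh representation and function below are illustrative assumptions, not the paper's system, which performs the blend per facial region defined by the entered feature points.

```python
import numpy as np

def morph_faces(vertices_a, vertices_b, t):
    """vertices_a, vertices_b: (N, 3) arrays of corresponding mesh vertices.
    t: morph parameter in [0, 1] (0 = face A, 1 = face B).
    Returns linearly interpolated vertex positions."""
    return (1.0 - t) * vertices_a + t * vertices_b

# Animating t from 0 to 1 over successive frames gives a morphing animation;
# the described system applies this per facial region, with the regions
# defined by the user-entered feature points.
```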

초음파 진단기의 설정 파라미터가 영상의 질에 미치는 효과 (Effects of Ultrasonic Scanner Setting Parameters on the Quality of Ultrasonic Images)

  • 양정화; 이경성; 강관석; 팽동국; 최민주
    • 한국음향학회지 / Vol. 27, No. 2 / pp.57-65 / 2008
  • The settings of an ultrasonic scanner affect image quality, so sonographers must understand how the setting parameters influence ultrasound image quality in order to obtain optimal images. This study considered four representative image-control parameters: TGC (Time Gain Control), Gain, Frequency, and DR (Dynamic Range). Ultrasound image quality was evaluated quantitatively in terms of LCS (Low Contrast Sensitivity). In the experiments, the LCS targets of an ultrasound evaluation phantom (539, ATS, USA) were imaged with a clinical ultrasonic scanner (SA-9000 PRIME, Medison, Korea). While varying the image-control parameter settings, six LCS images (+15 dB, +6 dB, +3 dB, -3 dB, -6 dB, -15 dB) were acquired for each setting, and the LCS pixel values of the images were calculated. The mean pixel values of the target (LCS) images were highest when TGC was at maximum, Gain was between medium and maximum, Frequency was in Pen mode, and DR was 40-66 dB. For all target images, images with good LCS were obtained at a DR of 40 dB. These results are expected to provide useful information for selecting appropriate image-control parameter settings when evaluating masses with solid lesions (similar to the +15, +6, and +3 dB targets) or cystic lesions (similar to the -15, -6, and -3 dB targets) that are commonly encountered in clinical practice.

Difference in glenoid retroversion between two-dimensional axial computed tomography and three-dimensional reconstructed images

  • Kim, Hyungsuk; Yoo, Chang Hyun; Park, Soo Bin; Song, Hyun Seok
    • Clinics in Shoulder and Elbow / Vol. 23, No. 2 / pp.71-79 / 2020
  • Background: The glenoid version of the shoulder joint correlates with the stability of the glenohumeral joint and with the clinical results of total shoulder arthroplasty. We sought to analyze and compare the glenoid version measured on traditional axial two-dimensional (2D) computed tomography (CT) and on three-dimensional (3D) reconstructed images at different levels. Methods: A total of 30 cases, including 15 male and 15 female patients who underwent 3D shoulder CT imaging, were consecutively and randomly selected at one hospital and matched by sex. The angular difference between the scapular body axis and the 2D CT slice axis was measured. The glenoid version was assessed at three levels (midpoint, upper one-third, and center of the lower circle of the glenoid) using Friedman's method in the axial plane on the 2D CT images, and at the same levels in three transverse planes on the 3D reconstructed image. Results: The mean difference between the scapular body axis on the 3D reconstructed image and the 2D CT slice axis was 38.4°. At the level of the midpoint of the glenoid, the measurements were 1.7°±4.9° on the 2D CT images and -1.8°±4.1° on the 3D reconstructed image. At the level of the center of the lower circle, the measurements were 2.7°±5.2° on the 2D CT images and -0.5°±4.8° on the 3D reconstructed image. A statistically significant difference was found between the 2D CT and 3D reconstructed images at all three levels. Conclusions: The glenoid version is measured differently on axial 2D CT and on 3D reconstructed images at all three levels, and it also differs between levels. Use of 3D reconstructed imaging can provide a more accurate glenoid version profile than 2D CT.