• Title/Summary/Keyword: 3D images

Search Results: 3,552

Enhancement Pattern of the Normal Facial Nerve on Three-Dimensional (3D) Fluid-Attenuated Inversion Recovery (FLAIR) Sequence at 3.0 T MR Units

  • Hyun, Dong-Ho;Lim, Hyun-Kyung;Park, Jee-Won;Kim, Jong-Lim;Lee, Ha-Young;Park, Soon-Chan;Ahn, Joong-Ho;Baek, Jung-Hwan;Choi, Choong-Gon;Lee, Jeong-Hyun
    • Investigative Magnetic Resonance Imaging, v.16 no.1, pp.25-30, 2012
  • Purpose: To compare the enhancement pattern of normal facial nerves on 3D-FLAIR and 3D-T1-FFE-FS sequences at 3.0 T MR units. Materials and Methods: We assessed 20 consecutive subjects without a history of facial nerve abnormalities who underwent contrast-enhanced temporal bone MRI between January 2008 and March 2009. Two neuroradiologists independently reviewed the pre- and post-contrast 3D-T1-FFE-FS and 3D-FLAIR images in separate sessions two weeks apart to assess the enhancement of the normal facial nerve, divided into five anatomical segments. The degree of enhancement in each segment was graded as none, mild, or strong, and the results of the 3D-FLAIR and 3D-T1-FFE-FS image sets were compared. Results: On 3D-FLAIR images, one of the two reviewers observed mild enhancement of the genu segment in two (10%) subjects. On 3D-T1-FFE-FS images, at least one segment of the facial nerve was enhanced in 13 (65%) subjects. At least one reviewer found that 17 of the 100 segments showed enhancement on 3D-T1-FFE-FS images, with the mastoid segment being the most commonly enhanced. Interobserver agreement on 3D-T1-FFE-FS images was good for enhancement of the normal facial nerve (κ = 0.589). Conclusion: In contrast to 3D-T1-FFE-FS, normal facial nerve segments rarely showed enhancement on 3D-FLAIR images.
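
The interobserver agreement reported above (κ = 0.589) is a Cohen's kappa, which discounts the agreement two readers would reach by chance. A minimal sketch of that computation, with hypothetical grade lists rather than the study's actual readings:

```python
# A minimal sketch of Cohen's kappa, the agreement statistic reported above;
# the grade lists below are hypothetical examples, not the study data.
import numpy as np

def cohen_kappa(a, b, labels):
    a, b = np.asarray(a), np.asarray(b)
    p_observed = np.mean(a == b)                                # raw agreement
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
    return (p_observed - p_chance) / (1.0 - p_chance)

reader1 = ["none", "none", "mild", "strong", "none", "mild"]
reader2 = ["none", "mild", "mild", "strong", "none", "none"]
print(cohen_kappa(reader1, reader2, labels=["none", "mild", "strong"]))
```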

A New Illumination Compensation Method based on Color Optimization Function for Generating 3D Volumetric Model

  • Park, Byung-Seo;Kim, Kyung-Jin;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering, v.25 no.4, pp.598-608, 2020
  • In this paper, we propose a color correction technique for images acquired through a multi-view camera system used to capture a 3D model. It is assumed that the 3D volume is captured indoors and that the position and intensity of the lighting are constant over time. Eight multi-view cameras are used, all converging toward the center of the capture space, so even under constant lighting the intensity and angle of the light entering each camera may differ. Therefore, a color optimization function is applied to a color correction chart captured by all cameras, and a color conversion matrix defining the relationship between the eight acquired images is calculated. Using this matrix, the images from all cameras are corrected against the standard color correction chart. The proposed method minimizes the color difference between the eight cameras when capturing a 3D object, and experiments show that the color difference between images is reduced when they are reconstructed into a 3D model.
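
The abstract does not give the exact form of the color optimization function, but a common building block for this kind of chart-based correction is a per-camera affine color transform fitted to the chart patches by least squares. A minimal sketch under that assumption (patch values and variable names are hypothetical):

```python
# A minimal sketch, not the authors' color optimization function: fitting a per-camera
# 3x4 affine color transform to the chart patches by least squares, then mapping each
# camera's image to the standard chart. Patch values and variable names are hypothetical.
import numpy as np

def fit_color_transform(measured, reference):
    """Fit M (3x4) so that reference ~= M @ [r, g, b, 1] for every chart patch."""
    measured = np.asarray(measured, dtype=float)              # (N, 3) patch RGB from one camera
    reference = np.asarray(reference, dtype=float)            # (N, 3) standard chart RGB
    A = np.hstack([measured, np.ones((len(measured), 1))])    # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)         # (4, 3) least-squares solution
    return M.T                                                # (3, 4)

def apply_color_transform(image, M):
    """Apply the fitted transform to an H x W x 3 image."""
    h, w, _ = image.shape
    flat = np.hstack([image.reshape(-1, 3).astype(float), np.ones((h * w, 1))])
    return np.clip(flat @ M.T, 0, 255).reshape(h, w, 3)

# Hypothetical usage: one transform per camera, all mapped to the same reference chart.
# corrected = apply_color_transform(camera_image, fit_color_transform(patches_cam, patches_ref))
```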

A study on the 3D simulation system improvement through comparing visual images between the real garment and the 3D garment simulation of women's jacket

  • Kwak, Younsin
    • The Journal of the Convergence on Culture Technology, v.2 no.3, pp.15-22, 2016
  • The purpose of this study is to propose improvements to a 3D garment simulation system by comparing a real garment with its 3D simulation for a women's jacket. The study proceeded by photographing a standard-sized subject wearing a basic-size jacket, generating an avatar from the subject's body measurements, and obtaining images of the 3D garment simulation on that avatar. Appearance was evaluated through a questionnaire survey in which the images were presented to 24 patternmakers and 22 designers, and differences between the real garment and the 3D simulation were analyzed separately for the designer group and the patternmaker group. Differences appeared in four areas, covering one question on the side, one on the back, seven on the sleeve, and one on the collar, and the results showed that the 3D garment simulation was rated preferable on each of these questions.

Curved Projection Integral Imaging Using an Additional Large-Aperture Convex Lens for Viewing Angle Improvement

  • Hyun, Joo-Bong;Hwang, Dong-Choon;Shin, Dong-Hak;Lee, Byung-Gook;Kim, Eun-Soo
    • ETRI Journal, v.31 no.2, pp.105-110, 2009
  • In this paper, we propose a curved projection integral imaging system to improve the horizontal and vertical viewing angles. The proposed system can be easily implemented by the additional use of a large-aperture convex lens in conventional projection integral imaging. To display 3D images simultaneously in the real and virtual image fields, we propose a computer-generated pickup method based on ray optics, and elemental images are synthesized accordingly for the proposed system. To show the feasibility of the proposed system, preliminary experiments are carried out. Experimental results indicate that our system improves the viewing angle and displays 3D images simultaneously in the real and virtual image fields.
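
The computer-generated pickup step can be illustrated with a simple ray-optics model: each object point is projected through every lenslet center onto the elemental-image plane. The sketch below uses a generic flat pinhole array, not the curved-projection geometry of the paper, and all dimensions are hypothetical.

```python
# A minimal sketch, not the curved-projection geometry of the paper: computer-generated
# pickup of elemental images by tracing rays from 3D object points through an ideal
# pinhole array onto the elemental-image plane. All dimensions are hypothetical.
import numpy as np

PITCH = 1.0          # lenslet pitch (mm)
GAP = 3.0            # gap between lens array and elemental-image plane (mm)
N_LENS = 10          # lenslets per side
RES = 30             # pixels per elemental image side
PIX = PITCH / RES    # pixel size (mm)

def pickup(points):
    """points: (N, 3) array of (x, y, z) object points with z > 0 in front of the array."""
    ei = np.zeros((N_LENS * RES, N_LENS * RES))
    for i in range(N_LENS):
        for j in range(N_LENS):
            cx = (j - N_LENS / 2 + 0.5) * PITCH          # lenslet center (x, y)
            cy = (i - N_LENS / 2 + 0.5) * PITCH
            for x, y, z in points:
                # the ray through the lenslet center hits the plane at distance GAP behind it
                u = cx - (x - cx) * GAP / z
                v = cy - (y - cy) * GAP / z
                px = j * RES + int(round((u - cx) / PIX + RES / 2))
                py = i * RES + int(round((v - cy) / PIX + RES / 2))
                if j * RES <= px < (j + 1) * RES and i * RES <= py < (i + 1) * RES:
                    ei[py, px] = 1.0                     # record the ray intersection
    return ei

# Hypothetical point-cloud object 30 mm in front of the array.
elemental_images = pickup(np.array([[0.0, 0.0, 30.0], [2.0, 1.0, 30.0]]))
```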

Reconstruction of Fourier hologram for 3D objects using repeated multiple orthographic view images

  • Kim, Min-Su;Kim, Nam;Park, Jae-Hyeong;Gil, Sang-Geun
    • Proceedings of the Optical Society of Korea Conference, 2009.02a, pp.167-168, 2009
  • We propose a new method for computing the Fourier hologram of 3D objects captured by a lens array. A Fourier hologram of two objects positioned at different distances can be calculated from multiple orthographic view images. The size of the Fourier hologram is proportional to the number of orthographic view images, so repeating the orthographic view images increases the size of the hologram. The principle is verified by numerically reconstructing a hologram synthesized from orthographic images captured optically.
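
The paper's hologram is built from orthographic view images captured through a lens array; as a generic illustration of what synthesizing and numerically reconstructing a Fourier hologram involves, the sketch below computes a Fourier hologram of a single 2D test object with a random diffusive phase and reconstructs it with an inverse FFT. This is not the authors' orthographic-view method.

```python
# A minimal sketch of the generic step only, not the authors' orthographic-view
# method: a Fourier hologram of one 2D test object with a random diffusive phase,
# reconstructed numerically by an inverse Fourier transform.
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((256, 256))
obj[96:160, 96:160] = 1.0                                      # hypothetical test object

field = obj * np.exp(1j * 2 * np.pi * rng.random(obj.shape))   # object field with random phase
hologram = np.fft.fftshift(np.fft.fft2(field))                 # complex Fourier hologram

recon = np.fft.ifft2(np.fft.ifftshift(hologram))               # numerical reconstruction
intensity = np.abs(recon) ** 2                                 # recovered object intensity
```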

Manufacture of 3-Dimensional Image and Virtual Dissection Program of the Human Brain

  • Chung, M.S.;Lee, J.M.;Park, S.K.;Kim, M.K.
    • Proceedings of the KOSOMBE Conference, v.1998 no.11, pp.57-59, 1998
  • For medical students and doctors, knowledge of the three-dimensional (3D) structure of the brain is very important in the diagnosis and treatment of brain diseases. Two-dimensional (2D) tools (e.g., anatomy books) and traditional 3D tools (e.g., plastic models) are not sufficient for understanding the complex structures of the brain, and it is not always possible to dissect a cadaver brain when needed. To overcome this problem, virtual dissection programs of the brain have been developed. However, most such programs include only 2D images, which do not permit free dissection and free rotation, and many are built from radiographs, which are less realistic than a sectioned cadaver because they do not show true color and have limited resolution. Virtual dissection programs are also needed for each race and ethnic group. We therefore attempted to make a virtual dissection program using a 3D image of the brain from a Korean cadaver, with the aim of providing an educational tool for those interested in the anatomy of the brain. The procedure was as follows. A brain extracted from a 58-year-old male Korean cadaver was embedded in gelatin solution and serially sectioned at 1.4 mm thickness using a meat slicer. The 130 sectioned specimens were digitized with a scanner (420 × 456 resolution, true color), and the 2D images were aligned with an alignment program written in IDL. Outlines of the brain components (cerebrum, cerebellum, brain stem, lentiform nucleus, caudate nucleus, thalamus, optic nerve, fornix, cerebral artery, and ventricle) were drawn manually from the 2D images in CorelDRAW. Multimedia data, including text and voice comments, were added to help the user learn about the brain components. 3D images of the brain were reconstructed through volume-based rendering of the 2D images. Using the 3D image of the brain as the main feature, the virtual dissection program was written in IDL. Various dissection functions were implemented, such as sectioning the 3D brain image at an arbitrary angle to show the cut plane, presenting multimedia data for brain components, and rotating the whole brain or selected components at arbitrary angles. This virtual dissection program is expected to be further developed and used widely, through the Internet or on CD, as an educational tool for medical students and doctors.
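
A minimal sketch of the two reconstruction steps described above, stacking aligned serial sections into a volume and cutting it at an arbitrary angle, assuming the aligned section images are already available; the placeholder data and slice geometry are hypothetical.

```python
# A minimal sketch, assuming the 130 aligned sections are already available as
# images; a random placeholder stack stands in for them here. It stacks the 2D
# sections into a volume and cuts it along an arbitrary plane, as in the
# free-angle dissection described above.
import numpy as np
from scipy.ndimage import map_coordinates

volume = np.random.rand(130, 456, 420)          # (sections, height, width) placeholder

def oblique_slice(vol, origin, u_dir, v_dir, size=256):
    """Sample a size x size plane through `origin`, spanned by unit vectors u_dir and
    v_dir, all given in (z, y, x) voxel coordinates."""
    steps = np.arange(size) - size / 2.0
    uu, vv = np.meshgrid(steps, steps, indexing="ij")
    coords = (np.asarray(origin, float)[:, None, None]
              + np.asarray(u_dir, float)[:, None, None] * uu
              + np.asarray(v_dir, float)[:, None, None] * vv)   # shape (3, size, size)
    return map_coordinates(vol, coords, order=1, mode="nearest")

# Hypothetical cut: a plane through the volume center, tilted 30 degrees off axial.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
plane = oblique_slice(volume, origin=(65, 228, 210), u_dir=(s, c, 0.0), v_dir=(0.0, 0.0, 1.0))
```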

Corrected 3D Reconstruction Based on Continuous Image Sets

  • Kim, TaeYeon;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.374-375, 2022
  • Recently, metaverse services have been widely used for natural communication with remote locations, free from temporal and spatial constraints. To produce such content, a 3D model must be reconstructed and composed from real-space data. In this paper, a 3D reconstruction model is produced from continuous image sets captured with multiple cameras, and a technique for correcting the reconstructed 3D model is presented. For this, an offline multi-camera setup was built, errors in the 3D model created from images obtained at various angles were analyzed, and correction was performed using a matching technique between image frames. The 3D reconstructed data are expected to be useful in various service fields such as culture, tourism, and medical care.
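
The abstract does not specify which matching technique is used between image frames; one common choice for this kind of inter-frame registration is ORB feature matching, sketched below with hypothetical file names. The matched keypoint pairs could then feed a homography or pose estimate used to correct the reconstruction.

```python
# A minimal sketch, not the paper's specific correction pipeline: ORB feature matching
# between consecutive frames, the kind of inter-frame matching such a correction step
# can build on. File names are hypothetical.
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The strongest matched keypoint pairs can feed a homography or pose estimate
# used to re-register the frames against the reconstructed 3D model.
good = matches[:200]
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```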

Integral Imaging Monitors with an Enlarged Viewing Angle

  • Dorado, Adria;Saavedra, Genaro;Sola-Pikabea, Jorge;Martinez-Corral, Manuel
    • Journal of Information and Communication Convergence Engineering, v.13 no.2, pp.132-138, 2015
  • Enlarging the horizontal viewing angle is an important feature of integral imaging monitors. Thus far, the horizontal viewing angle has been enlarged in different ways, such as by changing the size of the elemental images or by tilting the lens array in the capture and reconstruction stages. However, these methods are limited by the microlenses used in the capture stage and by the fact that the images obtained cannot be easily projected into different displays. In this study, we upgrade our previously reported method, called SPOC 2.0. In particular, our new approach, which can be called SPOC 2.1, enlarges the viewing angle by increasing the density of the elemental images in the horizontal direction and by an appropriate application of our transformation and reshape algorithm. To illustrate our approach, we have calculated some high-viewing angle elemental images and displayed them on an integral imaging monitor.
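
The SPOC 2.1 transformation and reshape algorithm itself is not given in the abstract, but methods of this family operate on the relationship between elemental images and sub-images (directional views), which is a pixel-wise transposition of the captured 4D data. A minimal sketch of that transposition, with an illustrative horizontal densification and hypothetical array sizes:

```python
# A minimal sketch, not the SPOC 2.1 algorithm itself: the pixel-wise transposition
# between elemental images and sub-images (directional views) that transformation-
# and-reshape resampling of this kind operates on. Array sizes are hypothetical.
import numpy as np
from scipy.ndimage import zoom

# EI[i, j, u, v]: pixel (u, v) of the elemental image behind lenslet (i, j).
n_i, n_j, n_u, n_v = 20, 20, 32, 32
EI = np.random.rand(n_i, n_j, n_u, n_v)

# Sub-image (u, v) gathers pixel (u, v) from every elemental image; it is an
# orthographic view of the scene along one direction.
SI = EI.transpose(2, 3, 0, 1)                    # SI[u, v, i, j] = EI[i, j, u, v]

# Illustrative horizontal densification: resample the sub-images along j and
# transpose back, giving twice as many elemental images in the horizontal direction.
SI_dense = zoom(SI, (1, 1, 1, 2), order=1)
EI_dense = SI_dense.transpose(2, 3, 0, 1)        # shape (20, 40, 32, 32)
```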

3D Shape Reconstruction using the Focus Estimator Value from Multi-Focus Cell Images

  • Choi, Yea-Jun;Lee, Dong-Woo;Kim, Myoung-Hee;Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society, v.23 no.4, pp.31-40, 2017
  • As 3D cell culture has recently become possible, the 3D shape and volume of cells can now be observed. In general, 3D information about a cell must be acquired with a specialized microscope such as a confocal or electron microscope, but a confocal microscope is more expensive than a conventional microscope and takes longer to capture images. There is therefore a need for a method that can reconstruct the 3D shape of cells using an ordinary microscope. In this paper, we propose a method for reconstructing cells in 3D using focus estimator values computed from multi-focus fluorescence images. First, 3D cultured cells are captured with an optical microscope while the focus is varied. The approximate positions of the cells are then set as regions of interest (ROIs) using the circular Hough transform. A modified sliding band filter (MSBF) is applied to each ROI to extract the outlines of the cell clusters, and focus estimator values are computed from the extracted outlines. Using the computed focus estimator values and the numerical aperture (NA) of the microscope, the outline of each cell cluster is extracted with depth taken into account, and the cells are reconstructed in 3D from these outlines. The reconstruction results are evaluated by comparison with the combined in-focus portions of the cell images.
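
A minimal sketch of the two front-end steps named in the abstract, locating a rough circular ROI with the Hough transform and scoring focus per slice of the focal stack; the focus measure here is a simple variance-of-Laplacian score, not the MSBF-based estimator of the paper, and the file names are hypothetical.

```python
# A minimal sketch of the two front-end steps named above: a rough circular ROI
# from the Hough transform and a per-slice focus score over the focal stack.
# The focus measure is a simple variance of the Laplacian, not the paper's
# MSBF-based estimator, and the file names are hypothetical.
import cv2
import numpy as np

stack = [cv2.imread(f"focus_{k:02d}.png", cv2.IMREAD_GRAYSCALE) for k in range(20)]

# Locate one cell cluster on the middle slice with the circular Hough transform.
mid = cv2.medianBlur(stack[len(stack) // 2], 5)
circles = cv2.HoughCircles(mid, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
x, y, r = np.round(circles[0, 0]).astype(int)    # strongest detected circle

def focus_measure(patch):
    """Higher variance of the Laplacian means the patch is closer to focus."""
    return cv2.Laplacian(patch, cv2.CV_64F).var()

scores = [focus_measure(img[y - r:y + r, x - r:x + r]) for img in stack]
best_slice = int(np.argmax(scores))              # focal index (relative depth) of this cell
```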

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect

  • Kwon, Soon-Chul;Kang, Won-Young;Jeong, Yeong-Hu;Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.10, pp.920-927, 2013
  • The chroma-key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike general chroma keying, image composition for stereoscopic 3D display requires a natural composition method in 3D space. This paper attempts to composite images in 3D space using a depth-keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between the DSLR and the Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values. The object was converted into a point cloud in 3D space after being separated from its background according to the depth information. The object was then composited with a 3D virtual background, and stereoscopic 3D images were obtained and played back using a virtual camera.
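
A minimal sketch of the depth-keying step, assuming the DSLR color frame and the Kinect depth map are already registered by the calibration described above; the file names and depth range are hypothetical.

```python
# A minimal sketch of the depth-keying step, assuming the DSLR color frame and the
# Kinect depth map have already been registered by the calibration described above.
# File names and the depth range are hypothetical.
import cv2
import numpy as np

color = cv2.imread("dslr_frame.png")                                             # H x W x 3
depth = cv2.imread("kinect_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)  # depth in mm
background = cv2.imread("virtual_background.png")                                # same size as color

NEAR, FAR = 500.0, 1500.0                              # keep pixels within this depth range
mask = ((depth > NEAR) & (depth < FAR)).astype(np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # remove depth speckle

mask3 = np.repeat(mask[:, :, None], 3, axis=2).astype(bool)
composite = np.where(mask3, color, background)         # extracted object over the virtual scene
```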