• Title/Summary/Keyword: 3D Image

A 3D Image Player for CRT/LCD Monitors

  • Ko, Yoon-Ho; Choi, Chul-Ho; Kwon, Byong-Heon; Choi, Myung-Ryul
    • 한국정보디스플레이학회:학술대회논문집 (Proceedings of the Korean Information Display Society Conference) / 2002.08a / pp.895-898 / 2002
  • In this paper, we propose a 3D image player for LCD monitors as well as CRT monitors. By taking the afterglow characteristics and the digital processing of LCD monitors into account, stereoscopic images can be displayed on both CRT and LCD monitors using the proposed 3D image player. We have implemented the 3D image player using an FPGA (MAX 9320) and show that stereoscopic images are displayed correctly on LCD monitors.

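The player above is an FPGA design, but the presentation scheme it serves can be illustrated in software. The sketch below is not from the paper; it is a minimal Python/NumPy illustration assuming a frame-sequential stereo scheme, with a synthetic black/white pair standing in for real left/right views.

```python
import numpy as np

def frame_sequential_stream(left, right, n_frames):
    """Yield an alternating left/right frame sequence (frame-sequential stereo).

    left, right: HxWx3 uint8 stereo pair. Purely illustrative: the paper's
    player is implemented in FPGA hardware and also has to cope with LCD
    afterglow, which a software loop like this does not model.
    """
    for i in range(n_frames):
        yield left if i % 2 == 0 else right

# Example with a synthetic stereo pair (black/white stand-ins for L/R views)
L = np.zeros((480, 640, 3), dtype=np.uint8)
R = np.full((480, 640, 3), 255, dtype=np.uint8)
frames = list(frame_sequential_stream(L, R, 10))
print(len(frames), frames[0].shape)   # 10 (480, 640, 3)
```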

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young; Cho, Yong-Joo; Choo, Hyon-Gon; Kim, Jin-Woong; Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2175-2190 / 2011
  • This paper presents the effects of depth map quantization for multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied the pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the depth map quantization that affords acceptable image quality for DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which multiview images captured directly from the scene are compared to multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed that quantizing the depth map from 16 bits down to 7 bits (more specifically, 96 levels) had no significant effect on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
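
The quantization the experiment varies can be sketched directly. The snippet below is not from the paper; it is a NumPy-only illustration that reduces a 16-bit depth map to a chosen number of uniform levels, with a synthetic depth ramp standing in for a real depth map and 96 levels (~7 bits) chosen to mirror the threshold the abstract cites.

```python
import numpy as np

def quantize_depth(depth, levels):
    """Uniformly quantize a depth map to `levels` discrete values in [0, 1]."""
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    d = (d - d.min()) / (span if span > 0 else 1.0)   # normalize to [0, 1]
    return np.round(d * (levels - 1)) / (levels - 1)  # snap to uniform levels

# Synthetic 16-bit depth ramp quantized to 96 levels (roughly 7 bits)
depth16 = np.linspace(0, 65535, 256 * 256, dtype=np.uint16).reshape(256, 256)
q96 = quantize_depth(depth16, 96)
print(np.unique(q96).size)   # -> 96 distinct depth levels
```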

Registration of 3D CT Data to 2D Endoscopic Image using a Gradient Mutual Information based Viewpoint Matching for Image-Guided Medialization Laryngoplasty

  • Yim, Yeny; Wakid, Mike; Kirmizibayrak, Can; Bielamowicz, Steven; Hahn, James
    • Journal of Computing Science and Engineering / v.4 no.4 / pp.368-387 / 2010
  • We propose a novel method for the registration of 3D CT scans to 2D endoscopic images during image-guided medialization laryngoplasty. This study aims to allow the surgeon to find the precise configuration of the implant and place it in the desired location by employing accurate registration of the 3D CT data to the intra-operative patient together with interactive visualization tools for the registered images. The proposed registration methods enable the surgeon to compare the outcome of the procedure to the pre-planned shape by matching the vocal folds in the CT-rendered images to the endoscopic images. The 3D image fusion provides interactive and intuitive guidance for the surgeon by visualizing the combined and correlated relationship of the multiple imaging modalities. The 3D Magic Lens helps to visualize laryngeal anatomical structures effectively by applying different transparencies and transfer functions to the region of interest. The preliminary results of the study demonstrate that the proposed method can be readily extended to image-guided surgery of real patients.
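
The viewpoint matching described above scores candidate CT renderings against the endoscopic frame with a gradient mutual information metric. The sketch below is a simplified, intensity-only stand-in for that similarity measure (not the paper's implementation); the random test images are placeholders.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Intensity-based mutual information between two equally sized images.

    Simplified stand-in for gradient mutual information:
    MI(A, B) = H(A) + H(B) - H(A, B), estimated from a joint histogram.
    """
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist_2d / hist_2d.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    nz = p_ab > 0
    h_ab = -np.sum(p_ab[nz] * np.log(p_ab[nz]))
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return h_a + h_b - h_ab

# A rendered CT view would be compared against the endoscopic frame over
# candidate viewpoints, keeping the pose that maximizes this score.
a = np.random.rand(128, 128)
print(mutual_information(a, a))                          # identical -> maximal MI
print(mutual_information(a, np.random.rand(128, 128)))   # uncorrelated -> near zero
```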

Optimized Multiple Description Lattice Vector Quantization Coding for 3D Depth Image

  • Zhang, Huiwen; Bai, Huihui; Liu, Meiqin; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1140-1154 / 2015
  • Multiple Description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels, and lattice vector quantization (LVQ) is an important technique for designing an MD image coder. However, unlike the traditional 2D texture image, the 3D depth image has its own special characteristics, which should be taken into account for efficient compression. In this paper, an optimized MDLVQ scheme is proposed in view of the characteristics of the 3D depth image. First, owing to the sparsity of the depth image, its blocks are classified into edge blocks and smooth blocks, which are encoded in different modes. Furthermore, according to the boundary content of each edge block, the step size of the LVQ is regulated adaptively per block. Experimental results validate the effectiveness of the proposed scheme, which shows better rate-distortion performance than conventional MDLVQ.
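
The edge/smooth block classification that drives the adaptive step size can be illustrated with a simple gradient test. The sketch below is not the paper's coder: the block size, threshold, and the two step sizes are invented values, and the lattice quantizer itself is omitted.

```python
import numpy as np

def classify_blocks(depth, block=8, edge_thresh=4.0):
    """Split a depth image into blocks and label each as edge or smooth.

    Illustrative only: blocks whose mean absolute gradient exceeds
    `edge_thresh` are treated as edge blocks, which would receive a finer
    (smaller) quantization step than smooth blocks.
    """
    h, w = depth.shape
    gy, gx = np.gradient(depth.astype(np.float64))
    grad = np.abs(gx) + np.abs(gy)
    labels = {}
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            is_edge = grad[y:y+block, x:x+block].mean() > edge_thresh
            # hypothetical step-size rule: finer steps where boundaries live
            labels[(y, x)] = ("edge", 2.0) if is_edge else ("smooth", 8.0)
    return labels

depth = np.zeros((64, 64)); depth[:, 32:] = 100.0   # a single depth boundary
blocks = classify_blocks(depth)
print(sum(1 for v in blocks.values() if v[0] == "edge"), "edge blocks")
```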

Single Image-Based 3D Face Modeling for 3D Printing (3D 프린팅을 위한 단일 영상 기반 3D 얼굴 모델링 연구)

  • Song, Eungyeol; Koh, Wan-Ki; Yu, Sunjin
    • Journal of the Korean Society of Radiology / v.10 no.8 / pp.571-576 / 2016
  • 3D printing has recently been used in various fields. To print a 3D face, 3D face data must first be generated. A laser scanner can be used to acquire 3D face data, but the subject must not move during scanning. In this paper, we propose a 3D face modeling method based on a single image, together with a face transformation system that uses the generated 3D face for virtual cosmetic surgery. For 3D face data generation, we defined facial feature points from a 3D face database. After extracting feature points from a single face image, a 3D face corresponding to the input image is generated by matching them to the feature points defined in the 3D face database. After 3D face modeling, a 3D face modification step is applied for uses such as virtual cosmetic surgery.
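
One way to make the landmark-matching step concrete is a weak-perspective fit of database 3D feature points to the 2D feature points extracted from the photograph. The sketch below is a generic least-squares version of that idea, not the paper's pipeline; the 68-point layout, the synthetic projection, and all values are assumptions.

```python
import numpy as np

def fit_weak_perspective(pts3d, pts2d):
    """Least-squares weak-perspective fit: pts2d ~= pts3d @ A.T + t.

    Given mean 3D face feature points (e.g. from a face database) and 2D
    feature points detected in the input image, recover the 2x3 projection A
    and offset t that align them. The database and the landmark detector
    themselves are assumed, not reproduced.
    """
    n = pts3d.shape[0]
    X = np.hstack([pts3d, np.ones((n, 1))])          # n x 4 homogeneous points
    sol, *_ = np.linalg.lstsq(X, pts2d, rcond=None)  # 4 x 2 solution
    return sol[:3].T, sol[3]                         # A (2x3), t (2,)

# Synthetic check: project known 3D landmarks and recover the mapping
pts3d = np.random.rand(68, 3)
A_true = np.array([[120.0, 5.0, 2.0], [3.0, -115.0, 1.0]])
pts2d = pts3d @ A_true.T + np.array([320.0, 240.0])
A, t = fit_weak_perspective(pts3d, pts2d)
print(np.allclose(A, A_true), np.allclose(t, [320.0, 240.0]))   # True True
```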

Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures (얼굴뼈 골절의 진단과 치료에 64채널 3D VCT와 Conventional 3D CT의 비교)

  • Jung, Jong Myung; Kim, Jong Whan; Hong, In Pyo; Choi, Chi Hoon
    • Archives of Plastic Surgery / v.34 no.5 / pp.605-610 / 2007
  • Purpose: Facial trauma is increasing along with the growing popularity of sports and increasing exposure to crime and traffic accidents. Compared to the 3D CT of the 1990s, the latest CT has improved significantly, resulting in higher diagnostic accuracy. The objective of this study is to compare 64-channel 3-dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: 45 patients with facial trauma were examined with 3D VCT from Jan. 2006 to Feb. 2007. The 64-channel 3D VCT, which consists of 64 detectors, produces axial images with a 0.625 mm slice thickness and scans 175 mm per second. These images are transformed into 3-dimensional images using the Rapidia 2.8 software: the axial images are reconstructed into a 3-dimensional image by volume rendering and into coronal or sagittal images by multiplanar reformatting. Results: In contrast to previous 3D CT, which builds 3D images from 1-2 mm axial slices, 64-channel 3D VCT acquires 0.625 mm axial images and obtains full images without a definite step-ladder appearance. 64-channel 3D VCT is effective in diagnosing thin linear bone fractures and the depth and degree of fracture deviation. Conclusion: In terms of cost and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct full images in any direction, with 2 times the resolution and 4 times the speed of previous 3D CT, 3D VCT allows accurate evaluation of the exact site and deviation of fine fractures.
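
The multiplanar reformatting mentioned in the Methods is, at its simplest, a re-slicing of the axial voxel grid. The snippet below shows that re-indexing with NumPy as a generic illustration only; Rapidia itself is a commercial package, and real MPR would also resample to account for the anisotropic voxel spacing.

```python
import numpy as np

def multiplanar_reformat(volume):
    """Reslice an axial CT stack (slice, row, column) into coronal and sagittal stacks.

    Simplified: interpolation for the slice-spacing vs. in-plane pixel-size
    mismatch is ignored; the voxel grid is only re-indexed.
    """
    coronal = volume.transpose(1, 0, 2)   # (row,    slice, column)
    sagittal = volume.transpose(2, 0, 1)  # (column, slice, row)
    return coronal, sagittal

axial = np.zeros((40, 64, 64), dtype=np.int16)   # toy axial stack
cor, sag = multiplanar_reformat(axial)
print(cor.shape, sag.shape)   # (64, 40, 64) (64, 40, 64)
```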

Extraction of Subject Size in Still Image Using Floor Pattern (바닥 패턴을 이용한 단일영상 내의 피사체 크기 추출)

  • Hwang, Min-Gu; Kim, Dong-Min; Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.11 no.4 / pp.11-17 / 2011
  • This paper aims to express the information about a subject in a still image as objective values. To this end, the vanishing point of the 2D still image is taken as the basis, and the still image is recomposed into a 3D scene using a 3D program. To set up the camera axis needed for this recomposition, the lens angle of view of the image and the floor pattern are used. The resulting 3D reconstruction can measure the size of, and distance to, any subject standing on the floor pattern once the size of a particular reference subject is known; through this, it is possible to acquire basic information about a subject, such as a criminal or a clue, in CCTV or crime-scene footage.
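
A common way to measure objects on a patterned floor from a single image is to estimate the image-to-floor homography from tile corners of known size. The DLT sketch below is a generic illustration of that idea, not the paper's 3D-program workflow; the point coordinates and the 0.5 m tile size are invented, and lens distortion is ignored.

```python
import numpy as np

def homography_from_floor_tiles(img_pts, world_pts):
    """Direct linear transform: map image floor points to floor-plane coordinates.

    img_pts, world_pts: 4+ corresponding points (e.g. corners of floor tiles
    of known size). Once H is known, any point on the floor plane can be
    expressed in real-world units.
    """
    rows = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        rows.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
        rows.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    return vt[-1].reshape(3, 3)          # null-space vector as 3x3 homography

def to_floor(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

# Four tile corners of a 0.5 m grid seen under perspective (synthetic numbers)
img = [(100, 400), (540, 400), (180, 300), (460, 300)]
world = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
H = homography_from_floor_tiles(img, world)
print(to_floor(H, 320, 350))   # floor-plane position of a point between the tiles
```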

3D Visualization Technique for Occluded Objects in Integral Imaging Using Modified Smart Pixel Mapping

  • Lee, Min-Chul; Han, Jaeseung; Cho, Myungjin
    • Journal of information and communication convergence engineering / v.15 no.4 / pp.256-261 / 2017
  • In this paper, we propose a modified smart pixel mapping (SPM) to visualize occluded three-dimensional (3D) objects in real image fields. In integral imaging, orthoscopic real 3D images cannot be displayed because of the lenslets and the converging light field from the elemental images. Thus, pseudoscopic-to-orthoscopic conversion, which rotates each elemental image by 180 degrees, has been proposed so that the orthoscopic virtual 3D image can be displayed; however, the orthoscopic real 3D image still cannot be displayed. Hence, a conventional SPM that recaptures the elemental images for the orthoscopic real 3D image using a virtual pinhole array has been reported. However, it has a critical limitation in that the number of pixels in each elemental image must equal the number of elemental images. Therefore, in this paper, we propose a modified SPM that overcomes this limitation of conventional SPM and can also visualize occluded objects efficiently.
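
The pseudoscopic-to-orthoscopic conversion the abstract contrasts against, rotating every elemental image by 180 degrees, is easy to sketch. The snippet below shows only that classical rotation step on a toy elemental image array; the proposed modified SPM (recapture through a virtual pinhole array) is not reproduced here.

```python
import numpy as np

def rotate_elemental_images(eia, ei_h, ei_w):
    """Rotate every elemental image in an elemental image array by 180 degrees.

    eia: 2D elemental image array tiled from ei_h x ei_w elemental images.
    This is the classical pseudoscopic-to-orthoscopic conversion step.
    """
    h, w = eia.shape[:2]
    out = eia.copy()
    for y in range(0, h, ei_h):
        for x in range(0, w, ei_w):
            out[y:y+ei_h, x:x+ei_w] = eia[y:y+ei_h, x:x+ei_w][::-1, ::-1]
    return out

eia = np.arange(12 * 12).reshape(12, 12)      # 3x3 array of 4x4 elemental images
print(rotate_elemental_images(eia, 4, 4)[:4, :4])   # first elemental image, rotated
```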

A New Image Analysis Method based on Regression Manifold 3-D PCA (회귀 매니폴드 3-D PCA 기반 새로운 이미지 분석 방법)

  • Lee, Kyung-Min; Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.2 / pp.103-108 / 2022
  • In this paper, we propose a new image analysis method based on regression manifold 3-D PCA. The method combines a regression analysis algorithm, whose structure is designed around an autoencoder capable of nonlinear expansion of manifold 3-D PCA, with PCA for efficient dimension reduction when large volumes of image data are input. Within the autoencoder configuration, a regression manifold 3-D PCA, which derives the best hyperplane through three-dimensional rotation of image pixel values, and a Bayesian rule structure similar to a deep learning structure are applied. Experiments are performed to verify performance: fine-dust images are enhanced, and accuracy is evaluated with a classification model. The results confirm that the method is effective for deep learning performance.
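
Of the pipeline above, only the PCA dimension-reduction stage is standard enough to sketch without the paper's details. The snippet below projects flattened images onto the top-k principal components via SVD; the regression-manifold autoencoder and Bayesian classifier are not reproduced, and the random data merely stands in for the fine-dust images.

```python
import numpy as np

def pca_reduce(X, k):
    """Project flattened images onto the top-k principal components.

    Only the dimension-reduction stage of the pipeline described above.
    """
    Xc = X - X.mean(axis=0)                            # center the data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes in vt
    return Xc @ vt[:k].T, vt[:k]

# 200 synthetic 32x32 images flattened to 1024-d vectors
X = np.random.rand(200, 32 * 32)
Z, components = pca_reduce(X, k=16)
print(Z.shape, components.shape)   # (200, 16) (16, 1024)
```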

PROTOTYPE AUTOMATIC SYSTEM FOR CONSTRUCTING 3D INTERIOR AND EXTERIOR IMAGE OF BIOLOGICAL OBJECTS

  • Park, T. H.; Hwang, H.; Kim, C. S.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 2000.11b / pp.318-324 / 2000
  • Ultrasonic and magnetic resonance imaging systems are used to visualize the interior of biological objects. These nondestructive methods have many advantages but are very expensive, do not give exact color information, and may miss some details. If it is acceptable to destroy some biological objects to obtain interior and exterior information, constructing a 3D image from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D model generator was developed. The system is composed of three modules. The first is the object handling and image acquisition module, which feeds and slices objects sequentially, keeps the paraffin cool so that it remains solid, and captures the sectional images consecutively. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates the 3D graphic model. The handling module consists of a gripper, which grasps and feeds the object, and a cutting device, which cuts the object by moving the cutting edge forward and backward. Sliced sectional images were acquired and saved as bitmap files. After each sectional image was segmented from the paraffin background, the 2D image files were used to obtain volumetric information and generate the 3D model. Once the 3D model was constructed on the computer, the user could manipulate it with various transformations such as translation, rotation, and scaling, including arbitrary sectional views.

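The final reconstruction step, turning the segmented sectional bitmaps into a volumetric model, can be sketched as a threshold-and-stack operation. The snippet below is a generic illustration only: the paraffin threshold, slice thickness, and pixel size are made-up values, not the system's calibration.

```python
import numpy as np

def build_volume(slices, paraffin_thresh=200, slice_thickness_mm=1.0,
                 pixel_size_mm=0.5):
    """Stack segmented sectional images into a binary 3D volume.

    slices: list of grayscale 2D arrays captured after each cut. Pixels darker
    than `paraffin_thresh` are treated as object, the bright paraffin as
    background; spacings here are illustrative assumptions.
    """
    masks = [(s < paraffin_thresh) for s in slices]
    volume = np.stack(masks, axis=0)                     # (slice, row, column)
    voxel_mm3 = slice_thickness_mm * pixel_size_mm ** 2
    return volume, volume.sum() * voxel_mm3              # mask + volume in mm^3

slices = [np.full((64, 64), 255, dtype=np.uint8) for _ in range(10)]
for s in slices:
    s[16:48, 16:48] = 50                                 # a dark 32x32 object
vol, mm3 = build_volume(slices)
print(vol.shape, mm3)                                    # (10, 64, 64) 2560.0
```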