• Title/Summary/Keyword: human body mapping (인체 매핑)

Synthesis method of elemental images from Kinect images for space 3D image (공간 3D 영상디스플레이를 위한 Kinect 영상의 요소 영상 변환방법)

  • Ryu, Tae-Kyung; Hong, Seok-Min; Kim, Kyoung-Won; Lee, Byung-Gook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.05a / pp.162-163 / 2012
  • In this paper, we propose a method for synthesizing elemental images from Kinect images for a 3D integral imaging display. Since the RGB and depth images obtained from the Kinect cannot be displayed directly on an integral imaging system, they must first be transformed into elemental images. To do so, we synthesize the elemental images through a geometric-optics mapping applied to the depth-plane images derived from the RGB image and the depth image. To show the usefulness of the proposed system, we carry out preliminary experiments using a two-person object scene and present the experimental results.
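
The mapping the abstract describes can be pictured with a small pinhole-model sketch: each lenslet projects registered RGB-D points onto its own patch of the elemental-image plane. The NumPy sketch below is a minimal illustration under assumed parameters (lens pitch, gap, lenslet count, patch resolution); it is not the authors' implementation.

```python
import numpy as np

def synthesize_elemental_images(rgb, depth, lens_pitch=1.0, gap=3.0,
                                num_lens=(10, 10), ei_res=32):
    """Minimal pinhole-model sketch of elemental-image synthesis.

    rgb   : (H, W, 3) color image from the Kinect
    depth : (H, W) depth map, registered to rgb, in the same units
            as lens_pitch and gap
    All parameter names and values are illustrative assumptions.
    """
    H, W, _ = rgb.shape
    ei = np.zeros((num_lens[1] * ei_res, num_lens[0] * ei_res, 3), dtype=rgb.dtype)

    # World coordinates of each pixel, assuming one pixel step equals
    # one world unit and the optical axis passes through the image centre.
    ys, xs = np.mgrid[0:H, 0:W]
    X = xs - W / 2.0
    Y = ys - H / 2.0
    Z = depth  # distance from the lens-array plane to the object point

    for j in range(num_lens[1]):        # lens row
        for i in range(num_lens[0]):    # lens column
            # centre of lens (i, j) on the lens-array plane
            lx = (i - num_lens[0] / 2.0 + 0.5) * lens_pitch
            ly = (j - num_lens[1] / 2.0 + 0.5) * lens_pitch

            # geometric-optics (pinhole) mapping: project each object point
            # through the lens centre onto the elemental-image plane located
            # at distance 'gap' behind the lens array
            valid = Z > 0
            u = lx - gap * (X - lx) / np.where(valid, Z, 1.0)
            v = ly - gap * (Y - ly) / np.where(valid, Z, 1.0)

            # convert local elemental-image coordinates to pixel indices
            px = np.round((u - lx) / lens_pitch * ei_res + ei_res / 2.0).astype(int)
            py = np.round((v - ly) / lens_pitch * ei_res + ei_res / 2.0).astype(int)
            inside = valid & (px >= 0) & (px < ei_res) & (py >= 0) & (py < ei_res)

            # overlapping points are written in raster order; a complete
            # implementation would resolve occlusion by depth ordering
            ei[j * ei_res + py[inside], i * ei_res + px[inside]] = rgb[inside]
    return ei
```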

A New Mapping Algorithm for Depth Perception in 3D Screen and Its Implementation (3차원 영상의 깊이 인식에 대한 매핑 알고리즘 구현)

  • Ham, Woon-Chul; Kim, Seung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.95-101 / 2008
  • In this paper, we present a new smoothing algorithm for variable depth mapping in real-time stereoscopic imaging for 3D displays. The proposed algorithm is based on a physical concept, the Laplace equation, and we also discuss the mapping of depth from the scene to the displayed image. Our approach to the stereoscopic mapping problem is similar to the multi-region algorithm proposed by N. Holliman; the main difference is that we use the Laplace equation while taking the distance between the viewer and the object into account. We implement real-time stereoscopic image generation with OpenGL on a circularly polarized LCD screen to demonstrate its actual effect on the human visual system. Although we simulate the proposed algorithm with artificial objects built in OpenGL, we expect this technology to be applicable to stereoscopic camera systems, both for personal computers and for public broadcast systems.
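
As a rough illustration of the Laplace-equation idea, the following sketch relaxes a depth map by Jacobi iteration and then converts it to screen disparity with a simple parallel-camera model. The function names, the viewer-distance constants, and the boundary-mask convention are assumptions made for illustration; this is not the paper's multi-region formulation.

```python
import numpy as np

def smooth_depth_laplace(depth, mask, iters=500):
    """Smooth a depth (or disparity) map by relaxing Laplace's equation.

    depth : (H, W) initial depth map
    mask  : (H, W) boolean, True where the original depth is kept fixed
            (e.g. at object silhouettes), False where it is relaxed
    Generic Jacobi relaxation, used here only as an illustrative sketch.
    """
    d = depth.astype(np.float64).copy()
    for _ in range(iters):
        # average of the four neighbours
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = np.where(mask, d, avg)   # keep constrained pixels fixed
    return d

def depth_to_disparity(depth, eye_sep=0.065, viewer_dist=0.6,
                       screen_w=0.5, res_x=1920):
    """Map smoothed scene depth (metres from the viewer) to screen
    disparity in pixels; parallel-camera model, illustrative constants."""
    # positive disparity pushes the point behind the screen plane
    disparity_m = eye_sep * (depth - viewer_dist) / np.maximum(depth, 1e-6)
    return disparity_m / screen_w * res_x
```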

Effective Volume Rendering and Virtual Staining Framework for Visualizing 3D Cell Image Data (3차원 세포 영상 데이터의 효과적인 볼륨 렌더링 및 가상 염색 프레임워크)

  • Kim, Taeho; Park, Jinah
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.9-16 / 2018
  • In this paper, we introduce a visualization framework for cell image data obtained from optical diffraction tomography (ODT), including a method for representing cell morphology in a 3D virtual environment and a color-mapping protocol. Unlike commonly used volume data sets with solid structural information, such as CT images of human organs or industrial machinery, cell image data carry rather vague information with considerable morphological variation at the boundaries. It is therefore difficult to obtain a consistent representation of cell structure in the visualization results. To achieve the desired visual representation of cellular structures, we propose an interactive visualization technique for ODT data. To visualize the 3D shape of the cell, we adopt a volume rendering technique commonly applied to volume data and improve the quality of the rendering result with an empty-space jittering method. Furthermore, we provide a layer-based independent rendering method with multiple transfer functions, so that two or more cellular structures can be represented in a unified render window. In the experiments, we examine the effectiveness of the proposed method by visualizing various types of cells acquired with a microscope that can capture ODT and fluorescence images together.
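
A rough picture of the rendering ideas mentioned in the abstract is sketched below: front-to-back ray casting in which each ray's sampling start is randomly jittered (standing in for the empty-space jittering idea of hiding fixed-step banding) and in which each rendered layer contributes through its own transfer function. The function signature, sampling scheme, and layer handling are illustrative assumptions rather than the paper's framework.

```python
import numpy as np

def render_ray(volume, origin, direction, transfer_fns, step=0.5, rng=None):
    """Front-to-back ray casting sketch with a jittered start offset and
    one transfer function per rendered layer.

    volume       : (D, H, W) scalar field (e.g. refractive-index values)
    origin       : length-3 NumPy array, ray start in voxel coordinates (z, y, x)
    direction    : length-3 unit NumPy array
    transfer_fns : list of callables scalar -> (r, g, b, a), one per layer
    All names and parameters are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    jitter = rng.uniform(0.0, step)          # random start offset per ray
    color = np.zeros(3)
    alpha = 0.0
    t = jitter
    max_t = float(max(volume.shape))
    while t < max_t and alpha < 0.99:        # early ray termination
        p = origin + t * direction
        z, y, x = (int(round(c)) for c in p) # nearest-neighbour sampling
        if (0 <= z < volume.shape[0] and 0 <= y < volume.shape[1]
                and 0 <= x < volume.shape[2]):
            s = volume[z, y, x]
            # let every layer's transfer function contribute independently,
            # then accumulate front-to-back
            for tf in transfer_fns:
                r, g, b, a = tf(s)
                color += (1.0 - alpha) * a * np.array([r, g, b])
                alpha += (1.0 - alpha) * a
        t += step
    return color, alpha
```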