Title/Summary/Keyword: Rendering Map

Development of High Dynamic Range Panorama Environment Map Production System Using General-Purpose Digital Cameras (범용 디지털 카메라를 이용한 HDR 파노라마 환경 맵 제작 시스템 개발)

  • Park, Eun-Hea; Hwang, Gyu-Hyun; Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.1-8 / 2012
  • High dynamic range (HDR) images represent a far wider numerical range of exposures than common digital images, so they can accurately store the intensity levels of light produced by real-world light sources in a scene. Although professional HDR cameras that support fast, accurate capture have been developed, their high cost prevents their use in typical working environments. The common lower-cost method is to take a set of photographs of the target scene at a range of exposures with a general-purpose camera and then merge them into an HDR image with commercial software. However, this method requires complicated and accurate camera calibration, and creating the HDR environment maps used to produce high-quality imaging content involves delicate, time-consuming manual work. In this paper, we present an automatic HDR panorama environment map generation system built to simplify the complicated capture workflow, and we show that the system can be effectively applied to photo-realistic compositing tasks that combine 3D graphic models with a 2D background scene using image-based lighting techniques.
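
For context, the multi-exposure merging step this abstract builds on can be sketched with OpenCV's Debevec calibration and merge operators; the file names and exposure times below are illustrative placeholders, not the paper's actual pipeline:

```python
import cv2
import numpy as np

# Bracketed shots of the same scene (hypothetical paths) and exposure times in seconds.
paths = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
times = np.array([1 / 1000.0, 1 / 60.0, 1 / 4.0], dtype=np.float32)
images = [cv2.imread(p) for p in paths]

# Recover the camera response curve, then merge into a linear float32 radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("envmap.hdr", hdr)   # Radiance .hdr output for image-based lighting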

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung; Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
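
A minimal sketch of the distance-based representation switching described here, including the rule that interaction always falls back to the 3D model; the node fields and thresholds are schematic stand-ins, not the paper's WorldToolKit extension itself:

```python
from dataclasses import dataclass

@dataclass
class LODNode:
    model: object            # full 3D geometry
    billboard: object        # textured quad facing the viewer
    env_map: object          # entry baked into the environment map
    near_limit: float        # distance where the object's internal depth becomes perceptible
    far_limit: float         # distance beyond which the environment map suffices

    def select(self, distance: float, interacting: bool):
        # During interaction the 3D representation is always preferred, if it exists.
        if interacting or distance < self.near_limit:
            return self.model
        if distance < self.far_limit:
            return self.billboard
        return self.env_map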

Automatic Lower Extremity Vessel Extraction based on Bone Elimination Technique in CT Angiography Images (CT 혈관 조영 영상에서 뼈 소거법 기반의 하지 혈관 자동 추출)

  • Kim, Soo-Kyung; Hong, Helen
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.967-976 / 2009
  • In this paper, we propose an automatic lower extremity vessel extraction method based on rigid registration and bone elimination techniques in CT and CT angiography images. First, automatic partitioning of the lower extremity based on anatomy is proposed to account for the local movement of the bones. Second, rigid registration based on a distance map is performed to estimate the movement of the bones between the CT and CT angiography images. Third, bone elimination and vessel masking techniques are proposed to remove bones from the CT angiography image while preventing vessels near bone from being eroded. Fourth, post-processing based on vessel tracking is proposed to reduce the effects of misalignment and noise such as cartilage. For evaluation, we performed visual inspection, accuracy measurements, and processing-time measurements. For visual inspection, the results of general subtraction, registered subtraction, and the proposed method were compared using volume rendering and maximum intensity projection. For accuracy evaluation, the intensity distributions of the CT angiography image, the subtraction-based method, and the proposed method were analyzed. Experimental results show that bones are accurately eliminated and vessels are robustly extracted without loss of other structures. The total processing time averaged 40 seconds over thirteen patient datasets.
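
A schematic sketch of the bone elimination idea, reduced to a pure translation for brevity (the paper performs full rigid registration on anatomically partitioned regions); the thresholds and array names are assumptions:

```python
import numpy as np
from scipy import ndimage

def eliminate_bone(ct, cta, shift, bone_hu=200.0, vessel_hu=150.0, dilate=2):
    """ct, cta: HU volumes; shift: per-axis translation (voxels) from registration."""
    bone = ct > bone_hu                                       # bone mask from pre-contrast CT
    bone = ndimage.shift(bone.astype(float), shift, order=0) > 0.5
    bone = ndimage.binary_dilation(bone, iterations=dilate)   # absorb residual misalignment
    vessels = (cta > vessel_hu) & ~bone                       # enhanced voxels outside bone
    return np.where(vessels, cta, cta.min())                  # suppress everything else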

A 3D Terrain Reconstruction System using Navigation Information and Realtime-Updated Terrain Data (항법정보와 실시간 업데이트 지형 데이터를 사용한 3D 지형 재구축 시스템)

  • Baek, In-Sun; Um, Ky-Hyun; Cho, Kyung-Eun
    • Journal of Korea Game Society / v.10 no.6 / pp.157-168 / 2010
  • A terrain is an essential element for constructing a virtual world in which game characters and objects interact with one another, but creating one requires a great deal of time and repetitive editing. This paper presents a 3D terrain reconstruction system that creates 3D terrain in virtual space from real terrain data. The system converts height maps generated by a stereo camera and a laser scanner from global GPS coordinates into the 3D world coordinate system using the x- and z-axis vectors of the GPS coordinate system, and it calculates movement vectors and rotation matrices frame by frame. Terrain meshes are dynamically generated and rendered over virtual areas represented as an undirected graph, and the rendered meshes are created and updated precisely by correcting errors in the terrain data. In our experiments, we monitored the system's FPS throughout reconstruction and reviewed the visual quality of the terrain. The system achieved 3 times the FPS of quadtree-based terrain management systems on small areas and a 40% improvement on large areas, and the visualized terrain kept the same shape as the contours of the real terrain. The system could be used in real-time 3D games to generate terrain on the fly, and in terrain design work for CG movies.
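
The coordinate conversion step can be sketched as follows; the origin, axis vectors, and grid spacing are assumed inputs rather than the paper's navigation data:

```python
import numpy as np

def heightmap_to_world(heights, origin, x_axis, z_axis, spacing):
    """heights: (rows, cols) elevations; origin: world position of cell (0, 0);
    x_axis, z_axis: unit vectors of the GPS grid expressed in world space."""
    rows, cols = heights.shape
    i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    up = np.cross(z_axis, x_axis)                  # world up derived from the two axes
    verts = (origin
             + (j[..., None] * spacing) * x_axis   # columns advance along the x axis
             + (i[..., None] * spacing) * z_axis   # rows advance along the z axis
             + heights[..., None] * up)            # elevation lifts each vertex
    return verts.reshape(-1, 3)                    # one vertex per grid cell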

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon; Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.36-42 / 2012
  • Recently, smart applications such as the smart phone and smart TV have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images that provide a depth experience through 3D display systems; binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints in multi-view video is limited, 3D display devices must generate arbitrary viewpoint images from the available adjacent view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting and a hole filling method that uses multi-view images. We then propose an algorithm to remove the boundary noise that arises from mismatches between object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
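
A simplified sketch of the depth-based 3D warping step for rectified stereo (horizontal shift only); the depth-to-disparity conversion and parameters are assumptions for illustration, and the paper's hole filling and boundary-noise removal are not shown:

```python
import numpy as np

def warp_view(color, depth, focal, baseline):
    """color: (H, W, 3) image; depth: (H, W) metric depth of the reference view."""
    h, w = depth.shape
    # Convert depth to integer pixel disparity; focal and baseline are assumed known.
    disparity = np.round(focal * baseline / np.maximum(depth, 1e-6)).astype(int)
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)           # z-buffer resolves overlapping warps
    for y in range(h):
        for x in range(w):
            t = x - disparity[y, x]          # horizontal shift toward the virtual view
            if 0 <= t < w and depth[y, x] < zbuf[y, t]:
                zbuf[y, t] = depth[y, x]
                out[y, t] = color[y, x]
    holes = np.isinf(zbuf)                   # disoccluded pixels awaiting hole filling
    return out, holes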

Directional Hole Filling Method for Virtual Viewpoint Synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun; Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in the 3D imaging field. A virtual view image is created from a known view and its associated depth map to synthesize a viewpoint that was not captured by a camera. However, disocclusion regions appear because the virtual viewpoint is created by depth-image-based 3D warping. Many hole filling methods have been proposed to remove such disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting, but these methods introduce various kinds of annoying artifacts when filling holes in textured regions. In this paper, to solve this problem, we propose a multi-directional extrapolation method that improves hole filling performance. The proposed method is effective when filling holes in complex textured background regions: by considering directionality, it estimates each hole pixel value from neighboring texture pixels. Our virtual view synthesis results confirm that the proposed method fills hole regions more efficiently.
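
A toy sketch of directional extrapolation in the spirit of the proposed method: each hole pixel is filled from the probe direction whose known neighbors vary least. The paper's actual directionality criterion may differ; this is only an illustration of the idea:

```python
import numpy as np

DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, -1), (1, -1), (-1, 1)]

def fill_holes(img, holes, probe=3):
    """img: (H, W) grayscale image; holes: boolean mask of unknown pixels."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(holes)):
        best_val, best_var = None, np.inf
        for dy, dx in DIRS:
            coords = [(y + k * dy, x + k * dx) for k in range(1, probe + 1)]
            # Skip directions that run off the image or into the hole itself.
            if any(not (0 <= yy < h and 0 <= xx < w) or holes[yy, xx]
                   for yy, xx in coords):
                continue
            vals = np.array([img[yy, xx] for yy, xx in coords], dtype=float)
            if vals.var() < best_var:        # smoothest, most texture-consistent direction
                best_val, best_var = vals[0], vals.var()
        if best_val is not None:
            out[y, x] = best_val             # extrapolate from the nearest known pixel
    return out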

Quantitative Evaluation of Regional Cerebral Blood Flow by Visual Stimulation in $^{99m}Tc-HMPAO$ Brain SPECT ($^{99m}Tc-HMPAO$ 뇌 SPECT에서 시각자극에 의한 국소 뇌 혈류변화의 정량적 검증)

  • Juh, Ra-Hyeong; Suh, Tae-Suk; Kwark, Chul-Eun; Choe, Bo-Young; Lee, Hyoung-Koo; Chung, Yong-An; Kim, Sung-Hoon; Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine / v.36 no.3 / pp.166-176 / 2002
  • Purpose: The purpose of this study was to investigate the effects of visual activation and to quantitatively analyze regional cerebral blood flow. Visual activation is known to increase regional cerebral blood flow in the visual cortex of the occipital lobe. We evaluated whether changes in the distribution of $^{99m}Tc-HMPAO$ (hexamethyl propylene amine oxime) reflect regional cerebral blood flow. Materials and Methods: Six volunteers (mean age: 26.75 years, n=6, 3 men, 3 women) were injected with 925 MBq of $^{99m}Tc-HMPAO$ and underwent MRI and SPECT during a rest state with closed eyes and during visual stimulation with an 8 Hz LED. For quantitative analysis, we delineated the regions of interest and calculated the mean count per voxel in each of the fifteen slices. The ROI-to-whole-brain ratio and a regional index were calculated by pixel-by-pixel subtraction of the non-activation image from the activation image, and a brain map was constructed using statistical parametric mapping (SPM99). Results: Mean regional cerebral blood flow increased with visual stimulation; the increase in the activated region of the primary visual cortex of the occipital lobe was $32.50\pm5.67%$. The significantly activated sites were visualized as a rendered image and as a fusion of the SPECT and MRI images. Conclusion: Quantitative analysis revealed a significant activation increase in the visual cortex. The activated regions were verified in Talairach coordinates as the primary visual cortex (Brodmann area 17) and the visual association areas (Brodmann areas 18 and 19).
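
The quantitative comparison reported here reduces to a small calculation; a minimal sketch, with placeholder arrays standing in for the SPECT volumes and masks:

```python
import numpy as np

def percent_increase(act, rest, roi, brain):
    """act, rest: SPECT count volumes; roi, brain: boolean masks of the same shape."""
    act_idx = act[roi].mean() / act[brain].mean()      # ROI-to-whole-brain ratio, activation
    rest_idx = rest[roi].mean() / rest[brain].mean()   # same ratio at rest
    return 100.0 * (act_idx - rest_idx) / rest_idx     # % rCBF change in the ROI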