• Title/Summary/Keyword: full-3D rendering


A Simplified Graphics System Based on Direct Rendering Manager System

  • Baek, Nakhoon
• Journal of information and communication convergence engineering, v.16 no.2, pp.125-129, 2018
  • In the field of computer graphics, rendering speed is one of the most important factors. Contemporary rendering is performed using 3D graphics systems with windowing system support. Since typical graphics systems, including OpenGL and the DirectX library, focus on a wide variety of rendering features, the rendering process itself consists of many complicated operations. In contrast, early computer systems manipulated graphics hardware directly and achieved simple and efficient graphics handling. We suggest an alternative method of accelerated 2D and 3D graphics output based on directly accessing modern GPU hardware through the direct rendering manager (DRM) system. On the basis of this DRM support, we exchange graphics instructions and graphics data directly, and achieve better performance than full 3D graphics systems. We present a prototype system providing a set of simple 2D and 3D graphics primitives. Experimental results and their screenshots are included.
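The direct-access idea in this abstract can be illustrated with a toy model: instead of issuing API calls, pixels are written straight into a buffer that stands in for memory-mapped framebuffer storage. The buffer layout (32-bit BGRX, row-major) and the helper names are assumptions for illustration only; the paper's actual system maps real GPU memory through the DRM interface (/dev/dri), not a Python bytearray.

```python
# Toy model of direct rendering: a bytearray stands in for a memory-mapped
# framebuffer (assumed layout: 32-bit BGRX, row-major). Real DRM code would
# instead mmap a dumb buffer obtained from /dev/dri/card0.

WIDTH, HEIGHT, BPP = 640, 480, 4

def put_pixel(fb, x, y, rgb):
    """Write one pixel directly into the buffer, bypassing any API layers."""
    off = (y * WIDTH + x) * BPP
    fb[off], fb[off + 1], fb[off + 2] = rgb[2], rgb[1], rgb[0]  # B, G, R

def fill_rect(fb, x0, y0, w, h, rgb):
    """A minimal 2D primitive built on the direct pixel write."""
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            put_pixel(fb, x, y, rgb)

fb = bytearray(WIDTH * HEIGHT * BPP)
fill_rect(fb, 10, 10, 50, 20, (255, 0, 0))  # a red rectangle
```

The point of the sketch is the absence of any intermediate layer: a primitive is just a loop over direct memory writes, which is the simplicity the paper recovers from early graphics systems.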

Occlusion-based Direct Volume Rendering for Computed Tomography Image

  • Jung, Younhyun
• Journal of Multimedia Information System, v.5 no.1, pp.35-42, 2018
  • Direct volume rendering (DVR) is an important 3D visualization method for medical images, as it depicts the full volumetric data. However, because DVR renders the whole volume, regions of interest (ROIs) such as a tumor that are embedded within the volume may be occluded from view. Thus, conventional 2D cross-sectional views are still widely used, while the advantages of DVR are often neglected. In this study, we propose a new visualization algorithm in which we augment a 2D slice of interest (SOI) from an image volume with volumetric information derived from the DVR of the same volume. Our occlusion-based DVR augmentation for SOI (ODAS) uses the occlusion information derived from the voxels in front of the SOI to calculate a depth parameter that controls the amount of DVR visibility; this provides 3D spatial cues without impairing the visibility of the SOI. We outline the capabilities of our ODAS and, through a variety of computed tomography (CT) medical image examples, compare it to a conventional fusion of the SOI and the clipped DVR.
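The occlusion information described here can be sketched as standard front-to-back opacity accumulation along a ray. The mapping from occlusion to DVR visibility below is a hypothetical stand-in for the paper's depth parameter, not the authors' exact formula.

```python
def ray_occlusion(alphas):
    """Front-to-back accumulation of voxel opacities in front of the SOI:
    transmittance is the product of (1 - alpha) terms, and occlusion is
    whatever fraction of light does not get through."""
    transmittance = 1.0
    for a in alphas:
        transmittance *= (1.0 - a)
    return 1.0 - transmittance

def dvr_visibility(occlusion, k=2.0):
    """Hypothetical mapping: the more the SOI is occluded along a ray,
    the less DVR is blended in, preserving the SOI's visibility."""
    return (1.0 - occlusion) ** k

# Two half-opaque voxels in front of the SOI occlude 75% of it.
print(ray_occlusion([0.5, 0.5]))  # -> 0.75
```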

A Prototype Implementation for 3D Animated Anaglyph Rendering of Multi-typed Urban Features using Standard OpenGL API

  • Lee, Ki-Won
• Korean Journal of Remote Sensing, v.23 no.5, pp.401-408, 2007
  • Animated anaglyph is the most cost-effective method for 3D stereo visualization of virtual or actual 3D geo-based data models. Unlike 3D anaglyph scene generation using paired epipolar images, the main data set of this study is a multi-typed 3D feature model containing 3D shaped objects, a DEM, and satellite imagery. For this purpose, a prototype implementation of 3D animated anaglyph rendering using the OpenGL API is carried out, and virtual 3D feature modeling is performed to demonstrate the applicability of the anaglyph approach. Although the 3D features are not real objects at this stage, they can be substituted with an actual 3D feature model carrying full texture images on all facades. Currently, stereo 3D viewing is regarded as a special viewing effect within 3D GIS application domains, because it is only one of many GIS functionalities and remote sensing image processing modules. The animated anaglyph process can be linked with real-time manipulation of the 3D feature model and its database attributes in real-world problems. This feature-based 3D animated anaglyph scheme is also a bridging technology toward image-based 3D animated anaglyph rendering systems, portable mobile 3D stereo viewing systems, and auto-stereo viewing systems that need no glasses for multiple viewers.
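The anaglyph composition itself can be sketched per pixel: for a red-cyan anaglyph, the red channel comes from the left-eye render and the green and blue channels from the right-eye render. The flat pixel-list representation below is an assumption for illustration; the paper produces the two views with OpenGL (where the same effect is typically achieved with channel masking).

```python
def anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph: red from the
    left-eye image, green and blue from the right-eye image.
    Images are flat lists of (r, g, b) tuples of equal length."""
    return [(l[0], r[1], r[2]) for l, r in zip(left, right)]

left_view = [(200, 10, 10), (0, 0, 0)]
right_view = [(10, 150, 90), (255, 255, 255)]
print(anaglyph(left_view, right_view))  # -> [(200, 150, 90), (0, 255, 255)]
```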

A Study of Artificial Intelligence Generated 3D Engine Animation Workflow

  • Chenghao Wang;Jeanhun Chung
• International journal of advanced smart convergence, v.12 no.4, pp.286-292, 2023
  • This article is set against the backdrop of the rapid development of metaverse and artificial intelligence technologies, and aims to explore the possibility and potential impact of integrating AI technology into the traditional 3D animation production process. Through an in-depth analysis of the differences that arise when merging traditional production processes with AI technology, it summarizes a new, innovative workflow for 3D animation production. This new process takes full advantage of the efficiency and intelligence of AI technology, significantly improving the speed of animation production and enhancing the overall quality of the animations. Furthermore, the paper delves into the creative methods and developmental implications of artificial intelligence technology in real-time rendering engines for 3D animation. It highlights the importance of these technologies in driving innovation and optimizing workflows in animation production, showing how they provide new perspectives and possibilities for the future development of the animation industry.

A Study of Speech Recognition-based Interactive Media Art: Focusing on the Sound-Visual Interactive Installation "Water Music"

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
• Proceedings of the HCI Society of Korea Conference, 2008.02a, pp.354-359, 2008
  • This audio-visual interactive installation is composed of a video projection and digital interface technology, combined with recognition of the viewer's voice. The viewer can interact with the computer-generated moving images growing on the screen by blowing a breath or making a sound. This symbiotic audio and visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer in this installation are Visual C++ and the DirectX SDK. For making water waves, full-3D rendering technology and a particle system were used.
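The water-wave animation described here (implemented by the authors in Visual C++ with the DirectX SDK) can be sketched with one explicit finite-difference step of the 1D wave equation; the function name and the Courant factor `c2` are illustrative assumptions, not the installation's actual code.

```python
def wave_step(prev, cur, c2=0.25):
    """One explicit finite-difference step of the 1D wave equation:
    next = 2*cur - prev + c2 * discrete Laplacian, with fixed
    (zero) boundaries at both ends of the surface."""
    nxt = [0.0] * len(cur)
    for i in range(1, len(cur) - 1):
        nxt[i] = 2 * cur[i] - prev[i] + c2 * (cur[i + 1] - 2 * cur[i] + cur[i - 1])
    return nxt

# A single displaced sample spreads outward on the next step.
h0 = [0.0, 0.0, 1.0, 0.0, 0.0]
h1 = wave_step(h0, h0)
print(h1)  # -> [0.0, 0.25, 0.5, 0.25, 0.0]
```

Stepping this repeatedly (feeding each pair of successive height fields back in) animates ripples; a particle system on top would add spray, as in the installation.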


A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom;Kim Sang Hun;Kwon Sung Min
• Journal of Biomedical Engineering Research, v.26 no.1, pp.23-30, 2005
  • 3D virtual endoscopy has been used as an alternative non-invasive procedure for visualization of hollow organs. However, due to its computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. Then, in the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, by reducing the sub-sampling factor by half, we repeat ray casting for the newly added pixels and determine their pixel values and depth information. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until a full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Using the proposed algorithm, interactive volume rendering becomes more realizable in a PC environment without any special-purpose hardware.
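The recursive sub-sampling schedule described above can be sketched in one dimension: cast rays for every k-th pixel first, then halve the step and cast only the newly added pixels, reusing the depth from earlier passes to shorten each new ray. The function name is an illustrative assumption.

```python
def refinement_passes(width, start_step):
    """Return, per refinement pass, the pixel columns that are ray-cast
    for the first time as the sub-sampling step is halved down to 1.
    Earlier passes' pixels (and their depths) are reused, not recast."""
    done = set()
    passes = []
    step = start_step
    while step >= 1:
        new = [x for x in range(0, width, step) if x not in done]
        done.update(new)
        passes.append(new)
        step //= 2
    return passes

print(refinement_passes(8, 4))  # -> [[0, 4], [2, 6], [1, 3, 5, 7]]
```

Every pixel is cast exactly once across the passes, which is why the speedup comes entirely from the depth reuse rather than from skipping pixels.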

Study on full color RGB LED source lighting for general lighting and Improvement of CRI (Color Rendering Index)

  • Park, Yung-Kyung
• Science of Emotion and Sensibility, v.15 no.3, pp.381-388, 2012
  • The purpose of this study is to check whether LED lighting can be used as general lighting, and to examine the color rendering properties of full color RGB LED lighting. The CRI is one of the important properties for evaluating lighting; however, the present CRI does not fully evaluate LED lightings. First, instead of comparing CRI values for different lightings, performance on a simple task was compared. Three types of lighting were prepared for the experiment: a standard D65 fluorescent tube, a general household fluorescent tube, and RGB LED lighting. All three lightings show high error for Purple-Red, and similar error across all hues, which shows that color discrimination is not affected by the lighting and that LED could be used as general lighting. Second, problems of the conventional CIE CRI method are considered, and new models are suggested for the new lighting source. Each model was evaluated against visual results obtained from a white light matching experiment. The suggested model is based on the CIE CRI method but replaces the color space with CIELAB, the color difference model with CIEDE2000, and the chromatic adaptation model with CAT02.
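The conventional CIE CRI that this paper modifies scores each test color sample from its color shift under the test source, Ri = 100 − 4.6·ΔEi, and averages the special indices into the general index Ra. A minimal sketch of that scoring (the paper's proposal swaps in CIELAB, CIEDE2000, and CAT02 for the underlying ΔE, which is not shown here):

```python
def special_cri(delta_e):
    """Special color rendering index for one test sample:
    Ri = 100 - 4.6 * delta_E, per the conventional CIE CRI method."""
    return 100.0 - 4.6 * delta_e

def general_cri(delta_es):
    """General index Ra: the mean of the special indices over the
    (traditionally eight) test color samples."""
    return sum(special_cri(d) for d in delta_es) / len(delta_es)

# Zero color shift on every sample gives a perfect Ra of 100.
print(general_cri([0.0] * 8))  # -> 100.0
```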


Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
• The Journal of the Korea Contents Association, v.12 no.12, pp.1-13, 2012
  • 3D video is regarded as the next-generation content in numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a fully mature 3D video era. In 2D-to-3D conversion methods, after the depth image of each scene in the 2D video is estimated, stereoscopic video is synthesized using depth image based rendering (DIBR) technologies. This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, the reliability of each cue in the current scene is checked. Based on the results of these reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion is applied to combine the reliable depth cues into the final depth information. Simulation results show that each depth cue is reasonably utilized according to scene type, and that the final depth is generated from cues which effectively represent the current scene.
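The reliability-gated fusion can be sketched as follows. The threshold gate and the reliability-weighted average are illustrative assumptions; the paper instead classifies the scene into one of four types and applies a scene-adaptive fusion rule.

```python
def fuse_depth_cues(depths, reliabilities, threshold=0.5):
    """Discard cues that fail the reliability test, then combine the
    survivors with a reliability-weighted average (a hypothetical
    fusion rule standing in for the paper's scene-adaptive one)."""
    kept = [(d, r) for d, r in zip(depths, reliabilities) if r >= threshold]
    if not kept:
        return None  # no reliable depth cue in this scene
    total = sum(r for _, r in kept)
    return sum(d * r for d, r in kept) / total

# The unreliable third cue (r = 0.2) is excluded from the fusion.
print(fuse_depth_cues([10.0, 20.0, 90.0], [1.0, 1.0, 0.2]))  # -> 15.0
```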

Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures

  • Jung, Jong Myung;Kim, Jong Whan;Hong, In Pyo;Choi, Chi Hoon
• Archives of Plastic Surgery, v.34 no.5, pp.605-610, 2007
  • Purpose: Facial trauma is increasing along with the increasing popularity of sports and increasing exposure to crime and traffic accidents. Compared with the 3D CT of the 1990s, the latest CT has made significant improvements, resulting in higher diagnostic accuracy. The objective of this study is to compare 64 channel 3 dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: 45 patients with facial trauma were examined by 3D VCT from Jan. 2006 to Feb. 2007. 64 channel 3D VCT, which consists of 64 detectors, produces axial images with a slice thickness of 0.625 mm and scans 175 mm per second. These images are transformed into 3-dimensional images using the software Rapidia 2.8: the axial images are reconstructed into a 3-dimensional image by the volume rendering method, and also into coronal or sagittal images by the multiplanar reformatting method. Results: In contrast to previous 3D CT, which formulates 3D images from axial images of 1-2 mm, 64 channel 3D VCT takes 0.625 mm thin axial images to obtain full images without a definite step-ladder appearance. 64 channel 3D VCT is effective in the diagnosis of thin linear bone fractures and of the depth and degree of fracture deviation. Conclusion: In expense and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct full images regardless of direction, with twice the resolving power and four times the speed of previous 3D CT, 3D VCT allows accurate evaluation of the exact site and deviation of fine fractures.

Aria Online: An On-line Game Using Dream3D

  • 이헌주;김현빈
• Journal of Korea Multimedia Society, v.7 no.4, pp.532-541, 2004
  • Computer games have become a core part of the multimedia field in our knowledge- and information-based society. Recently, computer games have been evolving from existing on-line 2D games into on-line 3D games, which give players more realism. We are competitive in developing on-line 2D games; however, we have difficulty maintaining that competitive edge in 3D game technologies owing to limited technology. In this paper, we design and develop Aria Online, an on-line 3D prototype game using Dream3D. The game supports simultaneous connections of multiple users on each game server and full flexibility of the user's view.
