• Title/Abstract/Keyword: full-3D rendering


A Simplified Graphics System Based on Direct Rendering Manager System

  • Baek, Nakhoon
    • Journal of Information and Communication Convergence Engineering / Vol. 16, No. 2 / pp. 125-129 / 2018
  • In the field of computer graphics, rendering speed is one of the most important factors. Contemporary rendering is performed using 3D graphics systems with windowing system support. Since typical graphics systems, including OpenGL and the DirectX library, focus on a wide variety of graphics rendering features, the rendering process itself consists of many complicated operations. In contrast, early computer systems manipulated the graphics hardware directly and achieved simple, efficient graphics handling. We suggest an alternative method of accelerated 2D and 3D graphics output based on directly accessing modern GPU hardware through the direct rendering manager (DRM) system. On the basis of this DRM support, we exchange graphics instructions and graphics data directly with the hardware and achieve better performance than full 3D graphics systems. We present a prototype system that provides a set of simple 2D and 3D graphics primitives. Experimental results and screenshots are included.

Occlusion-based Direct Volume Rendering for Computed Tomography Image

  • Jung, Younhyun
    • Journal of Multimedia Information System / Vol. 5, No. 1 / pp. 35-42 / 2018
  • Direct volume rendering (DVR) is an important 3D visualization method for medical images, as it depicts the full volumetric data. However, because DVR renders the whole volume, regions of interest (ROIs) such as a tumor that are embedded within the volume may be occluded from view. Thus, conventional 2D cross-sectional views are still widely used, while the advantages of DVR are often neglected. In this study, we propose a new visualization algorithm in which we augment a 2D slice of interest (SOI) from an image volume with volumetric information derived from the DVR of the same volume. Our occlusion-based DVR augmentation for SOI (ODAS) uses the occlusion information derived from the voxels in front of the SOI to calculate a depth parameter that controls the amount of DVR visibility; this provides 3D spatial cues without impairing the visibility of the SOI. We outline the capabilities of ODAS and, through a variety of computed tomography (CT) medical image examples, compare it to a conventional fusion of the SOI and the clipped DVR.
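The core of such an approach is an occlusion measure accumulated from the voxels lying between the viewer and the slice of interest, which is then mapped to a parameter controlling how much DVR is blended over the slice. A minimal sketch of that idea, assuming a scalar CT volume stored as a NumPy array, a simple linear opacity transfer function, and an axis-aligned viewing direction (all of these are simplifying assumptions, not the paper's actual pipeline):

```python
import numpy as np

def occlusion_in_front_of_slice(volume, slice_index, axis=0,
                                opacity_lo=200.0, opacity_hi=1000.0):
    """Accumulate opacity of the voxels in front of a slice of interest (SOI).

    volume       : 3D array of CT intensities
    slice_index  : index of the SOI along `axis`; voxels with smaller indices
                   are treated as lying between the viewer and the SOI
    opacity_lo/hi: linear ramp of the (assumed) opacity transfer function
    Returns a 2D map in [0, 1]; 1 means the SOI is fully occluded at that pixel.
    """
    front = np.take(volume, np.arange(slice_index), axis=axis)
    # Simple linear opacity transfer function (an assumption, not the paper's).
    alpha = np.clip((front - opacity_lo) / (opacity_hi - opacity_lo), 0.0, 1.0)
    # Front-to-back compositing of opacity only: 1 - prod(1 - alpha_i).
    transparency = np.prod(1.0 - alpha, axis=axis)
    return 1.0 - transparency

def dvr_visibility(occlusion, max_visibility=0.6):
    """Map occlusion to a per-pixel DVR blending weight: the more the SOI is
    occluded, the less DVR is overlaid, so the slice itself stays readable."""
    return max_visibility * (1.0 - occlusion)

# Toy usage: random volume, SOI at slice 40, viewed along the first axis.
vol = np.random.uniform(0, 1500, size=(80, 128, 128)).astype(np.float32)
occ = occlusion_in_front_of_slice(vol, slice_index=40, axis=0)
weight = dvr_visibility(occ)
print(occ.shape, weight.min(), weight.max())
```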

A Prototype Implementation for 3D Animated Anaglyph Rendering of Multi-typed Urban Features using Standard OpenGL API

  • Lee, Ki-Won
    • Korean Journal of Remote Sensing / Vol. 23, No. 5 / pp. 401-408 / 2007
  • Animated anaglyph is the most cost-effective method for 3D stereo visualization of virtual or actual 3D geo-based data models. Unlike 3D anaglyph scene generation using paired epipolar images, the main data set of this study is a multi-typed 3D feature model containing 3D shaped objects, a DEM, and satellite imagery. For this purpose, a prototype implementation of 3D animated anaglyph rendering using the OpenGL API is carried out, and virtual 3D feature modeling is performed to demonstrate the applicability of the anaglyph approach. Although the 3D features are not real objects at this stage, they can be substituted with actual 3D feature models with full texture images along all facades. Currently, this is regarded as a special viewing effect within 3D GIS application domains, because stereo 3D viewing is only one of many GIS functionalities and remote sensing image processing modules. The animated anaglyph process can be linked with real-time manipulation of the 3D feature model and its database attributes in real-world problems. This feature-based 3D animated anaglyph scheme is also a bridging technology toward image-based 3D animated anaglyph rendering systems, portable mobile 3D stereo viewing systems, and auto-stereoscopic viewing systems that require no glasses for multiple viewers.
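At the image level, a red-cyan anaglyph is produced by taking the red channel of the left-eye render and the green/blue channels of the right-eye render; on the GPU this is commonly done with color masks between the two eye passes. A minimal CPU-side sketch, assuming two already-rendered RGB frames as NumPy arrays (the paper's OpenGL camera-offset details are not reproduced here):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Compose a red-cyan anaglyph frame from left/right eye renders.

    left_rgb, right_rgb: HxWx3 uint8 arrays rendered from two horizontally
    offset cameras. The left image contributes the red channel, the right
    image the green and blue channels, matching red-cyan glasses.
    """
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red from the left eye
    out[..., 1:] = right_rgb[..., 1:]   # green/blue from the right eye
    return out

def animate(frames_left, frames_right):
    """Yield anaglyph frames for an animated sequence of stereo pairs."""
    for l, r in zip(frames_left, frames_right):
        yield red_cyan_anaglyph(l, r)

# Toy usage with synthetic frames.
h, w = 240, 320
left = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
right = np.roll(left, shift=-8, axis=1)   # crude horizontal disparity
print(red_cyan_anaglyph(left, right).shape)
```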

A Study of Artificial Intelligence Generated 3D Engine Animation Workflow

  • Chenghao Wang;Jeanhun Chung
    • International Journal of Advanced Smart Convergence / Vol. 12, No. 4 / pp. 286-292 / 2023
  • This article is set against the backdrop of the rapid development of the metaverse and artificial intelligence technologies, and explores the possibility and potential impact of integrating AI technology into the traditional 3D animation production process. Through an in-depth analysis of the differences that arise when traditional production processes are merged with AI technology, it summarizes a new, innovative workflow for 3D animation production. This new process takes full advantage of the efficiency and intelligent features of AI technology, significantly improving the efficiency of animation production and enhancing the overall quality of the animations. Furthermore, the paper delves into the creative methods and developmental implications of artificial intelligence technology in real-time rendering engines for 3D animation. It highlights the importance of these technologies in driving innovation and optimizing workflows in animation production, showing how they provide new perspectives and possibilities for the future development of the animation industry.

A Study on Voice Recognition-based Interactive Media Art: Focusing on the Sound-Visual Interactive Installation "Water Music"

  • 이명학;강성일;김봉화;김규정
    • Proceedings of the HCI Society of Korea Conference / 2008 Conference, Part 1 / pp. 354-359 / 2008
  • The sound-visual interactive installation "Water Music" expresses water ripples that change according to the audience's voice. Using a voice-recognition-based interface technique, the ripples appear as visual wave images video-projected onto a wall. The wave imagery consists of waves drawn with the brush strokes of Oriental painting and imagery generated from small circular particles. By blowing or making sounds, the audience can interact with the movement of the computer-generated waves that are continuously created on the screen. This symbiotic sound-visual environment lets the audience experience an illusionary space both mentally and physically. The main tools used to generate the interactive moving waves in this installation were Visual C++ and the DirectX SDK, together with full-frame 3D rendering techniques and a particle system.
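The installation's coupling of sound to imagery can be sketched as an amplitude-driven particle emitter: louder input spawns more and stronger ripples. A minimal sketch, assuming a per-frame audio amplitude in [0, 1] (the amplitude values, spawn rate, and decay constants below are hypothetical stand-ins for real microphone capture and the installation's Visual C++/DirectX implementation):

```python
import numpy as np

class RippleEmitter:
    """Emit expanding ring 'ripple' particles whose count and strength
    follow the input amplitude, loosely mimicking voice-driven ripples."""

    def __init__(self):
        self.ripples = []  # each ripple: [x, y, radius, strength]

    def update(self, amplitude, dt=1 / 30):
        # Louder input spawns more, stronger ripples (assumed mapping).
        for _ in range(int(amplitude * 5)):
            x, y = np.random.uniform(0, 1, size=2)
            self.ripples.append([x, y, 0.0, amplitude])
        for r in self.ripples:
            r[2] += 0.3 * dt        # ring expands
            r[3] *= 0.98            # and slowly fades
        self.ripples = [r for r in self.ripples if r[3] > 0.01]

# Toy usage: a short burst of sound followed by silence.
emitter = RippleEmitter()
for amp in [0.9, 0.7, 0.2, 0.0, 0.0]:
    emitter.update(amp)
    print(len(emitter.ripples), "active ripples")
```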


A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom;Kim Sang Hun;Kwon Sung Min
    • Journal of Biomedical Engineering Research / Vol. 26, No. 1 / pp. 23-30 / 2005
  • 3D virtual endoscopy has been used as an alternative, non-invasive procedure for visualizing hollow organs. However, due to its computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides the volume into hierarchical blocks and classifies them as opaque or transparent. In the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, the sub-sampling factor is halved, ray casting is repeated for the newly added pixels, and their pixel values and depth information are determined; here, the previously obtained depth information is used to reduce the processing time. This step is performed recursively until a full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy compared with the brute-force ray casting scheme. Using the proposed algorithm, interactive volume rendering becomes more practical in a PC environment without any special hardware.
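The speed-up comes from casting rays only at sub-sampled pixels first and then, when refining, starting each new ray near the depth already found at neighbouring pixels rather than at the volume boundary. A minimal orthographic sketch of that depth-reuse idea, assuming a binary opaque/transparent volume and first-hit depth rendering (a large simplification of the paper's perspective, block-classified, multi-level pipeline):

```python
import numpy as np

def first_hit_depth(volume, x, y, start=0):
    """March a ray along the z axis at pixel (x, y) and return the depth of
    the first opaque voxel, or the volume depth if nothing is hit."""
    col = volume[x, y, start:]
    hits = np.flatnonzero(col)
    return start + hits[0] if hits.size else volume.shape[2]

def coarse_to_fine_render(volume, margin=2):
    """Render a depth image: first every 2nd pixel with full ray traversal,
    then the rest, reusing neighbouring coarse depths (minus a safety margin)
    as ray start points to shorten the traversal."""
    nx, ny, _ = volume.shape
    depth = np.full((nx, ny), -1, dtype=int)
    # Pass 1: sub-sampled pixels, full ray traversal.
    for x in range(0, nx, 2):
        for y in range(0, ny, 2):
            depth[x, y] = first_hit_depth(volume, x, y)
    # Pass 2: remaining pixels start near the nearest known neighbour depth.
    for x in range(nx):
        for y in range(ny):
            if depth[x, y] >= 0:
                continue
            neighbours = depth[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2]
            known = neighbours[neighbours >= 0]
            start = max(int(known.min()) - margin, 0) if known.size else 0
            depth[x, y] = first_hit_depth(volume, x, y, start)
    return depth

# Toy usage: a sphere of opaque voxels inside a 64^3 volume.
g = np.indices((64, 64, 64))
vol = (((g - 32) ** 2).sum(axis=0) < 20 ** 2).astype(np.uint8)
print(coarse_to_fine_render(vol).min())
```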

Study on full color RGB LED source lighting for general lighting and Improvement of CRI (Color Rendering Index)

  • Park, Yung-Kyung
    • Science of Emotion and Sensibility / Vol. 15, No. 3 / pp. 381-388 / 2012
  • The purpose of this study is to check whether LED lighting can be used as general lighting and to examine the color rendering properties of full color RGB LED lighting. CRI is one of the important metrics for evaluating lighting; however, the present CRI does not adequately evaluate LED light sources. First, rather than comparing CRI values for different light sources, performance on a simple task was compared. Three types of lighting were used in the experiment: a standard D65 fluorescent tube, a general household fluorescent tube, and RGB LED lighting. All three show high error for Purple-Red, and similar error across all hues, indicating that color discrimination is not affected by the light source and that LEDs can be used as general lighting. Second, problems with the conventional CIE CRI method are considered and new models are suggested for the new light source. Each model was evaluated against visual results obtained from a white-light matching experiment. The suggested model is based on the CIE CRI method but replaces the color space with CIELAB, the color difference model with CIEDE2000, and the chromatic adaptation model with CAT02.
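The structure of a CRI-style index is: render a set of test color samples under the reference and the test illuminant, measure the color difference of each sample between the two renderings after chromatic adaptation, and map the average difference to a 0-100 score. A minimal sketch of that scoring step, assuming the sample coordinates have already been converted to CIELAB under each illuminant and using the plain CIE76 ΔE*ab as a stand-in for CIEDE2000 purely to keep the example short (the scaling constant 4.6 is the one used by the classic CIE CRI; everything else here is a simplification, not the paper's proposed model):

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean CIELAB color difference (stand-in for CIEDE2000)."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

def color_rendering_scores(samples_ref_lab, samples_test_lab, scale=4.6):
    """CRI-style scores: special indices R_i = 100 - scale * dE_i and the
    general index R_a as their mean, following the shape of the CIE CRI."""
    de = delta_e_cie76(samples_ref_lab, samples_test_lab)
    special = 100.0 - scale * de
    return special, float(special.mean())

# Toy usage: 8 samples with small shifts between reference and test illuminant.
ref = np.array([[50, 10, 10], [60, -5, 20], [70, 0, 0], [40, 15, -10],
                [55, 25, 5], [65, -10, -15], [45, 5, 30], [75, 20, -5]], float)
test = ref + np.random.normal(scale=1.5, size=ref.shape)
special, general = color_rendering_scores(ref, test)
print(np.round(special, 1), round(general, 1))
```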


Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion

  • 한찬희;최해철;이시웅
    • The Journal of the Korea Contents Association / Vol. 12, No. 12 / pp. 1-13 / 2012
  • 3D video is attracting great attention as next-generation content in various application fields. 2D-to-3D conversion is regarded as a powerful technique for resolving the shortage of 3D video content during the transition to the 3D video era. In general, 2D-to-3D conversion estimates or generates a depth map for each scene of a 2D video and then synthesizes a stereoscopic video using depth image based rendering (DIBR). This paper proposes a new depth fusion method that integrates the various conversion cues present in a 2D video. First, to enable proper depth fusion, each cue is tested to determine whether it can effectively represent the current scene. Based on the results of this reliability check, the scene is then classified into one of four types. Finally, scene-adaptive depth fusion is performed, combining only the reliable depth cues to generate the final depth map. Experimental results show that each cue is used appropriately according to the scene type and that the final depth map is generated from cues that effectively represent the current scene.
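A reliability-gated fusion of several per-pixel depth cue maps can be sketched as follows, assuming each cue comes with a scalar reliability score and that unreliable cues are simply excluded before a weighted average (the four scene types and the actual cue definitions and reliability tests of the paper are not reproduced here):

```python
import numpy as np

def fuse_depth_cues(cue_maps, reliabilities, threshold=0.5):
    """Fuse per-pixel depth cue maps into a single depth map.

    cue_maps      : list of HxW arrays, each a normalized depth estimate in [0, 1]
    reliabilities : one scalar in [0, 1] per cue (assumed to come from a prior
                    check of how well the cue fits the current scene)
    threshold     : cues below this reliability are dropped entirely
    """
    kept = [(m, r) for m, r in zip(cue_maps, reliabilities) if r >= threshold]
    if not kept:                      # no cue is trustworthy: fall back to flat depth
        return np.full_like(cue_maps[0], 0.5)
    maps = np.stack([m for m, _ in kept])
    weights = np.array([r for _, r in kept], dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, maps, axes=1)   # reliability-weighted average

# Toy usage: three hypothetical cues (e.g. motion, focus, geometric perspective).
h, w = 120, 160
motion = np.tile(np.linspace(0, 1, w), (h, 1))
focus = np.random.uniform(0, 1, (h, w))
perspective = np.tile(np.linspace(1, 0, h)[:, None], (1, w))
depth = fuse_depth_cues([motion, focus, perspective], [0.9, 0.3, 0.7])
print(depth.shape, depth.min(), depth.max())
```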

Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures

  • 정종명;김종환;홍인표;최치훈
    • Archives of Plastic Surgery / Vol. 34, No. 5 / pp. 605-610 / 2007
  • Purpose: Facial trauma is increasing with the growing popularity of sports and greater exposure to crime and traffic accidents. Compared to the 3D CT of the 1990s, the latest CT has improved significantly, resulting in higher diagnostic accuracy. The objective of this study is to compare 64 channel 3 dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: 45 patients with facial trauma were examined by 3D VCT from Jan. 2006 to Feb. 2007. The 64 channel 3D VCT, which consists of 64 detectors, produces axial images with a slice thickness of 0.625 mm and scans 175 mm per second. These images are transformed into 3D images using the Rapidia 2.8 software: the axial images are reconstructed into a 3D image by volume rendering, and into coronal or sagittal images by multiplanar reformatting. Results: In contrast to the previous 3D CT, which forms 3D images from 1-2 mm axial images, 64 channel 3D VCT takes 0.625 mm thin axial images and obtains full images without a definite step-ladder appearance. 64 channel 3D VCT is effective in diagnosing thin linear bone fractures and the depth and degree of fracture displacement. Conclusion: In terms of cost and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct full images in any direction, with twice the resolving power and four times the speed of the previous 3D CT, 3D VCT allows accurate evaluation of the exact site and displacement of fine fractures.
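Multiplanar reformatting of a stack of thin axial slices amounts to re-slicing the volume along the other axes, with resampling to account for the difference between in-plane pixel spacing and slice thickness. A minimal sketch, assuming isotropic in-plane spacing and the 0.625 mm slice thickness quoted above (array names and spacing values are illustrative, not taken from the paper):

```python
import numpy as np

def coronal_and_sagittal(axial_stack, slice_thickness=0.625, pixel_spacing=0.5):
    """Reformat an axial CT stack (slices, rows, cols) into coronal and
    sagittal views, repeating slices so both views are roughly isotropic."""
    # How many times each axial slice must be repeated along z so that one
    # output pixel corresponds to roughly one in-plane pixel spacing.
    repeat = max(int(round(slice_thickness / pixel_spacing)), 1)
    iso = np.repeat(axial_stack, repeat, axis=0)      # nearest-neighbour resampling
    coronal = iso.transpose(1, 0, 2)   # (rows, z, cols): one coronal image per row
    sagittal = iso.transpose(2, 0, 1)  # (cols, z, rows): one sagittal image per column
    return coronal, sagittal

# Toy usage: 200 axial slices of 256x256 pixels.
stack = np.zeros((200, 256, 256), dtype=np.int16)
cor, sag = coronal_and_sagittal(stack)
print(cor.shape, sag.shape)
```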

Aria Online: An On-Line Game Using Dream3D

  • 이헌주;김현빈
    • Journal of Korea Multimedia Society / Vol. 7, No. 4 / pp. 532-541 / 2004
  • Computer games are considered the flower of the multimedia field in the knowledge and information society. Existing online games have mostly been 2D, but 3D online games, which can give users a much greater sense of realism, have recently been drawing attention. Korean game technology is competitive in 2D online games, but owing to technical issues it lags somewhat behind leading countries in 3D game technology. This paper describes the design and development of Aria Online, a 3D online game built with Dream3D. The developed game is an online game in which multiple users can connect to a game server and play simultaneously.
