• Title/Summary/Keyword: 3D Texture

3D Mesh Simplification from Range Image Considering Texture Mapping (Texture Mapping을 고려한 Range Image의 3차원 형상 간략화)

  • Kong, Changhwan;Kim, Changhun
    • Journal of the Korea Computer Graphics Society / v.3 no.1 / pp.23-28 / 1997
  • We reconstruct a 3D surface from a range image, which consists of a range map and a texture map, and simplify the reconstructed triangular mesh. In this paper, we introduce a fast simplification method that can glue texture onto the 3D surface model and adapt to real-time multiple levels of detail. We verify its efficiency by applying it to scanned data of Korean relics.

Preprocessing Method for Efficient Compression of Patch-based Image (패치 영상의 효율적 압축을 위한 전처리 방법)

  • Lee, Sin-Wook;Lee, Sun-Young;Chang, Eun-Youn;Hur, Nam-Ho;Jang, Euee-S.
    • Journal of Broadcast Engineering / v.13 no.1 / pp.109-118 / 2008
  • In mapping a texture image onto a 3D mesh model for photo-realistic graphics applications, compression of the texture image is as important as that of the 3D mesh geometry. Typically, the size of the (compressed) texture image of a 3D model is comparable to that of the (compressed) 3D mesh geometry, yet most 3D model compression techniques compress the mesh geometry rather than the texture image. Well-known image compression standards (e.g., JPEG) have been used extensively for texture image compression, but such techniques are not efficient for images composed of texture patches, since the patches are only weakly correlated with one another. In this paper, we propose a preprocessing method that substantially improves the efficiency of texture compression. Experimental results show that the proposed method achieves bit savings of 23% to 45%.

A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan;Jung, Chul-Hee;Lee, Jae-Yun;Cho, Sun-Young;Lee, Myeong-Won
    • The KIPS Transactions: Part A / v.17A no.6 / pp.281-288 / 2010
  • In this paper, we describe a system for generating a 3D human face from 3D scanned facial data and photo images, and for producing morphing animations. The system comprises a facial feature input tool, a 3D texture mapping interface, and a 3D facial morphing interface. The facial feature input tool supports both texture mapping and morphing animation: facial morphing areas between two facial models are defined by interactively inputting facial feature points. Texture mapping is done first, using three photo images of the face model: one front and two side views. The morphing interface then generates a morphing animation between corresponding areas of the two texture-mapped facial models. The system allows users to interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
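The morphing animation between two facial models can be sketched as a linear blend of corresponding vertices. This is a minimal Python sketch, not the paper's implementation: it assumes the two models already share a 1:1 vertex correspondence (as established by the interactively entered feature points), and the names `morph`, `neutral`, and `smile` are illustrative.

```python
def morph(src, dst, t):
    """Linearly blend corresponding vertices of two face meshes.
    src, dst: lists of (x, y, z) vertex positions in 1:1 correspondence;
    t: animation parameter in [0, 1] (0 -> src, 1 -> dst)."""
    return [tuple((1 - t) * a + t * b for a, b in zip(v0, v1))
            for v0, v1 in zip(src, dst)]

# Two toy "expressions" with two corresponding vertices each.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 0.5, 0.0), (1.0, 0.2, 0.1)]

frame = morph(neutral, smile, 0.5)  # halfway between the two expressions
```

Stepping `t` from 0 to 1 over successive frames yields the morphing animation.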

Design and Implementation of Virtual Aquarium

  • Bak, Seon-Hui;Lee, Heeman
    • Journal of the Korea Society of Computer and Information / v.21 no.12 / pp.43-49 / 2016
  • This paper presents the design and implementation of a virtual aquarium populated with 3D models of fish colored by viewers, with the aim of creating interaction between viewers and the aquarium. The system is composed of multiple texture extraction modules, a single interface module, and a single display module. A texture extraction module recognizes the QR code on a canvas to obtain the predefined mapping-table information and then extracts the texture data for the corresponding 3D model: the scanned image is segmented and warp-transformed onto the texture image using the mapping information. The extracted texture is transferred to the interface module, which saves it on the server computer and sends the fish code and texture information to the display module. The display module spawns a fish in the virtual aquarium by applying the transmitted texture to the predefined 3D model. Fish in the virtual aquarium use three different swimming methods, discussed in this paper: self-swimming, autonomous swimming, and leader-following swimming. Future work will implement a storytelling-based virtual aquarium to further increase interaction with viewers.

Fast Intra Mode Decision Algorithm for Depth Map Coding using Texture Information in 3D-AVC (3D-AVC에서 색상 영상 정보를 이용한 깊이 영상의 빠른 화면 내 예측 모드 결정 기법)

  • Kang, Jinmi;Chung, Kidong
    • Journal of Korea Multimedia Society / v.18 no.2 / pp.149-157 / 2015
  • The 3D-AVC standard aims at improving coding efficiency by applying new techniques for intra, inter, and view prediction. 3D video scenes are rendered from existing texture video plus an additional depth map, but coding the depth map increases the computational complexity of the encoding process, and reducing the complexity of 3D-AVC is very important for real-time applications. In this paper, we present a fast intra mode decision algorithm that reduces this complexity burden in a 3D video system. The proposed algorithm exploits the similarity between texture video and depth map: the best intra prediction mode of the depth map tends to be similar to that of the corresponding texture video, so an early decision can be made in depth-map intra prediction by reusing the coded intra mode of the texture video. An adaptive threshold for early termination is also proposed. Experimental results show that the proposed algorithm saves 29.7% of encoding time on average without any significant loss in bit rate or PSNR.
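The early-decision idea above can be sketched as follows. This is a minimal illustration, not the 3D-AVC implementation: `rd_cost` stands in for a full rate-distortion cost evaluation, and the mode numbers, costs, and threshold are toy values.

```python
def fast_depth_intra_mode(texture_mode, candidate_modes, rd_cost, threshold):
    """Sketch of the early decision: evaluate the co-located texture
    block's intra mode first; if its RD cost is already below the
    (adaptive) threshold, terminate early, otherwise fall back to a
    full search over the remaining candidate modes."""
    best_mode, best_cost = texture_mode, rd_cost(texture_mode)
    if best_cost <= threshold:          # early termination
        return best_mode, best_cost
    for m in candidate_modes:           # full search fallback
        if m == texture_mode:
            continue
        c = rd_cost(m)
        if c < best_cost:
            best_mode, best_cost = m, c
    return best_mode, best_cost

# Toy RD costs per mode; mode 4 would be found by a full search.
cost = {0: 9.0, 1: 7.5, 4: 3.2, 8: 6.1}.get
mode, c = fast_depth_intra_mode(1, [0, 1, 4, 8],
                                lambda m: cost(m, 10.0), threshold=8.0)
```

With `threshold=8.0` the texture mode (1) is accepted immediately, trading a small RD loss for skipping the search; a tighter threshold would trigger the full search instead.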

Registration of a 3D Scanned model with 2D Image and Texture Mapping (3차원 스캐닝 모델과 2차원 이미지의 레지스트레이션과 텍스쳐 맵핑)

  • Kim Young-Woong;Kim Young-Yil;Jun Cha-Soo;Park Sehyung
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.456-463 / 2003
  • This paper presents a texture mapping method for a 3D scanned model using 2D images taken from different views. The texture mapping process consists of two steps: registration of the 3D facet model to the images by interactive point matching, and 3D texture mapping of the image pieces onto the corresponding facets. Some implementation issues and illustrative examples are described.

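Once the facet model is registered to an image, texture coordinates for each facet vertex can be obtained by projecting the vertex into that image. A minimal sketch, assuming a pinhole camera described by a 3x4 projection matrix `P` (which, in the paper's setting, would be estimated from the interactive point matches); the matrix and point values are illustrative.

```python
def project(P, X):
    """Project a 3D point X = (x, y, z) through a 3x4 camera matrix P,
    returning image coordinates (u, v) usable as texture coordinates."""
    Xh = (X[0], X[1], X[2], 1.0)  # homogeneous coordinates
    u, v, w = (sum(P[i][j] * Xh[j] for j in range(4)) for i in range(3))
    return (u / w, v / w)         # perspective divide

# Toy camera: focal length 2, no rotation or translation.
P = [[2.0, 0.0, 0.0, 0.0],
     [0.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
uv = project(P, (1.0, 2.0, 4.0))
```

Mapping each facet to the image taken from the view in which it is best visible then assigns the image piece to that facet.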
A design of low power structures of texture caches for mobile 3D graphics accelerator (모바일 3D 그래픽 가속기를 위한 저전력 텍스쳐 캐쉬 구조 설계)

  • Kim, Young-Sik;Lee, Jae-Young
    • Journal of Korea Game Society / v.6 no.4 / pp.63-70 / 2006
  • This paper studies various low-power texture cache structures for a mobile 3D graphics accelerator, aiming to reduce the memory latency of texture data, and designs a texture cache whose power-mode transition threshold values vary according to the texture filtering algorithm. In a trace-driven simulation using the Quake game engine as the benchmark, the performance of these structures was compared and the proposed variable-threshold algorithm was verified.

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles is itself a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first holds light color and the second holds light direction information. Each frame goes through two steps. The first step, based on a compute shader, updates the particle information required for 3D texture initialization and rendering: each particle position is converted to 3D texture sampling coordinates, and at those coordinates the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxel, while the second accumulates the sum of the direction vectors from that voxel to the particle lights. The second step operates in the general rendering pipeline: from the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 between the 3D texture and the game world, the world coordinates of the pixel are used directly as sampling coordinates, and lighting is computed from the sampled color and light direction vector. The 3D texture corresponds 1:1 to the actual game world with a minimum unit of 1 m, so at scales smaller than 1 m, staircase artifacts appear due to the resolution limit; interpolation and supersampling during texture sampling are used to alleviate them. Measured frame times were 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particle lights, and 214 ms (forward) and 104 ms (deferred) with 1,024,766 particle lights.
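The first-pass scatter described above can be sketched on the CPU. This simplified Python sketch (the paper uses a compute shader on the GPU) accumulates only the color texture, with the direction texture handled analogously; the 4 m toy world and the particle values are illustrative.

```python
GRID = 4  # toy 4x4x4 world, 1 m per voxel (texture corresponds 1:1 to world)

# First 3D texture: per-voxel sum of light colors (RGB tuples).
color_tex = [[[(0.0, 0.0, 0.0)] * GRID for _ in range(GRID)]
             for _ in range(GRID)]

def to_voxel(pos):
    """World position -> 3D texture sampling coordinate (1 m units, clamped)."""
    return tuple(min(GRID - 1, max(0, int(c))) for c in pos)

def scatter(particles):
    """Accumulate each particle light's color into its voxel."""
    for pos, rgb in particles:
        x, y, z = to_voxel(pos)
        r, g, b = color_tex[x][y][z]
        color_tex[x][y][z] = (r + rgb[0], g + rgb[1], b + rgb[2])

# Two particle lights that land in the same voxel (1, 0, 3).
scatter([((1.2, 0.5, 3.9), (1.0, 0.0, 0.0)),
         ((1.7, 0.4, 3.1), (0.0, 0.5, 0.0))])
```

In the second pass, a pixel's world position would be converted with the same `to_voxel` mapping to fetch the accumulated color (and, from the second texture, the summed light direction) for shading.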

Study on evaluating the significance of 3D nuclear texture features for diagnosis of cervical cancer (자궁경부암 진단을 위한 3차원 세포핵 질감 특성값 유의성 평가에 관한 연구)

  • Choi, Hyun-Ju;Kim, Tae-Yun;Malm, Patrik;Bengtsson, Ewert;Choi, Heung-Kook
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.83-92 / 2011
  • The aim of this study is to evaluate whether 3D nuclear chromatin texture features are significant for recognizing the progression of cervical cancer. In particular, we assessed whether our method could detect subtle differences in the chromatin pattern of seemingly normal cells in specimens with malignancy. We extracted nuclear texture features based on the 3D GLCM (Gray-Level Co-occurrence Matrix) and the 3D wavelet transform from 100 cell volumes for each group (normal, LSIL, and HSIL). To evaluate the feasibility of 3D chromatin texture analysis, we compared the correct classification rates of classifiers using these features, and additionally compared them against classifiers using 2D nuclear texture features extracted in the same way. The classifier using the 3D nuclear texture features gave better results, meaning our method can improve the accuracy and reproducibility of cervical cell quantification.
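A 3D GLCM of the kind used above can be sketched in a few lines. This minimal Python sketch is illustrative, not the paper's implementation: it counts co-occurring quantized intensities along a single offset and computes one Haralick-style feature (energy); the toy volume and offset are assumptions.

```python
from collections import Counter

def glcm_3d(volume, offset=(0, 0, 1)):
    """Normalized 3D gray-level co-occurrence matrix: relative frequency
    of quantized intensity pairs separated by `offset` (dz, dy, dx)."""
    dz, dy, dx = offset
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    counts = Counter()
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                z2, y2, x2 = z + dz, y + dy, x + dx
                if 0 <= z2 < Z and 0 <= y2 < Y and 0 <= x2 < X:
                    counts[(volume[z][y][x], volume[z2][y2][x2])] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def energy(glcm):
    """Haralick-style energy (angular second moment)."""
    return sum(p * p for p in glcm.values())

vol = [[[0, 1], [1, 0]],
       [[0, 1], [1, 0]]]  # toy 2x2x2 quantized nucleus volume
g = glcm_3d(vol)
```

Other GLCM features (contrast, correlation, homogeneity) follow the same pattern of sums over the normalized matrix, typically averaged over several 3D offsets.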

Extended Cartoon Rendering using 3D Texture (3차원 텍스처를 이용한 카툰 렌더링의 만화적 스타일 다양화)

  • Byun, Hae-Won;Jung, Hye-Moon
    • The Journal of the Korea Contents Association / v.11 no.8 / pp.123-133 / 2011
  • In this paper, we propose a new method for toon shading using a 3D texture, which renders 3D objects in a cartoon style. Conventional toon shading with a 1D texture determines the shading tone from the relative orientation of the light vector and the surface normal, but a 1D texture alone cannot express tone changes that vary with the viewing condition. Barla et al. therefore replaced the 1D texture with a 2D texture whose second dimension corresponds to view-dependent effects such as level of abstraction and depth of field. The proposed scheme extends the 2D texture to a 3D texture by adding a dimension carrying geometric information of the 3D object, such as curvature, saliency, and coordinates. This extension supports two kinds of cartoon-style diversification. First, a "shape exaggeration effect" emphasizes silhouettes or highlights according to the geometric information of the 3D object. Second, "cartoon-specific effects" reproduce the screen tones and out-of-focus effects frequently appearing in cartoons. We demonstrate the effectiveness of our approach through examples of a number of 3D objects rendered in various cartoon styles.
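The conventional 1D toon lookup that this line of work starts from can be sketched as follows. This is a minimal Python sketch with an illustrative four-band tone ramp; the paper's contribution adds view-dependent (2D) and geometric (3D) dimensions to this same lookup, which are not shown here.

```python
TONE_RAMP = [0.2, 0.5, 0.8, 1.0]  # 1D tone texture: dark -> light bands

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def toon_shade(normal, light_dir, ramp=TONE_RAMP):
    """Quantize the diffuse term N.L into a discrete tone band,
    i.e. a 1D texture lookup indexed by N.L in [0, 1]."""
    ndotl = max(0.0, dot(normal, light_dir))
    idx = min(len(ramp) - 1, int(ndotl * len(ramp)))
    return ramp[idx]

# Unit surface normal and unit light direction.
tone = toon_shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8))
```

Extending the lookup means sampling `ramp[view_effect][geometry][idx]` instead of `ramp[idx]`, so tone bands can shift with viewing distance or with per-vertex curvature and saliency.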