• Title/Summary/Keyword: render to texture

Search Results: 31

Integrating Color, Texture and Edge Features for Content-Based Image Retrieval (내용기반 이미지 검색을 위한 색상, 텍스쳐, 에지 기능의 통합)

  • Ma Ming;Park Dong-Won
    • Science of Emotion and Sensibility
    • /
    • v.7 no.4
    • /
    • pp.57-65
    • /
    • 2004
  • In this paper, we present a hybrid approach that incorporates color, texture, and shape in content-based image retrieval. Colors in each image are clustered into a small number of representative colors. The feature descriptor consists of the representative colors and their percentages in the image. A similarity measure similar to the cumulative color histogram distance measure is defined for this descriptor. The co-occurrence matrix, a statistical method, is used for texture analysis. An optimal set of five statistical functions is extracted from the co-occurrence matrix of each image in order to render the feature vector for each image maximally informative. The edge information captured in edge histograms is extracted after a pre-processing phase that performs color transformation, quantization, and filtering. The features thus extracted are stored in feature vectors and later compared with an intersection-based method. Experimental results and precision-recall analysis show the content-based retrieval system to be effective in terms of retrieval and scalability.
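The co-occurrence-matrix step above can be sketched in Python/NumPy. The abstract does not name its five statistical functions, so contrast, energy, entropy, homogeneity, and correlation (a common Haralick-style set) are assumed here:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Five co-occurrence statistics (an assumed Haralick-style set)."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j + 1e-12)
    return np.array([contrast, energy, entropy, homogeneity, correlation])
```

One such five-number vector per offset, concatenated over a few offsets, gives the texture part of the descriptor.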


Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal
    • /
    • v.34 no.5
    • /
    • pp.791-794
    • /
    • 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.

A Study on Real-time Graphic Workflow For Achieving The Photorealistic Virtual Influencer

  • Haitao Jiang
    • International journal of advanced smart convergence
    • /
    • v.12 no.1
    • /
    • pp.130-139
    • /
    • 2023
  • With the increasing popularity of computer-generated virtual influencers, the trend is rising, especially on social media. The famous virtual influencer characters Lil Miquela and Imma were both created with CGI graphics workflows. That process is typically a linear affair: iteration is challenging and costly, development efforts are frequently siloed off from one another, and it does not provide a real-time interactive experience. In a previous study, a real-time graphic workflow was proposed for the Digital Actor Hologram project, but its output graphic quality fell short of the results obtained from the CGI graphic workflow. Therefore, a real-time engine graphic workflow for virtual influencers is proposed in this paper to facilitate the creation of real-time interactive functions and realistic graphic quality. The real-time graphic workflow comprises four processes: Facial Modeling, Facial Texture, Material Shader, and Look-Development. A performance analysis of the real-time graphic workflow for the Digital Actor Hologram demonstrates the usefulness of this research. Our research will be useful in producing virtual influencers.

Displacement mapping using an image pyramid based multi-layer height map (이미지 피라미드 기반 다층 높이 맵을 사용한 변위 매핑 기법)

  • Chun, Young-Jae;Oh, Kyoung-Su
    • Journal of the Korea Computer Graphics Society
    • /
    • v.14 no.3
    • /
    • pp.11-17
    • /
    • 2008
  • Many methods have been researched to represent complex surfaces using a height map without a large number of vertices. However, a single-layer height map cannot represent more complex objects because it stores only one height value at each position. In this paper, we introduce a new approach to render more complex objects, which cannot be generated from a single-layer height map, using a multi-layer height map. We store the height values of the scene in the texture channels in ascending order. Each pair of ordered height values composes a geometry block, and we exploit this property. For an accurate ray search, we store the highest value in odd channels and the lowest value in even channels to generate a quad-tree height map. Our ray search algorithm finds accurate intersections between the viewing ray and the height values using the quad-tree height map. We solve the aliasing problems at grazing angles that occurred in previous methods and render the resulting scene in real time.
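A minimal CPU sketch of the multi-layer search described above, in Python/NumPy. Per-texel (bottom, top) height pairs stand in for the texture channels; this is only the plain linear ray march, without the paper's quad-tree acceleration:

```python
import numpy as np

def ray_hits_multilayer(heights, ray_o, ray_d, steps=256):
    """Linear ray march against a multi-layer height map.

    heights: (H, W, 2k) array whose channels hold k (bottom, top)
    height pairs per texel in ascending order -- the geometry blocks.
    Returns the first (x, y, z) hit or None.  The paper accelerates
    this search with a quad-tree height map.
    """
    h, w, c = heights.shape
    p = np.asarray(ray_o, dtype=float)
    d = np.asarray(ray_d, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(steps):
        p = p + d                      # one texel-sized step
        xi, yi = int(p[0]), int(p[1])
        if not (0 <= xi < w and 0 <= yi < h):
            return None                # ray left the map
        for k in range(0, c, 2):       # test each geometry block
            lo, hi = heights[yi, xi, k], heights[yi, xi, k + 1]
            if lo <= p[2] <= hi:
                return (p[0], p[1], p[2])
    return None
```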


Cartoon Character Rendering based on Shading Capture of Concept Drawing (원화의 음영 캡쳐 기반 카툰 캐릭터 렌더링)

  • Byun, Hae-Won;Jung, Hye-Moon
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.8
    • /
    • pp.1082-1093
    • /
    • 2011
  • Traditional cartoon character rendering cannot properly reproduce the feel of the concept drawings. In this paper, we propose a capture technique that obtains a toon shading model from the concept drawings, and with it a novel system for rendering 3D cartoon characters. The benefits of this system are that it cartoonizes the 3D character according to saliency, emphasizing the character's form, and that it supports a sketch-based user interface for artists to edit shading in post-production. For this, we generate the texture automatically with an RGB color-sorting algorithm that analyzes the color distribution and rates of the selected region. In the cartoon rendering process, we use saliency as a measure of the visual importance of each area of the 3D mesh, and we provide a novel cartoon rendering algorithm based on the saliency of the 3D mesh. For fine adjustments of the shading style, we propose a user interface that allows artists to freely add and delete shading on a 3D model. Finally, this paper shows the usefulness of the proposed system through user evaluation.
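The color-sorting step above can be sketched as follows, in Python/NumPy. The tone count and noise threshold are assumptions; the abstract only states that colors are sorted by distribution and rate:

```python
import numpy as np

def build_toon_ramp(region_pixels, max_tones=4, min_rate=0.05):
    """Build a 1D toon-shading ramp from pixels captured off a concept
    drawing.  A sketch of the RGB color-sorting idea; max_tones and
    min_rate are assumed thresholds.

    region_pixels: (N, 3) uint8 RGB samples from the selected region.
    Returns (tones, rates), ordered dark to light.
    """
    px = np.asarray(region_pixels, dtype=np.uint8).reshape(-1, 3)
    colors, counts = np.unique(px, axis=0, return_counts=True)
    rates = counts / counts.sum()
    keep = rates >= min_rate                     # drop noise colors
    colors, rates = colors[keep], rates[keep]
    top = np.argsort(rates)[::-1][:max_tones]    # dominant tones
    colors, rates = colors[top], rates[top]
    lum = colors @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(lum)                      # shadow -> highlight
    return colors[order], rates[order]
```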

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles exists as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture holds light color, and the second holds light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second 3D texture accumulates the sum of the direction vectors from each voxel to the particle lights. The second stage operates in the general rendering pipeline. From the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 between the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then computed from the sampled color and the light's direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, but in areas smaller than 1 m, problems such as staircase artifacts caused by resolution restrictions occur. Interpolation and supersampling are performed during texture sampling to mitigate these problems. Measurements of the time taken to render a frame showed that, with 262,144 particles, 146 ms was spent in the forward lighting pipeline and 46 ms in the deferred lighting pipeline; with 1,024,766 particle lights, the forward lighting pipeline took 214 ms and the deferred lighting pipeline 104 ms.
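The first (compute-shader) stage described above can be sketched on the CPU with NumPy. The fixed influence radius in voxels is an assumption; the paper determines each light's range dynamically:

```python
import numpy as np

def accumulate_particle_lights(positions, colors, grid_size, radius=1):
    """CPU sketch of the first (compute-shader) stage.

    Splats each particle light into two 3D textures: one accumulating
    light color per voxel, one accumulating the direction from the
    voxel toward the light.  1 voxel == 1 m of world space as in the
    paper; the fixed influence radius (in voxels) is an assumption.
    """
    color_tex = np.zeros((*grid_size, 3))
    dir_tex = np.zeros((*grid_size, 3))
    for pos, col in zip(positions, colors):
        v = np.floor(pos).astype(int)          # voxel holding the light
        lo = np.maximum(v - radius, 0)
        hi = np.minimum(v + radius + 1, grid_size)
        for x in range(lo[0], hi[0]):
            for y in range(lo[1], hi[1]):
                for z in range(lo[2], hi[2]):
                    center = np.array([x, y, z]) + 0.5
                    d = pos - center           # voxel -> light direction
                    n = np.linalg.norm(d)
                    color_tex[x, y, z] += col
                    if n > 1e-6:
                        dir_tex[x, y, z] += d / n
    return color_tex, dir_tex
```

The second stage would then sample both textures at each shaded pixel's world position and evaluate the lighting model from the accumulated color and direction.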

Antioxidant Properties and Quality Characteristics of Dasik Supplemented with Longanae Arillus (용안육 다식의 항산화 활성 및 품질 특성)

  • Yang, Eun Young;Han, Young Sil;Sim, Ki Hyeon
    • The Korean Journal of Food And Nutrition
    • /
    • v.31 no.4
    • /
    • pp.485-494
    • /
    • 2018
  • This study was designed to evaluate the quality characteristics and antioxidant properties of Dasik supplemented with Longanae Arillus powder (added in ratios of 0%, 25%, 50%, 75%, and 100%), a traditional ingredient, to render the product better suited to the modern consumer's taste. As the proportion of Longanae Arillus increased, the moisture content and pH of the Dasik decreased, while the soluble solid content increased (p<0.001). In color, the L and b values decreased and the a value increased with increasing Longanae Arillus content (p<0.001). The mechanical texture of the Dasik, including hardness, adhesiveness, cohesiveness, springiness, and chewiness, increased with the addition of Longanae Arillus (p<0.001). In the sensory evaluation, the 50% addition was preferred in color, flavor, taste, texture, and overall acceptance (p<0.001). Regarding the antioxidant activity of Longanae Arillus Dasik, the total phenolic and flavonoid contents, DPPH radical scavenging activity, reducing power, and superoxide anion scavenging levels increased with the addition of Longanae Arillus (p<0.001). These results suggest that Longanae Arillus is best added at a concentration of 50% during Dasik preparation.

Video-based Stained Glass

  • Kang, Dongwann;Lee, Taemin;Shin, Yong-Hyeon;Seo, Sanghyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2345-2358
    • /
    • 2022
  • This paper presents a method to generate stained-glass animation from video inputs. The method initially segments an input video volume into several regions, considered as fragments of glass, by mean-shift segmentation. However, the segmentation predominantly results in over-segmentation, producing many tiny segments in highly textured areas. In practice, assembling significantly tiny or large glass fragments is avoided to ensure architectural stability in stained-glass manufacturing. Therefore, we use low-frequency components in the segmentation to prevent over-segmentation and subdivide segmented regions that are oversized. The subdivision must be coherent between adjacent frames to prevent temporal artefacts such as flickering and the shower-door effect. To subdivide regions with temporal coherence, we obtain a panoramic image from the segmented regions in the input frames, subdivide it using a weighted Voronoi diagram, and then project the subdivided regions onto the input frames. To render a stained-glass fragment for each coherent region, we determine the best-matching glass fragment for the region from a dataset of real stained-glass fragment images and transfer its color and texture to the region. Finally, applying lead came at the boundaries of the regions in each frame yields a temporally coherent stained-glass animation.
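The weighted-Voronoi subdivision of an oversized region can be sketched in Python/NumPy. Random seed placement and uniform power-diagram weights are assumptions, since the abstract does not specify either:

```python
import numpy as np

def subdivide_region(mask, n_seeds, weights=None, rng=None):
    """Split an oversized glass-fragment region with a weighted
    (power) Voronoi diagram, as done on the panoramic image.

    mask: boolean (H, W) region.  Returns an int label map with -1
    outside the region.  Random seed placement and uniform weights
    are assumptions; the abstract does not specify either.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    seeds = pts[rng.choice(len(pts), size=n_seeds, replace=False)]
    w = np.zeros(n_seeds) if weights is None else np.asarray(weights)
    # power-diagram distance: squared distance minus per-seed weight
    d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1) - w
    labels = np.full(mask.shape, -1, dtype=int)
    labels[ys, xs] = d2.argmin(axis=1)
    return labels
```

Running this on the panoramic image once, then projecting the labels back onto each frame, is what keeps the fragment boundaries stable over time.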

A RENDERING ALGORITHM FOR HYBRID SCENE REPRESENTATION

  • Tien, Yen;Chou, Yun-Fung;Shih, Zen-Chung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.17-22
    • /
    • 2009
  • In this paper, we discuss two fundamental issues of hybrid scene representation: construction and rendering. A hybrid scene consists of triangular meshes and point-set models. Considering the maturity of modeling techniques for triangular meshes, we suggest that generating a point-set model from a triangular mesh may be an easier and more economical way. We improve stratified sampling by introducing the concept of priority. Our method has the flexibility that one may easily change the importance criteria by substituting priority functions. While many works have been devoted to blending the rendering results of points and triangles, our work renders point-set models and triangular meshes individually. We propose a novel way to eliminate depth occlusion artifacts and to texture a point-set model. Finally, we implement our rendering algorithm with the new features of Shader Model 4.0, and it turns out to be easily integrated with existing rendering techniques for triangular meshes.
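The priority-driven mesh-to-points construction described above can be sketched in Python/NumPy. The default uniform priority and area weighting stand in for the paper's pluggable priority functions and are assumptions:

```python
import numpy as np

def sample_point_set(tris, n, priority=None, rng=None):
    """Convert a triangular mesh into a point set, stratified by a
    pluggable priority function.  The default uniform priority and
    the area weighting are assumptions.

    tris: (T, 3, 3) triangle vertex array.  Returns (n, 3) points.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    prio = (np.ones(len(tris)) if priority is None
            else np.array([priority(t) for t in tris]))
    p = areas * prio
    p = p / p.sum()                    # per-triangle sample budget
    idx = rng.choice(len(tris), size=n, p=p)
    # uniform barycentric sampling inside each chosen triangle
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tris[idx]
    return (t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0])
                    + v[:, None] * (t[:, 2] - t[:, 0]))
```

Swapping in a curvature- or silhouette-based `priority` callable changes the importance criteria without touching the sampler, which is the flexibility the abstract claims.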


Traveler Guidance System based on 3D Street Modeling

  • Kim, Seung-Jun;Eom, Seong-Eun;Byun, Sung-Cheal;Yang, See-Moon;Ahn, Byung-Ha
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1187-1190
    • /
    • 2004
  • This paper presents a traveler guidance system that offers 3D street information such as road types, signal light systems, street trees, buildings, etc. We consider the 5x4 road system of Gangnam (in Seoul, Korea) as a test area and reflect the traveler's car-driving situation. A web server is constructed that serves the traveler's driving path by switching 3D animation scenes automatically. For batch processing of the geometric data for 3D graphical street construction, we extracted major street information from an existing GIS database and created new GIS file formats (SMF files), which contain data sections for links, nodes, and facilities. With these files, we can render 3D navigation scenes. A number of vector calculations were performed for geometric consistency, and texture mapping was used for realistic scene generation. Finally, we verified the effectiveness of the service by operating a test scenario, checking whether the traveler's 2D path and 3D navigation are reported exactly after setting a specific departure and destination. This system offers good awareness of streets and serves a useful traveler-guidance role.
