• Title/Summary/Keyword: Real-Time Render


Development of the Real-Time Multiplex Channel Media Player to Heighten the Dramatic Effect of an Advertisement (광고 효과 증대를 위한 실시간 다중 채널 미디어 재생기의 개발)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.1
    • /
    • pp.50-55
    • /
    • 2011
  • This paper describes a method for playing multiplex (multi-channel) media in real time to heighten the effect of various advertisements. The method was implemented in a computing environment with the DirectX SDK, DirectShow, and MS Visual Studio 2008. The media player can show or hide the menu interface used to load media, and the experimental data used with the player are mostly video. To heighten the advertising effect, we added an area with banner-ticker and GIF-animation functions to the player. Every media stream is separated into video and audio by a Splitter, and each stream then passes through a Decoder and a Renderer. The player can also mix video using an alpha channel; for this, it uses the VMR-9 (Video Mixing Renderer 9) of DirectShow. Because the player uses multiplex channels, it can play various media simultaneously, so it can present diverse forms of advertising to users. We tested the media player with the experimental data and compared it with existing media players in terms of the functions relevant to advertising effect.
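The alpha-channel mixing that the player delegates to VMR-9 amounts to standard per-pixel alpha compositing of an overlay (e.g. a banner ticker or GIF animation) over the video frame. A minimal sketch, with illustrative function names and integer RGB pixels:

```python
# Sketch of per-pixel alpha compositing, as used when an overlay layer
# (banner ticker, GIF animation) is mixed over a video frame.

def blend_pixel(overlay, video, alpha):
    """Composite one RGB overlay pixel over a video pixel.
    alpha in [0, 1]: 0 shows only the video, 1 only the overlay."""
    return tuple(round(alpha * o + (1.0 - alpha) * v)
                 for o, v in zip(overlay, video))

def blend_frame(overlay_frame, video_frame, alpha):
    """Blend two equally sized frames (flat lists of RGB tuples)."""
    return [blend_pixel(o, v, alpha)
            for o, v in zip(overlay_frame, video_frame)]
```

In the real player this runs on the GPU inside the VMR-9 mixer rather than per pixel on the CPU; the sketch only shows the arithmetic.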

Real-Time Hierarchical Techniques for Rendering of Translucent Materials and Screen-Space Interpolation (반투명 재질의 렌더링과 화면 보간을 위한 실시간 계층화 알고리즘)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.7 no.1
    • /
    • pp.31-42
    • /
    • 2007
  • In the natural world, most materials, such as skin, marble, and cloth, are translucent; their appearance is smooth and soft compared with metals or mirrors. In this paper, we propose a new GPU-based hierarchical technique that renders translucent materials at interactive rates, based on the dipole diffusion approximation. Incident-light information (position, normal, and irradiance) on the surfaces is stored in 2D textures by rendering from the primary light's view. The huge number of pixel photons is clustered into quad-tree image pyramids. For each pixel, we select clusters (sets of photons) and approximate the multiple subsurface scattering term with those clusters. We also introduce a novel hierarchical screen-space interpolation technique that exploits spatial coherence with early-z culling on the GPU: we build image pyramids of the screen using mipmaps and a pixel shader, where each pyramid pixel stores the position, normal, and spatial similarity of its children pixels. If a pixel's similarity is high, we render that pixel and interpolate it to multiple pixels. Result images show that our method can interactively render deformable translucent objects by approximating hundreds of thousands of photons with only hundreds of clusters, without any preprocessing. The entire process uses an image-space approach on the GPU, so our method is less dependent on scene complexity.
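The dipole diffusion approximation the abstract relies on has a closed-form diffuse reflectance R_d(r), which is what gets summed over photon clusters. A CPU-side sketch of the classical dipole model (Jensen et al.); parameter values and function names are illustrative, and the paper evaluates this per cluster in a shader:

```python
import math

def dipole_rd(r, sigma_s_prime, sigma_a, eta=1.3):
    """Diffuse reflectance R_d(r) of the classical dipole model."""
    sigma_t_prime = sigma_a + sigma_s_prime               # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime           # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)   # effective transport coeff.
    f_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta  # diffuse Fresnel
    a_coef = (1.0 + f_dr) / (1.0 - f_dr)
    z_r = 1.0 / sigma_t_prime                  # depth of the real point source
    z_v = z_r * (1.0 + 4.0 / 3.0 * a_coef)     # height of the virtual source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    return alpha_prime / (4.0 * math.pi) * (
        z_r * (sigma_tr + 1.0 / d_r) * math.exp(-sigma_tr * d_r) / d_r**2 +
        z_v * (sigma_tr + 1.0 / d_v) * math.exp(-sigma_tr * d_v) / d_v**2)

def multiple_scattering(x, clusters, sigma_s_prime, sigma_a):
    """Approximate the multiple-scattering term by summing the dipole
    response over photon clusters given as (position, irradiance, area)."""
    return sum(irradiance * area * dipole_rd(math.dist(x, pos),
                                             sigma_s_prime, sigma_a)
               for pos, irradiance, area in clusters)
```

Clustering photons into a quad-tree simply shrinks the list passed to `multiple_scattering` from hundreds of thousands of entries to hundreds.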


Stereo-To-Multiview Conversion System Using FPGA and GPU Device (FPGA와 GPU를 이용한 스테레오/다시점 변환 시스템)

  • Shin, Hong-Chang;Lee, Jinwhan;Lee, Gwangsoon;Hur, Namho
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.616-626
    • /
    • 2014
  • In this paper, we introduce a real-time stereo-to-multiview conversion system using an FPGA and a GPU. Since the system is based on two different devices, it consists of two major blocks. The first is a disparity estimation block implemented on the FPGA: a disparity map for each view of the stereoscopic video is estimated by DP (dynamic programming)-based stereo matching and then refined by post-processing. The refined disparity maps are transferred to the GPU device through USB 3.0 and PCI Express interfaces, along with the stereoscopic video; these data are used to render an arbitrary number of virtual views in the next block. In the second block, disparity-based view interpolation is performed to generate virtual multi-view video. As a final step, all generated views are re-arranged into a single full-resolution image for presentation on the target autostereoscopic 3D display. All steps of the second block are performed in parallel on the GPU device.
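Disparity-based view interpolation, the core of the second block, shifts each pixel horizontally by a fraction of its disparity. A minimal single-scanline sketch under simplifying assumptions (forward warping only, no hole filling, which the real system would add); names are illustrative:

```python
def render_virtual_view(row, disparity, alpha):
    """Forward-warp one scanline: pixel x moves to x - alpha * d(x).
    alpha = 0 reproduces the input view; intermediate alphas give the
    virtual views in between. Holes remain as None."""
    out = [None] * len(row)
    best = [-1] * len(row)  # keep the nearest (largest-disparity) pixel on conflicts
    for x, (color, d) in enumerate(zip(row, disparity)):
        tx = x - round(alpha * d)
        if 0 <= tx < len(row) and d > best[tx]:
            out[tx], best[tx] = color, d
    return out
```

On the GPU this runs per pixel for each of the N output views in parallel, after which the views are interleaved into the display's full-resolution pattern.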

shRNA Mediated RHOXF1 Silencing Influences Expression of BCL2 but not CASP8 in MCF-7 and MDA-MB-231 Cell Lines

  • Ghafouri-Fard, Soudeh;Abdollahi, Davood Zare;Omrani, Mirdavood;Azizi, Faezeh
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.13 no.11
    • /
    • pp.5865-5869
    • /
    • 2012
  • RHOXF1 has been shown to be expressed in embryonic stem cells, adult germline stem cells, and some cancer cell lines. It has been proposed as a candidate gene encoding a transcription factor that regulates downstream genes with antiapoptotic effects in the human testis, and its expression in cancer cell lines has implied a similar role in tumorigenesis. The human breast cancer cell lines MDA-MB-231 and MCF-7 were cultured in DMEM medium and transfected with a pGFP-V-RS plasmid bearing an RHOXF1-specific shRNA. Quantitative real-time RT-PCR was performed for the RHOXF1, CASP8, BCL2, and HPRT genes. Decreased RHOXF1 expression was confirmed in the cells after transfection. shRNA knockdown of RHOXF1 resulted in significantly decreased BCL2 expression in both cell lines but no change in CASP8 expression. The shRNA targeting RHOXF1 was shown to mediate RHOXF1 gene silencing specifically, so RHOXF1 may mediate transcriptional activation of BCL2 in cancers and may render tumor cells resistant to the apoptotic cell death induced by anticancer therapy. Since shRNA-mediated knockdown of RHOXF1 can induce the apoptotic pathway in cancer cells via BCL2 downregulation, it may have therapeutic utility for human breast cancer.
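The abstract reports qRT-PCR with HPRT as the reference gene. Relative expression in such designs is commonly quantified with the 2^(-ΔΔCt) method; this is an assumption here, since the abstract does not state the quantification method. A sketch of that calculation:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """2^(-ΔΔCt): fold change of a target gene (e.g. BCL2) relative to a
    reference gene (e.g. HPRT), comparing shRNA-treated vs control cells.
    A value of 0.5 means expression halved after knockdown."""
    delta_treated = ct_target_treated - ct_ref_treated    # ΔCt, treated
    delta_control = ct_target_control - ct_ref_control    # ΔCt, control
    return 2.0 ** -(delta_treated - delta_control)        # 2^(-ΔΔCt)
```

Each extra cycle (Ct) represents one doubling of template, which is why the fold change is a power of two in the Ct differences.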

Real-time Soft-shadow using Shadow Atlas (그림자 아틀라스를 이용한 부드러운 그림자 생성 방법)

  • Park, Sun-Yong;Yang, Jin-Suk;Oh, Kyoung-Su
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.2
    • /
    • pp.11-16
    • /
    • 2011
  • In computer graphics, shadows play a very important role, both in themselves in terms of realism and as a cue to inter-object distance. Traditional methods such as shadow mapping and shadow volumes have frequently been used to represent shadows, but their results look unnatural because they assume a point light. An area light, by contrast, can produce soft shadows, but its computation is burdensome because it requires an integral over the whole light-source surface. Many alternatives have been introduced, such as back-projecting the occluder onto the light source to obtain light visibility, or filtering the shadow boundary after calculating the size of the penumbra. However, these also suffer from light bleeding or ringing artifacts caused by low-order approximation, or from low performance. In this paper, we describe a method that mitigates these problems using a shadow atlas.
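The penumbra-size estimate mentioned above is usually derived from similar triangles between the area light, the blocker, and the receiver (the estimate popularized by percentage-closer soft shadows; the paper's atlas method builds on this family of filtering approaches). A sketch, with illustrative names:

```python
def average_blocker_depth(samples, receiver_depth):
    """Average the shadow-map depths that actually occlude the receiver."""
    blockers = [d for d in samples if d < receiver_depth]
    return sum(blockers) / len(blockers) if blockers else None

def penumbra_width(light_size, d_receiver, d_blocker):
    """Similar-triangles estimate: the penumbra grows with the light size
    and with the blocker-receiver gap."""
    return light_size * (d_receiver - d_blocker) / d_blocker
```

The resulting width is then used to size the filter kernel over the shadow map: a wide penumbra gets a large blur, a contact shadow almost none.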

Development of Mobile Volume Visualization System (모바일 볼륨 가시화 시스템 개발)

  • Park, Sang-Hun;Kim, Won-Tae;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.5
    • /
    • pp.286-299
    • /
    • 2006
  • Owing to continuing technical progress in modeling, simulation, and sensor devices, huge volume data sets with very high resolution are now common. In scientific visualization, various interactive real-time techniques on high-performance parallel computers have been proposed to render such large-scale volume data effectively. In this paper, we present a mobile volume visualization system that consists of mobile clients, gateways, and parallel rendering servers. The mobile clients allow users to explore regions of interest adaptively at higher resolution levels and to specify rendering/viewing parameters interactively, which are sent to the parallel rendering servers. The gateways manage requests and responses between the mobile clients and the parallel rendering servers to provide stable service. The parallel rendering servers visualize the specified sub-volume with the rendering context received from the clients and then transfer high-quality final images back. The proposed system lets multiple PDA users simultaneously share commonly interesting parts of a huge volume, rendering contexts, and final images in a CSCW (Computer Supported Cooperative Work) mode.
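The server-side step of cutting a client-requested region of interest out of the full volume, optionally subsampled for a lower resolution level, reduces to index arithmetic over the voxel grid. A minimal sketch (flat z-y-x storage order and the function names are assumptions, not the paper's API):

```python
def extract_subvolume(volume, dims, origin, size, step=1):
    """Extract a region of interest from a flat, z-y-x ordered voxel list.
    step=1 returns full resolution; step=2 subsamples every other voxel,
    i.e. one resolution level coarser."""
    nx, ny, nz = dims
    ox, oy, oz = origin
    sx, sy, sz = size
    out = []
    for z in range(oz, oz + sz, step):
        for y in range(oy, oy + sy, step):
            for x in range(ox, ox + sx, step):
                out.append(volume[(z * ny + y) * nx + x])
    return out
```

A client request then only needs to carry `origin`, `size`, and the resolution level, which keeps the gateway traffic small.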

Design of Special Function Unit for Vectorized SIMD Programmable Unified Shader (벡터화된 SIMD 프로그램어블 통합 셰이더를 위한 특수 함수 유닛 설계)

  • Jung, Jin-Ha;Kim, Kyeong-Seob;Yun, Jeong-Hee;Seo, Jang-Won;Choi, Sang-Bang
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.5
    • /
    • pp.56-70
    • /
    • 2010
  • Realistic 3D graphics require both a rendering technique that generates convincing 2D images and a high-performance graphics processor that can process massive data efficiently. Graphics hardware has evolved rapidly in recent years, enabling high-quality rendering effects that previously could not be processed in real time. Improved shading techniques let us render realistic images, but this still takes much time, so multiple arithmetic units are integrated into a graphics processor to perform efficient floating-point operations on the massive data needed for near-photorealistic images. In this paper, we design and implement a special function unit that supports high-quality 3D computer graphics on a programmable unified shader processor. We evaluated the designed special function unit through functional-level simulation, and measured its hardware resource usage and execution speed by implementing it directly on an FPGA, a Virtex-4 (xc4vlx200).
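Special function units typically evaluate operations like reciprocal, reciprocal square root, and the transcendentals by refining a coarse seed with a fixed number of Newton-Raphson steps. As a software model of that idea (the seed constants and structure here are the textbook reciprocal recipe, not necessarily this paper's design):

```python
import math

def rcp(x, newton_iters=2):
    """Reciprocal 1/x via range reduction + Newton-Raphson, the standard
    SFU recipe: reduce |x| to m in [0.5, 1), seed 1/m with a linear
    approximation, refine with y <- y * (2 - m*y), then undo the scaling."""
    if x == 0:
        raise ZeroDivisionError("rcp(0)")
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))             # |x| = m * 2**e, m in [0.5, 1)
    y = 48.0 / 17.0 - 32.0 / 17.0 * m     # linear seed, relative error <= 1/17
    for _ in range(newton_iters):
        y = y * (2.0 - m * y)             # each step squares the relative error
    return sign * math.ldexp(y, -e)       # 1/x = (1/m) * 2**(-e)
```

Two Newton steps already push the relative error below about 1e-5, which is why hardware SFUs get away with tiny seed tables.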

A Novel Method for Material Rendering and Real Measurement of Thickness Using Ultrasound (초음파를 이용한 실측 두께 측정과 재질 렌더링)

  • Choi, Taeyoung;Chin, Seongah
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.6
    • /
    • pp.190-197
    • /
    • 2014
  • In this paper, we present a method for optical-parameter-based material rendering that measures the thickness of the material using ultrasonic waves. Thickness, along with a material's optical characteristics, is an important element in determining its reflectance and transmittance, and it plays a crucial role in rendering objects more realistically; in studies conducted thus far, measured thickness has been used for rendering. The proposed method is, to our knowledge, the first attempt to render a material using ultrasonic waves to obtain the thickness of a material that cannot be measured by visual assessment. It is implemented by measuring the sound velocity in a reference sample and applying the result to the thickness measurement of other objects with the same characteristics. The measured characteristics of the objects are reflected in the quality of the final rendering, verifying the importance of thickness in rendering.
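The calibrate-then-measure procedure described above follows from pulse-echo geometry: the ultrasonic pulse traverses the thickness twice, so velocity comes from a reference sample of known thickness, and unknown thicknesses follow from echo times. A sketch with illustrative names and units:

```python
def calibrate_velocity(known_thickness_mm, echo_time_us):
    """Sound velocity from a reference sample: the pulse-echo path is
    twice the thickness, so v = 2 * d / t (mm/us)."""
    return 2.0 * known_thickness_mm / echo_time_us

def measure_thickness(velocity_mm_per_us, echo_time_us):
    """Thickness of an object of the same material: d = v * t / 2 (mm)."""
    return velocity_mm_per_us * echo_time_us / 2.0
```

The key assumption, as in the paper, is that the reference sample and the measured objects share the same material, so the calibrated velocity transfers.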

A Hierarchical User Interface for Large 3D Meshes in Mobile Systems (모바일 시스템의 대용량 3차원 메쉬를 위한 계층적 사용자 인터페이스)

  • Park, Jiro;Lee, Haeyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.19 no.1
    • /
    • pp.11-20
    • /
    • 2013
  • This paper introduces a user interface for large 3D meshes on mobile systems, which have limited memory, screen size, and battery power. A large 3D mesh is divided into partitions and simplified at multiple resolutions, so the single large file is transformed into a number of small data files saved on a PC server. Only the small files selected by the user are transmitted hierarchically to the mobile system for 3D browsing and rendering. A 3D preview in a pop-up shows a simplified mesh at the lowest resolution; the next step displays simplified meshes whose resolutions are controlled automatically by user interactions; the last step renders a set of detailed original partitions in a selected range. As a result, our interface enables browsing and display of large 3D meshes on mobile systems through real-time interaction while minimizing the use of mobile system resources. A mobile 3D viewer and a 3D app are also presented to show the utility of the proposed user interface.
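The middle step, where the resolution is controlled automatically by user interaction, needs a policy mapping the current view to a resolution level. One common policy (an illustrative assumption here, not necessarily the paper's rule) halves the detail each time the viewing distance doubles:

```python
import math

def select_resolution(distance, base_distance, max_level):
    """Pick a resolution level for a mesh partition: full detail at or
    inside base_distance, one level coarser per doubling of distance,
    never below level 0 (the coarsest simplification)."""
    if distance <= base_distance:
        return max_level
    return max(0, max_level - int(math.log2(distance / base_distance)))
```

The client then requests only the partition files for the chosen level, which is what keeps memory and bandwidth use on the mobile device bounded.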

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles is a light source. Two 3D textures are used to determine the range of influence of each light dynamically: the first holds light color and the second holds light-direction information. Each frame goes through two steps. The first step, based on a compute shader, updates the particle information required for 3D-texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D textures, and based on these coordinates the first texture accumulates the color sum of the particle lights affecting each voxel, while the second accumulates the sum of the direction vectors from each voxel to those lights. The second step runs in the ordinary rendering pipeline. From the world position of the polygon to be rendered, the exact sampling coordinates of the 3D textures updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to both the 3D-texture size and the game-world size, the pixel's world coordinates are used directly as the sampling coordinates, and lighting is computed from the sampled color and light-direction vector. The 3D textures correspond 1:1 to the actual game world with a minimum unit of 1 m, so at scales smaller than 1 m problems such as staircase artifacts appear due to the resolution limit; interpolation and supersampling during texture sampling alleviate them. Measurements of per-frame rendering time show 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particle lights, and 214 ms (forward) and 104 ms (deferred) with 1,024,766 particle lights.
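The compute-shader accumulation step can be modeled on the CPU as building two per-voxel sums, mirroring the two 3D textures. A deliberately simplified sketch: 1 m voxels with world space equal to texture space (as in the paper), but each light splatted only into its own voxel, whereas the paper covers the light's full range of influence:

```python
import math

def accumulate_particle_lights(lights):
    """Build the two per-voxel sums stored in the paper's 3D textures:
    a color sum and a sum of unit direction vectors toward the lights.
    `lights` is a list of ((x, y, z), (r, g, b)) pairs in world space."""
    color_tex, dir_tex = {}, {}
    for (lx, ly, lz), rgb in lights:
        voxel = (int(lx), int(ly), int(lz))        # 1 m voxel grid
        cx, cy, cz = (c + 0.5 for c in voxel)      # voxel center
        d = (lx - cx, ly - cy, lz - cz)            # voxel-center -> light
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        d = tuple(c / norm for c in d)             # normalize (zero stays zero)
        prev_c = color_tex.get(voxel, (0, 0, 0))
        color_tex[voxel] = tuple(a + b for a, b in zip(prev_c, rgb))
        prev_d = dir_tex.get(voxel, (0.0, 0.0, 0.0))
        dir_tex[voxel] = tuple(a + b for a, b in zip(prev_d, d))
    return color_tex, dir_tex
```

In the second step, a rendered pixel looks up both sums at its world position and shades with the accumulated color and the (summed) light direction, which is what makes the cost independent of the number of lights per pixel.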