• Title/Summary/Keyword: interactive rendering

Requirements Analysis and Design of an HTML5 Based e-book Viewer System Supporting User Interaction (사용자 인터랙션을 지원하는 HTML5 기반 e-book 뷰어 시스템의 요구사항 분석 및 설계)

  • Choi, Jong Myung;Park, Kyung Woo;Oh, Soo Lyul
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.2 / pp.33-40 / 2013
  • E-books have become popular and commonplace over the past decade, and the market is expected to grow much further thanks to the popularity of tablet devices such as the iPad. With the help of these devices, people want to read and experience more interactive, fun, and informative e-book content. To meet those needs, we introduce the requirements of an e-book viewer system that supports user interaction, 3D model viewing, and augmented reality. We also introduce some design issues of the system and its proof-of-concept prototype. We adopt HTML5 as the e-book content format because it already supports content rendering, multimedia, and user interaction. Furthermore, an e-book viewer is easy to implement because the WebKit component already renders HTML5. We also discuss design issues for integrating an augmented reality viewer with the WebKit-based e-book viewer. This paper gives e-book viewer developers and content developers guidelines for new e-book systems.

Real-Time Water Surface Simulation on GPU (GPU기반 실시간 물 표면 시뮬레이션)

  • Sung, Mankyu;Kwon, DeokHo;Lee, JaeSung
    • KIPS Transactions on Software and Data Engineering / v.6 no.12 / pp.581-586 / 2017
  • This paper proposes a GPU-based water surface animation and rendering technique for interactive applications such as games. Many physical phenomena occur on the water surface, including reflection and refraction that depend on the viewing direction. A water surface representation must not only run in real time but also adjust these effects automatically. In our implementation, we capture the reflection and refraction through a render-to-texture technique and then modify the texture coordinates by applying a separate DU/DV map. We also make the ratio between reflection and refraction change automatically based on the Fresnel formula. All proposed methods are implemented using the OpenGL 3D graphics API.
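
  The automatic reflection/refraction ratio described above is commonly computed with Schlick's approximation of the Fresnel formula. A minimal Python sketch of that blend (the paper's shader code is not given; the function names and the water refractive index 1.33 are illustrative assumptions):

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation of Fresnel reflectance; n2 = 1.33 is the
    refractive index of water (an air-to-water interface is assumed)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def blend_water_color(reflection_rgb, refraction_rgb, cos_theta):
    """Mix the two render-to-texture samples by the Fresnel factor, as a
    fragment shader would: grazing views reflect, head-on views refract."""
    f = schlick_fresnel(cos_theta)
    return tuple(f * a + (1.0 - f) * b
                 for a, b in zip(reflection_rgb, refraction_rgb))
```

  At cos θ = 0 (grazing view) the surface acts almost as a mirror, while looking straight down (cos θ = 1) the refraction texture dominates.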

A Study on Real-Time Lightning Simulation for Smart Device (스마트기기 게임에 적합한 실시간 번개 시뮬레이션 연구)

  • Park, SungBae;Oh, GyuHwan
    • Journal of Korea Game Society / v.13 no.4 / pp.35-46 / 2013
  • In this paper, we show a real-time lightning simulation for smart-device games. Our proposed method uses the physically based Dielectric Breakdown Model to approximate real-world lightning paths, and we simplify the algorithm for real-time simulation on smart devices. In addition, the rendering process can render multiple lightning bolts in real time on a smart device. Finally, our lightning supports user interaction. The simulation method will be useful for games that need real-time lightning simulation as a game element in a smart-device environment.
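
  The abstract does not detail the simplified Dielectric Breakdown Model, so as a rough stand-in, the following Python sketch generates a jagged 2D bolt by recursive midpoint displacement, a common cheap real-time approximation of a lightning path; all names and parameters here are illustrative, not the paper's method:

```python
import random

def lightning_path(start, end, depth=5, offset=30.0, seed=42):
    """Recursive midpoint displacement: each pass splits every segment and
    jitters the midpoint horizontally, halving the jitter per level."""
    rng = random.Random(seed)
    pts = [start, end]
    for _ in range(depth):
        nxt = []
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            mx = (x0 + x1) / 2.0 + rng.uniform(-offset, offset)
            my = (y0 + y1) / 2.0
            nxt += [(x0, y0), (mx, my)]
        nxt.append(pts[-1])  # keep the final endpoint unchanged
        pts = nxt
        offset *= 0.5
    return pts
```

  Five subdivision passes turn one segment into 32, which is cheap enough to regenerate every frame on a mobile GPU-less code path.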

Design of Low Cost Real-Time Audience Adaptive Digital Signage using Haar Cascade Facial Measures

  • Lee, Dongwoo;Kim, Daehyun;Lee, Junghoon;Lee, Seungyoun;Hwang, Hyunsuk;Mariappan, Vinayagam;Lee, Minwoo;Cha, Jaesang
    • International Journal of Advanced Culture Technology / v.5 no.1 / pp.51-57 / 2017
  • Digital signage is becoming part of daily life across a wide range of visual advertisement market segments, used in stations, hotels, retail stores, etc. Current digital signage systems on the market generally offer limited user interactivity with static content. In this paper, a new approach is proposed: a computer-vision-based, dynamic, audience-adaptive, cost-effective digital signage system. The proposed design uses a camera attached to the open-source Raspberry Pi platform to enable real-time audience interaction, applying computer vision algorithms to extract facial features of the audience. The real-time facial features are extracted using the Haar Cascade algorithm and used for gender-specific rendering of dynamic digital signage content. The audience facial characterization using Haar Cascade is evaluated on the FERET database with 95% accuracy for gender classification. The proposed system was developed and evaluated with male and female audiences in real-life environments on a camera-embedded Raspberry Pi, with a good level of accuracy.
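
  Haar Cascade detection (as implemented, for example, by OpenCV's `CascadeClassifier`) rests on the integral image, which makes every rectangular Haar-like feature an O(1) lookup. A minimal Python sketch of that primitive (an illustration of the underlying idea, not the paper's implementation):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] holds the sum of all pixels above and
    to the left of (x, y), so any rectangle sum becomes four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum,
    the cheap bright/dark contrast measure evaluated by cascade stages."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

  A cascade evaluates thousands of such features per candidate window, rejecting most windows after only a few stages, which is what keeps detection feasible on hardware like the Raspberry Pi.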

Indirect Illumination Algorithm with Mipmap-based Ray Marching and Denoising (밉맵기반 레이 마칭과 디노이징을 이용한 간접조명 알고리즘)

  • Zhang, Bo;Oh, KyoungSu
    • Journal of Korea Game Society / v.20 no.3 / pp.75-84 / 2020
  • This paper introduces an interactive indirect illumination algorithm that considers indirect visibility. First, a small number of rays are emitted over the hemisphere of the current pixel to obtain the first intersection. If this point is directly illuminated by the light source, its illuminated color is collected. Second, in order to approximate indirect visibility, a 3D ray marching algorithm based on a hierarchical structure is used to accelerate ray-voxel intersection. Third, the indirect images are denoised by edge-avoiding filtering with a local-means replacement method.
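
  The hierarchical ray-marching step can be illustrated with a max-value mipmap over the voxel grid: if a coarse cell's maximum density is zero, a ray can skip the whole block it covers. A minimal NumPy sketch under that assumption (the paper's GPU data layout and traversal loop are not given; names are illustrative):

```python
import numpy as np

def build_max_mipmap(vol):
    """Build a max-value mipmap pyramid over a 3D density volume: level k
    stores the maximum over 2**k-sized blocks, so one coarse lookup tells a
    marching ray whether an entire region is empty."""
    levels = [vol.astype(np.float32)]
    while min(levels[-1].shape) > 1:
        v = levels[-1]
        # pad odd dimensions so we can take 2x2x2 block maxima
        v = np.pad(v, [(0, s % 2) for s in v.shape], mode="edge")
        v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                      v.shape[2] // 2, 2).max(axis=(1, 3, 5))
        levels.append(v)
    return levels

def block_is_empty(levels, level, ix, iy, iz, threshold=0.0):
    """True if the whole 2**level block at (ix, iy, iz) holds no density."""
    return levels[level][ix, iy, iz] <= threshold
```

  During marching, a ray tests the coarsest level first and descends only where a block reports density, which is what accelerates the indirect-visibility rays.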

Research and development of haptic simulator for Dental education using Virtual reality and User motion

  • Lee, Sang-Hyun
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.52-57 / 2018
  • The purpose of this paper is to develop simulations that can be used for virtual education in dentistry. The virtual education is built on clinical training and actual case data for tooth extraction. The goal is to allow dental students to learn the necessary surgical techniques at a time of their choosing, free from the temporal, spatial, and physical limits of the operating room. We develop the content using VR, with an Oculus Rift HMD, an optical outside-in tracking system, Oculus Touch motion controllers, and a headset as input/output devices. In this configuration, the optimization method is applied convergently: when the VR content is operated, content data are extracted from the interaction analysis formed in the VR engine and processed by the content algorithm. The system also computes events and dental operations generated within the 3D engine programming and produces corresponding events through data processing according to the input signal. The visualization information is output to the HMD using the rendering information. In addition, the operating room environment was constructed by studying the lighting and materials of an actual operating room. We applied the ratio of actual space to virtual space, and of characters to actual people, to create a spatial composition closely matching the real environment.

Development of Interactive 3D Volume Visualization Techniques Using Contour Trees (컨투어 트리를 이용한 삼차원 볼륨 영상의 대화형 시각화 기법 개발)

  • Sohn, Bong-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.11 / pp.67-76 / 2011
  • This paper describes the development of interactive visualization techniques, and a program implementing them, that allow us to visualize the structure of volume data and to interactively select and visualize isosurface components using a contour tree. The main characteristics of this technique are an algorithm that draws the contour tree in the 2D plane in a way users can easily understand, and an algorithm that efficiently extracts an isosurface component utilizing the GPU's parallel architecture. The program we developed by implementing these algorithms provides an interactive, contour-tree-based user interface for extracting isosurface components, and visualization that integrates with conventional isosurface and volume rendering techniques. To show the effectiveness of our methods, we applied them to 3D biomedical volume data. The results show that we could interactively select the isosurface components representing a polypeptide chain, a ventricle, and a femur using the user interface based on our contour tree layout method, and extract the isosurface components 3-4x faster than previous methods.

Accelerating GPU-based Volume Ray-casting Using Brick Vertex (브릭 정점을 이용한 GPU 기반 볼륨 광선투사법 가속화)

  • Chae, Su-Pyeong;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society / v.17 no.3 / pp.1-7 / 2011
  • Recently, various methods have been proposed to accelerate GPU-based volume ray-casting. However, they may cause several problems, such as a data-transmission bottleneck between CPU and GPU, additional video memory required for hierarchical structures, and increased processing time whenever the opacity transfer function changes. In this paper, we propose an efficient GPU-based empty-space skipping technique that solves these problems. We store the maximum density of each brick of the volume dataset in a vertex element. We then delete, in the geometry shader, the vertices that the opacity transfer function regards as transparent. The remaining vertices are used to generate bounding boxes of non-transparent areas that help rays traverse efficiently. Although these vertices are independent of the viewing condition, they need to be regenerated when the opacity transfer function changes. Our technique generates opaque vertices fast enough for interactive processing, since the generation stage runs in the GPU pipeline. The rendering results of our algorithm are identical to those of general GPU ray-casting, but the performance can be more than 10 times faster.
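
  The core brick-vertex idea (store each brick's maximum density, then cull the bricks that the opacity transfer function maps to fully transparent) can be sketched on the CPU with NumPy; the actual culling happens in the geometry shader, and all names here are illustrative:

```python
import numpy as np

def brick_max_densities(volume, brick=4):
    """Per-brick maximum density: one scalar per brick, analogous to the
    value the paper stores on a vertex element for the geometry shader."""
    x, y, z = volume.shape
    v = volume[:x - x % brick, :y - y % brick, :z - z % brick]
    return v.reshape(v.shape[0] // brick, brick,
                     v.shape[1] // brick, brick,
                     v.shape[2] // brick, brick).max(axis=(1, 3, 5))

def opaque_bricks(max_density, transfer_opacity):
    """Indices of bricks whose max density maps to nonzero opacity: the
    CPU analogue of culling transparent vertices in the geometry shader."""
    keep = transfer_opacity(max_density) > 0.0
    return np.argwhere(keep)
```

  Only the surviving bricks become bounding-box geometry, so rays skip empty space without a separate hierarchy in video memory.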

Interactive 3D Visualization of Ceilometer Data (운고계 관측자료의 대화형 3차원 시각화)

  • Lee, Junhyeok;Ha, Wan Soo;Kim, Yong-Hyuk;Lee, Kang Hoon
    • Journal of the Korea Computer Graphics Society / v.24 no.2 / pp.21-28 / 2018
  • We present interactive methods for visualizing the cloud height data and the backscatter data collected from ceilometers in a three-dimensional virtual space. Because ceilometer data is high-dimensional, large-scale data with both spatial and temporal information, it is practically impossible to convey all its aspects with static, two-dimensional images. Based on three-dimensional rendering technology, our visualization methods allow the user to observe both the global variations and the local features of three-dimensional representations of ceilometer data from various angles by interactively manipulating the timing and the view. The cloud height data, coupled with terrain data, is visualized as a realistic cloud animation in which many clouds form and dissipate over the terrain. The backscatter data is visualized as a three-dimensional terrain that effectively represents how the amount of backscatter changes with time and altitude. Our system facilitates multivariate analysis of ceilometer data by enabling the user to select the date to be examined, the level of detail of the terrain, and additional data such as the planetary boundary layer height. We demonstrate the usefulness of our methods through various experiments with real ceilometer data collected from 93 sites across the country.

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.11-22 / 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system that uses image processing algorithms. IMToon allows general users to easily and efficiently produce cartoon frames from images. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. Image-based cartoon shading takes the user's images of the desired scenes, separates the brightness information from the color model of the input images, simplifies it into a shading range with the desired number of steps, and recreates the images in cartoon style. The final cartoon-style images are then created through the outline drawing step, in which outlines obtained through edge detection are applied to the shaded images. The interactive story editor is used to enter speech balloons and captions in a dialog structure, creating a finished cartoon scene that delivers a story, as in a webtoon or comic book. In addition, the cartoon effector is extended from still images to videos, so it can be applied to both. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
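
  The cartoon effector's two steps, quantizing brightness into a few flat shading bands and then overlaying detected edges as outlines, can be sketched in grayscale with NumPy (the paper's exact color model and filters are not specified; the band count and edge threshold here are illustrative):

```python
import numpy as np

def cartoonize(gray, levels=4):
    """Posterize brightness into `levels` flat bands, then draw black
    outlines where the finite-difference gradient magnitude is large."""
    g = gray.astype(np.float32)
    # shading step: map [0, 255] into a few flat bands at band centers
    band = 256.0 / levels
    shaded = (np.floor(g / band) + 0.5) * band
    # outline step: crude edge detection from finite differences
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:] = np.abs(np.diff(g, axis=1))
    gy[1:, :] = np.abs(np.diff(g, axis=0))
    edges = (gx + gy) > 40.0
    out = shaded.copy()
    out[edges] = 0.0  # outlines in black
    return out
```

  Applying the same filter per frame extends the effect from still images to video, as the abstract describes.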