• Title/Summary/Keyword: 물리 기반 렌더링 (physically based rendering)

Search results: 40

A Study of 3D Sound Modeling based on Geometric Acoustics Techniques for Virtual Reality (가상현실 환경에서 기하학적 음향 기술 기반의 3차원 사운드 모델링 기술에 관한 연구)

  • Kim, Cheong Ghil
    • Journal of Satellite, Information and Communications, v.11 no.4, pp.102-106, 2016
  • With the popularity of smartphones and the help of high-speed wireless communication technology, high-quality multimedia content has become common on mobile devices. In particular, the release of the Oculus Rift opened a new era of virtual reality technology in the consumer market. At the same time, 3D audio technology, currently used to make computer games more realistic, will soon be applied to the next generation of mobile phones and is expected to offer an experience even more expansive than its visual counterpart. This paper surveys concepts, algorithms, and systems for modeling 3D sound in virtual environment applications. To this end, we first introduce key design principles for audio rendering based on physics-based geometric algorithms and multichannel technologies, and then present an audio rendering pipeline for a scene-graph-based virtual reality system together with a hardware architecture for modeling sound propagation.
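
A minimal sketch of the kind of physics-based geometric algorithm such audio pipelines build on is the image-source method for specular reflections. The walls, absorption values, and attenuation model below are illustrative assumptions, not taken from the surveyed paper.

```python
# Minimal first-order image-source sketch for geometric acoustics.
# Illustrative only: the wall list, speed of sound, and attenuation model
# are assumptions, not the surveyed paper's pipeline.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def mirror_point(p, plane_point, plane_normal):
    """Reflect point p across the plane defined by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def first_order_paths(source, listener, walls):
    """Return (delay_s, gain) for the direct path and one reflection per wall."""
    paths = []
    d = np.linalg.norm(listener - source)
    paths.append((d / SPEED_OF_SOUND, 1.0 / max(d, 1e-6)))          # direct path
    for plane_point, plane_normal, absorption in walls:
        img = mirror_point(source, plane_point, plane_normal)       # image source
        d = np.linalg.norm(listener - img)                          # reflected path length
        gain = (1.0 - absorption) / max(d, 1e-6)                    # 1/r spreading * wall loss
        paths.append((d / SPEED_OF_SOUND, gain))
    return paths

if __name__ == "__main__":
    walls = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 0.2),   # floor
             (np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0]), 0.4)]  # ceiling
    for delay, gain in first_order_paths(np.array([1.0, 1.0, 1.5]),
                                         np.array([4.0, 2.0, 1.5]), walls):
        print(f"delay = {delay * 1000:.2f} ms, gain = {gain:.3f}")
```

Each (delay, gain) pair would feed a delay line and attenuation stage in the audio rendering pipeline; higher-order reflections repeat the mirroring recursively.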

Virtual pencil and airbrush rendering algorithm using particle patch (입자 패치 기반 가상 연필 및 에어브러시 가시화 알고리즘)

  • Lee, Hye Rin;Oh, Geon;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society, v.24 no.3, pp.101-109, 2018
  • Recent improvements in virtual reality and augmented reality technologies have enabled new applications such as virtual study rooms and virtual architecture studios. Such virtual worlds require freehand drawing, for example writing out formulas or sketching building blueprints. Because users walk around inside the virtual world, the viewpoint changes frequently, and objects are often viewed from both near and far distances. Traditional drawing methods that use a fixed-size image as the drawing unit do not produce acceptable results, because they generate blurred and jagged strokes as the viewing distance varies. We propose a novel method that is robust in environments with frequent magnification and minification, such as virtual reality worlds. We implemented our algorithm on both two-dimensional and three-dimensional devices. The algorithm produces no jagged or blurred artifacts regardless of the scaling factor.

An Implementation Method of Virtual Environment Physical Properties (가상물체의 물리적 속성 구현 방법)

  • Im, Chang-Hyuck;Lee, Min-Geun;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society, v.13 no.1, pp.25-32, 2007
  • Computer graphics technology has advanced to the point where almost any object can be represented on a computer display. However, because computer displays have finite resolution, the variety of objects that can be realistically represented together in the same view is restricted by differences in their relative size. In addition, current computer graphics technology cannot render objects according to their physical properties expressed in real length units. To address these problems, we define a method that allows objects to be described using real-world physical property units, such as metric units, in a computer graphics system, and we develop a 3D browser based on X3D that implements the concept of relative proportion properties.
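
As a rough illustration of describing objects with real-world metric sizes while preserving their relative proportions on a finite display, the following sketch maps metric lengths into scene units; the data structure and scaling policy are assumptions, not the paper's X3D extension.

```python
# Sketch: objects described in real metric units, mapped into a shared scene
# scale that preserves relative proportions. Field names and the scaling
# policy are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    name: str
    length_m: float  # real-world length in meters

def to_scene_units(objects, scene_extent=10.0):
    """Scale all objects so the largest spans `scene_extent` scene units,
    keeping every object's size relative to the others."""
    largest = max(obj.length_m for obj in objects)
    scale = scene_extent / largest
    return {obj.name: obj.length_m * scale for obj in objects}

if __name__ == "__main__":
    scene = to_scene_units([PhysicalObject("desk", 1.6),
                            PhysicalObject("pencil", 0.19),
                            PhysicalObject("room", 8.0)])
    print(scene)  # relative proportions of the real-unit sizes are preserved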

Vision-Based Haptic Interaction Method for Telemanipulation: Macro and Micro Applications (원격조작을 위한 영상정보 기반의 햅틱인터렉션 방법: 매크로 및 마이크로 시스템 응용)

  • Kim, Jung-Sik;Kim, Jung
    • Proceedings of the KSME Conference, 2008.11a, pp.1594-1599, 2008
  • Haptic rendering is a process that provides force feedback during interactions between a user and an object. This paper presents a haptic rendering technique for a telemanipulation system of deformable objects using image processing and physically based modeling techniques. The interaction forces between an instrument driven by a haptic device and a deformable object are inferred in real time from a continuum mechanics model of the object, which consists of a boundary element model and a priori knowledge of the object's mechanical properties. Macro- and micro-scale experimental systems, each equipped with a telemanipulation setup and a commercial haptic display, were developed and tested using silicone (macro scale) and zebrafish embryos (micro scale). The experimental results showed the effectiveness of the algorithm at different scales: both experimental systems, running the same algorithm, provided haptic feedback regardless of the system scale.
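
A full boundary element formulation is beyond a short example, but the final step the abstract describes, inferring a feedback force from an observed boundary displacement through a precomputed linear stiffness relation, can be sketched as follows. The stiffness values and function names are placeholders, not the authors' model.

```python
# Hedged sketch: inferring a haptic feedback force from an observed surface
# displacement via a precomputed linear stiffness relation. Placeholder values;
# a real system would assemble this relation from a boundary element model.
import numpy as np

# Precomputed 3x3 stiffness relating tool-tip displacement (m) to force (N).
K_TOOL = np.diag([120.0, 120.0, 200.0])  # N/m, illustrative only

def haptic_force(contact_point_now, contact_point_rest):
    """Estimate the force to render on the haptic device from the displacement
    of the tracked contact point (e.g., measured by image processing)."""
    u = np.asarray(contact_point_now) - np.asarray(contact_point_rest)
    return K_TOOL @ u  # linear-elastic force estimate

if __name__ == "__main__":
    f = haptic_force([0.002, 0.000, -0.001], [0.0, 0.0, 0.0])
    print("force (N):", f)
```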

Key-Frame Based Real-Time Fluid Simulations (키-프레임 기반 실시간 유체 시뮬레이션)

  • Ryu, Ji-Hyun;Park, Sang-Hun
    • Journal of Korea Multimedia Society, v.9 no.11, pp.1515-1528, 2006
  • Systems for physically based fluid animation have developed rapidly in the visual effects industry and can produce very high quality images. However, in real-time application fields such as computer games, simulation speed is a more critical issue than image quality. This paper presents a real-time method for animating fluid using the programmable graphics pipeline. We show that, once two key-frames are given, the technique can interactively generate a sequence of images that changes from the source key-frame to the target.
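
The simplest reading of key-frame-driven fluid animation is an interpolation between two given fields; the sketch below shows a CPU-side linear blend of two density grids as a stand-in for the paper's GPU implementation, with the blending schedule as an assumption.

```python
# Minimal CPU sketch of key-frame-driven fluid animation: blending a source
# density field toward a target key-frame over time. A simplified stand-in for
# the paper's GPU pipeline; the linear schedule is an assumption.
import numpy as np

def keyframe_blend(src, dst, t):
    """Linearly interpolate two density grids at normalized time t in [0, 1]."""
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * src + t * dst

if __name__ == "__main__":
    src = np.zeros((64, 64), dtype=np.float32)
    dst = np.zeros((64, 64), dtype=np.float32)
    src[16:32, 16:32] = 1.0   # source key-frame: a block of dense fluid
    dst[40:56, 40:56] = 1.0   # target key-frame: fluid moved elsewhere
    for frame, t in enumerate(np.linspace(0.0, 1.0, 5)):
        field = keyframe_blend(src, dst, t)
        print(f"frame {frame}: total density = {field.sum():.1f}")
```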

Augmented Reality Based Tangible Interface For Digital Lighting of CAID System (CAID 시스템의 디지털 라이팅을 위한 증강 현실 기반의 실체적 인터페이스에 관한 연구)

  • Hwang, Jung-Ah;Nam, Tek-Jin
    • Archives of Design Research, v.20 no.3 s.71, pp.119-128, 2007
  • With the development of digital technologies, CAID has become an essential part of the industrial design process. Creating photorealistic images from a virtual scene with 3D models is one of the specialized tasks for CAID users. This task requires a complex interface for setting the positions and parameters of the camera and lights to obtain optimal rendering results. However, the user interfaces of existing CAID tools are not simple for designers, because the task is mostly accomplished in a parameter-setting dialogue window. This research addresses these interface issues, in particular those related to lighting, by developing and evaluating TLS (Tangible Lighting Studio), which uses Augmented Reality and a Tangible User Interface. The interface for positioning objects and setting parameters becomes tangible and is distributed in the workspace to support a more intuitive rendering task. TLS consists of markers, a physical controller, and a see-through HMD (Head Mounted Display). The user can directly control the lighting parameters in the AR workspace. In the evaluation experiment, TLS provided higher effectiveness, efficiency, and user satisfaction than the existing GUI (Graphical User Interface) method. The application of TLS is expected to extend to photography education and architecture simulation.

Real-Time Water Surface Simulation on GPU (GPU기반 실시간 물 표면 시뮬레이션)

  • Sung, Mankyu;Kwon, DeokHo;Lee, JaeSung
    • KIPS Transactions on Software and Data Engineering, v.6 no.12, pp.581-586, 2017
  • This paper proposes a GPU-based water surface animation and rendering technique for interactive applications such as games. Many physical phenomena occur on the water surface, including reflection and refraction that depend on the viewing direction. Representing the water surface therefore requires not only rendering these effects in real time but also adjusting them automatically. In our implementation, reflection and refraction are captured with a render-to-texture technique, and the texture coordinates are then perturbed by applying a separate DU/DV map. In addition, the ratio between reflection and refraction is adjusted automatically based on the Fresnel formula. All proposed methods are implemented using the OpenGL 3D graphics API.
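
The Fresnel-based blend between the reflection and refraction textures can be illustrated with Schlick's approximation; in the paper this logic runs inside an OpenGL shader, whereas the sketch below is plain Python with assumed refractive indices.

```python
# Sketch of the Fresnel-driven blend between reflected and refracted colors,
# using Schlick's approximation. Plain Python for illustration; the paper
# implements this per-fragment in OpenGL.
import numpy as np

def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Approximate reflectance at an air-water interface for a view angle."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def shade_water(reflection_rgb, refraction_rgb, view_dir, normal):
    """Blend the reflected and refracted colors by the Fresnel factor."""
    cos_theta = max(float(np.dot(view_dir, normal)), 0.0)
    f = schlick_fresnel(cos_theta)
    return f * np.asarray(reflection_rgb) + (1.0 - f) * np.asarray(refraction_rgb)

if __name__ == "__main__":
    # Grazing view -> mostly reflection; top-down view -> mostly refraction.
    for v in ([0.995, 0.1, 0.0], [0.0, 1.0, 0.0]):
        view = np.array(v) / np.linalg.norm(v)
        print(shade_water([0.6, 0.7, 0.8], [0.0, 0.2, 0.3],
                          view, np.array([0.0, 1.0, 0.0])))
```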

B-spline Volume BRDF Representation and Application in Physically-based Rendering (물리기반 렌더링에서의 비스플라인 볼륨 BRDF 표현과 응용)

  • Lee, Joo-Haeng;Park, Hyung-Jun
    • Korean Journal of Computational Design and Engineering, v.13 no.6, pp.469-477, 2008
  • Physically based rendering is an image synthesis technique based on simulating the physical interactions between light and surface materials. Since the generated images are highly photorealistic, physically based rendering has become an indispensable tool in advanced design visualization for manufacturing and architecture as well as in film VFX and animation. The BRDF (bidirectional reflectance distribution function) is especially critical for realistic visualization of materials, since it models how incoming light is reflected at a surface in terms of intensity and outgoing angles. In this paper, we introduce techniques to represent BRDFs as B-spline volumes and to use them in physically based rendering. We show that the B-spline volume BRDF (BVB) representation is suitable for measured BRDFs because of its compact size and lack of quality loss in rendering. Moreover, various CAGD techniques, such as refinement and blending, can be applied to B-spline volume BRDFs for further control.
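
One way to make the idea concrete is to treat a tabulated BRDF as a trivariate grid and evaluate it with cubic B-spline interpolation; the sketch below uses scipy for the evaluation and a toy Phong-like lobe as data, so the parameterization and grid are assumptions rather than the paper's CAGD representation.

```python
# Sketch: evaluating a tabulated BRDF as a trivariate (cubic B-spline) volume.
# The (theta_h, theta_d, phi_d) grid and the toy lobe are assumptions; the paper
# builds a proper B-spline volume from measured BRDF data.
import numpy as np
from scipy.ndimage import map_coordinates

# Toy BRDF sampled on a regular grid over (theta_h, theta_d) in [0, pi/2] and phi_d in [0, pi].
N = 16
th_h = np.linspace(0.0, np.pi / 2, N)
grid = np.cos(th_h)[:, None, None] ** 8 * np.ones((N, N, N))  # placeholder specular lobe

def eval_brdf(theta_h, theta_d, phi_d):
    """Cubic B-spline interpolation of the BRDF volume at one direction pair."""
    coords = np.array([[theta_h / (np.pi / 2) * (N - 1)],
                       [theta_d / (np.pi / 2) * (N - 1)],
                       [phi_d / np.pi * (N - 1)]])
    return float(map_coordinates(grid, coords, order=3, mode='nearest')[0])

if __name__ == "__main__":
    print(eval_brdf(0.1, 0.3, 1.0))   # near-specular half angle -> high value
    print(eval_brdf(1.2, 0.3, 1.0))   # grazing half angle -> low value
```

A renderer would call such an evaluation inside its shading loop; compactness comes from storing only the spline control values instead of dense measured samples.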

QoS Support on Real-Time Image Based Virtual Reality using Active Network Technology in Heterogeneous Networks (이기종 네트워크에서 능동 네트워크를 이용한 실시간 영상기반 가상환경 QoS 지원)

  • 박정민;박용진;박종일;원유집;지정훈
    • Proceedings of the Korean Information Science Society Conference, 2002.10e, pp.334-336, 2002
  • This paper discusses a virtual environment system built on active network technology. When real-time video is used in a virtual environment, the network load of transmitting large volumes of real-time image data and the system load of rendering that data are known problems. The proposed system solves these problems by applying active network technology so that intermediate routers in the network dynamically change the frame rate of the real-time image data. It adopts a transmission scheme based on a perceptual model that takes into account the physical distance between objects in the virtual environment, and the routers also manage the network bandwidth information of the systems subscribed to the multicast group so that real-time image data is delivered in a form suited to each system. This increases the sense of immersion in the virtual environment while reducing the load on the network. Such a system can very effectively support virtual environments that must transmit large volumes of image data in real time.
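
The router-side decision the abstract describes, choosing a per-receiver frame rate from perceptual distance and available bandwidth, might look roughly like the sketch below; the thresholds, rate table, and frame-size constant are assumptions, not values from the paper.

```python
# Hedged sketch of per-receiver frame-rate adaptation at an intermediate router:
# far-away objects tolerate lower frame rates, and the rate is capped by the
# receiver's bandwidth. All constants are illustrative assumptions.
def target_frame_rate(distance_m, bandwidth_kbps,
                      max_fps=30, bits_per_frame=200_000):
    # Perceptual model: reduce frame rate with distance to the viewed object.
    if distance_m < 5.0:
        fps = max_fps
    elif distance_m < 20.0:
        fps = max_fps // 2
    else:
        fps = max_fps // 4
    # Cap by the receiver's bandwidth so the multicast branch is not overloaded.
    fps_bandwidth_cap = int(bandwidth_kbps * 1000 // bits_per_frame)
    return max(1, min(fps, fps_bandwidth_cap))

if __name__ == "__main__":
    print(target_frame_rate(distance_m=3.0, bandwidth_kbps=8000))   # near object, fast link
    print(target_frame_rate(distance_m=25.0, bandwidth_kbps=1500))  # far object, slow link
```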

Airflow visualization and an interactive method for segmentation of 3D nasal airway (상호작용 기반 3차원 비강 모델 분할 및 가시화)

  • Seo, An-Na;Heo, Go-Eun;Kim, S.K.;Kim, Jee-In
    • Proceedings of the Korean Information Science Society Conference, 2012.06c, pp.320-322, 2012
  • Because of the complex geometry inside the nose, segmenting the nasal airway remains difficult. This paper proposes an interactive, semi-automatic method for segmenting and visualizing the nasal airway in 3D space for nasal airflow velocimetry and nasal surgery planning. The proposed method segments the nasal cavity by applying an ROI (Region of Interest) together with a multi-seed 3D region growing (MS3RG) technique, and the segmented region can be inspected intuitively in 3D space using volume rendering. The segmented 3D nasal cavity model can also be fabricated as a physical model with a 3D printer for airflow experiments. For a CT dataset (512*512*175), the semi-automatic segmentation method completes within at most three minutes a task that took about five hours with manual segmentation, and the result can be used for numerical analysis and physical experiments.
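
A minimal version of multi-seed 3D region growing on a CT-like volume, restricted to an ROI, could look like the sketch below; the connectivity, intensity thresholds, and toy volume are assumptions, not the paper's MS3RG implementation.

```python
# Sketch of multi-seed 3D region growing on a CT-like volume, optionally
# restricted to an ROI mask. Thresholds, 6-connectivity, and the toy volume
# are assumptions for illustration only.
from collections import deque
import numpy as np

def multi_seed_region_grow(volume, seeds, lo, hi, roi=None):
    """Grow a binary mask from several seed voxels, accepting 6-connected
    neighbors whose intensity lies in [lo, hi] and inside the optional ROI."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        mask[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(n, volume.shape)):
                continue
            if mask[n] or not (lo <= volume[n] <= hi):
                continue
            if roi is not None and not roi[n]:
                continue
            mask[n] = True
            queue.append(n)
    return mask

if __name__ == "__main__":
    vol = np.full((32, 32, 32), 100.0)
    vol[8:24, 8:24, 8:24] = -900.0                 # air-filled cavity (toy HU values)
    airway = multi_seed_region_grow(vol, seeds=[(16, 16, 16), (10, 10, 10)],
                                    lo=-1100.0, hi=-400.0)
    print("segmented voxels:", int(airway.sum()))
```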