• Title/Summary/Keyword: sound rendering


Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.12, pp.5936-5954, 2018
  • Eight of the world's ten largest technology companies have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has driven remarkable growth in mobile VR technology in both academia and industry. Accordingly, reproducing acoustic cues faithfully is increasingly important, because auditory cues can enhance the perception of a complicated surrounding environment in VR even without the visual system. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound rendering pipeline that can reproduce reality-like sound but requires high computation cost. The proposed system is verified on an FPGA platform, with special focus on hardware architectural designs for low power and real-time operation. The results show that the proposed system on an FPGA can downmix up to 5 sources at a real-time rate (52 FPS) with a low power consumption of 382 mW. Furthermore, the 3D sound generated by the proposed system was verified via user evaluation, with satisfactory sound-quality results.
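The core of such a downmixer is combining several spatialized mono sources into one stereo output. A minimal software sketch of that idea, using simple constant-power panning (this is an illustrative example only, not the paper's FPGA pipeline; the function name and panning scheme are assumptions):

```python
import math

def downmix_stereo(sources, azimuths_deg):
    """Mix mono source frames into one stereo frame using constant-power
    panning. `sources` is a list of equal-length sample lists; `azimuths_deg`
    gives each source's horizontal angle (-90 = full left, +90 = full right).
    Illustrative sketch only, not the paper's hardware architecture."""
    n = len(sources[0])
    left = [0.0] * n
    right = [0.0] * n
    for samples, az in zip(sources, azimuths_deg):
        # Map azimuth to a pan angle in [0, pi/2] for constant-power gains.
        theta = (az + 90.0) / 180.0 * (math.pi / 2.0)
        gl, gr = math.cos(theta), math.sin(theta)
        for i, s in enumerate(samples):
            left[i] += gl * s
            right[i] += gr * s
    return left, right
```

A hardware version would pipeline the per-source gain multiply-accumulate, which is where the paper's low-power design effort goes.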

'EVE-SoundTM' Toolkit for Interactive Sound in Virtual Environment (가상환경의 인터랙티브 사운드를 위한 'EVE-SoundTM' 툴킷)

  • Nam, Yang-Hee;Sung, Suk-Jeong
    • The KIPS Transactions:PartB, v.14B no.4, pp.273-280, 2007
  • This paper presents a new 3D sound toolkit called EVE-Sound™ that consists of a pre-processing tool for environment simplification that preserves sound effects, and a 3D sound API for real-time rendering. It is designed to let users interact with complex 3D virtual environments through audio-visual modalities. The EVE-Sound™ toolkit serves two different types of users: high-level programmers who need an easy-to-use sound API for developing realistic, 3D audio-visually rendered applications, and researchers in the 3D sound field who need to experiment with or develop new algorithms without rewriting all the required code from scratch. An interactive virtual environment application was created with a sound engine built using the EVE-Sound™ toolkit; it demonstrates real-time audio-visual rendering performance and the applicability of EVE-Sound™ for building interactive applications with complex 3D environments.

Cache simulation for measuring cache performance suitable for sound rendering (사운드 렌더링에 적합한 캐시 성능 측정을 위한 캐시 시뮬레이션)

  • Joo, Yejong;Hong, Dukki;Chung, Woonam;Park, Woo-Chan
    • Journal of the Korea Computer Graphics Society, v.23 no.3, pp.123-133, 2017
  • Cache performance is an important factor in hardware systems. We run a cache simulation to analyze cache performance as it applies to sound rendering. In addition, we introduce ray-tracing-based hardware models used in the geometric method, along with prior studies on improving cache performance. The cache simulation is performed under various conditions of cache size, associativity (ways), and block size, and shows how each condition influences the cache hit rate. We compare the simulation results with actual hardware performance to analyze which cache configurations suit sound rendering.
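The kind of parameter sweep described (size / way / block versus hit rate) can be reproduced with a very small trace-driven simulator. A sketch of a set-associative LRU cache model, offered as a generic illustration rather than the paper's actual simulator:

```python
from collections import OrderedDict

def simulate_cache(addresses, cache_size, block_size, ways):
    """Return the hit rate of a set-associative LRU cache over an address
    trace. Sizes are in bytes. Generic illustrative model, not the
    simulator used in the paper."""
    num_sets = cache_size // (block_size * ways)
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in addresses:
        block = addr // block_size
        s = sets[block % num_sets]
        if block in s:
            hits += 1
            s.move_to_end(block)       # refresh LRU position on a hit
        else:
            if len(s) >= ways:
                s.popitem(last=False)  # evict the least-recently-used block
            s[block] = True
    return hits / len(addresses)
```

Sweeping `cache_size`, `ways`, and `block_size` over a recorded address trace from the sound-rendering workload yields the hit-rate curves such a study compares against real hardware.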

Adaptive depth control algorithm for sound tracing (사운드 트레이싱을 위한 적응형 깊이 조절 알고리즘)

  • Kim, Eunjae;Yun, Juwon;Chung, Woonam;Kim, Youngsik;Park, Woo-Chan
    • Journal of the Korea Computer Graphics Society, v.24 no.5, pp.21-30, 2018
  • In this paper, we use sound tracing, a 3D sound technology based on ray tracing that uses a geometric method, as an auditory technique to enhance realism. Sound tracing is costly in the sound propagation stage. To reduce the sound propagation cost, we propose a method that computes the average effective frame number over previous frames using the frame-coherence property and adjusts the traversal depth per space based on that number. Experimental results show that when the sound source is indoors, compared with the case without depth control, the path loss rate is 0.72%, the traversal and intersection-test computation is reduced by 85.13%, and the frame rate is increased by 4.48%. When the sound source is outdoors, the path loss is 0%, the traversal and intersection-test computation is reduced by 25.01%, and the frame rate is increased by 7.85%. This increases rendering performance while minimizing the path loss rate.
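The frame-coherence idea above can be sketched as a simple heuristic: track, per past frame, the deepest reflection order that actually produced a valid path, and choose the next frame's traversal depth from a recent average. This is a hypothetical reconstruction of the depth-control idea; the paper's exact rule may differ:

```python
import math

def adaptive_depth(effective_depths, window=8, max_depth=3):
    """Pick the propagation depth for the next frame from the average
    effective depth of recent frames (frame-coherence heuristic).
    `effective_depths` holds, per past frame, the deepest reflection
    order that produced a valid path. Hypothetical sketch only."""
    if not effective_depths:
        return max_depth  # no history yet: search the full depth
    recent = effective_depths[-window:]
    avg = sum(recent) / len(recent)
    # Round up so the chosen depth does not undershoot the typical
    # effective depth, which would lose valid propagation paths.
    return min(max_depth, math.ceil(avg))
```

Capping the depth this way is what cuts the traversal and intersection-test work while keeping the path loss rate low.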

A Multichannel System for Virtual 3-D Sound Rendering (입체음장재현을 위한 멀티채널시스템)

  • Lee Chanjoo;Park Youngjin;Oh Si-Hwan;Kim Yoonsun
    • Proceedings of the Acoustical Society of Korea Conference, spring, pp.223-226, 2000
  • A multichannel system for virtual 3-D sound rendering is currently under development. Robust sound image formation and smooth real-time interactivity are the main design points. The system uses the VBAP algorithm for virtual sound image positioning, and the overall system settings can be easily configured. We developed RIMA, a software program that drives the system. At this stage, it can position virtual sound images at arbitrary positions in three-dimensional space. The characteristics of the system are discussed. The system has been applied to the KAIST Bicycle Simulator to generate a virtual sound field.
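VBAP (vector base amplitude panning) positions a virtual source by solving for the gains of the loudspeaker base vectors that reconstruct the source direction. A textbook 2-D pairwise sketch of the idea the paper relies on (not the RIMA implementation):

```python
import math

def vbap_pair_gains(src_az_deg, spk1_az_deg, spk2_az_deg):
    """2-D pairwise VBAP: solve g1*l1 + g2*l2 = p for the active
    loudspeaker pair, then normalize to constant power.
    Textbook sketch, not the system's actual code."""
    def unit(az):
        a = math.radians(az)
        return (math.cos(a), math.sin(a))
    p = unit(src_az_deg)                       # source direction
    l1, l2 = unit(spk1_az_deg), unit(spk2_az_deg)  # speaker base vectors
    # Invert the 2x2 loudspeaker base matrix [l1 l2] analytically.
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

In full 3-D VBAP the same solve runs over loudspeaker triplets with a 3x3 base matrix, which is how images are placed at arbitrary positions in space.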


A study on multichannel 3D sound rendering

  • Kim, Sun-Min;Park, Young-Jin
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2001.10a, pp.117.2-117, 2001
  • In this paper, 3D sound rendering using multichannel speakers is studied. Virtual 3D sound technology has mainly been researched with binaural systems. Conventional binaural sound systems reproduce the desired sound at two arbitrary points using two speakers in 3-D space. However, it is hard to localize a virtual source at back/front and top/below positions because each individual's HRTF is unique, like a fingerprint. Above all, the HRTF is highly sensitive to elevation changes. Multichannel sound systems have mainly been used to reproduce a sound field picked up over a certain volume rather than at specific points. Moreover, multichannel speakers arranged in 3-D space produce a much better performance of ...


Real-Time 3D Sound Rendering System Implementation for Virtual Reality Environment (VR 환경을 위한 실시간 음장 재현 시스템 구현)

  • Chae, Soo-Bok;Bhang, Seung-Beum;Hwang, Shin;Ko, Hee-Dong;Kim, Soon-Hyob
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2000.11a, pp.222-227, 2000
  • This paper describes the implementation of a sound field reproduction system for rendering 3D sound in real time in a VR system. The sound field can be controlled using two speakers or headphones. The sound field control simulates the sound field using a ray tracing technique, extracts the sound field parameters of the virtual space, and applies them to the sound source to render sound field effects in real time. The system was implemented on a machine with a Pentium-II 333 MHz, 128 MB RAM, and a SoundBlaster sound card. The listener ultimately experiences the 3D sound field through two speakers or headphones.


A Study on a Real-Time 3D Sound Rendering System for Virtual Reality Environment (VR 환경에 적합한 실시간 음장 재현 시스템에 관한 연구)

  • Chae SooBok;Bhang SungBeum;Hwang Shin;Ko HeeDong;Kim SoonHyob
    • Proceedings of the Acoustical Society of Korea Conference, autumn, pp.353-356, 2000
  • This paper describes the implementation of a sound field reproduction system for rendering 3D sound in real time in a VR system. The sound field can be controlled using two speakers or headphones. The sound field control simulates the sound field using a ray tracing technique, extracts the sound field parameters of the virtual space, and applies them to the sound source to render sound field effects in real time. The system was implemented on a machine with a Pentium-II 333 MHz, 128 MB RAM, and a Sound Blaster sound card. The listener ultimately experiences the 3D sound field through two speakers or headphones.


Ambisonic Rendering for Diffuse Sound Field Simulations based on Geometrical Acoustics (기하음향 기반 확산 음장 시뮬레이션을 위한 앰비소닉 렌더링 기법)

  • Pilsun Eu;Franz Zotter;Jae-hyoun Yoo;Jung-Woo Choi
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.11a, pp.26-29, 2022
  • The diffuse sound field plays a crucial role in the perceptual quality of the auralization of virtual scenes. Diffuse Rain is a geometrical scattering model that enables the simulation of diffuse fields compatible with acoustic ray tracing, but it is often computationally expensive. We develop a novel method that reduces this cost by rendering the large number of Diffuse Rain contributions in the Ambisonics format. The proposed method is evaluated in a shoebox-scene simulation run in MATLAB, against a more faithful method that renders the Diffuse Rain data ray by ray. The EDC and IACC of the binaural output show that the simulated diffuse field can be rendered in Ambisonics with only minimal deviations in energy decay and spatial quality, even with 1st-order Ambisonics.
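The cost saving comes from the fact that many scattered contributions can be summed channel-wise in a small fixed number of Ambisonics channels instead of being rendered ray by ray. A sketch of first-order B-format encoding of one contribution (normalization conventions vary between Ambisonics formats; this is a generic illustration, not the paper's MATLAB code):

```python
import math

def foa_encode(sample, az_deg, el_deg):
    """Encode one mono sample arriving from (azimuth, elevation) into
    first-order Ambisonics B-format channels (W, X, Y, Z), with W taken
    as the signal itself. Generic sketch; real pipelines must fix a
    normalization convention (e.g. SN3D) consistently."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    w = sample
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return (w, x, y, z)
```

Each Diffuse Rain arrival is encoded this way and accumulated into the four channels, after which a single Ambisonics-to-binaural decode replaces thousands of per-ray HRTF convolutions.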
