• Title/Summary/Keyword: sound spatialization


Sound Researches in Computer Graphics Community: Part I. Sound Synthesis and Spatialization (컴퓨터 그래픽스 커뮤니티에 소개된 사운드 관련 연구들: Part I. 사운드 합성과 공간화)

  • Yoo, Min-Joon; Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.15 no.1 / pp.25-34 / 2009
  • Sound is a very important element for enhancing and reinforcing the reality and immersion experienced by users of virtual reality and computer animation. Recently, significant research on sound modeling has been presented in the computer graphics community. In this article, the main subjects are explained and the major research is reviewed, based on the sound papers presented in the computer graphics community. Specifically, papers on the following two subjects are reviewed: 1) synthesizing sound using physically based laws and generating sound synchronized with graphics, and 2) spatializing sound and modeling the sonic environment. Much of the research on sound modeling has focused on modeling real physical laws more efficiently and generating realistic sound with limited resources. Based on this idea, various papers are introduced and the relationship between research on sound and graphics is discussed.
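
As a rough illustration of the physically based synthesis this survey refers to, a vibrating object is often modeled as a bank of damped sinusoidal modes excited by an impact. The minimal Python/NumPy sketch below uses made-up mode frequencies, dampings, and amplitudes; none of these values come from the reviewed papers.

```python
import numpy as np

def modal_synthesis(freqs_hz, dampings, amps, duration_s=1.0, sr=44100):
    """Sum of exponentially decaying sinusoids -- the basic modal model
    used in physically based sound synthesis (impulse excitation assumed)."""
    t = np.arange(int(duration_s * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# Placeholder modes loosely imitating a small struck metal object.
impact = modal_synthesis(freqs_hz=[440.0, 1210.0, 2650.0],
                         dampings=[6.0, 9.0, 14.0],
                         amps=[1.0, 0.6, 0.3])
```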


CONCERT HALL ACOUSTICS - Physics, Physiology and Psychology fusing Music and Hall - (콘서트홀 음향 - 음악과 홀을 융합시키는 물리학, 생리학, 심리학 -)

  • Ando, Yoichi (안도요이찌)
    • Proceedings of the Acoustical Society of Korea Conference / 1992.06a / pp.3-8 / 1992
  • The theory of subjective preference with temporal and spatial factors, which involve the sound signals arriving at both ears, is described. Then, auditory evoked potentials that may relate to a primitive subjective response, namely subjective preference, are discussed. Based on such fundamental phenomena, a workable model of the human auditory-brain system is proposed. For example, important subjective attributes such as loudness, coloration, the threshold of perception of a reflection, and echo disturbance, as well as subjective preference in relation to the initial time-delay gap between the direct sound and the first reflection and the subsequent reverberation time, are well described by the autocorrelation function of the source signals. Speech clarity and subjective diffuseness, as well as subjective preference, are related to the magnitude of the inter-aural cross-correlation function (IACC). Even the cocktail party effect may be explained by specialization of the human brain, i.e., the independence of temporal and spatial factors.
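
For reference, the two quantities this model is built on are the normalized autocorrelation function of the source signal and the inter-aural cross-correlation function. The standard definitions are sketched below; the integration window 2T and the 1 ms lag range are the conventional choices, not values stated in the abstract.

```latex
% Normalized running autocorrelation of the source signal p(t)
\Phi_p(\tau) = \frac{\int_{-T}^{T} p(t)\,p(t+\tau)\,dt}{\int_{-T}^{T} p^2(t)\,dt}

% Inter-aural cross-correlation of the ear-entrance signals p_l(t), p_r(t)
\Phi_{lr}(\tau) =
  \frac{\int_{-T}^{T} p_l(t)\,p_r(t+\tau)\,dt}
       {\sqrt{\int_{-T}^{T} p_l^2(t)\,dt \;\int_{-T}^{T} p_r^2(t)\,dt}},
\qquad
\mathrm{IACC} = \max_{|\tau|\le 1\,\mathrm{ms}} \bigl|\Phi_{lr}(\tau)\bigr|
```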


Headphone-based multi-channel 3D sound generation using HRTF (HRTF를 이용한 헤드폰 기반의 다채널 입체음향 생성)

  • Kim, Siho; Kim, Kyunghoon; Bae, Keunsung; Choi, Songin; Park, Manho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.1 / pp.71-77 / 2005
  • In this paper we implement a headphone-based 5.1-channel 3-dimensional (3D) sound generation system using HRTFs (Head-Related Transfer Functions). Each mono sound source in the 5.1-channel signal is localized at its virtual location by binaural filtering with the corresponding HRTFs, and a reverberation effect is added for spatialization. To reduce the computational burden, we reduce the number of taps in the HRTF impulse response and model the early reverberation with several tens of impulses extracted from the whole impulse sequence. We modify the HRTF spectrum by weighting the front-back spectral difference to reduce the front-back confusion caused by a non-individualized HRTF database. Informal listening tests confirm that the implemented 3D sound system generates a livelier and richer 3D sound than simple stereo or 2-channel downmixing.
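
The localization step described above amounts to convolving each mono channel with the left/right head-related impulse responses (HRIRs) measured for that channel's virtual loudspeaker position, then summing the results into a binaural pair. A minimal Python/NumPy sketch follows; the 128-tap truncation and the channel names are illustrative assumptions, not the values or database used in the paper.

```python
import numpy as np

def localize_channel(mono, hrir_left, hrir_right, n_taps=128):
    """Binaural filtering: convolve one mono channel with truncated
    left/right head-related impulse responses for its virtual position."""
    hl, hr = hrir_left[:n_taps], hrir_right[:n_taps]  # tap-count reduction
    return np.convolve(mono, hl), np.convolve(mono, hr)

def downmix_to_binaural(channels, hrir_db, n_taps=128):
    """channels: dict mapping channel name (e.g. 'L', 'R', 'C', 'Ls', 'Rs')
    to a mono signal; hrir_db: dict mapping the same names to
    (hrir_left, hrir_right) pairs for the virtual loudspeaker positions."""
    length = max(len(x) for x in channels.values()) + n_taps - 1
    left, right = np.zeros(length), np.zeros(length)
    for name, sig in channels.items():
        l, r = localize_channel(sig, *hrir_db[name], n_taps=n_taps)
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right
```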

A 3D Audio Broadcasting Terminal for Interactive Broadcasting Services (대화형 방송을 위한 3차원 오디오 방송단말)

  • Park, Gi Yoon; Lee, Taejin; Kang, Kyeongok; Hong, Jinwoo
    • Journal of Broadcast Engineering / v.10 no.1 s.26 / pp.22-30 / 2005
  • We implement an interactive 3D audio broadcasting terminal that synthesizes an audio scene according to a user's requests. The audio scene structure is described by the MPEG-4 AudioBIFS specifications. The user updates scene attributes, and the terminal synthesizes the corresponding sound images in 3D space. The terminal supports the MPEG-4 Audio top nodes and some visual nodes. Instead of using sensor nodes and route elements, we predefine node-type-specific user interfaces to support BIFS commands for field replacement. We employ sound spatialization, directivity/shape modeling, and reverberation effects for 3D audio rendering and realistic feedback to user inputs. We also introduce a virtual concert program as an application scenario for the interactive broadcasting terminal.
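
As a rough illustration of this interaction loop only (this is not the MPEG-4 AudioBIFS node interface; the class, field names, and panning law below are hypothetical simplifications), a field replacement issued from a predefined user interface followed by a simple re-rendering could look like:

```python
import math

class AudioSource:
    """Toy stand-in for a spatialized audio node: the user interface
    replaces fields (here azimuth/distance) and the terminal re-renders."""
    def __init__(self, azimuth_deg=0.0, distance_m=1.0):
        self.azimuth_deg = azimuth_deg
        self.distance_m = distance_m

    def set_field(self, name, value):
        # Field replacement triggered by the predefined user interface.
        setattr(self, name, value)

    def render_gains(self):
        # Constant-power panning plus 1/r distance attenuation --
        # a simplification of the terminal's 3D rendering stage.
        pan = math.radians((self.azimuth_deg + 90.0) / 2.0)  # [-90, 90] deg -> [0, 90] deg
        att = 1.0 / max(self.distance_m, 0.1)
        return att * math.cos(pan), att * math.sin(pan)  # (left, right)

vocalist = AudioSource(azimuth_deg=30.0, distance_m=2.0)
vocalist.set_field("distance_m", 4.0)   # user moves the source farther away
left_gain, right_gain = vocalist.render_gains()
```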