• Title/Abstract/Keyword: Virtual sound

244 search results (processing time: 0.026 s)

Cross-talk Cancellation Algorithm for 3D Sound Reproduction

  • Kim, Hyoun-Suk; Kim, Poong-Min; Kim, Hyun-Bin
    • ETRI Journal / Vol. 22, No. 2 / pp.11-19 / 2000
  • If the right and left signals of a binaural sound recording are reproduced through loudspeakers instead of headphones, they are inevitably mixed during their transmission to the ears of the listener. This degrades the desired realism of the sound reproduction system, an effect commonly called 'cross-talk.' A 'cross-talk canceler' that filters the binaural signals before they are sent to the sound sources is needed to prevent cross-talk. A cross-talk canceler equalizes the resulting sound around the listener's ears as if the original binaural signal were reproduced right next to them. It is also a solution to the problem of how a binaural signal is distributed to more than two channels driving the sound sources. This paper presents an effective way of building a cross-talk canceler in which geometric information, including the locations of the listener and multiple loudspeakers, is divided into angular information and distance information. The presented method builds a database off-line using an adaptive filtering technique and Head-Related Transfer Functions. Though the database mainly covers the situation where loudspeakers are located at a standard radius from the listener, it can be used for general radii after a distance compensation process that requires only a small amount of computation. Issues related to inverting a system to build a cross-talk canceler are discussed, and numerical results explaining the preferred configuration of a sound reproduction system with stereo loudspeakers are presented.

  • PDF
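The system-inversion step the abstract mentions can be illustrated with a minimal frequency-domain sketch: invert the 2x2 loudspeaker-to-ear transfer matrix per frequency bin, with Tikhonov regularization standing in for the ill-conditioning handling the paper discusses. The function name and the regularization constant are illustrative assumptions, not the paper's actual formulation (which also handles delays, distance compensation, and more than two loudspeakers).

```python
import numpy as np

def crosstalk_canceler(H, reg=1e-3):
    """Per-frequency-bin regularized inversion of the 2x2 acoustic
    transfer matrix H (shape: bins x 2 x 2, loudspeakers -> ears).
    Binaural signals filtered by the returned C reach the ears
    approximately uncrossed: H @ C ~= I per bin."""
    I = np.eye(2)
    Hh = np.conj(np.swapaxes(H, -1, -2))         # conjugate transpose per bin
    # Tikhonov-regularized right inverse: C = H^H (H H^H + reg I)^-1
    return Hh @ np.linalg.inv(H @ Hh + reg * I)
```

With a tiny regularization constant the product H @ C approaches the identity; larger values trade cancellation depth for robustness when H is nearly singular at some frequencies.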

Interactive sound experience interface based on virtual concert hall (가상 콘서트홀 기반의 인터랙티브 음향 체험 인터페이스)

  • Cho, Hye-Seung; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / Vol. 36, No. 2 / pp.130-135 / 2017
  • In this paper, we propose an interface for interactive sound experience in a virtual concert hall. The proposed interface consists of two systems, called 'virtual acoustic position' and 'virtual active listening'. To provide these systems, we applied an artificial reverberation algorithm, multi-channel source separation, and head-related transfer functions. The proposed interface was implemented using Unity. It presents the virtual concert hall to the user through the Oculus Rift, one of the available virtual reality headsets. Moreover, we used a Leap Motion as the control device so that users can operate the system with free-hand gestures, and the system's sound is delivered through headphones.

Listener Auditory Perception Enhancement using Virtual Sound Source Design for 3D Auditory System

  • Kang, Cheol Yong; Mariappan, Vinayagam; Cho, Juphil; Lee, Seon Hee
    • International journal of advanced smart convergence / Vol. 5, No. 4 / pp.15-20 / 2016
  • When a virtual sound source for a 3D auditory system is reproduced by a linear loudspeaker array, listeners can perceive not only the direction of the source but also its distance. Control over perceived distance has often been implemented via the adjustment of various acoustic parameters, such as loudness, spectral change, and the direct-to-reverberant energy ratio; however, there is a neglected yet powerful cue to the distance of a nearby virtual sound source that can be manipulated for sources positioned away from the listener's median plane. This paper addresses the problem of generating binaural signals for moving sources in closed or open environments. The proposed perceptual enhancement algorithm is composed of three main parts: propagation, reverberation, and the effects of the head, torso, and pinna. For propagation, attenuation due to distance and molecular air absorption is considered. Reverberation accounts for the interaction of sound with the environment, especially in closed environments. The effects of the head, torso, and pinna on the signals that arrive at the listener are also considered. A set of HRTFs is used to simulate the virtual sound source environment for the 3D auditory system. Special attention has been given to the modelling and interpolation of HRTFs for the generation of new transfer functions, as well as to the definition of trajectories and of the closed environment, so as to achieve realistic binaural renderings. The evaluation is implemented in MATLAB.
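The propagation part described above can be sketched very simply: an inverse-distance gain for spherical spreading, plus a crude high-frequency roll-off standing in for molecular air absorption. The one-pole coefficient mapping below is an illustrative assumption of this sketch, not the absorption model the paper's MATLAB simulation would actually use.

```python
import numpy as np

def propagate(signal, distance, ref_dist=1.0):
    """Apply 1/r spreading loss and a distance-dependent one-pole
    low-pass as a stand-in for air absorption (assumed mapping)."""
    gain = ref_dist / max(distance, ref_dist)   # inverse-distance law
    a = min(0.9, distance * 1e-3)               # more smoothing farther away
    out = np.empty_like(signal, dtype=float)
    y = 0.0
    for i, x in enumerate(signal):
        y = (1 - a) * x + a * y                 # one-pole low-pass
        out[i] = gain * y
    return out
```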

3-D Sound-Field Creation Implementing the Virtual Reality Ship Handling Simulator(I): HRTF Modeling (가상 현실 선박 조종 시뮬레이터 구현을 위한 3차원 음장생성(I) : 머리전달함수 모델링)

  • 임정빈
    • Journal of the Korean Institute of Navigation / Vol. 22, No. 3 / pp.17-25 / 1998
  • This paper describes elemental technologies for the creation of a three-dimensional (3-D) sound-field to implement a next-generation Ship Handling Simulator with human-computer interaction, known as Virtual Reality. In the virtual reality system, Head-Related Transfer Functions (HRTFs) are used to generate a 3-D sound environmental context. Here, the HRTFs are impulse responses characterizing the acoustical transformation in a space. This work is divided into two parts: Part I mainly concerns the construction of HRTF models, and Part II the control of the 3-D sound-field using the HRTFs. In this paper, as the first part, we investigate the theory for formulating HRTF models that reduce the dimensionality of the formulation without loss of any directional information. Using the model HRTFs, we report results from psychophysical tests used to assess the validity of the proposed modeling method.

  • PDF
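One common way to reduce the dimensionality of an HRTF set without discarding directional information is a principal-component model: a mean response plus per-direction weights on a few basis shapes. The sketch below uses an SVD for this; the function names and the component count are assumptions of this illustration, not necessarily the specific formulation of the paper.

```python
import numpy as np

def hrtf_pca_model(hrirs, n_components=8):
    """Express a set of HRIRs (directions x taps) as a mean response
    plus weights on a few principal components (SVD-based)."""
    mean = hrirs.mean(axis=0)
    X = hrirs - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]       # low-dimensional spectral shapes
    weights = X @ basis.T           # per-direction weights on the basis
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    """Rebuild HRIRs from the low-dimensional model."""
    return mean + weights @ basis
```

If the HRIR set truly lies near a low-dimensional subspace, a handful of components reconstructs it almost exactly, which is the premise behind such models.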

Objective and Subjective Test of a Virtual Sound Reproduction Using a Headphone (헤드폰을 이용한 가상음향 재현의 주관적, 객관적 평가)

  • 최원재;김상명
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2003 Spring Conference / pp.611-616 / 2003
  • The headphone is regarded as the most effective means of reproducing 3-dimensional virtual sound due to its channel separation property. However, there still exist several serious problems in headphone reproduction, such as 'front-back confusion' and 'in-head localization'. These well-known problems are in general assessed by subjective tests based on human judgment. In this paper, an objective test is conducted in parallel with the subjective test in order to validate the reproduction performance objectively. Such a combined approach may be a more scientific and systematic way of evaluating reproduction performance.

  • PDF

A study on multichannel 3D sound rendering

  • Kim, Sun-Min; Park, Young-Jin
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2001 / pp.117.2-117 / 2001
  • In this paper, 3D sound rendering using multichannel speakers is studied. Virtual 3D sound technology has mainly been researched with binaural systems. Conventional binaural sound systems reproduce the desired sound at two arbitrary points in 3-D space using two speakers. However, it is hard to localize a virtual source at back/front and top/below positions because the HRTF of an individual is unique, just like a fingerprint. Above all, the HRTF is highly sensitive to elevation changes. Multichannel sound systems have mainly been used to reproduce the sound field picked up over a certain volume rather than at specific points. Moreover, multichannel speakers arranged in 3-D space produce a much better performance of ...

  • PDF

Improvement of virtual speaker localization characteristics using grouped HRTF (머리전달함수의 그룹화를 이용한 가상 스피커의 정위감 개선)

  • Seo, Bo-Kug; Cha, Hyung-Tai
    • Journal of the Korean Institute of Intelligent Systems / Vol. 16, No. 6 / pp.671-676 / 2006
  • Sound image localization for virtual speaker realization is generally achieved by convolving the original sound with an HRTF database. However, localization can degrade through confusion between the up-down or front-back directions when non-individualized HRTFs are used, since HRTFs differ from listener to listener. In this paper, we study a virtual speaker using a new HRTF obtained by grouping the HRTFs around the virtual speaker to improve localization in the up-down and front-back directions. For effective HRTF grouping, we determine the location and number of HRTFs using informal listening tests. Performance tests of the virtual speaker using the grouped HRTF show that the proposed method improves the front-back and up-down sound localization characteristics much more than the conventional methods.
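The rendering step the abstract describes can be sketched as a convolution of the mono signal with a grouped HRIR. Here the "grouping" is a plain average of the HRIRs measured around the virtual speaker position, which is a simplifying assumption; the paper selects the locations and number of grouped HRTFs via listening tests.

```python
import numpy as np

def render_virtual_speaker(mono, hrir_group_L, hrir_group_R):
    """Binauralize a mono signal through a grouped HRTF: average the
    HRIRs around the virtual speaker position (rows = directions),
    then convolve the mono signal with the left/right averages."""
    hL = np.mean(hrir_group_L, axis=0)
    hR = np.mean(hrir_group_R, axis=0)
    return np.convolve(mono, hL), np.convolve(mono, hR)
```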

3-D Sound-Field Creation Implementing the Virtual Reality Ship Handling Simulator(II): Sound-Field Control (가상현실 선박조종 시뮬레이터 구현을 위한 3차원 음장 생성(II): 음장제어)

  • 임정빈
    • Journal of the Korean Institute of Navigation / Vol. 22, No. 3 / pp.27-34 / 1998
  • This paper is the second part of the work on 3-D sound-field creation for implementing the Virtual Reality Ship Handling Simulator (VRSHS). As mentioned in Part I, the spatial impression, which arises from the reproduced 3-D sound-field, gives a natural sound environmental context to the listener. This spatial impression is due to reverberation from reflections and can be obtained by using Head-Related Transfer Functions (HRTFs). In this work, we formulate early and late reverberation models of the HRTFs with theoretical control factors based on the sound-energy distribution in irregularly shaped enclosures. Using the reverberation models, we report results from psychophysical tests used to assess the validity of the proposed 3-D sound-field control method.

  • PDF
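The early/late split the paper formulates can be illustrated with a generic reverberation impulse response: a direct path, a few discrete early reflection taps, then an exponentially decaying late tail. The tap times, gains, and noise-based tail below are illustrative assumptions of this sketch, not the paper's enclosure-derived control factors.

```python
import numpy as np

def reverb_model(fs=44100, rt60=1.2, dur=1.5, seed=0,
                 early_taps=((0.012, 0.7), (0.023, 0.5), (0.031, 0.4))):
    """Impulse response with discrete early reflections followed by an
    exponentially decaying late tail (noise-based, -60 dB at rt60)."""
    n = int(dur * fs)
    h = np.zeros(n)
    h[0] = 1.0                                  # direct sound
    for t, g in early_taps:                     # sparse early reflections
        h[int(t * fs)] += g
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    decay = np.exp(-6.91 * t / rt60)            # 20*log10 drop of 60 dB at rt60
    h += 0.3 * rng.standard_normal(n) * decay   # diffuse late reverberation
    return h
```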

Amplitude Panning Algorithm for Virtual Sound Source Rendering in the Multichannel Loudspeaker System (다채널 스피커 환경에서 가상 음원을 생성하기 위한 레벨 패닝 알고리즘)

  • Jeon, Se-Woon; Park, Young-Cheol; Lee, Seok-Pil; Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / Vol. 30, No. 4 / pp.197-206 / 2011
  • In this paper, we propose a virtual sound source panning algorithm for multichannel systems. Recently, high-definition (HD) and ultrahigh-definition (UHD) video formats have been adopted for multimedia applications, providing high-resolution images and a wider viewing angle. The audio format likewise needs to generate a wider sound field and more immersive sound effects. However, the conventional stereo system cannot satisfy the sound quality desired in the latest multimedia systems. Therefore, various multichannel systems that can generate an improved sound field have been proposed. In multichannel systems, the conventional panning algorithms have acoustic problems concerning the directivity and timbre of the virtual sound source. To solve these problems in arbitrarily positioned multichannel loudspeaker systems, we propose a virtual sound source panning algorithm using multiple vector bases with nonnegative amplitude panning gains. The proposed algorithm can easily be controlled by a gain control function to generate an accurate localization of the virtual sound source, and it is applicable to both symmetric and asymmetric loudspeaker formats. Its sound localization performance is evaluated by subjective tests comparing it with conventional amplitude panning algorithms, e.g., VBAP and MDAP, in symmetric and asymmetric formats.
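For context, the VBAP baseline the abstract compares against can be sketched in its basic 2-D pairwise form: solve the loudspeaker-basis equation for the source direction, then power-normalize the gains. This is only Pulkki's standard formulation used as a reference here, not the multiple-vector-base extension the paper proposes.

```python
import numpy as np

def vbap_2d(source_deg, spk_deg_pair):
    """2-D pairwise VBAP: gains g solving L g = p, where the columns
    of L are the unit vectors of the two loudspeakers and p is the
    unit vector of the virtual source, then power-normalized."""
    def unit(deg):
        r = np.radians(deg)
        return np.array([np.cos(r), np.sin(r)])
    L = np.column_stack([unit(d) for d in spk_deg_pair])  # 2x2 basis
    g = np.linalg.solve(L, unit(source_deg))
    return g / np.linalg.norm(g)                          # constant power
```

A source aligned with one loudspeaker gets all the gain; a source midway between a symmetric pair gets equal gains, which is the expected amplitude-panning behavior.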

A Study of 3D Sound Modeling based on Geometric Acoustics Techniques for Virtual Reality (가상현실 환경에서 기하학적 음향 기술 기반의 3차원 사운드 모델링 기술에 관한 연구)

  • Kim, Cheong Ghil
    • Journal of Satellite, Information and Communications / Vol. 11, No. 4 / pp.102-106 / 2016
  • With the popularity of smartphones and the help of high-speed wireless communication technology, high-quality multimedia content has become common on mobile devices. In particular, the release of the Oculus Rift opened a new era of virtual reality technology in the consumer market. At the same time, 3D audio technology, currently used to make computer games more realistic, will soon be applied to the next generation of mobile phones and is expected to offer a more expansive experience than its visual counterpart. This paper surveys concepts, algorithms, and systems for modeling 3D sound in virtual environment applications. To this end, we first introduce an important design principle for audio rendering based on physics-based geometric algorithms and multichannel technologies, and then present an audio rendering pipeline for a scene-graph-based virtual reality system together with a hardware architecture for modeling sound propagation.
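A representative physics-based geometric algorithm from this family of surveys is the image-source method, whose first-order case is simple to sketch for a shoebox room: reflect the source across each of the six walls to obtain the virtual sources that generate first reflections. The shoebox assumption and the function name are this sketch's own; real geometric-acoustics systems handle arbitrary geometry and higher reflection orders.

```python
import numpy as np

def image_sources_first_order(src, room):
    """First-order image sources for a shoebox room of size
    room = (Lx, Ly, Lz), with src = (x, y, z) inside the room.
    Each of the six walls yields one mirrored virtual source."""
    src = np.asarray(src, dtype=float)
    images = []
    for axis, L in enumerate(room):
        for wall in (0.0, L):                # wall planes at 0 and L on this axis
            img = src.copy()
            img[axis] = 2 * wall - src[axis] # mirror the source across the wall
            images.append(img)
    return images
```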