• Title/Summary/Keyword: Virtual sound

The Design and Study of Virtual Sound Field in Music Production

  • Wang, Yan
    • Journal of the Korea Society of Computer and Information, v.22 no.7, pp.83-91, 2017
  • In this paper, we propose a comprehensive solution for adjusting the virtual sound field with different kinds of devices and software during the preliminary and later stages of music production. The basic process of music production includes composing, arranging, and recording at the pre-production stage, as well as mixing and mastering at the post-production stage. At the initial stage of music creation, it should be checked whether the design of the virtual sound field and the choice of tones and instruments used in the arrangement match the virtual sound field required for the final work. In later recording, mixing, and mastering, elaborate adjustments should be made to the virtual sound field. This study also analyzes how the parameters of effectors can be applied to the design and adjustment of the virtual sound field, making it a source of creation.
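
As a rough illustration of how effector parameters can shape a virtual sound field, the sketch below scales a reverb send, pre-delay, and high-cut filter with the intended distance of a source. The mapping and parameter ranges are assumptions made for illustration, not values or workflow from the paper.

```python
def reverb_params_for_distance(distance_m, room_depth_m=20.0):
    """Hedged sketch: derive simple reverb ('effector') parameters from the
    intended distance of a source in a virtual sound field. The mapping and
    ranges are illustrative assumptions only.
    """
    d = max(0.0, min(distance_m, room_depth_m)) / room_depth_m  # normalize to 0..1
    return {
        "wet_dry_mix": 0.1 + 0.6 * d,         # more reverb as the source moves back
        "pre_delay_ms": 30.0 * (1.0 - d),     # shorter pre-delay for distant sources
        "high_cut_hz": 12000.0 - 8000.0 * d,  # distant sources sound duller
    }

print(reverb_params_for_distance(5.0))
```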

A Multichannel System for Virtual 3-D Sound Rendering (입체음장재현을 위한 멀티채널시스템)

  • Lee, Chanjoo; Park, Youngjin; Oh, Si-Hwan; Kim, Yoonsun
    • Proceedings of the Acoustical Society of Korea Conference, spring, pp.223-226, 2000
  • Currently, a multichannel system for virtual 3-D sound rendering is under development. Robust sound image formation and smooth real-time interactivity are the main design points. The system utilizes the VBAP algorithm for virtual sound image positioning. Overall system settings can be easily configured. We developed software, RIMA, as a driving program for the system. At this stage, it is possible to position virtual sound images at arbitrary positions in three-dimensional space. The characteristics of the system are discussed. The system has been applied to the KAIST Bicycle Simulator to generate the virtual sound field.
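
For readers unfamiliar with VBAP, the sketch below shows the core idea for a single two-dimensional loudspeaker pair (Pulkki's vector-base formulation): the source direction is expressed as a gain-weighted combination of the loudspeaker direction vectors. The paper's RIMA software works with full three-dimensional speaker triplets; the layout below is only an example.

```python
import numpy as np

def vbap_pair_gains(speaker_az_deg, source_az_deg):
    """Minimal 2-D VBAP sketch: gains for one loudspeaker pair.

    speaker_az_deg : azimuths of the two loudspeakers, in degrees
    source_az_deg  : azimuth of the desired virtual source, in degrees
    """
    az = np.radians(speaker_az_deg)
    # Columns of L are unit vectors pointing at the two loudspeakers
    L = np.array([[np.cos(az[0]), np.cos(az[1])],
                  [np.sin(az[0]), np.sin(az[1])]])
    s = np.radians(source_az_deg)
    p = np.array([np.cos(s), np.sin(s)])   # unit vector toward the virtual source
    # Solve L g = p, then normalize so the summed power of the gains is 1
    g = np.linalg.solve(L, p)
    g = np.clip(g, 0.0, None)              # negative gains mean the source lies outside the pair
    return g / np.linalg.norm(g)

# Example: virtual source at +10 degrees between loudspeakers at -30 and +30 degrees
print(vbap_pair_gains([-30.0, 30.0], 10.0))
```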

Virtual Sound Localization algorithm for Surround Sound Systems (서라운드시스템을 위한 가상 음상정위 알고리즘)

  • Lee, Sin-Lyul; Han, Ki-Young; Lee, Seung-Rae; Sung, Koeng-Mo
    • Proceedings of the Acoustical Society of Korea Conference, spring, pp.81-84, 2004
  • In this paper, we propose a virtual sound localization algorithm that improves sound localization accuracy and sound color preservation for two-channel and multi-channel surround speaker layouts. With the conventional CPP law, the perceived sound direction differs from the panning angle, and the sound color differs from that of a real sound source, especially when the speakers are spread widely apart. To overcome this drawback, we design a virtual sound localization algorithm using directional psychoacoustic criteria (DPC) and a sound color compensator (SCC). The analysis results show that, with the proposed system, the sound direction matches the panning angle over the audible frequency range, and the sound color deviates less from a real sound source than with the conventional CPP law. In addition, its performance is verified by means of subjective tests using a real sound source.
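
For reference, the conventional panning law the paper improves upon (reading CPP as a constant-power panning law) can be sketched as below for a stereo pair; the proposed DPC and SCC processing is not reproduced here.

```python
import numpy as np

def constant_power_pan(pan):
    """Baseline constant-power stereo panning law, shown only for reference.

    pan : panning position in [-1, 1], where -1 = full left and +1 = full right
    Returns (left_gain, right_gain) with left**2 + right**2 = 1.
    """
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] to [0, pi/2]
    return np.cos(theta), np.sin(theta)

for p in (-1.0, 0.0, 0.5, 1.0):
    l, r = constant_power_pan(p)
    print(f"pan={p:+.1f}  L={l:.3f}  R={r:.3f}  power={l*l + r*r:.3f}")
```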

Development of three-dimensional sound effects system for virtual reality (가상환경용 3차원 입체음향 시스템 개발)

  • Yang, Si-Young; Kim, Dong-Hyung; Jeong, Je-Chang
    • Journal of Broadcast Engineering, v.13 no.5, pp.574-585, 2008
  • 3D sound is of central importance for virtual reality systems and is becoming increasingly important for auditory displays and human-computer interaction. In this paper, we propose a novel real-time 3D sound representation system for virtual reality. First, we propose a method for calculating the impulse response of a virtual space. To transmit the information of the virtual space, we propose an enhanced DXF file type that contains material information. We then implement a multi-channel sound panning system. We perform experiments based on computer simulation and demonstrate the utility of the proposed method.
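
As a rough illustration of how a virtual-space impulse response is used, the sketch below builds a toy impulse response from a handful of propagation paths (direct path plus reflections, each delayed by travel time and attenuated by distance) and convolves it with a dry signal. The path lengths and sample rate are assumptions; the paper's DXF-based material modelling is not reproduced.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed)

def toy_impulse_response(path_lengths_m, c=343.0):
    """Toy room impulse response sketch: one tap per propagation path.

    path_lengths_m : path lengths in metres from source to listener,
                     e.g. the direct path plus a few image-source reflections.
    Each tap is delayed by path/c and attenuated by 1/path (spherical spreading).
    """
    delays = [int(round(d / c * fs)) for d in path_lengths_m]
    ir = np.zeros(max(delays) + 1)
    for d_samp, d_m in zip(delays, path_lengths_m):
        ir[d_samp] += 1.0 / max(d_m, 1e-3)
    return ir

# Dry source: a short noise burst; virtual space: direct path plus two reflections
dry = np.random.randn(fs // 10)
ir = toy_impulse_response([3.0, 7.5, 11.2])
wet = np.convolve(dry, ir)   # auralized signal for the virtual space
print(wet.shape)
```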

Auditory Interaction Design By Impact Sound Synthesis for Virtual Environment (충돌음 합성에 의한 가상환경의 청각적 인터랙션 디자인)

  • Nam, Yang-Hee
    • The Journal of the Korea Contents Association, v.13 no.5, pp.1-8, 2013
  • Focusing on the fact that sound is one of the important sensory cues for conveying situations such as impact, this paper proposes an auditory interaction design approach for virtual environments. Based on a few samples of basic sounds for various materials such as steel, rubber, and glass, the proposed method enables design transformations of the basic sound by modifying the mode gains that characterize the natural sound of each material. In a real-time virtual environment, it also simulates the modified sound according to changes in the perceptual properties of the impact situation, such as the colliding objects' size, hardness, contact area, and speed. The results of a cognition experiment on discriminating objects' materials and impact situations by sound showed the feasibility of the proposed auditory interaction design method.
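
A minimal modal-synthesis sketch of the underlying idea, an impact sound built as a sum of damped sinusoids whose per-mode gains can be modified, is given below. The mode frequencies, gains, and damping values are made up for illustration and are not the paper's measured material data.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed)

def modal_impact_sound(freqs_hz, gains, dampings, duration_s=0.5):
    """Minimal modal-synthesis sketch: an impact sound as a sum of damped sinusoids.

    freqs_hz  : modal frequencies of the material (Hz)
    gains     : per-mode gains (scaling these is what 'transforms' the sound)
    dampings  : per-mode exponential decay rates (1/s); higher values sound duller
    """
    t = np.arange(int(duration_s * fs)) / fs
    out = np.zeros_like(t)
    for f, g, d in zip(freqs_hz, gains, dampings):
        out += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out / np.max(np.abs(out))

# Illustrative (made-up) modes for a 'metallic' object; a harder impact would scale the gains up
metal_hit = modal_impact_sound([523.0, 1370.0, 2940.0], [1.0, 0.6, 0.3], [8.0, 12.0, 20.0])
print(metal_hit.shape)
```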

'EVE-Sound™' Toolkit for Interactive Sound in Virtual Environment (가상환경의 인터랙티브 사운드를 위한 'EVE-Sound™' 툴킷)

  • Nam, Yang-Hee; Sung, Suk-Jeong
    • The KIPS Transactions: Part B, v.14B no.4, pp.273-280, 2007
  • This paper presents a new 3D sound toolkit called EVE-Sound™ that consists of a pre-processing tool for environment simplification that preserves sound effects, and a 3D sound API for real-time rendering. It is designed to allow users to interact with complex 3D virtual environments through audio-visual modalities. The EVE-Sound™ toolkit serves two different types of users: high-level programmers who need an easy-to-use sound API for developing realistic, audio-visually rendered 3D applications, and researchers in the 3D sound field who need to experiment with or develop new algorithms without rewriting all the required code from scratch. An interactive virtual environment application was created with a sound engine constructed using the EVE-Sound™ toolkit; it demonstrates the real-time audio-visual rendering performance and the applicability of the proposed EVE-Sound™ for building interactive applications with complex 3D environments.

A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun; Kim, HyungGi
    • International Journal of Contents, v.16 no.4, pp.68-77, 2020
  • VR (virtual reality) contents make the audience perceive virtual space as real through the virtual Z axis, which creates a sense of space that could not be created in 2D because of the distance between the viewer's eyes. This visual change has led to the need for technological changes in the sound and sound sources inserted into VR contents. However, studies on increasing immersion in VR contents are still focused more on the scientific and visual fields. This is because composing and producing VR sound requires professional views in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering has difficulty reflecting changes in user interaction or in time and space, because the sound effects, script sound, and background music are directed according to the storyboard organized by the director. However, it has the advantage that the sound effects, script sound, and background music are produced in one track, with no coding phase required. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music as separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following methods were devised and used to produce the sound for the VR content "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interaction, to identify the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sound' and 'individual sound'. Third, interaction coding is carried out for the sound effects, script sound, and background music produced in the simultaneous-sound and individual-sound categories. The content is then completed by applying the sound to the video. Through this process, sound quality inhibitors such as noise cancelling can be removed, while producing sound that fits the user's interaction and the time and space.
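
A minimal sketch of the 'simultaneous sound' versus 'individual sound' split described above is given below: background music plays as a looping track, while individual sounds are triggered by interaction events. The class, event, and file names are assumptions made for illustration, not the production setup used for the work.

```python
from dataclasses import dataclass, field

@dataclass
class InteractiveSoundScene:
    simultaneous: list = field(default_factory=list)   # e.g. background music, always playing
    individual: dict = field(default_factory=dict)     # interaction event name -> sound file

    def start(self):
        for clip in self.simultaneous:
            print(f"loop: {clip}")                     # stand-in for starting a looping track

    def on_event(self, event):
        clip = self.individual.get(event)
        if clip:
            print(f"play once: {clip}")                # triggered by a user interaction

# Hypothetical scene: one background track plus two interaction-triggered sounds
scene = InteractiveSoundScene(
    simultaneous=["bgm_forest.wav"],
    individual={"gaze_at_fairy": "fairy_chime.wav", "touch_flower": "bloom.wav"},
)
scene.start()
scene.on_event("gaze_at_fairy")
```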

Obstacle Avoidance of a Moving Sound Following Robot using Active Virtual Impedance (능동 가상 임피던스를 이용한 이동 음원 추종 로봇의 장애물 회피)

  • Han, Jong-Ho; Park, Sook-Hee; Noh, Kyung-Wook; Lee, Dong-Hyuk; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.20 no.2, pp.200-210, 2014
  • An active virtual impedance algorithm is newly proposed to track a sound source and to avoid obstacles while a mobile robot follows the sound source. The tracking velocity of the mobile robot toward the sound source is determined by virtual repulsive and attractive forces, used to avoid obstacles and to follow the sound source, respectively. The active virtual impedance is defined as a function of the distances and relative velocities of the sound source and obstacles with respect to the mobile robot, and is used to generate the robot's tracking velocity. Conventional virtual impedance methods use fixed coefficients for the relative distances and velocities; in this research the coefficients are dynamically adjusted to improve obstacle avoidance performance in multiple-obstacle environments. The relative distances and velocities are obtained using a microphone array consisting of three microphones in a row. The geometric relationships among the microphones are used to estimate the relative position and orientation of the sound source with respect to the mobile robot, which carries the microphone array. The effectiveness of the proposed algorithm has been demonstrated by real experiments.
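
The sketch below illustrates the attraction/repulsion idea behind impedance-based following in its simplest form: an attractive velocity term toward the sound source plus repulsive terms away from nearby obstacles. The paper's active virtual impedance additionally includes relative-velocity terms and dynamically adjusted coefficients, which are omitted here, and the gains shown are assumed values.

```python
import numpy as np

def virtual_force_velocity(robot_pos, source_pos, obstacles,
                           k_att=0.8, k_rep=1.5, d_safe=1.0):
    """Simplified sketch of sound-source following with virtual forces.

    k_att, k_rep, d_safe : illustrative fixed gains and safety distance,
    unlike the paper's dynamically adjusted impedance coefficients.
    """
    # Attractive term: pull the robot toward the estimated sound-source position
    v = k_att * (source_pos - robot_pos)

    # Repulsive terms: push away from any obstacle closer than d_safe
    for obs in obstacles:
        away = robot_pos - obs
        dist = np.linalg.norm(away)
        if 1e-6 < dist < d_safe:
            v += k_rep * (1.0 / dist - 1.0 / d_safe) * away / dist
    return v  # commanded velocity vector for the mobile robot

cmd = virtual_force_velocity(np.array([0.0, 0.0]),
                             np.array([3.0, 1.0]),
                             [np.array([1.0, 0.2])])
print(cmd)
```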

Sound Quality Evaluation for Laundry Noise by a Virtual Laundry Noise Considering the Effect of Various Noise Sources in a Drum Washing Machine (소음원의 영향이 고려된 가상 세탁음 제작을 통한 드럼 세탁기의 음질 인덱스 구축)

  • Jeong, Jae-Eun; Yang, In-Hyung; Fawazi, Noor; Jeong, Un-Chang; Lee, Jung-Youn; Oh, Jae-Eung
    • Transactions of the Korean Society for Noise and Vibration Engineering, v.22 no.6, pp.564-573, 2012
  • The objective of this study is to determine the effect of each noise source on sound quality and to build a sound quality index for laundry noise. To compare the influence of the individual noise sources on the laundry noise, we made virtual laundry noises by synthesizing an actual laundry noise with each noise source, such as dropping noise, water noise, motor noise, and circulation pump noise. We conducted a listening test with customers using the virtual laundry noises and found that the dropping noise has a decisive effect on the sound quality of the laundry noise. We then conducted a multiple regression analysis of the sound quality of the laundry noise using statistical data processing. The reliability of the multiple regression index was verified by comparing the listening results and the index results for other actual laundry noises. This study is expected to provide a guideline for improving laundry noise.
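
As an illustration of how a sound quality index can be built by multiple regression, the sketch below regresses jury ratings on a few psychoacoustic metrics with ordinary least squares. The metric choice (loudness, sharpness, roughness) and all numbers are assumed for illustration and are not the paper's data.

```python
import numpy as np

# Jury ratings from a hypothetical listening test, regressed on psychoacoustic metrics
loudness  = np.array([12.1, 14.3, 10.8, 16.0, 13.2])   # sone
sharpness = np.array([1.4,  1.7,  1.2,  1.9,  1.5])    # acum
roughness = np.array([0.21, 0.30, 0.18, 0.35, 0.25])   # asper
rating    = np.array([6.5,  4.8,  7.2,  3.9,  5.6])    # jury score (higher = better)

# Design matrix with an intercept column; fit index = b0 + b1*N + b2*S + b3*R
X = np.column_stack([np.ones_like(loudness), loudness, sharpness, roughness])
coeffs, *_ = np.linalg.lstsq(X, rating, rcond=None)
predicted = X @ coeffs

print("index coefficients:", coeffs)
print("predicted vs. rated:", list(zip(predicted.round(2), rating)))
```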