• Title/Summary/Keyword: Spatial Audio

Digital Bit Stream Wireless Communication System Using an Infrared Spatial Coupler for Audio/Video Signals (A/V용 적외선 송수신장치를 이용한 디지털 비트스트림 무선 통신 시스템)

  • 예창희;이광순;최덕규;송규익
    • Proceedings of the IEEK Conference / 2001.06a / pp.309-312 / 2001
  • In this paper, we propose a bit-stream wireless communication system that uses an infrared transceiver for audio/video signals, and we implement its circuit. The proposed transmitter converts the bit stream into an analog signal format similar to NTSC, so that the signal can be transmitted through an infrared spatial coupler for A/V signals. The receiver then recovers the bit stream by inverting the transmitter's processing.
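
As a rough illustration of the transmitter side, the sketch below frames a bit stream into fixed-length "lines", each led by a sync pulse, loosely mimicking NTSC horizontal sync before the signal would go to the infrared coupler. The levels, line length, and sample counts are all assumptions for illustration; the paper's actual implementation is an analog circuit.

```python
# Hypothetical framing of a bit stream into an NTSC-like analog waveform:
# each "line" starts with a sync pulse, followed by two-level bit data.
import numpy as np

SYNC_LEVEL, LOW, HIGH = -0.4, 0.3, 1.0      # assumed signal levels
SYNC_SAMPLES, BIT_SAMPLES, BITS_PER_LINE = 16, 8, 64

def bits_to_analog_lines(bits: np.ndarray) -> np.ndarray:
    """Pack bits into fixed-length 'lines', each led by a sync pulse."""
    pad = (-len(bits)) % BITS_PER_LINE
    bits = np.pad(bits, (0, pad))
    lines = []
    for row in bits.reshape(-1, BITS_PER_LINE):
        sync = np.full(SYNC_SAMPLES, SYNC_LEVEL)
        data = np.repeat(np.where(row > 0, HIGH, LOW), BIT_SAMPLES)
        lines.append(np.concatenate([sync, data]))
    return np.concatenate(lines)

signal = bits_to_analog_lines(np.random.randint(0, 2, 1024))
```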

Standardization of MPEG-I Immersive Audio and Related Technologies (MPEG-I Immersive Audio 표준화 및 기술 동향)

  • Jang, D.Y.;Kang, K.O.;Lee, Y.J.;Yoo, J.H.;Lee, T.J.
    • Electronics and Telecommunications Trends / v.37 no.3 / pp.52-63 / 2022
  • Immersive media, also known as spatial media, has become essential with the decrease in face-to-face activities during the COVID-19 pandemic. Teleconferencing, the metaverse, and digital twins have been developed with high expectations as immersive media services, and demand for hyper-realistic media is increasing. Under these circumstances, MPEG-I Immersive Media is being standardized as a technology for navigable virtual reality, expected to launch in the first half of 2024, and the Audio Group is working to standardize its immersive audio technology. Following this trend, this article introduces trends in MPEG-I immersive audio standardization. It further describes the features of the immersive audio rendering technology, focusing on the structure and function of the RM0 base technology, which was chosen after evaluating all the technologies proposed at the January 2022 MPEG Audio meeting.

A Study on Setting the Minimum and Maximum Distances for Distance Attenuation in MPEG-I Immersive Audio

  • Lee, Yong Ju;Yoo, Jae-hyoun;Jang, Daeyoung;Kang, Kyeongok;Lee, Taejin
    • Journal of Broadcast Engineering / v.27 no.7 / pp.974-984 / 2022
  • In this paper, we introduce methods for setting the minimum and maximum distances used in geometric distance attenuation processing, one of the spatial sound reproduction methods. In general, sound attenuation is inversely proportional to distance (the 1/r law), but when the relative distance between the user and an audio object is very short or very long, exceptional processing may be performed by setting a minimum or maximum distance. While MPEG-I Immersive Audio's RM0 uses fixed values for the minimum and maximum distances, this study proposes effective methods for setting them that consider the signal gain of the audio object. The proposed methods were verified through simulations and through experiments using the RM0 renderer.
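
A minimal sketch of the clamped 1/r law described above, assuming NumPy. The gain-aware rule below, where a louder object gets a larger minimum distance so its near-field gain stays bounded, is one plausible reading of the proposal, not the authors' exact formulation.

```python
# Illustrative 1/r distance attenuation with min/max distance clamping.
import numpy as np

def distance_gain(r: float, r_min: float, r_max: float) -> float:
    """1/r attenuation, held constant below r_min and beyond r_max."""
    r_eff = np.clip(r, r_min, r_max)
    return r_min / r_eff                     # gain normalized to 1 at r_min

def r_min_from_source_gain(source_gain_db: float, ref: float = 0.2) -> float:
    """Hypothetical rule: scale the minimum distance with the object's
    signal gain so that the maximum rendered level stays bounded."""
    return ref * 10.0 ** (source_gain_db / 20.0)

gain = distance_gain(r=3.0, r_min=r_min_from_source_gain(6.0), r_max=100.0)
```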

A Real Time 6 DoF Spatial Audio Rendering System based on MPEG-I AEP (MPEG-I AEP 기반 실시간 6 자유도 공간음향 렌더링 시스템)

  • Kyeongok Kang;Jae-hyoun Yoo;Daeyoung Jang;Yong Ju Lee;Taejin Lee
    • Journal of Broadcast Engineering / v.28 no.2 / pp.213-229 / 2023
  • In this paper, we introduce a spatial sound rendering system that provides 6DoF spatial sound in real time in response to the movement of a listener in a virtual environment. The system was implemented using MPEG-I AEP as the development environment for the CfP response of MPEG-I Immersive Audio, and it consists of an encoder and a renderer that includes a decoder. The encoder encodes, offline, metadata such as the spatial audio parameters of the virtual scene described in the EIF and the sound source directivity provided in SOFA files, and delivers them in the bitstream. The renderer receives the transmitted bitstream and performs 6DoF spatial sound rendering in real time according to the listener's position. The main spatial sound processing technologies applied in the rendering system include sound source effects and obstacle effects; additional system processing includes the Doppler effect and sound field effects. Results of a subjective self-evaluation of the developed system are presented.
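
The per-frame core of such a renderer can be pictured as in the sketch below: distance attenuation with min/max clamping plus a Doppler resampling ratio derived from listener/source relative motion. All names are placeholders; the actual MPEG-I AEP rendering pipeline is far more elaborate.

```python
# Toy per-frame 6DoF update: distance gain and Doppler factor from the
# listener and source positions/velocities.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def frame_params(listener_pos, listener_vel, src_pos, src_vel,
                 r_min=0.2, r_max=100.0):
    d = np.asarray(src_pos, float) - np.asarray(listener_pos, float)
    r = max(np.linalg.norm(d), 1e-6)
    u = d / r                                    # unit vector toward source
    gain = r_min / np.clip(r, r_min, r_max)      # 1/r with clamping
    v_rel = np.dot(np.asarray(src_vel, float) - np.asarray(listener_vel, float), u)
    doppler = SPEED_OF_SOUND / (SPEED_OF_SOUND + v_rel)  # >1: approaching
    return gain, doppler

gain, ratio = frame_params([0, 0, 0], [0, 0, 0], [5, 0, 0], [-10, 0, 0])
```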

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.175-182 / 2022
  • In daily life, we come across different types of information, for example in multimedia and text formats. We all need different types of information in our common routines, such as watching or reading the news, listening to the radio, and watching different types of videos. However, we sometimes run into problems when a certain type of information is required. For example, someone listening to the radio wants jazz, but all the channels play pop music mixed with advertisements; the listener is stuck with pop and gives up searching for jazz. This situation can be addressed with an automatic audio classification system. Deep Learning (DL) models can make such audio classification easy, but they are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi, because they require substantial computational power such as a graphics processing unit (GPU). To solve this problem, we propose a low-complexity DL model for Audio Event Detection (AED). We extracted Mel-spectrograms of dimension 128×431×1 from audio signals and applied normalization. Three data augmentation methods were applied: frequency masking, time masking, and mixup. We designed a Convolutional Neural Network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1], and reduced the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. We confirm that our model achieved a validation loss of 0.33 and an accuracy of 90.34% with a model size of 132.50 KB.
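
A plausible shape for such a low-complexity network, assuming tf.keras: separable 2D convolutions, batch normalization, and spatial dropout over 128×431×1 mel-spectrogram inputs, followed by float16 post-training quantization via TFLite. Layer counts and widths are guesses, not the authors' exact architecture.

```python
# Small VGG-style separable CNN for audio event detection, plus float16
# post-training quantization to shrink the stored model.
import tensorflow as tf

def build_model(n_classes: int = 10) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 431, 1)),
        tf.keras.layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.SpatialDropout2D(0.2),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile("adam", "categorical_crossentropy", metrics=["accuracy"])
    return model

converter = tf.lite.TFLiteConverter.from_keras_model(build_model())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # float16 weights
tflite_model = converter.convert()
```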

Personal Monitor & TV Audio System by Using Loudspeaker Array (스피커 배열을 이용한 개인용 모니터와 TV의 오디오 시스템)

  • Lee, Chan-Hui;Chang, Ji-Ho;Park, Jin-Young;Kim, Yang-Hann
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.18 no.7 / pp.701-710 / 2008
  • Personal audio systems, including TV sets and monitors, are attracting great interest. In this study, we applied a method that creates an acoustically bright zone around the user and a dark zone elsewhere by maximizing the ratio of sound energy between the bright and dark zones, a method well known as acoustic contrast control. We used a line loudspeaker array to localize the sound in the listening zone. Performance depends on the size of the zone and on array parameters such as array size, loudspeaker spacing, and the wavelength of the sound. We treated these parameters as spatial variables and studied their effects, finding that each spatial variable has its own characteristics and a distinctly different effect. Genetic algorithms are introduced to find the optimal values of the spatial variables, and with these optimal values we improve the results of acoustic contrast control.
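
The contrast-maximization core of acoustic contrast control reduces to a generalized eigenproblem between the bright- and dark-zone spatial correlation matrices; the sketch below, assuming SciPy, shows only that step. The paper's genetic-algorithm search over the spatial variables is not reproduced, and the matrix shapes and regularization are assumptions.

```python
# Acoustic contrast control at a single frequency: the optimal loudspeaker
# weights are the principal generalized eigenvector of (Rb, Rd).
import numpy as np
from scipy.linalg import eigh

def contrast_weights(Gb: np.ndarray, Gd: np.ndarray) -> np.ndarray:
    """Gb, Gd: (zone microphones x loudspeakers) transfer matrices."""
    Rb = Gb.conj().T @ Gb                                # bright-zone correlation
    Rd = Gd.conj().T @ Gd + 1e-9 * np.eye(Gd.shape[1])   # dark zone, regularized
    vals, vecs = eigh(Rb, Rd)                            # eigenvalues ascending
    return vecs[:, -1]                                   # largest energy ratio

rng = np.random.default_rng(0)
Gb = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
Gd = rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8))
w = contrast_weights(Gb, Gd)   # weights for 8 loudspeakers
```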

Salience of Envelope Interaural Time Difference of High Frequency as Spatial Feature (공간감 인자로서의 고주파 대역 포락선 양이 시간차의 유효성)

  • Seo, Jeong-Hun;Chon, Sang-Bae;Sung, Koeng-Mo
    • The Journal of the Acoustical Society of Korea / v.29 no.6 / pp.381-387 / 2010
  • Both timbral and spatial features are important in the assessment of multichannel audio coding systems. Choi et al. proposed a prediction model extending ITU-R Rec. BS.1387-1 to multichannel audio coding systems using spatial features such as ITDDist (Interaural Time Difference Distortion), ILDDist (Interaural Level Difference Distortion), and IACCDist (Interaural Cross-correlation Coefficient Distortion). In that model, following classical duplex theory, ITDDists were computed only for low frequency bands (below 1500 Hz) and ILDDists only for high frequency bands (above 2500 Hz). However, in the high frequency range, information in the temporal envelope is also important for spatial perception, especially sound localization. This paper introduces a new model that computes the ITD distortions of the temporal envelopes of high frequency components, to investigate quantitatively the role of such ITDs in spatial perception. The computed envelope ITD distortions were highly correlated with the perceived sound quality of multichannel audio.
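
One simple way to realize an envelope ITD measure, assuming SciPy: high-pass both ear signals, take their Hilbert envelopes, and locate the cross-correlation peak. This is a stand-in for the paper's distortion model, not its definition.

```python
# Envelope ITD of the high-frequency band via Hilbert envelopes and
# cross-correlation.
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

def envelope_itd(left, right, fs, f_lo=2500.0):
    sos = butter(4, f_lo, "highpass", fs=fs, output="sos")
    env_l = np.abs(hilbert(sosfilt(sos, left)))
    env_r = np.abs(hilbert(sosfilt(sos, right)))
    env_l = env_l - env_l.mean()
    env_r = env_r - env_r.mean()
    xcorr = np.correlate(env_l, env_r, mode="full")
    lag = np.argmax(xcorr) - (len(env_r) - 1)   # +: left envelope delayed
    return lag / fs                              # ITD in seconds
```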

An Efficient Representation Method for ICLD with Robustness to Spectral Distortion

  • Beack, Seung-Kwon;Seo, Jeong-Il;Kang, Kyung-Ok;Hahn, Min-Soo
    • ETRI Journal / v.27 no.3 / pp.330-333 / 2005
  • The Inter-Channel Level Difference (ICLD) is a cue parameter used to estimate spectral information in binaural cue coding, which has recently been in the spotlight as a multichannel audio compression technique. Although the ICLD is an essential parameter, it is generally distorted by quantization. In this paper, a new modified ICLD representation method that minimizes quantization distortion is proposed, adopting flexible determination of the reference channel and unidirectional quantization. Our experimental results confirm that the proposed method improves multichannel audio output quality even at a reduced bit rate.
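
As one plausible reading of the flexible-reference idea: pick the louder channel as the reference so the level difference is always non-positive and can be quantized one-sidedly. The sketch below is illustrative, not the paper's specification.

```python
# Per-band ICLD with a flexibly chosen (louder) reference channel, so
# quantization only needs to cover one side of 0 dB.
import numpy as np

def icld_db(band_ch1: np.ndarray, band_ch2: np.ndarray):
    p1 = float(np.sum(band_ch1 ** 2)) + 1e-12
    p2 = float(np.sum(band_ch2 ** 2)) + 1e-12
    ref = 1 if p1 >= p2 else 2                         # reference channel flag
    icld = 10.0 * np.log10(min(p1, p2) / max(p1, p2))  # always <= 0 dB
    return ref, icld
```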

Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High-Resolution Spectral Features

  • Kim, Hyoung-Gook;Kim, Jin Young
    • ETRI Journal / v.39 no.6 / pp.832-840 / 2017
  • Recently, deep recurrent neural networks have achieved great success in various machine learning tasks, and have also been applied for sound event detection. The detection of temporally overlapping sound events in realistic environments is much more challenging than in monophonic detection problems. In this paper, we present an approach to improve the accuracy of polyphonic sound event detection in multichannel audio based on gated recurrent neural networks in combination with auditory spectral features. In the proposed method, human hearing perception-based spatial and spectral-domain noise-reduced harmonic features are extracted from multichannel audio and used as high-resolution spectral inputs to train gated recurrent neural networks. This provides a fast and stable convergence rate compared to long short-term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms the conventional approaches.
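
A compact gated-recurrent tagger for polyphonic detection might look as follows, assuming tf.keras; per-frame sigmoid outputs allow overlapping events. Feature dimensions and layer sizes are placeholders, and the paper's noise-reduced harmonic features are not reproduced here.

```python
# Bidirectional GRU sound event tagger with per-frame multi-label output.
import tensorflow as tf

def build_sed_model(n_frames=500, n_feats=64, n_events=6):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_frames, n_feats)),
        tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True)),
        # Sigmoid (not softmax): several events may be active per frame.
        tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(n_events, activation="sigmoid")),
    ])
    model.compile("adam", "binary_crossentropy")
    return model

model = build_sed_model()
```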

Sound Enhancement of Low-Sample-Rate Audio Using LMS in DWT Domain (DWT영역에서 LMS를 이용한 저 샘플링 비율 오디오 신호의 음질 향상)

  • 백수진;윤원중;박규식
    • The Journal of the Acoustical Society of Korea / v.23 no.1 / pp.54-60 / 2004
  • To mitigate the storage and network bandwidth problems of full CD-quality audio, digital audio is commonly restricted in sampling rate and bandwidth or compressed with schemes such as MP3. However, such audio can only reproduce a narrower frequency range than regular CD quality because of the Nyquist sampling theorem, and it therefore loses the rich spatial information carried in the high frequencies. The purpose of this paper is to propose an efficient high-frequency enhancement of low-sample-rate audio using adaptive filtering with DWT analysis and synthesis. The proposed algorithm uses the LMS adaptive algorithm to estimate the missing high-frequency content in the DWT domain and then reconstructs the spectrally enhanced audio using the DWT synthesis procedure. Several experiments with real speech and audio were performed and compared with another algorithm. From spectrograms and listening tests, we confirm that the proposed algorithm outperforms the other algorithm and works well for most audio material.
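
A toy version of the idea, assuming NumPy and PyWavelets: train an LMS filter to predict detail (high-band) DWT coefficients from approximation (low-band) coefficients on wideband audio, then synthesize details for narrowband input. The signals, filter length, and step size below are placeholders.

```python
# LMS in the DWT domain: learn a low-band -> high-band coefficient
# mapping, then reconstruct spectrally enhanced audio with the inverse DWT.
import numpy as np
import pywt

def lms_train(x, d, taps=16, mu=1e-3):
    """Adapt w so that w . x[n-taps:n] tracks d[n]."""
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]
        e = d[n] - w @ u
        w += mu * e * u
    return w

wide = np.random.randn(8192)                     # stand-in wideband audio
cA, cD = pywt.wavedec(wide, "db4", level=1)      # low / high band coefficients
w = lms_train(cA, cD)                            # learn the mapping offline

cA_nb, _ = pywt.wavedec(np.random.randn(8192), "db4", level=1)
cD_hat = np.convolve(cA_nb, w)[:len(cA_nb)]      # estimate missing details
enhanced = pywt.waverec([cA_nb, cD_hat], "db4")  # DWT synthesis
```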