• Title/Summary/Keyword: Sound perception

Investigating the Effects of Hearing Loss and Hearing Aid Digital Delay on Sound-Induced Flash Illusion

  • Moradi, Vahid;Kheirkhah, Kiana;Farahani, Saeid;Kavianpour, Iman
    • Journal of Audiology & Otology / v.24 no.4 / pp.174-179 / 2020
  • Background and Objectives: The integration of auditory and visual speech information improves speech perception; however, if the auditory input is disrupted by hearing loss, auditory and visual inputs cannot be fully integrated. Temporal coincidence of auditory and visual input is also a critically important factor in integrating these two senses, and the acoustic pathway is time-delayed when the signal passes through a hearing aid's digital signal processing. Therefore, this study investigated the effects of hearing loss and hearing aid digital delay on the sound-induced flash illusion. Subjects and Methods: A total of 13 adults with normal hearing, 13 with mild to moderate hearing loss, and 13 with moderate to severe hearing loss were enrolled. The sound-induced flash illusion test was then conducted and the results analyzed. Results: Hearing aid digital delay and hearing loss had no detrimental effect on the sound-induced flash illusion. Conclusions: Transmission velocity and the neural transduction rate of auditory inputs are decreased in patients with hearing loss, so auditory and visual sensory inputs cannot be fully integrated. When a hearing aid was prescribed, however, the transmission rate of the auditory input was approximately normal. It can therefore be concluded that the processing delay of the hearing aid circuit is insufficient to disrupt the integration of auditory and visual information.

Evaluation of the Field Application of "Spontaneous Acoustic Field Reproduction System (SAFRS)" to Propose Soundscape (사운드스케이프 조성을 위한 능동형 음장조성시스템의 현장적용 평가)

  • Kook, Chan;Jang, Gil-Soo;Jeon, Ji-Hyeon
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.17 no.4 s.121 / pp.289-297 / 2007
  • SAFRS is a system designed to reproduce harmonic sound in a space according to environmental factors; here, "harmonic sound" means sound judged suitable by subjective tests, using methods suggested in earlier studies. In this research, SAFRS was installed in the square of D University to evaluate and verify its effectiveness, and the following evaluations were carried out: 1) sound perception, frequency, volume, and harmony with the space; 2) images of the square and its acoustic environment; and 3) the acoustic environment with the existing sounds, the fountain sound, the sound produced by SAFRS, and both together. The results can be summarized as follows. 1) In the evaluation of the acoustic environment, no relationship was found between the perception of sounds produced by SAFRS and their frequency or volume, but volume was inversely proportional to harmony with the space. 2) In the image evaluation, the relationship between the space image and the sound image was proportional in every case except the main road at night, meaning that proposing sound together with matching visual contents would be more effective than proposing sound alone. 3) The evaluation of the acoustic environment showed that the perception effect at night was higher than in the daytime when only the acoustic element was given, and that when both were given, the effect increased if the visual element matched the acoustic element. This confirms that harmony between visual and acoustic elements is very important.

Sound Source Localization using HRTF database

  • Hwang, Sung-Mok;Park, Young-Jin;Park, Youn-Sik
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.751-755 / 2005
  • We propose a sound source localization method using the head-related transfer function (HRTF), to be implemented on a robot platform. In conventional localization methods, the location of a sound source is estimated from the time delays of wave fronts arriving at each microphone of an array in the free field. In the case of a human head, this corresponds to the inter-aural time delay (ITD), which is simply the time delay of incoming sound waves between the two ears. Although ITD is an excellent cue for lateral perception on the horizontal plane, tracking the sound location from ITD alone often causes confusion, because each sound source and its mirror image about the inter-aural axis share the same ITD. On the other hand, the HRTFs associated with a dummy-head microphone system, or with a robot platform carrying several microphones, contain not only the proper time delays but also the phase and magnitude distortions caused by diffraction and scattering from shading objects such as the head and body of the platform. As a result, a set of HRTFs for a given platform provides a substantial amount of information about the whereabouts of the source once proper analysis is performed. In this study, we introduce new phase and magnitude criteria to be satisfied by the set of microphone output signals in order to find the sound source location according to an HRTF database obtained empirically in an anechoic chamber with the given platform. The suggested method is verified through an experiment in a household environment, and its performance is compared against the conventional method.
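The front-back ambiguity of ITD described in this abstract can be sketched numerically. The microphone spacing and the simple sine model below are illustrative assumptions, not the paper's HRTF-based method:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
EAR_DISTANCE = 0.18     # m, assumed inter-microphone spacing

def itd_for_azimuth(azimuth_deg):
    """Free-field ITD (seconds) for a far-field source, simple sine model."""
    return (EAR_DISTANCE / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

# A source at 30 degrees and its mirror image at 150 degrees (about the
# inter-aural axis) share the same ITD -- the confusion the abstract describes.
front = itd_for_azimuth(30.0)
back = itd_for_azimuth(150.0)
assert abs(front - back) < 1e-12
```

This is exactly why the paper supplements ITD with the phase and magnitude distortions encoded in the HRTF, which differ between the front and back positions.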

An Acoustical Study of Korean 's' (국어 'ㅅ' 음가에 대한 음향학적 연구)

  • Mun Seung-Jae
    • MALSORI / no.33_34 / pp.11-22 / 1997
  • The degrees of aspiration in Korean [ㅅ] and [ㅆ] were measured in terms of VOT, and the measurements were compared with the aspiration of Korean stops and affricates. It was shown that [ㅅ] should be classified as an 'aspirated' sound together with the Korean aspirated stops and affricate [pʰ, tʰ, kʰ, tʃ], contrary to the traditional classification of the sound as unaspirated. [ㅆ] was confirmed to belong to the same group as the other Korean 'tense' sounds. It was pointed out that there is a gap in the typology of Korean consonants, created by the lack of an unaspirated counterpart of [ㅅ], and it was suggested that the extinct Korean sound [ㅿ] be considered a possible candidate for the gap. A perception test was also suggested for further acoustic analysis of Korean [ㅅ] and [ㅆ].
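VOT itself is a simple interval measurement: the time from the stop-release burst to the onset of voicing. A minimal sketch follows; the category threshold is a hypothetical illustration, since the real aspirated/unaspirated boundary must come from the measured VOT distributions:

```python
# Hypothetical boundary for illustration only; the paper derives categories
# from measured VOT values, not from a fixed threshold.
ASPIRATION_THRESHOLD_MS = 50.0

def vot_ms(burst_time_s, voicing_onset_s):
    """Voice onset time in milliseconds: voicing onset minus release burst."""
    return (voicing_onset_s - burst_time_s) * 1000.0

def classify_aspiration(vot):
    """Toy two-way classification by VOT."""
    return "aspirated" if vot >= ASPIRATION_THRESHOLD_MS else "unaspirated"
```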

Implementation of Muscular Sense into both Color and Sound Conversion System based on Wearable Device (웨어러블 디바이스 기반 근감각-색·음 변환 시스템의 구현)

  • Bae, Myungjin;Kim, Sungill
    • Journal of Korea Multimedia Society / v.19 no.3 / pp.642-649 / 2016
  • This paper presents a method for converting muscular sense into both visual and auditory senses based on synesthetic perception. Muscular sense can be defined by the rotation angles, direction changes, and degrees of motion of the human body. Synesthetic interconversion can be learned, so intentional synesthetic phenomena can be created. In this paper, the muscular sense was converted into color and sound signals, which account for the great majority of synesthetic phenomena. Muscular sense was measured using an AHRS (attitude heading reference system), and the roll, yaw, and pitch signals of the AHRS were converted into the three basic elements of color and of sound, respectively. The proposed method was successfully applied to a wearable device, the Samsung gear S.
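A mapping of the three AHRS angles to three color elements and three sound elements can be sketched as below. The angle ranges, output ranges, and the specific pairings (roll to red and frequency, and so on) are illustrative assumptions, not the paper's calibrated mapping:

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def muscular_sense_to_color_and_sound(roll, pitch, yaw):
    """Map AHRS angles (degrees) to an RGB triple and a (freq, gain, timbre) triple."""
    r = round(scale(roll, -180, 180, 0, 255))
    g = round(scale(pitch, -90, 90, 0, 255))
    b = round(scale(yaw, -180, 180, 0, 255))
    freq_hz = scale(roll, -180, 180, 220.0, 880.0)  # tone pitch
    gain = scale(pitch, -90, 90, 0.0, 1.0)          # loudness
    timbre = int(scale(yaw, -180, 180, 0, 7))       # index into a waveform bank
    return (r, g, b), (freq_hz, gain, timbre)
```

A neutral posture (all angles zero) lands in the middle of each output range, which is one reasonable design choice for a wearable that starts from rest.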

Development of High Power Telephone for Hearing Impaired Person (난청인용 고출력 전화기 개발)

  • Lee, S.M.;Kim, I.Y.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.173-174 / 1998
  • We developed a high-power telephone for hearing-impaired persons (HIPs) who cannot communicate with others using a general telephone, which cannot deliver sound loud enough for an HIP to understand telephone speech. In this study, we developed a method of telephone speech amplification suited to HIPs, together with effective suppression of the howling that occurs as a side effect of amplification. In our new telephone, the speech sound is divided into three band-pass filter paths, each amplified to fit the HIP's hearing ability, and howling is monitored in the time domain. Tests of the telephone showed that we can amplify the sound by as much as 40 dB, which is very useful to HIPs and increases their perception of telephone speech.
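The per-band amplification and a time-domain howling check can be sketched as follows. The band gains and the energy threshold are hypothetical values, and the band-pass filtering itself is assumed to have been done upstream:

```python
# Hypothetical per-band gains (dB) fitted to a listener's hearing ability;
# the device described splits speech into three band-pass paths first.
BAND_GAINS_DB = {"low": 20.0, "mid": 35.0, "high": 40.0}

def db_to_linear(db):
    """Convert a gain in dB to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def amplify_bands(bands):
    """bands: dict of band name -> list of samples (already band-pass filtered)."""
    return {name: [s * db_to_linear(BAND_GAINS_DB[name]) for s in samples]
            for name, samples in bands.items()}

def howling_suspected(samples, threshold=0.9):
    """Naive time-domain monitor: sustained near-clipping output suggests feedback."""
    return all(abs(s) >= threshold for s in samples[-32:]) if len(samples) >= 32 else False
```

The 40 dB ceiling in the abstract corresponds to a linear factor of 100, which is why a feedback monitor is needed at all: at that gain, leakage from earpiece to microphone easily closes the howling loop.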

A Study on the Effects of Visual and Aural Information on Environmental Sound Amenity Evaluation (시각 및 청각 정보가 환경음의 쾌적성 평가에 미치는 영향에 관한 연구)

  • Shin, Hoon;Baek, Kun-Jong;Song, Min-Jeong;Jang, Gil-Soo
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.17 no.9 / pp.813-818 / 2007
  • This study investigates how added visual and aural information affects the perception of road traffic noise in a laboratory experiment. ME (magnitude estimation) and SD (semantic differential) evaluations of the visual and aural effects were carried out by 43 university students. As a result, a psychological reduction effect of up to 10% was shown below 65 dB(A). In terms of noise level, vision was found to contribute a reduction of about 7 dB(A) and sound about 5 dB(A); when both were given simultaneously, however, sound was the main contributor to reducing the annoyance of the noise, with vision second. In the urban central setting, this effect (2 dB(A) at noise below 65 dB(A)) was smaller than in the field test.

The study on the development of vehicle warning sounds for improving Emotional Quality (감성품질 향상을 위한 자동차 경고음 개발)

  • Park, Dong-Chul;Hong, Seok-Gwan;Jung, Hae-Yang
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2011.10a / pp.540-543 / 2011
  • Vehicle warning sounds are an important part of overall vehicle quality, so the study of these sounds is increasing. In this study, an experiment on reproduction performance with respect to speaker position was conducted. Based on the results of a jury test, a sound analysis method was developed that reflects the perception of warning sounds as a function of vehicle speed, and a development process for warning sounds was suggested.

Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High-Resolution Spectral Features

  • Kim, Hyoung-Gook;Kim, Jin Young
    • ETRI Journal / v.39 no.6 / pp.832-840 / 2017
  • Recently, deep recurrent neural networks have achieved great success in various machine learning tasks and have also been applied to sound event detection. Detecting temporally overlapping sound events in realistic environments is much more challenging than the monophonic detection problem. In this paper, we present an approach to improving the accuracy of polyphonic sound event detection in multichannel audio, based on gated recurrent neural networks combined with auditory spectral features. In the proposed method, spatial and spectral-domain noise-reduced harmonic features based on human hearing perception are extracted from the multichannel audio and used as high-resolution spectral inputs to train gated recurrent neural networks, which provide a fast and stable convergence rate compared to long short-term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms conventional approaches.
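The gating mechanism behind the GRU's fast, stable convergence can be illustrated with a single scalar unit. The weights below are arbitrary toy values, and a real detector uses weight matrices over the multichannel spectral features rather than scalars:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    """One scalar GRU step: update gate z, reset gate r, candidate h_tilde."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev + w["bz"])
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev + w["br"])
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev) + w["bh"])
    # z blends the old state with the candidate: this is the gating that
    # keeps gradients stable over long frame sequences.
    return (1.0 - z) * h_prev + z * h_tilde

# Toy weights; a trained detector would learn these from labeled audio.
weights = {"wz": 1.0, "uz": 0.0, "bz": 0.0,
           "wr": 1.0, "ur": 0.0, "br": 0.0,
           "wh": 1.0, "uh": 0.5, "bh": 0.0}

h = 0.0
for frame_energy in [0.1, 0.8, 0.9, 0.2]:  # one feature value per audio frame
    h = gru_step(h, frame_energy, weights)
```

In a detector, a threshold on the (sigmoid-activated) output per event class then decides which of the overlapping events are active in each frame.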

Enhancement of the 3D Sound's Performance using Perceptual Characteristics and Loudness (지각 특성 및 라우드니스를 이용한 입체음향의 성능 개선)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • Journal of Broadcast Engineering / v.16 no.5 / pp.846-860 / 2011
  • The human binaural auditory system can determine the direction and distance of sound sources using the inter-aural intensity difference (IID), the inter-aural time difference (ITD), and/or the spectral shape difference (SSD). This information is generated by the acoustic transfer of a sound source to the pinna, the outer ear, and is captured by the head-related transfer function (HRTF), from which a virtual sound system can be created. However, the performance of 3D sound is not always satisfactory because of the non-individual characteristics of the HRTF. In this paper, we propose an algorithm that uses human auditory characteristics for accurate perception; to achieve this, the excitation energy of the HRTF, the global masking threshold, and loudness are applied. An informal listening test shows that the proposed method improves sound localization much better than conventional methods.
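Of the cues named in this abstract, IID and ITD are the simplest to express numerically. A minimal sketch, under our own simplifying assumption of a symmetric head:

```python
import math

def iid_db(left_rms, right_rms):
    """Inter-aural intensity difference in dB; positive = louder at the left ear."""
    return 20.0 * math.log10(left_rms / right_rms)

def itd_seconds(left_arrival_s, right_arrival_s):
    """Inter-aural time difference; positive = reached the right ear first."""
    return left_arrival_s - right_arrival_s

# A source directly ahead of a symmetric head yields IID = 0 and ITD = 0,
# which is why the SSD carried by the HRTF is needed to resolve such cases.
assert iid_db(0.5, 0.5) == 0.0
assert itd_seconds(0.001, 0.001) == 0.0
```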