• Title/Summary/Keyword: Auditory-perception

Search Results: 160

Sensitive Period of Auditory Perception and Linguistic Discrimination

  • Cha, Kyung-Whan;Jo, Hannah
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.59-67
    • /
    • 2014
  • The purpose of this study is to scientifically examine Kuhl's (2011) critical period graph, originally from Johnson and Newport (1989), from the perspective of auditory perception and linguistic discrimination. This study utilizes two types of experiments (auditory perception and linguistic phoneme discrimination) with five age groups (5 years, 6-8 years, 9-13 years, 15-17 years, and 20-26 years) of Korean English learners. Auditory perception is examined via ultrasonic sounds of the kind commonly used in the medical field. In addition, each group is measured in terms of its ability to discriminate minimal pairs in Chinese. Since almost all Korean students already have some amount of English exposure, the researchers selected phonemes in Chinese, a foreign language to which none of the subject groups had been exposed. The results are almost completely in accordance with Kuhl's critical period graph for auditory perception and linguistic discrimination; a sensitive age is found at 8 years. The results show that the auditory capability of kindergarten children is significantly better than that of older students, as measured by their ability to perceive ultrasonic sounds and to distinguish ten minimal pairs in Chinese. This finding strongly implies that human auditory ability is a key factor in the sensitive period of language acquisition.

Modeling of Distance Localization by Using an Extended Auditory Parallax Model (확장된 음향적 시차 모델을 이용한 음상 거리정위의 모델화)

  • Kim, Hae-Young
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.1
    • /
    • pp.30-39
    • /
    • 2004
  • This study aims at establishing a digital signal processing technique to control 3-D sound localization, focusing especially on the role of the information provided by the Head-Related Transfer Function (HRTF). In order to clarify the cues that control auditory distance perception, two conventional models, the Hirsch-Tahara model and the auditory parallax model, were examined. As a result, it was shown that both models have limitations in universally explaining auditory distance perception. Hence, the auditory parallax model was extended so as to apply to broader cases of auditory distance perception. The results of an experiment simulating HRTFs based on the extended parallax model showed that the cues provided by the new model were almost sufficient to control the perception of auditory distance from an actual sound source located within about 2 m.
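The parallax cue underlying the model can be illustrated with simple head geometry: for a nearby frontal source, the sound incidence angles at the two ears differ, and this disparity shrinks rapidly with distance. A minimal sketch, assuming a spherical head of radius 8.5 cm with a source straight ahead (the radius and the frontal-source geometry are illustrative assumptions, not values from the paper):

```python
import math

HEAD_RADIUS = 0.085  # m; assumed spherical-head approximation

def parallax_angle_deg(distance, head_radius=HEAD_RADIUS):
    """Disparity (in degrees) between the sound incidence angles at the
    two ears for a source straight ahead of the head center at `distance` m."""
    # Each ear sees the source offset by atan(r/d) from straight ahead,
    # in opposite directions, so the inter-ear disparity is twice that.
    return 2.0 * math.degrees(math.atan(head_radius / distance))

for d in (0.5, 1.0, 2.0, 5.0):
    print(f"{d:.1f} m -> {parallax_angle_deg(d):.1f} deg")
```

The disparity falls to only a few degrees beyond roughly 2 m, which is consistent with the abstract's finding that the parallax cue controls distance perception only for sources within about 2 m.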

Modeling of distance localization using an extended auditory parallax model (확장폭주각 모델을 이용한 음상거리정위의 모델화)

  • KIM Hae-Young;SUZUKI Yoiti;TAKANE Shouichi;SONE Toshio
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.141-146
    • /
    • 1999
  • This study aims at establishing a digital signal processing technique to control 3-D sound localization, focusing especially on the role of the information provided by the Head-Related Transfer Function (HRTF). In order to clarify the cues that control auditory distance perception, two conventional models, the Hirsch-Tahara model and the auditory parallax model, were examined. As a result, it was shown that both models have limitations in universally explaining auditory distance perception. Hence, the auditory parallax model was extended so as to apply to broader cases of auditory distance perception. The results of an experiment simulating HRTFs based on the extended parallax model showed that the cues provided by the new model were almost sufficient to control the perception of auditory distance from an actual sound source located within about 2 m.


SPATIAL EXPLANATIONS OF SPEECH PERCEPTION: A STUDY OF FRICATIVES

  • Choo, Won;Mark Huckvale
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.399-403
    • /
    • 1996
  • This paper addresses issues of perceptual constancy in speech perception through the use of a spatial metaphor for speech sound identity, as opposed to a more conventional characterisation with multiple interacting acoustic cues. This spatial representation leads to a correlation between phonetic, acoustic and auditory analyses of speech sounds, which can serve as the basis for a model of speech perception based on the general auditory characteristics of sounds. The correlations between the phonetic, perceptual and auditory spaces of the set of English voiceless fricatives /f θ s ʃ h/ are investigated. The results show that the perception of fricative segments may be explained in terms of a 2-dimensional auditory space in which each segment occupies a region. The dimensions of the space were found to be the frequency of the main spectral peak and the 'peakiness' of the spectrum. These results support the view that perception of a segment is based on its occupancy of a multi-dimensional parameter space. In this way, final perceptual decisions on segments can be postponed until higher-level constraints can also be met.


Familiarity of Sounds as a Cue of Auditory Distance Perception

  • Min, Yoon-Ki
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.3E
    • /
    • pp.19-24
    • /
    • 2000
  • The present research examined the contribution of sounds' familiarity to auditory distance perception, while attempting to control the influences of unavoidable physical differences among sounds. Different vocal "styles" ("shouts", "whispers" and "a normal conversation") of a man and a woman were recorded digitally and presented from a stationary loudspeaker to blindfolded listeners in a semi-anechoic chamber. Playback levels were adjusted to remove extraneous sound level cues. The results showed that the shouting voice was judged as farthest, the whispering voice as closest, and the conversational voice as intermediate. The findings suggest that the perception of auditory distance may be affected by past experience (or familiarity).


A Study on the Human Auditory Scaling (인간의 청각 척도에 관한 고찰)

  • Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.125-134
    • /
    • 1997
  • Human beings can perceive various aspects of sound, including loudness, pitch, length, and timbre. Recently, many studies have been conducted to clarify the complex auditory scales of the human ear. This study critically reviews some of these scales (decibel, sone, and phon for loudness perception; mel and bark for pitch) and proposes applying the scales to normalize acoustic correlates of human speech. One of the most important aspects of human auditory perception is its nonlinearity, which should be incorporated into linear speech analysis and synthesis systems. Further studies using more sophisticated equipment are desirable to refine these scales through the analysis of human auditory perception of complex tones or speech. This will lead scientists to develop better speech recognition and synthesis devices.

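The nonlinear pitch scales reviewed in the abstract above can be written down directly. A minimal sketch using two widely published approximations, O'Shaughnessy's mel formula and Traunmüller's Bark formula (these specific formulas are standard references, not taken from the paper itself):

```python
import math

def hz_to_mel(f_hz):
    """O'Shaughnessy's mel-scale approximation (1000 Hz maps to ~1000 mel)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def hz_to_bark(f_hz):
    """Traunmüller's approximation of the Bark critical-band scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# The ear's nonlinearity: a fixed step in Hz spans fewer and fewer
# perceptual units as frequency increases.
for f in (100.0, 1000.0, 4000.0):
    print(f"{f:.0f} Hz -> {hz_to_mel(f):.0f} mel, {hz_to_bark(f):.2f} Bark")
```

Normalizing acoustic measurements onto such scales, as the paper proposes, amounts to applying mappings like these before comparing speech sounds.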

Temporal-perceptual Judgement of Visuo-Auditory Stimulation (시청각 자극의 시간적 인지 판단)

  • Yu, Mi;Lee, Sang-Min;Piao, Yong-Jun;Kwon, Tae-Kyu;Kim, Nam-Gyun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.24 no.1 s.190
    • /
    • pp.101-109
    • /
    • 2007
  • For the spatio-temporal perception of visuo-auditory stimuli, research supports the optimal integration hypothesis: the perceptual process is optimized through the interaction of the senses for precision of perception. Thus, when visual information, generally considered dominant over the other senses, is ambiguous, information from another sense such as an auditory stimulus influences the perceptual process in interaction with the visual information. We therefore performed two experiments to ascertain the conditions under which the senses interact and the influence of those conditions. We considered the interaction of visuo-auditory stimulation in free space, the color of the visual stimulus, and sex differences among normal participants. In the first experiment, 12 participants were asked to judge the change in the frequency of audio-visual stimulation using a visual flicker and auditory flutter stimulation in free space. When auditory temporal cues were presented, the change in the frequency of the visual stimulation was associated with a perceived change in the frequency of the auditory stimulation, as in previous studies using headphones. In the second experiment, 30 males and 30 females were asked to judge the change in the frequency of audio-visual stimulation using a colored visual flicker and auditory flutter stimulation. In the color condition using red and green, both male and female participants showed the same perceptual tendency; however, the standard deviation for females was larger than that for males. These results imply that audio-visual asymmetry effects are influenced by the cues of the visual and auditory information, such as the orientation between the auditory and visual stimuli and the color of the visual stimulus.

Modeling of the Time-frequency Auditory Perception Characteristics Using Continuous Wavelet Transform (연속 웨이브렛 변환을 이용한 청각계의 시간-주파수 인지 특성 모델링)

  • 이상권;박기성;서진성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.8
    • /
    • pp.81-87
    • /
    • 2001
  • The human auditory system behaves as a "constant Q" system. The STFT (Short-Time Fourier Transform) is not suitable as an auditory perception model since it has constant bandwidth. In this paper, the CWT (Continuous Wavelet Transform) is employed for the auditory filter model; in the CWT, the frequency resolution can be adjusted to match auditory sensation models. The proposed CWT is applied to the modeling of the JNVF (Just Noticeable Variation in Frequency). In addition, other signal processing methods such as the STFT, VER-FFT and VFR-STFT are discussed. Among these methods, the JNVF model based on the CWT fits the JNVF of the auditory model best, although it requires quite a long computation time.

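The constant-Q property that makes the CWT a better fit than the STFT can be seen analytically: dilating a Morlet wavelet divides its center frequency and its bandwidth by the same factor, so their ratio Q is scale-invariant, whereas an STFT window fixes the bandwidth once for all frequencies. A minimal sketch (the ω0 = 6 Morlet parameter and unit time spread are conventional assumptions, not values from the paper):

```python
import math

OMEGA0 = 6.0   # conventional Morlet center angular frequency (assumed)
SIGMA_T = 1.0  # time spread of the mother wavelet's Gaussian envelope (assumed)

def morlet_band(scale):
    """Center frequency and spectral spread (Hz) of a Morlet wavelet
    dilated by `scale`; both shrink by the same factor as scale grows."""
    fc = OMEGA0 / (2.0 * math.pi * scale)
    sigma_f = 1.0 / (2.0 * math.pi * SIGMA_T * scale)  # Fourier pair of the envelope
    return fc, sigma_f

for scale in (1.0, 2.0, 4.0):
    fc, sf = morlet_band(scale)
    print(f"scale {scale}: fc={fc:.3f} Hz, bw={sf:.3f} Hz, Q={fc / sf:.1f}")
```

An STFT with a fixed window would instead give the same spectral spread at every center frequency, which is why it mismatches the ear's roughly constant-Q filter bank.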

Implementation of the Perception Process in Human-Vehicle Interactive Models (HVIMs) Considering the Effects of Auditory Peripheral Cues (청각 주변 자극의 효과를 고려한 효율적 차량-운전자 상호 연동 모델 구현 방법론)

  • Rah, Chong-Kwan;Park, Min-Yong
    • Journal of the Ergonomics Society of Korea
    • /
    • v.25 no.3
    • /
    • pp.67-75
    • /
    • 2006
  • HVIMs consist of simulated driver models implemented with series of mathematical functions and computerized vehicle dynamics models. To model the perception process effectively, as part of the driver models, psychophysical nonlinearity should be considered not only for a single-modal stimulus but also for stimuli of multiple modalities and the interactions among them. A series of human factors experiments were conducted using the primary sensory modalities of vision and audition to find the effects of auditory cues in visual velocity estimation tasks. Variations of the auditory cues were found to enhance or reduce the perceived intensity of velocity as their level changed. These results indicate that conventional psychophysical power functions cannot be applied to the perception process of HVIMs with multi-modal stimuli. 'Ruled surfaces' in a 3-D coordinate system (with the intensities of both kinds of stimuli and the ratio of enhancement on the respective coordinates) were suggested to model the realistic perception process of multi-modal HVIMs.
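The "conventional psychophysical power function" the abstract refers to is Stevens' power law, in which perceived magnitude grows as a power of stimulus intensity; the paper's point is that a multi-modal setting needs an additional enhancement term, which the authors capture with a ruled surface. A minimal single-modality sketch (the exponent 0.3, typical for loudness, is an illustrative assumption, not from the paper):

```python
def stevens(intensity, k=1.0, a=0.3):
    """Stevens' power law: perceived magnitude = k * intensity**a."""
    return k * intensity ** a

# With a ~ 0.3, a tenfold increase in intensity (about +10 dB)
# roughly doubles the perceived magnitude.
print(stevens(10.0) / stevens(1.0))
```

The ruled-surface proposal effectively replaces this one-argument function with a surface over the intensities of both modalities, so that the enhancement ratio between them can vary with level.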

The Impacts of Auditory and Visual Information on Traffic Noise Perception Using Electroencephalogram (뇌파 측정에 의한 친환경 시.청각 정보의 교통소음 인지도 영향 평가)

  • Park, Sa-Keun;Jang, Gil-Soo;Kook, Chan;Song, Min-Jeong;Shin, Hoon
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2006.11a
    • /
    • pp.41-47
    • /
    • 2006
  • In this study, the influences of environmentally friendly visual and auditory information on traffic noise perception were surveyed by using electroencephalogram (EEG). A green rural region image and a CBD image of an urban city were used as visual information, and traffic noise, a signal, and environmental music were used to detect the impact on EEG variance. It was revealed that the green rural region image increased the α-wave ratio by about 10% and the environmental music increased the α-wave ratio by approximately 40~50%. The results of this study showed that environmentally friendly visual and auditory information had an effect on decreasing perceived traffic noise loudness to some extent.
