• Title/Summary/Keyword: Auditory Information

Aurally Relevant Analysis by Synthesis: VIPER, a New Approach to Sound Design

  • Daniel, Peter;Pischedda, Patrice
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2003.05a / pp.1009-1009 / 2003
  • VIPER, a new tool for the VIsual PERception of sound quality and for sound design, is presented. Visualizing sound quality requires a signal analysis that models the information processing of the ear. The first stage of the signal processing implemented in VIPER computes an auditory spectrogram with a filter bank adapted to the time and frequency resolution of the human ear. The second stage removes redundant information by extracting time and frequency contours from the auditory spectrogram, in analogy to the contours of the visual system. In a third stage, the contours and/or the auditory spectrogram can be resynthesized, confirming that only aurally relevant information was extracted. The visualization of the contours in VIPER allows the important components of a signal to be grasped intuitively. The contribution of parts of a signal to the overall quality can easily be auralized by editing and resynthesizing the contours or the underlying auditory spectrogram. Resynthesis of the time contours alone allows, for example, impulsive components to be auralized separately from tonal components. Further processing of the contours identifies tonal parts in the form of tracks. Audible differences between two versions of a sound can be inspected visually in VIPER with the help of auditory distance spectrograms. Applications to the sound design of several car interior noises are shown.
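
A minimal sketch of the kind of first-stage analysis this abstract describes: an auditory spectrogram computed by an ear-adapted filter bank. The gammatone filter shape, the ERB formula, and all parameter values below are common textbook choices assumed here for illustration; they are not taken from VIPER itself.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammatone_ir(fc, fs, dur=0.05, order=4, b=1.019):
    """Impulse response of a 4th-order gammatone filter centered at fc."""
    t = np.arange(int(dur * fs)) / fs
    return (t**(order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)
            * np.cos(2 * np.pi * fc * t))

def auditory_spectrogram(x, fs, n_channels=32, fmin=100.0, fmax=8000.0, hop=64):
    """Channel envelopes on an ear-like frequency grid -> 2-D image."""
    fcs = np.geomspace(fmin, fmax, n_channels)   # roughly ERB-like spacing
    rows = [np.abs(np.convolve(x, gammatone_ir(fc, fs), mode="same"))[::hop]
            for fc in fcs]                       # crude envelope per channel
    return np.array(rows), fcs
```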

Adaptive Noise Suppression System Based on Human Auditory Model (인간의 청각모델에 기초한 잡음환경에 적응된 잡음억압 시스템)

  • Choi, Jae-Seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.421-424 / 2008
  • This paper proposes an adaptive noise suppression system based on a human auditory model to enhance speech degraded by various background noises. The proposed system detects voiced and unvoiced sections in each frame, applies an adaptive auditory process, and then suppresses the noise in the speech signal using a neural network that operates on both the amplitude and phase components. Experiments based on signal-to-noise ratio measurements confirm that the proposed system is effective for speech degraded by various noises.
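
As a rough illustration of the per-frame voiced/unvoiced detection the abstract mentions, the sketch below uses short-time energy and zero-crossing rate. The frame sizes and thresholds are illustrative assumptions; the paper's actual detector, auditory process, and amplitude/phase neural network are not reproduced here.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def voiced_mask(x, frame_len=256, hop=128,
                energy_thresh=1e-3, zcr_thresh=0.15):
    """Boolean mask per frame: True where the frame looks voiced."""
    frames = frame_signal(x, frame_len, hop)
    energy = np.mean(frames**2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Voiced speech: relatively high energy and low zero-crossing rate.
    return (energy > energy_thresh) & (zcr < zcr_thresh)
```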

Modeling of Distance Localization Using an Extended Auditory Parallax Model (확장폭주각 모델을 이용한 음상거리정위의 모델화)

  • Kim, Hae-Young;Suzuki, Yoiti;Takane, Shouichi;Sone, Toshio
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.141-146 / 1999
  • This study aims at establishing a digital signal processing technique to control 3-D sound localization, focusing in particular on the role of the information provided by the Head-Related Transfer Function (HRTF). In order to clarify the cues that control auditory distance perception, two conventional models, the Hirsch-Tahara model and the auditory parallax model, were examined. It was shown that both models have limitations in universally explaining auditory distance perception. Hence, the auditory parallax model was extended so as to apply to a broader range of cases of auditory distance perception. The results of an experiment simulating HRTFs based on the extended parallax model showed that the cues provided by the new model were almost sufficient to control the perception of auditory distance from an actual sound source located within about 2 m.
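
The geometric intuition behind an auditory parallax model can be sketched as follows: the directions from the two ears to a nearby source differ by a parallax angle that shrinks with distance, so that angle can serve as a distance cue. The head half-width below is an assumed illustrative value, and this toy geometry is not the paper's extended model.

```python
import numpy as np

HEAD_RADIUS_M = 0.0875  # assumed ear half-separation, for illustration only

def parallax_angle_deg(distance_m, half_sep=HEAD_RADIUS_M):
    """Angle between the two ear-to-source directions, frontal source."""
    return np.degrees(2.0 * np.arctan(half_sep / distance_m))

for d in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"{d:4.2f} m -> {parallax_angle_deg(d):5.2f} deg")
# The angle changes rapidly below ~2 m and flattens beyond that, which is
# consistent with the abstract's finding that the cue controls perceived
# distance for sources within about 2 m.
```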

Different Types of Encoding and Processing in Auditory Sensory Memory according to Stimulus Modality (자극양식에 따른 청감각기억에서의 여러가지 부호화방식과 처리방식)

  • Kim, Jeong-Hwan;Lee, Man-Young
    • The Journal of the Acoustical Society of Korea / v.9 no.4 / pp.77-85 / 1990
  • This study investigated Greene and Crowder's (1984) modified PAS model, according to which, in a short-term memory recall task, the recency and suffix effects in the auditory and visual conditions are mediated by the same mechanisms. It also investigated whether auditory information and mouthed information are encoded by the same codes. Through experimental manipulation of the phonological nature of the stimuli, the presence of a differential recall effect for consonant-varied and vowel-varied stimuli in the auditory and mouthing conditions, which is supposed to interact with the recency and suffix effects, was investigated. The results show that the differential recall effect between consonants and vowels exists only in the auditory condition, not in the mouthing condition. Thus, the result supports Turner's position.

The Analysis of Sound Attributes on Sensibility Dimensions (소리의 청각적 속성에 따른 감성차원 분석)

  • Han, Kwang-Hee;Lee, Ju-Hwan
    • Science of Emotion and Sensibility / v.9 no.1 / pp.9-17 / 2006
  • Music is commonly called the 'language of emotions' because sound is a rich modality for communicating human sensibility information. Most research on auditory displays, however, has focused on improving efficiency as measured by user performance data such as completion time and accuracy. Recently, many researchers in auditory displays have acknowledged that individual preference and sensible satisfaction may be more important factors than performance data. On this ground, the present study constructed sound sensibility dimensions ('Pleasure', 'Complexity', and 'Activity'), systematically examined the attributes of sound on these dimensions, and analyzed their meanings. The results show that the sensibility dimensions depend on the individual sound attributes, and that some attributes interact with one another. The results thus point to useful ways of applying affective influence in auditory displays that require sensibility information tied to sound attributes.
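
For readers unfamiliar with how sensibility dimensions of this kind are typically derived, the sketch below runs a principal-component analysis over semantic-differential ratings; the first few components would correspond to dimensions like 'Pleasure', 'Complexity', and 'Activity'. The random data and the choice of three components are illustrative assumptions, not the study's materials or method.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.normal(size=(40, 12))   # 40 sounds x 12 adjective scales (placeholder data)

X = ratings - ratings.mean(axis=0)    # center each adjective scale
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance share per component
loadings = Vt[:3]                     # first three components ~ candidate dimensions
print("variance explained by 3 components:", round(explained[:3].sum(), 2))
```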

Content-Based Classification of Audio Signal Using Discriminant Function (식별함수를 이용한 오디오신호의 내용기반 분류)

  • Kim, Young-Sub;Lee, Kwang-Seok;Koh, Si-Young;Hur, Kang-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.201-204 / 2007
  • In this paper, we study content-based analysis and classification based on the composition of a feature-parameter pool for auditory signals, in order to implement an auditory indexing and searching system. The auditory data are classified into various primitive auditory types. We describe the analysis and feature extraction methods for the feature parameters available for this classification. We then compose the feature-parameter pool per indexing group, and compare and analyze the auditory data with respect to the inclusion level and indexing criterion of each audio category. Based on these results, we compose feature vectors of the audio data according to the classification categories and run classification experiments using a discriminant function.
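
A hedged sketch of the overall recipe: extract a small feature vector per clip, then fit a Fisher linear discriminant as the discriminant function. The particular features (RMS energy, zero-crossing rate, spectral centroid) are assumptions for illustration; the paper's feature-parameter pool is not reproduced.

```python
import numpy as np

def features(x, fs):
    """Tiny feature vector for one audio clip."""
    rms = np.sqrt(np.mean(x**2))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([rms, zcr, centroid / (fs / 2)])

def fit_lda_2class(X0, X1):
    """Fisher linear discriminant for two classes of feature vectors."""
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter matrix, summed over both classes.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b  # classify as class 1 if w @ features(x, fs) + b > 0
```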

The Effects of Visual and Auditory Information as a Tool of Emotional Value Assessment (감성 가치 평가를 위한 시각적, 청각적 매체의 효용)

  • Kim, Myung-Suk;Lee, Eun-Chang
    • Journal of Science of Art and Design / v.1 / pp.95-123 / 1999
  • The goal of this research is to develop a visual and auditory tool that enables designers to share the same emotional values as users in the process of user-centered design. Through this research, we intend to provide an aid for narrowing the cognitive gaps between users and designers that arise when emotional needs are transformed into, and understood through, verbal images. In design practice, most tools and techniques for assessing and analyzing emotional needs are borrowed from the marketing field, so the information generated by users and passed to designers takes the form of written words. When the designer's understanding of emotional needs is considered as a product-mediated communication process, the morphological and cognitive information gaps become obvious. This difference can be a false basis for designing around emotional user needs. We therefore built an alternative set of needs-assessment sub-tools based on visual and auditory information, aimed mainly at designers' cognitive gaps and at inter-cultural assessment of emotional needs. As the method of embodiment, first, adjectives related to emotion were classified along their cognitive dimensions; second, visual and auditory data were extracted and their relatedness verified; finally, practicality and effectiveness were tested by building a database. From the results achieved so far: 1. Large cognitive information gaps exist between designers and users in the verbal assessment of emotional needs. 2. With the visual and auditory assessment tool, these gaps could be narrowed more than expected. 3. The fidelity, recognition, and friendliness of designs addressing emotional user needs are likely to improve.

Modeling of Distance Localization by Using an Extended Auditory Parallax Model (확장된 음향적 시차 모델을 이용한 음상 거리정위의 모델화)

  • Kim, Hae-Young
    • The Journal of the Acoustical Society of Korea / v.23 no.1 / pp.30-39 / 2004
  • This study aims at establishing a digital signal processing technique to control 3-D sound localization, focusing in particular on the role of the information provided by the Head-Related Transfer Function (HRTF). In order to clarify the cues that control auditory distance perception, two conventional models, the Hirsch-Tahara model and the auditory parallax model, were examined. It was shown that both models have limitations in universally explaining auditory distance perception. Hence, the auditory parallax model was extended so as to apply to a broader range of cases of auditory distance perception. The results of an experiment simulating HRTFs based on the extended parallax model showed that the cues provided by the new model were almost sufficient to control the perception of auditory distance from an actual sound source located within about 2 m.

Auditory Model Design for Objective Audio Quality Measurement

  • Seo, Dongil;Park, Se-Hyoung;Ryu, Seung-wan;Shin, Jaeho
    • Proceedings of the IEEK Conference / 2002.07c / pp.1717-1720 / 2002
  • This paper presents an objective quality measurement scheme that incorporates properties of the human auditory system. The basilar membrane (BM) acts as a spectrum analyzer, spatially decomposing the signal into frequency components. Each channel of the filter bank is an implementation of an ERB-scaled gammachirp function, with level-dependent asymmetric compensation filters. For validation of the auditory model, we calculate the CPD, and the quality measurement is obtained from the result.
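
A sketch of a passive gammachirp impulse response, the ERB-scaled filter family the abstract names. The bandwidth factor b and chirp coefficient c below are illustrative assumptions, and the level-dependent asymmetric compensation stage is omitted.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammachirp_ir(fr, fs, dur=0.05, order=4, b=1.019, c=-2.96):
    """Passive gammachirp: a gammatone envelope with a log-time chirp term."""
    t = np.arange(1, int(dur * fs)) / fs   # start at 1/fs to avoid log(0)
    env = t**(order - 1) * np.exp(-2 * np.pi * b * erb(fr) * t)
    return env * np.cos(2 * np.pi * fr * t + c * np.log(t))
```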

Auditory Representations for Robust Speech Recognition in Noisy Environments (잡음 환경에서의 음성 인식을 위한 청각 표현)

  • Kim, Doh-Suk;Lee, Soo-Young;Kil, Rhee-M.
    • The Journal of the Acoustical Society of Korea / v.15 no.5 / pp.90-98 / 1996
  • An auditory model is proposed for robust speech recognition in noisy environments. The model consists of cochlear bandpass filters and nonlinear stages, and represents frequency and intensity information efficiently even in noise. Frequency information is obtained from zero-crossing intervals, while intensity information is incorporated through peak detectors and saturating nonlinearities. The robustness of zero crossings for estimating frequency is verified by a derived analytic relationship between the variance of level-crossing interval perturbations and the crossing level. The proposed model is computationally efficient and, compared with other auditory models, free of many unknown parameters. Speaker-independent speech recognition experiments demonstrate the robustness of the proposed method.
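
A minimal sketch of the zero-crossing idea: within a cochlear channel, the intervals between upward zero crossings yield a frequency estimate, and crossing times are less disturbed by additive noise than amplitudes are. The test tone and noise level are illustrative.

```python
import numpy as np

def zc_frequency(x, fs):
    """Dominant frequency from upward zero-crossing intervals."""
    s = np.sign(x)
    up = np.where((s[:-1] <= 0) & (s[1:] > 0))[0]   # upward crossing indices
    if len(up) < 2:
        return 0.0
    return fs / np.median(np.diff(up))   # median interval resists spurious crossings

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)
print(zc_frequency(tone, fs))   # ~440 Hz despite the added noise
```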
