• Title/Summary/Keyword: Auditory model


A Study on Speech Recognition Using Auditory Model and Recurrent Network (청각모델과 회귀회로망을 이용한 음성인식에 관한 연구)

  • Kim, Dong-Jun; Lee, Jae-Hyuk; Yoon, Tae-Sung; Park, Sang-Hui
    • Proceedings of the KOSOMBE Conference / v.1990 no.05 / pp.51-55 / 1990
  • In this study, a peripheral auditory model is used as a frequency feature extractor, and a recurrent network with recurrent links on its input nodes is constructed; the reliability of the recurrent network as a recognizer is demonstrated through recognition tests on four Korean place names and syllables. As a result of this study, a refined weight compensation method is proposed; using this method, the system operation can be improved. The recurrent network in this study reflects the temporal information of the speech signal well.

Adaptive Noise Subtraction in Auditory Evoked Field (적응 필터를 이용한 청각 자극에 의한 뇌자도 신호에서 노이즈 제거)

  • 이동훈; 안창범
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.10 / pp.606-610 / 2003
  • Noise subtraction using reference-channel data has been used to improve the signal-to-noise ratio in magnetoencephalography. In this paper, an adaptive noise subtraction model is proposed and its parameters are optimized. A criterion for determining the optimal update period of the filter coefficients is proposed, based on the ratio of the peak amplitude of the evoked field (N100m) to the output standard deviation. Experiments were carried out using a 40-channel MEG system, and the proposed noise subtraction method shows superior performance to existing non-adaptive methods. A two-dimensional topographic map, generated with cubic spline interpolation, is presented for diagnosis.
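The reference-channel subtraction scheme the abstract describes can be sketched with a standard LMS adaptive canceller; the filter order, step size, and the synthetic signals below are illustrative assumptions, not the paper's actual parameters or MEG data.

```python
import numpy as np

def lms_noise_subtract(primary, reference, order=4, mu=0.01):
    """Adaptive noise subtraction: an FIR filter of length `order` is
    adapted with the LMS rule so its output tracks the part of `primary`
    correlated with the `reference` (noise) channel; the residual e[n]
    is the cleaned signal."""
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]  # newest sample first
        noise_est = w @ x                          # estimated noise component
        e = primary[n] - noise_est                 # cleaned sample
        w += 2.0 * mu * e * x                      # LMS coefficient update
        cleaned[n] = e
    return cleaned

# synthetic demo: an "evoked response" buried in noise that also
# appears on a reference channel
rng = np.random.default_rng(0)
t = np.arange(2000)
evoked = np.sin(2.0 * np.pi * t / 200.0)
noise = rng.standard_normal(2000)
primary = evoked + 0.8 * noise
cleaned = lms_noise_subtract(primary, noise)
mse_before = np.mean((primary - evoked) ** 2)
mse_after = np.mean((cleaned[500:] - evoked[500:]) ** 2)  # after adaptation
```

In this toy setup the canceller learns the 0.8 coupling between reference and primary channels, so the residual mean-square error drops well below the raw value.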

A Study on Speech Recognition Using Auditory Model and Recurrent Network (청각모델과 회귀회로망을 이용한 음성인식에 관한 연구)

  • 김동준; 이재혁
    • Journal of Biomedical Engineering Research / v.11 no.1 / pp.157-162 / 1990
  • In this study, a peripheral auditory model is used as a frequency feature extractor, and a recurrent network with recurrent links on its input nodes is constructed; the reliability of the recurrent network as a recognizer is demonstrated through recognition tests on four Korean place names and syllables. When the general learning rule is used, the weights diverge for long sequences because of the characteristics of the node function in the hidden and output layers. Therefore, a refined weight compensation method is proposed; using this method, the system operation is improved and long data sequences can be used. The recognition results are considerably good, even though time warping and endpoint detection are omitted and the learning and test patterns are constructed from data of average length. The recurrent network used in this study reflects the temporal information of the speech signal well.
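The divergence problem the abstract mentions can be illustrated on a single node with a recurrent input link. The abstract does not give the form of the refined weight compensation, so the sketch below substitutes a generic gradient-norm rescaling step as a stand-in; the network, data, and learning rate are all assumptions for illustration.

```python
import numpy as np

def train_recurrent_node(xs, targets, epochs=50, lr=0.05, max_norm=1.0):
    """One tanh node with a recurrent link on its input. Gradients
    accumulated over the whole sequence are rescaled to `max_norm`
    before each weight update (a generic norm-clipping stand-in for
    the paper's weight compensation), keeping the weights finite on
    long sequences."""
    w_in, w_rec, b = 0.5, 0.5, 0.0
    for _ in range(epochs):
        y_prev, g_in, g_rec, g_b = 0.0, 0.0, 0.0, 0.0
        for x, t in zip(xs, targets):
            y = np.tanh(w_in * x + w_rec * y_prev + b)
            d = (y - t) * (1.0 - y * y)   # truncated one-step gradient
            g_in += d * x
            g_rec += d * y_prev
            g_b += d
            y_prev = y
        norm = np.sqrt(g_in**2 + g_rec**2 + g_b**2)
        if norm > max_norm:               # compensation: rescale update
            g_in, g_rec, g_b = (g * max_norm / norm for g in (g_in, g_rec, g_b))
        w_in -= lr * g_in
        w_rec -= lr * g_rec
        b -= lr * g_b
    return w_in, w_rec, b

rng = np.random.default_rng(1)
xs = rng.standard_normal(400)             # a "long" training sequence
targets = np.tanh(0.8 * xs)               # simple target the node can fit
w_in, w_rec, b = train_recurrent_node(xs, targets)
```

Without the rescaling step, summed gradients over a 400-sample sequence can grow large enough to destabilize the update; with it, the weights stay bounded.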

Glottal Weighted Cepstrum for Robust Speech Recognition (잡음에 강한 음성 인식을 위한 성문 가중 켑스트럼에 관한 연구)

  • 전선도; 강철호
    • The Journal of the Acoustical Society of Korea / v.18 no.5 / pp.78-82 / 1999
  • This paper studies the weighted cepstrum, which is widely used for robust speech recognition. In particular, we propose a weighting function based on the asymmetric glottal pulse shape, applied to the weighted cepstrum extracted by PLP (Perceptual Linear Prediction), which is based on an auditory model. We analyze this glottal weighted cepstrum in terms of the relationship between the glottal pulse of the glottal model and the cepstrum, obtaining speech features informed by both the glottal model and the auditory model. Isolated-word recognition rates in car-noise and street environments are used to test the proposed method, and the performance of the glottal weighted cepstrum is compared with that of weighted cepstra extracted by LP (Linear Prediction) and by PLP. Computer simulation shows that the recognition rate of the proposed glottal weighted cepstrum is better than those of the other weighted cepstra.
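The general mechanics of a weighted cepstrum can be sketched as follows: take the real cepstrum of a frame and multiply it by a weighting (liftering) window. The paper's window is derived from an asymmetric glottal pulse shape, which the abstract does not specify, so the sketch uses a conventional raised-sine lifter as a placeholder.

```python
import numpy as np

def weighted_cepstrum(frame, n_ceps=12, weights=None):
    """Real cepstrum of one speech frame with a weighting window
    applied to the cepstral coefficients. `weights` defaults to the
    standard raised-sine lifter; the paper's glottal-pulse-derived
    window would be substituted here."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12  # avoid log(0)
    cep = np.fft.irfft(np.log(spectrum))           # real cepstrum
    cep = cep[1:n_ceps + 1]                        # drop c0, keep n_ceps coeffs
    if weights is None:
        n = np.arange(1, n_ceps + 1)
        weights = 1.0 + (n_ceps / 2.0) * np.sin(np.pi * n / n_ceps)
    return weights * cep

# toy frame: windowed sinusoid standing in for a voiced speech segment
t = np.arange(256)
frame = np.sin(2.0 * np.pi * t / 16.0) * np.hanning(256)
cep = weighted_cepstrum(frame)
```

The weighting de-emphasizes low-order coefficients (dominated by spectral tilt) relative to mid-order ones, which is the general motivation shared by all the weighted-cepstrum variants the paper compares.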

Prediction of the Exposure to 1763MHz Radiofrequency Radiation Based on Gene Expression Patterns

  • Lee, Min-Su; Huang, Tai-Qin; Seo, Jeong-Sun; Park, Woong-Yang
    • Genomics & Informatics / v.5 no.3 / pp.102-106 / 2007
  • Radiofrequency (RF) radiation at mobile-phone frequencies has not been reported to induce cellular responses in in vitro and in vivo models. We exposed HEI-OC1 cells, a conditionally immortalized mouse auditory cell line, to RF radiation to characterize cellular responses to 1763 MHz RF radiation. Although we could not detect any differences upon RF exposure, whole-genome expression profiling might provide the most sensitive method for finding molecular responses to RF radiation. HEI-OC1 cells were exposed to 1763 MHz RF radiation at an average specific absorption rate (SAR) of 20 W/kg for 24 h and harvested after 5 h of recovery (R5), alongside sham-exposed samples (S5). From the whole-genome expression profiles, we selected nine genes differentially expressed between the S5 and R5 groups using an information gain-based recursive feature elimination procedure. Using a support vector machine (SVM), we designed a prediction model on these nine genes to discriminate the two groups, and it predicted the target class without any error. From these results, we developed a biomarker-based prediction model that determines RF radiation exposure in mouse auditory cells with perfect accuracy, which may need validation in in vivo RF-exposure models.
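The ranking step of an information-gain-based selection procedure can be sketched in a few lines: score each feature by how much splitting on it reduces label entropy, then keep the top-scoring features. The toy expression matrix below is synthetic (made-up sample counts, gene counts, and effect sizes); the paper's actual recursive elimination and SVM stages are not reproduced here.

```python
import numpy as np

def info_gain(x, y):
    """Information gain of feature x (thresholded at its median)
    about binary labels y: H(y) - H(y | x > median)."""
    def entropy(labels):
        if len(labels) == 0:
            return 0.0
        p = float(np.mean(labels))
        if p in (0.0, 1.0):
            return 0.0
        return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)
    split = x > np.median(x)
    h_cond = (split.mean() * entropy(y[split])
              + (1.0 - split.mean()) * entropy(y[~split]))
    return entropy(y) - h_cond

# toy data: 50 samples x 20 genes, labels = sham (0) vs exposed (1);
# genes 0-2 are made informative, the rest are pure noise
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 25)
X = rng.standard_normal((50, 20))
X[:, :3] += 3.0 * y[:, None]              # shift informative genes
gains = np.array([info_gain(X[:, j], y) for j in range(X.shape[1])])
top = np.argsort(gains)[::-1][:3]          # top-ranked genes
```

A recursive variant would retrain and re-rank after removing the weakest features each round; the one-shot ranking above shows only the scoring criterion.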

Comparison of Speech Onset Detection Characteristics of Adaptation Algorithms for Cochlear Implant Speech Processor (인공와우 어음처리방식을 위한 적응효과 알고리즘의 음성개시점 검출 특성 비교)

  • Choi, Sung-Jin; Kim, Jin-Ho; Kim, Kyung-Hwan
    • Journal of Biomedical Engineering Research / v.29 no.1 / pp.25-31 / 2008
  • It is well known that temporal information about input speech, such as speech onset, is better represented in the auditory nerve response owing to the adaptation effect that occurs at the auditory nerve synapse, and that this adaptation effect can improve the performance of a cochlear implant speech processor. In this paper, we examined the onset-emphasis characteristic of a recently proposed adaptation algorithm, analyzed how its performance changes as the parameters vary, and compared it with transient emphasis spectral maxima (TESM), a representative previous strategy. Regarding false peaks, which occur at points other than speech onset, the proposed model generated far fewer false peaks than TESM and remained more distinguishable under noise.
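The onset-emphasis effect of synaptic adaptation can be illustrated with a minimal model: subtract a slowly adapting baseline from the input envelope so abrupt rises stand out while sustained portions decay. This is a generic textbook-style sketch with assumed time constants, not the specific adaptation algorithm or TESM strategy compared in the paper.

```python
import numpy as np

def onset_emphasis(envelope, tau=0.05, fs=1000):
    """Generic adaptation sketch: a leaky integrator (time constant
    `tau` seconds at sampling rate `fs`) tracks the envelope baseline;
    the rectified difference emphasises onsets and decays during
    sustained input."""
    alpha = np.exp(-1.0 / (tau * fs))
    baseline = 0.0
    out = np.zeros_like(envelope)
    for n, e in enumerate(envelope):
        out[n] = max(e - baseline, 0.0)    # onset-emphasised response
        baseline = alpha * baseline + (1.0 - alpha) * e
    return out

# a step envelope: silence, then a sustained "syllable" starting at n=100
env = np.concatenate([np.zeros(100), np.ones(300)])
resp = onset_emphasis(env)
```

The response is maximal at the onset sample and decays toward zero during the sustained segment, which is the behaviour that makes speech onsets easier to detect in the adapted output.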

Emotion recognition from speech using Gammatone auditory filterbank

  • Le, Ba-Vui; Lee, Young-Koo; Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06a / pp.255-258 / 2011
  • This paper describes an application of a Gammatone auditory filterbank to emotion recognition from speech. The Gammatone filterbank, a bank of Gammatone filters, is used as a preprocessing stage before feature extraction to obtain the features most relevant to emotion recognition from speech. In the feature extraction step, the energy of each filter's output signal is computed, and the energies of all filters are combined into a feature vector for the learning step. Each feature vector is estimated over a short time period of the input speech signal to take advantage of time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is trained for each emotion class and used to recognize the emotion of an input utterance. In the experiments, feature extraction based on the Gammatone filterbank (GTF) shows better outcomes than features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for speech recognition as well as emotion recognition from speech.
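The per-band energy features described above can be sketched directly from the Gammatone impulse response g(t) = t^(n-1) exp(-2πbt) cos(2πf_c t); the centre frequencies, frame length, and test tone below are illustrative choices, and the HMM stage is omitted.

```python
import numpy as np

def gammatone_ir(fc, fs, duration=0.04, order=4):
    """Impulse response of a Gammatone filter centred at fc (Hz),
    with bandwidth b = 1.019 * ERB(fc) (Glasberg-Moore ERB scale),
    normalised to unit energy."""
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    b = 1.019 * erb
    g = t**(order - 1) * np.exp(-2.0 * np.pi * b * t) * np.cos(2.0 * np.pi * fc * t)
    return g / np.sqrt(np.sum(g**2) + 1e-12)

def gammatone_energies(x, fs, centre_freqs):
    """One log-energy feature per band: filter the frame with each
    Gammatone filter and sum the squared output."""
    feats = []
    for fc in centre_freqs:
        y = np.convolve(x, gammatone_ir(fc, fs), mode="same")
        feats.append(np.log(np.sum(y**2) + 1e-12))
    return np.array(feats)

fs = 16000
t = np.arange(1024) / fs
x = np.sin(2.0 * np.pi * 1000.0 * t)      # 1 kHz test tone
centres = [250, 500, 1000, 2000, 4000]
feats = gammatone_energies(x, fs, centres)
```

For the 1 kHz tone, the band centred at 1000 Hz captures by far the most energy, confirming that the feature vector localizes spectral content the way the paper's preprocessing stage intends.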

Quasi-Optimal Linear Recursive DOA Tracking of Moving Acoustic Source for Cognitive Robot Auditory System (인지로봇 청각시스템을 위한 의사최적 이동음원 도래각 추적 필터)

  • Han, Seul-Ki; Ra, Won-Sang; Whang, Ick-Ho; Park, Jin-Bae
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.211-217 / 2011
  • This paper proposes a quasi-optimal linear DOA (Direction-of-Arrival) estimator, which is necessary for a real-time robot auditory system that tracks a moving acoustic source. It is well known that conventional nonlinear filtering schemes can severely degrade DOA estimation performance and are not preferable for real-time implementation, mainly because of the inherent nonlinearity of the acoustic signal model used for DOA estimation. This motivates a new uncertain linear acoustic signal model based on the linear prediction relation of a noisy sinusoid. With the suggested measurement model, the DOA estimation problem is cast as an NCRKF (Non-Conservative Robust Kalman Filtering) problem [12]. The NCRKF-based DOA estimator provides reliable DOA estimates of a fast-moving acoustic source despite the noise-corrupted measurement matrix in the filter recursion, and its linear recursive filter structure makes it suitable for real-time implementation. The computational efficiency and DOA estimation performance of the proposed method are evaluated through computer simulations.
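The linear recursive structure the abstract emphasizes can be sketched with a plain Kalman filter on a constant-velocity DOA state. This is a simplified stand-in: the paper's NCRKF additionally accounts for uncertainty in the measurement matrix itself, which a standard KF (used below with assumed noise levels and a synthetic sweeping source) does not.

```python
import numpy as np

def track_doa(measurements, dt=0.1, q=0.5, r=4.0):
    """Linear recursive DOA tracker: state [angle, angular rate] with
    a standard Kalman filter recursion (predict + update per scan)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity model
    H = np.array([[1.0, 0.0]])                        # we measure angle only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])               # process noise
    R = np.array([[r]])                               # measurement noise
    x = np.array([measurements[0], 0.0])
    P = np.eye(2) * 10.0
    estimates = []
    for z in measurements:
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)           # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

rng = np.random.default_rng(3)
true_doa = np.linspace(0.0, 30.0, 100)     # source sweeping 0 -> 30 degrees
noisy = true_doa + 2.0 * rng.standard_normal(100)
est = track_doa(noisy)
rmse_raw = np.sqrt(np.mean((noisy - true_doa) ** 2))
rmse_kf = np.sqrt(np.mean((est[20:] - true_doa[20:]) ** 2))
```

Because every step is a fixed sequence of small matrix operations, the per-scan cost is constant, which is what makes such linear recursions attractive for real-time robot audition.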

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan; Kim, Jin-Young; Jeong, Dong-Ki; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a system that classifies music according to user emotion, using electroencephalogram (EEG) features that appear while listening to music. In the proposed system, the relationship between emotional EEG features extracted from EEG signals and auditory features extracted from music signals is learned by a deep regression neural network. Based on this regression model, the system automatically generates EEG features mapped to the auditory characteristics of the input music and classifies the music by applying these features to an attention-based deep neural network. The experimental results demonstrate the music classification accuracy of the proposed automatic music classification framework.
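The core idea of the regression stage, learning a map from auditory features to EEG features so that EEG-like features can be generated from music alone, can be sketched with ordinary least squares. The paper uses a deep regression network and real EEG recordings; the linear map, feature dimensions, and synthetic data below are all assumptions for illustration.

```python
import numpy as np

# Stand-in for the regression stage: learn a linear map W from auditory
# features A to EEG features E, then generate EEG-like features for new
# music from its auditory features alone.
rng = np.random.default_rng(4)
n, d_audio, d_eeg = 200, 8, 4
A = rng.standard_normal((n, d_audio))                  # auditory features
W_true = rng.standard_normal((d_audio, d_eeg))         # unknown mapping
E = A @ W_true + 0.01 * rng.standard_normal((n, d_eeg))  # "EEG" features

W, *_ = np.linalg.lstsq(A, E, rcond=None)              # fit the map
E_pred = A @ W                                         # generated EEG features
err = np.mean((E_pred - E) ** 2)
```

In the full system, `E_pred` for an unseen track would then be fed to the emotion classifier, so music can be categorized without recording new EEG from the listener.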

Near-Infrared Laser Stimulation of the Auditory Nerve in Guinea Pigs

  • Guan, Tian; Wang, Jian; Yang, Muqun; Zhu, Kai; Wang, Yong; Nie, Guohui
    • Journal of the Optical Society of Korea / v.20 no.2 / pp.269-275 / 2016
  • This study has investigated the feasibility of 980-nm low-energy pulsed near-infrared laser stimulation to evoke auditory responses, as well as the effects of radiant exposure and pulse duration on auditory responses. In the experiments, a hole was drilled in the basal turn of the cochlea in guinea pigs. An optical fiber with a 980-nm pulsed infrared laser was inserted into the hole, orientating the spiral ganglion cells in the cochlea. To model deafness, the tympanic membrane was mechanically damaged. Acoustically evoked compound action potentials (ACAPs) were recorded before and after deafness, and optically evoked compound action potentials (OCAPs) were recorded after deafness. Similar spatial selectivity between optical and acoustical stimulation was found. In addition, OCAP amplitudes increased with radiant exposure, indicating a photothermal mechanism induced by optical stimulation. Furthermore, at a fixed radiant exposure, OCAP amplitudes decreased as pulse duration increased, suggesting that optical stimulation might be governed by the time duration over which the energy is delivered. Thus, the current experiments have demonstrated that a 980-nm pulsed near-infrared laser with low energy can evoke auditory neural responses similar to those evoked by acoustical stimulation. This approach could be used to develop optical cochlear implants.