• Title/Summary/Keyword: Speaker identification


Perceptual Experiment on Number Production for Speaker Identification

  • Yang, Byung-Gon
    • Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.7-19
    • /
    • 2001
  • The acoustic parameters of nine Korean numbers were analyzed with Praat, a speech analysis program, and synthesized with SenSynPPC, a Klatt formant synthesizer. The overall intensity, pitch, and formant values of the numbers were modified dynamically in steps of 1 dB, 1 Hz, and 2.5%, respectively. The study explored the sensitivity of listeners to changes in these three acoustic parameters. Twelve subjects (male and female) listened to 390 pairs of synthesized numbers and judged whether each pair sounded the same or different. Results showed that subjects perceived the same sound quality within a range of 6.6 dB of intensity variation, 10.5 Hz of pitch variation, and 5.9% variation of the first three formants. The male and female groups showed almost the same perceptual ranges. An asymmetry between the upper and lower boundaries was also observed. These ranges may be applicable to the development of a speaker identification system, while the synthesis-modification method may be used to prepare its evaluation data.

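The reported tolerance ranges (6.6 dB, 10.5 Hz, 5.9%) suggest a simple same/different decision rule. Below is a minimal sketch of how such ranges could be applied when comparing two renditions of a number; the function name, input format, and example values are illustrative and not from the paper.

```python
# Minimal sketch: apply the perceptual tolerances reported above
# (6.6 dB intensity, 10.5 Hz pitch, 5.9% formant deviation) to decide
# whether two renditions of a number would likely be judged "the same".
# All names and the input format are illustrative, not from the paper.

INTENSITY_TOL_DB = 6.6
PITCH_TOL_HZ = 10.5
FORMANT_TOL = 0.059  # relative deviation allowed for F1-F3

def likely_same(ref, probe):
    """ref/probe: dicts with 'intensity_db', 'pitch_hz', 'formants_hz' (F1-F3)."""
    if abs(ref["intensity_db"] - probe["intensity_db"]) > INTENSITY_TOL_DB:
        return False
    if abs(ref["pitch_hz"] - probe["pitch_hz"]) > PITCH_TOL_HZ:
        return False
    for f_ref, f_probe in zip(ref["formants_hz"], probe["formants_hz"]):
        if abs(f_ref - f_probe) / f_ref > FORMANT_TOL:
            return False
    return True

print(likely_same(
    {"intensity_db": 70.0, "pitch_hz": 120.0, "formants_hz": [700, 1200, 2500]},
    {"intensity_db": 73.0, "pitch_hz": 128.0, "formants_hz": [710, 1230, 2550]},
))  # True: all deviations fall inside the reported ranges
```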

Text-independent Speaker Identification Using Soft Bag-of-Words Feature Representation

  • Jiang, Shuangshuang;Frigui, Hichem;Calhoun, Aaron W.
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.240-248
    • /
    • 2014
  • We present a robust speaker identification algorithm that uses novel features based on a soft bag-of-words representation and a simple Naive Bayes classifier. The bag-of-words (BoW) histogram feature descriptor is typically constructed by summarizing and identifying representative prototypes from low-level spectral features extracted from training data. In this paper, we define a generalization of the standard BoW. In particular, we define three types of BoW based on crisp voting, fuzzy memberships, and possibilistic memberships. We analyze our mapping with three common classifiers: the Naive Bayes classifier (NB), the K-nearest neighbor classifier (KNN), and support vector machines (SVM). The proposed algorithms are evaluated using large datasets that simulate medical crises. We show that the proposed soft bag-of-words feature representation achieves a significant improvement over state-of-the-art methods.
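
The soft bag-of-words idea lends itself to a compact sketch: frame-level spectral features are assigned fuzzy memberships to codebook prototypes instead of crisp votes, and the memberships are accumulated into one fixed-length histogram per utterance. The code below is a generic reconstruction under that assumption (fuzzy-c-means style memberships), not the authors' exact formulation.

```python
# Illustrative sketch of a "soft" bag-of-words histogram: frame-level
# spectral features (e.g., MFCC vectors) are mapped to codebook prototypes
# with fuzzy memberships instead of crisp votes, then summed into one
# fixed-length descriptor per utterance.
import numpy as np

def soft_bow_histogram(frames, codebook, m=2.0, eps=1e-9):
    """frames: (T, D) feature vectors; codebook: (K, D) prototypes."""
    # Pairwise Euclidean distances between frames and prototypes: (T, K)
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2) + eps
    # Fuzzy-c-means style memberships, fuzzifier m > 1
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=2)                 # (T, K), each row sums to 1
    hist = u.sum(axis=0)
    return hist / hist.sum()                    # normalized soft histogram

# Crisp BoW for comparison: each frame votes only for its nearest prototype.
def crisp_bow_histogram(frames, codebook):
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```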

Robust Speaker Identification Using Linear Transformation Optimized for Diagonal Covariance GMM (대각공분산 GMM에 최적인 선형변환을 이용한 강인한 화자식별)

  • Kim, Min-Seok;Yang, Il-Ho;Yu, Ha-Jin
    • MALSORI
    • /
    • no.65
    • /
    • pp.67-80
    • /
    • 2008
  • We have been building a text-independent speaker recognition system that is robust to unknown channel and noise environments. In this paper, we propose a linear transformation for obtaining robust features. The transformation is optimized to maximize the distances between the Gaussian mixtures. We use a rotation of the axes to cope with the problem of scaling the transformation matrix. The proposed transformation is similar to PCA or LDA, but can achieve better results in some special cases where PCA and LDA do not work properly. We use the YOHO database to evaluate the proposed method and compare the results with PCA and LDA. The results show that the proposed method outperforms the baseline, PCA, and LDA.

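As a rough illustration of rotating the feature axes so that mixture means are spread apart, the sketch below builds an orthogonal rotation from the eigenbasis of the between-mean scatter matrix, a PCA-like stand-in; the paper's own optimization criterion is not reproduced here.

```python
# Rough sketch of rotating the feature axes so that Gaussian mixture means
# are spread apart. The rotation here is simply the eigenbasis of the
# between-mean scatter matrix (a PCA-like stand-in), not the paper's
# optimized transformation.
import numpy as np

def mean_scatter_rotation(means):
    """means: (M, D) array of Gaussian mixture means pooled over speakers."""
    centered = means - means.mean(axis=0)
    scatter = centered.T @ centered                 # (D, D) between-mean scatter
    eigvals, eigvecs = np.linalg.eigh(scatter)
    return eigvecs[:, ::-1]                         # orthogonal rotation, columns by decreasing eigenvalue

def transform(features, rotation):
    """Apply the orthogonal rotation to (N, D) feature vectors."""
    return features @ rotation
```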

An Analysis of Phonetic Parameters for Individual Speakers (개별화자 음성의 특징 파라미터 분석)

  • Ko, Do-Heung
    • Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.177-189
    • /
    • 2000
  • This paper investigates how individual speakers' speech can be distinguished using acoustic parameters such as amplitude, pitch, and formant frequencies. Word samples from fifteen male speakers in their 20s from three different regions were recorded in two speaking modes (casual and clear speech) in quiet settings and analyzed with a Praat macro script. To determine each speaker's acoustic values, measurements were taken at five timepoints over the total duration of the voiced segments. Results showed a high correlation between F1 and F2 across speakers, whereas there was little correlation between amplitude and pitch. Statistical grouping showed that individual speakers' voices did not cluster by regional dialect in either casual or clear speech. In addition, the difference between the maximum and minimum amplitude was about 10 dB, a perceptually audible difference. These acoustic data provide meaningful guidelines for implementing speaker identification and speaker verification algorithms.

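The correlation analysis described in this entry can be reproduced in outline with a few lines of code; the per-speaker F1/F2 values below are placeholders, not data from the paper.

```python
# Small sketch of the kind of correlation analysis described above:
# given per-speaker mean F1 and F2 values (Hz), compute the Pearson
# correlation between the two formants across speakers.
import numpy as np

f1 = np.array([310.0, 325.0, 298.0, 340.0, 315.0])       # mean F1 per speaker (Hz), placeholder data
f2 = np.array([2150.0, 2230.0, 2080.0, 2310.0, 2190.0])  # mean F2 per speaker (Hz), placeholder data

r = np.corrcoef(f1, f2)[0, 1]
print(f"Pearson correlation between F1 and F2 across speakers: {r:.3f}")
```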

Speaker Indexing using Vowel Based Speaker Identification Model (모음 기반 화자 식별 모델을 이용한 화자 인덱싱)

  • Kum Ji Soo;Park Chan Ho;Lee Hyon Soo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.151-154
    • /
    • 2002
  • This paper proposes and evaluates a speaker indexing method that locates the speech segments belonging to the same speaker in audio data, using speaker models built in advance. The proposed method extracts feature parameters from vowels, which are usable for text-independent speaker identification, and builds a model for each speaker from them. For indexing, vowel positions are detected in the speech segments, distances to the speaker models are computed, and the closest model is taken as the identification result. The identified results are then filtered based on speaker-turn changes and the characteristics of the audio data to produce the final indexing result. For the experiments, broadcast news was recorded and models for 10 speakers were built; the indexing experiments achieved a speaker indexing accuracy of 91.8%.

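The indexing loop described in this entry (detect vowels, score against speaker models, pick the nearest, then filter the label sequence) can be sketched as follows; vowel detection and the speaker models themselves are stubbed out, and the distance measure and majority filter are illustrative choices, not the paper's.

```python
# Schematic sketch of the indexing loop: each detected vowel segment is
# scored against per-speaker models, the nearest model wins, and the
# resulting label sequence is smoothed (filtered) over time.
import numpy as np
from collections import Counter

def identify_segments(vowel_features, speaker_models):
    """vowel_features: list of (D,) vectors; speaker_models: dict name -> (D,) mean vector."""
    labels = []
    for feat in vowel_features:
        dists = {name: np.linalg.norm(feat - mean) for name, mean in speaker_models.items()}
        labels.append(min(dists, key=dists.get))   # closest model wins
    return labels

def smooth_labels(labels, window=5):
    """Majority filter over a sliding window to suppress spurious speaker changes."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        context = labels[max(0, i - half): i + half + 1]
        smoothed.append(Counter(context).most_common(1)[0][0])
    return smoothed
```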

The Speaker Identification Using Incremental Learning (Incremental Learning을 이용한 화자 인식)

  • Sim, Kwee-Bo;Heo, Kwang-Seung;Park, Chang-Hyun;Lee, Dong-Wook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.5
    • /
    • pp.576-581
    • /
    • 2003
  • Speech signals carry speaker-specific features. In this paper, we propose a speaker identification system that uses incremental learning based on a neural network. The speech signal recorded through a microphone is passed through endpoint detection and divided into voiced and unvoiced segments. The extracted 12th-order cepstrum coefficients are used as the input data for the neural network. Incremental learning is a learning scheme in which the previously learned weights are retained and only the new weights, created when a new speaker is added, are trained. The architecture of the neural network is extended as the number of speakers grows, so the system can learn without a fixed limit on the number of speakers.
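
A conceptual sketch of the incremental-learning idea follows: the output layer grows by one unit per new speaker, previously learned weights are frozen, and only the new weights are trained. A single softmax layer over 12-dimensional cepstral features stands in for the paper's network; all details are assumptions made for illustration.

```python
# Conceptual sketch: the output layer grows by one unit per new speaker,
# previously learned weight columns are frozen, and only the newly added
# column is trained.
import numpy as np

class IncrementalSpeakerNet:
    def __init__(self, feat_dim=12):
        self.W = np.zeros((feat_dim, 0))   # one column of weights per enrolled speaker
        self.speakers = []

    def add_speaker(self, name, features, epochs=50, lr=0.1):
        """features: (N, feat_dim) cepstral vectors for the new speaker."""
        w_new = np.random.randn(self.W.shape[0]) * 0.01
        for _ in range(epochs):
            logits = features @ np.column_stack([self.W, w_new])   # (N, S+1)
            probs = np.exp(logits - logits.max(axis=1, keepdims=True))
            probs /= probs.sum(axis=1, keepdims=True)
            # Target is the new (last) class; only the new column is updated.
            grad = features.T @ (probs[:, -1] - 1.0) / len(features)
            w_new -= lr * grad
        self.W = np.column_stack([self.W, w_new])   # freeze old columns, append new one
        self.speakers.append(name)

    def identify(self, feature):
        return self.speakers[int(np.argmax(feature @ self.W))]
```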

Context-Independent Speaker Recognition in URC Environment (지능형 서비스 로봇을 위한 문맥독립 화자인식 시스템)

  • Ji, Mi-Kyong;Kim, Sung-Tak;Kim, Hoi-Rin
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.2
    • /
    • pp.158-162
    • /
    • 2006
  • This paper presents a speaker recognition system intended for use in human-robot interaction. The proposed system achieves significantly high performance in the Ubiquitous Robot Companion (URC) environment. The URC concept is a scenario in which a robot is connected to a server through a broadband connection, allowing functions to be performed on the server side, thereby minimizing the robot's stand-alone functionality and reducing the cost of the robot client. Instead of giving the robot (client) on-board cognitive capabilities, the sensing and processing work is outsourced to a central computer (server) connected to the high-speed Internet, with only the moving capability provided by the robot. Our aim is to enhance human-robot interaction by improving the performance of speaker recognition with multiple microphones on the robot side in adverse distant-talking environments. Our speaker recognizer provides the URC project with a basic interface for human-robot interaction.

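The client/server split described in this entry can be pictured with a toy client that only captures audio and ships it to a remote recognizer; the URL, endpoint, and response schema below are invented for the example and are not part of the URC project.

```python
# Toy illustration of the client/server split: the robot client only
# captures audio and sends it to a remote server, which runs the speaker
# recognizer and returns the identity. URL, endpoint, and response format
# are made up for this example.
import requests

def identify_speaker_remote(wav_path, server_url="http://robot-server.example:8080/identify"):
    """Send a recorded utterance to the recognition server and return its decision."""
    with open(wav_path, "rb") as f:
        response = requests.post(server_url, files={"audio": f}, timeout=10.0)
    response.raise_for_status()
    return response.json()["speaker"]   # assumed response schema

# On the robot, heavy processing is avoided entirely:
# speaker = identify_speaker_remote("utterance.wav")
```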

Performance Improvement of Speaker Recognition System Using Genetic Algorithm (유전자 알고리즘을 이용한 화자인식 시스템 성능 향상)

  • 문인섭;김종교
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.8
    • /
    • pp.63-67
    • /
    • 2000
  • This paper deals with text-prompted speaker recognition based on dynamic time warping (DTW). A genetic algorithm was applied to the creation of reference patterns so that they better reflect speaker characteristics, one of the most important determinants in speaker recognition. The text-prompted approach was adopted to overcome the weaknesses of both text-dependent and text-independent speaker recognition. Speaker identification and verification were performed on closed and open sets, respectively, and the genetic-algorithm-based reference patterns proved to outperform conventional reference patterns in both recognition rate and speed.

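Dynamic time warping, the distance measure this system builds on, is easy to sketch; the genetic-algorithm construction of the reference patterns is not reproduced here.

```python
# Minimal dynamic time warping (DTW) sketch: align a test feature sequence
# against a reference pattern and return the accumulated alignment cost.
import numpy as np

def dtw_distance(ref, test):
    """ref: (R, D), test: (T, D) feature sequences; returns alignment cost."""
    R, T = len(ref), len(test)
    cost = np.full((R + 1, T + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, R + 1):
        for j in range(1, T + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[R, T]

# Identification: pick the speaker whose reference pattern gives the lowest cost.
def identify(test, reference_patterns):
    return min(reference_patterns, key=lambda spk: dtw_distance(reference_patterns[spk], test))
```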

Perception of English Consonants in Different Prosodic Positions by Korean Learners of English

  • Jang, Mi
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.11-19
    • /
    • 2014
  • The focus of this study was to investigate whether there is a position effect on the identification accuracy of L2 consonants by Korean listeners and to examine how Korean listeners perceive the phonetic properties of initial and final consonants produced by a Korean learner of English and a native English speaker. Most studies of L2 learners' perception of L2 sounds have focused on the segmental level, but very few have examined the role of prosodic position. In the present study, an identification test was conducted for the English consonants /p, t, k, f, θ, s, ʃ/ in CVC prosodic structures. The results revealed that Korean listeners identified syllable-initial consonants more accurately than syllable-final consonants. The higher perceptual accuracy for syllable-initial consonants may be attributable to their enhanced phonetic properties. A significant correlation was found between error rates and F2 onset/offset for stops and fricatives, and between perceptual accuracy and RMS burst energy for stops. However, the identification error patterns differed across consonant types and between the two speakers. In the final position, Korean listeners had difficulty identifying /p/, /f/, /θ/, and /s/ produced by the Korean speaker and showed more errors for /p/, /t/, /f/, /θ/, and /s/ spoken by the native English speaker. Compared to the perception of English consonants spoken by the Korean speaker, greater error rates and more diverse error patterns were found for consonants produced by the native English speaker. The present study provides evidence that prosodic position plays a crucial role in the perception of L2 segments.

Speaker Identification Using Higher-Order Statistics In Noisy Environment (고차 통계를 이용한 잡음 환경에서의 화자식별)

  • Shin, Tae-Young;Kim, Gi-Sung;Kwon, Young-Uk;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.6
    • /
    • pp.25-35
    • /
    • 1997
  • Most speech analysis methods developed to date are based on second-order statistics, and one of their biggest drawbacks is that they show dramatic performance degradation in noisy environments. In contrast, methods using higher-order statistics (HOS), which have the property of suppressing Gaussian noise, enable robust feature extraction in noisy environments. In this paper, we propose a text-independent speaker identification system using higher-order statistics and compare its performance with the conventional second-order-statistics-based method in both white and colored noise environments. The proposed speaker identification system is based on the vector quantization approach and employs an HOS-based voiced/unvoiced detector in order to extract feature parameters from voiced speech only, which has a non-Gaussian distribution and is known to contain most of the speaker-specific characteristics. Experimental results on a 50-speaker database show that the higher-order-statistics-based method gives better identification performance than the conventional second-order-statistics-based method in noisy environments.

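The HOS-based voiced/unvoiced decision rests on the fact that higher-order cumulants of Gaussian noise vanish; the sketch below flags frames with clearly non-zero normalized third- or fourth-order cumulants as voiced. The threshold is an illustrative placeholder, not a value from the paper.

```python
# Sketch of the HOS idea: third- and fourth-order cumulants of Gaussian
# noise are (asymptotically) zero, so frames whose higher-order statistics
# deviate clearly from zero are treated as voiced, non-Gaussian speech and
# kept for feature extraction.
import numpy as np

def frame_cumulants(frame):
    """Return normalized third- and fourth-order cumulants (skewness, excess kurtosis)."""
    x = frame - frame.mean()
    var = np.mean(x ** 2) + 1e-12
    skewness = np.mean(x ** 3) / var ** 1.5
    excess_kurtosis = np.mean(x ** 4) / var ** 2 - 3.0
    return skewness, excess_kurtosis

def is_voiced(frame, threshold=0.5):
    """Non-Gaussianity test; threshold is an illustrative placeholder."""
    skew, kurt = frame_cumulants(frame)
    return max(abs(skew), abs(kurt)) > threshold

# Only frames flagged as voiced would then be passed to the feature
# extractor and the VQ-based speaker identifier.
```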