• Title/Summary/Keyword: Speaker recognition systems

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the development of human-interface technology, the interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using PLP (perceptual linear prediction) analysis. PLP analysis was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its effectiveness for speaker recognition tasks. This paper therefore proposes an algorithm that can evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, whereas the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough for real-time use.
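The entry above centres on PLP-style features for personalized emotion patterns. Below is a minimal, PLP-inspired sketch in Python (assuming NumPy, SciPy, and librosa are available, and using a hypothetical file name): it approximates the critical-band integration with a mel filterbank and applies cube-root (intensity-loudness) compression before a cepstral DCT; the equal-loudness pre-emphasis and autoregressive-modelling steps of full PLP, and the paper's emotion-pattern matching, are omitted.

```python
import numpy as np
import librosa
from scipy.fftpack import dct

def plp_like_features(path, n_bands=20, n_ceps=12):
    """PLP-inspired cepstral features: mel filterbank as a stand-in for
    Bark critical bands, cube-root loudness compression, then a DCT."""
    y, sr = librosa.load(path, sr=16000)           # placeholder file path
    # Power spectrum integrated over filterbank channels, one column per frame.
    bands = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=512, hop_length=160, n_mels=n_bands, power=2.0)
    loud = np.cbrt(bands)                          # intensity -> loudness (cube root)
    ceps = dct(loud, type=2, axis=0, norm="ortho")[:n_ceps]
    return ceps.T                                  # shape: (frames, n_ceps)

# feats = plp_like_features("emotion_sample.wav")  # hypothetical file name
```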

An Implementation of Speaker Verification System Based on Continuants and Multilayer Perceptrons

  • Lee, Tae-Seung;Park, Sung-Won;Lim, Sang-Seok;Hwang, Byong-Won
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.216-219 / 2003
  • Among the techniques for protecting private information with biometrics, speaker verification is expected to be widely used because of its convenient usage and inexpensive implementation cost. Speaker verification should achieve a high degree of reliability in verification, flexibility in speech text usage, and efficiency in system complexity. Continuants have excellent speaker-discriminant power and a modest number of phonemes in the category, and multilayer perceptrons (MLPs) have superior recognition ability and fast operation speed. Consequently, the two provide viable means for a speaker verification system to obtain the above properties. This paper implements a system to which continuants and MLPs are applied, and evaluates the system on a Korean speech database. The results of the experiment show that continuants and MLPs enable the system to acquire the three properties.
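As a rough illustration of the verification scheme described above, the sketch below (scikit-learn and NumPy assumed; all data is synthetic stand-in data, not the Korean database used in the paper) trains one small MLP to separate an enrolled speaker's continuant frames from background speakers and averages per-frame scores for an accept/reject decision.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in cepstral frames taken only from continuant segments
# (vowels, nasals, fricatives); a real front end would supply these.
claimant_frames   = rng.normal(0.0, 1.0, size=(400, 13))   # enrolled speaker
background_frames = rng.normal(0.5, 1.2, size=(400, 13))   # cohort speakers

X = np.vstack([claimant_frames, background_frames])
y = np.concatenate([np.ones(400), np.zeros(400)])           # 1 = claimed speaker

# One small MLP per enrolled speaker, separating the speaker from the background.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X, y)

# Verification: average the per-frame speaker scores of a test utterance
# and compare them with a threshold.
test_frames = rng.normal(0.1, 1.0, size=(120, 13))
score = mlp.predict_proba(test_frames)[:, 1].mean()
print("accept" if score > 0.5 else "reject", round(float(score), 3))
```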

Recording Support System for Off-Line Conference using Face and Speaker Recognition (얼굴 인식 및 화자 정보를 이용한 오프라인 회의 기록 지원 시스템)

  • Son, Yun-Sik;Jeong, Jin-U;Park, Han-Mu;Gye, Seung-Cheol;Yun, Jong-Hyeok;Jeong, Nak-Cheon;O, Se-Man
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.33-37 / 2007
  • Recent multimedia services offer a wide range of applications built on advances in video compression and network technology, and video conferencing systems are a representative example in which both technologies are used effectively. Although video conferencing systems, designed for smooth communication between remote users, are regarded as an effective application service, applications that use the same underlying technologies to support the far more frequent ordinary (offline) meetings are rare. This paper proposes a system that assists offline meetings based on face information and speaker information. The proposed system locates the speaker using small microphones and a camera, analyzes and recognizes the face region in the camera image, extracts speaker information, and then tracks and records who is speaking.
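A minimal sketch of the face-detection half of such a meeting-recording pipeline is given below, assuming OpenCV (cv2) is available. It uses OpenCV's stock Haar-cascade face detector rather than the paper's own method, and the speaker label is a placeholder, since locating the speaker from the microphones is outside the scope of this sketch.

```python
import cv2

# Haar-cascade face detector shipped with OpenCV; devices and labels are placeholders.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def log_active_faces(frame, timestamp, minutes):
    """Detect face regions in one meeting-camera frame and append a minutes entry.
    Identifying the speaker from the microphone array is not shown; the 'speaker'
    field below is a placeholder for that information."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        minutes.append({"time": timestamp,
                        "face_box": (int(x), int(y), int(w), int(h)),
                        "speaker": "unknown"})   # placeholder speaker label
    return minutes
```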

Performance Analysis of Speech Parameters and a New Decision Logic for Speaker Recognition (화자인식을 위한 음성 요소들의 성능분석 및 새로운 판단 논리)

  • Lee, Hyuk-Jae;Lee, Byeong-Gi
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.7 / pp.146-156 / 1989
  • This paper discusses how to choose speech parameters and decision logics to improve the performance of speaker recognition systems, and also considers the influence of the reference patterns on speaker recognition. A performance analysis based on LPSs, PARCOR coefficients, and LPC-cepstrum coefficients shows that LPC-cepstrum coefficients are superior to the others for speaker recognition regardless of the reference patterns. To improve recognition performance, a new decision logic based on a generalized-distance concept is proposed. It differs from existing methods in that it considers the statistics of the customer and the impostors at the same time. A speaker verification test shows that the proposed decision logic performs better than the existing ones.
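The decision logic above uses customer and impostor statistics together. The sketch below (NumPy assumed; threshold and data are illustrative, not from the paper) shows one common way to realize that idea: the customer distance is normalized by the mean and spread of the distances to an impostor cohort before thresholding.

```python
import numpy as np

def decide(test_vec, customer_ref, impostor_refs, threshold=-1.0):
    """Accept/reject using both customer and impostor statistics: the raw
    customer distance is normalized by the mean and spread of the distances
    to an impostor (cohort) set before being compared with a threshold."""
    d_customer  = np.linalg.norm(test_vec - customer_ref)
    d_impostors = np.array([np.linalg.norm(test_vec - r) for r in impostor_refs])
    z = (d_customer - d_impostors.mean()) / (d_impostors.std() + 1e-10)
    return ("accept" if z < threshold else "reject"), float(z)

rng = np.random.default_rng(1)
ref = rng.normal(size=12)
print(decide(ref + 0.1 * rng.normal(size=12), ref,
             [rng.normal(size=12) for _ in range(10)]))
```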

A New Speaker Adaptation Technique using Maximum Model Distance

  • Tahk, Min-Jea
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.154.2-154 / 2001
  • This paper presents an adaptation approach based on the maximum model distance (MMD) method. This method shares the same framework as that used for training speech recognizers with abundant training data. The MMD method can adapt all the models, with or without adaptation data. If a large amount of adaptation data is available, these methods can gradually approximate the speaker-dependent ones. The approach is evaluated on the phoneme recognition task of the TIMIT corpus. In the speaker adaptation experiments, up to 65.55% phoneme error reduction is achieved. The MMD can reduce the phoneme error by 16.91% even when only one adaptation utterance is used.

A New Speaker Adaptation Technique using Maximum Model Distance

  • Lee, Man-Hyung;Hong, Suh-Il
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.99.1-99 / 2001
  • This paper presents an adaptation approach based on the maximum model distance (MMD) method. This method shares the same framework as that used for training speech recognizers with abundant training data. The MMD method can adapt all the models, with or without adaptation data. If a large amount of adaptation data is available, these methods can gradually approximate the speaker-dependent ones. The approach is evaluated on the phoneme recognition task of the TIMIT corpus. In the speaker adaptation experiments, up to 65.55% phoneme error reduction is achieved. The MMD can reduce the phoneme error by 16.91% even when only one adaptation utterance is used.
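The MMD criterion itself is not reproduced here; the sketch below (NumPy assumed, parameters illustrative) only illustrates the behaviour the abstract describes, namely that adapted models move from speaker-independent toward speaker-dependent parameters as adaptation data accumulates, using a simple count-weighted (MAP-style) mean interpolation rather than the maximum-model-distance criterion.

```python
import numpy as np

def adapt_means(si_means, adapt_frames, assignments, tau=10.0):
    """Count-weighted interpolation of speaker-independent (SI) Gaussian means
    toward the statistics of the adaptation data.  With no data a mean stays
    at its SI value; with many frames it approaches the speaker-dependent mean."""
    adapted = si_means.copy()
    for g in range(si_means.shape[0]):
        frames = adapt_frames[assignments == g]   # frames assigned to Gaussian g
        n = len(frames)
        if n == 0:
            continue                              # no adaptation data: keep SI mean
        adapted[g] = (tau * si_means[g] + frames.sum(axis=0)) / (tau + n)
    return adapted
```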

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori;Fan, Xinjian
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2003.10a / pp.835-838 / 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, which has also been combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. When a face is detected, an embedded hidden Markov model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face inside the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.
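To illustrate the NN-plus-GA detection idea described above, here is a toy genetic search over candidate face windows (NumPy assumed). The neural-network face filter is replaced by a dummy scoring function, and all parameters are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def face_score(image, x, y, size):
    """Placeholder for the neural-network face filter: returns a score for the
    window at (x, y).  A real system would run the trained NN on the cropped,
    resized window; here we just use the mean intensity for illustration."""
    return float(image[y:y + size, x:x + size].mean())

def ga_face_search(image, pop=30, gens=20, size=48):
    """Genetic search over window positions, scored by the face filter."""
    h, w = image.shape[:2]
    xs = rng.integers(0, w - size, pop)
    ys = rng.integers(0, h - size, pop)
    for _ in range(gens):
        scores = np.array([face_score(image, x, y, size) for x, y in zip(xs, ys)])
        top = np.argsort(scores)[-pop // 2:]               # keep the fitter half
        xs, ys = xs[top], ys[top]
        # mutate survivors to refill the population
        xs = np.clip(np.concatenate([xs, xs + rng.integers(-8, 9, len(xs))]), 0, w - size)
        ys = np.clip(np.concatenate([ys, ys + rng.integers(-8, 9, len(ys))]), 0, h - size)
    best = int(np.argmax([face_score(image, x, y, size) for x, y in zip(xs, ys)]))
    return int(xs[best]), int(ys[best]), size
```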

Design of Intelligent Emotion Recognition Model

  • Kim, Yi-gon
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.7 / pp.611-614 / 2001
  • Voice is one of the most efficient communication media, and it carries several kinds of information about the speaker, the context, the emotion, and so on. Human emotion is expressed in speech, gestures, and physiological phenomena (breathing, pulse rate, etc.). In this paper, an emotion recognition model that uses neuro-fuzzy techniques to recognize emotion from the voice signal is presented and simulated.
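A toy illustration of fuzzy reasoning over prosodic features is sketched below (NumPy assumed; membership parameters and rules are invented for illustration, not taken from the paper). In a neuro-fuzzy model such as the one described above, these membership parameters would be learned from data rather than fixed by hand.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular fuzzy membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-10),
                                 (c - x) / (c - b + 1e-10)), 0.0)

def fuzzy_emotion(pitch_mean_hz, energy_rms):
    """Toy fuzzy rule base on two prosodic features (values are illustrative)."""
    high_pitch  = trimf(pitch_mean_hz, 180, 260, 340)
    low_pitch   = trimf(pitch_mean_hz,  60, 120, 180)
    high_energy = trimf(energy_rms, 0.05, 0.10, 0.20)
    low_energy  = trimf(energy_rms, 0.00, 0.02, 0.05)
    rules = {
        "angry/excited": min(high_pitch, high_energy),
        "sad/calm":      min(low_pitch,  low_energy),
        "neutral":       1.0 - max(min(high_pitch, high_energy),
                                   min(low_pitch,  low_energy)),
    }
    return max(rules, key=rules.get), rules

print(fuzzy_emotion(250.0, 0.12))
```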

Performance Evaluation of English Word Pronunciation Correction System (한국인을 위한 외국어 발음 교정 시스템의 개발 및 성능 평가)

  • Kim Mu Jung;Kim Hyo Sook;Kim Sun Ju;Kim Byoung Gi;Ha Jin-Young;Kwon Chul Hong
    • MALSORI / no.46 / pp.87-102 / 2003
  • In this paper, we present an English pronunciation correction system for Korean speakers and show some experimental results for it. The aim of the system is to detect mispronounced phonemes in spoken words and to give appropriate correction comments to users. There are several English pronunciation correction systems that adopt speech recognition technology; however, most of them use conventional speech recognition engines and for this reason cannot give phoneme-based correction comments to users. In our system, we build two kinds of phoneme models: standard native-speaker models and Koreans' error models. We also design a phoneme-based recognition network to detect Koreans' common mispronunciations. We obtain a 90% detection rate for insertions, deletions, and replacements of phonemes, but we cannot achieve a high detection rate for diphthong splits and accent errors.
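The phoneme-level error reporting described above can be illustrated with a simple sequence alignment between the native-reference phoneme string and the phonemes recognized from the learner's speech, as in the Python sketch below (standard library only; the phoneme symbols are illustrative).

```python
import difflib

def pronunciation_errors(reference, recognized):
    """Compare the native-reference phoneme sequence with the phonemes recognized
    from the learner's speech and report insertions, deletions, and replacements."""
    sm = difflib.SequenceMatcher(a=reference, b=recognized)
    report = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            report.append(f"replace {reference[i1:i2]} -> {recognized[j1:j2]}")
        elif op == "delete":
            report.append(f"deleted {reference[i1:i2]}")
        elif op == "insert":
            report.append(f"inserted {recognized[j1:j2]}")
    return report

# e.g. a learner pronouncing 'rice' with an /l/-like onset
print(pronunciation_errors(["r", "ay", "s"], ["l", "ay", "s"]))
```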

Performance Improvement of Speaker Recognition by MCE-based Score Combination of Multiple Feature Parameters (MCE기반의 다중 특징 파라미터 스코어의 결합을 통한 화자인식 성능 향상)

  • Kang, Ji Hoon;Kim, Bo Ram;Kim, Kyu Young;Lee, Sang Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.679-686 / 2020
  • In this paper, an enhanced method for extracting features from vocal source signals and an MCE-based estimation of weights for combining the scores of multiple feature vectors are proposed to improve the performance of speaker recognition systems. The proposed feature vector is composed of perceptual linear predictive cepstral coefficients, skewness, and kurtosis extracted from low-pass-filtered glottal flow signals, removing the flat spectral region that carries no meaningful information. The proposed feature is used to improve a conventional speaker recognition system based on mel-frequency cepstral coefficients and perceptual linear predictive cepstral coefficients extracted from the speech signals, together with Gaussian mixture models. In addition, to increase the reliability of the estimated scores, instead of estimating the weight from the probability distribution of the conventional score, the scores evaluated with the conventional vocal-tract features and with the proposed feature are fused by the MCE-based score combination method to find the optimal speaker. The experimental results show that the proposed feature vectors contain valid information for recognizing the speaker. In addition, when speaker recognition is performed by combining the multiple feature parameter scores with the MCE-based method, the system outperforms the conventional one, particularly when a small number of Gaussian mixtures is used.
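As a rough sketch of the score-combination idea (not the MCE weight estimation itself), the example below (scikit-learn and NumPy assumed; all features are synthetic stand-ins) trains per-speaker GMMs on two feature streams and fuses their average log-likelihood scores with a fixed weight; in the paper that weight would instead be estimated with the MCE criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def train_gmms(frames_per_speaker, n_mix=4):
    """One GMM per enrolled speaker for a given feature stream."""
    return [GaussianMixture(n_components=n_mix, random_state=0).fit(f)
            for f in frames_per_speaker]

def fused_scores(gmms_a, gmms_b, test_a, test_b, w):
    """Per-speaker fused score: weighted sum of the two streams' average
    log-likelihoods.  The weight w is fixed here for illustration."""
    s_a = np.array([g.score(test_a) for g in gmms_a])   # vocal-tract stream
    s_b = np.array([g.score(test_b) for g in gmms_b])   # vocal-source stream
    return w * s_a + (1.0 - w) * s_b

# Two enrolled speakers, toy stand-in features for each stream.
stream_a = [rng.normal(i, 1.0, size=(300, 12)) for i in range(2)]
stream_b = [rng.normal(-i, 1.0, size=(300, 4)) for i in range(2)]
gmms_a, gmms_b = train_gmms(stream_a), train_gmms(stream_b)

test_a = rng.normal(1, 1.0, size=(80, 12))
test_b = rng.normal(-1, 1.0, size=(80, 4))
scores = fused_scores(gmms_a, gmms_b, test_a, test_b, w=0.6)
print("identified speaker:", int(np.argmax(scores)))
```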