• Title/Summary/Keyword: Speaker recognition


Development of Half-Mirror Interface System and Its Application for Ubiquitous Environment (유비쿼터스 환경을 위한 하프미러형 인터페이스 시스템 개발과 응용)

  • Kwon Young-Joon;Kim Dae-Jin;Lee Sang-Wan;Bien Zeungnam
    • Journal of Institute of Control, Robotics and Systems / v.11 no.12 / pp.1020-1026 / 2005
  • In the era of ubiquitous computing, human-friendly man-machine interfaces are attracting growing attention for their potential to offer convenient services. To this end, this paper introduces the 'Half-Mirror Interface System (HMIS)' as a novel type of human-friendly man-machine interface. HMIS consists of a half-mirror, a USB webcam, a microphone, 2-channel speakers, and a high-speed processing unit. It selects between two principal operation modes according to whether a user is present in front of it. The first, 'mirror-mode', is activated when the user's face is detected via the USB webcam. In this mode, HMIS provides three basic functions: 1) make-up assistance that magnifies a facial component of interest, with TTS (Text-To-Speech) guidance for appropriate make-up; 2) daily weather information retrieved via a WWW service; 3) health monitoring/diagnosis based on Chinese medicine knowledge. The second, 'display-mode', shows decorative pictures, family photos, art paintings, and so on; it is activated when no face has been detected for some time. In display-mode, we also added a 'healing-window' function and a 'healing-music player' function for the user's psychological comfort and relaxation. All of these functions are accessible through a commercially available voice synthesis/recognition package.
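
The face-driven mode switching described above can be illustrated with a short sketch. This is a hypothetical re-creation rather than the authors' implementation; it assumes OpenCV's bundled Haar cascade for face detection, and the timeout constant is an illustrative placeholder, not a value from the paper.

```python
import time
import cv2  # OpenCV, used here for webcam capture and face detection

# Hypothetical timeout before falling back to display-mode (not from the paper).
NO_FACE_TIMEOUT_SEC = 10.0

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # the USB webcam

mode = "display"
last_face_time = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        last_face_time = time.time()
        mode = "mirror"    # user present: make-up aid, weather, health services
    elif time.time() - last_face_time > NO_FACE_TIMEOUT_SEC:
        mode = "display"   # no user for a while: photos, healing-window/music

cap.release()
```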

A comparative study of filter methods based on information entropy

  • Kim, Jung-Tae;Kum, Ho-Yeun;Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology / v.40 no.5 / pp.437-446 / 2016
  • Feature selection has become an essential technique for reducing the dimensionality of data sets, since many features are irrelevant or redundant for classification tasks. The purpose of feature selection is to retain relevant features and remove irrelevant and redundant ones. Applications of feature selection range from text processing, face recognition, bioinformatics, speaker verification, and medical diagnosis to financial domains. In this study, we focus on filter methods based on information entropy: IG (Information Gain), FCBF (Fast Correlation Based Filter), and mRMR (minimum Redundancy Maximum Relevance). FCBF has the advantage of reducing the computational burden by eliminating redundant features that satisfy the condition of an approximate Markov blanket. However, FCBF considers only the relevance between each feature and the class when selecting the best features, failing to take the interaction between features into account. In this paper, we propose an improved FCBF to overcome this shortcoming, and we perform a comparative study to evaluate the performance of the proposed method.
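
As a rough illustration of the entropy-based criteria these filters share, the sketch below computes information gain and the symmetrical uncertainty measure that FCBF ranks features by. It is a minimal sketch for discrete features, not the improved FCBF proposed in the paper.

```python
import numpy as np

def entropy(x):
    """Shannon entropy of a discrete sequence, in bits."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(f, c):
    """IG(C; F) = H(C) - H(C|F) for a discrete feature f and class labels c."""
    h_c_given_f = 0.0
    for v in np.unique(f):
        mask = (f == v)
        h_c_given_f += mask.mean() * entropy(c[mask])
    return entropy(c) - h_c_given_f

def symmetrical_uncertainty(f, c):
    """SU(F, C) = 2 * IG / (H(F) + H(C)). FCBF ranks features by SU against
    the class and then prunes redundant features via approximate
    Markov-blanket checks between feature pairs."""
    denom = entropy(f) + entropy(c)
    return 2.0 * information_gain(f, c) / denom if denom > 0 else 0.0
```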

ACOUSTIC FEATURES DIFFERENTIATING KOREAN MEDIAL LAX AND TENSE STOPS

  • Shin, Ji-Hye
    • Proceedings of the KSPS conference / 1996.10a / pp.53-69 / 1996
  • Much research has been done on the cues differentiating the three Korean stops in word-initial position. This paper focuses on a more neglected area: the acoustic cues differentiating the medial tense and lax unaspirated stops. Eight adult Korean native speakers, four males and four females, pronounced sixteen minimal pairs containing the two series of medial stops with different preceding vowel qualities. The average duration of vowels before lax stops is 31 msec longer than before their tense counterparts (70 msec for lax vs. 39 msec for tense). In addition, the average duration of the stop closure of tense stops is 135 msec longer than that of lax stops (69 msec for lax vs. 204 msec for tense). These durational differences are so large that they may be phonologically determined, not phonetically. Moreover, vowel duration varies with the speaker's sex: female speakers have 5 msec shorter vowel duration before both stops. The quality of voicing, tense or lax, is also a cue to these two stop types, as it is in initial position, but the relative duration of the stops appears to be a much more important cue. The duration of the stop changes stop perception, while that of the preceding vowel does not. The consequences of these results for the phonological description of Korean, as well as for the synthesis and automatic recognition of Korean, will be discussed.
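
The reported means suggest a simple decision rule in which closure duration, not preceding-vowel duration, separates the two series. The toy classifier below encodes the paper's mean values; the midpoint threshold is our own illustrative choice, not taken from the paper.

```python
# Mean durations reported above (msec): vowel 70 (lax) vs. 39 (tense);
# stop closure 69 (lax) vs. 204 (tense). Midpoint used as a toy threshold.
CLOSURE_THRESHOLD_MS = (69 + 204) / 2  # 136.5 msec

def classify_medial_stop(closure_ms):
    """Label a Korean medial unaspirated stop as 'tense' or 'lax' from its
    closure duration alone. Closure duration is treated as the primary cue,
    consistent with the perceptual finding that it changes stop perception
    while the preceding-vowel duration does not."""
    return "tense" if closure_ms >= CLOSURE_THRESHOLD_MS else "lax"
```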

Speech sound and personality impression (말소리와 성격 이미지)

  • Lee, Eunyung;Yuh, Heaok
    • Phonetics and Speech Sciences / v.9 no.4 / pp.59-67 / 2017
  • Regardless of their intention, listeners tend to assess speakers' personalities based on the sounds of the speech they hear. Assessment criteria, however, have not been fully investigated, and it remains unclear whether there is any relationship between the acoustic cues of produced speech and the perceived personality impression. If properly investigated, the potential relationship between the two would provide crucial insights into human communication and, further, into human-computer interaction. Since human communication is characterized by simultaneity and complexity, such an investigation requires identifying the minimum essential factors linking speech sounds and perceived personality impression. The purpose of this study, therefore, is to identify significant associations between speech sounds and the speaker's personality impression as perceived by listeners. Twenty-eight subjects participated in the experiment, and eight acoustic parameters were extracted from the recorded speech using Praat. The subjects also completed the NEO Five-Factor Inventory so that their personality traits could be measured. The results show that four major factors (duration average, pitch difference value, pitch average, and intensity average) play crucial roles in defining the significant relationship.
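
Since the study extracted its parameters with Praat, the four significant factors can be approximated via the praat-parselmouth Python bindings. This is a sketch under that assumption, not the authors' exact measurement procedure; in particular, "duration average" is approximated here as total utterance duration.

```python
import numpy as np
import parselmouth  # Python bindings to Praat (praat-parselmouth package)

def personality_cues(wav_path):
    """Approximate the four significant parameters reported above: duration
    average, pitch difference value (range), pitch average, and intensity
    average."""
    snd = parselmouth.Sound(wav_path)

    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]  # drop unvoiced frames, where Praat reports 0 Hz

    intensity = snd.to_intensity()

    return {
        "duration_s": snd.get_total_duration(),
        "pitch_mean_hz": float(np.mean(f0)),
        "pitch_range_hz": float(np.max(f0) - np.min(f0)),
        "intensity_mean_db": float(np.mean(intensity.values)),
    }
```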

An Analysis of Acoustic Features Caused by Articulatory Changes for Korean Distant-Talking Speech

  • Kim Sunhee;Park Soyoung;Yoo Chang D.
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.71-76 / 2005
  • Compared with normal speech, distant-talking speech is characterized by the acoustic effects of interfering sound and echoes, as well as by articulatory changes resulting from the speaker's effort to be more intelligible. In this paper, the acoustic features of distant-talking speech due to these articulatory changes are analyzed and compared with those of the Lombard effect. To examine the effect of different distances and articulatory changes, speech recognition experiments were conducted using HTK on normal speech as well as on distant-talking speech at different distances. The speech data consist of 4,500 distant-talking utterances and 4,500 normal utterances from 90 speakers (56 males and 34 females). The acoustic features selected for analysis were duration, formants (F1 and F2), fundamental frequency, total energy, and energy distribution. The results show that the acoustic-phonetic features of distant-talking speech correspond largely to those of Lombard speech: the main acoustic changes from normal to distant-talking speech are an increase in vowel duration, shifts in the first and second formants, an increase in fundamental frequency, an increase in total energy, and a shift of energy from the low frequency band to the middle and high bands.
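
Two of the analyzed features, total energy and its spectral distribution, can be sketched with plain numpy as below. The 1 kHz band split is an illustrative choice, not a value from the paper.

```python
import numpy as np

def band_energy_distribution(samples, sample_rate, split_hz=1000.0):
    """Total energy and the fraction of spectral energy above split_hz.
    Distant-talking (and Lombard) speech tends to shift energy from the
    low band toward the middle and high bands."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    high_fraction = spectrum[freqs >= split_hz].sum() / total
    return total, high_fraction
```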

Speaker Adaptation in VQ and HMM Based Speech Recognition (VQ와 HMM을 이용한 음성인식에서 화자적응에 관한 연구)

  • 이대룡
    • Proceedings of the Acoustical Society of Korea Conference / 1991.06a / pp.54-57 / 1991
  • In this paper, we build speaker-dependent and speaker-independent isolated-word speech recognition systems based on HMM and VQ, and study methods for adapting them to new speakers. Speaker adaptation methods fall broadly into two categories: adapting the VQ codebook and adapting the HMM parameters. For codebook adaptation, we propose two algorithms. In the first, the adaptation speech of a new speaker is quantized against the reference codebook, and each code vector is replaced by the mean of the adaptation frames assigned to it. In the second, when quantizing the new speaker's adaptation speech against the reference codebook, the probability of each HMM state emitting each code vector is taken into account in the distance computation, so that an adaptation frame can be indexed to a code vector with a large distance error if the emission probability of that code vector is sufficiently high; the mean of all adaptation frames assigned to each code vector then forms the new codebook. Compared with building a new codebook from the adaptation data by running the LBG algorithm initialized with the reference codebook, this reduces computation time by a factor of 5 to 10. The adaptation data are then re-indexed with the new codebook, and the resulting index sequences are used to adapt the HMM parameters. For codebook adaptation, the proposed algorithms achieved better recognition rates than the conventional adaptation method while cutting computation time by a factor of 5 to 10. Moreover, for the same adaptation method, adapting a speaker-independent model gave better recognition results than adapting a speaker-dependent model.
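
The first of the two proposed codebook-adaptation algorithms follows directly from the description above: quantize the adaptation speech against the reference codebook, then replace each code vector with the mean of the frames assigned to it. A minimal numpy sketch:

```python
import numpy as np

def adapt_codebook(ref_codebook, adapt_frames):
    """One-pass codebook adaptation (first proposed method).

    ref_codebook : (K, D) reference code vectors
    adapt_frames : (N, D) feature frames from the new speaker
    Returns a (K, D) adapted codebook; code vectors with no assigned
    frames keep their reference values."""
    # Assign each adaptation frame to its nearest reference code vector.
    dists = np.linalg.norm(
        adapt_frames[:, None, :] - ref_codebook[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)

    new_codebook = ref_codebook.copy()
    for k in range(len(ref_codebook)):
        assigned = adapt_frames[assignment == k]
        if len(assigned) > 0:
            new_codebook[k] = assigned.mean(axis=0)
    return new_codebook
```

Because this is a single pass rather than iterated LBG refinement, it reflects the 5-10x computation saving claimed in the abstract.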

Real-Time Implementation of Speaker Dependent Speech Recognition Hardware Module Using the TMS320C32 DSP : VR32 (TMS320C32 DSP를 이용한 실시간 화자종속 음성인식 하드웨어 모듈(VR32) 구현)

  • Chung, Ik-Joo;Chung, Hoon
    • The Journal of the Acoustical Society of Korea / v.17 no.4 / pp.14-22 / 1998
  • In this study, we developed a real-time speaker-dependent speech recognition hardware module (VR32) using the TMS320C32, a low-cost floating-point digital signal processor (DSP) from Texas Instruments. The hardware module consists of a 40 MHz TMS320C32 DSP, a 14-bit TLC32044 codec (or an 8-bit μ-law PCM codec), memory such as EPROM and SRAM, and logic circuitry for the host interface. We also developed a PC interface board and software for evaluating the module on a PC. The recognition algorithm performs endpoint detection based on energy and the zero-crossing rate (ZCR) together with 10th-order weighted LPC cepstrum analysis in real time; the most similar word is then determined via Dynamic Time Warping (DTW), and a verification stage produces the final recognition result. Endpoint detection uses an adaptive threshold, making it robust to noise, and the DTW computation was greatly accelerated through optimization in C and assembly. The current recognition rate exceeds 95% for a 30-word vocabulary suitable for speed-dial applications in a typical office environment, and the module also performs well in noisy conditions such as background music or car noise.
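
The DTW matching stage at the core of the recognizer can be illustrated with a standard dynamic-programming implementation; this is a minimal sketch, not the optimized C/assembly version described above.

```python
import numpy as np

def dtw_distance(ref, test):
    """Dynamic time warping distance between two cepstral sequences of
    shape (T, D), using the usual match/insert/delete local path."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognize(templates, test):
    """Speaker-dependent isolated-word recognition: pick the vocabulary
    word whose stored template is closest to the test utterance."""
    return min(templates, key=lambda w: dtw_distance(templates[w], test))
```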

Implementation of HMM-Based Speech Recognizer Using TMS320C6711 DSP

  • Bae Hyojoon;Jung Sungyun;Son Jongmok;Kwon Hongseok;Kim Siho;Bae Keunsung
    • Proceedings of the IEEK Conference / summer / pp.391-394 / 2004
  • This paper focuses on the DSP implementation of an HMM-based speech recognizer that is speaker-independent and can handle a vocabulary of several hundred words. First, we develop an HMM-based speech recognition system on a PC that operates on a frame basis, processing feature extraction and Viterbi decoding in parallel to keep the processing delay as small as possible. Techniques such as linear discriminant analysis, state-based Gaussian selection, and a phonetic tied-mixture model are employed to reduce the computational burden and memory size. The system is then optimized and compiled on the TMS320C6711 DSP for real-time operation. The implemented system uses 486 Kbytes of memory for data and acoustic models, and 24.5 Kbytes for program code. A maximum required time of 29.2 ms for processing a 32 ms frame of speech validates the real-time operation of the implemented system.
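
The frame-synchronous Viterbi decoding mentioned above can be sketched in the log domain; a minimal version for a single left-to-right HMM, ignoring the Gaussian-selection and tied-mixture optimizations the paper employs.

```python
import numpy as np

def viterbi_log(log_trans, log_obs):
    """Frame-based Viterbi decoding in the log domain.

    log_trans : (S, S) log transition probabilities
    log_obs   : (T, S) per-frame log observation likelihoods; these can be
                computed frame by frame, in parallel with feature
                extraction, to keep the processing delay small.
    Returns the best final log score."""
    T, S = log_obs.shape
    delta = np.full(S, -np.inf)
    delta[0] = log_obs[0, 0]  # left-to-right model starts in state 0
    for t in range(1, T):
        delta = np.max(delta[:, None] + log_trans, axis=0) + log_obs[t]
    return delta.max()
```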

Design of Model to Recognize Emotional States in a Speech

  • Kim Yi-Gon;Bae Young-Chul
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.1 / pp.27-32 / 2006
  • Verbal communication is the most commonly used means of communication. A spoken word carries a great deal of information about speakers and their emotional states. In this paper, we designed a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyze the emotional state of a speaker in relation to speech. To analyze the signal output, we used three characteristics of sound as vector inputs: the frequency, intensity, and period of tones. We also made use of eight basic emotional parameters: surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, as portrayed by five selected students. To facilitate the differentiation of the spectral features, we used wavelet transform analysis. We applied ANFIS (Adaptive Neuro-Fuzzy Inference System) in designing an emotion recognition model from speech. In our findings, the inference error was about 10%. The results of our experiment indicate that the model is about 85% effective and reliable.
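
The wavelet-based feature step can be illustrated with PyWavelets: decompose the signal and use per-subband energies as part of the feature vector fed to the ANFIS stage. A sketch assuming the db4 wavelet and 4 decomposition levels, neither of which the abstract specifies.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_band_energies(samples, wavelet="db4", level=4):
    """Multilevel DWT of the speech signal; the log energy of each subband
    serves as one element of the feature vector for the emotion model."""
    coeffs = pywt.wavedec(samples, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])
```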

Modified Weighting Model Rank Method for Improving the Performance of Real-Time Text-Independent Speaker Recognition System (실시간 문맥독립 화자인식 시스템의 성능향상을 위한 수정된 가중모델순위 결정방법)

  • Kim Min-Joung;Oh Se-Jin;Suk Su-Young;Chung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.107-110 / 2002
  • Among the speaker identification systems developed to date, those using the Weighting Model Rank (WMR) method have shown relatively high recognition performance. The WMR method replaces each frame likelihood with an exponential weight determined by the rank of that likelihood across speaker models, but this discards the discriminative information of the original likelihoods from the overall score computation. To address this problem, this paper proposes a Modified Weighting Model Rank (MWMR) method that computes the overall score from the product of each speaker's frame likelihood and the rank-based exponential weight. To verify the effectiveness of the proposed method, recognition experiments were conducted with 316 speakers. With 10,000 training frames, the MWMR method achieved a speaker recognition rate of 98.1%, about 2.0% higher than the WMR method, confirming the effectiveness of the proposed approach.
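
The difference between WMR and the proposed MWMR can be written out directly: WMR accumulates only the rank-based exponential weights, while MWMR multiplies each frame likelihood by its weight, preserving the original discriminative information. A minimal sketch; the exact weight function below is illustrative, not the one defined in the papers.

```python
import numpy as np

def rank_weights(frame_scores, alpha=0.5):
    """Exponential weight per speaker model from the rank (0 = best) of
    each model's likelihood on this frame. Illustrative weight function."""
    order = np.argsort(-frame_scores)  # descending likelihood
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(frame_scores))
    return np.exp(-alpha * ranks)

def wmr_score(all_scores):
    """all_scores: (T, M) frame likelihoods for M speaker models.
    WMR: accumulate rank weights only; the likelihoods are discarded."""
    return sum(rank_weights(s) for s in all_scores)

def mwmr_score(all_scores):
    """MWMR: accumulate likelihood x rank-weight, keeping discriminability."""
    return sum(s * rank_weights(s) for s in all_scores)

# In both cases the identified speaker is the argmax of the accumulated score.
```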
