• Title/Abstract/Keyword: speaker verification (SV)

Search results: 4

Performance Comparison of Deep Feature Based Speaker Verification Systems (깊은 신경망 특징 기반 화자 검증 시스템의 성능 비교)

  • 김대현;성우경;김홍국 / Phonetics and Speech Sciences (말소리와 음성과학) / Vol. 7, No. 4 / pp. 9-16 / 2015
  • In this paper, several experiments with deep neural network (DNN) based features are performed to compare the performance of speaker verification (SV) systems. To this end, input features for a DNN, such as mel-frequency cepstral coefficients (MFCC), linear-frequency cepstral coefficients (LFCC), and perceptual linear prediction (PLP) features, are first compared in terms of SV performance. The effect of the DNN training method and of the hidden-layer structure on SV performance is then investigated for each feature type. The performance of the SV system is evaluated using i-vector or probabilistic linear discriminant analysis (PLDA) scoring. The SV experiments show that a tandem feature, combining the DNN bottleneck feature with the MFCC feature, gives the best performance when the DNNs are configured with a rectangular hidden-layer structure and trained with a supervised training method.
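
The tandem feature described above amounts to a frame-wise concatenation of the DNN bottleneck output with the MFCC vector. The following is a minimal Python sketch of that combination; the use of librosa for MFCC extraction, the bottleneck extractor passed in as a callable, and the feature dimensions are illustrative assumptions rather than the authors' setup.

```python
# Minimal sketch: build a tandem feature by concatenating DNN bottleneck
# features with static MFCCs, frame by frame (assumptions noted above).
import numpy as np
import librosa

def tandem_features(wav_path, bottleneck_extractor, sr=16000, n_mfcc=20):
    """Frame-wise tandem feature: [DNN bottleneck feature, MFCC]."""
    y, _ = librosa.load(wav_path, sr=sr)
    # Static MFCCs, one row per frame: (n_frames, n_mfcc).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
    # Bottleneck features from a trained DNN: (n_frames, bnf_dim);
    # bottleneck_extractor is a hypothetical callable supplied by the user.
    bnf = bottleneck_extractor(mfcc)
    # Concatenate per frame to form the tandem feature.
    return np.concatenate([bnf, mfcc], axis=1)
```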

Fast Sequential Probability Ratio Test Method to Obtain Consistent Results in Speaker Verification (화자확인에서 일정한 결과를 얻기 위한 빠른 순시 확률비 테스트 방법)

  • 김은영;서창우;전성채 / Phonetics and Speech Sciences (말소리와 음성과학) / Vol. 2, No. 2 / pp. 63-68 / 2010
  • A new version of the sequential probability ratio test (SPRT), which has been studied for utterance-length control, is proposed to obtain uniform response results in speaker verification (SV). Although the SPRT can produce fast decisions in SV tests, its performance can vary with the composition of consonants and vowels in the sentences used. This paper proposes a fast sequential probability ratio test (FSPRT) method that shows consistent performance regardless of the composition of the spoken sentences. When generating frames, the FSPRT first runs the SV test using only non-overlapping frames; if the result does not satisfy the discrimination criteria, it then sequentially adds frames generated with overlapping. Because the test proceeds in this way, it is not affected by the sentence composition, so fast responses and consistent performance are obtained. Experimental results show that the FSPRT outperforms the SPRT method in terms of equal error rate (EER) while requiring less complexity.
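
The decision rule described above is, at its core, a Wald-style sequential probability ratio test over per-frame scores, retried with overlapping frames only when the first pass is undecided. Below is a minimal Python sketch of such a two-stage loop; the thresholds, the per-frame log-likelihood ratios, and the exact retry policy are assumptions for illustration, not the FSPRT as published.

```python
# Sketch of a two-stage SPRT-style verification loop (assumptions above).
import math

def sprt_verify(frame_llrs, alpha=0.01, beta=0.01):
    """Wald-style SPRT over per-frame log-likelihood ratios
    (target log-likelihood minus impostor log-likelihood)."""
    upper = math.log((1 - beta) / alpha)   # accept threshold
    lower = math.log(beta / (1 - alpha))   # reject threshold
    total = 0.0
    for llr in frame_llrs:
        total += llr
        if total >= upper:
            return "accept"
        if total <= lower:
            return "reject"
    return "undecided"                     # ran out of frames

def fsprt_verify(frames_without_overlap, frames_with_overlap):
    """Two-stage test in the spirit of the FSPRT: decide on non-overlapping
    frames first and fall back to overlapping frames only if undecided."""
    decision = sprt_verify(frames_without_overlap)
    if decision == "undecided":
        decision = sprt_verify(frames_with_overlap)
    return decision
```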


Effective Combination of Temporal Information and Linear Transformation of Feature Vector in Speaker Verification (화자확인에서 특징벡터의 순시 정보와 선형 변환의 효과적인 적용)

  • 서창우;조미화;임영환;전성채 / Phonetics and Speech Sciences (말소리와 음성과학) / Vol. 1, No. 4 / pp. 127-132 / 2009
  • The feature vectors used in conventional speaker recognition (SR) systems may be highly correlated with their neighbors. To improve SR performance, many researchers have adopted linear transformation methods such as principal component analysis (PCA). In general, the linear transformation is applied to the concatenation of the static features and their dynamic features. However, a linear transformation based on both the static and dynamic features is more complex than one based on the static features alone because of the higher feature dimension. To overcome this problem, we propose an efficient method that combines a linear transformation with the temporal information of the features to reduce complexity and improve performance in speaker verification (SV). The proposed method first performs a linear transformation with PCA coefficients; the delta parameters carrying temporal information are then obtained from the transformed features. The proposed method requires only a quarter of the covariance-matrix size needed when the PCA coefficients are computed from the concatenated static and dynamic features. In addition, the delta parameters are extracted from the linearly transformed features after the dimension of the static features has been reduced. Compared with PCA and conventional methods in terms of equal error rate (EER) in SV, the proposed method shows better performance while requiring less storage space and complexity.
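
The key ordering in the abstract is "transform first, then take deltas": PCA sees only the low-dimensional static features, and temporal information is added afterwards from the projected frames. A minimal Python sketch of that ordering follows, using scikit-learn's PCA and librosa's delta computation; the dimensions and the per-utterance PCA fit are illustrative assumptions (in practice the PCA basis would be estimated on the training corpus rather than per utterance).

```python
# Sketch of "PCA on static features first, deltas afterwards" (assumptions above).
import numpy as np
import librosa
from sklearn.decomposition import PCA

def pca_then_deltas(static_feats, n_components=12):
    """static_feats: (n_frames, dim) static features of one utterance."""
    # Linear transformation of the static features only (low dimension).
    pca = PCA(n_components=n_components)
    transformed = pca.fit_transform(static_feats)          # (n_frames, n_components)
    # Temporal (delta) parameters computed AFTER the transformation.
    deltas = librosa.feature.delta(transformed.T).T        # (n_frames, n_components)
    # Final feature: transformed statics plus their deltas.
    return np.concatenate([transformed, deltas], axis=1)   # (n_frames, 2*n_components)
```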


α-feature map scaling for raw waveform speaker verification (α-특징 지도 스케일링을 이용한 원시파형 화자 인증)

  • 정지원;심혜진;김주호;유하진 / The Journal of the Acoustical Society of Korea (한국음향학회지) / Vol. 39, No. 5 / pp. 441-446 / 2020
  • This paper proposes α-FMS, an extension of the existing feature map scaling (FMS) technique, to strengthen the discriminative power of each feature map inside a deep neural network for speaker verification (SV). The conventional FMS technique derives a scale vector from a feature map and then adds it to the feature map, multiplies the feature map by it, or applies the two operations in sequence. However, FMS not only reuses the same scale vector for both the addition and the multiplication, but the scale vector itself is computed with a sigmoid non-linear activation function, so its range of values is limited when it is used for the addition. To overcome these limitations, α-FMS is designed so that a separate trainable parameter α is added element-wise to the feature map, after which the feature map is multiplied by the scale vector. Two variants of the proposed α-FMS are compared: learning a scalar α that applies the same value to every filter of the feature map, and learning a vector α that applies a different value to each filter. Both variants are applied after each residual block inside the deep neural network. To verify the effectiveness of the proposed techniques, the models were trained with the RawNet2 training set and evaluated on the VoxCeleb1 evaluation set, yielding equal error rates of 2.47 % and 2.31 %, respectively.
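
Read literally, α-FMS adds a learnable α to each element of a feature map and then multiplies the result by a sigmoid scale vector derived from that feature map. The following PyTorch sketch shows one way to realize this with either a scalar α or a per-filter vector α; the (batch, filters, time) tensor layout, the pooling used to form the scale vector, and the module placement are my reading of the abstract, not the authors' released code.

```python
# Sketch of the α-FMS idea described in the abstract (assumptions above).
import torch
import torch.nn as nn

class AlphaFMS(nn.Module):
    """Add a learnable α element-wise, then multiply by a sigmoid scale
    vector derived from the feature map."""

    def __init__(self, num_filters, per_filter_alpha=True):
        super().__init__()
        # Vector α: one value per filter; scalar α: one value for all filters.
        shape = (1, num_filters, 1) if per_filter_alpha else (1, 1, 1)
        self.alpha = nn.Parameter(torch.zeros(shape))
        self.fc = nn.Linear(num_filters, num_filters)

    def forward(self, x):                                  # x: (batch, filters, time)
        # Scale vector from the feature map: average over time, FC, sigmoid.
        scale = torch.sigmoid(self.fc(x.mean(dim=-1))).unsqueeze(-1)
        # α-FMS: element-wise addition of α, then multiplication by the scale.
        return (x + self.alpha) * scale

# Example placement: after each residual block, e.g.
# fms = AlphaFMS(num_filters=128); y = fms(torch.randn(8, 128, 200))
```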