• Title/Abstract/Keyword: Speech Data Classification

A Study on the Optimal Mahalanobis Distance for Speech Recognition

  • Lee, Chang-Young
    • 음성과학
    • /
    • Vol. 13, No. 4
    • /
    • pp.177-186
    • /
    • 2006
  • In an effort to enhance the quality of feature vector classification and thereby reduce the recognition error rate of speaker-independent speech recognition, we employ the Mahalanobis distance in calculating the similarity measure between feature vectors. The metric matrix of the Mahalanobis distance is assumed to be diagonal to reduce memory and computation costs. We propose that the diagonal elements be given in terms of the variances of the feature vector components. Geometrically, this prescription tends to redistribute the set of data into the shape of a hypersphere in the feature vector space. The idea is applied to speech recognition by a hidden Markov model with fuzzy vector quantization. The result shows that recognition is improved by an appropriate choice of the relevant adjustable parameter. The Viterbi score difference of the two winners in the recognition test shows that its general behavior is in accord with that of the recognition error rate.
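
As a point of reference (our gloss, not the paper's exact prescription), the diagonal Mahalanobis distance between feature vectors takes the standard form below; weighting each axis by the component's variance is what reshapes the data cloud toward a hypersphere:

```latex
d(\mathbf{x},\mathbf{y}) \;=\; \sqrt{\sum_{i=1}^{D} \frac{(x_i - y_i)^2}{\sigma_i^{2}}}
```

Here \(\sigma_i^2\) is the variance of the \(i\)-th feature component. (The paper's adjustable parameter presumably tunes this weighting; its exact role is not specified in the abstract.)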

Readability Enhancement of English Speech Recognition Output Using Automatic Capitalisation Classification

  • 김지환
    • 대한음성학회지:말소리
    • /
    • No. 61
    • /
    • pp.101-111
    • /
    • 2007
  • A modified speech recogniser has been proposed for automatic capitalisation generation to improve the readability of English speech recognition output. In this modified recogniser, every word in the vocabulary is duplicated: once in a de-capitalised form and again in its capitalised forms. In addition, its language model is re-trained on mixed-case texts. To evaluate the performance of the proposed system, automatic capitalisation experiments were performed on 3 hours of Broadcast News (BN) test data using the modified HTK BN transcription system. The proposed system produced an F-measure of 0.7317 for automatic capitalisation generation, with an SER of 48.55, a precision of 0.7736, and a recall of 0.6942.
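
A quick arithmetic check (ours, not the paper's): the reported F-measure is the usual harmonic mean of the reported precision and recall.

```python
# F-measure as the harmonic mean of precision and recall.
precision = 0.7736
recall = 0.6942

f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.4f}")  # 0.7318, matching the reported 0.7317 up to rounding
```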

Adaptive Speech Streaming Based on Packet Loss Prediction Using Support Vector Machine for Software-Based Multipoint Control Unit over IP Networks

  • Kang, Jin Ah;Han, Mikyong;Jang, Jong-Hyun;Kim, Hong Kook
    • ETRI Journal
    • /
    • Vol. 38, No. 6
    • /
    • pp.1064-1073
    • /
    • 2016
  • An adaptive speech streaming method to improve the perceived speech quality of a software-based multipoint control unit (SW-based MCU) over IP networks is proposed. First, the proposed method predicts whether the speech packet to be transmitted will be lost. To this end, it learns the pattern of packet losses in the IP network and then predicts the loss of the packet to be transmitted over that network. It also classifies each speech frame as silence, unvoiced, speech onset, or voiced. Based on the results of packet loss prediction and speech classification, the proposed method determines the proper amount and bitrate of redundant speech data (RSD) sent with the primary speech data (PSD) to help the speech decoder restore the speech signals of lost packets. Specifically, when a packet is predicted to be lost, the amount and bitrate of the RSD are increased through a reduction in the bitrate of the PSD. The effectiveness of the proposed method for learning the packet loss pattern and assigning different speech coding rates is then demonstrated using a support vector machine and adaptive multirate-narrowband (AMR-NB), respectively. The results show that, compared with conventional methods that restore lost speech signals, the proposed method remarkably improves the perceived speech quality of an SW-based MCU under various packet loss conditions in an IP network.
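
A minimal sketch of the loss-prediction step, assuming the SVM is trained on a sliding window of recent per-packet loss flags; the window size, toy trace, and AMR-NB mode choices are illustrative assumptions, not the paper's settings.

```python
# Sketch: predict whether the next packet will be lost from the recent
# loss pattern, then shift bitrate from primary to redundant speech data.
import numpy as np
from sklearn.svm import SVC

WINDOW = 8  # number of past loss flags used as features (assumption)

def make_dataset(loss_trace):
    """Turn a 0/1 loss trace into (window, next-flag) training pairs."""
    X = [loss_trace[i:i + WINDOW] for i in range(len(loss_trace) - WINDOW)]
    y = [loss_trace[i + WINDOW] for i in range(len(loss_trace) - WINDOW)]
    return np.array(X), np.array(y)

# Toy bursty loss trace, standing in for measured network data.
trace = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0] * 20
X, y = make_dataset(trace)

clf = SVC(kernel="rbf").fit(X, y)

recent = np.array([trace[-WINDOW:]])
if clf.predict(recent)[0] == 1:
    psd_kbps, rsd_kbps = 6.7, 5.15   # shrink primary, grow redundant (AMR-NB modes)
else:
    psd_kbps, rsd_kbps = 12.2, 0.0   # full-rate primary, no redundancy
print(psd_kbps, rsd_kbps)
```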

Classification of Pornographic Videos Using Audio Information

  • 김봉완;최대림;방만원;이용주
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집
    • /
    • pp.207-210
    • /
    • 2007
  • As the Internet has become prevalent in everyday life, harmful content on it has been increasing, which has become a very serious problem. Among such content, pornographic video is particularly harmful to children. To block it, there are many filtering systems based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on audio information. We use Mel-Cepstrum Modulation Energy (MCME), which is the modulation energy calculated on the time trajectory of the Mel-Frequency Cepstral Coefficients (MFCC), together with MFCC as the feature vector, and a Gaussian Mixture Model (GMM) as the classifier. In the experiments, the proposed system correctly classified 97.5% of the pornographic data and 99.5% of the non-pornographic data. We expect that the proposed method can be used as a component of a more accurate classification system that uses video and audio information simultaneously.
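
A hedged sketch of the GMM classification stage on MFCC features alone (the MCME computation is omitted); file paths, mixture counts, and frame settings are illustrative assumptions.

```python
# Sketch: two-GMM audio classifier over MFCC frames.
# One GMM per class; a clip is labeled by the higher average log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, 13)

# Illustrative training file lists (hypothetical paths).
porn_feats = np.vstack([mfcc_frames(p) for p in ["porn_01.wav", "porn_02.wav"]])
clean_feats = np.vstack([mfcc_frames(p) for p in ["clean_01.wav", "clean_02.wav"]])

gmm_porn = GaussianMixture(n_components=32, covariance_type="diag").fit(porn_feats)
gmm_clean = GaussianMixture(n_components=32, covariance_type="diag").fit(clean_feats)

def classify(path):
    frames = mfcc_frames(path)
    lp = gmm_porn.score(frames)   # mean per-frame log-likelihood
    lc = gmm_clean.score(frames)
    return "pornographic" if lp > lc else "non-pornographic"

print(classify("unknown_clip.wav"))
```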

Prediction of Break Indices in Korean Read Speech

  • 김효숙;김정원;김선주;김선철;김삼진;권철홍
    • 대한음성학회지:말소리
    • /
    • No. 43
    • /
    • pp.1-9
    • /
    • 2002
  • This study aims to model Korean prosodic phrasing using the CART (classification and regression tree) method. Our data are limited to Korean read speech: 400 sentences made up of editorials, essays, novels, and news scripts, read by a professional radio actress over about two hours. We used the K-ToBI transcription system; for technical reasons, the original break indices 1 and 2 were merged into AP, so unlike the original K-ToBI we use three break indices: Zero, AP, and IP. The linguistic information selected for this study is as follows: the number of syllables in an 'Eojeol', the location of the 'Eojeol' in the sentence, and the part-of-speech (POS) of adjacent 'Eojeol's. We trained a CART tree using this information as the variables. The average accuracy of predicting Non-IP (Zero and AP) versus IP was 90.4% on the training data and 88.5% on the test data. The average prediction accuracy of Zero versus AP was 79.7% on the training data and 78.7% on the test data.
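
A minimal sketch of the setup, with scikit-learn's decision tree standing in for CART; the integer feature encoding and toy labels are our illustrative assumptions.

```python
# Sketch: predict break index (0 = Zero, 1 = AP, 2 = IP) per Eojeol from
# syllable count, sentence position, and POS of neighbors (integer-coded).
from sklearn.tree import DecisionTreeClassifier

# Each row: [n_syllables, position_in_sentence, prev_POS_id, next_POS_id]
X_train = [
    [2, 0, 0, 3],   # sentence-initial Eojeol (toy encoding)
    [3, 1, 3, 5],
    [1, 2, 5, 0],
    [4, 3, 0, 7],   # sentence-final Eojeol
]
y_train = [0, 1, 0, 2]  # toy labels standing in for K-ToBI annotations

tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print(tree.predict([[3, 3, 5, 7]]))  # predicted break index for a new Eojeol
```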

Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models

  • 에드워드 카야디;한스 나타니엘 하디 수실로;송미화
    • 문화기술의 융합
    • /
    • Vol. 10, No. 1
    • /
    • pp.617-623
    • /
    • 2024
  • Given the complex relationship between language and emotion, identifying emotion from speech is recognized as an important task. This study aims to address this challenge by using feature engineering to identify emotion in spoken language through a multimodal classification task covering both speech and text data. Two classifiers, Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM), were integrated with a BERT-based pre-trained model and evaluated. The evaluation covers various performance metrics (accuracy, F-score, precision, and recall) across a range of experimental settings. The results demonstrate both models' strong ability to accurately identify emotion in text as well as speech data.
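
A hedged sketch of one way such an integration could look, assuming late fusion of a BERT sentence vector with CNN-pooled audio features; the architecture details below are our assumptions, not the paper's.

```python
# Sketch: late fusion of a BERT text embedding and a small audio CNN.
# Fusion style, layer sizes, and number of emotion classes are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultimodalEmotion(nn.Module):
    def __init__(self, n_classes=6, n_mels=64):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size vector
        )
        self.head = nn.Linear(768 + 128, n_classes)

    def forward(self, input_ids, attention_mask, mel):  # mel: (B, n_mels, T)
        text_vec = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).pooler_output
        audio_vec = self.audio_cnn(mel).squeeze(-1)
        return self.head(torch.cat([text_vec, audio_vec], dim=-1))

tok = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["I am so happy today"], return_tensors="pt")
model = MultimodalEmotion()
logits = model(batch["input_ids"], batch["attention_mask"],
               torch.randn(1, 64, 200))  # random mel-spectrogram placeholder
print(logits.shape)  # (1, 6)
```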

The Emotion Recognition System through the Extraction of Emotional Components from Speech

  • 박창현;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 10, No. 9
    • /
    • pp.763-770
    • /
    • 2004
  • The key issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for classifying the emotions, and feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch component of speech carries much information about emotion. Accordingly, this paper investigates the relation of emotion to features such as loudness and pitch, and classifies the emotions using statistics of the collected data. The most important emotional component of sound is tone, and the inferential ability of the brain also plays a part in emotion recognition. This paper empirically identifies the emotional components of speech, reports emotion recognition experiments, and proposes a recognition method using these emotional components and transition probabilities.
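
A minimal sketch of extracting the pitch and loudness statistics discussed above, assuming librosa's pYIN pitch tracker and RMS energy; the paper's exact feature set may differ.

```python
# Sketch: pitch and energy statistics as an emotion feature vector.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical file

f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
f0 = f0[~np.isnan(f0)]                    # keep voiced frames only
rms = librosa.feature.rms(y=y)[0]         # frame-level loudness

features = np.array([f0.mean(), f0.std(),     # pitch statistics
                     rms.mean(), rms.std()])  # energy statistics
print(features)
```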

Telephone Digit Speech Recognition Using Discriminant Learning

  • 한문성;최완수;권현직
    • 대한전자공학회논문지TE
    • /
    • Vol. 37, No. 3
    • /
    • pp.16-20
    • /
    • 2000
  • Most speech recognition systems use the HMM method, which is based on probabilistic models. For Korean isolated telephone digit recognition, the HMM method achieves a high recognition rate when sufficient training data are available. For Korean connected telephone digit recognition, however, the HMM method reaches its limits on similarly pronounced digits. In this paper, we present a discriminant learning method to overcome this limitation of the HMM method for Korean connected telephone digit recognition. Experimental results show that the proposed discriminant learning method achieves a high recognition rate on similarly pronounced telephone digits.
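
The abstract does not specify the discriminant criterion; one common discriminative-training formulation for HMM recognizers is the minimum classification error (MCE) criterion, sketched below under that assumption.

```python
# Sketch: MCE-style misclassification measure for confusable digit classes.
# g[k] is the (log-domain) discriminant score of class k for one utterance.
import numpy as np

def mce_loss(g, correct, eta=2.0, smooth=1.0):
    """Smoothed 0/1 loss: large when a wrong class outscores the right one."""
    others = np.delete(g, correct)
    # Soft maximum of competitor scores, compared against the correct score.
    d = -g[correct] + (1.0 / eta) * np.log(np.mean(np.exp(eta * others)))
    return 1.0 / (1.0 + np.exp(-smooth * d))  # sigmoid -> differentiable loss

g = np.array([2.1, 1.9, -0.5])  # toy scores for digits that sound alike
print(mce_loss(g, correct=0))   # lower: the correct class narrowly wins
print(mce_loss(g, correct=1))   # higher: class 1 is outscored by class 0
```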

Integration of WFST Language Model in Pre-trained Korean E2E ASR Model

  • Junseok Oh;Eunsoo Cho;Ji-Hwan Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 6
    • /
    • pp.1692-1705
    • /
    • 2024
  • In this paper, we present a method that integrates a Grammar Transducer as an external language model to enhance the accuracy of a pre-trained Korean end-to-end (E2E) Automatic Speech Recognition (ASR) model. The E2E ASR model utilizes the Connectionist Temporal Classification (CTC) loss function to derive hypothesis sentences from input audio. However, this method reveals a limitation inherent in the CTC approach: it fails to capture language information from transcript data directly. To overcome this limitation, we propose a fusion approach that combines a clause-level n-gram language model, transformed into a Weighted Finite-State Transducer (WFST), with the E2E ASR model. This approach enhances the model's accuracy and allows for domain adaptation using only additional text data, avoiding the need for further intensive training of the extensive pre-trained ASR model. This is particularly advantageous for Korean, a low-resource language that faces significant challenges due to limited speech data and available ASR models. We first validate the efficacy of training the n-gram model at the clause level by contrasting its inference accuracy with that of the E2E ASR model merged with language models trained on smaller lexical units. We then demonstrate that our approach achieves better domain adaptation accuracy than Shallow Fusion, a previously devised method for merging an external language model with an E2E ASR model without necessitating additional training.
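
For contrast with the WFST approach, a hedged sketch of the Shallow Fusion baseline mentioned above: at each beam-search step, the ASR log-probability is interpolated with an external LM log-probability. The interpolation weight and the toy LM below are illustrative assumptions.

```python
# Sketch: shallow fusion scoring inside a (greatly simplified) beam search.
# score(y) = log P_asr(y|x) + lam * log P_lm(y); lam is a tuned weight.
import math

def shallow_fusion_step(beam, asr_logprobs, lm_logprob, lam=0.3, width=4):
    """Extend each hypothesis by one token, rescoring with the external LM."""
    new_beam = []
    for tokens, score in beam:
        for tok, asr_lp in asr_logprobs(tokens).items():
            fused = score + asr_lp + lam * lm_logprob(tokens, tok)
            new_beam.append((tokens + [tok], fused))
    return sorted(new_beam, key=lambda h: h[1], reverse=True)[:width]

# Toy stand-ins for the ASR posterior and an external LM (illustrative only).
def asr_logprobs(tokens):
    return {"오늘": math.log(0.6), "오눌": math.log(0.4)}

def lm_logprob(tokens, tok):
    return math.log(0.9) if tok == "오늘" else math.log(0.1)

beam = shallow_fusion_step([([], 0.0)], asr_logprobs, lm_logprob)
print(beam[0])  # the LM pushes the correctly spelled word to the top
```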

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • 한국감성과학회:학술대회논문집
    • /
    • Proceedings of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium
    • /
    • pp.366-369
    • /
    • 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern-matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.
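
A hedged sketch of the MLB and NN baselines on pitch/energy statistics, with scikit-learn's Gaussian naive Bayes standing in for Maximum-Likelihood Bayes and a 1-nearest-neighbour classifier for NN; the toy feature values are illustrative.

```python
# Sketch: MLB- and NN-style emotion classifiers on pitch/energy statistics.
# Feature rows: [pitch_mean, pitch_std, energy_mean, energy_std]
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X_train = [
    [220.0, 45.0, 0.08, 0.03],  # happiness: high, variable pitch (toy values)
    [150.0, 12.0, 0.03, 0.01],  # sadness: low, flat pitch
    [240.0, 60.0, 0.10, 0.04],  # anger
    [180.0, 20.0, 0.05, 0.02],  # neutral
]
y_train = ["happiness", "sadness", "anger", "neutral"]

mlb = GaussianNB().fit(X_train, y_train)            # class-conditional Gaussians
nn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

x = [[210.0, 50.0, 0.09, 0.035]]
print(mlb.predict(x), nn.predict(x))
```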
