• Title/Summary/Keyword: Speech Data Classification

Classification of Pornographic Videos Based on the Audio Information (오디오 신호에 기반한 음란 동영상 판별)

  • Kim, Bong-Wan;Choi, Dae-Lim;Lee, Yong-Ju
    • MALSORI
    • /
    • no.63
    • /
    • pp.139-151
    • /
    • 2007
  • As the Internet becomes prevalent in our lives, harmful content, such as pornographic videos, has been increasing on the Internet, which has become a very serious problem. To prevent this, there are many filtering systems, mainly based on keyword- or image-based methods. The main purpose of this paper is to devise a system that classifies pornographic videos based on the audio information. We use the mel-cepstrum modulation energy (MCME), a modulation energy calculated on the time trajectory of the mel-frequency cepstral coefficients (MFCC), as well as the MFCC itself, as the feature vector. For the classifier, we use the well-known Gaussian mixture model (GMM). The experimental results showed that the proposed system correctly classified 98.3% of pornographic data and 99.8% of non-pornographic data. We expect that the proposed method can be applied to a more accurate classification system that uses both video and audio information.

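A minimal sketch of the pipeline this paper describes: MFCC frame features, a crude modulation-energy summary of the MFCC time trajectories standing in for MCME, and one GMM per class. The feature settings, file lists, and 16 kHz sample rate are assumptions, not the paper's configuration.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def clip_features(path, n_mfcc=13, n_bands=8):
    """MFCC frames plus a crude modulation-energy vector computed
    on the MFCC time trajectories (a stand-in for the paper's MCME)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)
    mod = np.abs(np.fft.rfft(mfcc, axis=0)) ** 2               # FFT along time
    bands = [b.sum(axis=0) for b in np.array_split(mod, n_bands, axis=0)]
    return mfcc, np.concatenate(bands)

def train_gmm(paths, n_comp=16):
    """One GMM over the pooled frame features of one class (paths assumed)."""
    frames = np.vstack([clip_features(p)[0] for p in paths])
    return GaussianMixture(n_components=n_comp, covariance_type="diag",
                           random_state=0).fit(frames)

def classify(path, gmm_porn, gmm_clean):
    frames, _ = clip_features(path)        # the MCME vector could be appended
    return "pornographic" if gmm_porn.score(frames) > gmm_clean.score(frames) \
        else "non-pornographic"
```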

Speech Emotion Recognition in People at High Risk of Dementia

  • Dongseon Kim;Bongwon Yi;Yugwon Won
    • Dementia and Neurocognitive Disorders
    • /
    • v.23 no.3
    • /
    • pp.146-160
    • /
    • 2024
  • Background and Purpose: The emotions of people at various stages of dementia need to be effectively utilized for prevention, early intervention, and care planning. With technology now available for understanding and addressing people's emotional needs, this study aims to develop speech emotion recognition (SER) technology to classify emotions of people at high risk of dementia. Methods: Speech samples from people at high risk of dementia were categorized into distinct emotions via human auditory assessment, and the outcomes were annotated to guide the deep-learning method. The architecture incorporated a convolutional neural network, long short-term memory, attention layers, and Wav2Vec2, a novel feature extractor, to develop automated speech emotion recognition. Results: Twenty-seven kinds of emotions were found in the speech of the participants. These were grouped into 6 detailed emotions (happiness, interest, sadness, frustration, anger, and neutrality) and further into 3 basic emotions (positive, negative, and neutral). To improve algorithmic performance, multiple learning approaches were applied using different data sources (voice and text) and varying numbers of emotions. Ultimately, a two-stage algorithm, with initial text-based classification followed by voice-based analysis, achieved the highest accuracy, reaching 70%. Conclusions: The diverse emotions identified in this study were attributed to the characteristics of the participants and the method of data collection. That the people at high risk of dementia were speaking to companion robots also explains the relatively low performance of the SER algorithm. Accordingly, this study suggests the systematic and comprehensive construction of a dataset from people with dementia.
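
For readers unfamiliar with Wav2Vec2 as a feature extractor, the sketch below pulls frame-level representations from a pretrained checkpoint; the checkpoint name is an assumption, and the paper's CNN/LSTM/attention emotion head (and the text-based first stage) is omitted.

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2Model

CKPT = "facebook/wav2vec2-base-960h"   # assumed checkpoint, not the paper's
processor = Wav2Vec2Processor.from_pretrained(CKPT)
wav2vec = Wav2Vec2Model.from_pretrained(CKPT)

def extract_features(waveform_16k):
    """Frame-level Wav2Vec2 features that a downstream emotion
    classifier would consume."""
    inputs = processor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        out = wav2vec(inputs.input_values)
    return out.last_hidden_state           # shape: (1, n_frames, 768)

feats = extract_features(np.zeros(16000, dtype=np.float32))  # 1 s of silence
```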

Meaning and Intonation of Endings with Polysemous Modality: Through the Analysis of the Spontaneous Speech (인식·행위 양태 다의성 어미의 의미와 억양 -구어 자유발화 분석을 통하여-)

  • Jo, Min-ha
    • Korean Linguistics
    • /
    • v.77
    • /
    • pp.331-357
    • /
    • 2017
  • The purpose of this paper is to identify the workings of intonation realized in sentence endings, through spoken-language data. To achieve this objective, this paper analyzed 300 minutes of spontaneous speech by women from Seoul and discussed the meanings of modality and their relationship with intonation. Intonation functions significantly in polysemous modal endings in epistemic and act modality. Epistemic modality is usually expressed through indirect and soft intonations such as L:, M:, and LH, whereas act modality is expressed through direct and strong intonations such as H, HL, and LHL. Intonation appears to be related to the degree of certainty of the information rather than to the classification of modality: lengthening relates to indirectness, H to uncertainty, L to statements or affirmation, and HL and LHL to an assertive attitude. This paper is significant in that it overcomes the abstractness of existing modality studies and engages in an objective and comprehensive analysis of actual spontaneous speech data.

Training Network Design Based on Convolutional Neural Network for Object Classification in a Few-Class Problem (소 부류 객체 분류를 위한 CNN기반 학습망 설계)

  • Lim, Su-chang;Kim, Seung-Hyun;Kim, Yeon-Ho;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.144-150
    • /
    • 2017
  • Recently, deep learning has been used for intelligent processing and for improving the accuracy of data analysis. It forms a computational model composed of multiple data-processing layers that learn representations of the data through abstraction at various levels. The convolutional neural network (CNN), a category of deep learning, is utilized in various research fields such as human pose estimation, face recognition, image classification, and speech recognition. CNNs with deep layers and many classes show good performance on image classification and obtain high classification rates, but the overfitting problem occurs when only a small amount of data is used. Therefore, we designed a CNN-based training network and trained it on our image dataset for object classification in a few-class problem. The experiments showed a classification rate 7.06% higher on average than previous networks designed to classify objects in a 1000-class problem.
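
A compact network in the spirit of the abstract: fewer layers and filters than an ImageNet-scale 1000-class model, plus dropout, to limit overfitting on a small few-class dataset. The layer sizes and the 224x224 input are illustrative, not the paper's design.

```python
import torch
import torch.nn as nn

class FewClassCNN(nn.Module):
    """Small CNN for a handful of object classes."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(64 * 28 * 28, n_classes),   # 224x224 input assumed
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FewClassCNN()(torch.randn(1, 3, 224, 224))  # smoke test
```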

A Study on Image Retrieval Using Sound Classifier (사운드 분류기를 이용한 영상검색에 관한 연구)

  • Kim, Seung-Han;Lee, Myeong-Sun;Roh, Seung-Yong
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.419-421
    • /
    • 2006
  • The importance of automatically discriminating image data has grown as a research topic over recent years. We used a feed-forward neural network as a classifier operating on sound features extracted from the image data, and our initial tests have shown encouraging results that indicate the viability of our approach.

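As a stand-in for the described classifier, the sketch below trains a small feed-forward network on placeholder sound-feature vectors; the paper's actual features and labels are not specified, so the data here is synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # placeholder sound-feature vectors
y = rng.integers(0, 2, size=200)  # placeholder content labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```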

Cepstral and spectral analysis of voices with adductor spasmodic dysphonia (내전형연축성 발성장애 음성에 대한 켑스트럼과 스펙트럼 분석)

  • Shim, Hee Jeong;Jung, Hun;Lee, Sue Ann;Choi, Byung Heun;Heo, Jeong Hwa;Ko, Do-Heung
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.73-80
    • /
    • 2016
  • The purpose of this study was to analyze perceptual and spectral/cepstral measurements in patients with adductor spasmodic dysphonia (ADSD). Sixty gender- and age-matched participants (30 with ADSD and 30 controls) were recorded reading a sentence and sustaining the vowel /a/. Acoustic data were analyzed by measuring the cepstral peak prominence (CPP), the low/high (L/H) spectral ratio, the mean CPP F0, and the Cepstral Spectral Index of Dysphonia (CSID), and auditory-perceptual ratings were measured using GRBAS. The main results can be summarized as follows: (a) the CSID for connected speech was significantly higher than for the sustained vowel; (b) the G, R, and S ratings for connected speech were significantly higher than for the sustained vowel; (c) spectral/cepstral parameters were significantly correlated with the perceptual parameters; and (d) ROC analysis showed that a CSID threshold of 13.491 achieved good classification of ADSD, with 86.7% sensitivity and 96.7% specificity. Spectral and cepstral analysis of connected speech is especially meaningful in cases where perceptual analysis and clinical evaluation alone are insufficient.
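
The CPP used above is, roughly, the height of the cepstral peak over a regression line fitted to the cepstrum. The sketch below is a simplified single-frame version under assumed analysis settings, not the clinical implementation behind the reported CSID.

```python
import numpy as np

def cepstral_peak_prominence(frame, sr, f0min=60.0, f0max=330.0):
    """Simplified CPP (dB): cepstral peak height above a line fitted
    over the quefrency range of plausible F0."""
    windowed = frame * np.hanning(len(frame))
    log_spec = 20 * np.log10(np.abs(np.fft.fft(windowed)) + 1e-10)
    cepstrum = 20 * np.log10(np.abs(np.fft.fft(log_spec)) + 1e-10)
    quef = np.arange(len(cepstrum)) / sr           # quefrency in seconds
    lo, hi = int(sr / f0max), int(sr / f0min)      # F0 search range
    peak = lo + int(np.argmax(cepstrum[lo:hi]))
    slope, icpt = np.polyfit(quef[lo:hi], cepstrum[lo:hi], 1)
    return cepstrum[peak] - (slope * quef[peak] + icpt)

# usage: one 40 ms frame at 16 kHz (placeholder noise; real input would be
# a frame of the sustained /a/ or of connected speech)
frame = np.random.default_rng(0).normal(size=640)
print(cepstral_peak_prominence(frame, 16000))
```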

Adaptation of Classification Model for Improving Speech Intelligibility in Noise (음성 명료도 향상을 위한 분류 모델의 잡음 환경 적응)

  • Jung, Junyoung;Kim, Gibak
    • Journal of Broadcast Engineering
    • /
    • v.23 no.4
    • /
    • pp.511-518
    • /
    • 2018
  • This paper deals with improving speech intelligibility by applying a binary mask to the time-frequency units of speech in noise. The binary mask is set to "0" or "1" according to whether speech or noise is dominant, by comparing the signal-to-noise ratio with a pre-defined threshold. A Bayesian classifier trained with Gaussian mixture models is used to estimate the binary mask of each time-frequency unit. A binary-mask-based noise suppressor improves speech intelligibility only in noise conditions included in the training data. In this paper, speaker adaptation techniques from speech recognition are applied to adapt the Gaussian mixture model to a new noise environment. Experiments with noise-corrupted speech are conducted to demonstrate the improvement in speech intelligibility obtained by employing the adaptation techniques in a new noise environment.
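
The core mechanism, labeling each time-frequency unit by comparing per-class GMM likelihoods against an SNR-derived training target, can be sketched as follows; the per-unit features and the threshold are assumptions, and the paper's adaptation step is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ideal_binary_mask(speech_tf, noise_tf, threshold_db=0.0):
    """Training target: 1 where the local SNR of a unit exceeds the
    threshold (speech-dominant), else 0 (noise-dominant)."""
    snr_db = 10 * np.log10((np.abs(speech_tf) ** 2 + 1e-12) /
                           (np.abs(noise_tf) ** 2 + 1e-12))
    return (snr_db > threshold_db).astype(int)

def train_bayesian_mask(feats, mask, n_comp=8):
    """One GMM per mask label over per-unit feature vectors."""
    g1 = GaussianMixture(n_comp, random_state=0).fit(feats[mask == 1])
    g0 = GaussianMixture(n_comp, random_state=0).fit(feats[mask == 0])
    return g0, g1

def estimate_mask(feats, g0, g1):
    # Bayesian decision with equal priors: pick the likelier label
    return (g1.score_samples(feats) > g0.score_samples(feats)).astype(int)

# demo with random data (shapes and features are illustrative)
rng = np.random.default_rng(0)
S, N = rng.rayleigh(size=(64, 100)), rng.rayleigh(size=(64, 100))
mask = ideal_binary_mask(S, N).ravel()
feats = rng.normal(size=(6400, 5))
g0, g1 = train_bayesian_mask(feats, mask)
est = estimate_mask(feats, g0, g1)
```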

A Phonetic Study of 'Sasang Constitution' (음성학적으로 본 사상체질)

  • Moon Seung-Jae;Tak Ji-Hyun;Hwang Hyejeong
    • MALSORI
    • /
    • v.55
    • /
    • pp.1-14
    • /
    • 2005
  • Sasang Constitution, a branch of oriental medicine, claims that people can be classified into four different 'constitutions': Taeyang, Taeum, Soyang, and Soeum. This study investigates whether the classification of the constitutions can be made accurately based solely on people's voices, by analyzing data from 46 different voices whose constitutions had already been determined. Seven source-related parameters and four filter-related parameters were phonetically analyzed, and a Gaussian mixture model (GMM) was applied to the data. Both the phonetic analyses and the GMM showed that all the parameters except one failed to distinguish the constitutions successfully, and even the single exception, B2 (the bandwidth of the second formant), did not provide sufficient grounds to serve as the source of distinction. This result suggests one of two conclusions: either the Sasang Constitutions cannot be substantiated by the phonetic characteristics of people's voices with reliable accuracy, or we need to find some other parameters which have not been conventionally proposed.

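The GMM experiment above amounts to fitting one model per constitution over the acoustic parameter vectors and classifying by maximum likelihood; a minimal sketch with placeholder data follows.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

CLASSES = ["Taeyang", "Taeum", "Soyang", "Soeum"]

def fit_per_class_gmms(X, y, n_comp=2):
    """One diagonal-covariance GMM per constitution."""
    return {c: GaussianMixture(n_comp, covariance_type="diag",
                               random_state=0).fit(X[y == c])
            for c in CLASSES}

def classify(gmms, x):
    """Maximum-likelihood decision across the four class models."""
    return max(gmms, key=lambda c: gmms[c].score_samples(x[None, :])[0])

# placeholder data: 46 voices x 11 parameters (7 source + 4 filter),
# with labels assumed known, as in the study
rng = np.random.default_rng(0)
X = rng.normal(size=(46, 11))
y = np.asarray([CLASSES[i % 4] for i in range(46)])
print(classify(fit_per_class_gmms(X, y), X[0]))
```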

A Study on the Gender and Age Classification of Speech Data Using CNN (CNN을 이용한 음성 데이터 성별 및 연령 분류 기술 연구)

  • Park, Dae-Seo;Bang, Joon-Il;Kim, Hwa-Jong;Ko, Young-Jun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.11-21
    • /
    • 2018
  • This research categorizes voices using deep-learning technology. The study examines neural-network-based sound classification studies and proposes an improved neural network for voice classification. Related studies addressed urban sound classification but showed poor performance with shallow neural networks. Therefore, in this paper we first preprocess the voice data and extract feature values. Next, the voices are categorized by feeding the feature values into a previous sound classification network and into the proposed neural network. Finally, the classification performance of the two neural networks is compared and evaluated. The neural network of this paper is organized deeper and wider so that learning proceeds better. The performance results were 84.8 percent for the related studies' neural network and 91.4 percent for the proposed neural network; the proposed neural network was about 6 percentage points higher.
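
The preprocessing and feature-extraction step the abstract mentions is commonly a fixed-size log-mel spectrogram that a CNN then treats as an image; a sketch under that assumption (the paper's exact features are not given):

```python
import numpy as np
import librosa

def voice_to_logmel(path, sr=16000, n_mels=64, frames=128):
    """Turn a voice clip into a fixed-size log-mel 'image' for a CNN
    classifier over gender and age-group labels."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logm = librosa.power_to_db(mel)
    if logm.shape[1] < frames:                       # pad short clips
        logm = np.pad(logm, ((0, 0), (0, frames - logm.shape[1])))
    return logm[:, :frames][None, ...]               # (1, n_mels, frames)
```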

Performance comparison on vocal cords disordered voice discrimination via machine learning methods (기계학습에 의한 후두 장애음성 식별기의 성능 비교)

  • Cheolwoo Jo;Soo-Geun Wang;Ickhwan Kwon
    • Phonetics and Speech Sciences
    • /
    • v.14 no.4
    • /
    • pp.35-43
    • /
    • 2022
  • This paper studies how to improve the identification rate for laryngeal-disorder speech data using convolutional neural network (CNN) and machine-learning ensemble methods. In general, the amount of laryngeal-disorder speech data is small, so even if identifiers are constructed by statistical methods, overfitting caused by the training method can decrease the identification rate when the model is exposed to external data. In this work, we combine results derived from CNN models and machine-learning models of varying accuracy in a multi-voting manner to obtain improved classification performance compared to the individually trained models. The Pusan National University Hospital (PNUH) dataset was used to train and validate the algorithms. The dataset contains normal voices and voice data of benign and malignant tumors. In the experiment, an attempt was made to distinguish among normal voices, benign tumors, and malignant tumors. As a result, the random forest method was found to be the best ensemble method, showing an identification rate of 85%.
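
Multi-voting over separately trained models can be sketched as below; three scikit-learn classifiers stand in for the paper's CNN and machine-learning voters, and integer class labels are assumed for np.bincount.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

models = [RandomForestClassifier(random_state=0), SVC(),
          LogisticRegression(max_iter=1000)]

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, y_tr, X_te = X[:200], y[:200], X[200:]

for m in models:                # each voter is trained separately
    m.fit(X_tr, y_tr)
preds = np.stack([m.predict(X_te) for m in models])  # (n_models, n_samples)
# hard multi-voting: per-sample majority label across the models
voted = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)
print(voted[:10])
```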