• Title/Abstract/Keywords: Spontaneous Speech

Search results: 125

CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식 (Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network)

  • 손귀영;권순일
    • 정보처리학회 논문지 / Vol. 13, No. 6 / pp.284-290 / 2024
  • Speech Emotion Recognition (SER) is a technology that determines a speaker's emotional state by analyzing vocal patterns such as tremor, tone, and loudness. However, most previous SER research has centered on acted speech, recorded from trained actors performing scripted scenarios in controlled environments, and has achieved high performance on such data; emotion recognition on spontaneous speech, by contrast, takes place in uncontrolled everyday settings and shows markedly lower performance. In this paper, we perform emotion recognition on everyday spontaneous speech and aim to improve its performance. For evaluation, we used the Korean spontaneous conversational speech data provided by AI Hub, and for deep learning the one-dimensional speech signal was converted into a two-dimensional time-frequency spectrogram image. The generated images were used to train VGG (Visual Geometry Group), a CNN-based transfer-learning model, yielding emotion recognition accuracies of 83.5% for adults and 73.0% for adolescents over seven emotions (joy, affection, anger, fear, sadness, neutral, and surprise). Although this performance is lower than that reported for acted-speech emotion recognition, the study is significant in that it performs emotion recognition on conversations from daily life, despite the difficulty of defining quantifiable acoustic features of spontaneous emotional expression.
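
As a rough illustration of the pipeline this abstract describes, the sketch below converts a 1-D waveform into a mel-spectrogram image and attaches a 7-class head to an ImageNet-pretrained VGG16. The file path, sampling rate, image size, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch: 1-D waveform -> 2-D mel spectrogram image -> fine-tuned VGG classifier.
# Paths, hyperparameters, and preprocessing details are assumptions for illustration.
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7  # joy, affection, anger, fear, sadness, neutral, surprise

def wav_to_spectrogram_image(path, sr=16000, n_mels=128, size=224):
    """Turn a 1-D speech signal into a fixed-size 3-channel spectrogram tensor for VGG."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)                      # time-frequency image in dB
    mel_db = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)
    img = torch.tensor(mel_db, dtype=torch.float32)[None, None]        # (1, 1, n_mels, T)
    img = torch.nn.functional.interpolate(img, size=(size, size),
                                          mode="bilinear", align_corners=False)
    return img[0].repeat(3, 1, 1)                                      # (3, 224, 224)

# Transfer learning: reuse an ImageNet-pretrained VGG16 and replace its final layer
# with a 7-way emotion classifier.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_EMOTIONS)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```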

Affixation effects on word-final coda deletion in spontaneous Seoul Korean speech

  • Kim, Jungsun
    • 말소리와 음성과학 / Vol. 8, No. 4 / pp.9-14 / 2016
  • This study investigated the patterns of coda deletion in spontaneous Seoul Korean speech, focusing on three factors that promote coda deletion: word position, consonant type, and morpheme type. The results revealed that, first, coda deletion frequently occurred when affixes were attached to the ends of words, rather than in affixes in word-internal positions or in roots. Second, the alveolar consonants [n] and [l] in the coda positions of the high-frequency affixes [nɨn] and [lɨl] were the most likely to be deleted. Additionally, regarding affix reduction in the word-final position, all subjects seemed to depend on this articulatory strategy to a similar degree. In sum, the study found that affixes without primary semantic content tend to undergo reduction in spontaneous speech, favoring the occurrence of specific pronunciation variants.

Patterns of consonant deletion in the word-internal onset position: Evidence from spontaneous Seoul Korean speech

  • Kim, Jungsun;Yun, Weonhee;Kang, Ducksoo
    • 말소리와 음성과학 / Vol. 8, No. 1 / pp.45-51 / 2016
  • This study examined the deletion of onset consonants in word-internal position in spontaneous Seoul Korean speech, using the dataset of speakers in their 20s extracted from the Korean Corpus of Spontaneous Speech (Yun et al., 2015). The proportion of deleted word-internal onset consonants was analyzed with a linear mixed-effects regression model. The factors that promoted onset deletion were primarily the type of consonant and its phonetic context: deletion was more likely for the lenis velar stop [k] than for the other consonants, and when the preceding vowel was the low central vowel [a]. Moreover, some speakers deleted onset consonants (e.g., [k] and [n]) more frequently than others, reflecting individual differences. The study implies that word-internal onsets undergo a process of gradient reduction within individuals' articulatory strategies.
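
A minimal sketch of the kind of linear mixed-effects analysis the abstract mentions, with deletion coded per token and a random intercept per speaker. The data file and column names (deleted, consonant, preceding_vowel, speaker) are hypothetical, not the authors' actual coding scheme.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Each row codes one word-internal onset token: 1 if the consonant was deleted, 0 if realized.
tokens = pd.read_csv("onset_tokens.csv")  # hypothetical file

model = smf.mixedlm(
    "deleted ~ C(consonant) + C(preceding_vowel)",  # fixed effects: consonant type, vowel context
    data=tokens,
    groups=tokens["speaker"],                       # random intercept per speaker
)
result = model.fit()
print(result.summary())
```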

한국어 자연발화 음성코퍼스의 남성 모음 포먼트 연구 (A Study on the Male Vowel Formants of the Korean Corpus of Spontaneous Speech)

  • 김순옥;윤규철
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.95-102 / 2015
  • The purpose of this paper is to extract the vowel formants of ten adult male speakers in their twenties and thirties from the Korean Corpus of Spontaneous Speech [4], also known as the Seoul corpus, and to analyze them by comparison with earlier work on the Buckeye Corpus of Conversational Speech [1], in terms of the various linguistic factors expected to affect the formant distribution. The vowels extracted from the Korean corpus were also compared to those of read Korean. The results showed that the distribution of the vowel formants from the Korean corpus was very different from that of read Korean speech; the comparison between the English corpus and read English speech showed a similar pattern. The factors affecting the Korean vowel formants were the interviewer's sex, the location of the target vowel or its syllable within the phrasal word or utterance, and the speech rate of the surrounding words.
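
For readers unfamiliar with formant measurement, the sketch below shows one common way to measure F1 and F2 at a vowel midpoint using Praat through the parselmouth package; the file name, formant ceiling, and measurement point are assumptions, not the procedure used in the paper.

```python
import parselmouth

snd = parselmouth.Sound("vowel_token.wav")           # hypothetical pre-segmented vowel token
formant = snd.to_formant_burg(time_step=0.01,
                              max_number_of_formants=5,
                              maximum_formant=5000)   # ~5000 Hz ceiling is typical for male voices
t_mid = snd.duration / 2                              # measure at the vowel midpoint
f1 = formant.get_value_at_time(1, t_mid)
f2 = formant.get_value_at_time(2, t_mid)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```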

N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링 (Spontaneous Speech Language Modeling using N-gram based Similarity)

  • 박영희;정민화
    • 대한음성학회지:말소리 / No. 46 / pp.117-126 / 2003
  • This paper presents our language model adaptation for Korean spontaneous speech recognition. Compared with written text corpora, Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction. Our approach focuses on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by an n-gram-based tf-idf similarity. In addition to relevance weighting, we use disfluencies as predictors of their neighboring words. The best result reduces the word error rate by 9.7% relative, showing that n-gram-based relevance weighting captures style differences well and that disfluencies are also good predictors.
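
A small sketch of the general idea of n-gram-based tf-idf relevance weighting: out-of-domain texts whose n-gram profile is closer to the in-domain spontaneous-speech corpus receive larger weights before their counts are pooled into the adapted language model. The example sentences and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# In-domain: spontaneous-style transcripts; out-of-domain: texts to be relevance-weighted.
in_domain = ["음 그러니까 내일 회의 가 좀 늦어질 것 같아요"]
out_of_domain = ["회의는 내일 오전 열 시에 시작될 예정입니다",
                 "어 그게 그러니까 음 좀 어려운 문제인데요"]

# Unigram+bigram tf-idf vectors stand in for "style"; cosine similarity to the
# in-domain corpus becomes each out-of-domain text's relevance weight.
vec = TfidfVectorizer(ngram_range=(1, 2), analyzer="word")
X = vec.fit_transform(in_domain + out_of_domain)
weights = cosine_similarity(X[1:], X[:1]).ravel()

for text, w in zip(out_of_domain, weights):
    print(f"weight={w:.3f}  {text}")

# Downstream (not shown): scale each text's n-gram counts by its weight before
# re-estimating the domain-dependent n-gram model.
```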


The Korean Corpus of Spontaneous Speech

  • Yun, Weonhee;Yoon, Kyuchul;Park, Sunwoo;Lee, Juhee;Cho, Sungmoon;Kang, Ducksoo;Byun, Koonhyuk;Hahn, Hyeseung;Kim, Jungsun
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.103-109 / 2015
  • This paper describes the development of the Korean corpus of spontaneous speech, also called the Seoul corpus. The corpus contains audio recordings of interview-style spontaneous speech from 40 native speakers of Seoul Korean. The talkers are divided into four age groups: teens, twenties, thirties, and forties, each with ten talkers, five male and five female. The method used to elicit and record the speech is described. The corpus, containing around 220,000 phrasal words, was phonemically labeled along with information on the boundaries of Korean phrasal words and utterances, which were additionally romanized. In a test of labeling consistency, inter-labeler agreement on phoneme identification was 98.1% and the mean deviation in boundary placement was 9.04 msec. The corpus will be made freely available to the research community in March 2015.
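
The two labeling-consistency measures reported above can be computed straightforwardly; the sketch below does so for a pair of hypothetical label sequences (the tuples are made-up examples, not corpus data).

```python
# Hypothetical label sequences from two labelers: (phoneme, boundary time in seconds).
labels_a = [("a", 0.120), ("n", 0.185), ("i", 0.240)]
labels_b = [("a", 0.118), ("n", 0.192), ("i", 0.241)]

# Inter-labeler agreement on phoneme identification.
agree = sum(p1 == p2 for (p1, _), (p2, _) in zip(labels_a, labels_b))
agreement_rate = agree / len(labels_a)

# Mean deviation of boundary placement, reported in milliseconds.
mean_dev_ms = 1000 * sum(abs(t1 - t2) for (_, t1), (_, t2) in
                         zip(labels_a, labels_b)) / len(labels_a)

print(f"agreement = {agreement_rate:.1%}, mean boundary deviation = {mean_dev_ms:.2f} ms")
```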

A User-friendly Remote Speech Input Method in Spontaneous Speech Recognition System

  • Suh, Young-Joo;Park, Jun;Lee, Young-Jik
    • The Journal of the Acoustical Society of Korea / Vol. 17, No. 2E / pp.38-46 / 1998
  • In this paper, we propose a remote speech input device, a new method of user-friendly speech input for a spontaneous speech recognition system. User friendliness here means hands-free operation and microphone independence in speech recognition applications. Our method adopts two algorithms: automatic speech detection and microphone-array delay-and-sum beamforming (DSBF)-based speech enhancement. The automatic speech detection algorithm is composed of two stages: detection of candidate speech portions, followed by classification of speech and nonspeech using pitch information for the detected candidates. The DSBF algorithm uses the time-domain cross-correlation method for time delay estimation. In the performance evaluation, the speech detection algorithm shows within-200 ms start-point accuracies of 93% and 99% under 15 dB, 20 dB, and 25 dB signal-to-noise ratio (SNR) environments, and the corresponding end-point accuracies are 72%, 89%, and 93%. The pitch-information-based classification of speech and nonspeech for the detected start-point region of the input signal is 99% and 90% correct for speech and nonspeech input, respectively. The eight-microphone array-based speech enhancement using the DSBF algorithm shows a maximum SNR gain of 6 dB over a single microphone and an error reduction of more than 15% in the spontaneous speech recognition domain.
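
A compact sketch of delay-and-sum beamforming with time-domain cross-correlation delay estimation, the two components named in the abstract; the circular-shift alignment and simple averaging shown here are a simplified stand-in for the paper's actual DSBF implementation.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delay(ref, ch):
    """Delay (in samples) of `ch` relative to `ref`, via time-domain cross-correlation."""
    corr = correlate(ch, ref, mode="full")
    lags = correlation_lags(len(ch), len(ref), mode="full")
    return lags[np.argmax(corr)]

def delay_and_sum(channels):
    """Delay-and-sum beamforming: align every channel to the first one and average."""
    ref = channels[0]
    aligned = [ref]
    for ch in channels[1:]:
        d = estimate_delay(ref, ch)
        aligned.append(np.roll(ch, -d))   # compensate the estimated inter-channel delay
    return np.mean(aligned, axis=0)       # coherent speech adds up; uncorrelated noise averages out

# Toy usage: a delayed, noisy copy of the same signal is aligned back onto the reference.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
channels = [clean, np.roll(clean, 5) + 0.1 * rng.standard_normal(16000)]
enhanced = delay_and_sum(channels)
```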


한국인 표준 음성 DB 구축 (Developing a Korean Standard Speech DB)

  • 신지영;장혜진;강연민;김경화
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.139-150 / 2015
  • The purpose of this study is to develop a speech corpus for standard Korean speech. For the samples to viably represent the state of spoken Korean, demographic factors were considered so as to achieve a balanced spread of age, gender, and dialect: nine regional dialects were categorized, and five age groups were established, from individuals in their 20s to those in their 60s. A speech-sample collection protocol was developed in which each speaker performs five tasks: two reading tasks, two semi-spontaneous speech tasks, and one spontaneous speech task. This configuration accommodates the gathering of rich and well-balanced speech samples across various speech types and is expected to improve the utility of the corpus developed in this study. Samples from 639 individuals were collected using the protocol; speech samples were also collected from other sources, for a combined total of 1,012 individuals. The data accumulated in this database will be used to develop a speaker identification system, and may also be applied towards, but not limited to, phonetic studies, sociolinguistics, and language pathology. We plan to supplement the large-scale speech corpus next year, in terms of research methodology and content, to better answer the needs of diverse fields.

한국어 자연발화 음성코퍼스의 남녀 모음 포먼트 비교 연구 (A Comparative Study on the Male and Female Vowel Formants of the Korean Corpus of Spontaneous Speech)

  • 윤규철;김순옥
    • 말소리와 음성과학 / Vol. 7, No. 2 / pp.131-138 / 2015
  • The aim of this work is to compare the vowel formants of the ten adult female speakers in their twenties and thirties from the Seoul corpus [7] with those of the corresponding Korean male speakers from the same corpus and of American female speakers from the Buckeye corpus [4]. In addition, various linguistic factors expected to affect the formant frequencies were examined to account for the distribution of the vowel formants. Formant frequencies extracted from the Seoul corpus were also compared to those from read speech. The results showed that the formant distribution of the spontaneous speech was very different from that of the read speech, while the female-male comparison was similar in both languages. To a greater or lesser degree, the potential linguistic factors influenced the formant frequencies of the vowels.

음성인식 시스템에서의 음소분할기의 성능 (Performance of the Phoneme Segmenter in Speech Recognition System)

  • 이광석
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2009년도 추계학술대회 / pp.705-708 / 2009
  • This study describes a neural-network-based phoneme segmenter for the recognition of spontaneous speech. The inputs to the segmenter are a 16th-order mel-scale FFT, the normalized frame energy, and the ratio of the energy in the 0-3 kHz band to that in the band above it. All features are differences between two consecutive 10 ms frames. The segmenter is a single-hidden-layer perceptron with 72 inputs, 20 hidden nodes, and one output node. The phoneme segmentation accuracy on spontaneous speech was 78%, with an insertion rate of 7.8%.
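
As an illustration of the segmenter architecture described above (72 inputs, 20 hidden nodes, one output), here is a minimal PyTorch sketch; the sigmoid activations, 0.5 decision threshold, and random placeholder features are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class PhonemeSegmenter(nn.Module):
    """72-20-1 perceptron that scores whether two adjacent 10 ms frames straddle a boundary."""
    def __init__(self, n_inputs=72, n_hidden=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.Sigmoid(),
            nn.Linear(n_hidden, 1),
            nn.Sigmoid(),                 # value near 1.0 -> phoneme boundary between the frames
        )

    def forward(self, frame_delta):        # frame_delta: difference features of two consecutive frames
        return self.net(frame_delta)

model = PhonemeSegmenter()
features = torch.randn(1, 72)              # placeholder for the mel-FFT / energy difference features
is_boundary = model(features) > 0.5        # hypothetical decision threshold
```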
