• Title/Abstract/Keyword: spontaneous speech data

Search results: 41

Rhythmic Differences between Spontaneous and Read Speech of English

  • Kim, Sul-Ki; Jang, Tae-Yeoub / 말소리와 음성과학 / Vol. 1, No. 3 / pp. 49-55 / 2009
  • This study investigates whether rhythm metrics can be used to capture the rhythmic differences between spontaneous and read English speech. Transcriptions of spontaneous speech tokens extracted from a corpus were read by three native English speakers to generate corresponding read speech tokens. The two data sets are compared in terms of seven rhythm measures suggested by previous studies. Results show a significant difference in the values of the vowel-based metrics (VarcoV and nPVI-V) between spontaneous and read speech, reflecting greater variability in vocalic intervals in spontaneous speech than in read speech. The study is especially meaningful in that it demonstrates a way in which speech styles can be differentiated and parameterized in numerical terms.

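The two vowel-based metrics named in the abstract have simple closed forms: VarcoV is the standard deviation of vocalic interval durations normalized by their mean, and nPVI-V is the mean normalized difference between successive intervals. A minimal sketch (the interval durations are hypothetical; the paper's own measurement procedure is not reproduced here):

```python
# Sketch of the two vowel-based rhythm metrics (VarcoV and nPVI-V),
# computed from a list of vocalic interval durations in seconds.
# The durations below are hypothetical illustration values.
from statistics import mean, pstdev

def varco_v(durations):
    """VarcoV: standard deviation of vocalic intervals normalized
    by the mean, expressed as a percentage."""
    return 100 * pstdev(durations) / mean(durations)

def npvi_v(durations):
    """nPVI-V: mean normalized difference between successive
    vocalic intervals, expressed as a percentage."""
    pairs = zip(durations, durations[1:])
    return 100 * mean(abs(a - b) / ((a + b) / 2) for a, b in pairs)

vowels = [0.08, 0.15, 0.06, 0.12, 0.09]   # hypothetical durations (s)
print(round(varco_v(vowels), 1))   # → 31.6
print(round(npvi_v(vowels), 1))    # → 60.5
```

Higher values of either metric indicate more variable vocalic timing, which is the pattern the study reports for spontaneous versus read speech.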

Effects of gender, age, and individual speakers on articulation rate in Seoul Korean spontaneous speech

  • Kim, Jungsun / 말소리와 음성과학 / Vol. 10, No. 4 / pp. 19-29 / 2018
  • The present study investigated whether there are differences in articulation rate by gender, age, and individual speakers in a spontaneous speech corpus produced by 40 Seoul Korean speakers. This study measured their articulation rates using a second-per-syllable metric and a syllable-per-second metric. The findings are as follows. First, in spontaneous Seoul Korean speech, there was a gender difference in articulation rates only in age group 10-19, among whom men tended to speak faster than women. Second, individual speakers showed variability in their rates of articulation. The tendency for some speakers to speak faster than others was variable. Finally, there were metric differences in articulation rate. That is, regarding the coefficients of variation, the values of the second-per-syllable metric were much higher than those for the syllable-per-second metric. The articulation rate for the syllable-per-second metric tended to be more distinct among individual speakers. The present results imply that data gathered in a corpus of Seoul Korean spontaneous speech may reflect speaker-specific differences in articulatory movements.
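The two rate metrics compared in the abstract are reciprocals of each other, but their coefficients of variation can differ because the reciprocal transform is nonlinear. A small sketch with hypothetical utterance data:

```python
# Sketch of the two articulation-rate metrics from the abstract
# (seconds per syllable and syllables per second) and the coefficient
# of variation used to compare their variability. Data are hypothetical.
from statistics import mean, pstdev

# hypothetical (syllable_count, duration_in_seconds) per utterance
utterances = [(12, 2.0), (18, 2.5), (9, 1.8), (20, 3.2)]

sec_per_syl = [dur / n for n, dur in utterances]
syl_per_sec = [n / dur for n, dur in utterances]

def cv(values):
    """Coefficient of variation: dispersion relative to the mean."""
    return pstdev(values) / mean(values)

print(round(cv(sec_per_syl), 3))   # → 0.132
print(round(cv(syl_per_sec), 3))   # → 0.128
```

With this toy data the seconds-per-syllable metric happens to show the larger coefficient of variation, the same direction the study reports, though real corpus values would of course differ.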

한국인 표준 음성 DB 구축(II) (Developing a Korean standard speech DB (II))

  • 신지영; 김경화 / 말소리와 음성과학 / Vol. 9, No. 2 / pp. 9-22 / 2017
  • The purpose of this paper is to report the whole process of developing the Korean Standard Speech Database (KSS DB). The project was supported by a Supreme Prosecutors' Office (SPO) research grant for three years, from 2014 to 2016. KSS DB is designed to provide speech data for acoustic-phonetic and phonological studies and for speaker recognition systems. For the samples to represent spoken Korean, sociolinguistic factors such as region (9 regional dialects), age (5 age groups over 20), and gender (male and female) were considered. The goal of the project was to collect speech from over 3,000 male and female speakers across the nine regional dialects and five age groups, employing direct and indirect methods. Speech samples of 3,191 speakers (2,829 via the direct method and 362 via the indirect method) were collected and databased. KSS DB is designed to collect read and spontaneous speech samples from each speaker through 5 speech tasks: three (pseudo-)spontaneous speech tasks (producing prolonged simple vowels, 28 blanked sentences, and spontaneous talk) and two read speech tasks (reading 55 phonetically and phonologically rich sentences and reading three short passages). KSS DB includes a 16-bit, 44.1 kHz speech waveform file and an orthographic transcription file for each speech task.

한국인 표준 음성 DB 구축 (Developing a Korean Standard Speech DB)

  • 신지영; 장혜진; 강연민; 김경화 / 말소리와 음성과학 / Vol. 7, No. 1 / pp. 139-150 / 2015
  • The purpose of this study is to develop a speech corpus for standard Korean speech. For the samples to viably represent the state of spoken Korean, demographic factors were considered to ensure a balanced spread of age, gender, and dialect. Nine regional dialects were categorized, and five age groups were established, from individuals in their 20s to those in their 60s. A speech-sample collection protocol was developed in which each speaker performs five tasks: two reading tasks, two semi-spontaneous speech tasks, and one spontaneous speech task. This configuration accommodates the gathering of rich and well-balanced speech samples across various speech types and is expected to improve the utility of the corpus developed in this study. Samples from 639 individuals were collected using the protocol; samples collected from other sources brought the combined total to 1,012 individuals. The data accumulated in this database will be used to develop a speaker identification system, and may also be applied to, but not limited to, phonetic studies, sociolinguistics, and language pathology. We plan to supplement the large-scale speech corpus next year, in terms of both research methodology and content, to better answer the needs of diverse fields.

자발화에 나타난 구문구조 발달 양상 (A Study on Syntactic Development in Spontaneous Speech)

  • 장진아; 김수진; 신지영; 이봉원 / 대한음성학회지:말소리 / Vol. 68 / pp. 17-32 / 2008
  • The purpose of the present study is to investigate the syntactic development of Korean by analyzing spontaneous speech data. Thirty children (aged 3, 5, and 7; 10 per age group) and 10 adults served as subjects. Speech data were recorded and transcribed orthographically. The transcribed data were analyzed syntactically for sentence patterns (simple vs. complex) and clause patterns (four basic types according to the predicate), among other features. The results are as follows: 1) simple sentences show higher frequency for the upper age groups; 2) complex sentences with conjunctive and embedded clauses show higher frequency for the upper age groups.


음향 데이터로부터 얻은 확장된 음소 단위를 이용한 한국어 자유발화 음성인식기의 성능 (Performance of Korean spontaneous speech recognizers based on an extended phone set derived from acoustic data)

  • 방정욱; 김상훈; 권오욱 / 말소리와 음성과학 / Vol. 11, No. 3 / pp. 39-47 / 2019
  • This paper proposes a method for improving the performance of a spontaneous speech recognizer by extending the existing phone set using a large amount of speech data. The proposed method first extracts variable-length phone segments from broadcast data and then obtains fixed-length latent vectors based on an LSTM architecture. Acoustically similar segments are then clustered using the k-means algorithm, and a new phone set is built by selecting the number of clusters with the lowest Davies-Bouldin index. The recognizer's pronunciation dictionary is then updated by selecting, for each word, the pronunciation sequence with the highest conditional probability. To analyze the acoustic characteristics of the new phone set, the spectral patterns and segment durations of the extended phones are visualized and compared. The proposed units outperformed phone-based and grapheme-based units not only on spontaneous speech but also on read-speech recognition tasks.
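The cluster-count selection step described in the abstract (k-means over fixed-length latent vectors, choosing the k with the lowest Davies-Bouldin index) can be sketched as follows. The 2-D points here are hypothetical stand-ins for the paper's LSTM latent vectors, and both k-means and the index are implemented from scratch so the sketch is self-contained:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic initialization
    (evenly spaced data points as the initial centers)."""
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index: average, over clusters, of the worst
    (scatter_i + scatter_j) / centroid_distance_ij ratio. Lower is better."""
    k = len(centers)
    S = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  for i in range(k)])
    score = 0.0
    for i in range(k):
        score += max((S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                     for j in range(k) if j != i)
    return score / k

# hypothetical 2-D stand-ins for latent vectors: three tight groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (30, 2)) for c in [(0, 0), (5, 5), (0, 5)]])

scores = {}
for k in range(2, 6):
    labels, centers = kmeans(X, k)
    scores[k] = davies_bouldin(X, labels, centers)
best_k = min(scores, key=scores.get)
print(best_k)   # → 3
```

With three well-separated groups the index is minimized at k = 3, which is exactly the selection behavior the abstract relies on when sizing the extended phone set.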

N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링 (Spontaneous Speech Language Modeling using N-gram based Similarity)

  • 박영희; 정민화 / 대한음성학회지:말소리 / No. 46 / pp. 117-126 / 2003
  • This paper presents our language model adaptation for Korean spontaneous speech recognition. Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. Our approach focuses on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of neighboring words. The best result reduces the word error rate by 9.7% relative, showing that n-gram-based relevance weighting reflects style differences well and that disfluencies are also good predictors.

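The n-gram-based tf*idf similarity used for relevance weighting can be sketched as follows; the toy texts and the choice of bigrams are illustrative assumptions, not the paper's actual setup:

```python
# Sketch: cosine similarity between bigram tf-idf vectors, usable as a
# relevance weight for out-of-domain text. All texts are hypothetical.
import math
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def tfidf_cosine(doc_a, doc_b, corpus):
    """Cosine similarity of two documents' bigram tf-idf vectors;
    idf is estimated over `corpus`, a list of tokenized documents."""
    n_docs = len(corpus)
    df = Counter(g for doc in corpus for g in set(bigrams(doc)))
    idf = {g: math.log(n_docs / df[g]) for g in df}

    def vec(doc):
        tf = Counter(bigrams(doc))
        return {g: c * idf.get(g, 0.0) for g, c in tf.items()}

    va, vb = vec(doc_a), vec(doc_b)
    dot = sum(va[g] * vb.get(g, 0.0) for g in va)
    norm = math.sqrt(sum(x * x for x in va.values())) * \
           math.sqrt(sum(x * x for x in vb.values()))
    return dot / norm if norm else 0.0

spoken = "uh i mean i mean it was uh good".split()
similar = "uh i mean that was uh fine".split()
formal = "the committee approved the annual budget".split()
corpus = [spoken, similar, formal]

# stylistically close text scores higher than formal written text
print(tfidf_cosine(spoken, similar, corpus))
print(tfidf_cosine(spoken, formal, corpus))   # → 0.0 (no shared bigrams)
```

Out-of-domain sentences whose bigram profile resembles the spontaneous in-domain data would receive larger weights when re-estimating the n-gram model.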

CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식 (Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network)

  • 손귀영; 권순일 / 정보처리학회 논문지 / Vol. 13, No. 6 / pp. 284-290 / 2024
  • Speech emotion recognition (SER) is a technique for judging a user's emotional state by analyzing vocal patterns such as tremor, tone, and loudness in the voice. However, most previous SER research has centered on acted speech, recorded by trained performers following scripted scenarios in controlled environments; such systems achieve high accuracy, whereas emotion recognition on spontaneous speech, which occurs in uncontrolled everyday settings, performs markedly worse. This paper performs emotion recognition on everyday spontaneous speech and aims to improve its performance. For evaluation, Korean spontaneous conversational speech data provided by AI Hub were used, and for deep learning the one-dimensional speech signals were converted into two-dimensional time-frequency spectrogram images. The resulting images were trained with VGG (Visual Geometry Group), a CNN-based transfer-learning network, yielding emotion recognition accuracies of 83.5% for adults and 73.0% for adolescents across seven emotions (happiness, affection, anger, fear, sadness, neutrality, and surprise). Although this performance is lower than that reported for acted speech, the study is meaningful in that it performs emotion recognition on conversations from daily life, despite the difficulty of defining quantifiable vocal features for spontaneous emotional expression.
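The 1-D-signal-to-spectrogram conversion described in the abstract can be sketched with a plain windowed FFT; the frame length, hop size, and test tone below are hypothetical illustration values, not the paper's settings:

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Log-magnitude spectrogram: slice the 1-D signal into overlapping
    Hann-windowed frames, take the FFT magnitude of each frame, and
    stack the results into a (freq, time) image."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # (time, n_fft // 2 + 1)
    return 20 * np.log10(mag + 1e-10).T         # dB scale, (freq, time)

# hypothetical input: a 1-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
img = spectrogram(np.sin(2 * np.pi * 440 * t))
print(img.shape)   # one row per frequency bin, one column per frame
```

The resulting 2-D array is what a CNN such as VGG consumes as an image; the tone's energy concentrates in the frequency bin nearest 440 Hz in every frame.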

KMSAV: Korean multi-speaker spontaneous audiovisual dataset

  • Kiyoung Park; Changhan Oh; Sunghee Dong / ETRI Journal / Vol. 46, No. 1 / pp. 71-81 / 2024
  • Recent advances in deep learning for speech and visual recognition have accelerated the development of multimodal speech recognition, yielding many innovative results. We introduce a Korean audiovisual speech recognition corpus. This dataset comprises approximately 150 h of manually transcribed and annotated audiovisual data, supplemented with an additional 2,000 h of untranscribed videos collected from YouTube under the Creative Commons License. The dataset is intended to be freely accessible for unrestricted research purposes. Along with the corpus, we propose an open-source framework for automatic speech recognition (ASR) and audiovisual speech recognition (AVSR). We validate the effectiveness of the corpus with evaluations using state-of-the-art ASR and AVSR techniques, capitalizing on both pretrained models and fine-tuning processes. After fine-tuning, ASR and AVSR achieve character error rates of 11.1% and 18.9%, respectively. This error difference highlights the need for improvement in AVSR techniques. We expect that our corpus will be an instrumental resource for improving AVSR.
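Character error rate, the figure reported in the abstract, is the character-level edit distance normalized by the reference length. A minimal sketch with hypothetical strings:

```python
# Sketch of character error rate (CER): Levenshtein edit distance over
# characters divided by the reference length. Strings are hypothetical.
def cer(ref, hyp):
    """Minimum substitutions/insertions/deletions to turn hyp into ref,
    normalized by the reference length (standard dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1] / len(ref)

print(cer("recognition", "recognitoin"))   # two substitutions → 2/11
print(cer("recognition", "recognition"))  # → 0.0
```

A CER of 11.1% thus means roughly one character edit per nine reference characters.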

The f0 distribution of Korean speakers in a spontaneous speech corpus

  • Yang, Byunggon / 말소리와 음성과학 / Vol. 13, No. 3 / pp. 31-37 / 2021
  • The fundamental frequency, or f0, is an important acoustic measure in the prosody of human speech. The current study examined the f0 distribution of a corpus of spontaneous speech in order to provide normative data for Korean speakers. The corpus consists of 40 speakers talking freely about their daily activities and personal views. Praat scripts were created to collect f0 values, and the majority of obvious errors were corrected manually by inspecting the f0 contour on a narrow-band spectrogram. Statistical analyses of the f0 distribution were conducted using R. The results showed that the f0 values of all the Korean speakers were right-skewed, with a sharply peaked distribution. The speakers produced spontaneous speech within a frequency range of 274 Hz (from 65 Hz to 339 Hz), excluding statistical outliers. The mode of the total f0 data was 102 Hz. The female f0 range, with a bimodal distribution, appeared wider than that of the male group. Regression analyses of f0 values on age yielded negligible R-squared values. As the mode of an individual speaker could be predicted from the median, either the median or the mode could serve as a good reference for an individual's f0 range. Finally, an analysis of the continuous f0 points of intonational phrases revealed that the initial and final segments of the phrases yielded several f0 measurement errors. From these results, we conclude that examining a spontaneous speech corpus can provide linguists with useful measures for generalizing the acoustic properties of f0 variability in a language, for individuals or groups. Further studies on the use of statistical measures to secure reliable f0 values for individual speakers would be desirable.
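The distribution summaries used in the abstract (median, histogram mode, and right skew) can be sketched on synthetic data; the log-normal sample below is a hypothetical stand-in for real f0 measurements, not the corpus data:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical f0 sample (Hz): log-normal, hence right-skewed like the
# distributions reported for the corpus speakers
f0 = rng.lognormal(mean=np.log(110), sigma=0.15, size=5000)

median = float(np.median(f0))
counts, edges = np.histogram(f0, bins=50)
peak = int(np.argmax(counts))
mode = float((edges[peak] + edges[peak + 1]) / 2)   # mode via histogram peak

def skewness(x):
    """Fisher skewness: positive for a right-skewed (long upper tail) shape."""
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

print(f"median={median:.1f} Hz, mode={mode:.1f} Hz, skew={skewness(f0):.2f}")
```

For a right-skewed sample like this one, the skewness is positive and the mode sits below the median, mirroring the qualitative pattern described for the corpus.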