• Title/Summary/Keyword: phonetic data


Age and Sex Differences in Acoustic Parameter of Middle Age and Elderly Adult Voice (장.노년기 성인 음성의 성별과 연령에 따른 음향음성학적 특성 비교)

  • Lee, Hyo-Jin;Kim, Soo-Jin
    • MALSORI
    • /
    • no.60
    • /
    • pp.13-28
    • /
    • 2006
  • This study compared the following acoustic changes according to age and sex in adulthood: F0, jitter, shimmer, and NHR. One hundred twenty Korean adults were divided into three age groups (20s, 50s, and 70s) and two sex groups (male and female). The subjects performed three tasks: (1) sustained phonation of three vowels; (2) reading a paragraph of 'Taking a Walk'; and (3) explaining a picture. The data were analyzed using the MDVP of Multi-Speech. For F0, both sex and age were influential factors. For jitter, shimmer, and NHR, the effects of sex and age differed across the three parameters. When the groups organized by sex were analyzed by age, the 20s group showed a statistically significant difference in all four parameters (F0, jitter, shimmer, and NHR) compared with the 50s and 70s groups. The standard parameters for normal voice in the Korean elderly need to be reconsidered, because the normal 50s and 70s groups in this study fall outside the current MDVP normal range.

  • PDF
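The perturbation measures named in the abstract above have standard local definitions: jitter is the mean absolute difference between consecutive pitch periods relative to the mean period, and shimmer is the same formula applied to peak amplitudes. A minimal sketch of those two formulas (not MDVP's exact implementation, and omitting the pitch-period extraction step, which MDVP performs internally):

```python
import numpy as np

def jitter_local(periods):
    """Jitter (local, %): mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_local(amplitudes):
    """Shimmer (local, %): the same formula applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Toy example: pitch periods (s) and peak amplitudes of a sustained vowel
# near 200 Hz -- hypothetical values, not data from the study.
periods = [0.0050, 0.0051, 0.0049, 0.0050, 0.0052]
amps = [0.80, 0.78, 0.81, 0.79, 0.80]
print(round(jitter_local(periods), 2), round(shimmer_local(amps), 2))
```

A perfectly periodic voice gives zero for both measures; elevated values are what the study compares across the age and sex groups.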

Feature Parameter Extraction and Analysis in the Wavelet Domain for Discrimination of Music and Speech (음악과 음성 판별을 위한 웨이브렛 영역에서의 특징 파라미터)

  • Kim, Jung-Min;Bae, Keun-Sung
    • MALSORI
    • /
    • no.61
    • /
    • pp.63-74
    • /
    • 2007
  • Discrimination of music and speech from the multimedia signal is an important task in audio coding and broadcast monitoring systems. This paper deals with the problem of feature parameter extraction for discrimination of music and speech. The wavelet transform is a multi-resolution analysis method that is useful for analyzing the temporal and spectral properties of non-stationary signals such as speech and audio signals. We propose new feature parameters extracted from the wavelet-transformed signal for discrimination of music and speech. First, wavelet coefficients are obtained on a frame-by-frame basis, with the analysis frame size set to 20 ms. A parameter $E_{sum}$ is then defined by adding up the magnitude differences between adjacent wavelet coefficients in each scale. The maximum and minimum values of $E_{sum}$ over a period of 2 seconds, which corresponds to the discrimination duration, are used as feature parameters for discrimination of music and speech. To evaluate the performance of the proposed feature parameters, discrimination accuracy is measured for various types of music and speech signals. In the experiment, each 2-second segment is classified as music or speech, and about 93% of the music and speech segments were discriminated correctly.

  • PDF
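The $E_{sum}$ feature described above can be sketched as follows. The 20 ms frame size and the 2-second min/max pooling come from the abstract; the hand-rolled Haar DWT, the number of scales, and the sampling rate are assumptions for illustration (the paper does not specify its wavelet):

```python
import numpy as np

def haar_level(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                 # drop an odd trailing sample
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def e_sum(frame, levels=3):
    """E_sum for one frame: sum over scales of the summed magnitude
    differences between adjacent wavelet coefficients in that scale."""
    total, approx = 0.0, frame
    for _ in range(levels):
        approx, detail = haar_level(approx)
        total += float(np.sum(np.abs(np.diff(detail))))
    return total

def min_max_features(signal, sr=16000, frame_ms=20, window_s=2):
    """Feature pair for one discrimination window: (min, max) of E_sum
    over the 20 ms frames in a 2-second stretch of signal."""
    frame_len = sr * frame_ms // 1000
    n = (sr * window_s) // frame_len
    values = [e_sum(signal[i * frame_len:(i + 1) * frame_len]) for i in range(n)]
    return min(values), max(values)

rng = np.random.default_rng(1)
noise = rng.standard_normal(32000)           # 2 s at 16 kHz, noise stand-in
lo, hi = min_max_features(noise)
```

The intuition is that speech alternates between voiced and unvoiced stretches, so $E_{sum}$ swings widely within 2 seconds, while music tends to keep it more uniform; the (min, max) pair captures that spread.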

Spoken-to-written text conversion for enhancement of Korean-English readability and machine translation

  • HyunJung Choi;Muyeol Choi;Seonhui Kim;Yohan Lim;Minkyu Lee;Seung Yun;Donghyun Kim;Sang Hun Kim
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.127-136
    • /
    • 2024
  • The Korean language has written (formal) and spoken (phonetic) forms that differ in their application, which can lead to confusion, especially when dealing with numbers and embedded Western words and phrases. This fact makes it difficult to automate Korean speech recognition models due to the need for a complete transcription training dataset. Because such datasets are frequently constructed using broadcast audio and their accompanying transcriptions, they do not follow a discrete rule-based matching pattern. Furthermore, these mismatches are exacerbated over time due to changing tacit policies. To mitigate this problem, we introduce a data-driven Korean spoken-to-written transcription conversion technique that enhances the automatic conversion of numbers and Western phrases to improve automatic translation model performance.
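To see why the paper favors a data-driven converter over discrete rules, a purely rule-based spoken-to-written sketch for Sino-Korean digit words is shown below. The mapping and function are illustrative, not from the paper, and real spoken numbers need place-value and context handling that such character-level rules capture poorly:

```python
# Hypothetical rule table: Sino-Korean spoken digit words -> numerals.
DIGITS = {"공": "0", "일": "1", "이": "2", "삼": "3", "사": "4",
          "오": "5", "육": "6", "칠": "7", "팔": "8", "구": "9"}

def spoken_digits_to_written(text):
    """Replace each spoken digit word with its numeral, leaving every
    other character untouched (a toy rule with no place-value logic)."""
    return "".join(DIGITS.get(ch, ch) for ch in text)

# Works for digit-by-digit readings such as phone numbers...
print(spoken_digits_to_written("공일공 일이삼사"))  # → "010 1234"
# ...but the same characters are ordinary morphemes in other words,
# which is exactly the ambiguity a data-driven model must resolve.
```

Because the broadcast-derived corpora described above do not follow one consistent transcription policy, the paper learns the conversion from (spoken-form, written-form) pairs instead of maintaining such rules.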

Literature Analysis on PROMPT Treatment (1984-2020) (프롬프트(PROMPT) 치료기법에 관한 문헌 분석(1984-2020년))

  • Kim, Wha-soo;Lee, Rio;Lee, Ji-woo
    • Journal of Digital Convergence
    • /
    • v.19 no.2
    • /
    • pp.447-456
    • /
    • 2021
  • This study analyzed 28 domestic and foreign studies on the Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT) treatment technique published from 1984 to 2020, to prepare basic data for the development of PROMPT intervention programs and examination tools. The analysis showed that research has been conducted continuously since the first PROMPT study in 1984. Intervention studies were the most common design (16 studies), speech sound disorders were the most frequently studied population, and the most common target age range was 3 to 5 years, in early childhood. Sixteen sessions was the most frequent treatment length, and the activities were based on the Motor Speech Hierarchy (MSH), except for participants with non-verbal autism spectrum disorder. In the analysis of dependent variables, 'speech production' was the most common, followed by 'speech motor control', 'articulation', and 'speech intelligibility'. Taken together, these studies suggest that PROMPT, which directly targets motor speech production, is being used effectively abroad, and that a PROMPT program applicable domestically in Korea needs to be developed.

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.59-64
    • /
    • 2015
  • In this paper, we show the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), especially in relation to imagined speech. In recent years, growing interest in BCI has led to the development of a number of useful applications, such as robot control, game interfaces, exoskeleton limbs, and so on. However, while imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, there are some problems in implementing such a system. In a previous paper, we already handled some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although the multi-class classification problem still required complementation. In view of this, this paper provides a suitable solution for vowel classification of imagined speech. We used the DBN algorithm, a deep learning algorithm, for multi-class vowel classification, and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. The eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison with the DBN, we also report the classification results of a back-propagation artificial neural network (BP-ANN). The classification accuracy of the BP-ANN was 52.04%, while that of the DBN was 87.96%; that is, the DBN was 35.92 percentage points more accurate in multi-class imagined speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
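The feature-extraction step described in the abstract, taking the eigenvalues of the covariance matrix of a 32-channel EEG trial as the feature vector, can be sketched as below; the trial shape, sorting order, and random stand-in data are assumptions for illustration:

```python
import numpy as np

def covariance_eigen_features(eeg):
    """Feature vector for one imagined-speech trial: eigenvalues of the
    channel covariance matrix, sorted in descending order.
    eeg: array of shape (channels, samples), e.g. (32, N)."""
    cov = np.cov(eeg)                    # (channels, channels) covariance
    eigvals = np.linalg.eigvalsh(cov)    # real, ascending (cov is symmetric)
    return eigvals[::-1]                 # descending

rng = np.random.default_rng(0)
trial = rng.standard_normal((32, 1000))  # stand-in for 32-channel raw EEG
feat = covariance_eigen_features(trial)
print(feat.shape)  # → (32,)
```

This reduces each trial to a fixed-length 32-dimensional vector regardless of trial duration, which is what makes it usable as input to the DBN or BP-ANN classifiers the abstract compares.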

BackTranScription (BTS)-based Jeju Automatic Speech Recognition Post-processor Research (BackTranScription (BTS)기반 제주어 음성인식 후처리기 연구)

  • Park, Chanjun;Seo, Jaehyung;Lee, Seolhwa;Moon, Heonseok;Eo, Sugyeong;Jang, Yoonna;Lim, Heuiseok
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.178-185
    • /
    • 2021
  • Building the training data for a sequence-to-sequence (S2S) speech recognition post-processor requires a parallel corpus of (speech recognition output sentence, sentence corrected by a human phonetic transcriptor), which demands substantial human labor. BackTranScription (BTS) is a data construction methodology proposed to mitigate this limitation of existing S2S-based speech recognition post-processors: it combines Text-To-Speech (TTS) and Speech-To-Text (STT) technology to generate a pseudo parallel corpus. Because it eliminates the transcriptor's role and can automatically generate large amounts of training data, this methodology reduces the time and cost of data construction. Building on BTS, this paper compares a model-centric approach, which improves the performance of a speech recognition post-processor specialized for the Jeju-dialect domain through model modification, with a data-centric approach, which improves performance by considering the quantity and quality of the data without modifying the model. The experimental results show that applying the data-centric approach without model modification is more helpful for improving performance, and we analyze the negative results of the model-centric approach.

  • PDF

Visual·Auditory·Acoustic Study on Singing Vowels of Korean Lyric Songs (시각과 청각 및 음향적 관점에서의 노랫말 모음 연구)

  • Lee Jai Kang
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.362-366
    • /
    • 1996
  • This paper is divided into two parts. The first is a study of the vowels in Korean singers' performances of lyric songs from the viewpoint of Daniel Jones' Cardinal Vowels. The second is an acoustic study of the vowels in the author's own singing of Korean lyric songs. The analysis data are a KBS concert videotape and CSL .NSP files of the author's singing; the informants are famous singers (three sopranos, one mezzo, two tenors, one baritone) and the author. The aim of the analysis is to determine the quality of the eight Korean vowels ([equation omitted]) in singing. The descriptions use closed, half-closed, half-open, and open vowels; rounded and unrounded vowels; and formants. The former study was carried out by pausing the monitor screen on each scene to be analyzed; the latter, by analyzing spectrograms converted from the CSL .SP files. The results are as follows. Visually and auditorily, Korean vowel quality in singing shows three tendencies: the vowels are more rounded than usual Korean vowels, they are centralized toward the center point of the Cardinal Vowel diagram, and they show diversity in vowel quality. The acoustic analysis examined four formants. F1 and F2 show values similar to those in speech; the sameness of the F1 values suggests that the vocal organs adapt to the singing situation. The range of F3 is the widest of all, so F3 may be characteristic of singing. In conclusion, the vowels of Korean lyric songs tend toward rounding, centralization toward the center point of the Cardinal Vowel diagram, diversity in vowel quality, and, compared with usual Korean vowels, the widest range in F3.

  • PDF

Inter-speaker and intra-speaker variability on sound change in contemporary Korean

  • Kim, Mi-Ryoung
    • Phonetics and Speech Sciences
    • /
    • v.9 no.3
    • /
    • pp.25-32
    • /
    • 2017
  • Besides their effect on the f0 contour of the following vowel, Korean stops are undergoing a sound change in which a partial or complete consonantal merger in voice onset time (VOT) is taking place between aspirated and lax stops. Many previous studies on sound change have mainly focused on group-normative effects, that is, effects representative of the population as a whole; few systematic quantitative studies of change in adult individuals have been carried out. The current study examines whether the sound change holds for individual speakers, focusing on inter-speaker and intra-speaker variability in contemporary Korean. Speech data were collected from thirteen Seoul Korean speakers studying abroad in America. To minimize possible confounds on speech production, socio-phonetic factors such as age, gender, dialect, speech rate, and L2 exposure period were controlled when recruiting participants. The results showed that, for nine of the thirteen speakers, the consonantal merger between the aspirated and lax stops is taking place in terms of VOT. There were also intra-speaker variations in the merger in three respects: first, whether the consonantal (VOT) merger between the two stops is in progress; second, whether VOTs for aspirated stops are getting shorter (the aspirated-shortening process); and third, whether VOTs for lax stops are getting longer (the lax-lengthening process). The remarkable inter-speaker and intra-speaker variability indicates a sound change in progress in the stop system of contemporary Korean. Some speakers are early adopters or active propagators of the sound change, whereas others are not. Further study is necessary to see whether the inter-speaker differences exceed the intra-speaker differences in this sound change.
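One simple way to quantify, per speaker, how distinct the aspirated and lax VOT distributions remain is an effect size between the two samples. The statistic below (Cohen's d) and the VOT values are illustrative assumptions, not the paper's data or its stated method:

```python
import numpy as np

def cohens_d(a, b):
    """Effect size between two VOT samples. A small |d| between a
    speaker's aspirated and lax stops is consistent with a (partial)
    VOT merger; a large |d| indicates the contrast is maintained."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

# Hypothetical per-speaker VOT measurements (ms), one token per row item.
aspirated = [65, 70, 62, 68, 66]
lax = [60, 64, 58, 63, 61]
print(round(cohens_d(aspirated, lax), 2))
```

Comparing such a per-speaker statistic across recording sessions is one way to separate the inter-speaker differences from the intra-speaker variability the study describes.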

Web Contents Mining System for Real-Time Monitoring of Opinion Information based on Web 2.0 (웹2.0에서 의견정보의 실시간 모니터링을 위한 웹 콘텐츠 마이닝 시스템)

  • Kim, Young-Choon;Joo, Hae-Jong;Choi, Hae-Gill;Cho, Moon-Taek;Kim, Young-Baek;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.1
    • /
    • pp.68-79
    • /
    • 2011
  • This paper focuses on an opinion information extraction and analysis system based on Web mining, using statistics collected from Web contents. That is, users' opinion information scattered across several websites can be automatically extracted and analyzed. The system provides an opinion information search service that enables users to search for positive and negative opinions in real time and check their statistics. Users can also search for and monitor other opinion information in real time by entering keywords into the system. Comparison experiments with other techniques show that the proposed technique performs excellently. We evaluate the function that extracts positive/negative opinion information, the dynamic window and tokenizer techniques applied for multilingual information retrieval, and the technique for extracting exact multilingual phonetic transliterations. As an application example, experiments are carried out on typical movie-review sentences and on Wikipedia experimental data, and the results are analyzed.

Teaching English Stress Using a Drum: Based on Phonetic Experiments

  • Yi, Do-Kyong
    • English Language & Literature Teaching
    • /
    • v.15 no.2
    • /
    • pp.261-280
    • /
    • 2009
  • This study focuses on the pedagogical implications of stress in English pronunciation teaching, since stress is one of the most important characteristic factors of English pronunciation (Bolinger, 1976; Brown, 1994; Celce-Murcia, Brinton & Goodwin, 1996; Kreidler, 1989). The author investigated stress production in terms of duration, pitch, and intensity by a group of native speakers of English and a group of low-proficiency South Kyungsang Korean college students for the pre-test. For both the pre- and post-tests, the same stimuli were used: one one-syllable word, two two-syllable words, three three-syllable words, and three four-syllable words, in various sentence positions (isolation, initial, medial, and final). The software programs ALVIN and Praat were used to record and analyze the data. Since Celce-Murcia et al. (1996), Klatt (1975), and Ladefoged (2001) treat the duration of the stressed syllable as more significant than the other factors, pitch and intensity, from the listener's point of view, the author developed a special method of teaching English stress using a traditional Korean drum to emphasize duration. In addition, the results of the native speakers' production showed that their main strategy for realizing stress was lengthening stressed syllables. After six weeks of stress instruction using the drum, the pre- and post-test productions of the native speakers and the SK Korean participants were compared. The results of the post-test indicated that the participants improved greatly not only in duration but also in pitch after the stress instruction. The pitch improvement was unexpected but is well explained by the fact that long vowels receive accent in loanword adaptation in North Kyungsang Korean. The results also showed that the Korean participants' pitch and duration values for each syllable became more uniform (less differentiated) as the structure of the word or sentence became more complex, owing to dependence on their L1.

  • PDF
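Two of the three stress correlates measured above, duration and intensity, are straightforward to compute from a segmented waveform. The sketch below uses synthetic sine-wave stand-ins for a stressed and an unstressed syllable; the sampling rate, frequencies, amplitudes, and durations are all hypothetical, and pitch tracking (the third correlate) is omitted:

```python
import numpy as np

def rms_db(frame):
    """RMS intensity of a waveform segment in dB relative to full scale."""
    x = np.asarray(frame, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms) if rms > 0 else -np.inf

sr = 16000                                  # assumed sampling rate
t_str = np.arange(int(0.25 * sr)) / sr      # 250 ms "stressed" syllable
t_uns = np.arange(int(0.10 * sr)) / sr      # 100 ms "unstressed" syllable
stressed = 0.8 * np.sin(2 * np.pi * 200 * t_str)
unstressed = 0.3 * np.sin(2 * np.pi * 180 * t_uns)

# A stressed syllable realized English-style should be longer and louder.
print(len(stressed) / sr > len(unstressed) / sr,
      rms_db(stressed) > rms_db(unstressed))  # → True True
```

Measuring these two values per syllable before and after instruction is essentially what the pre-/post-test comparison above quantifies, with Praat supplying the pitch contours as well.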