• Title/Summary/Keyword: Orthographic Transcription


The Relationship Between Speech Intelligibility and Comprehensibility for Children with Cochlear Implants (조음중증도에 따른 인공와우이식 아동들의 말명료도와 이해가능도의 상관연구)

  • Heo, Hyun-Sook; Ha, Seung-Hee
    • Phonetics and Speech Sciences / v.2 no.3 / pp.171-178 / 2010
  • This study examined the relationship between speech intelligibility and comprehensibility for hearing-impaired children with cochlear implants. Speech intelligibility was measured with an orthographic transcription method applied to the acoustic signal at the word and sentence levels. Comprehensibility was evaluated by examining listeners' ability to answer questions about the contents of a narrative. Speech samples were collected from 12 speakers (ages 6~15 years) with cochlear implants. For each speaker, 4 different listeners (48 listeners in total) completed 2 tasks: one task involved making orthographic transcriptions and the other involved answering comprehension questions. The results were as follows: (1) Speech intelligibility and comprehensibility scores tended to increase as severity decreased. (2) Across all speakers, without considering severity, the relationship between speech intelligibility and comprehensibility scores was significant. However, within severity groups, the relationship between comprehensibility and speech intelligibility was significant only for the moderate-severe group. These results suggest that speech intelligibility scores measured by orthographic transcription may not accurately reflect how well listeners comprehend the speech of children with cochlear implants; therefore, measures of both speech intelligibility and listener comprehension should be considered when evaluating the speech ability and information-bearing capability of speakers with cochlear implants. (An illustrative scoring sketch follows this entry.)

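The following is a minimal sketch, not the study's actual scoring procedure, of the two measures described above: word-level intelligibility taken as the proportion of target words a listener transcribes correctly, and its relationship to comprehension scores checked with a Pearson correlation. All names and numbers are hypothetical illustrations.

```python
# Minimal sketch (illustrative only, not the paper's code). Requires Python 3.10+
# for statistics.correlation (Pearson r).
from statistics import correlation

def intelligibility_score(target_words, transcribed_words):
    """Percent of target words matched exactly by the listener's orthographic transcription."""
    hits = sum(t == h for t, h in zip(target_words, transcribed_words))
    return 100.0 * hits / len(target_words)

# Hypothetical per-speaker scores (illustrative values only).
intelligibility = [85.0, 62.5, 91.0, 40.0, 73.5]
comprehensibility = [80.0, 55.0, 88.0, 35.0, 70.0]  # % of comprehension questions answered

print(intelligibility_score(["나무", "집", "학교"], ["나무", "짐", "학교"]))  # ~66.7
print(correlation(intelligibility, comprehensibility))  # Pearson r across speakers
```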

Computer Codes for Korean Sounds: K-SAMPA

  • Kim, Jong-mi
    • The Journal of the Acoustical Society of Korea / v.20 no.4E / pp.3-16 / 2001
  • An ASCII encoding of Korean has been developed as an extension of the Speech Assessment Methods Phonetic Alphabet (SAMPA) for phonetic transcription. SAMPA is a machine-readable phonetic alphabet used for multilingual computing; it has been developed since 1987 and extended to more than twenty languages. The motivation for creating Korean SAMPA (K-SAMPA) is to label Korean speech for a multilingual corpus or to transcribe the native-language (L1) interfered pronunciation of a second-language learner for bilingual education. Korean SAMPA represents each Korean allophone with a particular SAMPA symbol; sounds that closely resemble one another are represented by the same symbol, regardless of the language in which they are uttered. Each symbol represents a speech sound that is spectrally and temporally distinct enough to be perceptually different when heard in isolation, and each type of sound has a separate IPA-like designation. Korean SAMPA is superior to other transcription systems with similar objectives. It describes the cross-linguistic sound quality of Korean better than the official Romanization system proclaimed by the Korean government in July 2000, because it uses an internationally shared phonetic alphabet, and it is phonetically more accurate than the official Romanization in that it dispenses with orthographic adjustments. It is also more convenient for computing than the International Phonetic Alphabet (IPA) because it consists of symbols available on a standard keyboard. This paper demonstrates how Korean SAMPA can express allophonic details and prosodic features by adopting the transcription conventions of the extended SAMPA (X-SAMPA) and the prosodic SAMPA (SAMPROSA). (A small sketch of the symbol-mapping idea follows this entry.)

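The following is a minimal sketch of the general idea behind a machine-readable ASCII phonetic alphabet: each allophone maps to one keyboard-typable symbol. The mapping shown is an illustration only and is not the published K-SAMPA symbol table.

```python
# Illustrative allophone-to-ASCII mapping (NOT the actual K-SAMPA table).
# "_h" for aspiration and "N" for the velar nasal follow X-SAMPA conventions.
ALLOPHONE_TO_ASCII = {
    "aspirated p": "p_h",
    "plain p":     "p",
    "vowel a":     "a",
    "velar nasal": "N",
}

def transcribe(allophones):
    """Join the ASCII symbols for a sequence of allophone labels."""
    return " ".join(ALLOPHONE_TO_ASCII[a] for a in allophones)

print(transcribe(["aspirated p", "vowel a", "velar nasal"]))  # "p_h a N"
```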

A Prosodic Labeling System of Intonation Patterns and Prosodic Structures in Korean

  • Cho, Yong-Hyung
    • Speech Sciences / v.4 no.1 / pp.113-133 / 1998
  • The system proposed in this paper prosodically transcribes the intonation patterns, prosodic structures, phrasings, and other prosodic aspects of Korean utterances on four parallel tiers: a tone tier, an orthographic tier, a break index tier, and a miscellaneous tier. The tone tier employs two phrase accents (L* and H*), three accentual phrase boundary tones (L-, H-, LH-), and four intonational phrase boundary tones (L%, H%, LH%, LHL%) in order to provide a phonological transcription of pitch events associated with accented syllables and phrase boundaries. The break index tier uses five break indices, numbered from 0 to 4, which mark the prosodic grouping of words and the prosodic structure of an utterance. Among the five indices, break index 3 and break index 4 align with an accentual phrase boundary tone and an intonational phrase boundary tone, respectively, on the tone tier. (An illustrative data-structure sketch follows this entry.)

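The following is a minimal sketch of how the four parallel tiers could be represented as a data structure. The container and field names are assumptions for illustration, not the paper's own file format; the label inventory comes from the abstract above.

```python
# Assumed representation of the four parallel tiers (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ProsodicLabel:
    orthographic: list   # words of the utterance (orthographic tier)
    tones: list          # e.g. "L*", "H*", "L-", "H-", "LH-", "L%", "H%", "LH%", "LHL%"
    break_indices: list  # one index 0..4 after each word; 3 = accentual, 4 = intonational phrase
    misc: list = field(default_factory=list)  # miscellaneous tier (hesitations, noise, etc.)

# Hypothetical two-word utterance: break index 3 aligned with an accentual phrase
# boundary tone (H-), then break index 4 aligned with an intonational boundary tone (L%).
label = ProsodicLabel(
    orthographic=["word1", "word2"],
    tones=["H*", "H-", "L*", "L%"],
    break_indices=[3, 4],
)
print(label.break_indices)  # [3, 4]
```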

A Study On Generation and Reduction of the Notation Candidate for the Notation Restoration of Korean Phonetic Value (한국어 음가의 표기 복원을 위한 표기 후보 생성 및 감소에 관한 연구)

  • Rhee, Sang-Burm; Park, Sung-Hyun
    • The KIPS Transactions: Part B / v.11B no.1 / pp.99-106 / 2004
  • Syllable restoration is the process of restoring a phonetic value recognized by a speech recognition device to the orthographic (notation) form it had before vocalization. In this paper, syllable restoration rules based on the standard pronunciation rules of Korean were constructed for the restoration process, and a method for generating a set of notation candidates from these rules was studied. A method for reducing the number of generated notation candidates was also studied: a three-phase reduction process was proposed that removes candidates containing non-notation syllables, non-vocabulary syllables, and non-stem syllables. Experimental results showed an average notation-candidate reduction rate of 74%. (An illustrative generate-then-reduce sketch follows this entry.)
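
The following is a minimal sketch of the generate-then-reduce idea: inverse pronunciation rules propose notation (spelling) candidates for a recognized phonetic form, and a filter discards candidates that fail a check such as lexicon membership. The rule table, lexicon, and toy symbols below are hypothetical stand-ins, not the paper's rule set or reduction phases.

```python
# Illustrative generate-then-reduce pipeline (toy symbols, not real Korean data).
INVERSE_RULES = {
    # recognized (phonetic) syllable -> possible written syllables
    "P1": ["S1", "S2", "S3", "S4"],
}
LEXICON = {"S1", "S3"}  # hypothetical word list used by one reduction phase

def generate_candidates(phonetic_form):
    """Propose notation candidates for a recognized phonetic form."""
    return INVERSE_RULES.get(phonetic_form, [phonetic_form])

def reduce_candidates(candidates):
    """Keep only candidates accepted by the lexicon filter (one reduction phase)."""
    return [c for c in candidates if c in LEXICON]

cands = generate_candidates("P1")
print(len(cands), "->", len(reduce_candidates(cands)))  # 4 -> 2
```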

Implementation of the Automatic Segmentation and Labeling System (자동 음성분할 및 레이블링 시스템의 구현)

  • Sung, Jong-Mo; Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea / v.16 no.5 / pp.50-59 / 1997
  • In this paper, we implement an automatic speech segmentation and labeling system which marks phone boundaries automatically for constructing a Korean speech database. We specify and implement the system based on conventional speech segmentation and labeling techniques, and we also develop a graphical user interface (GUI) in a Hangul Motif™ environment so that users can examine the automatically aligned boundaries and refine them easily. The developed system is applied to 16 kHz sampled speech, and the labeling unit is composed of 46 phoneme-like units (PLUs) and silence. The system accepts both phonetic and orthographic transcriptions as input for linguistic information. Hidden Markov models (HMMs) are employed for pattern matching, and each phoneme model is trained on a manually segmented database of 445 phonetically balanced words (PBW). To evaluate the performance of the system, we test it on another database consisting of sentence-type speech. According to our experiment, 74.7% of phoneme boundaries are within 20 ms of the true boundary and 92.8% are within 40 ms. (An illustrative boundary-accuracy sketch follows this entry.)

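The following is a minimal sketch of the evaluation metric reported above: the share of automatically placed phone boundaries that fall within a tolerance (20 ms, 40 ms) of the manually marked reference boundaries. The boundary times are hypothetical illustrations, not the paper's data.

```python
# Illustrative boundary-accuracy computation (assumed evaluation, toy values).
def boundary_accuracy(auto_ms, manual_ms, tolerance_ms):
    """Percent of automatic boundaries within tolerance_ms of the reference boundaries."""
    hits = sum(abs(a - m) <= tolerance_ms for a, m in zip(auto_ms, manual_ms))
    return 100.0 * hits / len(manual_ms)

# Hypothetical boundary times in milliseconds.
auto   = [102, 250, 415, 633, 880]
manual = [100, 265, 400, 660, 875]
print(boundary_accuracy(auto, manual, 20))  # % within 20 ms
print(boundary_accuracy(auto, manual, 40))  # % within 40 ms
```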