• Title/Summary/Keyword: International Phonetic Alphabet (IPA)


Transcription of Sounds and a Problem of the IPA

  • Chung, Kook
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.63-75
    • /
    • 2002
  • This paper examines the principles of the International Phonetic Association and its alphabet to see whether the International Phonetic Alphabet (the IPA, for short) is adequate for transcribing the sounds of a language like Korean. Special attention is given to 'broad transcription', and the IPA is found to be inadequate for representing the phonemes: the sounds of Korean cannot be correctly transcribed phonemically with the current alphabet. A suggestion is made to help solve this problem and to extend the IPA so that it can accommodate all the different languages of the world.


Phonetic Alphabet as a Pronunciation Guide (영어발음교육과 발음기호)

  • Kang, Yongsoon
    • Journal of English Language & Literature
    • /
    • v.56 no.1
    • /
    • pp.65-78
    • /
    • 2010
  • The purpose of this paper is to suggest that the International Phonetic Alphabet be included in the English curriculum and taught in English classrooms. The current English curriculum for elementary and middle school students specifies nothing about teaching the IPA, yet knowledge of the IPA is essential if students are to work out for themselves how to pronounce English words. The IPA, however, is either too little or too much to be taught at school. It is too little in that it says nothing about allophones, knowledge of which could help learners shed foreign accents as much as possible. It is too much in that a single symbol can represent more than one sound (e.g., /ɔ/ in American versus British English). To overcome these drawbacks, the IPA should be introduced gradually, together with the allophones that occur in the same environments. The correct vowel sounds should be introduced with the aid of a pronunciation dictionary so that students can develop accurate vowel quality of their own. Moreover, IPA symbols should be adopted in English textbooks.

The Organic Principle of the International Korean Phonetic Alphabet

  • Lee, Hyun-Bok
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.285-288
    • /
    • 1996
  • Based on the articulatory phonetic (or organic) principle, the Korean alphabet of 28 letters invented by King Sejong in 1443 is not only systematic and scientifically oriented but also easy to learn and use in the everyday life of the Korean people. The International Korean Phonetic Alphabet (IKPA) was devised by the present writer in 1971 by applying the organic principle much more extensively. Accordingly, the IKPA symbols are just as simple and easy to learn and memorize as the Korean alphabet, and at the same time they are much more consistent and logical than the IPA symbols, which, having been derived mainly from Roman and Greek letters, are an unsystematic mass of letters except in one respect, i.e., the retroflex symbols. This paper describes the organic principles exploited in devising the International Korean Phonetic Alphabet and assesses its advantages.


Computer Codes for Korean Sounds: K-SAMPA

  • Kim, Jong-mi
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.4E
    • /
    • pp.3-16
    • /
    • 2001
  • An ASCII encoding of Korean has been developed as an extended phonetic transcription within the Speech Assessment Methods Phonetic Alphabet (SAMPA). SAMPA is a machine-readable phonetic alphabet used for multilingual computing; it has been developed since 1987 and extended to more than twenty languages. The motivation for creating Korean SAMPA (K-SAMPA) is to label Korean speech in a multilingual corpus or to transcribe the native-language (L1) interference in a second-language learner's pronunciation for bilingual education. Korean SAMPA represents each Korean allophone with a particular SAMPA symbol; sounds that closely resemble one another are represented by the same symbol, regardless of the language in which they are uttered. Each symbol represents a speech sound that is spectrally and temporally distinct enough to be perceptually different when heard in isolation, and each type of sound has a separate IPA-like designation. Korean SAMPA is superior to other transcription systems with similar objectives. It describes the cross-linguistic sound quality of Korean better than the official Romanization system proclaimed by the Korean government in July 2000, because it uses an internationally shared phonetic alphabet, and it is phonetically more accurate than the official Romanization in that it dispenses with orthographic adjustments. It is also more convenient for computing than the International Phonetic Alphabet (IPA) because it consists of symbols available on a standard keyboard (see the sketch after this entry). This paper demonstrates how Korean SAMPA can express allophonic details and prosodic features by adopting the transcription conventions of the extended SAMPA (X-SAMPA) and the prosodic SAMPA (SAMPROSA).

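The mapping idea described in this abstract can be illustrated in a few lines of Python. The sketch below is not the published K-SAMPA table; it borrows a handful of general X-SAMPA correspondences (e.g. /ʌ/ → V, /ɯ/ → M) as placeholder entries to show how an IPA string for a Korean word might be re-encoded into ASCII for corpus labelling.

```python
# Minimal sketch of an IPA -> ASCII (SAMPA-style) re-encoding.
# The table below is illustrative only; it uses a few general
# X-SAMPA correspondences and is NOT the published K-SAMPA chart.
IPA_TO_ASCII = {
    "ʌ": "V",    # open-mid back unrounded vowel
    "ɯ": "M",    # close back unrounded vowel
    "ɛ": "E",    # open-mid front unrounded vowel
    "ŋ": "N",    # velar nasal
    "ɕ": "s\\",  # voiceless alveolo-palatal fricative
    "ʰ": "_h",   # aspiration diacritic
}

def to_ascii(ipa: str) -> str:
    """Re-encode an IPA string symbol by symbol, keeping plain ASCII as-is."""
    return "".join(IPA_TO_ASCII.get(ch, ch) for ch in ipa)

if __name__ == "__main__":
    # Hypothetical broad transcription of 서울 'Seoul'; for illustration only.
    print(to_ascii("sʌul"))   # -> sVul
    print(to_ascii("hɛŋ"))    # -> hEN
```

The actual K-SAMPA conventions, including the allophone-level and prosodic symbols drawn from X-SAMPA and SAMPROSA, are defined in the paper itself.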

Phonetic Evaluation in Speech Sciences and Issues in Phonetic Transcription (음성 평가의 다학문적 현황과 표기의 과제)

  • Kim, Jong-Mi
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.259-280
    • /
    • 2003
  • The paper discusses the ways in which speech sounds are evaluated and transcribed in various fields of the speech sciences, and suggests ways to transcribe them more accurately. The academic fields explored are phonetics, speech processing, speech pathology, and foreign language education. The discussion centers on the International Phonetic Alphabet (IPA), the convention most commonly used in these fields, and on less widely adopted conventions such as TOnes and Break Indices (ToBI), the Speech Assessment Methods Phonetic Alphabet (SAMPA), an extension of the official Korean Romanization (KORBET), and the American-English transcription system of the TIMIT database (TIMITBET). These conventions are examined with respect to Korean, English, and Korean-accented English (a small comparison sketch follows this entry). The paper demonstrates that each transcription can be recommended for a specific need in a different academic field. Because it is the most widely recognized, the IPA is best suited for phonetic evaluation in phonetics, speech pathology, and foreign language education. The remaining transcriptions are useful for keyboard-entering phonetically evaluated data from all these fields, as well as for sound transcription in speech engineering, because they use letter symbols that are convenient for typing, searching, and programming. Several practical suggestions are made for maintaining transcriptional efficiency and consistency in the face of intra- and inter-transcriber variability.

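As a concrete illustration of why several conventions coexist, the sketch below encodes one English word in three of the systems named above. The IPA and X-SAMPA forms are standard and the TIMIT-style form uses ARPAbet-like phone labels; all three are shown only to illustrate that the same pronunciation can be keyed in with very different symbol inventories. The KORBET and ToBI details are not reproduced here, and none of this is the paper's own data.

```python
# One pronunciation of the English word "speech" in three conventions.
# IPA and X-SAMPA forms are standard; the TIMIT-style phone labels follow
# ARPAbet usage. This is an illustration, not material from the paper.
TRANSCRIPTIONS = {
    "IPA":     "spiːtʃ",                # needs Unicode-capable input and fonts
    "X-SAMPA": "spi:tS",                # plain ASCII, keyboard-friendly
    "TIMIT":   ["s", "p", "iy", "ch"],  # one ASCII label per phone segment
}

for system, form in TRANSCRIPTIONS.items():
    print(f"{system:8s} {form}")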

Eligibility of the affinity between alphabet codes and pronunciation drills

  • Kim, Hyoung-Youb
    • Lingua Humanitatis
    • /
    • v.8
    • /
    • pp.331-367
    • /
    • 2006
  • In this paper I investigate issues related to clarifying the close relationship between the writing system and pronunciation. In pursuing this subject I found that the same topic has long been a major academic concern in Korea, and that there have been some remarks on English alphabet letters and pronunciation. Nevertheless, the relation between alphabet codes and pronunciation tokens has not been treated as the main key to mastering English pronunciation correctly and completely. The main claim of this paper is that it is necessary to comprehend this connection; we can then recognize the significant role of alphabetic structure in understanding the gist of pronunciation practice. The paper is divided into four parts, each presenting material to support the view that the writing system should be treated as the inevitable counterpart of the sound system, and vice versa. In the first section I show that the development of ways of pronouncing English words is closely tied to the efforts of earlier scholars: surveying studies of the alphabetic structure of the period, many scholars found that spellings were recorded without any common denominator, and they therefore sought both to lay the groundwork for a standard written form of words and to associate alphabet letters with phonetic features. Secondly, I discuss the negative aspects of 'only spelling-based English pronunciation education' as an educational goal of the 'Phonics methodology', and argue for the essential role of phonemic properties from a phonetic perspective: phonemic awareness. Thirdly, I address the standardization of the English spelling system; as the language is applied in ever more professional areas such as commerce, science, and culture, it is natural to assume that its usage will be transformed according to those areas around the world. Fourthly, I introduce the first English-Korean grammar book containing a section on 'the introduction to English pronunciation'. In that chapter the author explained the sound features of English on the basis of the American 'Scientific Alphabet'; in that transcription system all symbols were based on English alphabet letter forms rather than on the separate phonetic signs of the IPA.


Proposed Methodology for Building Korean Machine Translation Data sets Considering Phonetic Features (단어의 음성학적 특징을 이용한 한국어 기계 번역 데이터 세트 구축 방안)

  • Zhang Qinghao;Yang Hongjian;Serin Kim;Hyuk-Chul Kwon
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.592-595
    • /
    • 2022
  • Sino-Korean words and loanwords make up a very large share of the Korean vocabulary: about 53% of everyday vocabulary and about 92% of technical terminology. These are words used in Korea under the influence of China and other countries. When the Hangul spelling of a Sino-Korean word or loanword and its original-language form are pronounced, the two pronunciations turn out to be quite similar. For example, the Sino-Korean word 도서관 (图书馆) is pronounced in Chinese as 'thu.ʂu.kwan', which is quite similar to a Korean speaker's pronunciation of the word. In this paper, we aim to build Korean-Chinese and Korean-English word-level machine translation data sets that take into account five phonetic features: Source Length, Source IPA Length, Target Length, Target IPA Length, and IPA Distance (a sketch of how such features could be computed follows this entry).

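The five features listed in the abstract can be computed mechanically once the IPA forms of a word pair are available. The sketch below assumes the IPA strings are already given (the Korean form shown is an approximation supplied here for illustration; the Chinese form is the one quoted in the abstract) and uses plain Levenshtein edit distance for "IPA Distance", which is one reasonable reading of that feature, not necessarily the authors' exact definition.

```python
# Sketch: computing the five phonetic features for one word pair.
# IPA forms are treated as plain character sequences; "IPA Distance"
# is taken here to be Levenshtein edit distance (an assumption).

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def phonetic_features(src: str, src_ipa: str, tgt: str, tgt_ipa: str) -> dict:
    return {
        "Source Length":     len(src),
        "Source IPA Length": len(src_ipa),
        "Target Length":     len(tgt),
        "Target IPA Length": len(tgt_ipa),
        "IPA Distance":      levenshtein(src_ipa, tgt_ipa),
    }

if __name__ == "__main__":
    # 도서관 / 图书馆: the Korean IPA below is an approximate broad form added
    # for illustration; the Chinese IPA is the one quoted in the abstract.
    print(phonetic_features("도서관", "to.sʌ.kwan", "图书馆", "thu.ʂu.kwan"))
```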

A Study on the Multilingual Speech Recognition for On-line International Game (온라인 다국적 게임을 위한 다국어 혼합 음성 인식에 관한 연구)

  • Kim, Suk-Dong;Kang, Heung-Soon;Woo, In-Sung;Shin, Chwa-Cheul;Yoon, Chun-Duk
    • Journal of Korea Game Society
    • /
    • v.8 no.4
    • /
    • pp.107-114
    • /
    • 2008
  • Demand for multilingual speech recognition in games, and the need for a multilingual system that expresses the phonetics of many different languages with a single phonetic model, have been growing in the game industry. Accordingly, research is needed on an integrated multilingual system that can express speech consisting of several different languages in a single lexical model. This paper presents basic research toward such an integrated multilingual lexical model and describes a system that recognizes Korean and English speech through the IPA (International Phonetic Alphabet). We focused on finding an IPA model that satisfies Korean and English phonemes simultaneously (a sketch of a shared IPA lexicon follows this entry). As a result, we obtained a speech-recognition rate of 90.62% for Korean and 91.71% for English.

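The core idea above, one IPA-based phone inventory shared by Korean and English, can be sketched as a small pronunciation lexicon in which entries from both languages map onto the same symbol set. The entries below are broad, illustrative approximations added here; they are not the phone set or lexicon used in the paper.

```python
# Sketch of a shared IPA-based pronunciation lexicon for two languages.
# A recognizer built on such a lexicon needs only one acoustic model per
# IPA phone, regardless of the word's language. Entries are illustrative
# approximations, not the paper's actual lexicon.
SHARED_LEXICON = {
    ("ko", "나무"):  ["n", "a", "m", "u"],
    ("ko", "서울"):  ["s", "ʌ", "u", "l"],
    ("en", "game"):  ["g", "e", "i", "m"],   # broad /ɡeɪm/, diphthong split
    ("en", "sun"):   ["s", "ʌ", "n"],        # reuses the same /s/ and /ʌ/ models
}

# The shared phone inventory is simply the union over all entries.
phone_inventory = sorted({p for phones in SHARED_LEXICON.values() for p in phones})
print(phone_inventory)
```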

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet (EEG기반 언어 인식 시스템을 위한 국제음성기호를 이용한 모음 특징 추출 연구)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.90-95
    • /
    • 2014
  • Research on the brain-computer interface, a new interface that connects humans to machines, has been carried out to implement user-assistance devices such as wheelchair controls or character input. Recent studies have also attempted brain-wave-based speech recognition systems for silent communication. In this paper, as a foundation step toward an electroencephalogram (EEG) based speech recognition system, we studied how to extract vowel features based on the International Phonetic Alphabet (IPA). We conducted a two-step experiment with three healthy male subjects: the first step was speech imagery with a single vowel and the second was imagery with two successive vowels. From the 64 acquired channels we selected 32 channels covering the frontal lobe, which is related to thinking, and the temporal lobes, which are related to speech function. Eigenvalues of the signal were used as the feature vector and a support vector machine (SVM) was used for classification (a sketch of this pipeline follows this entry). The results of the first step showed that a feature vector of order 10 or higher is needed to analyze the EEG signals of imagined speech; with an 11th-order feature vector, the highest average classification rate was 95.63% for /a/ versus /o/ and the lowest was 86.85% for /a/ versus /u/. In the second step we studied the difference between the imagined-speech signals of single vowels and of two successive vowels.
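The feature-extraction and classification pipeline described above (eigenvalues of the EEG signal as the feature vector, an SVM for two-class vowel classification) can be sketched as follows. The channel count and feature order mirror the numbers quoted in the abstract, but the data are random placeholders; this is not the authors' code.

```python
# Sketch: eigenvalue features from multi-channel EEG + SVM classification.
# Random data stands in for real recordings; 32 channels and an 11th-order
# feature vector mirror the numbers quoted in the abstract.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_TRIALS, N_CHANNELS, N_SAMPLES, ORDER = 60, 32, 512, 11

def eigen_features(trial: np.ndarray, order: int) -> np.ndarray:
    """Largest eigenvalues of the channel covariance matrix of one trial."""
    cov = np.cov(trial)                   # (channels x channels)
    eigvals = np.linalg.eigvalsh(cov)     # ascending order
    return eigvals[-order:][::-1]         # keep the top `order` values

# Placeholder "imagined /a/" vs "imagined /o/" trials.
X = np.stack([eigen_features(rng.standard_normal((N_CHANNELS, N_SAMPLES)), ORDER)
              for _ in range(N_TRIALS)])
y = np.array([0, 1] * (N_TRIALS // 2))    # two vowel classes

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("mean accuracy:", scores.mean())    # ~chance level on random data
```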

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.59-64
    • /
    • 2015
  • In this paper, we show the usefulness of the deep belief network (DBN) in the field of the brain-computer interface (BCI), especially in relation to imagined speech. In recent years, growing interest in BCI has led to a number of useful applications, such as robot control, game interfaces, and exoskeleton limbs. Imagined speech, which could be used for communication or military-purpose devices, is one of the most exciting BCI applications, but implementing such a system raises several problems. In a previous paper we already handled some of the issues of imagined speech using the International Phonetic Alphabet (IPA), although the approach needed to be complemented for multi-class classification problems. This paper therefore provides a suitable solution for vowel classification of imagined speech. We used the DBN, a deep learning algorithm, for multi-class vowel classification and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment, we obtained 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed on the scalp over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. Eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison with the DBN, we also report the classification results of a back-propagation artificial neural network (BP-ANN); a simplified sketch of the comparison follows this entry. The BP-ANN achieved 52.04% and the DBN 87.96%, meaning that the DBN performed 35.92 percentage points better on multi-class imagined-speech classification. In addition, the DBN required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
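As a rough illustration of the comparison above, the sketch below contrasts a plain back-propagation network with a DBN-style model built from stacked restricted Boltzmann machines followed by a supervised classifier, using scikit-learn components. A full DBN also fine-tunes the stacked layers with back-propagation, which this simplified pipeline omits; the data here are random placeholders, not EEG recordings, and the layer sizes are arbitrary choices made for the sketch.

```python
# Sketch: BP-ANN baseline vs. a DBN-style model (stacked RBMs + classifier).
# Random data stands in for the eigenvalue features of the EEG trials;
# scikit-learn's BernoulliRBM expects inputs scaled to [0, 1].
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 11))        # placeholder feature vectors
y = rng.integers(0, 4, size=120)          # four vowel classes /a/ /i/ /o/ /u/

bp_ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),            # RBMs need [0, 1] inputs
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0)),
    ("clf",  LogisticRegression(max_iter=1000)),
])

for name, model in [("BP-ANN", bp_ann), ("DBN-style", dbn_like)]:
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```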