• Title/Summary/Keyword: 단어 발음 (word pronunciation)


Case Study of a Dog Vocalizing Human's Words (사람의 말을 발성하는 개의 사례 연구)

  • Kyon, Doo-Heon; Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.31 no.4 / pp.235-243 / 2012
  • This paper studies the characteristics and causes of the sounds produced in cases of dogs vocalizing human words, distinguishing passive from active vocalization. In previously reported cases, the dog was able to grasp characteristics of its host's voice and imitate the sound with its own vocal organs; these are cases of passive vocalization, a temporary voice imitation without any communicative function. In contrast, a recently reported case in which a dog vocalizes words such as "Um-ma" and "Nu-na-ya" shows a vocalization pattern clearly distinguished from the prior cases: the dog repeatedly and actively vocalizes the relevant words according to circumstances, and they serve as rudimentary communication and interaction with its host. The dog appears able to vocalize human words actively because it has a high level of intelligence and intimacy with its host, because people react enthusiastically to its pronunciations, and so forth. These results can be used in studies investigating animal sounds for vocalization potential and language-learning feasibility.

A Study on Rhythm Information Visualization Using Syllable of Digital Text (디지털 텍스트의 음절을 이용한 운율 정보 시각화에 관한 연구)

  • Park, Seon-Hee; Lee, Jae-Joong; Park, Jin-Wan
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.120-126 / 2009
  • As the information age advances rapidly, the amount of digital text has been increasing as well, which has brought a corresponding increase in visualization work aimed at making sense of large volumes of digital text. Existing visualization designs for digital text concentrate mainly on depicting subject words through stemming algorithms and word-frequency extraction, on highlighting the meaning of the text, and on connections between sentences, so expression of the rhythm that can visualize the sentimental feeling of digital text has been insufficient. The syllable is the phonological unit that can express rhythm most efficiently: in sentences it is the most basic unit for pronouncing words, phrases, and sentences, and rhythm factors such as accent, intonation, and length are based on the syllable. Sonority, the property most closely associated with definitions of the syllable, is expressed through airflow from the lungs and the acoustic energy it carries. From this perspective, this study examines phonological definitions and characteristics of the syllable as a property of digital text and investigates how to visualize rhythm through diagrams. In the experiment, digital text is converted into phonetic symbols, and rhythm information is visualized as images using degree of resonance, which underlies rhythm in all languages, together with the syllable structure of the digital text. By visualizing syllable information, the system provides syllable information about digital text and expresses its sentiment through diagrams, assisting the user's understanding through a systematic formulation. This study thus aims at an easy understanding of a text's rhythm and at realizing visualization of digital text.
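
A toy illustration (not the authors' system) of the core idea: map each syllable to a peak-sonority value and plot the resulting rhythm curve. The romanized syllables and the sonority scale below are simplifying assumptions.

```python
# Illustrative sketch only: a syllable-to-sonority mapping and a simple
# rhythm curve. Scale and syllabification are toy assumptions.
import matplotlib.pyplot as plt

# Toy sonority hierarchy: low vowels > high vowels > glides > liquids > nasals.
SONORITY = {ch: 5 for ch in "aeo"}
SONORITY.update({ch: 4 for ch in "iu"})
SONORITY.update({ch: 3 for ch in "wy"})
SONORITY.update({ch: 2 for ch in "lr"})
SONORITY.update({ch: 1 for ch in "mn"})

def syllable_sonority(syllable: str) -> int:
    """Peak sonority of a syllable (obstruents default to 0)."""
    return max((SONORITY.get(ch, 0) for ch in syllable), default=0)

syllables = ["di", "gi", "tal", "tekst", "ri", "dem"]  # hypothetical input
curve = [syllable_sonority(s) for s in syllables]

plt.plot(range(len(curve)), curve, marker="o")
plt.xticks(range(len(syllables)), syllables)
plt.ylabel("peak sonority (toy scale)")
plt.title("Syllable-level rhythm curve (illustrative)")
plt.show()
```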


A Study on the Spoken Korean Citynames Using Multi-Layered Perceptron of Back-Propagation Algorithm (오차 역전파 알고리즘을 갖는 MLP를 이용한 한국 지명 인식에 대한 연구)

  • Song, Do-Sun; Lee, Jae-Gheon; Kim, Seok-Dong; Lee, Haing-Sei
    • The Journal of the Acoustical Society of Korea / v.13 no.6 / pp.5-14 / 1994
  • This paper describes an experiment on speaker-independent automatic recognition of spoken Korean words using a multi-layer perceptron trained with the error back-propagation algorithm. The target words are 50 city names taken from D.D.D. area codes, 43 of them two syllables long and the remaining 7 three syllables long. The words were not segmented into syllables or phonemes; instead, feature components extracted from the words at equal intervals were applied to the neural network, which made the result independent of speech duration. PARCOR coefficients calculated from the frames using linear predictive analysis were employed as feature components. The paper seeks the optimum conditions through four different experiments: a comparison between total and pre-classified training, the dependence of the recognition rate on the number of frames and the PARCOR order, the change in recognition due to the number of neurons in the hidden layer, and a comparison of methods for composing the output pattern of the output neurons. As a result, a recognition rate of 89.6% was obtained.
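
A minimal sketch of the kind of classifier described: a one-hidden-layer perceptron trained with error back-propagation on fixed-size feature vectors. The feature shapes, vocabulary size, and random training data below are assumptions; real inputs would be PARCOR coefficients from LPC analysis of equally spaced frames.

```python
# Minimal MLP + error back-propagation sketch (hypothetical data/shapes).
import numpy as np

rng = np.random.default_rng(0)
n_frames, parcor_order, n_words = 10, 12, 50
n_in, n_hidden = n_frames * parcor_order, 64

# Hypothetical training data: one flattened feature vector per utterance.
X = rng.normal(size=(500, n_in))
y = rng.integers(0, n_words, size=500)

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_words)); b2 = np.zeros(n_words)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.05
for epoch in range(100):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = softmax(h @ W2 + b2)                 # output probabilities
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1; grad /= len(y)
    # Back-propagate the cross-entropy error through both layers.
    dW2 = h.T @ grad; db2 = grad.sum(axis=0)
    dh = (grad @ W2.T) * (1 - h**2)          # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

With real PARCOR features the same loop applies unchanged; the paper's experiments vary the frame count, PARCOR order, and hidden-layer size around such a setup.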


Convergent Analysis on the Speech Sound of Typically Developing Children Aged 3 to 5: Focused on Word Level and Connected Speech Level (3-5세 일반아동의 말소리에 대한 융합적 분석: 단어와 자발화를 중심으로)

  • Kim, Yun-Joo; Park, Hyun-Ju
    • Journal of the Korea Convergence Society / v.9 no.6 / pp.125-132 / 2018
  • This study investigated the speech sound production characteristics and evaluation aspects of preschool children through a word test and a connected speech test. The authors administered the Assessment of Phonology and Articulation for Children (APAC) to 72 typically developing children (24 each of three-, four-, and five-year-olds) and analyzed differences in the percentage of correct consonants (PCC) and intelligibility according to age and sex, the correlation between PCC and intelligibility, and speech sound error patterns. PCC and intelligibility increased with age, but there was no difference according to sex. The correlation was statistically significant in the 5-year-old group. Speech sound error patterns differed between the two tests. This study showed that children's speech sound production varied according to language unit; therefore, both types of tests should be conducted to properly assess speech sound production ability. This suggests that the current standard of identifying language impairment solely by word-level PCC requires review and further study.
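
For reference, the PCC measure the study relies on is a simple proportion; a minimal sketch with hypothetical counts:

```python
# PCC = (correctly produced consonants / target consonants) * 100.
def percent_correct_consonants(target_consonants: int, correct_consonants: int) -> float:
    """Percentage of correct consonants over all target consonants."""
    return 100.0 * correct_consonants / target_consonants

# e.g., a child produces 43 of 50 target consonants correctly:
print(percent_correct_consonants(50, 43))  # 86.0
```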

Automatic Back-Transliteration with Word Origin Information (어원 정보를 이용한 외래어의 자동 원어 복원)

  • Lee, Sang-Yool; Kang, In-Su; Na, Seung-Hoon; Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology / 2003.10d / pp.54-60 / 2003
  • The problem of recovering the original word from a transliterated loanword is commonly approached with statistical methods that exploit the pronunciation information of the source word. However, because most previous studies targeted only English words, they performed poorly on words of non-English origin such as '도쿄 (Tokyo)' and '하인리히 (Hinrich)'. To address this problem, we devised a method for determining the origin of a loanword written in Hangul, and by separating loanwords by origin and building a training model for each, we aimed to raise the back-transliteration accuracy for loanwords of diverse origins. In experiments on restoring data of four different origins (English, Japanese, Chinese, and French), the system implemented in this way showed a 13% performance improvement over the existing approach.
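
An illustrative two-stage skeleton of the approach described above (not the paper's implementation): first guess the origin of a Hangul loanword, then route it to an origin-specific back-transliteration model. The classifier, the bigram counts, and the per-origin models below are stubs standing in for trained statistical models.

```python
# Two-stage back-transliteration skeleton: origin classification, then routing.
from collections import defaultdict

class OriginClassifier:
    """Toy character-bigram scorer; a real system would be trained per origin."""
    def __init__(self, bigram_counts_by_origin):
        self.counts = bigram_counts_by_origin
    def classify(self, hangul_word: str) -> str:
        def score(origin):
            table = self.counts[origin]
            return sum(table.get(hangul_word[i:i+2], 0)
                       for i in range(len(hangul_word) - 1))
        return max(self.counts, key=score)

def back_transliterate(hangul_word, classifier, models):
    origin = classifier.classify(hangul_word)
    return origin, models[origin](hangul_word)

# Hypothetical per-origin models (stubs for trained statistical models):
models = {
    "en": lambda w: "<english candidate>",
    "ja": lambda w: "<japanese candidate>",
}
counts = {"en": defaultdict(int), "ja": defaultdict(int, {"도쿄": 3})}
print(back_transliterate("도쿄", OriginClassifier(counts), models))
```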


Why Is Shale Called 'Hyeoram (혈암)'? (Shale을 왜 '혈암'이라 하는가?)

  • Lee, Chang-Jin; Ryu, Chun-Ryeol
    • Conference Proceedings of the Korean Earth Science Society / 2010.04a / pp.24-24 / 2010
  • Most of the mineral and rock names learned from secondary earth science textbooks and university texts are terms imported from English, Chinese characters, or Japanese. Because these terms have been used without any analysis or research into their etymology and meaning, they are very difficult for beginners in geology to learn. People simply memorize mineral and rock names without knowing their etymology or meaning, or use them thinking only of their scholarly sense, and several different names are sometimes used for a single mineral or rock. Entirely wrong rock names even circulate among the public, yet this is neither controlled nor recognized as wrong. For example, secondary textbooks and university texts transcribe the English word 'shale' into Korean as '셰일' following its English pronunciation, whereas China and Japan write it as 혈암(頁岩). Internet dictionaries of Korean mass media, and terms used by the public, openly render the Chinese 頁岩 as '혈암 (hyeoram)'. Looking up '頁' in a Chinese-character dictionary, it is listed both as 'hyeol' (head) and as 'yeop' (page of a book). Given the petrological characteristics of shale, then, should it be called 'hyeoram' or 'yeobam'? Other fields of science have steadily pursued research and practice to standardize difficult Chinese-character terms into plain Korean. In biology, difficult scientific names have already been standardized into plain Korean, and those names are widely known to students and the public. Many of the words and technical terms appearing in earth science textbooks are Korean transcriptions of Chinese characters; these should be standardized into plain Korean as quickly as possible, used first by specialists, and at the same time made known to students and the public. This would change the perception that the terminology of earth science is harder than its content and would give hope and courage to students who want to major in earth science. As a step in that direction, we investigate the etymology of mineral and rock names and explore the feasibility of standardizing them in Hangul.


Comparison of vowel lengths of articles and monosyllabic nouns in Korean EFL learners' noun phrase production in relation to their English proficiency (한국인 영어학습자의 명사구 발화에서 영어 능숙도에 따른 관사와 단음절 명사 모음 길이 비교)

  • Park, Woojim; Mo, Ranm; Rhee, Seok-Chae
    • Phonetics and Speech Sciences / v.12 no.3 / pp.33-40 / 2020
  • The purpose of this research was to find the relation between Korean learners' English proficiency and the ratio of the length of the stressed vowel in a monosyllabic noun to that of the unstressed vowel in the article of a noun phrase (e.g., "a cup", "the bus", etc.). Generally, the vowels in monosyllabic content words are phonetically more prominent than those in monosyllabic function words, as the former carry phrasal stress, making the vowels in content words longer in duration, higher in pitch, and louder in amplitude. Based on speech samples from the Korean-Spoken English Corpus (K-SEC) and the Rated Korean-Spoken English Corpus (Rated K-SEC), this study examined 879 English noun phrases, each composed of an article and a monosyllabic noun, drawn from sentences rated at 4 levels of proficiency. The lengths of the vowels in these 879 target NPs were measured, and the ratio of the vowel lengths in nouns to those in articles was calculated. It turned out that the higher the proficiency level, the greater the mean ratio of noun vowels to article vowels, confirming the research hypothesis. The research thus concluded that the higher Korean English learners' proficiency, the better they produce stressed and unstressed vowels with conspicuous length differences between them.
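
A minimal sketch of the ratio computation described above, using hypothetical vowel durations; averaging per level mirrors the study's comparison across proficiency levels.

```python
# Noun-vowel duration over article-vowel duration per NP, averaged per level.
from statistics import mean
from collections import defaultdict

# (proficiency level, article vowel duration, noun vowel duration) in seconds;
# all values below are made up for illustration.
tokens = [
    (1, 0.080, 0.110),
    (1, 0.075, 0.105),
    (4, 0.055, 0.140),
    (4, 0.050, 0.150),
]

ratios = defaultdict(list)
for level, art_v, noun_v in tokens:
    ratios[level].append(noun_v / art_v)

for level in sorted(ratios):
    print(f"level {level}: mean noun/article vowel ratio = {mean(ratios[level]):.2f}")
```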

A Study on the Improvement of Isolated Word Recognition for Telephone Speech (전화음성의 격리단어인식 개선에 관한 연구)

  • Do, Sam-Joo; Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.9 no.4 / pp.66-76 / 1990
  • In this work, the effect of the noise and distortion of a telephone channel on speech recognition is studied, and methods to improve the recognition rate are proposed. Computer simulation is done using test data made by pronouncing 100 phonetically balanced Korean isolated words ten times each in a speaker-dependent mode. First, a spectral subtraction method is suggested to improve noisy speech recognition. Then, the effects of band limiting and channel distortion are studied. It has been found that band limiting and amplitude distortion lower the recognition rate significantly, but phase distortion has little effect. To reduce the channel effect, the reference pattern is modified according to some training data. When both channel noise and distortion exist, the recognition rate without the proposed method is merely 7.7~26.4%, but with the proposed method it increases drastically to 76.2~92.3%.
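
A sketch of magnitude spectral subtraction in its standard textbook form, which may differ in detail from the paper's variant: the noise magnitude spectrum is estimated from a leading noise-only segment and subtracted frame by frame, flooring at zero.

```python
# Standard magnitude spectral subtraction with overlap-add (illustrative form).
import numpy as np

def spectral_subtraction(noisy, noise_estimate_frames, frame_len=256, hop=128):
    window = np.hanning(frame_len)
    # Average noise magnitude spectrum over the leading noise-only frames.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(window * noisy[i*hop:i*hop+frame_len]))
         for i in range(noise_estimate_frames)], axis=0)
    out = np.zeros_like(noisy)
    for i in range((len(noisy) - frame_len) // hop):
        frame = window * noisy[i*hop:i*hop+frame_len]
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract and floor
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
        out[i*hop:i*hop+frame_len] += clean               # overlap-add
    return out

# Hypothetical usage with synthetic noisy speech (2048 noise-only samples first):
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
noisy = np.concatenate([rng.normal(0, 0.1, 2048), signal + rng.normal(0, 0.1, 8000)])
enhanced = spectral_subtraction(noisy, noise_estimate_frames=8)
```

A known side effect of this simple subtract-and-floor rule is residual "musical noise"; the paper's concern, however, is how such enhancement raises recognition rates over a telephone channel.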


The Application of an HMM-based Clustering Method to Speaker Independent Word Recognition (HMM을 기본으로한 집단화 방법의 불특정화자 단어 인식에 응용)

  • Lim, H.; Park, S.-Y.; Park, M.-W.
    • The Journal of the Acoustical Society of Korea / v.14 no.5 / pp.5-10 / 1995
  • In this paper we present a clustering procedure based on HMMs, used to obtain multiple statistical models that can absorb the variation among speakers with different ways of saying words. The HMM-clustered models obtained from the developed technique are applied to speaker-independent isolated word recognition. The HMM clustering method splits off from the training set all observation sequences whose likelihood scores fall below a threshold and creates a new model from the observation sequences in the new cluster. Clustering is iterated by assigning each observation sequence to the cluster whose model gives the maximum likelihood score. If any cluster has changed from the previous iteration, the model in that cluster is re-estimated using the Baum-Welch re-estimation procedure. This method is therefore more efficient than the conventional template-based clustering technique, because it integrates the clustering procedure with parameter estimation. Experimental data show that the HMM-based clustering procedure yields a 1.43% performance improvement over the conventional template-based clustering method and a 2.08% improvement over the single-HMM method for recognition of isolated Korean digits.
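
A sketch of the likelihood-based splitting-and-reassignment loop described above, using hmmlearn's GaussianHMM as a stand-in for the paper's word models; the threshold, feature shapes, and random sequences are illustrative assumptions.

```python
# HMM-based clustering: split off poorly scored sequences, reassign by
# maximum likelihood, re-estimate with Baum-Welch (via hmmlearn's fit).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_hmm(sequences, n_states=3):
    """Train one HMM on a set of feature sequences (Baum-Welch re-estimation)."""
    X = np.vstack(sequences)
    model = GaussianHMM(n_components=n_states, n_iter=20)
    model.fit(X, lengths=[len(s) for s in sequences])
    return model

def hmm_cluster(sequences, threshold, n_rounds=3):
    clusters = [list(sequences)]                      # start from one cluster
    for _ in range(n_rounds):
        models = [fit_hmm(c) for c in clusters]
        # Split off sequences whose own cluster's model scores them poorly.
        poor = [s for c, m in zip(clusters, models)
                for s in c if m.score(s) < threshold]
        if poor:
            models.append(fit_hmm(poor))              # new model for new cluster
        # Reassign each sequence to the cluster with the maximum likelihood.
        clusters = [[] for _ in models]
        for s in sequences:
            best = max(range(len(models)), key=lambda k: models[k].score(s))
            clusters[best].append(s)
        clusters = [c for c in clusters if c]         # drop emptied clusters
    return clusters

# Hypothetical usage: 20 random 12-dimensional feature sequences of 30 frames;
# the threshold value is purely illustrative.
rng = np.random.default_rng(0)
seqs = [rng.normal(size=(30, 12)) for _ in range(20)]
print([len(c) for c in hmm_cluster(seqs, threshold=-500.0)])
```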


Influences of Unilateral Mandibular Block Anesthesia on Motor Speech Abilities (편측 하악전달마취가 운동구어능력에 미치는 영향)

  • Yang, Seung-Jae; Seo, In-Hyo; Kim, Mee-Eun; Kim, Ki-Suk
    • Journal of Oral Medicine and Pain / v.31 no.1 / pp.59-67 / 2006
  • In clinical settings there are patients who complain of speech problems due to dysesthesia or anesthesia following dental surgical procedures accompanied by local anesthesia. However, it is not clear whether sensory problems in the orofacial region influence motor speech abilities. The purpose of this study was to investigate whether transitory sensory impairment of the mandibular nerve by local anesthesia influences motor speech abilities, and thus to evaluate the possibility of distorted motor speech abilities due to dysesthesia of the mandibular nerve. The subjects were 7 men and 3 women whose right inferior alveolar, lingual, and long buccal nerves were anesthetized with 1.8 mL of lidocaine containing 1:100,000 epinephrine. All subjects were instructed to self-estimate the degree of anesthesia in the affected region and their speech discomfort with a VAS before anesthesia and 30 seconds and 30, 60, 90, 120, and 150 minutes after anesthesia. To evaluate speech problems objectively, words and sentences for testing speech rate, diadochokinetic rate, intonation, tremor, and articulation were recorded at each time point and evaluated using a Computerized Speech Lab®; articulation was evaluated by a speech-language clinician. The results indicated that subjective speech discomfort and depth of anesthesia increased with time until 60 minutes after anesthesia and then decreased, and the degree of subjective speech discomfort was correlated with the self-estimated depth of anesthesia. On the other hand, there was no significant difference in the objective measures, including speech rate, diadochokinetic rate, intonation, and tremor, and no anesthesia-related change in articulation. Based on these results, sensory impairment of the unilateral mandibular nerve does not appear to deteriorate motor speech abilities, despite individuals' complaints of speech discomfort.