• Title/Summary/Keyword: spoken word

The Etymology of Scientific Names for Korean Mammals

  • Jo, Yeong-Seok;Koprowski, John L.;Baccus, John T.;Yoo, Jung-Sun
    • Animal Systematics, Evolution and Diversity
    • /
    • v.37 no.3
    • /
    • pp.255-272
    • /
    • 2021
  • Etymologies explain what our words meant and how they sounded 600 to 2,000 years ago. When Linnaeus began naming animals with binomial nomenclature in the mid-1700s, he based names on Latin grammatical forms. Because Latin is no longer a spoken language, the meanings of names do not evolve or change, so a name with Latin or Greek roots stays the same for an animal throughout the world. By using Latin or Latinized words for the genus and species, Linnaeus chose descriptive names that would always remain the same. Notwithstanding the importance of etymologies for scientific names, no study has addressed the etymology of scientific names for Korean mammals. Here, we list etymologies for the scientific names of 127 mammal species, 84 genera, 32 families, and 8 orders from Korea. The etymologies mostly derive from morphology, pelage color, behavior, distribution, locality, country name, or a person's name. This paper will be useful to new students and trained scholars studying Korean mammals.

Formulaic Language Development in Asian Learners of English: A Comparative Study of Phrase-frames in Written and Oral Production

  • Yoon Namkung;Ute Romer
    • Asia Pacific Journal of Corpus Research
    • /
    • v.4 no.2
    • /
    • pp.1-39
    • /
    • 2023
  • Recent research in usage-based Second Language Acquisition has provided new insights into second language (L2) learners' development of formulaic language (Wulff, 2019). The current study examines the use of phrase-frames, which are recurring sequences of words with one or more variable slots (e.g., it is * that), in written and oral production data from Asian learners of English across four proficiency levels (beginner, low-intermediate, high-intermediate, advanced) and from native English speakers. The variability, predictability, and discourse functions of the most frequent 4-word phrase-frames from the written essay and spoken dialogue sub-corpora of the International Corpus Network of Asian Learners of English (ICNALE) were analyzed and then compared across groups and modes. The results revealed that while learners' phrase-frames in writing became more variable and unpredictable as proficiency increased, no clear developmental patterns were found in speaking, although all groups used more fixed and predictable phrase-frames than the reference group. Further, no developmental trajectories in the functions of the most frequent phrase-frames were found in either mode. Additionally, lower-level learners and the reference group used more variable phrase-frames in speaking, whereas advanced-level learners showed more variability in writing. This study contributes to a better understanding of the development of L2 phraseological competence.
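
For illustration, here is a minimal Python sketch of phrase-frame extraction in the sense the abstract describes: counting n-grams in which one internal word is replaced by a variable slot. This is not the ICNALE tooling, and it handles only single-slot frames; the set of fillers seen in the slot gives a rough measure of a frame's variability.

```python
from collections import Counter, defaultdict

def phrase_frames(tokens, n=4):
    """Count n-word phrase-frames: n-grams with one internal word
    replaced by a '*' slot, plus the set of fillers seen in that slot."""
    frames, fillers = Counter(), defaultdict(set)
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        for slot in range(1, n - 1):  # keep the frame edges fixed
            frame = ' '.join(gram[:slot] + ['*'] + gram[slot + 1:])
            frames[frame] += 1
            fillers[frame].add(gram[slot])
    return frames, fillers

tokens = "it is clear that it is likely that it is true that".split()
frames, fillers = phrase_frames(tokens)
frame, count = frames.most_common(1)[0]
print(frame, count, sorted(fillers[frame]))
# it is * that 3 ['clear', 'likely', 'true']
```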

Speaker Identification Using Dynamic Time Warping Algorithm (동적 시간 신축 알고리즘을 이용한 화자 식별)

  • Jeong, Seung-Do
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.5
    • /
    • pp.2402-2409
    • /
    • 2011
  • In addition to transmitting information, the voice carries acoustic properties that distinguish speakers from one another. Speaker recognition is the task of determining who spoke an utterance from these acoustic differences. It is roughly divided into two categories: speaker verification and speaker identification. Speaker verification confirms a claimed identity based on the speaker's voice alone, whereas speaker identification finds the speaker by searching a database of previously stored models, built from multiple subordinate sentences, for the most similar one. This paper composes feature vectors from extracted MFCC coefficients and uses the dynamic time warping algorithm to compare the similarity between features. To capture characteristics common to the phonological features of spoken words, two subordinate sentences per speaker are used as training data. It is thus possible to identify a speaker even from a word not previously stored in the database.
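
As a rough illustration of the matching step, the sketch below implements dynamic time warping over frame-wise feature vectors (e.g., MFCCs) and nearest-template identification. The feature values, speaker names, and template shapes are made up; the paper's actual feature extraction and database are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    a (Ta x D) and b (Tb x D), e.g. frame-wise MFCC vectors."""
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    acc = np.full((len(a) + 1, len(b) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[len(a), len(b)]

def identify(query, templates):
    """Return the enrolled speaker whose template is nearest to the query."""
    return min(templates, key=lambda spk: dtw_distance(query, templates[spk]))

rng = np.random.default_rng(0)  # stand-in features: 13-dim "MFCC" frames
templates = {'spk1': rng.normal(0, 1, (40, 13)),
             'spk2': rng.normal(3, 1, (45, 13))}
print(identify(rng.normal(3, 1, (38, 13)), templates))  # spk2
```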

Sentence design for speech recognition database

  • Zu Yiqing
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.472-472
    • /
    • 1996
  • The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact with low redundancy [1, 2]. The phonetic phenomena of continuous speech are the key problem in speech recognition. This paper describes the processing of a set of sentences collected from the 1993 and 1994 databases of the "People's Daily" (a Chinese newspaper), which consist of news, politics, economics, arts, sports, etc. Those sentences cover both phonetic phenomena and sentence patterns. In continuous speech, phonemes always appear in the form of allophones, which gives rise to co-articulatory effects. The task of designing a speech database should therefore be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones, and 2161 merged final-initial structures in read speech. Statistics on the "People's Daily" database give an evaluation of all the possible phonetic structures. In this sentence set, we first consider the phonetic balances among syllables, inter-syllabic diphones, inter-syllabic triphones, and semi-syllables with their junctures. The syllabic balances ensure the intra-syllabic phenomena such as phonemes, initial/final and consonant/vowel; the rest describe the inter-syllabic juncture. The 1560 sentences cover 96% of syllables without tones (the absent syllables are only used in spoken language), 100% of inter-syllabic diphones, and 67% of inter-syllabic triphones (87% of those that appear in the "People's Daily"). Roughly 17 kinds of sentence patterns appear in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have obtained significantly higher recognition rates [3, 4]. The process of collecting sentences is as follows: [People's Daily database] -> [segmentation of sentences] -> [segmentation of word groups] -> [transcription of the text into Pinyin] -> [statistics of phonetic phenomena & selection of useful paragraphs] -> [manual modification of the selected sentences] -> [phonetically compact sentence set]
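
Selecting a compact sentence set with high coverage of such units is essentially a set-cover problem. Below is a minimal greedy sketch under assumed interfaces; the `units_of` function and the toy syllable data are hypothetical, not the authors' actual selection procedure.

```python
def greedy_select(sentences, units_of, target_units, max_sentences=1560):
    """Greedily pick sentences that add the most uncovered phonetic units.
    units_of(s) -> set of units (syllables, diphones, triphones) in s."""
    covered, chosen = set(), []
    remaining = list(sentences)
    while remaining and len(chosen) < max_sentences and covered < target_units:
        best = max(remaining, key=lambda s: len(units_of(s) - covered))
        gain = units_of(best) - covered
        if not gain:  # no sentence adds anything new; stop early
            break
        covered |= gain
        chosen.append(best)
        remaining.remove(best)
    return chosen, covered

units = lambda s: set(s.split())  # toy "syllables"
chosen, covered = greedy_select(["ba da", "da ga", "ba ga ka"], units,
                                {"ba", "da", "ga", "ka"})
print(chosen)  # ['ba ga ka', 'ba da'] -- full coverage with two sentences
```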

The Acquisition and Description of Voiceless Stops of Spanish and English

  • Marie Fellbaum
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.274-274
    • /
    • 1996
  • This paper presents preliminary results from work in progress: a paired study of the acquisition of voiceless stops by Spanish speakers learning English and American English speakers learning Spanish. The hypothesis, following Eckman's Markedness Differential Hypothesis, was that the American speakers would have no difficulty suppressing the aspiration in Spanish unaspirated stops, while the Spanish speakers would have difficulty acquiring the aspiration necessary for English voiceless stops. The null hypothesis was proved. All subjects were given the same set of disyllabic real words of English and Spanish in carrier phrases. The tokens analyzed in this report are limited to word-initial voiceless stops followed by a low back vowel in stressed syllables. Tokens were randomized and then arranged in a list with the words appearing three separate times. Aspiration was measured from the burst to the onset of voicing (VOT). Both the first language (L1) tokens and second language (L2) tokens were compared for each speaker and between the two groups of language speakers. Results indicate that the Spanish speakers, as a group, were able to reach the accepted target-language VOT of English, but the English speakers were not able to reach the accepted range for Spanish, in spite of statistically significant changes (p < .001) by speakers in both groups of learners. A closer analysis of the speech samples revealed wide variability within the speech of native speakers of English. Not only is variability in English due to the wide range of VOT (120 ms for English labials, for example), but individual speakers showed different patterns. These results are revealing for the demands placed on experimental designs and for the number of speakers and tokens required for an adequate description of different languages. In addition, a simple report of means will not distinguish the speakers and the respective language learning situation; measurements must also include the RANGE of acceptable VOT for phonetic segments. This has immediate consequences for the learning and teaching of foreign languages involving aspirated stops. In addition, the labelling of spoken language in speech technology is shown to be inadequate without a fuller mathematical description.
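
The point about reporting ranges as well as means can be made concrete with a few lines of NumPy; the VOT values below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical word-initial VOT measurements in ms, three repetitions each
vot = {
    'EN_speaker_1': [62, 95, 140],  # similar mean region, very wide range
    'EN_speaker_2': [70, 75, 80],   # narrow range
    'ES_speaker_1': [12, 18, 25],
}
for spk, ms in vot.items():
    a = np.asarray(ms, dtype=float)
    print(f"{spk}: mean {a.mean():5.1f} ms, "
          f"range {a.max() - a.min():4.0f} ms ({a.min():.0f}-{a.max():.0f})")
```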

SOME PROSODIC FEATURES OBSERVED IN THE PASSAGE READING BY JAPANESE LEARNERS OF ENGLISH

  • Kanzaki, Kazuo
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.37-42
    • /
    • 1996
  • This study examines some prosodic features of English spoken by Japanese learners of English. It focuses on speech rates, pauses, and intonation when the learners read an English passage. Three Japanese learners of English, all male university students, were asked to read the speech material, an English passage 110 words long, at their normal reading speed. Then a native speaker of English, a male American English teacher, was asked to read the same passage. The Japanese speakers were also asked to read a Japanese passage of 286 letters (Japanese kana) to compare the reading of English with that of Japanese. Their speech was analyzed on a computerized system (KAY Computerized Speech Lab). Waveforms, spectrograms, and F0 contours were shown on the screen to measure the durations of pauses, phrases, and sentences and to observe intonation contours. One finding of the experiment was that the movement of the three speakers' speech rates showed a similar tendency in their reading of the English passage. Reading of the Japanese passage by the three learners also showed a similar tendency in the movement of speech rates. Another finding was that the frequency of pauses in the learners' speech was greater than that in the speech of the native speaker, but that the ratio of total pause length to whole utterance length was about the same in both the learners' and the native speaker's speech. A similar tendency was observed in the learners' reading of the Japanese passage, except that they used shorter pauses in mid-sentence position. As to intonation contours, we found that the learners used a narrower pitch range than the native speaker in their reading of the English passage, while they used a wider pitch range when reading the Japanese passage. The learners tended to use falling intonation before pauses, whereas the native speaker used different intonation patterns. These findings are applicable to the teaching of English pronunciation at the passage level in the sense that they can show the learners, Japanese learners here, what their problems are and how those problems could be solved.
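
Given a time-aligned annotation of the readings, the pause and rate measures used here reduce to simple interval arithmetic. A small sketch with made-up interval data (the annotation format and numbers are assumptions, not the study's):

```python
# Hypothetical annotation: (start_s, end_s, label); 'sp' marks a pause
intervals = [(0.0, 1.2, 'speech'), (1.2, 1.5, 'sp'), (1.5, 3.1, 'speech'),
             (3.1, 3.8, 'sp'), (3.8, 5.0, 'speech')]
n_words = 18  # words read in this stretch of the passage

total = intervals[-1][1] - intervals[0][0]
pauses = [end - start for start, end, label in intervals if label == 'sp']
print(f"pauses: {len(pauses)}, pause/utterance ratio: {sum(pauses) / total:.2f}")
print(f"speech rate: {n_words / total:.1f} words/s")
```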

Comparison of ICA Methods for the Recognition of Corrupted Korean Speech (잡음 섞인 한국어 인식을 위한 ICA 비교 연구)

  • Kim, Seon-Il
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.45 no.3
    • /
    • pp.20-26
    • /
    • 2008
  • Two independent component analysis (ICA) algorithms were applied to the recognition of speech signals corrupted by car engine noise. Speech recognition was performed with hidden Markov models (HMM) on the estimated signals, and recognition rates were compared with those of the original, uncorrupted speech signals. Two different ICA methods were applied to estimate the speech signals: one is the FastICA algorithm, which maximizes negentropy; the other is the information-maximization (infomax) approach, which maximizes the mutual information between inputs and outputs to give maximum independence among the outputs. The word recognition rate for Korean news sentences spoken by a male anchor is 87.85%; averaged over the various signal-to-noise ratios (SNR), there is a 1.65% drop in performance for the speech signals estimated by FastICA and a 2.02% drop for information-maximization. There is little difference between the methods.
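
As a rough sketch of the source-separation step, the snippet below runs scikit-learn's FastICA on two synthetic mixtures (a tone standing in for speech, a square wave for engine noise). scikit-learn does not ship the infomax variant, and none of the paper's data or its HMM recognition stage is reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
speech = np.sin(2 * np.pi * 5 * t)             # stand-in for the speech source
engine = np.sign(np.sin(2 * np.pi * 50 * t))   # stand-in for engine noise
S = np.c_[speech, engine] + 0.01 * rng.standard_normal((t.size, 2))

A = np.array([[1.0, 0.6],      # mixing matrix: two "microphone" channels
              [0.4, 1.0]])
X = S @ A.T                    # observed mixtures, shape (8000, 2)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated sources, up to scale and order
print(S_est.shape)             # (8000, 2)
```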

A Real-Time Embedded Speech Recognition System (실시간 임베디드 음성 인식 시스템)

  • 남상엽;전은희;박인정
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.1
    • /
    • pp.74-81
    • /
    • 2003
  • In this study, we implemented a real-time embedded speech recognition system that requires minimal memory for the speech recognition engine and DB. The words to be recognized consist of 40 commands used in a PCS phone and the 10 digits. Speech data spoken by 15 male and 15 female speakers were recorded and analyzed by a short-time analysis method with a window size of 256. The LPC parameters of each frame were computed with the Levinson-Durbin algorithm and transformed into cepstrum parameters. Before analysis, the speech data were pre-emphasized to remove the DC component and emphasize the high-frequency band. The Baum-Welch re-estimation algorithm was used to train the HMMs. In the test phase, the recognition rate was obtained by the likelihood method. We implemented the embedded system by porting the speech recognition engine to an ARM core evaluation board. The overall recognition rate of this system was 95%: 96% on the 40 commands and 94% on the 10 digits.
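
The front end described here (Levinson-Durbin LPC followed by an LPC-to-cepstrum conversion) follows a standard textbook recursion. A minimal NumPy sketch, using a made-up toy frame rather than the paper's recordings:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LPC normal equations from autocorrelation r[0..order]."""
    a = np.zeros(order + 1)           # predictor coefficients live in a[1:]
    e = r[0]                          # prediction error energy
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / e  # reflection coeff.
        a[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a[i] = k
        e *= 1.0 - k * k
    return a[1:], e

def lpc_to_cepstrum(a, n_ceps):
    """Standard recursion from LPC coefficients to the LPC cepstrum."""
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        c[n] = a[n - 1] if n <= len(a) else 0.0
        for k in range(1, n):
            if n - k <= len(a):
                c[n] += (k / n) * c[k] * a[n - k - 1]
    return c[1:]

# Toy 256-sample frame: pre-emphasis, Hamming window, autocorrelation
rng = np.random.default_rng(0)
x = rng.standard_normal(257)
x = x[1:] - 0.97 * x[:-1]             # pre-emphasis: removes DC, lifts highs
frame = x * np.hamming(256)
r = np.correlate(frame, frame, mode='full')[255:255 + 11]  # lags 0..10
a, err = levinson_durbin(r, 10)
print(lpc_to_cepstrum(a, 12).round(3))
```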

A Preliminary Report on Perceptual Resolutions of Korean Consonant Cluster Simplification and Their Possible Change over Time

  • Cho, Tae-Hong
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.83-92
    • /
    • 2010
  • The present study examined how listeners of Seoul Korean recover deleted phonemes in consonant cluster simplification. In a phoneme monitoring experiment, listeners had to monitor for C2 (/k/ or /p/) in C1C2C3 when C2 was deleted (C1 was preserved) or preserved (C1 was deleted). The target consonant (C2) was either /k/ or /p/ (e.g., ilk-təlato vs. palp-təlato), and there were two listener groups, one tested in 2002 and the other in 2009. Several points emerged from the results. First, listeners were able to detect deleted phonemes as accurately and rapidly as preserved phonemes, showing that the physical presence of the acoustic information did not improve the listeners' performance. This suggests that listeners must have relied on language-specific phonological knowledge about consonant cluster simplification rather than on low-level acoustic-phonetic information. Second, the listener groups (participants in 2002 vs. 2009) differed in processing /p/ versus /k/: listeners in 2009 failed to detect /p/ more frequently than those in 2002, suggesting that the way the consonant cluster sequence is produced and perceived has changed over time. This result was interpreted as stemming from statistical patterns of speech production in contemporary Seoul Korean as reported in a recent study by Cho & Kim (2009): /p/ is deleted far more often than it is preserved, which is likely reflected in the way listeners process simplified variants. Finally, listeners processed /k/ more efficiently than /p/, especially when the target was physically present (the C-preserved condition), indicating that listeners benefited more from the presence of /k/ than of /p/. This was interpreted as supporting the view that velars are perceptually more robust than labials, which constrains the shaping of the language's phonological patterns. These results were then discussed in terms of their implications for theories of spoken word recognition.

Automatic Generation of Pronunciation Variants for Korean Continuous Speech Recognition (한국어 연속음성 인식을 위한 발음열 자동 생성)

  • 이경님;전재훈;정민화
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.35-43
    • /
    • 2001
  • Many speech recognition systems use a pronunciation lexicon with multiple possible phonetic transcriptions for each word. The pronunciation lexicon is often created manually. This process requires a lot of time and effort and, furthermore, makes it very difficult to maintain the consistency of the lexicon. To handle these problems, we present a model based on morphophonological analysis for automatically generating Korean pronunciation variants. By analyzing phonological variations frequently found in spoken Korean, we have derived about 700 phonemic contexts that trigger the multilevel application of the corresponding phonological processes, which consist of phonemic and allophonic rules. In generating pronunciation variants, morphological analysis is performed first to handle variations of phonological words. According to the morphological category, a set of tables reflecting the phonemic context is looked up to generate the pronunciation variants. Our experiments show that the proposed model produces mostly correct pronunciation variants of phonological words. We then assessed the usefulness of the pronunciation lexicon and of training phonetic transcriptions produced by the proposed system.
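
Mechanically, such a generator boils down to context-sensitive rewrite rules applied to a phoneme string. The sketch below uses a few romanized toy rules (nasal assimilation and the like) purely for illustration; the paper's roughly 700 contexts and its morphological table-lookup machinery are not reproduced.

```python
import re

# Illustrative toy rules for spoken Korean, in romanization:
# (pattern, replacement) pairs, each applied optionally
RULES = [
    (r'kn', 'ngn'),      # nasal assimilation, e.g. hak+nyeon -> hangnyeon
    (r'pn', 'mn'),       # e.g. ip+nida -> imnida
    (r'th(?=i)', 'ch'),  # palatalization before /i/
]

def variants(pron):
    """Generate pronunciation variants by optionally applying each rule."""
    results = {pron}
    for pattern, replacement in RULES:
        for p in list(results):
            results.add(re.sub(pattern, replacement, p))
    return sorted(results)

print(variants('hakni'))   # ['hakni', 'hangni']
print(variants('ipnida'))  # ['imnida', 'ipnida']
```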
