• Title/Summary/Keyword: spoken word


A VQ Codebook Design Based on Phonetic Distribution for Distributed Speech Recognition (분산 음성인식 시스템의 성능향상을 위한 음소 빈도 비율에 기반한 VQ 코드북 설계)

  • Oh Yoo-Rhee;Yoon Jae-Sam;Lee Gil-Ho;Kim Hong-Kook;Ryu Chang-Sun;Koo Myoung-Wa
    • Proceedings of the KSPS conference / 2006.05a / pp.37-40 / 2006
  • In this paper, we propose a VQ codebook design method for speech recognition feature parameters in order to improve the performance of a distributed speech recognition system. For context-dependent HMMs, the VQ codebook should be correlated with the phonetic distribution of the HMM training data. Thus, instead of using all the training data, we focus on a method for selecting training data based on phonetic distribution so that the VQ codebook can be designed efficiently. In speech recognition experiments using the Aurora 4 database, the distributed speech recognition system employing a VQ codebook designed by the proposed method reduced the word error rate (WER) by 10% compared with the system using a VQ codebook trained on the whole training data.

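The entry above describes selecting HMM training frames according to a phonetic distribution before training the VQ codebook. Below is a minimal sketch of that general idea, assuming k-means as the codebook trainer; the function names, the target-distribution format, and the random dummy features are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch: phonetic-distribution-based frame selection + k-means VQ codebook.
# All names and the selection heuristic are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

def select_frames_by_phone_distribution(frames, phone_labels, target_dist, n_select):
    """Sample feature frames so that the selected subset follows a target
    phone distribution instead of the raw corpus distribution."""
    selected = []
    rng = np.random.default_rng(0)
    for phone, proportion in target_dist.items():
        idx = np.flatnonzero(phone_labels == phone)
        n_take = min(len(idx), int(round(proportion * n_select)))
        if n_take > 0:
            selected.append(frames[rng.choice(idx, size=n_take, replace=False)])
    return np.vstack(selected)

def train_vq_codebook(frames, codebook_size=256):
    """Train a VQ codebook (the k-means centroids) on the selected frames."""
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(frames)
    return km.cluster_centers_

# Example usage with dummy MFCC-like features:
frames = np.random.randn(10000, 13)                  # (N, 13) feature vectors
phone_labels = np.random.choice(np.array(["a", "i", "u", "k", "s"]), size=10000)
target_dist = {"a": 0.2, "i": 0.2, "u": 0.2, "k": 0.2, "s": 0.2}
subset = select_frames_by_phone_distribution(frames, phone_labels, target_dist, 5000)
codebook = train_vq_codebook(subset, codebook_size=64)
print(codebook.shape)  # (64, 13)
```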

Acoustic Phonetic Study about Focus Realization of wh-word Questions in Korean (국어 의문사·부정사 의문문의 초점 실현에 대한 음향음성학적 연구)

  • Park Mi-young;Ahn Byoung-seob
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.289-292 / 2002
  • In Korean, interrogative (wh-word) questions and indefinite questions containing a wh-word have the same syntactic structure but are semantically ambiguous between the two readings. When the two question types are actually uttered, however, they show differences in several prosodic features, so the ambiguity is no longer maintained at the level of speech. This paper takes the view that this disambiguation arises because focus is realized differently in the two question types. Previous studies examined the intonation of the two question types mainly in terms of the scope of focus, differences in sentence-final intonation, and patterns of accentual phrasing, and held that the interrogative and indefinite readings of a wh-word can be distinguished primarily by the accentual phrase it forms with the following predicate. In this paper, we instead test the prosodic prominence of Korean wh-words functioning as focus in a wider range of environments, and analyze the accentual phrasing between the wh-word (interrogative or indefinite) and the following linguistic unit, the overall sentence intonation of the two question types, the pitch contour realized on the wh-word itself, and the pitch of the sentence-final boundary tone.

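The prosodic measures analyzed above (pitch contour on the wh-word, boundary tone) are typically quantified from an F0 track. The sketch below shows one common way to extract such a contour with librosa's pYIN tracker; the file name, sampling rate, and summary statistics are placeholder assumptions, not the study's actual procedure.

```python
# Minimal F0-contour extraction sketch (librosa pYIN); not the study's procedure.
import librosa
import numpy as np

# "utterance.wav" is a placeholder file name.
y, sr = librosa.load("utterance.wav", sr=16000)
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

# Simple summaries one might report for a wh-word region or a boundary tone:
voiced_f0 = f0[~np.isnan(f0)]
print("mean F0 (Hz):", float(np.mean(voiced_f0)))
print("F0 range (Hz):", float(np.max(voiced_f0) - np.min(voiced_f0)))
```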

Language-Independent Word Acquisition Method Using a State-Transition Model

  • Xu, Bin;Yamagishi, Naohide;Suzuki, Makoto;Goto, Masayuki
    • Industrial Engineering and Management Systems / v.15 no.3 / pp.224-230 / 2016
  • New words, colloquial spoken-language forms, and abbreviations are used extensively on the Internet, which makes it difficult to automatically acquire words for the purpose of analyzing Internet content. In a previous study, we proposed a method for Japanese word segmentation using character N-grams. That method is based on a simple state-transition model built under the assumption that the input document is described by four states (denoted A, B, C, and D) specified beforehand: state A represents words (nouns, verbs, etc.); state B represents statement separators (punctuation marks, conjunctions, etc.); state C represents postpositions (words that follow nouns); and state D represents prepositions (words that precede nouns). Under this state-transition model, using the states assigned to each pseudo-word, we search the document from beginning to end for admissible state patterns; the transitions detected during this search yield words. In the present paper, we perform experiments with the proposed word acquisition algorithm on Japanese and Chinese newspaper articles, obtained from Japan's Kyoto University and the Chinese People's Daily. The proposed method does not depend on the language structure: if text documents are expressed in Unicode, the same algorithm can acquire words in both Japanese and Chinese, which do not place spaces between words. Hence, we demonstrate that the proposed method is language independent.
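As a minimal sketch of the four-state idea described above: if each character has been labeled with one of the states A (word), B (separator), C (postposition), or D (preposition), candidate words fall out as maximal runs of state A. The labeling here is given by hand for a toy example; in the paper the states are inferred from character N-gram statistics, which is not shown.

```python
# Illustrative sketch of collecting candidate words from a state-labeled
# character sequence (states: A=word, B=separator, C=postposition, D=preposition).
# The state labels are assumed given; the paper derives them from character N-grams.

def acquire_words(chars, states):
    """Return maximal runs of characters labeled 'A' as candidate words."""
    words, current = [], []
    for ch, st in zip(chars, states):
        if st == "A":
            current.append(ch)
        else:
            if current:
                words.append("".join(current))
                current = []
    if current:
        words.append("".join(current))
    return words

# Toy example with a hand-assigned state sequence:
chars  = list("私は東京へ行く。")
states = ["A", "C", "A", "A", "C", "A", "A", "B"]
print(acquire_words(chars, states))  # ['私', '東京', '行く']
```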

The Effect of Acoustic Correlates of Domain-initial Strengthening in Lexical Segmentation of English by Native Korean Listeners

  • Kim, Sa-Hyang;Cho, Tae-Hong
    • Phonetics and Speech Sciences / v.2 no.3 / pp.115-124 / 2010
  • The current study investigated the role of acoustic correlates of domain-initial strengthening in lexical segmentation of a non-native language. In a series of cross-modal identity-priming experiments, native Korean listeners heard English auditory stimuli and made lexical decisions to visual targets (i.e., written words). The auditory stimuli contained critical two-word sequences that created temporary lexical ambiguity (e.g., 'mill#company', with the competitor 'milk'). There was either an IP boundary or a word boundary between the two words in the critical sequences. The initial CV of the second word (e.g., [kʌ] in 'company') was spliced from another token of the sequence in IP-initial or Wd-initial position. The prime words were postboundary words (e.g., company) in Experiment 1 and preboundary words (e.g., mill) in Experiment 2. In both experiments, Korean listeners showed priming effects only in IP contexts, indicating that they can make use of IP boundary cues of English in lexical segmentation of English. The acoustic correlates of domain-initial strengthening were also exploited by Korean listeners, but significant effects were found only for the segmentation of postboundary words. The results therefore indicate that L2 listeners can make use of prosodically driven phonetic detail in lexical segmentation of L2, as long as the direction of those cues is similar in their L1 and L2. The exact use of the cues by Korean listeners was, however, different from that found with native English listeners in Cho, McQueen, and Cox (2007). The differential use of the prosodically driven phonetic cues by native and non-native listeners is thus discussed.


An Artificial Intelligence Approach for Word Semantic Similarity Measure of Hindi Language

  • Younas, Farah;Nadir, Jumana;Usman, Muhammad;Khan, Muhammad Attique;Khan, Sajid Ali;Kadry, Seifedine;Nam, Yunyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2049-2068 / 2021
  • AI combined with NLP techniques has promoted the use of virtual assistants and has made people rely on them for many diverse purposes. Conversational agents are among the most promising techniques for assisting computer users in their tasks. An important challenge in developing conversational agents globally is transferring the groundbreaking expertise obtained in English to other languages, and AI is making this transfer of learning possible. There is a pressing need to develop systems that understand vernacular languages. One such difficult language is Hindi, the fourth most spoken language in the world. Semantic similarity is an important part of natural language processing, with applications such as ontology learning and information extraction, and it is needed for developing conversational agents. Most existing research is concentrated on English and other European languages. This paper presents a corpus-based word semantic similarity measure for Hindi. An experiment involving the translation of an English benchmark dataset into Hindi is performed, investigating the incorporation of the corpus together with human and machine similarity ratings. A significant correlation between the algorithm ratings and human intuition is calculated to analyze the accuracy of the proposed similarity measures. The method can be adapted in various word semantic similarity applications, or as a module, for any other language.
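As a hedged illustration of a corpus-based similarity measure of the kind described above, the sketch below builds word co-occurrence vectors from a toy Hindi corpus and compares words by cosine similarity; the window size, the toy sentences, and the function names are assumptions, not the paper's actual method.

```python
# Illustrative corpus-based similarity: co-occurrence vectors + cosine similarity.
# The toy corpus and parameters are placeholders, not the paper's setup.
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    """Count, for each word, the words appearing within +/- `window` positions."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        tokens = sent.split()
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][tokens[j]] += 1
    return vecs

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    shared = set(c1) & set(c2)
    dot = sum(c1[w] * c2[w] for w in shared)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

corpus = [
    "राजा ने महल में भोजन किया",   # "the king ate in the palace"
    "रानी ने महल में भोजन किया",   # "the queen ate in the palace"
    "किसान ने खेत में काम किया",   # "the farmer worked in the field"
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["राजा"], vecs["रानी"]))   # high: similar contexts
print(cosine(vecs["राजा"], vecs["किसान"]))  # lower: different contexts
```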

Language-based Classification of Words using Deep Learning (딥러닝을 이용한 언어별 단어 분류 기법)

  • Zacharia, Nyambegera Duke;Dahouda, Mwamba Kasongo;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.411-414 / 2021
  • Deep learning has become an extremely critical element of technology within the field of education today. It has been especially influential in natural language processing, where word-representation vectors play a critical role. However, some low-resource languages, such as Swahili, which is spoken in East and Central Africa, have not benefited from these advances. Natural language processing is a field of artificial intelligence in which systems and computational algorithms are built that can automatically understand, analyze, manipulate, and potentially generate human language. After finding that some African languages lack proper representation in language processing, and are even described as low-resource languages because of inadequate data for NLP, we decided to study the Swahili language. As it stands, language modeling using neural networks requires adequate data to guarantee quality word representations, which is important for natural language processing (NLP) tasks, and most African languages have no data for such processing. The main aim of this project is the classification of words in English, Swahili, and Korean, with particular emphasis on the low-resource Swahili language. Finally, we create our own dataset, preprocess the data using a Python script, formulate the syllabic alphabet, and develop an English, Swahili, and Korean word analogy dataset.
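As a rough sketch of word-level language classification for English, Swahili, and Korean as described above, the snippet below trains a character n-gram Naive Bayes classifier; this simple model stands in for the paper's deep learning approach, and the toy word lists are invented examples.

```python
# Toy word-level language classifier (English / Swahili / Korean).
# A character n-gram + multinomial Naive Bayes model stands in here for the
# deep learning model described in the paper; the data are made-up examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

words = ["language", "school", "water",    # English
         "lugha",    "shule",  "maji",     # Swahili
         "언어",      "학교",    "물"]        # Korean
labels = ["en", "en", "en", "sw", "sw", "sw", "ko", "ko", "ko"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    MultinomialNB(),
)
clf.fit(words, labels)

# Predict the language of each word (here on training items for the toy demo;
# a real classifier would need a much larger training set per language).
print(clf.predict(["maji", "school", "언어"]))
```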

Spoken Dialogue Management System based on Word Spotting (단어추출을 기반으로 한 음성 대화처리 시스템)

  • Song, Chang-Hwan;Yu, Ha-Jin;Oh, Yung-Hwan
    • Annual Conference on Human and Language Technology / 1994.11a / pp.313-317 / 1994
  • In this study, we implemented a spoken dialogue system between humans and computers. In particular, we studied a semantic analysis method suited to the case where word spotting is used for speech recognition, and a method for generating system responses based on rules in table form. When speech is recognized by word spotting, it is difficult to analyze the user's utterance intention through morphological and syntactic analysis, so a new semantic analysis method is required. In this study, we propose a new semantic analysis method that identifies the user's utterance intention using fuzzy relations. In addition, as a way to generate system responses appropriate to the user's intention and to manage the response content efficiently, we built response rules conditioned on the current state and the user's intention. These rules are implemented as a table, which makes updating and extending them convenient. The dialogue domain was restricted to train-ticket reservation, covering reservation, cancellation, inquiries, and tourist-site information. To cope with errors caused by speech misrecognition, the system's responses include confirmation and correction steps. In experiments with both text input and speech input, users were able to accomplish their intended goals with the system's help.

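As a rough sketch of the table-driven response idea in the abstract above, the snippet below maps spotted keywords to an intent and looks up a response in a (state, intent) rule table; simple keyword overlap stands in for the paper's fuzzy-relation semantic analysis, and all keywords, states, and responses are invented placeholders.

```python
# Illustrative word-spotting dialogue manager: spotted keywords -> intent,
# then a (state, intent) rule table -> system response. All entries are
# invented placeholders, not the original system's rules.

KEYWORDS = {
    "reserve": {"예매", "예약", "reserve"},
    "cancel":  {"취소", "cancel"},
    "inquire": {"문의", "시간", "inquire"},
}

RESPONSE_RULES = {
    ("start", "reserve"): ("confirm_reserve", "Which train would you like to reserve?"),
    ("start", "cancel"):  ("confirm_cancel",  "Which reservation should be cancelled?"),
    ("start", "inquire"): ("start",           "What would you like to know?"),
}

def spot_intent(spotted_words):
    """Pick the intent whose keyword set overlaps the spotted words the most."""
    scores = {intent: len(kw & set(spotted_words)) for intent, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def respond(state, spotted_words):
    """Return (next_state, reply); unknown input triggers a confirmation/repair step."""
    intent = spot_intent(spotted_words)
    if intent is None or (state, intent) not in RESPONSE_RULES:
        return state, "Sorry, could you say that again?"
    return RESPONSE_RULES[(state, intent)]

state = "start"
state, reply = respond(state, ["서울", "부산", "예매"])
print(state, "|", reply)  # confirm_reserve | Which train would you like to reserve?
```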

Recent update on reading disability (dyslexia) focused on neurobiology

  • Kim, Sung Koo
    • Clinical and Experimental Pediatrics / v.64 no.10 / pp.497-503 / 2021
  • Reading disability (dyslexia) refers to an unexpected difficulty with reading for an individual who has the intelligence to be a much better reader. Dyslexia is most commonly caused by a difficulty in phonological processing (the appreciation of the individual sounds of spoken language), which affects the ability of an individual to speak, read, and spell. In this paper, I describe reading disabilities by focusing on their underlying neurobiological mechanisms. Neurobiological studies using functional brain imaging have uncovered the reading pathways, brain regions involved in reading, and neurobiological abnormalities of dyslexia. The reading pathway is in the order of visual analysis, letter recognition, word recognition, meaning (semantics), phonological processing, and speech production. According to functional neuroimaging studies, the important areas of the brain related to reading include the inferior frontal cortex (Broca's area), the midtemporal lobe region, the inferior parieto-temporal area, and the left occipitotemporal region (visual word form area). Interventions for dyslexia can affect reading ability by causing changes in brain function and structure. An accurate diagnosis and timely specialized intervention are important in children with dyslexia. In cases in which national infant development screening tests have been conducted, as in Korea, if language developmental delay and early predictors of dyslexia are detected, careful observation of the progression to dyslexia and early intervention should be made.

A Research on the Spoken Language in Korean Voices from Berlin: Focusing on Phonological and Morphological Features (20세기 초 베를린 한인 음원의 음운과 형태)

  • Cha, Jaeeun;Hong, Jongseon
    • Korean Linguistics / v.72 / pp.257-282 / 2016
  • The aim of this paper is to investigate phonological and morphological features in Korean Voices from Berlin. Korean Voices from Berlin was recorded in Berlin in 1917 from five Korean prisoners of World War I; some of them came from North Hamgyeong Province and the others from Pyeongan Province, so the data reflect North Korean regional dialects. The data comprise three kinds of material: counting numbers, reciting scriptures, and singing folksongs. The results of this research are as follows. 1) The consonant system of the Korean voices is similar to standard Korean: the 19 consonants are classified according to 5 manners of articulation and 5 places of articulation. 2) The liquid /l/ has three allophones: [ɾ] appears in onset position, [l] in a word-medial coda position or when preceded by [l], and [ɹ] in a word-final coda position. 3) The vowel system of the Korean voices is similar to that of early 20th-century Korean; it has 8 monophthongs, /a, ʌ, o, u, ɯ, i, e, ɛ/. 4) The numerals 1 to 10 in the Korean voices are similar to Middle Korean numerals. 5) The genitive particle /ɯi/ '의' is pronounced [i], [ɯ], or [ɛ]; in particular, [ɯ] appears in Sino-Korean words. 6) The /l/-deletion in conjugation is similar to Middle Korean: /l/ is always deleted when a [+coronal] consonant follows.
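The /l/ allophone distribution reported in point 2 above can be restated as a small rule function; the sketch below only rephrases those three observations, and the function and argument names are illustrative.

```python
# Restatement of the reported /l/ allophony as a rule function.
# position: "onset", "coda_medial", or "coda_final"; preceded_by_l marks a preceding [l].
def l_allophone(position, preceded_by_l=False):
    if preceded_by_l:
        return "l"          # [l] when preceded by another [l]
    if position == "onset":
        return "ɾ"          # tap in onset position
    if position == "coda_medial":
        return "l"          # lateral in word-medial coda position
    if position == "coda_final":
        return "ɹ"          # approximant in word-final coda position
    raise ValueError(f"unknown position: {position}")

print(l_allophone("onset"))        # ɾ
print(l_allophone("coda_medial"))  # l
print(l_allophone("coda_final"))   # ɹ
```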

Korean Word Recognition Using Diphone-Level Hidden Markov Model (Diphone 단위의 hidden Markov model을 이용한 한국어 단어 인식)

  • Park, Hyun-Sang;Un, Chong-Kwan;Park, Yong-Kyu;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.13 no.1 / pp.14-23 / 1994
  • In this paper, speech units appropriate for the recognition of the Korean language have been studied. For better speech recognition, co-articulatory effects within an utterance should be considered in the selection of a recognition unit. One way to model such effects is to use larger units of speech. It has been found that the diphone is a good recognition unit because it can model transitional regions explicitly. When diphones are used, stationary phoneme models may be inserted between diphones. Computer simulation for isolated word recognition was done with a 7-word database spoken by seven male speakers. Best performance was obtained when transition regions between phonemes were modeled by two-state HMMs and stationary phoneme regions by one-state HMMs, excluding /b/, /d/, and /g/. By merging rarely occurring diphone units, the recognition rate was increased from 93.98% to 96.29%. In addition, a local interpolation technique was used to smooth a poorly modeled HMM with a well-trained HMM. With this technique we obtained a recognition rate of 97.22% after merging some diphone units.

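As a hedged illustration of the unit-inventory idea above, the sketch below derives diphone units from phoneme transcriptions and merges diphones occurring fewer than a threshold number of times into a shared back-off unit; the merge rule and threshold are assumptions, and the HMM modeling itself (two-state transition models, one-state stationary models, local interpolation) is not shown.

```python
# Illustrative diphone-inventory construction with rare-unit merging.
# The threshold and the back-off scheme are assumptions for illustration only.
from collections import Counter

def diphones(phonemes):
    """All adjacent phoneme pairs of one utterance, e.g. ['h','a','n'] -> ['h+a', 'a+n']."""
    return [f"{a}+{b}" for a, b in zip(phonemes, phonemes[1:])]

def build_inventory(transcriptions, min_count=3):
    """Map each observed diphone to itself, or to a back-off unit if it is rare."""
    counts = Counter(d for ph in transcriptions for d in diphones(ph))
    inventory = {}
    for d, c in counts.items():
        # Rare diphones are merged into a context-independent back-off unit
        # keyed by the second phoneme (one plausible merge rule among many).
        inventory[d] = d if c >= min_count else "*+" + d.split("+")[1]
    return inventory

transcriptions = [
    ["h", "a", "n"], ["h", "a", "k"], ["h", "a", "n"],
    ["s", "a", "n"], ["m", "a", "n"],
]
inv = build_inventory(transcriptions, min_count=2)
print(inv)  # frequent units kept (e.g. 'h+a', 'a+n'), rare ones mapped to '*+...'
```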