• Title/Abstract/Keywords: spoken word

Search results: 111 items (processing time: 0.025 s)

청각장애아동의 음운인식능력에 대한 연구 (Phonological Awareness in Hearing Impaired Children)

  • 박상희;석동일;정옥란
    • Speech Sciences
    • /
    • Vol. 9 No. 2
    • /
    • pp.193-202
    • /
    • 2002
  • The purpose of this study is to examine the phonological awareness of hearing impaired children. A number of studies indicate that hearing impaired children have articulation disorders due to their impaired auditory feedback. However, even children who can distinguish certain phonemes sometimes misarticulate them. Phonological awareness refers to recognizing the speech-sound units and their forms in spoken language (Hong, 2001). The subjects who participated in the experiment were four hearing impaired children (3 cochlear-implanted children and 1 hearing-aided child). Phonological awareness was evaluated with the test battery developed by Paik et al. (2001). The subtests consisted of rhyme matching, onset matching I and II, and word-initial segmentation and matching I and II. If a child asked for an item to be repeated, it was repeated up to 4 times. Each item was worth 1 point. The results were compared to those of Paik et al. (2001). Subject 1 showed superior rhyme matching ability, subjects 2 and 3 fair ability, and subject 4 inferior ability. In onset matching I, all subjects except subject 3 showed inferior ability; interestingly, subject 1 showed the lowest onset matching I score. In word-initial segmentation and matching I, subjects 1 and 4 showed inferior ability while subjects 2 and 3 showed fair ability. In onset matching II, subject 2 achieved a perfect score of 10, even though she had scored very low earlier. In word-initial segmentation and matching II, only subjects 2 and 3 showed appropriate levels of the skill. Overall, the results show that the phonological awareness of hearing impaired children differs from that of normal children.

  • PDF

On-Line Blind Channel Normalization for Noise-Robust Speech Recognition

  • Jung, Ho-Young
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 1 No. 3
    • /
    • pp.143-151
    • /
    • 2012
  • A new data-driven method is proposed for designing a blind modulation frequency filter that suppresses slow-varying noise components. The proposed method is based on temporal local decorrelation of the feature vector sequence and is performed on an utterance-by-utterance basis. Whereas conventional modulation frequency filtering takes the same form regardless of task and environment conditions, the proposed method provides an adaptive modulation frequency filter that outperforms conventional methods for each utterance. In addition, the method ultimately performs channel normalization in the feature domain when applied to log-spectral parameters. The performance was evaluated through speaker-independent isolated-word recognition experiments under additive noise. The proposed method achieved outstanding improvement for speech recognition in significantly noisy environments and was also effective across a range of feature representations.

  • PDF
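
The abstract does not give the data-driven filter design itself, but the underlying idea, per-utterance channel normalization of log-spectral features combined with suppression of low modulation frequencies, can be illustrated with a fixed (non-adaptive) sketch. The function name and filter constant below are illustrative, not from the paper:

```python
import numpy as np

def blind_channel_normalize(logspec, alpha=0.97):
    """Per-utterance channel normalization of a (frames x bands)
    log-spectral feature matrix.

    Subtracting the temporal mean removes a stationary convolutive
    channel; a first-order high-pass along the time axis additionally
    attenuates slow-varying (low modulation frequency) noise components.
    """
    # Mean removal per frequency band, computed over this utterance only.
    x = logspec - logspec.mean(axis=0, keepdims=True)
    # First-order high-pass modulation filter along time.
    y = np.empty_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = alpha * (y[t - 1] + x[t] - x[t - 1])
    return y
```

Because the statistics are computed per utterance, the same code handles any unknown channel offset without prior calibration, which is the "blind" aspect the abstract refers to.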

Prosody in Spoken Language Processing

  • Schafer Amy J.;Jun Sun-Ah
    • The Acoustical Society of Korea: Conference Proceedings
    • /
    • Proceedings of the Acoustical Society of Korea 2000 Summer Conference, Vol. 19 No. 1
    • /
    • pp.7-10
    • /
    • 2000
  • Studies of prosody and sentence processing have demonstrated that prosodic phrasing can exert strong effects on processing decisions in English. In this paper, we tested Korean sentence fragments containing syntactically ambiguous Adj-N1-N2 strings in a cross-modal naming task. Four accentual phrasing patterns were tested: (a) the default phrasing pattern, in which each word forms an accentual phrase; (b) a phrasing biased toward N1 modification; (c) a phrasing biased toward complex-NP modification; and (d) a phrasing used with adjective focus. Patterns (b) and (c) are disambiguating phrasings; the other two are commonly found with both interpretations and are thus ambiguous. The results showed that naming times for items produced with a prosody contradicting the semantic grouping were significantly longer than for items produced with either the default or a supporting prosody. We claim that, as in English, prosodic information in Korean is parsed into a well-formed prosodic representation during the early stages of processing. The partially constructed prosodic representation produces incremental effects on syntactic and semantic processing decisions and is retained in memory to influence reanalysis decisions.

  • PDF

Robust Syntactic Annotation of Corpora and Memory-Based Parsing

  • Hinrichs, Erhard W.
    • The Korean Society for Language and Information: Conference Proceedings
    • /
    • The Korean Society for Language and Information 2002: Language, Information, and Computation, Proceedings of The 16th Pacific Asia Conference
    • /
    • pp.1-1
    • /
    • 2002
  • This talk provides an overview of current work in my research group on the syntactic annotation of the Tübingen corpus of spoken German and of the German Reference Corpus (Deutsches Referenzkorpus: DEREKO) of written texts. Morpho-syntactic and syntactic annotation, as well as annotation of function-argument structure, is performed automatically for these corpora by a hybrid architecture that combines robust symbolic parsing using finite-state methods ("chunk parsing" in the sense of Abney) with memory-based parsing (in the sense of Daelemans). The resulting robust annotations can be used by theoretical linguists, who are interested in large-scale empirical data, and by computational linguists, who need training material for a wide range of language technology applications. To aid retrieval of annotated trees from the treebank, a query tool, VIQTORYA, with a graphical user interface and a logic-based query language has been developed. VIQTORYA allows users to query the treebanks for linguistic structures at the word level, at the level of individual phrases, and at the clausal level.

  • PDF

한국인을 위한 외국어 발음 교정 시스템의 개발 및 성능 평가 (Performance Evaluation of English Word Pronunciation Correction System)

  • 김무중;김효숙;김선주;김병기;하진영;권철홍
    • Malsori (Journal of the Korean Society of Phonetic Sciences and Speech Technology)
    • /
    • No. 46
    • /
    • pp.87-102
    • /
    • 2003
  • In this paper, we present an English pronunciation correction system for Korean speakers and report experimental results. The aim of the system is to detect mispronounced phonemes in spoken words and to give users appropriate correction comments. Several English pronunciation correction systems adopt speech recognition technology; however, most of them use conventional speech recognition engines and for this reason cannot give users phoneme-based correction comments. In our system, we build two kinds of phoneme models: standard native-speaker models and Koreans' error models. We also design a phoneme-based recognition network to detect Koreans' common mispronunciations. We obtained a 90% detection rate for insertions, deletions, and replacements of phonemes, but could not achieve a high detection rate for diphthong splits and accent errors.

  • PDF
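
The abstract does not spell out how insertions, deletions, and replacements are identified, but the standard way to label them is edit-distance alignment between the reference (native) phoneme sequence and the recognized one. The sketch below (function name mine, not the paper's) shows the idea:

```python
def align_phonemes(ref, hyp):
    """Align a reference phoneme sequence against a recognized one and
    return each discrepancy labeled as ('sub', ref, hyp),
    ('del', ref, None), or ('ins', None, hyp)."""
    n, m = len(ref), len(hyp)
    # Standard Levenshtein dynamic-programming table.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match/substitution
    # Backtrace to recover the error labels.
    errors, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           d[i][j] == d[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            if ref[i - 1] != hyp[j - 1]:
                errors.append(("sub", ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            errors.append(("del", ref[i - 1], None))
            i -= 1
        else:
            errors.append(("ins", None, hyp[j - 1]))
            j -= 1
    errors.reverse()
    return errors
```

Each labeled discrepancy can then be mapped to a phoneme-specific correction comment, which is precisely what a word-level recognition engine cannot provide.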

Prosodic Strengthening in Speech Production and Perception: The Current Issues

  • Cho, Tae-Hong
    • Speech Sciences
    • /
    • Vol. 14 No. 4
    • /
    • pp.7-24
    • /
    • 2007
  • This paper discusses some current issues regarding how prosodic structure is manifested in fine-grained phonetic details, how prosodically-conditioned articulatory variation is explained in terms of speech dynamics, and how such phonetic manifestation of prosodic structure may be exploited in spoken word recognition. Prosodic structure is phonetically manifested at prosodically important landmark locations such as prosodic domain-final position, domain-initial position, and stressed/accented syllables. It will be discussed how each of these prosodic landmarks engenders particular phonetic patterns, how articulatory variation at such locations is dynamically accounted for, and how prosodically-driven fine-grained phonetic detail is exploited by listeners in speech comprehension.

  • PDF

3세~8세 아동의 자유 발화 분석을 바탕으로 한 한국어 말소리의 빈도 관련 정보 (Phoneme Frequency of 3 to 8-year-old Korean Children)

  • 신지영
    • The Korean Society of Phonetic Sciences and Speech Technology: Conference Proceedings
    • /
    • Proceedings of the 2005 Spring Conference of the Korean Society of Phonetic Sciences and Speech Technology
    • /
    • pp.15-19
    • /
    • 2005
  • The aim of this study is to provide information on the frequencies of occurrence of Korean phonemes and syllables, based on an analysis of spontaneous speech produced by 3- to 8-year-old Korean children. 49 Korean children (7~10 children for each age) served as subjects. Speech data were recorded and phonemically transcribed. 120 utterances per child were selected for analysis, except for one child whose data comprised only 91 utterances. The resulting data set contained 5,971 utterances, 51,554 syllables, and 105,491 phonemes. Among the 19 consonants, /n/ showed the highest frequency, and the four most frequent consonants together accounted for over 50% of consonant occurrences in all age groups. Among the 18 vowels, /a/ was the most frequent, with /i/ and /ʌ/ second and third, respectively. Frequently occurring syllable types were in most cases part of a grammatical word. Only 5~6% of syllable types covered 50% of speech.

  • PDF
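
A frequency study of this kind reduces to tallying phonemes over the transcribed utterances and summing the shares of the most frequent units to get coverage figures like those above. A minimal sketch (function names are mine, not the paper's):

```python
from collections import Counter

def phoneme_frequencies(utterances):
    """Count phoneme occurrences over transcribed utterances (each a
    list of phoneme symbols) and return (phoneme, relative frequency)
    pairs sorted from most to least frequent."""
    counts = Counter(p for utt in utterances for p in utt)
    total = sum(counts.values())
    return [(p, c / total) for p, c in counts.most_common()]

def coverage(freqs, k):
    """Cumulative share of the data covered by the k most frequent units."""
    return sum(f for _, f in freqs[:k])
```

The same two functions apply unchanged to syllable types by passing lists of syllables instead of phonemes.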

국소 극대-극소점 간의 간격정보를 이용한 시간영역에서의 음성인식을 위한 파라미터 추출 방법 (A Time-Domain Parameter Extraction Method for Speech Recognition using the Local Peak-to-Peak Interval Information)

  • 임재열;김형일;안수길
    • Journal of the Institute of Electronics Engineers of Korea, Part B
    • /
    • Vol. 31B No. 2
    • /
    • pp.28-34
    • /
    • 1994
  • In this paper, a new time-domain parameter extraction method for speech recognition is proposed. The suggested method is based on the fact that the local peak-to-peak interval, i.e., the interval between the maxima and minima of the speech waveform, is closely related to the frequency content of the speech signal. Parameterization is achieved by a sort of filter-bank technique in the time domain. To test the proposed parameter extraction method, an isolated-word recognizer based on vector quantization and hidden Markov models was constructed. As test material, 22 words spoken by ten male speakers were used, and a recognition rate of 92.9% was obtained. This result leads to the conclusion that the new parameter extraction method can be used in speech recognition systems. Since the proposed method operates in the time domain, real-time parameter extraction can be implemented on a personal computer equipped only with an A/D converter, without any DSP board.

  • PDF
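
The paper's exact time-domain filter-bank scheme is not described in the abstract, but the core idea, that intervals between successive local extrema encode local frequency, can be sketched by histogramming those intervals into a feature vector. Bin edges and names below are illustrative assumptions:

```python
import numpy as np

def peak_to_peak_params(signal, bins=(2, 4, 8, 16, 32, 64)):
    """Time-domain parameterization from local peak-to-peak intervals.

    Local maxima and minima are located via sign changes of the first
    difference; the intervals (in samples) between successive extrema,
    inversely related to local frequency, are histogrammed into a
    normalized feature vector, a crude stand-in for a filter bank."""
    d = np.diff(signal)
    # Indices where the slope changes sign, i.e., local extrema.
    extrema = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
    intervals = np.diff(extrema)
    hist, _ = np.histogram(intervals, bins=list(bins))
    return hist / max(len(intervals), 1)
```

Everything here is integer differencing and counting, which is consistent with the abstract's claim that the method runs in real time on a plain PC with only an A/D converter.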

Considering Dynamic Non-Segmental Phonetics

  • Fujino, Yoshinari
    • The Korean Society of Phonetic Sciences and Speech Technology: Conference Proceedings
    • /
    • Proceedings of the July 2000 Conference of the Korean Society of Phonetic Sciences and Speech Technology
    • /
    • pp.312-320
    • /
    • 2000
  • This presentation aims to explore possibilities of non-segmental phonetics that are usually ignored in phonetics education. In pedagogical phonetics, especially ESL/EFL-oriented phonetics, speech sounds tend to be classified by two criteria: 1) 'pronunciation', which deals with segments, and 2) 'prosody' or 'suprasegmentals', which deals with non-segmental elements such as stress and intonation. However, speech involves more dynamic processing; it is non-linear and multi-dimensional despite the linear sequence of symbols in phonetic/phonological transcriptions. No word is without pitch or voice quality, over and above its segmental characteristics, whether it is spoken in isolation or cut out from continuous speech. This simply shows that the dichotomy of pronunciation and prosody is merely a useful convention, and that there is room to consider dynamic non-segmental phonetics. Examples of non-segmental phonetic investigation are examined, including analyses conducted within the framework of Firthian Prosodic Analysis, especially of the relation between vowel variants and foot types, and we consider what kind of auditory phonetic training is required to understand the impressionistic transcriptions that lie behind non-segmental phonetics.

  • PDF

음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현 (Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command)

  • 심병균;한성현
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • Vol. 20 No. 2
    • /
    • pp.207-213
    • /
    • 2011
  • In this paper, we present a study on the real-time implementation of a mobile robot to which interactive voice recognition techniques are applied. Speech commands are uttered as sentential connected words and issued through a wireless remote control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. We first construct a baseline automatic speech command recognition system in which acoustic models are trained on speech utterances recorded through a microphone. To improve on this baseline, the acoustic models are adapted to adjust for the spectral characteristics of speech captured by different microphones and for the environmental mismatch between close-talking and distant speech. We illustrate the performance of the developed speech recognition system through experiments; the average recognition rate of the proposed system is above 95%.