• Title/Summary/Keyword: syllable detection

Search results: 16

The Development of Phonological Awareness in Children (아동의 음운인식 발달)

  • Park, Hyang Ah
    • Korean Journal of Child Studies
    • /
    • v.21 no.1
    • /
    • pp.35-44
    • /
    • 2000
  • This study examined the development of phonological awareness in 3-, 5-, and 7-year-old children, with 20 subjects at each age level. The 3-year-olds were given 2 phoneme detection tasks, and the 5- and 7-year-olds were given 5 phoneme detection tasks. In each task, the children first heard a target syllable together with 2 other syllables and were asked to tell which of the 2 syllables sounded similar to the target. Children were able to detect relatively large segments ($Consonant_1+Vowel$ or $Vowel+Consonant_2$: $C_1V$ or $VC_2$) at the age of 3 and gradually progressed to smaller sound segments (e.g., phonemes). The study indicated that Korean children detect $C_1V$ segments better than $VC_2$ segments and detect the initial consonant better than the medial vowel and the final consonant.
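
The $C_1V$/$VC_2$ contrast above hinges on splitting a Korean syllable into an initial consonant, a vowel, and an optional final consonant. The sketch below is purely illustrative (it is not from the paper); it uses the standard Unicode decomposition of precomposed Hangul syllables to compare two syllables on their $C_1V$ (body) or $VC_2$ (rime) segment.

```python
# Illustrative sketch: decompose a precomposed Hangul syllable (U+AC00-U+D7A3)
# into its initial consonant (C1), vowel (V), and final consonant (C2).

CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"                       # 19 initial consonants
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"                  # 21 vowels
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + empty

def split_syllable(syl: str):
    """Return (C1, V, C2) for a single precomposed Hangul syllable."""
    code = ord(syl) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError(f"{syl!r} is not a precomposed Hangul syllable")
    c1 = CHOSEONG[code // 588]           # 588 = 21 vowels * 28 final slots
    v = JUNGSEONG[(code % 588) // 28]
    c2 = JONGSEONG[code % 28]            # "" when there is no final consonant
    return c1, v, c2

def shares_c1v(a: str, b: str) -> bool:
    """True if two syllables share the C1V (body) segment, e.g. '간' and '강'."""
    return split_syllable(a)[:2] == split_syllable(b)[:2]

def shares_vc2(a: str, b: str) -> bool:
    """True if two syllables share the VC2 (rime) segment, e.g. '간' and '산'."""
    return split_syllable(a)[1:] == split_syllable(b)[1:]

print(shares_c1v("간", "강"), shares_vc2("간", "산"))  # True True
```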

The Primitive Representation in Speech Perception: Phoneme or Distinctive Features (말지각의 기초표상: 음소 또는 변별자질)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.157-169
    • /
    • 2013
  • Using a target detection task, this study compared the processing automaticity of phonemes and distinctive features in spoken syllable stimuli to determine the primitive representation in speech perception: phoneme or distinctive feature. For this, the visual search task (Treisman et al., 1992), developed to investigate the processing of visual features (e.g., color, shape, or their conjunction), was adapted for auditory stimuli. In our task, the distinctive features (e.g., aspiration or coronal) corresponded to visual primitive features (e.g., color and shape), and the phonemes (e.g., /$t^h$/) to visual conjunctive features (e.g., colored shapes). Automaticity was measured by the set-size effect, i.e., the increase in reaction time as the number of distracters increased. Three experiments were conducted: the laryngeal features (experiment 1), the manner features (experiment 2), and the place features (experiment 3) were each compared with phonemes. The results showed that distinctive features are consistently processed faster and more automatically than phonemes. Additionally, there were differences in processing automaticity among the classes of distinctive features: the laryngeal features were the most automatic, the manner features moderately automatic, and the place features the least automatic. These results are consistent with previous studies (Bae et al., 2002; Bae, 2010) that showed a perceptual hierarchy of distinctive features.
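
The set-size effect described above is commonly quantified as the slope of reaction time against the number of distracters: a near-flat slope suggests automatic (parallel) processing, a steep slope suggests serial search. The sketch below is illustrative only; the set sizes and reaction times are invented, not the paper's data.

```python
# Illustrative sketch: estimate the set-size effect as the least-squares slope
# of reaction time (ms) against the number of distracters.

import numpy as np

set_sizes = np.array([2, 4, 8, 16])           # distracters per trial block (hypothetical)
rt_feature = np.array([512, 518, 521, 525])   # hypothetical mean RTs (ms), feature target
rt_phoneme = np.array([530, 570, 640, 790])   # hypothetical mean RTs (ms), phoneme target

def set_size_slope(sizes, rts):
    """Slope of RT per added distracter (ms/item), via linear regression."""
    slope, _intercept = np.polyfit(sizes, rts, 1)
    return slope

print(f"feature search: {set_size_slope(set_sizes, rt_feature):.1f} ms/item")  # near-flat slope
print(f"phoneme search: {set_size_slope(set_sizes, rt_phoneme):.1f} ms/item")  # steep slope
```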

Context Based Real-time Korean Writing Correction for Foreigners (외국인 학습자를 위한 문맥 기반 실시간 국어 문장 교정)

  • Park, Young-Keun;Kim, Jae-Min;Lee, Seong-Dong;Lee, Hyun Ah
    • Journal of KIISE
    • /
    • v.44 no.10
    • /
    • pp.1087-1093
    • /
    • 2017
  • Korean language education for foreigners is attracting increasing attention as the number of foreigners who want to learn Korean or reside in Korea grows. Existing spell checkers mostly target native Korean speakers, so they are inappropriate for foreigners. In this paper, we propose a correction method for Korean that reflects the contextual characteristics of the language and the writing characteristics of foreigners. Our method extracts expressions frequently used by native Koreans as correction candidates by building a syllable-level reverse index over eojeol bi-grams extracted from a corpus, and generates ranked corrections for foreigners with an upgraded edit distance calculation. Our system provides a keyboard-hooking-based user interface, so a user can easily run the correction system alongside other applications. In foreign-learner writing environments, our system improves the detection rate for foreign users by about 45% compared to other systems. This will help foreign users judge and correct their own writing errors.
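
A minimal sketch of the candidate-ranking idea, under simplifying assumptions: the paper's syllable reverse index over eojeol bi-grams and its upgraded edit distance are replaced here by a plain Levenshtein distance computed over decomposed jamo, and the learner input and candidate expressions are made-up examples, not data from the paper.

```python
# Illustrative sketch: rank correction candidates for a learner's eojeol by
# edit distance over decomposed jamo (NFD splits each syllable into jamo).

from unicodedata import normalize

def to_jamo(eojeol: str) -> str:
    """Decompose precomposed syllables into a jamo sequence."""
    return normalize("NFD", eojeol)

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def rank_candidates(written: str, candidates: list[str]) -> list[tuple[str, int]]:
    """Sort candidate expressions by jamo-level distance to the learner's input."""
    scored = [(c, levenshtein(to_jamo(written), to_jamo(c))) for c in candidates]
    return sorted(scored, key=lambda x: x[1])

# Hypothetical learner input and candidate expressions from a corpus.
print(rank_candidates("갑사합니다", ["감사합니다", "갑갑합니다", "고맙습니다"]))
```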

A longitudinal study on the development of English phonological awareness in preschool children (어린이집 유아의 영어 음운 인식 발달 종단 연구)

  • Chung, Hyunsong
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.53-66
    • /
    • 2018
  • This study investigated the development of English phonological awareness in preschool children, based on a longitudinal design. A phonological matching task, a mispronunciation task, an articulation test, an explicit phoneme awareness task, a rhyme matching task, and an initial-phoneme matching task were administered to three-, four-, and five-year-old children; a letter knowledge test was added for the five-year-olds. The results revealed that phonological awareness develops in a progression from syllable, to onset and rhyme, to phoneme. Language skills such as vocabulary, detection of mispronunciations, and articulation were partially related to the development of phoneme awareness, and letter knowledge partially affected the children's development of phonological awareness.

The Role of Post-lexical Intonational Patterns in Korean Word Segmentation

  • Kim, Sa-Hyang
    • Speech Sciences
    • /
    • v.14 no.1
    • /
    • pp.37-62
    • /
    • 2007
  • The current study examines the role of the post-lexical tonal patterns of a prosodic phrase in word segmentation. In a word-spotting experiment, native Korean listeners were asked to spot a disyllabic or trisyllabic word in a twelve-syllable speech stream composed of three Accentual Phrases (APs). Words occurred with various post-lexical intonation patterns. The results showed that listeners spotted more words in phrase-initial than in phrase-medial position, suggesting that the AP-final H tone from the preceding AP helped listeners segment the phrase-initial word in the target AP. Results also showed that listeners' error rates were significantly lower when words occurred with an initial rising tonal pattern, which is the most frequent intonational pattern imposed upon multisyllabic words in Korean, than with non-rising patterns. This result was observed both in AP-initial and in AP-medial positions, regardless of the frequency and legality of the overall AP tonal patterns. Tonal cues other than the initial rising tone did not positively influence the error rate. These results not only indicate that a rising tone in AP-initial and AP-final position is a reliable cue for word boundary detection for Korean listeners, but further suggest that phrasal intonation contours serve as a possible word boundary cue in languages without lexical prominence.

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records fall into two types. The first is misrecognition errors, where the agent fails to recognize the user's speech at all. The second is misinterpretation errors, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation: for each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model by applying patterns of misinterpretation error causes. The most significant result was obtained with initial consonant extraction for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are hard to detect because they leave no recognition failure, the study proposed diverse text separation methods and found one that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services requiring neologism detection, patterns of errors arising from the voice recognition stage can be specified. The study also proposed and verified that, even when such cases are not categorized as errors, services can still be provided according to the results the user intended.
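
A minimal sketch of the syllable-separation idea, with strong caveats: the paper pairs such separations with word and document embeddings trained on real utterance records, whereas the sketch below only extracts the initial consonants (choseong) of each Hangul syllable and compares two consecutive utterances with a generic string-similarity ratio; the example utterances are invented.

```python
# Illustrative sketch: compare consecutive utterances on the initial-consonant
# (choseong) level, one of the text separation methods discussed above.

from difflib import SequenceMatcher

CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"  # 19 initial consonants

def choseong_only(text: str) -> str:
    """Keep the initial consonant of every precomposed Hangul syllable."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                  # Hangul Syllables block
            out.append(CHOSEONG[code // 588])   # 588 = 21 vowels * 28 final slots
    return "".join(out)

def choseong_similarity(utt1: str, utt2: str) -> float:
    """Similarity of two utterances on the initial-consonant level (0.0-1.0)."""
    return SequenceMatcher(None, choseong_only(utt1), choseong_only(utt2)).ratio()

# Hypothetical consecutive utterances: a high choseong similarity despite a
# changed service result may hint at a misinterpreted query (e.g., a neologism).
print(choseong_similarity("아이유 노래 틀어 줘", "아이유 노래 틀어 달라고"))
```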