• Title/Summary/Keyword: Visual Word Recognition

The neighborhood size and frequency effect in Korean words (한국어 단어재인에서 나타나는 이웃효과)

  • Kwon You-An;Cho Hye-Suk;Nam Ki-Chun
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.117-120
    • /
    • 2006
  • This paper examined two hypotheses. First, if the first syllable of a word plays an important role in visual word recognition, it may be the unit that defines a word's neighborhood. Second, if the first syllable is the unit of lexical access, the neighborhood size effect and the neighborhood frequency effect should appear in a lexical decision task (LDT) and a form-primed lexical decision task. We conducted two experiments. Experiment 1 showed that words with large neighborhoods produced an inhibitory effect in the LDT. Experiment 2 showed an interaction between the neighborhood frequency effect and word-form similarity in the form-primed LDT. We concluded that the first syllable in Korean words may be the unit of the word neighborhood and plays a central role in lexical access.
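
The abstract above treats the first syllable as the unit that defines a word's orthographic neighborhood. As a hedged illustration only (not the authors' materials or analysis), the sketch below counts, for each entry in a toy lexicon, how many other words share its first syllable and whether any of those neighbors is more frequent; the word list and frequency counts are hypothetical.

```python
# Minimal sketch: first-syllable neighborhoods over a toy Korean lexicon.
# The lexicon and frequencies are made-up illustrative values, not data
# from the paper. In Python 3, word[0] is the first Hangul syllable block.
from collections import defaultdict

def first_syllable_neighborhoods(lexicon, frequencies):
    by_syllable = defaultdict(list)
    for word in lexicon:
        by_syllable[word[0]].append(word)

    stats = {}
    for word in lexicon:
        neighbors = [w for w in by_syllable[word[0]] if w != word]
        stats[word] = {
            "neighborhood_size": len(neighbors),
            "has_higher_frequency_neighbor": any(
                frequencies.get(w, 0) > frequencies.get(word, 0) for w in neighbors
            ),
        }
    return stats

lexicon = ["사랑", "사람", "사과", "나무"]          # hypothetical word list
frequencies = {"사랑": 120, "사람": 340, "사과": 45, "나무": 80}
print(first_syllable_neighborhoods(lexicon, frequencies))
```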

The Phonological and Orthographic activation in Korean Word Recognition (II) (한국어 단어 재인에서의 음운정보와 철자정보의 활성화(II))

  • Choi Wonil;Nam Kichun
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.33-36
    • /
    • 2003
  • Two experiments were conducted to support the suggestion by Wonil Choi & Kichun Nam (2003) that the same information processing is used in both the visual and the auditory input modality. A primed lexical decision task was performed with pseudoword prime stimuli. The result was that a priming effect did not occur in any experimental condition. This result can be interpreted as facilitative visual information and inhibitory phonological information cancelling each other out.

Recent update on reading disability (dyslexia) focused on neurobiology

  • Kim, Sung Koo
    • Clinical and Experimental Pediatrics
    • /
    • v.64 no.10
    • /
    • pp.497-503
    • /
    • 2021
  • Reading disability (dyslexia) refers to an unexpected difficulty with reading for an individual who has the intelligence to be a much better reader. Dyslexia is most commonly caused by a difficulty in phonological processing (the appreciation of the individual sounds of spoken language), which affects the ability of an individual to speak, read, and spell. In this paper, I describe reading disabilities by focusing on their underlying neurobiological mechanisms. Neurobiological studies using functional brain imaging have uncovered the reading pathways, brain regions involved in reading, and neurobiological abnormalities of dyslexia. The reading pathway proceeds in the order of visual analysis, letter recognition, word recognition, meaning (semantics), phonological processing, and speech production. According to functional neuroimaging studies, the important areas of the brain related to reading include the inferior frontal cortex (Broca's area), the midtemporal lobe region, the inferior parieto-temporal area, and the left occipitotemporal region (visual word form area). Interventions for dyslexia can affect reading ability by causing changes in brain function and structure. An accurate diagnosis and timely specialized intervention are important in children with dyslexia. In countries where national infant development screening tests are conducted, as in Korea, children who show language developmental delay or early predictors of dyslexia should be observed carefully for progression to dyslexia and receive early intervention.

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence
    • /
    • v.14 no.8
    • /
    • pp.233-243
    • /
    • 2016
  • Because speech recognition degrades in noisy environments, Audio-Visual Speech Recognition (AVSR) systems, which combine speech and visual information, have been proposed since the mid-1990s, and lip reading has played a significant role in AVSR. This study aims to enhance the recognition rate of uttered words using only lip-shape detection for an efficient AVSR system. After preprocessing for lip region detection, Convolutional Neural Network (CNN) techniques are applied for utterance period detection and lip-shape feature vector extraction, and Hidden Markov Models (HMMs) are then used for recognition. The utterance period detection results show a 91% success rate, outperforming general threshold methods. In lip reading recognition, the user-dependent experiment records an 88.5% recognition rate and the user-independent experiment 80.2%, improved results compared to previous studies.
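
The pipeline in this abstract pairs CNN-derived lip-shape feature vectors with HMM-based recognition. The sketch below is a minimal, hedged reconstruction of only the HMM stage using the hmmlearn library, assuming the per-frame feature vectors have already been produced by some CNN; the function names and data layout are placeholders, not the authors' code.

```python
# Minimal sketch of word recognition from lip-shape feature sequences with
# one Gaussian HMM per word (hmmlearn). The CNN feature extractor is assumed
# to exist elsewhere; shapes and names here are illustrative.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_hmms(sequences_by_word, n_states=5):
    """sequences_by_word: {word: [array of shape (T_i, D), ...]}."""
    models = {}
    for word, sequences in sequences_by_word.items():
        X = np.vstack(sequences)                    # all frames concatenated
        lengths = [len(seq) for seq in sequences]   # per-utterance frame counts
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[word] = model
    return models

def recognize(models, sequence):
    """Return the word whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda word: models[word].score(sequence))
```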

Phonological awareness skills in terms of visual and auditory stimulus and syllable position in typically developing children (청각적, 시각적 자극제시 방법과 음절위치에 따른 일반아동의 음운인식 능력)

  • Choi, Yu Mi;Ha, Seunghee
    • Phonetics and Speech Sciences
    • /
    • v.9 no.4
    • /
    • pp.123-128
    • /
    • 2017
  • This study aims to compare performance on a syllable identification task according to auditory versus visual stimulus presentation and syllable position. Twenty-two typically developing children (ages 4-6) participated. Three-syllable words were used, and children identified the first and final syllables of each word under auditory and visual presentation. In the auditory condition, the researcher presented the test word with oral speech only. In the visual condition, the test words were presented as pictures, and each child was asked to choose the appropriate pictures for the task. The results showed that phonological awareness performance was significantly higher when tasks were presented visually than auditorily. Performance on first-syllable identification was also significantly higher than on final-syllable identification. When a phonological awareness task is presented with auditory stimuli, all steps of the speech production process must be engaged, so performance may be lowered by weakness at any of those stages. When the task is presented with visual picture stimuli, it can be performed directly at the phonological representation stage without passing through peripheral auditory processing, phonological recognition, or motor programming. This study suggests that phonological awareness skills can differ depending on the method of stimulus presentation and the syllable position targeted by the task. Comparing performance between visual and auditory stimulus tasks can help identify where children show weakness and vulnerability in the speech production process.

The Relationship between Neurocognitive Functioning and Emotional Recognition in Chronic Schizophrenic Patients (만성 정신분열병 환자들의 인지 기능과 정서 인식 능력의 관련성)

  • Hwang, Hye-Li;Hwang, Tae-Yeon;Lee, Woo-Kyung;Han, Eun-Sun
    • Korean Journal of Biological Psychiatry
    • /
    • v.11 no.2
    • /
    • pp.155-164
    • /
    • 2004
  • Objective: The present study examined the association between basic neurocognitive functions and emotion recognition in chronic schizophrenia and investigated which cognitive variables are related to emotion recognition. Methods: Forty-eight patients from the Yongin Psychiatric Rehabilitation Center were evaluated with neurocognitive measures and an Emotional Recognition Test with four subscales: finding emotional clues, discriminating emotions, understanding emotional context, and emotional capacity. Measures of neurocognitive functioning were selected based on hypothesized relationships to the perception of emotion: 1) Letter Number Sequencing Test, a measure of working memory; 2) Word Fluency and Block Design, measures of executive function; 3) Hopkins Verbal Learning Test-Korean version, a measure of verbal memory; 4) Digit Span, a measure of immediate memory; 5) Span of Apprehension Task, a measure of early visual processing and visual scanning; and 6) Continuous Performance Test, a measure of sustained attention. Correlation analyses between specific neurocognitive measures and the emotional recognition test were conducted. Hierarchical regression analyses were also performed to examine the degree to which neurocognitive performance predicts emotional recognition. Results: Working memory and verbal memory were closely related to emotional discrimination. Working memory, Span of Apprehension, and Digit Span were closely related to contextual recognition. Among the cognitive measures, Span of Apprehension, working memory, and Digit Span were the most important variables in predicting emotional capacity. Conclusion: These results are relevant given that emotional information processing depends, in part, on the abilities to scan context and to use immediate working memory. They indicate that a multifaceted cognitive training program supplemented with an emotional recognition task (Cognitive Behavioral Rehabilitation Therapy with an Emotional Management Program) is promising.
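
The abstract reports hierarchical regression analyses predicting emotion recognition from neurocognitive measures. As a hedged illustration (not the authors' analysis), the sketch below shows the usual two-step form of such an analysis with statsmodels, comparing R-squared before and after the cognitive predictors are entered; the column names and covariates are hypothetical.

```python
# Minimal sketch of a two-step hierarchical regression: covariates first,
# then cognitive predictors, with the R-squared change as the quantity of
# interest. DataFrame columns are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_r2_change(df: pd.DataFrame):
    step1 = smf.ols("emotion_recognition ~ age + education", data=df).fit()
    step2 = smf.ols(
        "emotion_recognition ~ age + education"
        " + working_memory + span_of_apprehension + digit_span",
        data=df,
    ).fit()
    return step1.rsquared, step2.rsquared, step2.rsquared - step1.rsquared
```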

Equipment and Worker Recognition of Construction Site with Vision Feature Detection

  • Qi, Shaowen;Shan, Jiazeng;Xu, Lei
    • International Journal of High-Rise Buildings
    • /
    • v.9 no.4
    • /
    • pp.335-342
    • /
    • 2020
  • This article proposes a new method based on the visual characteristics of objects and machine learning technology to achieve semi-automated recognition of personnel, machines, and materials on construction sites. Balancing real-time performance and accuracy, Faster RCNN (Faster Region-based Convolutional Neural Networks) with transfer learning appears to be a rational choice. After fine-tuning and testing an ImageNet pre-trained Faster RCNN, the precision (mAP) reached 67.62% and the recall (AR) reached 56.23%; in other words, the recognition method achieves reasonable performance. Further inference on video of the construction of Huoshenshan Hospital also indicates preliminary success.
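
The abstract describes fine-tuning a pre-trained Faster RCNN for construction-site classes. The sketch below is a hedged, generic reconstruction using torchvision's COCO-pretrained Faster R-CNN as a stand-in for the pre-trained network mentioned in the paper; the class list, data loader, and hyperparameters are assumptions, not the authors' configuration.

```python
# Minimal transfer-learning sketch: swap the box-predictor head of a
# pre-trained Faster R-CNN for new classes and fine-tune. The dataset
# loader and class count are hypothetical.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 4  # background + worker + equipment + material (assumed)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)

def train_one_epoch(model, data_loader, optimizer, device="cuda"):
    """data_loader yields (images, targets) in torchvision detection format."""
    model.to(device)
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # dict of classification/box losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```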

Association between Global Cortical Atrophy, Medial Temporal Atrophy, White Matter Hyperintensities and Cognitive Functions in Korean Alzheimer's Disease Patients (알츠하이머병 환자의 전반적 피질 위축, 내측두엽 위축, 백질 고강도 신호와 인지기능의 연관성)

  • Choi, Leen;Joo, Soo-Hyun;Lee, Chang-Uk;Paik, In-Ho
    • Korean Journal of Biological Psychiatry
    • /
    • v.22 no.3
    • /
    • pp.140-148
    • /
    • 2015
  • Objectives: The aim of this study was to investigate the correlation between degenerative changes in the brain [i.e., global cortical atrophy (GCA), medial temporal atrophy (MTA), and white matter hyperintensities (WMH)] and neurocognitive dysfunction in Korean patients with Alzheimer's disease. Methods: A total of 62 elderly subjects diagnosed with Alzheimer's disease were included in this study. The degenerative changes on brain MRI were rated with standardized visual rating scales (GCA, MTA, and Fazekas scales), and the subjects were divided into two groups according to the degree of degeneration on each scale. Cognitive function was evaluated with the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K), and several clinical features, including apolipoprotein E ε4 status, lipid profile, and thyroid hormones, were also examined. The chi-square test and Fisher's exact test were performed to analyze the relationship between the degree of cerebral degeneration and neurocognitive functions. Results: Demographic and clinical features, except for age, did not show any significant difference between the two groups divided according to the degree of cerebral degenerative change. However, a higher degree of GCA was associated with poorer performance in the verbal fluency test, word list recall test, and word list recognition test. A higher degree of MTA was associated with poorer performance in the Mini-Mental State Examination in the Korean Version of the CERAD Assessment Packet (MMSE-KC), word list recognition test, and constructional praxis recall test. A higher degree of white matter hyperintensities was associated with poorer performance in the MMSE-KC. Conclusions: Our results suggest that severe brain degeneration shown on MRI is associated with significantly poorer performance in neurocognitive tests in patients with Alzheimer's disease. Moreover, the degrees of GCA, MTA, and white matter hyperintensities, represented by scores on different visual rating scales, each seem to affect particular neurocognitive domains, which would provide useful information in clinical settings.
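
The analysis in this abstract compares atrophy groups against cognitive performance with chi-square and Fisher's exact tests. As an illustration only (with made-up counts, not the study's data), the sketch below runs both tests on a 2x2 contingency table of atrophy group versus impaired/unimpaired performance using scipy.

```python
# Minimal sketch: chi-square and Fisher's exact test on a hypothetical
# 2x2 table (rows: low vs. high GCA group; columns: unimpaired vs. impaired
# on one cognitive test). Counts are illustrative, not from the study.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[22, 9],
                  [10, 21]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)   # preferred when expected counts are small
print(f"chi-square p = {p_chi2:.3f}, Fisher's exact p = {p_fisher:.3f}")
```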

The Effect of Acoustic Correlates of Domain-initial Strengthening in Lexical Segmentation of English by Native Korean Listeners

  • Kim, Sa-Hyang;Cho, Tae-Hong
    • Phonetics and Speech Sciences
    • /
    • v.2 no.3
    • /
    • pp.115-124
    • /
    • 2010
  • The current study investigated the role of acoustic correlates of domain-initial strengthening in lexical segmentation of a non-native language. In a series of cross-modal identity-priming experiments, native Korean listeners heard English auditory stimuli and made lexical decisions to visual targets (i.e., written words). The auditory stimuli contained critical two-word sequences that created temporary lexical ambiguity (e.g., 'mill#company', with the competitor 'milk'). There was either an IP boundary or a word boundary between the two words in the critical sequences. The initial CV of the second word (e.g., [kʌ] in 'company') was spliced from another token of the sequence in IP- or Wd-initial position. The prime words were postboundary words (e.g., company) in Experiment 1 and preboundary words (e.g., mill) in Experiment 2. In both experiments, Korean listeners showed priming effects only in IP contexts, indicating that they can make use of IP boundary cues of English in lexical segmentation of English. The acoustic correlates of domain-initial strengthening were also exploited by Korean listeners, but significant effects were found only for the segmentation of postboundary words. The results therefore indicate that L2 listeners can make use of prosodically driven phonetic detail in lexical segmentation of the L2, as long as the direction of those cues is similar in their L1 and L2. The exact use of the cues by Korean listeners was, however, different from that found with native English listeners in Cho, McQueen, and Cox (2007). The differential use of the prosodically driven phonetic cues by native and non-native listeners is thus discussed.
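
The priming effect in this abstract is the response-time advantage for targets preceded by identity primes relative to control primes, examined separately for IP-boundary and word-boundary contexts. The sketch below is a hedged, generic way to compute such an effect per context with a paired t-test; the column names and trial structure are hypothetical, not the authors' analysis.

```python
# Minimal sketch: per-subject identity-priming effect (control RT minus
# identity RT) within one prosodic context, tested with a paired t-test.
# DataFrame columns are hypothetical: subject, context ('IP'/'Wd'),
# prime ('identity'/'control'), rt (ms).
import pandas as pd
from scipy.stats import ttest_rel

def priming_effect(trials: pd.DataFrame, context: str):
    sub = trials[trials["context"] == context]
    means = sub.groupby(["subject", "prime"])["rt"].mean().unstack()
    effect = means["control"] - means["identity"]
    t_stat, p_value = ttest_rel(means["control"], means["identity"])
    return effect.mean(), t_stat, p_value
```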

Language Lateralization Using Magnetoencephalography (MEG): A Preliminary Study (뇌자도를 이용한 언어 편재화: 예비 연구)

  • Lee, Seo-Young;Kang, Eunjoo;Kim, June Sic;Lee, Sang-Kun;Kang, Hyejin;Park, Hyojin;Kim, Sung Hun;Lee, Seung Hwan;Chung, Chun Kee
    • Annals of Clinical Neurophysiology
    • /
    • v.8 no.2
    • /
    • pp.163-170
    • /
    • 2006
  • Background: MEG can measure task-specific neurophysiologic activity with good spatial and temporal resolution. Language lateralization using a noninvasive method has been a subject of interest in resective brain surgery. We aimed to develop a paradigm for language lateralization using MEG and to validate its feasibility. Methods: Magnetic fields were recorded in 12 neurosurgical candidates and one volunteer during language tasks with a 306-channel whole-head MEG. The language tasks were word listening, reading, and picture naming. We tested two word-listening paradigms: semantic decision on the meaning of abstract nouns, and recognition of repeated words. The subjects were instructed to silently name or read, and either to respond with a button press or not. We determined language dominance from the number of acceptable equivalent current dipoles (ECDs), modeled by sequential single-dipole fitting, and from the mean magnetic field strength (root mean square value) in each hemisphere. We also collected clinical data including the Wada test. Results: Magnetic fields evoked by word listening were generally distributed over bilateral temporoparietal areas with variable hemispheric dominance. Language tasks using visual stimuli frequently evoked magnetic fields in the posterior midline area, which made laterality decisions difficult. Responding during the task produced more artifacts and different results depending on the responding hand. Laterality decisions based on mean magnetic field strength were more concordant with the Wada test than those based on the ECD count in each hemisphere. Conclusions: A word-listening task without a hand response is the most feasible paradigm for language lateralization using MEG, and the mean magnetic field strength in each hemisphere is a proper index of hemispheric dominance.
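
The abstract decides hemispheric dominance from the mean magnetic field strength (root mean square) in each hemisphere. As a hedged sketch only (not the authors' pipeline), the code below computes per-hemisphere RMS and a conventional laterality index LI = (L - R) / (L + R), where positive values indicate left-hemisphere dominance; how channels are grouped into hemispheres is an assumed input.

```python
# Minimal sketch: laterality index from per-hemisphere RMS field strength.
# left_channels / right_channels: arrays of shape (n_channels, n_samples),
# an assumed grouping of MEG sensors by hemisphere.
import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

def laterality_index(left_channels, right_channels):
    left = np.mean([rms(ch) for ch in left_channels])
    right = np.mean([rms(ch) for ch in right_channels])
    return (left - right) / (left + right)   # > 0: left dominance, < 0: right
```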
