• Title/Summary/Keyword: Visual Word Recognition

Robustness of Lipreading against the Variations of Rotation, Translation and Scaling

  • Min, Duk-Soo; Kim, Jin-Young; Park, Seung-Ho; Kim, Ki-Jung
    • Proceedings of the IEEK Conference / 2000.07a / pp.15-18 / 2000
  • In this study, we improve the performance of a speech recognition system based on visual information from lip movements. The paper focuses on the robustness of the word recognition system against rotation, translation, and scaling (RTS) of the lip images. Different lipreading methods were used to assess the stability of recognition performance. In particular, we work out a system based on log-polar mapping, i.e., the quasi RTS-invariant Mellin transform known from related machine-vision approaches (a minimal sketch of the mapping follows below). Word recognition results are reported for an HMM (Hidden Markov Model) recognition system.

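The log-polar step lends itself to a short illustration. Below is a minimal sketch (not the paper's implementation) of the mapping using OpenCV's warpPolar: once the image is centered on the lips, scaling becomes a shift along the log-radius axis and rotation a shift along the angle axis, which is the source of the quasi RTS invariance mentioned in the abstract. The file name and output size are placeholder assumptions.

```python
# Minimal log-polar (Mellin-type) mapping of a lip region with OpenCV.
# Centering absorbs translation; in the log-polar image, scaling of the
# input appears as a shift along the log-radius axis and rotation as a
# shift along the angle axis.
import cv2

img = cv2.imread("lip_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical lip ROI
h, w = img.shape
center = (w / 2.0, h / 2.0)      # assume the lips are already centered here
max_radius = min(w, h) / 2.0     # largest radius fully inside the image

log_polar = cv2.warpPolar(
    img, (w, h), center, max_radius,
    cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR,
)
cv2.imwrite("lip_roi_logpolar.png", log_polar)
```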

Computational Model for Proving Phonological Information a Role in Visual Korean Word Recognition (한국어 시각단어재인 과정에서의 음운정보 역할 규명을 위한 계산주의적 모델)

  • Park, Ki-Nam; Lim, Heui-Seok; Han, Kun-Hee
    • Proceedings of the KAIS Fall Conference / 2007.05a / pp.178-180 / 2007
  • This paper proposes a computational model for identifying the roles of phonological and orthographic information in visual word recognition, a stage of human language processing, and the representational form of the mental lexicon, and reports experiments conducted with the proposed model. In the experiments, the computational model reproduced the phonological and orthographic neighborhood effects observed during Korean visual word recognition, providing evidence suggesting that the mental lexicon is represented in terms of phonological information in Korean visual word recognition. (A toy illustration of the neighborhood-size measure follows below.)

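For readers unfamiliar with the neighborhood-size effect the abstract mentions, the toy sketch below computes the classic measure it is defined over: the number of lexicon entries that differ from a word by exactly one symbol substitution. The miniature lexicon is invented for illustration, and treating each Korean syllable block as one symbol is an assumption; the paper's model may use a different (e.g., jamo-level) representation.

```python
# Toy orthographic neighborhood size (Coltheart's N): count the entries
# of equal length that differ from the word in exactly one position.
def neighborhood_size(word: str, lexicon: set) -> int:
    return sum(
        1
        for entry in lexicon
        if len(entry) == len(word)
        and entry != word
        and sum(a != b for a, b in zip(word, entry)) == 1
    )

# Invented miniature lexicon; each Hangul syllable block counts as one symbol.
lexicon = {"간다", "산다", "잔다", "가다", "구두"}
print(neighborhood_size("간다", lexicon))  # -> 3 (산다, 잔다, 가다)
```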

A Speech Recognition System based on a New Endpoint Estimation Method jointly using Audio/Video Informations (음성/영상 정보를 이용한 새로운 끝점추정 방식에 기반을 둔 음성인식 시스템)

  • 이동근; 김성준; 계영철
    • Journal of Broadcast Engineering / v.8 no.2 / pp.198-203 / 2003
  • We develop a method for estimating the endpoints of speech by jointly using the lip motion (visual speech) and the audio contained in multimedia data, and we propose a new speech recognition system (SRS) based on it. The endpoints of noisy speech are estimated as follows: for each test word, endpoints are detected from the visual speech and from clean speech, respectively; their difference is computed and then added to the endpoints of the visual speech to estimate those of the noisy speech (see the sketch below). This endpoint (i.e., speech interval) estimation method is used to build a new SRS. The system differs from the conventional one in that each word model in the recognizer is given a speech interval that is not identical across models but estimated separately for the corresponding word. Simulation results show that the proposed method estimates the endpoints accurately regardless of the amount of noise and consequently achieves an 8% improvement in recognition rate.
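The estimation rule is simple enough to show in a few lines. The sketch below is an illustration under the assumption that endpoints are (start, end) times in seconds: the per-word offset between acoustic and visual endpoints, measured once on clean speech, is added to the visually detected endpoints, so the estimate never touches the noisy audio. All numbers are invented placeholders.

```python
# Endpoint estimation from visual speech plus a clean-speech calibration.
def audio_visual_offset(visual, clean_audio):
    """Per-word (start, end) offset between acoustic and visual endpoints."""
    return (clean_audio[0] - visual[0], clean_audio[1] - visual[1])

def estimate_noisy_endpoints(visual, offset):
    """Shift the visual endpoints by the stored offset; ignores noisy audio."""
    return (visual[0] + offset[0], visual[1] + offset[1])

# Calibration on clean data, then estimation on a noisy utterance.
offset = audio_visual_offset(visual=(0.20, 0.85), clean_audio=(0.26, 0.80))
print(estimate_noisy_endpoints(visual=(0.31, 0.97), offset=offset))
# -> approximately (0.37, 0.92)
```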

The Effects of Visual and Phonological Similarity on Hanja Word Recognition (시각 형태 정보와 소리 정보가 한자 단어 재인에 미치는 영향)

  • Nam, Ki-Chun
    • Annual Conference on Human and Language Technology / 1995.10a / pp.244-252 / 1995
  • This study was conducted to investigate how visual information and phonological information influence word recognition and word naming, using Hanja (Chinese-character) words. In previous studies using English, visual and phonological information could not be manipulated independently, which made it difficult to examine how the two factors affect word recognition. With Hanja words, however, the two kinds of information can be manipulated independently, making them more suitable than English words. In the experiment, the stimulus onset asynchrony (SOA) between the prime word and the target word was varied across 100 ms, 200 ms, 750 ms, and 2000 ms to examine how the priming effects of visual and phonological similarity change over time. The results showed that in the 100 ms condition there was a priming effect of visual similarity only, whereas in the 200 ms, 750 ms, and 2000 ms conditions priming effects were obtained for phonological as well as visual similarity. These results indicate that the initial lexical access to Hanja words is determined by visual information.

Neuroanatomical analysis for onomatopoeia : fMRI study

  • Han, Jong-Hye; Choi, Won-Il; Chang, Yong-Min; Jeong, Ok-Ran; Nam, Ki-Chun
    • Annual Conference on Human and Language Technology / 2004.10d / pp.315-318 / 2004
  • The purpose of this study is to examine the neuroanatomical areas associated with onomatopoeia (sound-imitating words). Using a block-design fMRI paradigm, whole-brain images (N=11) were acquired during lexical decision. We examined how lexical information initiates brain activation during visual word recognition. Onomatopoeic word recognition activated the bilateral occipital lobes and the superior mid-temporal gyrus.

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo; Kang, Kathleen Gwi-Young; Yoo, Doyoung; Jeon, Inseo; Kim, Hyun Kyung; Nam, Hyeomin; Shin, Jiyoung; Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of ambiguous words in the bimodal processing system by manipulating the lexical ambiguity of a visually or auditorily presented word. Homonyms (e.g., '물었다') with two or more meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of the words was manipulated while the relative frequency of the multiple meanings of each homonym was balanced. In both experiments, which used the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies relative to control words (i.e., an ambiguity advantage) in the low-frequency condition, while an ambiguity disadvantage was found in the high-frequency condition. A similar interactive pattern was found for visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and auditory processing systems, which in turn suggests that semantic processing is not modality dependent but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

Keyword Selection for Visual Search based on Wikipedia (비주얼 검색을 위한 위키피디아 기반의 질의어 추출)

  • Kim, Jongwoo; Cho, Soosun
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.960-968 / 2018
  • Mobile visual search services use a query image to retrieve linked information from a pre-constructed database. Given this purpose, it would be more useful to perform the search on a web-based keyword search system instead of a pre-built database. In this paper, we propose an algorithm that extracts representative queries to be used as keywords in a web-based search system. To do this, we use image classification labels generated by a deep-learning CNN (Convolutional Neural Network), which offers remarkable performance in image recognition. In the query extraction algorithm, meaningful dictionary words are extracted using Wikipedia, and hierarchical categories are constructed using WordNet (a sketch of this step follows below). The performance of the proposed algorithm is evaluated by measuring the system response time.
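As a rough sketch of the WordNet part of the pipeline (the Wikipedia filtering step is omitted), the hypernym chain of a classification label can supply the hierarchical categories. The snippet below uses NLTK's WordNet corpus; the label 'golden_retriever' is a stand-in for the CNN output, not taken from the paper.

```python
# Hierarchical categories for a CNN label via WordNet hypernyms (NLTK).
import nltk
nltk.download("wordnet", quiet=True)  # one-time corpus download
from nltk.corpus import wordnet as wn

label = "golden_retriever"            # hypothetical CNN classification label
synsets = wn.synsets(label)
if synsets:
    # First path from the WordNet root down to the label's synset.
    path = synsets[0].hypernym_paths()[0]
    categories = [s.lemmas()[0].name() for s in path]
    print(categories)  # e.g. ['entity', ..., 'dog', ..., 'golden_retriever']
```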

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon; Jieun Lee; Dohyeon Yeo; Yong-Ju Lee; SeungJun Kim
    • ETRI Journal / v.46 no.1 / pp.22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial intelligence-based speech recognition technologies. Services with degraded performance can be deployed as limited systems that assure good performance in certain environments, but this impairs the overall quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model robust to various noise settings, mimicking elements of human dialogue recognition. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition (a minimal sketch of the log-Mel front end follows below). A dense spatial-temporal convolutional neural network model extracts features from log-Mel spectrograms transformed for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess the signal-to-noise ratio in nine synthesized noise environments, with the proposed model exhibiting lower average error rates. The error rate for the AVSR model using the three-feature multi-fusion method is 1.711%, compared with the general rate of 3.939%. This model is applicable in noise-affected environments owing to its enhanced stability and recognition rate.
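For concreteness, here is a minimal sketch of a log-Mel spectrogram front end of the kind the abstract describes, using librosa; the file name, sample rate, and frame parameters are assumptions rather than the paper's configuration.

```python
# Log-Mel spectrogram features for the audio branch of an AVSR model.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical input file
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=512, hop_length=160, n_mels=80  # 10-ms hop at 16 kHz
)
log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (80, num_frames)
print(log_mel.shape)
```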

The Cerebral activation of Korean visual word recognition in Ventral stream (한글 시각단어재인의 초기처리과정에 대한 대뇌 활성화 양상 : 'VWFA(visual word form area)'를 중심으로)

  • Sohn, Hyo-Jeong; Jung, Jae-Beom; Pyun, Sung-Bum; Song, Hui-Jin; Lee, Jae-Jun; Min, Sung-Ki; Chang, Yong-Min; Nam, Ki-Chun
    • Proceedings of the Korean Society for Cognitive Science Conference / 2006.06a / pp.119-123 / 2006
  • Written words are a major medium of communication, and when people recognize them they are largely unaffected by wide perceptual variations in letter size, shape, position, and typeface. This implies that written words are processed somewhat differently from other objects and are stored in the mind in an abstract form. Since this stage is regarded as a crucial step for accessing lexical knowledge during visual word recognition, studies have examined how the brain regions involved are localized. In this study, we examined, for Korean (Hangul) visual word recognition, the activation pattern of the left ventral occipito-temporal region that Cohen, Dehaene, and colleagues have proposed as the 'visual word form area'. The results showed that the left 'VWFA' was sensitive to word familiarity, whereas the contralateral site in the right hemisphere was sensitive to lexicality.
