• Title/Abstract/Keyword: Recognition Psychology

Search results: 185 items

ON THE USE OF SPEECH RECOGNITION TECHNOLOGY FOR FOREIGN LANGUAGE PRONUNCIATION TEACHING

  • Keikichi Hirose;Carlos T. Ishi;Goh Kawai
    • 대한음성학회:학술대회논문집 / 대한음성학회 July 2000 Conference Proceedings / pp.17-28 / 2000
  • Speech technologies have recently shown notable advances and now play major roles in computer-aided language learning systems. In this paper, the use of speech recognition technology is reviewed in the context of our system for teaching English pronunciation to Japanese speakers.


한글 시각단어재인의 초기처리과정에 대한 대뇌 활성화 양상 : 'VWFA(visual word form area)'를 중심으로 (The Cerebral Activation of Korean Visual Word Recognition in the Ventral Stream)

  • 손효정;정재범;편성범;송희진;이재준;민승기;장용민;남기춘
    • 한국인지과학회:학술대회논문집 / 한국인지과학회 2006 Spring Conference / pp.119-123 / 2006
  • Written text is an important medium of communication, and when people recognize written words they are largely unaffected by wide perceptual variations such as letter size, shape, position, and typeface. This suggests that text is processed somewhat differently from other objects and is stored mentally in an abstract form. Because this process is regarded as a key step in accessing lexical knowledge during visual word recognition, studies have investigated the cortical localization of the regions involved. In this study, we examined cerebral activation during Korean visual word recognition in the left ventral occipito-temporal region, which Cohen, Dehaene, and colleagues have proposed as the 'visual word form area'. The results showed that the left 'VWFA' was sensitive to word familiarity, while the contralateral site in the right hemisphere was sensitive to lexicality.


육미지황탕가감방-1, 2가 학습과 기억능력에 미치는 영향에 관한 임상연구 (Clinical Study for YMG-1, 2's Effects on Learning and Memory Abilities)

  • 박은혜;정명숙;박창범;지상은;이영혁;배현수;신민규;김현택;홍무창
    • 동의생리병리학회지 / Vol.16 No.5 / pp.976-988 / 2002
  • The aim of this study was to examine the memory and attention enhancement effects of YMG-1 and YMG-2, modified herbal extracts of Yukmijihwang-tang (YMJ). YMJ, composed of six herbal medicines, has been used for hundreds of years in Asian countries to restore normal bodily functions, strengthen the constitution, and nourish and invigorate kidney function. A series of studies reported that YMJ and its components enhance memory retention, protect neuronal cells from reactive-oxygen attack, and boost immune activity. Recently, microarray analysis suggested that YMG-1 protects against neurodegeneration by modulating various neuron-specific genes. A total of 55 subjects were divided into YMG-1 (n=20), YMG-2 (n=20), and control (C; n=15) groups. Before treatment, all subjects completed neuropsychological assessments (the K-WAIS and Rey-Kim memory tests) and a psychophysiological assessment of event-related potentials (ERP) during an auditory oddball task and a repeated word recognition task. They were assessed again with the same methods after 6 weeks of drug treatment. Although no significant drug effect was found on the Rey-Kim memory test, a significant interaction (P = .010) between the YMG-2 and C groups was identified in the digit span and block design scores, which are subscales of the K-WAIS; a similar but marginal interaction (P = .064) was found between the YMG-1 and C groups. In the ERP analysis, only the YMG-1 group showed a decreasing tendency in P300 latency during the oddball task while the other groups tended to increase, yielding a significant session-by-group interaction (p = .004). Given the known relationship between P300 latency and the speed of information processing, this result implies enhanced cognitive function; however, no evidence of a significant drug effect was found in either amplitude or latency. Taken together, these results suggest that YMG-1 and YMG-2 may enhance attention, resulting in enhanced memory processing. Further studies are necessary to elucidate the detailed mechanisms of YMG's effects on learning and memory.

Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Yillbyung Lee
    • 한국감성과학회:학술대회논문집 / Proceedings of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium / pp.126-132 / 2000
  • This paper presents facial expression recognition based on Gabor wavelets using a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation provides edge extraction of major face components, using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, we extract sparse features of facial expressions from the extracted edge information using the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions but also expressions of various internal states.
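The second step of the pipeline above (fuzzy C-means on extracted edge features) can be illustrated with a minimal FCM sketch. The toy 2-D data, function name, and parameter choices below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)               # each row of U sums to 1
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))             # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy 2-D "edge feature" vectors forming two well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

Unlike hard k-means, each sample keeps a graded membership in every cluster, which is what makes the extracted features "sparse" rather than one-hot.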


Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol.31 No.3 / pp.427-435 / 2012
  • Objective: The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotional states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: Temperatures in the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by emotion. Linear discriminant analysis for emotion recognition yielded a correct classification rate of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
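The linear discriminant analysis used in the study above can be sketched for the simplest two-class case (Fisher's discriminant). The (nose, forehead) temperature-difference values and class means below are made-up illustrations, not the study's data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher LDA: discriminant direction and decision midpoint."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)  # w = Sw^-1 (m1 - m0)
    return w, (m0 + m1) / 2.0

def classify(x, w, mid):
    return int((x - mid) @ w > 0)   # 0 -> first class, 1 -> second class

# Hypothetical (nose, forehead) temperature-difference features for two emotions.
rng = np.random.default_rng(0)
anger = rng.normal([-0.5, -0.3], 0.05, (30, 2))
fear = rng.normal([0.4, 0.2], 0.05, (30, 2))
w, mid = fisher_lda(anger, fear)
acc = np.mean([classify(x, w, mid) == 0 for x in anger] +
              [classify(x, w, mid) == 1 for x in fear])
```

The multi-class case used in the study generalizes this by comparing between-class and within-class scatter across all four emotions.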

자연스러운 정서 반응의 범주 및 차원 분류에 적합한 음성 파라미터 (Acoustic parameters for induced emotion categorizing and dimensional approach)

  • 박지은;박정식;손진훈
    • 감성과학 / Vol.16 No.1 / pp.117-124 / 2013
  • This study examined how well the emotion of natural speech can be recognized, both categorically and dimensionally, using acoustic features commonly employed in speech recognizers: MFCC, LPC, energy, and pitch-related parameters. To obtain natural emotional responses, we used emotion-eliciting stimuli whose validity and effectiveness had been established in prior work; seven such stimuli were presented to 110 undergraduates, and the elicited vocal responses were recorded and analyzed. With the parameters extracted from each recording as independent variables, linear discriminant analysis (LDA) classified the seven emotion categories; to overcome the limits of categorical classification, stepwise multiple regression models were derived to identify the acoustic parameters that best predict four emotion dimensions (valence, arousal, intensity, potency). The average discrimination rate for the seven emotion categories was 62.7%, and all four dimensional regression models were statistically significant at p < .001. In conclusion, this study identified parameters useful for classifying natural emotional speech and demonstrated the feasibility of emotion classification under both categorical and dimensional approaches; directions for improvement are described in the discussion.
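The stepwise regression used above to find the best-predicting parameters can be approximated by a greedy forward-selection sketch; the toy data and the mapping of columns to "acoustic features" are hypothetical, and real stepwise procedures also apply entry/removal significance tests that are omitted here:

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedy forward selection: repeatedly add the feature that most reduces SSE."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_sse = None, np.inf
        for j in remaining:
            cols = np.column_stack([np.ones(len(y))] + [X[:, i] for i in selected + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            sse = float(np.sum((y - cols @ beta) ** 2))
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy data: an "arousal" score driven by acoustic features 0 and 2 only.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.1, 200)
sel = forward_stepwise(X, y, k=2)
```

With strongly informative features, the greedy pass recovers exactly the columns that generated the target.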


Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye;Park, Byoung-Jun;Eum, Yeong-Ji;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol.30 No.6 / pp.705-713 / 2011
  • Objective: The aim of this study is to compare the results of several algorithms that classify three emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion from physiological signals, and applying such detection in human-computer interaction systems is an important goal. Method: 217 students participated in this experiment. While three kinds of emotional stimuli were presented, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured twice: for 60 seconds as the baseline, and for 60 to 90 seconds during the emotional state. The signals obtained from the baseline and emotional-state sessions were each analyzed over 30 seconds. Participants rated their own feelings on an emotional assessment scale after the stimuli were presented. Emotion classification was performed with linear discriminant analysis (LDA; SPSS 15.0), a support vector machine (SVM), and a multilayer perceptron (MLP), using difference values obtained by subtracting the baseline from the emotional state. Results: The emotional stimuli had 96% validity and 5.8-point effectiveness on average. Statistical analysis showed significant differences in ANS responses among the three emotions. LDA classified the three emotions with 83.4% accuracy, compared with 75.5% for the SVM and 55.6% for the MLP. Conclusion: This study confirmed that the three emotions are better classified by LDA using various physiological features than by the SVM or MLP. Further studies comparing the classification accuracy of additional algorithms are needed to establish the stability and reliability of this result. Application: These findings could improve the recognition of various human emotions from physiological signals and be applied to human-computer interaction systems for recognizing human emotions.
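The classifier input described above, a difference value that subtracts the baseline from the emotional state, can be sketched as follows; the channel count, sampling rate, and signal values are illustrative assumptions, not the study's recordings:

```python
import numpy as np

def difference_features(baseline, emotion):
    """Baseline-corrected features: mean over the emotion window minus baseline mean."""
    return emotion.mean(axis=0) - baseline.mean(axis=0)

# Hypothetical 30 s windows of 5 ANS channels (EDA, SKT, ECG, RESP, PPG) at 10 Hz.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, (300, 5))
emotion = rng.normal(0.5, 0.1, (300, 5))   # uniformly elevated responses
delta = difference_features(baseline, emotion)
```

Subtracting each participant's own baseline removes stable individual differences before the per-emotion differences are fed to LDA, SVM, or MLP.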

정당방위 유형, 신문기사의 정당방위 인정비율, 판단자 개인 특성이 정당방위 판단에 미치는 영향 (The Effects of Self-Defense Categories, Rates of Self-Defense Recognition in News Articles, and Individual Characteristics of Mock Jurors on Self-Defense Judgments)

  • 김용애;김민지
    • 한국심리학회지:법 / Vol.12 No.2 / pp.171-197 / 2021
  • This study empirically examined how laypeople judge claims of self-defense and which factors can influence those judgments. The factors were divided into self-defense category, news articles about self-defense, and the judges' individual characteristics (tolerance of violence and legal attitudes); data were collected and analyzed from 651 adults aged 20 or older. Participants were assigned to one of three categorized self-defense situations, were given a related news article and scenario for that category, and then judged whether the act constituted self-defense. Their attitudes toward the law and tolerance of violence were also measured, and the effects of these on the judgments were analyzed. The results showed that the rate of accepting self-defense was highest for self-defense on one's own behalf, whereas the opposite pattern appeared for self-defense against state authority, where the rejection rate was far higher. Negative articles reporting that self-defense claims are rarely accepted also influenced the judgments, and participants' tolerance of violence and legal attitudes were confirmed to affect them as well. The judgment processes and influencing factors identified here may serve as considerations for preventing biased judgments in actual jury trials. Finally, the implications and limitations of this study and directions for future research are discussed.

안정적인 실시간 얼굴 특징점 추적과 감정인식 응용 (Robust Real-time Tracking of Facial Features with Application to Emotion Recognition)

  • 안병태;김응희;손진훈;권인소
    • 로봇학회논문지 / Vol.8 No.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with Lucas-Kanade (LK) optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
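The k-NN classification step reported above can be sketched with a minimal majority-vote implementation; the toy four-dimensional feature vectors and three class labels below are illustrative, not the paper's actual geometric descriptors:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return int(vals[np.argmax(counts)])

# Toy 4-D feature vectors for three expression classes centered at 0, 1, and 2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, (20, 4)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)
pred = knn_predict(X, y, np.full(4, 1.02))
```

In the pipeline above, the tracked facial-feature positions would supply the feature vectors, with one label per target emotion.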

제스처와 EEG 신호를 이용한 감정인식 방법 (Emotion Recognition Method using Gestures and EEG Signals)

  • 김호덕;정태민;양현창;심귀보
    • 제어로봇시스템학회논문지 / Vol.13 No.9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology develops, the neural basis of the functional areas of emotion processing is gradually being revealed, so we use EEG to measure the fundamental brain areas that control human emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important as a useful communication medium between humans and computers; gesture recognition is commonly studied with computer vision methods. Most existing research on emotion recognition uses either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, taking driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using EEG signals or gestures alone. For both modalities, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
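The combined use of EEG and gesture features described above is commonly realized as feature-level fusion. The sketch below simply concatenates per-sample vectors before classification; the dimensions are chosen arbitrarily for illustration, and the reinforcement-learning-based IFS step itself is not reproduced here:

```python
import numpy as np

def fuse_features(eeg_feats, gesture_feats):
    """Feature-level fusion: concatenate EEG and gesture vectors per sample."""
    return np.hstack([eeg_feats, gesture_feats])

rng = np.random.default_rng(0)
eeg = rng.normal(size=(10, 8))       # e.g. band powers from 8 EEG channels
gesture = rng.normal(size=(10, 3))   # e.g. head/hand motion descriptors
fused = fuse_features(eeg, gesture)
```

A downstream classifier then sees one joint vector per sample, which is how using both modalities can outperform either alone.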