• Title/Summary/Keyword: Learning Emotion


EFL College Students' Learning Experiences during Film-based Reading Class: Focused on the Analysis of Students' Reflective Journals

  • Baek, Jiyeon
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.49-55
    • /
    • 2019
  • In the information age, newly produced knowledge is mostly written in English, so demand for English language learning in EFL contexts remains strong. However, most EFL learners lack interest and motivation in text-based reading classes. In this context, film is one of the most widely used materials in English reading classes, since modern learners are highly familiar with audiovisual media. The purpose of this study is to investigate how Korean EFL learners experienced a film-based reading class. Specifically, it analyzes the students' perceptions of the class and the learning strategies they used during it. To interpret the learners' classroom experiences comprehensively, a coding system with five categories was developed: report, emotion, reflection, evaluation, and future plans. The analysis showed that using movies in English reading classes had positive effects on reading comprehension and on inferring word meanings. The most frequently used learning strategies were affective strategies, which helped students regulate their emotions, attitudes, motivations, and values, whereas memorization strategies were rarely used. The study therefore suggests that using movies in the EFL reading classroom engages students' attention and helps them acquire and activate schemata useful for understanding text-based reading materials.

The Study of the Analysis of a User's Perception of Screen Component for Inducing Emotion in the 3D Virtual Reality Environment (3차원 가상현실 환경에서의 감성 유발 화면 구성 요소에 대한 사용자 인식 분석 연구)

  • Han, Hyeong-Jong
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.7
    • /
    • pp.165-176
    • /
    • 2018
  • With the development of information and communication technology, the possibility of using 3D virtual reality in education has been explored. In particular, the screen composition of a virtual reality environment can induce emotions in the user that may affect learning, yet there is little research on which aspects of the screen elicit emotion. The purpose of this study is to analyze users' perceptions of the screen components that induce emotion in a virtual reality learning environment. Using multidimensional scaling (MDS), users' perceptions of the main screen of a representative virtual reality learning platform were investigated. As a result, the dimension of depth on the screen and the dynamics of the avatar related to movement were identified. This study is meaningful in exploring technical variables that can induce emotion among the screen elements of virtual reality content.
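
The abstract names classical multidimensional scaling as its analysis method. A minimal sketch of the core computation, on toy distance data (not the study's survey responses): double-center the squared distance matrix and recover coordinates from its dominant eigenpair.

```python
import math

# Classical MDS in one dimension: recover coordinates from pairwise
# distances. Toy data only; the study's actual dissimilarity ratings
# are not reproduced here.

def double_center(D):
    """B = -1/2 * J D^2 J, with J the centering matrix."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in D2]
    col = [sum(D2[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[-0.5 * (D2[i][j] - row[i] - col[j] + tot) for j in range(n)]
            for i in range(n)]

def top_eigen(B, iters=200):
    """Power iteration for the dominant eigenpair of a symmetric matrix."""
    n = len(B)
    v = [float(i + 1) for i in range(n)]  # not orthogonal to the eigenvector
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(B[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Three stimuli lying on a hidden 1-D scale at positions 0, 1, 3.
D = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
lam, v = top_eigen(double_center(D))
coords = [math.sqrt(lam) * x for x in v]
print(coords)  # recovered positions, up to shift/reflection
```

The recovered coordinates reproduce the original pairwise distances (1, 2, 3) exactly, up to translation and reflection, which is all MDS promises.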

Implementation of Multi Channel Network Platform based Augmented Reality Facial Emotion Sticker using Deep Learning (딥러닝을 이용한 증강현실 얼굴감정스티커 기반의 다중채널네트워크 플랫폼 구현)

  • Kim, Dae-Jin
    • Journal of Digital Contents Society
    • /
    • v.19 no.7
    • /
    • pp.1349-1355
    • /
    • 2018
  • Recently, a variety of content services over the internet have become popular; among them, MCN (Multi Channel Network) platform services have spread with the generalization of smartphones. The MCN platform is based on streaming, and various features are added to improve the service. Among these, augmented reality sticker services using face recognition are widely used. In this paper, we implemented an MCN platform that overlays an augmented reality sticker on the face according to the recognized facial emotion, in order to further increase user interest. We analyzed seven facial emotions using deep learning for facial emotion recognition and applied emotion stickers to the face based on the results. To implement the proposed MCN platform, emotion stickers were applied on the client side, and servers capable of streaming were designed.

Robot's Emotion Generation Model based on Generalized Context Input Variables with Personality and Familiarity (성격과 친밀도를 지닌 로봇의 일반화된 상황 입력에 기반한 감정 생성)

  • Kwon, Dong-Soo;Park, Jong-Chan;Kim, Young-Min;Kim, Hyoung-Rock;Song, Hyunsoo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.2
    • /
    • pp.91-101
    • /
    • 2008
  • For friendly interaction between humans and robots, emotional exchange has recently become more important. Researchers investigating emotion generation models have tried to make a robot's emotional state more natural and to improve the models' usability for robot designers. Varied emotion generation is also needed to increase a robot's believability. In this paper, we therefore use a hybrid emotion generation architecture and define generalized context inputs for the emotion generation model so that designers can easily implement it on a robot. We also developed personality and familiarity models, grounded in psychology, for varied emotion generation. The robot's personality is implemented with the emotional stability dimension of the Big Five, and familiarity is built from familiarity generation, expression, and learning procedures based on human social relationships such as balance theory and social exchange theory. We verify the emotion generation model by implementing it in a 'user calling and scheduling' scenario.
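
The abstract describes emotion generation modulated by personality (emotional stability) and familiarity. A purely hypothetical sketch of that idea, with made-up functional forms and constants (the paper's hybrid architecture is not reproduced): lower stability amplifies emotional intensity, and familiarity biases appraisal upward.

```python
# Hypothetical emotion-generation rule: all weights, thresholds, and
# labels below are illustrative assumptions, not the paper's model.

def generate_emotion(context_valence, stability, familiarity):
    """Map a context input in [-1, 1] to an emotion label and intensity.

    stability and familiarity are in [0, 1]. Lower emotional stability
    amplifies the reaction; higher familiarity biases valence upward
    (a friendlier appraisal of the same context).
    """
    valence = max(-1.0, min(1.0, context_valence + 0.2 * familiarity))
    intensity = abs(valence) * (1.5 - stability)
    if valence > 0.1:
        label = "joy"
    elif valence < -0.1:
        label = "distress"
    else:
        label = "neutral"
    return label, round(intensity, 3)

# Same contexts, different "personalities":
print(generate_emotion(0.5, stability=0.8, familiarity=0.5))   # calm, familiar
print(generate_emotion(-0.5, stability=0.2, familiarity=0.0))  # reactive, unfamiliar
```

The point of the sketch is the separation of concerns the abstract implies: context determines *which* emotion, while personality and familiarity parameters modulate *how strongly* it is felt.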


Emotion Recognition Method using Gestures and EEG Signals (제스처와 EEG 신호를 이용한 감정인식 방법)

  • Kim, Ho-Duck;Jung, Tae-Min;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.9
    • /
    • pp.832-837
    • /
    • 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology develops, the neural basis of the functional areas of emotion processing is gradually being revealed, so we use EEG to measure the fundamental areas of the brain that control human emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition matters as a communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Most existing research on emotion recognition uses either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, targeting driver emotion in particular. Experimental results show that using both EEG signals and gestures yields higher recognition rates than using either alone. For both modalities, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
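
The abstract reports that combining EEG and gesture cues beats either modality alone. A minimal, hypothetical sketch of score-level fusion with toy numbers (the paper's IFS-based pipeline is not reproduced), showing how a confident gesture cue can correct an ambiguous EEG reading:

```python
# Hypothetical score-level fusion of two unimodal emotion classifiers.
# Each classifier returns per-emotion scores; fusion averages them.

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse_scores(eeg_scores, gesture_scores, w_eeg=0.5):
    """Weighted average of per-emotion scores from the two modalities."""
    return {e: w_eeg * eeg_scores[e] + (1 - w_eeg) * gesture_scores[e]
            for e in EMOTIONS}

def predict(scores):
    return max(scores, key=scores.get)

# Toy scores: EEG leans "angry"; gestures clearly favor "happy".
eeg = {"neutral": 0.1, "happy": 0.35, "angry": 0.45, "sad": 0.1}
gesture = {"neutral": 0.1, "happy": 0.6, "angry": 0.2, "sad": 0.1}

fused = fuse_scores(eeg, gesture)
print(predict(eeg), predict(gesture), predict(fused))
```

Here the EEG-only prediction is "angry", but the fused scores side with the gesture evidence and output "happy", illustrating why multimodal fusion can raise recognition rates.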

Exploration of deep learning facial motions recognition technology in college students' mental health (딥러닝의 얼굴 정서 식별 기술 활용-대학생의 심리 건강을 중심으로)

  • Li, Bo;Cho, Kyung-Duk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.333-340
    • /
    • 2022
  • The COVID-19 pandemic has made everyone anxious and forced people to keep their distance, so collective assessment and screening of college students' mental health is necessary at the start of every school year. This study trains a multi-layer perceptron neural network model with deep learning to identify facial emotions. After training, real pictures and videos were input for face detection; once the positions of faces in the samples were detected, emotions were classified, and the predicted emotional results were sent back and displayed on the pictures. The results show an accuracy of 93.2% on the test set and 95.57% in practice. The recognition rate is 95% for anger, 97% for disgust, 96% for happiness, 96% for fear, 97% for sadness, 95% for surprise, and 93% for neutral. Such efficient emotion recognition can provide objective data support for capturing negative emotions. A deep learning emotion recognition system can complement traditional psychological activities by providing additional dimensions of psychological indicators for mental health.
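
The classifier above is a multi-layer perceptron over seven emotion classes. A minimal sketch of such a network's forward pass, with made-up toy weights (the paper's trained parameters and face-image features are not reproduced):

```python
import math

# MLP forward pass for 7-class emotion prediction. The 2-D input and
# all weights are illustrative toy values, not a trained model.

EMOTIONS = ["anger", "disgust", "happiness", "fear",
            "sadness", "surprise", "neutral"]

def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, W, b):
    # W: one row of input weights per output unit
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def softmax(v):
    m = max(v)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def predict(x, W1, b1, W2, b2):
    h = relu(linear(x, W1, b1))
    probs = softmax(linear(h, W2, b2))
    return EMOTIONS[probs.index(max(probs))], probs

# Toy parameters: 2 inputs -> 2 hidden units -> 7 emotion logits,
# rigged so the "happiness" unit dominates for this input.
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2 = [[0.0, 0.0] for _ in EMOTIONS]
W2[EMOTIONS.index("happiness")] = [2.0, 2.0]
b2 = [0.0] * len(EMOTIONS)

label, probs = predict([1.0, 1.0], W1, b1, W2, b2)
print(label)  # → "happiness" with these toy weights
```

A real system would feed flattened or pre-extracted face features into a much wider network and learn the weights by backpropagation; the forward structure, hidden ReLU layer plus softmax output over the seven emotions, is the same.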

Parting Lyrics Emotion Classification using Word2Vec and LSTM (Word2Vec과 LSTM을 활용한 이별 가사 감정 분류)

  • Lim, Myung Jin;Park, Won Ho;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.9 no.3
    • /
    • pp.90-97
    • /
    • 2020
  • With the development of the Internet and smartphones, digital music is easily accessible, and interest in music search and recommendation is accordingly increasing. As a method of recommending music, research has classified genre or emotion using melodic features such as pitch, tempo, and beat. However, since lyrics are one of the means of expressing human emotion in music and their role is growing, emotion classification based on lyrics is needed. In this paper, we therefore analyze the emotions of parting lyrics in order to subdivide the emotions of parting. After constructing an emotion dictionary by vectorizing the similarity between words appearing in parting lyrics through Word2Vec learning, we propose a method of classifying the emotions of parting lyrics using Word2Vec and LSTM, in which an LSTM trained on the lyrics groups them by similar emotions.
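
The emotion-dictionary step above rests on comparing word vectors by similarity. A minimal sketch of that idea with made-up toy vectors standing in for trained Word2Vec embeddings (the paper's corpus, embeddings, and LSTM classifier are not reproduced): a lyric is assigned the parting sub-emotion whose seed-word centroid it is closest to in cosine similarity.

```python
import math

# Toy 2-D word vectors standing in for Word2Vec embeddings; real
# embeddings would be learned from a lyrics corpus.
WORD_VECS = {
    "tears":   [0.9, 0.1],
    "goodbye": [0.8, 0.2],
    "regret":  [0.7, 0.1],
    "smile":   [0.1, 0.9],
    "hope":    [0.2, 0.8],
}

# Hypothetical emotion "dictionary": seed words per parting sub-emotion.
EMOTION_SEEDS = {"sorrow": ["tears", "regret"],
                 "acceptance": ["smile", "hope"]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(words):
    vecs = [WORD_VECS[w] for w in words]
    return [sum(c) / len(c) for c in zip(*vecs)]

def classify(lyric_words):
    lyric_vec = centroid([w for w in lyric_words if w in WORD_VECS])
    sims = {emo: cosine(lyric_vec, centroid(seeds))
            for emo, seeds in EMOTION_SEEDS.items()}
    return max(sims, key=sims.get)

print(classify(["goodbye", "tears"]))  # → "sorrow"
print(classify(["smile", "hope"]))     # → "acceptance"
```

The paper's actual classifier is an LSTM over word sequences rather than a centroid rule, but the dictionary construction it describes, grouping words by embedding similarity, is what this sketch illustrates.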

Effects of Emotion-Centered Environmental Education Using Educational Theatre on Elementary School Students' Environmental Literacy (교육 연극을 활용한 감성 중심 환경교육이 초등학생의 환경 소양에 미치는 영향)

  • Choi, Hye-Ran;Lee, Sang-Won
    • Hwankyungkyoyuk
    • /
    • v.22 no.1
    • /
    • pp.43-55
    • /
    • 2009
  • The purpose of this study was to investigate the influence of an environmental education program using educational theatre on the environmental literacy of 5th graders at an elementary school in Seoul. The students were divided into an experimental group and a control group: the experimental group received the emotion-centered environmental education program using educational theatre, while the control group followed the regular curriculum of general lectures about the environment. Results were analyzed with SPSS 12.0. The major results were as follows. First, the researcher was able to develop and apply a teaching-learning model for environmental education using educational theatre by abstracting and recreating environment-related content from the 5th grade curriculum. Second, applying the emotion-centered program using educational theatre contributed to improving the elementary students' environmental knowledge. Third, the emotion-centered program influenced the elements of environmental knowledge evenly across the four goal levels of environmental education. Finally, the higher a student's emotional quotient, the more the student's environmental knowledge improved under the program. In conclusion, the emotion-centered environmental education program using educational theatre is usable in elementary schools and has a positive effect on the environmental knowledge of 5th graders. At a time when diverse environmental education methods are required, this program is worth trying as a new method, and further research on the related teaching-learning activities is needed.


Analysis of Emotions in Broadcast News Using Convolutional Neural Networks (CNN을 활용한 방송 뉴스의 감정 분석)

  • Nam, Youngja
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.8
    • /
    • pp.1064-1070
    • /
    • 2020
  • In Korea, video-based news broadcasters are primarily classified into terrestrial broadcasters, general programming cable broadcasters, and YouTube broadcasters. Recently, news broadcasters have become more subjective as they target specific desired audiences. This violates the audience's normative expectations of impartiality and neutrality in journalism and may negatively affect audience perceptions of issues. This study examined whether broadcast news reporting conveys emotion and, if so, how news broadcasters differ by emotion type. Emotion was classified into neutrality, happiness, sadness, and anger using a convolutional neural network, a class of deep neural networks. Results showed that news anchors and reporters tend to express emotion during TV broadcasts regardless of the broadcast system. This study provides the first quantitative investigation of emotion in broadcast news, and the first deep learning-based approach to its emotion analysis.
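
The classifier above is a convolutional neural network. A minimal sketch of its core operation on a toy sequence (illustrative filter values, not the paper's trained model): a 1-D convolution slides a learned filter over the input, and max pooling keeps the strongest response as a feature.

```python
# 1-D convolution + max-over-time pooling, the building block of a CNN
# classifier. The "embedding" values and filter are toy assumptions.

def conv1d(seq, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a sequence."""
    k = len(kernel)
    return [sum(kernel[j] * seq[i + j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def max_pool(feature_map):
    """Max-over-time pooling: one scalar feature per filter."""
    return max(feature_map)

# A toy scalar "embedding" per position; a real model uses dense vectors
# (or image patches, for video frames).
tokens = [0.0, 1.0, 0.75, 0.0, 0.25]
edge_filter = [1.0, -1.0]  # responds to a rise/drop between neighbors

fmap = conv1d(tokens, edge_filter)
print(fmap)            # [-1.0, 0.25, 0.75, -0.25]
print(max_pool(fmap))  # 0.75
```

A full CNN stacks many such filters, applies a nonlinearity, and feeds the pooled features into a softmax layer over the emotion classes (here neutrality, happiness, sadness, anger); only the weights differ, learned by backpropagation.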

Development of a driver's emotion detection model using auto-encoder on driving behavior and psychological data

  • Jung, Eun-Seo;Kim, Seo-Hee;Hong, Yun-Jung;Yang, In-Beom;Woo, Jiyoung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.35-43
    • /
    • 2023
  • Emotion recognition while driving is essential for preventing accidents. Moreover, in the era of autonomous driving, the automobile becomes the subject of mobility and requires more emotional communication with drivers, and the emotion recognition market is gradually expanding. Accordingly, this study classifies the driver's emotions into seven categories using psychological and behavioral data, which are relatively easy to collect. Latent vectors extracted with an auto-encoder model were also used as features in the classification model, and we confirmed that this improved performance. We further confirmed that performance improved when using the framework presented in this paper compared to when existing EEG data were included. Finally, a driver emotion classification accuracy of 81% and an F1-score of 80% were achieved using only psychological, personal-information, and behavioral data.
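
The abstract's key idea is extracting latent vectors with an auto-encoder and reusing them as classifier features. A minimal sketch under strong simplifying assumptions (a tied-weight *linear* auto-encoder with a 1-D latent, toy 2-D "behavior" data, gradient descent written out by hand; the paper's architecture and data are not reproduced):

```python
import random

# Tied-weight linear auto-encoder: encode z = w.x, decode r = z * w.
# Training minimizes reconstruction error; the latent z then serves
# as a compact feature. All data and hyperparameters are toy values.

def encode(x, w):
    return w[0] * x[0] + w[1] * x[1]

def decode(z, w):
    return [z * w[0], z * w[1]]

def loss(data, w):
    total = 0.0
    for x in data:
        r = decode(encode(x, w), w)
        total += (x[0] - r[0]) ** 2 + (x[1] - r[1]) ** 2
    return total / len(data)

def train(data, w, lr=0.01, steps=500):
    for _ in range(steps):
        grad = [0.0, 0.0]
        for x in data:
            z = encode(x, w)
            r = decode(z, w)
            err = [r[0] - x[0], r[1] - x[1]]
            # d r_i / d w_j = z * delta_ij + w_i * x_j
            for j in range(2):
                grad[j] += 2 * sum(
                    err[i] * ((z if i == j else 0.0) + w[i] * x[j])
                    for i in range(2))
        w = [w[j] - lr * grad[j] / len(data) for j in range(2)]
    return w

random.seed(0)
# Toy samples lying near the direction (1, 1), plus noise.
data = [[t, t + random.uniform(-0.1, 0.1)] for t in [0.5, 1.0, 1.5, 2.0]]
w0 = [0.3, 0.1]
w = train(data, w0)
print(loss(data, w0), "->", loss(data, w))  # reconstruction loss drops
z_feature = encode(data[0], w)              # latent value used as a feature
```

In this linear special case the auto-encoder recovers the data's principal direction; real auto-encoders add nonlinear layers, but the recipe, train for reconstruction, then feed the latent codes to a downstream emotion classifier, is the same.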