• Title/Summary/Keyword: Sadness Layer

Search Results: 8

The Layer of Emotion that Makes up the Poem "Falling Flowers (落花)" by Cho Ji-Hoon

  • In-Kwa, Park
    • International Journal of Advanced Culture Technology
    • /
    • v.5 no.4
    • /
    • pp.1-9
    • /
    • 2017
  • This study of Cho Ji-Hoon's poem "Falling Flowers" attempts to identify the mechanism of poetic healing and to apply it in literary therapy. I examined how the poem encodes crying, focusing on the organic relationship of the layers represented in the poem and assigning emotional codes to the layers of functor and argument. The results are as follows. Strophes 1-3 constitute the Separation Layer, strophes 4-6 the Time Layer, and strophes 7-9 the Sadness Layer. The poem encodes a progression in which the crying of the cuckoo in strophes 1-3 transforms into the crying of the poetic narrator in the final strophe 9. The emotional layers of the poem stand in the functional relation (strophes 1-3) ⊂ (strophes 4-6) ⊂ (strophes 7-9). Because these functional relations together encode sadness, the emotion signal "U + U + U" is encrypted as "UUU". The U of strophes 1-3 is the cry of the cuckoo, and the U of strophes 4-6 is a cry of blood; "UUU" is therefore the blood cry of the poetic narrator. Han (恨) lies at the base of this poem, so as "Falling Flowers" is uttered, the poetic mechanism of U, the code of sadness, is amplified, and we are caught up in the emotion of wanting to cry. This poetic catharsis of crying provides the effect of literary therapy. In the future, a more effective literary therapy technique could be developed by building a literary therapy program on a poetic structure like this one.

Pattern Classification of Four Emotions using EEG (뇌파를 이용한 감정의 패턴 분류 기술)

  • Kim, Dong-Jun;Kim, Young-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.4
    • /
    • pp.23-27
    • /
    • 2010
  • This paper performs emotion classification experiments to find the best parameters of the electroencephalogram (EEG) signal. Linear predictor coefficients, band cross-correlation coefficients of the fast Fourier transform (FFT), and autoregressive model spectra are used as parameters of the 10-channel EEG signal. A multi-layer neural network is used as the pattern classifier. Four emotions (relaxation, joy, sadness, and irritation) are induced in four university students from an acting club. The electrode positions are Fp1, Fp2, F3, F4, T3, T4, P3, P4, O1, and O2. As a result, the linear predictor coefficients showed the best performance.
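The abstract reports that linear predictor coefficients (LPC) were the best of the three EEG parameters but does not show how they are computed. As a minimal sketch (not the authors' code), LPC features can be obtained per channel with the autocorrelation method and the Levinson-Durbin recursion; the model order and the synthetic signal below are illustrative assumptions:

```python
import numpy as np

def lpc_coefficients(x, order):
    """Linear predictor coefficients via the autocorrelation method
    (Levinson-Durbin recursion). Returns [1, a1, ..., a_order]."""
    n = len(x)
    # Biased autocorrelation up to lag `order`
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err
        a_prev = a.copy()
        a[1:i + 1] += k * a_prev[:i][::-1]
        err *= (1.0 - k * k)
    return a

# Illustrative check on a synthetic AR(2) "EEG channel":
# x[t] = 0.5 x[t-1] - 0.3 x[t-2] + noise, whose prediction-error
# filter is approximately [1, -0.5, 0.3].
rng = np.random.default_rng(0)
x = np.zeros(20000)
e = rng.standard_normal(20000)
for t in range(2, 20000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
coeffs = lpc_coefficients(x, order=2)
```

In a setup like the paper's, one such coefficient vector per electrode would be concatenated into the feature vector fed to the neural-network classifier.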

  • PDF

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.162-172
    • /
    • 2017
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. To overcome the shortcomings of existing facial expression databases, several databases are used together. The proposed technique works with six facial expression classes: 'expressionless', 'happiness', 'sadness', 'angry', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are also applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of fully-connected layer nodes. Experimental results show that the proposed scheme achieves the highest classification performance of 96.88% while taking the least time to pass through the CNN structure compared with other models.
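The abstract tunes feature-map counts and fully-connected widths but gives no architecture details. As a dependency-free sketch of the forward path only (the 48x48 input size, 8 feature maps, and single conv/pool/dense stack below are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernels):
    """'Valid' cross-correlation of one grayscale image with a stack of kernels."""
    f, kh, kw = kernels.shape
    h, w = img.shape
    out = np.empty((f, h - kh + 1, w - kw + 1))
    for c in range(f):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[c])
    return out

def maxpool2(x):
    """2x2 max pooling over each feature map."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Forward pass: 48x48 face crop -> conv -> ReLU -> pool -> dense -> 6-way softmax
img = rng.standard_normal((48, 48))
kernels = rng.standard_normal((8, 3, 3)) * 0.1    # 8 feature maps (a tunable choice)
fc = rng.standard_normal((6, 8 * 23 * 23)) * 0.01  # 6 expression classes

feat = maxpool2(np.maximum(conv2d_valid(img, kernels), 0.0))  # shape (8, 23, 23)
probs = softmax(fc @ feat.ravel())
```

The architecture search described in the abstract amounts to varying the kernel-stack depth and the fully-connected width in a structure like this and comparing validation accuracy.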

A Research of Optimized Metadata Extraction and Classification in Audio (미디어에서의 오디오 메타데이터 최적화 추출 및 분류 방안에 대한 연구)

  • Yoon, Min-hee;Park, Hyo-gyeong;Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.147-149
    • /
    • 2021
  • The media market has grown rapidly in recent years, and users' expectations have risen with it. In this research, tags are extracted from media-derived audio and classified into specific categories using artificial intelligence. The categories are emotion types, including joy, anger, sadness, love, hatred, and desire. We conduct the study in Jupyter Notebook, analyze the voice data with the librosa library, and build a neural network using Keras layer models.
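The abstract names librosa and Keras but gives no feature details. As a dependency-free stand-in, spectral statistics of the kind librosa provides (spectral centroid, zero-crossing rate) can be computed with numpy before feeding a classifier; the 440 Hz test tone below is an illustrative assumption:

```python
import numpy as np

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of a Hann-windowed signal."""
    win = np.hanning(len(x))
    mag = np.abs(np.fft.rfft(x * win))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return (freqs * mag).sum() / mag.sum()

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs that change sign."""
    return np.mean(np.signbit(x[:-1]) != np.signbit(x[1:]))

# Illustrative audio-tag features for a one-second 440 Hz tone at 8 kHz
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
features = np.array([spectral_centroid(tone, sr), zero_crossing_rate(tone)])
```

Feature vectors like this one, stacked over many clips, would form the input matrix for the Keras dense network the abstract mentions.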


Exploration of deep learning facial motions recognition technology in college students' mental health (딥러닝의 얼굴 정서 식별 기술 활용-대학생의 심리 건강을 중심으로)

  • Li, Bo;Cho, Kyung-Duk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.333-340
    • /
    • 2022
  • The COVID-19 pandemic has made everyone anxious, and people need to keep their distance. It is necessary to conduct collective assessment and screening of college students' mental health at the start of every academic year. This study trains a multi-layer perceptron neural network, a deep learning model, to identify facial emotions. After training, real pictures and videos were input for face detection; once the positions of faces in the samples were detected, the emotions were classified, and the predicted emotional results were returned and displayed on the pictures. The results show an accuracy of 93.2% on the test set and 95.57% in practice. The recognition rate is 95% for anger, 97% for disgust, 96% for happiness, 96% for fear, 97% for sadness, 95% for surprise, and 93% for neutral. Such efficient emotion recognition can provide objective data support for capturing negative emotions. A deep learning emotion recognition system can work alongside traditional psychological activities to provide additional dimensions of psychological indicators for mental health.
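As a quick consistency check on the reported numbers: the 95.57% "in practice" figure equals the unweighted mean of the seven per-class recognition rates, which suggests it is a macro average over classes:

```python
# Per-class recognition rates quoted in the abstract (percent)
rates = {"Anger": 95, "Disgust": 97, "Happiness": 96, "Fear": 96,
         "Sadness": 97, "Surprise": 95, "Neutral": 93}

# Unweighted (macro) average over the seven classes: 669 / 7
macro_average = sum(rates.values()) / len(rates)
print(round(macro_average, 2))  # 95.57
```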

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that create value from information grows by the day. With the development of IT, information has become easy to collect and use, and many companies across a variety of industries actively use customer information in marketing. Since the turn of the 21st century, companies have also actively used culture and the arts, closely linked to their commercial interests, to manage corporate image and marketing. Because it is difficult for companies to attract or retain consumers' interest through technology alone, cultural activities have become a common tool of differentiation, and many firms have turned the customer's experience into a new marketing strategy in order to respond effectively to competitive markets. Accordingly, the need is rapidly emerging for personalized services that provide people with new experiences based on personal profile information capturing individual characteristics. Personalized service that uses individual profile information such as language, symbols, behavior, and emotion is therefore very important today; through it, we can judge the interaction between people and content and maximize customer experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has emerged recently. Existing research has performed emotion recognition mostly with bio-signals, and most studies target voice and facial expression, which show large emotional changes. However, limitations of equipment and service environments make it difficult to predict people's emotions this way, so in this paper we develop an emotion prediction model based on a vision-based interface to overcome these limitations; emotion recognition based on people's gestures and posture has been studied by several researchers. This paper developed a model that recognizes people's emotional states from body gesture and posture using the difference-image method, and found an optimized, validated model for predicting four kinds of emotion. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, where suitable stimulating films were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference-image method, and the data were refined to build the proposed model with a neural network. The model was evaluated with three time-frame sets (20, 30, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire data set of 97 samples was divided into learning, test, and validation sets. The model was constructed as an artificial neural network trained with the back-propagation algorithm, with the learning rate set to 10% and the momentum rate to 10%; the sigmoid function was used as the transfer function, and we designed a three-layer perceptron network with one hidden layer and four output nodes. Based on the test set, learning was stopped when it reached 50,000 iterations after reaching the minimum error, in order to explore the stopping point. We then measured each model's accuracy and selected the best model for predicting each emotion. The results showed prediction accuracy of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. These findings are expected to provide an effective algorithm for personalized services in industries such as advertising, exhibitions, and performances.
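The abstract specifies the training setup precisely: back-propagation, a 10% learning rate, 10% momentum, sigmoid transfer, one hidden layer, and four output nodes. A minimal numpy sketch of that configuration (the hidden width, input dimension, and synthetic 97-sample data are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer perceptron with 4 sigmoid outputs, trained by
# back-propagation with learning rate 0.1 and momentum 0.1.
n_in, n_hidden, n_out = 6, 10, 4
W1 = rng.standard_normal((n_hidden, n_in)) * 0.5
W2 = rng.standard_normal((n_out, n_hidden)) * 0.5
V1 = np.zeros_like(W1)   # momentum buffers
V2 = np.zeros_like(W2)
lr, mom = 0.1, 0.1

# Synthetic "movement feature" data: 97 samples, 4 emotion classes (one-hot)
X = rng.standard_normal((97, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, size=97)]

def mse(X, T):
    return np.mean((sigmoid(sigmoid(X @ W1.T) @ W2.T) - T) ** 2)

loss_before = mse(X, T)
for epoch in range(300):
    for x, t in zip(X, T):
        h = sigmoid(W1 @ x)
        o = sigmoid(W2 @ h)
        # Back-propagate the squared error through both sigmoid layers
        delta_o = (o - t) * o * (1 - o)
        delta_h = (W2.T @ delta_o) * h * (1 - h)
        V2 = mom * V2 - lr * np.outer(delta_o, h)
        V1 = mom * V1 - lr * np.outer(delta_h, x)
        W2 += V2
        W1 += V1
loss_after = mse(X, T)
```

An early-stopping rule like the paper's would additionally track the error on a held-out test set each epoch and keep the weights from the minimum-error point.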

A Study on the expression and reader cognition of a Comics character (만화캐릭터의 표정과 독자 인지에 관한 연구)

  • Yoon, Jang-Won
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.227-231
    • /
    • 2006
  • Comics and animation now carry ever greater weight across all the art fields, together with their importance in the various image media of the 21st-century new-media era. As users' demands on visual culture develop day by day, the field of sensibility engineering keenly feels the need for emotion-oriented design. Given South Korea's geographic position, Japanese comics and animation have exerted a pervasive influence, reaching everyone from the young to adult readers for years, so a systematic inquiry into a Korean comics language is needed. This research was conducted in the belief that sufficient study of these various circumstances is required. For the study of cartoon characters' expressions, we divided the expressions of characters appearing in Japanese cartoons into the categories 'happiness, anger, sadness, pleasure' and 'fear, astonishment, dislike'; based on these categories, we drew out the minimal elements needed to express emotion in cartoons, prepared an image map relating them to the words that express people's emotions, and on this basis built a measuring tool for how readers would read these expression languages. The Japanese cartoons sampled for drawing out the expression elements were limited to published comics, and this work lays a stepping stone for future analysis of animation characters' expressions.


Expression and Reader Cognition of Japanese Comics Character (일본 만화 캐릭터의 표정과 독자 인지)

  • Yoon, Jang-Won
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.2
    • /
    • pp.246-254
    • /
    • 2007
  • Comics and animation now carry ever greater weight across all the art fields, together with their importance in the various image media of the 21st-century new-media era. As users' demands on visual culture develop day by day, the field of sensibility engineering keenly feels the need for emotion-oriented design. Given South Korea's geographic position, Japanese comics and animation have exerted a pervasive influence, reaching everyone from the young to adult readers for years, so a systematic inquiry into a Korean comics language is needed. This research was conducted in the belief that sufficient study of these various circumstances is required. For the study of cartoon characters' expressions, we divided the expressions of characters appearing in Japanese cartoons into the categories 'happiness, anger, sadness, pleasure' and 'fear, astonishment, dislike'; based on these categories, we drew out the minimal elements needed to express emotion in cartoons, prepared an image map relating them to the words that express people's emotions, and on this basis built a measuring tool for how readers would recognize these expression languages. The Japanese cartoons sampled for drawing out the expression elements were limited to published comics, and this work lays a stepping stone for future analysis of animation characters' expressions.