• Title/Summary/Keyword: classification of expressions

A study about the aspect of translation on 'Hu(怖)' in novel 『Kokoro』 - Focusing on novels translated in Korean and English - (소설 『こころ』에 나타난 감정표현 '포(怖)'에 관한 번역 양상 - 한국어 번역 작품과 영어 번역 작품을 중심으로 -)

  • Yang, Jung-soon
    • Cross-Cultural Studies
    • /
    • v.53
    • /
    • pp.131-161
    • /
    • 2018
  • Emotional expressions are expressions that show the internal state of mind or consciousness. They include vocabulary that describes emotion; sentence constructions that express emotion, such as exclamatory sentences and rhetorical questions; interjections; appellations; causatives; passives; adverbs of attitude; and style of writing. This study focuses on vocabulary that describes emotion and analyzes how expressions of 'Hu(怖)' in "Kokoro" are translated. The translations were analyzed in three categories: part of speech, handling of subjects, and classification of meanings. The analysis showed that expressions of 'Hu(怖)' were sometimes translated with the vocabulary suggested in the dictionary, but not always. Vocabulary describing the emotion of 'Hu(怖)' in Japanese sentences was mostly translated into the corresponding part of speech in Korean. Some adverbs required the addition of verbs when translated, and different vocabulary was added or substituted to intensify the emotion. In English, however, the correspondence of parts of speech differed from Korean. Japanese sentences that expressed 'Hu(怖)' with verbs were in many cases translated with passive participles such as 'fear', 'dread', 'worry', and 'terrify'. Idioms were translated with a focus on the function of the sentence rather than its form. Expressions rendered as adverbs did not accompany verbs of 'Hu(怖)'; instead, they were in many cases translated with passive participles and adjectives such as 'dread', 'worry', and 'terrify'. In simple sentences, the main agents of emotion appeared in the first person and the third person.
When the main agent was the first person, the fundamental word order of Japanese was preserved in the Korean translation, although adverbs of time and degree tended to be added. In English, the first-person agent of emotion was placed in subject position; in some cases, however, things or the causes of events were placed in subject position to show the degree of 'Hu(怖)' the agent experienced. When the main agent of emotion was the third person, expressions of conjecture and supposition, or of a visual or auditory basis, were added in translation. In simple sentences without an explicit agent of emotion, the subject could be omitted in Korean, even though it is an essential component, because it was recoverable from context. In English these omitted subjects were recovered and translated; they were not necessarily the human agents of emotion but could be things or causes of events that specified the expression of emotion.

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1175-1186
    • /
    • 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for realizing automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using different and multiple feature representations. Next, these classifiers generate a recognition result (a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, extensive and comparative experiments were performed on the CK+ FER database. The feasibility of the WSVC algorithm is demonstrated by comparison with recently developed FER algorithms.
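
The weighted fusion step described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the class set, frame probabilities, and quality weights are invented, whereas the real WSVC derives its weights from measured peak expression intensity and frontal-pose degree.

```python
import numpy as np

def weighted_soft_voting(frame_probs, frame_weights):
    """Combine per-frame soft votes into one emotion decision.

    frame_probs  : (n_frames, n_classes) soft outputs, one row per face image
    frame_weights: (n_frames,) quality scores (hypothetical here)
    Returns the index of the winning emotion class.
    """
    w = np.asarray(frame_weights, dtype=float)
    w = w / w.sum()                      # normalize the quality weights
    fused = w @ np.asarray(frame_probs)  # weighted average of soft votes
    return int(np.argmax(fused))

# Toy sequence of 3 frames over classes [anger, happiness, neutral]
probs = [[0.6, 0.3, 0.1],   # strong, frontal frame
         [0.2, 0.5, 0.3],   # weak expression
         [0.7, 0.2, 0.1]]
weights = [0.9, 0.2, 0.8]   # higher = better-quality frame
print(weighted_soft_voting(probs, weights))  # 0: anger wins
```

The low-quality middle frame, which alone would favor happiness, is down-weighted and cannot flip the decision.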

Application and Analysis of Emotional Attributes using Crowdsourced Method for Hangul Font Recommendation System (한글 글꼴 추천시스템을 위한 크라우드 방식의 감성 속성 적용 및 분석)

  • Kim, Hyun-Young;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.4
    • /
    • pp.704-712
    • /
    • 2017
  • With the development of digital content, various studies on the sensibility of content are under way, and emotional research on fonts is likewise being pursued in several fields. There is a need to match the emotion of a font with the emotional content of the text it renders. However, selecting a font with a suitable emotion is practically impossible in Korea, since each of more than 6,000 Hangul fonts carries its own emotion. In this paper, we analyzed emotional classification attributes and constructed a Hangul font recommendation system. We also verified the reliability and validity of the attributes themselves for application to Hangul fonts, and then tested whether general users can find a suitable font in a commercial font set through the emotional recommendation system. As a result, when users want to express the emotion of their sentences more visually, they can receive a recommendation of a Hangul font with the desired emotion, based on per-font emotion attribute values collected through the crowdsourced method.
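
One simple way to realize the recommendation step sketched above is nearest-neighbor matching on emotion attribute vectors. The snippet below is a hypothetical sketch: the attribute axes, font names, and cosine-similarity ranking are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def recommend_font(desired, font_attrs, k=1):
    """Rank fonts by cosine similarity between the user's desired
    emotion vector and each font's crowdsourced attribute vector."""
    d = np.asarray(desired, dtype=float)
    names = list(font_attrs)
    mat = np.array([font_attrs[n] for n in names], dtype=float)
    sims = mat @ d / (np.linalg.norm(mat, axis=1) * np.linalg.norm(d))
    order = np.argsort(-sims)            # best match first
    return [names[i] for i in order[:k]]

# Hypothetical emotion axes: [soft, formal, playful]
fonts = {"FontA": [0.9, 0.1, 0.7],
         "FontB": [0.1, 0.9, 0.1],
         "FontC": [0.5, 0.4, 0.9]}
print(recommend_font([1.0, 0.0, 0.8], fonts))  # ['FontA']
```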

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.631-653
    • /
    • 2020
  • Sign language is a natural, visually oriented and non-verbal communication channel between people that facilitates communication through facial/bodily expressions, postures and a set of gestures. It is basically used for communication with people who are deaf or hard of hearing. In order to understand such communication quickly and accurately, the design of a successful sign language translation system is considered in this paper. The proposed system includes object detection and classification stages. First, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection; then a deep learning structure based on Inception v3 combined with a Support Vector Machine (SVM), integrating feature extraction and classification, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used for the design of the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
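
The SVM classification stage of such a pipeline can be illustrated with a minimal Pegasos-style linear SVM. This is a hedged sketch, not the authors' system: there is no hand detector or Inception v3 here, the 2-D toy vectors merely stand in for deep features, and the trainer omits a bias term (so the toy classes must be separable through the origin).

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style subgradient training of a binary linear SVM.
    X: feature vectors (toy 2-D points here), y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y, float)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)        # regularization shrinkage
            if y[i] * (X[i] @ w) < 1:     # hinge-loss subgradient step
                w += eta * y[i] * X[i]
    return w

def predict(w, X):
    return np.sign(np.asarray(X, float) @ w)

# Toy 2-D "features" for two hand-shape classes
X = [[2.0, 1.5], [1.8, 2.2], [2.5, 2.0],
     [-1.5, -2.0], [-2.2, -1.8], [-2.0, -2.5]]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
print(predict(w, [[2.1, 1.9], [-1.9, -2.1]]))  # [ 1. -1.]
```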

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.162-172
    • /
    • 2017
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. To overcome the shortcomings of existing facial expression databases, several databases are combined. The proposed technique classifies six facial expressions: 'expressionless', 'happiness', 'sadness', 'anger', 'surprise', and 'disgust'. Pre-processing and data augmentation are applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully-connected layers. Experimental results show that the proposed scheme achieves the highest classification performance, 96.88%, while taking the least time to pass through the CNN structure compared to other models.
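
The forward pass of a small CNN of the kind described (convolution, ReLU, pooling, then a fully-connected softmax over six classes) can be sketched in plain numpy. The weights below are random and untrained; the sketch only illustrates the data flow and tensor shapes, not the paper's tuned architecture.

```python
import numpy as np

def conv2d(img, kernels):
    """Valid 2-D convolution (cross-correlation, as in CNNs):
    img (H, W), kernels (n, kh, kw) -> maps (n, H-kh+1, W-kw+1)."""
    n, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for f in range(n):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[f, i, j] = np.sum(img[i:i+kh, j:j+kw] * kernels[f])
    return out

def maxpool(maps, size=2):
    n, H, W = maps.shape
    return maps[:, :H - H % size, :W - W % size] \
        .reshape(n, H // size, size, W // size, size).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass: 8x8 "face" -> 4 feature maps -> pool -> 6 expression scores
rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernels = rng.standard_normal((4, 3, 3))      # untrained, for shapes only
feats = np.maximum(conv2d(img, kernels), 0)   # ReLU
pooled = maxpool(feats)                       # (4, 3, 3)
W_fc = rng.standard_normal((6, pooled.size))  # fully-connected layer
probs = softmax(W_fc @ pooled.ravel())        # 6 expression probabilities
print(probs.shape)                            # (6,)
```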

Interpolation on data with multiple attributes by a neural network

  • Azumi, Hiroshi;Hiraoka, Kazuyuki;Mishima, Taketoshi
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.814-817
    • /
    • 2002
  • High-dimensional data with two or more attributes are considered. A typical example is face images of various individuals and expressions. In such cases, collecting a complete data set is often difficult, since the number of combinations can be large. In the present study, we propose a method to interpolate the data of missing combinations from other data. If this becomes possible, robust recognition of multiple attributes can be expected. The key to this problem is appropriate extraction of the similarity shared by face images of the same individual or the same expression. The bilinear model [1] has been proposed as a solution, but experiments applying it to the classification of face images resulted in low performance [2]. To overcome the limits of the bilinear model, this research adopts a nonlinear model based on a neural network, and its usefulness is confirmed experimentally.
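
The bilinear model that this paper seeks to improve on assumes each observation factors as a style term times a content term, which is what makes missing combinations recoverable. A minimal numpy illustration of that rank-1 assumption, with made-up scalar "features" standing in for images:

```python
import numpy as np

# Toy bilinear data: each observation is style_factor * content_factor,
# standing in for (individual, expression) image features.
style = np.array([1.0, 2.0, 3.0])      # "individuals"
content = np.array([4.0, 5.0, 6.0])    # "expressions"
Y = np.outer(style, content)           # complete data matrix

# Pretend combination (individual 2, expression 1) was never collected.
# Under the rank-1 bilinear assumption, Y[i,j] = Y[i,k] * Y[l,j] / Y[l,k]
# for any observed reference row l and column k.
i, j, l, k = 2, 1, 0, 0
predicted = Y[i, k] * Y[l, j] / Y[l, k]
print(predicted, Y[i, j])  # 15.0 15.0: the missing combination is recovered
```

When the data are not truly bilinear, this identity fails, which is the weakness the paper's nonlinear neural-network model targets.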

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.10
    • /
    • pp.1364-1369
    • /
    • 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim is to compare the approaches used for feature extraction and to propose a basis for extracting useful features to improve SER performance.
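
Two of the classic frame-level features that such surveys compare, short-time energy and zero-crossing rate, can be computed directly with numpy. This sketch is illustrative only: the frame length, hop size, and toy signals are arbitrary choices, and real SER front ends add pitch, MFCCs, and many other features.

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Per-frame short-time energy and zero-crossing rate (ZCR)."""
    sig = np.asarray(signal, dtype=float)
    feats = []
    for start in range(0, len(sig) - frame_len + 1, hop):
        frame = sig[start:start + frame_len]
        energy = np.mean(frame ** 2)                       # loudness proxy
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0) # sign changes
        feats.append((energy, zcr))
    return np.array(feats)  # shape (n_frames, 2)

# A loud high-frequency tone vs. a quiet low-frequency one
t = np.arange(2048) / 8000.0
loud_high = 0.8 * np.sin(2 * np.pi * 1000 * t)
quiet_low = 0.1 * np.sin(2 * np.pi * 100 * t)
f_high, f_low = frame_features(loud_high), frame_features(quiet_low)
print(f_high[0])  # higher energy and ZCR than the quiet low tone
```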

INTERACTION BETWEEN THREE MOVING GRIFFITH CRACKS AT THE INTERFACE OF TWO DISSIMILAR ELASTIC MEDIA

  • Das, S.;Patra, B.;Debnath, L.
    • Journal of applied mathematics & informatics
    • /
    • v.8 no.1
    • /
    • pp.59-69
    • /
    • 2001
  • The paper deals with the interaction between three Griffith cracks propagating under antiplane shear stress at the interface of two dissimilar infinite elastic half-spaces. The Fourier transform technique is used to reduce the elastodynamic problem to a set of integral equations, which is solved using the finite Hilbert transform technique and Cooke's result. Analytical expressions for the stress intensity factors at the crack tips are obtained. Numerical values of the interaction effect have been computed, and the results show that the interaction is either shielding or amplifying depending on the location of each crack with respect to the others and on the crack tip spacing. AMS Mathematics Subject Classification: 73M25.

Complementary Discriminant Analysis for Classification of Double Attributes

  • Hiraoka, Kazuyuki;Mishima, Taketoshi
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.806-809
    • /
    • 2002
  • Real-world objects often have two or more significant attributes. For example, face images have attributes of person, expression, and so on. Even if we are interested in only one of these attributes, additional information on the auxiliary attributes can help recognition of the main one. In the present paper, the authors propose a method for pattern recognition with double attributes. A pair of classifiers is combined: each classifier makes a guess for its corresponding attribute and passes that guess to the other as a hint. The equilibrium point of this exchange can be calculated directly, without iterative procedures.
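
The guess-exchange idea can be sketched with a toy score table. Note that the paper's point is that the equilibrium can be computed directly; the sketch below instead finds it by naive alternation, and the score values are invented.

```python
import numpy as np

# Toy scores over (person, expression) pairs: each classifier scores its
# own attribute conditioned on the other's current guess. In the paper
# these scores would come from trained discriminant classifiers.
score = np.array([[0.9, 0.2],   # score[person, expression]
                  [0.1, 0.8]])

def coupled_recognition(score, init_expr=0, max_iter=10):
    """Alternate: guess person given expression, then expression given
    person, until the pair of guesses stops changing (equilibrium)."""
    expr = init_expr
    person = int(np.argmax(score[:, expr]))
    for _ in range(max_iter):
        new_expr = int(np.argmax(score[person, :]))
        new_person = int(np.argmax(score[:, new_expr]))
        if (new_person, new_expr) == (person, expr):
            break
        person, expr = new_person, new_expr
    return person, expr

print(coupled_recognition(score, init_expr=1))  # (1, 1)
```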

Method of an Assistance for Evaluation of Learning using Expression Recognition based on Deep Learning (심층학습 기반 표정인식을 통한 학습 평가 보조 방법 연구)

  • Lee, Ho-Jung;Lee, Deokwoo
    • Journal of Engineering Education Research
    • /
    • v.23 no.2
    • /
    • pp.24-30
    • /
    • 2020
  • This paper proposes an approach to the evaluation of learning using concepts of artificial intelligence. Among various techniques, a deep learning algorithm is employed to achieve quantitative evaluation results. In particular, the paper focuses on process-based evaluation, rather than result-based evaluation, using facial expressions. The expressions are acquired with a digital camera that records students' faces while they solve sample test problems. The facial expressions are trained with a convolutional neural network (CNN) model, followed by classification of the expression data into three categories: easy, neutral, and difficult. Simulation results substantiate the proposed approach and show promise, and this work is expected to open opportunities for intelligent evaluation systems in the future.
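
The final step, turning per-frame expression predictions into one of the three difficulty categories, can be sketched as a vote over frames. The expression labels and their mapping to the categories are hypothetical, not taken from the paper.

```python
from collections import Counter

# Hypothetical mapping from recognized facial expressions to the three
# perceived-difficulty categories; the expression labels are illustrative.
EXPR_TO_DIFFICULTY = {
    "smile": "easy", "neutral": "neutral",
    "frown": "difficult", "confused": "difficult",
}

def assess_problem(expr_sequence):
    """Aggregate per-frame expression predictions (e.g. CNN outputs
    recorded while a student solves one problem) by majority vote."""
    votes = Counter(EXPR_TO_DIFFICULTY[e] for e in expr_sequence)
    return votes.most_common(1)[0][0]

frames = ["neutral", "frown", "confused", "frown", "neutral"]
print(assess_problem(frames))  # difficult (3 of 5 votes)
```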