• Title/Summary/Keyword: emotional modal


Wearing Performance and Comfort Property of PTT/Wool/Modal Air Vortex Yarn Knitted Fabrics (PTT/Wool/Modal Air vortex사 편성물의 의류 착용성능과 쾌적물성)

  • Kim, Hyunah
    • Journal of the Korean Society of Clothing and Textiles / v.40 no.2 / pp.305-314 / 2016
  • This paper investigated the applicability of PTT and wool staple fibers to the air vortex spinning system as high-quality yarns for garments with high emotional and comfort value. The tactile hand of the vortex yarn knitted fabrics was harsher than that of the ring and compact yarn knitted fabrics. The formability and sewability of the air vortex yarn knitted fabrics also appeared worse than those of the ring and compact yarn fabrics, owing to their low tensile and compressional resilience and high bending and shear hysteresis. In contrast, the wicking and drying rates of the air vortex yarn knitted fabric were better than those of the ring and compact yarn fabrics, and its heat keepability was higher because of its low thermal conductivity and maximum heat flow rate ($Q_{max}$). No difference in thermal shrinkage between the air vortex and ring yarn knitted fabrics was observed, but the pilling behavior of the air vortex yarn knitted fabric was superior. Overall, the superior wicking, drying, thermal, and pilling characteristics of the air vortex yarn knitted fabric were attributed to the air vortex yarn structure, with parallel fibers in the core and periodic, fasciated twists in the sheath.

Design of the emotion expression in multimodal conversation interaction of companion robot (컴패니언 로봇의 멀티 모달 대화 인터랙션에서의 감정 표현 디자인 연구)

  • Lee, Seul Bi; Yoo, Seung Hun
    • Design Convergence Study / v.16 no.6 / pp.137-152 / 2017
  • This research aims to develop a companion robot experience design for the elderly in Korea, based on a needs-function deployment matrix and on research into robot emotion expression in multimodal interaction. First, elderly users' main needs were categorized into four groups based on ethnographic research. Second, the functional elements and physical actuators of the robot were mapped to user needs in a function-needs deployment matrix. The final UX design prototype was implemented as a robot with a verbal, non-touch multimodal interface and emotional facial expressions based on Ekman's Facial Action Coding System (FACS). The prototype was validated through a user test session that analyzed the influence of robot interaction on users' cognition and emotion, using a story recall test and face emotion analysis software (Emotion API), under two conditions: when the robot's facial expression changed to match the emotion of the information it delivered, and when the robot initiated the interaction cycle voluntarily. The group interacting with the emotional robot showed a relatively high recall rate in the delayed recall test, and the facial expression analysis indicated that the robot's facial expression and interaction initiation affected the emotion and preference of the elderly participants.

Smart Affect Jewelry based on Multi-modal (멀티 모달 기반의 스마트 감성 주얼리)

  • Kang, Yun-Jeong
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.7 / pp.1317-1324 / 2016
  • This study uses the Arduino platform to express emotions through the colors of a smart jewelry piece. Emotional color expression applies Plutchik's Wheel of Emotions model, exploiting the similarity between emotions and colors. The system receives values from the temperature, light, sound, pulse, and gyro sensors of a smart jewelry device that can be easily accessed from a smartphone, and recognizes the wearer's emotion by applying ontology-based inference rules to the sensed context. The emotion inferred from the context is matched to its corresponding color and applied to the smart LED jewelry, so the combination of emotion and color extracted from the sensor data is reflected in the built-in LED according to the wearer's emotional state. By adding light to emotion, the smart jewelry can represent the emotion of the situation and serve as a tool of emotional expression.
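
A minimal sketch of the emotion-to-color idea described above, assuming hypothetical sensor readings, toy inference rules, and illustrative Plutchik color assignments; the paper's actual ontology-based inference rules are not reproduced here:

```python
# Conceptual sketch: map sensor readings to a Plutchik primary emotion
# and an RGB color for a smart LED. The thresholds, rules, and color
# assignments below are illustrative assumptions, not the paper's
# ontology-based inference.

# Plutchik primary emotions paired with commonly used wheel colors.
EMOTION_COLORS = {
    "joy":          (255, 255, 0),    # yellow
    "trust":        (0, 255, 0),      # green
    "fear":         (0, 128, 0),      # dark green
    "surprise":     (0, 191, 255),    # light blue
    "sadness":      (0, 0, 255),      # blue
    "anger":        (255, 0, 0),      # red
    "anticipation": (255, 165, 0),    # orange
    "disgust":      (128, 0, 128),    # purple
}

def infer_emotion(pulse_bpm: float, sound_db: float, light_lux: float) -> str:
    """Toy rule-based stand-in for the paper's ontology inference step."""
    if pulse_bpm > 100 and sound_db > 70:
        return "fear"
    if pulse_bpm > 100:
        return "surprise"
    if light_lux < 50 and pulse_bpm < 65:
        return "sadness"
    return "joy"

def led_color_for(pulse_bpm: float, sound_db: float, light_lux: float):
    """Return the inferred emotion and the RGB color to drive the LED."""
    emotion = infer_emotion(pulse_bpm, sound_db, light_lux)
    return emotion, EMOTION_COLORS[emotion]

if __name__ == "__main__":
    # Example: elevated pulse in a loud environment -> "fear", dark green LED.
    print(led_color_for(pulse_bpm=110, sound_db=75, light_lux=300))
```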

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu; Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.29-40 / 2012
  • Endowing a machine with emotional intelligence has become an increasingly important and challenging research issue in human-computer interaction. Emotion recognition technology therefore plays an important role in this area, as it allows more natural, human-like communication between humans and computers. In this paper, we propose a multimodal emotion recognition system that uses both face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed by applying 2D-PCA to MCS-LBP images and a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model built on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the uni-modal approaches, confirming that the proposed fusion achieves a significant performance improvement and is very effective.
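
A minimal sketch of the weighted-summation score fusion described above, assuming the per-class face distances (2D-PCA + nearest neighbor) and speech log-likelihoods (GMM) have already been computed; the normalization step, weight value, and class ordering are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Scale scores to [0, 1]; an assumed normalization, as the paper's exact scheme is not given here."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(face_distances: np.ndarray,
                speech_loglik: np.ndarray,
                w_face: float = 0.5) -> int:
    """Weighted-summation fusion of per-class scores from two modalities.

    face_distances: per-class distances from the face matcher (smaller = better).
    speech_loglik:  per-class log-likelihoods from the speech GMMs (larger = better).
    Returns the index of the predicted emotion class.
    """
    # Convert distances to similarities so both modalities are "larger is better".
    face_sim = 1.0 - min_max_normalize(face_distances)
    speech_sim = min_max_normalize(speech_loglik)
    fused = w_face * face_sim + (1.0 - w_face) * speech_sim
    return int(np.argmax(fused))

if __name__ == "__main__":
    # Toy per-class scores for, e.g., [neutral, happy, anger, surprise, sad].
    face_d = np.array([3.2, 1.1, 2.8, 2.5, 3.0])                # hypothetical 2D-PCA distances
    speech_ll = np.array([-48.0, -41.5, -44.0, -47.2, -46.1])   # hypothetical GMM log-likelihoods
    print("predicted class:", fuse_scores(face_d, speech_ll, w_face=0.6))
```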

Effect of Emotional Incongruence in Negative Emotional Valence & Cross-modality (교차 양상과 부정 정서에서의 정서 불일치 효과에 따른 기억의 차이)

  • Kim, Soyeon; Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.17 no.3 / pp.107-116 / 2014
  • This study proposes that when two emotions are presented through different modalities, such as auditory and visual, their incongruence influences participants' arousal, recognition, and recall. The first hypothesis is that incongruent cross-modal presentation increases not only arousal but also recall and recognition relative to congruent presentation. The second hypothesis is that arousal modulates participants' recall and recognition. To test these hypotheses, congruent and incongruent conditions were created by presenting positive or negative emotions visually and acoustically. Recall and recognition rates were measured as dependent variables, and arousal was measured with PAD (pleasure-arousal-dominance) scales. After eight days, recognition alone was measured again online. The behavioral experiment showed a significant difference in arousal before and after watching a movie clip (p < .001), but no difference between the congruent and incongruent conditions. There was also no significant difference in recognition performance between the congruent and incongruent conditions, though there was a main effect of the clips' emotional valence. Interestingly, when recognition rates were analyzed separately by clip emotion, a significant difference between the congruent and incongruent conditions appeared only for the negative clip (p = .044), not for the positive clip, with higher recognition in the incongruent condition. For recall performance, there was a significant interaction between the clips' emotion and the congruence condition (p = .039). These results demonstrate an incongruence effect with negative emotion, but an incongruence effect mediated by arousal could not be demonstrated. In conclusion, this study examined how one method of delivering a story dramatically affects memory, and how these effects are influenced by participants' perceived emotions (valence and arousal).

Development of a functional game device and Contents for improving of brain activity through finger exercise (뇌활동 증진을 위한 손가락 운동용 기능성 게임 장치 및 콘텐츠 개발)

  • Ahn, Eun-Young
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1384-1390 / 2012
  • It is well known that exercising and stimulating the fingers has an important bearing on the brain. Taking note of this fact, we developed a game device for improving health and brain ability for education and training, focusing in particular on balanced exercise of all five fingers. The device can be used in two ways, online and offline. In online mode, it connects to visual devices such as smartphones and smart TVs, communicates over Bluetooth, and serves as a multimodal interface (MMI) device. In offline mode, the device works independently, making it possible to enjoy auditory and tactile games without video images, promoting brain activity and emotional cultivation. To validate the device, we implemented two games (a fishing game for offline mode and a shooting game for online mode) for people of all ages, and especially suited to the elderly. The device can offer the elderly substantial help in preventing impairment of cognitive functions.

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi; Joo, Jong-Tae; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more common, interaction between them and humans is becoming increasingly important, and emotion recognition and expression are indispensable for such interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply Bayesian Learning (BL) and Principal Component Analysis (PCA) and classify five emotion patterns (normal, happy, anger, surprise, and sad). To enhance the emotion recognition rate, we experiment with both decision fusion and feature fusion. In decision fusion, the output values of the individual recognition systems are combined using fuzzy membership functions. In feature fusion, superior features are selected through Sequential Forward Selection (SFS) and fed to a Multi-Layer Perceptron (MLP) neural network to classify the five emotion patterns. The recognized result is then applied to a 2D facial model to express the emotion.
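
A minimal sketch of the feature-fusion path described above (feature concatenation, Sequential Forward Selection, then an MLP), using scikit-learn on synthetic stand-in features; the actual speech and facial feature extraction, the SFS scoring model, and all hyperparameters here are assumptions of this sketch:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the extracted speech and facial-image features.
rng = np.random.default_rng(0)
n = 300
speech_feats = rng.normal(size=(n, 12))    # hypothetical speech features
face_feats = rng.normal(size=(n, 20))      # hypothetical facial-image features
X = np.hstack([speech_feats, face_feats])  # feature-level fusion: concatenate both modalities
y = rng.integers(0, 5, size=n)             # 5 emotion patterns: normal, happy, anger, surprise, sad

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sequential Forward Selection chooses a subset of the fused features
# (a fast linear scorer is used for selection, an assumption of this sketch),
# and an MLP then classifies the five emotion patterns.
model = make_pipeline(
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=10, direction="forward"),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy on synthetic data:", model.score(X_te, y_te))
```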