• Title/Summary/Keyword: emotional modal

Search results: 17

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a Bi-Modal Sensor Fusion Algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using the facial image and the speech signal together. We extract feature vectors from the speech signal using acoustic features only, without linguistic features, and classify the emotional pattern with a neural network. From the facial image we select features of the mouth, eyes, and eyebrows, and the extracted feature vectors are reduced to a low-dimensional feature vector by Principal Component Analysis (PCA). Finally, we propose a method that fuses the recognition results obtained from the facial image and the speech into a single emotion estimate (a minimal fusion sketch follows this entry).
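
A minimal Python/scikit-learn sketch of the general approach described above: facial feature vectors reduced by PCA, one neural-network classifier per modality, and a weighted decision-level fusion of the two probability outputs. The feature dimensions, network sizes, fusion weight, and placeholder data are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of bi-modal (speech + face) emotion recognition with PCA and
# decision-level fusion. All dimensions, weights, and data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happy", "sad", "angry", "surprise"]

rng = np.random.default_rng(0)
X_speech = rng.normal(size=(200, 39))    # placeholder acoustic feature vectors
X_face = rng.normal(size=(200, 120))     # placeholder mouth/eye/eyebrow features
y = rng.integers(0, 4, size=200)         # placeholder emotion labels

# Reduce facial feature vectors to a low-dimensional representation with PCA.
pca = PCA(n_components=20).fit(X_face)

# One neural-network classifier per modality.
clf_speech = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_speech, y)
clf_face = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(pca.transform(X_face), y)

def fuse_predict(x_speech, x_face, w_speech=0.5):
    """Weighted sum of per-modality class probabilities (decision-level fusion)."""
    p_speech = clf_speech.predict_proba(x_speech.reshape(1, -1))[0]
    p_face = clf_face.predict_proba(pca.transform(x_face.reshape(1, -1)))[0]
    p = w_speech * p_speech + (1.0 - w_speech) * p_face
    return EMOTIONS[int(np.argmax(p))]

print(fuse_predict(X_speech[0], X_face[0]))
```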

Analysis on Modal Changes and Its Performance Effect in Kim, So-hui's 「Chunhyangga」 (김소희 『춘향가』의 전조에 따른 연행효과 분석)

  • Kim, Sook-Ja
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.611-619 / 2014
  • The purpose of this study was to examine the technique of modal changes and its performance effect in the jungmori portion of Manjeong Kim So-hui's pansori "Chunhyangga". Manjeong-style "Chunhyangga" effectively presents dramatic scenes, role changes, and emotional variations through the modal changes of pansori. In this study, two jungmori songs were examined, and the findings are as follows. Modal changes always reflected dramatic scenes, role changes, situational alterations, and emotional variations. In addition, all the transpositions in the two songs were related to geunchin-jo and euddeumeum-jo. In one of the songs the technique of modal changes was used intensively in the latter half, whereas in the other it was used from the middle portion onward. All of these uses of modal changes produced an excellent performance effect, expressed musically so as to distinguish it from the portions of role changes or explanations.

A Research of User Experience on Multi-Modal Interactive Digital Art

  • Qianqian Jiang;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.80-85 / 2024
  • The concept of single-modal digital art originated in the 20th century and has evolved through three key stages. Over time, digital art has transformed into multi-modal interaction, representing a new era in art forms. Based on multi-modal theory, this paper aims to explore the characteristics of interactive digital art in innovative art forms and its impact on user experience. Through an analysis of practical applications of multi-modal interactive digital art, this study summarises the impact of creative models of digital art on the physical and mental aspects of user experience. In creating audio-visual-based art, multi-modal digital art should seamlessly incorporate sensory elements and leverage computer image processing technology. Focusing on user perception, emotional expression, and cultural communication, it strives to establish an immersive environment with user experience at its core. Future research, particularly with emerging technologies like Artificial Intelligence (AI) and Virtual Reality (VR), should not merely prioritize technology but aim for meaningful interaction. Through multi-modal interaction, digital art is poised to continually innovate, offering new possibilities and expanding the realm of interactive digital art.

Sensitivity Lighting System Based on multimodal (멀티모달 기반의 감성 조명 시스템)

  • Kwon, Sun-Min;Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.4 / pp.721-729 / 2012
  • In this paper, human sensibility is measured in a multi-modal environment and a sensitivity lighting system is implemented according to the derived emotional indexes. We use LED lighting because it is environmentally friendly, highly efficient, and long-lived; in particular, a single LED bulb can provide various color schemes. To recognize human sensibility, we combine image information and arousal-state information on a multi-modal basis and calculate emotional indexes. In experiments, as the LED lighting color varies with the user's emotional index, we show that the system provides more human-friendly lighting than existing systems (a color-mapping sketch follows this entry).
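
A short, hypothetical illustration of the idea of driving an LED color from a computed emotional index. The paper's actual index definition and color scheme are not given here; the valence/arousal inputs, hue range, and brightness mapping below are assumptions for illustration only (Python standard library).

```python
# Hypothetical mapping from an emotional index (assumed here to be valence and
# arousal in [-1, 1]) to an 8-bit RGB color for a multi-color LED bulb.
import colorsys

def emotion_to_rgb(valence: float, arousal: float) -> tuple:
    """Map valence to hue and arousal to brightness; return an (R, G, B) triple."""
    hue = (valence + 1.0) / 2.0 * 0.7            # 0.0 (red) .. 0.7 (blue)
    value = 0.3 + 0.7 * (arousal + 1.0) / 2.0    # dim when calm, bright when aroused
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)

print(emotion_to_rgb(valence=0.8, arousal=0.6))    # pleasant and excited -> bright cool color
print(emotion_to_rgb(valence=-0.7, arousal=-0.3))  # unpleasant and calm -> dim warm color
```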

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In the task of continuous-dimension emotion recognition, the parts that highlight emotional expression are not the same in each modality, and the influences of the different modalities on the emotional state also differ. Therefore, this paper studies the fusion of the two most important modalities in emotion recognition (voice and visual expression) and proposes a dual-modal emotion recognition method that combines an attention mechanism with an improved AlexNet network. After simple preprocessing of the audio and video signals, audio features are first extracted using prior knowledge. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism is used to fuse the facial expression features and the audio features, and an improved loss function is used to mitigate the missing-modality problem, so as to improve the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions are 0.729 and 0.718, respectively, which are superior to several comparison algorithms (a sketch of the metric follows this entry).
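
The evaluation metric named in the abstract, the concordance correlation coefficient (CCC), can be computed as 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). Below is a small NumPy sketch; the example traces are placeholder data, not outputs of the paper's model.

```python
# Concordance correlation coefficient (CCC) for continuous arousal/valence
# predictions. The example traces are placeholders, not results from the paper.
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two 1-D sequences."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

t = np.linspace(0.0, 6.0, 200)
arousal_true = np.sin(t)               # placeholder ground-truth arousal trace
arousal_pred = 0.9 * np.sin(t) + 0.05  # placeholder model prediction
print(round(ccc(arousal_true, arousal_pred), 3))
```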

A study on the English modal auxiliary Will/Shall (영어의 서법 조동사 Will/Shall에 관한 연구)

  • Kang, Mun-Koo
    • English Language & Literature Teaching / v.12 no.3 / pp.99-122 / 2006
  • The purpose of this paper is to explain the meanings and uses of the English auxiliaries SHALL/WILL. The complexity of the modern usage of SHALL/WILL has been one of the most disputed topics of traditional English grammar. The paper addresses the two auxiliaries through both diachronic and synchronic analysis, and a general view of the figures from Fries' (1925) survey was added for further investigation. The results of the study showed that these auxiliaries express various modal meanings associated with the volitional or emotional attitude of the speaker without implying futurity. The findings also suggested that the use of SHALL in present-day English is restricted to the non-volitional future with the first person, but that even this use has been diminished by the expansion of WILL, and that the original meaning of WILL, 'to desire or wish', has generally been replaced by other verbs or modal forms. Sentences which seem to indicate futurity, however, are often tinged with modal senses. Therefore, WILL/SHALL should be considered to act either as a tense auxiliary or as a modal auxiliary depending on the situational context in which it occurs.

Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Seo, Yong-Ho;Jeong, Il-Woong;Jung, Hye-Won;Yang, Hyun-S.
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.1764-1768 / 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, it will be necessary for robots to understand human intentions. Consequently, the development of emotional interfaces for robots is an important expansion of human-robot interaction. We designed and developed an intelligent emotional interface for personal robots and applied it to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that our intelligent emotional interface is very intuitive and friendly.

Physical Property of PTT/Wool/Modal Air Vortex Yarns for High Emotional Garment (고감성 의류용 PTT/울/모달 에어 볼텍스 복합사의 물성)

  • Kim, Hyunah
    • Journal of the Korean Society of Clothing and Textiles / v.39 no.6 / pp.877-884 / 2015
  • Polytrimethylene terephthalate (PTT) is an eco-friendly fiber with good elastic properties; however, more detailed studies are needed on its spinnability when blended with various kinds of fibers. The evolution of spinning technology has focused on improved productivity with good quality, and air vortex spinning, recently introduced into spinning mills, offers both good productivity and good quality. More detailed knowledge of how blends of various fibers behave on the air vortex spinning system is required to obtain good-quality yarns for high-emotional fabrics. In this paper, the physical properties of air vortex, compact, and ring staple yarns spun from PTT/wool/modal blend fibers were investigated together with yarn structure in order to promote highly functional PTT-containing fabrics for high-emotional garments. The unevenness of the air vortex yarns was higher than that of the compact and ring yarns, and their imperfections were also greater, which was attributed to the fasciated structure of vortex yarn. The tenacity and breaking strain of the air vortex yarns were lower than those of the compact and ring yarns, caused by their higher unevenness and greater imperfections. The vortex yarns showed the highest initial modulus and the ring yarns the lowest, which results in a stiffer tactile feeling for the air vortex yarns. The dry and wet thermal shrinkages of the air vortex yarns were lower than those of the ring yarns, so good shape retention of the vortex yarns is expected owing to their low thermal shrinkage.

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.13 no.5 / pp.9-19 / 2012
  • A virtual human used as an HCI element in digital content expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. To implement an emotional virtual human, computational engine models have to consider how a combination of nonverbal modalities such as facial expression and body posture will be perceived by users. This paper analyzes the impacts of nonverbal multimodality in the design of emotion-expressing virtual humans. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition for the virtual human. Then, an experiment evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as of the valence and activation dimensions. Measurements are also carried out on the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence of the virtual human's facial and postural expressions facilitates perception of emotion categories, that categorical recognition is influenced mainly by the facial expression modality, and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for emotion-expressing virtual humans.

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.23 no.2 / pp.71-77 / 2022
  • In this paper, a database is collected for extending a speech synthesis model into a model that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak the sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap and contain expressions matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed on emotional category, intensity, and genuineness. To determine the accuracy according to the modality of the data, the database is divided into audio-video data, audio data, and video data.