Title/Summary/Keyword: Video Emotion

Search results: 146

A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers (영상콘텐츠 시청자의 몰입상황 분석을 위한 몰입감정상태 연구)

  • Kim, Seunghwan; Kim, Cheolki
    • Journal of Korea Multimedia Society, v.21 no.3, pp.400-414, 2018
  • Today's video content is expected to interact with its viewers in order to provide a more personalized experience than before. For a content system to offer such a tailored experience, understanding and analyzing the viewer's situation must be considered first. To this end, it is effective to analyze the viewer's situation by interpreting the viewer's behavior while watching and classifying that behavior into the emotions and states the viewer exhibits during flow. The term 'Flow-emotion-state' proposed in this study denotes the state of a viewer, inferred from the emotions that subsequently arise in response to the target content while the viewer is already engaged in viewing. A viewer's Flow-emotion-state can be used to characterize the viewer's flow-situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modalities to the video content.
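
As a purely illustrative companion to this abstract, the sketch below encodes the core idea as a data structure: observed gestures and facial expressions are the input modalities, and a rule maps each observation to a coarse flow-emotion-state. The modality sets, labels, and rules are invented placeholders, not the paper's taxonomy.

```python
from dataclasses import dataclass
from enum import Enum

class Expression(Enum):
    """Facial expressions used as an input modality (illustrative set)."""
    NEUTRAL = "neutral"
    SMILE = "smile"
    FROWN = "frown"

class Gesture(Enum):
    """Body gestures used as an input modality (illustrative set)."""
    LEANING_IN = "leaning_in"
    LOOKING_AWAY = "looking_away"
    STILL = "still"

@dataclass
class Observation:
    """One sampled observation of a viewer who is already watching."""
    expression: Expression
    gesture: Gesture

def infer_flow_emotion_state(obs: Observation) -> str:
    """Map an (expression, gesture) pair to a coarse flow-emotion-state.

    These labels and rules are placeholders, not the paper's taxonomy.
    """
    if obs.gesture is Gesture.LOOKING_AWAY:
        return "disengaged"
    if obs.expression is Expression.SMILE and obs.gesture is Gesture.LEANING_IN:
        return "positively-engaged"
    if obs.expression is Expression.FROWN:
        return "tensely-engaged"
    return "neutrally-engaged"

print(infer_flow_emotion_state(Observation(Expression.SMILE, Gesture.LEANING_IN)))
```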

A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition (얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구)

  • Jo, In Gu; Kong, Younwoo; Jeon, Soyi; Cho, Seoyeong; Lee, DoHoon
    • Journal of Korea Multimedia Society, v.25 no.2, pp.215-220, 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance keeps improving, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to the popularity of videos, and that the interest prediction model predicted the level of interest with high accuracy.
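
The pipeline the abstract describes can be sketched roughly as follows: per-frame facial emotion probabilities are aggregated into a per-video emotion pattern, and a classifier maps that pattern to an interest label. Everything below, including the emotion set, the feature aggregation, the stand-in data, and the choice of logistic regression, is an assumption for illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # assumed set

def emotion_pattern(frame_probs: np.ndarray) -> np.ndarray:
    """Aggregate (n_frames, n_emotions) probabilities into one feature vector:
    the per-emotion mean and standard deviation over the whole video."""
    return np.concatenate([frame_probs.mean(axis=0), frame_probs.std(axis=0)])

rng = np.random.default_rng(0)
# Stand-in data: 40 videos x 300 frames x 5 emotion probabilities.
videos = rng.dirichlet(np.ones(len(EMOTIONS)), size=(40, 300))
X = np.stack([emotion_pattern(v) for v in videos])
y = rng.integers(0, 2, size=40)  # 1 = interested/popular, 0 = not

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```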

Impact of Immediacy and Self-Monitoring on Positive Emotion and Sense of Community of User: Focusing on Social Interactive Video Platform (근접성과 자기 점검이 사용자의 긍정적 감정과 소속감에 미치는 영향: 소셜 인터랙티브 비디오 플랫폼을 중심으로)

  • Kim, Hyun Young; Kim, Bomyeong; Kim, Jinwook; Shin, Hyunsik; Kim, Jinwoo
    • Science of Emotion and Sensibility, v.19 no.2, pp.3-18, 2016
  • Through video-based communication in a social video platform environment, this research studied how the relationship between a viewer and other viewers influences the user's positive emotion and sense of community. Based on two prior psychological theories, Social Impact Theory and Self-Monitoring Theory, the research built a working video-based social platform environment to verify an alternative that uses new means of video-based interaction. The results show that, while watching a video, users feel greater positive emotion and a stronger sense of community when other people's reactions are shown live on screen and when their own face is displayed alongside, compared to when neither is shown. ANOVA further showed that the increase in positive emotion was greater when the two conditions above were provided simultaneously than when they were not. The results are expected to yield insights into a new form of social video platform.
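
The reported comparison suggests a 2x2 between-subjects design. A hedged sketch of such an analysis appears below: positive-emotion scores are modeled with live reactions (shown or hidden) and own face (shown or hidden) as factors, and a two-way ANOVA tests the main effects and their interaction. The data and column names are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for reactions in (0, 1):        # other viewers' live reactions hidden/shown
    for own_face in (0, 1):     # user's own face hidden/shown
        # Stand-in effect: each condition shifts the mean score upward.
        scores = rng.normal(3.0 + 0.4 * reactions + 0.3 * own_face, 0.5, size=20)
        for s in scores:
            rows.append({"reactions": reactions, "own_face": own_face, "score": s})
df = pd.DataFrame(rows)

model = ols("score ~ C(reactions) * C(own_face)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```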

Affective Effect of Video Playback Style and its Assessment Tool Development (영상의 재생 스타일에 따른 감성적 효과와 감성 평가 도구의 개발)

  • Jeong, Kyeong Ah; Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility, v.19 no.3, pp.103-120, 2016
  • This study investigated how video playback styles affect viewers' emotional responses and then proposed an emotion assessment tool for playback-edited videos. The study involved two in-lab experiments. In the first experiment, observers were asked to express their feelings while watching the original playback and an articulated playback of a video side by side. By controlling speed, direction, and continuity, a total of twelve playback styles were created, and each was applied to five original videos conveying happy, angry, sad, relaxed, and neutral emotions. Thirty college students participated, and more than 3,800 words were collected. The collected words comprised 899 distinct emotion terms, which were classified into 52 emotion categories. The second experiment was conducted to develop a proper emotion assessment tool for playback-edited video. A total of 38 emotion terms extracted from the 899 terms of the first experiment were used as rating scales (given in Korean and scored on a 5-point Likert scale) to assess the affective quality of pre-made video material. Eleven pre-made commercial videos applying different playback styles were collected, each was also transformed back to its initial (unedited) condition, and participants evaluated the edited videos against the unedited versions side by side. Thirty college students evaluated the playback-edited videos in the second study. Based on their judgements, four factors were extracted through factor analysis and labelled "Happy", "Sad", "Reflective", and "Weird (funny and at the same time weird)". Unlike conventional emotion frameworks, the positive and negative ends of the valence dimension emerged as independent factors, while the arousal dimension was only marginally recognized. From these four factors, an emotion assessment tool for playback-edited video was finally proposed, and its practical value and applications were discussed.
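
The second experiment's analysis can be approximated with a small factor-analysis sketch: ratings on 38 emotion-term scales are reduced to four factors. The stand-in random ratings below replace the participants' judgements; with real data one would inspect the loadings to label factors such as "Happy" or "Sad".

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_ratings, n_terms = 330, 38  # e.g., 30 raters x 11 videos, 38 emotion terms
ratings = rng.integers(1, 6, size=(n_ratings, n_terms)).astype(float)  # 5-point Likert

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(ratings)
loadings = fa.components_.T  # (n_terms, 4): how each term loads on each factor
print("strongest-loading term index per factor:", np.abs(loadings).argmax(axis=0))
```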

Video Reality Improvement Using Measurement of Emotion for Olfactory Information (후각정보의 감성측정을 이용한 영상실감향상)

  • Lee, Guk-Hee; Kim, ShinWoo
    • Science of Emotion and Sensibility, v.18 no.3, pp.3-16, 2015
  • Will orange scent enhance video reality if it is presented with a video that vividly shows orange juice? And will a romantic scent improve video reality if it accompanies a date scene? The former concerns reality improvement when concrete objects or places are present in a video; the latter concerns the case when they are absent. This paper reviews previous research that tested diverse videos and scents to answer these two questions, and discusses implications, limitations, and future research directions. In particular, it focuses on measurement methods and results regarding the acceptability of olfactory information, perceived scent similarity, olfactory vividness and video reality, matching between scents and colors (or color temperatures), and the description of various scents using emotional adjectives. We expect this paper to help researchers and engineers interested in using scents to enhance video reality.

A Movie Recommendation Method based on Emotion Ontology (감정 온톨로지 기반의 영화 추천 기법)

  • Kim, Ok-Seob; Lee, Seok-Won
    • Journal of Korea Multimedia Society, v.18 no.9, pp.1068-1082, 2015
  • Due to the rapid advancement of mobile technology, smartphones are now widely used, making it easy to retrieve video content through web and mobile services. However, retrieving particular video content that matches a user's specific preferences is not a trivial problem. Current movie recommendation systems rely on users' preference information but do not consider the emotional aspects of each movie, which leaves users' emotional requirements unsatisfied. To address both preferences and emotional requirements, this research proposes a movie recommendation technique that represents a movie's emotions and their associations. The proposed approach develops an emotion ontology representing the relationships between emotions and the concepts that cause emotional effects. Building on an existing movie metadata ontology, the research also develops a movie-emotion ontology that represents the emotion-related metadata. The proposed method then recommends movies using the movie-emotion ontology and its emotion knowledge, so that users can obtain movie lists based on both their preferences and their emotional requirements.
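
A toy version of the ontology idea, assuming the common rdflib library, might look like the sketch below: movies and emotions are linked by triples, and a SPARQL query retrieves movies that evoke a requested emotion. The vocabulary is invented; the paper's ontology is built on existing movie metadata and is far richer.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/emotion-ontology#")
g = Graph()

# Movies annotated with the emotions they evoke (invented examples).
for movie, emotion in [("MovieA", "Joy"), ("MovieB", "Fear"), ("MovieC", "Joy")]:
    g.add((EX[movie], RDF.type, EX.Movie))
    g.add((EX[movie], EX.evokesEmotion, EX[emotion]))
    g.add((EX[movie], EX.title, Literal(movie)))

# Recommend by querying the ontology for movies that evoke a given emotion.
query = """
SELECT ?title WHERE {
    ?movie a ex:Movie ;
           ex:evokesEmotion ex:Joy ;
           ex:title ?title .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.title)
```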

Development of Emotion Recognition Model Using Audio-video Feature Extraction Multimodal Model (음성-영상 특징 추출 멀티모달 모델을 이용한 감정 인식 모델 개발)

  • Jong-Gu Kim; Jang-Woo Kwon
    • Journal of the Institute of Convergence Signal Processing, v.24 no.4, pp.221-228, 2023
  • Physical and mental changes caused by emotions can affect various behaviors, such as driving or learning. Recognizing these emotions is therefore an important task with uses across industries, for example recognizing and managing dangerous emotional states while driving. In this paper, we address the emotion recognition task with a multimodal model that uses both audio and video data from different domains. Using the RAVDESS dataset, we extract the audio track from each video and obtain audio features with a 2D-CNN model, while video features are extracted with a SlowFast feature extractor. The information from the two domains is then combined into a single feature vector that carries both, and emotion recognition is performed on the combined features. Finally, we compare conventional late-fusion approaches, which combine or vote on the outputs of separate per-modality models, with our approach of unifying the domains through feature extraction, concatenating the features, and performing classification with a single classifier.
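
The fusion step the abstract describes can be sketched in PyTorch as follows: an audio feature vector (e.g., from a 2D-CNN over spectrograms) and a video feature vector (e.g., from a SlowFast extractor) are concatenated into one joint feature and classified. The feature dimensions and classifier head below are assumptions, not the authors' architecture; only the count of eight emotion labels comes from the RAVDESS dataset itself.

```python
import torch
import torch.nn as nn

class LateFeatureFusion(nn.Module):
    def __init__(self, audio_dim=512, video_dim=2304, n_emotions=8):
        super().__init__()
        # RAVDESS labels 8 emotions; 2304 is a common SlowFast feature size.
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, n_emotions),
        )

    def forward(self, audio_feat, video_feat):
        # Concatenate the two domains into one feature carrying both.
        fused = torch.cat([audio_feat, video_feat], dim=-1)
        return self.classifier(fused)

model = LateFeatureFusion()
audio = torch.randn(4, 512)   # stand-in 2D-CNN audio features
video = torch.randn(4, 2304)  # stand-in SlowFast video features
logits = model(audio, video)
print(logits.shape)  # torch.Size([4, 8])
```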

Exploring the Relationships Between Emotions and State Motivation in a Video-based Learning Environment

  • YU, Jihyun; SHIN, Yunmi; KIM, Dasom; JO, Il-Hyun
    • Educational Technology International, v.18 no.2, pp.101-129, 2017
  • This study attempted to collect learners' emotions and state motivation, analyze their inner states, and measure state motivation without a self-report survey. Emotions were measured for each learning segment in detailed learning situations and used to predict total state motivation, as well as to explain state motivation segment by segment. The purpose of this study was to overcome the limitations of video-based learning environments by verifying whether the emotions measured during individual learning segments can indicate the learner's state motivation. Sixty-eight students participated in a 90-minute session in which their emotions and state motivation were measured, and emotions showed statistically significant relationships with both total state motivation and motivation by learning segment. Although the results are not conclusive, since this was an exploratory study, it is meaningful that they show that emotions during different learning segments can indicate state motivation.
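
The reported relationship could be probed with a regression sketch like the one below, in which segment-level emotion measures predict a learner's overall state motivation. All data and dimensions except the participant count are invented, and the study's actual instruments and analysis are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_learners, n_segments, n_emotions = 68, 6, 4  # 68 participants; rest assumed
emotions = rng.random((n_learners, n_segments, n_emotions))

# Flatten each learner's per-segment emotion measures into one predictor vector.
X = emotions.reshape(n_learners, -1)
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + rng.normal(scale=0.1, size=n_learners)  # stand-in motivation

reg = LinearRegression().fit(X, y)
print("R^2 on the stand-in data:", round(reg.score(X, y), 3))
```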