• Title/Summary/Keyword: Video Emotion


A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers (영상콘텐츠 시청자의 몰입상황 분석을 위한 몰입감정상태 연구)

  • Kim, Seunghwan; Kim, Cheolki
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.400-414 / 2018
  • Today's video content is expected to interact with viewers in order to provide a more personalized experience than before. To offer such a friendly experience from the content system's perspective, the viewer's situation must first be understood and analyzed. For this purpose, it is effective to analyze the viewer's situation by inferring the viewer's state from his or her behavior while watching the content and classifying that behavior into the viewer's emotion and state during flow. The term 'Flow-emotion-state' presented in this study denotes the state of the viewer inferred from the emotions that subsequently arise toward the target video content in a situation in which the viewer is already engaged in viewing. A viewer's Flow-emotion-state is expected to help identify characteristics of the viewer's Flow-situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modalities to the video content.

A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition (얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구)

  • Jo, In Gu; Kong, Younwoo; Jeon, Soyi; Cho, Seoyeong; Lee, DoHoon
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.215-220 / 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance keeps improving, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to the popularity of videos and that the interest prediction model achieved high accuracy in predicting the level of interest.
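
The pipeline the abstract outlines, per-frame facial emotions aggregated into a viewer's emotion pattern and then fed to an interest (popularity) classifier, could look roughly like the sketch below. The emotion label set, the `predict_emotion` stand-in, and the logistic-regression choice are assumptions for illustration, not the authors' implementation.

```python
import random

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical emotion label set; the paper's actual categories may differ.
EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def predict_emotion(frame) -> str:
    """Stand-in for a per-frame facial emotion classifier.
    Returns a random label so the sketch runs end to end; in practice this
    would wrap a pretrained facial expression recognition model."""
    return random.choice(EMOTIONS)

def emotion_pattern(frames) -> np.ndarray:
    """Aggregate per-frame emotion predictions into a normalized frequency vector."""
    counts = np.zeros(len(EMOTIONS))
    for frame in frames:
        counts[EMOTIONS.index(predict_emotion(frame))] += 1
    return counts / max(counts.sum(), 1)

# Interest prediction: fit a simple classifier on viewers' emotion patterns,
# labeled with whether the watched video was popular (1) or not (0).
def train_interest_model(patterns: np.ndarray, popularity: np.ndarray) -> LogisticRegression:
    return LogisticRegression().fit(patterns, popularity)
```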

Impact of Immediacy and Self-Monitoring on Positive Emotion and Sense of Community of User: Focusing on Social Interactive Video Platform (근접성과 자기 점검이 사용자의 긍정적 감정과 소속감에 미치는 영향: 소셜 인터랙티브 비디오 플랫폼을 중심으로)

  • Kim, Hyun Young; Kim, Bomyeong; Kim, Jinwook; Shin, Hyunsik; Kim, Jinwoo
    • Science of Emotion and Sensibility / v.19 no.2 / pp.3-18 / 2016
  • This research studied how the relationship between a viewer and other watchers in a social video platform environment influences the user's positive emotion and sense of community. Drawing on two prior psychological theories, Social Impact Theory and Self-Monitoring Theory, the study built an actual video-based social platform environment to test an alternative form of interaction built around video. The results show that, while watching a video, users feel greater positive emotion and a stronger sense of community when other people's reactions are shown live on screen and when their own face is displayed alongside, compared to when these are not shown. An ANOVA analysis further showed that the increase in positive emotion was greater when the two conditions were provided together than when they were not. The research is expected to yield insights into a new form of social video platform.
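
For context, the 2x2 comparison the abstract reports (live reactions shown or not, own face shown or not) is typically analyzed with a two-way ANOVA on the positive-emotion scores. The sketch below illustrates that analysis; the CSV file and column names are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: one row per participant with two condition flags and a
# positive-emotion score; the file and column names are assumptions.
df = pd.read_csv("positive_emotion_ratings.csv")  # reactions_live, own_face, positive_emotion

# Two-way ANOVA with interaction, mirroring a 2x2 between-subjects design.
model = smf.ols("positive_emotion ~ C(reactions_live) * C(own_face)", data=df).fit()
print(anova_lm(model, typ=2))
```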

Affective Effect of Video Playback Style and its Assessment Tool Development (영상의 재생 스타일에 따른 감성적 효과와 감성 평가 도구의 개발)

  • Jeong, Kyeong Ah; Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.19 no.3 / pp.103-120 / 2016
  • This study investigated how video playback styles affect viewers' emotional responses to a video and then proposed an emotion assessment tool for playback-edited videos. The study involved two in-lab experiments. In the first experiment, observers were asked to express their feelings while watching videos in the original playback and an articulated playback side by side. By controlling speed, direction, and continuity, a total of twelve playback styles were created, and each was applied to five original videos conveying happy, angry, sad, relaxed, and neutral emotions. Thirty college students participated and more than 3,800 words were collected. The collected words comprised 899 distinct emotion terms, which were classified into 52 emotion categories. The second experiment was conducted to develop a proper emotion assessment tool for playback-edited video. A total of 38 emotion terms extracted from the 899 terms in the first experiment were used as scales (given in Korean and scored on a 5-point Likert scale) to assess the affective quality of pre-made video materials. Eleven pre-made commercial videos that applied different playback styles were collected, transformed into an initial (un-edited) condition as well, and participants evaluated the pre-made videos while comparing them with the initial-condition videos side by side. Thirty college students evaluated the playback-edited videos in the second study. Based on their judgements, four factors were extracted through factor analysis and labelled "Happy", "Sad", "Reflective", and "Weird (funny and at the same time weird)." Unlike conventional emotion frameworks, the positive and negative sides of the valence dimension were treated independently, while the arousal aspect was only marginally recognized. With the four factors from the second experiment, an emotion assessment tool for playback-edited video was finally proposed, and its practical value and applications were discussed.
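
The factor-extraction step described above could be reproduced roughly as follows, assuming a matrix of 5-point Likert ratings over the 38 emotion terms; the file name and the varimax rotation are assumptions, not necessarily the authors' exact procedure.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical ratings: one row per (participant, video) pair,
# one column per emotion term scored on a 5-point Likert scale.
ratings = pd.read_csv("likert_ratings_38_terms.csv")

# Extract four factors, as reported in the abstract; varimax rotation is an
# assumed choice to make the loadings easier to label.
fa = FactorAnalysis(n_components=4, rotation="varimax")
fa.fit(ratings.values)

# Loadings: rows are factors, columns are the 38 emotion terms.
loadings = pd.DataFrame(fa.components_, columns=ratings.columns)
for i, row in loadings.iterrows():
    print(f"Factor {i + 1}:", row.abs().nlargest(5).index.tolist())
```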

Video Reality Improvement Using Measurement of Emotion for Olfactory Information (후각정보의 감성측정을 이용한 영상실감향상)

  • Lee, Guk-Hee; Kim, ShinWoo
    • Science of Emotion and Sensibility / v.18 no.3 / pp.3-16 / 2015
  • Will an orange scent enhance video reality if it is presented with a video that vividly illustrates orange juice? Or will a romantic scent improve video reality if it is presented along with a date scene? The former concerns reality improvement when concrete objects or places are present in a video; the latter concerns the case when they are absent. This paper reviews previous research that tested diverse videos and scents to answer these two questions, and discusses implications, limitations, and future research directions. In particular, it focuses on measurement methods and results regarding the acceptability of olfactory information, perception of scent similarity, olfactory vividness and video reality, matching between scent and color (or color temperature), and the description of various scents using emotional adjectives. We expect this paper to help researchers and engineers who are interested in using scents to improve video reality.

A Movie Recommendation Method based on Emotion Ontology (감정 온톨로지 기반의 영화 추천 기법)

  • Kim, Ok-Seob; Lee, Seok-Won
    • Journal of Korea Multimedia Society / v.18 no.9 / pp.1068-1082 / 2015
  • Due to the rapid advancement of mobile technology, smartphones are widely used in today's society, which has made it easier to retrieve video content through web and mobile services. However, retrieving particular video content that matches a user's specific preferences is not a trivial problem. Current movie recommendation systems are based on users' preference information but do not consider the emotional aspects of each movie, so they fail to satisfy users' emotional requirements. To address both preferences and emotional requirements, this research proposes a movie recommendation technique that represents a movie's emotions and their associations. The approach includes the development of an emotion ontology that represents the relationships between emotions and the concepts that cause emotional effects. Based on an existing movie metadata ontology, the research also developed a movie-emotion ontology that represents the metadata related to emotion. The proposed method recommends movies using the movie-emotion ontology and this emotion knowledge, so users can obtain a list of movies that fits both their preferences and their emotional requirements.
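
A toy sketch of emotion-aware recommendation in the spirit of the abstract: a small concept-to-emotion table stands in for the emotion ontology, movie metadata concepts stand in for the movie-emotion ontology, and movies are ranked by overlap with the emotions a user asks for. All names and mappings are hypothetical illustrations, not the authors' ontology.

```python
from typing import Dict, List, Set

# Stand-in for the emotion ontology: which concepts evoke which emotions.
CONCEPT_TO_EMOTIONS: Dict[str, Set[str]] = {
    "reunion": {"joy", "relief"},
    "betrayal": {"anger", "sadness"},
    "chase": {"tension", "excitement"},
}

# Stand-in for the movie-emotion ontology: movie metadata as concept lists.
MOVIE_CONCEPTS: Dict[str, List[str]] = {
    "Movie A": ["reunion", "chase"],
    "Movie B": ["betrayal"],
}

def movie_emotions(movie: str) -> Set[str]:
    """Derive a movie's emotion set from its metadata concepts."""
    emotions: Set[str] = set()
    for concept in MOVIE_CONCEPTS.get(movie, []):
        emotions |= CONCEPT_TO_EMOTIONS.get(concept, set())
    return emotions

def recommend(desired: Set[str]) -> List[str]:
    """Rank movies by how many of the desired emotions they evoke."""
    scored = [(len(movie_emotions(m) & desired), m) for m in MOVIE_CONCEPTS]
    return [m for score, m in sorted(scored, reverse=True) if score > 0]

print(recommend({"joy", "excitement"}))  # -> ['Movie A']
```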

Exploring the Relationships Between Emotions and State Motivation in a Video-based Learning Environment

  • YU, Jihyun; SHIN, Yunmi; KIM, Dasom; JO, Il-Hyun
    • Educational Technology International / v.18 no.2 / pp.101-129 / 2017
  • This study attempted to collect learners' emotions and state motivation, analyze their inner states, and measure state motivation without relying on self-reported surveys. Emotions were measured by learning segment in detailed learning situations and were used to predict total state motivation; emotion was also used to explain state motivation by learning segment. The purpose of the study was to overcome the limitations of video-based learning environments by verifying whether the emotions measured during individual learning segments can indicate the learner's state motivation. Sixty-eight students participated in a 90-minute session in which their emotions and state motivation were measured, and emotions showed statistically significant relationships with both total state motivation and motivation by learning segment. Although the result is not conclusive because this was an exploratory study, it is meaningful that the study demonstrated the possibility that emotions during different learning segments can indicate state motivation.
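
As a rough illustration of the kind of relationship the study examines, segment-level emotions predicting total state motivation, one could fit a simple regression like the sketch below; the file and column names are hypothetical and this is not the authors' analysis.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per learner, per-segment emotion scores as
# predictors and a total state-motivation score as the target.
data = pd.read_csv("segment_emotions_and_motivation.csv")
X = data.filter(like="segment_")   # e.g. segment_1_joy, segment_2_boredom, ...
y = data["state_motivation"]

# How well do segment-level emotions predict total state motivation?
print(cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean())
```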

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, dashboard cameras, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack the skilled manpower to analyze video, machine learning and artificial intelligence are actively used to assist. In this situation, demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also grown rapidly. However, object detection and tracking faces many conditions that degrade performance, such as an object re-appearing after leaving the recording location and occlusion. Consequently, action and emotion detection models built on top of object detection and tracking also have difficulty extracting data for each object. In addition, deep learning pipelines composed of several models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition as an emotion recognition service. The proposed model uses single-linkage hierarchical clustering for Re-ID together with processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple distance metrics, offers near-real-time processing, and prevents tracking failures caused by objects leaving and re-entering the scene, occlusion, and similar events. By continuously linking each object's action and facial emotion detection results to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model in every frame and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that re-appears after leaving the scene or being occluded can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared earlier. As ways to improve processing performance, we introduce a per-object bounding-box queue and a feature queue that reduce RAM requirements while maximizing GPU throughput, and we introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized by AWS Rekognition with object tracking information. The academic significance of this study is that the two-stage re-identification model achieves real-time performance through these processing techniques, even in a costly pipeline that also performs action and facial emotion detection, without the accuracy loss that comes from using simple metrics to reach real-time speed. The practical implication is that industries that need action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model.
    The proposed model, with its high re-identification accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments on the MOT Challenge dataset, which is used by many international conferences, are needed. We will also investigate the cases the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to conduct additional research applying the model to datasets from various fields related to intelligent video analysis.
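
A condensed sketch of two pieces described in the abstract: single-linkage hierarchical clustering over appearance feature vectors to re-assign identities across tracking failures, and an Intersection-over-Face score for attaching a detected face (and its emotion) to a tracked person box. The feature extractor, distance metric, thresholds, and box format are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def assign_identities(features: np.ndarray, distance_threshold: float = 0.4) -> np.ndarray:
    """Group appearance feature vectors (one per detection, possibly from many
    frames) with single-linkage hierarchical clustering; detections in the same
    cluster are treated as the same identity even if tracking was interrupted.
    Cosine distance over Re-ID embeddings is an assumed choice."""
    tree = linkage(features, method="single", metric="cosine")
    return fcluster(tree, t=distance_threshold, criterion="distance")

def iof(face_box, person_box) -> float:
    """Intersection over Face: overlap area divided by the face box area.
    Boxes are (x1, y1, x2, y2); a high IoF suggests the face lies inside the person box."""
    x1 = max(face_box[0], person_box[0]); y1 = max(face_box[1], person_box[1])
    x2 = min(face_box[2], person_box[2]); y2 = min(face_box[3], person_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return inter / face_area if face_area > 0 else 0.0

def match_face_to_track(face_box, track_boxes: dict, min_iof: float = 0.9):
    """Link a face (and its recognized emotion) to the track whose box best contains it."""
    best_id, best_score = None, 0.0
    for track_id, box in track_boxes.items():
        score = iof(face_box, box)
        if score > best_score:
            best_id, best_score = track_id, score
    return best_id if best_score >= min_iof else None
```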