• Title/Summary/Keyword: Emotion Indexing


An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.48 no.1
    • /
    • pp.53-73
    • /
    • 2014
  • Given the recent growth of emotion-based computing environments, emotional access to and use of multimedia resources, including images, deserve attention. This study aims to identify the visual cues that convey emotion in images. Five basic emotions (love, happiness, sadness, fear, and anger) were selected, and twenty participants were interviewed to elicit the visual cues associated with each emotion. A total of 620 visual cues mentioned by the participants were collected from the interviews and coded into five categories and 18 sub-categories. The findings show that facial expressions, actions/behaviors, and syntactic features are significant for perceiving a specific emotion in an image, and that each emotion has distinctive cue characteristics: love is strongly related to actions and behaviors; happiness is substantially related to facial expressions; sadness is perceived primarily through actions and behaviors; fear is perceived considerably through facial expressions; and anger is highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing can be effective when content-based features are considered in combination with concept-based features.

A PROPOSAL OF SEMI-AUTOMATIC INDEXING ALGORITHM FOR MULTI-MEDIA DATABASE WITH USERS' SENSIBILITY

  • Mitsuishi, Takashi;Sasaki, Jun;Funyu, Yutaka
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.04a
    • /
    • pp.120-125
    • /
    • 2000
  • We propose a semi-automatic, dynamic indexing algorithm for multimedia databases (e.g., movie and audio files), for which indexes expressing emotional or abstract content are difficult to create, driven by users' sensibilities as reflected in their access histories. In this algorithm, we first categorize the data simply, create a vector space of each user's interests (user model) from the history of which categories the accessed items belong to, and create a vector space for each item (title model) from the history of which users have accessed it. By iterating this process, we can create suitable indexes that reflect the emotional content of each item. In this paper, we define recurrence formulas based on the proposed algorithm and show its effectiveness through simulation results.

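The abstract does not reproduce the recurrence formulas themselves; a minimal sketch of the mutual-update idea, assuming an exponential-moving-average update over a category space (the learning rates and the three-category encoding are illustrative assumptions, not the authors' exact formulas):

```python
import numpy as np

CATEGORIES = ["action", "drama", "comedy"]  # illustrative category set

def update_user_model(user_vec, item_vec, alpha=0.2):
    """Move the user's interest vector toward the accessed item's vector."""
    return (1 - alpha) * user_vec + alpha * item_vec

def update_title_model(item_vec, user_vec, alpha=0.1):
    """Move the item's (title) vector toward the accessing user's vector."""
    return (1 - alpha) * item_vec + alpha * user_vec

# A user who repeatedly accesses a "drama" item drifts toward that category,
# while the item's index vector absorbs the profile of its audience.
user = np.ones(len(CATEGORIES)) / len(CATEGORIES)  # uninformed prior
item = np.array([0.0, 1.0, 0.0])                   # item initially categorized as drama
for _ in range(10):
    user = update_user_model(user, item)
    item = update_title_model(item, user)

print(user.round(3))  # interest now concentrated on the "drama" dimension
```

Continuing these updates over many accesses is what lets the title vectors act as emergent, sensibility-aware indexes.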

Towards Next Generation Multimedia Information Retrieval by Analyzing User-centered Image Access and Use (이용자 중심의 이미지 접근과 이용 분석을 통한 차세대 멀티미디어 검색 패러다임 요소에 관한 연구)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.51 no.4
    • /
    • pp.121-138
    • /
    • 2017
  • As information users seek multimedia with a wide variety of information needs, information environments for multimedia have developed rapidly. In particular, as seeking multimedia through emotional access points has become popular, the need for indexing abstract concepts, including emotions, has grown. This study analyzes index terms extracted from the Getty Image Bank. Five basic emotion terms (sadness, love, horror, happiness, and anger) were used when collecting the index terms, yielding a total of 22,675 terms. The data form three sets: all emotions, positive emotions, and negative emotions. For each set, a co-word occurrence matrix was created and visualized as a weighted network with PNNC clusters. The all-emotion network shows three clusters and 20 sub-clusters, whereas the positive and negative emotion networks show 10 clusters each. The results point to three elements for the next generation of multimedia retrieval: (1) analysis of index terms for the emotions shown by people in images; (2) the relationship between connotative and denotative terms, and the possibility of inferring connotative terms from denotative ones using that relationship; and (3) the importance of a thesaurus of connotative terms for expanding related terms and synonyms into better access points.
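The co-word matrices behind such networks can be built directly from per-image index term sets; a minimal sketch (the toy term sets are invented for illustration, not Getty data):

```python
from itertools import combinations
from collections import Counter

# Each inner list stands for the index terms of one image (toy data).
term_sets = [
    ["sadness", "rain", "alone"],
    ["sadness", "alone", "window"],
    ["love", "couple", "sunset"],
    ["love", "couple", "rain"],
]

cooc = Counter()
for terms in term_sets:
    for a, b in combinations(sorted(set(terms)), 2):
        cooc[(a, b)] += 1  # undirected co-occurrence within one image

# The counts are the edge weights of a weighted co-word network.
print(cooc[("alone", "sadness")])  # -> 2: they co-occur in two images
```

Clustering algorithms such as PNNC then operate on this weighted edge list.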

A multidisciplinary analysis of the main actor's conflict emotions in Animation film's Turning Point (장편 애니메이션 극적전환점에서 주인공의 갈등 정서에 대한 다학제적 분석)

  • Lee, Tae Rin;Kim, Jong Dae;Liu, Guoxu;Ingabire, Jesse;Kim, Jae Ho
    • Korea Science and Art Forum
    • /
    • v.34
    • /
    • pp.275-290
    • /
    • 2018
  • The study began with the recognition that the animations movie need objective and reasonable methods to classify conflicts in visual to analyze conflicts centering on narratives. Study the emotions of the hero in conflict. The purpose of the study is to analyze conflict intensity and emotion. The results and contents of the study are as follows. First, we found a Turning Point and suggested a conflict classification model (Conflict 6B Model). Second, Based on the conflict classification model, the conflict based shot DB was extracted. Third, I found strength and emotion in inner and super personal conflicts. Fourth, Experiments and tests of strength and emotion were conducted in internal and super personal conflicts. The results of this study are metadata extracted from the emotional research on conflict. It is expected to be applied to video indexing of conflicts.

An Investigation of the Objectiveness of Image Indexing from Users' Perspectives (이용자 관점에서 본 이미지 색인의 객관성에 대한 연구)

  • 이지연
Journal of the Korean Society for Information Management
    • /
    • v.19 no.3
    • /
    • pp.123-143
    • /
    • 2002
  • Developing good methods for image description and indexing is fundamental to successful image retrieval, regardless of image content. Researchers and practitioners have developed a variety of image indexing systems and methods that consider the types of information images deliver, including Panofsky's levels of image indexing and systems adopting thesaurus-based, classification, description element-based, and categorization approaches. This study investigated users' perception of the objectiveness of image indexing, especially the iconographical analysis of image information advocated by Panofsky. Emotion is one of the best examples of the subjectiveness and condition-dependence of image information, so this study dealt with visual emotional information. Experiments were conducted in two phases: one measured the degree of agreement or disagreement about the emotional content of pictures among forty-eight participants, and the other examined inter-rater consistency, defined as the degree of users' agreement on indexing. The results showed that participants made fairly subjective interpretations when viewing pictures, and that these subjective interpretations resulted from individual differences in educational and cultural background. The results emphasize the importance of developing new ways of indexing and/or searching for images that can alleviate the limits on image access caused by users' differing subjective interpretations.
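Inter-rater consistency of the kind measured in the second phase can be quantified, for example, as mean pairwise agreement among raters; a sketch of one common measure (the four-rater toy labels are invented for illustration):

```python
from itertools import combinations

# ratings[image] = emotion label assigned by each of four raters (toy data)
ratings = {
    "img1": ["sad", "sad", "sad", "fear"],
    "img2": ["happy", "happy", "love", "love"],
}

def pairwise_agreement(labels):
    """Fraction of rater pairs that assigned the same label to one image."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

per_image = {img: pairwise_agreement(l) for img, l in ratings.items()}
print(per_image)  # img1: 3/6 = 0.5, img2: 2/6 ≈ 0.33
```

Chance-corrected statistics such as Fleiss' kappa refine this raw agreement, but the raw pairwise score already exposes how subjective the emotional labels are.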

An Expansion of Affective Image Access Points Based on Users' Response on Image (이용자 반응 기반 이미지 감정 접근점 확장에 관한 연구)

  • Chung, Eun Kyung
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.25 no.3
    • /
    • pp.101-118
    • /
    • 2014
  • Given the rapidly developing ubiquitous computing environment, it is increasingly important for users to be able to search for and use images by their affective meanings. However, indexing the affective meanings of images has been difficult, because emotions in images are substantially subjective and highly abstract, and low-level image features are of limited use for such high-level concepts. To expand affective access points to images, this study utilizes user-provided responses to images. For the data set, emotion words were collected and cleaned from twenty participants viewing fifteen images, three for each of the basic emotions love, sadness, fear, anger, and happiness. A total of 399 unique emotion words appeared 1,093 times in the data set. Through co-word analysis and network analysis of the emotion words in users' responses, this study derives expanded word sets for the five basic emotions, characterized by adjective expressions and action/behavior expressions.
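An expanded word set of the kind derived here can be read off a co-word analysis as the terms that most often accompany a basic emotion word in users' responses; a sketch with invented response data:

```python
from collections import Counter

# Each list stands for one participant's response words to a "sad" image (toy data).
responses = [
    ["sad", "lonely", "crying"],
    ["sad", "gloomy", "crying"],
    ["lonely", "crying", "dark"],
]

def expand(seed, responses, k=2):
    """Top-k words co-occurring with `seed` in the same response."""
    counts = Counter()
    for words in responses:
        if seed in words:
            counts.update(w for w in words if w != seed)
    return [w for w, _ in counts.most_common(k)]

print(expand("sad", responses))  # "crying" co-occurs with "sad" most often
```

The resulting neighbor lists are exactly the candidate access points the study proposes to add alongside each basic emotion term.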

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.92-96
    • /
    • 2004
  • Research on emotion currently attracts great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more work on emotion synthesis and emotion recognition has been conducted for different purposes, each approach using its own methods and parameters measured on the speech signal. In this paper, we propose using a short-time parameter, MFCC (Mel-Frequency Cepstrum Coefficient) features, with a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero-crossing rate, phonetic rate, LPC, and their derivatives) are also tested and combined with MFCC coefficients to find the best combination. Other models, GMM and discrete and continuous HMM, are studied as well, in the hope that continuous distributions and the temporal behavior of this feature set will improve recognition quality. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach yields results even better than those obtained on the same database by human evaluation through listening, without replaying or comparing sentences [8], and the results compare favorably with other approaches.

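The VQ scheme can be sketched as follows: train one codebook per emotion on that emotion's training frames, then label an utterance with the emotion whose codebook quantizes its frames at the least distortion. The sketch below uses random Gaussian vectors as stand-ins for MFCC frames; the codebook size and feature dimension are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_codebook(frames, k=4, iters=20):
    """Plain k-means: the learned centroids serve as the VQ codebook."""
    centroids = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def distortion(frames, codebook):
    """Mean distance from each frame to its nearest codeword."""
    d = np.linalg.norm(frames[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

# Synthetic 12-dim "MFCC" frames: each emotion occupies a different region.
train = {e: rng.normal(loc=i * 3.0, size=(200, 12))
         for i, e in enumerate(["anger", "joy", "sadness"])}
codebooks = {e: kmeans_codebook(f) for e, f in train.items()}

test_utterance = rng.normal(loc=3.0, size=(50, 12))  # drawn like "joy"
predicted = min(codebooks, key=lambda e: distortion(test_utterance, codebooks[e]))
print(predicted)  # the "joy" codebook should give the least distortion
```

Real MFCC frames would come from an audio front end (e.g., a librosa-style extractor) rather than the Gaussian stand-ins used here.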

WebSES : Web Site Sensibility Evaluation System based on Color Combination (WebSES : 배색을 이용한 웹 사이트 감성 평가 시스템)

  • 유헌우;조경자;홍지영;박수이
    • Science of Emotion and Sensibility
    • /
    • v.7 no.1
    • /
    • pp.51-64
    • /
    • 2004
  • In this paper, we propose a web page retrieval system based on the sensibility evoked by the color combinations of web pages. The implemented system consists of two modules: an indexing module that automatically extracts and indexes color information from web pages, and a retrieval module that retrieves web pages by color combination when a sensibility adjective is given. To verify the system's usefulness, we compared the rankings of web pages produced by the system with those produced by human subjects (non-experts and experts in color web page design) using two statistical methods, correlation analysis and the paired t-test. For non-experts, the system was suitable for 10 of 18 sensibility adjectives; for experts, it was suitable for 14 of 18.

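A minimal sketch of the two-module idea, assuming each page is indexed by a coarse color histogram and each sensibility adjective is mapped to a reference color profile (the adjective-to-color table and the three-bin histogram are invented for illustration, not WebSES's actual color model):

```python
import numpy as np

# Indexing module output: per-page color histogram over (warm, cool, neutral).
page_index = {
    "pageA": np.array([0.7, 0.1, 0.2]),
    "pageB": np.array([0.1, 0.8, 0.1]),
}

# Invented reference color profiles for two sensibility adjectives.
adjective_profiles = {
    "warm": np.array([0.8, 0.1, 0.1]),
    "cool": np.array([0.1, 0.8, 0.1]),
}

def retrieve(adjective):
    """Rank pages by closeness of their color histogram to the adjective profile."""
    ref = adjective_profiles[adjective]
    return sorted(page_index, key=lambda p: np.linalg.norm(page_index[p] - ref))

print(retrieve("warm"))  # pageA ranks first for "warm"
```

The evaluation in the paper then amounts to correlating such system rankings with human rankings per adjective.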

An Emotion-Based Retrieval System for Textile Images (텍스타일 영상에서의 감성 기반 검색 시스템)

  • Kim, Young-Rae;Shin, Yun-Hee;Kim, Eun-Yi
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2009.05a
    • /
    • pp.82-87
    • /
    • 2009
  • This paper proposes a system that automatically indexes and retrieves textile images based on emotion. The proposed system consists of an image collector, an emotion indexer, a matcher, and a query interface. The emotion indexer recognizes emotional concepts from the color and pattern information contained in a textile image and uses them to index the image, employing the eight emotion terms defined by Kobayashi (romantic, natural, casual, elegant, chic, classic, dandy, modern). In the query interface, users can choose between two query modes: the first uses an emotion keyword, and the second is query-by-example, using an image that expresses the user's intent. Given a query, the matcher generates results with a ranking algorithm whose similarity measure depends on the selected query mode. To validate the system, 50 users familiar with web search (32 men, 18 women) evaluated 3,416 images collected from the web on three criteria: relevance, search effort, and satisfaction. Although relevance scores for the retrieved images were low, satisfaction and effort were rated highly, and users could find their preferred images within the top 40 results. This demonstrates that the proposed system can efficiently retrieve the images users want.

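The two query modes over Kobayashi's eight emotion terms can be sketched as follows, assuming the indexer assigns each textile a score per emotion term (the score vectors are toy values, not the system's actual output):

```python
import numpy as np

TERMS = ["romantic", "natural", "casual", "elegant",
         "chic", "classic", "dandy", "modern"]

# Emotion indexer output: one score per emotion term for each textile (toy data).
index = {
    "tex1": np.array([0.9, 0.1, 0.0, 0.6, 0.1, 0.2, 0.0, 0.1]),
    "tex2": np.array([0.0, 0.2, 0.8, 0.0, 0.3, 0.1, 0.2, 0.7]),
}

def query_by_keyword(term):
    """Rank textiles by their score on a single emotion keyword."""
    i = TERMS.index(term)
    return sorted(index, key=lambda t: -index[t][i])

def query_by_example(example_vec):
    """Rank textiles by cosine similarity to an example image's emotion vector."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return sorted(index, key=lambda t: -cos(index[t], example_vec))

print(query_by_keyword("romantic"))    # tex1 ranks first
print(query_by_example(index["tex2"])) # tex2 ranks first
```

This mirrors the design choice in the abstract: one matcher, but a similarity measure that switches with the query mode.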

Automatic extraction of similar poetry for study of literary texts: An experiment on Hindi poetry

  • Prakash, Amit;Singh, Niraj Kumar;Saha, Sujan Kumar
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.413-425
    • /
    • 2022
  • The study of literary texts is one of the earliest disciplines practiced around the globe. Poetry is artistic writing in which words are carefully chosen and arranged for their meaning, sound, and rhythm. Poetry usually has a broad and profound sense that makes it difficult to interpret even for humans. The essence of poetry is Rasa, which signifies mood or emotion. In this paper, we propose a poetry classification-based approach to automatically extract similar poems from a repository. Specifically, we perform a novel Rasa-based classification of Hindi poetry. For the task, we primarily used lexical features in a bag-of-words model trained using the support vector machine classifier. In the model, we employed Hindi WordNet, Latent Semantic Indexing, and Word2Vec-based neural word embedding. To extract the rich feature vectors, we prepared a repository containing 37,717 poems collected from various sources. We evaluated the performance of the system on a manually constructed dataset containing 945 Hindi poems. Experimental results demonstrated that the proposed model attained satisfactory performance.
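The bag-of-words pipeline can be sketched minimally as follows. The toy "poems" are English stand-ins for Hindi lines, the two Rasa labels are only examples, and a nearest-centroid classifier stands in for the paper's SVM to keep the sketch dependency-free:

```python
import numpy as np

# Toy Rasa-labelled "poems" (English stand-ins for Hindi lines).
train = [
    ("tears sorrow night alone", "karuna"),  # karuna: pathos
    ("sorrow tears rain", "karuna"),
    ("laugh dance joy festival", "hasya"),   # hasya: mirth
    ("joy laugh play", "hasya"),
]

vocab = sorted({w for text, _ in train for w in text.split()})

def bow(text):
    """Bag-of-words count vector over the training vocabulary."""
    counts = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            counts[vocab.index(w)] += 1
    return counts

# Nearest-centroid stand-in for the paper's SVM classifier.
centroids = {}
for rasa in {"karuna", "hasya"}:
    vecs = [bow(t) for t, r in train if r == rasa]
    centroids[rasa] = np.mean(vecs, axis=0)

def classify(text):
    v = bow(text)
    return min(centroids, key=lambda r: np.linalg.norm(v - centroids[r]))

print(classify("joy and laugh tonight"))  # -> hasya
```

Retrieval of "similar poems" then reduces to returning repository poems that share the query poem's predicted Rasa.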