• Title/Summary/Keyword: 감정이미지 (emotion image)

Development of Emotion Subtitles Broadcast System based on Terrestrial UHD TV for the Hearing-Impaired (청각장애인을 위한 지상파 UHD 기반 감정표현 자막 송출 시스템 개발)

  • Lee, June;Ahn, Chunghyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.141-144 / 2021
  • Recent terrestrial UHD broadcasting has adopted a closed-caption scheme in which subtitles are delivered over a separate transmission path, rather than the conventional method of embedding them in video packets. These captions can also include images as well as text, which can be used to improve hearing-impaired viewers' comprehension of programs. This paper therefore proposes a transmission system for terrestrial UHD broadcasting that can transmit conventional text captions together with animated image captions (emotion-expression captions) and present both simultaneously, in order to improve hearing-impaired viewers' understanding of broadcast content.

An Investigation of the Objectiveness of Image Indexing from Users' Perspectives (이용자 관점에서 본 이미지 색인의 객관성에 대한 연구)

  • 이지연
    • Journal of the Korean Society for Information Management / v.19 no.3 / pp.123-143 / 2002
  • Developing good methods for image description and indexing is fundamental to successful image retrieval, regardless of the content of the images. Researchers and practitioners in the field have developed a variety of image indexing systems and methods that take into account the types of information delivered by images. Such efforts include Panofsky's levels of image indexing and indexing systems adopting different approaches, such as the thesaurus-based, classification, description element-based, and categorization approaches. This study investigated users' perception of the objectiveness of image indexing, especially the iconographical analysis of image information advocated by Panofsky. Emotion is one of the best examples of the subjectiveness and context-dependence of image information, so this study dealt with visual emotional information. Experiments were conducted in two phases: the first measured the degree of agreement or disagreement about the emotional content of pictures among forty-eight participants, and the second examined inter-rater consistency, defined as the degree of users' agreement on indexing. The results showed that the participants made fairly subjective interpretations when viewing pictures, and that these subjective interpretations stemmed from individual differences in educational and cultural background. The results emphasize the importance of developing new ways of indexing and/or searching for images that can alleviate the limitations on access to images caused by different users' subjective interpretations.
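
The second experimental phase quantifies how far forty-eight raters agree on the emotional content of pictures. The abstract does not reproduce the exact consistency statistic, so the sketch below uses Fleiss' kappa, a standard chance-corrected agreement measure for many raters, on hypothetical count data.

```python
# Sketch: chance-corrected agreement among many raters via Fleiss' kappa.
# Hypothetical counts stand in for the study's actual rating data.
import numpy as np

def fleiss_kappa(ratings):
    """ratings[i, j] = number of raters assigning category j to picture i.
    Every row must sum to the same number of raters."""
    n_raters = ratings.sum(axis=1)[0]
    # Per-picture agreement: fraction of rater pairs that agree.
    p_item = (ratings * (ratings - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
    p_bar = p_item.mean()                         # observed agreement
    p_cat = ratings.sum(axis=0) / ratings.sum()   # category prevalence
    p_e = (p_cat ** 2).sum()                      # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 pictures, 48 raters, 4 emotion categories.
rng = np.random.default_rng(0)
counts = rng.multinomial(48, [0.4, 0.3, 0.2, 0.1], size=5)
print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")
```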

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the image's overall information. However, a CNN may not be well suited to emotional image data that lacks distinctive regional features, and to overcome this difficulty researchers regularly propose CNN-based architectures tailored to emotion images. Studies on the relationship between color and human emotion have shown that different colors induce different emotions, and deep learning studies have applied color information to image sentiment classification: models trained with an image's color information in addition to the image itself classify image emotions more accurately than models trained on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value using statistics based on the picture's colors. The two-color combinations most prevalent across all the training data are found, and at test time the two-color combination most prevalent in each test image is found and the result values are corrected according to the color-combination distribution, with the correction weighting the model's output through expressions built on logarithmic and exponential functions. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as image data, and DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 were used as CNN architectures; performance was compared before and after applying the two-stage learning to each. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used, each carrying its own meaning in color psychology: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in an image are identified, and the RGB coordinates of each are compared with the RGB coordinates of the sixteen colors above; that is, each extracted color is converted to the closest of the sixteen. If combinations of three or more colors were selected, too many combinations would occur and the distribution would be scattered, leaving each combination with too little influence on the result value; therefore two-color combinations were found and weighted into the model. Before training, the most prevalent color combination was found for every training image, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the most prevalent two-color combination is found for each test image, its distribution in the training data is looked up, and the result value is corrected accordingly; several equations were devised to weight the model's output based on the extracted colors, as described above.
The data set was randomly divided 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, so the model was trained five times with a different validation fold each time, and performance was finally checked on the previously separated test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease over five epochs the run was stopped; early stopping was configured to load the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used.
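
The abstract names the components but not the exact equations, so below is a minimal sketch of the color stage under stated assumptions: scikit-learn's KMeans extracts seven dominant colors, each cluster center is snapped to the nearest of the sixteen named colors, and a per-class frequency table of two-color combinations reweights the CNN's softmax output. The palette RGB values, the log-based prior, and the `alpha` mixing factor are illustrative choices, not the authors' formulation.

```python
# Sketch: two-stage correction of CNN emotion scores using dominant colors.
# Assumptions: illustrative RGB values for the 16 named colors, a log-based
# prior, and an alpha mixing factor -- not the paper's exact equations.
import numpy as np
from sklearn.cluster import KMeans

PALETTE = {  # hypothetical RGB coordinates for the paper's 16 colors
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208),
    "pink": (255, 192, 203), "magenta": (255, 0, 255), "brown": (139, 69, 19),
    "gray": (128, 128, 128), "silver": (192, 192, 192), "gold": (255, 215, 0),
    "white": (255, 255, 255), "black": (0, 0, 0),
}
NAMES = list(PALETTE)
COORDS = np.array([PALETTE[n] for n in NAMES], dtype=float)

def dominant_pair(image_rgb):
    """Cluster pixels into 7 colors, snap each cluster center to the nearest
    palette color, and return the two most heavily represented names."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=7)
    votes = {}
    for center, count in zip(km.cluster_centers_, counts):
        name = NAMES[np.argmin(np.linalg.norm(COORDS - center, axis=1))]
        votes[name] = votes.get(name, 0) + count
    return tuple(sorted(sorted(votes, key=votes.get, reverse=True)[:2]))

def build_distribution(train_images, train_labels, n_classes):
    """Per class, count how often each two-color combination dominates."""
    dist = {}
    for img, y in zip(train_images, train_labels):
        pair = dominant_pair(img)
        dist.setdefault(pair, np.zeros(n_classes))[y] += 1
    return dist

def correct_scores(softmax_scores, image_rgb, dist, alpha=0.3):
    """Blend the CNN's softmax with a log-weighted color-combination prior."""
    counts = dist.get(dominant_pair(image_rgb))
    if counts is None or counts.sum() == 0:
        return softmax_scores          # unseen combination: keep CNN output
    prior = np.log1p(counts) / np.log1p(counts).sum()
    return (1 - alpha) * softmax_scores + alpha * prior
```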

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Gang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tae;Park, Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.401-404 / 2007
  • This paper demonstrates that a 2D emotion recognition database can be applied to 3D by means of mirror projection. It also generates facial expressions based on fuzzy modeling using emotion probabilities, and proposes a facial expression function that applies fuzzy theory to the three basic movements driving expressions. The proposed method uses multiple images obtained through mirror projection to transfer the feature vectors used for 2D emotion recognition to 3D, so the nonlinear facial expressions of the real model being modeled in 2D are modeled on a fuzzy basis for its basic emotions. Facial expressions are represented by the six basic emotions of happiness, sadness, disgust, anger, surprise, and fear; the mean value of each emotion is used for its probability, and dynamic facial expressions are generated from the six emotion probabilities. The proposed method is applied to a 3D humanoid avatar and compared with the expression vectors of the real model.

An Android based Contextphone to aware Human Emotion (인간의 감정을 인지하는 안드로이드 기반 컨텍스트폰)

  • Ryu, Yunji;Kim, Sangwook
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.558-561 / 2010
  • A contextphone is a mobile phone that collects and visualizes the user's surrounding context in real time, and it is becoming a part of the body as a sixth human sense. Accordingly, mobile platform technologies that support user-specific context awareness have been studied extensively. However, research on mobile platforms that support social interaction between users, rather than interaction between mobile devices, is scarce, and high-level information such as emotion is not supported. This paper therefore describes a contextphone that allows users to share emotions, built on a contextphone platform that supports various kinds of information including emotion. To recognize the user's emotion, the platform captures the user's face image with the phone camera and passes it to an emotion recognizer. The recognizer extracts features from the user's face, identifies the emotion through a classification algorithm of the kind used in pattern recognition, relays it between users through a context server, and visualizes it on the mobile screen.
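
As a rough illustration of the capture-classify-share flow described above, the sketch below walks one frame through the pipeline. The endpoint URL, payload fields, and the stub classifier are hypothetical; the abstract does not specify the platform's actual APIs.

```python
# Sketch: capture a face image, classify its emotion, share via a context
# server. The URL, payload, and stub classifier are hypothetical.
import cv2
import requests

CONTEXT_SERVER = "http://example.com/context/emotion"  # hypothetical endpoint

def capture_face_image(camera_index=0):
    """Grab a single frame from the device camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    return frame

def classify_emotion(face_bgr):
    """Stand-in for the platform's recognizer: feature extraction followed
    by a pattern-recognition classifier (e.g. an SVM or CNN)."""
    return "happy"  # illustrative label

def share_emotion(user_id, emotion):
    """Relay the recognized emotion to other users via the context server."""
    requests.post(CONTEXT_SERVER, json={"user": user_id, "emotion": emotion})

frame = capture_face_image()
share_emotion("user-123", classify_emotion(frame))
```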

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.1-5 / 2009
  • This paper proposes applying a 2D emotion recognition database to a 3D application by means of a mirror-reflection method, and generates facial expressions through fuzzy modeling using emotion probabilities. The proposed facial expression function applies fuzzy theory to the three basic movements underlying facial expressions. The method transfers the feature vectors used for 2D emotion recognition to the 3D application using multiple mirror-reflected images, yielding a fuzzy model of the nonlinear facial expressions of a real 2D-modeled subject. Average values of the probabilities of the six basic emotions (happy, sad, disgust, angry, surprise, and fear) are used, and dynamic facial expressions are generated via fuzzy modeling. The paper compares and analyzes the feature vectors of the real model with those of a 3D human-like avatar.
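
The paper's fuzzy membership functions and its three basic facial movements are not reproduced in the abstract, so the following sketch shows only the core idea: each basic emotion contributes a basis expression vector, weighted by its emotion probability. The basis vectors and the plain weighted average standing in for fuzzy inference are illustrative.

```python
# Sketch: blend per-emotion basis expression vectors by emotion probability.
# The basis vectors are random placeholders; a plain weighted average stands
# in for the paper's fuzzy inference over three basic facial movements.
import numpy as np

EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]

# Hypothetical basis: one expression feature vector per basic emotion,
# e.g. displacements of facial feature points (8 illustrative values each).
rng = np.random.default_rng(1)
BASIS = {e: rng.normal(size=8) for e in EMOTIONS}

def blend_expression(probabilities):
    """Weight each emotion's basis vector by its probability."""
    total = sum(probabilities.values())
    return sum((p / total) * BASIS[e] for e, p in probabilities.items())

# Example: a mostly happy frame with a trace of surprise.
frame_vector = blend_expression({"happy": 0.7, "surprise": 0.2, "sad": 0.1,
                                 "disgust": 0.0, "angry": 0.0, "fear": 0.0})
```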

Worn Clothing Image, Pursued Clothing Image, and Clothing Purchase Behavior According to Self-Image (자기 이미지에 따른 착용의복이미지, 의복추구이미지 및 의복구매행동)

  • 염인경;김미숙
    • Proceedings of the Costume Culture Conference / 2003.09a / pp.85-86 / 2003
  • Clothing is used as a nonverbal symbol, a silent language for conveying intent: through clothing, people express themselves and at the same time perceive and evaluate others, a role that has become even more pronounced in modern society. Consumers therefore often evaluate and purchase apparel products on the basis of image, an emotional and subjective factor. This study investigated, among female college students, the clothing image actually worn, the clothing image pursued, and clothing purchase behavior according to perceived self-image, with the aim of providing data for apparel product development in the clothing industry. (abridged)

A Study on Repurchase Intention for the Products of Social Enterprise (사회적 기업의 제품 재구매 의도에 미치는 영향에 관한 연구)

  • Kim, Eun-Jung;Kim, Jong-Weon
    • Journal of Korea Society of Industrial Information Systems / v.17 no.1 / pp.105-115 / 2012
  • This study investigated the effects of willing assistance and social responsibility, which motivate consumers to buy products from social enterprises, on functional and emotional values, company image, and repurchase intention. To that end, a survey was conducted among customers buying social enterprise products, and 178 responses were used to verify the research hypotheses through a covariance structural equation model. The results are as follows. First, willing assistance and social responsibility in buying products from social enterprises had significant effects on functional and emotional values. Second, functional and emotional values had significant impacts on company image and repurchase intention. Third, company image had a significant effect on repurchase intention.
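
As an illustration of the structural paths the hypotheses describe, the sketch below writes them in lavaan-style syntax for the semopy Python package, treating each construct as an observed composite score. The file name and the flat measurement treatment are assumptions; the paper's measurement model is not reproduced here.

```python
# Sketch: the study's structural paths in lavaan-style syntax for semopy.
# Constructs are treated as observed composite scores; the CSV file name
# is hypothetical.
import pandas as pd
from semopy import Model

PATHS = """
FunctionalValue ~ WillingAssistance + SocialResponsibility
EmotionalValue ~ WillingAssistance + SocialResponsibility
CompanyImage ~ FunctionalValue + EmotionalValue
Repurchase ~ FunctionalValue + EmotionalValue + CompanyImage
"""

# One row per respondent (n = 178 in the study), one column per construct.
df = pd.read_csv("survey_scores.csv")  # hypothetical composite-score file
model = Model(PATHS)
model.fit(df)
print(model.inspect())  # path coefficients and significance tests
```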

Convolutional Neural Network Model Using Data Augmentation for Emotion AI-based Recommendation Systems

  • Ho-yeon Park;Kyoung-jae Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.57-66 / 2023
  • In this study, we propose a novel research framework for recommendation systems that can estimate the user's emotional state and reflect it in the recommendation process by applying deep learning techniques and emotion AI (artificial intelligence). To this end, we build an emotion classification model that classifies each of seven emotions (angry, disgust, fear, happy, sad, surprise, and neutral) and propose a model that can reflect the results in the recommendation process. However, in typical emotion classification data the distribution ratios of the labels differ widely, so generalized classification results can be hard to obtain. Because the number of samples for emotions such as disgust is often insufficient in emotion image data, this study corrects the imbalance through augmentation. Finally, we propose a method to reflect the augmentation-based emotion prediction model in recommendation systems.
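
The correction step described above amounts to oversampling under-represented emotion classes with augmented copies. A minimal sketch under stated assumptions follows: the torchvision transforms and the target count are illustrative, not the paper's exact recipe.

```python
# Sketch: oversample an under-represented emotion class (e.g. "disgust")
# with simple image augmentations; the transforms are illustrative.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def balance_class(images, target_count):
    """Append augmented copies (PIL images in, PIL images out) until the
    class reaches target_count samples."""
    out = list(images)
    i = 0
    while len(out) < target_count:
        out.append(augment(images[i % len(images)]))
        i += 1
    return out

# Usage: balanced = balance_class(disgust_images, target_count=1000)
```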

A study on Emotional Fashion Design Using Light (빛을 활용한 감성 패션디자인 연구)

  • Jo, Min-Yeong;Choe, Gyeong-Hui
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.214-217 / 2009
  • In contemporary fashion, light is used as both a subject and a medium of designers' artistic expression, and the trend is toward more emotion-oriented directions, so analyzing the expressive methods and formative characteristics of emotional fashion design using light is highly meaningful. In fashion design, light has been applied creatively through reflection, image expression, luminescence, and projection, and a variety of expressive methods have been employed: the play of light obtained by manipulating material properties, the reproduction of light through contrast, form, and color, luminescence from luminescent materials or artificial light, and light imagery through projectors. The formative characteristics of emotional fashion design using light were classified as interactivity, imagery, transparency, and experimentality. Interactivity appeared as transformations driven by the wearer's actions, or as lights and image patterns changing with bodily or emotional states, producing effects of message delivery, emotional expression, body protection, and amusement through changes in form, color, and imagery. Imagery was divided into uses of digital images and luminescence from internal light sources; imagery expressed by creating pictures with the numerous LEDs mounted on a dress or by magnifying images gave priority to aesthetic effect. Transparency mostly arose from the properties of transparent materials such as vinyl, plastic, and functional fabrics, producing effects of openness to the outside and potential for camouflage, or the layering of images changing on transparent surfaces. Experimentality, when fashion is presented as a new experimental tool, used light as a medium to draw attention, add amusement, and evoke mystical fantasy or curiosity. Fashion design using light can thus be realized through diverse expressive methods with light as the medium and possesses distinctive formative characteristics.
