• Title/Summary/Keyword: Affective Intelligent Environment

Search Result 3, Processing Time 0.019 seconds

Web 2.0 and Strategic Service Quality Management (Web 2.0와 전략적인 서비스 품질경영)

  • Kim, Gye-Su
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 2007.04a
    • /
    • pp.267-272
    • /
    • 2007
  • A business is an organized, profit-seeking activity that provides goods and services designed to satisfy customers' needs. For years, Internet users have dreamed of intelligent software agents that automatically prowl the online world and bring back high-quality information. In the era of Web 2.0, Internet users can communicate with each other on open platforms (e-mail, Naver Knowledge-In, Cyworld, Wikipedia, blogs, etc.). The author investigates the impact of Web 2.0 users' perceived value, affective commitment, and brand equity on loyalty intention. The results suggest that excellent service quality will be an alternative solution in the Web 2.0 business environment.

  • PDF

A Study on Developing Interior Color Design based on Psychophysiological Responses (감성생리 실험을 이용한 실내 색채 디자인에 관한 연구)

  • 김주연;이현수
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2003.11a
    • /
    • pp.1141-1144
    • /
    • 2003
  • This study aimed to analyze affective color data for future emotionally responsive interior spaces through psychological questionnaire responses and physiological signal analysis. EEG measurement was conducted along with a questionnaire using a 7-point semantic differential (SD) scale, and the relationship between qualitative emotional vocabulary and quantitative physiological signal results was analyzed. In contrast to existing color-scheme design based on intuition and subjectivity, and to color research based on psychological analysis alone, the significance of this study lies in linking an objective, scientific method with subjective emotional vocabulary. Such research can serve as an important foundation for affective intelligent environment design that responds to emotion in advance, and, in light of modern people's growing interest in health, can also suggest color schemes that support health-promoting environment design.

  • PDF
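The study above correlates qualitative 7-point SD-scale ratings with quantitative physiological signals (EEG). A minimal sketch of that kind of analysis is a Pearson correlation between the two measurement series; the data values and the variable names below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: correlating 7-point SD-scale ratings with an EEG measure.
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical data: mean SD-scale ratings (1-7) for five interior color
# schemes, and relative EEG alpha power recorded while viewing each scheme.
sd_ratings  = [2.1, 3.4, 4.0, 5.2, 6.3]
alpha_power = [0.18, 0.22, 0.25, 0.31, 0.36]

print(f"Pearson r = {pearson(sd_ratings, alpha_power):.3f}")
```

A strong positive r would indicate that the subjective emotional vocabulary tracks the physiological signal, which is the kind of relationship the study set out to establish.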

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2E
    • /
    • pp.98-104
    • /
    • 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained them to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voices, we used prosodic parameters such as pitch signals, energy, and their derivatives, which were then trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, and these feature parameters were then trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.
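The abstract above combines two modality-specific recognizers (an HMM over prosodic voice features, an NN over facial features). One common way to realize such a combination is late fusion of the per-emotion scores; the sketch below illustrates that idea with a weighted average, where the scores, weight, and function names are assumptions for illustration, not the paper's actual method:

```python
# Hypothetical late-fusion sketch: combine per-emotion scores from a
# voice model (e.g., HMM likelihoods) and a face model (e.g., NN outputs).
EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse(voice_scores, face_scores, w_voice=0.5):
    """Return the emotion with the highest weighted-average fused score."""
    def normalize(scores):
        # Rescale each modality's scores to sum to 1 so they are comparable.
        total = sum(scores.values())
        return {e: s / total for e, s in scores.items()}
    v, f = normalize(voice_scores), normalize(face_scores)
    combined = {e: w_voice * v[e] + (1 - w_voice) * f[e] for e in EMOTIONS}
    return max(combined, key=combined.get)

voice = {"anger": 0.2, "happiness": 0.5, "sadness": 0.2, "surprise": 0.1}
face  = {"anger": 0.1, "happiness": 0.6, "sadness": 0.1, "surprise": 0.2}
print(fuse(voice, face))  # prints "happiness": both modalities favor it
```

Fusing at the score level lets each modality compensate for the other's weaknesses, which is consistent with the abstract's finding that the combined parameters outperformed either modality alone.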