• Title/Summary/Keyword: Intelligent Synthetic Character (지능형 합성 캐릭터)

An Intelligent Synthetic Character Based on User Behavior Inference/Prediction for Smartphone (스마트폰 상에서의 사용자 행위추론/예측기반 지능형 합성 캐릭터)

  • 이두호;한상준;조성배
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.475-477 / 2004
  • With the explosive growth in the number of mobile-phone subscribers and improvements in transmission speed, smartphones, as high-performance mobile phones, are attracting attention, and the need for intelligent services running on smartphones is increasing. This paper proposes an intelligent character as a way to implement intelligent services on a smartphone. Because users find a character familiar, it can serve as a good interface for intelligent services. The proposed character selects its behaviors based on the user's emotional state, degree of busyness, and other information inferred with a Bayesian network, together with the device state collected from the smartphone, so that its actions reflect both the user's and the device's state. Working examples demonstrate the usefulness of the proposed character. (An illustrative sketch of this behavior-selection idea follows below.)

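The following is a minimal sketch of the idea described in the abstract, not the paper's actual network or code: a tiny hand-rolled Bayesian network infers the user's busyness from smartphone observations, and a simple rule maps the posterior together with the device state to a character behavior. All node names, probabilities, and behavior labels are assumptions.

```python
# Minimal sketch (not the paper's actual network): infer the user's
# "busyness" from smartphone observations with a tiny hand-rolled
# Bayesian network, then map the posterior to a character behavior.
# Node names, probabilities, and behaviors are illustrative assumptions.

# Structure: Busy -> MissedCalls, Busy -> ScheduleFull (naive-Bayes shape).
P_BUSY = {True: 0.3, False: 0.7}                    # prior P(Busy)
P_MISSED = {True: {True: 0.8, False: 0.2},          # P(MissedCalls | Busy)
            False: {True: 0.2, False: 0.8}}
P_SCHEDULE = {True: {True: 0.7, False: 0.3},        # P(ScheduleFull | Busy)
              False: {True: 0.1, False: 0.9}}

def posterior_busy(missed_calls: bool, schedule_full: bool) -> float:
    """P(Busy=True | evidence), computed by direct enumeration."""
    joint = {busy: P_BUSY[busy]
                   * P_MISSED[busy][missed_calls]
                   * P_SCHEDULE[busy][schedule_full]
             for busy in (True, False)}
    return joint[True] / (joint[True] + joint[False])

def select_behavior(p_busy: float, battery_low: bool) -> str:
    """Map the inferred user state and device state to a character behavior."""
    if battery_low:
        return "warn_low_battery"
    if p_busy > 0.7:
        return "stay_quiet"          # do not disturb a busy user
    if p_busy < 0.3:
        return "greet_playfully"
    return "idle_animation"

p = posterior_busy(missed_calls=True, schedule_full=True)
print(f"P(busy) = {p:.2f} -> behavior: {select_behavior(p, battery_low=False)}")
```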

Usability Test and Behavior Generation of Intelligent Synthetic Character using Bayesian Networks and Behavior Networks (베이지안 네트워크와 행동 네트워크를 이용한 지능형 합성 캐릭터의 행동 생성 및 사용성 평가)

  • Yoon, Jong-Won;Cho, Sung-Bae
    • Journal of KIISE: Computing Practices and Letters / v.15 no.10 / pp.776-780 / 2009
  • As smartphones have recently emerged as suitable devices for implementing ubiquitous computing, many researchers are studying personalized intelligent services on smartphones. An intelligent synthetic character is one of them. This paper proposes a method for generating the behaviors of an intelligent synthetic character. In order to generate more natural behaviors for the character, Bayesian networks are exploited to infer the user's states and the OCC model is utilized to create the character's emotions. After the contexts are inferred, behaviors are generated through behavior selection networks using this information. A usability test verifies the usefulness of the proposed method. (A simplified sketch of the appraisal and behavior-selection pipeline follows below.)
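Below is a greatly simplified sketch of the pipeline described in the abstract, not the authors' implementation: a toy OCC-style appraisal produces an emotion label, and a stripped-down behavior-selection step picks an executable behavior. Full behavior networks also spread activation between behaviors, which this sketch omits; all names, rules, and weights are assumptions.

```python
# Greatly simplified sketch, not the authors' implementation: a toy
# OCC-style appraisal maps an appraised event to an emotion label, and a
# stripped-down behavior-selection step picks an executable behavior.
# Full behavior networks also spread activation between behaviors; that
# is omitted here. All names and rules are assumptions.
from dataclasses import dataclass

def occ_appraise(desirable: bool, confirmed: bool) -> str:
    """Toy OCC appraisal over event desirability and whether it occurred."""
    if desirable:
        return "joy" if confirmed else "hope"
    return "distress" if confirmed else "fear"

@dataclass
class Behavior:
    name: str
    preconditions: set            # context propositions that must hold
    activation: float = 0.0       # accumulated activation

BEHAVIORS = [
    Behavior("dance",   {"emotion:joy"}),
    Behavior("comfort", {"emotion:distress"}),
    Behavior("remind",  {"schedule_full"}),
    Behavior("idle",    set()),
]

def select(context: set) -> Behavior:
    """Pick the executable behavior with the highest activation."""
    for b in BEHAVIORS:
        # Simple activation rule (assumption): +1 per satisfied precondition.
        b.activation = sum(1.0 for p in b.preconditions if p in context)
    executable = [b for b in BEHAVIORS if b.preconditions <= context]
    return max(executable, key=lambda b: b.activation)

emotion = occ_appraise(desirable=True, confirmed=True)      # -> "joy"
context = {f"emotion:{emotion}", "schedule_full"}
print(select(context).name)                                 # -> "dance"
```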

Modelling Perceptual Attention for Augmented Reality Agents (증강 현실 에이전트를 위한 지각 주의 모델링)

  • Oh, Se-Jin;Woo, Woon-Tack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.3 / pp.51-58 / 2010
  • Since Augmented Reality (AR) enables users to experience computer-generated content embedded in real environments, AR agents can be visualized among the physical objects in the environments where users exist and can directly interact with them in real time. We model perceptual attention for autonomous agents in AR environments where virtual and physical objects coexist. Since such AR agents must adaptively perceive and attend to the surrounding objects relevant to their goals, our model allows the agents to determine the currently visible objects from a description of which virtual and physical objects are configured in the camera's viewing area. A degree of attention is assigned to each perceived object based on its relevance to the agent's goals. The agents can then focus on a reduced set of perceived objects according to the estimated degree of attention. To demonstrate the effectiveness of our approach, we implemented an animated character that was overlaid on a miniature version of a campus and that attended to buildings relevant to its goals. Experiments showed that our model could reduce the character's perceptual load even when the surroundings change. (An illustrative sketch of the visibility and attention scoring follows below.)
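A rough sketch of the attention pipeline described above; the object fields, field-of-view test, and relevance scoring are assumptions rather than the paper's model. Objects inside the camera's viewing cone are kept, each is scored by relevance to the agent's goals, and attention is focused on a small top-scoring subset.

```python
# Illustrative sketch only; the object fields, field-of-view test, and
# relevance scoring below are assumptions, not the paper's model.
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple               # (x, y) in the agent's world frame
    tags: set                     # semantic labels used for goal relevance

def visible(obj, cam_pos, cam_dir, fov_deg=60.0, max_range=50.0):
    """True if obj lies inside the camera's field of view and range."""
    dx, dy = obj.position[0] - cam_pos[0], obj.position[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_range:
        return dist == 0.0
    cos_angle = (dx * cam_dir[0] + dy * cam_dir[1]) / (dist * math.hypot(*cam_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= fov_deg / 2

def attention(obj, goals):
    """Degree of attention: fraction of goal tags the object matches."""
    return len(obj.tags & goals) / max(len(goals), 1)

def attended_objects(objects, cam_pos, cam_dir, goals, top_k=3):
    """Visible objects, ranked by goal relevance, reduced to the top few."""
    scored = [(attention(o, goals), o) for o in objects
              if visible(o, cam_pos, cam_dir)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [o for score, o in scored[:top_k] if score > 0]

campus = [
    SceneObject("library",   (10, 2),  {"study", "books"}),
    SceneObject("cafeteria", (8, -1),  {"food"}),
    SceneObject("gym",       (-20, 5), {"exercise"}),   # behind the camera
]
focus = attended_objects(campus, cam_pos=(0, 0), cam_dir=(1, 0),
                         goals={"study", "food"})
print([o.name for o in focus])                           # -> ['library', 'cafeteria']
```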