• Title/Summary/Keyword: Interactive avatar control


An Interactive Approach based on Genetic Algorithm Using Hidden Population and Simplified Genotypes for Avatar Synthesis

  • Lee, Jayong;Lee, Janghee;Kang, Hoon
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.120.1-120
    • /
    • 2002
  • In this paper, we propose an interactive genetic algorithm (IGA) to implement automated 2D avatar synthesis. The IGA technique is capable of expressing the user's personality in the avatar synthesis by using the user's response as a candidate for the fitness value. Our suggested IGA method is applied to creating avatars automatically. Unlike previous works, we introduce the concepts of 'hidden population', 'primitive avatar', and 'simplified genotype', which are used to overcome the shortcomings of IGA, such as human fatigue and reliability problems, and to achieve reasonable convergence rates with fewer iterations. The procedure of designing avatar models consists of two steps. The firl...

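The user-in-the-loop fitness idea described in this abstract can be sketched as below. The genome layout (a list of part indices per facial feature), the part counts, and the mutation rate are illustrative assumptions, not the paper's actual encoding; `rate` stands in for the user's rating of a displayed avatar.

```python
import random

# Hypothetical genome: one index per facial part (eyes, hair, mouth).
PART_COUNTS = [4, 4, 4]

def random_genotype():
    return [random.randrange(n) for n in PART_COUNTS]

def mutate(g, rate=0.3):
    # Re-roll each gene with probability `rate`.
    return [random.randrange(n) if random.random() < rate else v
            for v, n in zip(g, PART_COUNTS)]

def evolve(rate, generations=20, pop_size=6):
    """rate(genotype) -> score; here the user's response plays the role
    of the fitness value, as in an interactive GA."""
    pop = [random_genotype() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rate, reverse=True)
        parents = scored[:2]  # keep the two avatars the user liked best
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - 2)]
    return max(pop, key=rate)
```

In a real IGA the expensive step is each call to `rate` (a human judgment), which is why the paper introduces a hidden population and simplified genotypes to cut the number of evaluations shown to the user.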

Emotional Expression through the Selection Control of Gestures of a 3D Avatar (3D 아바타 동작의 선택 제어를 통한 감정 표현)

  • Lee, JiHye;Jin, YoungHoon;Chai, YoungHo
    • Korean Journal of Computational Design and Engineering
    • /
    • v.19 no.4
    • /
    • pp.443-454
    • /
    • 2014
  • In this paper, an intuitive emotional expression method for a 3D avatar is presented. Through motion selection control of the 3D avatar, an easy-to-use form of communication that is more intuitive than emoticons becomes possible. Twelve different avatar emotions are classified into positive emotions such as cheering, being impressed, joy, welcoming, fun, and pleasure, and negative emotions of anger, jealousy, wrath, frustration, sadness, and loneliness. Combinations with lower-body motions are used to represent the additional emotions of amusement, joyousness, surprise, enthusiasm, gladness, excitement, sulking, discomfort, irritation, embarrassment, anxiety, and sorrow. To obtain realistic human postures, motion-capture data in BVH format are used, and the synthesis of the BVH data is implemented by applying the proposed emotional-expression rules of the 3D avatar.
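The selection-control idea above — an emotion picks an upper-body clip, optionally combined with a lower-body clip for the extended emotions — can be sketched as a simple lookup. The clip names and the emotion-to-clip mapping here are hypothetical placeholders, not the paper's actual rules.

```python
# Assumed mapping of emotions to BVH clip names (illustrative only).
UPPER = {"cheering": "raise_arms.bvh", "anger": "cross_arms.bvh"}
LOWER = {"excitement": "jump.bvh", "sulking": "stomp.bvh"}

def select_motion(emotion, lower_emotion=None):
    """Return the list of clips to synthesize: an upper-body clip,
    plus a lower-body clip when a combined emotion is requested."""
    clips = [UPPER.get(emotion, "idle_upper.bvh")]
    if lower_emotion is not None:
        clips.append(LOWER.get(lower_emotion, "idle_lower.bvh"))
    return clips
```

The actual system would then blend the selected clips' BVH channel data into one posture stream rather than merely listing the files.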

Training Avatars Animated with Human Motion Data (인간 동작 데이타로 애니메이션되는 아바타의 학습)

  • Lee, Kang-Hoon;Lee, Je-Hee
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.231-241
    • /
    • 2006
  • Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions is the bottleneck for achieving interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine-learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
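The trial-and-error training the abstract describes is classic tabular Q-learning. A minimal sketch follows, assuming a small discrete state/action space; the paper itself learns over motion-capture clips rather than integer states, so this only illustrates the update rule, not their representation.

```python
import random
from collections import defaultdict

def q_learn(step, actions, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning. step(state, action) -> (next_state, reward, done).
    Q values start at 0 (defaultdict) and are refined by trial and error."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a: Q[(s, a)])
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, a2)] for a2 in actions)
            # Standard Q-learning backup.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

At runtime the learned table makes control cheap: the avatar just takes the greedy action for its current state instead of searching the whole motion set.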

Development of 'Children's Food Avatar' Application for Dietary Education (식생활교육용 '어린이 푸드 아바타' 애플리케이션 개발)

  • Cho, Joo-Han;Kim, Sook-Bae;Kim, Soon-Kyung;Kim, Mi-Hyun;Kim, Gap-Soo;Kim, Se-Na;Kim, So-Young;Kim, Jeong-Weon
    • Korean Journal of Community Nutrition
    • /
    • v.18 no.4
    • /
    • pp.299-311
    • /
    • 2013
  • An educational application (App) called 'Children's Food Avatar' was developed in this study as a smart-learning mobile tool for elementary school students, using a food DB of nutrition and functionality from the Rural Development Administration (RDA). The App was designed to develop children's desirable dietary habits through an on-line activity of choosing foods for a meal from the RDA food DB, provided as a Green Water Mill guide. A customized avatar system was introduced as an element of fun and interactive animation for children: it provides a nutritional evaluation of the selected foods by changing its appearance, facial expression, and speech balloon, thereby giving children chances to correct their food choices toward a balanced diet. In addition, a nutrition information menu was included in the App to help children understand various nutrients, their functions, and a healthy dietary life. When the App was applied to 54 elementary school students for a week in November 2012, significant increases in the levels of dietary knowledge, attitude, and behavior were observed compared with those of the control group (p < 0.05, 0.01). Both elementary students and teachers showed high levels of satisfaction, ranging from 4.30 to 4.89, for the App; therefore, it could be widely used as a smart-learning tool for the dietary education of elementary school students.
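The evaluate-and-react loop described above — score a chosen meal, then change the avatar's expression accordingly — can be sketched as follows. The recommended ranges and the reaction rules here are assumed for illustration; the real App uses the RDA food DB and its own evaluation criteria, which are not reproduced here.

```python
# Assumed per-meal recommended ranges (illustrative, not RDA values).
RECOMMENDED = {"energy_kcal": (500, 800), "protein_g": (15, 35)}

def avatar_reaction(meal_totals):
    """Return (facial expression, speech-balloon text) for a meal,
    based on how many nutrient totals fall in their recommended range."""
    ok = sum(lo <= meal_totals.get(k, 0) <= hi
             for k, (lo, hi) in RECOMMENDED.items())
    if ok == len(RECOMMENDED):
        return "smile", "Well balanced!"
    return ("neutral", "Almost there.") if ok else ("frown", "Try again.")
```

The feedback is deliberately immediate and visual, which is the "element of fun" the abstract credits with prompting children to revise their food choices.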

Networked Visualization for a Virtual Bicycle Simulator (가상현실 자전거 시뮬레이터에서 시각화 네트워크)

  • Lee, J.H.;Han, S.H.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.3
    • /
    • pp.212-219
    • /
    • 2004
  • This paper presents the visualization method of the KAIST interactive bicycle simulator. The simulator consists of two bicycles on 6-DOF and 4-DOF platforms, with force-feedback handlebars and pedal-resistance systems to generate motion sensations; a real-time visual simulator, an HMD, and a beam-projection system; and a 3D sound system. The system has an integrating control network with a server-client structure for multiple simulators. The visual simulator generates dynamic images in real time while communicating with the other modules of the simulator. The operator of the simulator can have a realistic visual experience of riding on a velodrome or through the KAIST campus, while being able to watch the other bicycle with an avatar.
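In a server-client setup like the one described, each client must receive the other rider's state to draw that bicycle as an avatar. A minimal sketch of such a state packet is shown below; the field layout (rider id, position, heading) and the binary format are assumptions for illustration, not the KAIST simulator's actual protocol.

```python
import struct

# Assumed wire format: little-endian uint32 rider id, then three float32s.
STATE_FMT = "<Ifff"  # id, x, y, heading (radians)

def pack_state(rider_id, x, y, heading):
    """Serialize one rider's state for broadcast over the network."""
    return struct.pack(STATE_FMT, rider_id, x, y, heading)

def unpack_state(payload):
    """Deserialize a received state packet into a dict."""
    rider_id, x, y, heading = struct.unpack(STATE_FMT, payload)
    return {"id": rider_id, "x": x, "y": y, "heading": heading}
```

Keeping the packet small and fixed-size is the usual choice when the visual simulator must redraw every frame while other modules (motion platform, sound) share the same control network.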

Hand motion estimation for interactive image composition (상호작용 영상합성을 위한 손의 움직임 추정)

  • Koo, Ddeo-Ol-Ra;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.951-952
    • /
    • 2008
  • This paper proposes a new method for image composition which estimates the rotation angle of the human hand and uses a reserved image in real-time camera images. First, we capture a background image and extract a region of interest by background subtraction. Next, we estimate the skin region within this region and calculate the rotation angle of the estimated skin region using PCA (Principal Component Analysis). Finally, we composite the reserved image at the calculated rotation angle into the camera images. The proposed method can be applied to controlling a 3D avatar for marker-less augmented reality.

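The PCA step in this pipeline reduces to finding the principal axis of the skin-pixel coordinates: the direction of largest variance gives the hand's orientation. A self-contained sketch follows, using the closed-form principal-axis angle of a 2x2 covariance matrix; the segmentation steps before it are assumed to have already produced the pixel list.

```python
import math

def rotation_angle(points):
    """points: list of (x, y) skin-pixel coordinates.
    Returns the principal-axis angle in radians (PCA on 2D points)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Entries of the 2x2 covariance matrix.
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form angle of the dominant eigenvector.
    return 0.5 * math.atan2(2 * sxy, sxx - syy)
```

In practice one would use an optimized routine (e.g. an image library's PCA) on the segmented mask, but the returned angle is the same quantity used to rotate the reserved image before compositing.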

A Brain-Computer Interface Based Human-Robot Interaction Platform (Brain-Computer Interface 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.11
    • /
    • pp.7508-7512
    • /
    • 2015
  • We propose a brain-machine interface (BMI) based human-robot interaction (HRI) platform which operates machines by capturing brain waves and interpreting the user's intentions. The platform consists of capture, processing/mapping, and action parts. A noninvasive brain-wave sensor, a PC, and a robot-avatar/LED/motor are selected as the capture, processing/mapping, and action parts, respectively. Various investigations to establish the relations between intentions and brain-wave sensing have been explored. Case studies (an interactive game, on-off control of LEDs, and motor control) are presented to show the design and implementation process of the new BMI-based HRI platform.
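The capture -> processing/mapping -> action structure can be sketched as a tiny pipeline. This assumes the sensor yields a scalar "attention" reading in [0, 100], as consumer EEG headsets commonly do, and a threshold mapping to an LED command; both the scale and the threshold are illustrative assumptions, not the paper's configuration.

```python
def map_intention(attention, threshold=60):
    """Processing/mapping part: map a scalar brain-wave reading
    (assumed 0-100 attention value) to an action command."""
    return "LED_ON" if attention >= threshold else "LED_OFF"

def run_pipeline(samples, act):
    """Capture part is modeled as an iterable of readings;
    the action part is a callback receiving each mapped command."""
    for value in samples:
        act(map_intention(value))
```

Replacing `act` with a serial write or a robot-avatar command is what distinguishes the LED, motor, and game case studies; the capture and mapping stages stay the same.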

3D Character Motion Synthesis and Control Method for Navigating Virtual Environment Using Depth Sensor (깊이맵 센서를 이용한 3D캐릭터 가상공간 내비게이션 동작 합성 및 제어 방법)

  • Sung, Man-Kyu
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.827-836
    • /
    • 2012
  • After the successful advent of Microsoft's Kinect, many interactive contents that control a user's 3D avatar motions in real time have been created. However, due to the Kinect's intrinsic IR projection problem, users are restricted to facing the sensor directly and performing all motions in a standing position. These constraints are the main reasons it is almost impossible for the 3D character to navigate the virtual environment, which is one of the most required functionalities in games. This paper proposes a new method that makes a 3D character navigate the virtual environment with highly realistic motions. First, in order to detect the user's intention to navigate the virtual environment, the method recognizes a walking-in-place motion. Second, the algorithm applies a motion-splicing technique which automatically segments the character's upper- and lower-body motions and then switches the lower motion to pre-processed motion-capture data naturally. Since the proposed algorithm can synthesize realistic lower-body walking motion from motion-capture data while capturing the upper-body motion in an on-line puppetry manner, it allows the 3D character to navigate the virtual environment realistically.
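The walking-in-place recognition step can be sketched from per-frame skeleton data: alternating left/right knee lifts above a height threshold signal the intent to walk, after which the lower body is switched to mocap data. The joint choice (knee height relative to the standing pose) and the threshold value are assumptions for illustration, not the paper's exact detector.

```python
def walking_in_place(left_knee_y, right_knee_y, lift=0.15):
    """Each argument is a per-frame list of knee heights (meters) relative
    to the standing pose. Returns True if alternating lifts are seen."""
    lifts = []
    for l, r in zip(left_knee_y, right_knee_y):
        if l > lift and l > r:
            side = "L"
        elif r > lift:
            side = "R"
        else:
            continue
        # Record only transitions, so holding one knee up doesn't count twice.
        if not lifts or lifts[-1] != side:
            lifts.append(side)
    # Two or more alternating lifts count as a walking gesture.
    return len(lifts) >= 2
```

Once the gesture fires, the splicing step would keep streaming the sensed upper-body pose while the lower body plays a walk cycle from the pre-processed motion-capture clip.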