• Title/Summary/Keyword: Multi-User Interaction

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.751-775
    • /
    • 2006
  • In this paper, we suggest an avatar control technique based on high-level behaviors. We separate behaviors into three levels according to their degree of abstraction and define layered scripts. Layered scripts let the user control avatar behaviors at the abstract level and make the scripts reusable. As the 3D environment gets more complicated, the number of required avatar behaviors increases accordingly, and controlling avatar-object behaviors becomes even more challenging. To solve this problem, we embed avatar behaviors into each environment object, so that the object itself describes how the avatar can interact with it. Even with a large number of environment objects, the system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use interface technique that lets the user control avatars through context menus. Using the behavior information embedded in an object, the system analyzes the object's state and filters the behaviors, so the context menu shows only the behaviors the avatar can currently perform; the user does not need to track the avatar or object state directly. We built a virtual presentation environment and applied our model to it.
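
A minimal sketch (hypothetical, not taken from the paper) of the object-embedded behavior idea described above: each environment object carries the avatar behaviors it supports, and a context menu is built by filtering those behaviors against the object's current state. Class and method names are illustrative only.

```python
# Hypothetical sketch of object-embedded avatar behaviors with
# context-menu filtering; names and preconditions are illustrative.

class EnvironmentObject:
    def __init__(self, name, state, behaviors):
        self.name = name
        self.state = state          # e.g. {"open": False, "locked": False}
        self.behaviors = behaviors  # behavior name -> precondition on the state

    def available_behaviors(self):
        # Filter the embedded behaviors by the object's current state,
        # so the context menu lists only what the avatar can actually do.
        return [name for name, precondition in self.behaviors.items()
                if precondition(self.state)]


door = EnvironmentObject(
    name="door",
    state={"open": False, "locked": False},
    behaviors={
        "open":  lambda s: not s["open"] and not s["locked"],
        "close": lambda s: s["open"],
        "knock": lambda s: not s["open"],
    },
)

print(door.available_behaviors())   # ['open', 'knock'] -> shown in the context menu
```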

Fuzzy Cognitive Map Construction Support System based on User Interaction (사용자 상호작용에 의한 퍼지 인식도 구축 지원 시스템)

  • Shin, Hyoung-Wook;Jung, Jeong-Mun;Cheah, Wooi Ping;Yang, Hyung-Jeong;Kim, Kyoung-Yun
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.12
    • /
    • pp.1-9
    • /
    • 2008
  • The Fuzzy Cognitive Map (FCM), one way to model, describe, and infer reasoning relations, is widely used in knowledge engineering. Although an FCM makes decisions natural and easy to understand and explains the relations between causes and effects smoothly, constructing the reasoning relations involves mathematical fuzziness and complex algorithms, and existing tools rarely provide an interactive user interface. This paper suggests an interactive Fuzzy Cognitive Map (FCM) construction support system. It builds an FCM incrementally from the knowledge of multiple experts. Furthermore, it supports a user-friendly environment by dynamically displaying the structure of the FCM as it is constructed through interaction between the experts and the system.
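
A minimal illustration (assumed, not from the paper) of how a fuzzy cognitive map propagates activation over its weighted concept relations. The weights, the simplified update rule, and the sigmoid squashing function below are placeholders.

```python
import numpy as np

# Hypothetical FCM with three concepts; W[i][j] is the causal weight
# from concept i to concept j, in [-1, 1].
W = np.array([
    [0.0,  0.6, -0.3],
    [0.0,  0.0,  0.8],
    [0.2,  0.0,  0.0],
])

def step(state, W):
    # One simplified FCM inference step: weighted influence of all concepts,
    # squashed into [0, 1] by a sigmoid (memory terms omitted for brevity).
    return 1.0 / (1.0 + np.exp(-(state @ W)))

state = np.array([1.0, 0.0, 0.0])   # initial activation of the concepts
for _ in range(5):
    state = step(state, W)
print(state)                        # activation levels after five iterations
```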

Analyzing Patterns in User's Information Seeking Behavior on the Web (웹 이용자의 정보탐색행위 패턴 분석)

  • Kim, Sung-Jin
    • Journal of the Korean Society for Information Management
    • /
    • v.23 no.4 s.62
    • /
    • pp.197-214
    • /
    • 2006
  • A Web-based environment has highly varied and heterogeneous users. Emphasizing their individual characteristics can make it hard to reach a general understanding of how they seek and use information on the Web. The purpose of this study is to find common patterns in information seeking behavior on the Web by analyzing the series of cognitive moves users make while interacting with the Web. Based on Dervin's concept and the Timeline interview methodology, this study collected 37 Web experience descriptions, consisting of 302 steps, from 21 respondents. The findings show that Web information seeking behavior can be classified into seven types: Starting, Searching, Viewing/Browsing, Examining/Comparing, Finding/Compiling, Deciding/Acting, and Ending. Movement paths through this seven-type process show that users' interaction with the Web is repeated and circulates around the Viewing/Browsing step, and that information seeking behavior on the Web is multi-directional and non-linear.

Detection of Gaze Direction for the Hearing-impaired in the Intelligent Space (지능형 공간에서 청각장애인의 시선 방향 검출)

  • Oh, Young-Joon;Hong, Kwang-Jin;Kim, Jong-In;Jung, Kee-Chul
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.333-340
    • /
    • 2011
  • Human-Computer Interaction (HCI) studies methods of interaction between humans and computers, merging ergonomics and information technology. The intelligent space, a part of HCI, is an important area for providing an effective user interface to people with disabilities, who are often alienated from the information-oriented society. In an intelligent space for the disabled, the way information is provided depends on the type of disability; in this paper we support only the hearing-impaired. Gaze direction detection matters because presenting information at the point the user is gazing at is a very efficient way to deliver it, compared with location-aware methods that require direct contact with the hearing-impaired user. We therefore propose the gaze direction detection method needed to provide residential-life applications for the hearing-impaired. The proposed method detects the user's region from multi-view camera images, generates horizontal and vertical gaze-direction candidates from each camera, and determines the user's gaze direction by comparing the sizes of the candidates. In the experimental results, the proposed method showed a high detection rate for gaze direction and a high foot-sensing rate for the user's position, and demonstrated the feasibility of scenarios for the disabled.
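
A rough sketch (an assumption about the scheme, not the paper's code) of the fusion idea above: each camera proposes gaze-direction candidates with a size score, and the direction whose candidates are largest overall is chosen.

```python
from collections import defaultdict

# Hypothetical sketch: each camera view proposes (horizontal, vertical)
# direction candidates with a size score (e.g. the detected head region);
# the direction with the largest accumulated size wins.

def fuse_gaze_candidates(per_camera_candidates):
    """per_camera_candidates: list of dicts mapping a (horizontal, vertical)
    direction label to a candidate size from one camera view."""
    totals = defaultdict(float)
    for candidates in per_camera_candidates:
        for direction, size in candidates.items():
            totals[direction] += size
    # The chosen gaze direction is the one with the largest total candidate size.
    return max(totals, key=totals.get)

cameras = [
    {("left", "level"): 120.0, ("front", "level"): 80.0},
    {("left", "level"): 95.0,  ("left", "down"): 40.0},
]
print(fuse_gaze_candidates(cameras))   # ('left', 'level')
```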

A Multimodal Interface for Telematics based on Multimodal middleware (미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스)

  • Park, Sung-Chan;Ahn, Se-Yeol;Park, Seong-Soo;Koo, Myoung-Wan
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.41-44
    • /
    • 2007
  • In this paper, we introduce a system in which a car navigation scenario is plugged into a multimodal interface based on multimodal middleware. In a map-based system, combining speech and pen input/output modalities can offer users greater expressive power. To achieve multimodal tasks in car environments, we chose SCXML (State Chart XML), a W3C-standard multimodal authoring language, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the MultiModal Interaction Runtime Framework (MMI). The MMI not only handles GPS signals and the user's multimodal I/O but also combines them with device information, user preferences, and reasoned RDF to give the user intelligent, personalized services. A self-simulation test showed that the middleware accomplishes a navigational multimodal task over multiple users in car environments.
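
A hedged sketch of what wrapping a GPS reading in an EMMA-style annotation might look like before it is handed to the interaction runtime. The element and attribute names below loosely follow the general W3C EMMA pattern but are simplified placeholders, not the paper's actual message format.

```python
import xml.etree.ElementTree as ET

# Hypothetical GPS-to-EMMA conversion; element names are illustrative,
# namespaces and most EMMA annotations are omitted for brevity.

def gps_to_emma(lat, lon, timestamp):
    emma = ET.Element("emma")
    interp = ET.SubElement(emma, "interpretation",
                           {"id": "gps1", "medium": "sensor", "mode": "gps"})
    location = ET.SubElement(interp, "location")
    ET.SubElement(location, "latitude").text = str(lat)
    ET.SubElement(location, "longitude").text = str(lon)
    ET.SubElement(location, "timestamp").text = timestamp
    return ET.tostring(emma, encoding="unicode")

print(gps_to_emma(37.5665, 126.9780, "2007-05-01T10:00:00Z"))
```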

Large-scale Ambient Display Environment for providing Multi Spatial Interaction Interface (멀티 공간 인터랙션 인터페이스 제공을 위한 대규모 앰비언트 디스플레이 환경)

  • Yun, Chang Ok;Park, Jung Pil;Yun, Tae Soo;Lee, Dong Hoon
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.30-34
    • /
    • 2009
  • Recently, systems that provide different interactions depending on the distance between the user and the display have been developed in order to construct ambient or ubiquitous computing environments. We therefore propose a new type of spatial interaction system; our main goal is to provide an interactive domain in a large-scale ambient display environment. We divide the space into two zones depending on the distance from the interaction surface: an interactive zone and an ambient zone. In the interactive zone, users can approach the interaction surface and interact through natural hand touch. When users are outside the interactive zone, the display shows only general information. This system thus offers various interactions and information to users in the ubiquitous ambient environment.
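
A minimal illustration of the two-zone decision described above: touch interaction close to the surface, ambient information farther away. The distance threshold is an assumed value, not one reported in the paper.

```python
# Hypothetical zone classifier for the ambient display; the 1.5 m threshold
# is illustrative only.

def interaction_zone(distance_to_surface_m, threshold_m=1.5):
    if distance_to_surface_m <= threshold_m:
        return "interactive"   # user can touch the surface and interact by hand
    return "ambient"           # display shows only general information

for d in (0.6, 2.4):
    print(d, "->", interaction_zone(d))
```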

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.20-26
    • /
    • 2008
  • As intelligent robots and computers become more widespread, interaction between them and humans is getting more important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from speech signals and facial images. Second, we apply BL (Bayesian Learning) and PCA (Principal Component Analysis), and finally we classify five emotion patterns (normal, happy, anger, surprise, and sad). We also experiment with decision fusion and feature fusion to enhance the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined through fuzzy membership functions. In the feature fusion method, superior features are selected through SFS (Sequential Forward Selection) and fed to an MLP (Multi-Layer Perceptron) neural network to classify the five emotion patterns; the recognized result is then applied to a 2D facial shape to express the emotion.
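
A hedged sketch of the decision-fusion step: per-modality recognition scores are combined through membership-style weights before one of the five emotion classes is picked. The simple convex combination below stands in for the fuzzy membership functions mentioned in the abstract; all scores and weights are placeholders.

```python
import numpy as np

EMOTIONS = ["normal", "happy", "anger", "surprise", "sad"]

def decision_fusion(speech_scores, face_scores, w_speech=0.5, w_face=0.5):
    """Combine per-modality recognition scores (one value per emotion class).
    The weighting here is a plain convex combination, standing in for the
    fuzzy membership functions used in the paper."""
    fused = w_speech * np.asarray(speech_scores) + w_face * np.asarray(face_scores)
    return EMOTIONS[int(np.argmax(fused))]

speech = [0.1, 0.6, 0.1, 0.1, 0.1]   # e.g. scores from a speech-based classifier
face   = [0.2, 0.3, 0.1, 0.3, 0.1]   # e.g. scores from a facial-image classifier
print(decision_fusion(speech, face))  # 'happy'
```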

The Design and Implementation of Multiple Digital Signage Video Sync Technology for Ultra-high Resolution (초고해상도 멀티 디지털 사이니지 영상 동기화 기술의 설계와 구현)

  • Park, Hyoungyill;Yoo, Sunkyu;Moon, Youngtai;Kim, Miok;Shin, Yongtae
    • Journal of Broadcast Engineering
    • /
    • v.21 no.5
    • /
    • pp.651-661
    • /
    • 2016
  • Recent digital signage has evolved in various forms, using large high-resolution displays to give users ultra-high-resolution panoramic content to view, or providing customized advertising through interaction with the user. With the growth of open digital signage, management systems that link ultra-high-definition video content on large multi-signage installations with various service terminal devices and offer web-based interoperability are expected to be actively studied. In this paper, we study an implementation approach for displaying ultra-high-resolution video by linking dozens to more than a hundred high-resolution displays, and a technique for play-syncing high-resolution video across multiple content-player terminals by synchronizing individual pieces of ultra-high-definition content that do not have a standardized resolution.
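
A rough sketch (illustrative only, not the paper's protocol) of the kind of play-sync logic implied above: a controller distributes a shared start time, and each playback terminal computes the playback position that time implies, so many displays stay aligned.

```python
import time

# Hypothetical play-sync sketch: every content-player terminal derives its
# target playback position from a shared start timestamp. Distribution of a
# common clock (e.g. via NTP) is assumed and not shown.

class SyncedPlayer:
    def __init__(self, content_duration_s, shared_start_time):
        self.duration = content_duration_s
        self.start = shared_start_time

    def target_position(self, now=None):
        now = time.time() if now is None else now
        # Loop the content; all players with the same start time land on
        # the same position regardless of when they joined.
        return (now - self.start) % self.duration

start = time.time() - 42.0            # shared start timestamp from the controller
player = SyncedPlayer(content_duration_s=300.0, shared_start_time=start)
print(round(player.target_position(), 1))   # ~42.0 seconds into the clip
```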

Hierarchy Visualization Method of SNS Users using Fuzzy Relational Product (퍼지 연관 곱을 이용한 SNS 사용자의 계층적 시각화 방법)

  • Park, Sun;Kwon, JangWoo;Jeong, Min-A;Lee, Yeonwoo;Lee, Seong Ro
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.76-84
    • /
    • 2012
  • Visualization plays an important role in gaining new insights into the users of a social network for social network analysis. Most previous visualization work represents user relationships on the social network as a complex multi-dimensional graph. However, this makes it difficult to intuitively identify the relationships that matter for an individual user. Besides, most visualization methods represent user relationships using only information about interactions between nodes on the network, so the users' messages, which reflect the interrelations between users, are insufficiently exploited. To resolve these problems, this paper proposes a new visualization method that visualizes a user-based hierarchy using internal relationships between users computed by a fuzzy relational product together with external access information from the network.
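
A minimal sketch (not the paper's formulation) of one common form of a fuzzy relational product, the Bandler-Kohout style subproduct, applied to small user-relationship matrices. The Łukasiewicz implication and the minimum aggregation used here are one standard choice; the paper may use a different operator pair.

```python
import numpy as np

def fuzzy_subproduct(R, S):
    """Bandler-Kohout style subproduct (R <| S): for each (i, k), aggregate the
    implication R[i, j] -> S[j, k] over j. Lukasiewicz implication and minimum
    aggregation are used here as an illustrative choice."""
    n, m = R.shape
    m2, p = S.shape
    assert m == m2, "inner dimensions must match"
    out = np.zeros((n, p))
    for i in range(n):
        for k in range(p):
            implications = np.minimum(1.0, 1.0 - R[i, :] + S[:, k])
            out[i, k] = implications.min()
    return out

R = np.array([[0.9, 0.2], [0.4, 0.7]])   # e.g. fuzzy relation: users -> messages
S = np.array([[0.8, 0.1], [0.3, 0.9]])   # e.g. fuzzy relation: messages -> topics
print(fuzzy_subproduct(R, S))
```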

Augmented Visualization of Modeling & Simulation Analysis Results (모델링 & 시뮬레이션 해석 결과 증강가시화)

  • Kim, Minseok;Seo, Dong Woo;Lee, Jae Yeol;Kim, Jae Sung
    • Korean Journal of Computational Design and Engineering
    • /
    • v.22 no.2
    • /
    • pp.202-214
    • /
    • 2017
  • The augmented visualization of analysis results can play an important role as a post-processing tool for modeling & simulation (M&S) technology. In particular, it is essential to develop an M&S tool that can run on various multi-devices. This paper presents an augmented reality (AR) approach to visualizing and interacting with M&S post-processing results through mobile devices. The proposed approach imports M&S data, extracts analysis information, and converts the extracted information into a form used for AR-based visualization. Finally, the result is displayed on the mobile device through AR marker tracking and shader-based realistic rendering. In particular, the proposed method can seamlessly superimpose AR-based realistic scenes onto physical objects, such as 3D-printed physical prototypes, which provides more immersive visualization and more natural interaction with M&S results than conventional VR- or AR-based approaches. A user study was performed to analyze the qualitative usability, and implementation results are given to show the advantages and effectiveness of the proposed approach.
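
A hedged sketch of the conversion step mentioned above: mapping per-node analysis scalars (e.g. stress or temperature) to vertex colors for AR rendering. The linear normalization and the blue-to-red color ramp are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical conversion of analysis scalars to per-vertex RGB colors
# for AR-based visualization; the ramp and normalization are illustrative.

def scalars_to_colors(values):
    v = np.asarray(values, dtype=float)
    rng = v.max() - v.min()
    t = (v - v.min()) / rng if rng > 0 else np.zeros_like(v)
    # Simple blue-to-red ramp: low values blue, high values red.
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)

print(scalars_to_colors([2.0, 5.0, 9.5]))   # one RGB triple per analysis node
```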