• Title/Summary/Keyword: Multi-User Interaction

A Collaborative Visual Language

  • Kim, Kyung-Deok
    • Journal of Information and Communication Convergence Engineering / v.1 no.2 / pp.74-81 / 2003
  • There has been much research on visual languages, but most of it has difficulty supporting the variety of collaborative interactions found in distributed multimedia environments. This paper therefore proposes a collaborative visual language for interaction among multiple users. The language can describe a conceptual model for such collaborative interactions: visual sentences generated with it consist of object icons and interaction operators. An object icon represents a user who is responsible for a collaborative activity, carries that user's dynamic attributes, and supports flexible interaction among users. An interaction operator represents an interactive relation among users and supports various collaborative interactions. The merits of the visual language include support for both asynchronous and synchronous interaction, flexible interaction as users join or leave, and user-oriented modeling. As an example, an application to a workflow system for document approval is illustrated, showing that the visual language can describe collaborative interaction.
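
As a reading aid, here is a minimal sketch of how a visual sentence built from object icons and interaction operators could be represented in code. The class names, operator set, and document-approval example are illustrative assumptions, not definitions from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class InteractionOp(Enum):
    # Illustrative operator set; the paper defines its own operators.
    SYNC = "synchronous"    # users act at the same time
    ASYNC = "asynchronous"  # users act at different times

@dataclass
class ObjectIcon:
    """A user responsible for a collaborative activity."""
    user_id: str
    attributes: dict = field(default_factory=dict)  # dynamic user attributes
    active: bool = True                             # toggled on join/leave

@dataclass
class VisualSentence:
    """A collaborative interaction: object icons joined by an operator."""
    participants: list[ObjectIcon]
    operator: InteractionOp

# Hypothetical document-approval step: author and manager interact asynchronously.
sentence = VisualSentence(
    participants=[ObjectIcon("author"), ObjectIcon("manager")],
    operator=InteractionOp.ASYNC,
)
print(sentence.operator.value, [p.user_id for p in sentence.participants])
```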

Audience Interaction for Virtual Reality Theater (VR 극장을 위한 관객인터랙션)

  • 안상철; 김익재; 김형곤
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.1 / pp.50-58 / 2003
  • Recently we built a VR (Virtual Reality) theater in Kyongju, Korea, which combines the advantages of VR and an IMAX theater. The VR theater provides a virtual environment for several hundred people at the same time and is characterized by a single shared screen and multiple inputs from several hundred people. In this setting, multi-user interaction differs from that of networked VR systems and must be reconsidered. This paper defines multi-user interaction in such a VR theater as Audience Interaction, discusses the key issues in implementing it, and presents a real implementation in the Kyongju VR theater.
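
To make the Audience Interaction problem concrete, the sketch below reduces many simultaneous audience inputs to one command for the single shared screen. The majority-vote policy is an assumption for illustration only, not necessarily the scheme used in the Kyongju theater.

```python
from collections import Counter

def aggregate_audience_input(votes: list[str]) -> str:
    """Collapse hundreds of simultaneous seat inputs into one
    action for the single shared screen (here: majority vote)."""
    if not votes:
        return "idle"
    choice, _count = Counter(votes).most_common(1)[0]
    return choice

# Example: 300 seats choose the branch of an interactive scene.
votes = ["left"] * 180 + ["right"] * 120
print(aggregate_audience_input(votes))  # -> "left"
```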

A Research of User Experience on Multi-Modal Interactive Digital Art

  • Qianqian Jiang; Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.80-85 / 2024
  • The concept of single-modal digital art originated in the 20th century and has evolved through three key stages. Over time, digital art has transformed into multi-modal interaction, representing a new era in art forms. Based on multi-modal theory, this paper explores the characteristics of interactive digital art as an innovative art form and its impact on user experience. Through an analysis of practical applications of multi-modal interactive digital art, this study summarizes the impact of digital art's creative models on the physical and mental aspects of user experience. When creating audio-visual art, multi-modal digital art should seamlessly incorporate sensory elements and leverage computer image-processing technology. Focusing on user perception, emotional expression, and cultural communication, it strives to establish an immersive environment with user experience at its core. Future research, particularly with emerging technologies such as Artificial Intelligence (AI) and Virtual Reality (VR), should not merely prioritize technology but aim for meaningful interaction. Through multi-modal interaction, digital art is poised to continually innovate, offering new possibilities and expanding the realm of interactive digital art.

Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong; Lee, Hyun-Jin
    • Archives of Design Research / v.18 no.2 s.60 / pp.273-282 / 2005
  • In this study, we propose the Context Cube, a conceptual model of user context, together with a Multi-modal Interaction Design framework for developing ubiquitous services by understanding user context and analyzing the correlation between context awareness and multi-modality; both help infer the meaning of context and offer services that meet user needs. We also developed a case study to verify the validity of the Context Cube and of the proposed interaction design framework for deriving personalized ubiquitous services. Context awareness can be understood in terms of information properties consisting of the user's basic activity, the location of the user and devices (environment), time, and the user's daily schedule, which enables the construction of a three-dimensional conceptual model, the Context Cube. We also developed a ubiquitous interaction design process that incorporates multi-modal interaction design by studying the features of user interaction represented on the Context Cube.
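
A minimal sketch of how the three-dimensional Context Cube described above might be encoded, assuming cells keyed by activity, location (user and devices/environment), and time, with a rule table that maps cells to candidate multi-modal services. The field names and rules are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextCell:
    """One cell of a Context-Cube-like model (illustrative structure)."""
    activity: str   # the user's basic activity
    location: str   # location of the user and devices (environment)
    time_slot: str  # time, tied to the user's daily schedule

# Hypothetical rules mapping context cells to multi-modal services.
SERVICE_RULES = {
    ContextCell("commuting", "subway", "morning"): "audio briefing with haptic alerts",
    ContextCell("meeting", "office", "afternoon"): "silent visual notification",
}

def infer_service(cell: ContextCell) -> str:
    return SERVICE_RULES.get(cell, "no personalized service")

print(infer_service(ContextCell("commuting", "subway", "morning")))
```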

Survey: Tabletop Display Techniques for Multi-Touch Recognition (멀티터치를 위한 테이블-탑 디스플레이 기술 동향)

  • Kim, Song-Gook; Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.84-91 / 2007
  • Recently, vision-based research on user attention and action awareness has been actively pursued for human-computer interaction. Among such work, various applications of tabletop display systems have been developed, driven by advances in touch-sensing techniques and co-located collaborative work. Whereas earlier systems supported only a single user, current systems support multiple users, so the ultimate goal of tabletop displays, collaborative work and interaction among four elements (human, computer, displayed objects, and physical objects), becomes realizable. Tabletop display systems are generally designed around four key aspects: 1) multi-touch interaction using bare hands; 2) support for collaborative work and simultaneous user interaction; 3) direct touch interaction; and 4) use of physical objects as interaction tools. In this paper, we present a critical analysis of the state of the art in multi-touch sensing techniques for tabletop display systems according to four approaches: vision-based methods, non-vision-based methods, top-down projection systems, and rear projection systems. We also discuss open problems and practical applications in this research field.
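
The sketch below illustrates one recurring ingredient of the surveyed systems: attributing simultaneous touch points to users so that collaborative, simultaneous interaction can be handled per participant. The data fields and grouping logic are assumptions for illustration; the surveyed systems differ in how (and whether) they attribute touches.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    """A single contact detected on the tabletop surface (illustrative fields)."""
    x: float
    y: float
    user_id: int | None = None        # user attribution, if the sensor supports it
    is_physical_object: bool = False  # tagged physical object vs. bare hand

def route_touches(touches: list[TouchPoint]) -> dict[int | None, list[TouchPoint]]:
    """Group simultaneous touches by user for per-participant handling."""
    routed: dict[int | None, list[TouchPoint]] = {}
    for t in touches:
        routed.setdefault(t.user_id, []).append(t)
    return routed

frame = [TouchPoint(0.2, 0.3, user_id=1), TouchPoint(0.8, 0.7, user_id=2)]
print({user: len(points) for user, points in route_touches(frame).items()})
```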

A Development of Multi-Emotional Signal Receiving Modules for Cellphone Using Robotic Interaction

  • Jung, Yong-Rae; Kong, Yong-Hae; Um, Tai-Joon; Kim, Seung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.2231-2236 / 2005
  • CP (Cellular Phone) is currently one of the most attractive technologies, and RT (Robot Technology) is considered one of the most promising next-generation technologies. We present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP. RCP consists of three sub-modules: RCP Mobility, RCP Interaction, and RCP Integration. RCP Interaction, the main focus of this paper, is an interactive emotion system that provides the CP with multi-emotional signal-receiving functionality. It is linked with the communication functions of the CP to interface between the CP and the user through a variety of emotional models, and is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module is designed around vibration patterns and beat frequencies obtained by converting a musical melody, rhythm, and harmony into mechanical vibration. The olfactory signal-receiving module uses switching control of perfume-injection nozzles so that the called user is notified of an incoming call by a distinct scent that depends on the calling user. The visual signal-receiving module uses motion control of a DC-motor-driven wheel-based system that informs the called user of an incoming call through a motion that depends on the calling user. In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of the CP. We describe the overall structure of the system and provide experimental results for the functional modules.
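
A rough sketch of the caller-dependent dispatch the abstract describes: an incoming call triggers a tactile, an olfactory, and a visual cue chosen per caller. The actuator stubs, profile fields, and example values are hypothetical; the real modules drive a vibration unit, perfume-injection nozzles, and a DC-motored wheel base.

```python
from dataclasses import dataclass

# Hypothetical actuator stubs standing in for the hardware modules.
def play_vibration_pattern(pattern: str) -> None:
    print(f"tactile: vibration pattern '{pattern}'")

def open_perfume_nozzle(nozzle: int) -> None:
    print(f"olfactory: open nozzle {nozzle}")

def perform_motion(motion: str) -> None:
    print(f"visual: wheel-base motion '{motion}'")

@dataclass
class CallerProfile:
    """Per-caller signal-receiving settings (illustrative fields)."""
    vibration_pattern: str  # derived from a melody's rhythm and harmony
    nozzle: int             # scent that identifies this caller
    motion: str             # motion that identifies this caller

PROFILES = {
    "family": CallerProfile("waltz-3beat", nozzle=1, motion="spin"),
    "office": CallerProfile("march-2beat", nozzle=2, motion="approach"),
}

def on_incoming_call(caller_group: str) -> None:
    profile = PROFILES.get(caller_group)
    if profile is None:
        play_vibration_pattern("default")
        return
    play_vibration_pattern(profile.vibration_pattern)
    open_perfume_nozzle(profile.nozzle)
    perform_motion(profile.motion)

on_incoming_call("family")
```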

A Development of Multi-Emotional Signal Receiving Modules for Ubiquitous RCP Interaction (유비쿼터스 RCP 상호작용을 위한 다감각 착신기능모듈의 개발)

  • Jang, Kyung-Jun; Jung, Yong-Rae; Kim, Dong-Wook; Kim, Seung-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.1 / pp.33-40 / 2006
  • We present a new technological concept named RCP (Robotic Cellular Phone), which combines RT (Robot Technology) and CP (Cellular Phone); that is, RCP is a ubiquitous robot. RCP consists of three sub-modules: RCP Mobility, RCP Interaction, and RCP Integration. RCP Interaction, the main focus of this paper, is an interactive emotion system that provides the CP with multi-emotional signal-receiving functionality. It is linked with the communication functions of the CP to interface between the CP and the user through a variety of emotional models, and is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module is designed around vibration patterns and beat frequencies obtained by converting a musical melody, rhythm, and harmony into mechanical vibration. The olfactory signal-receiving module uses switching control of perfume-injection nozzles so that the called user is notified of an incoming call by a distinct scent that depends on the calling user. The visual signal-receiving module uses motion control of a DC-motor-driven wheel-based system that informs the called user of an incoming call through a motion that depends on the calling user. In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of the CP. We describe the overall structure of the system and provide experimental results for the functional modules.

A Methodology for Consistent Design of User Interaction (일관성 있는 사용자 인터랙션 설계를 위한 방법론 개발)

  • Kim, Dong-San; Yoon, Wan-Chul
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.961-970 / 2009
  • Over the last decade, interactive devices such as mobile phones have become drastically more complicated, mainly because of feature creep, the tendency for the number of features in a product to grow with each release. One way to reduce the complexity of a multi-functional device is to design it consistently. Although the definition of consistency is elusive and it is sometimes beneficial to be inconsistent, consistently designed systems are in general easier to learn, easier to remember, and cause fewer errors. In practice, however, it is often not easy to design the user interaction or interface of a multi-functional device consistently. Since the interaction design of a multi-functional device must deal with a large number of design variables and the relations among them, solving this problem can be very time-consuming and error-prone. There is therefore a strong need for a well-developed methodology that supports this complex design process. This study developed an effective and efficient methodology, called CUID (Consistent Design of User Interaction), which focuses on logical rather than physical or visual consistency. CUID addresses three main problems in interaction design: procedure design for each task, decisions on the operations (or functions) available in each system state, and the mapping between available operations (functions) and interface controls. It includes a process for interaction design and a software tool that supports the process. This paper also demonstrates, through a case study, how CUID supports consistent user interaction design, showing that the logical inconsistencies of a multi-functional device can be resolved by using the CUID methodology.
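
To illustrate the kind of logical consistency CUID targets, the toy check below flags operations that are bound to different interface controls in different system states. The state table and check are assumptions for illustration; they are not the CUID tool or its data model.

```python
# Hypothetical state-to-control bindings for a multi-functional device.
STATE_MAPPINGS = {
    "idle":     {"play": "center_key", "menu": "left_soft_key"},
    "playing":  {"pause": "center_key", "menu": "left_soft_key"},
    "settings": {"confirm": "center_key", "menu": "right_soft_key"},  # inconsistent binding
}

def find_inconsistent_operations(mappings: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Report operations bound to different controls in different states."""
    bindings: dict[str, set[str]] = {}
    for ops in mappings.values():
        for op, control in ops.items():
            bindings.setdefault(op, set()).add(control)
    return {op: controls for op, controls in bindings.items() if len(controls) > 1}

print(find_inconsistent_operations(STATE_MAPPINGS))  # -> {'menu': {'left_soft_key', 'right_soft_key'}}
```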

Contents Sharing Model in Distributed Collaboration Environment (분산 협업 환경에서의 콘텐츠 공유 모델)

  • Hur, Hye-Jung; Lee, Ju-Young
    • Journal of the Korea Society of Computer and Information / v.19 no.5 / pp.79-87 / 2014
  • In this paper, we propose a contents sharing model for consolidating distributed collaboration environments. The combined model integrates several features: scalable-resolution display wall resources, sharing of local and remote contents, multi-user interaction, and access control. Allowing every user to interact within the environment is beneficial, but multi-user interaction can cause overlapping operations. To manage this overlap issue, the model includes access control; however, access control may interfere with the flow of the work process. We therefore conducted a user study to evaluate these two factors of the model. The results show that the proposed contents sharing model allows users to concentrate on their work.
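
A minimal sketch of the overlap problem and the access-control remedy described above: several users may try to manipulate the same shared content on the display wall, so a policy decides who may act at a time. The single-holder lock below is one illustrative policy, not the paper's model.

```python
import threading

class SharedContent:
    """Shared content item with a simple single-holder access policy (illustrative)."""

    def __init__(self, content_id: str) -> None:
        self.content_id = content_id
        self._holder: str | None = None
        self._lock = threading.Lock()

    def acquire(self, user: str) -> bool:
        """Grant control if no other user currently holds the content."""
        with self._lock:
            if self._holder is None or self._holder == user:
                self._holder = user
                return True
            return False  # another user is interacting; avoid overlapping operations

    def release(self, user: str) -> None:
        with self._lock:
            if self._holder == user:
                self._holder = None

doc = SharedContent("slide-3")
print(doc.acquire("alice"))  # True: alice may move or resize the content
print(doc.acquire("bob"))    # False: blocked until alice releases
doc.release("alice")
print(doc.acquire("bob"))    # True
```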

Survey: The Tabletop Display Techniques for Collaborative Interaction (협력적인 상호작용을 위한 테이블-탑 디스플레이 기술 동향)

  • Kim, Song-Gook; Lee, Chil-Woo
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.616-621 / 2006
  • Recently, vision-based research on user attention and action awareness has been actively pursued for human-computer interaction. Among such work, various applications of tabletop display systems have been developed, driven by advances in touch-sensing techniques and co-located collaborative work. Whereas earlier systems supported only a single user, current systems support multiple users, so the ultimate goal of tabletop displays, collaborative work and interaction among four elements (human, computer, displayed objects, and physical objects), becomes realizable. Tabletop display systems are generally designed around four key aspects: 1) multi-touch interaction using bare hands; 2) support for collaborative work and simultaneous user interaction; 3) direct touch interaction; and 4) use of physical objects as interaction tools. In this paper, we present a critical analysis of the state of the art in multi-touch sensing techniques for tabletop display systems according to four approaches: vision-based methods, non-vision-based methods, top-down projection systems, and rear projection systems. We also discuss open problems and practical applications in this research field.
