• Title/Summary/Keyword: Multi-user interaction

A Study on Presence of Collaboration based Multi-user Interaction in Immersive Virtual Reality (몰입형 가상현실에서의 협업 기반 다수 사용자 상호작용의 현존감에 관한 연구)

  • Park, Seongjun; Park, Wonjun; Heo, Hayoung; Kim, Jinmo
    • Journal of the Korea Computer Graphics Society, v.24 no.3, pp.11-20, 2018
  • This study proposes collaboration-based multi-user interaction to provide improved presence and a satisfying experience to multiple HMD users in immersive virtual reality. The core of the proposed multi-user interaction is to present a direction in which HMD users in a virtual reality environment can interact through communication and collaboration, based on their separate roles and behaviors. Each user in the virtual collaborative environment uses a hand to interact with the virtual environment or virtual objects. On this basic structure, the study designs methods (synchronization and communication) through which users can achieve a common goal together. The study then creates a virtual reality application in the arcade genre that requires communication and collaboration, in order to verify whether the proposed multi-user interaction provides improved presence and satisfaction, and survey experiments are conducted with participants. Through these processes, it is confirmed that interaction in the proposed immersive virtual collaborative environment provides presence and an experience that is satisfactory to all users.
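Since the abstract only names the design elements (separate roles, synchronization, and communication toward a common goal), the following is a minimal, hypothetical Python sketch of that pattern; the names SyncEvent and SharedGoal, and the role/action values, are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not from the paper): role-based users synchronize
# hand-interaction events toward a shared goal in a collaborative VR session.
from dataclasses import dataclass, field


@dataclass
class SyncEvent:
    user_id: str
    role: str          # e.g. "thrower" or "catcher" in an arcade-style task
    action: str        # hand interaction performed on a virtual object
    object_id: str


@dataclass
class SharedGoal:
    required_actions: set
    completed: set = field(default_factory=set)

    def apply(self, event: SyncEvent) -> bool:
        """Record a synchronized event; return True once the common goal is met."""
        self.completed.add((event.role, event.action, event.object_id))
        return self.required_actions <= self.completed


if __name__ == "__main__":
    goal = SharedGoal(required_actions={("thrower", "grab", "ball"),
                                        ("catcher", "catch", "ball")})
    print(goal.apply(SyncEvent("user1", "thrower", "grab", "ball")))   # False
    print(goal.apply(SyncEvent("user2", "catcher", "catch", "ball")))  # True
```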

Interactive Digital Storytelling Based on Interests (흥미도를 반영한 인터렉티브 디지털 스토리텔링)

  • Kim, Yang-Wook; Kim, Jong-Hun; Park, Jun
    • Proceedings of the HCI Society of Korea Conference, 2009.02a, pp.508-511, 2009
  • In interactive storytelling, the storyline develops according to the user's interaction. Different from linear, fixed storytelling, users may select an event or make decisions which affect the story plot, so the user's feeling of immersion and interest may be greatly enhanced. In this paper, we used markers and a multi-touch pad for user interaction in interactive storytelling. Users could indicate their level of interest and provide feedback through the markers and multi-touch pad, through which the storyline developed differently.
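As a rough illustration of interest-driven branching of the kind the abstract describes, the sketch below picks the next scene from a user's expressed interest level; the function name, scene names, and threshold are hypothetical assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's implementation): branch a storyline
# according to the user's expressed interest in the current scene.
def next_scene(current: str, interest: float, branches: dict) -> str:
    """Pick the next scene: high interest expands the current thread,
    low interest skips ahead to an alternative plot line."""
    expand, skip = branches[current]
    return expand if interest >= 0.5 else skip


branches = {
    "intro":       ("forest_detour", "castle_gate"),
    "castle_gate": ("throne_room", "epilogue"),
}

# Interest could come from marker placement or multi-touch feedback, as in the paper.
print(next_scene("intro", interest=0.8, branches=branches))  # forest_detour
print(next_scene("intro", interest=0.2, branches=branches))  # castle_gate
```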

Multi Spatial Interaction Interface in Large-scale Interactive Display Environment (대규모 인터랙티브 디스플레이 환경에서의 멀티 공간 인터랙션 인터페이스)

  • Yun, Chang-Ok; Park, Jung-Pil; Yun, Tae-Soo; Lee, Dong-Hoon
    • The Journal of the Korea Contents Association, v.10 no.2, pp.43-53, 2010
  • Interactive displays provide various interaction modes to users through ubiquitous computing technologies. Such methods have been studied, but they are limited to a single user and constrained by device usability. In this paper, we propose a new type of spatial multi-interaction interface that provides various spatial touch interactions to multiple users in an ambient display environment. We generate an interaction surface using IR-LED array bars installed in the ceiling of the ambient display environment, and users can then experience various interactions through spatial touch on this interaction surface. Consequently, the system offers an interactive display and interface method with which users can interact through natural hand movements, without portable devices.
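A minimal sketch of the kind of mapping such a system might perform, assuming IR blob centroids are detected in camera pixels and must be projected onto the display's interaction surface; all names and dimensions are illustrative and not taken from the paper.

```python
# Hypothetical sketch (not from the paper): map IR blob detections on the
# ceiling-mounted sensing plane to normalized touch points on the display.
from typing import List, Tuple

Point = Tuple[float, float]


def to_surface_coords(blobs: List[Point],
                      cam_size: Tuple[int, int],
                      surface_size: Tuple[float, float]) -> List[Point]:
    """Scale blob centroids from camera pixels to interaction-surface units."""
    cw, ch = cam_size
    sw, sh = surface_size
    return [(x / cw * sw, y / ch * sh) for x, y in blobs]


# Two users touching the spatial interaction surface at once.
touches = to_surface_coords([(120, 240), (520, 100)],
                            cam_size=(640, 480),
                            surface_size=(4.0, 3.0))  # metres of wall display
print(touches)  # [(0.75, 1.5), (3.25, 0.625)]
```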

The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun; Kim, Seongmin; Choe, Jaeho; Jung, Eui Seung
    • Journal of the Ergonomics Society of Korea, v.33 no.3, pp.241-253, 2014
  • Objective: The objective of this study was to examine multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modality is considered an efficient alternative for input and output of information in mobile environments. However, current mobile UI (User Interface) systems have various limitations, overlooking the transition between different modes and the usability of combined multi-modal use. Method: A pre-survey determined five representative smart phone tasks by their functions. The first experiment involved the use of a uni-mode for five single tasks; the second experiment involved the use of a multi-mode for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the type of mode (i.e., touch, pen, or voice), while the variable in the second experiment was the type of task (i.e., internet searching, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the use of pen and touch input; however, a specific mode type was preferred depending on the functional characteristics of the tasks. In the second experiment, user preference depended on the order and combination of modes, and even with mode transitions, users preferred multi-modes that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modes. Therefore, when designing a multi-modal system, the frequent transitions between various mobile contents in different modes should be properly considered. Application: The results may be utilized as a user-centered design guideline for mobile multi-modal UI systems.

Design and Implementation of User Standing Posture Recognition-Based Interaction System Using Multi-Channel Large Area Pressure Sensors

  • Park, HyungSoo; Kim, HoonKi; Kwak, Jaekyung
    • Journal of the Korea Society of Computer and Information, v.25 no.3, pp.155-162, 2020
  • Among fourth-industrial-revolution technologies, healthcare products using IoT and sensors are currently being developed. In this paper, we design and develop an interaction system based on user standing-posture recognition using multi-channel large-area pressure sensors. To this end, we first investigate major markets of the sensor industry and review technology trends and the current state and future of smart healthcare. Based on this survey, we examine and compare domestic and international cases of multi-channel large-area pressure sensors, which are the key components of the system we develop. The developed system recognizes the user's standing posture, and we experimentally evaluate how effective it is in correcting user posture; based on these results, the approach can be applied to various healthcare devices and medical fields.
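As a hedged illustration of posture recognition from multi-channel pressure data, the sketch below compares left/right pressure sums; the channel layout, threshold, and labels are assumptions, not the authors' method.

```python
# Hypothetical sketch (not the paper's algorithm): classify standing posture
# from left/right foot pressure measured by multi-channel large-area sensors.
def classify_posture(channels: list, threshold: float = 0.15) -> str:
    """Compare total pressure on the left vs. right half of the sensor grid."""
    half = len(channels) // 2
    left, right = sum(channels[:half]), sum(channels[half:])
    total = left + right
    if total == 0:
        return "not standing"
    balance = (right - left) / total
    if balance > threshold:
        return "leaning right"
    if balance < -threshold:
        return "leaning left"
    return "balanced"


# Eight pressure channels: four under the left foot, four under the right.
print(classify_posture([0.9, 0.8, 0.7, 0.9, 0.3, 0.2, 0.3, 0.2]))  # leaning left
```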

Implementation of Metaverse User-Avatar Interaction using Real-time Motion Data (실시간 모션 데이터를 활용한 메타버스 사용자-아바타 상호작용 구현)

  • Gang In Lee; Eun Hye Noh; Young Jae Jo; Yong-Hwan Lee
    • Journal of the Semiconductor & Display Technology, v.22 no.4, pp.172-178, 2023
  • With the expansion of metaverse content and hardware platforms, various interactions in the virtual world have been built, raising expectations for increased immersion, a major element of the metaverse. However, among the hardware platforms that increase virtual immersion, the typical HMD platform can be a barrier to new users due to its high cost. This paper therefore focuses on improving virtual-to-real interaction by extracting motion data with relatively inexpensive webcam equipment in a PC environment, utilizing the Unity game engine, the Photon Unity Networking framework, a multi-platform implementation, and the Barracuda neural network inference library.
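The abstract names Unity, Photon, and Barracuda but gives no code; the engine- and network-agnostic sketch below only illustrates how per-frame pose keypoints might be serialized for a remote avatar, with all field names assumed for illustration and none of those libraries' APIs shown.

```python
# Hypothetical sketch (network- and engine-agnostic): package webcam-derived
# pose keypoints for broadcast so a remote client can drive an avatar.
import json
import time

KEYPOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]


def encode_pose(user_id: str, keypoints: dict) -> bytes:
    """Serialize one frame of motion data as a compact JSON payload."""
    frame = {"user": user_id, "t": time.time(),
             "pose": {k: keypoints[k] for k in KEYPOINTS if k in keypoints}}
    return json.dumps(frame).encode("utf-8")


def decode_pose(payload: bytes) -> dict:
    """Recover the pose on the receiving side to update the avatar skeleton."""
    return json.loads(payload.decode("utf-8"))


frame = encode_pose("user42", {"head": (0.0, 1.7, 0.0), "left_hand": (-0.4, 1.2, 0.3)})
print(decode_pose(frame)["pose"]["head"])  # [0.0, 1.7, 0.0]
```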

Design and Development of Multiple Input Device and Multiscale Interaction for GOCI Observation Satellite Imagery on the Tiled Display (타일드 디스플레이에서의 천리안 해양관측 위성영상을 위한 다중 입력 장치 및 멀티 스케일 인터랙션 설계 및 구현)

  • Park, Chan-Sol; Lee, Kwan-Ju; Kim, Nak-Hoon; Lee, Sang-Ho; Seo, Ki-Young; Park, Kyoung Shin
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.3, pp.541-550, 2014
  • This paper describes a multi-scale, user-interaction-based tiled display visualization system that uses multiple input devices for monitoring and analyzing Geostationary Ocean Color Imager (GOCI) observation satellite imagery. The system provides multi-touch screen, Kinect motion sensing, and mobile interfaces so that multiple users can control the satellite imagery either in front of the tiled display screen or from a distance, to view marine environmental or climate changes around the Korean peninsula more effectively. Because of the large amount of memory required to load high-resolution GOCI satellite images, we employed a multi-level image loading technique in which the image is divided into small tiles, reducing the load on the system and allowing smooth operation under user manipulation. The system abstracts common input information from multi-user Kinect motions and gestures, multi-touch points, and mobile interaction information to enable a variety of user interactions for any tiled display application. In addition, satellite images for the selected dates are displayed sequentially on the screen in time order, and multiple users can zoom in/out, move the imagery, and select buttons to trigger functions.
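To make the multi-level tile loading idea concrete, here is a small, hypothetical sketch that selects a pyramid level and the tile indices covering a viewport; the tile size, level rule, and function name are assumptions rather than the paper's loader.

```python
# Hypothetical sketch (not the paper's loader): pick the pyramid level and tile
# indices needed to cover the viewport, so only visible tiles of a large image
# are loaded at a resolution matching the on-screen scale.
import math


def visible_tiles(viewport, tile_size=512, scale=1.0):
    """Return (level, [(col, row), ...]) for the tiles intersecting the viewport."""
    # Coarser levels halve the resolution; pick one matching the display scale.
    level = max(0, int(math.floor(math.log2(1.0 / scale)))) if scale < 1.0 else 0
    factor = 2 ** level
    x, y, w, h = viewport
    ts = tile_size * factor  # size of one tile in full-resolution pixels
    cols = range(int(x // ts), int((x + w - 1) // ts) + 1)
    rows = range(int(y // ts), int((y + h - 1) // ts) + 1)
    return level, [(c, r) for r in rows for c in cols]


level, tiles = visible_tiles(viewport=(1000, 2000, 1920, 1080), scale=0.25)
print(level, tiles)  # 2 [(0, 0), (1, 0), (0, 1), (1, 1)]
```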

Analysis of multi-dimensional interaction among SNS users

  • Lee, Kyung-Min; Namgoong, Hyun; Kim, Eung-Hee; Lee, Kang-Yong; Kim, Hong-Gee
    • Journal of Internet Computing and Services, v.12 no.2, pp.113-122, 2011
  • Social Network Services (SNS) have become a major trend as web services that help users construct social relationships on the web and enable online communication. Information about user activities and behaviors obtained from SNSs is expected to be a useful knowledge source for other services, such as recommendation services. Most previous research on SNSs relies on analyzing the overall network topology and surveying activities in a one-dimensional way. This paper proposes a system for measuring multi-dimensional interaction through activities in an SNS. The proposed system derives a unified profile model (consisting of a profile and multi-dimensional interaction data) from user activities on Twitter.com. In the experimental section, some meaningful perspectives on a set of the unified profiles are described.
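As a simple illustration of aggregating SNS activities into a per-dimension profile, the sketch below counts interactions per (dimension, target) pair; the dimensions and data layout are assumptions and do not reflect the paper's unified profile model.

```python
# Hypothetical sketch (not the paper's model): aggregate a user's SNS activities
# into a unified profile with one score per interaction dimension and target.
from collections import Counter

# Each activity is (actor, dimension, target), e.g. a mention, retweet, or reply.
activities = [
    ("alice", "mention", "bob"),
    ("alice", "retweet", "bob"),
    ("alice", "reply",   "carol"),
    ("alice", "mention", "bob"),
]


def unified_profile(user: str, activities) -> dict:
    """Count interactions per (dimension, target) pair for one user."""
    counts = Counter((dim, target) for actor, dim, target in activities if actor == user)
    return {"user": user, "interactions": dict(counts)}


print(unified_profile("alice", activities))
# {'user': 'alice', 'interactions': {('mention', 'bob'): 2, ('retweet', 'bob'): 1, ('reply', 'carol'): 1}}
```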

A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction (멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석)

  • Oh, Junseok; Cho, Hye-Kyung
    • Journal of Institute of Control, Robotics and Systems, v.20 no.6, pp.619-624, 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device delivering educational content, we employ a type of mixed-initiative operation which provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system, and to see how multi-modality affects these connections, we performed a user study based on the TAM (Technology Acceptance Model). The results indicate that system quality and natural interaction are key factors in the adoption of the r-learning system, and they also reveal interesting implications related to human behavior.