• Title/Summary/Keyword: Multi-User Interaction


A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for constructing a collaborative mixed reality environment and for working with a wearable IoT device. Mixed reality combines virtual reality and augmented reality, so objects in the real and virtual worlds can be viewed at the same time. Unlike VR, an MR HMD does not induce motion sickness; it is wireless and is attracting attention as a technology applicable to industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition using a triaxial sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, there has been little discussion of environments in which multiple people can participate in mixed reality and manipulate virtual objects with their own hands. In this paper, we propose a method for constructing an environment in which collaboration is possible, together with an interaction method for smooth interaction, so that mixed reality can be applied in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time and share a single unified virtual object, and each participant could interact with it through the Myo wearable interface device.
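
The abstract above gives no implementation detail; the sketch below is a hypothetical Python illustration of the kind of gesture-to-manipulation mapping and shared-object state it describes. The class names, gesture labels, and fields are assumptions for illustration, not the authors' API.

```python
# Hypothetical sketch: mapping Myo gesture events to operations on a
# virtual object whose state is shared by several MR participants.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SharedObjectState:
    """Single source of truth for the manipulated virtual object."""
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    rotation_deg: float = 0.0
    scale: float = 1.0
    holder: Optional[str] = None  # user currently grabbing the object


class GestureMapper:
    """Translates recognized Myo gestures into object operations."""

    def __init__(self, state: SharedObjectState) -> None:
        self.state = state

    def on_gesture(self, user: str, gesture: str, arm_yaw_deg: float) -> None:
        if gesture == "fist" and self.state.holder is None:
            self.state.holder = user               # grab the object
        elif gesture == "fingers_spread" and self.state.holder == user:
            self.state.holder = None               # release it
        elif gesture == "wave_in" and self.state.holder == user:
            self.state.rotation_deg = arm_yaw_deg  # rotate with the arm
        # Broadcasting the updated state to the other participant would
        # happen here, so both HMDs render one unified object.


if __name__ == "__main__":
    shared = SharedObjectState()
    mapper = GestureMapper(shared)
    mapper.on_gesture("user_A", "fist", arm_yaw_deg=0.0)
    mapper.on_gesture("user_A", "wave_in", arm_yaw_deg=35.0)
    print(shared)  # rotation_deg=35.0, holder='user_A'
```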

Evaluation of Car Prototype using CAVE-like Systems (케이브 기반 자동차 시제품 평가)

  • 고희동;안희갑;김진욱;김종국;송재복;어홍준;윤명환;우인수;박연동
    • Science of Emotion and Sensibility / v.5 no.4 / pp.77-84 / 2002
  • In this paper, we propose NAVER, a general framework for multipurpose virtual environments, and present a case study of evaluating car prototypes using a CAVE-like system. As a framework for implementing diverse applications in virtual environments, NAVER is extensible, reconfigurable, and scalable. NAVER consists of several external modules (Render Server, Control Server, and Device Server) that communicate with each other to share states and user-provided data and to perform their own functions. NAVER supports its own XML-based scripting language, which allows a user to define various interactions between objects in the virtual environment as well as to describe an application's scenario. We used NAVER to implement a system for evaluating car prototypes in a CAVE-like virtual environment. The CAVE-like system at KIST consists of three side screens and a floor screen (each a 2.2 m square), four CRT projectors that display stereoscopic images onto the screens, a haptic arm master, and a 5.1-channel sound system. The system provides a sense of reality by presenting auditory and tactile feedback together with visual images. We evaluate car prototypes in this CAVE-like system, in which a user can observe, touch, and manipulate a virtual installation of the car interior.
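
The abstract mentions an XML-based scripting language and cooperating server modules but gives no concrete syntax; the Python sketch below is a hypothetical illustration of how a scenario script of that kind might be parsed and dispatched to named modules. The tag, attribute, and module names are invented for illustration and are not the actual NAVER script format.

```python
# Hypothetical sketch: parsing an XML-style scenario script and routing
# each interaction to a named module, loosely mirroring the abstract's
# description of NAVER's Render/Control/Device servers.
import xml.etree.ElementTree as ET

SCENARIO = """
<scenario name="car_interior_review">
  <interaction trigger="grab_handle" target="door" action="open" module="RenderServer"/>
  <interaction trigger="press_button" target="radio" action="play_sound" module="DeviceServer"/>
</scenario>
"""


def dispatch(module: str, target: str, action: str) -> None:
    # In a real system this would send a message to the external module;
    # here we only log what would be sent.
    print(f"[{module}] {action} -> {target}")


def run_scenario(xml_text: str) -> None:
    root = ET.fromstring(xml_text)
    print(f"Running scenario: {root.attrib['name']}")
    for node in root.findall("interaction"):
        dispatch(node.attrib["module"], node.attrib["target"], node.attrib["action"])


if __name__ == "__main__":
    run_scenario(SCENARIO)
```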


A Study on Interactive Animation Production as Public Art: Focusing on a Case of the Live Window Animation <Polar Bear Pao> (공공예술로서의 인터랙티브 애니메이션 제작 연구 : 라이브 윈도우 애니메이션 <북극곰 파오> 사례를 중심으로)

  • Chang, Wook-Sang;Yu, Seung-Cheol
    • Cartoon and Animation Studies / s.33 / pp.153-172 / 2013
  • Most content produced in the public interest wears its message on the surface, and audiences do not find such content interesting. In general, it is hard for an animation not to be boring when it delivers a public-interest message. <Polar Bear Pao> was produced so that audiences could experience a global warming story, ordinarily dry and textbook-like material, as something interesting. It was also built on a multiform story structure so that the narrative is realized through audience participation, exploiting the characteristics of the live window rather than passive viewing. Based on the production of <Polar Bear Pao> and on examples that use interaction for audience participation, this paper examines the differences between theaters and live windows. It analyzes how the environment differs according to the characteristics of the place and the audience, examines examples of interaction in which the narrative gradually changes in response to the user environment design and the interaction of unspecified passers-by, and, based on the results of screening the animation in Millano Piazza, suggests a direction in which animation can move forward as public art. Because of the nature of the live window, the audience of <Polar Bear Pao> consists of people in the street heading to their own destinations, not people who came to a theater to watch an animation, and showing them an animation with a narrative was a new attempt. When screening began in Millano Piazza, audiences were highly satisfied with the experience of the story changing through their own participation and naturally came to think about the problem of global warming. It is not yet possible to know how far the message changed people's habits and thinking, but the attempt was an opportunity for animation to play a social role. Many animations are produced around the world, most of them aimed at theaters, television, and film festivals; they should meet audiences through more diverse channels, and animation as public art is one of them. <Polar Bear Pao> can be seen as a new attempt in this sense. In the future, animation as public art should strive to offer engaging experiences through which people can share ideas about living together. As art across various media shifts toward considering the public interest, animation can become a new form of public art by integrating with various technologies.

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. People are increasingly interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment in exhibition spaces such as galleries, museums, and parks. There are also attempts to provide additional services based on the audience's location, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. To provide multimodal interaction services at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it provides the real-time location of fast-moving subjects, so it is one of the key technologies in fields that require location-tracking services. However, because GPS relies on satellite signals, it cannot be used indoors, where those signals cannot be received. For this reason, indoor location tracking has been studied using short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience must carry an additional sensor device, and the system becomes difficult and expensive to operate as the density of the target area increases. In addition, a typical exhibition environment contains many obstacles for the network, which degrades system performance. Above all, interaction methods based on these older technologies cannot provide a natural service to users, and because they rely on sensor recognition, every user must be equipped with a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, this study proposes a technology that obtains accurate user location information through a location mapping technique combining the Wi-Fi signal of users' smartphones with 3D cameras. We use the signal strength of wireless LAN access points to build an indoor location-tracking system at low cost: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone itself serves as the tracking device. For the 3D camera we use the Microsoft Kinect sensor, which can discriminate depth and human information within its field of view and is therefore suitable for extracting the user's body, motion vector, and acceleration information at low cost. We confirm the audience's coarse location using the cell ID obtained from the Wi-Fi signal. By using smartphones as the base device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras placed in each cell area obtain the exact location and status of the users: they are connected to a Camera Client, which calculates the mapping information aligned to each cell and extracts the precise location, status, and behavior patterns of the audience. This location mapping technique of the Camera Client reduces the error rate of indoor location services, increases the accuracy of individual discrimination within an area by using body information, and lays the foundation for multimodal interaction technology at exhibitions. The computed data and information allow users to receive the appropriate interaction service through the main server.
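
The abstract describes a two-stage scheme, coarse cell identification from Wi-Fi access points followed by per-cell refinement from a 3D camera client, but gives no implementation. The following Python sketch is a hypothetical illustration of that pipeline under assumed data; the AP names, cell layout, and mapping function are not the authors' system.

```python
# Hypothetical sketch of the two-stage indoor localization in the abstract:
# 1) pick the coarse cell from the strongest Wi-Fi access point (cell ID),
# 2) refine the position inside that cell with the camera client's mapping
#    from depth-camera (e.g. Kinect) body coordinates to floor coordinates.
from typing import Dict, Tuple

# Which exhibition cell each access point covers (assumed layout).
AP_TO_CELL: Dict[str, str] = {"ap_entrance": "cell_1", "ap_hall": "cell_2"}

# Floor-plan origin of each cell in meters (assumed calibration data).
CELL_ORIGIN: Dict[str, Tuple[float, float]] = {"cell_1": (0.0, 0.0), "cell_2": (5.0, 0.0)}


def coarse_cell(rssi_by_ap: Dict[str, float]) -> str:
    """Stage 1: choose the cell of the access point with the strongest signal."""
    strongest_ap = max(rssi_by_ap, key=rssi_by_ap.get)
    return AP_TO_CELL[strongest_ap]


def refine_in_cell(cell: str, camera_xy_m: Tuple[float, float]) -> Tuple[float, float]:
    """Stage 2: map camera-local coordinates of a tracked body to floor coordinates."""
    ox, oy = CELL_ORIGIN[cell]
    cx, cy = camera_xy_m
    return (ox + cx, oy + cy)


if __name__ == "__main__":
    rssi = {"ap_entrance": -71.0, "ap_hall": -48.0}   # dBm readings from a phone
    cell = coarse_cell(rssi)                          # -> "cell_2"
    position = refine_in_cell(cell, camera_xy_m=(1.2, 0.8))
    print(cell, position)                             # cell_2 (6.2, 0.8)
```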