• Title/Summary/Keyword: Multi-Person Interaction

A Study on Applying the Concepts of Interaction Design to Space (공간에서의 인터랙션 디자인 개념 적용에 대한 연구)

  • Kang Sung-Joong; Kwon Young-Gull
    • Korean Institute of Interior Design Journal / v.14 no.3 s.50 / pp.234-242 / 2005
  • An interface is a medium or channel for communication between humans and things, while interaction is the manner of that communication. Interaction design is the design of user experience through the interaction process among humans, things, systems, and spaces. Richard Buchanan suggests four kinds of interaction: interface (person-to-thing interaction), transaction (person-to-person interaction), human interaction (human-environment interaction), and participation (human-cosmos interaction). With digital technology, architecture and space design have experimented with the form, function, and content of space. Space is evolving from a physical container into a stage that provides narrative and creates new experiences for users. Since understanding users, creating experience, efficient space design, content planning, and applicable technology are all required for interaction design in space, multi-disciplinary research and cooperation are needed.

Mixed reality multi-person interaction research based on the calibration of the HoloLens devices

  • Qin, Zi Jie; Li, Ao Xuan; Lim, Hyotaek; Lee, Byung Gook
    • Journal of Korea Multimedia Society / v.24 no.9 / pp.1261-1267 / 2021
  • The application of virtual reality technology is becoming increasingly common in all aspects of life. From virtual entertainment to industrial simulation, the new operating and working methods brought by virtual reality visualization technology offer strong appeal and advantages. As the related equipment is renewed and iterated, its limitations continue to decrease while its applicability continues to improve. Take the optically transparent head-mounted display as an example: it integrates more computing functions, presents and interacts with content virtually, fits more closely into daily behavior, and shortens the distance between users and digital information.

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin; Kim, Yeong Hyeon; Kim, Jin Woo; Bashar, Md Rezaul; Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (the VGG16 from the University of Oxford Visual Geometry Group) and extends the Faster R-CNN (a region-based convolutional neural network, a state-of-the-art detector for image classification). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiment, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, performance increased to 95.6% accuracy, and in a complex environment, performance reached 81% accuracy. Our method reduces data-labeling time on the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
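
A minimal sketch of the active semi-supervised loop this abstract describes, assuming pre-extracted CNN features (e.g. from VGG16 or Faster R-CNN regions) and using a stand-in scikit-learn classifier; the feature dimensions, confidence threshold, and class set are illustrative assumptions, not the paper's actual EHL configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for CNN features of two-person clips (hypothetical data, 5 example classes:
# hug, fight, link arms, talk, kidnap).
X_labeled = rng.normal(size=(40, 128))
y_labeled = rng.integers(0, 5, size=40)
X_unlabeled = rng.normal(size=(200, 128))

clf = LogisticRegression(max_iter=1000)

for round_id in range(3):
    clf.fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_unlabeled)
    preds = clf.predict(X_unlabeled)
    confidence = proba.max(axis=1)

    # Semi-supervised step: accept high-confidence predictions as pseudo-labels.
    pseudo = confidence > 0.90
    # Active learning step: the most uncertain clips would go to a human annotator.
    query = np.argsort(confidence)[:10]
    print(f"round {round_id}: {pseudo.sum()} pseudo-labels, {len(query)} clips queried")

    X_labeled = np.vstack([X_labeled, X_unlabeled[pseudo]])
    y_labeled = np.concatenate([y_labeled, preds[pseudo]])
    X_unlabeled = X_unlabeled[~pseudo]
```

In a real pipeline the queried clips would return with human labels each round, which is what lets the labeled set grow faster than pure pseudo-labeling alone.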

Incorporating "Kansei Engineering" Approach on Traditional Textiles - A Proposed Method for Identifying Multi-Sensorial Experiences on the Kansei Attributes of Traditional Textiles -

  • Syarief, Achmad
    • The Research Journal of the Costume Culture / v.20 no.1 / pp.121-127 / 2012
  • When people are asked to describe certain textiles, they frequently refer to expressive properties such as attractiveness, uniqueness, shininess, robustness, comfort, and so on. This shows the important role the senses play. Humans employ their senses when interacting with textiles, most notably the visual and tactile/haptic senses, to absorb their expressive properties. Our sensorial experiences may be amplified when interacting with traditional textiles, such as batik, as sensations arise from seeing their motifs and patterns, smelling their materials, and touching their surfaces. The multi-sensorial importance of seeing, smelling, and touching in the interaction with and experience of textiles suggests that the senses should be addressed systematically when evaluating users' perception of traditional textiles. To address this issue, the paper proposes incorporating a Kansei Engineering (KE) approach for identifying multi-sensorial experiences of the expressive properties of traditional textiles, using batik as a case study. The KE approach addresses a person's psychological understanding when observing things, in order to analyze the inherent relationship between a person's perceptual knowledge and the objects evaluated. This paper outlines the use of the KE approach in correlating sensorial perceptions of traditional textiles and ultimately exposing users' preferences toward them. The background of the KE approach to textiles is explored, and its application to the multi-sensorial investigation of traditional textiles is discussed.

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for constructing a collaborative environment in mixed reality and for working with wearable IoT devices. Mixed reality combines virtual reality and augmented reality, allowing real and virtual objects to be viewed at the same time. Unlike VR, an MR HMD does not cause motion sickness. It is wireless and is attracting attention as a technology to be applied in industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition using a triaxial sensor, an EMG sensor, and an acceleration sensor. Although various studies related to MR are in progress, discussion of environments in which multiple people can participate in mixed reality and manipulate virtual objects with their own hands is insufficient. We propose a method of constructing an environment where collaboration is possible, together with an interaction method for smooth interaction, in order to apply mixed reality in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time and share a unified virtual object, and each person could interact with it through the Myo wearable interface device.
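
A hedged sketch of how wearable gesture events might drive a shared virtual object between two MR participants; the gesture names, the locking scheme, and the absence of a networking layer are illustrative assumptions, not the paper's actual system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SharedObject:
    """Single object state that both MR participants see and manipulate."""
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])
    rotation_y: float = 0.0
    grabbed_by: Optional[str] = None

def handle_gesture(obj: SharedObject, user: str, gesture: str, arm_delta=(0.0, 0.0, 0.0)):
    """Apply one gesture event (hypothetical names) from a participant to the shared object."""
    if gesture == "fist" and obj.grabbed_by is None:
        obj.grabbed_by = user                              # grab: lock the object to this user
    elif gesture == "spread" and obj.grabbed_by == user:
        obj.grabbed_by = None                              # release the lock
    elif gesture == "move" and obj.grabbed_by == user:
        obj.position = [p + d for p, d in zip(obj.position, arm_delta)]
    elif gesture == "rotate" and obj.grabbed_by == user:
        obj.rotation_y += 15.0                             # coarse rotation step in degrees

obj = SharedObject()
handle_gesture(obj, "userA", "fist")
handle_gesture(obj, "userA", "move", arm_delta=(0.1, 0.0, 0.0))
handle_gesture(obj, "userB", "fist")                       # ignored: userA still holds the lock
print(obj)
```

The simple ownership lock is one way to keep the "unified object" consistent when two people reach for it at once; a real system would also replicate this state across both HMDs.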

Ontological Robot System for Communication

  • Yamaguchi, Toru; Sato, Eri; Higuchi, Katsutaka
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.130-133 / 2003
  • The robot has recently emerged as a factor in the daily lives of humans, taking the form of a mechanical pet or similar source of entertainment. A robot system designed to co-exist with humans, i.e., a coexistence-type robot system, must exist in various environments alongside people and place value on physical, informational, and emotional interaction with them. When studying the impact of intimacy in the human/robot relationship, we have to examine the problems that can arise from physical intimacy (coordination on safety in both hardware and software). Furthermore, we should also consider the informational aspects of intimacy (recognition technology, and information transport and sharing). This paper reports interim results of research on a system configuration that enhances the physical intimacy relationship in human-robot symbiosis.

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok; Kim, Munsang; Choi, Mun-Taek; Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, human interaction intent can be estimated through an analysis of representative behaviors. This paper proposes a novel methodology for reliable intention analysis of humans by applying this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of human nonverbal behavior. These nonverbal behaviors are associated with recognition modules over multimodal sensors, one per modality: localizing the speaker's sound source (audition), recognizing frontal faces and facial expressions (vision), and estimating human trajectories, body pose and leaning, and hand gestures (spatial). As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies intention from multi-dimensional cues by applying weight factors. Thus, interactive robots can make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
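
A minimal sketch of weighted fusion of nonverbal cues into a per-person engagement decision, in the spirit of the integrated human model described above; the cue names, weights, and threshold are illustrative assumptions rather than the paper's reported values.

```python
# Illustrative cue weights over the three sensing parts (vision, audition, spatial).
CUE_WEIGHTS = {
    "frontal_face": 0.25,     # vision: frontal face / facial expression toward the robot
    "speaking_toward": 0.20,  # audition: sound source localized toward the robot
    "approaching": 0.20,      # spatial: trajectory moving closer
    "leaning_in": 0.15,       # spatial: body pose / lean toward the robot
    "hand_gesture": 0.20,     # spatial: waving or beckoning gesture
}

def intent_score(cues):
    """Weighted sum of per-cue confidences, each expected in [0, 1]."""
    return sum(w * cues.get(name, 0.0) for name, w in CUE_WEIGHTS.items())

def engagement_decision(cues, threshold=0.6):
    """Decide whether the robot should engage this person."""
    return "engage" if intent_score(cues) >= threshold else "wait"

person = {"frontal_face": 0.9, "speaking_toward": 0.7, "approaching": 0.8,
          "leaning_in": 0.2, "hand_gesture": 0.6}
print(round(intent_score(person), 3), engagement_decision(person))   # 0.675 engage
```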

A Study on the Design Characteristic and Improvement of the Studio Type Urban Lifestyle Housing in Seoul (서울시 도시형 생활주택 원룸형 주거의 계획특성 및 개선방안 연구)

  • Cho, Min-Jung
    • Korean Institute of Interior Design Journal / v.20 no.2 / pp.156-166 / 2011
  • Studio-type urban lifestyle housing was recently introduced as a new urban multi-housing typology. It was created particularly to meet the increasing housing demand of one-person households driven by population change and a shortage of housing supply. However, concerns have been raised because the government's policy has focused on expanding housing supply by easing certain legal regulations on construction. Poorly planned and managed urban lifestyle housing might degrade living conditions for one-person households and ultimately harm urban environments. As such, this research investigates the design characteristics of studio-type urban lifestyle housing from selected construction precedents in Seoul. Critical evaluations are made of the facilities and uses in site plans, unit plans, and shared public spaces. As a result, problems are found in the lack of design variety, privacy protection in units, control of natural environmental conditions, and the absence of community spaces. Improvement strategies are suggested through comparison with overseas housing precedents: design variation can be extended through flexible structure, facility, and furniture systems; privacy and the natural environment can be controlled through the integration of interior space configurations and exterior envelope systems. Housing policy needs to be reconsidered to improve design variety, residents' social interaction, security, and management. Thus, studio-type urban lifestyle housing should be approached holistically in terms of design and policy to enrich urban living experiences for residents and communities.

Multi-modal Sensor System and Database for Human Detection and Activity Learning of Robot in Outdoor (실외에서 로봇의 인간 탐지 및 행위 학습을 위한 멀티모달센서 시스템 및 데이터베이스 구축)

  • Uhm, Taeyoung; Park, Jeong-Woo; Lee, Jong-Deuk; Bae, Gi-Deok; Choi, Young-Ho
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1459-1466 / 2018
  • Robots that detect humans and recognize their actions are important for human interaction, and much research has been conducted on them. Recently, deep learning technology has advanced, and learning-based robot technology has become a major research area. These studies require a database for learning and evaluating intelligent human perception. In this paper, we propose conditions for a multi-modal sensor-based image database that considers security tasks, by analyzing the image data needed to detect people in outdoor environments and to recognize their behavior while the robot is operating.

Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon; Choi, Ki-Ho; Kim, Jong-Won; Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business sectors to reduce travel costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high-speed networks, we now have high-definition video that adds realism to the presence of counterparts who may be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. This technology is part of a teleconference system named SMS (Smart Meeting Space). In SMS, a person can use hand gestures to manipulate meeting data that could be in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called the SVM (Support Vector Machine) is used. For the prototype system, a 3D interaction environment was implemented with OpenGL™, in which a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
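
A hedged sketch of an SVM-based gesture classifier of the kind SMS uses for manipulating meeting data; the feature representation (a flattened hand-landmark vector), the gesture labels, and the synthetic data are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
GESTURES = ["grasp", "release", "rotate", "point"]          # hypothetical gesture set

# Hypothetical training data: 21 hand landmarks x (x, y) flattened into 42-dim vectors.
X = rng.normal(size=(400, 42))
y = rng.integers(0, len(GESTURES), size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")               # RBF-kernel SVM classifier
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At runtime, each detected hand frame would be classified and mapped to a
# manipulation command (e.g. a 6-DOF grab or move) on the shared 3D model.
frame = rng.normal(size=(1, 42))
print("predicted gesture:", GESTURES[int(clf.predict(frame)[0])])
```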