• Title/Summary/Keyword: Multimodal Interaction (멀티모달 인터랙션)


The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.92-101 / 2018
  • As interactive AI speakers gain popularity, voice recognition is regarded as an important vehicle-driver interaction method for autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered both aurally and visually through an AI character on screen, optimizes user experience more effectively than an auditory-only mode. Participants performed music selection and adjustment tasks through an AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention. Rather, the auditory-only mode proved more effective than the multimodal mode on the information quality factor. In the semi-autonomous driving stage, which demands the driver's cognitive effort, multimodal interaction is thus not more effective than single-mode interaction in optimizing user experience.

Design of dataglove based multimodal interface for 3D object manipulation in virtual environment (3 차원 오브젝트 직접조작을 위한 데이터 글러브 기반의 멀티모달 인터페이스 설계)

  • Lim, Mi-Jung;Park, Peom
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.1011-1018 / 2006
  • A multimodal interface is a cognition-based technology that interprets and encodes information about natural human behaviors such as gestures, gaze, hand movements, behavioral patterns, voice, and physical location. In this paper, we design and implement a 3D-object-based multimodal interface using gesture, voice, and touch. The service domain is the smart home: through direct manipulation of 3D objects, users can remotely monitor and control objects in the home. Because multimodal input and output require several modalities to be recognized and processed in parallel, the combination of modalities, their encoding methods, and the input/output formats become key design issues. Based on an analysis of the characteristics of each modality and of human cognitive structure, this study proposes an input combination scheme for the gesture, voice, and touch modalities and designs an efficient prototype for multimodal 3D object interaction.
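The abstract above highlights the problem of recognizing several input modalities in parallel and deciding how to combine them. A minimal sketch of one such combination rule, grouping events that arrive within a short time window, is shown below; the event types, payloads, and window size are entirely hypothetical illustrations, not the paper's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str     # e.g. "gesture", "voice", or "touch"
    payload: str      # recognized command fragment
    timestamp: float  # arrival time in seconds

def fuse(events, window=1.0):
    """Group events whose timestamps fall within `window` seconds of the
    first event in the group; each group forms one multimodal command."""
    groups, current = [], []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if current and ev.timestamp - current[0].timestamp > window:
            groups.append(current)
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return groups

# Hypothetical input: a pointing gesture and a spoken command arrive
# together, followed later by a separate touch confirmation.
events = [
    InputEvent("gesture", "point:lamp", 0.10),
    InputEvent("voice", "turn on", 0.40),
    InputEvent("touch", "confirm", 2.50),
]
print([[e.payload for e in g] for g in fuse(events)])
# → [['point:lamp', 'turn on'], ['confirm']]
```

Time-windowed grouping is only one of several fusion strategies; the paper derives its combination scheme from modality characteristics and human cognitive structure, which this sketch does not attempt to model.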


Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong;Lee, Hyun-Jin
    • Archives of Design Research / v.18 no.2 s.60 / pp.273-282 / 2005
  • In this study, we proposed the Context Cube as a conceptual model of user context, together with a multimodal interaction design framework for developing ubiquitous services by understanding user context and analyzing the correlation between context awareness and multimodality, which helps infer the meaning of context and offer services that meet user needs. We also developed a case study to verify the validity of the Context Cube and applied the proposed interaction design framework to derive a personalized ubiquitous service. Context awareness can be understood through information properties consisting of a user's basic activity, the locations of the user and devices (environment), time, and the user's daily schedule, which enables us to construct a three-dimensional conceptual model, the Context Cube. We also developed a ubiquitous interaction design process encompassing multimodal interaction design by studying the features of user interaction represented on the Context Cube.


Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience. The system recognizes user behavior, intention, and attention, overcoming the limitations of uni-modal interaction. It is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods rely on sensors selected by analyzing, through meta-analysis, the accuracy of 3-D gesture recognition technologies; the elements of intuitive gesture interaction were derived from experimental results; and the attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, an eye-gaze detecting system, and a bio-reaction sensing system. The first, the motion cognitive system, uses an accelerometer and flexible sensors to recognize the user's hand and finger movements. The second, the eye-gaze detecting system, detects pupil movements and reactions. The third is a bio-reaction sensing, or attention evaluating, system that tracks cardiovascular and skin-temperature responses. This study will be used for the development of realistic digital entertainment technology.


Camera-based Interaction for Handheld Virtual Reality (카메라의 상대적 추적을 사용한 핸드헬드 가상현실 인터랙션)

  • Hwang, Jane;Kim, Gerard Joung-Hyun;Kim, Nam-Gyu
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.619-625 / 2006
  • A handheld virtual reality system is a system that can be carried in one hand and provides a virtual environment through built-in multimodal sensors and multimodal display devices. Because such systems generally offer only limited input means (e.g., buttons, a touch screen), performing 3D interaction with them is not easy. This study therefore proposes, implements, and evaluates a method for 3D interaction in handheld virtual environments that uses the camera built into most handheld devices.


A study on AR(Augmented Reality) game platform design using multimodal interaction (멀티모달 인터렉션을 이용한 증강현실 게임 플랫폼 설계에 관한 연구)

  • Kim, Chi-Jung;Hwang, Min-Cheol;Park, Gang-Ryeong;Kim, Jong-Hwa;Lee, Ui-Cheol;U, Jin-Cheol;Kim, Yong-U;Kim, Ji-Hye;Jeong, Yong-Mu
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.87-90 / 2009
  • This study aims to design an augmented reality game platform using an HMD (Head Mounted Display), an infrared camera, a web camera, a data glove, and physiological signal sensors. The HMD tracks the user's head movements and presents virtual objects on its display. The infrared camera, attached below the HMD, tracks the user's gaze. The web camera, attached above the HMD, captures the forward view and delivers the real-world image to the user through the HMD display. The data glove captures the user's hand gestures. Autonomic nervous system responses are measured with GSR (Galvanic Skin Response), PPG (PhotoPlethysmoGraphy), and SKT (SKin Temperature) sensors; the measured skin conductance, pulse wave, and skin temperature are analyzed in real time to estimate the user's level of concentration. Head movements, gaze, and hand gestures drive intuitive interaction, and the concentration level is combined with this intuitive interaction to infer the user's intention. This study thus designed a new augmented reality game platform that uses multimodal interaction to implement intuitive interaction and to infer user intention through concentration analysis.


Multimodal Interface Control Module for Immersive Virtual Education (몰입형 가상교육을 위한 멀티모달 인터페이스 제어모듈)

  • Lee, Jaehyub;Im, SungMin
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.40-44 / 2013
  • This paper proposes a multimodal interface control module that allows a student to interact naturally with educational content in a virtual environment. The module recognizes the user's motion during interaction with the virtual environment and conveys it to the virtual environment via wireless communication. Furthermore, a haptic actuator is incorporated into the module to generate haptic feedback. Thanks to the proposed module, the user can feel a virtual object as if it existed in the real world.


Convergence evaluation method using multisensory and matching painting and music using deep learning based on imaginary soundscape (Imaginary Soundscape 기반의 딥러닝을 활용한 회화와 음악의 매칭 및 다중 감각을 이용한 융합적 평가 방법)

  • Jeong, Hayoung;Kim, Youngjun;Cho, Jundong
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.175-182 / 2020
  • In this study, we introduce a deep learning technique for matching classical music to paintings in order to design soundscapes that help viewers appreciate the paintings, and we propose an evaluation index for how well a painting and a piece of music match. The evaluation combined a suitability rating on a 5-point Likert scale with an evaluation in the multimodal aspect. The 13 participants' mean suitability score for the best deep-learning-based painting-music match was 3.74/5.0, and the average cosine similarity of their multimodal evaluations was 0.79. We expect multimodal evaluation to serve as an index for measuring new user experiences. This study also aims to improve the experience of multisensory artworks by proposing interaction between the visual and auditory senses. The proposed painting-music matching method can be used in multisensory art exhibitions and, furthermore, can improve access to artworks for visually impaired people.
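The abstract above reports an average cosine similarity of 0.79 across participants' multimodal evaluations. The paper's actual rating dimensions are not given here, but the similarity measure itself can be sketched as follows; the rating vectors are invented purely for illustration:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a · b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical example: one participant's rating vector for a
# painting-music pair, compared against a reference rating vector.
participant = [4, 5, 3, 4]
reference = [5, 4, 3, 5]
print(round(cosine_similarity(participant, reference), 2))  # → 0.98
```

Cosine similarity ranges from -1 to 1 (0 to 1 for non-negative ratings), so averaging it across participants yields a single agreement score like the 0.79 the study reports.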

Multimodal based Storytelling Experience Using Virtual Reality in Museum (가상현실을 이용한 박물관 내 멀티모달 스토리텔링 경험 연구)

  • Lee, Ji-Hye
    • The Journal of the Korea Contents Association / v.18 no.10 / pp.11-19 / 2018
  • This paper concerns multimodal storytelling experiences that apply virtual reality technology in museums. Specifically, it argues that virtual reality supports both intuitive understanding of history and multimodal experience of the space. The research investigates cases of virtual reality use in the museum sector. As its method, the paper conducts a literature review on multimodal experience and on examples applying virtual reality and related technologies in museums, examining the necessary concepts together with related cases. Based on this investigation, the paper proposes constituent elements for VR-based multimodal storytelling, in which dynamic audio-visual and interaction modes combine with historical resources for diverse audiences.