• Title/Abstract/Keyword: Multi-modal Interaction

A Research of User Experience on Multi-Modal Interactive Digital Art

  • Qianqian Jiang;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.80-85 / 2024
  • The concept of single-modal digital art originated in the 20th century and has evolved through three key stages. Over time, digital art has transformed into multi-modal interaction, representing a new era in art forms. Based on multi-modal theory, this paper explores the characteristics of interactive digital art as an innovative art form and its impact on user experience. Through an analysis of practical applications of multi-modal interactive digital art, this study summarises the impact of creative models of digital art on the physical and mental aspects of user experience. In creating audio-visual-based art, multi-modal digital art should seamlessly incorporate sensory elements and leverage computer image processing technology. Focusing on user perception, emotional expression, and cultural communication, it strives to establish an immersive environment with user experience at its core. Future research, particularly with emerging technologies like Artificial Intelligence (AI) and Virtual Reality (VR), should not merely prioritize technology but aim for meaningful interaction. Through multi-modal interaction, digital art is poised to continually innovate, offering new possibilities and expanding the realm of interactive digital art.

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.499-505 / 2012
  • Objective: To study the transformation of interaction between mobile devices and users through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and more recently to touch and gesture recognition. The technology is expected to evolve further into multi-modal interfaces that fuse the visual and auditory senses, and then into 3-D multi-modal interfaces that use three-dimensional virtual worlds and brain waves. Method: The trends and development of gesture interfaces and related technologies, which follow the evolution of mobile devices, are studied comprehensively. Gesture interfaces are classified by their information-gathering techniques into four categories (sensor, touch, visual, and multi-modal), and each category is examined through technology trends and existing examples. Through this method, the transformation of interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between previously static machines and their users, and is thus a key enabling technology for making human-machine interaction more dynamic. Application: The results of this study may help in the design of gesture interfaces currently in use.

A Study on the Multi-Modal Browsing System by Integration of Browsers Using Java RMI (자바 RMI를 이용한 브라우저 통합에 의한 멀티-모달 브라우징 시스템에 관한 연구)

  • Jang Joonsik;Yoon Jaeseog;Kim Gukboh
    • Journal of Internet Computing and Services / v.6 no.1 / pp.95-103 / 2005
  • Recently, multi-modal systems have been studied widely and actively. Such systems increase the feasibility of HCI (Human-Computer Interaction), can present information in various ways, and are applicable to e-business applications. If an ideal multi-modal system is realized in the future, users will be able to maximize hands-free and eyes-free interaction between people and information devices. In this paper, a new multi-modal browsing system is proposed that integrates an HTML browser and a voice browser using Java RMI as the communication interface, and an English-English dictionary search application is implemented as an example.
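
As a rough illustration of the integration pattern this abstract describes (not the paper's actual code), a voice browser and an HTML browser can share state through a small RMI remote interface; the BrowserSync interface and its method names below are hypothetical.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical contract between the two browsers (one .java file per
// top-level type in practice; shown together here for brevity).
interface BrowserSync extends Remote {
    // The voice browser pushes a recognized utterance, e.g. a spoken
    // dictionary lookup term, to the HTML browser.
    void onVoiceInput(String utterance) throws RemoteException;
    // The HTML browser reports a navigation so the voice browser can
    // keep its dialog state aligned with the visible page.
    void onPageChange(String url) throws RemoteException;
}

public class HtmlBrowserEndpoint implements BrowserSync {
    @Override public void onVoiceInput(String utterance) {
        System.out.println("voice -> html: " + utterance);
    }
    @Override public void onPageChange(String url) {
        System.out.println("html -> voice: " + url);
    }

    public static void main(String[] args) throws Exception {
        // Export the endpoint and register it so the voice browser
        // process can look it up by name.
        BrowserSync stub =
            (BrowserSync) UnicastRemoteObject.exportObject(new HtmlBrowserEndpoint(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("browser-sync", stub);
    }
}
```

The voice browser process would obtain the stub via LocateRegistry.getRegistry(host).lookup("browser-sync") and call onVoiceInput whenever its recognizer emits a result.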

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, which overcomes the limitations of uni-modal interaction. It is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors selected through a meta-analysis of the accuracy of 3-D gesture recognition technology, the elements of intuitive gesture interaction were derived from experimental results, and the attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, which uses an accelerometer and flexible sensors to recognize the user's hand and finger movements; an eye-gaze detecting system, which detects pupil movements and reactions; and a bio-reaction sensing (attention evaluation) system, which tracks cardiovascular and skin-temperature reactions. This study will be used for the development of realistic digital entertainment technology.
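
A minimal sketch of how events from the three modules might be fused, assuming a simple time-window heuristic; the ModalEvent record, the module names, and the 300 ms window are illustrative assumptions, not details from the paper.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical reading from one of the three modules
// ("motion", "gaze", or "bio").
record ModalEvent(String module, String value, long timeMillis) {}

// Late-fusion buffer: pairs a hand gesture from the motion module with
// the most recent gaze target seen inside a short time window.
class FusionBuffer {
    private static final long WINDOW_MS = 300;  // assumed fusion window
    private final Deque<ModalEvent> recent = new ArrayDeque<>();

    void push(ModalEvent e) {
        recent.addLast(e);
        // Drop events that have fallen out of the fusion window.
        while (!recent.isEmpty()
                && e.timeMillis() - recent.peekFirst().timeMillis() > WINDOW_MS) {
            recent.removeFirst();
        }
        if (e.module().equals("motion")) {
            recent.stream()
                  .filter(g -> g.module().equals("gaze"))
                  .reduce((a, b) -> b)  // keep the most recent gaze sample
                  .ifPresent(g -> System.out.println(
                          "command: " + e.value() + " at target " + g.value()));
        }
    }
}
```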

The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun;Kim, Seongmin;Choe, Jaeho;Jung, Eui Seung
    • Journal of the Ergonomics Society of Korea / v.33 no.3 / pp.241-253 / 2014
  • Objective: The objective of this study was to examine the multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modal interaction is considered an efficient alternative for the input and output of information in mobile environments. However, current mobile UI (User Interface) systems have various limitations, overlooking the transitions between different modes and the usability of combined modal use. Method: A pre-survey determined five representative smart phone tasks by their functions. The first experiment involved the use of a single mode for five single tasks; the second experiment involved the use of multiple modes for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the type of mode (i.e., touch, pen, or voice), while the variable in the second experiment was the type of task (i.e., internet searching, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the use of pen and touch input; however, a specific mode type was preferred depending on the functional characteristics of the task. In the second experiment, user preference depended on the order and combination of modes, and even with mode transitions, users preferred multi-mode combinations that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modes. Therefore, when designing a multi-modal system, the frequent transitions between various mobile contents in different modes should be properly considered. Application: The results may be utilized as a user-centered design guideline for mobile multi-modal UI systems.

Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong;Lee, Hyun-Jin
    • Archives of design research / v.18 no.2 s.60 / pp.273-282 / 2005
  • In this study, we propose the Context Cube as a conceptual model of user context, together with a multi-modal interaction design framework for developing ubiquitous services; by relating context awareness to multi-modality, both help infer the meaning of a context and offer services that meet user needs. We also developed a case study to verify the Context Cube's validity and applied the proposed framework to derive a personalized ubiquitous service. User context is modeled as information properties consisting of a user's basic activity, location and devices (environment), time, and daily schedule, which together form the three-dimensional conceptual model, the Context Cube. Based on the user-interaction features exposed by the Context Cube, we developed a ubiquitous interaction design process that encompasses multi-modal interaction design.
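
Read literally, the Context Cube can be rendered as a three-axis lookup from (activity, environment, time) to a service; this is a minimal sketch under that assumption, with hypothetical class and method names.

```java
import java.util.HashMap;
import java.util.Map;

// One cell of the cube: a point on the activity/environment/time axes.
record ContextCell(String activity, String environment, String timeSlot) {}

class ContextCube {
    private final Map<ContextCell, String> services = new HashMap<>();

    // Populate a cell with the service inferred for that context.
    void define(String activity, String environment, String timeSlot, String service) {
        services.put(new ContextCell(activity, environment, timeSlot), service);
    }

    // Look up the service for the user's current context.
    String infer(String activity, String environment, String timeSlot) {
        return services.getOrDefault(
                new ContextCell(activity, environment, timeSlot), "no service defined");
    }
}
```

A personalized service then falls out of a lookup; for example, infer("cooking", "kitchen", "evening") might return a recipe voice guide.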

A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction (멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석)

  • Oh, Junseok;Cho, Hye-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.20 no.6 / pp.619-624 / 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device delivering educational content, we employ a type of mixed-initiative operation that provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system, and to see how multi-modality affects those connections, we performed a user study based on the TAM (Technology Acceptance Model). The results indicate that the quality of the system and natural interaction are key factors in the adoption of the r-learning system, and they also reveal interesting implications concerning human behavior.

Telepresence Robotic Technology for Individuals with Visual Impairments Through Real-time Haptic Rendering (실시간 햅틱 렌더링 기술을 통한 시각 장애인을 위한 원격현장감(Telepresence) 로봇 기술)

  • Park, Chung Hyuk;Howard, Ayanna M.
    • The Journal of Korea Robotics Society / v.8 no.3 / pp.197-205 / 2013
  • This paper presents a robotic system that provides telepresence to individuals with visual impairments by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified framework of control and feedback for the telepresence robot. We discuss the challenging problem of conveying environmental perception to a user with visual impairments and our multi-modal interaction solution, and we describe the experimental design, protocols, and results with human subjects both with and without visual impairments. A discussion of the system's performance and our future goals concludes the paper.
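
The virtual-proxy idea can be sketched in one dimension: the proxy tracks the haptic device but may not pass the surface observed by the depth sensor, and the rendered force is a spring pulling the device back toward the proxy. The class below is an illustrative assumption, not the paper's implementation.

```java
// One-dimensional virtual-proxy haptic rendering along a viewing ray.
class VirtualProxy1D {
    private static final double STIFFNESS = 800.0; // N/m, assumed spring constant
    private double proxy;                          // constrained proxy position (m)

    // devicePos: haptic device position along the ray (m).
    // surfaceDepth: obstacle surface depth along the same ray,
    //               taken from the RGB-D sensor (m).
    // Returns the force to render on the device (N).
    double update(double devicePos, double surfaceDepth) {
        // The proxy follows the device but never penetrates the surface.
        proxy = Math.min(devicePos, surfaceDepth);
        // Spring force proportional to the device-proxy separation:
        // zero in free space, repulsive once the device penetrates.
        return STIFFNESS * (proxy - devicePos);
    }
}
```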

Modal Analysis of Resonance and Stable Domain Calculation of Active Damping in Multi-inverter Grid-connected Systems

  • Wu, Jian;Chen, Tao;Han, Wanqin;Zhao, Jiaqi;Li, Binbin;Xu, Dianguo
    • Journal of Power Electronics / v.18 no.1 / pp.185-194 / 2018
  • Interaction among multiple grid-connected inverters has a negative impact on the stable operation and power quality of a power grid. The interrelated influences of the inverters' inductor-capacitor-inductor (LCL) filters constitute a high-order power network and consequently excite complex resonances at various frequencies. This study first establishes a micro-grid admittance matrix in which the inverters use deadbeat control. Multiple resonances can then be evaluated via modal analysis. For the active damping method applied to deadbeat control, the sampling frequency and the stable domain of the virtual damping ratio are also derived by analyzing system stability in the discrete domain. Simulation and experimental results confirm the effectiveness of the modal analysis and the stable-domain calculation in multi-inverter grid-connected systems.
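
For context, resonance modal analysis in this setting conventionally eigen-decomposes the nodal admittance matrix at each frequency; the notation below is the standard formulation, shown as a sketch rather than the paper's exact derivation.

```latex
% Eigen-decomposition of the nodal admittance matrix at frequency f:
Y(f)\,U(f) = U(f)\,\Lambda(f), \qquad
V = Y(f)^{-1} I = U(f)\,\Lambda(f)^{-1} U(f)^{-1} I .
% A resonance peak appears near frequencies where an eigenvalue
% \lambda_i(f) approaches zero; the contribution of bus k to critical
% mode i is measured by the participation factor
PF_{ki} = U_{ki}\,\bigl[U(f)^{-1}\bigr]_{ik} .
```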

Multi-Modal Instruction Recognition System using Speech and Gesture (음성 및 제스처를 이용한 멀티 모달 명령어 인식 시스템)

  • Kim, Jung-Hyun;Rho, Yong-Wan;Kwon, Hyung-Joon;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2006.06a / pp.57-62 / 2006
  • As portable terminals become smaller and more intelligent, and as interest in next-generation PC-based ubiquitous computing grows, research on Multi-Modal Interaction (MMI) equipped with multiple dialog modes such as pen, voice input, and multimedia has recently been actively pursued. Accordingly, aiming at an interface for clear communication in noisy environments and for integrated speech-gesture recognition on portable terminals, this paper proposes and implements a Multi-Modal Instruction Recognition System (MMIRS) that integrates a Voice-XML-based speech recognizer with an embedded sign-language recognizer on a Wearable Personal Station (WPS). Because the proposed MMIRS recognizes not only speech but also the speaker's sign-language gesture commands for sentence- and word-level instruction models corresponding to the Korean Standard Sign Language (KSSL), improved recognition performance on the prescribed instruction models can be expected even in noisy environments. To evaluate the recognition performance of the MMIRS, 15 subjects continuously expressed speech and sign-language gestures for 62 sentence-type and 104 word-type recognition models, and the average recognition rates of the individual recognizers and the MMIRS were compared and analyzed; for the sentence-type instruction models, the MMIRS achieved average recognition rates of 93.45% in noisy environments and 95.26% in noise-free environments.
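
A minimal sketch of the kind of late fusion such a system might use: both recognizers score the same KSSL-derived command vocabulary, and the fused score is a weighted sum that down-weights the speech channel in noise. The class name, weights, and scoring scheme are assumptions for illustration, not the MMIRS design.

```java
import java.util.Map;

// Hypothetical late-fusion scorer over a shared command vocabulary.
class LateFusion {
    private final double speechWeight; // lowered in noisy environments

    LateFusion(double speechWeight) { this.speechWeight = speechWeight; }

    // Each map holds per-command confidence scores from one recognizer.
    String recognize(Map<String, Double> speechScores,
                     Map<String, Double> gestureScores) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> entry : speechScores.entrySet()) {
            double fused = speechWeight * entry.getValue()
                    + (1.0 - speechWeight)
                      * gestureScores.getOrDefault(entry.getKey(), 0.0);
            if (fused > bestScore) { bestScore = fused; best = entry.getKey(); }
        }
        return best;
    }
}
```

For instance, new LateFusion(0.3) would lean on the gesture channel, matching the abstract's motivation of keeping recognition robust in noisy environments.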
