• Title/Abstract/Keyword: multimodal input


멀티모달 인터랙션을 위한 사용자 병렬 모달리티 입력방식 및 입력 동기화 방법 설계 (Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction)

  • 임미정;박범
    • 대한인간공학회지 / Vol. 25, No. 2 / pp.135-146 / 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. Modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces are the technologies that constitute multimodal interaction processes, which occur consciously or unconsciously while a human and a computer communicate. The input/output forms of multimodal interfaces therefore differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to design effective multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests modality combination types and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
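
The abstract does not spell out the synchronization design itself; as a minimal sketch of what granularity-based synchronization of parallel modality inputs can look like, the code below groups time-stamped events from different channels into fusion frames using a time window. The event fields and the 0.2 s window are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModalityEvent:
    modality: str      # e.g. "speech", "gesture", "gaze"
    payload: str       # recognized token or symbol
    start: float       # seconds
    end: float         # seconds

def synchronize(events: List[ModalityEvent], window: float = 0.2) -> List[List[ModalityEvent]]:
    """Group events from parallel modalities that are close in time
    into one fusion frame (coarse synchronization granularity)."""
    frames: List[List[ModalityEvent]] = []
    for ev in sorted(events, key=lambda e: e.start):
        if frames and ev.start - frames[-1][-1].end <= window:
            frames[-1].append(ev)          # close enough in time: same frame
        else:
            frames.append([ev])            # otherwise start a new fusion frame
    return frames

# Example: "put <point> that <point> there" style parallel input
events = [
    ModalityEvent("speech", "put", 0.00, 0.30),
    ModalityEvent("gesture", "point(obj_3)", 0.35, 0.50),
    ModalityEvent("speech", "there", 0.55, 0.80),
    ModalityEvent("gesture", "point(x=120,y=40)", 0.85, 1.00),
]
for frame in synchronize(events):
    print([e.payload for e in frame])
```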

Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun;Kim, Kyong-Ho
    • ETRI Journal / Vol. 37, No. 3 / pp.626-636 / 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, to effectively manage input information and intelligently provide information to the driver. It then interacts with the driver using an adaptive multimodal interface by considering both the driving workload and the driver's cognitive reaction to the information it provides. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality and corresponding level with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to provide prioritized information through an easily understood modality.
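
The abstract does not give the actual filtering or modality-selection rules; purely as an illustration of the idea, the sketch below drops lower-priority messages as the estimated driving workload rises and prefers speech output when visual attention is scarce. The priority levels, workload thresholds, and modality names are assumptions, not the paper's policy.

```python
# Illustrative sketch of priority-based filtering and modality selection.
MESSAGES = [
    {"text": "Collision warning ahead", "priority": 1},   # 1 = safety-critical
    {"text": "Low fuel",                "priority": 2},
    {"text": "New e-mail received",     "priority": 3},   # 3 = convenience
]

def filter_messages(messages, workload):
    """Keep only messages important enough for the current driving workload.
    workload: 0.0 (idle) .. 1.0 (very demanding situation)."""
    max_priority = 1 if workload > 0.7 else 2 if workload > 0.4 else 3
    return [m for m in messages if m["priority"] <= max_priority]

def choose_modality(workload):
    """Prefer speech-only output when the driver's eyes should stay on the road."""
    return "speech" if workload > 0.5 else "visual+speech"

workload = 0.8  # e.g. merging onto a highway
for msg in filter_messages(MESSAGES, workload):
    print(choose_modality(workload), "->", msg["text"])
```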

TV 가이드 영역에서의 음성기반 멀티모달 사용 유형 분석 (Speech-Oriented Multimodal Usage Pattern Analysis for TV Guide Application Scenarios)

  • 김지영;이경님;홍기형
    • 대한음성학회지:말소리 / No. 58 / pp.101-117 / 2006
  • The development of efficient multimodal interfaces and fusion algorithms requires knowledge of usage patterns that show how people use multiple modalities. We analyzed multimodal usage patterns for TV-guide application scenarios (or tasks). To collect usage patterns, we implemented a multimodal usage pattern collection system with two input modalities: speech and touch-gesture. Fifty-four subjects participated in our study. Analysis of the collected usage patterns shows a positive correlation between the task type and the multimodal usage patterns. In addition, we analyzed the timing between speech utterances and their corresponding touch-gestures, that is, when a touch-gesture occurs relative to the duration of the speech utterance. We believe that, to develop efficient multimodal fusion algorithms for an application, a multimodal usage pattern analysis for that application, similar to our work on the TV guide application, has to be done in advance.
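
The timing measure described above is straightforward to compute; the short sketch below normalizes the touch-gesture onset against the duration of the corresponding speech utterance. The sample values are hypothetical, not the collected data.

```python
def relative_gesture_onset(speech_start, speech_end, gesture_time):
    """Gesture onset as a fraction of the speech utterance:
    0.0 = at speech onset, 1.0 = at speech offset, <0 or >1 = outside it."""
    duration = speech_end - speech_start
    return (gesture_time - speech_start) / duration

# Hypothetical observations (seconds): (speech_start, speech_end, gesture_time)
observations = [(0.0, 1.2, 0.9), (0.0, 0.8, 1.0), (0.0, 1.5, -0.2)]
ratios = [relative_gesture_onset(*obs) for obs in observations]
print(ratios)                      # per-utterance relative onsets
print(sum(ratios) / len(ratios))   # mean relative onset
```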

미들웨어 기반의 텔레매틱스용 멀티모달 인터페이스 (A Multimodal Interface for Telematics based on Multimodal middleware)

  • 박성찬;안세열;박성수;구명완
    • 대한음성학회:학술대회논문집 / 대한음성학회 2007년도 한국음성과학회 공동학술대회 발표논문집 / pp.41-44 / 2007
  • In this paper, we introduce a system in which a car navigation scenario is plugged into a multimodal interface based on multimodal middleware. In a map-based system, the combination of speech and pen input/output modalities can offer users better expressive power. To accomplish multimodal tasks in car environments, we have chosen SCXML (State Chart XML), a W3C-standard multimodal authoring language, to control modality components such as XHTML, VoiceXML, and GPS. In the Network Manager, GPS signals from the navigation software are converted to the EMMA meta language and sent to the MultiModal Interaction Runtime Framework (MMI). The MMI not only handles GPS signals and the user's multimodal I/O but also combines them with device information, user preferences, and reasoned RDF to give the user intelligent and personalized services. A self-simulation test has shown that the middleware accomplishes a navigational multimodal task over multiple users in car environments.
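
The abstract mentions wrapping GPS signals in EMMA before handing them to the MMI runtime; the sketch below shows one plausible conversion using the standard EMMA container elements. The payload element names, ids, and confidence value are assumptions, and the paper's actual schema may differ.

```python
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def gps_to_emma(lat, lon, confidence=1.0):
    """Wrap a GPS fix in an EMMA document so the MMI runtime can treat it
    like any other modality input. The <position> payload is illustrative."""
    emma = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(
        emma,
        f"{{{EMMA_NS}}}interpretation",
        {
            "id": "gps1",
            f"{{{EMMA_NS}}}medium": "sensor",
            f"{{{EMMA_NS}}}mode": "gps",
            f"{{{EMMA_NS}}}confidence": str(confidence),
        },
    )
    pos = ET.SubElement(interp, "position")   # application-specific payload
    ET.SubElement(pos, "lat").text = str(lat)
    ET.SubElement(pos, "lon").text = str(lon)
    return ET.tostring(emma, encoding="unicode")

print(gps_to_emma(37.5665, 126.9780, confidence=0.98))
```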

음성기반 멀티모달 인터페이스 기술 현황 및 과제 (The Status and Research Themes of Speech based Multimodal Interface Technology)

  • 이지근;이은숙;이혜정;김봉완;정석태;정성태;이용주;한문성
    • 대한음성학회:학술대회논문집 / 대한음성학회 2002년도 11월 학술대회지 / pp.111-114 / 2002
  • Complementary use of several modalities in human-to-human communication ensures high accuracy, and only a few communication problems occur. Therefore, the multimodal interface is considered the next-generation interface between humans and computers. This paper presents the current status and research themes of speech-based multimodal interface technology. It first introduces the concept of a multimodal interface. It then surveys recognition technologies for input modalities and synthesis technologies for output modalities, followed by modality integration technology. Finally, it presents research themes for speech-based multimodal interface technology.

Design and Implementation of a Multimodal Input Device Using a Web Camera

  • Na, Jong-Whoa;Choi, Won-Suk;Lee, Dong-Woo
    • ETRI Journal / Vol. 30, No. 4 / pp.621-623 / 2008
  • We propose a novel input pointing device called the multimodal mouse (MM) which uses two modalities: face recognition and speech recognition. From an analysis of Microsoft Office workloads, we find that 80% of Microsoft Office Specialist test tasks are compound tasks using both the keyboard and the mouse together. When we use the optical mouse (OM), operation is quick, but it requires a hand exchange delay between the keyboard and the mouse. This takes up a significant amount of the total execution time. The MM operates more slowly than the OM, but it does not consume any hand exchange time. As a result, the MM shows better performance than the OM in many cases.
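
The trade-off described above (fast pointing but a hand-exchange penalty for the optical mouse versus slower, hands-free pointing for the multimodal mouse) can be illustrated with a simple Keystroke-Level-Model-style calculation. The K, P, and H operator times below are the commonly cited KLM estimates; the multimodal-mouse pointing time and the task mix are assumptions, not the paper's measurements.

```python
# Rough KLM-style comparison (illustrative operator times, in seconds).
H = 0.40    # homing: moving the hand between keyboard and mouse
P = 1.10    # pointing with the optical mouse
P_MM = 1.60 # assumed (slower) pointing with the multimodal mouse, no homing
K = 0.20    # one keystroke

def compound_task_time(pointing, homing, keystrokes=5):
    """Typing a few keys, then one pointing action, then back to the keyboard."""
    return keystrokes * K + homing + pointing + homing

om_time = compound_task_time(pointing=P, homing=H)       # optical mouse
mm_time = compound_task_time(pointing=P_MM, homing=0.0)  # multimodal mouse
print(f"optical mouse:    {om_time:.2f} s")   # slower overall: two homings
print(f"multimodal mouse: {mm_time:.2f} s")   # slower pointing, no homing
```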

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / Vol. 18, No. 3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has been attracting many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is impacted by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of context by treating it as a modality for inferring the predicted emotion along with facial expressions. However, the contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features based on their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
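
As a rough sketch of attention-based multimodal fusion (not the authors' exact network), the module below scores each modality feature, turns the scores into attention weights with a softmax, and fuses the features as a weighted sum before classification. The feature dimension, number of modalities, and the 26 output categories are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight each modality's feature by a learned attention score,
    then fuse with a weighted sum. Illustrative, not the paper's model."""
    def __init__(self, dim=256, num_classes=26):
        super().__init__()
        self.score = nn.Linear(dim, 1)              # shared scoring head
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats):                       # feats: (batch, M, dim)
        scores = self.score(feats)                  # (batch, M, 1)
        weights = torch.softmax(scores, dim=1)      # attention over modalities
        fused = (weights * feats).sum(dim=1)        # (batch, dim)
        return self.classifier(fused), weights

# Example: face, body, and scene-context features for a batch of 4 images
feats = torch.randn(4, 3, 256)
logits, weights = AttentionFusion()(feats)
print(logits.shape, weights.squeeze(-1))
```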

음성인식 및 영상처리 기반 멀티모달 입력장치의 설계 (Design of the Multimodal Input System using Image Processing and Speech Recognition)

  • 최원석;이동우;김문식;나종화
    • 제어로봇시스템학회논문지 / Vol. 13, No. 8 / pp.743-748 / 2007
  • Recently, various types of camera mouse have been developed using image processing. The camera mouse shows limited performance compared to the traditional optical mouse in terms of response time and usability. These problems are caused by the mismatch between the size of the monitor and that of the active pixel area of the CMOS image sensor. To overcome these limitations, we designed a new input device that uses face recognition and speech recognition simultaneously. In the proposed system, the monitor area is partitioned into n zones. Face recognition is performed using a web camera, so that the mouse pointer follows the movement of the user's face within a particular zone. The user can switch zones by speaking the name of a zone. The multimodal mouse is analyzed using the Keystroke-Level Model, and initial experiments were performed to evaluate the feasibility and performance of the proposed system.
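
The zone-based pointing scheme can be sketched as follows: speech selects one of the n zones, and the face-tracking offset is mapped to pixel coordinates inside that zone only. The screen size, zone grid, and scaling below are illustrative assumptions, not the paper's parameters.

```python
SCREEN_W, SCREEN_H = 1920, 1080
GRID_COLS, GRID_ROWS = 3, 2          # n = 6 zones, selected by voice ("zone 4")

def pointer_position(zone_index, face_dx, face_dy):
    """Map a face-movement offset (normalized -1..1) to a pixel position
    inside the zone chosen by speech."""
    zone_w, zone_h = SCREEN_W // GRID_COLS, SCREEN_H // GRID_ROWS
    col, row = zone_index % GRID_COLS, zone_index // GRID_COLS
    cx = col * zone_w + zone_w // 2      # zone center
    cy = row * zone_h + zone_h // 2
    x = cx + int(face_dx * zone_w / 2)   # offset within the zone only
    y = cy + int(face_dy * zone_h / 2)
    return x, y

# "Zone four" spoken, then a slight face movement to the upper right
print(pointer_position(zone_index=3, face_dx=0.4, face_dy=-0.3))
```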

휴대폰용 멀티모달 인터페이스 개발 - 키패드, 모션, 음성인식을 결합한 멀티모달 인터페이스 (Development of a multimodal interface for mobile phones)

  • 김원우
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2008년도 학술대회 1부 / pp.559-563 / 2008
  • The mobile phone has become an indispensable personalized device in modern life, and it is where the convergence of diverse devices, content, and services is taking place. Research on means of effectively searching and using such varied, complex functions and large volumes of content and information is also actively under way. The goal of this study is to develop a new interface for entering Korean words on a mobile phone using speech, the keypad, and motion, and to verify its usability and effectiveness through a dialing application built on it. The developed multimodal interface retains the advantage of a speech interface, which can reach a complex and deep menu tree in a single step, while improving the recognition rate and recognition time.
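
The abstract does not detail how the keypad and motion inputs improve recognition; one plausible reading, sketched below purely as an illustration, is that keypad input narrows the speech recognizer's candidate vocabulary (here, by the word's initial consonant) before acoustic scores are compared. The consonant mapping, contact list, and scores are invented for the example.

```python
# Illustrative sketch only: constrain the speech recognizer's vocabulary
# with keypad input (the word's initial consonant) before picking a result.
KEYPAD_TO_INITIALS = {"1": "ㄱ", "2": "ㄴ", "3": "ㄷ"}   # invented mapping

CONTACTS = {"김철수": "ㄱ", "나영희": "ㄴ", "도민준": "ㄷ", "강호동": "ㄱ"}

def candidates_for_key(key):
    """Names whose first consonant matches the pressed key."""
    initial = KEYPAD_TO_INITIALS[key]
    return [name for name, first in CONTACTS.items() if first == initial]

def recognize(speech_scores, key):
    """Pick the best-scoring name among the keypad-constrained candidates."""
    allowed = candidates_for_key(key)
    return max(allowed, key=lambda name: speech_scores.get(name, 0.0))

# Hypothetical acoustic scores from the speech recognizer
scores = {"김철수": 0.62, "강호동": 0.58, "나영희": 0.70}
print(recognize(scores, key="1"))   # keypad "1" limits candidates to ㄱ-names
```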

Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia;Calderon, Daniela;Ko, Hee-Dong
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2006년도 학술대회 1부 / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input, using a spatial ontology and the user context to combine the interpretation results of the individual inputs into a single interpretation. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to assist the user in placing objects in the virtual environment as they would in the real world.
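
As a minimal illustration of the ambiguity-resolution idea, under invented relations and object names: a small spatial ontology of "on"/"near" facts plus the current user context (e.g. which object the pointing ray hits) narrows a vague command such as "put that on the shelf" to a single object. This is not the paper's ontology or algorithm.

```python
# Illustrative only: tiny "spatial ontology" as (subject, relation, object) facts.
FACTS = [
    ("cup",  "on",   "desk"),
    ("book", "on",   "desk"),
    ("lamp", "near", "sofa"),
]

def objects_with(relation, reference):
    return {s for s, r, o in FACTS if r == relation and o == reference}

def resolve_that(user_context, relation=None, reference=None):
    """Resolve the deictic 'that' using spatial constraints plus user context
    (e.g. the object currently pointed at or most recently manipulated)."""
    candidates = objects_with(relation, reference) if relation else {s for s, _, _ in FACTS}
    pointed = user_context.get("pointed_at")
    if pointed in candidates:
        return pointed                             # pointing gesture disambiguates
    return sorted(candidates)[0] if candidates else None

# "Put that on the shelf" spoken while the pointing ray hits the book on the desk
context = {"pointed_at": "book"}
print(resolve_that(context, relation="on", reference="desk"))   # -> "book"
```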
