• Title/Summary/Keyword: Multimodal Contents


Design and Implementation of Multimodal Middleware for Mobile Environments (모바일 환경을 위한 멀티모달 미들웨어의 설계 및 구현)

  • Park, Seong-Soo;Ahn, Se-Yeol;Kim, Won-Woo;Koo, Myoung-Wan;Park, Sung-Chan
    • MALSORI, no.60, pp.125-144, 2006
  • W3C announced a standard software architecture for multimodal context-aware middleware that emphasizes modularity and separates structure, contents, and presentation. We implemented a distributed multimodal interface system that follows the W3C architecture, based on SCXML. SCXML uses parallel states to invoke both XHTML and VoiceXML contents as well as to gather composite or sequential multimodal inputs through man-machine interactions. We also employ a Delivery Context Interface (DCI) module and an external service bundle that enable the middleware to support context-awareness services in real-world environments. The provision of personalized user interfaces for mobile devices is expected to serve different devices with a wide variety of capabilities and interaction modalities. Through experiments, we demonstrated that the implemented middleware could maintain multimodal scenarios in a clear, concise, and consistent manner.

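The parallel-state mechanism this entry describes can be sketched briefly. The Python snippet below only assembles a minimal SCXML document with one region per modality; the state ids, invoke types, content URLs, and completion event are hypothetical choices for illustration rather than details taken from the paper, though the element names themselves (scxml, parallel, state, invoke, transition, final) follow the W3C SCXML specification.

```python
# A minimal sketch (hypothetical state ids, invoke targets, and event names) of
# how a <parallel> state can drive a GUI modality and a voice modality at once.
import xml.etree.ElementTree as ET

SCXML_NS = "http://www.w3.org/2005/07/scxml"

def build_multimodal_scxml() -> str:
    """Build an SCXML document with two parallel regions: one invoking
    XHTML content for the GUI, one invoking VoiceXML content for speech."""
    scxml = ET.Element("scxml", xmlns=SCXML_NS, version="1.0", initial="interaction")
    parallel = ET.SubElement(scxml, "parallel", id="interaction")

    gui = ET.SubElement(parallel, "state", id="gui")
    ET.SubElement(gui, "invoke", type="xhtml", src="menu.xhtml")   # hypothetical content URL

    voice = ET.SubElement(parallel, "state", id="voice")
    ET.SubElement(voice, "invoke", type="vxml", src="menu.vxml")   # hypothetical content URL

    # Intended behavior: leave the parallel region once both modalities have
    # reported a result, i.e. a composite multimodal input has been gathered.
    ET.SubElement(parallel, "transition", event="done.state.interaction", target="final")
    ET.SubElement(scxml, "final", id="final")
    return ET.tostring(scxml, encoding="unicode")

if __name__ == "__main__":
    print(build_multimodal_scxml())
```

Printing the result shows the GUI and voice regions nested side by side inside one parallel state, which is the structural property such middleware relies on to keep both modalities active at the same time.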

Designing a Framework of Multimodal Contents Creation and Playback System for Immersive Textbook (실감형 교과서를 위한 멀티모달 콘텐츠 저작 및 재생 프레임워크 설계)

  • Kim, Seok-Yeol;Park, Jin-Ah
    • The Journal of the Korea Contents Association, v.10 no.8, pp.1-10, 2010
  • For virtual education, a multimodal learning environment with haptic feedback, termed an 'immersive textbook', is necessary to enhance learning effectiveness. However, learning contents for the immersive textbook are not widely available due to the constraints of creation and playback environments. To address this problem, we propose a framework for producing and displaying multimodal contents for the immersive textbook. Our framework provides an XML-based meta-language for producing multimodal learning contents in the form of an intuitive script, so it can help users without any prior knowledge of multimodal interactions produce their own learning contents. The contents are then interpreted by the script engine and delivered to the user through visual and haptic rendering loops. We also implemented a prototype based on the aforementioned proposals and performed a user evaluation to verify the validity of our framework.
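
The abstract above describes an XML meta-language interpreted by a script engine that feeds visual and haptic rendering loops. The sketch below illustrates that pipeline in miniature, under explicit assumptions: the tag names (scene, model, haptic) and their attributes are invented for illustration and do not reproduce the paper's actual meta-language.

```python
# A minimal sketch of a script engine of the kind the abstract describes: it
# parses a hypothetical XML content script and routes each element to a visual
# or haptic handler. Tag names and attributes are invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_SCRIPT = """
<scene title="Heart anatomy">
  <model src="heart.obj" scale="1.2"/>
  <haptic target="heart.obj" stiffness="0.6" friction="0.3"/>
</scene>
"""

def render_visual(elem: ET.Element) -> None:
    # Placeholder for the visual rendering loop.
    print(f"[visual] load {elem.get('src')} at scale {elem.get('scale')}")

def render_haptic(elem: ET.Element) -> None:
    # Placeholder for the haptic rendering loop.
    print(f"[haptic] attach to {elem.get('target')} "
          f"(stiffness={elem.get('stiffness')}, friction={elem.get('friction')})")

def play(script: str) -> None:
    """Interpret the script and dispatch each element to its modality handler."""
    scene = ET.fromstring(script)
    handlers = {"model": render_visual, "haptic": render_haptic}
    for elem in scene:
        handlers.get(elem.tag, lambda e: None)(elem)

if __name__ == "__main__":
    play(SAMPLE_SCRIPT)
```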

Multimodal Media Content Classification using Keyword Weighting for Recommendation (추천을 위한 키워드 가중치를 이용한 멀티모달 미디어 콘텐츠 분류)

  • Kang, Ji-Soo;Baek, Ji-Won;Chung, Kyungyong
    • Journal of Convergence for Information Technology, v.9 no.5, pp.1-6, 2019
  • As the mobile market expands, a variety of platforms are available to provide multimodal media content. Because multimodal media content contains heterogeneous data, users require much time and effort to select preferred content. Therefore, in this paper we propose multimodal media content classification using keyword weighting for recommendation. The proposed method extracts the keywords that best represent the content by applying keyword weighting to the text data of multimodal media contents. Based on the extracted keywords, genre classes with subclasses are generated and the appropriate multimodal media contents are classified. In addition, a user preference evaluation is performed for personalized recommendation, and multimodal content is recommended based on the analysis of the user's content preferences. The performance evaluation verifies the superiority of the recommendation results in terms of accuracy and satisfaction. The recommendation accuracy is 74.62% and the satisfaction rate is 69.1%, because content is recommended considering the user's favorite keywords as well as genres.
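
The abstract does not give the exact keyword-weighting formula, so the sketch below uses plain TF-IDF as a stand-in to show the overall flow: weight the keywords of a content item, then score hypothetical genre keyword sets against those weights to pick a class.

```python
# A minimal sketch of keyword weighting for content classification. TF-IDF is a
# stand-in for the paper's (unspecified) weighting, and the genre keyword lists
# are invented for illustration.
import math
from collections import Counter

GENRE_KEYWORDS = {            # hypothetical genre -> representative keywords
    "action": {"chase", "fight", "explosion"},
    "romance": {"love", "wedding", "letter"},
}

def keyword_weights(doc_tokens: list[str], corpus: list[list[str]]) -> dict[str, float]:
    """TF-IDF weight per keyword in one document against a small corpus."""
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    weights = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in doc)
        idf = math.log((1 + n_docs) / (1 + df)) + 1
        weights[term] = (count / len(doc_tokens)) * idf
    return weights

def classify(doc_tokens: list[str], corpus: list[list[str]]) -> str:
    """Pick the genre whose keyword set carries the highest total weight."""
    weights = keyword_weights(doc_tokens, corpus)
    scores = {genre: sum(weights.get(k, 0.0) for k in kws)
              for genre, kws in GENRE_KEYWORDS.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    corpus = [["love", "letter", "city"], ["chase", "explosion", "city"]]
    print(classify(["love", "wedding", "city"], corpus))  # -> "romance"
```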

Adaptive Multimodal In-Vehicle Information System for Safe Driving

  • Park, Hye Sun;Kim, Kyong-Ho
    • ETRI Journal, v.37 no.3, pp.626-636, 2015
  • This paper proposes an adaptive multimodal in-vehicle information system for safe driving. The proposed system filters input information based on both the priority assigned to the information and the given driving situation, to effectively manage input information and intelligently provide information to the driver. It then interacts with the driver using an adaptive multimodal interface by considering both the driving workload and the driver's cognitive reaction to the information it provides. It is shown experimentally that the proposed system can promote driver safety and enhance a driver's understanding of the information it provides by filtering the input information. In addition, the system can reduce a driver's workload by selecting an appropriate modality and corresponding level with which to communicate. An analysis of subjective questionnaires regarding the proposed system reveals that more than 85% of the respondents are satisfied with it. The proposed system is expected to provide prioritized information through an easily understood modality.
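
The two decisions described above, filtering input information by priority against the driving situation and choosing a modality from the driver's workload, can be illustrated with a small sketch. The priority scale, workload threshold, and modality choices below are hypothetical values, not those reported in the paper.

```python
# A minimal sketch of (1) priority-based filtering of incoming information and
# (2) workload-based modality selection. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class Message:
    text: str
    priority: int          # 1 = safety-critical ... 5 = ambient information

@dataclass
class DrivingContext:
    workload: float        # 0.0 (idle) ... 1.0 (heavily loaded)

def should_deliver(msg: Message, ctx: DrivingContext) -> bool:
    """Suppress low-priority information when the driving workload is high."""
    max_priority_allowed = 2 if ctx.workload > 0.7 else 4
    return msg.priority <= max_priority_allowed

def choose_modality(ctx: DrivingContext) -> str:
    """Prefer speech when the visual channel is busy, otherwise use the display."""
    return "speech" if ctx.workload > 0.5 else "display"

if __name__ == "__main__":
    ctx = DrivingContext(workload=0.8)
    msg = Message("Collision risk ahead", priority=1)
    if should_deliver(msg, ctx):
        print(f"deliver via {choose_modality(ctx)}: {msg.text}")
```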

A Study on the Liability System of Multimodal Transport Operator in the UN Convention on Multimodal Transport of Goods, 1980 and Multimodal Transport Document. (UN국제물건복합운송조직과 복합운송인의 책임에 관한 연구)

  • 박상갑
    • Journal of the Korean Institute of Navigation, v.19 no.4, pp.41-61, 1995
  • International trade is basically founded on the contract of international sale of goods and backed up by the contract of international carriage of goods and the contract of insurance on the goods carried. For the efficient development of international trade, it is essential to integrate these three fields closely. Economic growth has expanded international trade, which has in turn accelerated the development of the international carriage of goods. The rapid expansion of the international carriage of goods required a rationalization of transport, which brought about the International Multimodal Transport System (hereinafter referred to as 'IMT') through containerization. The international multimodal transport system has greatly affected international trade, especially the field of insurance. The aim of this paper is to analyze the contents of the Multimodal Transport Operator's (MTO's) liability system in the UN Convention on International Multimodal Transport of Goods, 1980 and in the FIATA Bill of Lading (FBL) as one of the current multimodal transport documents. The analysis of the MTO's liability system will serve as a good introduction to further study of the insurance problems involved in the development of IMT.


The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association, v.18 no.8, pp.92-101, 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered through both the auditory mode and a visual AI character on screen, is more effective for optimizing the user experience than the auditory mode alone. Participants performed interaction tasks for music selection and adjustment through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention. Rather, the auditory single mode was found to be more effective than the multimodal mode on the information quality factor. In the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction is not effective in optimizing the user experience compared to single-mode interaction.

Multimodal based Storytelling Experience Using Virtual Reality in Museum (가상현실을 이용한 박물관 내 멀티모달 스토리텔링 경험 연구)

  • Lee, Ji-Hye
    • The Journal of the Korea Contents Association, v.18 no.10, pp.11-19, 2018
  • This paper is about the multimodal storytelling experience of applying virtual reality technology in museums. Specifically, this research argues that virtual reality supports both an intuitive understanding of history and a multimodal experience of the space. It investigates cases of virtual reality use in the museum sector. As a research method, this paper conducts a literature review on multimodal experience together with examples of applying virtual-reality-related technologies in museums, and examines the necessary concepts with their related cases. Based on this investigation, the paper suggests the constituent elements of VR-based multimodal storytelling. Ultimately, it proposes the elements for building VR storytelling in which dynamic audio-visual and interaction modes are combined with historical resources for diverse audiences.

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents, v.18 no.3, pp.11-20, 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is impacted by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initialized a trend in computer vision of exploring the critical role of contexts by considering them as modalities from which to infer the predicted emotion along with facial expressions. However, contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in the multimodal training fashion is not practical, because the contributions of each modality to the final prediction are not equal. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotions, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network to combine multimodal features based on their impacts on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
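
The attention-based fusion idea can be illustrated with a short PyTorch sketch: per-modality features (e.g., face, body, scene context) receive learned attention scores and are combined by a weighted sum before classification. The feature dimension, number of modalities, and number of classes below are hypothetical, and this is not the paper's exact network.

```python
# A minimal sketch of attention-weighted multimodal fusion for emotion
# classification. Dimensions and class count are assumptions for illustration.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, feat_dim: int = 256, num_classes: int = 26):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)           # one scalar score per modality
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, feat_dim), e.g. face / body / scene features
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, M, 1)
        fused = (weights * feats).sum(dim=1)                 # weighted sum over modalities
        return self.classifier(fused)                        # (batch, num_classes) logits

if __name__ == "__main__":
    model = AttentionFusion()
    face, body, scene = (torch.randn(8, 256) for _ in range(3))
    logits = model(torch.stack([face, body, scene], dim=1))
    print(logits.shape)  # torch.Size([8, 26])
```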

HomeN manager system based on multimodal context-aware middleware (멀티모달 상황인지 미들웨어 기반의 홈앤(HomeN) 매니저 시스템)

  • Ahn, Se-Yeol;Park, Sung-Chan;Park, Seong-Soo;Koo, Myung-Wan;Jeong, Yeong-Joon;Kim, Myung-Sook
    • Proceedings of the KSPS conference, 2006.11a, pp.120-123, 2006
  • The provision of personalized user interfaces for mobile devices is expected to serve different devices with a wide variety of capabilities and interaction modalities. In this paper, we implemented a multimodal context-aware middleware incorporating XML-based languages such as XHTML, VoiceXML, and SCXML. SCXML uses parallel states to invoke both XHTML and VoiceXML contents as well as to gather composite multimodal inputs and synchronize modalities through man-machine I/O. We developed a home networking service named "HomeN" based on our middleware framework. It demonstrates that users could maintain multimodal scenarios in a clear, concise, and consistent manner under various user interactions.

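One task this middleware performs, gathering a composite multimodal input from separate modality events, can be sketched as follows. The event fields and the two-second fusion window are assumptions for illustration, not details from the paper.

```python
# A minimal sketch of composite multimodal input gathering: a speech event
# ("turn on") and a GUI event (tapping a device icon) arriving within a short
# window are merged into one command. Fields and window size are invented.
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str     # "speech" or "gui"
    payload: str      # recognized utterance or selected device id
    timestamp: float  # seconds

def fuse(events: list[ModalityEvent], window: float = 2.0) -> str | None:
    """Return a composite command if a speech and a GUI event co-occur in time."""
    speech = [e for e in events if e.modality == "speech"]
    gui = [e for e in events if e.modality == "gui"]
    for s in speech:
        for g in gui:
            if abs(s.timestamp - g.timestamp) <= window:
                return f"{s.payload} -> {g.payload}"
    return None

if __name__ == "__main__":
    events = [
        ModalityEvent("speech", "turn on", 10.1),
        ModalityEvent("gui", "living_room_lamp", 11.0),
    ]
    print(fuse(events))  # "turn on -> living_room_lamp"
```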

A Pattern of Multimodal Transport Liability and its Adaptation on Practice (복합운송인(複合運送人)의 책임(責任) 한계(限界)에 대한 형태별(形態別) 분류(分類)와 실무상(實務上) 적용(適用))

  • Kim, Joong-Kwan
    • THE INTERNATIONAL COMMERCE &amp; LAW REVIEW, v.13, pp.257-281, 2000
  • The world economy is becoming increasingly globalized. Globalization has resulted in far-reaching agreements to deepen trade liberalization and enlarge its scope to cover new areas, in addition to strengthening its supporting institutional base. Economic growth has expanded international trade, which has accelerated the development of the international carriage of goods in the 21st century. International trade is basically founded on the contract of international sale of goods and backed up by the contract of international carriage of goods and the insurance on the goods carried. It is essential to integrate these sections with one another for the efficient development of international trade. The rapid expansion of the international carriage of goods required a rationalization of transport, which brought about the International Multimodal Transport System through containerization. Examining the liability system is the right way to address the insurance problems involved in the development and enlargement of world trade volume. The international multimodal transport system has greatly affected international trade, especially the field of insurance. This paper analyzes the contents of the liability system for multimodal transport within the UN Convention on International Multimodal Transport of Goods.
