• Title/Summary/Keyword: multimodal


A Study on the Liability Principle of the Multimodal Transporter (Focusing on the UN Convention on International Multimodal Transport of Goods and the UNCTAD/ICC Uniform Rules)

  • Song, Chae-Hun
    • THE INTERNATIONAL COMMERCE & LAW REVIEW / v.13 / pp.303-328 / 2000
  • International trade has increased the demand for international transport, and the development of transport vehicles has in turn promoted the volume of international trade. The growth of international transport not only gives rise to claims concerning carriage but has also led to various international rules for settling claims between shippers and carriers. To settle such claims successfully, the parties involved in transport must understand the principle and scope of the carrier's liability. This paper examines the liability principle from the shipper's perspective and classifies the liability principles of the international multimodal transporter under the UN Convention on International Multimodal Transport of Goods (1980) and the UNCTAD/ICC Rules (1991) into three systems: the network liability system, the uniform liability system, and the modified uniform liability system.


Multimodal Curvature Discrimination of 3D Objects

  • Kim, Kwang-Taek;Lee, Hyuk-Soo
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.212-216 / 2013
  • As virtual reality technologies advance rapidly, how to render 3D objects across modalities is becoming an important issue. This study therefore investigates human discriminability of the curvature of 3D polygonal surfaces, focusing on vision and touch because they are the dominant senses for exploring 3D shapes. We designed a psychophysical experiment based on signal detection theory to measure curvature discrimination under three conditions: haptic only, visual only, and combined haptic and visual. The results show no statistically significant difference among the conditions, although the threshold in the haptic condition is the lowest. They also indicate that rendering through both the visual and haptic channels can degrade discrimination of a 3D global shape. These findings should be considered when designing multimodal rendering systems.
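As a rough illustration of the signal-detection-theory analysis mentioned in the abstract above, the sketch below computes a sensitivity index (d') per condition from hit and false-alarm rates. The rates and thresholds here are hypothetical placeholders, not data from the paper.

```python
# Illustrative sketch: a per-condition sensitivity index (d') computed from
# hit and false-alarm rates, as is typical in signal-detection-theory analyses.
# The rates below are made-up placeholders, not results from the paper.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

conditions = {                 # hypothetical rates for the three conditions
    "haptic only":   (0.82, 0.25),
    "visual only":   (0.78, 0.30),
    "haptic+visual": (0.75, 0.32),
}
for name, (hit, fa) in conditions.items():
    print(f"{name:>14}: d' = {d_prime(hit, fa):.2f}")
```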

A Study on Niching Genetic Algorithm for Multimodal Function Optimization

  • Lee, Chul-Gyun;Cho, Dong-Hyeok;Jung, Hyun-Kyo
    • Proceedings of the KIEE Conference / 1998.07a / pp.76-78 / 1998
  • Niching methods extend genetic algorithms to domains that require locating multiple solutions. However, current niching methods have drawbacks in their search ability and in preserving the solutions they find. This paper therefore presents a new technique named Restricted Competition Selection (RCS). The RCS method is compared with sharing and deterministic crowding on several multimodal problems to verify that it has more favorable properties.
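The abstract does not spell out the RCS procedure, but a minimal sketch of a restricted-competition step for niching might look like the following, assuming Euclidean distance and a fixed niche radius; the objective function, radius, and suppression rule are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal sketch of a restricted-competition step for niching: individuals
# closer than a niche radius compete, and the loser's fitness is suppressed
# so that only one individual survives per niche. Illustration only.
import numpy as np

def restricted_competition(pop: np.ndarray, fitness: np.ndarray,
                           niche_radius: float) -> np.ndarray:
    """Return fitness values with the loser of each close pair suppressed."""
    adjusted = fitness.astype(float).copy()
    n = len(pop)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pop[i] - pop[j]) < niche_radius:
                # same niche: the weaker individual is pushed out of selection
                loser = i if adjusted[i] < adjusted[j] else j
                adjusted[loser] = -np.inf
    return adjusted

def f(x):
    """Toy multimodal objective with several equal peaks on [0, 1]."""
    return np.sin(5 * np.pi * x) ** 2

pop = np.random.rand(30, 1)
fit = f(pop[:, 0])
print(restricted_competition(pop, fit, niche_radius=0.1))
```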


Design and Implementation of a Multimodal Input Device Using a Web Camera

  • Na, Jong-Whoa;Choi, Won-Suk;Lee, Dong-Woo
    • ETRI Journal / v.30 no.4 / pp.621-623 / 2008
  • We propose a novel input pointing device called the multimodal mouse (MM), which uses two modalities: face recognition and speech recognition. An analysis of Microsoft Office workloads shows that 80% of Microsoft Office Specialist test tasks are compound tasks that use both the keyboard and the mouse. With the optical mouse (OM), operation is quick, but each switch between the keyboard and the mouse incurs a hand-exchange delay, which takes up a significant share of the total execution time. The MM operates more slowly than the OM, but it incurs no hand-exchange time. As a result, the MM outperforms the OM in many cases.
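A back-of-the-envelope sketch of the trade-off described above: the optical mouse points faster but pays a hand-exchange delay on every keyboard-to-mouse switch, while the multimodal mouse points more slowly but switches for free. All timing constants below are invented placeholders, not measurements from the paper.

```python
# Toy model of total pointing cost for a compound keyboard+mouse task.
# point_time: seconds per pointing operation; switch_time: hand-exchange delay.
def pointing_cost(n_switches: int, n_point_ops: int,
                  point_time: float, switch_time: float) -> float:
    return n_point_ops * point_time + n_switches * switch_time

switches, point_ops = 20, 20
om = pointing_cost(switches, point_ops, point_time=1.0, switch_time=0.5)  # optical mouse
mm = pointing_cost(switches, point_ops, point_time=1.3, switch_time=0.0)  # multimodal mouse
print(f"optical mouse: {om:.1f}s, multimodal mouse: {mm:.1f}s")
```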


Multimodal Interface Based on Novel HMI UI/UX for In-Vehicle Infotainment System

  • Kim, Jinwoo;Ryu, Jae Hong;Han, Tae Man
    • ETRI Journal / v.37 no.4 / pp.793-803 / 2015
  • We propose a novel HMI UI/UX for an in-vehicle infotainment system. The proposed HMI UI comprises multimodal interfaces that allow a driver to manipulate an infotainment system safely and intuitively while driving. Our analysis of a touchscreen-based HMI UI/UX reveals that using such an interface while driving can seriously distract the driver. The proposed HMI UI/UX is a novel manipulation mechanism for a vehicle infotainment service, consisting of several interfaces that incorporate a variety of modalities, such as speech recognition, a manipulating device, and hand gesture recognition. In addition, we provide an HMI UI framework designed to be operated with a simple method based on four directions and one selection motion. Extensive quantitative and qualitative in-vehicle experiments demonstrate that the proposed HMI UI/UX is an efficient mechanism for manipulating an infotainment system while driving.
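To illustrate the idea of funnelling several modalities into a small command set (four directions plus one selection), here is a hypothetical sketch; the recognizer labels and mapping tables are assumptions for illustration, not the paper's actual framework or event names.

```python
# Sketch: map per-modality recognizer outputs onto one small command set
# (four directions + select), so every modality drives the same UI framework.
from enum import Enum, auto

class Command(Enum):
    UP = auto(); DOWN = auto(); LEFT = auto(); RIGHT = auto(); SELECT = auto()

# hypothetical mappings from recognizer labels to the shared command set
SPEECH = {"up": Command.UP, "down": Command.DOWN, "left": Command.LEFT,
          "right": Command.RIGHT, "ok": Command.SELECT}
GESTURE = {"swipe_up": Command.UP, "swipe_down": Command.DOWN,
           "swipe_left": Command.LEFT, "swipe_right": Command.RIGHT,
           "tap": Command.SELECT}

def to_command(modality: str, label: str) -> Command | None:
    table = SPEECH if modality == "speech" else GESTURE
    return table.get(label)

print(to_command("speech", "ok"), to_command("gesture", "swipe_left"))
```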

Walking Assistance System for Sight Impaired People Based on a Multimodal Information Transformation Technique

  • Yu, Jae-Hyoung;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.465-472 / 2009
  • This paper proposes a multimodal information transformation system that converts image information into voice information so that sight-impaired people are informed of the walking area and obstacles, which are extracted from an image acquired by a single CCD camera. Using a chain-code line detection algorithm, the walking area is found from the vanishing point and the boundary of the sidewalk in the edge image, and obstacles within the walking area are detected with a Gabor filter that extracts vertical lines. The proposed system delivers the voice information as pre-defined sentences composed of template words describing the walking area and obstacles, serving useful voice guidance to sight-impaired users on the way to their destination. Experiments in indoor and outdoor environments verify that the proposed algorithm accurately provides walking-guidance sentences.
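A minimal sketch of the vertical-line emphasis step described above, using an OpenCV Gabor kernel tuned to vertical orientation on a synthetic image; the kernel parameters, the synthetic scene, and the assumed walking-area region are illustrative assumptions rather than the paper's settings.

```python
# Sketch: emphasize vertical structure (a stand-in for obstacles such as poles)
# with a Gabor kernel, then inspect the response inside an assumed walking area.
import numpy as np
import cv2  # opencv-python

# synthetic grayscale scene: a bright vertical bar standing in for an obstacle
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (95, 40), (105, 160), 255, -1)

# a Gabor kernel with theta=0 responds to vertically oriented structure
kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                            lambd=10.0, gamma=0.5, psi=0.0)
response = cv2.filter2D(img, cv2.CV_32F, kernel)

# crude check: strongest vertical response inside the assumed walking area
walking_area = response[80:200, 50:150]
print("max vertical-line response in assumed walking area:",
      float(walking_area.max()))
```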

Multimodal Biometric Using a Hierarchical Fusion of a Person's Face, Voice, and Online Signature

  • Elmir, Youssef;Elberrichi, Zakaria;Adjoudj, Reda
    • Journal of Information Processing Systems / v.10 no.4 / pp.555-567 / 2014
  • Improving biometric performance is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. The strategy combines several biometric traits using a multi-level fusion hierarchy that includes pre-classification fusion with optimal feature selection and post-classification fusion based on the maximum of the matching scores. The proposed solution enhances biometric recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all components of the feature vectors contribute to the performance improvement.
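A small sketch of the two fusion levels mentioned above on synthetic data: PCA and LDA for feature reduction before classification, then a max rule over per-modality matching scores. The classifiers, dimensions, and data are assumptions standing in for the paper's face, voice, and signature pipelines.

```python
# Sketch: per-modality feature reduction (PCA then LDA), per-modality matching
# scores from a classifier, and a max-rule score fusion across modalities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_per = 5, 20

def synthetic_modality(dim):
    """Toy biometric features: each subject's samples cluster around its mean."""
    X = np.vstack([rng.normal(loc=i, size=(n_per, dim)) for i in range(n_subjects)])
    y = np.repeat(np.arange(n_subjects), n_per)
    return X, y

scores = []
for dim in (60, 40, 30):                      # stand-ins for face, voice, signature
    X, y = synthetic_modality(dim)
    X = PCA(n_components=10).fit_transform(X)              # pre-classification reduction
    X = LinearDiscriminantAnalysis(n_components=4).fit_transform(X, y)
    clf = SVC(probability=True).fit(X, y)
    scores.append(clf.predict_proba(X[:1]))                # matching scores for one probe

fused = np.maximum.reduce(scores)             # post-classification max-rule fusion
print("fused decision: subject", int(fused.argmax()))
```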

A Survey of Multimodal Systems and Techniques for Motor Learning

  • Tadayon, Ramin;McDaniel, Troy;Panchanathan, Sethuraman
    • Journal of Information Processing Systems / v.13 no.1 / pp.8-25 / 2017
  • This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review findings from recent studies in this field, using rehabilitation and various motor-training scenarios as context, and discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of their requirements, benefits, and limitations. Modality selection is addressed by reviewing best-practice approaches for each modality relative to motor-task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and of feedback frequency during motor tasks. Finally, we review the limitations of perceptual bandwidth and evaluate the information transfer of each modality.

Optimization of Multimodal Function Using an Enhanced Genetic Algorithm and Simplex Method

  • Kim, Young-Chan;Yang, Bo-Suk
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2000.11a / pp.587-592 / 2000
  • This paper proposes an optimization method for multimodal functions based on an enhanced genetic algorithm. The method consists of two main steps. The first is a global search step using the genetic algorithm (GA) and a function assurance criterion (FAC); whether an individual belongs to the initial solution group is decided according to the FAC. The second step determines the similarity between individuals and searches for the optimum solutions with the simplex method in the reconstructed search space. Two numerical examples are presented and compared with conventional methods.
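A compact sketch of the two-step hybrid described above: a coarse global sampling step (standing in here for the paper's enhanced GA with the function assurance criterion) followed by simplex (Nelder-Mead) refinement of the most promising candidates. The objective function and all parameters are illustrative assumptions.

```python
# Sketch of a global-then-local hybrid: random global sampling stands in for
# the GA step, and Nelder-Mead (simplex) refines the best candidates found.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy multimodal objective to minimise."""
    return np.sin(5 * x[0]) ** 2 + 0.1 * x[0] ** 2

rng = np.random.default_rng(1)
pop = rng.uniform(-3, 3, size=(200, 1))            # step 1: global sampling
best = pop[np.argsort([f(x) for x in pop])[:5]]    # keep a few promising seeds

# step 2: simplex refinement from each seed, then report the best optimum found
results = [minimize(f, x0, method="Nelder-Mead") for x0 in best]
top = min(results, key=lambda r: r.fun)
print(f"x* = {top.x[0]:.4f}, f(x*) = {top.fun:.6f}")
```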


Trend of Technology for Outdoor Security Robots Based on Multimodal Sensors

  • Chang, J.H.;Na, K.I.;Shin, H.C.
    • Electronics and Telecommunications Trends / v.37 no.1 / pp.1-9 / 2022
  • With the development of artificial intelligence, many studies have focused on evaluating abnormal situations by using various sensors, as industries try to automate some of the surveillance and security tasks traditionally performed by humans. In particular, mobile robots using multimodal sensors are being used for pilot operations aimed at helping security robots cope with various outdoor situations. Multiagent systems, which combine fixed and mobile systems, can provide more efficient coverage (than that provided by other systems), but network bottlenecks resulting from increased data processing and communication are encountered. In this report, we will examine recent trends in object recognition and abnormal-situation determination in various changing outdoor security robot environments, and describe an outdoor security robot platform that operates as a multiagent equipped with a multimodal sensor.