• Title/Summary/Keyword: Multimodal environment


Estimating Suitable Probability Distribution Function for Multimodal Traffic Distribution Function

  • Yoo, Sang-Lok; Jeong, Jae-Yong; Yim, Jeong-Bin
    • Journal of the Korean Society of Marine Environment & Safety / v.21 no.3 / pp.253-258 / 2015
  • The purpose of this study is to find a suitable probability distribution function for complex, multimodal distribution data. The normal distribution is broadly used as the assumed probability distribution function; however, multimodal data are very hard to estimate with the normal distribution alone, and errors can arise when other unimodal distribution functions, including the normal distribution, are used. In this study, we experimented to find a well-fitting probability distribution function for a multimodal area, using AIS (Automatic Identification System) observation data gathered in Mokpo port over the year 2013. By the chi-squared statistic, the Gaussian mixture model (GMM) is the best-fitting model, outperforming other distribution functions such as the extreme value, generalized extreme value, logistic, and normal distributions. The GMM was thus found to be the appropriate model for the multimodal data of the maritime traffic flow distribution. With it, probability density functions for collision probability and traffic flow distribution can be calculated much more precisely in the future.
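The abstract's model-comparison step can be sketched as follows: fit a single normal distribution and a two-component GMM to bimodal data, then compare chi-squared goodness-of-fit statistics over a common histogram. This is a minimal sketch with synthetic data standing in for the AIS observations; bin counts and component numbers are illustrative assumptions, not the paper's settings.

```python
# Fit a normal distribution and a Gaussian mixture model (GMM) to bimodal
# data, then compare chi-squared goodness-of-fit statistics. Synthetic data
# stand in for the AIS observations used in the paper.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Bimodal sample: e.g. two lanes of ship traffic (inbound/outbound).
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.7, 500)])

counts, edges = np.histogram(x, bins=20)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def chi2_stat(pdf_vals):
    # Expected bin counts from the fitted density vs. observed counts.
    expected = pdf_vals * width * x.size
    mask = expected > 0
    return np.sum((counts[mask] - expected[mask]) ** 2 / expected[mask])

# Candidate 1: a single normal distribution.
mu, sigma = stats.norm.fit(x)
chi2_norm = chi2_stat(stats.norm.pdf(centers, mu, sigma))

# Candidate 2: a two-component GMM.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
pdf_gmm = np.exp(gmm.score_samples(centers.reshape(-1, 1)))
chi2_gmm = chi2_stat(pdf_gmm)

print(chi2_norm, chi2_gmm)  # the GMM yields the much smaller statistic
```

The same comparison extends to the other candidate families (extreme value, logistic, etc.) by swapping in the corresponding `scipy.stats` distributions.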

An Analysis on the Usage Status and Promotion of Multimodal Transport Logistics Terms in Incoterms 2010 (Incoterms, 2010의 복합운송물류조건의 이용실태 분석과 활성화)

  • Song, Gyeeui
    • Journal of Korea Port Economic Association / v.29 no.1 / pp.123-141 / 2013
  • The purpose of this paper is to suggest a plan for promoting the use of the multimodal transport logistics terms in Incoterms 2010. This study deals with three groups of promotion factors: users' subjective factors, trade transport logistics environment factors, and term content factors. According to the analysis, users' subjective factors (3.87 points) score highest among the promotion factors, compared with the trade transport logistics environment factors (3.60 points) and the term content factors (3.74 points). Therefore, it is important first to promote the use of the multimodal transport logistics terms in Incoterms 2010 through the users' subjective factors: (1) understanding the relationship between door-to-door multimodal transport and the terms of Incoterms 2010, (2) promoting the use of the multimodal transport logistics terms of Incoterms 2010 in door-to-door multimodal transport, and (3) restraining the customary use of the FOB, CFR, and CIF terms. Next, use of the multimodal transport logistics terms should be promoted by addressing the trade transport logistics environment factors and the term content factors.

Multimodal Supervised Contrastive Learning for Crop Disease Diagnosis (멀티 모달 지도 대조 학습을 이용한 농작물 병해 진단 예측 방법)

  • Hyunseok Lee; Doyeob Yeo; Gyu-Sung Ham; Kanghan Oh
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.6 / pp.285-292 / 2023
  • With the spread of smart farms and advancements in IoT technology, it is easy to obtain additional data beyond crop images. Consequently, deep-learning-based crop disease diagnosis research utilizing multimodal data has become important. This study proposes a crop disease diagnosis method using multimodal supervised contrastive learning, extending multimodal self-supervised learning. The RandAugment method was used to augment crop images and time series of environmental data. The augmented data pass through an encoder and projection head for each modality, yielding low-dimensional features. The proposed multimodal supervised contrastive loss then pulls features from the same class closer together while pushing apart those from different classes. The pretrained model is subsequently fine-tuned for crop disease diagnosis. Visualization of t-SNE results and comparative assessments of crop disease diagnosis performance substantiate that the proposed method outperforms multimodal self-supervised learning.
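The supervised contrastive objective the abstract describes (same-class features pulled together, different-class features pushed apart) can be sketched in NumPy. This follows the standard SupCon form; the paper's encoders, projection heads, and multimodal pairing are omitted, and unit-normalized embeddings are assumed as input.

```python
# Supervised contrastive loss: for each anchor, average the log-probability
# of its same-class positives under a softmax over all other samples.
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """z: (N, D) L2-normalized embeddings; labels: (N,) class ids."""
    z = np.asarray(z, dtype=float)
    labels = np.asarray(labels)
    sim = z @ z.T / tau                      # temperature-scaled similarities
    n = z.shape[0]
    eye = np.eye(n, dtype=bool)
    sim_masked = np.where(eye, -np.inf, sim)  # exclude self from denominator
    log_prob = sim - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye   # same-class pairs
    loss_i = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return loss_i.mean()

def norm(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Tight same-class clusters give a lower loss than mixed-up embeddings.
good = norm([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9]])
bad  = norm([[1, 0], [0, 1], [1, 0], [0, 1]])
labels = np.array([0, 0, 1, 1])
assert supervised_contrastive_loss(good, labels) < supervised_contrastive_loss(bad, labels)
```

Minimizing this loss is what drives the class-separated structure visible in the paper's t-SNE plots.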

AI Multimodal Sensor-based Pedestrian Image Recognition Algorithm (AI 멀티모달 센서 기반 보행자 영상인식 알고리즘)

  • Seong-Yoon Shin; Seung-Pyo Cho; Gwanghung Jo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.407-408 / 2023
  • In this paper, we intend to develop a multimodal algorithm that secures recognition performance of over 95% in daytime illumination environments and over 90% in bad weather (rain and snow) and nighttime illumination environments.


An analysis of Europe Multimodal Transport System and Development of Model in Northeast Multimodal Transport (유럽 복합운송체계 분석을 통한 동북아 복합운송모델 개발)

  • 배민주; 김환성
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2004.04a / pp.421-426 / 2004
  • The growth of multinational corporations has brought international multimodal transport into the logistics environment. Europe, which has excellent infrastructure, has constantly tried to develop a second Silk Road. This paper emphasizes the importance of international multimodal transport and proposes a model for Northeast Asian multimodal transport. For this research, we analyzed the multimodal transport system in Europe and the northern corridor of the TAR (Trans-Asian Railway). We expect an economic effect from a route that includes the Republic of Korea, and we developed a model connecting sea, air, and road. Admittedly, this research does not provide enough numerical data to prove its effectiveness, but we developed and proposed a specific multimodal transport route that had never been suggested before. Consequently, we established basic grounds for comparing transport routes in future research.


Using Spatial Ontology in the Semantic Integration of Multimodal Object Manipulation in Virtual Reality

  • Irawati, Sylvia; Calderon, Daniela; Ko, Hee-Dong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.884-892 / 2006
  • This paper describes a framework for multimodal object manipulation in virtual environments. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context, integrating the interpretation results from the inputs into a single one. The spatial ontology, which describes the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. These commands are used to reposition objects in the virtual environment. We discuss how the spatial ontology is defined and used to assist the user in performing object placements in the virtual environment as they would be done in the real world.
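The disambiguation idea can be illustrated with a toy sketch: a spatial ontology constrains which objects can serve as the reference of a spatial relation, and user context (here, the last-mentioned object) breaks remaining ties. All names, relations, and the tie-breaking rule are invented for illustration; the paper's actual ontology and integration framework are far richer.

```python
# Toy semantic integration: ontology filters impossible references for a
# spatial relation, then user context resolves what the gesture left ambiguous.
from dataclasses import dataclass

# Spatial ontology: for each relation, which object types may act as reference.
ONTOLOGY = {
    "on": {"table", "shelf"},   # only flat supporting surfaces
    "in": {"box", "drawer"},    # only containers
}

@dataclass
class Context:
    last_object: str = ""       # most recently manipulated object

def resolve_reference(relation, candidates, types, ctx):
    """Pick the reference object for e.g. 'put the cup *on* <gesture>'.

    candidates: objects near the pointing gesture; types: object -> type.
    """
    valid = [c for c in candidates if types[c] in ONTOLOGY[relation]]
    if len(valid) > 1 and ctx.last_object in valid:
        return ctx.last_object  # context breaks the tie
    return valid[0] if valid else None

types = {"table1": "table", "box1": "box", "lamp1": "lamp"}
ctx = Context(last_object="table1")
# "Put the cup on <gesture near table1, box1, lamp1>": only table1 supports "on".
assert resolve_reference("on", ["table1", "box1", "lamp1"], types, ctx) == "table1"
# "Put the cup in ...": only box1 is a container.
assert resolve_reference("in", ["table1", "box1", "lamp1"], types, ctx) == "box1"
```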


Enhancement of Authentication Performance based on Multimodal Biometrics for Android Platform (안드로이드 환경의 다중생체인식 기술을 응용한 인증 성능 개선 연구)

  • Choi, Sungpil; Jeong, Kanghun; Moon, Hyeonjoon
    • Journal of Korea Multimedia Society / v.16 no.3 / pp.302-308 / 2013
  • In this research, we explore a personal authentication system using multimodal biometrics for the mobile computing environment. We selected face and speaker recognition for the implementation of the multimodal biometrics system. For the face recognition part, we detect the face with the Modified Census Transform (MCT). The detected face is pre-processed by an eye detection module based on the k-means algorithm and then recognized with the Principal Component Analysis (PCA) algorithm. For the speaker recognition part, we extract features using the end-points of the voice signal and the Mel Frequency Cepstral Coefficients (MFCC), and then verify the speaker with the Dynamic Time Warping (DTW) algorithm. The proposed multimodal biometrics system shows an improved verification rate by combining the two biometrics described above. We implemented the proposed system on the Android environment using a Galaxy S Hoppin. The proposed system presents a reduced false acceptance rate (FAR) of 1.8%, an improvement over the single-biometric systems using the face and the voice alone (4.6% and 6.7%, respectively).
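The DTW step used in the speaker-verification branch can be sketched compactly: align two feature sequences of different lengths and return the cumulative alignment cost (lower means more similar). Toy one-dimensional sequences stand in for real MFCC frames here.

```python
# Dynamic Time Warping: classic O(n*m) dynamic program over a cost matrix,
# allowing insertions, deletions, and matches between frame sequences.
import numpy as np

def dtw_distance(a, b):
    a, b = np.atleast_2d(a), np.atleast_2d(b)   # frames x features
    n, m = a.shape[0], b.shape[0]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame distance
            D[i, j] = cost + min(D[i - 1, j],           # insertion
                                 D[i, j - 1],           # deletion
                                 D[i - 1, j - 1])       # match
    return D[n, m]

# A matching utterance aligns cheaply even with a stretched timeline;
# a mismatched one does not.
same = dtw_distance([[0.0], [1.0], [2.0]], [[0.0], [0.0], [1.0], [2.0]])
diff = dtw_distance([[0.0], [1.0], [2.0]], [[5.0], [5.0], [5.0]])
assert same < diff
```

In a verification setting, the DTW distance between the claimant's utterance and the enrolled template is thresholded to accept or reject.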

An Implementation of Multimodal Speaker Verification System using Teeth Image and Voice on Mobile Environment (이동환경에서 치열영상과 음성을 이용한 멀티모달 화자인증 시스템 구현)

  • Kim, Dong-Ju; Ha, Kil-Ram; Hong, Kwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.162-172 / 2008
  • In this paper, we propose a multimodal speaker verification method using teeth images and voice as biometric traits for personal verification on mobile terminal equipment. The proposed method obtains the biometric traits using the image and sound input devices of a smart-phone, one such mobile terminal, and performs verification with them. In addition, the method combines the two biometric authentication scores in a multimodal fashion for overall performance enhancement; the fusion uses a weighted-summation method, which has a comparatively simple structure and superior performance given the limited resources of the system. The performance evaluation of the proposed multimodal speaker authentication system was conducted using a database acquired on a smart-phone from 40 subjects. The experimental results show an EER of 8.59% for teeth verification and 11.73% for voice verification, while the multimodal speaker authentication achieves an EER of 4.05%. Thus, the simple weighted-summation fusion yields better performance than using either teeth or voice alone.
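The weighted-summation fusion described above is a one-line combination rule; a minimal sketch follows. The weights and scores here are illustrative assumptions (in practice the weights would be tuned on a validation set), not the paper's values.

```python
# Weighted-summation score fusion: each modality's score is assumed
# min-max normalized to [0, 1] before combination.
def fuse(teeth_score, voice_score, w_teeth=0.6, w_voice=0.4):
    assert abs(w_teeth + w_voice - 1.0) < 1e-9  # weights form a convex sum
    return w_teeth * teeth_score + w_voice * voice_score

# A genuine user who is strong in the more heavily weighted modality can
# still clear a single decision threshold on the fused score.
fused = fuse(teeth_score=0.9, voice_score=0.55)
assert abs(fused - 0.76) < 1e-9   # 0.6*0.9 + 0.4*0.55
```

Weighting the modality with the lower single-modality EER more heavily is what lets the fused score beat both individual scores.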

Multimodal Interface Control Module for Immersive Virtual Education (몰입형 가상교육을 위한 멀티모달 인터페이스 제어모듈)

  • Lee, Jaehyub; Im, SungMin
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.40-44 / 2013
  • This paper suggests a multimodal interface control module that allows a student to interact naturally with educational contents in a virtual environment. The suggested module recognizes a user's motion when he or she interacts with the virtual environment and then conveys the motion to the virtual environment via wireless communication. Furthermore, a haptic actuator is incorporated into the proposed module in order to generate haptic information. With the proposed module, a user can haptically sense a virtual object as if it existed in the real world.


Designing a Framework of Multimodal Contents Creation and Playback System for Immersive Textbook (실감형 교과서를 위한 멀티모달 콘텐츠 저작 및 재생 프레임워크 설계)

  • Kim, Seok-Yeol; Park, Jin-Ah
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.1-10 / 2010
  • For virtual education, a multimodal learning environment with haptic feedback, termed an 'immersive textbook', is necessary to enhance learning effectiveness. However, learning contents for immersive textbooks are not widely available due to constraints in the creation and playback environments. To address this problem, we propose a framework for producing and displaying multimodal contents for immersive textbooks. Our framework provides an XML-based meta-language for producing multimodal learning contents in the form of intuitive scripts; it can thus help users without any prior knowledge of multimodal interactions produce their own learning contents. The contents are then interpreted by a script engine and delivered to the user through visual and haptic rendering loops. We also implemented a prototype based on these proposals and performed a user evaluation to verify the validity of our framework.
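The authoring idea can be sketched as follows: an XML script describes a scene's visual and haptic properties, and a small "script engine" turns it into parameters for the rendering loops. The tag and attribute names below are invented for illustration; the paper defines its own meta-language.

```python
# Minimal script-engine sketch: parse an XML lesson script into per-object
# parameters that visual and haptic rendering loops could consume.
import xml.etree.ElementTree as ET

SCRIPT = """
<lesson title="The Heart">
  <object id="heart" model="heart.obj">
    <haptic stiffness="0.4" friction="0.2"/>
  </object>
</lesson>
"""

def load_lesson(xml_text):
    root = ET.fromstring(xml_text)
    objects = {}
    for obj in root.findall("object"):
        haptic = obj.find("haptic")
        objects[obj.get("id")] = {
            "model": obj.get("model"),              # for the visual loop
            "stiffness": float(haptic.get("stiffness")),  # for the haptic loop
            "friction": float(haptic.get("friction")),
        }
    return root.get("title"), objects

title, objects = load_lesson(SCRIPT)
assert title == "The Heart"
assert objects["heart"]["stiffness"] == 0.4
```

Because the script is declarative, an author only edits tags and attributes; the engine, not the author, deals with the multimodal rendering details.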