• Title/Summary/Keyword: Multimodal Network

74 search results

A Methodology of Multimodal Public Transportation Network Building and Path Searching Using Transportation Card Data (교통카드 기반자료를 활용한 복합대중교통망 구축 및 경로탐색 방안 연구)

  • Cheon, Seung-Hoon;Shin, Seong-Il;Lee, Young-Ihn;Lee, Chang-Ju
    • Journal of Korean Society of Transportation / v.26 no.3 / pp.233-243 / 2008
  • Recognition of the importance and roles of public transportation is increasing because of traffic problems in many cities. In spite of this paradigm change, previous research on public transportation trip assignment has limits in some respects. In the case of multimodal public transportation networks especially, many characteristics should be considered, such as transfers, operational time schedules, waiting time, and travel cost. After the metropolitan integrated transfer discount system was introduced, transfer trips among modes increased, changing users' route choices. Moreover, with the advent of the high-technology public transportation card known as the smart card, users' travel information can be recorded automatically, giving researchers a new analytical methodology for multimodal public transportation networks. This paper suggests a methodology for building new multimodal public transportation networks from transportation card data using computer programming methods. First, we propose a method for building integrated transportation networks based on bus and urban railroad stations in order to make full use of the travel information in transportation card data. Second, we show how to connect broken transfer links with computer-based programming techniques, which helps solve the transfer problems of existing transportation networks. Lastly, we give a methodology for users' path finding and network establishment across modes in multimodal public transportation networks. With the proposed methodology, it becomes easy to build multimodal public transportation networks from existing bus and urban railroad station coordinates, and large-scale multimodal networks can be built without extra work such as transfer link connection. In the end, this study can contribute to solving the inter-modal path-finding problem, which has remained unsolved in existing transportation networks.
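The network-building and path-searching steps described above can be sketched in outline: station coordinates become graph nodes, transfer links are generated programmatically between nearby stops of different modes, and a standard shortest-path search runs on the integrated network. All names, the transfer radius, and the fixed transfer penalty below are illustrative assumptions, not the paper's implementation.

```python
import heapq
import math

def build_multimodal_graph(stops, lines, transfer_radius_m=100.0):
    """Build an adjacency map from station coordinates.

    stops: {stop_id: (mode, x, y)} -- mode is 'bus' or 'rail'
    lines: [(from_id, to_id, travel_min)] -- in-vehicle links
    Transfer links connect stops of different modes within the radius.
    """
    graph = {s: [] for s in stops}
    for u, v, t in lines:
        graph[u].append((v, t))
        graph[v].append((u, t))
    ids = list(stops)
    for i, u in enumerate(ids):               # connect "broken" transfer links
        for v in ids[i + 1:]:
            (mu, xu, yu), (mv, xv, yv) = stops[u], stops[v]
            if mu != mv and math.hypot(xu - xv, yu - yv) <= transfer_radius_m:
                walk = 3.0                    # assumed fixed transfer penalty (min)
                graph[u].append((v, walk))
                graph[v].append((u, walk))
    return graph

def shortest_path(graph, src, dst):
    """Plain Dijkstra over the integrated network; returns (cost, path)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]
```

On a toy network with one bus line and one rail line whose stops lie within the transfer radius, the search returns a route that crosses modes through the generated transfer link.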

Study on Evaluation Criteria for Multimodal Transport Routing Selection (복합운송경로 선정을 위한 평가기준에 관한 연구)

  • Kim So-Yeon;Choi Hyung-Rim;Kim Hyun-Soo;Park Nam-Kyu;Cho Jae-Hyung;Park Yong-Sung;Cho Min-Je
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2006.06b / pp.265-271 / 2006
  • As the world economy globalizes through the worldwide extension of production, sales, and distribution, the international transport system has shifted toward one that puts great importance on speed and value-added services: a multimodal-transport-routing-centered system that systematically connects marine, air, and rail transport. Under these changes, production, sales, and distribution must be delivered on time, and multimodal transport routing that can provide multidimensional logistics services to customers in a global network is needed; however, information linkage for international transport and connection systems between transport modes remain insufficient and cannot be activated. In Korea especially, selection standards for third-party logistics companies and transport companies have been presented, but dedicated logistics companies, which plan and execute transportation, cannot present a systematic evaluation standard for international multimodal transport routing selection. Thus, this research surveys important previous studies on multimodal transport routing selection, derives an evaluation standard for it through interviews with company officials, and presents a theoretical basis for multimodal transport routing selection through systematic analysis using the Analytic Hierarchy Process (AHP).
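As a concrete illustration of the AHP step, the following sketch computes criterion weights from a pairwise comparison matrix via the geometric-mean approximation and checks Saaty's consistency ratio. The judgment values in the example are hypothetical, not taken from the study.

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights via the geometric-mean method.

    M is a reciprocal pairwise-comparison matrix (Saaty's 1-9 scale):
    M[i][j] ~ importance of criterion i relative to criterion j.
    """
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, weights):
    """CR = CI / RI; judgments are usually accepted when CR < 0.1."""
    n = len(M)
    # lambda_max estimated from the mean of (M w)_i / w_i
    lam = sum(
        sum(M[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices (3-5 criteria)
    return ci / ri
```

For `M = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]` the weights come out near (0.64, 0.26, 0.10) with CR ≈ 0.03, i.e. an acceptably consistent judgment set.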

Multimedia Information and Authoring for Personalized Media Networks

  • Choi, Insook;Bargar, Robin
    • Journal of Multimedia Information System / v.4 no.3 / pp.123-144 / 2017
  • Personalized media includes user-targeted and user-generated content (UGC) exchanged through social media and interactive applications. The increased consumption of UGC presents challenges and opportunities to multimedia information systems. We work toward modeling a deep structure for content networks. To gain insights, a hybrid practice with the Media Framework (MF) is presented for network creation of personalized media, which leverages the authoring methodology with user-generated semantics. The system's vertical integration allows users to audition their personalized media networks in the context of a global system network. A navigation scheme with a dynamic GUI shifts the interaction paradigm for content query and sharing. MF adopts a multimodal architecture anticipating emerging use cases and genres. To model the diversification of platforms, information processing is robust across multiple technology configurations. Physical and virtual networks are integrated with distributed services and transactions, IoT, and semantic networks representing media content. MF applies spatiotemporal and semantic signal processing to differentiate action responsiveness from information responsiveness. Extending multimedia information processing into authoring enables generating interactive and impermanent media on computationally enabled devices. The outcome of this integrated approach with the presented methodologies demonstrates a paradigmatic shift in the concept of UGC as a personalized media network, which is dynamic and evolvable.

A Model for Evaluating the Connectivity of Multimodal Transit Networks (복합수단 대중교통 네트워크의 연계성 평가 모형)

  • Park, Jun-Sik;Gang, Seong-Cheol
    • Journal of Korean Society of Transportation / v.28 no.3 / pp.85-98 / 2010
  • As transit networks become more multimodal, the concept of transit network connectivity grows in importance. This study develops a quantitative model for measuring the connectivity of multimodal transit networks. To that end, we select a transit line's length, capacity, and speed as its evaluation measures and define the connecting power of the line as the product of those measures. The degree centrality of a node, a widely used centrality measure in social network analysis, is employed with modifications suited to transit networks. Using the degree centrality of a transit stop and the connecting powers of the transit lines serving it, we develop an index quantifying the stop's level of connectivity. From the connectivity indexes of transit stops, we derive the connectivity index of a transit line as well as of an area of a multimodal transit network. In addition, we present a method to evaluate the connectivity of a transfer center using the connectivity indexes of transit stops and passenger acceptance rate functions. A case study shows that the model takes the characteristics of multimodal transit networks well into account, adequately measures the connectivity of stops, lines, and areas, and can furthermore be used to determine the level of service of transfer centers.
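The core quantities this abstract defines can be written down directly: a line's connecting power as the product of its length, capacity, and speed, and a stop-level index built from the lines serving it. This is a simplified sketch; the paper's modified degree-centrality weighting and passenger acceptance rate functions are not reproduced here.

```python
def connecting_power(length_km, capacity, speed_kmh):
    """Connecting power of a transit line: the product of its
    length, capacity, and speed (the paper's evaluation measures)."""
    return length_km * capacity * speed_kmh

def stop_connectivity(lines_at_stop):
    """Connectivity index of a stop, sketched here as the sum of the
    connecting powers of the lines serving it; the paper additionally
    weights this with a modified degree centrality."""
    return sum(connecting_power(*line) for line in lines_at_stop)

def area_connectivity(stops):
    """Area-level index as the mean of its stops' indexes (a plain
    aggregation stand-in for the paper's derivation)."""
    return sum(stop_connectivity(s) for s in stops) / len(stops)
```

A stop served by a short bus line and a high-capacity rail line is dominated by the rail line's connecting power, matching the intuition that rail contributes most of the stop's connectivity.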

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, this paper proposes a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. Moreover, it effectively fuses these three heterogeneous feature types into a global multimodal context map using a point-wise convolutional neural network module. Lastly, it adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides efficient learning of the navigation policy. Through various quantitative and qualitative experiments in the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
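The point-wise (1×1) convolutional fusion mentioned above reduces, at inference time, to one linear map applied independently at every map cell across the concatenated modality channels. The sketch below shows that operation in NumPy; the shapes and weights are illustrative, not MCFMO's actual parameters.

```python
import numpy as np

def pointwise_conv_fuse(maps, weights, bias):
    """Fuse per-modality context maps with a 1x1 (point-wise) convolution.

    maps: list of arrays, each (C_i, H, W) -- e.g. visual, semantic, goal maps
    weights: (C_out, C_in) with C_in = sum of the C_i; bias: (C_out,)
    A 1x1 conv is just a linear map applied independently at every cell.
    """
    x = np.concatenate(maps, axis=0)              # (C_in, H, W)
    c_in, h, w = x.shape
    fused = weights @ x.reshape(c_in, h * w)      # (C_out, H*W)
    fused += bias[:, None]
    return fused.reshape(-1, h, w)
```

With two all-ones visual channels and one constant goal channel, all-ones weights make every fused cell simply the channel sum, which is easy to verify by hand.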

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In continuous-dimension emotion recognition, the parts that highlight emotional expression differ across modes, and the influence of each mode on the emotional state also differs. This paper therefore studies the fusion of the two most important modes in emotion recognition, voice and visual expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, the first step uses prior knowledge to extract audio features. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, superior to several comparison algorithms.
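The concordance correlation coefficient (CCC) reported for arousal and valence is a standard agreement metric and is straightforward to compute; a minimal NumPy version:

```python
import numpy as np

def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient (CCC): agreement between
    predicted and labeled continuous emotion values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (vt + vp + (mt - mp) ** 2)
```

CCC equals 1 only for perfect agreement and, unlike the Pearson correlation, penalizes constant offsets between predictions and labels.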

Multimodal MRI analysis model based on deep neural network for glioma grading classification (신경교종 등급 분류를 위한 심층신경망 기반 멀티모달 MRI 영상 분석 모델)

  • Kim, Jonghun;Park, Hyunjin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.425-427 / 2022
  • The grade of a glioma is important information related to survival, so classifying the grade before treatment is important for evaluating tumor progression and planning treatment. Glioma grading is mostly divided into high-grade glioma (HGG) and low-grade glioma (LGG). In this study, image preprocessing techniques are applied to analyze magnetic resonance imaging (MRI) with deep neural network models, and the models' classification performance is evaluated. The best-performing model, EfficientNet-B6, achieves accuracy 0.9046, sensitivity 0.9570, specificity 0.7976, AUC 0.8702, and F1-score 0.8152 in 5-fold cross-validation.
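The reported metrics all derive from a binary confusion matrix (with HGG as the positive class); a small helper makes the relationships explicit. The counts in the example are made up for illustration, not the study's data.

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, specificity, and F1 from confusion-matrix
    counts, as used to report HGG-vs-LGG classification performance."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall on the positive (HGG) class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1
```

The pattern in the paper's numbers (sensitivity well above specificity) corresponds to a model that catches most HGG cases at the cost of more false positives among LGG cases.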

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal / v.44 no.3 / pp.426-437 / 2022
  • To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it includes 10,372 user labels with emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, with 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset contributes to understanding human behavior through multimodal data accumulated throughout daily life under natural conditions.
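The clustering step in the pipeline above can be sketched end to end: represent each day as an activity-proportion vector, then apply spectral clustering to find representative daily patterns. The sketch below uses a Gaussian affinity, the unnormalized graph Laplacian, and a small k-means; all parameters are assumptions, not the paper's configuration.

```python
import numpy as np

def spectral_cluster(days, k, sigma=1.0, iters=50):
    """Group daily behavior vectors into k representative patterns with
    unnormalized spectral clustering: Gaussian affinity, graph Laplacian,
    k smallest eigenvectors, then k-means with farthest-first seeding.

    days: (n_days, n_activities) array, each row one day's activity mix.
    """
    d2 = ((days[:, None, :] - days[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))            # affinity matrix
    L = np.diag(W.sum(axis=1)) - W                # unnormalized Laplacian
    _, vecs = np.linalg.eigh(L)                   # ascending eigenvalues
    X = vecs[:, :k]                               # spectral embedding
    centers = [X[0]]                              # farthest-first seeding
    for _ in range(1, k):
        dists = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                        # plain k-means
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

On two well-separated groups of synthetic "days," the second-smallest Laplacian eigenvector (the Fiedler vector) already separates the groups, and k-means merely reads that split off the embedding.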

Freight Demand Analysis for Multimodal Shipments (복합수단운송을 고려한 화물통행수요분석 방안)

  • Hong, Da-Hee;Park, Min-Choul;Lee, Jung-Yub;Hahn, Jin-Seok;Kang, Jae-Won
    • Journal of Korean Society of Transportation / v.30 no.4 / pp.85-94 / 2012
  • Modern freight transport pursues not only the reduction of logistics costs but also green logistics and efficient shipments. To accomplish these goals, various policies on multimodal shipments and stopovers at logistics facilities have been widely adopted. This situation requires changes in existing methods of freight demand analysis. However, reliable freight demand forecasting remains limited, since the transport research field has no robust freight demand model that can accommodate transshipments at logistics facilities. This study suggests a novel freight demand analysis method that can consider transshipments in multimodal networks, and its applicability is discussed through an example test.

A study on the implementation of user identification system using bioinfomatics (생물학적 특징을 이용한 사용자 인증시스템 구현)

  • 문용선;정택준
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.2 / pp.346-355 / 2002
  • This study offers multimodal recognition using face, lip, and voice features, instead of existing monomodal biometrics, to improve recognition accuracy. Each biometric feature vector is obtained as follows. For the face, features are calculated by principal component analysis with wavelet multiresolution. For the lips, a filter is first used to derive an equation for the lip edges; then, using a thinned image and the least squares method, the equation's coefficients are found. Voice features are extracted as mel-frequency cepstral coefficients (MFCC). A backpropagation neural network is trained on the inputs above and evaluated experimentally. Based on the experimental results, we discuss the advantages and efficiency of the approach.
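Of the feature extractors this abstract lists, the PCA step for the face modality is the easiest to sketch; the wavelet multiresolution preprocessing and the lip/voice pipelines are omitted here, and the data in the example are synthetic.

```python
import numpy as np

def pca_features(X, n_components):
    """Project feature vectors onto their top principal components,
    as used here for the face modality.

    X: (n_samples, n_dims) data matrix; returns (projected, components).
    """
    Xc = X - X.mean(axis=0)
    # eigen-decomposition of the sample covariance matrix
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]  # largest-variance axes
    components = vecs[:, order]
    return Xc @ components, components
```

For data that lies on a single line (rank 1), one component reconstructs the centered data exactly, which is a quick sanity check of the projection.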