• Title/Summary/Keyword: Multi-modal Data


Analysis of Semantic Relations Between Multimodal Medical Images Based on Coronary Anatomy for Acute Myocardial Infarction

  • Park, Yeseul;Lee, Meeyeon;Kim, Myung-Hee;Lee, Jung-Won
    • Journal of Information Processing Systems / v.12 no.1 / pp.129-148 / 2016
  • Acute myocardial infarction (AMI) is one of the three emergency diseases that require urgent diagnosis and treatment within the golden hour. Because of the nature of the disease, it is important to identify the status of the coronary artery in AMI. Therefore, multi-modal medical images, which can effectively show the status of the coronary artery, have been widely used to diagnose AMI. However, legacy systems have provided multi-modal medical images as flat, unstructured data, lacking semantic links between the multi-modal images, which are distributed and stored individually. If the status of the coronary artery could be seen all at once by integrating the core information extracted from multi-modal medical images, the time for diagnosis and treatment would be reduced. In this paper, we analyze semantic relations between multi-modal medical images based on coronary anatomy for AMI. First, we selected a coronary arteriogram, coronary angiography, and echocardiography as the representative medical images for AMI and extracted semantic features from each. We then analyzed the semantic relations between them and defined a convergence data model for AMI. As a result, we show that the data model can present core information from multi-modal medical images and enables intuitive diagnosis through a unified view of AMI.
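A convergence data model of this kind, keying per-modality findings to shared coronary-anatomy segments, can be sketched as below. This is a minimal illustration only; the entity names, fields, and example findings are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModalityFinding:
    """One semantic feature extracted from a single imaging modality."""
    modality: str   # e.g. "coronary angiography", "echocardiography"
    segment: str    # coronary-anatomy segment, e.g. "LAD proximal"
    finding: str    # e.g. "stenosis 90%", "wall-motion abnormality"

@dataclass
class AMIConvergenceRecord:
    """Unified view: findings from all modalities, keyed by coronary segment.

    Field names here are illustrative; the paper's convergence data model
    defines its own entities and relations.
    """
    patient_id: str
    findings: list = field(default_factory=list)

    def by_segment(self, segment):
        """Cross-modal view of a single coronary segment."""
        return [f for f in self.findings if f.segment == segment]

record = AMIConvergenceRecord("P001")
record.findings.append(
    ModalityFinding("coronary angiography", "LAD proximal", "stenosis 90%"))
record.findings.append(
    ModalityFinding("echocardiography", "LAD proximal", "anterior wall hypokinesis"))
lad_view = record.by_segment("LAD proximal")
```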

Multi-Modal Wearable Sensor Integration for Daily Activity Pattern Analysis with Gated Multi-Modal Neural Networks

  • On, Kyoung-Woon;Kim, Eun-Sol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.2 / pp.104-109 / 2017
  • We propose a new machine learning algorithm that analyzes users' daily activity patterns from multi-modal wearable sensor data. The proposed model learns and extracts activity patterns from wearable-device input in real time. Inspired by human cue integration, we constructed gated multi-modal neural networks that selectively integrate wearable sensor input data using gate modules. For the experiments, sensory data were collected from multiple wearable devices in restaurant situations. We first show experimentally that the proposed model performs well in terms of prediction accuracy. We then explain the possibility of constructing a knowledge schema automatically by analyzing the activation patterns in the middle layer of the proposed model.
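The gate-module idea, letting each sensor's contribution be scaled by a learned gate before fusion, can be sketched as follows. This is not the authors' implementation; the scalar sigmoid gate, feature sizes, and sensor names are assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(modalities, gate_weights, gate_biases):
    """Selectively combine per-sensor feature vectors with scalar gates.

    modalities  : list of feature vectors (one per wearable sensor)
    gate_weights: one weight vector per modality (same length as its features)
    gate_biases : one scalar bias per modality
    Returns the gated, element-wise sum of the modality features.
    """
    dim = len(modalities[0])
    fused = [0.0] * dim
    for feats, w, b in zip(modalities, gate_weights, gate_biases):
        # Gate value in (0, 1): how much this sensor contributes right now.
        g = sigmoid(sum(wi * xi for wi, xi in zip(w, feats)) + b)
        for i in range(dim):
            fused[i] += g * feats[i]
    return fused

# Two toy "sensors" with 3-dimensional features.
accel = [0.5, -0.2, 0.1]
audio = [0.0, 0.9, 0.3]
fused = gated_fusion([accel, audio],
                     gate_weights=[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                     gate_biases=[0.0, 0.0])
```

In the paper the gates are trained end-to-end with the rest of the network; here the weights are fixed only to show the data flow.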

A Multi-Modal Complex Motion Authoring Tool for Creating Robot Contents

  • Seok, Kwang-Ho;Kim, Yoon-Sang
    • Journal of Korea Multimedia Society / v.13 no.6 / pp.924-932 / 2010
  • This paper proposes a multi-modal complex motion authoring tool for creating robot contents. The proposed tool is user-friendly and allows general users without much knowledge about robots, including children, women and the elderly, to easily edit and modify robot contents. Furthermore, the tool uses multi-modal data including graphic motion, voice and music to simulate user-created robot contents in the 3D virtual environment. This allows the user to not only view the authoring process in real time but also transmit the final authored contents to control the robot. The validity of the proposed tool was examined based on simulations using the authored multi-modal complex motion robot contents as well as experiments of actual robot motions.

Multi Modal Sensor Training Dataset for the Robust Object Detection and Tracking in Outdoor Surveillance (MMO (Multi Modal Outdoor) Dataset)

  • Noh, DongKi;Yang, Wonkeun;Uhm, Teayoung;Lee, Jaekwang;Kim, Hyoung-Rock;Baek, SeungMin
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.1006-1018 / 2020
  • Datasets are becoming more important for developing learning-based algorithms, and the quality of an algorithm strongly depends on its dataset. We therefore introduce a new dataset of over 200,000 images with fully labeled multi-modal sensor data. The proposed dataset was designed and constructed for researchers developing detection, tracking, and action-classification methods for outdoor surveillance scenarios. It includes various images and multi-modal sensor data under different weather and lighting conditions. We therefore hope it will help in developing more robust algorithms for systems equipped with different kinds of sensors in outdoor applications. Case studies with the proposed dataset are also discussed in this paper.

On Addressing Network Synchronization in Object Tracking with Multi-modal Sensors

  • Jung, Sang-Kil;Lee, Jin-Seok;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.3 no.4 / pp.344-365 / 2009
  • The performance of a tracking system increases greatly if multiple types of sensors are combined to achieve the tracking objective instead of relying on a single type of sensor. To conduct multi-modal tracking, we previously developed a multi-modal sensor-based tracking model in which acoustic sensors mainly track the objects and visual sensors compensate for the tracking errors [1]. In this paper, we identify a network synchronization problem that appears in the developed tracking system. The problem is caused by the different locations and traffic characteristics of the multi-modal sensors and the non-synchronized arrival of the captured sensor data at a processing server. To deliver the sensor data effectively, we propose a time-based packet aggregation algorithm in which the acoustic sensor data are aggregated based on the sampling time and sent to the server. The delivered acoustic sensor data are then compensated by visual images to correct the tracking errors, and this compensation process improves the tracking accuracy in the ideal case. In real situations, however, the tracking improvement from visual compensation can be severely degraded by the aforementioned network synchronization problem, whose impact is analyzed by simulations in this paper. To resolve the network synchronization problem, we differentiate the service level of sensor traffic based on Weighted Round Robin (WRR) scheduling at the routers. The weighting factor allocated to each queue is calculated by a proposed Delay-based Weight Allocation (DWA) algorithm. Simulations show that the traffic differentiation model can mitigate the non-synchronization of sensor data. Finally, we analyze the expected traffic behavior of the tracking system in terms of acoustic sampling interval and visual image size.
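The WRR-with-delay-based-weights scheme can be sketched as below. The abstract does not give the DWA formula, so weights proportional to each queue's measured delay are an assumption made here purely for illustration.

```python
def delay_based_weights(measured_delays, quantum=100):
    """Allocate WRR weights in proportion to each queue's measured delay.

    Proportional allocation is an assumption (the paper's DWA algorithm may
    differ): queues whose sensor traffic lags further behind get a larger
    share of the scheduling quantum, and every queue gets at least 1.
    """
    total = sum(measured_delays)
    return [max(1, round(quantum * d / total)) for d in measured_delays]

def wrr_schedule(queues, weights):
    """One Weighted Round Robin cycle: serve weights[i] packets from queue i."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(min(w, len(q))):
            served.append(q.pop(0))
    return served

# Acoustic traffic (30 ms measured delay) vs. visual traffic (10 ms).
weights = delay_based_weights([30, 10], quantum=4)
acoustic = ["a1", "a2", "a3", "a4"]
visual = ["v1", "v2"]
order = wrr_schedule([acoustic, visual], weights)
```

With these toy numbers the acoustic queue receives three of the four service slots per cycle, so its lagging packets catch up with the visual stream.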

FakedBits: Detecting Fake Information on Social Platforms using Multi-Modal Features

  • Sharma, Dilip Kumar;Singh, Bhuvanesh;Agarwal, Saurabh;Kim, Hyunsung;Sharma, Raj
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.1 / pp.51-73 / 2023
  • Social media play a significant role in communicating information across the globe: connecting with loved ones, getting the news, sharing ideas, etc. However, some people use social media to spread fake information, which has a bad impact on society. Therefore, minimizing fake news and detecting it are two primary challenges that need to be addressed. This paper presents a multi-modal deep learning technique to address these challenges. The proposed model can use and process both visual and textual features, so it is able to detect fake information from visual and textual data. We used EfficientNetB0 for counterfeit-image detection and a sentence transformer for textual learning. Feature embedding is performed in individual channels, while fusion is done at the last classification layer. Late fusion is applied intentionally to mitigate the noise generated by the multiple modalities. Extensive experiments were conducted, and performance was evaluated against state-of-the-art methods on three real-world benchmark datasets: MediaEval (Twitter), Weibo, and Fakeddit. Results reveal that the proposed model outperformed the state-of-the-art methods, achieving accuracies of 86.48%, 82.50%, and 88.80% on MediaEval (Twitter), Weibo, and Fakeddit, respectively.
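Late fusion at the final classification layer can be sketched as follows. In the paper the image logits would come from EfficientNetB0 and the text logits from a sentence transformer; here both are stubbed out, and averaging the two channels' logits is an assumption, not the paper's exact fusion rule.

```python
import math

def softmax(logits):
    m = max(logits)           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(image_logits, text_logits):
    """Fuse per-channel class logits at the final layer (late fusion).

    Each channel is embedded and classified independently; only the final
    class scores are combined, which limits how much noise in one modality
    can corrupt the other. Averaging is an illustrative choice.
    """
    fused = [(a + b) / 2.0 for a, b in zip(image_logits, text_logits)]
    return softmax(fused)     # [P(real), P(fake)]

# Image channel leans strongly "fake"; text channel agrees weakly.
probs = late_fusion(image_logits=[-1.0, 2.0], text_logits=[0.0, 0.5])
label = "fake" if probs[1] > probs[0] else "real"
```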

Damage detection of multi-storeyed shear structure using sparse and noisy modal data

  • Panigrahi, S.K.;Chakraverty, S.;Bhattacharyya, S.K.
    • Smart Structures and Systems / v.15 no.5 / pp.1215-1232 / 2015
  • In the present paper, a method is presented for identifying damage in a multi-storeyed shear building structure using a minimum number of modal parameters of the structure. Damage at any level of the structure may lead to a major failure if it is not attended to in time, so early detection of damage is essential. The proposed identification methodology requires experimentally determined sparse modal data of any particular mode as input to detect the location and extent of damage in the structure. Here, the first natural frequency and the corresponding partial mode-shape values are used as input to the model, and results are compared by changing the sensor placement locations at different floors to determine the best sensor locations for accurate damage identification. Initially, experimental data are simulated numerically by solving the eigenvalue problem of the damaged structure with random noise added to the vibration characteristics. The reliability of the procedure has been demonstrated through examples of multi-storeyed shear structures with different damage scenarios and various noise levels. The methodology has also been validated using dynamic data obtained from an experiment on a laboratory-scale steel structure.
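The forward half of this simulation, how a storey-stiffness loss shifts the first natural frequency, can be sketched for a 2-storey shear building, where the eigenvalue problem reduces to a quadratic. The stiffness and mass values are illustrative assumptions, and the identification step (inverting this relation) is not shown.

```python
import math

def first_frequency_2dof(k1, k2, m):
    """First natural frequency (rad/s) of a 2-storey shear building.

    K = [[k1+k2, -k2], [-k2, k2]], M = m*I; the characteristic equation
    det(K - lam*M) = 0 is a quadratic in lam, solved here in closed form.
    """
    b = (k1 + 2.0 * k2) / m          # sum of eigenvalues
    c = (k1 * k2) / (m * m)          # product of eigenvalues
    lam = (b - math.sqrt(b * b - 4.0 * c)) / 2.0   # smaller eigenvalue
    return math.sqrt(lam)

# Healthy structure vs. 30% stiffness loss (damage) at the first storey.
healthy = first_frequency_2dof(k1=1000.0, k2=1000.0, m=10.0)
damaged = first_frequency_2dof(k1=700.0, k2=1000.0, m=10.0)
```

Damage identification then asks the inverse question: given a measured frequency drop (plus partial mode-shape data), which storey lost stiffness, and by how much.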

A Simple Tandem Method for Clustering of Multimodal Dataset

  • Cho C.;Lee J.W.;Lee J.W.
    • Proceedings of the Korean Operations and Management Science Society Conference / 2003.05a / pp.729-733 / 2003
  • The presence of local features within clusters, caused by the multi-modal nature of the data, prevents many conventional clustering techniques from working properly. In particular, clustering datasets with non-Gaussian distributions within a cluster can be problematic when a technique with an implicit assumption of Gaussian distribution is used. The current study proposes a simple tandem clustering method composed of a k-means-type algorithm and a hierarchical method to solve such problems. The multi-modal dataset is first divided into many small pre-clusters by the k-means or fuzzy k-means algorithm. The pre-clusters found in the first step are then clustered again using an agglomerative hierarchical clustering method with Kullback-Leibler divergence as the measure of dissimilarity. This method is not only effective at extracting multi-modal clusters but also fast and easy in terms of computational complexity, and it is relatively robust in the presence of outliers. The performance of the proposed method was evaluated on three generated datasets and six publicly known real-world datasets.
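The second (hierarchical) stage of the tandem method can be sketched as below, assuming the k-means stage has already produced the pre-clusters. Fitting a 1-D Gaussian per pre-cluster and using symmetric KL divergence between those Gaussians is one plausible reading of the abstract, not the paper's exact formulation.

```python
import math

def gaussian_kl(p, q):
    """KL divergence between two 1-D Gaussians given as (mean, variance)."""
    mu_p, var_p = p
    mu_q, var_q = q
    return (math.log(math.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

def sym_kl(p, q):
    """Symmetrized KL, usable as a dissimilarity measure."""
    return gaussian_kl(p, q) + gaussian_kl(q, p)

def fit(points):
    """Fit (mean, variance) to a pre-cluster; floor variance to avoid /0."""
    mu = sum(points) / len(points)
    var = sum((x - mu) ** 2 for x in points) / len(points) or 1e-6
    return (mu, var)

def tandem_merge(pre_clusters, k):
    """Agglomeratively merge k-means pre-clusters, closest pair (by
    symmetric KL between fitted Gaussians) first, until k remain."""
    clusters = [list(c) for c in pre_clusters]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sym_kl(fit(clusters[i]), fit(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Four pre-clusters from a bimodal dataset; merge down to the 2 true modes.
pre = [[0.9, 1.1], [1.4, 1.6], [9.0, 9.2], [9.8, 10.0]]
result = tandem_merge(pre, k=2)
```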


A multi-modal neural network using Chebyshev polynomials

  • Yoshihara, Ikuo;Nakagawa, Tomoyuki;Yasunaga, Moritoshi;Abe, Ken-ichi
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1998.10a / pp.250-253 / 1998
  • This paper presents a multi-modal neural network composed of a preprocessing module and a multi-layer neural network module in order to enhance the nonlinear characteristics of the neural network. The former module is based on a spectral method using Chebyshev polynomials and transforms input data into spectra. The latter module identifies the system using the spectra generated by the preprocessing module. Omnibus numerical experiments show that the method is applicable to many nonlinear dynamic systems in the real world, and that preprocessing with Chebyshev polynomials reduces the number of neurons required for the multi-layer neural network.
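The preprocessing module can be sketched with the standard Chebyshev recurrence, mapping each input to a small spectral feature vector that would then feed the multi-layer network. This is a minimal scalar-input sketch; the paper's exact transform and spectral dimension are not given in the abstract.

```python
def chebyshev_features(x, order):
    """Map a scalar input in [-1, 1] to [T_0(x), ..., T_order(x)] via the
    recurrence T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1}."""
    feats = [1.0, x]
    for _ in range(2, order + 1):
        feats.append(2.0 * x * feats[-1] - feats[-2])
    return feats[:order + 1]

def preprocess(samples, order=4):
    """Preprocessing module: one Chebyshev feature vector per input sample.
    These spectral features replace the raw input before the network module."""
    return [chebyshev_features(x, order) for x in samples]

spectra = preprocess([0.0, 0.5, 1.0], order=4)
```

Because the polynomial expansion injects nonlinearity before the network, the downstream multi-layer module can be smaller, which matches the neuron-count reduction the abstract reports.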


Damage detection in truss bridges using vibration based multi-criteria approach

  • Shih, H.W.;Thambiratnam, D.P.;Chan, T.H.T.
    • Structural Engineering and Mechanics / v.39 no.2 / pp.187-206 / 2011
  • This paper uses dynamic computer simulation techniques to develop and apply a multi-criteria procedure using non-destructive vibration-based parameters for damage assessment in truss bridges. In addition to changes in natural frequencies, this procedure incorporates two parameters, namely the modal flexibility and the modal strain energy. Using the numerically simulated modal data obtained through finite element analysis of the healthy and damaged bridge models, algorithms based on modal flexibility and modal strain energy changes before and after damage are obtained and used as the indices for the assessment of structural health state. The application of the two proposed parameters to truss-type structures is limited in the literature. The proposed multi-criteria based damage assessment procedure is therefore developed and applied to truss bridges. The application of the approach is demonstrated through numerical simulation studies of a single-span simply supported truss bridge with eight damage scenarios corresponding to different types of deck and truss damage. Results show that the proposed multi-criteria method is effective in damage assessment in this type of bridge superstructure.
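One of the two indices above, modal flexibility, has a standard construction that can be sketched as follows: F = Σᵢ (1/ωᵢ²) φᵢφᵢᵀ over the mass-normalized modes, with the change in its diagonal serving as a damage indicator. The 2-DOF numbers are illustrative assumptions, and the modal strain energy index is not shown.

```python
def modal_flexibility(frequencies, mode_shapes):
    """Modal flexibility matrix F = sum_i (1/w_i^2) * phi_i * phi_i^T,
    built from natural frequencies (rad/s) and mass-normalized mode shapes."""
    n = len(mode_shapes[0])
    F = [[0.0] * n for _ in range(n)]
    for w, phi in zip(frequencies, mode_shapes):
        for r in range(n):
            for c in range(n):
                F[r][c] += phi[r] * phi[c] / (w * w)
    return F

def flexibility_change(F_healthy, F_damaged):
    """Damage index: absolute change in the flexibility diagonal per DOF.
    Low modes dominate F, so a few measured modes approximate it well."""
    return [abs(fd[i] - fh[i])
            for i, (fh, fd) in enumerate(zip(F_healthy, F_damaged))]

# Toy 2-DOF example: damage lowers the first frequency, raising flexibility.
Fh = modal_flexibility([10.0, 25.0], [[0.6, 0.8], [0.8, -0.6]])
Fd = modal_flexibility([8.0, 25.0], [[0.6, 0.8], [0.8, -0.6]])
delta = flexibility_change(Fh, Fd)
```

In the paper's procedure the healthy and damaged modal data come from finite element models of the truss bridge, and peaks in indices like `delta` point to the damaged members.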