• Title/Abstract/Keyword: Model Fusion


Improving the Distributed Data Fusion Ability of the JDL Data Fusion Model (JDL 자료융합 모델의 분산 자료융합 능력 개선)

  • Park, Gyu-Dong;Byun, Young-Tae
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.15 no.2
    • /
    • pp.147-154
    • /
    • 2012
  • In this paper, we revise the JDL data fusion model to provide a capability for distributed data fusion (DDF). Data fusion is a function that produces valuable information using data from multiple sources. After the network-centric warfare concept was introduced, data fusion was required to be extended to DDF. We identify data transfer and control between nodes as the core function of DDF. Previous data fusion models cannot be used for DDF because they do not include this function. Therefore, we revise the previous JDL data fusion model by adding this core function and propose the result as a model for DDF. We show that our model is adequate and useful for DDF through several examples.
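
The core addition the paper argues for is an inter-node data transfer and control function on top of local JDL fusion. As a rough illustration only (the class and method names below, such as `FusionNode.transfer` and `FusionNode.control`, are hypothetical and not the paper's specification), a distributed node might expose that function like this:

```python
from dataclasses import dataclass, field

@dataclass
class FusedEstimate:
    """A locally fused estimate (JDL level-1 style output)."""
    source_node: str
    state: dict
    confidence: float

@dataclass
class FusionNode:
    """Hypothetical DDF node: local JDL fusion plus the inter-node
    data transfer and control function added to the revised model."""
    name: str
    peers: list = field(default_factory=list)
    inbox: list = field(default_factory=list)

    def fuse_local(self, observations):
        # Stand-in for JDL level 0/1 processing on this node's own sensors.
        confidence = min(1.0, 0.1 * len(observations))
        return FusedEstimate(self.name, {"tracks": observations}, confidence)

    def transfer(self, estimate):
        # Data transfer: push the local estimate to peer nodes.
        for peer in self.peers:
            peer.inbox.append(estimate)

    def control(self, min_confidence=0.5):
        # Control: decide which received estimates are worth re-fusing locally.
        return [e for e in self.inbox if e.confidence >= min_confidence]

# Usage: two nodes exchange estimates before a second fusion pass.
a, b = FusionNode("A"), FusionNode("B")
a.peers, b.peers = [b], [a]
a.transfer(a.fuse_local(["track-1", "track-2", "track-3", "track-4", "track-5"]))
print([e.source_node for e in b.control()])   # ['A']
```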

Traffic Flow Prediction with Spatio-Temporal Information Fusion using Graph Neural Networks

  • Huijuan Ding;Giseop Noh
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.88-97
    • /
    • 2023
  • Traffic flow prediction is of great significance in urban planning and traffic management. As the complexity of urban traffic increases, existing prediction methods still face challenges, especially in fusing spatio-temporal information and capturing long-term dependencies. This study uses a graph neural network fusion model to address the spatio-temporal information fusion problem in traffic flow prediction. We propose a new deep learning model, Spatio-Temporal Information Fusion using Graph Neural Networks (STFGNN). We alternate GCN, TCN, and LSTM modules to carry out spatio-temporal information fusion: the GCN and the multi-core TCN capture the spatial and temporal dependencies of traffic flow, respectively, and the LSTM connects multiple fusion modules. In an experimental evaluation on real traffic flow data, STFGNN showed better performance than other models.
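
As a rough sketch of the alternating GCN/TCN/LSTM pattern the abstract describes (layer sizes, the dense-adjacency GCN, the LSTM placement, and all module names here are illustrative assumptions, not the authors' implementation), a minimal PyTorch version might look like this:

```python
import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    """Graph convolution as A_hat @ X @ W with a row-normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):          # x: (batch, nodes, in_dim)
        a_hat = adj + torch.eye(adj.size(0))
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)
        return torch.relu(self.lin(a_hat @ x))

class FusionBlock(nn.Module):
    """One fusion block: GCN (spatial) followed by a dilated Conv1d (temporal)."""
    def __init__(self, dim, dilation):
        super().__init__()
        self.gcn = SimpleGCN(dim, dim)
        self.tcn = nn.Conv1d(dim, dim, kernel_size=3, padding=dilation, dilation=dilation)

    def forward(self, x, adj):          # x: (batch, time, nodes, dim)
        b, t, n, d = x.shape
        x = self.gcn(x.reshape(b * t, n, d), adj).reshape(b, t, n, d)
        x = x.permute(0, 2, 3, 1).reshape(b * n, d, t)   # (batch*nodes, dim, time)
        x = torch.relu(self.tcn(x)).reshape(b, n, d, t).permute(0, 3, 1, 2)
        return x

class STFGNNSketch(nn.Module):
    """Stacked fusion blocks connected by an LSTM, as the abstract outlines."""
    def __init__(self, num_nodes, dim=16, blocks=2):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.blocks = nn.ModuleList(FusionBlock(dim, 2 ** i) for i in range(blocks))
        self.lstm = nn.LSTM(num_nodes * dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_nodes)

    def forward(self, flow, adj):       # flow: (batch, time, nodes)
        x = self.embed(flow.unsqueeze(-1))
        for block in self.blocks:
            x = block(x, adj)
        b, t, n, d = x.shape
        out, _ = self.lstm(x.reshape(b, t, n * d))
        return self.head(out[:, -1])    # next-step flow for every node

# Smoke test on random data: 8 sensors, 12 past time steps.
adj = (torch.rand(8, 8) > 0.7).float()
model = STFGNNSketch(num_nodes=8)
print(model(torch.rand(4, 12, 8), adj).shape)   # torch.Size([4, 8])
```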

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, developed for fusing mid- and low-resolution satellite images, to constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated by considering the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. Regardless of the model type, however, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date was more important for improving prediction performance than the temporal difference between the pair dates and the prediction date. In addition, using a vegetation index as the input for spatio-temporal fusion gave better prediction performance than computing the vegetation index from fused reflectance values, as it alleviated error-propagation problems. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing an advanced spatio-temporal fusion model for crop monitoring.
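
The premise shared by models such as STARFM is that the temporal change observed by the coarse sensor between a pair date and the prediction date can be transferred to the fine-resolution base image. A minimal NumPy sketch of that idea, without the spectral and spatial similarity weighting of the actual models, is shown below; the function name and window size are illustrative:

```python
import numpy as np

def naive_spatiotemporal_fusion(fine_pair, coarse_pair, coarse_pred, window=5):
    """Predict a fine-resolution image on the prediction date.

    fine_pair   : fine-resolution image on the pair (base) date
    coarse_pair : coarse image on the pair date, resampled to the fine grid
    coarse_pred : coarse image on the prediction date, resampled to the fine grid

    The temporal change observed at coarse resolution is smoothed over a local
    window and added to the fine base image -- the core STARFM-like idea,
    without its spectral/spatial similarity weighting.
    """
    change = coarse_pred - coarse_pair          # temporal change seen by the coarse sensor
    pad = window // 2
    padded = np.pad(change, pad, mode="edge")
    smoothed = np.empty_like(change, dtype=float)
    rows, cols = change.shape
    for i in range(rows):
        for j in range(cols):
            smoothed[i, j] = padded[i:i + window, j:j + window].mean()
    return fine_pair + smoothed

# Synthetic check: a uniform +0.1 reflectance change should be recovered exactly.
fine_pair = np.random.rand(20, 20)
coarse_pair = np.random.rand(20, 20)
pred = naive_spatiotemporal_fusion(fine_pair, coarse_pair, coarse_pair + 0.1)
print(np.allclose(pred, fine_pair + 0.1))       # True
```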

Interactions among Measles Virus Hemagglutinin, Fusion Protein and Cell Receptor Signaling Lymphocyte Activation Molecule (SLAM) Indicating a New Fusion-trimer Model

  • Zhang, Peng;Li, Lingyun;Hu, Chunlin;Xu, Qin;Liu, Xin;Qi, Yipeng
    • BMB Reports
    • /
    • v.38 no.4
    • /
    • pp.373-380
    • /
    • 2005
  • For measles virus, fusion at the cell membrane is an important initial step in entry into infected cells. Recent research indicated that hemagglutinin first induces conformational changes in the fusion protein and then co-mediates membrane fusion. In this work, we used co-immunoprecipitation and pull-down techniques to identify the interactions among the fusion protein, hemagglutinin, and the signaling lymphocyte activation molecule (SLAM), which reveal that the three proteins can form a functional complex to mediate SLAM-dependent fusion. Moreover, under the confocal microscope, the fusion protein and hemagglutinin show co-capping mediated by SLAM. Thus the fusion protein is not only involved in fusion but may also interact directly with SLAM, forming a new fusion-trimer model, which might account for the infection mechanism of measles virus.

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo;Jing, Zhongliang;Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.4118-4136
    • /
    • 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture directional image structure arbitrarily by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and preserves structural information efficiently and consistently. A performance analysis of the proposed method on real-world images demonstrates that it is competitive with state-of-the-art fusion methods, especially in combining structural information.
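
Transform-domain fusion of this kind decomposes both modalities, fuses the coefficients band by band, and reconstructs. The sketch below substitutes a standard wavelet from PyWavelets for the generalized Riesz-wavelet transform and uses a simple maximum-magnitude rule for detail bands rather than the paper's phase/coherence-driven model, so it only illustrates the general pattern:

```python
import numpy as np
import pywt   # PyWavelets; a stand-in for the generalized Riesz-wavelet transform

def fuse_images(img_a, img_b, wavelet="haar", levels=3):
    """Transform-domain fusion: decompose both modalities, fuse coefficients,
    reconstruct.  Approximation bands are averaged; each detail coefficient
    takes the value with the larger magnitude (a simple 'activity' rule,
    not the phase/coherence rule proposed in the paper)."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]                       # low-frequency content
    for da, db in zip(ca[1:], cb[1:]):                    # per-level detail triplets
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy multi-modality pair: same scene, complementary structures.
x = np.zeros((64, 64)); x[16:48, 20:24] = 1.0            # vertical structure
y = np.zeros((64, 64)); y[30:34, 8:56] = 1.0             # horizontal structure
result = fuse_images(x, y)
print(result.shape)                                      # (64, 64)
```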

Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.24-29
    • /
    • 2013
  • In this paper, we propose a fuzzy inference model for a navigation algorithm for a mobile robot that intelligently searches for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of the "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused into the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from the navigation interaction.
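
In command fusion, each behavior proposes a steering command together with a fuzzy-style weight, and the weighted commands are blended rather than the raw sensor data. A minimal sketch of that blending is given below; the membership functions, thresholds, and sign conventions are illustrative assumptions, not the paper's tuned rule base:

```python
import math

def goal_approach(robot_heading, goal_bearing):
    """Behavior 1: steer toward the goal; weight grows with heading error."""
    error = math.atan2(math.sin(goal_bearing - robot_heading),
                       math.cos(goal_bearing - robot_heading))
    weight = min(1.0, abs(error) / math.pi + 0.2)     # illustrative membership
    return error, weight

def obstacle_avoidance(sonar_range, sonar_bearing, safe_range=1.0):
    """Behavior 2: steer away from the nearest obstacle; weight grows as it nears."""
    if sonar_range >= safe_range:
        return 0.0, 0.0
    steer = -math.copysign(math.pi / 2 - abs(sonar_bearing), sonar_bearing)
    weight = 1.0 - sonar_range / safe_range           # closer obstacle -> higher weight
    return steer, weight

def fuse_commands(behaviors):
    """Command fusion: weighted blend of the behaviors' steering proposals."""
    total = sum(w for _, w in behaviors)
    return sum(c * w for c, w in behaviors) / total if total else 0.0

# Robot heading 0 rad, goal at +45 deg, obstacle 0.4 m away slightly to the left.
cmds = [goal_approach(0.0, math.radians(45)),
        obstacle_avoidance(0.4, math.radians(-20))]
print(f"fused steering command: {fuse_commands(cmds):+.2f} rad")
```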

Study on the Cooperation Model for Fusion-Technology Development in SMEs (융합기술 개발을 위한 중소기업 간 협업모형 제안)

  • Cho, Chanwoo;Lee, Sungjoo
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.39 no.3
    • /
    • pp.198-203
    • /
    • 2013
  • Industry convergence is an inexorable trend and has become a fundamental concept for understanding industrial dynamics and developing business strategies. However, most previous studies on convergence have dealt with issues at the macro level (technology or industry level), while little attention has been paid to the analysis of convergence at the micro level (firm level). Recognizing that firms are the principal agents that develop fusion technologies, it is important to help firms work together toward technology convergence. Therefore, this research proposes a collaboration model for SMEs, since SMEs tend to have novel ideas and are flexible enough to develop fusion technologies. To do this, we surveyed Korean SMEs and analyzed their successful cases of collaboration, which were used as a basis for developing the model. The research results will help develop strategies or policies that promote collaboration between SMEs and, ultimately, the creation of fusion technologies.

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.462-475
    • /
    • 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from a speech signal. Next, five kinds of statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features and remove redundant information. In the feature fusion stage, the GA is used to adaptively search for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT-support vector machine is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average-weight fusion method.
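
A rough sketch of the fusion pipeline the abstract outlines is given below: Fisher-score feature selection, a toy genetic search for the scalar fusion weight between the deep and acoustic feature blocks, and a plain SVM standing in for the paper's DT-SVM classifier. The data are synthetic and every function here is an illustrative assumption rather than the authors' code:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fisher_select(X, y, k):
    """Fisher criterion: between-class scatter of each feature over within-class scatter."""
    classes = np.unique(y)
    mu = X.mean(0)
    num = sum((X[y == c].mean(0) - mu) ** 2 * (y == c).sum() for c in classes)
    den = sum(X[y == c].var(0) * (y == c).sum() for c in classes) + 1e-9
    return np.argsort(num / den)[-k:]

def ga_search_weight(deep, acoustic, y, pop=12, gens=15):
    """Toy genetic search for the fusion weight w applied as [deep*w, acoustic*(1-w)]."""
    def fitness(w):
        fused = np.hstack([deep * w, acoustic * (1 - w)])
        return cross_val_score(SVC(), fused, y, cv=3).mean()
    population = rng.random(pop)
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]                    # selection
        children = np.clip(parents + rng.normal(0, 0.05, parents.size), 0, 1)   # mutation
        population = np.concatenate([parents, children])
    return population[np.argmax([fitness(w) for w in population])]

# Synthetic stand-ins for deep and acoustic utterance-level features (2 emotion classes).
y = rng.integers(0, 2, 60)
deep = rng.normal(0, 1, (60, 20)) + y[:, None] * 0.8
acoustic = rng.normal(0, 1, (60, 30)) + y[:, None] * 0.3
deep = deep[:, fisher_select(deep, y, 10)]
acoustic = acoustic[:, fisher_select(acoustic, y, 10)]
print("best fusion weight:", round(float(ga_search_weight(deep, acoustic, y)), 3))
```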

Performance Evaluation of Decision Fusion Rules of Wireless Sensor Networks in Generalized Gaussian Noise (Generalized Gaussian Noise에서의 무선센서 네트워크의 Decision Fusion Rule의 성능 분석에 관한 연구)

  • Park, Jin-Tae;Koo, In-Soo;Kim, Ki-Seon
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.97-98
    • /
    • 2006
  • The fusion of decisions from multiple distributed sensor nodes is studied in this work. Based on the canonical parallel fusion model, we derive the optimal likelihood-ratio-based fusion rule under the assumptions of a generalized Gaussian noise model and an arbitrary fading channel. This optimal fusion rule, however, requires complete knowledge of the channels and of the detection performance of the local sensor nodes. To relax these requirements and provide near-optimum performance, we derive suboptimum fusion rules by applying high- and low-signal-to-noise-ratio (SNR) approximations to the optimal fusion rule. Performance evaluation is conducted through simulations.
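
For reference, the classical likelihood-ratio (Chair-Varshney) fusion statistic for binary local decisions is easy to state; the sketch below assumes error-free reporting channels, whereas the paper derives the rule under generalized Gaussian noise and fading, so it serves only as a simplified baseline:

```python
import math
import random

def chair_varshney_statistic(decisions, pd, pf):
    """Log-likelihood-ratio fusion of binary local decisions.

    decisions : list of 0/1 local decisions u_i
    pd, pf    : per-sensor detection and false-alarm probabilities

    A sensor reporting 1 contributes log(Pd_i / Pf_i); a sensor reporting 0
    contributes log((1 - Pd_i) / (1 - Pf_i)).  The sum is compared with a threshold.
    """
    llr = 0.0
    for u, d, f in zip(decisions, pd, pf):
        llr += math.log(d / f) if u else math.log((1 - d) / (1 - f))
    return llr

# Monte Carlo check: fusing 5 unequal sensors with threshold 0 (equal priors).
random.seed(1)
pd = [0.9, 0.8, 0.7, 0.85, 0.6]
pf = [0.1, 0.2, 0.15, 0.05, 0.3]
trials, correct = 10000, 0
for _ in range(trials):
    target_present = random.random() < 0.5
    decisions = [int(random.random() < (d if target_present else f))
                 for d, f in zip(pd, pf)]
    decide_present = chair_varshney_statistic(decisions, pd, pf) > 0.0
    correct += decide_present == target_present
print(f"fused decision accuracy: {correct / trials:.3f}")
```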
