• Title/Summary/Keyword: network fusion

Search results: 528

A Visualization System for Multiple Heterogeneous Network Security Data and Fusion Analysis

  • Zhang, Sheng;Shi, Ronghua;Zhao, Jue
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 6 / pp.2801-2816 / 2016
  • Owing to their low scalability, weak support for big data, insufficient collaborative data analysis and inadequate situational awareness, traditional methods fail to meet the needs of security data analysis. This paper proposes visualization methods to fuse multi-source security data and grasp the network situation. Firstly, data sources are classified by their collection positions, with security data objects taken from three different layers. Secondly, a Heatmap is adopted to show host status, a Treemap is used to visualize Netflow logs, and a radial Node-link diagram is employed to express IPS logs. Finally, a Labeled Treemap is introduced to fuse data at the data level, and time-series features are extracted to fuse data at the feature level. Comparative analyses against prize-winning works show that this method offers substantial advantages, helping network analysts fuse data features and understand the network security situation in a unified, convenient and accurate way.
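
A minimal sketch of the feature-level fusion step, assuming hourly per-host event counts from two of the log sources (Netflow and IPS); the abstract does not specify the exact time-series features, so the statistics below are illustrative:

```python
# Illustrative feature-level fusion: per-host event-count series from two
# log sources are reduced to time-series features and concatenated.
# The window size and the feature set are assumptions, not the paper's.
import numpy as np

def timeseries_features(counts: np.ndarray) -> np.ndarray:
    """Summarize one host's event-count series (mean, std, max, trend)."""
    t = np.arange(len(counts))
    trend = np.polyfit(t, counts, 1)[0]          # slope of a linear fit
    return np.array([counts.mean(), counts.std(), counts.max(), trend])

def fuse_host_features(netflow_counts, ips_counts):
    """Feature-level fusion: concatenate features from both log sources."""
    return np.concatenate([timeseries_features(netflow_counts),
                           timeseries_features(ips_counts)])

# Example: hourly event counts for one host over a day, from each source.
rng = np.random.default_rng(0)
netflow = rng.poisson(20, size=24).astype(float)
ips = rng.poisson(3, size=24).astype(float)
print(fuse_host_features(netflow, ips))   # 8-dimensional fused vector
```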

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. Firstly, the video is preprocessed. Then, a double-layer cascade structure is used to detect faces in the video images, and two deep convolutional neural networks are used to extract time-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
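
A minimal sketch of the multiplicative fusion and SVM classification stage, with the two CNN feature extractors stubbed out by random vectors; the feature dimension, train/test split and kernel choice are illustrative assumptions:

```python
# Hedged sketch of multiplicative (element-wise) feature fusion + SVM.
# The spatial/temporal CNNs are stubbed with random features for brevity.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_videos, feat_dim, n_classes = 200, 128, 6

spatial_feats = rng.normal(size=(n_videos, feat_dim))   # from spatial CNN
temporal_feats = rng.normal(size=(n_videos, feat_dim))  # from optical-flow CNN
labels = rng.integers(0, n_classes, size=n_videos)

# Multiplicative fusion of the two feature streams.
fused = spatial_feats * temporal_feats

clf = SVC(kernel="rbf").fit(fused[:150], labels[:150])
print("accuracy:", clf.score(fused[150:], labels[150:]))
```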

특허분석을 통한 유망융합기술의 예측 (A Study on Forecast of the Promising Fusion Technology by US Patent Analysis)

  • 강희종;엄미정;김동명
    • Journal of Technology Innovation (기술혁신연구) / Vol. 14, No. 3 / pp.93-116 / 2006
  • This study provides a quantitative forecasting method, based on patent analysis, for identifying promising fusion technologies, and applies it to the IT field. The study defines fusion technology, promising technology, a fusion index, a promising index, and promising fusion technology. The analysis finds that the next-generation computer network is the most promising fusion technology in the IT area, a result consistent with forecasts obtained from expert interviews and discussions.

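The abstract does not give the paper's formal index definitions, so the sketch below proxies a fusion index as the share of a field's patents whose IPC codes span more than one IPC section; the data and the proxy itself are hypothetical:

```python
# Illustrative only: "fusion index" here is the fraction of a technology
# field's patents carrying IPC codes from more than one IPC section.
from collections import defaultdict

# (technology_field, [IPC codes]) per patent -- hypothetical toy data.
patents = [
    ("network", ["H04L 12/28", "G06F 15/16"]),
    ("network", ["H04L 29/06"]),
    ("display", ["G09G 3/36", "H01L 27/32"]),
]

def ipc_sections(codes):
    return {code[0] for code in codes}          # first letter = IPC section

counts = defaultdict(lambda: [0, 0])            # field -> [fusion, total]
for field, codes in patents:
    counts[field][1] += 1
    if len(ipc_sections(codes)) > 1:
        counts[field][0] += 1

for field, (fusion, total) in counts.items():
    print(field, "fusion index:", fusion / total)
```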

DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp.4290-4309 / 2020
  • In recent years, advances in machine learning have led to its widespread adoption for tasks such as object detection, image classification, and anomaly detection. A limitation, however, is that a network's performance depends on the quality of the data it receives: a well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Normally, images of the same scene captured from different points of focus, angles, or modalities must be analysed by the network separately, even though they may contain overlapping information (as with images of the same scene captured from different angles) or irrelevant information (as with infrared sensors, which capture thermal information well but not topographical detail). This can add significantly to the computational time and resources required to run the network without providing any additional benefit. In this study, we explore image fusion techniques that assemble multiple images of the same scene into a single image retaining the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. By applying this image fusion step before feeding a dataset into the network, the number of images is significantly reduced, with the potential to improve classification accuracy by enhancing images while discarding irrelevant and overlapping regions.
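
A sketch of one common multi-resolution fusion scheme (wavelet-domain fusion with a max-absolute rule, using the PyWavelets package); the paper's exact decomposition and fusion rule are not specified in the abstract:

```python
# Wavelet-domain multi-resolution fusion of two registered grayscale images:
# average the coarse approximation, keep the stronger detail coefficients.
import numpy as np
import pywt

def fuse_multiresolution(img_a: np.ndarray, img_b: np.ndarray,
                         wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse two same-sized grayscale images into one."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # average approximation
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # keep the stronger (max-absolute) detail coefficient per pixel
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(64, 64)   # e.g., a visible-band image
b = np.random.rand(64, 64)   # e.g., an infrared image of the same scene
print(fuse_multiresolution(a, b).shape)
```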

다중 수동 소나 센서 기반 에너지 인식 분산탐지 체계의 설계 및 성능 분석 (Design and Performance Analysis of Energy-Aware Distributed Detection Systems with Multiple Passive Sonar Sensors)

  • 김송근;홍순목
    • Journal of the Korea Institute of Military Science and Technology (한국군사과학기술학회지) / Vol. 13, No. 1 / pp.9-21 / 2010
  • In this paper, the optimum design of distributed detection is considered for a parallel sensor network system consisting of a fusion center and multiple passive sonar nodes. Nonrandom fusion rules are employed as the fusion rules of the sensor network, and for these rules it is shown that a threshold rule at each sensor node has uniformly most powerful properties. The optimum threshold for each sensor is derived so as to maximize the probability of detection under a constraint on the energy consumed by false alarms. Numerical experiments further investigate how signal strength, false-alarm probability, and the distances between the three sensor nodes affect system detection performance.
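
A Monte Carlo sketch of a parallel network with per-sensor threshold rules and a deterministic (nonrandom) OR fusion rule; the Gaussian observation model and equal thresholds are illustrative assumptions, not the paper's sonar model:

```python
# Each sensor thresholds its own observation; the fusion center declares a
# detection if any sensor fires (OR rule). Raising tau lowers the system
# false-alarm rate (and its energy cost) at the expense of detection.
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_trials, snr = 3, 100_000, 1.5
tau = 2.0                      # per-sensor threshold (assumed equal)

noise = rng.normal(size=(n_trials, n_sensors))
h0 = noise                     # hypothesis H0: noise only
h1 = noise + snr               # hypothesis H1: signal present everywhere

pfa = np.mean((h0 > tau).any(axis=1))   # system false-alarm probability
pd = np.mean((h1 > tau).any(axis=1))    # system detection probability
print(f"P_FA={pfa:.4f}  P_D={pd:.4f}")
```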

뇌종양 분할을 위한 3D 이중 융합 주의 네트워크 (3D Dual-Fusion Attention Network for Brain Tumor Segmentation)

  • 김수형
    • Proceedings of the Korea Information Processing Society Conference / 2023 KIPS Spring Conference / pp.496-498 / 2023
  • Brain tumor segmentation is challenging owing to the diversity of tumor location, imbalance, and morphology. Attention mechanisms have recently been widely used to tackle medical segmentation problems efficiently by focusing on essential regions, while fusion approaches enhance performance by merging the mutual benefits of multiple models. In this study, we propose a 3D dual-fusion attention network that combines the advantages of fusion approaches and attention mechanisms through residual self-attention and local blocks. Compared with fusion approaches and related works, our proposed method shows promising results on the BraTS 2018 dataset.
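
A hedged PyTorch sketch of a residual self-attention block applied to a flattened 3D feature volume; the layer sizes and token layout are assumptions, not the paper's architecture:

```python
# Residual self-attention over a 3D feature map: voxels become tokens,
# attention is computed among them, and the input is added back (residual).
import torch
import torch.nn as nn

class ResidualSelfAttention3D(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, D, H, W) -> tokens: (batch, D*H*W, channels)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)          # residual connection
        return tokens.transpose(1, 2).reshape(b, c, d, h, w)

x = torch.randn(1, 32, 8, 16, 16)   # a small 3D feature volume
print(ResidualSelfAttention3D(32)(x).shape)
```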

조손가족 조모의 자아분화 상태 (The Perceived Self-Differentiation of Custodial Grandmother)

  • 김명희;김신희
    • The Korean Journal of Health Service Management (보건의료산업학회지) / Vol. 9, No. 3 / pp.233-246 / 2015
  • Objectives: This study investigated the general characteristics, child-rearing characteristics, and level of self-differentiation of 120 custodial grandmothers. Methods: Data were collected with a self-administered questionnaire from 120 custodial grandmothers registered in the Kinship Network in Busan City. Results: The mean self-differentiation score of the sample was 2.52 ± 0.51. However, the level of fusion with emotion (1.89 ± 0.80) was extremely low, partly due to the influence of the collectivist culture of Korean society. The levels of emotional reactivity and fusion with emotion differed significantly depending on depression (F=4.387, p=0.015). Conclusions: The findings show the need to improve the level of self-differentiation among kinship-network grandmothers by raising their scores for emotional reactivity and fusion with emotion; supportive programs for these grandmothers should therefore be developed.

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring and low contrast because the propagation of light underwater is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and local binary patterns (LBP). The network consists of three modules: feature aggregation, image reconstruction and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it pays more attention to channels containing important information, and detail information is protected by real-time superposition with the feature information. Experimental results demonstrate that the method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
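
For reference, a minimal NumPy version of the basic 8-neighbor LBP operator mentioned above (the paper's exact LBP configuration may differ):

```python
# Basic local binary pattern: each interior pixel gets an 8-bit code, one
# bit per neighbor, set when the neighbor is at least as bright as the center.
import numpy as np

def lbp_basic(img: np.ndarray) -> np.ndarray:
    """8-neighbor LBP codes for the interior pixels of a grayscale image."""
    c = img[1:-1, 1:-1]
    # neighbors clockwise from top-left; each contributes one bit
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

img = (np.random.rand(6, 6) * 255).astype(np.float32)
print(lbp_basic(img))
```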

다중센서 데이터 융합에서 이벤트 발생 빈도기반 가중치 부여 (Multi-sensor Data Fusion Using Weighting Method based on Event Frequency)

  • 서동혁;유창근
    • The Journal of the Korea Institute of Electronic Communication Sciences (한국전자통신학회논문지) / Vol. 6, No. 4 / pp.581-587 / 2011
  • A wireless sensor network needs to consist of heterogeneous multiple sensors in order to infer high-level context information, and multi-sensor data fusion is required when the data collected by those sensors are used for context inference. In this paper, based on Dempster-Shafer evidence theory, we propose a method for assigning a weight to each sensor when fusing data in a wireless sensor network. The method uses the event frequency of each sensor, a factor that should be reflected when computing the weight of the context data that sensor acquires. Weights were computed from the per-sensor event frequencies, and when multi-sensor data were fused with these weights, the resulting belief values showed a clearer separation, making it easier to infer context information.
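
A small sketch of the idea described above: each sensor's basic mass assignment is discounted by a reliability weight derived from its event frequency and the results are combined with Dempster's rule; the linear weight formula is an illustrative assumption, not the paper's exact formula:

```python
# Dempster-Shafer fusion with frequency-based discounting. Focal elements
# are frozensets over the frame of discernment Theta = {event, no_event}.
THETA = frozenset({"event", "no_event"})

def discount(mass: dict, alpha: float) -> dict:
    """Shafer discounting: keep alpha of each mass, move the rest to Theta."""
    m = {focal: alpha * v for focal, v in mass.items()}
    m[THETA] = m.get(THETA, 0.0) + (1.0 - alpha)
    return m

def dempster(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions, renormalizing conflict."""
    combined = {}
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
    k = sum(combined.values())          # 1 - conflict
    return {f: v / k for f, v in combined.items()}

# Sensor 1 fired events more often, so it gets the larger weight
# (here simply freq / max freq -- an assumed weighting scheme).
freqs = {"s1": 40, "s2": 10}
w = {s: f / max(freqs.values()) for s, f in freqs.items()}
m1 = discount({frozenset({"event"}): 0.8, THETA: 0.2}, w["s1"])
m2 = discount({frozenset({"event"}): 0.7, THETA: 0.3}, w["s2"])
print(dempster(m1, m2))
```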

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 3 / pp.877-893 / 2022
  • With the development of deep learning, face inpainting has been significantly enhanced in the past few years. Although inpainting frameworks integrated with generative adversarial networks or attention mechanisms have improved the semantic understanding among facial components, issues in reconstructing corrupted regions remain worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantic understanding and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps: first, structure information obtained from the edge generation network is used as prior knowledge for the face inpainting network; second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve the accuracy of inpainting, both gated convolution and region normalization are applied in our proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that the edge structure and details of facial images are improved by LSK-FNet, and our model surpasses the compared models on the L1, PSNR and SSIM metrics. When the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.
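
A PyTorch sketch of a gated convolution layer of the kind applied in the model above: one branch produces features, a parallel branch a soft mask (gate), and the output is their element-wise product. Channel counts and the input layout are assumptions:

```python
# Gated convolution: the learned sigmoid gate suppresses features in
# invalid (masked) regions, which plain convolutions treat as valid pixels.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

# input: corrupted RGB image concatenated with its binary mask (4 channels)
x = torch.randn(1, 4, 64, 64)
print(GatedConv2d(4, 32)(x).shape)   # -> torch.Size([1, 32, 64, 64])
```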