Feature Fusion Method

Ensemble convolutional neural networks for automatic fusion recognition of multi-platform radar emitters

  • Zhou, Zhiwen; Huang, Gaoming; Wang, Xuebao
    • ETRI Journal / v.41 no.6 / pp.750-759 / 2019
  • Presently, the extraction of hand-crafted features is still the dominant method in radar emitter recognition. To avoid the complicated selection and updating of empirical features, we present a novel automatic feature extraction structure based on deep learning. In particular, a convolutional neural network (CNN) is adopted to extract high-level abstract representations from the time-frequency images of emitter signals. Thus, the redundant process of designing discriminative features can be avoided. Furthermore, to address the performance degradation of a single platform, we propose an ensemble learning-based architecture for multi-platform fusion recognition. Experimental results indicate that the proposed algorithms are feasible and effective, and they outperform other typical feature extraction and fusion recognition methods in terms of accuracy. Moreover, the proposed structure can be extended to other prevalent ensemble learning alternatives.
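
The fusion step described above lends itself to a short illustration. Below is a minimal sketch of the ensemble idea, assuming each platform's CNN already outputs a class-probability vector; the function name, weights, and probability values are illustrative and not taken from the paper:

```python
# Minimal sketch: fuse per-platform class probabilities by weighted averaging.
# Platform weights and probability values below are illustrative only.
import numpy as np

def ensemble_fuse(platform_probs, weights=None):
    """Fuse class-probability vectors from several platforms' CNNs."""
    probs = np.asarray(platform_probs, dtype=float)   # (n_platforms, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)    # default: simple averaging
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Three platforms voting over four emitter classes.
p = [[0.6, 0.2, 0.1, 0.1],
     [0.5, 0.3, 0.1, 0.1],
     [0.2, 0.5, 0.2, 0.1]]
label, fused = ensemble_fuse(p)
print(label, fused)
```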

Multi-Path Feature Fusion Module for Semantic Segmentation (다중 경로 특징점 융합 기반의 의미론적 영상 분할 기법)

  • Park, Sangyong; Heo, Yong Seok
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.1-12 / 2021
  • In this paper, we present a new architecture for semantic segmentation. Semantic segmentation aims at pixel-wise classification, which is important for fully understanding images. Previous semantic segmentation networks use multi-layer features from the encoder to predict the final results. However, these features do not cover various receptive fields, which easily leads to inaccurate results at boundaries between different classes and for small objects. To solve this problem, we propose a multi-path feature fusion module that allows the features of each layer to cover various receptive fields through a set of dilated convolutions with different dilation rates. Various experiments demonstrate that our method outperforms previous methods in terms of mean intersection over union (mIoU).
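
A sketch of the multi-path idea follows, assuming a PyTorch-style module with parallel dilated 3x3 convolutions fused by concatenation; the channel count and dilation rates are illustrative, not the paper's exact configuration:

```python
# Minimal sketch of a multi-path fusion block: parallel 3x3 convolutions with
# different dilation rates, concatenated and projected back to the input width.
import torch
import torch.nn as nn

class MultiPathFusion(nn.Module):
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        # padding == dilation keeps the spatial size for a 3x3 kernel.
        self.paths = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        # Each path sees a different receptive field; fuse by concatenation.
        return self.project(torch.cat([p(x) for p in self.paths], dim=1))

x = torch.randn(1, 64, 32, 32)
print(MultiPathFusion(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```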

A Multimodal Fusion Method Based on a Rotation Invariant Hierarchical Model for Finger-based Recognition

  • Zhong, Zhen; Gao, Wanlin; Wang, Minjuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.131-146 / 2021
  • Multimodal biometric recognition has been an active topic in recent years because of its higher convenience. Owing to the high user convenience of the finger, finger-based personal identification has been widely used in practice. Hence, taking the Finger-Print (FP), Finger-Vein (FV), and Finger-Knuckle-Print (FKP) as characteristic components, their feature representations help improve the universality and reliability of identification. To effectively fuse the multimodal finger features, a new robust representation algorithm based on a hierarchical model was proposed. First, to obtain more robust features, feature maps were obtained by Gabor magnitude feature coding and then described by the Local Binary Pattern (LBP). Second, the LGBP-based feature maps were processed hierarchically in a bottom-up mode by variable rectangle and circle granules, respectively. Finally, the intensity of each granule was represented by Local-invariant Gray Features (LGFs), and the result is called Hierarchical Local-Gabor-based Gray Invariant Features (HLGGIFs). Experimental results revealed that the proposed algorithm is robust to rotation variation of the finger pose and achieves a lower Equal Error Rate (EER) on our in-house database.
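
The LBP step of this pipeline can be sketched compactly. Below is a basic 8-neighbour LBP in NumPy, assuming a grayscale map (e.g. a Gabor magnitude image) as input; the full HLGGIF construction (Gabor coding, granule pooling) is not reproduced here:

```python
# Minimal sketch of the 8-neighbour Local Binary Pattern (LBP): each pixel's
# 8 neighbours are thresholded against the centre and packed into a byte.
import numpy as np

def lbp8(image):
    """Basic 3x3 LBP code map for a 2-D grayscale array."""
    img = np.asarray(image, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

print(lbp8(np.random.randint(0, 256, (5, 5))))
```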

Improved Fusion Method of Detection Features in SAR ATR System (SAR 자동표적인식 시스템에서의 탐지특징 결합 방법 개선 방안)

  • Cha, Min-Jun; Kim, Hyung-Myung
    • Journal of the Korea Institute of Military Science and Technology / v.13 no.3 / pp.461-469 / 2010
  • In this paper, we propose an improved fusion method for detection features that can enhance the detection probability under a given false alarm rate in the prescreening stage of a SAR ATR (Synthetic Aperture Radar Automatic Target Recognition) system. Since the detection features are positively correlated, detection performance can be improved if their joint probability distribution is considered in the fusion process. The detection region is designed as a simple piecewise linear function that can be represented by a few parameters. The parameters of the detection region are derived by training on sample SAR images to maximize the detection probability at the given false alarm rate. Simulation results show that the detection performance of the proposed method is improved for all combinations of detection features.
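
The piecewise linear detection region can be illustrated with a toy sketch, assuming two scalar detection features and hand-picked breakpoints; in the paper the breakpoints would be learned from training SAR images at the target false-alarm rate:

```python
# Minimal sketch: fuse two detection features with a piecewise linear decision
# boundary instead of independent per-feature thresholds. Breakpoints are
# illustrative, not learned values from the paper.
import numpy as np

# Boundary given as (feature1, feature2) breakpoints in the 2-D feature plane.
breakpoints = np.array([[0.0, 0.9], [0.4, 0.5], [1.0, 0.2]])

def detect(f1, f2):
    """Declare a detection when (f1, f2) lies above the piecewise line."""
    threshold = np.interp(f1, breakpoints[:, 0], breakpoints[:, 1])
    return f2 > threshold

print(detect(0.2, 0.8))   # True: above the boundary
print(detect(0.2, 0.5))   # False: below it
```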

Ground Target Classification Algorithm based on Multi-Sensor Images (다중센서 영상 기반의 지상 표적 분류 알고리즘)

  • Lee, Eun-Young; Gu, Eun-Hye; Lee, Hee-Yul; Cho, Woong-Ho; Park, Kil-Houm
    • Journal of Korea Multimedia Society / v.15 no.2 / pp.195-203 / 2012
  • This paper proposes a ground target classification algorithm based on decision fusion and a feature extraction method using multi-sensor images. The decisions obtained from the individual classifiers are fused by a weighted voting method to improve the target recognition rate. To classify the targets in the individual sensor images, features robust to scale and rotation are extracted using the brightness difference of CM images obtained from the CCD image, together with the boundary similarity and the width ratio between the vehicle body and the turret of the target in the FLIR image. Finally, we verify the performance of the proposed ground target classification algorithm and feature extraction method through experiments.
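
The weighted-voting fusion step admits a minimal sketch, assuming each sensor's classifier emits a hard class decision and a scalar reliability weight; all values below are illustrative:

```python
# Minimal sketch of weighted-voting decision fusion across sensor-specific
# classifiers; the weights (e.g. per-classifier accuracies) are illustrative.
import numpy as np

def weighted_vote(decisions, weights, n_classes):
    """decisions: class index per classifier; weights: reliability per classifier."""
    scores = np.zeros(n_classes)
    for cls, w in zip(decisions, weights):
        scores[cls] += w
    return int(np.argmax(scores)), scores

# CCD classifier says class 0, FLIR classifier says class 2.
label, scores = weighted_vote([0, 2], weights=[0.7, 0.9], n_classes=3)
print(label, scores)   # 2 [0.7 0.  0.9]
```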

Feature information fusion using multiple neural networks and target identification application of FLIR image (다중 신경회로망을 이용한 특징정보 융합과 적외선영상에서의 표적식별에의 응용)

  • 선선구; 박현욱
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.4 / pp.266-274 / 2003
  • Distance Fourier descriptors of local target boundaries and feature information fusion using multiple MLPs (multilayer perceptrons) are proposed. They are used to identify non-occluded and partially occluded targets in natural FLIR (forward-looking infrared) images. After segmenting a target, radial Fourier descriptors are defined from the target boundary as global shape features. The target boundary is then partitioned into four local boundaries to extract local shape features. In each local boundary, a distance function is defined from the boundary points and the line between the two extreme points. Distance Fourier descriptors, used as local shape features, are defined from this distance function. The global feature vector and the four local feature vectors are used as input to multiple MLPs to determine the final identification result for the target. In the experiments, we show that the proposed method is superior to traditional feature sets in terms of identification performance.
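
The distance-function descriptor can be sketched as follows, assuming one local boundary segment given as (x, y) points and a chord drawn between its two extreme points; the synthetic segment and the number of retained coefficients are illustrative:

```python
# Minimal sketch of distance Fourier descriptors: measure each boundary
# point's perpendicular distance from the chord between the segment's two
# extreme points, then keep low-order Fourier magnitudes.
import numpy as np

def distance_fourier_descriptors(boundary, n_coeffs=8):
    """boundary: (N, 2) array of points along one local boundary segment."""
    p0, p1 = boundary[0], boundary[-1]
    chord = p1 - p0
    v = boundary - p0
    # Signed perpendicular distance of each point from the chord line.
    d = (chord[0] * v[:, 1] - chord[1] * v[:, 0]) / (np.linalg.norm(chord) + 1e-12)
    spec = np.abs(np.fft.fft(d))
    return spec[1:n_coeffs + 1] / (spec[0] + 1e-12)  # scale-normalized magnitudes

t = np.linspace(0, np.pi, 64)
segment = np.stack([t, 0.3 * np.sin(3 * t)], axis=1)  # synthetic boundary arc
print(distance_fourier_descriptors(segment).round(3))
```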

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin; Cheng, Peng; Chen, Shu-Qing; Wang, Cuiwei; Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.539-557 / 2013
  • A novel multifocus image fusion algorithm based on the NSCT (nonsubsampled contourlet transform) is proposed in this paper. To preserve the focusing properties and visual information of the source images in the fused image while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, according to the distribution of highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirection, and shift-invariant tool for image decomposition, which can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better fusion results than wavelet-based and contourlet-based fusion methods in terms of contrast and clarity.
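
A full NSCT is beyond a short example, so the sketch below illustrates only the coefficient-selection idea on two stand-in subbands: a generic "choose the locally more active coefficient" rule based on windowed variance, not the paper's exact LEOV formulation:

```python
# Minimal sketch of variance-driven coefficient selection: for two subbands of
# the same scale (stand-in random arrays here), pick at each pixel the
# coefficient whose local neighbourhood has higher variance. A real pipeline
# would first decompose both source images with the NSCT.
import numpy as np

def local_variance(band, k=3):
    """Variance in a k x k window around each pixel (reflect-padded edges)."""
    pad = k // 2
    padded = np.pad(band, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(-1, -2))

def fuse_subbands(a, b):
    return np.where(local_variance(a) >= local_variance(b), a, b)

a = np.random.randn(8, 8)
b = np.random.randn(8, 8)
print(fuse_subbands(a, b).shape)   # (8, 8)
```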

Language Identification by Fusion of Gabor, MDLC, and Co-Occurrence Features (Gabor, MDLC, Co-Occurrence 특징의 융합에 의한 언어 인식)

  • Jang, Ick-Hoon; Kim, Ji-Hong
    • Journal of Korea Multimedia Society / v.17 no.3 / pp.277-286 / 2014
  • In this paper, we propose a texture feature-based language identification method using a fusion of Gabor, MDLC (multi-lag directional local correlation), and co-occurrence features. In the proposed method, Gabor magnitude images are first obtained from a test image by the Gabor transform followed by the magnitude operator. Moments of the Gabor magnitude images are then computed and vectorized. MDLC images are next obtained by the MDLC operator, and their moments are computed and vectorized. A GLCM (gray-level co-occurrence matrix) is then calculated from the test image, and co-occurrence features computed from the GLCM are also vectorized. The three vectors of Gabor, MDLC, and co-occurrence features are fused into a single feature vector. In classification, the WPCA (whitened principal component analysis) classifier, which is usually adopted in face identification, searches for the training feature vector most similar to the test feature vector. We evaluate the performance of our method by examining averaged identification rates on a test document-image DB obtained by scanning documents in 15 languages. Experimental results show that the proposed method yields excellent language identification with a rather low feature dimension on the test DB.
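
The GLCM stage can be sketched in a few lines. Below is a minimal co-occurrence matrix with two classic features (contrast, energy) derived from it, assuming an 8-level quantization and a single (1, 0) pixel offset, both illustrative:

```python
# Minimal sketch of a gray-level co-occurrence matrix (GLCM) and two classic
# features derived from it; quantization level and offset are illustrative.
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized co-occurrence counts of quantized gray-level pairs."""
    g = np.zeros((levels, levels))
    q = (np.asarray(img) * levels // 256).clip(0, levels - 1)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[q[y, x], q[y + dy, x + dx]] += 1
    return g / g.sum()

img = np.random.randint(0, 256, (32, 32))
P = glcm(img)
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()     # co-occurrence "contrast" feature
energy = (P ** 2).sum()                 # co-occurrence "energy" feature
print(round(contrast, 3), round(energy, 3))
```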

Crack location in beams by data fusion of fractal dimension features of laser-measured operating deflection shapes

  • Bai, R.B.; Song, X.G.; Radzienski, M.; Cao, M.S.; Ostachowicz, W.; Wang, S.S.
    • Smart Structures and Systems / v.13 no.6 / pp.975-991 / 2014
  • The objective of this study is to develop a reliable method for locating cracks in a beam using data fusion of fractal dimension features of operating deflection shapes. Katz's fractal dimension curve of an operating deflection shape is used as a basic damage feature. Like most available damage features, Katz's fractal dimension curve has a notable limitation in characterizing damage: it is unresponsive to damage near the nodes of structural deformation responses, e.g., operating deflection shapes. To address this limitation, data fusion of Katz's fractal dimension curves of various operating deflection shapes is used to create a more sophisticated fractal damage feature, the 'overall Katz's fractal dimension curve'. This curve has the distinctive capability of overcoming the nodal effect of operating deflection shapes, maximizing both responsiveness to damage and reliability of damage localization. The method is applied to the detection of damage in numerical and experimental cases of cantilever beams with single/multiple cracks, with high-resolution operating deflection shapes acquired by a scanning laser vibrometer. Results show that the overall Katz's fractal dimension curve can locate single/multiple cracks in beams with significantly improved accuracy and reliability compared to the existing method. Data fusion of fractal dimension features of operating deflection shapes provides a viable strategy for identifying damage in beam-type structures, with robustness against nodal effects.
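
Katz's fractal dimension itself is compact enough to sketch. Below it is applied over a sliding window of a synthetic deflection shape with a small local discontinuity; the window length and test signal are illustrative, not from the paper's cases:

```python
# Minimal sketch of Katz's fractal dimension (FD) on windowed segments of a
# 1-D deflection shape; a local rise in the FD curve suggests a crack location.
import numpy as np

def katz_fd(y):
    """Katz's FD: log10(n) / (log10(n) + log10(d / L)) for a 1-D sequence."""
    pts = np.stack([np.arange(len(y)), y], axis=1)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    L = steps.sum()                                   # total curve length
    d = np.linalg.norm(pts - pts[0], axis=1).max()    # max distance from start
    n = len(steps)                                    # number of steps (L / mean step)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

x = np.linspace(0, 1, 200)
shape = np.sin(np.pi * x) + 0.02 * (x > 0.5) * np.sin(60 * np.pi * x)
fd = [katz_fd(shape[i:i + 20]) for i in range(len(shape) - 20)]
print(int(np.argmax(fd)))   # index of the most irregular window
```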

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions based on a saliency map that combines multiple extracted features. These systems differ in their methods of feature extraction and combination. This paper presents a new system that improves the feature extraction methods for color and motion, and the weight decision method for spatial and temporal features. Our system dynamically extracts the one color with the strongest response between two opponent colors, and detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration methods improve the detection rate of attention regions.
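
The dynamic weighting step can be sketched as follows, assuming each feature map is first normalized and then weighted by a simple activity measure; the peak-to-mean gap used here is an illustrative stand-in for the paper's relative-activity measure:

```python
# Minimal sketch of activity-based dynamic weighting: each normalized feature
# map is weighted by how strongly its peak stands out from its mean before
# summation into the saliency map. The activity measure is illustrative.
import numpy as np

def fuse_saliency(feature_maps):
    fused = np.zeros_like(feature_maps[0], dtype=float)
    for fmap in feature_maps:
        m = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-12)  # normalize to [0, 1]
        activity = m.max() - m.mean()                     # relative activity weight
        fused += activity * m
    return fused / (fused.max() + 1e-12)

spatial = np.random.rand(16, 16)
temporal = np.random.rand(16, 16)
print(fuse_saliency([spatial, temporal]).shape)   # (16, 16)
```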