• Title/Abstract/Keyword: Feature Fusion Method

165 results (processing time: 0.024 s)

Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae;Saidi, Mohammed Nabil;Aboutajdine, Driss
    • Journal of Information Processing Systems / Vol. 11, No. 3 / pp.421-437 / 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. The first integrates decision-level fusion by combining the decisions made by the SVM classifier within a sliding window. The second uses fuzzy set theory and rules based on probability theory to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the proposed data fusion method improves classification accuracy compared to applying an SVM classifier alone. The results reveal that the overall accuracy of SVM classification of textured images is 88%, while the fusion methodology obtains an accuracy of up to 96%, depending on the size of the database.
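The decision-level fusion step described in this abstract, combining SVM decisions within a sliding window, can be sketched as a majority vote over an initial per-pixel label map (a minimal NumPy illustration; the window size and label encoding are assumptions, not taken from the paper):

```python
import numpy as np

def majority_vote_fusion(label_map, window=3):
    """Smooth an initial per-pixel label map by majority vote
    within a sliding window (decision-level fusion)."""
    h, w = label_map.shape
    pad = window // 2
    padded = np.pad(label_map, pad, mode='edge')
    fused = np.empty_like(label_map)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window].ravel()
            fused[i, j] = np.bincount(patch).argmax()  # most frequent label wins
    return fused
```

A stray label surrounded by a homogeneous region is corrected by its neighbors' votes, which is why this kind of fusion reduces isolated misclassifications.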

다중 바이오 인증에서 특징 융합과 결정 융합의 결합 (Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication)

  • 이경희
    • 정보보호학회논문지 / Vol. 20, No. 5 / pp.133-138 / 2010
  • This paper proposes a multilevel fusion method for multimodal biometric authentication using face and voice information, in which feature-level fusion and decision-level fusion are performed together. After a Support Vector Machine (SVM) is trained on the face-voice fused feature obtained by a first-stage fusion of the face and voice features, the decisions of this fused-feature SVM verifier, the face SVM verifier, and the voice SVM verifier are fused again in a second stage to make the final authentication decision. Comparative experiments on the XM2VTS multimodal database with feature-level fusion, decision-level fusion, and multilevel fusion show that the proposed multilevel fusion achieves the best authentication performance.
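The two-stage scheme described above (feature-level fusion followed by a second-stage fusion of three SVM verifiers' decisions) might be sketched as follows; the sum-rule combination and the score semantics (signed distance to the SVM hyperplane) are illustrative assumptions, not details from the paper:

```python
import numpy as np

def feature_level_fuse(face_feat, voice_feat):
    """First-stage fusion: concatenate the face and voice feature vectors."""
    return np.concatenate([face_feat, voice_feat])

def multilevel_decision(face_score, voice_score, fused_score, threshold=0.0):
    """Second-stage fusion: combine the face-SVM, voice-SVM, and
    fused-feature-SVM scores by a simple sum rule; accept if positive."""
    combined = face_score + voice_score + fused_score
    return combined > threshold
```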

특징 선택과 융합 방법을 이용한 음성 감정 인식 (Speech Emotion Recognition using Feature Selection and Fusion Method)

  • 김원구
    • 전기학회논문지 / Vol. 66, No. 8 / pp.1265-1271 / 2017
  • In this paper, a speech parameter fusion method is studied to improve the performance of a conventional emotion recognition system. For this purpose, the combination of parameters that shows the best performance is selected by combining the cepstrum parameters used in the conventional system with various pitch parameters. The pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance evaluation was carried out on an emotion recognition system using a Gaussian mixture model (GMM) to select the pitch parameters that performed best in combination with the cepstrum parameters; sequential feature selection was used as the selection method. In an experiment distinguishing four emotions (neutral, joy, sadness, and anger), 15 of the 56 pitch parameters were selected and showed the best recognition performance when fused with the cepstrum and delta-cepstrum coefficients, a 48.9% reduction in error compared with an emotion recognition system using only pitch parameters.
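The sequential feature selection procedure mentioned in the abstract is a greedy forward search: at each step, add the candidate whose inclusion maximizes recognition performance. A minimal sketch, with a generic `evaluate` function standing in for the GMM recognition rate (an assumption, since the paper's evaluation pipeline is not reproduced here):

```python
def sequential_forward_selection(candidates, evaluate, k):
    """Greedy forward selection: repeatedly add the candidate feature
    whose inclusion maximizes the evaluation score, up to k features."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: evaluate(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a real system, `evaluate` would train and score the GMM-based recognizer on the chosen feature subset.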

Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang;You-Rak Choi;Tae-Jin Park;Ji-Hoon Bae
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp.673-687 / 2023
  • Convolutional neural network-based deep learning is the technology most commonly used for image identification, but it requires large-scale training data. Its application in fields where data acquisition is limited, such as the military, can therefore be challenging. In particular, the identification of ground weapon systems is a very important mission that demands high identification accuracy. Accordingly, various studies have been conducted to achieve high performance with small-scale data. Among them, the ensemble method, which achieves excellent performance by averaging the predictions of pre-trained models, is the most representative; however, it requires considerable time and effort to find the optimal combination of ensemble models, there is a performance limit to the predictions an ensemble can produce, and it is difficult to obtain the ensemble effect from models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses the features of pre-trained heterogeneous models and, finally, fine-tunes the hyperparameters of the fully connected layer to improve classification accuracy. The experimental results indicate that the limitations of existing ensemble methods can be overcome by improving classification accuracy through transfer learning-based feature fusion between heterogeneous models.
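The core fusion idea, extracting features with two frozen, heterogeneous pre-trained models and concatenating them before a fully connected classifier, can be illustrated with stand-in extractors. The random projections below are placeholders for real backbones, not the authors' models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two frozen, heterogeneous backbones (hypothetical sizes);
# a real setup would use two different pre-trained CNNs.
backbone_a = rng.standard_normal((64, 32))   # maps a 64-d input to a 32-d feature
backbone_b = rng.standard_normal((64, 48))   # maps the same input to a 48-d feature

def fuse_features(x):
    """Extract features with both frozen extractors and concatenate them;
    a trainable fully connected layer would sit on top of the fused vector."""
    fa = np.tanh(x @ backbone_a)
    fb = np.tanh(x @ backbone_b)
    return np.concatenate([fa, fb], axis=-1)  # fused 80-d vector per sample

batch = rng.standard_normal((4, 64))          # a batch of 4 flattened inputs
print(fuse_features(batch).shape)             # (4, 80)
```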

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, caused by the absorption and scattering that affect light as it propagates underwater. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information, and detail information is protected by real-time superposition with the feature information. Experimental results demonstrate that the method produces results with correct colors and complete details and outperforms existing methods on quantitative metrics.
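The channel attention mechanism referred to above is typically a squeeze-and-excitation style gate: pool each channel to a scalar, pass the result through a small bottleneck, and reweight the channels with sigmoid outputs. A minimal NumPy sketch, with hypothetical bottleneck weights `w1` and `w2` (the paper's exact module may differ):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map:
    global average pool -> ReLU bottleneck -> sigmoid gate per channel."""
    pooled = feat.mean(axis=(1, 2))            # (C,) per-channel statistic
    hidden = np.maximum(0, w1 @ pooled)        # ReLU bottleneck
    gate = 1 / (1 + np.exp(-(w2 @ hidden)))    # sigmoid weights in (0, 1)
    return feat * gate[:, None, None]          # reweight each channel
```

Channels whose gate is near 1 pass almost unchanged; uninformative channels are suppressed toward zero.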

초분광 영상 특징선택과 밴드비 기법을 이용한 유사색상의 특이재질 검출기법 (Specific Material Detection with Similar Colors using Feature Selection and Band Ratio in Hyperspectral Image)

  • 심민섭;김성호
    • 제어로봇시스템학회논문지 / Vol. 19, No. 12 / pp.1081-1088 / 2013
  • Hyperspectral cameras acquire reflectance values at many different wavelength bands, so dimensionality tends to increase because spectral information is stored in each pixel. Several attempts have been made to reduce this dimensionality problem, such as feature selection using AdaBoost and dimension reduction using simulated annealing. We propose a novel material detection method that consists of four steps: feature band selection, feature extraction, SVM (support vector machine) learning, and target and specific-region detection. It combines the band ratio method with a simulated annealing algorithm driven by the detection rate. The experimental results validate the effectiveness of the proposed feature selection and band ratio method.
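The band ratio technique named in the abstract divides one selected spectral band by another, pixel by pixel, which suppresses illumination variation shared by both bands. A minimal sketch (the band indices and the epsilon guard against division by zero are illustrative):

```python
import numpy as np

def band_ratio(cube, i, j, eps=1e-8):
    """Per-pixel ratio of band i to band j of a hyperspectral cube
    with shape (H, W, bands)."""
    return cube[..., i] / (cube[..., j] + eps)
```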

폴리에틸렌 배관재의 건전성 평가를 위한 어트랙터 시스템의 구축 (Construction of Attractor System by Integrity Evaluation of Polyethylene Piping Materials)

  • 황영택;오승규;이원
    • 대한기계학회 학술대회논문집 (KSME Conference Proceedings) / 2001 Spring Conference Proceedings A / pp.609-615 / 2001
  • This study proposes a method for analyzing and evaluating time-series ultrasonic signals from the fusion joint of polyethylene piping using attractor analysis. The characteristics of the fusion joint are analyzed quantitatively through features extracted from the time series. Trajectory changes in the attractor indicate a substantial difference in fractal characteristics, and these differences enable the evaluation of the unique characteristics of the fusion joint. In quantitative fractal feature extraction, feature values of 4.291 for the debonded case and 3.694 for the bonded case were obtained on the basis of fractal dimension. In quantitative quadrant feature extraction, 1,306 points (one quadrant) for the bonded case and 1,209 points (one quadrant) for the debonded case were obtained. The proposed attractor feature extraction can be used for the integrity evaluation of polyethylene piping, distinguishing bonded from debonded joints.
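Attractor analysis of a scalar time series, such as the ultrasonic signal here, usually begins by reconstructing a trajectory via time-delay embedding. A minimal sketch of that reconstruction step (the embedding dimension and delay below are illustrative, not the study's values):

```python
import numpy as np

def delay_embed(signal, dim=3, tau=5):
    """Reconstruct an attractor trajectory from a scalar time series by
    time-delay embedding: x(t) -> [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])
```

Fractal features such as the dimension values quoted in the abstract would then be estimated from the geometry of the embedded trajectory.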


정보보안을 위한 생체 인식 모델에 관한 연구 (A Study on Biometric Model for Information Security)

  • 김준영;정세훈;심춘보
    • 한국전자통신학회논문지
    • /
    • 제19권1호
    • /
    • pp.317-326
    • /
    • 2024
  • Biometrics is a technology that verifies a person's identity by extracting physiological and behavioral characteristics with a specific device. In the biometrics field, cyber threats such as forgery, duplication, and hacking of biometric traits are increasing. In response, security systems are becoming stronger and more complex, and harder for individuals to use. For this reason, multimodal biometric models are being studied. Existing studies have proposed feature fusion methods, but comparisons between fusion methods are lacking. Therefore, this paper compares and evaluates fusion methods for a multimodal biometric model using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared. The comparison showed that with 'Feature-Level' fusion, the EfficientNet-B7 model achieved 98.51% accuracy with high stability. However, because the EfficientNet-B7 model is large, research on model lightweighting for biometric feature fusion is needed.
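Score-level fusion, one of the four methods compared above, reduces to a weighted combination of per-modality match scores. A minimal sketch, assuming the fingerprint, face, and iris scores are already normalized to [0, 1] and defaulting to equal weights (both assumptions):

```python
import numpy as np

def score_level_fuse(scores, weights=None):
    """Score-level fusion: weighted average of normalized per-modality
    match scores (e.g., fingerprint, face, iris)."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / len(scores)  # equal weights by default
    return float(np.dot(weights, scores))
```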

Bio-Inspired Object Recognition Using Parameterized Metric Learning

  • Li, Xiong;Wang, Bin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 4 / pp.819-833 / 2013
  • Computing global features based on local features using a bio-inspired framework has shown promising performance. However, for some tough applications with large intra-class variances, a single local feature is inadequate to represent all the attributes of the images. To integrate the complementary abilities of multiple local features, in this paper we have extended the efficacy of the bio-inspired framework, HMAX, to adapt heterogeneous features for global feature extraction. Given multiple global features, we propose an approach, designated as parameterized metric learning, for high dimensional feature fusion. The fusion parameters are solved by maximizing the canonical correlation with respect to the parameters. Experimental results show that our method achieves significant improvements over the benchmark bio-inspired framework, HMAX, and other related methods on the Caltech dataset, under varying numbers of training samples and feature elements.
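The fusion parameters here are solved by maximizing canonical correlation. The quantity being maximized can be computed for two feature views with a standard QR/SVD formulation (this is the generic canonical correlation, not the paper's full parameterized method):

```python
import numpy as np

def canonical_correlation(X, Y):
    """First canonical correlation between two feature views X (n, dx)
    and Y (n, dy): center each view, orthonormalize its columns, and
    take the largest singular value of the cross-product (the cosine
    of the smallest principal angle between the subspaces)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(s[0])
```

Two views related by a linear map have a first canonical correlation near 1, which is exactly the situation a correlation-maximizing fusion seeks.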

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal and spatial facial features of the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, and a temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
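The multiplicative fusion step described above is an element-wise product of the spatial and temporal feature vectors before classification; a minimal sketch (the equal feature dimensions are assumed):

```python
import numpy as np

def multiplicative_fusion(spatial_feat, temporal_feat):
    """Element-wise product of the spatial-CNN and temporal-CNN feature
    vectors; the fused vector is then fed to an SVM classifier."""
    assert spatial_feat.shape == temporal_feat.shape
    return spatial_feat * temporal_feat
```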