• Title/Abstract/Keyword: Feature Fusion

Search results: 294

Change Detection in Bitemporal Remote Sensing Images by using Feature Fusion and Fuzzy C-Means

  • Wang, Xin;Huang, Jing;Chu, Yanli;Shi, Aiye;Xu, Lizhong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 4, pp. 1714-1729, 2018
  • Change detection in remote sensing images is a long-standing challenge in remote sensing image analysis. This paper proposes a novel change detection method for bitemporal remote sensing images based on feature fusion and fuzzy c-means (FCM). Unlike state-of-the-art methods that mainly rely on a single image feature to construct the difference image, the proposed method fuses multiple image features for the task. The subsequent problem is treated as a difference-image classification problem, and a modified fuzzy c-means approach is proposed to analyze the difference image. The method was validated on real bitemporal remote sensing data sets, and the experimental results confirm its effectiveness. A minimal FCM sketch follows below.
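As a rough illustration of the clustering step, here is plain fuzzy c-means in NumPy applied to a toy single-feature difference image. This is the standard algorithm, not the authors' modified variant, and the image data and two-cluster setup are made up for the example:

```python
import numpy as np

def fcm(data, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        u_new = d ** (-2.0 / (m - 1))            # closer centers get higher membership
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return u_new, centers
        u = u_new
    return u, centers

# Toy bitemporal pair: the difference image separates changed from unchanged pixels
t1 = np.zeros((64, 64)); t2 = t1.copy(); t2[20:40, 20:40] = 0.8
diff = np.abs(t2 - t1).reshape(-1, 1)
u, centers = fcm(diff)
change_mask = (u.argmax(axis=1) == centers[:, 0].argmax()).reshape(64, 64)
```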

폴리에틸렌 배관재의 건전성 평가를 위한 어트랙터 시스템의 구축 (Construction of Attractor System by Integrity Evaluation of Polyethylene Piping Materials)

  • 황영택;오승규;이원
    • Proceedings of the KSME 2001 Spring Conference (A), pp. 609-615, 2001
  • This study proposes a method for analyzing and evaluating time-series ultrasonic signals from the fusion joints of polyethylene piping using attractor analysis. The characteristics of the fusion joint are quantitatively analyzed through features extracted from the time series. Trajectory changes in the attractor indicated a substantial difference in fractal characteristics, and these differences enable the evaluation of the unique characteristics of the fusion joint. In quantitative fractal feature extraction, feature values of 4.291 for debonding and 3.694 for bonding were proposed on the basis of fractal dimensions. In quantitative quadrant feature extraction, 1,306 points (one quadrant) for bonding and 1,209 points (one quadrant) for debonding were proposed on the same basis. The proposed attractor feature extraction can be used for integrity evaluation of polyethylene piping, whether bonded or debonded. A sketch of the underlying attractor analysis follows below.
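Attractor analysis of a time series typically rests on time-delay embedding plus a fractal dimension estimate. The sketch below shows a generic Grassberger-Procaccia style correlation-dimension estimate on a synthetic signal; the embedding dimension, delay, radii, and signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim=4, tau=5):
    """Reconstruct an attractor from a 1-D signal via time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_dimension(pts, r_vals):
    """Grassberger-Procaccia: slope of log C(r) vs log r estimates the fractal dimension."""
    d = pdist(pts)                                      # pairwise point distances
    c = np.array([(d < r).mean() for r in r_vals])      # correlation sum C(r)
    slope, _ = np.polyfit(np.log(r_vals), np.log(c + 1e-12), 1)
    return slope

t = np.linspace(0, 60, 3000)
signal = np.sin(t) + 0.5 * np.sin(2.7 * t)              # stand-in for an ultrasonic time series
attractor = delay_embed(signal)
print(correlation_dimension(attractor, np.logspace(-1.5, 0, 10)))
```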


LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성 (LFFCNN: Multi-focus Image Synthesis in Light Field Camera)

  • 김형식;남가빈;김영섭
    • Journal of the Semiconductor & Display Technology, Vol. 22, No. 3, pp. 149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. In particular, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to handle images of various scales effectively. Experimental results demonstrate that the proposed model not only fuses multi-focus images into a single all-in-focus image effectively but also offers more efficient and robust focus fusion than existing methods. An illustrative SPP sketch follows below.
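One plausible reading of the SPP module named above is the classic fixed-output spatial pyramid pooling, sketched here in PyTorch; the pooling levels (1, 2, 4) are assumptions, and LFFCNN's actual module may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP(nn.Module):
    """Classic spatial pyramid pooling: pool at several grid sizes and
    concatenate, mapping variable-size feature maps to a fixed-length vector."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                                  # x: (B, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)                    # (B, C * sum(l * l))

feats = torch.randn(2, 64, 37, 53)                         # odd input sizes are fine
print(SPP()(feats).shape)                                  # torch.Size([2, 1344])
```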


정보보안을 위한 생체 인식 모델에 관한 연구 (A Study on Biometric Model for Information Security)

  • 김준영;정세훈;심춘보
    • Journal of the Korea Institute of Electronic Communication Sciences, Vol. 19, No. 1, pp. 317-326, 2024
  • Biometric recognition is a technology that verifies a person's identity by extracting physiological and behavioral characteristics with a dedicated device. Cyber threats such as forgery, duplication, and hacking of biometric traits are increasing in this field. In response, security systems are becoming stronger and more complex, and harder for individuals to use, which has motivated research on multimodal biometric models. Prior studies have proposed feature fusion methods, but comparisons among fusion methods are scarce. This paper therefore compares and evaluates fusion methods for a multimodal biometric model using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared. In the evaluation, the EfficientNet-B7 model with 'Feature-Level' fusion achieved 98.51% accuracy with high stability. However, because the EfficientNet-B7 model is large, research on model lightweighting for biometric feature fusion is needed. A feature-level fusion sketch follows below.
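Feature-level fusion can be sketched as concatenating per-modality embeddings before a shared classifier. The backbones, embedding size, class count, and two-modality setup below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Concatenate per-modality embeddings, then classify (feature-level fusion)."""
    def __init__(self, face_net, iris_net, emb_dim=512, n_classes=100):
        super().__init__()
        self.face_net, self.iris_net = face_net, iris_net
        self.head = nn.Linear(2 * emb_dim, n_classes)

    def forward(self, face_img, iris_img):
        z = torch.cat([self.face_net(face_img), self.iris_net(iris_img)], dim=1)
        return self.head(z)

# Toy stand-ins for pretrained backbones that output 512-d embeddings
backbone = lambda: nn.Sequential(nn.Flatten(), nn.LazyLinear(512))
model = FeatureLevelFusion(backbone(), backbone())
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))  # (4, 100)
```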

Animal Fur Recognition Algorithm Based on Feature Fusion Network

  • Liu, Peng;Lei, Tao;Xiang, Qian;Wang, Zexuan;Wang, Jiwei
    • Journal of Multimedia Information System, Vol. 9, No. 1, pp. 1-10, 2022
  • China has a large animal fur industry, and total fur production and consumption are increasing year by year. However, fur recognition in the production process still relies mainly on visual identification by skilled workers, so the stability and consistency of products cannot be guaranteed. To address this problem, this paper proposes a feature fusion-based animal fur recognition network built on a typical convolutional neural network structure, drawing on rapidly developing deep learning techniques. The network superimposes texture features, the most prominent features of a fur image, onto the channel dimension of the input image. The output feature map of the first convolution layer is inverted, the inverted feature map is concatenated with the original output feature map, and Leaky ReLU is then applied for activation, making full use of the texture information of the fur image and the inverted feature information. Experimental results show that the algorithm improves recognition accuracy by 9.08% on the Fur_Recognition dataset and 6.41% on the CIFAR-10 dataset. The algorithm can replace manual visual classification in fur recognition and lays a foundation for improving the efficiency of fur production. A sketch of the inverted-feature concatenation follows below.
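The inverted-feature idea can be sketched as below, taking "inversion" to be simple negation of the first layer's output; that is only one plausible reading of the paper's operation, and the channel counts are illustrative:

```python
import torch
import torch.nn as nn

class InvertedFeatureBlock(nn.Module):
    """First conv layer whose output is concatenated with its inversion
    before a Leaky ReLU, doubling the channel dimension."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        f = self.conv(x)
        return self.act(torch.cat([f, -f], dim=1))     # (B, 2*out_ch, H, W)

x = torch.randn(1, 3, 224, 224)                        # fur image stand-in
print(InvertedFeatureBlock()(x).shape)                 # torch.Size([1, 64, 224, 224])
```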

Bio-Inspired Object Recognition Using Parameterized Metric Learning

  • Li, Xiong;Wang, Bin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 4, pp. 819-833, 2013
  • Computing global features from local features in a bio-inspired framework has shown promising performance. However, for tough applications with large intra-class variance, a single local feature is inadequate to represent all the attributes of the images. To integrate the complementary abilities of multiple local features, in this paper we extend the bio-inspired framework HMAX to adapt heterogeneous features for global feature extraction. Given multiple global features, we propose an approach, designated parameterized metric learning, for high-dimensional feature fusion, in which the fusion parameters are solved by maximizing the canonical correlation with respect to those parameters. Experimental results show that our method achieves significant improvements over the benchmark bio-inspired framework, HMAX, and other related methods on the Caltech dataset, under varying numbers of training samples and feature elements. A generic canonical-correlation fusion sketch follows below.
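The canonical-correlation objective can be illustrated with a generic CCA fusion of two feature sets. This uses scikit-learn's CCA rather than the paper's parameterized metric learning, and the data, dimensions, and component count are random stand-ins:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                  # global features from descriptor A
Y = 0.5 * X + rng.normal(size=(200, 64))        # correlated features from descriptor B

cca = CCA(n_components=16)
Xc, Yc = cca.fit_transform(X, Y)                # projections maximizing canonical correlation
fused = np.hstack([Xc, Yc])                     # one simple fused representation
print(fused.shape)                              # (200, 32)
```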

특징 융합과 공간 강조를 적용한 딥러닝 기반의 개선된 YOLOv4S (Modified YOLOv4S based on Deep learning with Feature Fusion and Spatial Attention)

  • 황범연;이상훈;이승현
    • Journal of the Korea Convergence Society, Vol. 12, No. 12, pp. 31-37, 2021
  • This paper proposes an improved YOLOv4S that applies feature fusion and spatial attention to detect small and occluded objects. The original YOLOv4S is a lightweight network whose feature extraction capability falls short of deeper networks. The proposed method first combines feature maps of different sizes via feature fusion, improving both semantic and low-level information, and expands the receptive field with dilated convolution to raise detection accuracy for small and occluded objects. Second, spatial attention refines the spatial information so that occluded objects are better separated from one another, further improving detection accuracy. The PASCAL VOC and COCO datasets were used for quantitative evaluation. Experiments show mAP gains over the original YOLOv4S of 2.7% on PASCAL VOC and 1.8% on COCO. A spatial attention sketch follows below.
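One common form of spatial attention is the CBAM-style gate sketched below, assuming a 7x7 convolution over channel-wise average and max maps; the paper's exact module may differ:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise avg/max maps -> conv -> sigmoid gate."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)              # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)             # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                                # reweight spatial positions

feats = torch.randn(2, 128, 40, 40)
print(SpatialAttention()(feats).shape)                 # torch.Size([2, 128, 40, 40])
```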

Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae;Saidi, Mohammed Nabil;Aboutajdine, Driss
    • Journal of Information Processing Systems, Vol. 11, No. 3, pp. 421-437, 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two information fusion strategies were used. The first integrates decision-level fusion by combining the decisions made by the SVM classifier within a sliding window. The second uses fuzzy set theory and probability-based rules to combine the scores obtained by the SVM over a sliding window. The performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the data fusion method improves classification accuracy over an SVM classifier alone: the overall accuracy of SVM classification of textured images is 88%, while the fusion methodology reaches up to 96%, depending on the size of the database. A DWT feature-extraction sketch follows below.
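DWT-based texture features are often taken as sub-band energies of a local patch, as in the PyWavelets sketch below; the wavelet, decomposition level, and energy statistic are assumptions, and the SVM stage is only indicated:

```python
import numpy as np
import pywt

def dwt_features(patch, wavelet="db2", level=2):
    """Energy of each DWT sub-band of an image patch as a texture descriptor."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                  # approximation-band energy
    for cH, cV, cD in coeffs[1:]:                      # per-level detail sub-bands
        feats += [np.mean(c ** 2) for c in (cH, cV, cD)]
    return np.array(feats)                             # feed to an SVM (e.g. sklearn SVC)

patch = np.random.rand(32, 32)
print(dwt_features(patch))                             # 1 + 3 * level = 7 values
```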

다중 바이오 인증에서 특징 융합과 결정 융합의 결합 (Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication)

  • 이경희
    • Journal of the Korea Institute of Information Security & Cryptology, Vol. 20, No. 5, pp. 133-138, 2010
  • This paper proposes a multi-level fusion method for multimodal biometric authentication using face and voice information that performs feature-level fusion and decision-level fusion simultaneously. A Support Vector Machine (SVM) is built on the fused face-voice feature obtained by a first-level fusion of the face and voice features; the decision of this fused-feature SVM is then fused at a second level with the decisions of the face SVM and the voice SVM to make the final authentication decision. Comparative experiments on the XM2VTS multimodal database with feature-level fusion, decision-level fusion, and multi-level fusion show that the proposed multi-level fusion yields the best performance. A combined fusion sketch follows below.
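The combination of feature-level and decision-level fusion can be sketched with three SVMs; the data are random stand-ins, and the second stage here is plain score averaging rather than the paper's second fusion classifier:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for face and voice feature matrices and accept/reject labels
rng = np.random.default_rng(0)
face, voice = rng.normal(size=(200, 32)), rng.normal(size=(200, 24))
y = rng.integers(0, 2, 200)

svm_face = SVC(probability=True).fit(face, y)
svm_voice = SVC(probability=True).fit(voice, y)
svm_fused = SVC(probability=True).fit(np.hstack([face, voice]), y)   # feature-level fusion

# Decision-level fusion: average the three classifiers' accept probabilities
p = (svm_face.predict_proba(face)[:, 1]
     + svm_voice.predict_proba(voice)[:, 1]
     + svm_fused.predict_proba(np.hstack([face, voice]))[:, 1]) / 3
accept = p > 0.5
```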

천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정 (Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion)

  • 신옥식;박찬국
    • Journal of Institute of Control, Robotics and Systems, Vol. 18, No. 1, pp. 54-61, 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs used as ceiling landmarks. Such a fusion filter generally uses the positions of feature points in the image as measurements; however, this can cause position error from the MEMS IMU bias when no camera image is available and the bias has not been properly estimated by the filter. A fusion method is therefore proposed that uses both the positions of feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the proposed method is more robust to IMU bias than the method using feature-point positions alone. A generic EKF update sketch follows below.
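At the core of such a fusion filter is the EKF measurement update, sketched generically in NumPy below. In the paper's setup the measurement z would stack landmark image positions with the optical-flow-derived camera velocity; the 1-D usage example and all dimensions here are placeholders:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update for state x with covariance P."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 1-D toy: a position state observed directly
x, P = np.array([0.0]), np.eye(1)
z, H, R = np.array([0.5]), np.eye(1), np.eye(1) * 0.1
x, P = ekf_update(x, P, z, lambda s: s, H, R)
```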