• Title/Abstract/Keyword: Fusion Algorithm


A Novel Automatic Block-based Multi-focus Image Fusion via Genetic Algorithm

  • Yang, Yong;Zheng, Wenjuan;Huang, Shuying
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 7 / pp.1671-1689 / 2013
  • The key issue in block-based multi-focus image fusion is determining the size of the sub-block, because different sub-block sizes lead to different fusion results. To solve this problem, this paper presents a novel genetic algorithm (GA) based multi-focus image fusion method in which the block size is found automatically. In our method, the Sum-Modified-Laplacian (SML) is selected as the evaluation criterion to measure the clarity of each image sub-block, and edge information retention is employed to calculate the fitness of each individual. Then, through the selection, crossover, and mutation procedures of the GA, we obtain the optimal sub-block size, which is finally used to fuse the images. Experimental results show that the proposed method outperforms traditional methods, including averaging, the gradient pyramid, the discrete wavelet transform (DWT), the shift-invariant DWT (SIDWT), and two existing GA-based methods, in terms of both subjective visual evaluation and objective evaluation.
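The SML clarity criterion named in the abstract can be sketched in a few lines. This is a minimal illustration on a plain list-of-lists gray image; the `step` spacing is an assumption, not the paper's tuned value.

```python
# Sum-Modified-Laplacian (SML): sum of squared modified-Laplacian
# responses over a block; sharper (higher-contrast) blocks score higher.
def sml(block, step=1):
    h, w = len(block), len(block[0])
    total = 0.0
    for y in range(step, h - step):
        for x in range(step, w - step):
            # Modified Laplacian: absolute second differences are added,
            # so horizontal and vertical detail cannot cancel each other.
            ml = (abs(2 * block[y][x] - block[y][x - step] - block[y][x + step])
                  + abs(2 * block[y][x] - block[y - step][x] - block[y + step][x]))
            total += ml * ml
    return total

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]   # high-contrast block
flat = [[128] * 3 for _ in range(3)]                # defocused/flat block
print(sml(sharp) > sml(flat))  # True
```

In the paper's setting this score would be evaluated per sub-block, with the GA searching over candidate block sizes.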

Localization of Outdoor Wheeled Mobile Robots using Indirect Kalman Filter Based Sensor Fusion

  • 권지욱;박문수;김태은;좌동경;홍석교
    • 제어로봇시스템학회논문지 / Vol. 14, No. 8 / pp.800-808 / 2008
  • This paper presents a localization algorithm for an outdoor wheeled mobile robot using a sensor fusion method based on an indirect Kalman filter (IKF). The wheeled mobile robot considered in this paper is approximated by a two-wheeled mobile robot. The robot carries an IMU and encoders for inertial positioning, together with GPS. Because the IMU and encoders have bias errors, the estimated position diverges from the true position when the robot moves for a long time. Because of various natural and artificial conditions (e.g., atmospheric effects or the GPS receiver itself), GPS has a maximum error of about 10-20 m even when the robot moves for a short time. Thus, a fusion algorithm combining the IMU, encoders, and GPS is needed. For sensor fusion, we use an IKF that estimates the errors of the robot's position. Because the proposed IKF operates on position errors, it can also be applied to other autonomous agents (e.g., UAVs and UGVs). We show the stability of the proposed sensor fusion method using the fact that the error-state covariance of the IKF is bounded. To evaluate the performance of the proposed algorithm, simulation and experimental results of the IKF for the position (x-axis position, y-axis position, and yaw angle) of the outdoor wheeled mobile robot are presented.
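The indirect (error-state) formulation can be illustrated with a scalar toy filter: the state is the *error* of the dead-reckoned position, and the GPS-minus-odometry difference is the measurement. The noise values and random-walk error model below are illustrative assumptions, not the paper's.

```python
# Scalar error-state Kalman filter sketch: estimate the odometry
# position error from z = gps_pos - odom_pos.
def ikf_step(err_est, P, z, Q=0.1, R=1.0):
    P = P + Q                       # predict: error modeled as a random walk
    K = P / (P + R)                 # Kalman gain
    err_est = err_est + K * (z - err_est)   # update with the GPS/odometry residual
    P = (1 - K) * P                 # posterior covariance
    return err_est, P

# Constant 2 m odometry bias; GPS taken as noise-free for clarity.
err, P = 0.0, 1.0
for _ in range(50):
    err, P = ikf_step(err, P, z=2.0)
print(round(err, 2))  # 2.0 (the odometry bias is recovered)
```

The corrected position would then be `odom_pos - err`; the bounded covariance `P` is what the paper's stability argument rests on in the full vector case.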

Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae;Saidi, Mohammed Nabil;Aboutajdine, Driss
    • Journal of Information Processing Systems / Vol. 11, No. 3 / pp.421-437 / 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method we used is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. We first applied a decision-level fusion strategy by combining the decisions made by the SVM classifier within a sliding window. In the second strategy, fuzzy set theory and rules based on probability theory were used to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the proposed data fusion method improves classification accuracy compared to applying the SVM classifier alone. The results revealed that the overall accuracy of SVM classification of textured images is 88%, while our fusion methodology reaches up to 96%, depending on the size of the database.
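The first, decision-level strategy amounts to smoothing per-pixel labels by vote inside a sliding window. A minimal sketch (1-D row of labels for brevity; the window size is an assumption):

```python
from collections import Counter

# Majority-vote fusion of per-pixel SVM decisions over a sliding
# window: isolated misclassifications are overruled by their neighbors.
def majority_fuse(labels, win=3):
    half = win // 2
    fused = []
    for i in range(len(labels)):
        window = labels[max(0, i - half):i + half + 1]
        fused.append(Counter(window).most_common(1)[0][0])
    return fused

row = [1, 1, 2, 1, 1, 2, 2, 2]   # one spurious '2' at index 2
print(majority_fuse(row))         # [1, 1, 1, 1, 1, 2, 2, 2]
```

The paper's second strategy replaces the hard vote with a combination of SVM scores under fuzzy/probabilistic rules, but the sliding-window structure is the same.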

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen;Wang, Minjuan;Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp.4395-4412 / 2020
  • Multisource image fusion has become an active topic in the last few years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot preserve strong contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, is presented. First, the multisource images are decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse-Coupled Neural Network (IPCNN) are used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients are reconstructed into a fusion image using the inverse NSST. Finally, the shape feature is extracted using an automatic thresholding algorithm and refined using morphological operations, and the highest pig-body temperature is obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieves a 2.102-4.066% higher average accuracy rate than traditional algorithms, with improved efficiency.
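The even-symmetric Gabor filter used for the low-frequency subbands is a Gaussian envelope multiplied by a cosine carrier. A small sketch of such a kernel; the size, sigma, wavelength, and orientation are illustrative assumptions, not the paper's settings.

```python
import math

# Even-symmetric (cosine-phase) Gabor kernel: Gaussian envelope times
# a cosine along the rotated x-axis. Even phase makes the kernel
# symmetric about the origin, k[y][x] == k[-y][-x].
def even_gabor_kernel(size=5, sigma=2.0, wavelength=4.0, theta=0.0):
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotate coordinates
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = even_gabor_kernel()
print(abs(k[0][0] - k[4][4]) < 1e-12)  # True (even symmetry)
```

Convolving a low-frequency subband with a bank of such kernels at several orientations would give the texture responses the fusion rule compares.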

Multiple Target Angle Tracking Algorithm Based on Measurement Fusion

  • 류창수
    • 전자공학회논문지 IE / Vol. 43, No. 3 / pp.13-21 / 2006
  • Ryu et al. proposed an algorithm that obtains bearing measurements of targets from the signal subspace estimated from array sensor outputs and uses them to track the bearing trajectories of the targets. The bearing tracking algorithm proposed by Ryu et al. has the advantages of requiring no separate data-association filter and having a simple structure. However, although the signal subspace is continuously updated by the sensor outputs, Ryu's algorithm uses only the measurements obtained from the signal subspace at the current sampling time, and its tracking performance degrades sharply at low signal-to-noise ratios. In this paper, to improve the bearing tracking performance of Ryu's algorithm, we propose a maximum-likelihood (ML) based measurement fusion method that uses not only the measurements obtained from the signal subspace at the current sampling time but also those obtained from adjacent signal subspaces. Using the proposed measurement fusion method, we then propose a new bearing tracking algorithm with the same structure as Ryu's algorithm. The proposed algorithm retains the advantages of Ryu's algorithm while providing improved tracking performance.
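Under independent Gaussian measurement errors, the ML fusion of two bearing measurements of the same target reduces to an inverse-variance weighted mean. A minimal sketch of that step (the variance values are illustrative assumptions):

```python
# ML fusion of two bearing measurements (degrees) with known error
# variances: the fused estimate weights each measurement by the
# inverse of its variance, and the fused variance shrinks accordingly.
def ml_fuse(theta1, var1, theta2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * theta1 + w2 * theta2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)     # always below min(var1, var2)
    return fused, fused_var

theta, var = ml_fuse(30.2, 1.0, 29.6, 2.0)
print(round(theta, 2), round(var, 2))  # 30.0 0.67
```

This is why feeding the tracker measurements from adjacent signal subspaces helps at low SNR: the fused measurement has lower variance than either one alone.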

Wavelet-based Fusion of Optical and Radar Images using Gradient and Variance

  • 예철수
    • 대한원격탐사학회지 / Vol. 26, No. 5 / pp.581-591 / 2010
  • In this study, we propose a wavelet-based image fusion algorithm, which has the advantage of signal analysis in both the frequency and spatial domains. The proposed algorithm compares the relative magnitudes of the radar and optical image signals: when the radar signal is relatively large, it is assigned to the fused image; otherwise, the fused signal is determined as a weighted sum of the radar and optical signals. The fusion rule simultaneously considers the relative ratio of the two signals, the image gradient, and the local variance. In experiments using Ikonos and TerraSAR-X satellite images, the proposed method produced better fusion results in terms of entropy, image clarity, spatial frequency, and speckle index than the conventional method, which assigns only the relatively larger radar signal to the fused image.
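The core fusion rule can be sketched for a single pair of wavelet coefficients: keep the radar coefficient when it clearly dominates, otherwise blend. The threshold and weight below are illustrative assumptions (in the paper the weight would also depend on the gradient and local variance terms):

```python
# Magnitude-comparison fusion rule for one wavelet coefficient pair.
def fuse_coeff(radar, optical, ratio_thresh=1.5, w=0.5):
    if abs(radar) > ratio_thresh * abs(optical):
        return radar                          # radar signal dominates: take it
    return w * radar + (1.0 - w) * optical    # otherwise: weighted combination

print(fuse_coeff(9.0, 2.0))  # 9.0  (radar kept)
print(fuse_coeff(3.0, 4.0))  # 3.5  (weighted sum)
```

Applying this rule across all subband coefficients and inverting the wavelet transform yields the fused image.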

Development of an Obstacle Avoidance Algorithm for a Network-based Autonomous Mobile Robot

  • 김홍열;김대원;김홍석;손수경
    • 대한전기학회논문지:시스템및제어부문D / Vol. 54, No. 5 / pp.291-299 / 2005
  • An obstacle avoidance algorithm for a network-based autonomous mobile robot is proposed in this paper. The algorithm is based on the VFH (Vector Field Histogram) algorithm, and two delay compensation methods are proposed for a network-based robot with distributed environmental sensors, mobile actuators, and a VFH controller. First, the environmental sensor information is compensated by forward prediction using the acquired sensor information, the measured network delays, and the kinematic model of the robot; the compensated information is then used to build the polar histogram in the VFH algorithm. Second, a sensor fusion algorithm for localization of the robot is proposed to compensate for the delays of both the odometry and the environmental sensor information. Simulation tests show the performance improvement of the proposed algorithm in terms of efficient path generation and accurate goal positioning.
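The first compensation idea, predicting the robot pose forward over the measured network delay with the kinematic model, can be sketched with a unicycle model. The single-step Euler integration is an approximation; the paper's exact model may differ.

```python
import math

# Predict the robot pose forward over a network delay using the
# unicycle kinematic model (v: linear velocity, omega: yaw rate),
# so delayed sensor data are interpreted at the pose where they
# will actually be used.
def predict_pose(x, y, yaw, v, omega, delay):
    x += v * math.cos(yaw) * delay
    y += v * math.sin(yaw) * delay
    yaw += omega * delay
    return x, y, yaw

# Robot at origin heading +x, 1 m/s, 100 ms network delay.
print(predict_pose(0.0, 0.0, 0.0, v=1.0, omega=0.0, delay=0.1))  # (0.1, 0.0, 0.0)
```

The compensated pose is what the VFH polar histogram would be built against.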

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map

  • 김시종;안광호;성창훈;정명진
    • 로봇학회논문지 / Vol. 4, No. 4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map. Using it, the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is described in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
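The projection step at the heart of the LRF disparity map can be sketched for one point already expressed in the camera frame: pinhole projection with the intrinsics, plus the standard disparity relation d = f·b/Z. The focal length, principal point, and baseline below are illustrative assumptions.

```python
# Project an LRF 3D point (camera frame, meters) to pixel coordinates
# and compute its stereo disparity for a rectified pair.
def project(point, fx=500.0, fy=500.0, cx=320.0, cy=240.0, baseline=0.1):
    X, Y, Z = point
    u = fx * X / Z + cx            # pinhole projection, x axis
    v = fy * Y / Z + cy            # pinhole projection, y axis
    disparity = fx * baseline / Z  # d = f * b / Z for a rectified pair
    return u, v, disparity

u, v, d = project((1.0, 0.5, 5.0))
print(u, v, d)  # 420.0 290.0 10.0
```

Interpolating these per-point disparities over the image grid yields the LRF disparity map that fills in the stereo matcher's invalid regions.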


Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal / Vol. 44, No. 3 / pp.462-475 / 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features and remove redundant information. In the feature fusion stage, the GA is used to adaptively search for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT-support vector machine model is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average-weight fusion method.
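The utterance-level feature step aggregates frame-level values into fixed-length statistics. A sketch for one feature dimension; this particular set of five statistics (mean, standard deviation, min, max, range) is an assumption, as the abstract does not list the paper's exact five.

```python
import statistics

# Collapse a variable-length sequence of frame-level values into
# five utterance-level statistics, giving a fixed-length descriptor
# regardless of utterance duration.
def utterance_stats(frames):
    return {
        "mean": statistics.fmean(frames),
        "std": statistics.pstdev(frames),
        "min": min(frames),
        "max": max(frames),
        "range": max(frames) - min(frames),
    }

f0 = [110.0, 120.0, 130.0, 140.0]   # e.g. per-frame pitch values
s = utterance_stats(f0)
print(s["mean"], s["range"])  # 125.0 30.0
```

Concatenating these statistics across all deep and acoustic feature dimensions produces the vector that Fisher selection and GA-weighted fusion then operate on.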

Multisensor-Based Navigation of a Mobile Robot Using Fuzzy Inference in Dynamic Environments

  • 진태석;이장명
    • 한국정밀공학회지 / Vol. 20, No. 11 / pp.79-90 / 2003
  • In this paper, we propose a multisensor-based navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments using multiple ultrasonic sensors. Instead of a "sensor fusion" method, which generates the trajectory of the robot from an environment model and sensory data, a "command fusion" method based on fuzzy inference is used to govern the robot's motions. The major factors for robot navigation are represented as a cost function. Using the robot's state and environment data, the weight of each factor is determined by fuzzy inference to obtain an optimal trajectory in dynamic environments. To evaluate the proposed algorithm, we performed PC simulations as well as experiments with IRL-2002. The results show that the proposed algorithm identifies obstacles in unknown environments and guides the robot safely to the goal location.
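The command-fusion idea, scoring candidate motion commands with a weighted cost function, can be sketched as follows. The cost terms and the fixed weights are illustrative assumptions; in the paper the weights would come from the fuzzy inference step, adapting to the robot's state and environment.

```python
# Command fusion sketch: each candidate heading gets a cost combining
# goal-seeking (small heading error) and obstacle avoidance (large
# clearance); the lowest-cost command is selected.
def fused_cost(heading_err, clearance, w_goal, w_obs):
    return w_goal * abs(heading_err) + w_obs / max(clearance, 1e-6)

candidates = {
    0.0: 0.3,   # heading error (rad) -> clearance to nearest obstacle (m)
    0.5: 2.0,
    1.0: 3.0,
}
w_goal, w_obs = 1.0, 0.5   # weights the fuzzy rules would supply
best = min(candidates, key=lambda h: fused_cost(h, candidates[h], w_goal, w_obs))
print(best)  # 0.5 (straight ahead is blocked; a small detour wins)
```

Shifting the weights (e.g. raising `w_obs` near fast-moving obstacles) is what lets the fuzzy rules trade goal progress against safety at runtime.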