• Title/Summary/Keyword: Fusion Technique

Command Fusion for Navigation of Mobile Robots in Dynamic Environments with Objects

  • Jin, Taeseok
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.24-29
    • /
    • 2013
  • In this paper, we propose a fuzzy inference model for a navigation algorithm for a mobile robot that intelligently searches for a goal location in unknown dynamic environments. Our model uses sensor fusion based on situational commands from an ultrasonic sensor. Instead of using the "physical sensor fusion" method, which generates the trajectory of the robot from the environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance within a hierarchical behavior-based control architecture. To identify the environment, a command fusion technique is introduced in which the data from the ultrasonic sensors and a vision sensor are fused in the identification process. The experimental results highlight interesting aspects of the goal-seeking, obstacle-avoiding, and decision-making processes that arise from the navigation interaction.
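
As a rough illustration of the command-fusion idea described above, the following Python sketch fuses the outputs of two behaviors, goal seeking and obstacle avoidance, by a weighted average of their steering commands. The weighting functions and all names are hypothetical stand-ins for the paper's tuned fuzzy rule base, not a reproduction of it.

```python
import math

# Minimal command-fusion sketch (assumption: simple heuristic weights stand in
# for the paper's fuzzy memberships; all names are hypothetical).

def goal_behavior(goal_bearing_rad):
    """Goal-seeking behavior: steer toward the goal; weight grows with bearing error."""
    command = goal_bearing_rad                           # desired turn toward the goal
    weight = min(1.0, abs(goal_bearing_rad) / math.pi)   # more urgent when far off-heading
    return command, max(weight, 0.1)                     # keep a small baseline weight

def avoid_behavior(obstacle_distance_m, obstacle_bearing_rad, safe_distance_m=1.0):
    """Obstacle-avoidance behavior: steer away; weight grows as the obstacle gets close."""
    command = -math.copysign(math.pi / 4, obstacle_bearing_rad)  # turn away from obstacle
    weight = max(0.0, (safe_distance_m - obstacle_distance_m) / safe_distance_m)
    return command, weight

def fuse_commands(behaviors):
    """Command fusion: weighted average of the behavior commands."""
    total_w = sum(w for _, w in behaviors)
    if total_w == 0.0:
        return 0.0
    return sum(c * w for c, w in behaviors) / total_w

# Example: goal 30 degrees to the left, obstacle 0.4 m away slightly to the left.
cmds = [goal_behavior(math.radians(30)),
        avoid_behavior(0.4, math.radians(10))]
print("fused turn command (rad):", round(fuse_commands(cmds), 3))
```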

FS-Transformer: A new frequency Swin Transformer for multi-focus image fusion

  • Weiping Jiang;Yan Wei;Hao Zhai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.7
    • /
    • pp.1907-1928
    • /
    • 2024
  • In recent years, multi-focus image fusion has emerged as a prominent area of research, and transformers have gained recognition in the field of image processing. Current approaches encounter challenges such as boundary artifacts, loss of detailed information, and inaccurate localization of focused regions, leading to suboptimal fusion outcomes that necessitate subsequent post-processing. To address these issues, this paper introduces a novel multi-focus image fusion technique leveraging the Swin Transformer architecture. The method integrates a frequency layer based on the Wavelet Transform, which improves performance compared with conventional Swin Transformer configurations. Additionally, to mitigate the lack of local detail information within the attention mechanism, Convolutional Neural Networks (CNNs) are incorporated to enhance region recognition accuracy. Comparative evaluations of various fusion methods were conducted across three datasets. The experimental findings demonstrate that the proposed model outperforms existing techniques, yielding fused images of superior quality.
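
The frequency layer in this paper is built on the Wavelet Transform. As a minimal, self-contained illustration of wavelet-domain multi-focus fusion (not the FS-Transformer itself), the sketch below uses a single-level Haar decomposition written in NumPy, averages the approximation band, and keeps the larger-magnitude detail coefficients from either input.

```python
import numpy as np

# Wavelet-domain multi-focus fusion sketch (assumption: a hand-written one-level
# Haar transform stands in for the paper's learned frequency layer).

def haar_dwt2(img):
    """One-level 2-D Haar transform of an even-sized grayscale image."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def haar_idwt2(a, h, v, d):
    """Exact inverse of haar_dwt2."""
    H, W = a.shape
    img = np.zeros((2 * H, 2 * W))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 0::2] = a - h + v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def fuse_multifocus(img1, img2):
    """Average the approximation band; keep the larger-magnitude detail coefficients."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return haar_idwt2(*fused)

# Example with two synthetic 8x8 "images".
rng = np.random.default_rng(0)
img_a, img_b = rng.random((8, 8)), rng.random((8, 8))
print(fuse_multifocus(img_a, img_b).shape)  # (8, 8)
```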

Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method (2단계 분광혼합기법 기반의 하이퍼스펙트럴 영상융합 알고리즘)

  • Choi, Jae-Wan;Kim, Dae-Sung;Lee, Byoung-Kil;Yu, Ki-Yun;Kim, Yong-Il
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.4
    • /
    • pp.295-304
    • /
    • 2006
  • Image fusion is defined as creating a new image by merging two or more images using dedicated algorithms. In remote sensing, it typically means fusing a low-resolution multispectral image with a high-resolution panchromatic image. Generally, hyperspectral image fusion is accomplished using either a multispectral fusion technique or a spectral unmixing model. However, the former may distort spectral information, while the latter requires endmember or other additional data and does not preserve spatial information well. This study proposes a new algorithm based on a two-stage spectral unmixing model that preserves the hyperspectral image's spectral information. The proposed fusion technique is implemented and tested using Hyperion and ALI images, and it is shown to preserve more spatial and spectral information than the PCA and GS fusion algorithms.
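
A heavily simplified sketch of unmixing-based fusion follows, assuming known endmember spectra, a scale factor of 2, and a crude pan-weighted redistribution of abundances. It illustrates the general two-stage idea only and is not the algorithm evaluated in the paper; all data and dimensions are hypothetical.

```python
import numpy as np

# Two-stage unmixing-based fusion sketch (illustrative assumptions throughout).

def unmix(hs_lowres, endmembers):
    """Stage 1: per-pixel abundances via least squares, clipped to be non-negative."""
    H, W, B = hs_lowres.shape
    pixels = hs_lowres.reshape(-1, B).T                  # bands x pixels
    abund, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
    abund = np.clip(abund, 0, None)
    abund /= abund.sum(axis=0, keepdims=True) + 1e-12    # approximate sum-to-one constraint
    return abund.T.reshape(H, W, -1)

def fuse(hs_lowres, pan_highres, endmembers, scale=2):
    """Stage 2: spread each coarse abundance over its pan-weighted fine pixels."""
    abund_lr = unmix(hs_lowres, endmembers)
    H, W, _ = abund_lr.shape
    out = np.zeros((H * scale, W * scale, endmembers.shape[0]))
    for i in range(H):
        for j in range(W):
            block = pan_highres[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
            w = block / (block.mean() + 1e-12)            # spatial detail from the pan image
            fine_abund = abund_lr[i, j][None, None, :] * w[:, :, None]
            out[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = fine_abund @ endmembers.T
    return out

# Example: 4x4 hyperspectral cube with 6 bands, 3 endmembers, 8x8 pan image.
rng = np.random.default_rng(1)
E = rng.random((6, 3))                                    # bands x endmembers
hs, pan = rng.random((4, 4, 6)), rng.random((8, 8))
print(fuse(hs, pan, E).shape)                             # (8, 8, 6)
```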

Real Time Motion Processing for Autonomous Navigation

  • Kolodko, J.;Vlacic, L.
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.156-161
    • /
    • 2003
  • An overview of our approach to autonomous navigation is presented, showing how motion information can be integrated into existing navigation schemes. Particular attention is given to our short-range motion estimation scheme, which exploits a number of assumptions about the nature of the visual environment to allow a direct fusion of visual and range information. Graduated non-convexity is used to solve the resulting non-convex minimisation problem. Experimental results show the advantages of our fusion technique.
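
Graduated non-convexity (GNC) itself can be illustrated on a toy problem. The sketch below robustly estimates a scalar from data with outliers using a Geman-McClure-style cost, starting from a nearly quadratic surrogate and gradually tightening it; the paper applies GNC to its visual/range motion-estimation cost, not to this toy example, and the parameter values here are assumptions.

```python
import numpy as np

# Minimal GNC sketch: robust location estimation with a Geman-McClure-style cost.

def gnc_estimate(data, c=1.0, mu0=1e3, mu_min=1.0, shrink=0.5, iters=20):
    x = float(np.mean(data))            # convex (least-squares) initialisation
    mu = mu0                            # large mu ~ quadratic cost, small mu ~ robust cost
    while mu >= mu_min:
        for _ in range(iters):          # re-weighted least squares at the current mu
            r = data - x
            w = (mu * c**2 / (mu * c**2 + r**2))**2   # GNC Geman-McClure weights
            x = float(np.sum(w * data) / np.sum(w))
        mu *= shrink                    # graduate toward the non-convex objective
    return x

# Example: inliers around 2.0 plus gross outliers.
rng = np.random.default_rng(2)
samples = np.concatenate([2.0 + 0.05 * rng.standard_normal(50),
                          np.array([15.0, 18.0, -20.0])])
print("mean:", round(float(samples.mean()), 2),
      " GNC estimate:", round(gnc_estimate(samples), 2))
```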

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.332-339
    • /
    • 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges, owing to focusing problems and motion blurring. Multiple frames under varying spatial or temporal settings can acquire additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process, after the scores are normalized. The score validation process removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, and majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion blurring point-spread functions are applied to the test images, to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
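
The three fusion stages described above (score normalization, score validation, score combination) can be sketched directly. The snippet below is a minimal NumPy illustration with min-max normalization, a simple z-score validation rule, and the three combination rules; the threshold and the per-frame score matrix are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Decision-level fusion sketch: normalize, validate, then combine per-frame scores.

def normalize(scores):
    """Min-max normalise each frame's class scores to [0, 1]."""
    s = scores - scores.min(axis=1, keepdims=True)
    return s / (s.max(axis=1, keepdims=True) + 1e-12)

def validate(scores, z_thresh=2.0):
    """Keep frames whose best score is not an outlier among all frames' best scores."""
    best = scores.max(axis=1)
    z = (best - best.mean()) / (best.std() + 1e-12)
    return scores[np.abs(z) <= z_thresh]

def combine(scores, rule="average"):
    """Fuse the validated per-frame scores into one class decision."""
    if rule == "max":
        fused = scores.max(axis=0)
    elif rule == "average":
        fused = scores.mean(axis=0)
    elif rule == "vote":
        fused = np.bincount(scores.argmax(axis=1),
                            minlength=scores.shape[1]).astype(float)
    else:
        raise ValueError(rule)
    return int(np.argmax(fused))

# Example: 5 frames x 3 classes of raw similarity scores, class 1 being the true identity.
rng = np.random.default_rng(3)
raw = rng.random((5, 3)); raw[:, 1] += 0.5
cand = validate(normalize(raw))
print({r: combine(cand, r) for r in ("max", "average", "vote")})
```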

Exploring the Effect of Replacement Levels on Data Fusion Methods : A Monte Carlo Simulation Approach (자료융합방법의 성과에 대체수준이 미치는 영향에 관한 연구 : 몬테카를로 시뮬레이션 접근방법)

  • 김성호;조성빈;백승익
    • Korean Management Science Review
    • /
    • v.19 no.1
    • /
    • pp.129-142
    • /
    • 2002
  • Data fusion is a technique for creating an integrated database by combining two or more databases that contain different sets of variables or attributes. This paper applies data fusion to customer relationship management (CRM), so that a database structure can be planned and customer data collected and managed more efficiently. Our study is particularly useful when no single database is complete, i.e., when every subject in the pre-integrated database has some missing observations. Depending on how the common variables are treated, donors can be selected differently to substitute for the missing attributes of recipients. One way is to treat the common variables metrically and find the donor with the highest correlation coefficient with the recipient. The other treats the common variables nominally and selects the donor at the closest distance obtained from a correspondence analysis. The predictive power of data fusion for CRM can be evaluated by measuring the correlation between the original database and the substituted one. A Monte Carlo simulation analysis is used to examine the stability of the two substitution methods in building an integrated database.
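
For the metric case described above, a minimal sketch of donor selection follows: each recipient is matched to the donor with the highest correlation over the common variables, and that donor's unique variables are copied. The data and dimensions are hypothetical, and the nominal case based on correspondence analysis is not shown.

```python
import numpy as np

# Data-fusion sketch: donor selection by maximum correlation over common variables.

def fuse_databases(donors_common, donors_unique, recipients_common):
    """Return imputed donor-only variables for every recipient record."""
    fused = np.empty((recipients_common.shape[0], donors_unique.shape[1]))
    for i, rec in enumerate(recipients_common):
        # Pearson correlation between this recipient and every donor over common variables.
        corr = [np.corrcoef(rec, don)[0, 1] for don in donors_common]
        fused[i] = donors_unique[int(np.argmax(corr))]
    return fused

# Example: 6 donors and 4 recipients sharing 5 common variables; donors carry 2 extra ones.
rng = np.random.default_rng(4)
don_common = rng.random((6, 5))
don_unique = rng.random((6, 2))
rec_common = rng.random((4, 5))
print(fuse_databases(don_common, don_unique, rec_common))
```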

Multi-Scale Dilation Convolution Feature Fusion (MsDC-FF) Technique for CNN-Based Black Ice Detection

  • Sun-Kyoung KANG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.3
    • /
    • pp.17-22
    • /
    • 2023
  • In this paper, we propose a black ice detection system based on Convolutional Neural Networks (CNNs). Black ice poses a serious threat to road safety, particularly during winter conditions. To address this problem, we introduce a CNN-based encoder-decoder architecture specifically designed for real-time black ice detection using thermal images. To train the network, we establish a specialized experimental platform to capture thermal images of various black ice formations on diverse road surfaces, including cement and asphalt, which enables us to curate a comprehensive dataset of thermal road black ice images for training and evaluation. Additionally, to enhance the accuracy of black ice detection, we propose a multi-scale dilation convolution feature fusion (MsDC-FF) technique, which dynamically adjusts the dilation ratios based on the input image's resolution, improving the network's ability to capture fine-grained details. Experimental results demonstrate the superior performance of our proposed network model compared to conventional image segmentation models: our model achieved an mIoU of 95.93%, while LinkNet achieved 95.39%. The proposed model could therefore offer a promising solution for real-time black ice detection, enhancing road safety during winter conditions.
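
A minimal PyTorch sketch of a multi-scale dilated-convolution feature-fusion block is given below. The dilation rates and channel counts are illustrative assumptions; the paper's resolution-dependent choice of dilation ratios and its full encoder-decoder network are not reproduced.

```python
import torch
import torch.nn as nn

# Multi-scale dilated-convolution feature-fusion block (illustrative parameters).

class MsDCFusion(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation; padding keeps spatial size.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Example: a batch of 1-channel 64x64 thermal-like images.
block = MsDCFusion(in_ch=1, out_ch=16)
print(block(torch.randn(2, 1, 64, 64)).shape)   # torch.Size([2, 16, 64, 64])
```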

AGV Navigation Using a Space and Time Sensor Fusion of an Active Camera

  • Jin, Tae-Seok;Lee, Bong-Ki;Lee, Jang-Myung
    • Journal of Navigation and Port Research
    • /
    • v.27 no.3
    • /
    • pp.273-282
    • /
    • 2003
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this approach, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulation. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in an indoor environment, and the performance is demonstrated by real experiments.
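
The idea of fusing past and current readings rather than adding sensors can be sketched in one dimension: previous range measurements are shifted into the current robot frame using odometry and fused by inverse-variance weighting, with older readings trusted less. This is only an illustration of the space-and-time idea under assumed noise parameters, not the STSF scheme itself.

```python
import numpy as np

# Space-and-time fusion sketch in 1-D (illustrative noise model and data).

def fuse_over_time(readings, base_var=0.01, age_var=0.005):
    """readings: list of (range_m, distance_travelled_since_m, age_steps)."""
    transformed, weights = [], []
    for range_m, travelled_m, age in readings:
        transformed.append(range_m - travelled_m)     # express old reading in current frame
        weights.append(1.0 / (base_var + age_var * age))  # older readings get lower weight
    transformed, weights = np.array(transformed), np.array(weights)
    fused = float(np.sum(weights * transformed) / np.sum(weights))
    fused_var = float(1.0 / np.sum(weights))
    return fused, fused_var

# Example: three ultrasonic readings of the same obstacle taken over the last moments.
history = [(2.10, 0.30, 2),   # oldest reading; robot has moved 0.30 m since
           (1.95, 0.15, 1),
           (1.82, 0.00, 0)]   # current reading
print(fuse_over_time(history))
```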

HY Simplified Synthetic Test Facility (한양대학교 간이차단 합성시험설비구축)

  • Chang, Yong-Moo;Hwang, Ryul;Kim, Cheol-Ho;Lee, Bang-Wook;Koo, Ja-Yoon
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2009.11a
    • /
    • pp.220-220
    • /
    • 2009
  • We are developing an evaluation technique and system for testing the performance of circuit breakers using a simplified synthetic test circuit. The facility is rated for up to 90 kA (peak) and up to 300 kV (peak). It makes it possible to verify interrupting capability with low energy and to reduce both the development period and the cost.

Development of a Monitoring and Verification Tool for Sensor Fusion (센서융합 검증을 위한 실시간 모니터링 및 검증 도구 개발)

  • Kim, Hyunwoo;Shin, Seunghwan;Bae, Sangjin
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.3
    • /
    • pp.123-129
    • /
    • 2014
  • SCC (Smart Cruise Control) and AEBS (Autonomous Emergency Braking System) use various types of sensor data, so it is important to consider the reliability of that data. In this paper, data from a radar and a vision sensor are fused by applying a Bayesian sensor fusion technique to improve the reliability of the sensor data. The paper then presents a sensor fusion verification tool developed to monitor the acquired sensor data and to verify the sensor fusion results efficiently. A parallel computing method was applied to reduce the verification time, and a series of simulation results of this method are discussed in detail.
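
As a minimal illustration of Bayesian fusion of radar and vision data, the sketch below combines two independent Gaussian range measurements with the standard product-of-Gaussians update; the sensor variances are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Bayesian fusion of two independent Gaussian measurements of the same range.

def bayes_fuse(mu_radar, var_radar, mu_vision, var_vision):
    """Posterior mean and variance after fusing radar and vision measurements."""
    k = var_radar / (var_radar + var_vision)          # gain toward the vision measurement
    mu = mu_radar + k * (mu_vision - mu_radar)
    var = (1.0 / var_radar + 1.0 / var_vision) ** -1
    return mu, var

# Example: radar reports 25.0 m (low noise), vision reports 26.2 m (higher noise).
mu, var = bayes_fuse(25.0, 0.2**2, 26.2, 0.8**2)
print(f"fused range: {mu:.2f} m, std: {np.sqrt(var):.2f} m")
```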