• Title/Summary/Keyword: multimodal medical image fusion


Multimodal Medical Image Fusion Based on Sugeno's Intuitionistic Fuzzy Sets

  • Tirupal, Talari;Mohan, Bhuma Chandra;Kumar, Samayamantula Srinivas
    • ETRI Journal / v.39 no.2 / pp.173-180 / 2017
  • Multimodal medical image fusion is the process of retrieving valuable information from medical images. The primary goal of medical image fusion is to combine several images obtained from various sources into a single image suitable for improved diagnosis. Medical images are highly complex, and researchers apply many soft computing methods to process them. Intuitionistic fuzzy sets are particularly appropriate for medical images because such images carry many uncertainties. In this paper, a new method based on Sugeno's intuitionistic fuzzy set (SIFS) is proposed. First, the medical images are converted into Sugeno's intuitionistic fuzzy images (SIFIs), with an exponential intuitionistic fuzzy entropy used to calculate the optimum values of the membership, non-membership, and hesitation degree functions. Then, the two SIFIs are divided into image blocks, and the counts of blackness and whiteness of the blocks are calculated. Finally, the fused image is reconstructed by recombining the SIFI image blocks. The effectiveness of SIFS for multimodal medical image fusion is demonstrated on several pairs of images, and the results are compared with recent studies in the literature.
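As a rough illustration of the Sugeno fuzzification step described in this abstract, the sketch below converts a grayscale image into membership, non-membership, and hesitation degrees. It is not the paper's exact formulation: the generator parameter lambda is fixed here for illustration, whereas the paper selects the optimum via an exponential intuitionistic fuzzy entropy.

```python
import numpy as np

def sugeno_ifs(image, lam=0.5):
    """Convert a grayscale image into a Sugeno intuitionistic fuzzy image (SIFI).

    lam (> -1) is the Sugeno generator parameter; fixed here rather than
    entropy-optimized as in the paper.
    """
    img = image.astype(float)
    # Membership: normalize intensities to [0, 1].
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Sugeno's negation gives the non-membership degree.
    nu = (1.0 - mu) / (1.0 + lam * mu)
    # Hesitation degree: what remains after membership and non-membership.
    pi = 1.0 - mu - nu
    return mu, nu, pi

mu, nu, pi = sugeno_ifs(np.array([[0, 128], [200, 255]], dtype=np.uint8))
```

By construction the three degrees sum to one at every pixel, which is the defining constraint of an intuitionistic fuzzy image.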

Multimodal Medical Image Fusion Based on Two-Scale Decomposer and Detail Preservation Model (이중스케일분해기와 미세정보 보존모델에 기반한 다중 모드 의료영상 융합연구)

  • Zhang, Yingmei;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.655-658 / 2021
  • The purpose of multimodal medical image fusion (MMIF) is to integrate images of different modalities, each carrying different details, into a single result image rich in information, which helps doctors accurately diagnose and treat patients' diseased tissues. Motivated by this goal, this paper proposes a novel method based on a two-scale decomposer and a detail preservation model. The first step uses the two-scale decomposer to decompose each source image into energy layers and structure layers, which have the characteristic of detail preservation. Then, the structure tensor operator and a max-abs rule are combined to fuse the structure layers, while the proposed detail preservation model guides the fusion of the energy layers, greatly improving image performance. The fused image is obtained by summing the two fused sub-images produced by these fusion rules. Experiments demonstrate that the proposed method outperforms state-of-the-art fusion methods.
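A generic two-scale fusion pipeline of the kind this abstract outlines can be sketched as follows. The decomposer here is a plain box filter and the energy layers are simply averaged, standing in for the paper's specific decomposer and detail preservation model; only the max-abs rule on the structure layers matches the description directly.

```python
import numpy as np

def box_blur(x, k=7):
    """Separable box filter (mode='same', so edges are slightly attenuated)."""
    kern = np.ones(k) / k
    x = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, x)
    x = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, x)
    return x

def two_scale_fuse(a, b, k=7):
    """Two-scale fusion sketch: energy = low-pass, structure = residual."""
    ea, eb = box_blur(a, k), box_blur(b, k)                       # energy layers
    sa, sb = a - ea, b - eb                                       # structure layers
    fused_structure = np.where(np.abs(sa) >= np.abs(sb), sa, sb)  # max-abs rule
    fused_energy = 0.5 * (ea + eb)                                # placeholder rule
    return fused_energy + fused_structure                         # recombine scales

rng = np.random.default_rng(0)
a, b = rng.random((32, 32)), rng.random((32, 32))
fused = two_scale_fuse(a, b)
```

A useful sanity check on any such decompose-fuse-recombine scheme is that fusing an image with itself returns that image.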

Multimodal Medical Image Fusion Based on Double-Layer Decomposer and Fine Structure Preservation Model (복층 분해기와 상세구조 보존모델에 기반한 다중모드 의료영상 융합)

  • Zhang, Yingmei;Lee, Hyo Jong
    • KIPS Transactions on Computer and Communication Systems / v.11 no.6 / pp.185-192 / 2022
  • Multimodal medical image fusion (MMIF) fuses two images, each containing different structural details generated by a different modality, into a comprehensive image with rich information, helping doctors improve the accuracy of observation and treatment of patients' diseases. To this end, a method based on a double-layer decomposer and a fine structure preservation model is proposed. Firstly, the double-layer decomposer decomposes the source images into energy layers and structure layers, which preserve details well. Secondly, the structure layers are fused by combining the structure tensor operator (STO) and a max-abs rule. For the energy layers, a fine structure preservation model is proposed to guide the fusion, further improving image quality. Finally, the fused image is obtained by adding the two sub-fused images formed by these fusion rules. Experiments demonstrate that our method performs excellently compared with several typical fusion methods.
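The structure tensor operator (STO) step mentioned here can be illustrated with a minimal per-pixel salience measure. This is a generic sketch, not the paper's exact STO: it uses the unsmoothed tensor trace Ix² + Iy² as the structural-strength weight and keeps, at each pixel, the structure-layer value from the stronger source.

```python
import numpy as np

def structure_tensor_salience(img):
    """Per-pixel trace of the structure tensor J = [[Ix², IxIy], [IxIy, Iy²]].

    The trace Ix² + Iy² measures local structural strength (no Gaussian
    smoothing of J here, for brevity).
    """
    iy, ix = np.gradient(img.astype(float))
    return ix**2 + iy**2

def sto_fuse(sa, sb):
    """Fuse two structure layers by picking the pixel with larger salience."""
    wa, wb = structure_tensor_salience(sa), structure_tensor_salience(sb)
    return np.where(wa >= wb, sa, sb)

flat = np.zeros((8, 8))                       # no structure anywhere
ramp = np.tile(np.arange(8.0), (8, 1))        # constant horizontal gradient
fused = sto_fuse(flat, ramp)
```

Because the ramp has nonzero gradient everywhere and the flat image has none, the fused structure layer should coincide with the ramp.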

MosaicFusion: Merging Modalities with Partial Differential Equation and Discrete Cosine Transformation

  • Trivedi, Gargi;Sanghavi, Rajesh
    • Journal of Applied and Pure Mathematics / v.5 no.5_6 / pp.389-406 / 2023
  • In the pursuit of enhancing image fusion techniques, this research presents a novel approach for fusing multimodal images, specifically infrared (IR) and visible (VIS) images, using a combination of partial differential equations (PDEs) and the discrete cosine transform (DCT). The proposed method leverages the thermal and structural information provided by IR imaging and the fine-grained details offered by VIS imaging to create composite images that are superior in quality and informativeness. Through a meticulous fusion process involving PDE-guided fusion, DCT component selection, and weighted combination, the methodology aims to strike a balance that optimally preserves essential features while minimizing artifacts. Rigorous objective and subjective evaluations validate the effectiveness of the approach. This research contributes to the ongoing advancement of multimodal image fusion, addressing applications in fields such as medical imaging, surveillance, and remote sensing, where the marriage of IR and VIS data is of paramount importance.
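Of the three stages this abstract names, the DCT component-selection step is the easiest to sketch in isolation. The code below builds an orthonormal DCT-II matrix and fuses two square blocks by keeping, per coefficient, the source coefficient with the larger magnitude; the PDE-guided fusion and weighted combination stages of the paper are omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)  # DC row rescaled so C is orthonormal
    return C

def dct_fuse(a, b):
    """Fuse two square grayscale blocks by per-coefficient magnitude selection."""
    C = dct_matrix(a.shape[0])
    ya, yb = C @ a @ C.T, C @ b @ C.T               # forward 2-D DCT
    y = np.where(np.abs(ya) >= np.abs(yb), ya, yb)  # pick the stronger coefficient
    return C.T @ y @ C                              # inverse 2-D DCT

rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
fused = dct_fuse(a, b)
```

Orthonormality of the DCT matrix is what makes the inverse transform a simple transpose, and it guarantees that fusing a block with itself reproduces the block.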

Multimodal Data Fusion for Alzheimer's Patients Using Dempster-Shafer Theory of Evidence

  • Majumder, Dwijesh Dutta;Bhattacharya, Nahua
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.713-718 / 1998
  • This paper is part of an investigation by the authors into the development of a knowledge-based framework for multimodal medical imaging, in collaboration with the All India Institute of Medical Sciences, New Delhi. After presenting the key aspects of the Dempster-Shafer theory of evidence, we present an implementation of the registration and fusion of T₁- and T₂-weighted MR images and CT images of the brain of an Alzheimer's patient, minimising uncertainty and increasing reliability for diagnostics and therapeutic planning.
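Dempster's rule of combination, the core of the evidence theory this entry applies, can be written in a few lines. The diagnostic frame and the mass values below are purely illustrative (two hypothetical sensors voting on "atrophy" vs. "normal"), not figures from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions keyed by frozenset focal elements.

    Products of masses on intersecting focal elements are accumulated; mass
    falling on the empty set (the conflict K) is discarded and the remainder
    renormalized by 1 - K.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two modalities (values for illustration only).
m_mri = {frozenset({"atrophy"}): 0.6, frozenset({"atrophy", "normal"}): 0.4}
m_ct = {frozenset({"atrophy"}): 0.7, frozenset({"atrophy", "normal"}): 0.3}
fused = dempster_combine(m_mri, m_ct)
```

Combining two sources that both lean toward "atrophy" concentrates belief on it (here 0.88), which is exactly the uncertainty reduction the abstract claims for fused MR/CT evidence.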


Development of a Brain Phantom for Multimodal Image Registration in Radiotherapy Treatment Planning

  • H. S. Jin;T. S. Suh;R. H. Juh;J. Y. Song;C. B. Y. Choe;Lee, H. G.;C. Kwark
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.450-453 / 2002
  • In radiotherapy treatment planning, it is critical to deliver the radiation dose to the tumor while protecting the surrounding normal tissue. Recent developments in functional imaging and radiotherapy treatment technology have increased the chances of controlling the tumor while sparing normal tissues. A brain phantom was developed that could be used for surface-matching image registration of CT-MR and CT-SPECT images. The phantom was specially designed to obtain imaging datasets from CT, MR, and SPECT. It had an external frame with four N-shaped pipes filled with acrylic and Pb rods for CT, MR, and SPECT imaging, respectively. Eight acrylic pipes were inserted into the empty space of the phantom and imaged for geometric evaluation of the matching. For optimization of the image registration, the downhill simplex algorithm was used, a method suggested for fast surface matching. Accuracy of the image fusion was assessed by comparing the center points of the sections of the N-shaped bars in the external frame with those of the inserted pipes, and by the minimized cost functions of the optimization algorithm. Partially transparent mixed images, rendered in color over gray, were used for visual assessment of the registration process. The registration errors for CT-MR and CT-SPECT were within 2 mm and 4 mm, respectively. Since these errors were considered within a reasonable margin in this phantom study, the phantom is expected to be used for conventional registration between multimodal image datasets.
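The downhill simplex (Nelder-Mead) registration described above can be sketched on a toy problem: recovering a known 2-D rigid transform that aligns a set of marker points. The marker coordinates and the cost function are hypothetical stand-ins for the phantom's N-bar fiducial centers and the paper's surface-matching cost.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical marker coordinates standing in for the N-bar fiducial centers.
fixed = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])

def transform(pts, p):
    """Rigid 2-D transform: rotation by p[0] (radians) plus translation p[1:3]."""
    c, s = np.cos(p[0]), np.sin(p[0])
    return pts @ np.array([[c, -s], [s, c]]).T + p[1:3]

# "Moving" points: the fixed markers under a known rotation and translation.
true_p = np.array([0.1, 3.0, -2.0])
moving = transform(fixed, true_p)

# Cost: mean squared distance between transformed fixed points and moving points.
cost = lambda p: np.mean((transform(fixed, p) - moving) ** 2)

# Downhill simplex search over (angle, tx, ty).
res = minimize(cost, x0=np.full(3, 0.5), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 2000})
```

The simplex method needs no gradients, which is why it suits surface-matching costs that are cheap to evaluate but awkward to differentiate.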


Quantitative Feasibility Evaluation of 11C-Methionine Positron Emission Tomography Images in Gamma Knife Radiosurgery : Phantom-Based Study and Clinical Application

  • Lim, Sa-Hoe;Jung, Tae-Young;Jung, Shin;Kim, In-Young;Moon, Kyung-Sub;Kwon, Seong-Young;Jang, Woo-Youl
    • Journal of Korean Neurosurgical Society / v.62 no.4 / pp.476-486 / 2019
  • Objective : The functional information of ¹¹C-methionine positron emission tomography (MET-PET) images can be applied to Gamma Knife radiosurgery (GKR), and image quality may affect how the tumor is defined. This study conducted a phantom-based evaluation of the geometric accuracy and functional characteristics of diagnostic MET-PET images co-registered with stereotactic images in Leksell GammaPlan® (LGP), and also investigated the clinical application of these images in metastatic brain tumors. Methods : Two types of cylindrical acrylic phantoms fabricated in-house were used : a phantom with an array-shaped axial rod insert and a phantom with different-sized tube indicators. The phantoms were mounted on the stereotactic frame and scanned using computed tomography (CT), magnetic resonance imaging (MRI), and a PET system. Three-dimensional coordinate values on the co-registered MET-PET images were compared with those on the stereotactic CT image in LGP, and MET uptake values of the different-sized indicators inside the phantom were evaluated. We also evaluated CT- and MRI-co-registered stereotactic MET-PET images against MR-enhancing volume and PET metabolic tumor volume (MTV) in 14 metastatic brain tumors. Results : Imaging distortion of MET-PET remained stable at less than approximately 3% on mean value. There was no statistical difference in geometric accuracy according to the co-registered reference stereotactic images. In the functional characteristic study, indicators on the lateral side of the phantom exhibited higher uptake than those on the medial side, and this effect decreased as the size of the object increased. In the 14 metastatic tumors, the median matching percentage between MR-enhancing volume and PET-MTV was 36.8% on PET/MR fusion images and 39.9% on PET/CT fusion images. Conclusion : The geometric accuracy of diagnostic MET-PET co-registered with stereotactic MR in LGP is acceptable in this phantom-based study. However, MET-PET images may have limitations in providing exact stereotactic information in clinical use.