• Title/Summary/Keyword: Multimodal Medical Image


Multimodal Medical Image Registration based on Image Sub-division and Bi-linear Transformation Interpolation (영상의 영역 분할과 이중선형 보간행렬을 이용한 멀티모달 의료 영상의 정합)

  • Kim, Yang-Wook;Park, Jun
    • Journal of Biomedical Engineering Research / v.30 no.1 / pp.34-40 / 2009
  • Transforms including translation and rotation are required for registering two or more images. In medical applications, different registration methods have been applied depending on the structure: for rigid bodies such as bone structures, affine transformation has been widely used. In most previous research, a single transform was used to register the whole image, which resulted in low registration accuracy, especially when the degree of deformation between the two images was high. In this paper, a novel registration method is introduced that is based on image sub-division and bilinear interpolation of transformations. The proposed method enhanced registration accuracy by 40% compared with Trimmed ICP for registering color and MRI images.
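
A minimal sketch of the idea summarized in this abstract (image sub-division with bilinear interpolation of local transformations), assuming each sub-region corner carries its own 2D affine transform; the function names and the toy corner transforms are illustrative and not taken from the paper:

```python
import numpy as np

def bilinear_weights(u, v):
    """Bilinear weights for a point (u, v) in [0,1]^2 inside one sub-region."""
    return np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])

def blend_affines(corner_affines, u, v):
    """Blend the four 2x3 affine matrices attached to a sub-region's corners.

    corner_affines: array of shape (4, 2, 3), ordered (x0y0, x1y0, x0y1, x1y1).
    """
    w = bilinear_weights(u, v)
    return np.tensordot(w, corner_affines, axes=1)  # per-pixel 2x3 affine

def warp_point(affine, x, y):
    """Apply a 2x3 affine matrix to a 2D point."""
    return affine @ np.array([x, y, 1.0])

# Toy example: identity at three corners, a small translation at the fourth,
# so points near that corner shift smoothly while the rest barely move.
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
shifted = identity.copy()
shifted[:, 2] = [2.0, -1.0]
corners = np.stack([identity, identity, identity, shifted])
print(warp_point(blend_affines(corners, 0.9, 0.9), 10.0, 10.0))
```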

Multimodal Digital Photographic Imaging System for Total Diagnostic Analysis of Skin Lesions: DermaVision-Pro (다모드 디지털 사진 영상 시스템을 이용한 피부 손상의 진단적 분석에 대한 연구 : DermaVision-Pro)

  • Bae, Young-Woo;Kim, Eun-Ji;Jung, Byung-Jo
    • Proceedings of the KIEE Conference / 2008.10b / pp.153-154 / 2008
  • Digital photographic analysis is currently considered a routine procedure in the clinic because periodic follow-up examinations can provide meaningful information for diagnosis. However, it is impractical to evaluate all suspicious lesions separately with conventional digital photographic systems, whose imaging characteristics vary with environmental conditions. To address this issue and enable total diagnostic evaluation in the clinic, conventional systems need to be integrated. Previously, a multimodal digital photographic imaging system, which provides a conventional color image, parallel- and cross-polarization color images, and a fluorescent color image, was developed for objective evaluation of facial skin lesions. Based on our previous study, we introduce a commercial product, "DermaVision-PRO," for routine clinical use in dermatology. We characterize the system and describe the image analysis methods for objective evaluation of skin lesions. To demonstrate the validity of the system in dermatology, sample images were obtained from subjects with various skin disorders, and the image analysis methods were applied for objective evaluation of those lesions.


Accuracy Evaluation of Three-Dimensional Multimodal Image Registration Using a Brain Phantom (뇌팬톰을 이용한 삼차원 다중영상정합의 정확성 평가)

  • 진호상;송주영;주라형;정수교;최보영;이형구;서태석
    • Journal of Biomedical Engineering Research / v.25 no.1 / pp.33-41 / 2004
  • Accuracy of registration between images acquired from various medical imaging modalities is one of the critical issues in radiation treatment planning. In this study, a method for evaluating the accuracy of image registration using a homemade brain phantom was investigated. Chamfer matching of CT-MR and CT-SPECT imaging was applied for the multimodal image registration. The accuracy of image correlation was evaluated by comparing the center points of the targets inserted in the phantom. The three-dimensional root-mean-square translation deviations of the CT-MR and CT-SPECT registrations were 2.1 ± 0.8 mm and 2.8 ± 1.4 mm, respectively. The rotational errors were < 2° for the three orthogonal axes. These errors were within a reasonable margin compared with previous phantom studies. A visual inspection of the superimposed CT-MR and CT-SPECT images also showed good matching results.
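
A minimal sketch of the evaluation measure described above (3D root-mean-square deviation between corresponding target center points after registration); the array contents are made-up illustrative values, not data from the phantom study:

```python
import numpy as np

def rms_translation_deviation(reference_centers, registered_centers):
    """3D RMS deviation between corresponding target center points (N x 3 arrays, mm)."""
    diffs = np.asarray(registered_centers) - np.asarray(reference_centers)
    distances = np.linalg.norm(diffs, axis=1)       # per-target 3D deviation
    return float(np.sqrt(np.mean(distances ** 2)))  # root-mean-square over targets

# Made-up target centers (mm) purely to show the call; not values from the study.
ct_centers = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
mr_centers_registered = np.array([[11.2, 20.5, 29.1], [41.0, 48.9, 61.3]])
print(rms_translation_deviation(ct_centers, mr_centers_registered))
```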

Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning

  • Gil-Sun Hong;Miso Jang;Sunggu Kyung;Kyungjin Cho;Jiheon Jeong;Grace Yoojin Lee;Keewon Shin;Ki Duk Kim;Seung Min Ryu;Joon Beom Seo;Sang Min Lee;Namkug Kim
    • Korean Journal of Radiology / v.24 no.11 / pp.1061-1080 / 2023
  • Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.

Bone Segmentation Method of Visible Human using Multimodal Registration (다중 모달 정합에 의한 Visible Human의 뼈 분할 방법)

  • Lee, Ho;Kim, Dong-Sung;Kang, Heung-Sik
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.719-726 / 2003
  • This paper proposes a multimodal registration method for segmentation of the Visible Human color images, in which the color characteristics of bones are very similar to those of the surrounding fat areas. Bones are initially segmented in CT images and then registered to the color images to delineate their boundaries in the color images. For the segmentation of bones in the CT images, a thresholding method is developed. The registration method aligns the body boundaries in the CT and color images using a cross-correlation approach, in which the body boundaries are extracted by thresholding-based segmentation. The proposed method has been applied to segmentation of bones in the head and legs, whose boundaries are ambiguous due to surrounding fat areas with similar color characteristics, and produced promising results.
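
A minimal sketch of the boundary-alignment idea described above (threshold both modalities to obtain body masks, then find the translation maximizing the cross-correlation of the masks); SciPy-based and purely illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def body_mask(image, threshold):
    """Binary body mask obtained by simple intensity thresholding."""
    return (image > threshold).astype(float)

def best_translation(mask_fixed, mask_moving):
    """Translation (dy, dx) to apply to mask_moving so it aligns with mask_fixed."""
    corr = fftconvolve(mask_fixed, mask_moving[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return tuple(np.array(peak) - center)

# Toy example: a square "body" shifted by (+3, -2) pixels in the moving image;
# the recovered alignment translation should be about (-3, +2).
fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
moving = np.zeros((64, 64)); moving[23:43, 18:38] = 1.0
print(best_translation(body_mask(fixed, 0.5), body_mask(moving, 0.5)))
```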

List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics / v.23 no.4 / pp.309-316 / 2012
  • Multimodal imaging techniques have been rapidly developed to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy decreases due to discrepancies between multimodal images and insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve the PET image by event data resampling through analysis of the data format, noise, and statistical properties of small-animal PET list-mode data. Inveon PET list-mode data were acquired as a 10-min static acquisition 60 min after tail-vein injection of 37 MBq/0.1 ml ¹⁸F-FDG. The list-mode data format consists of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data of the original list-mode stream resampled by adjustment of LOR location, simple event magnification, and nonparametric bootstrap. Sinograms were reconstructed using the 2D OSEM algorithm with 16 subsets and 4 iterations. The prompt coincidence count was 13,940,707 as measured from the PET data header and 13,936,687 as measured from analysis of the list-event data. With simple event magnification of the PET data, the maximum value improved from 1.336 to 1.743, but noise also increased. Resampling efficiency was assessed from images de-noised and improved by a shift operation on the payload values of sequential packets. The bootstrap resampling technique provided PET images with improved noise and statistical properties. The list-event data resampling method would aid in improving registration accuracy and early-diagnosis efficiency.
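
A minimal sketch of the nonparametric bootstrap idea described above (resampling list-mode events with replacement before histogramming). The 48-bit packet split into an 8-bit header and 40-bit payload follows the abstract, while the field meanings and the toy one-dimensional binning are assumptions for illustration:

```python
import numpy as np

def split_packets(packets_uint64):
    """Split 48-bit packets (stored in uint64) into an 8-bit header and 40-bit payload."""
    headers = (packets_uint64 >> 40) & 0xFF
    payloads = packets_uint64 & ((1 << 40) - 1)
    return headers, payloads

def bootstrap_events(payloads, rng):
    """Nonparametric bootstrap: resample the event list with replacement."""
    idx = rng.integers(0, len(payloads), size=len(payloads))
    return payloads[idx]

def toy_histogram(payloads, n_bins=128):
    """Illustrative binning of payload values into a 1D histogram (stand-in for a sinogram)."""
    hist, _ = np.histogram(payloads % n_bins, bins=n_bins, range=(0, n_bins))
    return hist

rng = np.random.default_rng(0)
packets = rng.integers(0, 1 << 48, size=10_000, dtype=np.uint64)  # fake list-mode stream
_, events = split_packets(packets)
resampled = bootstrap_events(events, rng)
print(toy_histogram(events)[:5], toy_histogram(resampled)[:5])
```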

Multimodal Brain Image Registration based on Surface Distance and Surface Curvature Optimization (표면거리 및 표면곡률 최적화 기반 다중모달리티 뇌영상 정합)

  • Park Ji-Young;Choi Yoo-Joo;Kim Min-Jeong;Tae Woo-Suk;Hong Seung-Bong;Kim Myoung-Hee
    • The KIPS Transactions:PartA / v.11A no.5 / pp.391-400 / 2004
  • Among multimodal medical image registration techniques, which correlate different images and provide integrated information, surface registration methods generally minimize the surface distance between two modalities. However, since the shape features of two modalities acquired from one subject are similar, matching the two images by optimizing both surface distance and shape features can improve registration accuracy. This research proposes a registration method that optimizes the surface distance and surface curvature of two brain modalities. The registration process has two steps. First, surface information is extracted from the reference and test images. Next, the optimization process is performed. In the first step, the surface boundaries of the regions of interest are extracted from the two modalities, and a distance map and a curvature map are generated for the boundary of the reference volume image. In the optimization step, a transformation minimizing both surface distance and surface curvature difference is determined by a cost function referring to the distance map and curvature map. Applying the resulting transformation registers the test volume to the reference volume. The suggested cost function yields a more robust and accurate registration result than a cost function using surface distance only. This research also provides an efficient means of image analysis through volume visualization of the registration result.
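
A minimal sketch of the combined cost described above (surface distance plus curvature difference, both looked up from precomputed maps of the reference surface); the weighting parameter and nearest-voxel lookup are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def registration_cost(test_points, test_curvatures, distance_map, curvature_map,
                      transform, weight=1.0):
    """Sum of surface distance and curvature difference for transformed test points.

    distance_map / curvature_map: precomputed 3D maps of the reference surface.
    transform: 3x4 affine applied to homogeneous test surface points.
    weight: relative weight of the curvature term (illustrative choice).
    """
    homo = np.c_[test_points, np.ones(len(test_points))]         # N x 4
    moved = homo @ transform.T                                    # N x 3
    idx = np.clip(np.round(moved).astype(int), 0,
                  np.array(distance_map.shape) - 1)               # nearest-voxel lookup
    dist_term = distance_map[idx[:, 0], idx[:, 1], idx[:, 2]]
    curv_term = np.abs(curvature_map[idx[:, 0], idx[:, 1], idx[:, 2]] - test_curvatures)
    return float(np.sum(dist_term + weight * curv_term))
```

In an optimization loop, this cost would be evaluated for candidate transforms and minimized with any generic optimizer over the transform parameters.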

Quantitative Feasibility Evaluation of 11C-Methionine Positron Emission Tomography Images in Gamma Knife Radiosurgery : Phantom-Based Study and Clinical Application

  • Lim, Sa-Hoe;Jung, Tae-Young;Jung, Shin;Kim, In-Young;Moon, Kyung-Sub;Kwon, Seong-Young;Jang, Woo-Youl
    • Journal of Korean Neurosurgical Society / v.62 no.4 / pp.476-486 / 2019
  • Objective: The functional information of ¹¹C-methionine positron emission tomography (MET-PET) images can be applied to Gamma Knife radiosurgery (GKR), and its image quality may affect tumor definition. This study conducted a phantom-based evaluation of the geometric accuracy and functional characteristics of diagnostic MET-PET images co-registered with stereotactic images in Leksell GammaPlan® (LGP), and also investigated the clinical application of these images in metastatic brain tumors. Methods: Two types of cylindrical acrylic phantoms fabricated in-house were used for this study: a phantom with an array-shaped axial rod insert and a phantom with different-sized tube indicators. The phantoms were mounted on the stereotactic frame and scanned using computed tomography (CT), magnetic resonance imaging (MRI), and a PET system. Three-dimensional coordinate values on the co-registered MET-PET images were compared with those on the stereotactic CT image in LGP. MET uptake values of the different-sized indicators inside the phantom were evaluated. We also evaluated the CT- and MRI-co-registered stereotactic MET-PET images with MR-enhancing volume and PET metabolic tumor volume (MTV) in 14 metastatic brain tumors. Results: Imaging distortion of MET-PET remained stable at a mean value of less than approximately 3%. There was no statistical difference in geometric accuracy according to the co-registered reference stereotactic images. In the functional characteristics study of the MET-PET images, the indicator on the lateral side of the phantom exhibited higher uptake than that on the medial side; this effect decreased as the size of the object increased. In the 14 metastatic tumors, the median matching percentage between MR-enhancing volume and PET-MTV was 36.8% on PET/MR fusion images and 39.9% on PET/CT fusion images. Conclusion: The geometric accuracy of diagnostic MET-PET co-registered with stereotactic MR in LGP is acceptable in the phantom-based study. However, the MET-PET images could have limitations in providing exact stereotactic information in clinical studies.
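
A minimal sketch of a volume matching percentage between two binary volumes (e.g., MR-enhancing volume and PET-MTV); the abstract does not state its exact definition, so the intersection-over-MR-volume ratio used here is an assumption for illustration only:

```python
import numpy as np

def matching_percentage(mr_volume_mask, pet_mtv_mask):
    """Overlap of PET-MTV with the MR-enhancing volume, as a percentage of the MR volume.

    Note: the paper's exact matching definition is not reproduced here; this
    intersection-over-MR-volume ratio is only one plausible choice.
    """
    mr = mr_volume_mask.astype(bool)
    pet = pet_mtv_mask.astype(bool)
    overlap = np.logical_and(mr, pet).sum()
    return 100.0 * overlap / mr.sum()

# Toy masks purely to show the call.
mr = np.zeros((32, 32, 32), dtype=bool); mr[10:20, 10:20, 10:20] = True
pet = np.zeros((32, 32, 32), dtype=bool); pet[12:22, 10:20, 10:20] = True
print(matching_percentage(mr, pet))  # 80.0 for these toy cubes
```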

Deep Learning-Based Companion Animal Abnormal Behavior Detection Service Using Image and Sensor Data

  • Lee, JI-Hoon;Shin, Min-Chan;Park, Jun-Hee;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.27 no.10 / pp.1-9 / 2022
  • In this paper, we propose the Deep Learning-Based Companion Animal Abnormal Behavior Detection Service, which uses video and sensor data. With the recent increase in households with companion animals, the AI-based pet tech industry is growing within the existing food- and medical-oriented companion animal market. In this study, companion animal behavior was classified and abnormal behavior was detected with a deep learning model using various data for the health management of companion animals. Video data and sensor data of companion animals are collected using CCTV and a custom pet wearable device and used as input data for the model. Image data were processed by combining the YOLO (You Only Look Once) model for detecting companion animal objects with DeepLabCut for extracting joint coordinates for behavior classification. To process the sensor data, a GAT (Graph Attention Network), which can identify the correlations and characteristics of each sensor, was used.
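
A minimal sketch of the graph-attention idea mentioned above for sensor data (a single attention head over a small sensor graph, in plain NumPy); the feature sizes, adjacency, and random weights are made up for illustration and do not reproduce the paper's model:

```python
import numpy as np

def gat_layer(node_features, adjacency, weight, attn_vec):
    """Single-head graph attention: weight neighbor features by learned attention scores.

    node_features: (N, F) sensor features; adjacency: (N, N) 0/1 matrix with self-loops;
    weight: (F, F') linear projection; attn_vec: (2*F',) attention parameters.
    """
    h = node_features @ weight                                    # (N, F')
    n = h.shape[0]
    pairs = np.concatenate([np.repeat(h, n, 0), np.tile(h, (n, 1))], axis=1)
    raw = pairs @ attn_vec
    scores = np.maximum(0.2 * raw, raw).reshape(n, n)             # LeakyReLU(0.2)
    scores = np.where(adjacency > 0, scores, -1e9)                # mask non-edges
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)                     # softmax over neighbors
    return alpha @ h                                              # aggregated node features

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                                       # 4 sensors, 3 features each
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # chain graph + self-loops
out = gat_layer(x, adj, rng.normal(size=(3, 8)), rng.normal(size=(16,)))
print(out.shape)  # (4, 8)
```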

Development of the Multi-Parametric Mapping Software Based on Functional Maps to Determine the Clinical Target Volumes (임상표적체적 결정을 위한 기능 영상 기반 생물학적 인자 맵핑 소프트웨어 개발)

  • Park, Ji-Yeon;Jung, Won-Gyun;Lee, Jeong-Woo;Lee, Kyoung-Nam;Ahn, Kook-Jin;Hong, Se-Mie;Juh, Ra-Hyeong;Choe, Bo-Young;Suh, Tae-Suk
    • Progress in Medical Physics / v.21 no.2 / pp.153-164 / 2010
  • To determine clinical target volumes considering the vascularity and cellularity of tumors, software was developed for mapping the analyzed biological clinical target volumes onto anatomical images using regional cerebral blood volume (rCBV) maps and apparent diffusion coefficient (ADC) maps. The program provides functions for integrated registration using mutual information, affine transformation, and non-rigid registration. Registration accuracy is evaluated by calculating the overlap ratio of segmented bone regions and the average contour distance between the reference and registered images. The performance of the developed software was tested using multimodal images of a patient with residual high-grade glioma. A registration accuracy of about 74% and an average distance difference of 2.3 mm were obtained with the bone segmentation and contour extraction evaluation methods. The registration accuracy could be improved by up to 4% with the manual adjustment functions. Advanced MR images are analyzed using color maps for the rCBV maps and region-of-interest (ROI)-based quantitative calculation for the ADC maps. The multiple parameters of each voxel are then plotted on a plane to form multi-functional parametric maps whose x and y axes represent rCBV and ADC values. According to the distributions of the functional parameters, tumor regions showing higher vascularity and cellularity are categorized using criteria corresponding to malignant gliomas. The determined volumes, reflecting the pathological and physiological characteristics of tumors, are marked on the anatomical images. By applying the multi-functional images, errors arising from using one type of image can be reduced, and local regions with a higher probability of containing tumor cells can be determined for radiation treatment planning. Biological tumor characteristics can be expressed using image registration and multi-functional parametric maps in the developed software, which can be used to delineate clinical target volumes using advanced MR images together with anatomical images.
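
A minimal sketch of the voxel-wise multi-parametric categorization described above (treating each voxel as a point in the rCBV-ADC plane and flagging voxels that meet vascularity/cellularity criteria); the threshold values and mask logic are illustrative assumptions, not the criteria used in the paper:

```python
import numpy as np

def categorize_voxels(rcbv_map, adc_map, rcbv_threshold, adc_threshold):
    """Flag voxels with high vascularity (high rCBV) and high cellularity (low ADC).

    The thresholds are illustrative; the paper's glioma-specific criteria are not
    reproduced here.
    """
    high_vascularity = rcbv_map > rcbv_threshold
    high_cellularity = adc_map < adc_threshold        # low ADC ~ dense cellularity
    return np.logical_and(high_vascularity, high_cellularity)

# Toy maps purely to show the call; values are not physiological measurements.
rng = np.random.default_rng(1)
rcbv = rng.uniform(0.5, 5.0, size=(16, 16, 16))
adc = rng.uniform(0.4, 2.0, size=(16, 16, 16))
target_mask = categorize_voxels(rcbv, adc, rcbv_threshold=3.0, adc_threshold=1.0)
print(target_mask.sum(), "voxels flagged for the biological target volume")
```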