• Title/Summary/Keyword: Multi-modality fusion

10 search results

Multi-modality image fusion via generalized Riesz-wavelet transformation

  • Jin, Bo; Jing, Zhongliang; Pan, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4118-4136 / 2014
  • To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images. The proposed method can capture arbitrarily oriented image structure by exploiting a suitably parameterized fusion model and additional structural information. Its fusion patterns are controlled by a heuristic fusion model based on image phase and coherence features, which explores and preserves structural information efficiently and consistently. A performance analysis on real-world images demonstrates that the proposed method is competitive with state-of-the-art fusion methods, especially in combining structural information.

Interruption in Digital Convergence: Focused on Multi-Modality and Multi-Tasking

  • Lee, Ki-Ho; Jung, Seung-Ki; Kim, Hye-Jin; Lee, In-Seong; Kim, Jin-Woo
    • Journal of the Ergonomics Society of Korea / v.26 no.3 / pp.67-80 / 2007
  • Digital convergence, defined as the creative fusion of once-independent technologies and services, has been receiving growing attention. Interruptions among internal functions happen frequently in digital convergence products because many functions that used to reside in separate products are merged into a single product. Multi-tasking and multi-modality are two distinctive features of interruption in digital convergence products, but their impact on users has not yet been investigated. This study conducted a controlled experiment to investigate the effects of multi-tasking and multi-modality on the subjective satisfaction and objective performance of digital convergence products. The results indicate that multi-tasking and multi-modality have substantial effects both individually and jointly. The paper concludes with practical and theoretical implications of the results, as well as research limitations and directions for future work.

Human Action Recognition Via Multi-modality Information

  • Gao, Zan; Song, Jian-Ming; Zhang, Hua; Liu, An-An; Xue, Yan-Bing; Xu, Guang-Ping
    • Journal of Electrical Engineering and Technology / v.9 no.2 / pp.739-748 / 2014
  • In this paper, we propose pyramid appearance and global-structure action descriptors on both RGB and depth motion history images (MHIs), together with a model-free method for human action recognition. The proposed algorithm first constructs motion history images for both the RGB and depth channels, using the depth information to filter the RGB information. Different action descriptors are then extracted from the depth and RGB MHIs to represent the actions, and a multi-modality collaborative representation and recognition model is proposed to classify human actions; in this model, the multi-modality information enters the objective function naturally, so that information fusion and action recognition are performed jointly. To demonstrate the superiority of the proposed method, we evaluate it on the MSR Action3D and DHA datasets, well-known benchmarks for human action recognition. Large-scale experiments show that our descriptors are robust, stable, and efficient, outperforming state-of-the-art algorithms; furthermore, the combined descriptors perform much better than any single descriptor, and the proposed model outperforms state-of-the-art methods on both datasets.
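The motion history images underlying these descriptors can be illustrated with a minimal NumPy sketch. This is a generic MHI update, not the paper's implementation; the duration `tau` and difference threshold `thresh` are illustrative values.

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=30, thresh=25):
    """Update a motion history image (MHI) with one new frame.

    Pixels where motion is detected (frame difference above `thresh`)
    are set to the maximum duration `tau`; all other pixels decay by 1.
    """
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy sequence: a bright square moving one pixel to the right per frame.
frames = []
for t in range(5):
    f = np.zeros((16, 16), dtype=np.uint8)
    f[4:8, 4 + t:8 + t] = 255
    frames.append(f)

mhi = np.zeros((16, 16), dtype=int)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, cur, prev)

# The most recent motion holds the largest values; older motion has decayed.
print(mhi.max())  # 30
```

Running the same update on a depth stream yields the depth MHI; descriptors are then extracted from both images.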

FLIR and CCD Image Fusion Algorithm Based on Adaptive Weight for Target Extraction

  • Gu, Eun-Hye; Lee, Eun-Young; Kim, Se-Yun; Cho, Woon-Ho; Kim, Hee-Soo; Park, Kil-Houm
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.291-298 / 2012
  • In automatic target recognition (ATR) systems, target extraction techniques are very important because ATR performance depends on the segmentation result. This paper therefore proposes a multi-sensor image fusion method based on adaptive weights. To combine the FLIR and CCD images, we use information such as bi-modality, distance, and texture. The weight for the FLIR image is derived from the bi-modality and distance measures; the weight for the CCD image exploits the fact that the target's texture is more uniform than that of the background region. The proposed algorithm is applied to many images, and its performance is compared with segmentation results obtained from a single image. Experimental results show that the proposed method achieves accurate extraction performance.
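The final blending step of such an adaptive-weight fusion can be sketched as follows. The weight map here is a placeholder constant; in the paper it would be computed per pixel from the bi-modality, distance, and texture measures.

```python
import numpy as np

def fuse_adaptive(flir, ccd, w_flir):
    """Per-pixel weighted fusion of FLIR and CCD images.

    `w_flir` is a weight map in [0, 1]; the CCD image receives the
    complementary weight, so the two contributions always sum to 1.
    """
    return w_flir * flir.astype(float) + (1.0 - w_flir) * ccd.astype(float)

flir = np.full((4, 4), 200.0)   # hot target dominates the FLIR image
ccd = np.full((4, 4), 100.0)    # textured background in the CCD image
w = np.full((4, 4), 0.75)       # trust FLIR more near the target
fused = fuse_adaptive(flir, ccd, w)
print(fused[0, 0])  # 175.0
```

Because the weights are per pixel, the fused image can favor FLIR near the target and CCD in the textured background.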

Development of a multi-modal imaging system for single-gamma and fluorescence fusion images

  • Young Been Han; Seong Jong Hong; Ho-Young Lee; Seong Hyun Song
    • Nuclear Engineering and Technology / v.55 no.10 / pp.3844-3853 / 2023
  • Although radiation and chemotherapy methods for cancer therapy have advanced significantly, surgical resection is still recommended for most cancers. Intraoperative imaging studies have therefore emerged as a surgical tool for identifying tumor margins. Intraoperative imaging has been examined using conventional imaging devices, such as optical near-infrared probes, gamma probes, and ultrasound devices. However, each modality has limitations, such as depth penetration and spatial resolution. To overcome these limitations, hybrid imaging modalities and tracer studies are being developed. In a previous study, a multi-modal laparoscope with silicon photomultiplier (SiPM)-based gamma detection acquired gamma images at 1 s intervals. However, improvements in the near-infrared fluorophore (NIRF) signal intensity and in the central defects of the gamma image are needed to further evaluate the usefulness of multi-modal systems. In this study, the NIRF image acquisition method and the SiPM-based gamma detector were modified to improve source detection and reduce image acquisition time. The performance of the multi-modal system using a complementary metal oxide semiconductor sensor and the modified SiPM gamma detector was evaluated in a phantom test. In future studies, the multi-modal system will be further optimized for pilot preclinical studies.

Multimodality and Application Software

  • Im, Ki-Chun
    • Nuclear Medicine and Molecular Imaging / v.42 no.2 / pp.153-163 / 2008
  • Medical imaging modalities for imaging either anatomical structure or functional processes have developed along somewhat independent paths. Functional imaging with single photon emission computed tomography (SPECT) and positron emission tomography (PET) plays an increasingly important role in the diagnosis and staging of malignant disease, image-guided therapy planning, and treatment monitoring. SPECT and PET complement the more conventional anatomic imaging modalities of computed tomography (CT) and magnetic resonance (MR) imaging. When a functional imaging modality is combined with an anatomic imaging modality, the multimodality system can help both identify and localize functional abnormalities. Combining PET with a high-resolution anatomical modality such as CT can resolve the localization issue, as long as the images from the two modalities are accurately coregistered. Software-based registration techniques have difficulty accounting for differences in patient positioning and involuntary movement of internal organs, often necessitating labor-intensive nonlinear mapping that may not converge to a satisfactory result. These challenges have recently been addressed by the introduction of the combined PET/CT and SPECT/CT scanners, a hardware-oriented approach to image fusion. Combined PET/CT and SPECT/CT devices are playing an increasingly important role in the diagnosis and staging of human disease. This paper reviews the development of multi-modality instrumentation for clinical use, from conception to present-day technology, together with its application software.

Heterogeneous Face Recognition Using Texture Feature Descriptors

  • Bae, Han Byeol; Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.3 / pp.208-214 / 2021
  • Recently, many intelligent security and criminal investigation scenarios have demanded matching between photo and non-photo images, and existing face recognition systems cannot sufficiently meet these needs. In this paper, we propose an algorithm that improves the performance of heterogeneous face recognition systems by reducing the modality gap between sketches and photos of the same person. The proposed algorithm extracts each image's texture features through texture descriptors (the gray-level co-occurrence matrix and the multiscale local binary pattern) and, based on these, generates a transformation matrix through eigenfeature regularization and extraction techniques. A score computed between the resulting vectors, after score normalization, finally determines the identity of the sketch image.
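The gray-level co-occurrence matrix mentioned above can be computed with a few lines of NumPy. This is a textbook GLCM for a single horizontal offset, not the multi-offset, multi-scale setup the paper would use; the quantization to four gray levels is illustrative.

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often gray level i occurs next to gray level j, then
    normalizes the counts into a joint probability matrix.
    """
    mat = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat / mat.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)

# One classic texture feature from the GLCM: contrast = sum p(i,j)*(i-j)^2
i, j = np.indices(p.shape)
contrast = (p * (i - j) ** 2).sum()
print(round(contrast, 3))  # 0.583
```

Scalar features such as contrast, energy, and homogeneity derived from the GLCM form the texture feature vector that is then fed into the regularization step.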

High-Frequency Interchange Network for Multispectral Object Detection

  • Park, Seon-Hoo; Yun, Jun-Seok; Yoo, Seok Bong; Han, Seunghwoi
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1121-1129 / 2022
  • Object recognition is carried out using RGB images in various object recognition studies. However, RGB images captured in dark environments, or in environments where target objects are occluded by other objects, lead to poor recognition performance. IR images, on the other hand, provide strong recognition performance in these environments because they sense infrared radiation rather than visible light. In this paper, we propose an RGB-IR fusion model, the high-frequency interchange network (HINet), which improves object recognition performance by combining only the strengths of RGB-IR image pairs. HINet connects two object detection models through mutual high-frequency transfer (MHT), which interchanges the advantages of the RGB and IR images. MHT converts each image of an RGB-IR pair into the discrete cosine transform (DCT) spectral domain to extract high-frequency information, which is then transmitted to the other network and used to improve recognition performance. Experimental results show the superiority of the proposed network and demonstrate a performance improvement on the multispectral object recognition task.
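The DCT-based high-frequency extraction that MHT performs can be sketched as follows. This is a generic spectral filter, not the paper's exact module; the size of the suppressed low-frequency corner (`keep_low`) is an assumed parameter.

```python
import numpy as np
from scipy.fft import dctn, idctn

def high_frequency(image, keep_low=8):
    """Extract the high-frequency component of an image via the 2D DCT.

    Zeroing out the low-frequency corner of the DCT spectrum and
    inverting leaves mostly edges and fine detail -- the kind of
    information exchanged between the RGB and IR branches.
    """
    spec = dctn(image, norm='ortho')
    spec[:keep_low, :keep_low] = 0.0   # suppress low frequencies
    return idctn(spec, norm='ortho')

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
hf = high_frequency(img)

# The residual (img - hf) is exactly the removed low-frequency content.
low = img - hf
print(hf.shape)
```

In the full network, the extracted high-frequency map from one modality would be injected into the other modality's detection branch rather than simply printed.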

Facile Fabrication of Animal-Specific Positioning Molds For Multi-modality Molecular Imaging

  • Park, Jeong-Chan; Oh, Ji-Eun; Woo, Seung-Tae; Kwak, Won-Jung; Lee, Jeong-Eun; Kim, Kyeong-Min; An, Gwang-Il; Choi, Tae-Hyun; Cheon, Gi-Jeong; Chang, Young-Min; Lee, Sang-Woo; Ahn, Byeong-Cheol; Lee, Jae-Tae; Yoo, Jeong-Soo
    • Nuclear Medicine and Molecular Imaging / v.42 no.5 / pp.401-409 / 2008
  • Purpose: Recently, multi-modal imaging systems have become widely adopted in molecular imaging. We fabricated animal-specific positioning molds for PET/MR fusion imaging using easily available molding clay and rapid foam. The molds provide immobilization and reproducible positioning of small animals. Here, we compare fiber-based molding clay with rapid foam for fabricating molds of the experimental animal. Materials and Methods: A round-bottomed acrylic frame that fitted into the microPET gantry was prepared first. The experimental mouse was anesthetized and placed on the mold for positioning. Rapid foam and fiber-based clay were used to fabricate the molds. With both materials, the animal is pushed down smoothly into the mold for positioning; after the mouse was removed, however, the clay mold had to be dried completely in an oven at 60 °C overnight for hardening. Four sealed pipette tips containing [18F]FDG solution were used as fiducial markers. After injection of [18F]FDG via the tail vein, microPET scanning was performed, followed by MRI scanning of the same animal. Results: Animal-specific positioning molds were fabricated from rapid foam and fiber-based molding clay for multimodality imaging. Functional and anatomical images were obtained with microPET and MRI, respectively, and fused PET/MR images were produced using the freely available AMIDE program. Conclusion: Animal-specific molds were successfully prepared using easily available rapid foam, molding clay, and disposable pipette tips. Thanks to these molds, PET and MR fusion images were co-registered with negligible misalignment.

Game Platform and System that Synchronize an Actual Humanoid Robot with a Virtual 3D Character Robot

  • Park, Chang-Hyun; Lee, Chang-Jo
    • Journal of Korea Entertainment Industry Association / v.8 no.2 / pp.283-297 / 2014
  • Multidisciplinary technologies are expected to bring innovation across all areas of life, including social, economic, political, and personal domains. Particularly in robotics and next-generation robot games, convergence between technologies is expected to accelerate through multidisciplinary contributions and interaction. The purpose of this study is to develop a more reliable and easy-to-use human-robot interface that goes beyond the technical limitations of existing human-robot interface technologies, overcoming their temporal and spatial constraints through the fusion of various modalities that existing interfaces cannot offer. We develop a robot game system based on a real-time synchronization engine that links the behavior of a biped humanoid robot with the position values of 3D content (a virtual robot) on a mobile device screen, a wireless protocol for exchanging this information between the two, and a "Direct Teaching & Play" teaching program developed through a study of effective teaching.