• Title/Abstract/Keyword: Multi-Image

2,926 search results (processed in 0.03 seconds)

다중스케일 노멀라이즈 컷을 이용한 영상분할 (Image Segmentation using Multi-scale Normalized Cut)

  • 이재현;이지은;박래홍
    • 방송공학회논문지 / Vol. 18, No. 4 / pp.609-618 / 2013
  • This paper proposes an image segmentation method that preserves the performance of conventional graph-cut-based segmentation while running considerably faster. Conventional graph-cut-based segmentation achieves high accuracy but is slow because of its eigenpair computation, which builds a square matrix from the similarities between every pair of pixels in the image. The proposed method instead divides the image into multiple regions and builds a much smaller square matrix, which greatly accelerates the eigenpair computation. We propose a multi-scale segmentation method based on algebraic multigrid and show experimentally that it outperforms existing segmentation methods.
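
The coarse-matrix idea in this abstract can be illustrated with a short sketch: instead of an affinity matrix over all pixels, the eigenpair step is run on a small region-level matrix. This is only a minimal illustration under my own assumptions (precomputed region features, a Gaussian affinity, a single bipartition from the Fiedler vector); the paper's algebraic-multigrid coarsening and multi-scale scheme are not reproduced.

```python
# Minimal sketch: normalized-cut bipartition on a small region-level affinity
# matrix instead of a pixel-level one. Region features are stand-ins here.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def ncut_bipartition(region_features, sigma=0.1):
    # Gaussian affinity between regions from feature (e.g. mean colour) distances.
    d2 = ((region_features[:, None, :] - region_features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    # Symmetric normalized Laplacian; its second-smallest eigenvector
    # (the Fiedler vector) gives an approximate normalized-cut bipartition.
    L = laplacian(W, normed=True)
    _, vecs = eigh(L)
    return vecs[:, 1] > 0          # boolean label per region

labels = ncut_bipartition(np.random.rand(50, 3))   # 50 regions, RGB-mean features
```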

Multi-Resolution MSS Image Fusion

  • Ghassemian, Hassan;Amidian, Asghar
    • 대한원격탐사학회 학술대회논문집 (Proceedings of ACRS 2003 ISRS) / pp.648-650 / 2003
  • Efficient multi-resolution image fusion aims to exploit the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images simultaneously. This paper presents a multi-resolution data fusion scheme based on multirate image representation. It is motivated by analytical results from high-resolution multispectral image data: the energy carrying the spectral features is concentrated in the lower frequency bands, while the spatial features, such as edges, lie in the higher frequency bands. The multispectral images can therefore be spatially enhanced by adding the high-resolution spatial features to them through a multirate filtering procedure. The proposed method is compared with several conventional methods, and the results show that it preserves more spectral features with less spatial distortion.
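
As a rough illustration of the frequency-band argument above (not the authors' multirate filter bank), the following sketch keeps the low-frequency spectral content of a multispectral band and injects high-frequency spatial detail extracted from the panchromatic image; the filter choice and scale factor are my own assumptions.

```python
# Sketch of high-pass detail injection: upsample a multispectral band to the
# panchromatic grid, then add the PAN image's high-frequency component.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def highpass_inject(ms_band, pan, scale=4, sigma=1.0):
    ms_up = zoom(ms_band, scale, order=3)          # MS band on the PAN grid
    detail = pan - gaussian_filter(pan, sigma)     # high-frequency PAN detail
    return ms_up + detail

pan = np.random.rand(256, 256)    # stand-in SPOT panchromatic image
ms = np.random.rand(64, 64)       # stand-in Landsat TM band (lower resolution)
fused = highpass_inject(ms, pan)
```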


FS-Transformer: A new frequency Swin Transformer for multi-focus image fusion

  • Weiping Jiang;Yan Wei;Hao Zhai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18, No. 7 / pp.1907-1928 / 2024
  • In recent years, multi-focus image fusion has emerged as a prominent area of research, and transformers have gained recognition in image processing. Current approaches encounter challenges such as boundary artifacts, loss of detailed information, and inaccurate localization of focused regions, leading to suboptimal fusion results that require post-processing. To address these issues, this paper introduces a novel multi-focus image fusion technique based on the Swin Transformer architecture. The method integrates a frequency layer built on the Wavelet Transform, improving performance over conventional Swin Transformer configurations. In addition, to compensate for the lack of local detail information in the attention mechanism, Convolutional Neural Networks (CNN) are incorporated to improve region recognition accuracy. Comparative evaluations against various fusion methods were conducted on three datasets. The experimental findings demonstrate that the proposed model outperforms existing techniques and yields fused images of superior quality.
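
A minimal sketch of what a wavelet-based "frequency layer" can look like, assuming a single-level 2-D Haar transform that splits a feature map into one low-frequency band and three high-frequency bands; the Swin Transformer branches and the CNN module of the paper are omitted, and the function name is illustrative.

```python
# Sketch of a wavelet "frequency layer": one-level 2-D Haar DWT splitting a
# feature map into a low-frequency band and three high-frequency detail bands.
import numpy as np
import pywt

def frequency_layer(feature_map):
    low, (lh, hl, hh) = pywt.dwt2(feature_map, 'haar')
    return low, np.stack([lh, hl, hh])     # approximation + stacked details

low, high = frequency_layer(np.random.rand(128, 128))
print(low.shape, high.shape)               # (64, 64) (3, 64, 64)
```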

Multi-resolution Lossless Image Compression for Progressive Transmission and Multiple Decoding Using an Enhanced Edge Adaptive Hierarchical Interpolation

  • Biadgie, Yenewondim;Kim, Min-sung;Sohn, Kyung-Ah
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 12 / pp.6017-6037 / 2017
  • In a multi-resolution image encoding system, the image is encoded into a single file as layers of bit streams and then transmitted layer by layer progressively to reduce the transmission time across a low-bandwidth connection. This encoding scheme is also suitable for multiple decoders with different capabilities, ranging from handheld devices to PCs. In our previous work, we proposed an edge adaptive hierarchical interpolation algorithm for a multi-resolution image coding system. In this paper, we enhance its compression efficiency with three major components. First, prediction accuracy is improved by using context adaptive error modeling as feedback. Second, the conditional probability of prediction errors is sharpened by applying sign flipping to remove the sign redundancy among local prediction errors. Third, it is sharpened further by reducing the number of distinct error symbols with an error remapping function. Experimental results on benchmark data sets reveal that the enhanced algorithm achieves a better compression bit rate than our previous algorithm and other algorithms, with the largest gains on images rich in directional edges and textures. The enhanced algorithm also shows better rate-distortion performance and visual quality at the intermediate stages of progressive image transmission.
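
The two coder-side tricks mentioned above can be sketched as follows, under my own simplified assumptions: sign flipping uses a context value (here, the sign of a neighbouring prediction error) to fold redundant signs, and error remapping interleaves signed errors into non-negative symbols so that the symbol alphabet shrinks and the error histogram sharpens.

```python
# Simplified sketch of sign flipping and error remapping for prediction errors.
import numpy as np

def sign_flip(errors, context_signs):
    # Flip the error sign wherever the local context predicts a negative error.
    return np.where(context_signs < 0, -errors, errors)

def remap(errors):
    # Fold signed errors into non-negative symbols: 0,-1,1,-2,2,... -> 0,1,2,3,4,...
    return np.where(errors >= 0, 2 * errors, -2 * errors - 1)

errors = np.array([3, -2, 0, -1, 5])
context = np.array([1, -1, 1, -1, 1])       # signs of neighbouring errors
symbols = remap(sign_flip(errors, context))
```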

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2366-2395 / 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras in a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that realizes IF and BD jointly by optimizing an integrated energy function. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve than the original formulation. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization subproblems with respect to the desired HR image and the unknown blur are addressed efficiently by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera array system demonstrate the effectiveness of the proposed method.
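
The ADMM step mentioned above can be illustrated in skeleton form only. The sketch below is a generic scaled-form ADMM loop for min f(x) + g(z) subject to x = z, with the two proximal operators passed in as functions; the paper's actual data term on the multi-view observations and its image/blur priors are not reproduced, and the toy usage (least squares plus an L1 prior) is purely an assumption for demonstration.

```python
# Generic scaled-form ADMM skeleton: the kind of alternating updates used to
# solve the MCBD subproblems, shown here on a toy objective only.
import numpy as np

def admm(prox_f, prox_g, shape, iters=100, rho=1.0):
    x = np.zeros(shape); z = np.zeros(shape); u = np.zeros(shape)  # u: scaled dual
    for _ in range(iters):
        x = prox_f(z - u, rho)      # x-update: proximal step on f
        z = prox_g(x + u, rho)      # z-update: proximal step on g
        u = u + x - z               # dual update on the consensus constraint x = z
    return x

# Toy usage: f(x) = 0.5*||x - y||^2, g(z) = 0.1*||z||_1 (soft-thresholding prox).
y = np.random.rand(16, 16)
x_hat = admm(lambda v, r: (y + r * v) / (1 + r),
             lambda v, r: np.sign(v) * np.maximum(np.abs(v) - 0.1 / r, 0.0),
             y.shape)
```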

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 6 / pp.3092-3107 / 2019
  • Visible-infrared image fusion synthesizes an infrared image and a visible image into a single fused image that combines the complementary advantages of both. The infrared image can capture a target object in dark or foggy environments, but the objects it depicts appear blurry. The visible image, by contrast, shows objects clearly under normal lighting but is of little use in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method. The proposed multi-guided filter is a modification of the guided filter that accepts multiple guidance images. Using this filter, we build a fusion method that is much faster than conventional image fusion methods. In experiments, we compare the proposed method with conventional methods in terms of quantitative and qualitative measures, fusion speed, and flickering artifacts. The proposed method fuses 57.93 frames per second at an image size of 320×270, confirming that it is capable of real-time processing, and it produces flicker-free video.
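
For reference, here is a compact standard guided filter (box-filter form) together with a naive "multi-guided" wrapper that simply averages the outputs over several guidance images. The wrapper is only my illustration of the multi-guidance idea; the paper's multi-guided filter combines the guidance images inside the filter itself rather than averaging afterwards.

```python
# Standard guided filter plus a naive multi-guidance wrapper (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # local linear coefficients: q = a*I + b
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def multi_guided(guides, p, r=8, eps=1e-3):
    return np.mean([guided_filter(g, p, r, eps) for g in guides], axis=0)

vis = np.random.rand(270, 320)          # stand-in visible image
ir = np.random.rand(270, 320)           # stand-in infrared image
base = multi_guided([vis, ir], 0.5 * (vis + ir))
```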

A New Three-dimensional Integrated Multi-index Method for CBIR System

  • Zhang, Mingzhu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 3 / pp.993-1014 / 2021
  • This paper proposes a new image retrieval method, the 3D integrated multi-index, which fuses SIFT (Scale Invariant Feature Transform) visual words with other features at the indexing level. The advantage of the 3D integrated multi-index is that it produces finer subdivisions of the search space. Compared with an inverted index built on a medium-sized codebook, the proposed method only slightly increases preprocessing and query time. In particular, SIFT, contour, and colour features are fused into the integrated multi-index, and the joint cooperation of complementary features significantly reduces the impact of false-positive matches, enabling effective image retrieval. Extensive experiments on five benchmark datasets show that the 3D integrated multi-index significantly improves retrieval accuracy while requiring acceptable memory usage and query time compared with other methods. Importantly, we show that the 3D integrated multi-index complements many prior techniques well, which makes our method compare favorably with the state of the art.
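
A small sketch of the indexing idea, under my own assumptions: each local feature is quantized against three small codebooks (a SIFT word, a colour word, and a contour word), and the resulting 3-tuple of word ids is used as one key of an inverted index, which subdivides the search space far more finely than a single visual-word index. The codebooks below are random stand-ins; real ones would come from clustering training features.

```python
# Sketch of a 3-D integrated multi-index: keys are (sift_id, colour_id, contour_id).
import numpy as np
from collections import defaultdict

def quantize(vec, codebook):
    return int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))

sift_cb = np.random.rand(64, 128)       # stand-in codebooks
colour_cb = np.random.rand(16, 3)
contour_cb = np.random.rand(16, 8)
index = defaultdict(list)               # (sift, colour, contour) -> image ids

def add_feature(image_id, sift, colour, contour):
    key = (quantize(sift, sift_cb),
           quantize(colour, colour_cb),
           quantize(contour, contour_cb))
    index[key].append(image_id)

add_feature(0, np.random.rand(128), np.random.rand(3), np.random.rand(8))
```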

뇌혈관 추출과 대화형 가시화를 위한 다중 GPU기반 영상정합 (Multi GPU Based Image Registration for Cerebrovascular Extraction and Interactive Visualization)

  • 박성진;신영길
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 15, No. 6 / pp.445-449 / 2009
  • This paper proposes a computationally efficient multi-GPU image registration method for correcting the motion between pre-contrast CT and post-contrast CTA images. The proposed method consists of two main stages: multi-GPU registration and cerebrovascular visualization. First, for voxel-based registration, the similarity value is computed by exploiting parallelism both within each GPU and across GPUs. The CT data transformed by the optimal transformation matrix is then subtracted from the CTA data using multiple GPUs, and the subtraction result is visualized with GPU-based volume rendering. To demonstrate the superiority of the proposed method over existing approaches in terms of image quality and execution time, five pairs of pre-contrast brain CT and post-contrast brain CTA datasets were compared. In the experiments, the cerebral vessels were visualized clearly enough to diagnose vascular disease accurately. The multi-GPU method was 11.6 times faster than the CPU-based method and 1.4 times faster than the single-GPU method.
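
Only the multi-GPU partial-similarity idea is sketched below, and under heavy assumptions: the volumes are split into slabs, each GPU computes a partial score with CuPy (plain sum of squared differences here, not the paper's similarity measure), and the host sums the partial results. The device count and slab split are illustrative.

```python
# Sketch of a multi-GPU similarity computation: split volumes into slabs and
# accumulate per-GPU partial sums of squared differences (SSD) with CuPy.
import numpy as np
import cupy as cp

def multi_gpu_ssd(vol_a, vol_b, n_gpus=2):
    slabs_a = np.array_split(vol_a, n_gpus, axis=0)
    slabs_b = np.array_split(vol_b, n_gpus, axis=0)
    partial = []
    for dev in range(n_gpus):
        with cp.cuda.Device(dev):
            a = cp.asarray(slabs_a[dev])
            b = cp.asarray(slabs_b[dev])
            partial.append(float(cp.sum((a - b) ** 2)))   # partial SSD on GPU `dev`
    return sum(partial)

ct = np.random.rand(64, 128, 128).astype(np.float32)     # stand-in pre-contrast CT
cta = np.random.rand(64, 128, 128).astype(np.float32)    # stand-in post-contrast CTA
score = multi_gpu_ssd(ct, cta)
```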

An Effective Framework for Content-Based Image Retrieval with Multi-Instance Learning Techniques

  • Peng, Yu;Wei, Kun-Juan;Zhang, Da-Li
    • Journal of Ubiquitous Convergence Technology / Vol. 1, No. 1 / pp.18-22 / 2007
  • Multi-Instance Learning (MIL) deals well with the inherent ambiguity of images in multimedia retrieval. This paper proposes an effective framework for Content-Based Image Retrieval (CBIR) with MIL techniques. The framework segments images with an improved Mean Shift algorithm and processes the segmentation results with mathematical morphology, with the goal of detecting the semantic concepts contained in the query. Every detected sub-image is represented as a multi-feature vector regarded as an instance, and each image becomes a bag comprising a flexible number of instances. Several MIL algorithms are then applied within this framework to perform retrieval. Extensive experimental results show excellent performance compared with existing MIL-based CBIR methods.
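
The bag construction described above can be sketched roughly as follows, with my own stand-ins: a fake "segmentation" into strips replaces the Mean Shift regions, each region becomes an instance feature vector, and a bag is scored by its best-matching instance (a simple max-over-instances rule, not necessarily one of the MIL algorithms the paper applies).

```python
# Sketch: image -> bag of region instances, scored by the closest instance.
import numpy as np

def image_to_bag(image, n_regions=4):
    # Stand-in segmentation: horizontal strips; instance = per-strip colour stats.
    strips = np.array_split(image, n_regions, axis=0)
    return np.array([np.concatenate([s.mean(axis=(0, 1)), s.std(axis=(0, 1))])
                     for s in strips])

def bag_score(bag, query_instance):
    dists = np.linalg.norm(bag - query_instance, axis=1)
    return -dists.min()                 # closest instance decides the bag score

image = np.random.rand(120, 120, 3)
bag = image_to_bag(image)
print(bag_score(bag, bag[0]))
```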


PATN: Polarized Attention based Transformer Network for Multi-focus image fusion

  • Pan Wu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 4 / pp.1234-1257 / 2023
  • In this paper, we propose a framework for multi-focus image fusion called PATN. By aggregating deep features extracted with a U-type Transformer mechanism and shallow features extracted with the PSA module, PATN captures both long-range texture information and local detail information of the image. In addition, the edge-preserving quality of the fused image is enhanced with a dense residual block containing the Sobel gradient operator, and three loss functions are introduced to retain more source image texture information. PATN is compared with 17 other advanced MFIF methods on three datasets to verify its effectiveness and robustness.
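
Of the pieces described above, the Sobel-gradient ingredient is easy to sketch: fixed Sobel kernels applied with a convolution give horizontal and vertical gradients, and an L1 distance between the gradients of the fused image and a source image can serve as a simple gradient-preserving loss. This is only my illustration of that one ingredient, assuming PyTorch; the dense residual block and the paper's three actual loss functions are not reproduced.

```python
# Sketch: Sobel gradients via fixed conv kernels and a simple L1 gradient loss.
import torch
import torch.nn.functional as F

def sobel_grad(x):                       # x: (B, 1, H, W) tensor
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)              # vertical-gradient kernel
    return F.conv2d(x, kx, padding=1), F.conv2d(x, ky, padding=1)

def gradient_loss(fused, source):
    fx, fy = sobel_grad(fused)
    sx, sy = sobel_grad(source)
    return (fx - sx).abs().mean() + (fy - sy).abs().mean()

loss = gradient_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```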