• Title/Summary/Keyword: Multi-image


Image Segmentation using Multi-scale Normalized Cut (다중스케일 노멀라이즈 컷을 이용한 영상분할)

  • Lee, Jae-Hyun;Lee, Ji Eun;Park, Rae-Hong
    • Journal of Broadcast Engineering / v.18 no.4 / pp.609-618 / 2013
  • This paper proposes a fast image segmentation method that achieves segmentation performance comparable to graph-cut based methods. Graph-cut based image segmentation methods perform well, but their computational cost is high because they must solve a computationally intensive eigen-system, whose cost depends on the size of the square matrix of similarities between all pairs of pixels in the input image. The proposed method therefore uses a small square matrix of similarities among regions, obtained by locally segmenting the image into several regions with a graph-based method. Experimental results show that the proposed multi-scale image segmentation method using the algebraic multi-grid outperforms existing methods.
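The eigen-system the abstract refers to can be made concrete. The sketch below (a toy 6-node affinity graph with illustrative weights; it is not the paper's multi-scale algorithm) solves the normalized-cut generalized eigenproblem (D − W)y = λDy and thresholds the second eigenvector to bipartition the graph — the size of W is exactly what the paper shrinks by working on regions instead of pixels:

```python
import numpy as np

def ncut_second_eigenvector(W):
    """Solve the normalized-cut eigen-system (D - W) y = lambda * D y
    via the symmetric Laplacian; the second-smallest eigenvector
    bipartitions the graph. The cost grows with the size of W, which
    is why a region-level matrix is far cheaper than a pixel-level one."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)   # eigenvalues in ascending order
    return D_inv_sqrt @ vecs[:, 1]    # generalized eigenvector y

# Toy graph: two 3-node cliques joined by one weak edge (hypothetical data)
W = np.array([
    [0.0,  1.0, 1.0, 0.01, 0.0, 0.0],
    [1.0,  0.0, 1.0, 0.0,  0.0, 0.0],
    [1.0,  1.0, 0.0, 0.0,  0.0, 0.0],
    [0.01, 0.0, 0.0, 0.0,  1.0, 1.0],
    [0.0,  0.0, 0.0, 1.0,  0.0, 1.0],
    [0.0,  0.0, 0.0, 1.0,  1.0, 0.0],
])
y = ncut_second_eigenvector(W)
labels = y > 0   # the sign of the eigenvector separates the two cliques
```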

Multi-Resolution MSS Image Fusion

  • Ghassemian, Hassan;Amidian, Asghar
    • Proceedings of the KSRS Conference / 2003.11a / pp.648-650 / 2003
  • Efficient multi-resolution image fusion aims to exploit simultaneously the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images. This paper presents a multi-resolution data fusion scheme based on multirate image representation. It is motivated by analytical results from high-resolution multispectral image data: the energy of the spectral features is concentrated in the lower frequency bands, while the spatial features (edges) are concentrated in the higher frequency bands. The multispectral images can therefore be spatially enhanced by adding the high-resolution spatial features to them through a multirate filtering procedure. The proposed method is compared with some conventional methods; results show that it preserves more spectral features with less spatial distortion.
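The frequency-band argument above reduces to a very small fusion rule: keep the multispectral band's low frequencies and inject the panchromatic image's high frequencies. A numpy sketch under simplifying assumptions (a box filter as the low-pass stage, a single pre-upsampled band; this is not the paper's multirate filter bank):

```python
import numpy as np

def box_blur(img, k=3):
    """Simple edge-padded box filter used as the low-pass stage."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_band(ms_band_up, pan):
    """Inject the pan image's high-frequency band (edges) into an
    upsampled multispectral band, leaving the MS low frequencies
    (the spectral content) untouched."""
    pan_detail = pan - box_blur(pan)   # high-pass: spatial features
    return ms_band_up + pan_detail

ms = np.full((8, 8), 0.5)     # toy upsampled multispectral band
pan = np.zeros((8, 8))
pan[:, 4:] = 1.0              # pan image with a sharp vertical edge
fused = fuse_band(ms, pan)    # edge detail appears, mean level preserved
```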


Multi-resolution Lossless Image Compression for Progressive Transmission and Multiple Decoding Using an Enhanced Edge Adaptive Hierarchical Interpolation

  • Biadgie, Yenewondim;Kim, Min-sung;Sohn, Kyung-Ah
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.6017-6037 / 2017
  • In a multi-resolution image encoding system, the image is encoded into a single file as layers of bit streams, which are then transmitted layer by layer progressively to reduce the transmission time across a low-bandwidth connection. This encoding scheme also suits multiple decoders with different capabilities, ranging from handheld devices to PCs. In our previous work, we proposed an edge adaptive hierarchical interpolation algorithm for a multi-resolution image coding system. In this paper, we enhance its compression efficiency by adding three major components. First, prediction accuracy is improved using context adaptive error modeling as feedback. Second, the conditional probability of prediction errors is sharpened by removing the sign redundancy among local prediction errors through sign flipping. Third, the conditional probability is sharpened further by reducing the number of distinct error symbols with an error remapping function. Experimental results on benchmark data sets reveal that the enhanced algorithm achieves a better compression bit rate than our previous algorithm and other algorithms, particularly for images rich in directional edges and textures. The enhanced algorithm also shows better rate-distortion performance and visual quality at the intermediate stages of progressive image transmission.
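The error-remapping idea can be illustrated with the standard textbook device (this is a generic remapping of the kind used in lossless coders, not necessarily the authors' exact function): given the prediction, the error can only lie in a known range, so interleaving small magnitudes first keeps the symbol alphabet compact and the distribution peaked.

```python
def remap_error(e, pred, maxval=255):
    """Map a signed prediction error to a compact non-negative symbol.
    Given the prediction, e can only lie in [-pred, maxval - pred]
    (generic textbook remapping, not necessarily the paper's)."""
    m = min(pred, maxval - pred)   # extent of the symmetric error range
    a = abs(e)
    if a <= m:
        return 2 * a if e >= 0 else 2 * a - 1   # 0, -1, +1, -2, +2, ...
    return m + a                   # beyond m, only one sign is possible

# For a prediction of 10 on 8-bit data, all 256 feasible errors map
# one-to-one onto the symbols 0..255:
symbols = sorted(remap_error(e, pred=10) for e in range(-10, 246))
```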

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2366-2395 / 2018
  • Multi-view super-resolution (MVSR) is the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by the different cameras of a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that realizes IF and BD jointly, based on an integrated energy function optimization. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization subproblems with respect to the desired HR image and to the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera array system demonstrate the effectiveness of the proposed method.
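ADMM's alternating structure is easiest to see on a toy scalar problem. The sketch below (a stand-in chosen for brevity, not the paper's MCBD energy) minimizes 0.5·(x − 3)² + |z| subject to x = z, alternating a quadratic subproblem, an L1 proximal subproblem, and a dual update — the same pattern as the paper's alternation between the HR-image and blur subproblems:

```python
from math import copysign

def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return copysign(max(abs(v) - t, 0.0), v)

# Toy split problem: minimize 0.5*(x - 3)**2 + |z|  subject to  x == z
rho, x, z, u = 1.0, 0.0, 0.0, 0.0   # penalty, primal vars, scaled dual
for _ in range(200):
    x = (3.0 + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
    z = soft(x + u, 1.0 / rho)                # L1 proximal subproblem
    u += x - z                                # dual ascent on x == z
# the closed-form optimum of the joint problem is x = z = 2
```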

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin;Han, Bok Gyu;Yang, Hyeon Seok;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3092-3107 / 2019
  • Visible-infrared image fusion synthesizes an infrared image and a visible image into a single fused image, combining the complementary advantages of both. The infrared image can capture a target object in dark or foggy environments, but objects appear blurry in it. The visible image, on the other hand, shows objects clearly under normal lighting conditions but fails in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The multi-guided filter is a modification of the guided filter that accepts multiple guidance images. The resulting fusion method is much faster than conventional image fusion methods. In experiments, we compare the proposed method with conventional methods in terms of quantitative and qualitative quality, fusing speed, and flickering artifacts. The proposed method fuses 57.93 frames per second at an image size of 320×270, confirming that it is capable of real-time processing, and it synthesizes flicker-free video.
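The guided filter that the paper generalizes can be sketched in a few lines of numpy. This is the standard single-guidance guided filter of He et al. (box-filter means, one guide; the paper's multi-guided filter extends it to several guidance images):

```python
import numpy as np

def box(img, r):
    """Edge-padded mean filter with radius r."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Single-guidance guided filter: the output is a locally linear
    transform of the guide I that approximates the input p."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)        # local linear coefficient
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)

guide = np.zeros((10, 10))            # flat guide: filter acts as a smoother
noisy = (np.indices((10, 10)).sum(0) % 2).astype(float)   # checkerboard
smoothed = guided_filter(guide, noisy)
```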

A New Three-dimensional Integrated Multi-index Method for CBIR System

  • Zhang, Mingzhu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.993-1014 / 2021
  • This paper proposes a new image retrieval method, the 3D integrated multi-index, which fuses SIFT (Scale Invariant Feature Transform) visual words with other features at the indexing level. The advantage of the 3D integrated multi-index is that it produces finer subdivisions of the search space. Compared with the inverted indices of a medium-sized codebook, the proposed method only slightly increases preprocessing and query time. In particular, the SIFT, contour, and colour features are fused into the integrated multi-index, and the joint cooperation of complementary features significantly reduces the impact of false positive matches, enabling effective image retrieval. Extensive experiments on five benchmark datasets show that the 3D integrated multi-index significantly improves retrieval accuracy while requiring acceptable memory usage and query time compared with other methods. Importantly, the 3D integrated multi-index complements many prior techniques well, making our method compare favorably with the state of the art.
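The "finer subdivision" idea can be pictured with a toy joint index (an illustrative data layout, not the authors' exact structure): keying each local feature by the tuple of all three quantized words means a candidate must agree on SIFT, contour, and colour at once.

```python
from collections import defaultdict

def build_index(images):
    """Toy integrated multi-index: each local feature is keyed by the
    joint tuple of quantized words, so matches must agree on all three
    features -- a finer subdivision than a SIFT-only inverted index."""
    index = defaultdict(set)
    for img_id, feats in images.items():
        for key in feats:            # key = (sift, contour, colour) words
            index[key].add(img_id)
    return index

# Hypothetical quantized features per image: (sift, contour, colour) ids
images = {
    "img_a": [(3, 1, 7), (5, 2, 7)],
    "img_b": [(3, 1, 2), (5, 2, 7)],
    "img_c": [(3, 4, 7)],
}
index = build_index(images)
hits = index[(3, 1, 7)]   # SIFT word 3 alone would match all three images
```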

Multi GPU Based Image Registration for Cerebrovascular Extraction and Interactive Visualization (뇌혈관 추출과 대화형 가시화를 위한 다중 GPU기반 영상정합)

  • Park, Seong-Jin;Shin, Yeong-Gil
    • Journal of KIISE: Computing Practices and Letters / v.15 no.6 / pp.445-449 / 2009
  • In this paper, we propose a computationally efficient multi-GPU accelerated image registration technique to correct the motion difference between a pre-contrast CT image and a post-contrast CTA image. Our method consists of two steps: multi-GPU based image registration and cerebrovascular visualization. First, it computes a similarity measure for voxel-based registration, exploiting parallelism both across the two GPUs and within each GPU. Then it subtracts the CT image, transformed by the optimal transformation matrix, from the CTA image, and visualizes the subtracted volume using a GPU based volume rendering technique. We compare our method with existing methods on five pairs of pre-contrast brain CT and post-contrast brain CTA images with respect to visual quality and computational time. Experimental results show that our method visualizes brain vessels clearly, aiding the diagnosis of vascular disease. Our multi-GPU approach is 11.6 times faster than a CPU based approach and 1.4 times faster than a single-GPU approach in total processing time.
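The similarity-measure search at the core of such registration is embarrassingly parallel, which is what makes it GPU-friendly: every candidate transform can be scored independently. A serial toy version (2D translations only, sum-of-squared-differences as the similarity measure; the paper evaluates full transforms on volumes across two GPUs):

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Brute-force 2D translation search minimizing the sum of squared
    differences; each candidate shift is scored independently, so the
    loop body is exactly the work a GPU would distribute."""
    best_ssd, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = float(((fixed - candidate) ** 2).sum())
            if ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

fixed = np.zeros((16, 16))
fixed[5:8, 5:8] = 1.0                                    # a bright blob
moving = np.roll(np.roll(fixed, 2, axis=0), 1, axis=1)   # simulated motion
shift = register_translation(fixed, moving)              # undoes the motion
```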

An Effective Framework for Content-Based Image Retrieval with Multi-Instance Learning Techniques

  • Peng, Yu;Wei, Kun-Juan;Zhang, Da-Li
    • Journal of Ubiquitous Convergence Technology / v.1 no.1 / pp.18-22 / 2007
  • Multi-Instance Learning (MIL) deals well with the inherent ambiguity of images in multimedia retrieval. In this paper, an effective framework for Content-Based Image Retrieval (CBIR) with MIL techniques is proposed. The framework segments images using an improved Mean Shift algorithm and processes the segmentation results with mathematical morphology, with the goal of detecting the semantic concepts contained in the query. Each detected sub-image is represented as a multi-feature vector regarded as an instance, and each image yields a bag comprising a flexible number of instances. We apply several MIL algorithms within this framework to perform retrieval. Extensive experimental results show excellent performance in comparison with existing CBIR-with-MIL methods.
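The bag-of-instances representation is easy to sketch. Under the standard multi-instance assumption, an image (bag) matches a query concept iff at least one of its region feature vectors (instances) lies close to the concept point (a toy decision rule with made-up 2D features, standing in for the MIL algorithms the framework applies):

```python
import numpy as np

def bag_is_positive(bag, concept, radius=1.0):
    """Standard MIL assumption: the bag is positive iff any instance
    falls within `radius` of the concept point."""
    dists = np.linalg.norm(np.asarray(bag, float) - concept, axis=1)
    return bool((dists <= radius).any())

concept = np.array([5.0, 5.0])           # hypothetical semantic concept
sky_image = [[5.2, 4.9], [0.0, 0.1]]     # one region hits the concept
grass_image = [[0.3, 0.2], [1.0, 0.8]]   # no region does
```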


PATN: Polarized Attention based Transformer Network for Multi-focus image fusion

  • Pan Wu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.4 / pp.1234-1257 / 2023
  • In this paper, we propose a framework for multi-focus image fusion called PATN. By aggregating deep features extracted with a U-type Transformer mechanism and shallow features extracted with the PSA module, PATN captures both long-range texture information and local detail information of the image. Meanwhile, the edge-preserving quality of the fused image is enhanced by a dense residual block containing the Sobel gradient operator, and three loss functions are introduced to retain more source image texture information. PATN is compared with 17 state-of-the-art MFIF methods on three datasets to verify its effectiveness and robustness.
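The Sobel gradient operator mentioned above supplies the edge signal that such edge-preserving blocks and texture losses are built around. A plain numpy sketch of the gradient magnitude (the operator itself, not the network):

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel gradient magnitude via explicit 3x3 correlation
    with edge padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1, mode='edge')
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # vertical step edge between columns 3 and 4
mag = sobel_magnitude(img)       # responds at the edge, zero in flat regions
```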

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / v.19 no.4 / pp.427-438 / 2023
  • Many works have indicated that single image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention is introduced to capture the local information and inter-channel interaction information of text images; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer ASTER by 1.2% compared to the text super-resolution network baseline.
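The channel-attention idea the abstract relies on can be sketched in squeeze-and-excitation style (a toy stand-in with no learned weights, not the paper's module): global-average-pool each channel into a statistic, squash it into a (0, 1) gate, and rescale the channel.

```python
import numpy as np

def channel_attention(feat):
    """Squeeze-and-excitation-style channel attention on a (C, H, W)
    feature map: pool per channel, gate with a sigmoid, rescale."""
    pooled = feat.mean(axis=(1, 2))          # squeeze: per-channel statistic
    gates = 1.0 / (1.0 + np.exp(-pooled))    # excite: sigmoid gating
    return feat * gates[:, None, None]       # rescale each channel

feat = np.stack([np.zeros((4, 4)), np.ones((4, 4))])   # 2 channels, 4x4
attended = channel_attention(feat)   # active channel is scaled, flat one not
```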