• Title/Summary/Keyword: edge fusion


A Noisy Infrared and Visible Light Image Fusion Algorithm

  • Shen, Yu; Xiang, Keyun; Chen, Xiaopeng; Liu, Cheng
    • Journal of Information Processing Systems, v.17 no.5, pp.1004-1019, 2021
  • To address low image contrast, blurred edge details, and missing edge details in noisy image fusion, this study proposes a noisy infrared and visible light image fusion algorithm based on the non-subsampled contourlet transform (NSCT) and an improved bilateral filter. NSCT decomposes each image into a low-frequency and a high-frequency component. Because high-frequency noise and edge information are concentrated in the high-frequency component, the improved bilateral filter is applied to the high-frequency components of both images, suppressing noise while computing the detail of the infrared image's high-frequency component. Superimposing the high-frequency components of the infrared and visible images preserves as much edge detail from both as possible, while enhancing edge information and producing a clearer visual result. For the low-frequency coefficients, a local-area standard-deviation fusion rule is adopted. Finally, the fused image is reconstructed from the high- and low-frequency coefficients via the inverse NSCT. The fusion results show that edges, contours, textures, and other details are maintained and enhanced while noise is filtered out, yielding a fused image with clear edges. The algorithm filters noise well and obtains clear fused images in noisy infrared and visible light image fusion.
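A minimal sketch of this decompose-filter-fuse pipeline, assuming a simple box-blur low/high split in place of NSCT and a basic (not the paper's improved) bilateral filter; all function names and parameters are illustrative:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Basic bilateral filter: spatial Gaussian weighted by a range Gaussian."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def box_blur(img, radius=2):
    """Crude low-pass stand-in for the NSCT low-frequency band."""
    pad = np.pad(img, radius, mode="reflect")
    k = 2 * radius + 1
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def local_std(img, radius=2):
    """Local-area standard deviation, used as the low-frequency fusion rule."""
    pad = np.pad(img, radius, mode="reflect")
    k = 2 * radius + 1
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].std()
    return out

def fuse(ir, vis):
    # Decompose each image into low- and high-frequency components.
    low_ir, low_vis = box_blur(ir), box_blur(vis)
    high_ir, high_vis = ir - low_ir, vis - low_vis
    # Denoise the high-frequency bands, then superimpose their details.
    high = bilateral_filter(high_ir) + bilateral_filter(high_vis)
    # Low-frequency rule: keep the pixel from the image with larger local std.
    mask = local_std(ir) >= local_std(vis)
    low = np.where(mask, low_ir, low_vis)
    return low + high  # recombine bands ("inverse transform" of the simple split)
```

The real method would substitute the NSCT for the blur-based split and a directional subband fusion rule for the plain superposition.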

Texture Image Fusion on Wavelet Scheme with Space Borne High Resolution Imagery: An Experimental Study

  • Yoo, Hee-Young; Lee, Ki-Won
    • Korean Journal of Remote Sensing, v.21 no.3, pp.243-252, 2005
  • Wavelet transform and its inverse provide an effective framework for data fusion. The purpose of this study is to investigate the applicability of wavelet transform with texture images for urban remote sensing. We performed several image fusion experiments combining wavelet transform and texture imaging on high-resolution images such as IKONOS and KOMPSAT EOC. As texture images, we used homogeneity and ASM (Angular Second Moment) images, because these two types of texture images reveal detailed information about the complex features of the urban environment well. To find a useful combination scheme for further applications, we performed the DWT (Discrete Wavelet Transform) and IDWT (Inverse Discrete Wavelet Transform) using texture images and original images, adding edge information to the fused images to display texture-wavelet information within edge boundaries. The edge images were obtained by LoG (Laplacian of Gaussian) processing of the original image. Based on qualitative visual interpretation of these experiments, the resultant image from each fusion scheme can be used to extract unique details of surface characterization of urban features around edge boundaries.
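The DWT/IDWT fusion step plus LoG edge extraction can be sketched with a one-level Haar transform (a stand-in for whatever wavelet the study used; names and fusion rules here are illustrative, not from the paper):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT; image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def log_edges(img, thresh=0.1):
    """Edge map from the magnitude of a discrete Laplacian response."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    pad = np.pad(img, 1, mode="reflect")
    resp = sum(k[di, dj] * pad[di:di + img.shape[0], dj:dj + img.shape[1]]
               for di in range(3) for dj in range(3))
    return np.abs(resp) > thresh

def wavelet_fuse(tex, orig):
    """Average the approximation bands, keep max-magnitude detail coefficients."""
    ll1, lh1, hl1, hh1 = haar_dwt2(tex)
    ll2, lh2, hl2, hh2 = haar_dwt2(orig)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return haar_idwt2((ll1 + ll2) / 2, pick(lh1, lh2),
                      pick(hl1, hl2), pick(hh1, hh2))
```

In the study's workflow, `log_edges` of the original image would then be overlaid on the fused result to display texture-wavelet information inside edge boundaries.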

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You; Liu, Sixun; Xing, Bin; Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.877-893, 2022
  • With the development of deep learning, face inpainting has improved significantly in the past few years. Although inpainting frameworks integrating generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, the reconstruction of corrupted regions still suffers from issues worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantics, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps: first, structure information produced by the edge generation network serves as prior knowledge for the face inpainting network; second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve inpainting accuracy, both gated convolution and region normalization are applied in the proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that LSK-FNet improves the edge structure and details of facial images, and our model surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.
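Gated convolution, one of the two components the model relies on, pairs each feature convolution with a learned sigmoid gate so invalid (masked) pixels can be softly suppressed. A single-channel NumPy sketch with illustrative kernels (not the paper's trained weights):

```python
import numpy as np

def conv2d(x, k):
    """Naive 'same' 2-D convolution with reflect padding."""
    r = k.shape[0] // 2
    pad = np.pad(x, r, mode="reflect")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (pad[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def gated_conv(x, k_feat, k_gate):
    """Gated convolution: candidate features modulated by a soft mask in [0, 1],
    so the layer can learn to ignore hole regions during inpainting."""
    feat = np.tanh(conv2d(x, k_feat))                 # candidate features
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, k_gate)))  # sigmoid gating mask
    return feat * gate
```

In a real network `k_feat` and `k_gate` are separate learned kernel banks per layer; here a single pair stands in for the idea.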

3D Fusion Imaging based on Spectral Computed Tomography Using K-edge Images (K-각 영상을 이용한 스펙트럼 전산화단층촬영 기반 3차원 융합진단영상화에 관한 연구)

  • Kim, Burnyoung; Lee, Seungwan; Yim, Dobin
    • Journal of the Korean Society of Radiology, v.13 no.4, pp.523-530, 2019
  • The purpose of this study was to obtain K-edge images using a spectral CT system based on a photon-counting detector and to implement 3D fusion imaging from the conventional and spectral CT images. We also evaluated the clinical feasibility of the 3D fusion images through quantitative analysis of image quality. A spectral CT system based on a CdTe photon-counting detector was used to obtain the K-edge images. A pork phantom was manufactured with six tubes containing diluted iodine and gadolinium solutions. The K-edge images were obtained with low-energy thresholds of 35 and 52 keV for iodine and gadolinium imaging, respectively, using an X-ray spectrum generated at a tube voltage of 100 kVp with a tube current of 500 μA. We implemented 3D fusion imaging by combining the iodine and gadolinium K-edge images with the conventional CT images. The results showed that the CNRs of the 3D fusion images were 6.76-14.9 times higher than those of the conventional CT images. The 3D fusion images were also able to provide maps of the target materials. Therefore, the technique proposed in this study can improve the quality of CT images and the diagnostic efficiency through the additional information about target materials.
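The CNR figure quoted above is conventionally computed as the contrast between a target ROI and the background, normalized by the background noise. A small sketch (the ROI layout is illustrative, not the study's phantom geometry):

```python
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: |mean(target) - mean(background)| / std(background)."""
    t = image[target_mask]
    b = image[background_mask]
    return abs(t.mean() - b.mean()) / b.std()
```

With this definition, a 6.76-14.9x CNR gain means the material-specific K-edge signal stands that much further above the background noise than in conventional CT.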

Edge Detection By Fusion Using Local Information of Edges

  • Vlachos, Ioannis K.; Sergiadis, George D.
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.09a, pp.403-406, 2003
  • This paper presents a robust algorithm for edge detection based on fuzzy fusion, using a novel local edge information measure based on Rényi's α-order entropy. The proposed measure is calculated using a parametric classification scheme based on local statistics. By suitably tuning its parameters, the local edge information measure can extract different types of edges while exhibiting high immunity to noise. The notions of fuzzy measures and the Choquet fuzzy integral are applied to combine the different sources of information obtained from the local edge information measure with different parameter sets. The effectiveness and robustness of the new method are demonstrated by applying the algorithm to various synthetic computer-generated and real-world images.
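The core quantity, Rényi's α-order entropy of a local window, can be sketched as follows (the histogram-based window measure and its parameters are illustrative stand-ins for the paper's parametric classification scheme):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1) of a discrete
    distribution p; it tends to Shannon entropy as alpha -> 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return np.log2((p ** alpha).sum()) / (1.0 - alpha)

def local_edge_measure(patch, alpha=2.0, bins=8):
    """Histogram-based Rényi entropy of a local window: flat regions give
    low entropy, edge/texture regions give higher entropy."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    return renyi_entropy(p, alpha)
```

In the full algorithm, several such measures with different parameter sets would be combined through a Choquet fuzzy integral rather than used individually.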


A Novel Automatic Block-based Multi-focus Image Fusion via Genetic Algorithm

  • Yang, Yong; Zheng, Wenjuan; Huang, Shuying
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.7, pp.1671-1689, 2013
  • The key issue of block-based multi-focus image fusion is to determine the size of the sub-block because different sizes of the sub-block will lead to different fusion effects. To solve this problem, this paper presents a novel genetic algorithm (GA) based multi-focus image fusion method, in which the block size can be automatically found. In our method, the Sum-modified-Laplacian (SML) is selected as an evaluation criterion to measure the clarity of the image sub-block, and the edge information retention is employed to calculate the fitness of each individual. Then, through the selection, crossover and mutation procedures of the GA, we can obtain the optimal solution for the sub-block, which is finally used to fuse the images. Experimental results show that the proposed method outperforms the traditional methods, including the average, gradient pyramid, discrete wavelet transform (DWT), shift invariant DWT (SIDWT) and two existing GA-based methods in terms of both the visual subjective evaluation and the objective evaluation.
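The Sum-modified-Laplacian (SML) clarity criterion and the per-block selection it drives can be sketched as below; the GA search over block sizes is omitted and the block size is fixed, so all names and parameters here are illustrative:

```python
import numpy as np

def sml(img, step=1):
    """Sum-modified-Laplacian: sum over the block of
    |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|.
    Higher values indicate a sharper (better-focused) block."""
    s = step
    p = np.pad(img, s, mode="reflect")
    c = p[s:-s, s:-s]
    ml = (np.abs(2 * c - p[:-2 * s, s:-s] - p[2 * s:, s:-s]) +
          np.abs(2 * c - p[s:-s, :-2 * s] - p[s:-s, 2 * s:]))
    return ml.sum()

def fuse_blocks(a, b, block=4):
    """Per block, keep the source whose SML is larger (the sharper one)."""
    out = np.empty_like(a)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            pa = a[i:i + block, j:j + block]
            pb = b[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = pa if sml(pa) >= sml(pb) else pb
    return out
```

In the paper, the GA would evolve the `block` parameter itself, scoring candidates by edge-information retention rather than fixing it in advance.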

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin; Hu, Kaiqun
    • Journal of Information Processing Systems, v.15 no.6, pp.1296-1305, 2019
  • To address the poor noise suppression and the frequent loss of edge contours and detail in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. First, the source images are separately processed through variational multiscale decomposition to obtain texture components and structural components. A guided filter is used to fuse the texture components. For structural-component fusion, a method is proposed that measures the fusion weights using combined phase-consistency, sharpness, and brightness information. Finally, the fused texture and structural components are added to obtain the final fused image. The experimental results show that the proposed method exhibits very good noise robustness and achieves better fusion quality.
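The guided filter used for the texture-component fusion can be sketched in plain NumPy (this is the standard guided-filter formulation, with illustrative parameters; the paper's variational decomposition and weight measures are not reproduced):

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window with reflect padding."""
    p = np.pad(img, r, mode="reflect")
    k = 2 * r + 1
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(guide, src, r=2, eps=1e-3):
    """Guided filter: output is a locally linear transform of the guide,
    q = a*I + b, which smooths src while preserving the guide's edges."""
    mean_I, mean_p = box(guide, r), box(src, r)
    corr_I, corr_Ip = box(guide * guide, r), box(guide * src, r)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)
```

In a fusion setting, each texture component is typically filtered with the other source (or a saliency map) as the guide before the weighted combination.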

Refinement of Disparity Map using the Rule-based Fusion of Area and Feature-based Matching Results

  • Um, Gi-Mun; Ahn, Chung-Hyun; Kim, Kyung-Ok; Lee, Kwae-Hi
    • Proceedings of the KSRS Conference, 1999.11a, pp.304-309, 1999
  • In this paper, we present a new disparity map refinement algorithm using the statistical characteristics of disparity maps and edge information. The proposed algorithm generates a refined disparity map from the disparity maps obtained by area-based and feature-based stereo matching, selecting the disparity value at each edge point based on the statistics of both maps. Experimental results on an aerial stereo image show lower disparity error than conventional fusion algorithms. The algorithm can be applied to the reconstruction of building images from high-resolution remote sensing data.
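A simplified version of the selection rule can be sketched as follows: trust the feature-based disparity at edge pixels (where features are well localized) and the area-based disparity elsewhere. The statistical selection the paper actually applies at edge points is reduced here to a plain mask lookup, so this is illustrative only:

```python
import numpy as np

def refine_disparity(area_disp, feat_disp, edge_mask):
    """Rule-based fusion: feature-based disparity at edge pixels,
    area-based disparity everywhere else."""
    return np.where(edge_mask, feat_disp, area_disp)
```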


A Study of Time Synchronization Methods for IoT Network Nodes

  • Yoo, Sung Geun; Park, Sangil; Lee, Won-Young
    • International journal of advanced smart convergence, v.9 no.1, pp.109-112, 2020
  • Many devices are connected to the internet to provide functionality for interconnected services. By 2020, the number of devices connected to the internet was expected to reach 5.8 billion. Moreover, major connected-service providers such as Google and Amazon have proposed edge computing and mesh networks to cope with the situation in which so many devices are connected to their networks. This paper introduces the current state of adoption of wireless mesh networks and edge clouds for efficiently managing the large number of nodes in the rapidly growing Internet of Things (IoT), and reviews the existing Network Time Protocol (NTP). On this basis, we propose a relatively accurate time synchronization method, especially for heterogeneous mesh networks. Using NTP, multiple time coordinators can be placed in a mesh network to estimate the delay error from the average delay time and the delay time of each time coordinator. Accurate time can therefore be maintained when implementing IoT, remote metering, and real-time media streaming over an IoT mesh network.
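The NTP exchange underlying such schemes estimates clock offset and round-trip delay from four timestamps. A minimal sketch, with a naive multi-coordinator average standing in for the paper's delay-error estimation (function names are illustrative):

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP exchange: t0 = client send, t1 = server receive,
    t2 = server send, t3 = client receive. Returns (clock offset,
    round-trip delay), assuming a symmetric network path."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

def mean_offset(samples):
    """Average the offsets measured against several time coordinators,
    in the spirit of the multi-coordinator scheme described above."""
    offsets = [ntp_offset_delay(*s)[0] for s in samples]
    return sum(offsets) / len(offsets)
```

For example, a server whose clock runs 5 s ahead, reached over a symmetric 1 s path, yields an offset estimate of exactly 5 s.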

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen; Wang, Minjuan; Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.11, pp.4395-4412, 2020
  • Multisource image fusion has become an active topic in recent years owing to its higher segmentation rate. To enhance the accuracy of multimodal pig-body feature segmentation, a multisource image fusion method was employed. However, conventional multisource image fusion methods cannot extract superior contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, NSST-GF-IPCNN, is presented. First, the multisource images are decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) are used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients are reconstructed into a fusion image using the inverse NSST. Finally, the shape feature is extracted using an automatic thresholding algorithm and refined using morphological operations, and the highest pig-body temperature is obtained from the segmentation results. Experiments reveal that the presented fusion algorithm achieves a 2.102-4.066% higher average accuracy rate than the traditional algorithms while also improving efficiency.
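The even-symmetric Gabor filter used for the low-frequency subbands is a Gaussian envelope modulated by a cosine carrier. A kernel-generation sketch with illustrative parameters (the NSST and IPCNN stages are not reproduced):

```python
import numpy as np

def even_gabor_kernel(size=7, sigma=2.0, theta=0.0, freq=0.25):
    """Even-symmetric (cosine-phase) Gabor kernel: a Gaussian envelope
    times a cosine carrier oriented at angle theta (radians)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the carrier
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    k = env * np.cos(2 * np.pi * freq * xr)
    return k - k.mean()  # zero-mean, so flat regions produce no response
```

A bank of such kernels at several orientations would typically score the low-frequency subbands before the fusion decision.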