• Title/Abstract/Keywords: multi-scale fusion

Search results: 71

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin; Jang, Chulhee; Jin, Sanghun; Hong, Yonghee
    • 한국컴퓨터정보학회논문지, Vol. 22, No. 12, pp. 69-77, 2017
  • This paper presents a novel framework for multi-scale image fusion. The Multi-scale Kalman Smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach delivers outstanding fusion performance in terms of accuracy and efficiency; however, quad-tree-based methods are often limited in practice because their stair-like covariance structure produces unrealistic blocky artifacts in the fusion result wherever finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed in this paper. By applying a Super Resolution (SR) technique within the MKS algorithm, a finely resolved measurement is generated and blended through the tree structure, so that the missing detail in data-void regions of the fine-scale image is properly inferred and the blocky artifacts are suppressed in the fusion result. Simulation results show that the proposed method significantly improves on the conventional MKS algorithm in terms of both Root Mean Square Error (RMSE) and visual quality.
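
As an illustration of the step this abstract describes, the sketch below shows how missing detail in a data-void region of a fine-scale image might be inferred from a coarser image before blending. It is not the authors' MKS smoother: bicubic up-sampling stands in for the SR step, and `fill_missing_fine_scale` and its `valid_mask` argument are hypothetical names introduced only for this sketch.

```python
import cv2
import numpy as np

def fill_missing_fine_scale(fine, coarse, valid_mask):
    """fine:       fine-scale image with gaps, float32, shape (H, W)
       coarse:     coarser-scale image, float32, shape (H//2, W//2)
       valid_mask: 1.0 where fine-scale data exist, 0.0 where they are missing."""
    # Up-sample the coarse image to the fine grid (bicubic here, standing in for SR).
    up = cv2.resize(coarse, (fine.shape[1], fine.shape[0]),
                    interpolation=cv2.INTER_CUBIC)
    # Keep real fine-scale measurements; infer missing pixels from the up-sampled estimate.
    return valid_mask * fine + (1.0 - valid_mask) * up
```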

An Efficient Monocular Depth Prediction Network Using Coordinate Attention and Feature Fusion

  • Xu, Huihui; Li, Fei
    • Journal of Information Processing Systems, Vol. 18, No. 6, pp. 794-802, 2022
  • The recovery of reasonable depth information from different scenes is a popular topic in the field of computer vision. To generate depth maps with better details, we present an efficacious monocular depth prediction framework with coordinate attention and feature fusion. Specifically, the proposed framework contains attention, multi-scale, and feature fusion modules. The attention module refines features using coordinate attention to enhance the prediction, whereas the multi-scale module integrates useful low- and high-level contextual features at higher resolution. Moreover, we developed a feature fusion module that combines these heterogeneous features to generate high-quality depth outputs. We also designed a hybrid loss function that measures prediction errors in terms of depth and scale-invariant gradients, which helps preserve rich details. We conducted experiments on public RGB-D datasets, and the evaluation results show that the proposed scheme considerably enhances the accuracy of depth prediction, achieving 0.051 for log10 and 0.992 for δ < 1.25³ on the NYUv2 dataset.
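
For readers unfamiliar with coordinate attention, the sketch below is a minimal PyTorch version of the generic module: the feature map is pooled along height and width separately, and the two resulting attention maps re-weight the input. The paper's exact attention module may differ; the class name and `reduction` parameter are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Encodes dependencies along height and width separately, then re-weights
    the feature map with the two resulting direction-aware attention maps."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        xh = self.pool_h(x)                        # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)    # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(yh))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w
```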

An Approach to Improve the Contrast of Multi Scale Fusion Methods

  • Hwang, Tae Hun; Kim, Jin Heon
    • Journal of Multimedia Information System, Vol. 5, No. 2, pp. 87-90, 2018
  • Various approaches have been proposed to convert low dynamic range (LDR) images to high dynamic range (HDR). Among them, the Multi Scale Fusion (MSF) algorithm based on Laplacian pyramid decomposition is used in many applications and has demonstrated its usefulness. However, the pyramid fusion technique offers no means of controlling the luminance component, because the total number of pixels decreases as the pyramid rises to the upper layers. In this paper, we extract the reflected light of the image based on Retinex theory and generate a weight map by adjusting the reflection component. This weight map is applied to achieve an MSF-like effect during image fusion and provides an opportunity to control the brightness component. Experimental results show that the proposed method maintains the total number of pixels and exhibits effects similar to the conventional method.
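
This abstract builds on Laplacian-pyramid multi-scale fusion driven by per-image weight maps. The sketch below shows only that generic MSF blending for grayscale float images; the Retinex-based extraction of the reflection component and the weight-map adjustment are not reproduced, so the weight maps are assumed to be supplied by the caller.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
           for i in range(levels - 1)]
    lap.append(g[-1])                                     # keep the coarsest level as-is
    return lap

def msf_fuse(images, weight_maps, levels=4):
    """images, weight_maps: lists of float32 grayscale arrays of the same shape."""
    norm = np.sum(weight_maps, axis=0) + 1e-12            # per-pixel weight normalisation
    fused = None
    for img, w in zip(images, weight_maps):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(w / norm, levels)           # smooth the weights per level
        contrib = [l * g for l, g in zip(lp, gp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):                 # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=(fused[lvl].shape[1], fused[lvl].shape[0])) + fused[lvl]
    return out
```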

Multi-parametric MRIs based assessment of Hepatocellular Carcinoma Differentiation with Multi-scale ResNet

  • Jia, Xibin; Xiao, Yujie; Yang, Dawei; Yang, Zhenghan; Lu, Chen
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 10, pp. 5179-5196, 2019
  • To explore an effective non-invasive medical imaging diagnostic approach for hepatocellular carcinoma (HCC), we propose a method that combines multi-parametric data fusion, transfer learning, and multi-scale deep feature extraction. Firstly, to make full use of the complementary information and enhance the contribution of the different modalities, i.e., multi-parametric MRI images, in lesion diagnosis, we propose a data-level fusion strategy. Secondly, taking the fused data as input, a multi-scale residual neural network with Spatial Pyramid Pooling (SPP) is used to learn discriminative feature representations. Thirdly, to mitigate the impact of the small number of training samples, we pre-train the proposed multi-scale residual network on a natural-image dataset and fine-tune it with the selected multi-parametric MRI images. Comparative experiments on a dataset of clinical cases show that the proposed approach, employing these multiple strategies, achieves the highest accuracy of 0.847±0.023 on the HCC differentiation classification problem. For discriminating HCC lesions from non-tumor areas, it achieves accuracy, sensitivity, specificity, and AUC (area under the ROC curve) of 0.981±0.002, 0.981±0.002, 0.991±0.007, and 0.999±0.0008, respectively.
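
Spatial Pyramid Pooling, which this abstract attaches to the multi-scale residual network, can be sketched in a few lines of PyTorch: the feature map is pooled into fixed grids at several scales so that inputs of different spatial sizes yield a fixed-length vector. The bin sizes below are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pools a feature map into fixed-size grids at several scales and concatenates
    the flattened results, giving a fixed-length vector regardless of input size."""
    def __init__(self, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins

    def forward(self, x):                       # x: (B, C, H, W) with arbitrary H, W
        b = x.shape[0]
        feats = [F.adaptive_max_pool2d(x, output_size=s).view(b, -1) for s in self.bins]
        return torch.cat(feats, dim=1)          # (B, C * sum(s * s for s in bins))
```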

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu; Haihai Wei; Li Ma; Qingji Xue; Yonghui Fu
    • Journal of Information Processing Systems, Vol. 19, No. 4, pp. 427-438, 2023
  • Many works have indicated that single image super-resolution (SISR) models trained on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of the multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interactions; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of a scene text recognizer (ASTER) by 1.2% compared with the baseline text super-resolution network.
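
A minimal sketch of the kind of channel-plus-spatial attention this abstract mentions is given below: an SE-style channel branch followed by a convolutional spatial branch on a residual path. This is a generic illustration, not the paper's multi-scale residual attention module; the class name and `reduction` parameter are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention applied on a residual branch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(          # channel attention (SE-style)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(         # spatial attention over pooled maps
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = x * self.channel_mlp(x)
        pooled = torch.cat([y.mean(dim=1, keepdim=True),
                            y.max(dim=1, keepdim=True).values], dim=1)
        y = y * self.spatial_conv(pooled)
        return x + y                               # residual connection
```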

캐스케이드 융합 기반 다중 스케일 열화상 향상 기법 (Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image)

  • 이경재
    • 한국전자통신학회논문지, Vol. 19, No. 1, pp. 301-307, 2024
  • This study proposes a new cascade fusion architecture for enhancing thermal images under various scale conditions. Methods designed for a specific scale have been limited when processing thermal images at multiple scales. To overcome this, this paper presents a unified framework based on a cascade feature fusion technique that exploits multi-scale representations. By sequentially fusing confidence maps from different scales, scale-agnostic learning becomes possible. The proposed architecture consists of convolutional neural networks trained end-to-end to strengthen inter-scale dependencies. Experimental results confirm that the proposed method outperforms existing multi-scale thermal image enhancement methods. Performance analysis on the test dataset also shows consistent improvements in image quality metrics, demonstrating that the cascade fusion design enables robust generalization across scales and more efficient cross-scale representation learning.
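
One possible reading of the coarse-to-fine confidence-map fusion described above is sketched below: the running fused map is up-sampled and merged with the next finer map by a 1x1 convolution. The module structure and names (`CascadeFusion`, `merge`) are assumptions made for illustration, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeFusion(nn.Module):
    """Sequentially fuses per-scale maps from coarse to fine: the running fused map
    is up-sampled and merged with the next finer map at each step."""
    def __init__(self, channels, num_scales):
        super().__init__()
        self.merge = nn.ModuleList([
            nn.Conv2d(2 * channels, channels, kernel_size=1)
            for _ in range(num_scales - 1)
        ])

    def forward(self, maps):
        """maps: list of tensors ordered coarse -> fine, each (B, C, H_i, W_i)."""
        fused = maps[0]
        for conv, finer in zip(self.merge, maps[1:]):
            up = F.interpolate(fused, size=finer.shape[-2:],
                               mode="bilinear", align_corners=False)
            fused = conv(torch.cat([up, finer], dim=1))
        return fused
```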

Research on the Multi-Focus Image Fusion Method Based on the Lifting Stationary Wavelet Transform

  • Hu, Kaiqun; Feng, Xin
    • Journal of Information Processing Systems, Vol. 14, No. 5, pp. 1293-1300, 2018
  • To address the drawbacks of multi-scale geometric analysis methods in image fusion, such as loss of definition and complex rule selection, an improved multi-focus image fusion method is proposed. First, an initial fused image is quickly obtained with the lifting stationary wavelet transform, and a simple normalized cut is performed on it to obtain the segmented regions. Then, the original images are subjected to the NSCT transform and the absolute values of the high-frequency coefficients in each segmented region are calculated. Finally, the region with the largest absolute value is selected as the fused region, and the fused multi-focus image is obtained by traversing all segmented regions. Numerical experiments show that the proposed algorithm not only simplifies the selection of fusion rules but also avoids loss of definition, demonstrating its validity.
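
A stripped-down stationary-wavelet fusion in the spirit of the first step can be written with PyWavelets, as below. It replaces the paper's normalized-cut segmentation and NSCT-based region selection with a simple per-coefficient max-absolute rule, so it is only a baseline sketch; note that `pywt.swt2` requires the image sides to be divisible by 2**level.

```python
import numpy as np
import pywt

def swt_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered grayscale images of identical shape whose sides
    are divisible by 2**level (a requirement of the stationary transform)."""
    ca = pywt.swt2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.swt2(img_b.astype(np.float64), wavelet, level=level)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs rule for details
    fused = []
    for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
        fused.append((0.5 * (aA + bA),                           # average the approximations
                      (pick(aH, bH), pick(aV, bV), pick(aD, bD))))
    return pywt.iswt2(fused, wavelet)
```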

Multi-Scale Dilation Convolution Feature Fusion (MsDC-FF) Technique for CNN-Based Black Ice Detection

  • Sun-Kyoung KANG
    • 한국인공지능학회지, Vol. 11, No. 3, pp. 17-22, 2023
  • In this paper, we propose a black ice detection system using Convolutional Neural Networks (CNNs). Black ice poses a serious threat to road safety, particularly in winter conditions. To address this problem, we introduce a CNN-based encoder-decoder architecture specifically designed for real-time black ice detection from thermal images. To train the network, we set up a specialized experimental platform to capture thermal images of various black ice formations on diverse road surfaces, including cement and asphalt, which allows us to curate a comprehensive dataset of thermal road black ice images for training and evaluation. Additionally, to enhance detection accuracy, we propose a multi-scale dilation convolution feature fusion (MsDC-FF) technique that dynamically adjusts the dilation ratios according to the input image resolution, improving the network's ability to capture fine-grained details. Experimental results demonstrate the superior performance of the proposed model compared with conventional image segmentation models: our model achieved an mIoU of 95.93%, while LinkNet achieved 95.39%. The proposed model therefore offers a promising solution for real-time black ice detection and can enhance road safety during winter conditions.
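
The core of a multi-scale dilated-convolution fusion block can be sketched as parallel 3x3 convolutions with different dilation rates whose outputs are concatenated and projected by a 1x1 convolution. The fixed dilation set below is an assumption; the paper's MsDC-FF additionally adapts the rates to the input resolution, which is not reproduced here.

```python
import torch
import torch.nn as nn

class MsDCFF(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused by
    concatenation and a 1x1 projection (the dilation set is illustrative)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))
```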

딥 CNN에서의 Different Scale Information Fusion (DSIF)의 영향에 대한 이해 (Understanding the Effect of Different Scale Information Fusion in Deep Convolutional Neural Networks)

  • Liu, Kai; Cheema, Usman; Moon, Seungbin
    • 한국정보처리학회 학술대회논문집, 한국정보처리학회 2019년도 추계학술발표대회, pp. 1004-1006, 2019
  • Information at different scales is an important component of computer vision systems. Recently, there has been considerable research on utilizing multi-scale information to solve scale-invariance problems, for example GoogLeNet and FPN. In this paper, we introduce the notion of different scale information fusion (DSIF) and show that it has a significant effect on the performance of object recognition systems. We analyze DSIF in several architecture designs, along with the effect of nonlinear activations, dropout, sub-sampling, and skip connections on it. This leads to clear suggestions for how to choose the DSIF.
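
As a toy illustration of what "different scale information fusion" can look like inside an architecture, the sketch below fuses a half-resolution branch back into the full-resolution branch through a skip connection; where the nonlinearity sits is exactly the kind of design choice the paper analyzes. The block is hypothetical and is not one of the architectures studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSIFBlock(nn.Module):
    """Toy two-scale fusion: a coarse branch computed at half resolution is
    up-sampled and fused with the fine branch via addition and a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.fine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.coarse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        f = self.fine(x)
        c = self.coarse(F.avg_pool2d(x, 2))
        c = F.interpolate(c, size=x.shape[-2:], mode="bilinear", align_corners=False)
        # Placing the nonlinearity after the fusion is one of the choices analyzed.
        return F.relu(f + c + x)
```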

New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan; Wang, Xiang; Song, Xiao-dong
    • Journal of Electrical Engineering and Technology, Vol. 10, No. 6, pp. 2384-2392, 2015
  • The technical development and practical application of big data for health is a hot topic under the banner of big data, and big-data medical image fusion is one of its key problems. A new fusion approach with coding based on the Spherical Coordinate Domain (SCD) in a Wireless Sensor Network (WSN) for big-data medical images is proposed in this paper. In this approach, the three high-frequency coefficient subbands of the medical image in the wavelet domain are pre-processed; this pre-processing strategy reduces the redundancy of the big-data medical image. Firstly, the high-frequency coefficients are transformed to the spherical coordinate domain to reduce the correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded with an improved SPIHT algorithm. Finally, the image is fused and reconstructed based on its multi-scale edges. Experimental results indicate that the novel approach is effective and very useful for the transmission of big-data medical images, especially in wireless environments.
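
The spherical-coordinate mapping of the three high-frequency subbands can be illustrated directly with NumPy: the (cH, cV, cD) triplet at each pixel is treated as a Cartesian vector and converted to a magnitude and two angles, after which a simple soft threshold on the magnitude stands in for the MSMP-controlled shrinkage. The MSMP itself and the improved SPIHT coder are not reproduced, and the function names below are introduced only for this sketch.

```python
import numpy as np

def to_spherical(cH, cV, cD):
    """Treat the three high-frequency subbands as per-pixel Cartesian components
    and convert them to spherical coordinates (magnitude r, angles theta, phi)."""
    r = np.sqrt(cH**2 + cV**2 + cD**2)
    theta = np.arccos(np.divide(cD, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(cV, cH)
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Inverse mapping back to the three subbands."""
    return (r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta))

def shrink_magnitude(r, threshold):
    """Soft-threshold the magnitude so small, mostly-noise coefficients vanish
    (a simple stand-in for the MSMP-controlled shrinkage in the paper)."""
    return np.maximum(r - threshold, 0.0)
```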