• Title/Abstract/Keyword: Image Fusion

Search results: 888

TEXTURE ANALYSIS, IMAGE FUSION AND KOMPSAT-1

  • Kressler, F.P.;Kim, Y.S.;Steinnocher, K.T.
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2002년도 Proceedings of International Symposium on Remote Sensing / pp.792-797 / 2002
  • In the following paper, two algorithms suitable for the analysis of panchromatic data such as that provided by KOMPSAT-1 are presented. One is a texture analysis, which is used to create a settlement mask based on the variation of gray values. The other is a fusion algorithm which allows the combination of high-resolution panchromatic data with medium-resolution multispectral data. The procedure developed for this purpose uses the spatial information present in the high-resolution image to spatially enhance the low-resolution image, while keeping the distortion of the multispectral information to a minimum. This makes it possible to use the fusion results for standard multispectral classification routines. The procedures presented here can be automated to a large extent, making them suitable for a standard processing routine of satellite data.

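The texture step in the abstract above relies on local gray-value variation to flag built-up areas. Below is a minimal illustrative sketch of such a variance-based texture measure; the window size, the threshold, and the function names (`local_std`, `settlement_mask`) are assumptions for illustration, not the authors' actual procedure.

```python
# Minimal sketch of a variance-based texture measure on a panchromatic band.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(pan: np.ndarray, win: int = 7) -> np.ndarray:
    """Local standard deviation of gray values in a win x win window."""
    pan = pan.astype(np.float64)
    mean = uniform_filter(pan, size=win)
    mean_sq = uniform_filter(pan * pan, size=win)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def settlement_mask(pan: np.ndarray, threshold: float) -> np.ndarray:
    """High local gray-value variation is used here as a proxy for built-up areas."""
    return local_std(pan) > threshold
```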

LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성 (LFFCNN: Multi-focus Image Synthesis in Light Field Camera)

  • 김형식;남가빈;김영섭
    • 반도체디스플레이기술학회지 / Vol. 22, No. 3 / pp.149-154 / 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses multi-focus images into a single all-in-focus image but also offers more efficient and robust focus fusion compared to existing methods.

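The abstract names SPP (Spatial Pyramid Pooling) as the multi-scale component of LFFCNN's feature extractor. The following PyTorch sketch shows one common way such a block can be written; the pooling pyramid (1/2/4), the 1x1 compression branches, and the class name `SPPBlock` are assumptions, not the authors' configuration.

```python
# Hypothetical spatial-pyramid-pooling block for multi-scale feature extraction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    def __init__(self, channels: int, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # 1x1 convolutions compress each pooled branch before concatenation.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels // len(pool_sizes), kernel_size=1)
            for _ in pool_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = [x]
        for size, conv in zip(self.pool_sizes, self.branches):
            pooled = F.adaptive_avg_pool2d(x, output_size=size)
            # Upsample each pyramid level back to the input resolution.
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)  # multi-scale context concatenated
```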

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • 대한원격탐사학회지 / Vol. 26, No. 3 / pp.305-316 / 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250m, 500m, and 1km spatial resolution. However, the 500m and 1km MODIS data exhibit limitations when such low-resolution data are applied to small areas with complex land cover types. In this study, we produce seven 250m spectral bands by fusing the two MODIS 250m bands with the five 500m bands. In order to recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using various evaluation indicators such as correlation, relative difference of means, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among the various fusion methods, the local mean and variance matching method provides the best fusion result for both the visual interpretation and the evaluation indicators. The 250m MODIS fusion algorithm may be used to effectively improve the accuracy of various MODIS land products.
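
Of the seven fusion methods compared in this paper, the Brovey transform is the simplest to state. The sketch below is a minimal NumPy version of it together with the correlation indicator used in the evaluation; the function names and the assumption that the multispectral bands have already been resampled to the 250m grid are illustrative.

```python
# Minimal sketch of the Brovey transform and a correlation indicator.
import numpy as np

def brovey_fusion(ms: np.ndarray, hires: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """ms: (bands, H, W) low-resolution bands resampled to the 250m grid;
    hires: a 250m sharpening band on the same grid."""
    ms = ms.astype(np.float64)
    intensity = ms.sum(axis=0) + eps                         # per-pixel band sum
    return ms * (hires.astype(np.float64) / intensity)[None, ...]

def correlation(fused_band: np.ndarray, reference_band: np.ndarray) -> float:
    """Correlation indicator between a fused band and its reference band."""
    return float(np.corrcoef(fused_band.ravel(), reference_band.ravel())[0, 1])
```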

수동형 멀리미터파 영상과 가시 영상과의 정합 및 융합에 관한 연구 (Image Registration and Fusion between Passive Millimeter Wave Images and Visual Images)

  • 이형;이동수;염석원;손정영;블라드미르 구신;김신환
    • 한국통신학회논문지 / Vol. 36, No. 6C / pp.349-354 / 2011
  • Passive millimeter wave imaging can detect objects concealed under clothing and, thanks to its low attenuation, can acquire usable images even in adverse weather. However, the spatial resolution of the imaging system is low, the received signal is weak and therefore strongly affected by noise, and the image quality depends on the temperature resolution of the system. In this paper, we study the registration of passive millimeter wave images with images acquired from an ordinary camera, and image fusion for visualizing concealed objects. Registration is performed with an affine transform that maximizes the cross-correlation between the extracted body boundaries, and fusion consists of three stages: a discrete wavelet transform for image decomposition, a fusion rule, and an inverse discrete wavelet transform for image reconstruction. In the experiments, we show that various metallic or non-metallic objects such as a knife, an axe, cosmetics, and a mobile phone are detected by the passive millimeter wave imaging system. We also show that the registered and fused results can simultaneously visualize the subject's identity information, such as the face and clothing obtained from the visual image, and the concealed-object information obtained from the millimeter wave image.
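
A minimal sketch of the three-stage wavelet fusion described above (DWT decomposition, a fusion rule, inverse DWT), assuming the two images have already been registered and have the same size. The specific rule shown, averaging the approximation coefficients and keeping the larger-magnitude detail coefficients, is a common choice and an assumption, not necessarily the authors' exact rule; it uses the PyWavelets package.

```python
# Minimal DWT-based image fusion: decompose, apply a fusion rule, reconstruct.
import numpy as np
import pywt

def dwt_fuse(img_a: np.ndarray, img_b: np.ndarray,
             wavelet: str = "db2", level: int = 2) -> np.ndarray:
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                   # average the approximations
    for da, db in zip(ca[1:], cb[1:]):                # per-level detail tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))  # max-magnitude detail rule
    return pywt.waverec2(fused, wavelet)              # inverse DWT reconstruction
```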

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi;Seo, Jung-Min;Kim, Soo Mee
    • 한국해양공학회지 / Vol. 36, No. 1 / pp.32-40 / 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation according to the wavelength of light and reflection by very small floating objects cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images using the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, UIQM (underwater image quality measure), which evaluates the enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation, were calculated. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN qualitatively and quantitatively improved the image quality in various underwater environments, whereas FUnIE-GAN showed performance differences depending on the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to be used for monitoring underwater work and for the autonomous operation of unmanned vehicles by improving the visibility of underwater scenes.
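
The comparison in this paper is driven by the no-reference scores UIQM and UCIQE. The sketch below is a deliberately simplified stand-in that scores colorfulness, contrast, and saturation of an RGB image so the ranking workflow can be illustrated; it is not the published UIQM/UCIQE formulas, whose exact definitions and coefficients should be taken from the original papers.

```python
# Simplified no-reference quality score (NOT the published UIQM/UCIQE).
import numpy as np

def simple_underwater_score(rgb: np.ndarray) -> float:
    """rgb: float array in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    colorfulness = np.std(r - g) + np.std(0.5 * (r + g) - b)    # opponent-channel spread
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    contrast = np.percentile(luminance, 99) - np.percentile(luminance, 1)
    saturation = np.mean(rgb.max(axis=-1) - rgb.min(axis=-1))
    return float(colorfulness + contrast + saturation)
```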

Image Fusion for Improving Classification

  • Lee, Dong-Cheon;Kim, Jeong-Woo;Kwon, Jay-Hyoun;Kim, Chung;Park, Ki-Surk
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS / pp.1464-1466 / 2003
  • Classification of satellite images provides information about land cover and/or land use. The quality of the classification result depends mainly on the spatial and spectral resolutions of the images. In this study, image fusion, in terms of resolution merging and band integration of multi-source satellite images (Landsat ETM+ and Ikonos), was carried out to improve classification. Resolution merging and band integration can generate high-resolution imagery with more spectral bands. Precise image co-registration is required to remove geometric distortion between images from different sources. A combination of unsupervised and supervised classification of the fused imagery was implemented to improve the classification. 3D display of the results was made possible by combining a DEM with the classification result so that interpretability could be improved.

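A minimal sketch of the band-integration idea described above: co-registered multispectral bands from two sensors are brought onto a common grid and stacked so that a standard classifier can use all bands together. The resampling approach, the cropping guard, and the function name `integrate_bands` are assumptions for illustration, and the inputs are assumed to be already co-registered.

```python
# Minimal band-integration sketch for two co-registered sensors.
import numpy as np
from scipy.ndimage import zoom

def integrate_bands(landsat_ms: np.ndarray, ikonos_ms: np.ndarray,
                    scale: float) -> np.ndarray:
    """landsat_ms, ikonos_ms: (bands, H, W) arrays; scale upsamples the coarser
    bands onto the finer grid (assumed to produce matching grid sizes)."""
    upsampled = np.stack([zoom(band, scale, order=1) for band in landsat_ms])
    # Crop any one-pixel rounding excess so the grids align exactly.
    upsampled = upsampled[:, :ikonos_ms.shape[1], :ikonos_ms.shape[2]]
    return np.concatenate([upsampled, ikonos_ms], axis=0)  # integrated band stack
```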

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, which are caused by two processes that affect the propagation of light in the underwater environment: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and local binary patterns (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to a high-quality underwater image. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information. Detail information is preserved by superimposing it on the feature information in real time. Experimental results demonstrate that the proposed method produces results with correct colors and complete details, and outperforms existing methods in quantitative metrics.
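
A hypothetical sketch of a squeeze-and-excitation style channel attention block, the kind of mechanism the abstract says is used to emphasize channels carrying important information; the reduction ratio and the class name `ChannelAttention` are assumptions, not the authors' design.

```python
# Hypothetical channel attention block (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pooling -> per-channel weights in (0, 1).
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight the channels
```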

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah;Chai, Wang Yin;See, Chai Soo;Khan, Amjad
    • Journal of Information Processing Systems / Vol. 12, No. 1 / pp.35-45 / 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and lack of sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two basic steps. First, it applies a boundary-division technique to detect pixels corrupted by Poisson and impulse noise and then uses a Wiener filter approach to restore those corrupted pixels. Second, it applies a sharpening technique to the same degraded X-ray image. It thus has two source X-ray images, each of which preserves its own enhancement effect. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. The experimental results show that the proposed algorithm successfully combines the merits of the Wiener filter and sharpening and achieves significant proficiency in the enhancement of degraded X-ray images exhibiting Poisson noise, blurriness, and fine edge detail.
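
A minimal sketch of the overall scheme described above: two source images are built from one degraded X-ray image (a noise-restored version and a sharpened version) and their wavelet coefficients are fused. The paper's boundary-division noise detection is not reproduced here; SciPy's generic Wiener filter and a simple unsharp mask stand in for it, and the fusion rule shown (denoised approximation, larger-magnitude details) is an assumption.

```python
# Sketch: Wiener-filtered and sharpened sources fused in the wavelet domain.
import numpy as np
import pywt
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter

def enhance_xray(img: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    img = img.astype(np.float64)
    denoised = wiener(img, mysize=5)                                  # source 1: noise suppressed
    sharpened = img + 1.5 * (img - gaussian_filter(img, sigma=2.0))   # source 2: unsharp mask
    cd = pywt.wavedec2(denoised, wavelet, level=level)
    cs = pywt.wavedec2(sharpened, wavelet, level=level)
    fused = [cd[0]]                                   # keep the smoother approximation
    for dd, ds in zip(cd[1:], cs[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dd, ds)))  # keep stronger edge details
    return pywt.waverec2(fused, wavelet)
```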

Image fusion technique using flat panel detector rotational angiography for transvenous embolization of intracranial dural arteriovenous fistula

  • Jai Ho Choi;Yong Sam Shin;Bum-soo Kim
    • Journal of Cerebrovascular and Endovascular Neurosurgery / Vol. 25, No. 3 / pp.253-259 / 2023
  • Precise evaluation of the feeders, fistulous points, and draining veins plays a key role in the successful embolization of intracranial dural arteriovenous fistulas (DAVF). Digital subtraction angiography (DSA) is the gold-standard diagnostic tool for assessing the exact angioarchitecture of DAVFs. With the advent of new image post-processing techniques, we have lately been able to apply image fusion techniques to two different image sets obtained with flat panel detector rotational angiography. This new technique can provide additional and better pre-therapeutic information on DAVFs compared with conventional 2D and 3D angiography. In addition, it can be used during endovascular treatment to support accurate and precise navigation of the microcatheter and microguidewire inside the vessels and to identify the proper location of the microcatheter in the targeted shunting pouch. In this study, we briefly review the process of the image fusion technique and introduce our clinical application for treating DAVFs, focusing in particular on transvenous embolization.

캐스케이드 융합 기반 다중 스케일 열화상 향상 기법 (Cascade Fusion-Based Multi-Scale Enhancement of Thermal Image)

  • 이경재
    • 한국전자통신학회논문지 / Vol. 19, No. 1 / pp.301-307 / 2024
  • This study proposes a novel cascade fusion architecture for enhancing thermal images under various scale conditions. Methods designed for a specific scale have limitations in processing thermal images at multiple scales. To overcome this, this paper presents a unified framework based on a cascade feature fusion technique that exploits multi-scale representations. By sequentially fusing confidence maps of different scales, scale-agnostic learning becomes possible. The proposed architecture consists of convolutional neural networks trained end-to-end to reinforce inter-scale dependencies. Experimental results confirm that the proposed method outperforms existing multi-scale thermal image enhancement methods. In addition, performance analysis on the experimental dataset shows consistent improvements in image quality metrics, indicating that the cascade fusion design enables robust generalization across scales and contributes to more efficient cross-scale representation learning.
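
A hypothetical PyTorch sketch of the cascade idea described above: confidence maps predicted at progressively finer scales are fused sequentially, so each scale refines the upsampled output of the previous one. The module sizes, the use of plain convolutions, and the class name `CascadeFusion` are assumptions, not the paper's architecture.

```python
# Hypothetical cascade fusion of multi-scale confidence maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeFusion(nn.Module):
    def __init__(self, channels: int, num_scales: int = 3):
        super().__init__()
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)
            for _ in range(num_scales)
        )

    def forward(self, feats):
        """feats: list of (N, C, H_i, W_i) feature maps, coarsest first."""
        conf = torch.zeros(feats[0].shape[0], 1, *feats[0].shape[-2:],
                           device=feats[0].device)
        for f, fuse in zip(feats, self.fuse):
            conf = F.interpolate(conf, size=f.shape[-2:], mode="bilinear",
                                 align_corners=False)             # carry coarse confidence up
            conf = torch.sigmoid(fuse(torch.cat([f, conf], dim=1)))  # refine at this scale
        return conf
```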