• Title/Summary/Keyword: blurring images

An Adaptive Image Restoration Algorithm Using Edge Detection Based on the Block FFT (블록 FFT에 기초한 에지검출을 이용한 적응적 영상복원 알고리즘)

  • Ahn, Do-Rang;Lee, Dong-Wook
    • Proceedings of the KIEE Conference / 1998.11b / pp.569-571 / 1998
  • In this paper, we propose a method of restoring blurred images with an edge-sensitive adaptive filter. The edge direction is estimated using the properties of the 2-D block FFT. Reducing the blur caused by noise added during image transfer and by lens defocus when shooting a fast-moving object is very important. To suppress this degradation effectively, we can use the edge information obtained by processing the blurred images. The proposed algorithm estimates both the existence and the direction of an edge. Based on the estimated edge direction, we choose the appropriate edge-sensitive adaptive filter, which yields better images than methods that ignore edge direction. The performance of the proposed algorithm is shown in the simulation results.
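
The block-FFT edge-direction estimate lends itself to a short illustration. Below is a minimal sketch in plain NumPy, with an assumed energy-comparison heuristic, block size, and threshold that are illustrative only, not the authors' exact filter:

```python
# Hedged sketch: estimate a dominant edge direction per block from the 2-D FFT
# magnitude, then smooth only along that direction so edges are preserved.
import numpy as np

def block_edge_direction(block):
    """Return 'horizontal', 'vertical', or 'flat' from the block's FFT energy."""
    spec = np.abs(np.fft.fft2(block - block.mean()))
    h, w = spec.shape
    # Energy along the vertical frequency axis -> intensity varies across rows,
    # i.e. a horizontal edge (assumed heuristic, not the paper's criterion).
    vert_energy = spec[1:h // 2, 0].sum()
    horz_energy = spec[0, 1:w // 2].sum()
    if max(vert_energy, horz_energy) < 1e-6:   # illustrative flatness threshold
        return "flat"
    return "horizontal" if vert_energy > horz_energy else "vertical"

def directional_smooth(img, block_size=8):
    """Apply a 1-D mean filter along the estimated edge direction of each block."""
    out = img.astype(float).copy()
    for i in range(0, img.shape[0] - block_size + 1, block_size):
        for j in range(0, img.shape[1] - block_size + 1, block_size):
            blk = out[i:i + block_size, j:j + block_size]   # view into `out`
            d = block_edge_direction(blk)
            if d == "horizontal":      # smooth along rows (parallel to the edge)
                blk[:] = (np.roll(blk, 1, 1) + blk + np.roll(blk, -1, 1)) / 3.0
            elif d == "vertical":      # smooth along columns
                blk[:] = (np.roll(blk, 1, 0) + blk + np.roll(blk, -1, 0)) / 3.0
            else:                      # flat block: simple cross-shaped average
                blk[:] = (blk + np.roll(blk, 1, 0) + np.roll(blk, -1, 0)
                          + np.roll(blk, 1, 1) + np.roll(blk, -1, 1)) / 5.0
    return out
```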


A Study of Fusion Image System and Simulation based on Mutual Information (상호정보량에 의한 이미지 융합시스템 및 시뮬레이션에 관한 연구)

  • Kim, Yonggil;Kim, Chul;Moon, Kyungil
    • Journal of The Korean Association of Information Education / v.19 no.1 / pp.139-148 / 2015
  • The purpose of image fusion is to combine the relevant information from a set of images into a single image, so that the fused image is more informative and complete than any of the input images. Image fusion techniques can improve the quality and broaden the applicability of such data; important applications include medical imaging, remote sensing, and robotics. In this paper, we suggest a new method to generate a fusion image using the close relation of image features obtained through maximum-entropy thresholding and mutual information. The method achieves better image registration than other image fusion methods when the inputs include blurred images.
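
The mutual-information measure at the core of this approach can be sketched briefly. A minimal version computed from a joint grey-level histogram, with a bin count chosen for illustration rather than taken from the paper:

```python
# Hedged sketch of mutual information between two equally sized images.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information I(A;B) in bits, from the joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(a, b)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(b)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A candidate registration or fusion result can then be ranked by how much mutual information it shares with the reference image.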

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in the images and estimates depth from their motion. Approaches using motion vectors suffer from occlusion or missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera model integrates a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains the depth from lens translation. Depth from lens translation is then formulated using feature points detected on the edges of the image blur; these feature points carry depth information derived from the width of the blur. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images compare the presented method with conventional depth from lens translation and demonstrate the validity and applicability of the proposed method to depth estimation.
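
The factorization step can be sketched in the spirit of Tomasi-Kanade rank-3 factorization; the paper's sequential update and its defocus-derived feature points are not reproduced here, and the metric upgrade is omitted:

```python
# Hedged sketch of the rank-3 factorization underlying structure from motion.
import numpy as np

def factorize_tracks(W):
    """W: 2F x P measurement matrix of feature tracks (F frames, P points).
    Returns an affine motion matrix (2F x 3) and shape matrix (3 x P)."""
    W = W - W.mean(axis=1, keepdims=True)      # register tracks to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]   # keep the rank-3 subspace
    motion = U3 * np.sqrt(s3)                  # camera matrices, up to a 3x3
    shape = np.sqrt(s3)[:, None] * Vt3         # ambiguity (metric upgrade omitted)
    return motion, shape
```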


Evaluation of Denoising Filters Based on Edge Locations

  • Seo, Suyoung
    • Korean Journal of Remote Sensing / v.36 no.4 / pp.503-513 / 2020
  • This paper presents a method to evaluate denoising filters based on the edge locations in their denoised images. Image quality assessment has often been performed using the structural similarity (SSIM) index. However, SSIM does not clearly convey the geometric accuracy of features in denoised images. Thus, in this paper, a subpixel edge localization method based on adaptive weighting of gradients is used to obtain the subpixel edge locations in the ground-truth image, the noisy images, and the denoised images. This paper then proposes to evaluate the geometric accuracy of the edge locations using the root mean square error (RMSE) and jaggedness with respect to the ground-truth locations. Jaggedness is a measure proposed in this study to quantify the stability of the distribution of edge locations. The tested denoising filters are anisotropic diffusion (AF), the bilateral filter, the guided filter, the weighted guided filter, the weighted-mean-of-patches filter, and a smoothing filter (SF). SF is a simple filter that smooths images by applying Gaussian blurring to the noisy image. Experiments were performed with a set of simulated images and natural images. The experimental results show that AF and SF recovered edge locations more accurately than the other tested filters in terms of SSIM, RMSE, and jaggedness, and that SF produced better results than AF in terms of jaggedness.
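
The RMSE part of the evaluation follows the usual definition and can be sketched directly; the jaggedness function below is only a hypothetical stand-in (spread of successive deviations along the edge), since the paper defines its own measure:

```python
# Hedged sketch of geometric accuracy metrics for matched subpixel edge points.
import numpy as np

def edge_location_rmse(est, ref):
    """RMSE between estimated and ground-truth edge locations (N x 2 arrays)."""
    d = np.linalg.norm(est - ref, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def jaggedness_proxy(est, ref):
    """Hypothetical stand-in: spread of successive deviations along the edge."""
    d = np.linalg.norm(est - ref, axis=1)
    return float(np.std(np.diff(d)))
```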

Partial Solution for Concomitant Gradient Field in Ultra-low Magnetic Field: Correction of Distortion Artifact

  • Lee, Seong-Joo;Shim, Jeong Hyun
    • Journal of the Korean Magnetic Resonance Society / v.24 no.3 / pp.66-69 / 2020
  • In ultra-low-field magnetic resonance imaging (ULF-MRI), the strength of the static magnetic field can be comparable to that of the gradient field. In that case, the gradient field is accompanied by a concomitant gradient field, which produces distortion and blurring artifacts in MR images. Here, we focused on the distortion artifact and derived equations capable of correcting it. Their usefulness was confirmed through corrections of both simulated and experimental images. This solution will be effective for acquiring more accurate images at low and/or ultra-low magnetic fields.
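
The correction equations themselves are not given in the abstract. For orientation only, the standard lowest-order concomitant (Maxwell) field that accompanies linear gradients G_x, G_y, G_z in a static field B_0 is usually written as below; this is the textbook expression, not necessarily the exact form derived in the paper:

```latex
B_c(x,y,z) \;\approx\; \frac{1}{2B_0}\left[\left(G_x z - \frac{G_z x}{2}\right)^{2}
           + \left(G_y z - \frac{G_z y}{2}\right)^{2}\right]
```

Because the term scales as 1/B_0, it becomes significant precisely in the ultra-low-field regime the paper addresses.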

Perceptibility and Acceptability Tests for the Quality Changes of Complex Images (복합 이미지에 대한 Perceptibility와 Acceptability 측정)

  • Kim Dong Ho;Park Seung Ok;Kim Hong Seok;Kim Yeon Jin
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.80-81 / 2003
  • Psychophysical experiments were carried out by a panel of eleven observers on image-difference pairs displayed on an LCD (liquid crystal display) to quantify the quality changes imparted to complex images by typical image-processing operations. There were six different kinds of pairs according to their original image. The three types of visual tests performed were: pair-to-pair comparison of image differences, to order the differences introduced by single or combined changes in image lightness, contrast, blurring, and sharpening; and perceptibility and acceptability tests using ascending or descending series of image-difference pairs ordered according to the size of their visual differences. (omitted)


Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, caused by absorption and scattering as light propagates through the underwater environment. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it attends to the channels containing important information. Detail information is preserved by real-time superposition with the feature information. Experimental results demonstrate that the proposed method produces results with correct colors and complete details, and outperforms existing methods in quantitative metrics.
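
The local binary pattern used by the detail branch is a standard texture descriptor and can be sketched in a few lines of plain NumPy; the enhancement network itself is not reproduced here:

```python
# Hedged sketch of the 8-neighbour local binary pattern (LBP) for a 2-D image.
import numpy as np

def lbp8(img):
    """Return the 8-bit LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]                         # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set one bit per neighbour
    return code
```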

Attack Detection on Images Based on DCT-Based Features

  • Nirin Thanirat;Sudsanguan Ngamsuriyaroj
    • Asia Pacific Journal of Information Systems / v.31 no.3 / pp.335-357 / 2021
  • As reproduction of images can be done with ease, copy detection has become increasingly important. In the duplication process, image modifications are likely to occur, and some alterations are deliberate and can be viewed as attacks. A wide range of copy detection techniques has been proposed. In our study, content-based copy detection is employed, which applies DCT-based features of images, namely pixel values, edges, texture information, and frequency-domain component distribution. Experiments are carried out to evaluate the robustness and sensitivity of the DCT-based features under attacks. As different types of DCT-based features hold different pieces of information, how features and attacks are related can be seen from their robustness and sensitivity. Rather than searching for proper features, the use of robustness and sensitivity is proposed here to reveal how the attacked features change when an image attack occurs. The experiments show that, out of ten attacks, the neural networks are able to detect seven, namely Gaussian noise, salt-and-pepper noise, gamma correction (high), blurring, resizing (enlargement), compression, and rotation, mostly through their sensitive features.
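
One plausible DCT-based feature of the kind such studies evaluate is the block of low-frequency 2-D DCT coefficients. A minimal sketch using SciPy; the paper's full feature set is richer and its exact construction is not reproduced:

```python
# Hedged sketch: low-frequency 2-D DCT coefficients as a compact image feature.
import numpy as np
from scipy.fftpack import dct

def dct_lowfreq_features(img, keep=8):
    """Return the top-left keep x keep block of 2-D DCT coefficients as a vector."""
    img = img.astype(float)
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:keep, :keep].ravel()
```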

Multi-stage Image Restoration for High Resolution Panchromatic Imagery (고해상도 범색 영상을 위한 다중 단계 영상 복원)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing / v.32 no.6 / pp.551-566 / 2016
  • In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during image acquisition. The degradation results in noise and blurring, which hamper the identification and extraction of useful information from the image data. In particular, the degradation adversely affects the analysis of images collected over scenes with complicated surface structure, such as urban areas. This study proposes a multi-stage image restoration method to improve the accuracy of detailed analysis for images collected over complicated scenes. The proposed method assumes Gaussian additive noise, a Markov random field modeling spatial continuity, and blurring proportional to the distance between pixels. Point-Jacobian Iteration Maximum A Posteriori (PJI-MAP) estimation is employed to restore the degraded image. The multi-stage process includes image segmentation that performs region merging after pixel linking; a dissimilarity coefficient combining homogeneity and contrast is proposed for the segmentation. In this study, the proposed method was quantitatively evaluated using simulated data and was also applied to two panchromatic images of very high resolution: Dubaisat-2 data of 1 m resolution from LA, USA, and KOMPSAT3 data of 0.7 m resolution from Daejeon on the Korean peninsula. The experimental results imply that the method can improve analytical accuracy in the application of high-resolution panchromatic remote sensing imagery.
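
The flavour of a pixel-wise MAP update can be sketched with a Gaussian likelihood and a quadratic 4-neighbour MRF prior. This is a generic Jacobi-type iteration, not the paper's PJI-MAP estimator, and the distance-dependent blur model is omitted:

```python
# Hedged sketch: Jacobi-style MAP denoising with a Gaussian MRF smoothness prior.
import numpy as np

def jacobi_map_denoise(y, lam=0.5, iters=50):
    """Iteratively minimise ||x - y||^2 + lam * sum of squared 4-neighbour
    differences, one Jacobi sweep per iteration (periodic borders for brevity)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        nb_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                  + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x = (y + lam * nb_sum) / (1.0 + 4.0 * lam)   # closed-form pixel update
    return x
```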

Analysis of X-ray image Qualities -accuracy of shape and clearness of image using X-ray digital tomosynthesis (디지털 영상 합성에 의한 X선 단층 영상의 형상 정확도와 선명도 분석)

  • Roh, Yeong-Jun;Cho, Hyung-Suck;Kim, Hyeong-Cheol;Kim, Sung-Kwon
    • Journal of Institute of Control, Robotics and Systems / v.5 no.5 / pp.558-567 / 1999
  • X-ray laminography and DT (digital tomosynthesis), which can form cross-sectional images of 3-D objects, promise to be good solutions for inspecting interior defects of industrial products. DT is a kind of laminography technique, differing in that it synthesizes several projected images using digital memory and computation. The quality of images acquired from a DT system varies with the image synthesizing method, the number of images used in the synthesis, and the X-ray projection angles. In this paper, a new image synthesizing method named the 'log-root method' is proposed to obtain clear and accurate cross-sectional images; it can reduce both the artifacts and the blurring generated by materials outside the focal plane. To evaluate the quality of cross-sectional images, two evaluation criteria are defined: (1) shape accuracy and (2) clearness of the cross-sectional images. Based on these criteria, a series of simulations is performed, and the results show the superiority of the new synthesizing method over existing ones such as the averaging and minimum methods.
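
The baseline syntheses the paper compares against (averaging and pixel-wise minimum over registered projections) can be sketched as shift-and-add; the proposed log-root combination is defined in the paper and is not reproduced here. The integer shifts are an assumed simplification:

```python
# Hedged sketch of shift-and-add tomosynthesis with averaging or minimum combining.
import numpy as np

def shift_and_add(projections, shifts, mode="average"):
    """projections: list of 2-D arrays; shifts: per-projection (dy, dx) integer
    shifts that bring the chosen focal plane into register."""
    aligned = [np.roll(np.roll(p, dy, axis=0), dx, axis=1)
               for p, (dy, dx) in zip(projections, shifts)]
    stack = np.stack(aligned)
    return stack.mean(axis=0) if mode == "average" else stack.min(axis=0)
```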
