• Title/Abstract/Keywords: blurring images

Search results: 191 items (processing time 0.022 s)

블록 FFT에 기초한 에지검출을 이용한 적응적 영상복원 알고리즘 (An Adaptive Image Restoration Algorithm Using Edge Detection Based on the Block FFT)

  • 안도랑;이동욱
    • 대한전기학회:학술대회논문집 / 대한전기학회 1998년도 추계학술대회 논문집 학회본부 B / pp.569-571 / 1998
  • In this paper, we propose a method for restoring blurred images with an edge-sensitive adaptive filter. The edge direction is estimated using the properties of the 2-D block FFT. Reducing the blurring caused by noise added during image transfer and by lens defocus when shooting a fast-moving object is very important. To remove this degradation effectively, we can use the edge information obtained by processing the blurred images. The proposed algorithm estimates both the existence and the direction of an edge. Based on the acquired edge direction, we choose the appropriate edge-sensitive adaptive filter, which yields better images than methods that do not consider edge direction. Simulation results demonstrate the performance of the proposed algorithm.

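As a rough illustration of the idea in the abstract above (not the authors' actual algorithm), the dominant edge orientation inside an image block can be estimated from the angular distribution of its 2-D FFT magnitude, since an edge concentrates spectral energy along the direction perpendicular to it. The block size and the energy-weighted circular mean used below are assumptions.

```python
import numpy as np

def block_edge_direction(block):
    """Estimate the dominant edge orientation (degrees) of a square image
    block from its 2-D FFT magnitude. Illustrative sketch only."""
    n = block.shape[0]
    # Remove the DC component so the mean intensity does not dominate.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean())))
    cy, cx = n // 2, n // 2
    ys, xs = np.mgrid[0:n, 0:n]
    # Angle of each frequency sample relative to the block centre.
    angles = np.arctan2(ys - cy, xs - cx).ravel()
    weights = spec.ravel()
    # Dominant spectral orientation: energy-weighted circular mean over 180 deg.
    theta_freq = 0.5 * np.arctan2(np.sum(weights * np.sin(2 * angles)),
                                  np.sum(weights * np.cos(2 * angles)))
    # Edges lie perpendicular to the dominant frequency direction.
    return np.degrees(theta_freq + np.pi / 2) % 180

# Example: a vertical step edge should report an orientation near 90 degrees.
blk = np.zeros((16, 16)); blk[:, 8:] = 1.0
print(round(block_edge_direction(blk)))
```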

상호정보량에 의한 이미지 융합시스템 및 시뮬레이션에 관한 연구 (A Study of Fusion Image System and Simulation based on Mutual Information)

  • 김용길;김철;문경일
    • 정보교육학회논문지 / Vol.19 No.1 / pp.139-148 / 2015
  • The purpose of image fusion is to combine the principal visual information present in several input images into a single, more informative and complete output image. Such image fusion techniques are being actively studied in fields such as medical imaging, remote sensing, and robotics. This paper proposes an approach that generates a fused image by estimating thresholds through maximum entropy, extracting feature vectors based on those thresholds, and estimating the close relationship between the feature vectors using mutual information. This fusion approach has the advantage of reducing the overall uncertainty of the image, and, furthermore, when blurred images are among those being fused, it achieves better image registration performance than other techniques.
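
For reference, the mutual information measure mentioned above can be estimated from the joint intensity histogram of two images; a minimal numpy sketch follows (the bin count is an assumption, and this is not the authors' fusion pipeline).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of img_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Example: an image shares more information with a noisy copy of itself
# than with an unrelated random image.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
print(mutual_information(a, a + 0.05 * rng.standard_normal(a.shape)) >
      mutual_information(a, rng.random((128, 128))))
```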

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many studies have addressed depth estimation using cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points. This approach relies on the correspondence of feature points detected in the images and estimates depth from the motion of those points. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. To capture this relationship, we first discuss the optical properties of the camera system, because image blur varies with camera parameter settings. The camera system is modeled by integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then formulated using feature points detected on the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images compare the presented method with conventional depth from lens translation; the results demonstrate its validity and show its applicability to depth estimation.

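The SVD factorization referred to above follows the general shape-from-motion idea of factorizing a measurement matrix of tracked feature points; a minimal rank-3 batch factorization sketch in numpy is shown below (this is the classic batch version, not the paper's sequential, defocus-aware variant, and the affine ambiguity is left unresolved).

```python
import numpy as np

def factorize_measurements(W):
    """Factor a 2F x P measurement matrix of feature tracks into a motion
    matrix (2F x 3) and a shape matrix (3 x P) via a rank-3 SVD."""
    # Centre each row so translation is removed from the measurements.
    W = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the three dominant singular values (rank-3 model).
    motion = U[:, :3] * np.sqrt(s[:3])
    shape = np.sqrt(s[:3])[:, None] * Vt[:3, :]
    return motion, shape

# Example: synthetic affine projections of a random 3-D point cloud.
rng = np.random.default_rng(1)
pts = rng.standard_normal((3, 30))                 # 30 scene points
frames = [rng.standard_normal((2, 3)) for _ in range(5)]
W = np.vstack([R @ pts for R in frames])           # 10 x 30 measurements
M, S = factorize_measurements(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))
```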

Evaluation of Denoising Filters Based on Edge Locations

  • Seo, Suyoung
    • 대한원격탐사학회지 / Vol.36 No.4 / pp.503-513 / 2020
  • This paper presents a method to evaluate denoising filters based on the edge locations in their denoised images. Image quality assessment has often been performed using structural similarity (SSIM). However, SSIM does not clearly reflect the geometric accuracy of features in denoised images. Thus, in this paper, a method that localizes edges with subpixel accuracy based on adaptive weighting of gradients is used to obtain the subpixel edge locations in the ground truth image, the noisy images, and the denoised images. This paper then proposes to evaluate the geometric accuracy of edge locations using root mean square error (RMSE) and jaggedness with respect to the ground truth locations. Jaggedness is a measure proposed in this study to quantify the stability of the distribution of edge locations. The tested denoising filters are anisotropic diffusion (AF), the bilateral filter, the guided filter, the weighted guided filter, the weighted mean of patches filter, and a smoothing filter (SF), a simple filter that smooths a noisy image by applying Gaussian blurring. Experiments were performed with a set of simulated images and natural images. The results show that AF and SF recovered edge locations more accurately than the other tested filters in terms of SSIM, RMSE, and jaggedness, and that SF produced better results than AF in terms of jaggedness.
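
As an illustration of the RMSE criterion described above (the jaggedness measure is specific to the paper and is not reproduced here), edge-location error against ground truth can be computed as follows; the nearest-neighbour matching of edge points is an assumption.

```python
import numpy as np

def edge_location_rmse(est_edges, gt_edges):
    """RMSE between estimated and ground-truth edge locations (N x 2 arrays
    of subpixel coordinates). Each estimated point is matched to its nearest
    ground-truth point; the matching strategy is an assumption."""
    est = np.asarray(est_edges, dtype=float)
    gt = np.asarray(gt_edges, dtype=float)
    # Pairwise distances between every estimated and ground-truth edge point.
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=2)
    nearest = d.min(axis=1)              # distance to closest true edge point
    return float(np.sqrt(np.mean(nearest ** 2)))

# Example: estimates jittered around a vertical ground-truth edge at x = 50.25.
gt = np.stack([np.arange(100, dtype=float), np.full(100, 50.25)], axis=1)
est = gt + np.random.default_rng(2).normal(0, 0.1, gt.shape)
print(edge_location_rmse(est, gt))       # on the order of 0.1 px
```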

Partial Solution for Concomitant Gradient Field in Ultra-low Magnetic Field: Correction of Distortion Artifact

  • Lee, Seong-Joo;Shim, Jeong Hyun
    • 한국자기공명학회논문지 / Vol.24 No.3 / pp.66-69 / 2020
  • In ultra-low field magnetic resonance imaging (ULF-MRI), the strength of the static magnetic field can be comparable to that of the gradient field. In that case, the gradient field is accompanied by a concomitant gradient field, which produces distortion and blurring artifacts in MR images. Here, we focus on the distortion artifact and derive equations capable of correcting it. Their usefulness was confirmed through corrections of both simulated and experimental images. This solution will be effective for acquiring more accurate images at low and/or ultra-low magnetic fields.
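
For context (this is general MRI background, not the specific correction derived in the paper): Maxwell's equations force a transverse companion component to accompany any linear gradient of $B_z$. For example, a readout gradient $B_z = B_0 + Gx$ together with the concomitant component $B_x = Gz$ satisfies $\nabla\cdot\mathbf{B}=0$ and $\nabla\times\mathbf{B}=0$, and the precession frequency then follows the full field magnitude

$$|\mathbf{B}| = \sqrt{(B_0 + Gx)^2 + (Gz)^2} \;\approx\; B_0 + Gx + \frac{G^2 z^2}{2B_0} \qquad (B_0 \gg G|z|).$$

At clinical field strengths the quadratic term is negligible, but at ultra-low field $B_0$ can be comparable to $G|z|$, so positions reconstructed under the linear approximation are geometrically distorted and must be remapped using the exact magnitude.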

복합 이미지에 대한 Perceptibility와 Acceptability 측정 (Perceptibility and Acceptability Tests for the Quality Changes of Complex Images)

  • Kim Dong Ho;Park Seung Ok;Kim Hong Seok;Kim Yeon Jin
    • 한국광학회:학술대회논문집 / 한국광학회 2003년도 하계학술발표회 / pp.80-81 / 2003
  • Psychophysical experiments were carried out by a panel of eleven observers on image-difference pairs displayed on an LCD (liquid crystal display) to quantify the quality changes of complex images introduced by typical image-processing operations. There were six different kinds of pairs according to their original image. Three types of visual tests were performed: pair-to-pair comparison of image differences, to order the differences between images introduced by single or combined changes in image lightness, contrast, blurring, and sharpening; and perceptibility and acceptability tests using ascending or descending series of image-difference pairs ordered according to the size of their visual differences. (omitted)


Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong;Li, Jinjiang;Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.16 No.2 / pp.544-564 / 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast because the propagation of light in the underwater environment is affected by two processes: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it pays more attention to the channels containing important information. Detail information is protected by real-time superposition with the feature information. Experimental results demonstrate that the method produces results with correct colors and complete details and outperforms existing methods in quantitative metrics.
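
The channel attention mechanism mentioned above is commonly implemented as a squeeze-and-excitation style block; a minimal PyTorch sketch is given below (the reduction ratio and layer sizes are assumptions, and this is not the paper's exact module).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a small bottleneck MLP that produces per-channel weights."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w   # reweight channels, emphasizing informative ones

# Example: attention weights applied to a batch of 16-channel feature maps.
feats = torch.randn(2, 16, 64, 64)
print(ChannelAttention(16)(feats).shape)   # torch.Size([2, 16, 64, 64])
```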

Attack Detection on Images Based on DCT-Based Features

  • Nirin Thanirat;Sudsanguan Ngamsuriyaroj
    • Asia pacific journal of information systems / Vol.31 No.3 / pp.335-357 / 2021
  • As images can be reproduced with ease, copy detection has become increasingly important. In the duplication process, image modifications are likely to occur, and some alterations are deliberate and can be viewed as attacks. A wide range of copy detection techniques has been proposed. In our study, content-based copy detection is employed, which applies DCT-based features of images, namely pixel values, edges, texture information, and frequency-domain component distribution. Experiments are carried out to evaluate the robustness and sensitivity of DCT-based features to attacks. As different types of DCT-based features hold different pieces of information, their robustness and sensitivity show how features and attacks are related. Rather than searching for the proper features, we propose using robustness and sensitivity to understand how the attacked features change when an image attack occurs. The experiments show that, out of ten attacks, the neural networks are able to detect seven, namely Gaussian noise, salt-and-pepper noise, gamma correction (high), blurring, resizing (big), compression, and rotation, mostly through their sensitive features.
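
As a simple illustration of the kind of DCT-based features discussed above (not the paper's exact feature set), an image can be divided into blocks and a handful of low-frequency DCT coefficients kept from each block; the block size and coefficient selection here are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(img, block=8, keep=4):
    """Extract low-frequency 2-D DCT coefficients from non-overlapping
    blocks of a grayscale image and concatenate them into a feature vector."""
    h, w = (np.array(img.shape) // block) * block
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = dctn(img[y:y + block, x:x + block], norm='ortho')
            feats.append(coeffs[:keep, :keep].ravel())  # low-frequency corner
    return np.concatenate(feats)

# Example: features of an image and a crudely blurred copy differ measurably.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
blurred = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0
print(np.linalg.norm(block_dct_features(img) - block_dct_features(blurred)) > 0)
```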

고해상도 범색 영상을 위한 다중 단계 영상 복원 (Multi-stage Image Restoration for High Resolution Panchromatic Imagery)

  • 이상훈
    • 대한원격탐사학회지 / Vol.32 No.6 / pp.551-566 / 2016
  • In satellite remote sensing, image quality degrades during acquisition owing to the sensor operating environment, producing blurring and noise that hinder identifying or extracting useful information from the observed data. This degradation particularly affects the analysis of imagery observed over scenes with dense structures, such as urban areas. This study proposes a multi-stage image restoration process to improve the quality of degraded high-resolution panchromatic imagery and thereby increase the accuracy of detailed analysis of the complex structures contained in the images. To model the degradation, the study assumes Gaussian additive noise, spatial connectivity defined by a Markov random field, and blurring proportional to the distance between the center pixel and its neighboring pixels. A Point-Jacobian Iteration Maximum A Posteriori (PJI-MAP) estimation method is proposed for noise reduction and deblurring, and image segmentation by region growing after pixel linking is used. For region growing, a dissimilarity coefficient that considers homogeneity and contrast simultaneously is proposed. Quantitative evaluation was carried out with simulated data, and the restoration was tested on two high-resolution panchromatic images: 1 m Dubaisat-2 data collected over the Los Angeles area of the United States and 0.7 m KOMPSAT3 data collected over the Daejeon area of the Korean Peninsula. The experimental results show that the proposed multi-stage restoration process can contribute to improved accuracy in the detailed analysis of complex structures in high-resolution data.
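
To illustrate the degradation model described above (Gaussian additive noise plus a distance-dependent blur), the sketch below runs a plain gradient-descent MAP-style deconvolution with a quadratic smoothness prior; it is only a stand-in for intuition, not the paper's PJI-MAP estimator, and the kernel, step size, and prior weight are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, laplace

def map_deblur(observed, kernel, lam=0.05, step=0.5, iters=100):
    """Gradient-descent MAP-style deblurring under a Gaussian noise model
    with a quadratic smoothness prior. Illustrative only."""
    x = observed.copy()
    k_flip = kernel[::-1, ::-1]           # adjoint of the blur operator
    for _ in range(iters):
        residual = convolve(x, kernel, mode='reflect') - observed
        grad = convolve(residual, k_flip, mode='reflect') - lam * laplace(x)
        x -= step * grad
    return x

# Example: restore a synthetic image blurred by a small distance-weighted kernel.
rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
kernel = np.array([[0.5, 1, 0.5], [1, 2, 1], [0.5, 1, 0.5]]); kernel /= kernel.sum()
observed = convolve(truth, kernel, mode='reflect') + 0.01 * rng.standard_normal(truth.shape)
restored = map_deblur(observed, kernel)
print(np.mean((restored - truth) ** 2) < np.mean((observed - truth) ** 2))
```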

디지털 영상 합성에 의한 X선 단층 영상의 형상 정확도와 선명도 분석 (Analysis of X-ray image Qualities -accuracy of shape and clearness of image using X-ray digital tomosynthesis)

  • 노영준;조형석;김형철;김성권
    • 제어로봇시스템학회논문지 / Vol.5 No.5 / pp.558-567 / 1999
  • X-ray laminography and DT (digital tomosynthesis), which can form cross-sectional images of 3-D objects, promise to be good solutions for inspecting interior defects of industrial products. DT is a kind of laminography; the difference is that it synthesizes several projected images using digital memory and computation. The quality of images acquired from a DT system varies with the image synthesizing method, the number of images used in synthesis, and the X-ray projection angles. In this paper, a new image synthesizing method named the 'log-root method' is proposed to obtain clear and accurate cross-sectional images; it reduces both the artifacts and the blurring generated by material outside the focal plane. To evaluate the quality of the cross-sectional images, two evaluation criteria are defined: (1) shape accuracy and (2) clearness of the cross-sectional images. Based on these criteria, a series of simulations is performed, and the results show the superiority of the new synthesizing method over existing ones such as the averaging and minimum methods.

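For background on how digital tomosynthesis combines projections (the paper's 'log-root' synthesis itself is not reproduced here), the conventional averaging method it is compared against can be sketched as a shift-and-add: each projection is shifted so that the chosen focal plane aligns across views, and the shifted images are averaged, leaving out-of-plane structures smeared. The toy geometry and shift amounts below are assumptions.

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Conventional averaging tomosynthesis: shift each projection so the
    focal plane is aligned across views, then average. Integer shifts only."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, (dy, dx) in zip(projections, shifts):
        acc += np.roll(np.roll(proj, dy, axis=0), dx, axis=1)
    return acc / len(projections)

# Example: a feature in the focal plane stays aligned (sharp) while an
# out-of-plane feature is shifted differently in each view and smears out.
views, shifts = [], []
for k in range(-2, 3):
    img = np.zeros((32, 32))
    img[16, 16] = 1.0          # in-focus point: same location in every view
    img[8, 16 + 3 * k] = 1.0   # out-of-plane point: parallax shift per view
    views.append(img)
    shifts.append((0, 0))      # focal plane already aligned in this toy setup
slice_img = shift_and_add(views, shifts)
print(slice_img[16, 16], slice_img[8, 16])   # 1.0 (sharp) vs 0.2 (blurred)
```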