• Title/Summary/Keyword: image blur

Search Result 222

Deep Reference-based Dynamic Scene Deblurring

  • Cunzhe Liu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.3, pp.653-669, 2024
  • Dynamic scene deblurring is a complex computer vision problem because it is difficult to model mathematically. In this paper, we present a novel approach to image deblurring with the help of a sharp reference image, which is exploited to produce high-quality results with high-frequency detail. To better utilize the clear reference image, we develop an encoder-decoder network with two novel modules that guide the network toward better image restoration. The proposed Reference Extraction and Aggregation Module effectively establishes the correspondence between the blurry image and the reference image and explores the most relevant features for better blur removal, while the proposed Spatial Feature Fusion Module enables the encoder to perceive blur information at different spatial scales. Finally, the multi-scale feature maps from the encoder and the cascaded Reference Extraction and Aggregation Modules are integrated into the decoder for global fusion and representation. Extensive quantitative and qualitative experimental results on different benchmarks show the effectiveness of the proposed method.
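
As a rough illustration of the kind of reference-guided encoder-decoder the abstract describes, the PyTorch sketch below aggregates reference features into blurry-image features with a simple attention step. The module names, layer sizes, and the attention-based aggregation are assumptions made for this sketch, not the architecture published in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceAggregation(nn.Module):
    """Match blurry-image features against reference features and fuse the best matches."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, blur_feat, ref_feat):
        # both feature maps are assumed to share the same (B, C, H, W) shape
        b, c, h, w = blur_feat.shape
        q = blur_feat.flatten(2)                                  # (B, C, HW)
        k = ref_feat.flatten(2)                                   # (B, C, HW)
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)  # (B, HW, HW); quadratic in H*W
        aligned = (attn @ k.transpose(1, 2)).transpose(1, 2).view(b, c, h, w)
        return self.fuse(torch.cat([blur_feat, aligned], dim=1))

class RefDeblurNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.agg = ReferenceAggregation(ch)
        self.dec = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, blurry, reference):
        fused = self.agg(self.enc(blurry), self.enc(reference))
        out = F.interpolate(self.dec(fused), size=blurry.shape[-2:],
                            mode='bilinear', align_corners=False)
        return out + blurry                                       # predict a residual correction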

Depth Map Generation Algorithm from Single Defocused Image (흐린 초점의 단일영상에서 깊이맵 생성 알고리즘)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology, v.15 no.3, pp.67-71, 2016
  • This paper addresses the problem of defocus map recovery from a single image. We describe a simple, effective approach to estimating the amount of defocus blur at edge locations in the image. First, we re-blur the input image with a Gaussian function and calculate the gradient magnitude ratio between the input image and the re-blurred image, which encodes the blur amount. Then we obtain a full defocus map by propagating the blur amounts estimated at the edge locations. Experimental results reveal that our method provides a reliable estimate of the depth map and that the algorithm is robust to noise, inaccurate edge locations, and interference from neighboring edges in the input image.
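
The gradient-ratio idea summarized above can be sketched as follows, assuming NumPy, OpenCV, and SciPy are available; the choice of sigma0, the Canny thresholds, and the nearest-neighbor propagation are simplified stand-ins for the paper's exact procedure.

import cv2
import numpy as np
from scipy.ndimage import distance_transform_edt

def sparse_defocus_map(gray, sigma0=1.0, max_sigma=5.0):
    """Estimate defocus blur (Gaussian sigma) at edge pixels of a grayscale image."""
    gray = gray.astype(np.float64) / 255.0
    reblur = cv2.GaussianBlur(gray, (0, 0), sigma0)

    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
        return np.sqrt(gx ** 2 + gy ** 2)

    ratio = grad_mag(gray) / (grad_mag(reblur) + 1e-8)
    edges = cv2.Canny((gray * 255).astype(np.uint8), 50, 150) > 0
    # for a Gaussian edge model: sigma = sigma0 / sqrt(ratio^2 - 1), valid where ratio > 1
    valid = edges & (ratio > 1.01)
    sigma = np.zeros_like(gray)
    sigma[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1)
    return np.clip(sigma, 0, max_sigma), valid

def full_defocus_map(sigma_sparse, valid):
    """Crude propagation: copy each pixel's value from the nearest edge estimate
    (the paper propagates the sparse estimates with an edge-aware method instead)."""
    idx = distance_transform_edt(~valid, return_distances=False, return_indices=True)
    return sigma_sparse[tuple(idx)]

# usage (file name is hypothetical):
# gray = cv2.imread("defocused.png", cv2.IMREAD_GRAYSCALE)
# sparse, valid = sparse_defocus_map(gray)
# defocus_map = full_defocus_map(sparse, valid)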

Estimation of Motion-Blur Parameters Based on a Stochastic Peak Trace Algorithm (통계적 극점 자취 알고리즘에 기초한 움직임 열화 영상의 파라메터 추출)

  • 최병철;홍훈섭;강문기
    • Journal of Broadcast Engineering, v.5 no.2, pp.281-289, 2000
  • While acquiring images, relative motion between the imaging device and the object scene seriously damages image quality. This phenomenon is called motion blur. The peak-trace approach, our recent previous work, identifies the important parameters characterizing the point spread function (PSF) of the blur, given only the blurred image itself. With the peak-trace approach, the direction of the motion blur can be extracted regardless of noise corruption and without much processing time. In this paper, stochastic peak-trace approaches are introduced. Erroneous data are identified through ML classification and suppressed through weighting, so that distortion of the direction estimate in the low-frequency region is prevented. Using linear prediction, irregular data are prevented from being selected as the peak point. Detection of the second peak with the proposed moving average least mean (MALM) method is used to identify the motion extent. The MALM method itself includes a noise removal process, so the parameters can be extracted even in an environment of heavy noise. In the experiments, we could efficiently restore the degraded image using the information obtained by the proposed algorithm.
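
The stochastic peak-trace algorithm itself is not reproduced here, but the sketch below shows a common, simpler way to read motion-blur direction and extent from an image via the cepstrum, assuming NumPy; it only illustrates the kind of parameters being estimated.

import numpy as np

def cepstrum(image):
    spectrum = np.fft.fft2(image.astype(np.float64))
    return np.fft.fftshift(np.real(np.fft.ifft2(np.log1p(np.abs(spectrum)))))

def estimate_motion_blur(image, min_radius=3):
    """Return (angle in degrees, extent in pixels) of an assumed linear motion blur."""
    cep = cepstrum(image)
    h, w = cep.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    # a linear motion blur leaves a strong negative cepstral peak at a distance
    # equal to the blur extent, along the motion direction
    candidates = np.where(radius >= min_radius, cep, np.inf)
    py, px = np.unravel_index(np.argmin(candidates), cep.shape)
    extent = radius[py, px]
    angle = np.degrees(np.arctan2(cy - py, px - cx))   # flip y: image rows grow downward
    return angle, extent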


Object-based Image Restoration Method for Enhancing Motion Blurred Images (움직임열화를 갖는 영상의 화질개선을 위한 객체기반 영상복원기법)

  • Choung, Yoo-Chan;Paik, Joon-Ki
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.12, pp.77-83, 1998
  • Generally, a moving picture suffers from motion blur due to relative motion between moving objects and the image formation system. The purpose of this paper is to propose a model for the motion blur and a restoration method based on a regularized iterative technique. In the proposed model, the boundary effect between moving objects and the background is analyzed mathematically to overcome the limitations of the spatially invariant model. We also present a motion-based image segmentation technique for object-based image restoration, which is a modified version of a conventional segmentation method. Based on the proposed model, the restoration technique removes the motion blur using the motion parameters estimated from the segmentation result.
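
As a generic illustration of regularized iterative restoration with a motion-blur PSF (the object-based segmentation and boundary modeling of the paper are omitted), the following NumPy sketch applies a damped Landweber-style iteration in the Fourier domain; the PSF construction and parameter values are assumptions, not the authors' formulation.

import numpy as np

def linear_motion_psf(length, angle_deg, size=31):
    """Simple linear motion PSF of integer length (pixels) at the given angle."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.radians(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def regularized_iterative_restore(blurred, psf, n_iter=100, beta=1.0, alpha=0.01):
    """Damped Landweber-style iteration in the Fourier domain (circular convolution assumed):
    F_{k+1} = F_k + beta * ( conj(H) * (G - H * F_k) - alpha * F_k )"""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred.astype(np.float64))
    F = G.copy()
    for _ in range(n_iter):
        F = F + beta * (np.conj(H) * (G - H * F) - alpha * F)
    return np.real(np.fft.ifft2(F))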


Non-uniform Deblur Algorithm using Gyro Sensor and Different Exposure Image Pair (자이로 센서와 노출시간이 다른 두 장의 영상을 이용한 비균일 디블러 기법)

  • Ryu, Ho-hyeong;Song, Byung Cheol
    • Journal of Broadcast Engineering, v.21 no.2, pp.200-209, 2016
  • This paper proposes a non-uniform deblurring algorithm that uses an IMU sensor and a long/short exposure-time image pair to efficiently remove blur. Conventional blur kernel estimation algorithms using sensor information do not provide acceptable performance owing to the limitations of the sensor. To overcome this limitation, we present a kernel refinement step based on images with different exposure times, which improves the accuracy of the estimated kernel. Also, to address the severe degradation of visual quality that conventional non-uniform deblurring algorithms suffer for large blur kernels, this paper presents a homography-based residual deconvolution that minimizes quality degradation such as ringing artifacts during deconvolution. Experimental results show that the proposed algorithm is superior to state-of-the-art methods in terms of subjective as well as objective visual quality.
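
A rough sketch of how an initial blur kernel can be synthesized from gyro samples under a rotation-only camera model is shown below, assuming NumPy and a known intrinsic matrix K; the exposure-pair kernel refinement and the homography-based residual deconvolution described in the paper are not reproduced.

import numpy as np

def rotations_from_gyro(omega, dt):
    """Integrate angular-velocity samples (rad/s, shape (N, 3)) into rotation matrices."""
    R = np.eye(3)
    out = []
    for w in omega:
        wx, wy, wz = w * dt
        skew = np.array([[0.0, -wz, wy],
                         [wz, 0.0, -wx],
                         [-wy, wx, 0.0]])
        R = R @ (np.eye(3) + skew)          # first-order integration of a small rotation
        out.append(R.copy())
    return out

def gyro_blur_kernel(omega, dt, K, kernel_size=31):
    """Accumulate the image-center trajectory induced by camera rotation into a kernel."""
    center = np.array([K[0, 2], K[1, 2], 1.0])
    kernel = np.zeros((kernel_size, kernel_size))
    half = kernel_size // 2
    Kinv = np.linalg.inv(K)
    for R in rotations_from_gyro(omega, dt):
        H = K @ R @ Kinv                    # rotation-induced homography
        p = H @ center
        dx = p[0] / p[2] - center[0]
        dy = p[1] / p[2] - center[1]
        x, y = int(round(half + dx)), int(round(half + dy))
        if 0 <= x < kernel_size and 0 <= y < kernel_size:
            kernel[y, x] += 1.0
    return kernel / max(kernel.sum(), 1e-8)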

Analysis on Iris Image Degradation Factors (홍채 인식 성능에 영향을 미치는 화질 저하 요인 분석)

  • Yoon, So-Weon;Kim, Jai-Hie
    • Proceedings of the IEEK Conference, 2008.06a, pp.863-864, 2008
  • To predict iris matching performance and guarantee its reliability, an image quality measure prior to matching is desired. Analyzing the iris image degradation factors that deteriorate matching performance is a basic step toward such an iris image quality measure. We considered five degradation factors (white-out, black-out, noise, blur, and occlusion by specular reflection) that commonly occur during the iris image acquisition process. Experimental results show that noise and white-out degraded the EER most significantly, while the effects of the other factors were insignificant; in some cases of blur, the degraded images even yielded better performance. This means that the degradation factors that affect matching performance can differ from those suggested by human perception or image degradation evaluation.
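
For the kind of evaluation reported above, the equal error rate (EER) can be computed from genuine and impostor matching scores as in the small NumPy sketch below; the score arrays and the distance-based convention are illustrative assumptions, not the paper's protocol.

import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from distance-like scores: a smaller score means a more likely genuine match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])   # false accept rate
    frr = np.array([(genuine > t).mean() for t in thresholds])     # false reject rate
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# usage with hypothetical Hamming-distance scores:
# eer_clean = equal_error_rate(genuine_clean, impostor_clean)
# eer_noisy = equal_error_rate(genuine_noisy, impostor_noisy)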


A Study on the Reduction of Power Consumption and the Improvement of Motion Blur for OLED Displays (OLED 디스플레이의 전력 저감 및 모션 블러 개선에 관한 연구)

  • Choi, Se-Yoon;Kim, Jin-Sung;Seo, Jeong-Hyun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.30 no.3, pp.1-8, 2016
  • In this paper, we proposed a new driving scheme to reduce motion blur and save power for OLED (organic light emitting diode) displays. We adopted a DVS (dynamic voltage scaling) method to reduce power consumption and a division of the TV field into subfields to improve motion blur. In the proposed scheme, the BEW (blur edge width) was reduced to 1/4 of that of the conventional scheme under optimal conditions. In this scheme, the gray levels to which the DVS method can be applied are divided into much smaller groups depending on the number of subfields, so our scheme does not guarantee lower power consumption than the conventional scheme for every image. However, the new scheme can shift the gray levels that adopt DVS toward higher gray levels, so power can be saved even for images at high gray levels.
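
The sketch below illustrates, under a standard hold-type display model and assumed parameter values, why shortening the emission period reduces the blur edge width (BEW) of a moving edge; it is a simplified illustration, not the paper's driving scheme or measurement setup.

import numpy as np

def perceived_edge_profile(speed_ppf, duty_ratio, n_samples=1000):
    """Eye-tracked profile of an ideal 0->1 edge on a hold-type display.
    The tracking eye smears the edge over roughly speed_ppf * duty_ratio pixels."""
    smear = max(speed_ppf * duty_ratio, 1e-6)
    x = np.linspace(-speed_ppf, speed_ppf, n_samples)
    profile = np.clip((x + smear / 2.0) / smear, 0.0, 1.0)
    return x, profile

def blur_edge_width(x, profile, lo=0.1, hi=0.9):
    """BEW measured here as the 10%-90% transition width of the perceived edge."""
    return x[np.searchsorted(profile, hi)] - x[np.searchsorted(profile, lo)]

# e.g. quartering the emission duty ratio at the same scroll speed gives roughly 1/4 the BEW:
# x, p = perceived_edge_profile(speed_ppf=16, duty_ratio=1.0);  bew_full = blur_edge_width(x, p)
# x, p = perceived_edge_profile(speed_ppf=16, duty_ratio=0.25); bew_quarter = blur_edge_width(x, p)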

Object detection using a light field camera (라이트 필드 카메라를 사용한 객체 검출)

  • Jeong, Mingu;Kim, Dohun;Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.10a, pp.109-111, 2021
  • Recently, computer vision research using light field cameras has been actively conducted. Since light field cameras provide spatial information, various studies are being carried out in fields such as depth map estimation, super resolution, and 3D object detection. In this paper, we propose a method for detecting objects in blurred images through the 7×7 array of images acquired by a light field camera. Blurred images, which conventional cameras handle poorly, are processed through the light field camera. The proposed method uses the SSD algorithm, and its performance is evaluated on blurred images acquired from light field cameras.
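
A brief sketch of running an off-the-shelf SSD detector over the sub-aperture images of a light field capture is shown below, using torchvision's pretrained SSD300 as a stand-in; the detector, preprocessing, and thresholds used in the paper may differ.

import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

def detect_on_subapertures(subaperture_images, score_thresh=0.5):
    """subaperture_images: iterable of HxWx3 uint8 arrays, e.g. the 7x7 = 49 views."""
    results = []
    with torch.no_grad():
        for img in subaperture_images:
            x = preprocess(torch.from_numpy(img).permute(2, 0, 1))
            out = model([x])[0]
            keep = out["scores"] > score_thresh
            results.append({"boxes": out["boxes"][keep], "labels": out["labels"][keep]})
    return results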


A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2005.06a, pp.383-388, 2005
  • Depth recovery in robot vision is an essential problem for inferring the three-dimensional geometry of scenes from a sequence of two-dimensional images. In the past, many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points. This approach is derived from the correspondence of feature points detected in images and estimates depth using information on the motion of the feature points. Approaches using motion vectors suffer from occlusion or missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving such problems requires modeling the mutual relationship between the light and the optics up to the image plane. To capture this relationship, we first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then computed using feature points detected at the edges of the image blur. The feature points carry depth information derived from the width of the blur. Shape and motion can be estimated from the motion of the feature points, and the method uses sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments have been performed on sequences of real and synthetic images, comparing the presented method with depth from lens translation. Experimental results demonstrate the validity of the proposed method and show its applicability to depth estimation.
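
The rank-3 SVD factorization step underlying this kind of shape-from-motion can be sketched as follows (batch Tomasi-Kanade style, NumPy); the paper's sequential update and its defocus-based feature handling are not reproduced, and the measurement-matrix layout is an assumption.

import numpy as np

def factorize_tracks(W):
    """W: (2F, P) measurement matrix of P feature tracks over F frames
    (x rows stacked above y rows, or interleaved; only the rank matters here)."""
    t = W.mean(axis=1, keepdims=True)          # per-row centroid = translation component
    W0 = W - t
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    # keep the rank-3 subspace; motion and shape are recovered up to a 3x3 affine
    # ambiguity that metric constraints would later resolve
    M = U[:, :3] * np.sqrt(S[:3])              # (2F, 3) motion (camera) matrix
    X = np.sqrt(S[:3])[:, None] * Vt[:3]       # (3, P) shape (3-D points) matrix
    return M, X, t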


Region-Based Step-Response Extraction and PSF Estimation for Digital Auto-Focusing (영역기반 계단응답 추출 및 디지털자동초점을 위한 점확산함수 추정)

  • Park, Young-Uk;Kim, Dong-Gyun;Lee, Jin-Hee;Paik, Joon-Ki
    • Proceedings of the IEEK Conference, 2008.06a, pp.827-828, 2008
  • Blur identification is the first and most important step in restoring images. Edge regions of an image usually convey important information about the blur parameters. In this paper, we propose a region-based edge extraction method for estimating the point spread function (PSF). As a result, the proposed method can detect the starting and ending points of a step response and provide the PSF parameters to the restoration process.
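
Once a 1-D step response has been extracted, PSF parameters can be estimated roughly as in the NumPy sketch below, assuming an isotropic Gaussian blur model; the paper's region-based extraction and its exact parameterization are not reproduced.

import numpy as np

def psf_sigma_from_step_response(profile):
    """profile: 1-D intensity samples across a blurred edge (dark -> bright)."""
    lsf = np.clip(np.gradient(profile.astype(np.float64)), 0, None)  # line spread function
    if lsf.sum() <= 0:
        raise ValueError("profile does not look like a rising edge")
    x = np.arange(len(lsf))
    mean = (x * lsf).sum() / lsf.sum()
    var = ((x - mean) ** 2 * lsf).sum() / lsf.sum()
    return np.sqrt(var)                        # sigma of the equivalent Gaussian PSF

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()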
