• Title/Summary/Keyword: Blur Parameter

A Study on The Identification of Blur Parameters from a Motion Blurred Image (모션 블러된 이미지로부터 블러 파라미터를 추출하는 기법에 대한 연구)

  • Yang, Hong-Taek; Hwang, Joo-Yeon; Paik, Doo-Won
    • Korean HCI Society Conference Proceedings / 2008.02a / pp.693-696 / 2008
  • Motion blur is caused by relative motion between the camera and the scene. The blurred image needs to be restored because the undesired blur degrades image quality. In this paper, we propose a new method for the identification of blur parameters. Experiments show that the proposed method identifies the blur extent regardless of the size of the blur and of the object in the original image.
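As background for blur-parameter identification, the uniform linear-motion PSF that such methods typically assume can be sketched in numpy (an illustrative model only, not the paper's identification algorithm; the function name and sampling scheme are our own):

```python
import numpy as np

def motion_blur_psf(length, angle_deg, size=15):
    """Uniform linear motion-blur kernel of a given length and angle.

    This is the degradation h(x, y) that blur-parameter
    identification methods try to recover from a blurred image.
    """
    psf = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    # Sample points along the motion path and accumulate them.
    for t in np.linspace(-length / 2, length / 2, 10 * size):
        x = int(round(center + t * np.cos(theta)))
        y = int(round(center + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()  # normalize so image brightness is preserved

psf = motion_blur_psf(length=7, angle_deg=0)
```

Identifying the blur then amounts to recovering `length` and `angle_deg` from the blurred image alone.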

  • PDF

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min; Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.5 s.305 / pp.19-26 / 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring the relative blur between images using wavelet analysis is described. Most previous methods use inverse filtering to determine the measure of defocus. These methods suffer from some fundamental problems, such as inaccuracies in finding the frequency-domain representation, windowing effects, and border effects. Besides these deficiencies, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot lead to accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that is capable of performing both local analysis and the windowing technique with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized image ratio of wavelet power, by Parseval's theorem, is closely related to the blur parameter and depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
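The core idea, that blur removes wavelet detail energy, so the ratio of detail energies between two images measures their relative blur, can be illustrated with a single-level Haar transform (a rough stand-in for the paper's Parseval-normalized wavelet power ratio; the Haar implementation and function names are our assumptions, not the authors' code):

```python
import numpy as np

def haar_detail_energy(img):
    """Energy in the detail (high-frequency) subbands of a
    single-level 2-D Haar wavelet transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return (lh**2).sum() + (hl**2).sum() + (hh**2).sum()

def relative_blur(img1, img2):
    """Ratio of wavelet detail energies: a value above 1 means img2
    has lost more high-frequency content (is more blurred) than img1."""
    return haar_detail_energy(img1) / haar_detail_energy(img2)
```

In a DFD setting, this relative blur between two images taken at different focus settings is what gets mapped to depth.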

Vision-based Sensor Fusion of a Remotely Operated Vehicle for Underwater Structure Diagnostication (수중 구조물 진단용 원격 조종 로봇의 자세 제어를 위한 비전 기반 센서 융합)

  • Lee, Jae-Min; Kim, Gon-Woo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.349-355 / 2015
  • Underwater robots generally perform tasks better than humans under certain underwater constraints such as high pressure, limited light, etc. To properly perform diagnosis in an underwater environment using a remotely operated vehicle, it is important that the vehicle autonomously maintain its position and orientation in order to avoid additional control effort. In this paper, we propose an efficient method to assist the operation of a remotely operated vehicle under the various disturbances encountered during the diagnosis of underwater structures. The conventional AHRS-based bearing estimation system does not work well because of incorrect measurements caused by the hard-iron effect when the robot approaches a ferromagnetic structure. To overcome this drawback, we propose a sensor fusion algorithm that combines the camera and the AHRS for estimating the pose of the ROV. However, image information in the underwater environment is often unreliable and blurred by turbidity or suspended solids. Thus, we suggest an efficient method for fusing the vision sensor and the AHRS using a criterion based on the amount of blur in the image. To evaluate the amount of blur, we adopt two methods: one quantifies the high-frequency components using a power spectral density analysis of the 2-D discrete Fourier transformed image, and the other identifies the blur parameter based on cepstrum analysis. We evaluate the robustness of the visual odometry and the blur estimation methods with respect to changes in light and distance, and we verify through experiments that the blur estimation method based on cepstrum analysis shows better performance.
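The first of the two blur criteria, quantifying high-frequency components of the 2-D DFT power spectrum, might be sketched as follows (an illustrative numpy version; the radial cutoff and normalization are our own choices, not the paper's):

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of power-spectrum energy above a radial frequency
    cutoff; lower values indicate a more blurred image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial frequency, normalized so the Nyquist corners are ~0.7.
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return power[r > cutoff].sum() / power.sum()
```

A fusion scheme along the lines of the abstract would weight the vision sensor down whenever this ratio drops below a threshold.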

Analysis and parameter extraction of motion blurred image (움직임 열화 현상이 발생한 영상의 분석과 파라메터 추출)

  • 최지웅;최병철;강문기
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.10B / pp.1953-1962 / 1999
  • While acquiring an image, shaking of the capturing equipment or of the object seriously damages image quality. This phenomenon, which degrades the clarity and resolution of the image, is called motion blur. In this paper, a newly defined function is introduced for finding the angle and the length of the motion blur. The domain of this function is defined as the Peak-trace domain. In the Peak-trace domain, a noise-dominant region for calculating the noise variance and a signal-dominant region for extracting the angle and the length of the motion blur are defined and analyzed. Using the Peak-trace information in the signal-dominant region, we can find the direction of the motion regardless of noise corruption, and a weighted least-mean-square method helps extract the Peak-trace more precisely. After obtaining the direction of the motion blur, its length can be found using the one-dimensional cepstrum. In the experiments, we could efficiently restore the degraded image using the information obtained by the proposed algorithm.
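The cepstral step rests on a standard observation: a uniform blur of length L leaves a strong negative peak in the one-dimensional cepstrum at quefrency L. A minimal sketch (an illustrative implementation, not the authors' code; the epsilon and the peak-search window are assumptions):

```python
import numpy as np

def blur_length_from_cepstrum(signal, min_lag=2):
    """Estimate a uniform blur length from the location of the
    strongest negative peak in the 1-D real cepstrum."""
    spectrum = np.abs(np.fft.fft(signal))
    # Real cepstrum: inverse FFT of the log magnitude spectrum.
    cepstrum = np.fft.ifft(np.log(spectrum + 1e-12)).real
    half = len(cepstrum) // 2
    lag = min_lag + np.argmin(cepstrum[min_lag:half])
    return int(lag)
```

The periodic nulls that a length-L boxcar imposes on the spectrum become, after the logarithm, a periodic ripple whose inverse transform concentrates at lag L.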

Object-based Image Restoration Method for Enhancing Motion Blurred Images (움직임열화를 갖는 영상의 화질개선을 위한 객체기반 영상복원기법)

  • Choung, Yoo-Chan; Paik, Joon-Ki
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.77-83 / 1998
  • Generally, a moving picture suffers from motion blur due to relative motion between moving objects and the image formation system. The purpose of this paper is to propose a model for the motion blur and a restoration method using a regularized iterative technique. In the proposed model, the boundary effect between moving objects and the background is analyzed mathematically to overcome the limits of the spatially invariant model. We also present a motion-based image segmentation technique for object-based image restoration, which is a modified version of the conventional segmentation method. Based on the proposed model, the restoration technique removes the motion blur using the motion parameters estimated from the segmentation result.

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il; Ahn, Hyun-Sik; Jeong, Gu-Min; Kim, Do-Hyun
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem for inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it is derived from the correspondence of feature points detected in images and estimates depth from the motion of those feature points. Approaches using motion vectors suffer from occlusion or missing-part problems, and image blur is ignored in the feature-point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system combines a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images, comparing the presented method with depth from lens translation, have demonstrated the validity and applicability of the proposed method to depth estimation.
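The paper's sequential (incremental) SVD update is not reproduced here, but the underlying rank-3 factorization of a measurement matrix of feature tracks, in the style of Tomasi-Kanade shape from motion, can be sketched in batch form (the affine ambiguity is left unresolved, and all names are our own):

```python
import numpy as np

def factor_shape_and_motion(W):
    """Rank-3 factorization of a measurement matrix W (2F x P) of
    P feature points tracked over F frames into a motion matrix M
    (2F x 3) and a shape matrix S (3 x P)."""
    # Register each row to the centroid of the tracked points.
    W = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the top-3 singular values evenly between the factors.
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```

For noise-free affine projections the registered W has rank exactly 3, so M @ S reproduces it; with noise, the truncation is the best rank-3 fit in the least-squares sense.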

A Guideline for Motion-Image-Quality Improvement of LCD-TVs

  • Kurita, Taiichiro
    • Korean Information Display Society Conference Proceedings / 2009.10a / pp.1164-1167 / 2009
  • The motion image quality of LCD TVs is discussed in terms of the dynamic spatial frequency response. A smaller temporal aperture or a higher frame rate can improve the dynamic response, but an increase in motion velocity easily cancels the improvement. A guideline for choosing the desirable temporal aperture and frame rate of LCD TVs is described, under the condition that the camera and the display have the same parameters. Two candidate parameter sets are (240 or 300 Hz, 50 to 100% aperture) and (120 Hz, 25 to 50% aperture), from the viewpoint of the "limit of acceptance" for motion-image-quality deterioration on critical picture material.
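The trade-off the guideline rests on, that the eye-tracked blur of a hold-type display scales as velocity × aperture / frame rate, can be illustrated with a simple sinc-response sketch (our own simplification, not the paper's full dynamic response model):

```python
import numpy as np

def hold_type_mtf(u, velocity, frame_rate, aperture):
    """Dynamic spatial-frequency response of a hold-type display.

    Eye tracking of motion at `velocity` (px/s) smears each frame
    over velocity * aperture / frame_rate pixels, giving a sinc
    (box-blur) attenuation at spatial frequency u (cycles/px).
    """
    smear = velocity * aperture / frame_rate  # blur extent in pixels
    return np.abs(np.sinc(u * smear))
```

Doubling the frame rate or halving the aperture halves the smear, but doubling the motion velocity cancels either improvement, which is exactly the abstract's point.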

Scaling-Translation Parameter Estimation using Genetic Hough Transform for Background Compensation

  • Nguyen, Thuy Tuong; Pham, Xuan Dai; Jeon, Jae-Wook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.8 / pp.1423-1443 / 2011
  • Background compensation plays an important role in detecting and isolating object motion in visual tracking. Here, we propose a Genetic Hough Transform, which combines the Hough Transform and Genetic Algorithm, as a method for eliminating background motion. Our method can handle cases in which the background may contain only a few, if any, feature points. These points can be used to estimate the motion between two successive frames. In addition to dealing with featureless backgrounds, our method can successfully handle motion blur. Experimental comparisons of the results obtained using the proposed method with other methods show that the proposed approach yields a satisfactory estimate of background motion.
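The voting idea behind a Hough-style motion estimate can be sketched for the translation-only case; note that the paper's genetic-algorithm search (and its scaling parameter) is replaced here by exhaustive voting, and all names are our own:

```python
from collections import Counter
import numpy as np

def estimate_translation(pts1, pts2, quant=1.0):
    """Hough-style voting for a global translation between two
    point sets: every pairing casts a vote for its displacement,
    and the most-voted (quantized) bin wins. Mismatched pairs
    scatter their votes, so a few inliers dominate."""
    votes = Counter()
    for x1, y1 in pts1:
        for x2, y2 in pts2:
            dx = int(round((x2 - x1) / quant))
            dy = int(round((y2 - y1) / quant))
            votes[(dx, dy)] += 1
    (dx, dy), _ = votes.most_common(1)[0]
    return dx * quant, dy * quant
```

The genetic algorithm in the paper serves to search this (and the scaling) parameter space far more cheaply than the exhaustive accumulation shown here.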

Adaptive Image Restoration Using Local Characteristics of Degradation (국부 훼손특성을 이용한 적응적 영상복원)

  • 김태선;이태홍
    • Journal of Korea Multimedia Society / v.3 no.4 / pp.365-371 / 2000
  • To restore images degraded by out-of-focus blur and additive noise, iterative restoration is used. The acceleration parameter is usually applied uniformly over the whole image, without considering the local characteristics of the degradation. As a result, conventional methods are not effective at restoring severely degraded edge regions and show a slow convergence rate. To solve this problem, we propose an adaptive iterative restoration that follows the local degradation, in which the acceleration parameter takes a low value in flat regions, which are less degraded, and a high value in edge regions, which are more degraded. Through experiments, we verified that the proposed method converges faster, yields visually better images in edge regions, and achieves a lower MSE than the conventional methods.
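A minimal sketch of an iterative (Landweber-type) restoration with a spatially adaptive acceleration parameter, in the spirit of this abstract (the gradient-based edge map, the step-size values, and the circular blur model are our assumptions, not the authors' exact scheme):

```python
import numpy as np

def adaptive_landweber(y, psf_fft, beta_flat=0.2, beta_edge=1.0, iters=100):
    """Iterative restoration x += beta * H(y - Hx) with a per-pixel
    acceleration parameter beta: small steps in flat regions (noise
    suppression), large steps near edges (faster detail recovery).
    Blur H is circular convolution with a symmetric PSF, so H
    doubles as its own transpose here."""
    def blur(img):
        return np.real(np.fft.ifft2(np.fft.fft2(img) * psf_fft))
    # Edge weight in [0, 1] from the gradient magnitude of the observation.
    gy, gx = np.gradient(y)
    w = np.hypot(gx, gy)
    w = w / (w.max() + 1e-12)
    beta = beta_flat + (beta_edge - beta_flat) * w
    x = y.copy()
    for _ in range(iters):
        x = x + beta * blur(y - blur(x))
    return x
```

Because the edge map is derived from the blurred observation itself, the large steps land exactly where the blur error concentrates.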

An Improved Tracking Parameter File Generation Method using Azimuth Fixing Method (방위각 고정 기법을 이용한 개선된 Tracking Parameter File 생성 방법)

  • Jeon, Moon-Jin; Kim, Eunghyun; Lim, Seong-Bin
    • Aerospace Engineering and Technology / v.12 no.2 / pp.1-6 / 2013
  • A LEO satellite transmits recorded images to a ground station through an X-band antenna during contact. The X-band antenna points at the ground station according to a TPF (tracking parameter file) during the communication time. The TPF generation software produces azimuth and elevation profiles that make the antenna point at the ground station, using satellite orbit and attitude information together with mission information covering recording and downlink operations. When the satellite passes above the ground station, the azimuth velocity increases rapidly, so jitter may occur if the azimuth velocity falls within a specific range. In the case of a real-time mission, in which the satellite performs recording and downlink simultaneously, the azimuth velocity must stay below a specific value to prevent image blur due to the jitter effect. The method of pointing at a single virtual ground station has limited ability to reduce the azimuth velocity. In this paper, we propose an azimuth fixing method to reduce the azimuth velocity of the X-band antenna. The experimental results show that the azimuth velocity of the X-band antenna is remarkably reduced using the proposed method.
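Why the azimuth velocity spikes for near-overhead passes can be seen from a flat-ground, straight-track approximation: the peak azimuth rate is roughly the ground-track velocity divided by the pass's closest-approach distance (an illustrative sketch, not the TPF software's computation; the velocity value and time window are assumed):

```python
import numpy as np

def max_azimuth_rate(cross_track_km, velocity_kms=7.5):
    """Peak azimuth rate (rad/s) seen by a ground antenna tracking a
    satellite on a straight, constant-velocity ground track passing
    `cross_track_km` from the station. The closer the pass comes to
    directly overhead, the faster the antenna must slew."""
    t = np.linspace(-60.0, 60.0, 2401)        # seconds around closest approach
    x = velocity_kms * t                       # along-track position (km)
    az = np.arctan2(x, cross_track_km)         # azimuth about the station
    return np.max(np.abs(np.diff(az))) / (t[1] - t[0])
```

For a truly overhead pass the closest-approach distance goes to zero and the required slew rate diverges, which is the jitter regime the azimuth fixing method is designed to avoid.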