• Title/Summary/Keyword: Defocus Measure

Search Results: 9

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min;Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC, v.42 no.5 s.305, pp.19-26, 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring the relative blur between images using wavelet analysis is described. Most previous methods use inverse filtering to determine the measure of defocus. These methods suffer from fundamental problems such as inaccuracies in finding the frequency-domain representation, windowing effects, and border effects. Besides these deficiencies, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot lead to accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that is capable of performing both local analysis and windowing with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized ratio of wavelet power between images, obtained via Parseval's theorem, is closely related to the blur parameter and hence to depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
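The wavelet-energy blur measure this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: a one-level Haar transform (implemented with NumPy only) stands in for their wavelet analysis, and the detail-coefficient energy ratio stands in for their normalized power ratio. By Parseval's theorem, defocus blur suppresses high-frequency power, so a sharper image carries more detail energy.

```python
import numpy as np

def haar_detail_energy(img):
    """Sum of squared detail coefficients of a one-level 2-D Haar transform."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape[0] & ~1, img.shape[1] & ~1   # crop to even dimensions
    img = img[:h, :w]
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a + b - c - d) / 2.0    # horizontal detail band
    hl = (a - b + c - d) / 2.0    # vertical detail band
    hh = (a - b - c + d) / 2.0    # diagonal detail band
    return float((lh**2 + hl**2 + hh**2).sum())

def relative_blur(img1, img2):
    """> 1 when img1 holds more high-frequency energy (is sharper) than img2."""
    return haar_detail_energy(img1) / haar_detail_energy(img2)
```

Applied to two registered images of the same scene taken at different focus settings, this ratio is a crude indicator of which image is more defocused, which is the raw quantity a DFD method maps to depth.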

A New Method of Noncontact Measurement for 3D Microtopography in Semiconductor Wafer Implementing a New Optical Probe based on the Precision Defocus Measurement (비초점 정밀 계측 방식에 의한 새로운 광학 프로브를 이용한 반도체 웨이퍼의 삼차원 미소형상 측정 기술)

  • 박희재;안우정
    • Journal of the Korean Society for Precision Engineering, v.17 no.1, pp.129-137, 2000
  • In this paper, a new method of noncontact measurement has been developed for the 3-dimensional microtopography of semiconductor wafers, implementing a new optical probe based on precision defocus measurement. The developed technique consists of the new optical probe, precision stages, and the measurement/control system. The basic principle of the technique is to use the slit beam reflected from the specimen surface to measure the deviation of that surface. The defocusing distance can be measured from the reflected slit beam, where the defocused image is measured by the proposed optical probe, giving very high resolution. A distance-measuring formula has been derived for the developed probe using the laws of geometric optics. A precision calibration technique has been applied, giving about 10 nm resolution and 72 nm four-sigma uncertainty. In order to quantify the micro pattern on the specimen surface, efficient analysis algorithms have been developed to analyze the 3D topography pattern and several surface parameters. The developed system has been successfully applied to measuring wafer surfaces, demonstrating its line-scanning feature and excellent 3-dimensional measurement capability.


Simulation of the Through-Focus Modulation Transfer Functions According to the Change of Spherical Aberration in Pseudophakic Eyes

  • Kim, Jae-hyung;Kim, Myoung Joon;Yoon, Geunyoung;Kim, Jae Yong;Tchah, Hungwon
    • Journal of the Optical Society of Korea, v.19 no.4, pp.403-408, 2015
  • To evaluate the effects of spherical aberration (SA) correction on optical quality in pseudophakic eyes, we simulated the optical quality of the human eye by computation of the modulation transfer function (MTF). We retrospectively reviewed the medical records of patients who underwent cataract surgery at Asan Medical Center. A Zywave aberrometer was used to measure optical aberrations at 1-12 postoperative months in patients with AR40e intraocular lens implants. The MTF was calculated for a 5 mm pupil from the measured wavefront aberrations. The area under the MTF curve (aMTF) was analyzed and the maximal aMTF was calculated while varying the SA (−0.2 to +0.2 μm) and the defocus (−2.0 to +2.0 D). Sixty-four eyes of 51 patients were examined. The maximal aMTF was 6.61 ± 2.16 at a defocus of −0.25 ± 0.66 D with innate SA, and 7.64 ± 2.63 at a defocus of 0.08 ± 0.53 D when the SA was 0 (full correction of SA). With full SA correction, the aMTF increased in 47 eyes (73.4%; Group 1) and decreased in 17 eyes (26.6%; Group 2). There were statistically significant differences in Z(3, −1) (vertical coma; P = 0.01) and Z(4, 4) (tetrafoil; P = 0.04) between the groups. The maximal aMTF was obtained at an SA of +0.01 μm in Group 1 and an SA of +0.13 μm in Group 2. Optical quality can be improved by full correction of SA in most pseudophakic eyes. However, residual SA might provide benefits in eyes with significant radially asymmetric aberrations.
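The simulation pipeline this abstract describes can be sketched in a hedged form: build a pupil function carrying defocus and primary spherical-aberration phase (Zernike Z(2,0) and Z(4,0)), obtain the PSF as the squared magnitude of the pupil's Fourier transform, and take the MTF as the normalized magnitude of the PSF's Fourier transform. The grid size, wavelength, and the use of the 2-D MTF mean as a stand-in for the paper's aMTF integral are illustrative choices, not the paper's parameters.

```python
import numpy as np

def area_under_mtf(defocus_um=0.0, sa_um=0.0, n=128, wavelength_um=0.55):
    """Crude aMTF proxy for a circular pupil with defocus and spherical aberration."""
    # Sample a grid 4x wider than the pupil so the OTF support is not aliased.
    y, x = np.mgrid[-2:2:n*1j, -2:2:n*1j]
    r2 = x**2 + y**2
    pupil = (r2 <= 1.0).astype(float)
    # Zernike defocus Z(2,0) and primary spherical Z(4,0); coefficients in um.
    w = defocus_um * np.sqrt(3) * (2*r2 - 1) \
        + sa_um * np.sqrt(5) * (6*r2**2 - 6*r2 + 1)
    field = pupil * np.exp(2j * np.pi * w / wavelength_um)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    mtf = np.abs(np.fft.fft2(psf))
    mtf /= mtf[0, 0]            # unity at zero spatial frequency
    return float(mtf.mean())    # stand-in for the area under the MTF curve
```

Sweeping `defocus_um` and `sa_um` over a grid and recording the maximum of this quantity mimics the search for the maximal aMTF described in the abstract; an aberration-free pupil transfers more contrast overall than a defocused one.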

On the Measurement of the Depth and Distance from Defocused Images Using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬;김종수
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.6, pp.886-898, 1995
  • One way to measure distance in computer vision is to use focus and defocus, and there are two approaches. The first calculates the distance from images focused at a point (MMDFP: the method measuring the distance to the focal plane). The second measures the distance from the difference in camera parameters, in other words the apertures or focal planes, of two images taken with different parameters (MMDCI: the method measuring the distance by comparing two images). The problem with existing MMDFP methods is deciding the threshold value when detecting the most optimally focused object in the defocused image; this can be solved by comparing only the error energy in a 3x3 window between the two images. In MMDCI, the difficulty is the influence of the deflection effect. Therefore, to minimize this influence, we use two differently focused images instead of images with different apertures in this paper. First, the amount of defocusing between the two images is measured through the introduction of regularization, and then the distance from the camera to the objects is calculated by a new distance-measuring equation. The simulation results show that distance can be measured from two differently defocused images, and that our approach is more robust in noisy images than the method using different apertures.
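The core step this abstract describes, estimating the amount of defocus between two differently focused images, can be sketched as follows. This is a stand-in for the paper's regularization formulation, not its actual method: the relative blur is modeled as a Gaussian, and a grid search with a token Tikhonov-style penalty replaces the paper's regularized estimation; the penalty weight and search range are assumptions.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Gaussian blur implemented with NumPy FFTs (periodic boundaries)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    g = np.exp(-2.0 * (np.pi**2) * (sigma**2) * (fy**2 + fx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g))

def estimate_relative_blur(sharper, blurrier, lam=1e-6):
    """Grid-search the Gaussian sigma that maps `sharper` onto `blurrier`,
    with a small penalty term discouraging large sigma."""
    best_sigma, best_cost = 0.0, np.inf
    for sigma in np.arange(0.0, 5.0, 0.1):
        resid = gaussian_blur_fft(sharper, sigma) - blurrier
        cost = float((resid**2).mean()) + lam * sigma**2
        if cost < best_cost:
            best_sigma, best_cost = sigma, cost
    return best_sigma
```

Once the relative blur parameter is known, a DFD method converts it to object distance through the lens equation and the camera settings, which is the role of the paper's new distance-measuring equation.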


Local Binary Pattern Based Defocus Blur Detection Using Adaptive Threshold

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology, v.19 no.3, pp.7-11, 2020
  • Numerous methods have been proposed for the detection and segmentation of blurred and non-blurred regions of images. Due to the limited information available about the blur type, scenario, and level of blurriness, detection and segmentation is a challenging task; hence the performance of the blur measure operator is an essential factor and needs improvement. In this paper, we propose an effective blur measure based on the local binary pattern (LBP) with an adaptive threshold for blur detection. The existing LBP-based sharpness metric uses a fixed threshold irrespective of the blur type and level, which may not be suitable for images with large variations in imaging conditions. In contrast, the proposed measure uses an adaptive threshold for each image, based on the image and blur properties, to generate an improved sharpness metric. The adaptive threshold is computed from a model learned through a support vector machine (SVM). The performance of the proposed method is evaluated on a well-known dataset and compared with five state-of-the-art methods. The comparative analysis reveals that the proposed method performs significantly better, both qualitatively and quantitatively, than all of them.

Measurement of the Axial Displacement Error of a Segmented Mirror Using a Fizeau Interferometer (피조 간섭계를 이용한 단일 조각거울 광축방향 변위 오차 측정)

  • Jang, Ha-Lim;Choi, Jae-Hyuck;Song, Jae-Bong;Kihm, Hagyong
    • Korean Journal of Optics and Photonics, v.34 no.1, pp.22-30, 2023
  • The use of segmented mirrors is one way to enlarge the primary mirror of a spaceborne telescope, where several small mirrors are combined into one large monolithic mirror. To align multiple segmented mirrors as one large mirror, there must be no discontinuity in the x- and y-axis tilt or in the axial alignment error (piston) between adjacent mirrors. When the tilt and piston are removed, the light can be collected in one direction, yielding the expected clear image. Therefore, a precise wavefront sensor is needed that can measure the alignment error of the segmented mirrors at the nm scale. The tilt error can easily be detected from the point-spread image of the segmented mirrors, while the piston error is hard to detect because of the absence of apparent features, yet it degrades the image. In this paper we used an optical testing interferometer, a Fizeau interferometer, which has various advantages when aligning segmented mirrors on the ground, and focused on measuring the axial displacement error of a single segmented mirror as basic research toward measuring the piston errors between adjacent mirrors. First, we derived the relationship between the axial displacement error of the segmented mirror and the surface defocus error seen by the interferometer, and verified the derived formula through experiments. Using the experimental results, we analyzed the measurement uncertainty and obtained the limitations of the Fizeau interferometer in detecting axial displacement errors.

Three Dimensional Shape Recovery from Blurred Images

  • Roh, Kyeongwan;Kim, Choongwon;Lee, Gueesang;Kim, Soohyung
    • Proceedings of the IEEK Conference, 2000.07b, pp.799-802, 2000
  • There are many DFD (Depth from Defocus) methods that extract depth information from the blurring ratio at each object point. However, it is often difficult to measure the depth of an object in two-dimensional images affected by various elements such as edges, textures, etc. To solve this problem, a new DFD method employing texture classification with a neural network is proposed. The method extracts texture features from an evaluation window in an image and classifies the texture class. Finally, it assigns the corresponding value for the blurring ratio. The experimental results show that the method is more accurate than previous methods.


A Hybrid Focus Method Using Multiple Laser Slits (다중 레이저 슬릿광을 이용한 하이브리드 초점 방법)

  • Shin Y.S.;Kim G.B.
    • Proceedings of the Korean Society of Precision Engineering Conference, 2005.10a, pp.706-709, 2005
  • A hybrid focus method with multiple laser slits is newly proposed, based on the integration of DFD and DFF. Rough depth information is estimated using DFD equipped with multiple laser slits, and DFF is then applied only to each specific depth range using the depth information resulting from DFD. The proposed hybrid method gives more accurate results than DFD or DFF alone, and faster measurement than DFF. Its performance has been verified through experiments on calibration blocks with sharp depth discontinuities.


A Study on the Improvement of Wavefront Sensing Accuracy for Shack-Hartmann Sensors (Shack-Hartmann 센서를 이용한 파면측정의 정확도 향상에 관한 연구)

  • Roh, Kyung-Wan;Uhm, Tae-Kyoung;Kim, Ji-Yeon;Park, Sang-Hoon;Youn, Sung-Kie;Lee, Jun-Ho
    • Korean Journal of Optics and Photonics, v.17 no.5, pp.383-390, 2006
  • Shack-Hartmann wavefront sensors are the most popular devices for measuring wavefronts in the field of adaptive optics. A Shack-Hartmann sensor measures the centroid of the spot irradiance distribution formed by each micro-lens. The centroids are linearly proportional to the local mean slopes of the wavefront defined within the corresponding sub-apertures, and the wavefront is then reconstructed from the evaluated local mean slopes. The uncertainty of the Shack-Hartmann sensor is caused by various factors, including detector noise, the limited size of the detector, and the magnitude and profile of the spot irradiance distribution. This paper investigates, through computer simulation, the noise propagation in two major centroid evaluation algorithms: the first-order moment of irradiance (center-of-gravity) algorithm and the correlation algorithm. First, the center-of-gravity algorithm is shown to have a relatively large dependence on the magnitude of noise and on the shape and size of irradiance sidelobes, whose effects are also shown to be minimized by optimal thresholding. Second, the correlation algorithm is shown to be robust to those effects, while its measurement accuracy is vulnerable to size variation of the reference spot. The investigation is finally confirmed by experimental measurements of defocus wavefront aberrations using a Shack-Hartmann sensor with both algorithms.
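The center-of-gravity algorithm this abstract analyzes, including the thresholding step it recommends, can be sketched as follows. This is a minimal illustration of the first-moment centroid only; the correlation algorithm and the choice of an optimal threshold are outside its scope.

```python
import numpy as np

def centroid_cog(spot, threshold=0.0):
    """First-moment (center-of-gravity) centroid of a spot image.

    Pixels at or below `threshold` are zeroed first, which suppresses the
    background-noise bias the abstract attributes to this algorithm.
    Returns (x, y) in pixel coordinates.
    """
    s = np.asarray(spot, dtype=float).copy()
    s[s <= threshold] = 0.0
    total = s.sum()
    yy, xx = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    return float((xx * s).sum() / total), float((yy * s).sum() / total)
```

In a Shack-Hartmann sensor this is evaluated per sub-aperture; the centroid displacement from its reference position is proportional to the local mean wavefront slope, and the slope field is then fed to a reconstructor.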