• Title/Summary/Keyword: Defocus

Search Results: 70

Analysis of focus error signals on land/groove recordable optical disks (랜드/그루브 기록 광디스크에 대한 포커스 에러 신호 분석)

  • 이용재;박병호;신현국
    • Korean Journal of Optics and Photonics
    • /
    • v.8 no.1
    • /
    • pp.73-79
    • /
    • 1997
  • Using numerical simulation, we analyzed the variation of the focus error signal with the effects of the land/groove structure, wavefront error, and optical-system parameter variation for the knife-edge and astigmatism methods on a land/groove recordable optical disc. By analyzing the diffracted beam including defocus wavefront errors, we identified the causes of the zero-cross shift produced by the land and groove structure. We also found that, in the astigmatism method, the sensitivity of the focus error signal was reduced by the land and groove structure, as shown by the analysis of the focus error signal for each order of the diffracted beam. (A brief focus-error-signal sketch follows this entry.)

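The astigmatism method referred to above derives the focus error signal (FES) from a four-quadrant photodetector as FES = [(A + C) - (B + D)] / (A + B + C + D). The following is a minimal numerical sketch of that signal, using a toy elliptical-Gaussian spot in place of the paper's full diffraction calculation; the spot-width law and the parameters w0, k, and d_ast are illustrative assumptions only.

```python
import numpy as np

def astigmatism_fes(defocus, w0=1.0, k=0.8, d_ast=1.0, n=201, span=6.0):
    """Toy astigmatism-method FES: the two astigmatic focal lines sit at
    +/- d_ast about best focus, so the detector spot is elongated along one
    diagonal before focus and along the other diagonal after focus."""
    wa = w0 * np.sqrt(1.0 + (k * (defocus - d_ast)) ** 2)   # width along +45 deg
    wb = w0 * np.sqrt(1.0 + (k * (defocus + d_ast)) ** 2)   # width along -45 deg
    x = np.linspace(-span, span, n)
    X, Y = np.meshgrid(x, x)
    U = (X + Y) / np.sqrt(2.0)          # detector axes rotated 45 deg
    V = (X - Y) / np.sqrt(2.0)          # into the ellipse frame
    I = np.exp(-2.0 * (U / wa) ** 2 - 2.0 * (V / wb) ** 2)
    A = I[(X > 0) & (Y > 0)].sum()      # quadrant signals
    B = I[(X < 0) & (Y > 0)].sum()
    C = I[(X < 0) & (Y < 0)].sum()
    D = I[(X > 0) & (Y < 0)].sum()
    return ((A + C) - (B + D)) / (A + B + C + D)

# Sweeping defocus gives the familiar S-curve that crosses zero at best focus.
for z in np.linspace(-3.0, 3.0, 7):
    print(f"defocus {z:+.1f} -> FES {astigmatism_fes(z):+.3f}")
```

The paper's result, stated in these terms, is that the land/groove diffraction shifts the zero crossing of this S-curve and reduces its slope (sensitivity).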

Simulation of the Through-Focus Modulation Transfer Functions According to the Change of Spherical Aberration in Pseudophakic Eyes

  • Kim, Jae-hyung;Kim, Myoung Joon;Yoon, Geunyoung;Kim, Jae Yong;Tchah, Hungwon
    • Journal of the Optical Society of Korea
    • /
    • v.19 no.4
    • /
    • pp.403-408
    • /
    • 2015
  • To evaluate the effects of spherical aberration (SA) correction on optical quality in pseudophakic eyes, we simulated the optical quality of the human eye by computing the modulation transfer function (MTF). We retrospectively reviewed the medical records of patients who underwent cataract surgery at Asan Medical Center. A Zywave aberrometer was used to measure optical aberrations at 1-12 postoperative months in patients with AR40e intraocular lens implants. The MTF was calculated for a 5 mm pupil from the measured wavefront aberrations. The area under the MTF curve (aMTF) was analyzed, and the maximal aMTF was calculated while changing the SA (-0.2 to +0.2 μm) and the defocus (-2.0 to +2.0 D). Sixty-four eyes of 51 patients were examined. The maximal aMTF was 6.61 ± 2.16 at a defocus of -0.25 ± 0.66 D with innate SA, and 7.64 ± 2.63 at a defocus of 0.08 ± 0.53 D when the SA was 0 (full correction of SA). With full SA correction, the aMTF increased in 47 eyes (73.4%; Group 1) and decreased in 17 eyes (26.6%; Group 2). There were statistically significant differences in Z(3, -1) (vertical coma; P = 0.01) and Z(4, 4) (tetrafoil; P = 0.04) between the groups. The maximal aMTF was obtained at an SA of +0.01 μm in Group 1 and an SA of +0.13 μm in Group 2. Optical quality can be improved by full correction of SA in most pseudophakic eyes; however, residual SA might provide benefits in eyes with significant radially asymmetric aberrations. (A brief MTF-computation sketch follows this entry.)
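
The MTF computation described above follows standard Fourier optics: build a pupil function from the measured Zernike coefficients, Fourier-transform it to obtain the PSF, and transform the PSF to obtain the MTF. The sketch below is a minimal version of that pipeline under stated assumptions; the grid size, wavelength, Zernike normalization, and the crude area-under-MTF (aMTF) metric are illustrative choices, not the paper's settings.

```python
import numpy as np

N = 256                                  # samples across the computation grid
x = np.linspace(-2.0, 2.0, N)            # pupil radius normalized to 1
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)
pupil = (rho <= 1.0).astype(float)

def wavefront(c_defocus, c_sa):
    """Wavefront (um) from Zernike defocus Z(2,0) and spherical aberration
    Z(4,0), using the standard radial polynomials."""
    z20 = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
    z40 = np.sqrt(5.0) * (6.0 * rho**4 - 6.0 * rho**2 + 1.0)
    return c_defocus * z20 + c_sa * z40

def mtf_area(c_defocus, c_sa, wavelength_um=0.55):
    W = wavefront(c_defocus, c_sa)
    P = pupil * np.exp(1j * 2.0 * np.pi * W / wavelength_um)   # pupil function
    psf = np.abs(np.fft.fft2(P)) ** 2                          # intensity PSF
    otf = np.fft.fft2(psf)
    mtf = np.abs(otf) / np.abs(otf).flat[0]                    # normalize to DC
    return mtf[0, :N // 2].sum()    # crude aMTF: one frequency axis to cutoff

# Mimic the study's search: maximal aMTF over defocus for a given residual SA.
for sa in (-0.1, 0.0, +0.1):
    best = max(mtf_area(d, sa) for d in np.linspace(-0.5, 0.5, 21))
    print(f"SA {sa:+.2f} um -> best aMTF (arbitrary units) {best:.1f}")
```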

Investigation on the Applicability of Defocus Blur Variations to Depth Calculation Using Target Sheet Images Captured by a DSLR Camera

  • Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.2
    • /
    • pp.109-121
    • /
    • 2020
  • Calculating the depth of objects in a scene from images is one of the most studied problems in image processing, computer vision, and photogrammetry. Conventionally, depth is calculated from a pair of overlapping images captured at different viewpoints, but there have also been studies that calculate depth from a single image. Theoretically, depth can be calculated from the diameter of the CoC (circle of confusion) caused by defocus under the assumption of a thin-lens model. Thus, this study aims to verify the validity of the thin-lens model for calculating depth from the edge blur amount, which corresponds to the radius of the CoC. A commercially available DSLR (digital single lens reflex) camera was used to capture a set of target sheets with different edge contrasts. To find the pattern of edge-blur variation against varying combinations of FD (focusing distance) and OD (object distance), the camera was set to a series of FDs, and target sheet images were captured at varying ODs under each FD. The edge blur and edge displacement were then estimated from edge slope profiles using a brute-force method. The experimental results show that the edge-blur variation observed in the target images deviated from the corresponding theoretical amounts derived under the thin-lens assumption, but can still be used to calculate depth from a single image in cases similar to the limited conditions tested, under which the relationship between FD and OD is evident. (A brief thin-lens sketch follows this entry.)
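
Under the thin-lens model the study tests, the CoC diameter for an object at distance OD, with a lens of focal length f and f-number N focused at FD, is c = f^2 |FD - OD| / (N * OD * (FD - f)); inverting this relation gives depth from a measured blur, up to a near/far ambiguity. A minimal sketch with illustrative focal length, f-number, and distances (not the paper's camera or calibration):

```python
def coc_diameter(od_mm, fd_mm, f_mm=50.0, fnum=2.8):
    """Thin-lens CoC diameter on the sensor (mm) for an object at od_mm when
    the lens is focused at fd_mm."""
    return f_mm**2 * abs(fd_mm - od_mm) / (fnum * od_mm * (fd_mm - f_mm))

def depth_from_coc(c_mm, fd_mm, f_mm=50.0, fnum=2.8, far_side=True):
    """Invert the thin-lens CoC relation; the near/far ambiguity (object in
    front of or behind the focal plane) must be resolved externally."""
    sign = -1.0 if far_side else 1.0
    return f_mm**2 * fd_mm / (f_mm**2 + sign * c_mm * fnum * (fd_mm - f_mm))

fd = 1500.0                                   # focused at 1.5 m
for od in (1000.0, 1500.0, 2500.0):
    c = coc_diameter(od, fd)
    od_hat = depth_from_coc(c, fd, far_side=(od > fd))
    print(f"OD {od:6.0f} mm -> CoC {c:.4f} mm -> recovered OD {od_hat:6.0f} mm")
```

The forward and inverse relations round-trip exactly in this ideal model; the paper's finding is that measured edge blur departs from the ideal but remains usable for depth under restricted conditions.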

A Feasibility Study on the Application of Ultrasonic Method for Surface Crack Detection of SiC/SiC Composite Ceramics (SiC/SiC 복합재료 세라믹스 표면균열 탐지를 위한 초음파법 적용에 관한 기초연구)

  • Nam, Ki-Woo;Lee, Kun-Chan;Kohyama, Akira
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.29 no.5
    • /
    • pp.479-484
    • /
    • 2009
  • Nondestructive evaluation (NDE) of ceramic matrix composites is essential for developing reliable ceramics for industrial applications. In this work, C-scan image analysis was used to characterize surface cracks in SiC ceramics nondestructively. The possibility of surface crack detection was examined experimentally using two types of ultrasonic equipment, SDS-win and μ-SDS, and three transducers of 25, 50, and 125 MHz. A surface micro-crack in the ceramics was not detected by the 25 MHz and 50 MHz transducers. Although the focus method detected the crack only dimly with the 125 MHz transducer, the defocus method could detect the shape of the diamond indenter. On the whole, the results indicate that both the focus and defocus methods have good potential for micro-crack detection.

A Study of correlation between spherical refractive error and astigmatism (굴절이상도와 난시와의 관계 연구)

  • Lee, Jeung-Young;Kim, Jae-Do;Kim, Dae-Hyun
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.9 no.2
    • /
    • pp.439-446
    • /
    • 2004
  • Many studies have reported that retinal defocus causes and increases refractive error, especially myopia. Uncorrected astigmatism may be one of the factors producing retinal defocus. To examine the relationship between myopia and astigmatism, 62 college students participated in this study. Spherical refractive error and astigmatism were measured using an N-vision 5001 autorefractor (Shin-Nippon). The correlation between spherical refractive error and astigmatism was high in both the with-the-rule astigmatism group (r=0.53; ANOVA F=32.40, N=87, P<0.05) and the oblique astigmatism group (r=0.53; ANOVA F=5.14, N=15, P<0.001). However, it was very low (r=0.09; ANOVA F=0.18, N=22, P<0.001) in the against-the-rule astigmatism group. In the total group, the correlation was also high (r=0.56; ANOVA F=77.80, N=173, P<0.001). Uncorrected astigmatism may cause and increase spherical refractive error. (A brief statistical sketch follows this entry.)

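For context on the statistics quoted above, the sketch below computes a Pearson correlation r between spherical refractive error and cylinder power and the corresponding simple-regression ANOVA statistic, F = r^2 (N - 2) / (1 - r^2), on synthetic data; the generated values are purely illustrative and are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 87                                       # e.g. the with-the-rule group size
cyl = rng.uniform(-3.0, 0.0, n)              # cylinder power (D), hypothetical
sph = 0.8 * cyl + rng.normal(0.0, 1.0, n)    # spherical error (D), hypothetical

r = np.corrcoef(sph, cyl)[0, 1]
F = r**2 * (n - 2) / (1.0 - r**2)            # regression ANOVA F, one predictor
print(f"r = {r:.2f}, ANOVA F = {F:.2f}, N = {n}")
```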

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected in the images and estimates depth from their motion. Approaches using motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. To this end, we first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is modeled by integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then performed using feature points detected at the edges of the image blur; these feature points carry depth information derived from the blur width, and the shape and motion can be estimated from their motion. The method uses sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments have been performed on sequences of real and synthetic images, comparing the presented method with conventional depth from lens translation. The results demonstrate the validity and applicability of the proposed method to depth estimation. (A brief factorization sketch follows this entry.)

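The SVD factorization step referred to above can be illustrated by the standard rank-3 factorization of a measurement matrix of tracked feature points (Tomasi-Kanade style). The sketch shows only that generic step on synthetic data; the paper's sequential variant and its defocus-based feature detection are not reproduced here.

```python
import numpy as np

def factorize(W):
    """W: (2F, P) matrix of centred feature tracks over F frames and P points.
    Returns motion (2F, 3) and shape (3, P), defined up to an affine ambiguity."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    S3 = np.diag(np.sqrt(s[:3]))             # split the top-3 singular values
    motion = U[:, :3] @ S3
    shape = S3 @ Vt[:3, :]
    return motion, shape

# Synthetic check: random affine cameras observing random 3-D points.
rng = np.random.default_rng(1)
P, F = 30, 8
X = rng.normal(size=(3, P))                  # 3-D shape
M = rng.normal(size=(2 * F, 3))              # stacked affine camera rows
W = M @ X                                    # noise-free measurement matrix
motion, shape = factorize(W)
print("rank-3 reconstruction error:", np.linalg.norm(W - motion @ shape))
```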

A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel PART I : Proposal of a heat source model

  • Kim, Jae Woong;Jang, Beom Seon;Kim, Yong Tai;Chun, Kwang San
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.5 no.3
    • /
    • pp.348-363
    • /
    • 2013
  • The use of I-Core sandwich panels has increased in cruise ship deck structures, since they provide bending strength similar to a conventional stiffened plate while keeping lighter weight and a lower web height. However, because of the thin plate thickness (about 4~6 mm at most), the panels are assembled by high-power CO2 laser welding to minimize welding deformation. This research proposes a volumetric heat source model for the T-joint of the I-Core sandwich panel and a method to use a shell element model in a thermal elasto-plastic analysis to predict welding deformation. This paper, Part I, focuses on the heat source model. A circular-cone-type heat source model is newly suggested for the heat transfer analysis to reproduce a melting zone similar to that observed in experiments. An additional suggestion is made to consider negative defocus, which is commonly applied in T-joint laser welding since it provides deeper penetration than zero defocus. The proposed heat source is also verified through 3D thermal elasto-plastic analysis by comparing welding deformation with experimental results. A parametric study for different welding speeds, defocus values, and welding powers is performed to investigate their effect on the melting zone and welding deformation. Part II focuses on the proposed method of employing a shell element model, instead of a solid element model, to predict welding deformation in the thermal elasto-plastic analysis. (A brief heat-source sketch follows this entry.)
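
As a rough illustration of a circular-cone volumetric heat source of the general kind proposed above, the sketch below uses a Gaussian radial power distribution whose effective radius shrinks linearly with depth, normalized so the volume integral equals the absorbed laser power Q. The radii, depth, and power are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def cone_heat_source(r, z, Q=4000.0, r_top=1.0e-3, r_bot=0.3e-3, H=4.0e-3):
    """Volumetric heat generation q(r, z) in W/m^3 for 0 <= z <= H, with z
    measured downward from the top surface and Q the absorbed power in W."""
    rz = r_top + (r_bot - r_top) * (z / H)        # cone radius at depth z
    # Normalization: integrating exp(-3 r^2 / rz^2) over the plane gives
    # pi*rz^2/3, and integrating rz(z)^2 over 0..H gives
    # H*(r_top^2 + r_top*r_bot + r_bot^2)/3, so the total deposited power is Q.
    q0 = Q / (np.pi * H * (r_top**2 + r_top * r_bot + r_bot**2) / 9.0)
    return q0 * np.exp(-3.0 * r**2 / rz**2)

# Peak heat generation at the top-surface centre and at mid-depth.
print(cone_heat_source(0.0, 0.0), cone_heat_source(0.0, 2.0e-3))
```

Shifting the cone apex below the plate surface would be one way to mimic the negative-defocus condition discussed in the abstract.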

Parameterized Modeling of Spatially Varying PSF for Lens Aberration and Defocus

  • Wang, Chao;Chen, Juan;Jia, Hongguang;Shi, Baosong;Zhu, Ruifei;Wei, Qun;Yu, Linyao;Ge, Mingda
    • Journal of the Optical Society of Korea
    • /
    • v.19 no.2
    • /
    • pp.136-143
    • /
    • 2015
  • Image deblurring by deconvolution requires accurate knowledge of the blur kernel. Existing point spread function (PSF) models in the literature for lens aberrations and defocus are either parameterized but spatially invariant, or spatially varying but only discretely defined. In this paper, a parameterized model is developed for a PSF that varies spatially due to lens aberrations and defocus in an imaging system. The model is established from the Seidel third-order aberration coefficients and the Hu moments. A skew-normal Gaussian model is selected for the parameterized PSF geometry. The accuracy of the model is demonstrated with simulations and measurements for a defocused infrared camera and a single-spherical-lens digital camera. Compared with the optical design software Code V, the visual results for the two optical systems validate our analysis and the proposed method in terms of size, shape, and direction. Quantitative evaluation reveals the excellent accuracy of the blur kernel model. (A brief PSF-kernel sketch follows this entry.)
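
The sketch below is a hedged illustration of a skew-normal Gaussian blur kernel of the general type selected above; the paper's actual parameterization through the Seidel coefficients and Hu moments is not reproduced, and sigma_x, sigma_y, alpha, theta, and the kernel size are illustrative. The skew term produces the asymmetric, comet-like PSF shapes typical of off-axis aberrations.

```python
import numpy as np
from scipy.stats import norm

def skew_normal_psf(size=21, sigma_x=2.0, sigma_y=3.0, alpha=4.0, theta=0.3):
    """Discrete blur kernel: skew-normal profile along one axis, Gaussian
    along the other, rotated by theta (radians) to set the PSF direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = np.cos(theta) * x + np.sin(theta) * y       # rotated coordinates
    yr = -np.sin(theta) * x + np.cos(theta) * y
    k = norm.pdf(xr / sigma_x) * norm.cdf(alpha * xr / sigma_x) \
        * norm.pdf(yr / sigma_y)
    return k / k.sum()                               # unit-energy kernel

psf = skew_normal_psf()
print(psf.shape, round(psf.sum(), 6))                # (21, 21) 1.0
```

In a spatially varying setting, the kernel parameters would be functions of image position, which is what the paper's parameterized model provides.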

Performance Criterion of Bispectral Speckle Imaging Technique (북스펙트럼 스펙클 영상법의 성능기준)

  • 조두진
    • Korean Journal of Optics and Photonics
    • /
    • v.4 no.1
    • /
    • pp.28-35
    • /
    • 1993
  • In an imaging system affected by aberrations that are not precisely known, the effect of the aberrations can be minimized and near-diffraction-limited images can be restored by introducing artificial random phase fluctuations in the exit pupil of the imaging system and using bispectral speckle imaging. To determine the optimum correlation length for a Gaussian random phase model, computer simulations are performed for 50 image frames of a point object in the presence of defocus, spherical aberration, coma, and astigmatism of 1 wave each. As performance criteria, the FWHM of the point spread function, the normalized peak intensity, the MTF, and visual inspection of the restored object are employed. The optimum value of the rms aberration difference σ on the exit pupil over an interval of the Fried parameter r0 is 0.27-0.53 wave for spherical aberration, and 0.24-0.36 wave for defocus and astigmatism, respectively. The bispectral speckle imaging technique is found not to give good results in the case of coma. (A brief bispectrum sketch follows this entry.)

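A minimal 1-D sketch of the bispectrum idea behind this technique: the bispectrum B(u, v) = F(u) F(v) conj(F(u+v)) of a frame is invariant to image shifts, so averaging it over many randomly displaced, noisy frames preserves the object's Fourier phase, which can then be rebuilt recursively from phi(u+v) = phi(u) + phi(v) - arg B(u, v). This is a generic illustration, not the paper's 2-D simulation with aberrated exit pupils; the toy object, shift-only distortion, frame count, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
obj = np.zeros(n)
obj[3], obj[7], obj[12] = 1.0, 0.5, 0.2            # toy 1-D object

# Frames: randomly shifted copies of the object plus a little noise.
frames = [np.roll(obj, rng.integers(n)) + 0.002 * rng.normal(size=n)
          for _ in range(100)]

# Average bispectrum B(u, v) = F(u) F(v) conj(F(u+v)) over all frames.
u = np.arange(n)
B = np.zeros((n, n), dtype=complex)
for frame in frames:
    F = np.fft.fft(frame)
    B += F[:, None] * F[None, :] * np.conj(F[(u[:, None] + u[None, :]) % n])
B /= len(frames)

# Recursive phase recovery: phi(u+v) = phi(u) + phi(v) - arg B(u, v).
true_phi = np.angle(np.fft.fft(obj))
phi = np.zeros(n)
phi[1] = true_phi[1]    # the bispectrum fixes the object only up to a shift;
                        # borrow the true first phase so the comparison lines up
for k in range(2, n):
    phi[k] = phi[1] + phi[k - 1] - np.angle(B[1, k - 1])

err = np.angle(np.exp(1j * (phi - true_phi)))       # wrapped phase error
print("max recovered-phase error (rad):", float(np.abs(err).max()))
```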

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.281-291
    • /
    • 2019
  • Depth from defocus estimates 3D depth by using the phenomenon that an object in the focal plane of the camera forms a sharp image, while an object away from the focal plane produces a blurred image. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained for 3D depth estimation based on depth from defocus using either one image from a single camera or two images of different focus from a single camera. For depth estimation using one image, the best performance was achieved at a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone camera images, and 200 mm and 300 mm for DSLR camera images. (A brief two-image sketch follows this entry.)
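
As a hedged sketch of the two-image variant described above: with two focus settings FD1 and FD2, the pair of blur radii measured for the same object determines its distance without the near/far ambiguity of the single-image case. The thin-lens CoC relation and the brute-force search below are illustrative, not the paper's algorithm, and the helper repeats the thin-lens formula used earlier in this listing so the example stays self-contained.

```python
import numpy as np

def coc_diameter(od_mm, fd_mm, f_mm=50.0, fnum=2.8):
    """Thin-lens CoC diameter (mm) for an object at od_mm, lens focused at fd_mm."""
    return f_mm**2 * abs(fd_mm - od_mm) / (fnum * od_mm * (fd_mm - f_mm))

def depth_from_two_blurs(c1, c2, fd1, fd2,
                         od_grid=np.linspace(200.0, 5000.0, 4801)):
    """Pick the candidate distance whose predicted blur pair best matches the
    measured pair (c1, c2)."""
    err = [(coc_diameter(od, fd1) - c1) ** 2 + (coc_diameter(od, fd2) - c2) ** 2
           for od in od_grid]
    return od_grid[int(np.argmin(err))]

fd1, fd2, od_true = 1000.0, 2000.0, 1400.0
c1, c2 = coc_diameter(od_true, fd1), coc_diameter(od_true, fd2)
print("recovered OD (mm):", depth_from_two_blurs(c1, c2, fd1, fd2))
```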