Investigation on the Applicability of Defocus Blur Variations to Depth Calculation Using Target Sheet Images Captured by a DSLR Camera

  • Seo, Suyoung (Dept. of Civil Engineering, Kyungpook National University)
  • Received : 2020.03.18
  • Accepted : 2020.04.23
  • Published : 2020.04.30

Abstract

Depth calculation of objects in a scene from images is one of the most studied processes in the fields of image processing, computer vision, and photogrammetry. Conventionally, depth is calculated from a pair of overlapping images captured from different viewpoints. However, there have also been studies on calculating depth from a single image. Theoretically, it is known that depth can be calculated from the diameter of the CoC (Circle of Confusion) caused by defocus, under the assumption of a thin lens model. Thus, this study aims to verify the validity of the thin lens model for calculating depth from the edge blur amount, which corresponds to the radius of the CoC. A commercially available DSLR (Digital Single Lens Reflex) camera was used to capture a set of target sheets with different edge contrasts. To find the pattern of the variations of edge blur against varying combinations of FD (Focusing Distance) and OD (Object Distance), the camera was set to a series of FDs, and target sheet images were captured at varying ODs under each FD. The edge blur and edge displacement were then estimated from edge slope profiles using a brute-force method. The experimental results show that the variations of edge blur observed in the target images deviated from the corresponding theoretical amounts derived under the thin lens assumption, but they can still be utilized to calculate depth from a single image under conditions similar to the limited ones tested, in which the tendency between FD and OD is manifest.
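For context, the thin lens model referred to above relates the CoC diameter to the focal length, f-number, FD, and OD in a standard closed form that can be inverted for depth. The following is a minimal sketch of that relation, not the paper's implementation; the function names, the notation (f, N, s_f, s_o), and the numeric example are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): the standard thin-lens CoC relation
# and its inversion for depth. f = focal length, N = f-number, s_f = focusing
# distance (FD), s_o = object distance (OD); all lengths in the same unit
# (millimetres here).

def coc_diameter(f, N, s_f, s_o):
    """Diameter of the circle of confusion on the sensor plane."""
    return (f ** 2) * abs(s_o - s_f) / (N * s_o * (s_f - f))

def depth_from_coc(f, N, s_f, c, behind_focus=True):
    """Invert the CoC relation for the object distance s_o.

    The inversion is two-valued: an object in front of the focusing
    distance and one behind it can produce the same blur diameter.
    """
    sign = -1.0 if behind_focus else 1.0
    return (f ** 2) * s_f / (f ** 2 + sign * c * N * (s_f - f))

# Example: 50 mm lens at f/4, focused at 2 m, object at 3 m.
c = coc_diameter(f=50.0, N=4.0, s_f=2000.0, s_o=3000.0)
print(round(c, 4))                                   # ~0.1068 mm on the sensor
print(round(depth_from_coc(50.0, 4.0, 2000.0, c)))   # recovers ~3000 mm
```

Note the front/behind-focus ambiguity: a single blur measurement constrains depth only up to two candidate distances unless the side of the focal plane is known, which is one reason single-image depth from defocus is practical only under constrained conditions such as those examined in the paper.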
