An Efficient Feature Point Extraction and Comparison Method through Distorted Region Correction in 360-degree Realistic Contents

  • Park, Byeong-Chan (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Kim, Jin-Sung (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Won, Yu-Hyeon (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Kim, Young-Mo (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Kim, Seok-Yoon (Dept. of Computer Science and Engineering, Soongsil University)
  • Received : 2018.12.06
  • Accepted : 2019.01.25
  • Published : 2019.01.31

Abstract

One of the critical issues in handling 360-degree realistic contents is the performance degradation in the search and recognition process, since such contents support up to 4K UHD quality and cover every viewing angle, including the front, back, left, right, top, and bottom of the screen. To solve this problem, this paper proposes an efficient search and comparison method for 360-degree realistic contents. The proposed method first corrects distortion in the less distorted regions of the image, such as the front, left, and right parts, while excluding the severely distorted upper and lower parts; it then extracts feature points from the corrected regions and selects representative images through sequence classification. When a query image is input, search results are provided through feature-point comparison. The experimental results show that the proposed method resolves the performance deterioration that occurs when 360-degree realistic contents are recognized, compared with traditional 2D contents.
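The region-specification step described above (keeping the weakly distorted equatorial band of an equirectangular (ERP) frame and discounting the horizontally stretched rows near the poles) can be illustrated with a minimal numpy sketch. This is not the authors' code: the function names, the 50% band ratio, and the cos-latitude weighting are illustrative assumptions, based only on the general property of ERP that horizontal stretch grows as 1/cos(latitude) toward the top and bottom of the frame.

```python
import numpy as np

def crop_low_distortion_band(erp_frame: np.ndarray, band_ratio: float = 0.5) -> np.ndarray:
    """Keep only the central horizontal band of an ERP frame.

    In ERP, distortion grows toward the top and bottom of the image (the poles),
    so feature points are extracted only from the central band.
    band_ratio is the fraction of image height to keep, centered on the equator.
    """
    h = erp_frame.shape[0]
    margin = int(h * (1.0 - band_ratio) / 2)
    return erp_frame[margin : h - margin]

def latitude_weights(height: int) -> np.ndarray:
    """Per-row weight proportional to cos(latitude).

    ERP rows near the poles are horizontally stretched by 1/cos(lat), so
    cos(lat) can be used to down-weight or resample those rows.
    """
    lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2  # -pi/2 .. pi/2
    return np.cos(lat)

# Example: a 4K-UHD-like ERP frame (2:1 aspect ratio), keeping the central 50%.
frame = np.zeros((1920, 3840, 3), dtype=np.uint8)
band = crop_low_distortion_band(frame, band_ratio=0.5)
print(band.shape)  # -> (960, 3840, 3): top and bottom quarters excluded
w = latitude_weights(frame.shape[0])
print(round(float(w.max()), 3))  # -> 1.0: equatorial rows are kept at full weight
```

Feature extraction (e.g., with SIFT, as shown in Fig. 4) would then run only on the cropped band, and the resulting descriptors would be compared against the query image's descriptors.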


Fig. 1. OMAF Architecture of MPEG-I

Fig. 2. Image extraction and ERP

Fig. 3. Image matching process

Fig. 4. SIFT Algorithm

Fig. 5. Frame extraction in VR content

Fig. 6. Expression of ERP

Fig. 7. Image area specification ratio

Fig. 8. Distortion correction

Fig. 9. Matching Process

Fig. 10. 360 VR content used in experiment

Table 1. Sequence classification in Fig. 5

Table 2. Image resize

Table 3. Specify area

Table 4. ① Experiment Result

Table 5. ② Experiment Result

Table 6. ③ Experiment Result

Table 7. ④ Experiment Result

Table 8. Matching Result
