• Title/Summary/Keyword: SIFT descriptor


Framework Implementation of Image-Based Indoor Localization System Using Parallel Distributed Computing (병렬 분산 처리를 이용한 영상 기반 실내 위치인식 시스템의 프레임워크 구현)

  • Kwon, Beom;Jeon, Donghyun;Kim, Jongyoo;Kim, Junghwan;Kim, Doyoung;Song, Hyewon;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences, v.41 no.11, pp.1490-1501, 2016
  • In this paper, we propose an image-based indoor localization system using parallel distributed computing. In order to reduce the computation time for indoor localization, the scale-invariant feature transform (SIFT) algorithm is performed in parallel using Apache Spark. Toward this goal, we propose a novel image processing interface for Apache Spark. The experimental results show that the proposed system is about 3.6 times faster than the conventional system (see the sketch below).
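
A minimal sketch of the parallelization idea only, not the authors' implementation: the paper's Spark image-processing interface is not described in the abstract, so this assumes OpenCV's cv2.SIFT_create and a local PySpark context, with placeholder image paths and application name.

```python
# Hedged sketch: distributing SIFT extraction over an image database with PySpark.
# Assumes OpenCV (cv2.SIFT_create) is available on every worker; the paper's own
# Spark image-processing interface is not public, so this only illustrates the idea.
import cv2
from pyspark import SparkContext

def extract_sift(path):
    """Load one image and return its SIFT descriptors as a NumPy array (or None)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return path, None
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(img, None)
    return path, descriptors  # keypoints are dropped: cv2.KeyPoint objects are not picklable

if __name__ == "__main__":
    sc = SparkContext(appName="parallel-sift")            # hypothetical application name
    image_paths = ["db/img_0001.jpg", "db/img_0002.jpg"]  # placeholder database images
    # Each partition runs extract_sift independently, so descriptor computation
    # for the whole image database proceeds in parallel across Spark workers.
    results = sc.parallelize(image_paths).map(extract_sift).collect()
    sc.stop()
```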

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.256-257, 2022
  • In this paper, we introduce a method for extracting three-dimensional feature points from the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed on feature points: feature points and descriptors are extracted and matched, the disparity of each matched pair is calculated, and a depth value is generated (see the sketch below). The 3D feature points are updated as the camera moves, and finally the feature points are reset at scene changes by using scene change detection. Through this process, an average of 73.5% of additional storage space can be secured in the feature point database. By applying the proposed algorithm to the RGB images and depth ground-truth values of the TUM dataset, it was confirmed that there was an average distance difference of 26.88 mm compared with the 3D feature point result.
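
A minimal sketch of the per-frame-pair step described above, assuming OpenCV SIFT matching and the pinhole disparity-to-depth relation depth = focal_length * baseline / disparity; the focal length, baseline, and frame file names are placeholder values, not the paper's.

```python
# Hedged sketch: match SIFT features between two views taken as the device moves,
# then convert the horizontal disparity of each match into a depth value.
import cv2

def depths_from_pair(img_left, img_right, focal_px=525.0, baseline_m=0.05):
    """Return depth estimates (meters) for matched feature points; parameters are placeholders."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    depths = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.75 * n.distance:          # Lowe's ratio test rejects ambiguous matches
            x_left = kp1[m.queryIdx].pt[0]
            x_right = kp2[m.trainIdx].pt[0]
            disparity = x_left - x_right            # horizontal shift between the two views
            if disparity > 1e-6:
                depths.append(focal_px * baseline_m / disparity)  # depth = f * B / d
    return depths

if __name__ == "__main__":
    left = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical consecutive frames
    right = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
    print(depths_from_pair(left, right))
```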
