• Title/Summary/Keyword: Fast Landmark Matching


Fast landmark matching algorithm using moving guide-line image

  • Seo Seok-Bae;Kang Chi-Ho;Ahn Sang-Il;Choi Hae-Jin
    • Proceedings of the KSRS Conference / 2004.10a / pp.208-211 / 2004
  • Landmark matching is one of the key algorithms for the navigation of satellite images. This paper proposes a fast landmark matching algorithm using an MGLI (Moving Guide-Line Image). To find the matching point between a landmark chip and a part of the image, a correlation matrix is generally used, but computing the full-sized correlation matrix has the drawback of requiring a large amount of time. The MGLI contains thick lines that enable fast calculation of the correlation matrix; the width of these lines is determined by satellite position changes and the navigation error range. For fast landmark matching, the MGLI provides a guide line for the landmark chip to be matched, so the proposed method reduces the candidate area for correlation matrix calculation. This paper shows how much time the proposed fast landmark matching algorithm saves compared to conventional ones.
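A minimal sketch of the guide-line idea described in the abstract: normalized cross-correlation is evaluated only at pixels flagged by a precomputed mask of thick guide lines instead of over the full image. The mask construction, array names, and sizes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ncc(patch, chip):
    """Normalized cross-correlation between two equally sized arrays."""
    p = patch - patch.mean()
    c = chip - chip.mean()
    denom = np.sqrt((p * p).sum() * (c * c).sum())
    return float((p * c).sum() / denom) if denom > 0 else 0.0

def match_along_mask(image, chip, guide_mask):
    """Evaluate correlation only where guide_mask is True (the thick guide line),
    instead of over the whole image, and return the best-scoring offset."""
    h, w = chip.shape
    best_score, best_pos = -1.0, None
    ys, xs = np.nonzero(guide_mask)                # candidate top-left corners
    for y, x in zip(ys, xs):
        if y + h > image.shape[0] or x + w > image.shape[1]:
            continue
        score = ncc(image[y:y + h, x:x + w], chip)
        if score > best_score:
            best_score, best_pos = score, (int(y), int(x))
    return best_pos, best_score

# Toy usage: a 200x200 image, a 16x16 chip, and a guide mask a few pixels wide.
rng = np.random.default_rng(0)
image = rng.random((200, 200))
chip = image[90:106, 120:136].copy()               # known true location (90, 120)
guide_mask = np.zeros(image.shape, dtype=bool)
guide_mask[85:95, :] = True                        # "thick line" of candidate rows
print(match_along_mask(image, chip, guide_mask))   # expected near (90, 120)
```

Restricting the candidates to the mask is what cuts the correlation cost; in this toy example only 10 of 200 rows are ever searched.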


Efficient Visual Place Recognition by Adaptive CNN Landmark Matching

  • Chen, Yutian;Gan, Wenyan;Zhu, Yi;Tian, Hui;Wang, Cong;Ma, Wenfeng;Li, Yunbo;Wang, Dong;He, Jixian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4084-4104 / 2021
  • Visual place recognition (VPR) is a fundamental yet challenging task in mobile robot navigation and localization. Existing VPR methods are usually based on pairwise similarity of image descriptors, so they are sensitive to changes in visual appearance and are also computationally expensive. This paper proposes a simple yet effective four-step method that achieves adaptive convolutional neural network (CNN) landmark matching for VPR. First, based on features extracted from existing CNN models, the regions with higher significance scores are selected as landmarks. Then, according to the coordinate positions of potential landmarks, landmark matching is improved by removing mismatched landmark pairs. Finally, considering the significance scores obtained in the first step, robust image retrieval is performed based on adaptive landmark matching, giving more weight to the landmark matching pairs with higher significance scores. To verify the efficiency and robustness of the proposed method, evaluations are conducted on standard benchmark datasets. The experimental results indicate that the proposed method reduces the feature representation space of place images by more than 75% with negligible loss in recognition precision. It also achieves a fast matching speed in the similarity calculation, satisfying real-time requirements.
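As a rough illustration of the adaptive weighting step, the sketch below scores a query place against a reference place by matching CNN landmark descriptors and weighting each match by its significance score. The descriptors are random stand-ins, the greedy nearest-descriptor matching is a simplification, and the geometric consistency check from the paper's second step is omitted.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def weighted_place_score(query_landmarks, ref_landmarks, query_scores):
    """Greedy landmark matching: each query landmark is matched to its most
    similar reference landmark, and matches contributed by landmarks with
    higher significance scores are weighted more heavily."""
    total, weight_sum = 0.0, 0.0
    for desc, sig in zip(query_landmarks, query_scores):
        best = max(cosine(desc, r) for r in ref_landmarks)
        total += sig * best
        weight_sum += sig
    return total / weight_sum if weight_sum > 0 else 0.0

# Toy usage with random 256-D "CNN" descriptors for 5 query / 6 reference landmarks.
rng = np.random.default_rng(1)
q = rng.normal(size=(5, 256))
r = np.vstack([q[:3] + 0.05 * rng.normal(size=(3, 256)),   # 3 true matches
               rng.normal(size=(3, 256))])
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])                # significance scores
print(round(weighted_place_score(q, r, scores), 3))
```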

Fast Landmark Matching Using the SMTG Algorithm (SMTG 알고리즘을 이용한 랜드마크의 고속정합)

  • Seo, Seok-Bae;Kang, Chi-Ho
    • Aerospace Engineering and Technology / v.4 no.2 / pp.230-235 / 2005
  • As a precedent study for COMS (Communication, Oceanic, and Meteorological Satellite), this paper proposes the SMTG (Sobel Masked Tracking Guideline) algorithm for fast landmark matching. The experimental results show that the proposed algorithm greatly reduces calculation time.
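A hedged sketch of the general idea behind a Sobel-masked tracking guideline: the Sobel edge magnitude is thresholded to keep only strong edges (e.g., coastlines), and only those pixels would then be used as correlation candidates. The kernel handling, threshold percentile, and synthetic image are assumptions for illustration only.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Tiny sliding-window correlation ('valid' mode), enough for a 3x3 Sobel kernel."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sobel_guideline_mask(image, percentile=95):
    """Keep only the strongest edge responses as the tracking guideline,
    so correlation would be evaluated near coastlines/edges only."""
    gx = filter2d(image, SOBEL_X)
    gy = filter2d(image, SOBEL_Y)
    mag = np.hypot(gx, gy)
    return mag >= np.percentile(mag, percentile)

# Toy usage: a synthetic image with a vertical "coastline" step at column 32.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
img[:, 32:] += 1.0
mask = sobel_guideline_mask(img)
print(mask.sum(), "candidate pixels out of", mask.size)
```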


Simultaneous Localization and Mapping For Swarm Robot (군집 로봇의 동시적 위치 추정 및 지도 작성)

  • Mun, Hyun-Su;Shin, Sang-Geun;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.296-301 / 2011
  • This paper deals with a simultaneous localization and mapping (SLAM) system using cooperating swarm robots. To recognize the environment, the swarm robots use ultrasonic sensors and a vision sensor: the ultrasonic sensors measure distance information, and the vision sensor recognizes predefined landmarks. SURF, which offers good matching quality and speed, is used to recognize the landmarks. Because of sensor measurement errors, the measurements are fused with a particle filter for accurate localization and mapping. Finally, the feasibility of the proposed method is shown through experiments.
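The sketch below shows, in one dimension, the kind of particle-filter fusion the abstract describes: particles are propagated with the motion command, weighted by the likelihood of an ultrasonic range measurement to a known landmark, and resampled. All noise levels, the landmark position, and the 1-D simplification are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, control, range_meas, landmark_pos,
                         motion_noise=0.05, range_noise=0.2):
    """One predict/update/resample cycle of a 1-D particle filter.
    The robot moves by `control`, then the ultrasonic range to a known
    landmark at `landmark_pos` is used to weight the particles."""
    # Predict: propagate particles with the motion model plus noise.
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # Update: weight by likelihood of the measured range to the landmark.
    expected = np.abs(landmark_pos - particles)
    weights = np.exp(-0.5 * ((range_meas - expected) / range_noise) ** 2) + 1e-12
    weights /= weights.sum()
    # Resample according to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy run: true robot at 1.0 moving +0.1/step toward a landmark at 5.0.
particles = rng.uniform(0.0, 3.0, 500)
true_pos = 1.0
for _ in range(20):
    true_pos += 0.1
    range_meas = abs(5.0 - true_pos) + rng.normal(0, 0.2)
    particles = particle_filter_step(particles, 0.1, range_meas, 5.0)
print("estimate:", round(float(particles.mean()), 2), "truth:", round(true_pos, 2))
```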

Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.23 no.4 / pp.297-309 / 2007
  • The first Korean geostationary weather satellite, the Communications, Oceanography and Meteorology Satellite (COMS), will be launched in 2008. The ground station for COMS needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes after image acquisition. To meet this requirement, we developed automated and fast geometric correction techniques. Control points are generated automatically by matching images against coastline data and by applying a robust estimation technique called RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. We detected clouds within the images and applied matching only to cloud-free sub-images; when matching visible channels, we selected sub-images located in daytime. We tested the algorithm with GOES-9 images. Control points were generated by matching channel 1 and channel 2 images of GOES against the 211 landmark chips, and RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established using the automated control points was in the range of 1~2 pixels. Geometric correction was performed, and the result was visually inspected by projecting the coastline onto the geometrically corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
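As an illustration of how RANSAC can screen automatically generated control points, the sketch below fits a 2-D affine transform (a stand-in for the paper's sensor model) from minimal samples and keeps the model supported by the most control points that reproject within a pixel threshold. The affine model, thresholds, and synthetic matches are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst (Nx2 arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 parameter matrix
    return params

def ransac_affine(src, dst, n_iter=200, thresh=2.0):
    """Pick minimal 3-point samples, fit an affine model, and keep the model
    with the largest set of points that reproject within `thresh` pixels."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Toy usage: 40 correct matches under a known affine map plus 10 gross outliers.
src = rng.uniform(0, 1000, (50, 2))
M_true = np.array([[1.001, 0.002], [-0.002, 0.999], [5.0, -3.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ M_true
dst[40:] += rng.uniform(50, 200, (10, 2))              # simulated mismatches
M_est, inliers = ransac_affine(src, dst)
print("inliers kept:", int(inliers.sum()), "of", len(src))
```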

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok;Lim, Mee-Seub;Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging because of the vast amount of visual information involved, which requires extensive storage and processing time. To deal with these challenges, we propose using features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme can be summarized as follows. First, we utilize an omni-directional camera that captures instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the circular horizontal line between the landmark and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
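A minimal sketch of FFT-accelerated correlation on a circular horizontal line: the circular cross-correlation of two 1-D panoramic signatures is computed in the frequency domain, and the peak gives the best rotational alignment. The 360-sample signature and noise level are illustrative assumptions.

```python
import numpy as np

def circular_correlation(a, b):
    """Circular cross-correlation of two equal-length 1-D signals via FFT.
    Returns the correlation value at every cyclic shift of b."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)

# Toy usage: a panoramic "horizontal line" of 360 samples and the same line
# captured after the robot rotated by 57 degrees.
rng = np.random.default_rng(5)
line = rng.random(360)
rotated = np.roll(line, 57) + 0.01 * rng.normal(size=360)
corr = circular_correlation(rotated, line)
print("estimated rotation:", int(np.argmax(corr)), "degrees")
```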

Human Pose Matching Using Skeleton-type Active Shape Models (뼈대-구조 능동형태모델을 이용한 사람의 자세 정합)

  • Jang, Chang-Hyuk
    • Journal of KIISE:Software and Applications / v.36 no.12 / pp.996-1008 / 2009
  • This paper proposes a novel approach to model-based pose matching of a human body using Active Shape Models. To improve the processing time of model creation and registration, a skeleton-type model is used instead of the conventional silhouette-based models. The skeleton model defines the feature information used to match the human pose. The model is built from images of 600 human bodies and has 17 landmarks that indicate body junctions and key features of a human pose. When a basic Active Shape Model is applied to the skeleton-type model in the matching process, problems may occur at the proximal joints of the arms and legs because of color variations on the human body and insufficient information about the fore-rear direction of the profile normals. This problem is solved by using background subtraction of the body region in the input image and by adding a four-direction feature of the profile normal at the proximal parts of the arms and legs. In the matching process, the maximum number of iterations is less than 30. As a result, the execution time is quite fast, observed to be less than 0.03 s in an experiment.
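For readers unfamiliar with Active Shape Models, the sketch below shows the underlying point distribution model: a mean shape plus principal modes of variation learned from training landmark sets, with mode coefficients clipped so a fitted pose stays plausible. The 17-landmark dimensionality follows the abstract, but the synthetic training data and the omission of the profile-normal search are simplifications.

```python
import numpy as np

rng = np.random.default_rng(6)

def build_shape_model(shapes, n_modes=5):
    """Point distribution model: mean shape + top PCA modes of variation.
    `shapes` is (n_samples, 2 * n_landmarks) with aligned (x, y) coordinates."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes], (s[:n_modes] ** 2) / (len(shapes) - 1)

def fit_to_model(shape, mean, modes, variances, clip=3.0):
    """Project an observed shape onto the model and clip each mode coefficient
    to +/- clip standard deviations, keeping the result a plausible pose."""
    b = modes @ (shape - mean)
    b = np.clip(b, -clip * np.sqrt(variances), clip * np.sqrt(variances))
    return mean + modes.T @ b

# Toy usage: 600 synthetic training "skeletons" with 17 landmarks each.
train = rng.normal(size=(600, 34))
mean, modes, var = build_shape_model(train)
observed = rng.normal(size=34)
print(fit_to_model(observed, mean, modes, var).shape)   # (34,)
```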

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, and it also includes a GPS sensor, a digital compass, and other devices. This evolution helps application developers use the power of smart-phones to create rich environments that offer a wide range of services and exciting possibilities. Outdoor mobile AR research to date includes many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR content is rarely overlaid precisely on the real target. Other research concerns context-based AR services using image recognition and tracking, where the AR content is precisely overlaid on the real target, but real-time performance is limited by retrieval time and is hard to achieve over large-scale areas. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system mainly consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smart-phone viewfinder when they point the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the query image is matched only against this selected data, so matching speed is significantly increased. The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge of the landmark is not required: users can simply look for an appropriate topic by searching the linked data with a keyword, which helps the system find the target URI needed to generate correct AR content. To recognize target landmarks, images of the selected buildings or landmarks are captured from different angles and distances, a procedure that effectively builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location information are used to restrict the retrieval range. Compared with existing research using clustering and GPS information, where the retrieval time is around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average, so the total processing time is reduced from 490~540 ms to 438~480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
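A small sketch of the contextual filtering step: candidate landmarks are first restricted to the user's GPS grid cell (and its neighbours) and to the compass-pointed field of view, and only these candidates would then go through descriptor matching. The grid size, field of view, and coordinates are made-up values for illustration.

```python
import math

def grid_cell(lat, lon, cell_deg=0.001):
    """Assign a coordinate to a grid cell (~100 m at mid-latitudes)."""
    return (int(lat / cell_deg), int(lon / cell_deg))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate bearing from the user to a landmark, in degrees [0, 360)."""
    dy, dx = lat2 - lat1, (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dx, dy)) % 360.0

def candidate_landmarks(user, heading, landmarks, fov=60.0):
    """Keep only landmarks in the user's grid cell (or the 8 neighbours) and
    within the camera's field of view around the compass heading."""
    cr, cc = grid_cell(*user)
    out = []
    for name, (lat, lon) in landmarks.items():
        r, c = grid_cell(lat, lon)
        if abs(r - cr) > 1 or abs(c - cc) > 1:
            continue
        diff = abs((bearing_deg(*user, lat, lon) - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            out.append(name)
    return out

# Toy usage: user at (37.4500, 126.6530) facing roughly north-east (45 degrees).
db = {"tower": (37.4508, 126.6538), "museum": (37.4492, 126.6522),
      "far_bridge": (37.4700, 126.7000)}
print(candidate_landmarks((37.4500, 126.6530), 45.0, db))
```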

Descent Dataset Generation and Landmark Extraction for Terrain Relative Navigation on Mars (화성 지형상대항법을 위한 하강 데이터셋 생성과 랜드마크 추출 방법)

  • Kim, Jae-In
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1015-1023 / 2022
  • The entry-descent-landing process of a lander involves many environmental and technical challenges. To address these problems, terrain relative navigation (TRN) technology has recently become essential for landers. TRN estimates the position and attitude of a lander by comparing Inertial Measurement Unit (IMU) data and image data collected during descent with pre-built reference data. In this paper, we present methods for generating a descent dataset and extracting landmarks, which are key elements for developing TRN technologies to be used on Mars. The proposed method generates IMU data for a descending lander using a simulated Mars landing trajectory and generates descent images from a high-resolution ortho-map and a digital elevation map through ray tracing. Landmark extraction is performed with an area-based matching method because of the low-textured surfaces on Mars, and the search area is reduced to improve matching accuracy and speed. The evaluation of the descent dataset generation method shows that it can generate images satisfying the imaging geometry, and the evaluation of the landmark extraction method shows that it achieves positioning accuracy of several meters while keeping the processing speed as fast as feature-based methods.
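A hedged sketch of area-based landmark matching with search-area reduction: the search window is cropped around the position predicted from the simulated IMU trajectory, and normalized cross-correlation is run only inside that window. The window radius, image sizes, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ncc_map(search, template):
    """Normalized cross-correlation of `template` over every position of `search`."""
    th = search.shape[0] - template.shape[0] + 1
    tw = search.shape[1] - template.shape[1] + 1
    t = template - template.mean()
    out = np.zeros((th, tw))
    for y in range(th):
        for x in range(tw):
            w = search[y:y + template.shape[0], x:x + template.shape[1]]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum() * (t * t).sum())
            out[y, x] = (wc * t).sum() / denom if denom > 0 else 0.0
    return out

def match_landmark(descent_img, landmark_chip, predicted_xy, search_radius=20):
    """Crop a small window around the IMU-predicted landmark position and
    run area-based matching only inside it, instead of over the whole frame."""
    px, py = predicted_xy
    h, w = landmark_chip.shape
    y0, x0 = max(0, py - search_radius), max(0, px - search_radius)
    window = descent_img[y0:py + search_radius + h, x0:px + search_radius + w]
    corr = ncc_map(window, landmark_chip)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return (int(x0 + dx), int(y0 + dy)), float(corr.max())

# Toy usage: the landmark truly sits at (130, 80); the prediction is 6 px off.
rng = np.random.default_rng(7)
frame = rng.random((256, 256))
chip = frame[80:96, 130:146].copy()
print(match_landmark(frame, chip, predicted_xy=(136, 86)))
```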