• Title/Summary/Keyword: visual place recognition


Semantic Visual Place Recognition in Dynamic Urban Environment (동적 도시 환경에서 의미론적 시각적 장소 인식)

  • Arshad, Saba;Kim, Gon-Woo
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.334-338 / 2022
  • In visual simultaneous localization and mapping (vSLAM), correct recognition of a place benefits relocalization and improves map accuracy. However, performance is significantly affected by environmental conditions such as variation in light, viewpoint, season, and the presence of dynamic objects. This research addresses the problem of feature occlusion caused by interference from dynamic objects, which leads to poor performance of the visual place recognition algorithm. To overcome this problem, this research analyzes the role of scene semantics in the correct detection of a place in challenging environments and presents a semantics-aided visual place recognition method. Semantics, being invariant to viewpoint changes and dynamic environments, can improve the overall performance of the place matching method. The proposed method is evaluated on two benchmark datasets with dynamic environments and seasonal changes. Experimental results show the improved performance of the visual place recognition method for vSLAM.

A Study on Visual Contextual Awareness in Ubiquitous Computing (유비쿼터스 환경에서의 시각문맥정보인식에 대한 연구)

  • Han, Dong-Ju;Kim, Jong-Bok;Lee, Sang-Hoon;Suh, Il-Hong
    • Proceedings of the KIEE Conference / 2004.11c / pp.19-21 / 2004
  • In many cases, human visual recognition depends on contextual information. Effective feature information is needed to perform place recognition that is robust to illumination changes, noise, etc. Existing approaches that use edges, color, and similar cues do not cope effectively with real environments. To solve this problem, we use natural markers to improve the efficiency of place recognition.


Large-scale Language-image Model-based Bag-of-Objects Extraction for Visual Place Recognition (영상 기반 위치 인식을 위한 대규모 언어-이미지 모델 기반의 Bag-of-Objects 표현)

  • Seung Won Jung;Byungjae Park
    • Journal of Sensor Science and Technology / v.33 no.2 / pp.78-85 / 2024
  • We propose a method for visual place recognition that represents images using objects as visual words. The visual words represent the various objects present in urban environments. To detect these objects within images, we implemented a zero-shot detector based on a large-scale language-image model, which enables the detection of various objects in urban environments without additional training. When creating histograms with the proposed method, frequency-based weighting is applied to account for the importance of each object. Through experiments on open datasets, the potential of the proposed method is demonstrated by comparison with another method, even under environmental or viewpoint changes.
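The frequency-weighted histogram step described above can be sketched in a few lines. This is a minimal illustration assuming object detection has already produced per-image label lists; the vocabulary, labels, and normalized-frequency weighting here are hypothetical stand-ins, not the paper's exact formulation:

```python
from collections import Counter
import math

def bag_of_objects(detections, vocab):
    """Build a frequency-weighted histogram over an object vocabulary."""
    counts = Counter(detections)
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    """Cosine similarity between two histograms."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

vocab = ["car", "tree", "sign", "building", "person"]
query = bag_of_objects(["car", "tree", "tree", "sign"], vocab)
ref = bag_of_objects(["car", "tree", "sign", "sign"], vocab)
print(round(cosine(query, ref), 3))  # 0.833
```

Two images of the same place then score high even if individual object counts differ slightly.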

Visual Location Recognition Using Time-Series Streetview Database (시계열 스트리트뷰 데이터베이스를 이용한 시각적 위치 인식 알고리즘)

  • Park, Chun-Su;Choeh, Joon-Yeon
    • Journal of the Semiconductor & Display Technology / v.18 no.4 / pp.57-61 / 2019
  • Nowadays, portable digital cameras such as smartphone cameras are widely used for entertainment and visual information recording. Given a database of geo-tagged images, a visual location recognition system can determine the place depicted in a query photo. One of the most common visual location recognition approaches is the bag-of-words method, in which local image features are clustered into visual words. In this paper, we propose a new bag-of-words-based visual location recognition algorithm using a time-series streetview database. The proposed algorithm selects only a small subset of image features to be used in the image retrieval process. By reducing the number of features, the proposed algorithm reduces the memory requirement of the image database and accelerates the retrieval process.
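The bag-of-words retrieval idea in this abstract can be sketched with an inverted index. The visual-word IDs and image names below are made up for illustration; a real system would quantize local descriptors such as SIFT into these IDs first:

```python
from collections import defaultdict, Counter

def build_inverted_index(db_words):
    """db_words: {image_id: iterable of visual-word ids}.
    Maps each visual word to the set of database images containing it."""
    index = defaultdict(set)
    for image_id, words in db_words.items():
        for w in words:
            index[w].add(image_id)
    return index

def retrieve(query_words, index):
    """Rank database images by the number of shared visual words."""
    votes = Counter()
    for w in set(query_words):
        for image_id in index[w]:
            votes[image_id] += 1
    return votes.most_common()

db = {"img_a": [1, 2, 3], "img_b": [3, 4, 5], "img_c": [1, 3, 5]}
index = build_inverted_index(db)
print(retrieve([1, 3], index))  # img_a and img_c share both query words
```

Selecting only a subset of features, as the paper proposes, shrinks both `db_words` and the index, which is where the memory and speed savings come from.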

Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal / v.39 no.1 / pp.97-107 / 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrade retrieval accuracy and cause matching ambiguity. The proposed practical database refinement method uses informative reference image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of 0.26 km². It comprised 38,700 reference images and corresponding building-identification mask images. The proposed method removed 25% of the database images using informative reference image selection. It achieved 85.6% recall among the top five candidates in 1.25 s of full processing. The method thus achieves high accuracy at low computational complexity.
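The keypoint-selection idea can be illustrated with a toy sketch: keep only keypoints that land on building pixels of the identification mask. The mask values and coordinates below are invented for illustration; in the paper, the mask comes from a prebuilt 3D model of the site:

```python
def select_informative_keypoints(keypoints, mask):
    """Keep keypoints landing on building pixels (mask > 0), discarding
    confusing regions such as road, sky, and vehicles (mask == 0)."""
    return [(x, y) for x, y in keypoints if mask[y][x] > 0]

# Toy 3x4 building-identification mask: 0 = non-building, 1/2 = building IDs.
mask = [
    [0, 0, 1, 1],
    [0, 2, 1, 1],
    [0, 2, 0, 0],
]
keypoints = [(0, 0), (1, 1), (3, 0), (2, 2)]
print(select_informative_keypoints(keypoints, mask))  # [(1, 1), (3, 0)]
```

Dropping keypoints in confusing regions shrinks the database and removes the ambiguous matches those regions would otherwise produce.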

Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation (열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템)

  • Seunghyeon Lee;Taejoo Kim;Yukyung Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.48-52 / 2023
  • Many studies have been conducted to make visual place recognition reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are greatly influenced by illumination changes. Thus, in this paper, we use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, instead of RGB, which is shown to be more reliable in low-light and highly noisy conditions. In addition, although the sensor used is an LWIR camera, the thermal image is converted into an RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline method by about 0.19 in recall.

Efficient Visual Place Recognition by Adaptive CNN Landmark Matching

  • Chen, Yutian;Gan, Wenyan;Zhu, Yi;Tian, Hui;Wang, Cong;Ma, Wenfeng;Li, Yunbo;Wang, Dong;He, Jixian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4084-4104 / 2021
  • Visual place recognition (VPR) is a fundamental yet challenging task in mobile robot navigation and localization. Existing VPR methods are usually based on some pairwise similarity of image descriptors, so they are sensitive to visual appearance change and also computationally expensive. This paper proposes a simple yet effective four-step method that achieves adaptive convolutional neural network (CNN) landmark matching for VPR. First, based on features extracted from existing CNN models, regions with higher significance scores are selected as landmarks. Then, according to the coordinate positions of potential landmarks, landmark matching is improved by removing mismatched landmark pairs. Finally, considering the significance scores obtained in the first step, robust image retrieval is performed based on adaptive landmark matching, giving more weight to landmark matching pairs with higher significance scores. To verify the efficiency and robustness of the proposed method, evaluations are conducted on standard benchmark datasets. The experimental results indicate that the proposed method reduces the feature representation space of place images by more than 75% with negligible loss in recognition precision. It also achieves a fast matching speed in similarity calculation, satisfying real-time requirements.
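The mismatch-removal and significance-weighting steps can be sketched as follows. The median-shift consistency check and the tolerance value are simplifying assumptions for illustration, not the paper's exact procedure:

```python
def match_score(pairs, shift_tol=20):
    """pairs: (xq, yq, xr, yr, significance) candidate landmark matches
    between a query and a reference image. Estimate the dominant
    query->reference shift via the median, drop pairs that disagree with
    it (mismatch removal), then sum the significance scores of the
    surviving pairs (significance-weighted voting)."""
    if not pairs:
        return 0.0
    shifts = [(xr - xq, yr - yq) for xq, yq, xr, yr, _ in pairs]
    mx = sorted(dx for dx, _ in shifts)[len(shifts) // 2]
    my = sorted(dy for _, dy in shifts)[len(shifts) // 2]
    return sum(sig for (_, _, _, _, sig), (dx, dy) in zip(pairs, shifts)
               if abs(dx - mx) <= shift_tol and abs(dy - my) <= shift_tol)

pairs = [
    (10, 10, 15, 12, 0.9),   # consistent shift (~+5, +2)
    (40, 30, 44, 33, 0.7),   # consistent
    (60, 50, 160, 10, 0.8),  # outlier: removed as a mismatch
]
print(round(match_score(pairs), 2))  # 1.6
```

The reference image with the highest weighted score is returned as the recognized place.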

Condition-invariant Place Recognition Using Deep Convolutional Auto-encoder (Deep Convolutional Auto-encoder를 이용한 환경 변화에 강인한 장소 인식)

  • Oh, Junghyun;Lee, Beomhee
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.8-13 / 2019
  • Visual place recognition is a widely researched area in robotics, as it is one of the elemental requirements for autonomous navigation and simultaneous localization and mapping for mobile robots. However, place recognition in a changing environment is a challenging problem, since the same place looks different according to time, weather, and season. This paper presents a feature extraction method using a deep convolutional auto-encoder to recognize places under severe appearance changes. Given database and query image sequences from different environments, the convolutional auto-encoder is trained to predict the images of the desired environment. The training process minimizes the loss function between the predicted image and the desired image. After training, the encoding part of the structure transforms an input image into a low-dimensional latent representation, which can be used as a condition-invariant feature for recognizing places in a changing environment. Experiments were conducted to prove the effectiveness of the proposed method, and the results showed that our method outperformed existing methods.
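The idea of using the encoder's latent code as a place descriptor can be illustrated with a deliberately tiny linear auto-encoder on random data (NumPy). This is a sketch only: the paper's method uses a deep convolutional network trained on cross-condition image pairs, whereas here a single linear layer is trained to reconstruct its own input:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, latent_dim=2, lr=0.05, epochs=500):
    """Minimal linear auto-encoder trained by gradient descent on
    reconstruction MSE; X @ W1 is the low-dimensional latent code."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, latent_dim))  # encoder weights
    W2 = rng.normal(0, 0.1, (latent_dim, d))  # decoder weights
    for _ in range(epochs):
        H = X @ W1                      # encode
        err = H @ W2 - X                # reconstruction error
        W2 -= lr * H.T @ err / n        # decoder gradient step
        W1 -= lr * X.T @ (err @ W2.T) / n  # encoder gradient step
    return W1, W2

X = rng.normal(size=(20, 8))      # stand-in for flattened place images
W1, W2 = train_autoencoder(X)
descriptor = X[0] @ W1            # latent code used as a place descriptor
print(descriptor.shape)           # (2,)
```

After training, places are matched by comparing these low-dimensional codes instead of raw pixels, which is what makes the representation cheap to store and compare.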

Collaborative Place and Object Recognition in Video using Bidirectional Context Information (비디오에서 양방향 문맥 정보를 이용한 상호 협력적인 위치 및 물체 인식)

  • Kim, Sung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.172-179 / 2006
  • In this paper, we present a practical place and object recognition method for guiding visitors in building environments. Recognizing places or objects in the real world can be difficult due to motion blur and camera noise. In this work, we present a modeling method based on the bidirectional interaction between places and objects, in which each mutually reinforces the other for robust recognition. The unification of visual context, including scene context, object context, and temporal context, is also presented. The proposed system has been tested to guide visitors in a large-scale building environment (10 topological places, 80 3D objects).


A Salient Based Bag of Visual Word Model (SBBoVW): Improvements toward Difficult Object Recognition and Object Location in Image Retrieval

  • Mansourian, Leila;Abdullah, Muhamad Taufik;Abdullah, Lilli Nurliyana;Azman, Azreen;Mustaffa, Mas Rina
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.2 / pp.769-786 / 2016
  • Object recognition and object location have always drawn much interest, and recently various computational models have been designed. One of the big issues in this domain is the lack of an appropriate model for extracting the important part of a picture and estimating the object's place in the same environments, which causes low accuracy. To solve this problem, a new Salient Based Bag of Visual Word (SBBoVW) model for object recognition and object location estimation is presented. The contributions of the present study are two-fold. The first is to introduce a new approach, the Salient Based Bag of Visual Word (SBBoVW) model, to recognize difficult objects that had low accuracy in previous methods. This method extracts SIFT features from the original and salient parts of pictures and fuses them together to generate better codebooks using the bag of visual words method. The second contribution is a new algorithm for finding the object's place automatically based on the saliency map. Performance evaluation on several data sets proves that the new approach outperforms other state-of-the-art methods.
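The fusion-and-codebook idea can be sketched roughly: descriptors from the whole image and from its salient region are pooled into one set before quantization against a codebook. The 2-D descriptors and codewords below are toy stand-ins for 128-D SIFT vectors and a learned codebook:

```python
import math

def assign_to_codebook(descriptors, codebook):
    """Quantize each descriptor to the index of its nearest codeword
    (Euclidean distance), producing visual-word assignments."""
    return [min(range(len(codebook)), key=lambda k: math.dist(d, codebook[k]))
            for d in descriptors]

original = [[0.1, 0.2], [0.9, 0.8]]   # descriptors from the full image
salient = [[0.85, 0.9]]               # descriptors from the salient region
fused = original + salient            # fuse both sets before quantization
codebook = [[0.0, 0.0], [1.0, 1.0]]
print(assign_to_codebook(fused, codebook))  # [0, 1, 1]
```

Counting these assignments yields the histogram used for retrieval; pooling the salient-region descriptors biases the histogram toward the object of interest, which is the intuition behind SBBoVW's gain on difficult objects.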