• Title/Summary/Keyword: appearance based localization

An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closing detection, in which the robot recognizes a location it has visited before, is a core problem in localization. A considerable amount of research has been conducted on appearance-based loop-closing detection and localization, because vision sensors are inexpensive and admit a variety of approaches to this problem. In scenes composed of repeated structures, such as corridors, perceptual aliasing, in which two different locations are recognized as the same place, occurs frequently. In this paper, we propose an improved method for recognizing locations in scenes with similar structures. We extract salient regions from images using a visual attention model and calculate weights from the distinctive features within those regions, which makes it possible to emphasize features unique to a scene and thereby distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved recognition performance, with an accuracy of 78.2% for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
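
As a rough illustration of the weighting idea above, the sketch below scores a database image against a query by weighting each query feature by the attention (saliency) value at its location and by a precomputed distinctiveness score, so that unique features in salient regions dominate the match. The saliency map, descriptors, and distinctiveness scores are assumed inputs; the paper's actual attention model and feature extractor are not specified in the abstract.

```python
# Hypothetical sketch: saliency- and distinctiveness-weighted image matching.
import numpy as np

def weighted_similarity(query_desc, query_pts, saliency_map, db_desc, distinctiveness):
    """Score one database image against a query image.

    query_desc:      (N, D) local descriptors of the query image
    query_pts:       (N, 2) integer (row, col) keypoint locations
    saliency_map:    (H, W) visual-attention map in [0, 1] for the query image
    db_desc:         (M, D) local descriptors of the database image
    distinctiveness: (N,)   e.g. inverse frequency of each query feature over
                            the whole image database (assumed precomputed)
    """
    # Weight each query feature by the saliency at its location times its distinctiveness.
    sal = saliency_map[query_pts[:, 0], query_pts[:, 1]]
    weights = sal * distinctiveness

    # Nearest-neighbor matching by cosine similarity.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    best = (q @ d.T).max(axis=1)   # best match score for each query feature

    # Salient, distinctive features dominate the image-level score.
    return float((weights * best).sum() / (weights.sum() + 1e-9))
```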

Localization of a mobile robot using the appearance-based approach (외향 기반 환경 인식을 사용한 이동 로봇의 위치인식 알고리즘)

  • 이희성;김은태
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.6 / pp.47-53 / 2004
  • This paper proposes an algorithm for determining robot location using the appearance-based paradigm. First, the algorithm compresses the image set using Principal Component Analysis (PCA) to obtain a low-dimensional subspace, called the eigenspace, and builds a manifold that represents a continuous appearance function. A neural network is employed to estimate the location of the mobile robot from the eigenspace coefficients. A Kalman filtering scheme is then used for fine estimation of the robot location. The algorithm has been implemented and tested on a mobile robot system, and in several trials the robot location was estimated accurately.
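
The pipeline described in this abstract (PCA eigenspace plus a neural-network regressor) can be sketched roughly as follows; a companion sketch of the Kalman-filter smoothing step appears after the related 2004 conference abstract further down. The use of scikit-learn and all hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: eigenspace compression plus neural-network localization.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def train_appearance_model(images, positions, n_components=20):
    """images: (N, H*W) flattened grayscale training images;
    positions: (N, 2) corresponding robot poses (x, y)."""
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(images)            # project images onto the eigenspace
    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    net.fit(coeffs, positions)                    # continuous appearance -> pose
    return pca, net

def localize(pca, net, image):
    """Estimate (x, y) for one unseen flattened image."""
    coeff = pca.transform(image.reshape(1, -1))   # eigenspace coefficients
    return net.predict(coeff)[0]
```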

Neural Network-based place localization for a mobile Robot using eigenspace (Eigenspace를 이용한 신경회로망 기반의 로봇 위치 인식 시스템)

  • Lee, Hui-Seong;Lee, Yun-Hui;Kim, Eun-Tae;Park, Min-Yong
    • Proceedings of the KIEE Conference / 2003.11c / pp.1010-1013 / 2003
  • This paper describes an algorithm for determining robot location using the appearance-based paradigm. The algorithm compresses the image set using PCA (principal component analysis) to obtain a low-dimensional subspace, called the eigenspace, and builds a manifold that represents a continuous appearance function. To determine the robot location from an unknown input image, the recognition system first projects the image onto the eigenspace, and a neural network then uses the eigenspace coefficients to estimate the location of the mobile robot. The algorithm has been implemented and tested on a mobile robot system and computed the location accurately in several trials.

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • In the near future, robots will coexist with humans and support them effectively. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification to achieve such a human-centered system and robot localization in an Intelligent Space. The Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed; it provides human-centered services by facilitating physical and psychological interaction between humans and intelligent devices. As its sensing device, a color CCD camera module that includes processing and networking components has been chosen. To provide appropriate services to users in a multi-camera environment, the Intelligent Space must identify and track multiple objects, and many camera modules are distributed to achieve seamless tracking and location estimation; this causes object-identification inconsistencies among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to track multiple objects under occlusion.
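
A minimal sketch of a color-appearance model for consistent labeling across distributed camera modules follows, assuming each object is summarized by a normalized hue-saturation histogram and re-identified by Bhattacharyya similarity. The histogram size, the HSV representation, and the matching threshold are illustrative choices, not the paper's exact learning procedure.

```python
# Hypothetical sketch: color-appearance models for consistent labeling across cameras.
import numpy as np

def color_model(hsv_pixels, bins=(16, 8)):
    """Build a normalized hue-saturation histogram from one object's pixels.
    hsv_pixels: (N, 3) HSV values (OpenCV ranges assumed: H in [0,180), S in [0,256))."""
    hist, _, _ = np.histogram2d(hsv_pixels[:, 0], hsv_pixels[:, 1],
                                bins=bins, range=[[0, 180], [0, 256]])
    return hist / (hist.sum() + 1e-9)

def bhattacharyya(h1, h2):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return float(np.sqrt(h1 * h2).sum())

def identify(detection_hist, stored_models, threshold=0.7):
    """Assign a new detection the label of the best-matching stored model, or None."""
    label, score = max(((lbl, bhattacharyya(detection_hist, h))
                        for lbl, h in stored_models.items()), key=lambda kv: kv[1])
    return label if score >= threshold else None
```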

Robust Face Recognition System using AAM and Gabor Feature Vectors (AAM과 가버 특징 벡터를 이용한 강인한 얼굴 인식 시스템)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Jeon, Seoung-Seon;Kim, Jae-Min;Cho, Seong-Won;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.1-10 / 2007
  • In this paper, we propose a face recognition system using AAM and Gabor feature vectors. EBGM, one of the prominent face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which the Gabor feature vectors are extracted. However, the facial feature point localization used in EBGM is based on Gabor jet similarity and is sensitive to the initial points, and incorrect localization of the facial feature points degrades the face recognition rate. AAM is known to be successfully applicable to facial feature point localization. We therefore propose a localization method that first roughly estimates the facial feature points using AAM and then refines them with a Gabor jet similarity-based localization step initialized at the AAM estimates, and we build a face recognition system on the proposed localization method. Experiments verify that the proposed face recognition system using the combined localization performs better than a conventional system, such as EBGM, that uses Gabor jet similarity-based localization alone.
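
The refinement idea can be sketched roughly as follows: take the facial feature point predicted by AAM as the initial location, then search a small neighborhood for the pixel whose Gabor jet is most similar to a stored model jet. The filter-bank parameters, search radius, and model jets are illustrative assumptions, and the AAM fitting itself is outside the sketch.

```python
# Hypothetical sketch: Gabor-jet refinement around an AAM-estimated feature point.
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, n_orientations=8, lambd=10.0, gamma=0.5):
    """A small bank of Gabor kernels at evenly spaced orientations."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma) for t in thetas]

def gabor_jet(responses, y, x):
    """Stack the filter-response magnitudes at pixel (y, x) into a jet."""
    return np.array([abs(r[y, x]) for r in responses])

def refine_point(gray, aam_point, model_jet, bank, radius=5):
    """Shift the AAM-estimated (y, x) point to the nearby pixel whose jet is most
    similar (cosine similarity) to the stored model jet.
    Assumes the point is not close to the image border."""
    responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k) for k in bank]
    y0, x0 = aam_point
    best, best_sim = aam_point, -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            jet = gabor_jet(responses, y0 + dy, x0 + dx)
            sim = float(jet @ model_jet /
                        (np.linalg.norm(jet) * np.linalg.norm(model_jet) + 1e-9))
            if sim > best_sim:
                best, best_sim = (y0 + dy, x0 + dx), sim
    return best
```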

Topological Localization of Mobile Robots in Real Indoor Environment (실제 실내 환경에서 이동로봇의 위상학적 위치 추정)

  • Park, Young-Bin;Suh, Il-Hong;Choi, Byung-Uk
    • The Journal of Korea Robotics Society / v.4 no.1 / pp.25-33 / 2009
  • One of the main problems of topological localization in a real indoor environment is variation in the environment caused by dynamic objects and changes in illumination. Another problem arises from the nature of topological localization itself: the robot must recognize observations taken at slightly different positions and angles within a topological location as identical. In this paper, a possible solution to these problems is presented in the domain of global topological localization for mobile robots, in which environments are represented by their visual appearance. Our approach is formulated on the basis of a probabilistic model, the Bayes filter. Marginalization over dynamics in the environment, marginalization over viewpoint changes within a topological location, and fusion of multiple visual features are employed to obtain reliable observations, and an action-based view transition model and an action-associated topological map are used to predict the next state. We performed experiments comparing the proposed approach with several standard approaches in topological localization, and the results clearly demonstrated its value.
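
A minimal discrete Bayes filter over topological nodes, in the spirit of the approach above, is sketched below: an action-conditioned transition model predicts the next belief and an appearance-based observation likelihood corrects it. The transition matrices and the likelihood function are assumed inputs; the paper's marginalization and feature-fusion steps are not reproduced.

```python
# Hypothetical sketch: one prediction/correction step of a discrete Bayes filter
# over K topological nodes.
import numpy as np

def bayes_filter_step(belief, action, transitions, obs_likelihood):
    """belief:         (K,) current probability over the topological nodes
    action:         key into `transitions` (e.g. 'go_straight', 'turn_left')
    transitions:    dict mapping action -> (K, K) matrix with
                    T[i, j] = P(next node = j | current node = i, action)
    obs_likelihood: (K,) P(current observation | node), from appearance matching
    """
    predicted = belief @ transitions[action]    # predict with the action-based transition model
    updated = predicted * obs_likelihood        # correct with the appearance observation
    return updated / (updated.sum() + 1e-12)    # renormalize the belief
```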

Extended Support Vector Machines for Object Detection and Localization

  • Feyereisl, Jan;Han, Bo-Hyung
    • The Magazine of the IEIE / v.39 no.2 / pp.45-54 / 2012
  • Object detection is a fundamental task for many high-level computer vision applications such as image retrieval, scene understanding, activity recognition, and visual surveillance. Although object detection is one of the most popular problems in computer vision and various algorithms have been proposed, it remains notoriously difficult, mainly because of the lack of object representation models that handle large variations in object structure and appearance. In this article, we review a branch of object detection algorithms based on Support Vector Machines (SVMs), a well-known max-margin technique for minimizing classification error. We introduce several variations of SVMs, namely Structural SVMs and Latent SVMs, and discuss their applications to object detection and localization.
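
As a toy illustration of SVM-based detection, the sketch below frames detection as score maximization over candidate windows: a trained linear SVM (w, b) scores each window's feature vector and the highest-scoring window is reported. Structural and Latent SVMs change how the parameters are trained but share this kind of inference step; the feature function and window generator here are assumptions, not any particular detector's implementation.

```python
# Hypothetical sketch: detection as SVM score maximization over candidate windows.
import numpy as np

def detect(image, windows, feature_fn, w, b, threshold=0.0):
    """windows:    iterable of (x, y, width, height) candidate boxes
    feature_fn: maps (image, window) -> fixed-length feature vector (e.g. HOG)
    w, b:       parameters of a trained linear SVM
    Returns (best_window, score), or (None, None) if no window beats `threshold`."""
    best_win, best_score = None, threshold
    for win in windows:
        score = float(w @ feature_fn(image, win) + b)   # SVM decision value
        if score > best_score:
            best_win, best_score = win, score
    if best_win is None:
        return None, None
    return best_win, best_score
```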

A novel robot localization algorithm based on neural network and Kalman filter (신경 회로망과 칼만 필터를 결합한 새로운 방식의 로봇 위치인식 알고리즘)

  • 이희성;김은태;박민용
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.519-522 / 2004
  • In this paper, we propose a robot localization algorithm based on the appearance-based approach. Strongly correlated images are acquired in the robot's workspace and projected onto an eigenspace to extract their principal components. Using a neural network, the extracted principal components can be represented as a continuous appearance function over the eigenspace. To estimate the robot's location, a new image is projected onto the eigenspace and the current position is estimated through the continuous appearance function. Finally, by applying a Kalman filter to the image-derived data, the algorithm exploits the error between the robot's true position and the information obtained from the image to produce a more accurate estimate of the mobile robot's position.
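
The final smoothing step mentioned in this abstract can be sketched as a simple Kalman filter that treats the per-image position estimate from the eigenspace/neural-network stage as a noisy measurement and fuses it with an odometry-based prediction. The motion model and the noise covariances below are illustrative assumptions, not the authors' parameters.

```python
# Hypothetical sketch: Kalman-filter fusion of odometry with appearance-based
# position measurements (state = 2D position, measurement matrix H = identity).
import numpy as np

def kalman_step(x, P, odom_delta, z, Q=np.eye(2) * 0.01, R=np.eye(2) * 0.1):
    """x: (2,) previous position estimate;  P: (2, 2) its covariance;
    odom_delta: (2,) displacement since the last image (odometry);
    z: (2,) position measured by the eigenspace/neural-network stage;
    Q, R: assumed process and measurement noise covariances."""
    # Predict: move by the odometry increment and inflate the uncertainty.
    x_pred = x + odom_delta
    P_pred = P + Q
    # Update with the appearance-based measurement.
    K = P_pred @ np.linalg.inv(P_pred + R)        # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new
```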

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and the diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce cost by using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent limitations of color and texture cues. Depth information, which allows the human body parts to be detected in 3D coordinates, can be used to address these challenges, but depth-based HPE requires a depth sensor, which imposes space constraints and additional cost. In particular, depth-based HPE tends to be less reliable because it requires pose initialization and its frame-to-frame tracking is less stable. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method using the COCO Object Keypoint Similarity (OKS) evaluation. By taking advantage of both the RGB image-based and the depth information-based HPE methods, our RGB-D-based HPE achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both the RGB-based and the depth-based HPE.
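
A basic ingredient of this kind of RGB-D combination is back-projecting a 2D joint predicted on the RGB image into 3D camera coordinates using the aligned depth map and the pinhole camera model, roughly as sketched below. The intrinsics and depth scale are placeholders for whatever the sensor actually provides, and the paper's occlusion handling is not reproduced.

```python
# Hypothetical sketch: lifting an RGB-predicted 2D joint to 3D with the aligned depth map.
import numpy as np

def keypoint_to_3d(u, v, depth_map, fx, fy, cx, cy, depth_scale=0.001):
    """(u, v): pixel coordinates of a joint from the RGB-based estimator;
    depth_map: depth image aligned to the RGB frame (raw units, e.g. millimeters);
    fx, fy, cx, cy: pinhole intrinsics of the depth-aligned camera (assumed known)."""
    z = float(depth_map[int(v), int(u)]) * depth_scale   # depth in meters
    if z <= 0:                                           # missing/invalid depth reading
        return None
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])                           # joint in camera coordinates
```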

Efficient Visual Place Recognition by Adaptive CNN Landmark Matching

  • Chen, Yutian;Gan, Wenyan;Zhu, Yi;Tian, Hui;Wang, Cong;Ma, Wenfeng;Li, Yunbo;Wang, Dong;He, Jixian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.11 / pp.4084-4104 / 2021
  • Visual place recognition (VPR) is a fundamental yet challenging task in mobile robot navigation and localization. Existing VPR methods are usually based on pairwise similarity of image descriptors, so they are sensitive to changes in visual appearance and computationally expensive. This paper proposes a simple yet effective four-step method that achieves adaptive convolutional neural network (CNN) landmark matching for VPR. First, based on features extracted from existing CNN models, regions with higher significance scores are selected as landmarks. Then, according to the coordinate positions of potential landmarks, landmark matching is improved by removing mismatched landmark pairs. Finally, using the significance scores obtained in the first step, robust image retrieval is performed with adaptive landmark matching that gives more weight to landmark pairs with higher significance scores. To verify the efficiency and robustness of the proposed method, evaluations are conducted on standard benchmark datasets. The experimental results indicate that the proposed method reduces the feature representation space of place images by more than 75% with negligible loss in recognition precision, and that its similarity calculation is fast enough to satisfy real-time requirements.
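
The weighting used in the retrieval step can be sketched as follows: matched landmark pairs contribute to the image-level similarity in proportion to their significance scores. The CNN feature extraction, landmark selection, and geometric mismatch-removal stages are assumed to have run already; the descriptors and scores below are illustrative inputs.

```python
# Hypothetical sketch: significance-weighted landmark matching for place retrieval.
import numpy as np

def weighted_place_score(q_desc, q_sig, d_desc, d_sig):
    """q_desc, d_desc: (Nq, D) / (Nd, D) L2-normalized landmark descriptors;
    q_sig, d_sig:   (Nq,) / (Nd,) landmark significance scores."""
    sims = q_desc @ d_desc.T                  # cosine similarity between all landmark pairs
    nn = sims.argmax(axis=1)                  # best database landmark for each query landmark
    pair_sim = sims[np.arange(len(q_desc)), nn]
    pair_weight = q_sig * d_sig[nn]           # pairs with higher significance count more
    return float((pair_weight * pair_sim).sum() / (pair_weight.sum() + 1e-9))
```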