• Title/Abstract/Keyword: appearance based localization

Search results: 13 items (processing time: 0.023 s)

시각주의 모델을 적용한 실내 복도에서의 위치인식 기법 (An Approach for Localization Around Indoor Corridors Based on Visual Attention Model)

  • 윤국열;최선욱;이종호
    • 제어로봇시스템학회논문지 / Vol. 17, No. 2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closing detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closing detection and localization, because vision sensors have a cost advantage and permit a variety of approaches to this problem. In scenes that consist of repeated structures, such as corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method to recognize locations in scenes that have similar structures. We extract salient regions from images using a visual attention model and calculate weights from the distinctive features in the salient regions. This makes it possible to emphasize unique features in a scene and thus distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved recognition performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
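To make the attention-weighting idea concrete, here is a minimal sketch (not the authors' implementation): it substitutes a spectral-residual saliency map for their visual attention model and weights ORB keypoint matches by the saliency at each keypoint. All function names, parameter values, and the choice of ORB features are illustrative assumptions.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Crude saliency map (spectral residual), a stand-in for a visual attention model."""
    small = cv2.resize(gray, (64, 64)).astype(np.float32)
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return saliency / (saliency.max() + 1e-8)

def weighted_match_score(img_a, img_b):
    """Score similarity of two grayscale corridor images, emphasizing salient regions."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    sal_a = spectral_residual_saliency(img_a)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # Weight each match by the saliency at the query keypoint location.
    score = sum(sal_a[int(kp_a[m.queryIdx].pt[1]), int(kp_a[m.queryIdx].pt[0])]
                for m in matches)
    return score / max(len(kp_a), 1)
```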

외향 기반 환경 인식을 사용한 이동 로봇의 위치인식 알고리즘 (Localization of a mobile robot using the appearance-based approach)

  • 이희성;김은태
    • 전자공학회논문지CI / Vol. 41, No. 6 / pp.47-53 / 2004
  • This paper proposes a robot localization algorithm based on the appearance-based approach. First, the proposed algorithm compresses the acquired images by projecting them onto an eigenspace using principal component analysis (PCA). The extracted principal components can be represented as a continuous appearance function in the eigenspace. For localization, when a new image is given, a neural network projects it onto the eigenspace and estimates the robot's current position through the continuous appearance function. Finally, the robot's position can be estimated accurately by applying a Kalman filter to the data obtained from the images. The proposed algorithm was implemented on a real mobile robot, and the results confirmed that the robot's position could be estimated accurately.
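A rough sketch of the eigenspace compression step described above, assuming flattened grayscale training images; the array shapes, the value of `k`, and the function names are illustrative choices, not the authors' code.

```python
import numpy as np

def build_eigenspace(images, k=10):
    """images: (N, H*W) array of flattened training images. Returns mean and top-k basis."""
    X = images.astype(np.float64)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components (eigenspace basis).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                                      # basis: (k, H*W)

def project(image, mean, basis):
    """Project a flattened image onto the eigenspace; returns its k appearance coefficients."""
    return basis @ (image.astype(np.float64) - mean)

# Example: 100 training images of 32x24 pixels, plus one query image.
train = np.random.rand(100, 32 * 24)
mean, basis = build_eigenspace(train, k=10)
coeffs = project(np.random.rand(32 * 24), mean, basis)       # 10-D point on the appearance manifold
```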

Eigenspace를 이용한 신경회로망 기반의 로봇 위치 인식 시스템 (Neural Network-based place localization for a mobile Robot using eigenspace)

  • 이희성;이윤희;김은태;박민용
    • 대한전기학회:학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 B / pp.1010-1013 / 2003
  • This paper describes an algorithm for determining robot location using the appearance-based paradigm. The algorithm compresses the image set using principal component analysis (PCA) to obtain a low-dimensional subspace, called the eigenspace, and builds a manifold that represents a continuous appearance function. To determine the robot's location, the recognition system first projects an unknown input image onto the eigenspace. A neural network then uses the eigenspace coefficients to estimate the location of the mobile robot. The algorithm has been implemented and tested on a mobile robot system, and in several trials it computed the location accurately.
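The mapping from eigenspace coefficients to a position can be sketched with off-the-shelf components; the sketch below uses scikit-learn's PCA and MLPRegressor as stand-ins for the paper's eigenspace and neural network, with hypothetical data shapes and network size.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical training data: flattened images and the (x, y) robot poses where they were taken.
images = np.random.rand(200, 32 * 24)
poses = np.random.rand(200, 2)

# PCA plays the role of the eigenspace; the MLP approximates the continuous appearance
# function that maps eigenspace coefficients to a location.
model = make_pipeline(PCA(n_components=10),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000))
model.fit(images, poses)

estimated_pose = model.predict(np.random.rand(1, 32 * 24))   # (1, 2) estimated (x, y)
```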


Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 4, No. 2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification in order to achieve such a human-centered system and robot localization in intelligent space. The Intelligent Space is a space in which many intelligent devices, such as computers and sensors, are distributed; it provides human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As the intelligent device of the Intelligent Space, a color CCD camera module that includes processing and networking parts has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects in order to provide appropriate services to users in a multi-camera environment. To achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
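As one plausible reading of the "object color appearance model" mentioned above (the paper's actual model may differ), the sketch below builds a hue-saturation histogram per object patch and decides whether two patches from different cameras show the same object via the Bhattacharyya distance; the threshold is an arbitrary illustrative value.

```python
import cv2

def color_model(bgr_patch):
    """Hue-saturation histogram of an object patch: a simple color appearance model."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def same_object(model_a, model_b, threshold=0.4):
    """Lower Bhattacharyya distance means more similar appearance across camera views."""
    dist = cv2.compareHist(model_a, model_b, cv2.HISTCMP_BHATTACHARYYA)
    return dist < threshold
```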

AAM과 가버 특징 벡터를 이용한 강인한 얼굴 인식 시스템 (Robust Face Recognition System using AAM and Gabor Feature Vectors)

  • 김상훈;정수환;전승선;김재민;조성원;정선태
    • 한국콘텐츠학회논문지 / Vol. 7, No. 2 / pp.1-10 / 2007
  • This paper proposes a face recognition system using an Active Appearance Model (AAM) and Gabor feature vectors. Elastic Bunch Graph Matching (EBGM), a representative face recognition algorithm that uses Gabor feature vectors, requires the detection of facial feature points in order to extract them. However, the facial feature point detection method used in EBGM is based on Gabor jet similarity, which is sensitive to the initial points, and incorrectly detected feature points affect face recognition performance. AAM is known to be effective for detecting facial feature points. In this paper, we propose a facial feature point detection method that first estimates the feature points roughly with AAM and then, using these estimates as initial points, refines them with the Gabor jet similarity-based detection method, together with a face recognition system based on it. Experiments confirmed that the face recognition system using the proposed feature point detection method performs better than systems, such as EBGM, that rely only on Gabor jet similarity-based feature point detection.
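The refinement step described above can be sketched roughly as follows (an assumption-laden illustration, not the paper's EBGM code): a bank of Gabor filters yields a "jet" at each pixel, and an AAM-estimated landmark is shifted within a small window to the position whose jet best matches a stored model jet. Filter sizes, scales, and the search radius are placeholder values.

```python
import cv2
import numpy as np

def gabor_responses(gray, scales=(4, 8, 16), orientations=8):
    """Precompute magnitude responses of a Gabor filter bank over the whole image."""
    responses = []
    img = gray.astype(np.float32)
    for lam in scales:
        for k in range(orientations):
            theta = np.pi * k / orientations
            kern = cv2.getGaborKernel((31, 31), lam / 2.0, theta, lam, 0.5)
            responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kern)))
    return np.stack(responses)                    # (n_filters, H, W)

def jet(responses, x, y):
    """Gabor 'jet' at pixel (x, y), L2-normalized."""
    v = responses[:, y, x]
    return v / (np.linalg.norm(v) + 1e-8)

def refine_point(responses, model_jet, x0, y0, radius=5):
    """Refine an AAM-estimated landmark by maximizing jet similarity in a small window."""
    best, best_sim = (x0, y0), -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sim = float(np.dot(model_jet, jet(responses, x0 + dx, y0 + dy)))
            if sim > best_sim:
                best, best_sim = (x0 + dx, y0 + dy), sim
    return best
```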

실제 실내 환경에서 이동로봇의 위상학적 위치 추정 (Topological Localization of Mobile Robots in Real Indoor Environment)

  • 박영빈;서일홍;최병욱
    • 로봇학회논문지 / Vol. 4, No. 1 / pp.25-33 / 2009
  • One of the main problems of topological localization in a real indoor environment is variation in the environment caused by dynamic objects and changes in illumination. Another problem arises from the nature of topological localization itself: a robot must be able to recognize observations made at slightly different positions and angles within a given topological location as identical. In this paper, a possible solution to these problems is addressed in the domain of global topological localization for mobile robots, in which environments are represented by their visual appearance. Our approach is formulated on the basis of a probabilistic model called the Bayes filter. Marginalization of dynamics in the environment, marginalization of viewpoint changes within a topological location, and fusion of multiple visual features are employed to measure observations reliably, while an action-based view transition model and an action-associated topological map are used to predict the next state. We performed experiments comparing the proposed approach with several standard approaches in the field of topological localization, and the results clearly demonstrate its value.
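The Bayes-filter formulation can be illustrated with a small discrete example; the sketch below is generic (a four-node map with made-up transition and likelihood values), not the paper's model, but it shows the predict/update cycle with an action-based transition matrix.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayes filter over topological nodes.

    belief:     (N,) prior probability of being at each node
    transition: (N, N) action-based transition model, transition[i, j] = P(next=j | now=i, action)
    likelihood: (N,) P(observation | node), e.g. from appearance matching
    """
    predicted = belief @ transition          # prediction with the action model
    posterior = predicted * likelihood       # measurement update
    return posterior / posterior.sum()

# Hypothetical 4-node topological map (e.g. corridor, lobby, lab, office).
belief = np.full(4, 0.25)                                    # global localization: uniform prior
transition = np.array([[0.7, 0.3, 0.0, 0.0],
                       [0.0, 0.7, 0.3, 0.0],
                       [0.0, 0.0, 0.7, 0.3],
                       [0.3, 0.0, 0.0, 0.7]])
likelihood = np.array([0.1, 0.6, 0.2, 0.1])                  # from appearance features
belief = bayes_filter_step(belief, transition, likelihood)
```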


Extended Support Vector Machines for Object Detection and Localization

  • Feyereisl, Jan;Han, Bo-Hyung
    • 전자공학회지 / Vol. 39, No. 2 / pp.45-54 / 2012
  • Object detection is a fundamental task for many high-level computer vision applications such as image retrieval, scene understanding, activity recognition, and visual surveillance. Although object detection is one of the most popular problems in computer vision and various algorithms have been proposed thus far, it is also notoriously difficult, mainly due to the lack of proper models for object representation that can handle large variations in object structure and appearance. In this article, we review a branch of object detection algorithms based on Support Vector Machines (SVMs), a well-known max-margin technique for minimizing classification error. We introduce a few variations of SVMs, namely Structural SVMs and Latent SVMs, and discuss their applications to object detection and localization.
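As a simplified illustration of the SVM-based detection pipeline this article surveys (a plain linear SVM on HOG features with a sliding window, not the Structural or Latent SVM variants it discusses), consider the following sketch; the patch size, step, and training data are hypothetical.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Hypothetical training data: 64x64 patches labeled object (1) / background (0).
patches = np.random.rand(200, 64, 64)
labels = np.random.randint(0, 2, 200)

features = [hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for p in patches]
clf = LinearSVC(C=1.0).fit(features, labels)

def detect(image, step=16, size=64):
    """Slide a window over a grayscale image and keep locations the SVM scores positively."""
    detections = []
    for y in range(0, image.shape[0] - size + 1, step):
        for x in range(0, image.shape[1] - size + 1, step):
            f = hog(image[y:y + size, x:x + size],
                    pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            score = clf.decision_function([f])[0]
            if score > 0:
                detections.append((x, y, score))
    return detections
```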


신경 회로망과 칼만 필터를 결합한 새로운 방식의 로봇 위치인식 알고리즘 (A novel robot localization algorithm based on neural network and Kalman filter)

  • 이희성;김은태;박민용
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2004년도 춘계학술대회 학술발표 논문집 제14권 제1호 / pp.519-522 / 2004
  • This paper proposes a robot localization algorithm based on the appearance-based approach. Strongly correlated images acquired in the robot's workspace are projected onto an eigenspace to extract the principal components. Using a neural network, the extracted principal components can be represented as a continuous appearance function in the eigenspace. For localization, a new image is projected onto the eigenspace and the robot's current position is estimated through the continuous appearance function. Finally, we propose an algorithm that applies a Kalman filter to the image data and uses the error between the robot's true position and the information obtained from the images to estimate the mobile robot's position more accurately.
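The final fusion step can be sketched as a plain linear Kalman filter that predicts with an odometry increment and corrects with the appearance-based position estimate; the state layout and noise covariances below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_step(x, P, u, z, Q, R):
    """One Kalman cycle for a 2-D position state.

    x, P : state estimate (x, y) and its covariance
    u    : odometry increment (dx, dy) since the last step
    z    : position measured by the appearance-based (eigenspace) method
    Q, R : process and measurement noise covariances
    """
    # Predict: dead-reckoning with the odometry increment.
    x_pred = x + u
    P_pred = P + Q
    # Update: correct the prediction with the appearance-based measurement.
    K = P_pred @ np.linalg.inv(P_pred + R)       # Kalman gain (H = I)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.25 * np.eye(2)        # appearance estimate is noisier than odometry
x, P = kalman_step(x, P, u=np.array([0.1, 0.0]), z=np.array([0.12, 0.03]), Q=Q, R=R)
```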


Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • 림빈보니카;성낙준;마준;최유주;홍민
    • 인터넷정보학회논문지 / Vol. 21, No. 3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in the field of computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce computation cost, since it requires only a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent characteristics of color and texture. On the other hand, depth information, which lets an HPE framework detect the human body parts in 3D coordinates, can be useful for addressing these challenges. However, depth-based HPE requires a depth-dependent device, which has space constraints and is costly. In particular, the results of depth-based HPE are less reliable because it requires pose initialization and its frame tracking is less stable. Therefore, this paper proposes a new HPE method that is robust in estimating self-occlusion. Many human parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity library. By taking advantage of both the RGB image-based and the depth information-based HPE methods, our RGB-D-based HPE method achieved an mAP of 0.903 and an mAR of 0.938, which shows that it outperforms both the RGB-based HPE and the depth-based HPE.
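One simple way to combine an RGB-based pose estimator with depth, in the spirit of the method above (a sketch under assumed pinhole intrinsics, not the authors' framework), is to back-project each detected 2-D joint into 3-D camera coordinates using an aligned depth map.

```python
import numpy as np

def keypoints_to_3d(keypoints_2d, depth, fx, fy, cx, cy):
    """Back-project 2-D joint locations (u, v) from an RGB pose estimator into 3-D
    camera coordinates using an aligned depth map (metres) and pinhole intrinsics."""
    joints_3d = []
    for u, v in keypoints_2d:
        z = depth[int(v), int(u)]
        joints_3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(joints_3d)

# Hypothetical intrinsics for a 640x480 RGB-D camera and two detected joints.
depth = np.full((480, 640), 2.0)                    # a flat surface 2 m away, for illustration
joints = keypoints_to_3d([(320, 240), (350, 260)], depth,
                         fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```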

Efficient Visual Place Recognition by Adaptive CNN Landmark Matching

  • Chen, Yutian;Gan, Wenyan;Zhu, Yi;Tian, Hui;Wang, Cong;Ma, Wenfeng;Li, Yunbo;Wang, Dong;He, Jixian
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 11 / pp.4084-4104 / 2021
  • Visual place recognition (VPR) is a fundamental yet challenging task in mobile robot navigation and localization. Existing VPR methods are usually based on pairwise similarity of image descriptors, so they are sensitive to changes in visual appearance and are also computationally expensive. This paper proposes a simple yet effective four-step method that achieves adaptive convolutional neural network (CNN) landmark matching for VPR. First, based on the features extracted from existing CNN models, the regions with higher significance scores are selected as landmarks. Then, according to the coordinate positions of potential landmarks, landmark matching is improved by removing mismatched landmark pairs. Finally, considering the significance scores obtained in the first step, robust image retrieval is performed based on adaptive landmark matching, giving more weight to the landmark matching pairs with higher significance scores. To verify the efficiency and robustness of the proposed method, evaluations are conducted on standard benchmark datasets. The experimental results indicate that the proposed method reduces the feature representation space of place images by more than 75% with negligible loss in recognition precision. It also achieves a fast matching speed in similarity calculation, satisfying the real-time requirement.
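The landmark-selection and weighted-matching steps can be sketched on an already-extracted CNN feature map; the snippet below uses summed activation as the significance score and omits the coordinate-based mismatch removal step, so it is only a rough approximation of the four-step method, with made-up tensor shapes.

```python
import numpy as np

def select_landmarks(feat, k=20):
    """feat: (C, H, W) CNN feature map. Use summed activation as the significance
    score and keep the k most significant spatial cells as landmarks."""
    C, H, W = feat.shape
    significance = feat.sum(axis=0).ravel()                    # (H*W,)
    idx = np.argsort(significance)[-k:]
    desc = feat.reshape(C, -1)[:, idx].T                       # (k, C) landmark descriptors
    desc /= np.linalg.norm(desc, axis=1, keepdims=True) + 1e-8
    coords = np.stack(np.unravel_index(idx, (H, W)), axis=1)   # (k, 2), for a geometric check
    return desc, significance[idx], coords

def place_similarity(feat_a, feat_b, k=20):
    """Greedy landmark matching: each query landmark picks its best match, and the
    match scores are weighted by the query landmarks' significance scores."""
    da, sa, _ = select_landmarks(feat_a, k)
    db, _, _ = select_landmarks(feat_b, k)
    sims = da @ db.T                                           # cosine similarities
    return float((sims.max(axis=1) * sa).sum() / (sa.sum() + 1e-8))

# Hypothetical feature maps (e.g. a conv layer with 256 channels on a 14x14 grid).
score = place_similarity(np.random.rand(256, 14, 14), np.random.rand(256, 14, 14))
```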