• Title/Abstract/Keywords: Visual Feature


사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법 (A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM)

  • 이재민;김곤우
    • 로봇학회논문지
    • /
    • Vol. 11, No. 1
    • /
    • pp.33-40
    • /
    • 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. In order to fully reconstruct a rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured using a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be induced from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates can restore the relative pose between the camera and the feature by obtaining the homography. In order to evaluate the performance, we compared the result of the proposed method with the reference pose in the Gazebo robot simulator.
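
The homography step described in this abstract, relating the rectangle's corners in real-world coordinates to the observed quadrilateral in the image, can be sketched with a direct linear transform (DLT). This is a minimal illustration in Python/NumPy with hypothetical corner coordinates; the Coupled Line Camera reconstruction and the stereo distance measurement from the paper are not reproduced here.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the null space of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Corners of a rectangle in world coordinates (metres) and the warped
# quadrilateral observed in the image (hypothetical pixel values).
rect = [(0, 0), (2, 0), (2, 1), (0, 1)]
quad = [(100, 120), (300, 110), (310, 230), (95, 220)]
H = homography_dlt(rect, quad)
```

Mapping any rectangle corner through `H` (in homogeneous coordinates) reproduces the observed image corner.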

다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM (A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation)

  • 박근형;조형기
    • 대한임베디드공학회논문지
    • /
    • Vol. 19, No. 1
    • /
    • pp.65-71
    • /
    • 2024
  • An indirect visual SLAM takes raw image data and exploits geometric information such as key-points and line edges. SLAM performance may degrade under various environmental changes; the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM, built on ORB-SLAM, via multi-channel dynamic object estimation. An optical flow algorithm and a deep learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method combines the two types of dynamic object information to create multi-channel dynamic masks, from which both actually moving dynamic objects and potentially dynamic objects can be identified. Finally, dynamic objects included in the masks are removed in the feature extraction step, so the proposed method obtains more precise camera poses. The superiority of our method over the conventional ORB-SLAM was verified by experiments on the KITTI odometry dataset.
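
As an illustration of the mask-combination idea (not the paper's actual ORB-SLAM integration), the following sketch stacks a hypothetical optical-flow mask and a detector mask into a multi-channel dynamic mask and discards key-points that fall inside either channel:

```python
import numpy as np

# Hypothetical per-pixel masks on an 8x8 image: one channel from optical flow
# (pixels that are actually moving) and one from an object detector
# (pixels belonging to potentially dynamic classes such as people or cars).
flow_mask = np.zeros((8, 8), dtype=bool)
flow_mask[2:4, 2:4] = True          # region with large residual optical flow
detect_mask = np.zeros((8, 8), dtype=bool)
detect_mask[5:7, 5:7] = True        # detector box for a potentially dynamic object

# Stack the channels, then remove key-points covered by any channel.
dynamic_mask = np.stack([flow_mask, detect_mask]).any(axis=0)
keypoints = [(0, 0), (2, 3), (6, 6), (7, 1)]   # (row, col) candidates
kept = [kp for kp in keypoints if not dynamic_mask[kp]]
```

Only the key-points outside both dynamic channels survive and are passed on to feature extraction.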

적응적 이진화를 이용하여 빛의 변화에 강인한 영상거리계를 통한 위치 추정 (Robust Visual Odometry System for Illumination Variations Using Adaptive Thresholding)

  • 황요섭;유호윤;이장명
    • 제어로봇시스템학회논문지
    • /
    • Vol. 22, No. 9
    • /
    • pp.738-744
    • /
    • 2016
  • In this paper, a robust visual odometry system has been proposed and implemented for environments with dynamic illumination. Visual odometry is based on stereo images to estimate the distance to an object. It is very difficult to realize a highly accurate and stable estimation because image quality is highly dependent on illumination, which is a major disadvantage of visual odometry. Therefore, to solve the low performance of the feature detection phase caused by illumination variations, we determine an optimal threshold value for image binarization and use an adaptive threshold for feature detection. The direction of each feature point and the magnitude of its non-uniform motion vector are utilized as features, and the performance of feature detection is further improved by the RANSAC algorithm. As a result, the position of a mobile robot is estimated using the feature points. The experimental results demonstrate that the proposed approach has superior performance under illumination variations.
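
A minimal sketch of a local-mean adaptive thresholding scheme (an assumption; the paper's exact optimal-threshold selection is not specified in the abstract): the per-pixel threshold follows the neighborhood mean, so a uniform illumination shift does not change the binarization result. All pixel values are hypothetical.

```python
import numpy as np

def adaptive_threshold(img, block=3, c=0.0):
    """Binarize img with a per-pixel threshold: the mean of the surrounding
    block x block window minus a constant c (a simple local-mean scheme)."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + block, j:j + block]
            out[i, j] = img[i, j] > window.mean() - c
    return out

# A bright spot on a uniform background: adding a global illumination
# offset leaves the local-mean binarization unchanged.
img = np.full((5, 5), 50.0)
img[2, 2] = 200.0
mask_dark = adaptive_threshold(img)
mask_bright = adaptive_threshold(img + 80.0)
```

A fixed global threshold (say, 100) would flip every background pixel when the offset is added; the adaptive version detects the same single bright feature in both cases.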

주의 기반 시각정보처리체계 시스템 구현을 위한 스테레오 영상의 변위도를 이용한 새로운 특징맵 구성 및 통합 방법 (A Novel Feature Map Generation and Integration Method for Attention Based Visual Information Processing System using Disparity of a Stereo Pair of Images)

  • 박민철;최경주
    • 정보처리학회논문지B
    • /
    • Vol. 17B, No. 1
    • /
    • pp.55-62
    • /
    • 2010
  • The human visual attention system can simplify and easily analyze a complex visual scene: rather than processing the entire scene at once, it instantaneously selects small regions on which attention is focused and processes only those regions sequentially. In this paper, we propose a novel feature map generation and integration method for implementing an attention-based visual information processing system. In addition to color, intensity, orientation, and form, the proposed system uses depth information obtained from a stereo pair of images as a visual feature. Experimental results confirm that using the depth information improves the detection rate of attention regions.
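
A minimal sketch of the feature-map integration idea, assuming a simple normalize-and-average combination (the paper's actual integration scheme may differ): a depth channel derived from stereo disparity is added alongside the color, intensity, orientation, and form maps, and the saliency maximum selects the first attended location. All map values are hypothetical.

```python
import numpy as np

def normalize(m):
    """Scale a feature map into [0, 1]; a flat map becomes all zeros."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

rng = np.random.default_rng(0)
# Four classic channels plus a depth channel from stereo disparity
# (closer objects give larger disparity, hence larger depth response).
maps = {k: rng.random((4, 4)) for k in ("color", "intensity", "orientation", "form")}
depth = np.zeros((4, 4))
depth[1, 1] = 1.0                  # one close object pops out in the depth channel
maps["depth"] = depth

# Integrated saliency map: mean of the normalized channels; its maximum
# is the first location attention is directed to.
saliency = np.mean([normalize(m) for m in maps.values()], axis=0)
focus = np.unravel_index(saliency.argmax(), saliency.shape)
```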

Adaptive Processing for Feature Extraction: Application of Two-Dimensional Gabor Function

  • Lee, Dong-Cheon
    • 대한원격탐사학회지
    • /
    • Vol. 17, No. 4
    • /
    • pp.319-334
    • /
    • 2001
  • Extracting primitives from imagery plays an important role in visual information processing, since the primitives provide useful information about the characteristics of objects and patterns. The human visual system utilizes features without difficulty for image interpretation, scene analysis, and object recognition; however, extracting and analyzing features are difficult tasks. The ultimate goal of digital image processing is to extract information and reconstruct objects automatically. The objective of this study is to develop a robust method to achieve this goal. In this study, an adaptive strategy was developed by implementing Gabor filters in order to extract feature information and to segment images. The Gabor filters are conceived as hypothetical structures of the retinal receptive fields in the human visual system; using them, it is therefore possible to develop a method that resembles the performance of human visual perception. A method to compute appropriate parameters of the Gabor filters without human visual inspection is proposed. The entire framework is based on the theory of human visual perception. Digital images were used to evaluate the performance of the proposed strategy. The results show that the proposed adaptive approach improves the performance of the Gabor filters for feature extraction and segmentation.
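
The two-dimensional Gabor function at the heart of this approach is a Gaussian envelope multiplied by an oriented sinusoidal carrier. The sketch below builds the real part of such a kernel in Python/NumPy; the parameter values are illustrative, not the adaptively computed ones proposed in the paper.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength, phase=0.0):
    """Real part of a 2-D Gabor function: a Gaussian envelope times a
    sinusoidal carrier oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A horizontally tuned kernel: responds strongly to vertical stripes
# with a period of about 4 pixels.
k = gabor_kernel(size=9, sigma=2.0, theta=0.0, wavelength=4.0)
```

Convolving an image with a bank of such kernels at several orientations and wavelengths yields the oriented feature responses used for segmentation.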

카메라 디포커싱을 이용한 로보트의 시각 서보 (Visual Servoing of a Robot Using Camera Defocusing)

  • 신진우;고국현;조형석
    • 한국정밀공학회:학술대회논문집
    • /
    • Proceedings of the 1994 KSPE Autumn Conference
    • /
    • pp.559-564
    • /
    • 1994
  • Recently, visual servoing for an eye-in-hand robot has become an interesting problem. The distance between a camera and a task object is very useful information for visual servoing. In previous works, the distance was obtained from the difference between a reference and a measured feature value of the object, such as its area on the image plane. However, since this feature depends on the object, the reference feature value must be changed whenever a different task object is taken. To overcome this difficulty, this paper presents a novel method for visual servoing in which blur is used to obtain the distance. Blur, one of the most important features, depends on the focal length of the camera; since it is not affected by a change of object, the reference feature value need not be changed when another task object is taken. In this paper, we show the relationship between the distance and the blur, and define the feature Jacobian matrix based on camera defocusing to operate the robot. A series of experiments is performed to verify the proposed method.
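
The distance-from-blur relationship can be illustrated with the standard thin-lens blur-circle model (an assumption here; the paper's own defocus model may differ). For an object beyond the focused plane, the blur diameter grows with distance and can be inverted to recover depth:

```python
import numpy as np

def blur_diameter(d, focal, aperture, d_focus):
    """Thin-lens blur-circle diameter for an object at distance d when the
    lens is focused at d_focus (all lengths in metres)."""
    k = aperture * focal / (d_focus - focal)
    return k * abs(d - d_focus) / d

def distance_from_blur(c, focal, aperture, d_focus):
    """Invert the blur model for an object farther than the focused plane."""
    k = aperture * focal / (d_focus - focal)
    return k * d_focus / (k - c)

# Round-trip check with hypothetical lens parameters:
# a 50 mm lens, 20 mm aperture, focused at 1 m, object actually at 2 m.
f, A, d0 = 0.05, 0.02, 1.0
d_true = 2.0
c = blur_diameter(d_true, f, A, d0)
d_est = distance_from_blur(c, f, A, d0)
```

Because the blur depends only on the lens geometry and the object distance, the same reference value works for any task object, which is the key point of the abstract.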


지면 특징점을 이용한 영상 주행기록계에 관한 연구 (A Study on the Visual Odometer using Ground Feature Point)

  • 이윤섭;노경곤;김진걸
    • 한국정밀공학회지
    • /
    • Vol. 28, No. 3
    • /
    • pp.330-338
    • /
    • 2011
  • Odometry is a critical factor in estimating the location of a robot. In a wheeled mobile robot, odometry can be performed using information from the encoders. However, the location information from the encoders is inaccurate because of errors caused by wheel misalignment or slip. In general, a visual odometer has been used to compensate for the kinematic errors of a robot. When visual odometry is applied to a particular robot system, kinematic analysis is required for the compensation of errors, which means that conventional visual odometry cannot easily be transferred to other types of robot systems. In this paper, a novel visual odometry that employs only a single camera facing the ground is proposed. The camera is mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted using a median filter and a color contrast filter. The linear and angular vectors of the mobile robot are then calculated by matching feature points, and visual odometry is performed using these vectors. The proposed odometry is verified through driving tests using both the encoder and the new visual odometry.
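
The step that computes the robot's linear and angular motion from matched ground feature points can be sketched as a 2-D rigid registration (Kabsch/Procrustes). This is a generic least-squares solution, not necessarily the paper's exact formulation, and the point coordinates are hypothetical.

```python
import numpy as np

def rigid_transform_2d(prev_pts, curr_pts):
    """Least-squares rotation angle and translation mapping prev_pts onto
    curr_pts (2-D Kabsch / Procrustes)."""
    p = prev_pts - prev_pts.mean(axis=0)
    q = curr_pts - curr_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(p.T @ q)        # cross-covariance SVD
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = curr_pts.mean(axis=0) - R @ prev_pts.mean(axis=0)
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t

# Matched ground features before and after a small robot motion:
# a 10-degree rotation plus a translation (hypothetical values).
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
prev_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
curr_pts = prev_pts @ R_true.T + np.array([0.2, -0.1])
theta, t = rigid_transform_2d(prev_pts, curr_pts)
```

Accumulating `theta` and `t` frame by frame yields the odometry trajectory.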

A Study on Feature-Based Visual Servoing Control of Robot System by Utilizing Redundant Feature

  • Han, Sung-Hyun;Hideki Hashimoto
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 16, No. 6
    • /
    • pp.762-769
    • /
    • 2002
  • This paper presents how effective it is to use many features for improving the speed and accuracy of visual servo systems. Some rank conditions that relate the image Jacobian to the control performance are derived. The focus is on showing that the accuracy of camera position control in the world coordinate system is increased by utilizing redundant features; it is also proven that the accuracy improves as the number of features involved increases. The effectiveness of the redundant features is evaluated by the smallest singular value of the image Jacobian, which is closely related to the accuracy with respect to the world coordinate system. The usefulness of the redundant features is verified by real-time experiments on a dual-arm robot manipulator made by Samsung Electronics Co., Ltd.
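
The role of the smallest singular value is easy to demonstrate numerically: stacking extra (redundant) feature rows onto the image Jacobian can never decrease its smallest singular value, since sigma_min([J; J_extra])^2 = min over unit x of (|Jx|^2 + |J_extra x|^2) >= sigma_min(J)^2. A sketch with random, hypothetical Jacobians:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical image Jacobian: each feature point contributes two rows
# (u and v image-velocity components) against 6 camera velocity components.
J4 = rng.standard_normal((8, 6))      # 4 feature points
extra = rng.standard_normal((4, 6))   # 2 redundant feature points
J6 = np.vstack([J4, extra])           # 6 feature points in total

sigma_min_4 = np.linalg.svd(J4, compute_uv=False).min()
sigma_min_6 = np.linalg.svd(J6, compute_uv=False).min()
```

A larger smallest singular value means feature-space errors map less sharply into world-coordinate errors, which is the accuracy argument made in the paper.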

특징점이 Field of View를 벗어나지 않는 새로운 Visual Servoing 기법 (A Novel Visual Servoing Approach For Keeping Feature Points Within The Field-of-View)

  • 박도환;염준형;박노용;하인중
    • 대한전기학회:학술대회논문집
    • /
    • Proceedings of the 2007 KIEE Symposium, Information and Control Division
    • /
    • pp.322-324
    • /
    • 2007
  • In this paper, an eye-in-hand visual servoing strategy for keeping feature points within the FOV (field of view) is proposed. We first specify the FOV constraint that must be satisfied to keep the feature points within the FOV. It is expressed as an inequality relationship between (i) the LOS (line-of-sight) angle of the center of the feature points from the optical axis of the camera and (ii) the distance between the object and the camera. We then design a nonlinear feedback controller that linearly decouples the translational and rotational control loops. Finally, we show that an appropriate choice of the controller gains guarantees satisfaction of the FOV constraint. The main advantage of our approach over previous ones is that the trajectory of the camera is smooth and circular-like. Furthermore, ours can be applied to large camera displacement problems.
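
The angular part of the FOV constraint can be sketched as a simple line-of-sight check (the paper's full constraint also couples in the object-camera distance, which is omitted in this simplified illustration; all values are hypothetical):

```python
import numpy as np

def los_angle(p_cam):
    """Angle between the camera optical axis (+z) and the line of sight to
    the feature centre p_cam, expressed in the camera frame."""
    p = np.asarray(p_cam, dtype=float)
    return np.arccos(p[2] / np.linalg.norm(p))

def within_fov(p_cam, half_fov):
    """Angular FOV constraint: the LOS angle must stay below the half-FOV."""
    return los_angle(p_cam) < half_fov

half_fov = np.deg2rad(30.0)
near_axis = within_fov([0.1, 0.0, 1.0], half_fov)   # about 5.7 deg off axis
off_axis = within_fov([1.0, 0.0, 1.0], half_fov)    # 45 deg off axis
```

A controller that keeps this inequality satisfied along the whole trajectory guarantees the feature points never leave the image.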


외란 관측기를 이용한 새로운 시각구동 방법 (A Novel Visual Servoing Method involving Disturbance Observer)

  • 이준수;서일홍
    • 대한전기학회논문지:전력기술부문A
    • /
    • Vol. 48, No. 3
    • /
    • pp.294-303
    • /
    • 1999
  • To improve visual servoing performance, several strategies have been proposed in the past, such as redundant feature points, using points with different heights, and weighted selection of image features. The performance of these visual servoing methods depends on the configuration between the camera and the object, and redundant feature points require much computational effort. This paper proposes a visual servoing method based on a disturbance observer, which compensates the upper off-diagonal component of the image feature Jacobian to be null. Performance indices such as sensitivity as a measure of richness, sensitivity of the control to noise, and controllability are shown to be improved when the image feature Jacobian is given as a block diagonal matrix. Computer simulations carried out for a PUMA560 robot verify the effectiveness of the proposed method.
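
The block-diagonalization idea, treating the upper off-diagonal coupling block of the image feature Jacobian as a disturbance and cancelling it, can be illustrated with hypothetical numbers (the observer dynamics themselves are not modeled in this sketch):

```python
import numpy as np

# Hypothetical 4x4 image feature Jacobian split into 2x2 blocks. The
# upper off-diagonal block J12 couples rotational camera motion into the
# translational feature error; the disturbance observer estimates and
# cancels it, leaving a block-diagonal relation.
J11 = np.array([[2.0, 0.1], [0.0, 1.5]])
J12 = np.array([[0.4, -0.3], [0.2, 0.5]])   # coupling to be compensated
J21 = np.zeros((2, 2))
J22 = np.array([[1.2, 0.0], [0.3, 1.8]])
J = np.block([[J11, J12], [J21, J22]])

# Feed-forward cancellation of the estimated coupling term.
J_comp = J - np.block([[np.zeros((2, 2)), J12],
                       [np.zeros((2, 2)), np.zeros((2, 2))]])

# With the coupling cancelled, a purely translational feature error
# commands no rotational motion: the control loops are decoupled.
e = np.array([1.0, 0.0, 0.0, 0.0])
v = np.linalg.solve(J_comp, e)
```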
