• Title/Abstract/Keyword: vision-based method

Search results: 1,454

Vision based place recognition using Bayesian inference with feedback of image retrieval

  • Yi, Hu;Lee, Chang-Woo
    • Proceedings of the Korea Information Processing Society Conference / KIPS 2006 Fall Conference / pp.19-22 / 2006
  • In this paper we present a vision-based place recognition method that uses a Bayesian method with feedback from image retrieval. Both the Bayesian method and the image retrieval method are based on interest features that are invariant to many image transformations. The interest features are detected using a Harris-Laplacian detector, and descriptors are then generated from the image patches centered at the features' positions, in the same manner as SIFT. The Bayesian method contains two stages: learning and recognition. The image retrieval result is fed back to the Bayesian recognition to achieve robustness and confidence. The experimental results show the effectiveness of our method.
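The recursive Bayesian update with image-retrieval feedback described above can be illustrated with a minimal sketch. This is not the authors' implementation: the discrete place set, the likelihood from descriptor matching, and the `boost` reweighting factor are assumptions used only to show the structure of the update.

```python
import numpy as np

def bayes_place_update(prior, likelihood, retrieval_hit=None, boost=2.0):
    """One recursive Bayes step over a discrete set of places.

    prior         : (N,) prior belief over N places
    likelihood    : (N,) p(observed features | place), e.g. from descriptor matching
    retrieval_hit : index of the place returned by image retrieval (feedback), or None
    boost         : hypothetical factor by which the retrieval result reweights belief
    """
    posterior = prior * likelihood
    if retrieval_hit is not None:
        posterior[retrieval_hit] *= boost   # feedback from image retrieval
    return posterior / posterior.sum()

# Toy usage: three places, features match place 1 best, retrieval also votes for place 1.
belief = np.full(3, 1 / 3)
belief = bayes_place_update(belief, np.array([0.2, 0.7, 0.1]), retrieval_hit=1)
print(belief)
```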


Road marking classification method based on intensity of 2D Laser Scanner (신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법)

  • 박성현;최정희;박용완
    • Journal of the Institute of Embedded Engineering of Korea / Vol. 11, No. 5 / pp.313-323 / 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems for road marking detection using vision sensors and 3D laser scanners. However, a vision sensor has the weaknesses that detection is difficult under severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and that processing time is long because of the large amount of data from both the vision sensor and the 3D laser scanner. Accordingly, this paper proposes a road marking detection and classification method using a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance and intensity data acquired through the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with the 3D laser scanner-based method, demonstrating the possibility of road marking type classification using a single 2D laser scanner.
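The abstract does not give the actual detector or classifier, but the core idea of accumulating 2D scanner returns and separating road paint by its higher reflective intensity can be sketched as follows. The grid resolution, intensity normalization, and `paint_thresh` value are assumptions, not values from the paper.

```python
import numpy as np

def classify_road_markings(points, intensities, cell=0.1, paint_thresh=0.6):
    """Accumulate 2D scanner returns into a grid and flag high-intensity cells as paint.

    points      : (N, 2) x/y positions accumulated over successive scans (vehicle frame)
    intensities : (N,) normalized return intensities in [0, 1]
    cell        : grid resolution in meters (assumed value)
    paint_thresh: intensity above which a cell is treated as road paint (assumed value)
    """
    ij = np.floor(points / cell).astype(int)
    ij -= ij.min(axis=0)                              # shift indices so they start at zero
    grid = np.zeros(ij.max(axis=0) + 1)
    count = np.zeros_like(grid)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), intensities)
    np.add.at(count, (ij[:, 0], ij[:, 1]), 1)
    mean_intensity = np.divide(grid, count, out=np.zeros_like(grid), where=count > 0)
    return mean_intensity > paint_thresh              # boolean mask of candidate marking cells
```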

Augmented Feature Point Initialization Method for Vision/Lidar Aided 6-DoF Bearing-Only Inertial SLAM

  • Yun, Sukchang;Lee, Byoungjin;Kim, Yeon-Jo;Lee, Young Jae;Sung, Sangkyung
    • Journal of Electrical Engineering and Technology / Vol. 11, No. 6 / pp.1846-1856 / 2016
  • This study proposes a novel feature point initialization method in order to improve the accuracy of feature point positions by fusing a vision sensor and a lidar. The initialization is a process that determines the three-dimensional positions of feature points from two-dimensional image data, and it has a direct influence on the performance of a 6-DoF bearing-only SLAM. Prior to the initialization, an extrinsic calibration method that estimates the rotational and translational relationships between the vision sensor and the lidar using multiple calibration tools was employed, and then the feature point initialization method based on the estimated extrinsic calibration parameters was presented. In this process, in order to improve the accuracy of the initialized feature points, an iterative automatic scaling parameter tuning technique was presented. The validity of the proposed feature point initialization method was verified in a 6-DoF bearing-only SLAM framework through indoor and outdoor tests that compare estimation performance with the previous initialization method.
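A minimal sketch of the kind of depth-assisted initialization described above: lidar points are moved into the camera frame using assumed extrinsics (R, t), projected into the image, and the depth of the nearest projected point is used to place each 2D feature in 3D. The nearest-neighbor association and the function names are illustrative assumptions; the paper's iterative scaling-parameter tuning is not reproduced here.

```python
import numpy as np

def init_feature_points(features_px, lidar_pts, K, R, t):
    """Assign a depth to each image feature from the nearest projected lidar point,
    then back-project the feature ray to a 3D position in the camera frame.

    features_px : (M, 2) feature pixel coordinates
    lidar_pts   : (N, 3) lidar points in the lidar frame
    K           : (3, 3) camera intrinsic matrix
    R, t        : extrinsic rotation (3x3) and translation (3,) from lidar to camera frame
    """
    pts_cam = lidar_pts @ R.T + t                       # lidar points in the camera frame
    proj = (K @ pts_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                   # projected pixel coordinates
    K_inv = np.linalg.inv(K)
    out = []
    for u, v in features_px:
        nearest = np.argmin(np.linalg.norm(proj - (u, v), axis=1))
        depth = pts_cam[nearest, 2]                     # borrow depth from the nearest lidar return
        ray = K_inv @ np.array([u, v, 1.0])
        out.append(ray / ray[2] * depth)                # 3D point on the feature's viewing ray
    return np.array(out)
```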

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • 박순용;박민용;박성기
    • The Journal of Korea Robotics Society / Vol. 3, No. 3 / pp.194-202 / 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environmental modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information at the horizontal centerline of the image, through which the optical axis passes, is used. The usefulness of this approach is that a measure between the model and sensing data can easily be built on the horizontal centerline alone, because the vertical working volume between the model and sensing data can change according to robot motion. Therefore, we can build a compact and efficient map representation of the indoor environment. Also, based on such nodes and sensing data, we suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can be an effective visual navigation algorithm.
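A rough sketch of the centerline-only representation: only the row crossed by the optical axis contributes depth and color to a node's signature, and a stored node is compared with current sensing data via a simple similarity that could serve as a weight in a random-sampling localizer. The Gaussian similarity and the `sigma` value are assumptions, not the paper's measure.

```python
import numpy as np

def centerline_signature(depth_img, color_img):
    """Feature vector built only from the image's horizontal centerline:
    per-column depth plus per-column RGB color."""
    row = depth_img.shape[0] // 2                 # row crossed by the optical axis
    return np.hstack([depth_img[row], color_img[row].reshape(-1)])

def node_likelihood(node_sig, sensed_sig, sigma=1.0):
    """Similarity between a stored node signature and the current sensing data,
    usable as a sample weight in a random-sampling localizer (sigma is assumed)."""
    d = np.linalg.norm(node_sig - sensed_sig)
    return np.exp(-0.5 * (d / sigma) ** 2)
```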


Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • 김동훈;이동화;명현;최현택
    • Journal of Institute of Control, Robotics and Systems / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
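The weighted correlation coefficient-based template matching can be sketched as below. The per-pixel weighting scheme is an assumption (the abstract does not specify how weights are chosen), and the exhaustive sliding-window search is written for clarity rather than speed.

```python
import numpy as np

def weighted_ncc(patch, template, weights):
    """Weighted correlation coefficient between a grayscale image patch and a template.

    weights : per-pixel weights emphasizing reliable template regions (assumed scheme;
              the paper's exact weighting is not given in the abstract).
    """
    w = weights / weights.sum()
    pm = (w * patch).sum()
    tm = (w * template).sum()
    cov = (w * (patch - pm) * (template - tm)).sum()
    var_p = (w * (patch - pm) ** 2).sum()
    var_t = (w * (template - tm) ** 2).sum()
    return cov / np.sqrt(var_p * var_t + 1e-12)

def match_template(image, template, weights):
    """Slide the template over the image and return the best-scoring location."""
    H, W = image.shape
    h, w = template.shape
    best, best_xy = -np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            s = weighted_ncc(image[y:y + h, x:x + w], template, weights)
            if s > best:
                best, best_xy = s, (y, x)
    return best_xy, best
```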

Vision-based recognition of a simple non-verbal intent representation by head movements (고개운동에 의한 단순 비언어 의사표현의 비전인식)

  • 유기호;노덕수;이성철
    • Journal of the Ergonomics Society of Korea / Vol. 19, No. 1 / pp.91-100 / 2000
  • In this paper, an intent recognition system that recognizes a human's head movements as a simple non-verbal intent representation is presented. The system recognizes five basic intent representations, i.e., strong/weak affirmation, strong/weak negation, and ambiguity, by image processing of nodding or shaking movements of the head. The vision system for tracking the head movements is composed of a CCD camera, an image processing board, and a personal computer. A modified template matching method, which replaces the reference image with the target image found in the previous step, is used for robust tracking of the head movements. To improve processing speed, the search is performed on a pyramid representation of the original image. By inspecting the variance of the head movement trajectories, the two basic intent representations, affirmation and negation, can be recognized. Also, by focusing on the speed of the head movements, we see the possibility of recognizing the strength of the intent representation.
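A minimal sketch of how nod/shake intent and its strength could be decided from a tracked trajectory, following the abstract's idea of using trajectory variance and movement speed. The axis convention and the `strength_speed` threshold are assumptions.

```python
import numpy as np

def classify_head_intent(traj, strength_speed=30.0):
    """Classify nod vs. shake from a tracked head trajectory.

    traj           : (T, 2) sequence of (x, y) head positions from the tracker
    strength_speed : mean speed (pixels/frame) above which the intent is called 'strong'
                     (assumed threshold; the paper describes this only qualitatively)
    """
    var_x, var_y = np.var(traj[:, 0]), np.var(traj[:, 1])
    intent = "affirmation" if var_y > var_x else "negation"   # dominant vertical motion = nod
    speed = np.mean(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    strength = "strong" if speed > strength_speed else "weak"
    return f"{strength} {intent}"
```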


Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 2 / pp.133-139 / 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information on the direction and translation of the vehicle was obtained and used to track the vehicle's movements. Random noise related to uneven light disrupted the calculation of the vehicle translation, so a Kalman filter was used to estimate the vehicle's position. These algorithms were tested on a vehicle in a real environment. The image processing method recognized the landmarks precisely, while the Kalman filter algorithm estimated the vehicle's position accurately. The experimental results confirm that the proposed approaches can be implemented in practical situations.
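A small sketch of filtering noisy per-frame translations from optical flow with a Kalman filter to obtain a smoothed vehicle position. The constant-velocity model and the noise levels `q`, `r` are assumptions; the paper's actual state model is not given in the abstract.

```python
import numpy as np

def kalman_vehicle_position(flow_meas, q=1e-3, r=1e-1):
    """Estimate vehicle position by filtering noisy frame-to-frame translations
    measured with optical flow (q, r are assumed noise levels, not from the paper).

    flow_meas : (T, 2) raw per-frame (dx, dy) translations from optical flow
    returns   : (T, 2) filtered vehicle positions
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state per axis: [position, per-frame velocity]
    H = np.array([[0.0, 1.0]])                # optical flow observes the velocity component
    Q = q * np.eye(2)
    R = np.array([[r]])
    positions = np.zeros_like(flow_meas, dtype=float)
    for axis in range(2):
        x, P = np.zeros(2), np.eye(2)
        for t, z in enumerate(flow_meas[:, axis]):
            x, P = F @ x, F @ P @ F.T + Q                 # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
            x = x + (K @ (np.array([z]) - H @ x))         # update with the flow measurement
            P = (np.eye(2) - K @ H) @ P
            positions[t, axis] = x[0]                     # accumulated position estimate
    return positions
```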

Development of Lighting Design Code for Computer Vision (Computer Vision용 조명 설계코드 개발)

  • 안인모;이기상
    • The Korean Institute of Electrical Engineers (KIEE): Conference Proceedings / KIEE 2002 Conference Proceedings, Junior College Education Committee / pp.41-45 / 2002
  • In industrial computer vision systems, image quality depends on parameters such as the light source, illumination method, optics, and surface properties. Most of these are related to the lighting system, which is usually designed heuristically based on the designer's experimental knowledge. In this paper, a design code by which the optimal lighting method and light source for a computer vision system can be found is suggested, based on experimental results. To prove the usefulness of the design code, it is applied to the lighting system design of a transistor marking inspection system, and the results are presented.
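The abstract does not list the design code's contents, but a design code of this kind can be thought of as a lookup from inspection conditions to a recommended lighting setup. The table entries and condition names below are hypothetical placeholders used only to illustrate the idea.

```python
# Hypothetical encoding of a lighting design code as a lookup table. The entries are
# placeholders, not values from the paper.
DESIGN_CODE = {
    ("specular", "surface_marking"): {"method": "diffuse dome", "source": "white LED"},
    ("matte", "surface_marking"):    {"method": "bright field", "source": "white LED"},
    ("specular", "edge_defect"):     {"method": "dark field", "source": "red LED"},
}

def recommend_lighting(surface: str, feature: str) -> dict:
    """Return the recommended illumination method and light source, if the
    (surface, feature) combination is covered by the design code."""
    return DESIGN_CODE.get((surface, feature), {"method": "unspecified", "source": "unspecified"})

print(recommend_lighting("specular", "surface_marking"))
```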


Computer Vision-based Method to Detect Fire Using Color Variation in Temporal Domain

  • Hwang, Ung;Jeong, Jechang;Kim, Jiyeon;Cho, JunSang;Kim, SungHwan
    • Quantitative Bio-Science / Vol. 37, No. 2 / pp.81-89 / 2018
  • It is commonplace that high false detection rates interfere with immediate vision-based fire monitoring systems. To circumvent this challenge, we propose a fire detection algorithm that accommodates RGB color variations in the temporal domain, aiming at reducing false detection rates. Despite interfering images (e.g., background noise and sudden intervention), the proposed method proves robust in capturing distinguishable features of fire in the temporal domain. In numerical studies, we carried out extensive real-data experiments on fire detection using 24 video sequences, indicating that the proposed algorithm serves as an effective decision rule for fire detection (e.g., a false detection rate below 10%).
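A minimal sketch of a temporal color-variation rule for fire candidates: pixels whose red channel flickers strongly over a clip and whose mean color follows a fire-like R > G > B ordering are flagged. The variance threshold and the color ordering heuristic are assumptions, not the paper's actual decision rule.

```python
import numpy as np

def fire_candidate_mask(frames, var_thresh=200.0):
    """Flag pixels whose color varies strongly over time and looks fire-like.

    frames     : (T, H, W, 3) RGB video clip as uint8
    var_thresh : temporal variance threshold on the red channel (assumed value)
    """
    f = frames.astype(np.float32)
    temporal_var = f[..., 0].var(axis=0)              # flicker of the red channel over time
    mean = f.mean(axis=0)
    fire_color = (mean[..., 0] > mean[..., 1]) & (mean[..., 1] > mean[..., 2])  # R > G > B
    return (temporal_var > var_thresh) & fire_color
```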

A Study on the Point Placement Task of Robot System Based on the Vision System (비젼시스템을 이용한 로봇시스템의 점배치실험에 관한 연구)

  • 장완식;유창규
    • Journal of the Korean Society for Precision Engineering / Vol. 13, No. 8 / pp.175-183 / 1996
  • This paper presents a three-dimensional robot task using a vision control method. A minimum of two cameras is required to place points on the end effectors of n-degree-of-freedom manipulators relative to other bodies. This is accomplished using a sequential estimation scheme that permits placement of these points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes known three-axis manipulator kinematics to accommodate unknown relative camera position and orientation, etc. This model uses six uncertainty-of-view parameters estimated by an iterative method.
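The sequential estimation scheme named in the abstract can be illustrated with a generic recursive least-squares update over successive image-plane measurements. The unit measurement noise and the generic parameter vector (standing in for the six uncertainty-of-view parameters) are assumptions, not the paper's specific model.

```python
import numpy as np

def recursive_least_squares(H_rows, z, theta0, P0):
    """Sequential (recursive least-squares) parameter estimation, the general scheme
    behind refining view parameters from successive image-plane observations.

    H_rows : (T, p) measurement Jacobian rows, one per scalar observation
    z      : (T,) scalar observations (e.g., image-plane coordinates)
    theta0 : (p,) initial parameter guess (e.g., six uncertainty-of-view parameters)
    P0     : (p, p) initial parameter covariance
    """
    theta, P = theta0.astype(float), P0.astype(float)
    for h, zi in zip(H_rows, z):
        h = h.reshape(1, -1)
        S = h @ P @ h.T + 1.0                    # innovation variance (unit measurement noise assumed)
        K = (P @ h.T) / S                        # gain
        theta = theta + (K * (zi - h @ theta)).ravel()
        P = P - K @ h @ P
    return theta, P
```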
