• Title/Summary/Keyword: Feature Point

3-D Working Point Decision Method for Industrial Robot (산업용 로봇의 3차원 작업 위치 결정 방법)

  • Ryu, Hang-Ki;Lee, Jae-Kook;Kim, Byeong-Woo;Choi, Won-Ho
    • The Transactions of The Korean Institute of Electrical Engineers / v.57 no.1 / pp.121-127 / 2008
  • In this paper, we propose a new 3-D working point determination method for industrial robots that uses a vision camera system and a block interpolation technique with feature points on a vehicle body. A pattern matching method is applied to detect the feature points on the vehicle body, and a block interpolation method is applied to determine the working point; the blocks are 3-D blocks built from the detected feature points of each section. The 3-D position is selected by the Euclidean distance between 245 stored feature values and an acquired feature point. To evaluate the proposed algorithm, experiments were performed on the glass installation process of a real industrial vehicle assembly line.
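
A minimal sketch of the final selection step described above, assuming the 245 stored feature values are paired with known 3-D block positions; the array names, shapes, and data are placeholders, not the authors' implementation:

```python
import numpy as np

# Placeholder data: 245 stored feature values (here 2-D) and the 3-D working
# point associated with each of them.  Shapes and names are assumptions.
stored_features = np.random.rand(245, 2)
stored_points_3d = np.random.rand(245, 3)

def select_working_point(detected_feature):
    """Pick the stored 3-D point whose feature value is closest (Euclidean) to the detection."""
    dist = np.linalg.norm(stored_features - detected_feature, axis=1)
    return stored_points_3d[np.argmin(dist)]

print(select_working_point(np.array([0.3, 0.7])))
```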

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and augmented reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensors. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera, and outliers among the feature point matches in the image sequence are eliminated using epipolar geometry. The IMU orientation thus helps eliminate erroneous point matches in images of dynamic scenes at an early stage. After the outliers are removed, the remaining feature point matches are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested against existing methods, and the experimental results show the effectiveness of the proposed technique.
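
The following sketch illustrates the general idea under stated assumptions (it is not the paper's code): with the rotation R taken from the IMU, the epipolar constraint becomes linear in the translation, so an essential matrix can be recovered and used to flag outlier matches. Points p1, p2 are assumed to be Nx3 homogeneous points in normalized camera coordinates (pixels premultiplied by K^-1):

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def essential_from_imu_rotation(R, p1, p2):
    """With R known (e.g. from the IMU), the constraint p2^T [t]_x R p1 = 0 is
    linear in t, so the translation direction is the null vector of A."""
    A = np.cross(p1 @ R.T, p2)            # rows: (R p1_i) x p2_i
    _, _, vt = np.linalg.svd(A)
    t = vt[-1]                            # translation direction, up to scale
    return skew(t) @ R

def epipolar_inliers(E, p1, p2, thresh=1e-3):
    """Keep matches whose algebraic epipolar error |p2^T E p1| is small."""
    err = np.abs(np.einsum('ij,ij->i', p2 @ E, p1))
    return err < thresh
```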

Line-Based SLAM Using Vanishing Point Measurements Loss Function (소실점 정보의 Loss 함수를 이용한 특징선 기반 SLAM)

  • Hyunjun Lim;Hyun Myung
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.330-336 / 2023
  • In this paper, a novel line-based simultaneous localization and mapping (SLAM) method using a loss function for vanishing point measurements is proposed. In feature-based SLAM, the Huber norm is generally used as the loss function for point and line features: because these measurements define their residual as a reprojection error on the image plane, linear loss functions such as the Huber norm work well. Such loss functions are, however, not suitable for vanishing point measurements, whose residuals are unbounded. To tackle this problem, we propose a loss function for vanishing point measurements based on a unit sphere model. Finally, we demonstrate the validity of the proposed loss function through experiments on a public dataset.
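
As an illustration of why the choice matters, here is a hedged sketch (our own reading, not the authors' exact formulation): the Huber loss applied to an image-plane residual, and a bounded residual obtained by comparing vanishing directions on the unit sphere:

```python
import numpy as np

def huber(r, delta=1.0):
    """Standard Huber loss on a (possibly unbounded) image-plane residual."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def unit_sphere_residual(v_meas, v_pred):
    """Bounded residual: angle between measured and predicted vanishing
    directions represented as unit vectors (sign-free, so at most pi/2)."""
    v1 = v_meas / np.linalg.norm(v_meas)
    v2 = v_pred / np.linalg.norm(v_pred)
    return np.arccos(np.clip(np.abs(v1 @ v2), 0.0, 1.0))
```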

Feature-Point Extraction by Dynamic Linking Model Based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • 신영숙
    • Korean Journal of Cognitive Science / v.14 no.1 / pp.11-16 / 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts representative, low-dimensional feature points from the edges extracted in the neutral face. These feature points of the neutral face are used as a template to extract the feature points of the facial expression images. To match the feature points of an expression face point-to-point against those of the neutral face, a dynamic linking model is applied in two steps, called coarse mapping and fine mapping. In this way the paper presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results of this study were applied to automatic feature extraction for dimension-based facial expression recognition [1].
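
A rough sketch of the two building blocks under stated assumptions (filter-bank parameters, cluster count, and the synthetic input are placeholders, and the dynamic linking step is omitted): a small Gabor filter bank for edge extraction followed by a plain fuzzy C-means pass over the strongest edge pixels:

```python
import cv2
import numpy as np

def gabor_edges(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Edge magnitude as the max response over a small Gabor filter bank."""
    resp = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F,
                         cv2.getGaborKernel((21, 21), 4.0, t, 10.0, 0.5, 0))
            for t in thetas]
    return np.max(np.abs(np.stack(resp)), axis=0)

def fcm(points, c=20, m=2.0, iters=50):
    """Plain fuzzy C-means; the cluster centers serve as representative feature points."""
    u = np.random.rand(c, len(points))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ points / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers

gray = np.random.randint(0, 256, (128, 128), np.uint8)   # placeholder for a neutral-face image
edges = gabor_edges(gray)
ys, xs = np.where(edges > np.percentile(edges, 99))      # strongest edge pixels
feature_points = fcm(np.stack([xs, ys], axis=1).astype(np.float32))
```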

Method for Road Vanishing Point Detection Using DNN and HoG Feature (DNN과 HoG Feature를 이용한 도로 소실점 검출 방법)

  • Yoon, Dae-Eun;Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.19 no.1 / pp.125-131 / 2019
  • A vanishing point is the point in an image at which parallel lines projected from real space converge. A vanishing point in a road scene provides important spatial information: it can be used to refine the position of an extracted lane or to generate a depth map image. In this paper, we propose a method for detecting vanishing points in images taken from a vehicle's point of view using a Deep Neural Network (DNN) and the Histogram of Oriented Gradients (HoG). The proposed algorithm is divided into a HoG feature extraction step, in which edge directions are extracted from the blocks of a divided image, a DNN learning step, and a test step. In the learning stage, 2,300 road images taken from a vehicle's point of view are used for training. In the test phase, the performance of the proposed algorithm is measured using the Normalized Euclidean Distance (NormDist) method.
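
A hedged sketch of the pipeline shape, not the paper's network or block layout: HoG features computed with OpenCV's default descriptor and a small multilayer perceptron regressing the vanishing point; the training data below are random placeholders standing in for the 2,300 labelled road images:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

hog = cv2.HOGDescriptor()   # default block/cell layout; the paper's block division may differ

def hog_feature(img):
    """HoG feature vector of an image resized to a fixed resolution."""
    resized = cv2.resize(img, (128, 128))
    return hog.compute(resized).ravel()

# Placeholder training data: random images and random (x, y) vanishing point labels.
images = [np.random.randint(0, 255, (240, 320), np.uint8) for _ in range(32)]
targets = np.random.rand(32, 2)

X = np.stack([hog_feature(im) for im in images])
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=100).fit(X, targets)
pred = model.predict(X[:1])   # predicted vanishing point for one image
```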

Depth Image Based Feature Detection Method Using Hybrid Filter (융합형 필터를 이용한 깊이 영상 기반 특징점 검출 기법)

  • Jeon, Yong-Tae;Lee, Hyun;Choi, Jae-Sung
    • IEMEK Journal of Embedded Systems and Applications / v.12 no.6 / pp.395-403 / 2017
  • Image processing for object detection and identification has been studied for supply chain management applications with various approaches. Among them, feature point detection algorithms are used to track an object or to recognize a position in automated supply chain systems, and depth image based feature point detection has recently been highlighted in this application. The result of feature point detection is easily influenced by image noise, and a depth image contains noise of its own that also affects the accuracy of the detection results. To solve these problems, we propose a novel hybrid filtering mechanism for depth image based feature point detection, which shows better performance than conventional hybrid filtering mechanisms.
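
The paper's exact filter combination is not spelled out in the abstract; the sketch below simply illustrates the idea of filtering a noisy depth map before feature point detection, using a median filter followed by an edge-preserving bilateral filter as an assumed example:

```python
import cv2
import numpy as np

def denoise_depth(depth):
    """Example hybrid filtering: a median filter for spiky depth noise followed by an
    edge-preserving bilateral filter.  The paper's actual combination may differ."""
    d = cv2.medianBlur(depth.astype(np.float32), 5)
    return cv2.bilateralFilter(d, d=9, sigmaColor=30, sigmaSpace=9)

depth = np.random.rand(240, 320).astype(np.float32) * 1000   # placeholder depth map (mm)
filtered = denoise_depth(depth)
norm = cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
corners = cv2.goodFeaturesToTrack(norm, maxCorners=100, qualityLevel=0.01, minDistance=5)
```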

EKF-based SLAM Using Sonar Salient Feature and Line Feature for Mobile Robots (이동로봇을 위한 Sonar Salient 형상과 선 형상을 이용한 EKF 기반의 SLAM)

  • Heo, Young-Jin;Lim, Jong-Hwan;Lee, Se-Jin
    • Journal of the Korean Society for Precision Engineering / v.28 no.10 / pp.1174-1180 / 2011
  • Not all line or point features that sonar sensors can extract from cluttered home environments are useful for simultaneous localization and mapping (SLAM): their ambiguity makes it difficult to determine the correspondence between line or point features and previously registered features, and confusing line and point features in cluttered environments leads to poor SLAM performance. We introduce a sonar feature structure suitable for cluttered environments and an extended Kalman filter (EKF)-based SLAM scheme. A reliable line feature is expressed by its end points and incorporated into EKF SLAM to overcome geometric limits and maintain map consistency. Experimental results demonstrate the validity and robustness of the proposed method.
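
For orientation only, here is a generic EKF measurement-update skeleton with a finite-difference Jacobian; the measurement function h and the state layout (robot pose plus line end points) are assumptions, not the authors' formulation:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_update(x, P, z, h, R):
    """One EKF measurement update.  For a line landmark stored by its two end points,
    z would stack the observed end-point coordinates (an assumed state layout)."""
    H = numerical_jacobian(h, x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```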

Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification (도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습)

  • Lee, Sejin;Kim, Donghyun
    • The Journal of Korea Robotics Society / v.11 no.3 / pp.115-126 / 2016
  • This paper suggests a method for the spherical signature description of 3D point clouds taken from a laser range scanner on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned by Deep Belief Nets for urban structure classification. An arbitrary point in the 3D point cloud represents its signature on the surrounding sky surface using several neighboring points: the unit spherical surface centered on that point accumulates evidence in each cell of an angular tessellation. Depending on the kind of area a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions look quite different from one another. These data are fed into Deep Belief Nets, one kind of deep neural network, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to their urban structures. Experimental results prove that the proposed method based on the spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
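
A simplified sketch of the signature itself, with the tessellation resolution, neighborhood radius, and point cloud as placeholder assumptions (the Deep Belief Net stage is omitted):

```python
import numpy as np

def spherical_signature(cloud, idx, radius=1.0, n_az=16, n_el=8):
    """Histogram of neighbor directions over an azimuth/elevation tessellation of
    the unit sphere centered on cloud[idx].  Bin counts and radius are assumptions."""
    center = cloud[idx]
    d = cloud - center
    r = np.linalg.norm(d, axis=1)
    nb = d[(r > 1e-6) & (r < radius)]                            # neighbors within the radius
    az = np.arctan2(nb[:, 1], nb[:, 0])                          # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(nb[:, 2] / np.linalg.norm(nb, axis=1), -1, 1))  # elevation
    hist, _, _ = np.histogram2d(az, el, bins=[n_az, n_el],
                                range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return (hist / max(hist.sum(), 1)).ravel()                   # normalized signature vector

cloud = np.random.rand(5000, 3) * 10                             # placeholder 3-D scan
sig = spherical_signature(cloud, idx=0)
```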

Image Feature Point Selection Method Using Nearest Neighbor Distance Ratio Matching (최인접 거리 비율 정합을 이용한 영상 특징점 선택 방법)

  • Lee, Jun-Woo;Jeong, Jea-Hyup;Kang, Jong-Wook;Na, Sang-Il;Jeong, Dong-Seok
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.124-130 / 2012
  • In this paper, we propose a feature point selection method for MPEG CDVS CE-7, which is in progress as an international standardization task. Among the large number of extracted feature points, the more important ones used in image matching should be selected to keep the image descriptor compact. The proposed method removes, in the extraction phase, the feature points that would be filtered out by nearest neighbor distance ratio matching in the matching phase. This avoids wasting feature points and allows additional feature points to be employed. The experimental results show that the proposed method improves the true positive rate by about 2.3% in the pair-wise matching test compared with the Test Model.
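
Nearest neighbor distance ratio matching is the standard ratio test; the sketch below shows it with ORB descriptors in OpenCV purely as an illustration (the CDVS Test Model uses its own detector, descriptor, and threshold, and the file names are placeholders):

```python
import cv2

# Placeholder input images; any pair of overlapping views would do.
img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# A match survives only if its best distance is clearly smaller than the second best.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
```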

Real-Time Feature Point Matching Using Local Descriptor Derived by Zernike Moments (저니키 모멘트 기반 지역 서술자를 이용한 실시간 특징점 정합)

  • Hwang, Sun-Kyoo;Kim, Whoi-Yul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.116-123 / 2009
  • Feature point matching, which finds corresponding points between two images taken from different viewpoints, has been used in various vision-based applications, and the demand for real-time matching is increasing. This paper presents a real-time feature point matching method using a local descriptor derived from Zernike moments. From an input image, we find a set of feature points using an existing fast corner detection algorithm and compute a local descriptor derived from Zernike moments at each feature point. The local descriptor based on Zernike moments represents the properties of the image patch around a feature point efficiently and is robust to rotation and illumination changes. To speed up the computation of Zernike moments, we compute the Zernike basis functions of a fixed size in advance and store them in lookup tables. Initial matching results are acquired by an Approximate Nearest Neighbor (ANN) method, and false matches are eliminated by a RANSAC algorithm. The experiments confirm that the proposed method matches feature points in images under various transformations in real time and outperforms existing methods.
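
A compact sketch of the descriptor side under stated assumptions (patch size and moment orders are placeholders): the complex Zernike basis images are precomputed once as lookup tables, and the descriptor is the vector of moment magnitudes, which is rotation invariant; the corner detection, ANN matching, and RANSAC stages are omitted:

```python
import numpy as np
from math import factorial

def zernike_basis(patch_size, orders):
    """Precompute conjugate Zernike basis images on a fixed patch grid (lookup tables)."""
    c = (patch_size - 1) / 2.0
    y, x = np.mgrid[0:patch_size, 0:patch_size]
    rho = np.sqrt((x - c) ** 2 + (y - c) ** 2) / c
    theta = np.arctan2(y - c, x - c)
    mask = rho <= 1.0
    basis = {}
    for n, m in orders:
        R = np.zeros_like(rho)
        for k in range((n - abs(m)) // 2 + 1):
            coef = ((-1) ** k * factorial(n - k) /
                    (factorial(k) * factorial((n + abs(m)) // 2 - k)
                     * factorial((n - abs(m)) // 2 - k)))
            R += coef * rho ** (n - 2 * k)
        basis[(n, m)] = mask * R * np.exp(-1j * m * theta)   # conjugate basis V*_{nm}
    return basis

ORDERS = [(1, 1), (2, 0), (2, 2), (3, 1), (3, 3), (4, 0)]    # assumed moment orders
BASIS = zernike_basis(16, ORDERS)                            # patch size 16 is an assumption

def zernike_descriptor(patch):
    """Rotation-invariant descriptor: magnitudes of the Zernike moments of the patch."""
    p = patch.astype(np.float64)
    return np.array([abs(((n + 1) / np.pi) * np.sum(p * BASIS[(n, m)]))
                     for n, m in ORDERS])

desc = zernike_descriptor(np.random.rand(16, 16))   # placeholder 16x16 patch around a corner
```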