• Title/Summary/Keyword: Ground Feature Point

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.256-257
    • /
    • 2022
  • In this paper, we introduce a method of extracting three-dimensional feature points from the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves, and a baseline is estimated. Stereo matching is then performed on feature points: feature points and descriptors are acquired and matched, the disparity of the matched feature points is computed, and a depth value is generated. The 3D feature points are updated as the camera moves, and the feature points are reset at scene changes by using scene change detection. Through this process, an average of 73.5% of additional storage space can be secured in the key point database. By applying the proposed algorithm to the RGB images and depth ground-truth values of the TUM Dataset, it was confirmed that there was an average distance difference of 26.88 mm between the ground truth and the 3D feature point results.
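
The disparity-to-depth step described above can be sketched in a few lines of Python with OpenCV. This is a minimal illustration, not the paper's pipeline: the ORB detector, file names, focal length, and baseline value are all assumptions.

```python
# Match features between two frames taken before and after the camera moves,
# then triangulate depth from disparity. Constants below are illustrative.
import cv2
import numpy as np

FOCAL_PX = 700.0   # assumed focal length in pixels
BASELINE_M = 0.10  # assumed baseline estimated from device motion (meters)

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Match descriptors and keep the better matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)[:300]

points3d = []
for m in matches:
    (x0, y0) = kp0[m.queryIdx].pt
    (x1, y1) = kp1[m.trainIdx].pt
    disparity = x0 - x1                  # horizontal shift between frames
    if disparity > 1.0:                  # discard near-zero disparities
        z = FOCAL_PX * BASELINE_M / disparity
        points3d.append(((x0 - img0.shape[1] / 2) * z / FOCAL_PX,
                         (y0 - img0.shape[0] / 2) * z / FOCAL_PX, z))
print(f"{len(points3d)} 3D feature points")
```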

Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification (도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습)

  • Lee, Sejin;Kim, Donghyun
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.3
    • /
    • pp.115-126
    • /
    • 2016
  • This paper suggests a method for the spherical signature description of 3D point clouds taken from a laser range scanner on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned by Deep Belief Nets for urban structure classification. An arbitrary point in the 3D point cloud can represent its signature on the surrounding sky surface using several neighborhood points: the unit sphere centered on that point accumulates evidence in each cell of an angular tessellation. Depending on the kind of area a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions look very different from each other. These data are fed into Deep Belief Nets, a kind of deep neural network, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to urban structure. Experimental results show that the proposed method based on the spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
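
The spherical signature itself reduces to accumulating the directions of a point's neighbors into an angular tessellation of the unit sphere. A minimal NumPy sketch follows; the bin counts and neighborhood radius are illustrative, not the paper's parameters.

```python
# Accumulate neighbor directions of a query point into an azimuth/elevation
# histogram over the unit sphere. Bin counts and radius are stand-ins.
import numpy as np

def spherical_signature(points, center, radius=1.0, az_bins=16, el_bins=8):
    """Histogram of neighbor directions around `center` (az/el tessellation)."""
    d = points - center
    dist = np.linalg.norm(d, axis=1)
    nbr = d[(dist > 1e-6) & (dist < radius)]
    nbr = nbr / np.linalg.norm(nbr, axis=1, keepdims=True)  # unit directions
    az = np.arctan2(nbr[:, 1], nbr[:, 0])                   # [-pi, pi]
    el = np.arcsin(np.clip(nbr[:, 2], -1.0, 1.0))           # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el, bins=(az_bins, el_bins),
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist.ravel() / max(1, len(nbr))                  # normalized signature

cloud = np.random.rand(5000, 3) * 10.0                      # stand-in point cloud
sig = spherical_signature(cloud, cloud[0])
print(sig.shape)  # (128,) per-point feature vector, fed to the DBN
```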

A Study on the Visual Odometer using Ground Feature Point (지면 특징점을 이용한 영상 주행기록계에 관한 연구)

  • Lee, Yoon-Sub;Noh, Gyung-Gon;Kim, Jin-Geol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.28 no.3
    • /
    • pp.330-338
    • /
    • 2011
  • Odometry is a critical factor in estimating the location of a robot. In a mobile robot with wheels, odometry can be performed using information from the encoders. However, the location information from the encoders is inaccurate because of errors caused by wheel misalignment or slip. In general, a visual odometer has been used to compensate for the kinematic errors of a robot. When applying visual odometry to a particular robot system, a kinematic analysis is required to compensate for the errors, which means that conventional visual odometry cannot be easily applied to other types of robot systems. In this paper, a novel visual odometry that employs only a single camera facing the ground is proposed. The camera is mounted at the center of the bottom of the mobile robot. Feature points of the ground image are extracted using a median filter and a color contrast filter. The linear and angular vectors of the mobile robot are then calculated by matching feature points, and visual odometry is performed using these linear and angular vectors. The proposed odometry is verified through driving tests that compare the encoder against the new visual odometry.
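
Once ground feature points are matched between consecutive frames, the robot's planar motion is a 2D rigid transform: one rotation angle plus a translation. A minimal NumPy sketch of that recovery step follows; the pixel-to-meter scale is an assumed calibration constant, and the paper's median/color-contrast feature extraction is not reproduced here.

```python
# Recover planar robot motion from matched ground feature points via a
# least-squares 2D rigid transform (Kabsch-style closed form).
import numpy as np

def planar_motion(prev_pts, curr_pts):
    """Rigid 2D transform (theta, t) mapping prev_pts onto curr_pts."""
    p0 = prev_pts - prev_pts.mean(axis=0)
    p1 = curr_pts - curr_pts.mean(axis=0)
    h = p0.T @ p1                         # 2x2 cross-covariance
    theta = np.arctan2(h[0, 1] - h[1, 0], h[0, 0] + h[1, 1])
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = curr_pts.mean(axis=0) - r @ prev_pts.mean(axis=0)
    return theta, t

METERS_PER_PIXEL = 0.0005                 # assumed camera-to-ground calibration
prev = np.array([[10., 20.], [200., 40.], [120., 180.]])
curr = np.array([[12., 25.], [202., 45.], [122., 185.]])
dtheta, dt = planar_motion(prev, curr)
print(dtheta, dt * METERS_PER_PIXEL)      # angular and linear increments
```

Integrating these per-frame increments over time yields the visual odometry trajectory.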

Box Feature Estimation from LiDAR Point Cluster using Maximum Likelihood Method (최대우도법을 이용한 라이다 포인트군집의 박스특징 추정)

  • Kim, Jongho;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.123-128
    • /
    • 2021
  • This paper presents box feature estimation from a LiDAR point cluster using the maximum likelihood method. Previous LiDAR tracking methods for autonomous driving show high accuracy for the velocity and heading of a point cluster. However, taking the average position of a point cluster as the vehicle position yields low accuracy relative to the ground truth. The box feature estimation algorithm proposed to improve the position accuracy of autonomous driving perception therefore consists of two procedures. First, the algorithm calculates a candidate vehicle position based on the relative position of the point cluster. Second, to reflect the features of the point cluster in the estimation, the likelihood of particles scattered around the candidate position is used. The proposed estimation method has been implemented in a robot operating system (ROS) environment and investigated via simulation and actual vehicle tests. The test results show that the proposed cluster position estimation enhances perception and path planning performance in autonomous driving.
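
The particle-scoring idea can be sketched as follows: scatter candidate box centers around the cluster mean and keep the one under which the LiDAR points most plausibly lie on the box surface. The box dimensions, particle count, and noise model below are illustrative assumptions, not the paper's values.

```python
# Maximum-likelihood box center from a 2D LiDAR cluster: score particles by
# how close the cluster points fall to the box edges under Gaussian noise.
import numpy as np

def box_edge_distance(pts, center, length=4.5, width=1.8):
    """Distance from each 2D point to the nearest edge of an axis-aligned box."""
    local = np.abs(pts - center) - np.array([length / 2, width / 2])
    outside = np.linalg.norm(np.maximum(local, 0.0), axis=1)
    inside = np.minimum(np.max(local, axis=1), 0.0)
    return np.abs(outside + inside)       # |signed distance to box boundary|

def ml_box_center(cluster, n_particles=200, spread=1.0, sigma=0.15):
    mean = cluster.mean(axis=0)
    particles = mean + np.random.uniform(-spread, spread, (n_particles, 2))
    # Log-likelihood: points should lie near the box surface.
    ll = [-(box_edge_distance(cluster, p) ** 2).sum() / (2 * sigma ** 2)
          for p in particles]
    return particles[int(np.argmax(ll))]

# Stand-in data: points seen on the near two faces of a vehicle.
cluster = np.array([[x, 0.0] for x in np.linspace(-2.2, 2.2, 30)] +
                   [[2.25, y] for y in np.linspace(0.0, 0.9, 10)])
print(ml_box_center(cluster))  # estimated center, offset from the point mean
```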

A Feature Based Approach to Extracting Ground Points from LIDAR Data (LIDAR 데이터로부터 지표점 추출을 위한 피쳐 기반 방법)

  • Lee, Im-Pyeong
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.4
    • /
    • pp.265-274
    • /
    • 2006
  • Extracting ground points is the kernel of DTM generation, which is considered one of the most popular LIDAR applications. Previous extraction approaches can mostly be characterized as point-based: every individual point is examined sequentially to determine whether it was measured from the ground surface, so the number of examinations equals the number of points. Particularly for a large data set, the heavy computational requirement associated with these examinations is an obstacle to employing more sophisticated examination criteria. To reduce the number of entities to be examined and to produce more robust results, we developed an approach based on features rather than points, where a feature is an entity constructed by grouping points. In the proposed approach, we first generate a set of features by organizing points into surface patches and grouping the patches into surface clusters. Among these features, we then attempt to identify the ground features using criteria based on the attributes of the features. The points grouped into the identified features are labeled ground points and used afterward for DTM generation. The proposed approach was applied to many real airborne LIDAR data sets. The analysis of the results strongly supports the prominent performance of the proposed approach in terms of not only the computational requirement but also the quality of the DTM.
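
A minimal sketch of the feature-based idea follows: group points into patches (grid cells here, for brevity), compute per-patch attributes, and label entire patches as ground, so the number of decisions scales with the number of features rather than points. The patch construction and thresholds are simplified stand-ins for the paper's surface patches and clusters.

```python
# Label whole patches of a LIDAR cloud as ground based on patch attributes
# (near-horizontal plane fit, low elevation). Thresholds are illustrative.
import numpy as np

def extract_ground(points, cell=5.0, slope_thresh=0.15, height_margin=0.5):
    ij = np.floor(points[:, :2] / cell).astype(int)
    low_ref = np.percentile(points[:, 2], 20)         # rough terrain level
    labels = np.zeros(len(points), dtype=bool)
    for key in {tuple(k) for k in ij}:
        mask = (ij[:, 0] == key[0]) & (ij[:, 1] == key[1])
        patch = points[mask]
        if len(patch) < 10:
            continue
        # Fit a plane z = ax + by + c to the patch by least squares.
        A = np.c_[patch[:, 0], patch[:, 1], np.ones(len(patch))]
        (a, b, c), *_ = np.linalg.lstsq(A, patch[:, 2], rcond=None)
        flat = np.hypot(a, b) < slope_thresh           # near-horizontal
        low = patch[:, 2].min() < low_ref + height_margin
        if flat and low:
            labels |= mask                             # whole patch at once
    return labels

cloud = np.random.rand(20000, 3) * [100, 100, 15]      # stand-in LIDAR tile
ground = extract_ground(cloud)
print(ground.sum(), "points labeled ground for DTM generation")
```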

Approaches for Automatic GCP Extraction and Localization in Airborne SAR Images and Some Test Results

  • Tsay, Jaan-Rong;Liu, Pang-Wei
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.360-362
    • /
    • 2003
  • This paper presents simple feature-based approaches for fully and/or semi-automatic extraction, selection, and localization (center determination) of ground control points (GCPs) for radargrammetry using airborne synthetic aperture radar (SAR) images. Test results using airborne NASA/JPL TOPSAR images of Taiwan verify that the registration accuracy is about 0.8~1.4 pixels. In about 30 minutes on a personal computer, 1500~3000 GCPs are extracted and their point centers determined in a SAR image of about 512 × 512 pixels.
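
A rough sketch of feature-based GCP candidate extraction with sub-pixel center determination is shown below using OpenCV; the corner detector, speckle filter, and refinement window are illustrative choices, not necessarily the paper's operators.

```python
# Detect corner-like GCP candidates in a SAR tile, then refine each center
# to sub-pixel accuracy. Detector settings and file name are assumptions.
import cv2
import numpy as np

sar = cv2.imread("topsar_tile.png", cv2.IMREAD_GRAYSCALE)  # assumed 512x512 tile

# Candidate GCPs: strong corner responses in a speckle-suppressed image.
smooth = cv2.medianBlur(sar, 5)                    # crude speckle suppression
corners = cv2.goodFeaturesToTrack(smooth, maxCorners=3000,
                                  qualityLevel=0.01, minDistance=10)

# Sub-pixel center determination around each candidate.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(smooth, corners, (5, 5), (-1, -1), criteria)
print(len(refined), "candidate GCPs with sub-pixel centers")
```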

Panorama Image Stitching Using Synthetic Fisheye Image (Synthetic fisheye 이미지를 이용한 360° 파노라마 이미지 스티칭)

  • Kweon, Hyeok-Joon;Cho, Donghyeon
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.20-30
    • /
    • 2022
  • Recently, as VR (Virtual Reality) technology has been in the spotlight, 360° panoramic images for viewing lively VR content are attracting a lot of attention. Image stitching is a major technology for producing 360° panorama images, and many studies are being actively conducted. Typical stitching algorithms are based on feature points, but conventional feature-point-based image stitching has the problem that the stitching results are strongly affected by the feature points. To solve this problem, deep-learning-based image stitching technologies have recently been studied, but many problems remain when there are few overlapping areas between images or large parallax. In addition, fully supervised learning is limited because labeled ground-truth panorama images cannot be obtained in a real environment. Therefore, we produced three fisheye images with different camera centers, together with the corresponding ground-truth image, using the CARLA simulator, which is widely used in the autonomous driving field. We propose an image stitching model that creates a 360° panorama image from the produced fisheye images. The final experiments, on virtual datasets configured to resemble the actual environment, verify stitching results that are robust to various environments and large parallax.
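
The geometric core of stitching fisheye views into a 360° panorama is warping each fisheye image into equirectangular coordinates so overlapping views can be blended. Below is a minimal Python/OpenCV sketch assuming an equidistant fisheye model and a forward-facing camera; the FOV, output size, and file name are illustrative, and the paper's learned stitching model is not reproduced here.

```python
# Warp one equidistant fisheye image (r = f * theta) into an equirectangular
# panorama slice. FOV, sizes, and orientation are illustrative assumptions.
import cv2
import numpy as np

def fisheye_to_equirect(fish, out_w=1024, out_h=512, fov_deg=180.0):
    h, w = fish.shape[:2]
    f = (w / 2) / np.radians(fov_deg / 2)            # equidistant focal length
    lon, lat = np.meshgrid(
        np.linspace(-np.pi, np.pi, out_w),
        np.linspace(np.pi / 2, -np.pi / 2, out_h))
    # Unit ray per panorama pixel; the camera looks down +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1, 1))             # angle from optical axis
    psi = np.arctan2(y, x)                           # azimuth around the axis
    r = f * theta
    map_x = (w / 2 + r * np.cos(psi)).astype(np.float32)
    map_y = (h / 2 + r * np.sin(psi)).astype(np.float32)
    out = cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_CONSTANT)
    out[np.degrees(theta) > fov_deg / 2] = 0         # outside the fisheye FOV
    return out

pano_part = fisheye_to_equirect(cv2.imread("fisheye_front.png"))
```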

A Study on the Extraction of Linear Features from Satellite Images and Automatic GCP Filing (위성영상의 선형특징 추출과 이를 이용한 자동 GCP 화일링에 관한 연구)

  • Kim, Jung-Gi;Kang, Chi-Woo;Park, Rae-Hong;Lee, Kwae-Hee
    • Korean Journal of Remote Sensing
    • /
    • v.5 no.2
    • /
    • pp.133-145
    • /
    • 1989
  • This paper describes an implementation of linear feature extraction algorithms for satellite images and a method of automatic GCP (Ground Control Point) filing using the extracted linear features. We propose a new linear feature extraction algorithm that uses the magnitude and direction information of edges. The results of applying the proposed algorithm to satellite images are presented and compared with those of other algorithms. Using the proposed algorithm, automatic GCP filing was performed successfully.
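
The magnitude-plus-direction idea can be sketched with Sobel gradients: threshold on edge magnitude, bucket pixels by edge direction, and keep long connected runs within a bucket as linear features. The thresholds and bucket width below are illustrative assumptions, not the paper's algorithm.

```python
# Extract candidate linear features using both edge magnitude and direction.
import cv2
import numpy as np

img = cv2.imread("satellite_band.png", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
ang = np.degrees(np.arctan2(gy, gx)) % 180           # edge direction, 0-180

strong = mag > np.percentile(mag, 95)                # magnitude criterion
lines = []
for lo in range(0, 180, 15):                         # direction buckets
    bucket = (strong & (ang >= lo) & (ang < lo + 15)).astype(np.uint8)
    # Connected runs of same-direction edge pixels become linear features.
    n, cc, stats, _ = cv2.connectedComponentsWithStats(bucket)
    for i in range(1, n):
        if max(stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]) > 40:
            lines.append((lo, stats[i]))
print(len(lines), "candidate linear features for GCP filing")
```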

Feedback Analysis for Tunnel Safety using displacements measured during the tunnel excavation (터널굴착에 의한 변위계측값을 활용한 역해석 기법 연구)

  • Park, Si-Hyun;Song, Won-Gen;Oh, Young-Seok;Shin, Yong-Seok
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2007.04a
    • /
    • pp.199-204
    • /
    • 2007
  • This research aimed to develop a quantitative assessment technique that uses the displacements measured at the excavated face during tunnel construction. A tunnel is long relative to its excavated cross-section, so tunnel safety assessment is more effective when it uses measured displacement data. Tunnel structures show different structural behaviors depending on the mechanical characteristics of the ground and supports, the excavation methods, the construction methods of the supports, and so on. From this point of view, it is of great practical importance that the measured data from the construction site represent the interaction between the ground and the supports as it actually occurs. In this study, both the stress state and the properties of the surrounding ground are analyzed by a newly incorporated feedback analysis technique that can use the measured displacements directly. The stress state and ground properties are then used to obtain the strain distribution of the surrounding ground. Finally, tunnel safety can be assessed quantitatively by comparing the strain estimated through the analysis with the allowable strain of the ground.
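
At its simplest, the feedback (back-analysis) step inverts a forward model: choose the ground parameters that best reproduce the measured displacements. The sketch below assumes a one-parameter linear-elastic model in which displacements scale inversely with the ground modulus E; real back-analysis drives a full numerical model, and every value here is illustrative.

```python
# One-parameter back-analysis: fit the ground modulus E so that model
# displacements match the measurements in a least-squares sense.
import numpy as np

u_measured = np.array([8.2, 5.1, 4.7])      # mm: crown settlement, convergence, ...
u_reference = np.array([10.0, 6.0, 5.5])    # mm: model output at E_ref
E_ref = 500.0                               # MPa, reference modulus (assumed)

# With u(E) = u_reference * (E_ref / E), least squares gives a closed form.
E_est = E_ref * (u_reference @ u_reference) / (u_reference @ u_measured)
u_fit = u_reference * E_ref / E_est

strain = max(u_fit) / 5000.0                # assumed 5 m tunnel span, in mm
print(f"back-analyzed E = {E_est:.0f} MPa, peak strain = {strain:.2e}")
```

The estimated strain would then be compared against the allowable strain of the ground, as the abstract describes.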

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.234-242
    • /
    • 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image is a SIFT feature point. The data set for this network consists of the DIV2K dataset cut into 33 × 33 crops and uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of the RobHess SIFT features extracted by setting the octave (scale) to 0, sigma to 1.6, and the number of intervals to 3. Based on VGG-16, we construct increasingly deep networks of 13, 23, and 33 convolution layers and experiment with changing the method of increasing the image scale. The result of using the sigmoid function as the activation function of the output layer is compared with the result of using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also high extraction repeatability for distorted images.
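
The patch-classification setup the abstract describes can be sketched as a small VGG-style network that maps a 33 × 33 RGB crop to the probability that its center pixel is a SIFT feature point. The depths and channel widths below are illustrative stand-ins, not the paper's exact 13/23/33-layer configurations.

```python
# VGG-style binary classifier: is the center pixel of a 33x33 crop a SIFT point?
import torch
import torch.nn as nn

class SiftPointNet(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):          # conv-conv-pool, VGG style
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1), nn.Sigmoid())   # sigmoid head; softmax(2) is the alternative

    def forward(self, x):              # x: (N, 3, 33, 33)
        return self.head(self.features(x))

net = SiftPointNet()
crops = torch.rand(8, 3, 33, 33)       # stand-in batch of RGB crops from DIV2K
print(net(crops).shape)                # (8, 1) probability of "SIFT feature point"
```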