• Title/Summary/Keyword: image-based 3D point cloud (영상기반 3차원 점군)

19 search results

SIFT Weighting Based Iterative Closest Points Method in 3D Object Reconstruction (3차원 객체 복원을 위한 SIFT 특징점 가중치 기반 반복적 점군 정합 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.309-312
    • /
    • 2016
  • Recently, interest in 3D object reconstruction, which digitizes the 3D shape and color of real-world objects, has been growing steadily. 3D object reconstruction produces an integrated 3D model through stages such as image acquisition, image rectification, point cloud acquisition, iterative point cloud registration, bundle adjustment, and 3D model representation. Among these, iterative point cloud registration provides the initial estimate of the camera trajectory and is an important step for guaranteeing convergence to the global optimum in the bundle adjustment stage. The conventional iterative closest points (ICP) method suffers from object drift caused by trajectory errors that accumulate over time. To solve this problem, this paper extracts SIFT feature points from the color images, obtains the corresponding 3D points, and assigns them weights, so that a more accurate registration between point clouds is performed. Experimental results confirm that, compared with the conventional method, the proposed method reduces the absolute trajectory error and lessens object drift in the reconstructed 3D model.

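The core idea of the abstract above, a registration step in which correspondences carry unequal weights, can be sketched as a weighted rigid alignment (weighted Kabsch). This is an illustrative sketch, not the authors' implementation; in their method the weights would come from proximity to SIFT keypoints, while here they are simply uniform.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """One weighted alignment step: find R, t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2 (weighted Kabsch)."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)             # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    S = (src - mu_s).T @ (w[:, None] * (dst - mu_d))  # weighted covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation with uniform weights.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ Rz.T + t_true
R, t = weighted_rigid_align(src, dst, np.ones(50))
```

In a SIFT-weighted variant, `w` would be larger for points near matched keypoints, pulling the estimated transform toward the photometrically reliable correspondences.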

Online Multi-view Range Image Registration using Geometric and Photometric Features (3차원 기하정보 및 특징점 추적을 이용한 다시점 거리영상의 온라인 정합)

  • Baek, Jae-Won;Park, Soon-Yong
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2007.02a
    • /
    • pp.1000-1005
    • /
    • 2007
  • This paper proposes an online registration technique for 3D point clouds acquired from a range camera, for reconstructing 3D models of real objects. The proposed method acquires consecutive range images and photometric images with a range camera and separates the object from the background using a threshold. Feature points are selected in the range image, and projection-based registration is performed using the 3D points corresponding to those features. After the initial registration, the range images are refined by tracking correspondences between the photometric images; the KLT (Kanade-Lucas-Tomasi) tracker used for this tracking is modified so that the result of the initial registration guides the correspondence search, which increases both the speed and the success rate of the search. The range images are refined using the 3D points corresponding to the feature points and the tracked correspondences, and once registration is complete, a 3D model is integrated offline. The proposed algorithm was applied to two real objects, and 3D models were generated.


Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented to register partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and long processing times. Moreover, these methods are not robust to 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied to enable a modified KLT (Kanade-Lucas-Tomasi) tracker to find correspondences. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, a final color is computed from the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
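The depth-refinement stage described above, dropping temporally unstable pixels and filling the resulting holes from spatial neighbors, can be sketched roughly as follows. The variance threshold and 3×3 neighborhood are illustrative assumptions, not values from the paper.

```python
import numpy as np

def refine_depth(frames, var_thresh=0.01):
    """Refine a stack of depth frames (T, H, W) in two steps:
    1) temporal: mask out pixels whose depth varies strongly over time;
    2) spatial: fill the masked holes with the mean of valid 3x3 neighbours."""
    depth = frames.mean(axis=0)
    hole = frames.var(axis=0) > var_thresh        # unstable pixels become holes
    filled = depth.copy()
    for y, x in zip(*np.nonzero(hole)):
        ys = slice(max(y - 1, 0), y + 2)
        xs = slice(max(x - 1, 0), x + 2)
        valid = ~hole[ys, xs]
        if valid.any():
            filled[y, x] = depth[ys, xs][valid].mean()
    return filled, hole

# Toy stack: constant depth 1.0 except one temporally unstable pixel.
frames = np.ones((4, 3, 3))
frames[:, 1, 1] = [0.0, 1.0, 2.0, 3.0]
filled, hole = refine_depth(frames)
```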

Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV (MMS로부터 취득된 LiDAR 점군데이터의 반사강도 영상과 UAV 영상의 정합을 위한 특징점 기반 매칭 기법 연구)

  • Choi, Yoonjo;Farkoushi, Mohammad Gholami;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.6
    • /
    • pp.453-464
    • /
    • 2019
  • Recently, as the demand for 3D geospatial information increases, the importance of rapid and accurate data construction has grown. Although many studies have registered UAV (Unmanned Aerial Vehicle) imagery based on LiDAR (Light Detection and Ranging) data, which enables precise 3D data construction, studies using LiDAR data embedded in an MMS (Mobile Mapping System) are insufficient. Therefore, this study compared and analyzed 9 feature-point-based matching algorithms for registering reflectance images, converted from LiDAR point cloud intensity data acquired from an MMS, with image data from a UAV. Our results indicate that the SIFT (Scale Invariant Feature Transform) algorithm stably achieved high matching accuracy, and that sufficient conjugate points were extracted even in various road environments. In the registration accuracy analysis, the SIFT algorithm achieved an accuracy of about 10 pixels, except where the overlap was low and the same pattern repeated. This is a reasonable result considering that distortion due to UAV altitude is included at the time of image capture. Therefore, the results of this study are expected to serve as basic research for 3D registration of LiDAR point cloud intensity data and UAV imagery.
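Feature-point matching of the kind compared in this study typically pairs descriptors with a nearest-neighbor search plus Lowe's ratio test. A minimal sketch on toy descriptor vectors (the arrays stand in for real SIFT descriptors, and the ratio value is a common but illustrative choice):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping the pair only if the best distance is clearly smaller than the
    second best (Lowe's ratio test). Returns (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]          # best and second-best candidates
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Toy descriptors: each row of A has an unambiguous partner in B.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0], [0.45, 0.55]])
matches = ratio_test_match(A, B)
```

The surviving matches would then feed a transform estimation (e.g. homography with RANSAC) to register the reflectance image to the UAV image.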

Development of Mean Stand Height Module Using Image-Based Point Cloud and FUSION S/W (영상 기반 3차원 점군과 FUSION S/W 기반의 임분고 분석 모듈 개발)

  • KIM, Kyoung-Min
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.19 no.4
    • /
    • pp.169-185
    • /
    • 2016
  • Recently, mean stand height has been added as a new attribute to forest type maps, but it is often too costly and time-consuming to manually measure 9,100,000 points from countrywide stereo aerial photos. In addition, tree heights are frequently measured around tombs and forest edges, which are poor representations of the interior tree stand. This work proposes estimating mean stand height using an image-based point cloud extracted from stereo aerial photos with the FUSION S/W. A DTM (Digital Terrain Model) was created by filtering the DSM (Digital Surface Model) point cloud, and the DTM was subtracted from the DSM, resulting in an nDSM (normalized DSM) that represents object heights (buildings, trees, etc.). The RMSE was calculated to compare differences between observed tree heights and those extracted from the nDSM. The resulting RMSE of average total plot height was 0.96 m. Individual tree heights of the whole study area were extracted using the USDA Forest Service's FUSION S/W. Finally, mean stand height was produced by averaging individual tree heights within each stand polygon of the forest type map. In order to automate mean stand height extraction using photogrammetric methods, a module was developed as an ArcGIS add-in toolbox.
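The height computation in the abstract reduces to two array operations: nDSM = DSM − DTM, then an average of heights inside a stand polygon. A minimal sketch with toy elevation grids (all values and the polygon mask are illustrative):

```python
import numpy as np

# Surface and bare-earth elevations in metres (toy 2x2 rasters).
dsm = np.array([[12.0, 15.0],
                [30.0, 31.0]])
dtm = np.array([[10.0, 10.0],
                [11.0, 12.0]])

ndsm = dsm - dtm                     # object heights: buildings, trees, etc.

# A stand polygon rasterized to a boolean mask over the same grid.
stand_mask = np.array([[False, False],
                       [True,  True]])
mean_stand_height = ndsm[stand_mask].mean()
```

In the actual workflow the per-pixel heights would be replaced by individual tree heights detected by FUSION, averaged per stand polygon of the forest type map.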

Automatic Registration of Point Cloud Data between MMS and UAV using ICP Method (ICP 기법을 이용한 MSS 및 UAV 간 점군 데이터 자동정합)

  • KIM, Jae-Hak;LEE, Chang-Min;KIM, Hyeong-Joon;LEE, Dong-Ha
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.4
    • /
    • pp.229-240
    • /
    • 2019
  • 3D geospatial models have been widely used in civil engineering, medicine, computer graphics, urban management, and many other fields. In particular, the demand for high-quality 3D spatial information, such as precise road maps, has increased explosively, and MMS and UAV techniques have been actively used to acquire it more easily and conveniently in the surveying and geospatial fields. However, to perform 3D modeling by integrating the two data sets from MMS and UAV, a proper registration method is required to efficiently correct the differences arising from the raw-data acquisition sensors, the point cloud generation methods, and the observation accuracy of the two techniques. In this study, we obtained UAV point cloud data in the Yeouido area as the study area in order to evaluate the automatic registration performance between MMS and UAV point cloud data using the ICP (Iterative Closest Point) method. MMS observations were then performed in the study area, divided into 4 zones according to the level of overlap ratio and observation noise with respect to the UAV data. We manually registered the MMS data to the UAV data and compared the results with those of automatic registration using the ICP method. In conclusion, the higher the overlap ratio and the lower the noise level, the more accurate the results of automatic registration using the ICP method.
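The ICP method used for the automatic registration above alternates two steps: match each point to its nearest neighbor in the target cloud, then solve for the rigid transform (Kabsch/SVD) that best aligns the matched pairs. A minimal point-to-point sketch on toy data; real MMS/UAV clouds would additionally need subsampling, a spatial index, and outlier rejection:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Plain point-to-point ICP: nearest-neighbour matching alternated with
    a rigid (Kabsch) update. Returns the accumulated rotation and translation."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (fine for toy data only)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        pairs = dst[d2.argmin(axis=1)]
        mu_s, mu_d = cur.mean(axis=0), pairs.mean(axis=0)
        H = (cur - mu_s).T @ (pairs - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy clouds: a 3x3x3 grid and a slightly rotated/shifted copy of it.
grid = np.array([[x, y, z] for x in range(3) for y in range(3)
                 for z in range(3)], dtype=float)
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.05, 0.2])
src = (grid - t_true) @ R_true        # row-wise R_true.T @ (grid - t_true)
R_est, t_est = icp(src, grid)
```

The paper's finding, that higher overlap and lower noise help, matches ICP's known behavior: the nearest-neighbor step only finds correct correspondences when the clouds already overlap well and the initial misalignment is small.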

Online Multi-view Range Image Registration using Geometric and Photometric Feature Tracking (3차원 기하정보 및 특징점 추적을 이용한 다시점 거리영상의 온라인 정합)

  • Baek, Jae-Won;Moon, Jae-Kyoung;Park, Soon-Yong
    • The KIPS Transactions:PartB
    • /
    • v.14B no.7
    • /
    • pp.493-502
    • /
    • 2007
  • An online registration technique is presented to register multi-view range images for the 3D reconstruction of real objects. Using a range camera, we first acquire range images and photometric images continuously. In the range images, object and background regions are divided using a predefined threshold value. For coarse registration of the range images, the centroids of the images are used. After refining the registration of the range images using a projection-based technique, we use a modified KLT (Kanade-Lucas-Tomasi) tracker to match photometric features in the object images. Using the modified KLT tracker, we can track image features quickly and accurately. If a range image fails to register, we acquire new range images and continue trying to register them until the registration process resumes. After enough range images are registered, they are integrated into a 3D model in an offline step. Experimental results and error analysis show that the proposed method can reconstruct 3D models quickly and accurately.
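The coarse registration step above, aligning the two clouds by their centroids, amounts to a translation-only estimate that later projection-based refinement improves on. A minimal sketch (toy points, illustrative only):

```python
import numpy as np

# Two clouds of the same shape, offset by an unknown translation.
a = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = a + np.array([2.0, -1.0, 0.5])

# Coarse registration: shift cloud a by the centroid offset.
t_coarse = b.mean(axis=0) - a.mean(axis=0)
a_aligned = a + t_coarse
```

This gives a cheap initial guess; rotation and residual error are then handled by the projection-based refinement and KLT feature tracking described in the abstract.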

A Hybrid Approach for Automated Building Area Extraction from High-Resolution Satellite Imagery (고해상도 위성영상을 활용한 자동화된 건물 영역 추출 하이브리드 접근법)

  • An, Hyowon;Kim, Changjae;Lee, Hyosung;Kwon, Wonsuk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.6
    • /
    • pp.545-554
    • /
    • 2019
  • This research aims to provide a building area extraction approach for areas where data acquisition through field surveying, aerial photography, or lidar scanning is impossible. Hence, high-resolution satellite images, which offer high accessibility over the earth, are utilized for automated building extraction in this study. 3D point clouds or DSMs (Digital Surface Models) derived from stereo image matching provide low-quality building area extraction due to their high levels of noise and holes. In this regard, this research proposes a hybrid building area extraction approach which utilizes 3D point clouds (from image matching) together with color and linear information (from the imagery). First, ground and non-ground points are separated from the 3D point clouds; then, an initial building hypothesis is extracted from the non-ground points. Second, a color-based building hypothesis is produced by considering the overlap between the initial building hypothesis and the color segmentation result. Afterwards, line detection and space partitioning results are utilized to acquire the final building areas. The proposed approach shows 98.44% correctness, 95.05% completeness, and 1.05 m positional accuracy. Moreover, the results suggest that irregularly shaped building areas can be extracted with the proposed approach.
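The first two stages of the hybrid approach, separating non-ground points and intersecting the resulting hypothesis with a color segmentation, can be sketched as simple mask operations. All arrays and the height threshold here are toy assumptions; the paper's actual ground filtering is more elaborate than a single height cut.

```python
import numpy as np

# Point heights above a local ground estimate (m), for six toy points.
z = np.array([0.1, 0.2, 9.5, 10.1, 0.0, 8.8])

# Stage 1: non-ground points form the initial building hypothesis.
non_ground = z > 2.0

# Stage 2: keep only the hypothesis points that overlap a colour segment
# classified as building-like (toy per-point labels).
color_segment = np.array([False, False, True, True, False, False])
building = non_ground & color_segment
```

The final stage in the paper then snaps these regions to detected line segments and space partitions to recover clean building outlines.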

Real-time Localization of An UGV based on Uniform Arc Length Sampling of A 360 Degree Range Sensor (전방향 거리 센서의 균일 원호길이 샘플링을 이용한 무인 이동차량의 실시간 위치 추정)

  • Park, Soon-Yong;Choi, Sung-In
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.6
    • /
    • pp.114-122
    • /
    • 2011
  • We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a digital surface model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, 3D range points are sampled uniformly in terms of ground sample distance. Using the proposed method, we can reduce the number of 3D points while maintaining their uniformity over the range data. We compare the registration speed and accuracy of the proposed method with those of a conventional sampling method. Through several experiments varying the number of sampling points, we analyze the speed and accuracy of the proposed method.
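Sampling uniformly in ground sample distance, rather than taking every Nth point, can be sketched as a greedy walk along an ordered scan line that keeps a point only when enough ground distance has accumulated. This is an illustrative reading of the idea, not the paper's exact algorithm:

```python
import math

def uniform_arc_sample(points, gsd):
    """Greedy uniform sampling: walk along an ordered scan line of (x, y, z)
    points and keep one only when the accumulated ground (x, y) distance
    since the last kept sample reaches the ground sample distance `gsd`."""
    kept = [points[0]]
    acc = 0.0
    for prev, cur in zip(points, points[1:]):
        acc += math.dist(prev[:2], cur[:2])   # ground-plane distance
        if acc >= gsd:
            kept.append(cur)
            acc = 0.0
    return kept

# Dense points every 0.1 m along the x axis, resampled at ~0.5 m spacing.
scan = [(0.1 * i, 0.0, 0.0) for i in range(21)]
samples = uniform_arc_sample(scan, 0.45)
```

Because the spacing is measured on the ground plane, far-range returns (which are sparse anyway) are kept while dense near-range returns are thinned, preserving uniform coverage for the DSM registration.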

Three-Dimensional Positional Accuracy Analysis of UAV Imagery Using Ground Control Points Acquired from Multisource Geospatial Data (다종 공간정보로부터 취득한 지상기준점을 활용한 UAV 영상의 3차원 위치 정확도 비교 분석)

  • Park, Soyeon;Choi, Yoonjo;Bae, Junsu;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1013-1025
    • /
    • 2020
  • The Unmanned Aerial Vehicle (UAV) platform is widely used in disaster monitoring and smart cities, with the advantage of quickly acquiring images of small areas at low cost. Ground Control Points (GCPs) for positioning UAV images are essential to achieve cm-level accuracy when producing UAV-based orthoimages and Digital Surface Models (DSMs). However, on-site acquisition of GCPs takes considerable manpower and time. This research aims to provide an efficient and accurate way to replace on-site GNSS surveying with three different sources of geospatial data. The three geospatial datasets used in this study are as follows: 1) 25 cm aerial orthoimages and a Digital Elevation Model (DEM) based on a 1:1000 digital topographic map; 2) point cloud data acquired by a Mobile Mapping System (MMS); and 3) hybrid point cloud data created by merging MMS data with UAV data. For each dataset, a three-dimensional positional accuracy analysis of the UAV-based orthoimage and DSM was performed by comparing the three-dimensional coordinates of independent check points with those of an RTK-GNSS survey. The results show the third case, in which MMS and UAV data are combined, to be the most accurate, with an RMSE of 8.9 cm horizontally and 24.5 cm vertically. In addition, the distribution of geospatial GCPs was shown to have a greater effect on vertical accuracy than on horizontal accuracy.
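The accuracy analysis above boils down to RMSEs of check-point coordinate differences, horizontal from the easting/northing residuals and vertical from the height residuals. A minimal sketch with made-up residuals (the values are illustrative, not the paper's data):

```python
import math

# Per-check-point differences (dE, dN, dH) in metres between the
# UAV-derived coordinates and the RTK-GNSS reference survey (toy values).
diffs = [(0.05, -0.07, 0.20),
         (-0.10, 0.06, -0.25),
         (0.08, -0.02, 0.30)]

n = len(diffs)
rmse_h = math.sqrt(sum(de**2 + dn**2 for de, dn, _ in diffs) / n)  # horizontal
rmse_v = math.sqrt(sum(dh**2 for _, _, dh in diffs) / n)           # vertical
```

The paper's pattern, vertical RMSE (24.5 cm) noticeably larger than horizontal (8.9 cm), is typical of photogrammetric products, where height is the weakest-determined coordinate.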