• Title/Abstract/Keyword: Lidar Intensity


다중분광영상과 LIDAR자료를 이용한 농업지역 토지피복 분류 (Rural Land Cover Classification using Multispectral Image and LIDAR Data)

  • 장재동
    • 대한원격탐사학회지
    • /
    • 제22권2호
    • /
    • pp.101-110
    • /
    • 2006
  • This study analyzed the accuracy of land cover classification for an agricultural area using multispectral imagery and LIDAR (LIght Detection And Ranging) data acquired by airborne observation. The multispectral imagery consists of three bands: green, red, and near-infrared. From the LIDAR vector data, a first-return intensity image and a vegetation height image, computed as the difference between the first-return elevation and the last-return ground elevation, were derived. Maximum likelihood classification was applied using the three multispectral bands, the LIDAR intensity image, and the vegetation height image. The overall accuracy of land cover classification using all of the images was 85.6%, more than 10% higher than the accuracy obtained with the multispectral imagery alone. The classification accuracy improved because of the height differences among crops, the height differences between trees and crops, and the differences in LIDAR intensity.
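
A minimal sketch of the band stacking and Gaussian maximum likelihood classification summarized above, assuming the multispectral bands, LIDAR intensity, and vegetation height rasters are already co-registered NumPy arrays (array names, training-sample layout, and the per-class Gaussian model are illustrative assumptions, not the paper's code):

```python
import numpy as np

def vegetation_height(first_return_dsm, last_return_dem):
    """Vegetation height as the difference between first-return and last-return elevations."""
    return first_return_dsm - last_return_dem

def fit_ml_classifier(features, labels):
    """Estimate per-class mean and covariance for Gaussian maximum likelihood."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]                      # (n_samples, n_bands)
        cov = np.cov(x, rowvar=False)
        stats[c] = (x.mean(axis=0), np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_ml(features, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mean, inv_cov, logdet = stats[c]
        d = features - mean
        maha = np.einsum("ij,jk,ik->i", d, inv_cov, d)  # Mahalanobis distance per pixel
        scores.append(-0.5 * (maha + logdet))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]

# Illustrative feature stack: green, red, NIR, LIDAR intensity, vegetation height.
# features = np.stack([green, red, nir, intensity, veg_height], axis=-1).reshape(-1, 5)
```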

GENERATION OF AIRBORNE LIDAR INTENSITY IMAGE BY NORMALIZING RANGE DIFFERENCES

  • Shin, Jung-Il;Yoon, Jong-Suk;Lee, Kyu-Sung
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2006년도 Proceedings of ISRS 2006 PORSEC Volume I
    • /
    • pp.504-507
    • /
    • 2006
  • Airborne Lidar technology has been applied to diverse applications owing to its accurate 3D information. In addition, Lidar intensity, the backscattered signal power, can provide additional information regarding a target's characteristics. Lidar intensity varies with target reflectance, moisture condition, range, and viewing geometry. This study aims to generate a normalized airborne Lidar intensity image considering such influential factors as reflectance, range, and geometric/topographic factors (scan angle, ground height, aspect, slope, local incidence angle: LIA). Laser points from one flight line were extracted to simplify the geometric conditions. Laser intensities of sample plots, selected using a set of reference data and a ground survey, were then statistically analyzed against the independent variables. Target reflectance, the range between sensor and target, and surface slope were the main factors influencing laser intensity. The intensity of the laser points was initially normalized by removing the range effect only; the microsite topographic factor, such as slope angle, was not normalized because of the difficulty of computing it automatically.
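
A minimal sketch of range-only intensity normalization as described above, assuming an inverse-square range dependence and a user-chosen reference range (the correction model, variable names, and reference range are illustrative assumptions rather than the paper's exact formulation):

```python
import numpy as np

def normalize_intensity_by_range(intensity, sensor_xyz, point_xyz, ref_range=1000.0):
    """Remove the range effect from raw intensity using an inverse-square model.

    intensity : (n,) raw return intensities
    sensor_xyz: (n, 3) sensor position for each return
    point_xyz : (n, 3) return position
    ref_range : reference range in metres to which intensities are scaled
    """
    ranges = np.linalg.norm(point_xyz - sensor_xyz, axis=1)
    return intensity * (ranges / ref_range) ** 2
```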


2D 라이다 데이터베이스 기반 장애물 분류 기법 (Obstacle Classification Method Based on Single 2D LIDAR Database)

  • 이무현;허수정;박용완
    • 대한임베디드공학회논문지
    • /
    • 제10권3호
    • /
    • pp.179-188
    • /
    • 2015
  • We propose an obstacle classification method based on a 2D LIDAR (Light Detection and Ranging) database. Existing 2D LIDAR-based obstacle classification methods have advantages in accuracy and short calculation time, but they have difficulty identifying the type of obstacle, so accurate path planning is not possible. To overcome this, a method that classifies obstacle type from the obstacle's width was proposed, but width alone is not sufficient to improve accuracy. In this paper, a database is built from width, intensity, range variance, and intensity variance. The first classification stage uses the width data, the second uses the intensity data, and the third uses the variances of range and intensity. Each stage compares the measurements against the database, and the obstacle class with the highest similarity is selected. An experiment with an actual autonomous vehicle in a real environment shows that calculation time decreased compared to a 3D LIDAR and that obstacles could be classified using a single 2D LIDAR.
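
A minimal sketch of the staged, database-driven classification described above, assuming each detected obstacle is already segmented into a set of 2D returns with intensities (the feature normalization, similarity measure, and example database values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def obstacle_features(points_xy, intensities):
    """Width, mean intensity, range variance, and intensity variance of one segmented obstacle."""
    ranges = np.linalg.norm(points_xy, axis=1)
    width = np.linalg.norm(points_xy[-1] - points_xy[0])   # first-to-last point extent
    return np.array([width, intensities.mean(), ranges.var(), intensities.var()])

def classify_obstacle(features, database):
    """Pick the database class whose reference features are most similar (smallest normalized distance)."""
    best_label, best_score = None, np.inf
    for label, ref in database.items():                     # ref: (4,) reference feature vector
        score = np.sum(((features - ref) / (np.abs(ref) + 1e-9)) ** 2)
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Illustrative database entries (values are placeholders, not measured):
# database = {"pedestrian": np.array([0.5, 120.0, 0.01, 30.0]),
#             "car":        np.array([1.8,  80.0, 0.05, 15.0])}
```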

라이다와 광학영상을 이용한 토지피복분류 (Land Cover Classification Using Lidar and Optical Image)

  • 조우석;장휘정;김유석
    • 한국측량학회지
    • /
    • 제24권1호
    • /
    • pp.139-145
    • /
    • 2006
  • Lidar data can be acquired and processed quickly and provide high point density and accuracy. Unlike optical imagery, however, they take the form of an irregular 3D point cloud, which makes accurate classification of the land surface difficult. In this study, land cover classification was performed by supervised classification using lidar data and optical imagery together. First, DSM and DEM images with a 1 m grid size were generated from the lidar data, and an nDSM image was produced from them. An intensity image was also produced from the lidar intensity values. The optical inputs were the red, blue, and green bands of CCD imagery, the near-infrared band of IKONOS imagery, and a vegetation index image produced using the red band of the CCD imagery. Classifying with the optical imagery and lidar data together yielded an accuracy of 74%. After additional reclassification of shadow areas, handling of water areas, and correction of forest/building misclassification, a final classification accuracy of 81.8% was obtained.
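
A minimal sketch of the nDSM and vegetation index layers described above, assuming co-registered 1 m raster arrays; the NDVI form shown is the standard red/NIR ratio, and the band array names are illustrative:

```python
import numpy as np

def compute_ndsm(dsm, dem):
    """Normalized DSM: above-ground height of objects (buildings, trees) on the 1 m grid."""
    return dsm - dem

def compute_ndvi(nir, red, eps=1e-6):
    """Standard normalized difference vegetation index from NIR and red reflectance."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Illustrative feature stack for supervised classification:
# features = np.stack([red, green, blue, nir, ndvi, ndsm, lidar_intensity], axis=-1)
```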

라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘 (Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity)

  • 정상우;정민우;김아영
    • 로봇학회논문지
    • /
    • 제16권3호
    • /
    • pp.189-198
    • /
    • 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to a target model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, the 3D point cloud map obtained from a Lidar may have different point densities depending on the choice of sensor, measurement distance, and object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate specific points in the point cloud map, and distances to them are measured manually. This manual process is time- and labor-consuming and is strongly affected by the Lidar sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. By calculating the distance from the sensor to the device, we verified with an RGB-D camera and a Lidar that the automated method is much faster than the manual procedure and robust to sparsity. Experiments with a Lidar sensor in an outdoor environment further show that the system's performance is not limited to indoor use.
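
One plausible realization of the idea above, sketched minimally: select high-intensity returns on each of the three planar surfaces, fit each plane by least squares, intersect the three planes to obtain a single reference point, and measure its distance to the sensor (the patch segmentation and the SVD plane fit are illustrative assumptions, not the paper's device or algorithm):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points: returns unit normal n and offset d with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                          # direction of smallest variance
    return normal, -normal.dot(centroid)

def three_plane_intersection(planes):
    """Intersect three planes (n_i.x + d_i = 0) by solving the 3x3 linear system."""
    normals = np.array([n for n, _ in planes])
    offsets = np.array([-d for _, d in planes])
    return np.linalg.solve(normals, offsets)

def target_distance(plane_patches, sensor_origin=np.zeros(3)):
    """Distance from the sensor to the corner formed by three high-intensity planar patches.

    plane_patches: list of three (n_i, 3) arrays of high-intensity points,
                   one array per planar surface (segmentation assumed done upstream).
    """
    planes = [fit_plane(p) for p in plane_patches]
    corner = three_plane_intersection(planes)
    return np.linalg.norm(corner - sensor_origin)
```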

비정형의 건설환경 매핑을 위한 레이저 반사광 강도와 주변광을 활용한 향상된 라이다-관성 슬램 (Intensity and Ambient Enhanced Lidar-Inertial SLAM for Unstructured Construction Environment)

  • 정민우;정상우;장혜수;김아영
    • 로봇학회논문지
    • /
    • 제16권3호
    • /
    • pp.179-188
    • /
    • 2021
  • Construction monitoring is one of the key modules in smart construction. Unlike structured urban environments, construction sites are challenging to map because of their unstructured characteristics; for example, irregular feature points and unreliable matching hinder creating a map for site management. To tackle this issue, we propose a system for data acquisition in unstructured environments and a framework for Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping, IA-LIO-SAM, that achieves highly accurate robot trajectories and mapping. IA-LIO-SAM uses the same factor graph as Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM). Enhancing the existing LIO-SAM, IA-LIO-SAM leverages each point's intensity and ambient value to remove unnecessary feature points. These additional values also act as extra factors in the K-Nearest Neighbor (KNN) search, allowing more accurate comparisons between stored points and scanned points. The performance was verified in three different environments and compared with LIO-SAM.
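
A minimal sketch of a nearest-neighbor correspondence search augmented with intensity and ambient channels, which is the idea the abstract describes at a high level (the channel weights and the use of scikit-learn's KDTree are illustrative assumptions, not the IA-LIO-SAM implementation):

```python
import numpy as np
from sklearn.neighbors import KDTree

def augmented_features(points_xyz, intensity, ambient, w_int=0.1, w_amb=0.1):
    """Stack geometry with weighted intensity/ambient so the KNN distance mixes both cues."""
    return np.hstack([points_xyz,
                      w_int * intensity[:, None],
                      w_amb * ambient[:, None]])

def match_scan_to_map(scan_xyz, scan_int, scan_amb, map_xyz, map_int, map_amb, k=1):
    """Find, for each scan point, the nearest stored map point in the augmented 5D feature space."""
    tree = KDTree(augmented_features(map_xyz, map_int, map_amb))
    dist, idx = tree.query(augmented_features(scan_xyz, scan_int, scan_amb), k=k)
    return dist, idx
```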

데이터 누적을 이용한 반사도 지역 지도 생성과 반사도 지도 기반 정밀 차량 위치 추정 (Intensity Local Map Generation Using Data Accumulation and Precise Vehicle Localization Based on Intensity Map)

  • 김규원;이병현;임준혁;지규인
    • 제어로봇시스템학회논문지
    • /
    • 제22권12호
    • /
    • pp.1046-1052
    • /
    • 2016
  • For the safe driving of autonomous vehicles, accurate position estimation is required. Generally, the position error must be less than 1 m for lane keeping, but GPS positioning error is often larger than 1 m. Therefore, this error must be corrected, and a map matching algorithm is generally used. In particular, road-marking intensity maps have been used in many studies. In previous work, a 3D LIDAR with many vertical layers was used to generate the local intensity map, because it provides sufficient longitudinal information for map matching. However, such a sensor is expensive, and sufficient road-marking information cannot be obtained in rush-hour traffic. In this paper, we propose a localization algorithm using an accumulated intensity local map. An accumulated intensity local map with sufficient longitudinal information can be generated using a 3D LIDAR with only a few vertical layers. With this algorithm, sufficient intensity information can also be obtained in rush-hour traffic. Thus, the reliability of the map matching increases and an accurate position estimate can be obtained. In the experiments, the lateral RMS position error was about 0.12 m and the longitudinal RMS error about 0.19 m.
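
A minimal sketch of accumulating successive scans into a local 2D intensity grid using the vehicle pose, the core of the local-map generation described above (the grid resolution, pose format, and intensity-averaging scheme are illustrative assumptions):

```python
import numpy as np

def accumulate_intensity_map(scans, poses, resolution=0.1, size=400):
    """Average ground-return intensities from several scans into one local 2D grid.

    scans: list of (n_i, 3) arrays [x, y, intensity] in the sensor frame
    poses: list of (x, y, yaw) vehicle poses in the local map frame
    """
    grid_sum = np.zeros((size, size))
    grid_cnt = np.zeros((size, size))
    origin = size * resolution / 2.0                     # half-width (m); grid centred on local-frame origin
    for (px, py, yaw), scan in zip(poses, scans):
        c, s = np.cos(yaw), np.sin(yaw)
        gx = px + c * scan[:, 0] - s * scan[:, 1]        # transform returns to the map frame
        gy = py + s * scan[:, 0] + c * scan[:, 1]
        ix = ((gx + origin) / resolution).astype(int)
        iy = ((gy + origin) / resolution).astype(int)
        ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
        np.add.at(grid_sum, (iy[ok], ix[ok]), scan[ok, 2])
        np.add.at(grid_cnt, (iy[ok], ix[ok]), 1)
    return np.divide(grid_sum, grid_cnt, out=np.zeros_like(grid_sum), where=grid_cnt > 0)
```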

GPS/INS와 LIDAR자료를 이용한 자동 항공영상 정사보정 개발 (Development of Automatic Airborne Image Orthorectification Using GPS/INS and LIDAR Data)

  • 장재동
    • 한국정보통신학회논문지
    • /
    • 제10권4호
    • /
    • pp.693-699
    • /
    • 2006
  • Digital images acquired by airborne observation must be precisely orthorectified to be of value as geographic information. For automatic orthorectification of the airborne images, GPS/INS (Global Positioning System/Inertial Navigation System) data recorded together with the camera and LIDAR (LIght Detection And Ranging) surface elevation data were used. In this study, 635 airborne images were produced, and the LIDAR data were converted to a raster image for use in the orthorectification. Flat-field correction was applied so that each image has uniform brightness across the frame. The images were geometrically corrected by computing the interior orientation and the exterior orientation from GPS/INS, and then orthorectified using the LIDAR surface elevation image. Orthorectification accuracy was validated with 50 ground control points collected from five arbitrarily selected images and the LIDAR intensity image. The resulting RMSE (Root Mean Square Error) was 0.387 m, only about twice the pixel resolution. This highly accurate automatic orthorectification method should be applicable to the airborne imaging industry.
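
A minimal sketch of the back-projection at the heart of the orthorectification step: each output ground cell takes its elevation from the LIDAR surface model and is projected into the source image through collinearity equations built from the GPS/INS exterior orientation (the camera model, rotation convention, nearest-neighbour resampling, and the dem_to_ground helper are illustrative assumptions):

```python
import numpy as np

def collinearity_project(ground_xyz, cam_xyz, R, focal_mm, pixel_mm, cols, rows):
    """Project ground points into image pixel coordinates with the collinearity equations.

    ground_xyz: (n, 3) ground coordinates [X, Y, Z from the LIDAR surface model]
    cam_xyz   : (3,) perspective centre from GPS
    R         : (3, 3) rotation from the ground frame to the camera frame (from INS attitude)
    """
    d = (ground_xyz - cam_xyz) @ R.T                   # vector to each point in the camera frame
    x = -focal_mm * d[:, 0] / d[:, 2]                  # image-plane coordinates (mm)
    y = -focal_mm * d[:, 1] / d[:, 2]
    col = cols / 2.0 + x / pixel_mm                    # principal point assumed at the image centre
    row = rows / 2.0 - y / pixel_mm
    return col, row

def orthorectify(image, dem, dem_to_ground, cam_xyz, R, focal_mm, pixel_mm):
    """Build the orthoimage by sampling the source image at each DEM cell (nearest neighbour)."""
    rows_o, cols_o = dem.shape
    jj, ii = np.meshgrid(np.arange(cols_o), np.arange(rows_o))
    X, Y = dem_to_ground(jj, ii)                       # assumed helper: grid indices -> ground X, Y
    ground = np.column_stack([X.ravel(), Y.ravel(), dem.ravel()])
    col, row = collinearity_project(ground, cam_xyz, R, focal_mm, pixel_mm,
                                    image.shape[1], image.shape[0])
    col = np.clip(np.round(col).astype(int), 0, image.shape[1] - 1)
    row = np.clip(np.round(row).astype(int), 0, image.shape[0] - 1)
    return image[row, col].reshape(dem.shape)
```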

AUTOMATIC ORTHORECTIFICATION OF AIRBORNE IMAGERY USING GPS/INS DATA

  • Jang, Jae-Dong;Kim, Young-Seup;Yoon, Hong-Joo
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2006년도 Proceedings of ISRS 2006 PORSEC Volume II
    • /
    • pp.684-687
    • /
    • 2006
  • Airborne imagery must be precisely orthorectified to be used as geographic information data. GPS/INS (Global Positioning System/Inertial Navigation System) and LIDAR (LIght Detection And Ranging) data were employed to automatically orthorectify the airborne images. In this study, 154 frames of airborne imagery and LIDAR vector data were acquired. The LIDAR vector data were converted to a raster image to serve as reference data. To obtain images with uniform brightness, flat-field correction was applied to all of the images. The airborne images were geometrically corrected by computing the interior orientation and the exterior orientation from the GPS/INS data, and then orthorectified using the LIDAR digital elevation model image. The accuracy of the orthorectified images was validated using 50 ground control points collected from five arbitrarily selected images and the LIDAR intensity image. The RMSE (Root Mean Square Error) was 0.365 m, smaller than two times the pixel spatial resolution at the surface. The mosaicked airborne image derived by this automatic orthorectification method can be employed as geographic information data.
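
A minimal sketch of the flat-field correction mentioned in both versions of this study, assuming a flat-field (uniformly illuminated) calibration frame is available per band (the optional dark frame and the normalization by the mean gain are illustrative assumptions):

```python
import numpy as np

def flat_field_correct(raw, flat, dark=None, eps=1e-6):
    """Equalize brightness across the frame by dividing out the sensor/lens response.

    raw : raw image frame
    flat: frame of a uniformly illuminated target (per band)
    dark: optional dark frame (sensor offset); zeros if not provided
    """
    dark = np.zeros_like(raw, dtype=np.float64) if dark is None else dark.astype(np.float64)
    gain = flat.astype(np.float64) - dark
    corrected = (raw.astype(np.float64) - dark) * gain.mean() / np.maximum(gain, eps)
    return corrected
```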
