• Title/Summary/Keyword: LiDAR Sensor

Search Results: 142

Geometric calibration of digital photogrammetric camera in Sejong Test-bed (세종 테스트베드에서 항측용 디지털카메라의 기하학적 검정)

  • Seo, Sang-Il; Won, Jae-Ho; Lee, Jae-One; Park, Byoung-Uk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.181-188 / 2012
  • In the field of aerial surveying, sensors such as digital photogrammetric cameras, airborne LiDAR, and GPS/INS are now used together to acquire various kinds of spatial information, and direct georeferencing with a digital photogrammetric camera and GPS/INS has been widely adopted. However, combining multiple sensors introduces calibration problems; above all, boresight calibration of the integrated sensors is a critical element of the mapping process when direct georeferencing or GPS/INS-assisted aerotriangulation is used. Establishing a national test-bed in Sejong-si for aerial sensor calibration is therefore essential. Accurate calibration using GPS/INS-integrated aerotriangulation of aerial imagery is required to determine the system parameters and to evaluate systematic errors. In this study, an efficient direct-georeferencing method for determining the exterior orientation parameters is investigated, and the geometric accuracy of the integrated sensors is assessed.
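The boresight and lever-arm quantities that such a test-bed is meant to calibrate enter through the standard direct-georeferencing relation. Below is a minimal sketch of that relation, not the authors' implementation; the Euler convention, parameter names, and call signature are illustrative assumptions.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from body to mapping frame (Z-Y-X Euler convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def direct_georeference(x_gnss, rpy_imu, boresight_rpy, lever_arm, ray_cam, scale):
    """Ground coordinates of an image ray under the usual direct-georeferencing model:
    X = X_GNSS + R_IMU @ (lever_arm + scale * R_boresight @ ray_cam)."""
    R_imu = rot_zyx(*rpy_imu)          # platform attitude from GPS/INS
    R_bore = rot_zyx(*boresight_rpy)   # camera-to-IMU boresight misalignment
    return (np.asarray(x_gnss)
            + R_imu @ (np.asarray(lever_arm) + scale * (R_bore @ np.asarray(ray_cam))))
```

Calibration over a test-bed amounts to estimating `boresight_rpy` and `lever_arm` so that rays reproduced by this model hit the surveyed ground control points.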

Detecting and Restoring the Occlusion Area for Generating the True Orthoimage Using IKONOS Image (IKONOS 정사영상제작을 위한 폐색 영역의 탐지와 복원)

  • Seo Min-Ho; Lee Byoung-Kil; Kim Yong-Il; Han Dong-Yeob
    • Korean Journal of Remote Sensing / v.22 no.2 / pp.131-139 / 2006
  • IKONOS images have perspective geometry along the CCD sensor line, much as aerial images have central perspective geometry, so occlusions caused by buildings, terrain, or other objects appear in the image. Detecting these occlusions using only RPCs (rational polynomial coefficients) for ortho-rectification is difficult. In this study, we therefore detected the occluded areas in IKONOS images using the nominal collection elevation and azimuth angles, and restored the hidden areas using other stereo images, from which a true orthoimage could be produced. The validity of the algorithm was evaluated through the geometric accuracy of the generated orthoimage.
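Detecting occlusion from the nominal collection angles is essentially a shadow computation: each elevated DSM cell hides a ground strip on the side facing away from the sensor. The sketch below illustrates that idea only; the grid conventions, sign choices, and function name are assumptions, not the paper's procedure.

```python
import numpy as np

def occlusion_mask(dsm, cell_size, elevation_deg, azimuth_deg):
    """Mark DSM cells hidden along the line of sight given by collection azimuth/elevation.
    Assumes rows increase southward, columns eastward, azimuth clockwise from north."""
    rows, cols = dsm.shape
    occluded = np.zeros_like(dsm, dtype=bool)
    tan_elev = np.tan(np.radians(elevation_deg))
    az = np.radians(azimuth_deg)
    step = np.array([np.cos(az), -np.sin(az)])   # unit grid step pointing away from the sensor
    for r in range(rows):
        for c in range(cols):
            h, d = dsm[r, c], 1
            while True:
                rr = int(round(r + d * step[0]))
                cc = int(round(c + d * step[1]))
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break
                los_height = h - d * cell_size * tan_elev   # line of sight over the cell top
                if los_height <= dsm[rr, cc]:
                    break                                    # line of sight meets the surface
                occluded[rr, cc] = True
                d += 1
    return occluded
```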

Requirements Analysis of Image-Based Positioning Algorithm for Vehicles

  • Lee, Yong; Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.5 / pp.397-402 / 2019
  • Recently, with the emergence of autonomous vehicles and growing interest in safety, a variety of research has been actively conducted to precisely estimate a vehicle's position by fusing sensors. Earlier studies determined the location of moving objects using GNSS (Global Navigation Satellite Systems) and/or an IMU (Inertial Measurement Unit), whereas precise positioning of a moving vehicle is now commonly performed by fusing data from sensors such as LiDAR (Light Detection and Ranging), on-board vehicle sensors, and cameras. This study aims to enhance kinematic vehicle positioning performance through feature-based recognition, and therefore analyzes the precision required of observations obtained from images. Velocity and attitude observations, assumed to be derived from images, were generated by simulation, and errors of various magnitudes were added to them. By applying these observations to the positioning algorithm, the effects of the additional velocity and attitude information on positioning accuracy during GNSS signal blockages were analyzed with a Kalman filter. The results show that yaw information with a precision better than 0.5 degrees is needed to improve the existing positioning algorithm by more than 10%.
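As a rough illustration of how image-derived velocity and yaw observations can bridge a GNSS outage, the sketch below runs a dead-reckoning prediction and a Kalman update on a simple [x, y, yaw, v] state. The state layout and noise values are assumptions used only for illustration; this is not the authors' filter, and the 0.5-degree figure above appears here merely as an example measurement precision.

```python
import numpy as np

def predict(x, P, dt, q_v=0.5, q_yaw=0.01):
    """Dead-reckoning prediction for state [px, py, yaw, v] during a GNSS blockage."""
    px, py, yaw, v = x
    x_pred = np.array([px + v * np.cos(yaw) * dt,
                       py + v * np.sin(yaw) * dt,
                       yaw, v])
    F = np.array([[1, 0, -v * np.sin(yaw) * dt, np.cos(yaw) * dt],
                  [0, 1,  v * np.cos(yaw) * dt, np.sin(yaw) * dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    Q = np.diag([0.0, 0.0, q_yaw * dt, q_v * dt])   # process noise (assumed values)
    return x_pred, F @ P @ F.T + Q

def update_image_obs(x, P, z, r_yaw=np.deg2rad(0.5), r_v=0.1):
    """Kalman update with z = [yaw_meas, v_meas] assumed to come from the image module."""
    H = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1]])
    R = np.diag([r_yaw**2, r_v**2])
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P
```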

3D Costmap Generation and Path Planning for Reliable Autonomous Flight in Complex Indoor Environments (복합적인 실내 환경 내 신뢰성 있는 자율 비행을 위한 3차원 장애물 지도 생성 및 경로 계획 알고리즘)

  • Boseong Kim; Seungwook Lee; Jaeyong Park; Hyunchul Shim
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.337-345 / 2023
  • In this paper, we propose a 3D LiDAR sensor-based costmap generation method and a path planning algorithm that uses it for reliable autonomous flight in complex indoor environments. 3D path planning is essential for the reliable operation of UAVs; however, existing grid-search-based or random-sampling-based path planning algorithms in 3D space require a large amount of computation, while weight-constrained UAVs need reliable path planning results in real time. To solve this problem, we propose a method that divides the 3D space into several 2D spaces, together with a path planning algorithm that considers the distance to obstacles within each space. Among the paths generated in each space, the final path (best path) that the UAV will follow is determined through the proposed objective function, which considers the rotation angle of the 2D space, the path length, and the previous best path. The proposed methods have been verified through autonomous UAV flights in real environments and show reliable obstacle avoidance performance in various complex settings.
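The best-path selection described above can be pictured as scoring each candidate produced in a rotated 2D slice by its slice rotation angle, its length, and its consistency with the previous best path. The sketch below is a generic scoring function under those assumptions; the weights and the deviation measure are illustrative, not the published formulation.

```python
import numpy as np

def path_cost(path, plane_angle, prev_best, w_len=1.0, w_rot=0.5, w_prev=0.3):
    """path: (N,3) waypoints; plane_angle: rotation of the 2D slice [rad]."""
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    rot_term = abs(plane_angle)
    # deviation from the previous best path, compared over the shorter of the two
    n = min(len(path), len(prev_best))
    dev = np.mean(np.linalg.norm(path[:n] - prev_best[:n], axis=1)) if n else 0.0
    return w_len * length + w_rot * rot_term + w_prev * dev

def select_best_path(candidates, prev_best):
    """candidates: list of (path, plane_angle) tuples, one per rotated 2D slice."""
    costs = [path_cost(p, a, prev_best) for p, a in candidates]
    return candidates[int(np.argmin(costs))][0]
```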

Reliable Autonomous Reconnaissance System for a Tracked Robot in Multi-floor Indoor Environments with Stairs (다층 실내 환경에서 계단 극복이 가능한 궤도형 로봇의 신뢰성 있는 자율 주행 정찰 시스템)

  • Juhyeong Roh; Boseong Kim; Dokyeong Kim; Jihyeok Kim; D. Hyunchul Shim
    • The Journal of Korea Robotics Society / v.19 no.2 / pp.149-158 / 2024
  • This paper presents a robust autonomous navigation and reconnaissance system for tracked robots, designed to handle complex multi-floor indoor environments with stairs. We introduce a localization algorithm that adjusts scan matching parameters to robustly estimate positions and create maps in feature-scarce environments such as narrow rooms and staircases. Our system also features a path planning algorithm that calculates distance costs from surrounding obstacles, integrated with a specialized PID controller tuned to the robot's differential kinematics for collision-free navigation in confined spaces. The perception module leverages multi-image fusion and camera-LiDAR fusion to accurately detect and map the 3D positions of objects around the robot in real time. Practical tests in real settings have verified that the system performs reliably, and we expect this autonomous reconnaissance system to be of practical use in actual disaster situations and in environments that are difficult for humans to access.
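For orientation, a heading-error PID combined with the differential (track) kinematics of a tracked robot might look like the sketch below. The gains, track width, and saturation limit are placeholder assumptions; the paper's tuned controller is not reproduced here.

```python
import numpy as np

class HeadingPID:
    """PID on heading error -> commanded yaw rate for a tracked (differential-drive) robot."""
    def __init__(self, kp=1.5, ki=0.0, kd=0.2, max_rate=1.0):
        self.kp, self.ki, self.kd, self.max_rate = kp, ki, kd, max_rate
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, heading_err, dt):
        # wrap the error to [-pi, pi] so the robot always turns the shorter way
        err = (heading_err + np.pi) % (2 * np.pi) - np.pi
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        omega = self.kp * err + self.ki * self.integral + self.kd * deriv
        return float(np.clip(omega, -self.max_rate, self.max_rate))

def track_speeds(v, omega, track_width=0.4):
    """Differential kinematics: convert (linear, angular) command to left/right track speeds."""
    return v - omega * track_width / 2.0, v + omega * track_width / 2.0
```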

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee; Duk-jin Kim; Junwoo Kim; Juyoung Song
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1245-1254 / 2023
  • Significant research has been conducted on W-band synthetic aperture radar (SAR) systems that use 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, the point cloud acquired from stereo cameras or LiDAR must be transformed along 6 degrees of freedom (DOF) and applied to the SAR signal processing. However, matching images acquired from different sensors is difficult because of their different geometric structures. In this study, we present a method that extracts an optimized depth map by estimating the 6 DOF of the point cloud with a gradient descent method driven by the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared with SAR images reconstructed from radar coordinates, the image reconstructed with the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index.
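The entropy-driven refinement can be pictured as numerically descending the entropy of the reconstructed image with respect to the six pose parameters. The sketch below assumes a user-supplied `reconstruct_fn` as a stand-in for the W-band SAR reconstruction (which is not shown) and uses finite-difference gradients; the histogram binning, learning rate, and iteration count are arbitrary choices, not the paper's settings.

```python
import numpy as np

def image_entropy(img, n_bins=64):
    """Shannon entropy of the normalized intensity histogram of a (complex) SAR image."""
    hist, _ = np.histogram(np.abs(img).ravel(), bins=n_bins, density=True)
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def refine_6dof(reconstruct_fn, params0, lr=1e-3, eps=1e-4, n_iter=100):
    """reconstruct_fn(params) -> SAR image; params = [tx, ty, tz, roll, pitch, yaw]."""
    params = np.asarray(params0, dtype=float)
    for _ in range(n_iter):
        base = image_entropy(reconstruct_fn(params))
        grad = np.zeros_like(params)
        for i in range(len(params)):
            step = params.copy()
            step[i] += eps
            grad[i] = (image_entropy(reconstruct_fn(step)) - base) / eps
        params -= lr * grad   # descend the entropy surface: sharper focus = lower entropy
    return params
```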

Evaluation of Applicability for 3D Scanning of Abandoned or Flooded Mine Sites Using Unmanned Mobility (무인 이동체를 이용한 폐광산 갱도 및 수몰 갱도의 3차원 형상화 위한 적용성 평가)

  • Soolo Kim; Gwan-in Bak; Sang-Wook Kim; Seung-han Baek
    • Tunnel and Underground Space / v.34 no.1 / pp.1-14 / 2024
  • An image-reconstruction approach that deploys unmanned mobile platforms equipped with high-speed LiDAR (Light Detection And Ranging) has been proposed to reconstruct the shape of abandoned mines. Unmanned platforms are remarkably useful in abandoned mines beset by operational difficulties including, but not limited to, obstacles, sludge, flooded sections, and narrow tunnels of 1.5 m or more in diameter. For real abandoned mine sites, quadruped robots, quadcopter drones, and underwater drones were deployed on land, in the air, and in water-filled tunnels, respectively. In addition to scanning the mines with 2D solid-state LiDAR sensors, rotating the sensor at an inclination angle increases the efficiency of simultaneously reconstructing tunnel shapes and detecting obstacles. The sensor attitude and robot posture were used to compute the rotation matrices needed to derive geographical coordinates from the solid-state LiDAR data. The quadruped robot then scanned an actual site to reconstruct the tunnel shape. Finally, the elements needed to increase utility in real field operations were identified and proposed.
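Deriving world coordinates from the sensor tilt and the robot posture reduces to chaining two rigid transforms, as in the generic sketch below. The frame names, Euler convention, and mounting offsets are assumptions for illustration, not the system's actual calibration.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Body-to-world rotation (Z-Y-X Euler convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return (np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]) @
            np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @
            np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]]))

def lidar_to_world(points_sensor, robot_pos, robot_rpy, sensor_rpy, sensor_offset):
    """points_sensor: (N,3) returns in the (tilted, rotating) solid-state LiDAR frame.
    World point = R_robot @ (R_sensor @ p + t_sensor) + t_robot."""
    R_robot = rot_zyx(*robot_rpy)     # robot posture from its state estimator
    R_sensor = rot_zyx(*sensor_rpy)   # inclination/rotation of the LiDAR mount
    p_body = points_sensor @ R_sensor.T + np.asarray(sensor_offset)
    return p_body @ R_robot.T + np.asarray(robot_pos)
```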

Geometric Regularization of Irregular Building Polygons: A Comparative Study

  • Sohn, Gun-Ho; Jwa, Yoon-Seok; Tao, Vincent; Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.6_1 / pp.545-555 / 2007
  • 3D buildings are the most prominent features of urban scenes. A few megacities around the globe have been virtually reconstructed as photo-realistic 3D models, which are accessible to the public through state-of-the-art online mapping services. Considerable research effort has gone into automatic reconstruction techniques for large-scale 3D building models from remotely sensed data. However, existing methods still produce irregular building polygons, owing to errors induced partly by uncalibrated sensor systems and scene complexity, and partly by sensor resolution that is inappropriate for the observed object scales. A geometric regularization technique is therefore required to rectify irregular building polygons quickly captured from such sensor data. This paper aims to develop a new method for regularizing noisy building outlines extracted from airborne LiDAR data and to evaluate its performance against existing methods, namely Douglas-Peucker polyline simplification, total least-squares adjustment, model hypothesis-verification, and rule-based rectification. Based on the Minimum Description Length (MDL) principle, a new objective function, Geometric Minimum Description Length (GMDL), is introduced to regularize geometric noise by favoring repeated line directions and regular angle transitions while minimizing the number of vertices used. After generating hypothetical regularized models, a global optimum of geometric regularity is obtained by verifying the entire solution space. A comparative evaluation of the proposed regularizer is conducted using both simulated and real building vectors with various noise levels. The results show that GMDL outperforms the selected existing algorithms at most noise levels.
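Of the compared baselines, Douglas-Peucker polyline simplification is the most widely known; a compact reference implementation is sketched below for orientation (the GMDL objective itself is not reproduced here). The tolerance value passed by the caller is an arbitrary example.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    ab = b - a
    if np.allclose(ab, 0):
        return float(np.linalg.norm(p - a))
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def douglas_peucker(points, tol):
    """Keep the vertex farthest from the chord if it exceeds tol, then recurse."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    dists = [point_segment_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i = int(np.argmax(dists)) + 1
    if dists[i - 1] > tol:
        left = douglas_peucker(points[:i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([points[0], points[-1]])
```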

Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook; Lim, Pyeong-chae; Chi, Junhwa; Kim, Taejung; Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1125-1139 / 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral sensor or a LiDAR sensor. Because of this physical offset, misalignment between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor must be swapped regularly, and a high cost must be paid each time to acquire new calibration parameters. In this study, we establish a precise sensor model equation that can be applied to multiple sensors in common and propose an independent physical-offset estimation method. The proposed method consists of three steps. First, we define a rotation matrix appropriate for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for the physical-offset estimation is established by extracting correspondences between ground control points and the data observed by the sensor. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions with different latitudes and longitudes (Jeon-ju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that the misalignment between images was corrected after the physical offset was applied in the sensor model equation. Absolute position accuracy was analyzed on the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. Furthermore, the relative position accuracy between the adjusted point cloud and the hyperspectral images at a specific point was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method and demonstrating the feasibility of multi-sensor fusion. We expect that a flexible multi-sensor platform can be operated at reduced cost through this independent parameter estimation method.
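Under a sensor model of the form "ground point = GNSS position + IMU rotation applied to (body-frame offset + scaled sensor ray)", the constant physical offset can be estimated by averaging the body-frame residual implied by each GCP correspondence. The sketch below is a simplified illustration under that assumed model, with the boresight rotation folded into the ray and all variable names invented; it is not the paper's observation equation.

```python
import numpy as np

def estimate_physical_offset(gcps, gnss_positions, imu_rotations, sensor_rays, scales):
    """Mean (i.e., least-squares) estimate of a constant body-frame lever-arm offset.
    Assumed model:  X_gcp = X_gnss + R_imu @ (offset + s * ray_sensor)."""
    residuals = []
    for X_gcp, X_gnss, R_imu, ray, s in zip(gcps, gnss_positions,
                                            imu_rotations, sensor_rays, scales):
        # rotate the world-frame residual back into the body frame, remove the scaled ray
        residuals.append(R_imu.T @ (np.asarray(X_gcp) - np.asarray(X_gnss))
                         - s * np.asarray(ray))
    return np.mean(residuals, axis=0)   # every row is an observation of the same offset
```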

Development of LiDAR-Based MRM Algorithm for LKS System (LKS 시스템을 위한 라이다 기반 MRM 알고리즘 개발)

  • Son, Weon Il; Oh, Tae Young; Park, Kihong
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.1 / pp.174-192 / 2021
  • The LiDAR sensor provides higher cognitive performance than cameras and radar but has been difficult to apply to ADAS or autonomous driving because of its high price. As its price falls rapidly, however, expectations are rising that existing autonomous driving functions can be improved by taking advantage of the LiDAR sensor. In Level 3 autonomous vehicles, when a dangerous situation arises in the cognitive module because of a sensor defect or sensor limitation, the driver must take control of the vehicle for manual driving. If the driver does not respond to the request, the system must automatically intervene and execute a minimum risk maneuver (MRM) to keep the risk within a tolerable level. Against this background, a LiDAR-based LKS MRM algorithm was developed in this study for cases in which normal LKS operation is impossible due to problems in the cognitive system. From the point cloud data collected by the LiDAR, the algorithm generates the trajectory of the vehicle in front through object clustering and converts it into target waypoints for the ego vehicle. Thus, if the camera-based LKS is not operating normally, LiDAR-based path tracking control is performed as the MRM. The HAZOP method was used to identify risk sources in the LKS cognitive system, and test scenarios were derived from them and used in a simulation-based validation. The simulation results indicate that the proposed LiDAR-based LKS MRM algorithm prevents lane departure in dangerous situations caused by various faults in the LKS cognitive system and could help prevent traffic accidents.
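One way to turn the clustered centroids of the preceding vehicle into an ego steering command is the classic pure-pursuit law, used below only as a stand-in because the abstract does not specify the tracking controller. The spacing, look-ahead, and wheelbase values are arbitrary assumptions.

```python
import numpy as np

def waypoints_from_centroids(centroid_history, min_spacing=1.0):
    """Keep front-vehicle cluster centroids that are at least min_spacing metres apart."""
    waypoints = [np.asarray(centroid_history[0], dtype=float)]
    for c in centroid_history[1:]:
        if np.linalg.norm(np.asarray(c) - waypoints[-1]) >= min_spacing:
            waypoints.append(np.asarray(c, dtype=float))
    return np.asarray(waypoints)

def pure_pursuit_steering(ego_xy, ego_yaw, waypoints, lookahead=8.0, wheelbase=2.8):
    """Steering angle toward the first waypoint at least `lookahead` metres away."""
    rel = np.asarray(waypoints) - np.asarray(ego_xy)
    dists = np.linalg.norm(rel, axis=1)
    ahead = np.where(dists >= lookahead)[0]
    target = rel[ahead[0]] if len(ahead) else rel[-1]    # fall back to the last waypoint
    alpha = np.arctan2(target[1], target[0]) - ego_yaw   # heading error to the target
    ld = max(float(np.linalg.norm(target)), 1e-3)
    return float(np.arctan2(2.0 * wheelbase * np.sin(alpha), ld))
```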