• Title/Summary/Keyword: 2D lidar sensor

Search Results: 29

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor (딥러닝 알고리즘과 2D Lidar 센서를 이용한 이미지 분류)

  • Lee, Junho;Chang, Hyuk-Jun
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1302-1308
    • /
    • 2019
  • This paper presents an approach for classifying images constructed from position data acquired by a 2D Lidar sensor with a convolutional neural network (CNN). Lidar sensors have been widely used in unmanned devices owing to their advantages in terms of data accuracy and robustness against geometric distortion and light variations. A CNN consists of one or more convolutional and pooling layers and has shown satisfactory performance for image classification. In this paper, CNN architectures based on two different training methods, Gradient Descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method has two variants that differ in how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm. In addition, the LM variant with more frequent Hessian matrix approximation shows a smaller error than the other variant.
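The GD/LM contrast in this abstract can be sketched on a toy least-squares problem. The damped Gauss-Newton step below stands in for LM training; the learning rate, the fixed damping factor, and the linear model are illustrative assumptions, not the paper's CNN setup.

```python
import numpy as np

# Toy fit of y = a*x + b, comparing a Gradient Descent (GD) step with a
# Levenberg-Marquardt (LM) step. LM damps the Gauss-Newton Hessian
# approximation J^T J with a factor lam (kept fixed here for brevity).

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)

def residuals(theta):
    a, b = theta
    return a * x + b - y

def jacobian(theta):
    # d(residual)/da = x, d(residual)/db = 1 (linear model, so J is constant)
    return np.stack([x, np.ones_like(x)], axis=1)

theta = np.array([0.0, 0.0])
for _ in range(200):                       # plain gradient descent
    r, J = residuals(theta), jacobian(theta)
    theta = theta - 0.5 * (J.T @ r) / len(x)

theta_lm = np.array([0.0, 0.0])
lam = 1e-3
for _ in range(10):                        # damped Gauss-Newton (LM) steps
    r, J = residuals(theta_lm), jacobian(theta_lm)
    H = J.T @ J + lam * np.eye(2)          # approximated, damped Hessian
    theta_lm = theta_lm - np.linalg.solve(H, J.T @ r)

print(theta, theta_lm)  # both approach (2.0, 1.0); LM needs far fewer steps
```

The second-order information in `H` is what buys LM its faster convergence; the paper's two LM variants differ only in how often such an approximation is refreshed.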

Development of Highly Sensitive SWIR Photodetectors based on MAPI-capped PbS QDs (MAPI 리간드 치환형 PbS 양자점 기반의 고감도 단파장 적외선 광 검출기 개발)

  • Suji Choi;JinBeom Kwon;Yuntae Ha;Daewoong Jung
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.2
    • /
    • pp.93-97
    • /
    • 2024
  • With the development of promising future mobility and urban air mobility (UAM) technologies, the demand for LIDAR sensors has increased. The SWIR photodetector detects the laser returns used for the 3D mapping of a lidar sensor and is the most important component technology of a LIDAR sensor. SWIR photodetectors based on QDs operating in an eye-safe wavelength band above 1400 nm have been reported. QD-based SWIR photodetectors can be synthesized and processed in solution, offering low cost and simple processing. However, the organic ligands of QDs are insulating, which limits improvements in the sensitivity and stability of the photodetectors. Therefore, technology to replace the organic ligands with inorganic ligands must be developed. In this study, the organic ligands of synthesized PbS QDs were replaced with a MAPI inorganic ligand, and an SWIR photodetector was fabricated. Characterization of the fabricated photodetector confirmed that the device based on MAPI-capped PbS QDs exhibited up to 26.5% higher responsivity than one based on organic-ligand PbS QDs.

Build a Multi-Sensor Dataset for Autonomous Driving in Adverse Weather Conditions (열악한 환경에서의 자율주행을 위한 다중센서 데이터셋 구축)

  • Sim, Sungdae;Min, Jihong;Ahn, Seongyong;Lee, Jongwoo;Lee, Jung Suk;Bae, Gwangtak;Kim, Byungjun;Seo, Junwon;Choe, Tok Son
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.245-254
    • /
    • 2022
  • Sensor datasets are an essential component of autonomous driving research now that deep learning approaches are widely used. However, most driving datasets focus on typical conditions such as sunny or cloudy weather, and most deal only with color images and lidar. In this paper, we propose a driving dataset with multi-spectral images and lidar captured in adverse weather conditions such as snow, rain, smoke, and dust. The proposed data acquisition system has four types of cameras (color, near-infrared, shortwave, thermal), one lidar, two radars, and a navigation sensor. Ours is the first dataset to cover multi-spectral cameras in adverse weather conditions. The proposed dataset is annotated with 2D semantic labels, 3D semantic labels, and 2D/3D bounding boxes. Many tasks are supported by our dataset, for example object detection and drivable region detection. We also present some experimental results on the adverse weather dataset.

Analysis of Traversable Candidate Region for Unmanned Ground Vehicle Using 3D LIDAR Reflectivity (3D LIDAR 반사율을 이용한 무인지상차량의 주행가능 후보 영역 분석)

  • Kim, Jun;Ahn, Seongyong;Min, Jihong;Bae, Keunsung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.41 no.11
    • /
    • pp.1047-1053
    • /
    • 2017
  • The range data acquired by 2D/3D LIDAR, a core sensor for autonomous navigation of an unmanned ground vehicle, is effectively used for ground modeling and obstacle detection. In a road environment with ambiguous boundaries, however, range data alone does not provide enough information to analyze the traversable region. This paper presents a new method that uses the characteristics of LIDAR reflectivity to analyze candidate areas for better detection of a traversable region. After calibrating the reflectivity of each channel, we detected a candidate traversable area in the zone in front of the vehicle through a learning process on the LIDAR reflectivity. We validated the proposed candidate-region detection method through experiments in the real operating environment of an unmanned ground vehicle.

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. The fusion result is obtained by registering the data captured by the two sensors; registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. An RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; for instance, automatic driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
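The registration step described above, mapping a lidar return into the camera image through extrinsics and intrinsics, can be sketched with a pinhole model. The matrices K, R, t below are placeholder values, not the paper's calibration results.

```python
import numpy as np

# Minimal sketch of lidar-to-camera registration with a pinhole model.
# Intrinsics K and extrinsics (R, t) are made-up illustrative values.

K = np.array([[500.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 500.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # lidar-to-camera rotation (identity here)
t = np.array([0.0, 0.0, 0.0])          # lidar-to-camera translation

def lidar_to_pixel(angle_rad, rng_m):
    """Convert one 2D lidar return (angle, range) to a pixel and a depth."""
    # Point in the lidar scan plane, expressed in camera axes (z forward).
    p_lidar = np.array([rng_m * np.sin(angle_rad), 0.0, rng_m * np.cos(angle_rad)])
    p_cam = R @ p_lidar + t
    uvw = K @ p_cam                    # perspective projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return (u, v), p_cam[2]            # pixel coordinate and depth value

uv, depth = lidar_to_pixel(0.0, 2.0)
print(uv, depth)   # a return straight ahead lands at the principal point (320, 240)
```

Splatting each returned depth at its projected pixel (and interpolating the gaps) would yield the depthmap aligned with the RGB image.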

A Novel Human Detection Scheme using a Human Characteristics Function in a Low Resolution 2D LIDAR (저해상도 2D 라이다의 사람 특성 함수를 이용한 새로운 사람 감지 기법)

  • Kwon, Seong Kyung;Hyun, Eugin;Lee, Jin-Hee;Lee, Jonghun;Son, Sang Hyuk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.267-276
    • /
    • 2016
  • Human detection technologies are widely used in smart homes and autonomous vehicles. To detect humans, however, autonomous vehicle researchers have relied on high-resolution LIDAR, and smart home researchers have applied cameras with a narrow detection range. In this paper, we propose a novel method using a low-cost, low-resolution LIDAR that can detect humans quickly and precisely without complex learning algorithms or additional devices. In other words, humans can be distinguished from other objects by using a new human characteristics function that is empirically extracted from the characteristics of the human body. In addition, we verified the effectiveness of the proposed algorithm through a number of experiments.

A Study on Displacement Measurement Hardware of Retaining Walls based on Laser Sensor for Small and Medium-sized Urban Construction Sites

  • Kim, Jun-Sang;Kim, Jung-Yeol;Kim, Young-Suk
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1250-1251
    • /
    • 2022
  • Measuring management, which evaluates the stability of retaining walls with a variety of measuring instruments, is an important part of preventing wall collapse in advance. The current practice requires considerable human and material resources, since measurement companies must install instruments at various places on the retaining wall and visit the construction site to collect measurement data and evaluate the wall's stability. Our investigation found that this practice is poorly applicable to small and medium-sized urban construction sites (excavation depth < 10 m), where measuring management is not mandatory. Therefore, the purpose of this study is to develop laser-sensor-based hardware to support wall displacement measurement, together with its control software, applicable to small and medium-sized urban construction sites. A 2D lidar sensor, which is more economical than a 3D laser scanner, is applied as the element technology. The hardware is mounted on the corner strut of the retaining wall and collects point cloud data of the wall by rotating the 2D lidar sensor 360° with a servo motor. The collected point cloud data can be transmitted over Wi-Fi to a displacement analysis device (a notebook computer), whose control software operates the 2D lidar sensor and servo motor by remote access. The process of analyzing the displacement of a retaining wall with the developed hardware and software is as follows: the construction site manager uses the displacement analysis device to 1) collect the initial point cloud data; after a certain period, 2) comparison point cloud data is collected; and 3) the distance between the initial and comparison point cloud data is calculated.
In an indoor experiment, the analyses showed that a displacement of approximately 15 mm could be identified. In the future, an integrated system of the hardware designed here and the displacement analysis software still to be developed can be applied to small and medium-sized urban construction sites after several field experiments. Such a system compares favorably with current measuring-management practice in terms of ease of installation and dismantling, displacement measurement, and economic feasibility.
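The displacement step in the workflow above, comparing an initial and a later point cloud, can be sketched as a nearest-neighbour distance. The clouds below are synthetic stand-ins for the lidar scans, with the wall shifted by the 15 mm the abstract reports resolving.

```python
import numpy as np

# Sketch of the point-cloud comparison step: for each point in the later
# scan, take the distance to its nearest neighbour in the initial scan.
# Synthetic 2D data; a real scan would come from the rotating 2D lidar.

initial = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 11)])
comparison = initial + np.array([0.0, 0.015])   # wall moved 15 mm (0.015 m)

# Brute-force nearest neighbour (a k-d tree would suit large clouds better).
d = np.linalg.norm(comparison[:, None, :] - initial[None, :, :], axis=2)
displacement = d.min(axis=1)

print(displacement.max())   # recovers the 0.015 m (15 mm) shift
```

With point spacing much larger than the shift, each point's nearest neighbour is its own earlier position, so the minimum distance reads out the wall displacement directly.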

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.6
    • /
    • pp.547-557
    • /
    • 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy; recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition separates the return waveform into a mixture of components, which are then used to characterize the original data; the most common statistical mixture model for this purpose is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflecting surfaces within the waveform footprint, so the decomposition results ultimately affect the interpretation of the LIDAR waveform data. The computational requirements of waveform decomposition stem from two factors: (1) estimating the number of components in a mixture and the resulting parameters are inter-related problems that cannot be solved separately, and (2) parameter optimization has no closed-form solution and must be solved iteratively. Current state-of-the-art airborne LIDAR systems acquire more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256), and the results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting scheme yielded the best performance among the four approaches.
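The time-based weighting idea, spreading waveforms across workers so that the summed per-waveform cost estimates stay even, can be sketched with a greedy longest-processing-time assignment. This is an illustrative scheduler, not the paper's actual scheme, and the cost values are invented.

```python
import heapq

# Greedy LPT load balancing: sort jobs by decreasing cost estimate
# (e.g. previous decomposition time) and give each to the currently
# lightest-loaded worker, tracked with a min-heap.

def balance(costs, n_workers):
    """Return {worker: [job indices]} with roughly equal summed costs."""
    heap = [(0.0, w) for w in range(n_workers)]        # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for job, cost in sorted(enumerate(costs), key=lambda jc: -jc[1]):
        load, w = heapq.heappop(heap)                  # lightest worker
        assignment[w].append(job)
        heapq.heappush(heap, (load + cost, w))
    return assignment

costs = [5.0, 3.0, 3.0, 2.0, 2.0, 1.0]                # per-waveform estimates
print(balance(costs, 2))                              # both workers end at load 8.0
```

Unweighted splitting would just hand each worker an equal count of waveforms; the cost-weighted variants trade a little bookkeeping for far better utilization when per-waveform times vary widely.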

Development of Low Cost Autonomous-Driving Delivery Robot System Using SLAM Technology (SLAM 기술을 활용한 저가형 자율주행 배달 로봇 시스템 개발)

  • Donghoon Lee;Jehyun Park;Kyunghoon Jung
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.5
    • /
    • pp.249-257
    • /
    • 2023
  • This paper discusses the increasing need for autonomous delivery robots, driven by the current growth of the delivery market, rising delivery fees, the high cost of hiring delivery personnel, and the demand for contactless services. The hardware and complex software systems required to build and operate autonomous delivery robots are also expensive. As a low-cost alternative, this paper proposes an autonomous delivery robot platform that uses a combination of inexpensive sensors, a 2D LIDAR, a depth camera, and a tracking camera, in place of an expensive 3D LIDAR. The proposed robot was developed using the RTAB-Map SLAM open-source package for 2D mapping and overcomes the limitations of low-cost sensors by using the convex hull algorithm. The paper details the hardware and software configuration of the robot and presents the results of driving experiments. The proposed platform has significant potential for the delivery industry and other sectors.
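The abstract credits a convex hull step with compensating for the sparse low-cost sensor data but does not detail how it is used. As a sketch of the named algorithm, here is a standard 2D convex hull (Andrew's monotone chain) over a set of obstacle points.

```python
# Andrew's monotone chain convex hull for 2D points. How exactly the hull
# feeds the robot's pipeline (e.g. obstacle footprints from sparse scans)
# is an assumption; the abstract names only the algorithm.

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(pts))   # interior point (1, 1) is dropped
```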

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing techniques for calibrating the external transformation between the lidar and the camera have disadvantages: they require special calibration objects, or the objects are too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates from four 3-D points selected by RANSAC from the range data of the sphere; the 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data is acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method achieves a reprojection error of about 2 pixels, and its performance is analyzed in comparison with existing methods.
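The core geometric step of this abstract, recovering a sphere center from four surface points (applied inside RANSAC on the real range data), can be sketched directly. Subtracting the sphere equation for point 0 from the others eliminates the radius and leaves a 3x3 linear system in the center; the sample points below are invented for illustration.

```python
import numpy as np

# For points p_i on a sphere with center c: |p_i - c|^2 = r^2.
# Subtracting the equation for p_0 gives 2(p_i - p_0) . c = |p_i|^2 - |p_0|^2,
# three linear equations in c for four points.

def sphere_center(p):
    p = np.asarray(p, dtype=float)        # shape (4, 3)
    A = 2.0 * (p[1:] - p[0])              # rows 2(p_i - p_0)
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)

# Four points on a unit sphere centered at (1, 2, 3).
pts = [[2, 2, 3], [0, 2, 3], [1, 3, 3], [1, 2, 4]]
print(sphere_center(pts))   # → [1. 2. 3.]
```

Inside RANSAC, this solve runs on random 4-point samples of the range data, and the center with the most inliers on the fitted sphere is kept for the extrinsic calibration.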