• Title/Summary/Keyword: LiDAR sensor


Anomaly Event Detection Algorithm of Single-person Households Fusing Vision, Activity, and LiDAR Sensors

  • Lee, Do-Hyeon;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.23-31 / 2022
  • Due to the recent COVID-19 outbreak, an aging population, and an increase in single-person households, the amount of time that household members spend on various activities at home has increased significantly. In this study, we propose an algorithm for detecting anomalies involving members of single-person households, including the elderly, based on the results of human movement and fall detection from an image sensor algorithm using home CCTV, an activity sensor algorithm using the acceleration sensor built into a smartphone, and a 2D LiDAR sensor-based algorithm. However, each single-sensor algorithm has the disadvantage that it is difficult to detect anomalies in certain situations because of the limitations of its sensor. Accordingly, rather than relying on a single sensor-based algorithm, we developed a fusion method that combines the algorithms to detect anomalies in a wider range of situations. We evaluated the performance of the algorithms on the data collected by each sensor and show, through specific scenarios, that even in situations where one algorithm alone cannot accurately detect an anomaly event, the algorithms complement one another to detect anomaly events accurately and efficiently.
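
The abstract does not spell out the fusion rule, so the following Python sketch only illustrates one plausible way to combine per-sensor anomaly decisions (an "any available sensor fires" rule); the dataclass, names, and logic are assumptions, not the authors' method.

```python
# Hypothetical fusion of per-sensor anomaly decisions; names and rule are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorResult:
    available: bool            # sensor produced a usable result in this time window
    anomaly: Optional[bool]    # per-sensor anomaly decision (None if unavailable)

def fuse_anomaly(vision: SensorResult, activity: SensorResult, lidar: SensorResult) -> bool:
    """Declare an anomaly event if any available sensor flags one,
    so sensors that are blind in a given situation are simply skipped."""
    results = [r for r in (vision, activity, lidar) if r.available and r.anomaly is not None]
    if not results:
        return False                      # nothing to decide on
    return any(r.anomaly for r in results)

# Example: CCTV sees a fall, but the smartphone is not being carried.
print(fuse_anomaly(SensorResult(True, True),
                   SensorResult(False, None),
                   SensorResult(True, False)))   # -> True
```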

Algorithm on Detection and Measurement for Proximity Object based on the LiDAR Sensor (LiDAR 센서기반 근접물체 탐지계측 알고리즘)

  • Jeong, Jong-teak;Choi, Jo-cheon
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.192-197 / 2020
  • Recently, technologies related to autonomous driving have been studied with the goal of safe operation and accident prevention for vehicles. Radar and camera technologies have been used to detect obstacles in this autonomous vehicle research. More recently, methods using LiDAR sensors have been considered for detecting nearby objects and accurately measuring the separation distance in autonomous navigation. LiDAR calculates distance from the time difference of the reflected beams, which allows precise distance measurement, but it has the disadvantage that the object recognition rate can be reduced in adverse atmospheric conditions. In this paper, point cloud data processed with trigonometric functions and a linear regression model are used to implement a measurement algorithm that improves real-time object detection and reduces the error in measured separation distances, based on the improved reliability of the raw LiDAR data. It was verified, using the Python Imaging Library, that the range of object detection errors can be reduced.
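
As a rough illustration of the two ideas named in the abstract, the sketch below computes a range from the beam's round-trip time and fits a least-squares line through an object's 2D LiDAR points to smooth per-beam noise; the functions and the example scene are assumptions, not the paper's algorithm.

```python
# Illustrative only: time-of-flight range and a simple line fit over 2D LiDAR points.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(delta_t_seconds: float) -> float:
    """Range from the round-trip time of the reflected beam."""
    return C * delta_t_seconds / 2.0

def polar_to_xy(ranges: np.ndarray, angles_rad: np.ndarray) -> np.ndarray:
    """Convert a LiDAR sweep (range, bearing) into Cartesian points (N x 2)."""
    return np.stack([ranges * np.cos(angles_rad), ranges * np.sin(angles_rad)], axis=1)

def fit_object_edge(points: np.ndarray) -> tuple[float, float]:
    """Least-squares line x = a*y + b through the points of one detected object,
    smoothing per-beam range noise before measuring the separation distance."""
    a, b = np.polyfit(points[:, 1], points[:, 0], deg=1)
    return a, b

# Example: a flat surface about 5 m ahead observed over a 20-degree sweep with noise.
angles = np.deg2rad(np.linspace(-10, 10, 21))
ranges = 5.0 / np.cos(angles) + np.random.normal(0, 0.02, angles.size)
a, b = fit_object_edge(polar_to_xy(ranges, angles))
print(f"fitted separation distance at boresight: {b:.3f} m")
```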

Development of Human Following Method of Mobile Robot Using QR Code and 2D LiDAR Sensor (QR 2D 코드와 라이다 센서를 이용한 모바일 로봇의 사람 추종 기법 개발)

  • Lee, SeungHyeon;Choi, Jae Won;Van Dang, Chien;Kim, Jong-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.15 no.1 / pp.35-42 / 2020
  • In this paper, we propose a method that keeps the robot at a distance of 30 to 45 cm from the user, in consideration of each individual's personal space and comfort, by using a 2D LiDAR sensor (LDS-01) as a secondary sensor along with a QR code. First, the robot determines the brightness of the video and the presence of a QR code. If the scene is bright and a QR code indicating a person's presence is detected, the scan range of the 2D LiDAR sensor is set based on the position of the QR code in the captured image to find and follow the correct target. On the other hand, when the robot cannot recognize the QR code due to low light, the target is followed using only the 2D LiDAR sensor, together with a database of obstacles and human actions built before the experiment. As a result, our robot can follow the target person in four situations based on nine locations with seven types of motion.
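
A minimal sketch of the distance-keeping idea, assuming a simple proportional rule on the 30-45 cm band and a rough mapping from the QR code's pixel position to a LiDAR bearing window; gains, field of view, and function names are illustrative only, not the paper's control law.

```python
# Illustrative following control; gains and camera FOV are assumed values.
def follow_speed(distance_m: float, low: float = 0.30, high: float = 0.45,
                 gain: float = 0.8, max_speed: float = 0.5) -> float:
    """Forward speed (m/s) that keeps the robot inside the 30-45 cm band:
    advance when the person is too far, back off when too close, hold otherwise."""
    if distance_m > high:
        cmd = gain * (distance_m - high)
    elif distance_m < low:
        cmd = -gain * (low - distance_m)
    else:
        cmd = 0.0
    return max(-max_speed, min(max_speed, cmd))

def lidar_window_from_qr(qr_center_px: int, image_width_px: int,
                         fov_deg: float = 60.0, half_window_deg: float = 10.0) -> tuple[float, float]:
    """Map the QR code's horizontal pixel position to a LiDAR bearing window,
    so only beams around the person are used for the range measurement."""
    bearing = (qr_center_px / image_width_px - 0.5) * fov_deg
    return bearing - half_window_deg, bearing + half_window_deg
```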

Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang;Lee, Young Jae;Kim, Chang Joo;Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences / v.14 no.4 / pp.369-378 / 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance over a general ground surface in GNSS-denied environments. The ground surface considered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. In its implementation, the presented system consists of a strapdown IMU and an aiding sensor block composed of a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical flow vectors from the vision sensor and range information from the LiDAR to the ground are used to overcome the performance limit of the tactical-grade inertial navigation solution without a GNSS signal. In the filter realization, the INS error model is employed, with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, under a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented within the extended Kalman filter framework, and its performance is demonstrated through a simulation study with an aircraft flight trajectory scenario.
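
The abstract mentions estimating the ground slope from two bisectional LiDAR signals but does not give the formulation; the sketch below shows one plausible geometric reconstruction from two ranges at known beam angles, offered as an assumption-laden illustration rather than the authors' derivation.

```python
# Hypothetical geometry: slope angle from two LiDAR ranges at known beam angles.
import math

def ground_slope_angle(r1: float, theta1: float, r2: float, theta2: float) -> float:
    """Two beams at angles theta1/theta2 (rad, measured from straight down) hit
    the ground at ranges r1/r2. Their hit points in the vertical plane are
    (x, z) = (r*sin(theta), -r*cos(theta)); the slope angle is the inclination
    of the line joining the two hit points."""
    x1, z1 = r1 * math.sin(theta1), -r1 * math.cos(theta1)
    x2, z2 = r2 * math.sin(theta2), -r2 * math.cos(theta2)
    return math.atan2(z2 - z1, x2 - x1)

# Example: flat ground 10 m below gives a slope of ~0 degrees.
print(math.degrees(ground_slope_angle(10.0, 0.0, 10.0 / math.cos(0.2), 0.2)))
```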

Parameter Analysis for Super-Resolution Network Model Optimization of LiDAR Intensity Image (LiDAR 반사 강도 영상의 초해상화 신경망 모델 최적화를 위한 파라미터 분석)

  • Seungbo Shim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.5 / pp.137-147 / 2023
  • LiDAR is used in autonomous driving and various industrial fields to measure the size and distance of objects. The sensor also provides intensity images based on the amount of reflected light, which has a positive effect on sensor data processing by providing information about the shape of the object. LiDAR offers higher performance as the resolution increases, but at an increased cost, and these conditions also apply to LiDAR intensity images: expensive equipment is essential to acquire high-resolution ones. This study therefore developed an artificial intelligence model that improves low-resolution LiDAR intensity images into high-resolution ones, and performed a parameter analysis to find the optimal super-resolution neural network model. The super-resolution algorithm was trained and verified using 2,500 LiDAR intensity images, and as a result, the resolution of the intensity images was improved. These results can be applied to the autonomous driving field and help improve driving environment recognition and obstacle detection performance.
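
The abstract does not specify the network, so the following PyTorch sketch only indicates the kind of hyper-parameters (channel width, kernel sizes, upscale factor) such a parameter analysis would sweep; the architecture is a generic SRCNN/PixelShuffle stand-in, not the paper's model.

```python
# Generic single-channel super-resolution stand-in; all sizes are assumed values.
import torch
import torch.nn as nn

class IntensitySR(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, scale * scale, kernel_size=5, padding=2),
        )
        self.shuffle = nn.PixelShuffle(scale)   # rearranges channels into a higher-resolution map

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.body(low_res))

# Example: a 1x1x64x64 low-resolution intensity patch becomes 1x1x128x128.
print(IntensitySR()(torch.zeros(1, 1, 64, 64)).shape)
```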

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor has limitations due to the price of such expensive sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we utilize a commercial visual-inertial odometry sensor to estimate the current position and attitude states, and based on these state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor fusion system proposed in this paper.
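
The core geometric step, projecting each 2D scan into the world frame with the VIO pose, can be sketched as below; the frame conventions and the assumption that the scan lies in the sensor's x-y plane are illustrative simplifications, not the paper's exact setup.

```python
# Illustrative projection of a 2D LiDAR scan into the world frame using a VIO pose.
import numpy as np

def scan_to_world(ranges: np.ndarray, angles: np.ndarray,
                  R_world_body: np.ndarray, t_world: np.ndarray) -> np.ndarray:
    """Place one 2D scan (taken in the sensor's x-y plane) into the world frame
    using the current rotation matrix and translation from the VIO estimate."""
    pts_body = np.stack([ranges * np.cos(angles),
                         ranges * np.sin(angles),
                         np.zeros_like(ranges)], axis=1)        # N x 3, z = 0 in sensor frame
    return pts_body @ R_world_body.T + t_world                  # N x 3 in the world frame

# Accumulating such scans over many poses yields the 3D point cloud map.
```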

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying BVLOS (beyond visual line of sight). We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can be degraded in specific situations depending on the characteristics of that sensor, so when the result of a single-sensor detection algorithm is absent or false, the detection accuracy needs to be complemented. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We then estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
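
One way to realize the "3D position from pixel position plus LiDAR distance" step is a pinhole back-projection scaled to the measured range, as sketched below with assumed intrinsics; the Kalman-filter fusion itself is omitted, so this is an illustration, not the paper's implementation.

```python
# Illustrative back-projection; camera intrinsics in the example are assumed values.
import numpy as np

def pixel_and_range_to_xyz(u: float, v: float, depth_m: float,
                           fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project pixel (u, v) to a ray through the camera and scale it so the
    point lies at the LiDAR-measured distance from the camera."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return depth_m * ray / np.linalg.norm(ray)    # 3D point in the camera frame

# Example: an object at the image center, 15 m away.
print(pixel_and_range_to_xyz(320, 240, 15.0, fx=600, fy=600, cx=320, cy=240))
```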

Measuring Rebar Position Error and Marking Work for Automated Layout Robot Using LiDAR Sensor (마킹 로봇의 자동화를 위한 LiDAR 센서 기반 철근배근 오차 측정 및 먹매김 수행 프로세스 연구)

  • Kim, Taehoon;Lim, Hyunsu;Cho, Kyuman
    • Journal of the Korea Institute of Building Construction / v.23 no.2 / pp.209-220 / 2023
  • Ensuring accuracy within tolerance is crucial for a marking robot; however, rebar displacement frequently occurs during the structural work process, necessitating corrections to layout lines or rebar locations. To guarantee precision and automation, the marking robot must be capable of measuring rebar error and determining appropriate adjustments for marking lines and rebar placement. Consequently, this study proposes a method for measuring rebar location error using a LiDAR sensor and implementing a layout assessment process based on the measurement results. The rebar recognition experiment using the LiDAR sensor yielded an average error of 5mm, demonstrating a reliable level of accuracy for wall rebars. Additionally, this research proposed a process that enables the robot to evaluate rebar and marking corrections based on the error range. The findings of this study can contribute to the automated operation of marking robots while accounting for construction errors, potentially leading to improvements in structural quality.
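
A hypothetical sketch of the decision step, comparing a LiDAR-measured rebar position with the design layout and choosing an action by error range; the tolerance values and rules are placeholders, since the paper's actual thresholds are not given in the abstract.

```python
# Placeholder tolerances; only the shape of the decision logic is illustrated.
def assess_rebar(measured_mm: float, design_mm: float,
                 marking_tol_mm: float = 10.0, rebar_tol_mm: float = 30.0) -> str:
    """Return the action implied by the measured rebar placement error."""
    error = measured_mm - design_mm
    if abs(error) <= marking_tol_mm:
        return "mark as designed"                        # error absorbed by the marking tolerance
    if abs(error) <= rebar_tol_mm:
        return f"shift marking line by {error:+.1f} mm"  # compensate within the allowable range
    return "flag rebar for correction"                   # displacement too large to absorb

print(assess_rebar(1012.0, 1000.0))   # -> shift marking line by +12.0 mm
```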

Process Development for Optimizing Sensor Placement Using 3D Information by LiDAR (LiDAR자료의 3차원 정보를 이용한 최적 Sensor 위치 선정방법론 개발)

  • Yu, Han-Seo;Lee, Woo-Kyun;Choi, Sung-Ho;Kwak, Han-Bin;Kwak, Doo-Ahn
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.3-12 / 2010
  • In previous studies, digital measurement systems and analysis algorithms were developed using related techniques such as aerial photograph detection and high-resolution satellite image processing. However, those studies were limited to 2-dimensional geo-processing. Therefore, it is necessary to apply 3-dimensional spatial information and coordinate systems for higher accuracy in recognizing and locating geo-features. The objective of this study was to develop a stochastic algorithm for optimal sensor placement using a 3-dimensional spatial analysis method. The 3-dimensional information from LiDAR was applied in the sensor field algorithm based on 2- and/or 3-dimensional gridded points. The study was conducted with three case studies using the optimal sensor placement algorithm: the first based on 2-dimensional space without obstacles (2D, no obstacles), the second based on 2-dimensional space with obstacles (2D, obstacles), and the third based on 3-dimensional space with obstacles (3D, obstacles). Finally, this study suggests a methodology for optimal sensor placement, especially for ground-settled sensors, using LiDAR data, and shows the possibility of applying the algorithm to information collection using sensors.
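
To convey the flavor of the placement problem, the sketch below runs a greedy maximum-coverage selection over LiDAR-derived grid points; the paper's stochastic algorithm and its obstacle and line-of-sight handling are not reproduced here, so treat this purely as an illustration.

```python
# Greedy coverage placement over gridded points; range-only model, obstacles omitted.
import numpy as np

def greedy_placement(candidates: np.ndarray, targets: np.ndarray,
                     sensing_range: float, n_sensors: int) -> list[int]:
    """candidates/targets are N x 3 grid points (e.g., derived from LiDAR data);
    returns indices of the chosen sensor locations, one per iteration."""
    covered = np.zeros(len(targets), dtype=bool)
    chosen = []
    for _ in range(n_sensors):
        best_idx, best_gain = -1, -1
        for i, c in enumerate(candidates):
            in_range = np.linalg.norm(targets - c, axis=1) <= sensing_range
            gain = int(np.sum(in_range & ~covered))   # newly covered cells only
            if gain > best_gain:
                best_idx, best_gain = i, gain
        chosen.append(best_idx)
        covered |= np.linalg.norm(targets - candidates[best_idx], axis=1) <= sensing_range
    return chosen
```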

Implementation of an Obstacle Avoidance System Based on a Low-cost LiDAR Sensor for Autonomous Navigation of an Unmanned Ship (무인선박의 자율운항을 위한 저가형 LiDAR센서 기반의 장애물 회피 시스템 구현)

  • Song, HyunWoo;Lee, Kwangkook;Kim, Dong Hun
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.3 / pp.480-488 / 2019
  • In this paper, we propose an obstacle avoidance system that allows an unmanned ship to navigate safely in dynamic environments. A one-dimensional low-cost LiDAR sensor is used, and a servo motor sweeps it so that the sensor covers a two-dimensional space; the distance and direction of an obstacle are measured through this two-dimensional LiDAR scan. The unmanned ship is controlled through an application on a tablet PC: the user inputs the coordinates of the destination in Google Maps, and the position of the unmanned ship is then compared with the position of the destination using GPS and a geomagnetic sensor. If the unmanned ship encounters obstacles while moving to its destination, it avoids them through a fuzzy control-based algorithm. The experimental results show that an obstacle avoidance system for an unmanned ship can be effectively constructed with a low-cost LiDAR sensor using fuzzy control.
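
A very small fuzzy-style steering sketch, assuming triangular memberships and a weighted-average defuzzification; the paper's actual membership functions and rule base are not given in the abstract, so everything below is illustrative.

```python
# Illustrative fuzzy-style avoidance; memberships, rules, and turn rates are assumed.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def steer_command(obstacle_dist_m: float, obstacle_bearing_deg: float) -> float:
    """Blend 'turn away hard' and 'turn away gently' rules by how near the obstacle is.
    Positive output = turn left; bearing > 0 means the obstacle is to the left,
    so the command turns the ship right (negative), and vice versa."""
    near = tri(obstacle_dist_m, 0.0, 0.0, 5.0)     # fires strongly when the obstacle is close
    far = tri(obstacle_dist_m, 2.0, 6.0, 10.0)     # fires when it is still some distance away
    away = -1.0 if obstacle_bearing_deg >= 0 else 1.0
    hard, gentle = 30.0, 10.0                      # turn rates in deg/s
    weight_sum = near + far
    if weight_sum == 0.0:
        return 0.0                                 # no obstacle influence, keep heading
    return away * (near * hard + far * gentle) / weight_sum   # weighted-average defuzzification

print(steer_command(1.5, 20.0))   # obstacle close on the left -> hard turn right (-30.0)
```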