• Title/Summary/Keyword: lidar sensors


An Acceleration Method for Processing LiDAR Data for Real-time Perimeter Facilities (실시간 경계를 위한 라이다 데이터 처리의 가속화 방법)

  • Lee, Yoon-Yim; Lee, Eun-Seok; Noh, Heejeon; Lee, Sung Hyun; Kim, Young-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.101-103 / 2022
  • CCTV is mainly used for real-time surveillance of critical facilities. Although CCTV offers high accuracy, its viewing angle is narrow, so it is typically combined with other sensors such as radar. LiDAR acquires distance information by emitting a high-power pulsed laser and measuring the time it takes for the pulse to reflect off an object and return. However, because of its data throughput, the number of LiDAR sensors a server can process simultaneously is limited, which keeps utilization low in terms of both cost and technology. Detection based on optical mesh sensors is likewise vulnerable to strong winds and extreme cold, and maintenance is complicated by damage caused by animals. In this paper, we use the 1550 nm wavelength band, which is more robust to weather conditions, instead of the 905 nm band used in existing LiDAR sensors, and we propose the development of a system that can integrate and control multiple sensors.
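The ranging principle mentioned in the abstract is plain pulsed time of flight. A minimal sketch of that calculation is given below; the numbers are illustrative, not taken from the paper.

```python
# Minimal sketch of pulsed time-of-flight ranging; values are illustrative.
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target from the round-trip time of a laser pulse."""
    return C * round_trip_time_s / 2.0

# A return detected ~667 ns after emission corresponds to roughly 100 m.
print(f"{tof_distance(667e-9):.1f} m")
```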


Simulation of Ladar Range Images based on Linear FM Signal Analysis (Linear FM 신호분석을 통한 Ladar Range 영상의 시뮬레이션)

  • Min, Seong-Hong; Kim, Seong-Joon; Lee, Im-Pyeong
    • Journal of Korean Society for Geospatial Information Science / v.16 no.2 / pp.87-95 / 2008
  • Ladar (Laser Detection And Ranging, Lidar) is a sensor that acquires precise distances to the surfaces of a target region using laser signals, and it has recently been applied to Automatic Target Detection (ATD) for guided missiles and aerial vehicles. It provides a range image in which each measured distance is expressed as the brightness of the corresponding pixel. Since precise 3D models can be generated from a Ladar range image, more robust identification and recognition of targets becomes possible. If the data of a Ladar sensor can be simulated, such a simulator can be used efficiently to design and develop Ladar sensors and systems and to develop data-processing algorithms. The purposes of this study are therefore to simulate the signals of a Ladar sensor based on linear frequency modulation and to create range images from the simulated signals. We first simulated the laser signals of a Ladar using an FM chirp modulator, computed the distances from the sensor to a target by applying an FFT to the simulated signals, and finally created the range image from the resulting set of distances.
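As a rough illustration of the chirp-and-FFT ranging idea described in the abstract (not the authors' simulator; all parameter values are assumptions), a delayed echo of a linear FM chirp can be dechirped and its beat frequency converted back to range:

```python
# Hedged sketch of linear-FM (chirp) ranging: mix a transmitted chirp with its
# delayed echo and recover range from the beat frequency via an FFT.
import numpy as np

c = 3e8            # speed of light [m/s]
B = 150e6          # chirp bandwidth [Hz]       (assumption)
T = 1e-3           # chirp duration [s]         (assumption)
k = B / T          # chirp rate [Hz/s]
fs = 2e6           # sampling rate of the beat signal [Hz]
R_true = 750.0     # simulated target range [m]

t = np.arange(0, T, 1 / fs)
tau = 2 * R_true / c                      # round-trip delay
tx_phase = np.pi * k * t**2               # transmitted chirp phase (baseband)
rx_phase = np.pi * k * (t - tau)**2       # delayed echo phase
beat = np.cos(tx_phase - rx_phase)        # dechirped (mixed) signal

spec = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
f_beat = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
R_est = f_beat * c / (2 * k)
print(f"estimated range: {R_est:.1f} m")  # close to 750 m
```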


An Adaptive ROI Decision for Real-time Performance in an Autonomous Driving Perception Module (자율주행 인지 모듈의 실시간 성능을 위한 적응형 관심 영역 판단)

  • Lee, Ayoung; Lee, Hojoon; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.20-25 / 2022
  • This paper presents an adaptive Region of Interest (ROI) decision method for real-time performance in an autonomous driving perception module. Since the whole automated driving system consists of numerous modules that are further divided into submodules, the characteristics, complexity, and limitations of each module must be considered. In particular, processing Light Detection And Ranging (Lidar) data requires considerable time. Given these limitations, dividing the perception task into submodules is unavoidable if the system is to remain stable with high real-time performance. This paper proposes an ROI that reduces the amount of data to be processed and thus the computation time. The ROI is set according to a road's design speed, and the resulting ROI is applied differently to each vehicle according to its own speed. The simulation model is built in ROS, and the overall data analysis is performed in Matlab. The algorithm is validated using real driving data from an urban environment, and the results show that the ROI yields low computational cost.
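A minimal sketch of a speed-dependent ROI is given below. The ROI rule (a time headway times speed plus a margin) and all constants are assumptions for illustration; the paper derives its ROI from the road's design speed.

```python
# Rough sketch of a speed-dependent ROI applied to lidar points.
import numpy as np

def roi_length(speed_mps: float, headway_s: float = 3.0, margin_m: float = 10.0) -> float:
    """Longitudinal extent of the region of interest for a given speed (assumed rule)."""
    return speed_mps * headway_s + margin_m

def crop_points(points_xyz: np.ndarray, speed_mps: float, half_width_m: float = 15.0) -> np.ndarray:
    """Keep only lidar points that fall inside the speed-dependent ROI."""
    length = roi_length(speed_mps)
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    mask = (x >= 0.0) & (x <= length) & (np.abs(y) <= half_width_m)
    return points_xyz[mask]

# Example: at 50 km/h only points within ~52 m ahead are kept for processing.
pts = np.random.uniform(-100, 100, size=(100_000, 3))
print(crop_points(pts, speed_mps=50 / 3.6).shape)
```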

Deep Image Retrieval using Attention and Semantic Segmentation Map (관심 영역 추출과 영상 분할 지도를 이용한 딥러닝 기반의 이미지 검색 기술)

  • Minjung Yoo; Eunhye Jo; Byoungjun Kim; Sunok Kim
    • Journal of Broadcast Engineering / v.28 no.2 / pp.230-237 / 2023
  • Self-driving is a key technology of the fourth industrial revolution and can be applied to various platforms such as cars, drones, and robots. Among its components, localization, which identifies the position of objects or users using GPS, sensors, and maps, is one of the key technologies for implementing autonomous driving. Localization can be performed using GPS or LIDAR, but such equipment is very expensive and heavy, and precise position estimation is difficult in places with radio interference such as underground spaces or tunnels. To compensate for this, this paper proposes an image retrieval method that uses an attention module and image segmentation maps, taking color images acquired with low-cost vision cameras as input.
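A hedged numpy sketch of the retrieval step follows: an attention (or segmentation-derived) weight map pools a feature map into a global descriptor, and database images are ranked by cosine similarity. The shapes, the weighting rule, and the random stand-in features are illustrative assumptions, not the authors' network.

```python
# Attention-weighted pooling into a global descriptor, then cosine-similarity ranking.
import numpy as np

def global_descriptor(feat_chw: np.ndarray, weight_hw: np.ndarray) -> np.ndarray:
    """Attention-weighted average pooling followed by L2 normalisation."""
    w = weight_hw / (weight_hw.sum() + 1e-8)
    desc = (feat_chw * w[None, :, :]).sum(axis=(1, 2))
    return desc / (np.linalg.norm(desc) + 1e-8)

def rank_database(query: np.ndarray, db: np.ndarray) -> np.ndarray:
    """Return database indices sorted by descending cosine similarity."""
    return np.argsort(-db @ query)

# Toy example with random features standing in for backbone outputs.
rng = np.random.default_rng(0)
q = global_descriptor(rng.normal(size=(256, 32, 32)), rng.uniform(size=(32, 32)))
db = np.stack([global_descriptor(rng.normal(size=(256, 32, 32)),
                                 rng.uniform(size=(32, 32))) for _ in range(5)])
print(rank_database(q, db))
```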

Obstacle Detection Using a Lidar Sensor for the Visually Impaired (시각장애인을 위한 라이다 센서를 활용한 장애물 감지)

  • Hyung Mook Lee; Ju Hwan Park; Jin Hwi Kim; Seoung Woo Lee; Seong Jeon; Jun Won Choi; Se Jeong Heo; Sung Jin Kim
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.133-136 / 2023
  • If asked, "Would you want to be born as a sexual minority, an immigrant, or a person with a disability in your next life?", most people would find it hard to readily answer "yes." The fundamental reason is that the socially disadvantaged groups named in the question face various forms of inequality in society. In this paper, among these minorities, we built a walking aid for the visually impaired. The white cane used by visually impaired people helps detect obstacles on the ground, but it is difficult to detect and avoid obstacles suspended in the air. To compensate for this shortcoming, we provide a service that detects obstacles using a lidar sensor. The lidar sensor emits a laser beam and exploits the fact that the beam bounces back from a target; when an obstacle is detected ahead, a warning is delivered to the visually impaired user via TTS.
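A minimal sketch of the warning logic described above is shown below; the sensor read and the TTS call are placeholders (assumptions), not the authors' actual hardware or speech interface.

```python
# Sketch: if the lidar reports an obstacle closer than a threshold, announce it.
import time

WARNING_DISTANCE_M = 1.5  # assumed warning threshold

def read_lidar_distance_m() -> float:
    """Placeholder for a single-point lidar range reading (replace with real driver)."""
    return 2.0

def speak(message: str) -> None:
    """Placeholder for the TTS output used to alert the user."""
    print(f"[TTS] {message}")

def warning_loop(poll_hz: float = 10.0) -> None:
    while True:
        distance = read_lidar_distance_m()
        if distance < WARNING_DISTANCE_M:
            speak(f"Obstacle ahead, about {distance:.1f} metres.")
        time.sleep(1.0 / poll_hz)
```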


Experiment of Multiple Ultrasonic Sensors Using Sequentially Transmitted Ultrasonic Signals (순차적 초음파 신호 송출 방식을 이용한 다중 초음파 센서 실험)

  • Chang, Jae-Won; Koo, Bon-Soo; Lee, Sang Jeong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.45 no.2 / pp.124-132 / 2017
  • With the growing interest in UAVs, research on UAV collision avoidance is under way. Lidar, video cameras, laser sensors, and ultrasonic sensors can all be used for UAV collision avoidance. In this paper, the characteristics of the MB 1230 ultrasonic sensor are analyzed through experiments. When multiple ultrasonic sensors are used concurrently, they do not produce correct measurements because they interfere with one another. To solve this interference, sequential transmission of the ultrasonic signals is proposed: the 'Enable' input of each ultrasonic sensor is used to activate the sensors one at a time. The proposed solution is verified experimentally.
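A rough sketch of the sequential-enable workaround follows; the GPIO access, read function, and timing constant are placeholder assumptions, not the authors' hardware interface.

```python
# Enable only one ultrasonic sensor at a time, take a reading, then move on.
import time

SETTLE_TIME_S = 0.05  # assumed time for one sensor to emit and receive its pulse

def set_enable(sensor_id: int, high: bool) -> None:
    """Placeholder for driving the enable pin of one MB 1230 sensor."""
    pass

def read_range_m(sensor_id: int) -> float:
    """Placeholder for reading the sensor's range output."""
    return 0.0

def poll_sensors_sequentially(sensor_ids: list[int]) -> dict[int, float]:
    ranges = {}
    for sid in sensor_ids:
        set_enable(sid, True)        # only this sensor transmits now
        time.sleep(SETTLE_TIME_S)
        ranges[sid] = read_range_m(sid)
        set_enable(sid, False)       # silence it before enabling the next one
    return ranges

print(poll_sensors_sequentially([0, 1, 2, 3]))
```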

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae; Jung, Bo Hee; Kong, Hyun-Bae; Ok, Chang-Min; Lee, Seung-Tae
    • Current Optics and Photonics / v.3 no.1 / pp.8-15 / 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of the detection range of the LADAR sensor, and the result was applied to design a common optical system using LADAR and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128 × 128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at a spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at a spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
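As a hedged back-of-envelope companion to the aperture-sizing result, a simplified lidar equation for an extended Lambertian target can relate the quoted minimum detectable signal and detection range to an entrance-pupil diameter. Apart from the 20.5 nW and 1,000 m taken from the abstract, every parameter below is an assumption, and the paper's detailed radiometric model differs.

```python
# Back-of-envelope entrance-pupil sizing from a simplified lidar equation.
import math

P_r_min = 20.5e-9   # minimum detectable signal [W] (from the abstract)
R = 1_000.0         # required detection range [m]  (from the abstract)
P_t = 100.0         # transmitted peak power [W]          (assumption)
rho = 0.1           # target reflectance                  (assumption)
T_atm = 0.8         # one-way atmospheric transmission    (assumption)
eta = 0.5           # overall optics/detector efficiency  (assumption)

# Extended Lambertian target: P_r = (P_t * rho / pi) * (A_r / R^2) * T_atm^2 * eta
A_r = P_r_min * math.pi * R**2 / (P_t * rho * T_atm**2 * eta)
D = math.sqrt(4.0 * A_r / math.pi)
print(f"implied entrance-pupil diameter: {D * 100:.1f} cm")
```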

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International Journal of Advanced Smart Convergence / v.11 no.2 / pp.7-12 / 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent, conscious, and curious about their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive fields like computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving, and is well known for its identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can accurately map their positions and their proximity to everything around them. In this paper, we explore in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor fusion networks, computer vision and image processing, natural language processing, and activity-aware functions, could be tapped and utilized to safeguard our lives and environment.

Proposal for Research Model of High-Function Patrol Robot using Integrated Sensor System (통합 센서 시스템을 이용한 고기능 순찰 로봇의 연구모델 제안)

  • Byeong-Cheon Yoo; Seung-Jung Shin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.77-85 / 2024
  • In this study, we designed and implemented a patrol robot that integrates a thermal imaging camera, a speed dome camera, a PTZ camera, radar, a lidar sensor, and a smartphone. The robot can monitor and respond efficiently even in complex environments, and is designed to perform well at night and in low-visibility conditions. A track-type drive system was selected for the robot's mobility, and a smartphone-based control system was developed for real-time data processing and decision-making. The combination of sensors allows the robot to perceive its environment comprehensively and to detect hazards quickly: the thermal imaging camera is used for night surveillance, the speed dome and PTZ cameras for wide-area monitoring, and the radar and LIDAR for obstacle detection and avoidance. The smartphone-based control system provides a user-friendly interface. The proposed robot system can be used in fields such as security, surveillance, and disaster response. Future research should include improving the robot's autonomous patrol algorithm, developing a multi-robot collaboration system, and long-term testing in real environments. This study is expected to contribute to the development of intelligent surveillance robots.

Development of a Fault Detection Algorithm for Multi-Autonomous Driving Perception Sensors Based on FIR Filters (FIR 필터 기반 다중 자율주행 인지 센서 결함 감지 알고리즘 개발)

  • Jae-lee Kim; Man-bok Park
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.3 / pp.175-189 / 2023
  • Fault detection and isolation (FDI) algorithms are being actively researched to ensure the integrity and reliability of the environment perception sensors in autonomous vehicles. In this paper, a fault detection algorithm based on a multi-sensor perception system composed of radar, camera, and lidar is proposed to guarantee the safety of an autonomous vehicle's perception system. The algorithm uses reference generation filters and residual generation filters based on finite impulse response (FIR) filter estimates. By analyzing the residuals generated from the filtered sensor observations and the estimated state errors of individual objects, the algorithm detects faults in the environment perception sensors. The proposed algorithm was evaluated by comparing its performance with that of a Kalman filter-based algorithm in numerical simulations in a virtual environment. This research can help ensure the safety and reliability of autonomous vehicles and enhance the integrity of their environment perception sensors.
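A minimal sketch of the residual idea follows: an FIR (finite-window) estimate of each sensor's track is compared with a reference, and a sensor is flagged when the residual stays above a threshold. The window length, threshold, and persistence rule are illustrative assumptions, not the paper's filter design.

```python
# FIR-estimate residuals with a simple persistence check for fault flagging.
import numpy as np

def fir_estimate(measurements: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average FIR estimate of a 1-D measured quantity."""
    kernel = np.ones(window) / window
    return np.convolve(measurements, kernel, mode="same")

def detect_fault(sensor_meas: np.ndarray, reference: np.ndarray,
                 threshold: float = 1.0, persistence: int = 3) -> bool:
    """Fault if the residual exceeds the threshold for several consecutive steps."""
    residual = np.abs(fir_estimate(sensor_meas) - reference)
    over = residual > threshold
    run = np.convolve(over.astype(int), np.ones(persistence, dtype=int), mode="valid")
    return bool((run >= persistence).any())

# Toy example: a lidar track drifts away from the radar/camera reference.
t = np.arange(100)
reference = 0.1 * t                                               # reference position [m]
faulty = reference + np.where(t > 60, 3.0, 0.0) + np.random.normal(0, 0.1, 100)
print(detect_fault(faulty, reference))                            # True: drift detected
```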