• Title/Summary/Keyword: Laser distance sensor


An Acceleration Method for Processing LiDAR Data for Real-time Perimeter Facilities (실시간 경계를 위한 라이다 데이터 처리의 가속화 방법)

  • Lee, Yoon-Yim;Lee, Eun-Seok;Noh, Heejeon;Lee, Sung Hyun;Kim, Young-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.101-103
    • /
    • 2022
  • CCTV is mainly used as a real-time detection system for critical facilities. Although CCTV offers high accuracy, its viewing angle is narrow, so it is typically combined with another sensor such as radar. LiDAR is a technology that acquires distance information by measuring the time it takes for a high-power pulsed laser to reflect off an object. However, LiDAR utilization remains low in terms of cost and technology because data throughput limits the number of sensors a server can process simultaneously. Detection by optical mesh sensors is likewise vulnerable to strong winds and extreme cold, and suffers maintenance problems due to damage caused by animals. In this paper, we propose using the 1550 nm wavelength band instead of the 905 nm band used in existing LiDAR sensors, which is more robust to weather conditions, and developing a system that can integrate and control multiple sensors.


UKF Localization of a Mobile Robot in an Indoor Environment and Performance Evaluation (실내 이동로봇의 UKF 위치 추정 및 성능 평가)

  • Han, Jun Hee;Ko, Nak Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.4
    • /
    • pp.361-368
    • /
    • 2015
  • This paper reports an unscented Kalman filter (UKF) approach for localization of a mobile robot in an indoor environment. The method proposes a new model of measurement uncertainty that adjusts the error covariance according to the measured distance. It also uses non-zero off-diagonal values in the error covariance matrices of motion uncertainty and measurement uncertainty. The method is tested through experiments in an indoor working space of 100 m × 40 m using a differential-drive robot equipped with a laser range finder as an exteroceptive sensor. The results compare the localization performance of the proposed method with that of the conventional method, which does not use an adaptive measurement uncertainty model. The experiments also verify the improvement due to the non-zero off-diagonal elements in the covariance matrices. This paper contributes to implementing and evaluating a practical UKF approach for mobile robot localization.
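The distance-adaptive measurement covariance and the non-zero off-diagonal terms described in this abstract can be sketched as follows. The linear noise-growth model, its coefficients, and the correlation term are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def adaptive_range_var(distance, base_std=0.01, slope=0.005):
    """Hypothetical model: range-noise std grows linearly with measured distance."""
    std = base_std + slope * distance
    return std ** 2

def measurement_cov(distance, bearing_std=np.deg2rad(0.5), rho=0.1):
    """2x2 range/bearing covariance with a non-zero off-diagonal term
    (rho is an assumed correlation between range and bearing noise)."""
    r_var = adaptive_range_var(distance)
    b_var = bearing_std ** 2
    off = rho * np.sqrt(r_var * b_var)  # non-zero off-diagonal element
    return np.array([[r_var, off],
                     [off,   b_var]])
```

At each UKF update, such a matrix would replace the fixed measurement covariance, growing the range variance with the measured distance.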

Smoke Detection Based on RGB-Depth Camera in Interior (RGB-Depth 카메라 기반의 실내 연기검출)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • In this paper, an algorithm using an RGB-depth camera is proposed to detect smoke indoors. The RGB-depth camera, the Kinect, provides an RGB color image and depth information. The Kinect sensor consists of an infrared laser emitter, an infrared camera, and an RGB camera. A specific pattern of speckles radiated from the laser source is projected onto the scene; this pattern is captured by the infrared camera and analyzed to obtain depth information. The distance of each speckle of the pattern is measured and the depth of an object is estimated. When the depth of an object changes sharply, the Kinect cannot determine the depth of the object plane. The depth of smoke also cannot be determined, because the density of smoke changes constantly and the intensity of the infrared image varies between pixels. In this paper, a smoke detection algorithm using these characteristics of the Kinect is proposed. A region whose depth information cannot be determined is set as a candidate smoke region. If the intensity of the candidate region in the color image is larger than a threshold, the region is confirmed as a smoke region. Simulation results show that the proposed method is effective in detecting smoke indoors.
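The two-stage rule in this abstract, treating pixels with unresolved depth as smoke candidates and confirming them with a color-intensity threshold, can be sketched as follows. The threshold value and the convention that a zero depth value marks unresolved pixels are assumptions:

```python
import numpy as np

def detect_smoke(depth, gray, intensity_thresh=120):
    """Two-stage smoke mask.

    depth: depth map where 0 marks pixels the sensor could not resolve
           (assumed convention).
    gray:  grayscale image aligned with the depth map.
    """
    candidate = (depth == 0)                        # stage 1: depth undetermined
    smoke = candidate & (gray > intensity_thresh)   # stage 2: bright candidates
    return smoke
```

A real implementation would also filter by region size and temporal persistence, but the core logic is this boolean intersection of the two cues.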

Development of underwater 3D shape measurement system with improved radiation tolerance

  • Kim, Taewon;Choi, Youngsoo;Ko, Yun-ho
    • Nuclear Engineering and Technology
    • /
    • v.53 no.4
    • /
    • pp.1189-1198
    • /
    • 2021
  • When performing remote tasks using robots in nuclear power plants, a 3D shape measurement system improves the efficiency of remote operations by readily identifying the current state of the target object (e.g., its size, shape, and distance). Nuclear power plants involve high-radiation and underwater environments; therefore, the electronic parts that comprise 3D shape measurement systems are prone to degradation and cannot be used for long periods. In addition, given the refraction caused by the medium change in the underwater environment, optical design constraints and corresponding calibration methods are required. The present study proposes a method for developing an underwater 3D shape measurement system with improved radiation tolerance, composed of commercial electronic parts and a stereo camera, and capable of easily correcting underwater refraction. To improve radiation tolerance, the number of parts exposed to the radiation environment was minimized to only the necessary components: a line-beam laser, a motor to rotate the laser, and a stereo camera. Given that the signal-processing and control circuits of the camera are susceptible to radiation, the image sensor and lens were separated from the camera's main body. The prototype developed in this study was made of commercial electronic parts, so the overall radiation tolerance could be improved at relatively low cost. It was also easy to manufacture because there are few optical design constraints.

Distributed Search of Swarm Robots Using Tree Structure in Unknown Environment (미지의 환경에서 트리구조를 이용한 군집로봇의 분산 탐색)

  • Lee, Gi Su;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.2
    • /
    • pp.285-292
    • /
    • 2018
  • In this paper, we propose a distributed search method for swarm robots using a tree structure in an unknown environment. In the proposed method, each robot divides the unknown environment into four regions using LRF (laser range finder) sensor information, divides the maximum detection distance into four regions, and detects feature points of obstacles. The detected feature points are defined as the generators of a Voronoi diagram, and the Voronoi diagram is applied: its components, the Voronoi space, the Voronoi partition, and the Voronoi vertices, are created. The generated Voronoi partition serves as the robot's path. Each Voronoi vertex is defined as a node, and the nodes form the proposed tree structure. The root of the tree is the starting point, and a node with the least significant bit and no children is a target point. Finally, we demonstrate the superiority of the proposed method through several simulations.

A Study on the RFID Tag-Floor Based Navigation (RFID 태그플로어 방식의 내비게이션에 관한 연구)

  • Choi Jung-Wook;Oh Dong-Ik;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.10
    • /
    • pp.968-974
    • /
    • 2006
  • We are moving into the era of ubiquitous computing. The Ubiquitous Sensor Network (USN) is a base of this computing paradigm, in which recognizing the identity and position of objects is important. For object identification, RFID tags are commonly used. For object positioning, sensors such as laser and ultrasonic scanners are popular. Recently, there have been a few attempts to apply RFID technology to robot localization by replacing these sensors with RFID readers to achieve simpler and unified USN settings. However, RFID does not provide enough sensing accuracy for some USN applications such as robot navigation, mainly because of its inaccuracy in distance measurement. In this paper, we describe our approach to achieving accurate navigation using RFID. We rely solely on the RFID mechanism for localization, providing coordinate information through floors equipped with RFID tags. With the accurate positional information stored in each tag, we compensate for the coordinate errors accumulated during wheel-based robot navigation. We focus in particular on how to distribute RFID tags (tag pattern) and how many to place (tag granularity) on the tag floor. To determine efficient tag granularities and patterns, we developed a simulation program. We define a navigation error metric and use it to compare the effectiveness of navigation. We analyze the simulation results to determine efficient granularities and tag arrangement patterns that can improve the effectiveness of RFID navigation in general.

A Study on the Application Technique of 3-D Spatial Information by integration of Aerial photos and Laser data (항공사진과 레이져 데이터의 통합에 의한 3 차원 공간정보 활용기술연구)

  • Yeon, Sang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.3
    • /
    • pp.385-392
    • /
    • 2010
  • LiDAR techniques have the merit that surveyors can quickly obtain a large number of high-precision measurements. Aerial photos and satellite sensor images are used to generate 3D spatial images, which are matched with map coordinates and elevation data from digital topographic files. These images are also matched with 3D spatial image content through perspective views composed along designated roads up to the target location. Recently, 3D aviation images can be generated from various digital data. Advanced geographical methods for guidance to a destination are tested in a GIS environment, and further information and designated access routes are provided through multimedia content on the Internet or at public tour information desks using the simulated images. The LiDAR-based height data are transformed into a DEM, and the real-time integration of vector data via digital image mapping with raster data via extraction and evaluation is used to trace a 3D model of downtown buildings along a long-distance route for 3D tract model generation.

Development of Autonomous Steering Platforms for Upland Furrow (노지 밭고랑 환경 적용을 위한 자율조향 플랫폼 개발)

  • Cho, Yongjun;Yun, Haeyong;Hong, Hyunggil;Oh, Jangseok;Park, Hui Chang;Kang, Minsu;Park, Kwanhyung;Seo, Kabho;Kim, Sunduck;Lee, Youngtae
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.20 no.9
    • /
    • pp.70-75
    • /
    • 2021
  • We developed a platform capable of autonomous steering in a furrow environment. It autonomously controls steering by recognizing the furrow using a laser distance sensor, a three-axis tilt sensor, and a temperature sensor. The performance evaluation indicated an autonomous steering success rate of 99.17% and the ability to climb slopes of up to 5°. The operating time was approximately 40 h, and the maximum speed was 6.7 km/h.

Sensor Fusion Docking System of Drone and Ground Vehicles Using Image Object Detection (영상 객체 검출을 이용한 드론과 지상로봇의 센서 융합 도킹 시스템)

  • Beck, Jong-Hwan;Park, Hee-Su;Oh, Se-Ryeong;Shin, Ji-Hun;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.4
    • /
    • pp.217-222
    • /
    • 2017
  • Recent studies of robots working in dangerous places have focused on large unmanned ground vehicles or four-legged robots, which have the advantage of long working time, but they are difficult to apply in practical dangerous fields that require a real-time system with high mobility and the capability for delicate work. This research presents a collaborative docking system for a drone and ground vehicles that combines an image-processing algorithm with laser sensors for effective detection of docking markers, and is thus capable of moving long distances and performing very delicate work. We propose a sensor-fusion docking system for a drone and ground vehicles and suggest two template matching methods appropriate for this application. The system showed a 95% docking success rate in 50 docking attempts.
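The abstract does not detail the paper's two template matching variants, but the general mechanism, searching an image for the position that best correlates with a docking-marker template, can be sketched with normalized cross-correlation. The brute-force search below is illustrative only:

```python
import numpy as np

def match_template_ncc(image, template):
    """Find the (row, col) whose patch best matches the template
    under normalized cross-correlation (Pearson correlation per patch)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_energy = (t ** 2).sum()
    best, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            p = image[y:y + th, x:x + tw]
            p = p - p.mean()
            denom = np.sqrt((p ** 2).sum() * t_energy)
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Production systems would use an FFT-based or library implementation for speed; the scoring rule, however, is the same.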

Measurement of Dynamic Characteristics on Structure using Non-marker Vision-based Displacement Measurement System (비마커 영상기반 변위계측 시스템을 이용한 구조물의 동특성 측정)

  • Choi, Insub;Kim, JunHee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.29 no.4
    • /
    • pp.301-308
    • /
    • 2016
  • In this study, a novel method referred to as a non-marker vision-based displacement measuring system (NVDMS) is introduced to measure the displacement of a structure. There are two distinct differences between the proposed NVDMS and existing vision-based displacement measuring systems (VDMS). First, the NVDMS extracts the pixel coordinates of the structure using a feature point rather than a marker. Second, in the NVDMS, the scaling factor that converts the coordinates of a feature point from pixel values to physical values is calculated from the external conditions between the camera and the structure, namely distance, angle, and focal length, whereas the scaling factor for a VDMS is calculated from the geometry of the marker. A free-vibration test using a three-story scale model was conducted to analyze the reliability of the displacement data obtained from the NVDMS by comparison with reference data from a laser displacement sensor (LDS), and the dynamic characteristics were then measured using the displacement data. The NVDMS can accurately measure the dynamic displacement of the structure without a marker, and the dynamic characteristics obtained from the NVDMS show high reliability.
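The pixel-to-physical conversion from distance, angle, and focal length described above can be sketched under a simple pinhole-camera model. The 1/cos tilt correction and the parameter names are illustrative assumptions, not the paper's exact formulation:

```python
import math

def scaling_factor(distance_m, focal_len_px, tilt_rad=0.0):
    """Meters-per-pixel under a pinhole model.

    distance_m:   camera-to-structure distance along the optical axis
    focal_len_px: focal length expressed in pixels
    tilt_rad:     camera tilt relative to the measured plane
                  (assumed small-angle 1/cos correction)
    """
    return (distance_m / focal_len_px) / math.cos(tilt_rad)

def pixel_to_displacement(dx_px, distance_m, focal_len_px, tilt_rad=0.0):
    """Convert a feature-point pixel shift into a physical displacement."""
    return dx_px * scaling_factor(distance_m, focal_len_px, tilt_rad)
```

For example, a 100-pixel shift seen at 5 m with a 1000-pixel focal length corresponds to a 0.5 m displacement; a marker-based VDMS would instead derive the same scale from the marker's known physical size.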