• Title/Summary/Keyword: map matching

Search Results: 530

Road Surface Marking Detection for Sensor Fusion-based Positioning System (센서 융합 기반 정밀 측위를 위한 노면 표시 검출)

  • Kim, Dongsuk;Jung, Hogi
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.7
    • /
    • pp.107-116
    • /
    • 2014
  • This paper presents camera-based road surface marking detection methods suited to a sensor fusion-based positioning system consisting of a low-cost GPS (Global Positioning System), an INS (Inertial Navigation System), an EDM (Extended Digital Map), and a vision system. The proposed vision system consists of two parts: lane marking detection and RSM (Road Surface Marking) detection. The lane marking detection provides ROIs (Regions of Interest) that are highly likely to contain RSM. The RSM detection generates candidates in these regions and classifies their types. The proposed system focuses on detecting RSM without false detections while operating in real time. To ensure real-time operation, the gating for lane marking detection varies, and the detection method changes according to an FSM (Finite State Machine) representing the driving situation. A single template-matching scheme is used to extract features for both lane marking detection and RSM detection, and it is implemented efficiently with a horizontal integral image. Further, multiple-step verification is performed to minimize false detections.
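The horizontal integral image mentioned above makes row-wise box sums O(1), which is what lets one template-matching pass serve both detectors cheaply. A minimal sketch of the idea (function names are illustrative, not from the paper):

```python
def horizontal_integral(image):
    """Horizontal integral image: out[r][c] = sum of row r from column 0
    through column c (inclusive)."""
    out = []
    for row in image:
        acc, cum = 0, []
        for v in row:
            acc += v
            cum.append(acc)
        out.append(cum)
    return out

def row_sum(integral, r, c0, c1):
    """Sum of pixels in row r over columns [c0, c1], in O(1) per query."""
    s = integral[r][c1]
    if c0 > 0:
        s -= integral[r][c0 - 1]
    return s
```

A full 2D integral image extends the same trick to arbitrary rectangles; the horizontal variant trades that generality for a cheaper, cache-friendly row scan, which matches the paper's real-time constraint.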

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.4
    • /
    • pp.230-236
    • /
    • 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, we have a 64 x 32 resolution SPAD ToF sensor, even though higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. This may be a weak point of our system; however, we exploit this gap through a change of perspective. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using the higher-resolution depth data as labels. Then, the CNN-upsampled depth data and the stereo camera depth data are fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for an embedded system.
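The paper's CNN upsampler and SGM fusion are not spelled out in the abstract; as a classical point of comparison, the baseline a learned upsampler competes against is plain bilinear interpolation of the low-resolution depth map. A pure-Python sketch (names illustrative):

```python
def bilinear_upsample(depth, out_h, out_w):
    """Bilinearly upsample a 2D depth map (list of lists) to out_h x out_w."""
    in_h, in_w = len(depth), len(depth[0])
    out = []
    for r in range(out_h):
        y = r * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, in_h - 1)
        fy = y - y0
        row = []
        for c in range(out_w):
            x = c * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, in_w - 1)
            fx = x - x0
            # Interpolate along x on the two bracketing rows, then along y.
            top = depth[y0][x0] * (1 - fx) + depth[y0][x1] * fx
            bot = depth[y1][x0] * (1 - fx) + depth[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Unlike this baseline, a CNN trained on higher-resolution label data can hallucinate plausible depth edges rather than smearing them, which is the gap the paper exploits.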

JCMT-CHIMPS2 Survey

  • Kim, Kee-Tae;Moore, Toby;Minamidani, Tetsuhiro;Morata, Oscar;Rosolowsky, Erik;Su, Yang;Eden, David
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.69.3-69.3
    • /
    • 2019
  • The CHIMPS2 survey extends the JCMT HARP $^{13}CO/C^{18}O$ J=3-2 Inner Milky-Way Plane Survey (CHIMPS) and the $^{12}CO$ J=3-2 survey (COHRS) into the inner Galactic Plane, the Central Molecular Zone (CMZ), and a section of the Outer Plane. When combined with the complementary $^{12}CO/^{13}CO/C^{18}O$ J=1-0 survey at the Nobeyama 45m (FUGIN) at matching 15" resolution and sensitivity, and other current CO surveys, the results will provide a complete set of transition data with which to calculate accurate column densities, gas temperatures, and turbulent Mach numbers. These will be used to: analyze molecular cloud properties across a range of Galactic environments; map the star-formation efficiency (SFE) and dense-gas mass fraction (DGMF) in molecular gas as a function of position in the Galaxy and their relation to the nature of the turbulence within molecular clouds; determine Galactic structure as traced by molecular gas and star formation; constrain cloud-formation models; study the relationship of filaments to star formation; and test current models of the gas kinematics and stability in the Galactic center region and of the flow of gas from the disc. It will also provide an invaluable legacy data set for JCMT that will not be superseded for several decades. In this poster, we present the current status of CHIMPS2.


Considerations for Developing a SLAM System for Real-time Remote Scanning of Building Facilities (건축물 실시간 원격 스캔을 위한 SLAM 시스템 개발 시 고려사항)

  • Kang, Tae-Wook
    • Journal of KIBIM
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2020
  • In managing building facilities, spatial information is the basic data for decision making. However, acquiring spatial information is not easy. In many cases, the site and the drawings differ because of changes to facilities over time after construction. In this case, the site must be scanned to obtain spatial information. The scan data contains the actual spatial information, which is a great help in making space-related decisions. However, to obtain scan data, an expensive LiDAR (Light Detection and Ranging) device must be purchased, and special software for processing the data obtained from the device must be available. Recently, SLAM (Simultaneous Localization and Mapping), an advanced map generation technology, has been spreading in the field of robotics. Using SLAM, 3D spatial information can be obtained quickly in real time without a separate matching process. This study develops a prototype and tests whether SLAM technology can be used to obtain spatial information for facility management, and from this draws considerations for developing a SLAM device for real-time remote scanning for facility management. The study focuses on a system development method that acquires the spatial information necessary for facility management through SLAM technology. To this end, we develop a prototype, analyze its pros and cons, and then suggest considerations for developing a SLAM system.

Intensity and Ambient Enhanced Lidar-Inertial SLAM for Unstructured Construction Environment (비정형의 건설환경 매핑을 위한 레이저 반사광 강도와 주변광을 활용한 향상된 라이다-관성 슬램)

  • Jung, Minwoo;Jung, Sangwoo;Jang, Hyesu;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.3
    • /
    • pp.179-188
    • /
    • 2021
  • Construction monitoring is one of the key modules in smart construction. Unlike a structured urban environment, mapping a construction site is challenging due to the characteristics of an unstructured environment. For example, irregular feature points and unreliable matching prevent creating a map for management. To tackle this issue, we propose a system for data acquisition in unstructured environments and a framework, Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping (IA-LIO-SAM), that achieves highly accurate robot trajectories and mapping. IA-LIO-SAM utilizes the same factor graph as Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM). Enhancing the existing LIO-SAM, IA-LIO-SAM leverages each point's intensity and ambient values to remove unnecessary feature points. These additional values also serve as new factors in the K-Nearest Neighbor (KNN) algorithm, allowing accurate comparisons between stored points and scanned points. The performance was verified in three different environments and compared with LIO-SAM.
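Feeding intensity and ambient values into the KNN step amounts to searching over 5D points (x, y, z, intensity, ambient) with per-dimension weighting, so geometrically close but photometrically dissimilar points stop matching. A minimal sketch; the weights here are illustrative assumptions, not the paper's tuning:

```python
import math

def weighted_knn(query, points, k, weights):
    """k nearest neighbours of `query` among `points`, where each point is
    (x, y, z, intensity, ambient) and `weights` scales each dimension's
    contribution to the distance."""
    def dist(p):
        return math.sqrt(sum(w * (a - b) ** 2
                             for a, b, w in zip(query, p, weights)))
    return sorted(points, key=dist)[:k]
```

In a real pipeline this brute-force scan would be replaced by a KD-tree over the weighted coordinates; the weighting idea is the same.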

Reliable Autonomous Reconnaissance System for a Tracked Robot in Multi-floor Indoor Environments with Stairs (다층 실내 환경에서 계단 극복이 가능한 궤도형 로봇의 신뢰성 있는 자율 주행 정찰 시스템)

  • Roh, Juhyeong;Kim, Boseong;Kim, Dokyeong;Kim, Jihyeok;Shim, D. Hyunchul
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.2
    • /
    • pp.149-158
    • /
    • 2024
  • This paper presents a robust autonomous navigation and reconnaissance system for tracked robots, designed to handle complex multi-floor indoor environments with stairs. We introduce a localization algorithm that adjusts scan matching parameters to robustly estimate positions and create maps in environments with scarce features, such as narrow rooms and staircases. Our system also features a path planning algorithm that calculates distance costs from surrounding obstacles, integrated with a specialized PID controller tuned to the robot's differential kinematics for collision-free navigation in confined spaces. The perception module leverages multi-image fusion and camera-LiDAR fusion to accurately detect and map the 3D positions of objects around the robot in real time. Through practical tests in real settings, we have verified that our system performs reliably. Based on this reliability, we expect that our research team's autonomous reconnaissance system will be practically utilized in actual disaster situations and environments that are difficult for humans to access, thereby making a significant contribution.
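The paper's planner and controller are its own; as a generic illustration of the last step, the mapping from a PID heading command to differential-drive wheel speeds might look like the following sketch (gains, track width, and names are hypothetical, not the authors' values):

```python
class PID:
    """Textbook PID controller; step() returns the control output for one
    error sample over time interval dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def wheel_speeds(v, omega, track_width):
    """Differential kinematics: linear velocity v and angular velocity omega
    map to (left, right) wheel speeds."""
    return (v - omega * track_width / 2.0,
            v + omega * track_width / 2.0)
```

Here omega would be the PID output on heading error; tuning the gains against the track's actual turning response is the part the paper treats as specialized.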

Landform and Drainage Analysis in Geoje-Do Using GIS (GIS를 이용한 거제도 지형 및 하계 분석)

  • Kim, Woo-Kwan;Lim, Yong-Ho
    • Journal of the Korean association of regional geographers
    • /
    • v.3 no.2
    • /
    • pp.19-35
    • /
    • 1997
  • The purpose of this study is to identify the characteristics of the landform of Geoje-Do using GIS and DTED data. The characteristics are as follows. First, the elevation range of Geoje-Do is 0~580 m, with an average elevation of 124 m. Volcanic and granite regions mainly appear at high elevations, but no outstanding elevation difference according to geology can be found. Second, the slope range is 0~52 degrees, with an average slope of 17.6 degrees. The slopes of the volcanic and granite areas are steeper than those of other regions, but the analysis of the geology of Geoje-Do again shows no outstanding difference in slope. Third, the area ratio of aspect is almost the same in all directions, with the southwest direction highest. In terms of geology, granite is distributed most widely, and volcanic and granite areas occupy 60% of the entire island. Analysis of geology against elevation shows that geology has little relationship with elevation. Analysis of geology and the drainage network shows that streams tend to be well developed in alluvium areas. The drainage network is well developed throughout the island, except for the southeast area. The highest stream order is 4 on the 1:25,000 topographic map. The drainage density of Geoje-Do is very high, at 1.6, and the bifurcation ratio is higher than 4 for all orders. The stream length ratio ranges from 1.24 to 3.25. The greater the stream order, the lower the elevation and the gentler the slope. In this study, we used DTED data and compared it with topographic map data; the comparison showed a small difference between the two. 
Therefore, to use DTED data in landform analysis, a coordinate-matching process is required. This process is very important and takes a very long time, so some preprocessing is required before DTED can be used in landform analysis. DTED data can be obtained very easily, but using it is not simple, because coordinate adjustment is hard work.


Distracted Driver Detection and Characteristic Area Localization by Combining CAM-Based Hierarchical and Horizontal Classification Models (CAM 기반의 계층적 및 수평적 분류 모델을 결합한 운전자 부주의 검출 및 특징 영역 지역화)

  • Go, Sooyeon;Choi, Yeongwoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.439-448
    • /
    • 2021
  • Driver negligence accounts for the largest proportion of the causes of traffic accidents, and research to detect it is continuously being conducted. This paper proposes a method to accurately detect a distracted driver and localize the driver's most characteristic parts. The proposed method hierarchically constructs a CAM-based CNN basic model that classifies 10 classes to detect driver distraction, plus 4 subclass models for detailed classification of classes that have confusing or common feature areas in the basic model. The classification result output by each model can be considered a new feature indicating the degree of matching with the CNN feature maps, and classification accuracy is improved by horizontally combining these results and learning from them. In addition, by combining the heat-map results reflecting the classification results of the basic and detailed classification models, the characteristic areas of attention in the image are found. The proposed method obtained an accuracy of 95.14% in an experiment using the State Farm data set, which is 2.94% higher than 92.2%, the highest accuracy among previous results on this data set. The experiments also confirmed that more meaningful and accurate attention areas were found than when only the basic model was used.
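A CAM (Class Activation Map) is the classifier-weighted sum of the network's final convolutional feature maps for the target class; higher values mark regions that drove the classification. A minimal sketch of that computation, independent of the paper's specific models (names illustrative):

```python
def class_activation_map(feature_maps, class_weights):
    """CAM: weighted sum of final-layer conv feature maps, where
    class_weights are the classifier weights of the target class.
    feature_maps is a list of HxW grids, one per channel."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for r in range(h):
            for c in range(w):
                cam[r][c] += wgt * fmap[r][c]
    return cam
```

Combining the CAMs of the basic and subclass models, as the paper does, then reduces to merging such grids (e.g. averaging or taking the elementwise maximum) before locating the peak region.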

A Study on the Construction of Near-Real Time Drone Image Preprocessing System to use Drone Data in Disaster Monitoring (재난재해 분야 드론 자료 활용을 위한 준 실시간 드론 영상 전처리 시스템 구축에 관한 연구)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.3
    • /
    • pp.143-149
    • /
    • 2018
  • Recently, due to large-scale damage from natural disasters caused by global climate change, monitoring systems applying remote sensing technology are being constructed in disaster areas. Among remote sensing platforms, the drone has been actively used in the private sector thanks to recent technological developments, and has been applied in disaster areas owing to advantages such as timeliness and economy. This paper deals with the development of a preprocessing system that can map drone image data in near-real time, as a basis for constructing a disaster monitoring system using drones. Our system is based on the SURF algorithm, one of the computer vision technologies, and aims to perform the desired correction through feature point matching between reference images and shot images. The study areas are the lower part of the Gahwa River and the Daecheong Dam basin. The former area has many characteristic points for matching, whereas the latter has relatively few, so it is possible to test effectively whether the system can be applied in various environments. The results show that the accuracy of the geometric correction is 0.6 m and 1.7 m in the two areas, respectively, and that the processing time is about 30 seconds per scene. This indicates that the applicability of this study may be high in disaster areas requiring timeliness. However, when no reference image exists or its accuracy is low, the correction quality decreases, which is a limitation.
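The feature point matching step can be illustrated with a Lowe-style ratio test over descriptor vectors. This sketch is descriptor-agnostic (the paper uses SURF); the function name and the 0.8 threshold are illustrative assumptions:

```python
def ratio_test_matches(ref_desc, img_desc, ratio=0.8):
    """Match each reference descriptor to its nearest image descriptor,
    keeping the match only when the nearest neighbour is clearly better
    than the second nearest (squared-distance ratio test)."""
    matches = []
    for i, d in enumerate(ref_desc):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(d, e)), j)
            for j, e in enumerate(img_desc)
        )
        if len(dists) >= 2 and dists[0][0] < (ratio ** 2) * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

In practice the surviving pairs would feed a robust transform estimate (e.g. RANSAC) to produce the geometric correction between the reference and shot images.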

Automated Improvement of RapidEye 1-B Geo-referencing Accuracy Using 1:25,000 Digital Maps (1:25,000 수치지도를 이용한 RapidEye 위성영상의 좌표등록 정확도 자동 향상)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.5
    • /
    • pp.505-513
    • /
    • 2014
  • RapidEye can acquire 6.5 m spatial resolution satellite imagery with high temporal resolution (daily revisit), based on its constellation of five satellites. The image products are available at two processing levels, Basic 1B and Ortho 3A. Basic 1B images have radiometric and sensor corrections applied and include RPC (Rational Polynomial Coefficients) data. In Korea, the geometric accuracy of RapidEye imagery can be improved using the national digital maps that have been built at various scales. In this paper, we present a fully automated procedure to georegister the 1B data using 1:25,000 digital maps. Map layers are selected if they appear clearly in the RapidEye image, and the selected layers are projected via the RPCs into the RapidEye 1B image space to generate vector images. Automated edge-based matching between the vector image and the RapidEye image then refines the RPCs. The experimental results showed an accuracy improvement from 2.8 to 0.8 pixels in RMSE when compared to the maps.