• Title/Summary/Keyword: Mobile Mapping Systems


Mobility Support Architecture in Locator-ID Separation based Future Internet using Proxy Mobile IPv6

  • Seok, Seung-Joon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.209-217
    • /
    • 2014
  • Of the several approaches proposed for the future Internet, separating the two roles of the IP address into a locator and an identifier is considered a highly likely solution, and the IETF's LISP (Locator ID Separation Protocol) has been proposed for this architecture. In particular, the LISP model easily supports device mobility through a simple update of the information at the MS (Mapping Server), without a separate protocol. In recent years, several models supporting device mobility using these LISP attributes have emerged; however, most of them cannot support seamless mobility because of the frequent MS information updates and the time these updates require. In this paper, the PMIPv6 (Proxy Mobile IPv6) model is applied for mobility support in the LISP model. PMIPv6 supports mobility on the network side without any help from the device; we therefore redefine the behavior of the functional modules (LMA, MAG, and MS) to fit the LISP environment and present specific procedures for device registration, data transfer, route optimization, and handover. In addition, our approach improves communication performance by using three tunnels, identified by locators, between the mobile node and the correspondent node, and a route-optimized tunnel between the MN's MAG and the CN's MAG. Finally, it allows seamless mobility through a carefully designed handover procedure.
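The core locator-ID idea above can be sketched in a few lines: a Mapping Server keeps identifier-to-locator bindings, and a handover reduces to updating one binding. This is an illustrative simplification (class and message names are assumptions), not the IETF LISP wire protocol.

```python
# Minimal sketch of locator-ID separation: the Mapping Server (MS) binds
# an EID (identifier) to an RLOC (locator); mobility is just a rebind.
# Names are illustrative, not the IETF LISP message formats.

class MappingServer:
    def __init__(self):
        self._bindings = {}  # EID (device identifier) -> RLOC (current locator)

    def register(self, eid, rloc):
        # Analogous to a Map-Register: bind a device to its current locator.
        self._bindings[eid] = rloc

    def resolve(self, eid):
        # Analogous to a Map-Request: look up the device's current locator.
        return self._bindings.get(eid)

ms = MappingServer()
ms.register("mn-01", "rloc-mag-A")   # initial attachment
assert ms.resolve("mn-01") == "rloc-mag-A"
ms.register("mn-01", "rloc-mag-B")   # handover: only the MS entry changes
assert ms.resolve("mn-01") == "rloc-mag-B"
```

The limitation the paper targets is visible even here: every handover requires a round trip to the MS, which motivates delegating mobility signaling to PMIPv6 entities instead.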

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.2
    • /
    • pp.168-177
    • /
    • 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, a robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could move in a wrong direction and be damaged by collision with surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural-network pattern-matching techniques are applied to find the robot's location. After the self-positioning procedure, the 2D scene from the vision system is overlaid onto a VRML scene. This paper describes how the self-positioning is realized and shows the overlay between the 2D and VRML scenes. The suggested method defines a robot's path successfully. An experiment applying the suggested algorithm to a mobile robot has been performed, and the result shows good path tracking.
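The landmark-identification step described above can be illustrated with a plain template match; the paper uses image processing plus neural-network pattern matching, so the normalized cross-correlation below is a hedged stand-in for that matching stage, run on synthetic arrays.

```python
import numpy as np

# Illustrative landmark localization by normalized cross-correlation (NCC).
# The paper's actual matcher is a neural-network pattern matcher; NCC here
# only demonstrates the "find the landmark's image position" step.

def ncc(patch, template):
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def find_landmark(image, template):
    # Exhaustive scan; returns the (x, y) of the best-matching window.
    th, tw = template.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best

rng = np.random.default_rng(0)
image = rng.random((40, 40))
template = image[10:18, 22:30].copy()   # landmark known to sit at (22, 10)
(x, y), score = find_landmark(image, template)
```

Once the landmark's image coordinates are known, the pose estimate can be refined and the 2D view registered against the VRML scene.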

An Efficient Outdoor Localization Method Using Multi-Sensor Fusion for Car-Like Robots (다중 센서 융합을 사용한 자동차형 로봇의 효율적인 실외 지역 위치 추정 방법)

  • Bae, Sang-Hoon;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.10
    • /
    • pp.995-1005
    • /
    • 2011
  • An efficient outdoor local localization method using multi-sensor fusion with an MU-EKF (Multi-Update Extended Kalman Filter) is suggested for car-like mobile robots. In outdoor environments, where mobile robots are used for exploration or military services, accurate localization with multiple sensors is indispensable. In this paper, a multi-sensor fusion outdoor localization algorithm is proposed that fuses sensor data from an LRF (Laser Range Finder), encoders, and GPS. First, encoder data is used for the prediction stage of the MU-EKF. Then the LRF data obtained by scanning the environment is used to extract objects, and the robot position and orientation are estimated by matching these objects against the map, as the first update stage of the MU-EKF. This estimate is finally fused with GPS as the second update stage. The MU-EKF algorithm can efficiently fuse three or more sensor streams, even with different sampling periods, and ensures high localization accuracy. The validity of the proposed algorithm is demonstrated via experiments.
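The predict-then-double-update structure described above can be sketched as a standard EKF whose correction step is simply called once per sensor, in sequence. The unicycle motion model, measurement models, and noise values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

# Sketch of the MU-EKF flow: one encoder-driven prediction, then sequential
# updates from an LRF-derived pose and a GPS position fix. State is (x, y, theta).

def predict(x, P, v, w, dt, Q):
    # Unicycle motion model driven by encoder velocity v and turn rate w.
    th = x[2]
    x = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Standard EKF correction; the "multi-update" is just calling this
    # once per sensor, each with its own H and R.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(3); P = np.eye(3) * 0.1
Q = np.eye(3) * 1e-3
x, P = predict(x, P, v=1.0, w=0.0, dt=0.1, Q=Q)                    # encoder
H_lrf = np.eye(3); R_lrf = np.eye(3) * 0.05
x, P = update(x, P, np.array([0.1, 0.0, 0.0]), H_lrf, R_lrf)       # LRF map match
H_gps = np.array([[1., 0., 0.], [0., 1., 0.]]); R_gps = np.eye(2) * 2.0
x, P = update(x, P, np.array([0.12, 0.01]), H_gps, R_gps)          # GPS fix
```

Because each sensor contributes through its own `update` call, sensors with different sampling periods simply trigger their update whenever a measurement arrives, which is the property the abstract highlights.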

Loosely Coupled LiDAR-visual Mapping and Navigation of AMR in Logistic Environments (실내 물류 환경에서 라이다-카메라 약결합 기반 맵핑 및 위치인식과 네비게이션 방법)

  • Choi, Byunghee;Kang, Gyeongsu;Roh, Yejin;Cho, Younggun
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.4
    • /
    • pp.397-406
    • /
    • 2022
  • This paper presents an autonomous mobile robot (AMR) system and operation algorithms for logistics and factory facilities that require no magnetic-line installation. Unlike widely used AMR systems, we propose an EKF-based loosely coupled fusion of LiDAR measurements and visual markers. Our method first constructs an occupancy grid and a visual-marker map in the mapping process and uses these prebuilt maps for precise localization. We also developed a waypoint-based navigation pipeline for robust autonomous operation in unconstrained environments. The proposed system estimates the robot pose by updating the state with the fused visual-marker and LiDAR measurements. Finally, we tested the proposed method in indoor environments and existing factory facilities. The experimental results compare the performance of our system with a well-known LiDAR-based localization and navigation system.
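The waypoint-based navigation pipeline mentioned above can be reduced to a small loop: steer toward the current waypoint and advance to the next one once within a tolerance radius. The proportional heading controller and all gains below are illustrative assumptions, not the paper's pipeline.

```python
import math

# Sketch of waypoint-based navigation: follow a list of waypoints with a
# clamped heading correction, switching waypoints inside a tolerance radius.

def step_toward(pose, wp, v=0.5, dt=0.1):
    x, y, th = pose
    target = math.atan2(wp[1] - y, wp[0] - x)
    err = (target - th + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    th += max(-1.0, min(1.0, err))        # clamped heading correction (assumed)
    return (x + v * dt * math.cos(th), y + v * dt * math.sin(th), th)

def follow(pose, waypoints, tol=0.1, max_steps=2000):
    i = 0
    for _ in range(max_steps):
        if i == len(waypoints):
            break                          # all waypoints reached
        wx, wy = waypoints[i]
        if math.hypot(wx - pose[0], wy - pose[1]) < tol:
            i += 1                         # inside tolerance: next waypoint
            continue
        pose = step_toward(pose, (wx, wy))
    return pose, i

pose, reached = follow((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```

In the full system, the pose fed into this loop would come from the EKF fusion of LiDAR and visual-marker measurements rather than from dead reckoning alone.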

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 1) Theoretical Principle

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.191-204
    • /
    • 2014
  • In recent years, multi-camera systems have been recognized as an affordable alternative for the collection of 3D spatial data from physical surfaces. The collected data can be applied to different mapping applications (e.g., mobile mapping and the mapping of inaccessible locations) or metrology applications (e.g., industrial, biomedical, and architectural). To fully exploit the potential accuracy of these systems and ensure successful manipulation of the involved cameras, a careful system calibration should be performed prior to data collection. The calibration of a multi-camera system is accomplished when the individual cameras are calibrated and the geometric relationships among the different system components are defined. In this paper, a new single-step approach is introduced for the calibration of a multi-camera system (i.e., individual camera calibration and estimation of the lever-arm offsets and boresight angles among the system components). In this approach, one of the cameras is set as the reference camera, and the system mounting parameters are defined relative to it. The proposed approach is easy to implement and computationally efficient. Its major advantage over available multi-camera calibration approaches is the flexibility to be applied to either directly or indirectly geo-referenced multi-camera systems. The feasibility of the proposed approach is verified through experimental results using real data collected by a newly developed, indirectly geo-referenced multi-camera system.
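The mounting parameters the abstract refers to have a simple geometric meaning: each non-reference camera j is related to the reference camera by a lever-arm offset r_j and a boresight rotation R_j, so a point in camera j's frame maps into the reference frame as x_ref = R_j x_j + r_j. The numbers below are illustrative, not calibration results.

```python
import numpy as np

# Sketch of applying mounting parameters: transform a point from a
# non-reference camera frame into the reference camera frame using the
# boresight rotation R_j and lever-arm offset r_j (values are assumed).

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R_j = rot_z(np.pi / 2)             # boresight: camera j yawed 90 degrees
r_j = np.array([0.5, 0.0, 0.0])    # lever-arm: 0.5 m along the reference x-axis

x_j = np.array([1.0, 0.0, 0.0])    # a point 1 m along camera j's x-axis
x_ref = R_j @ x_j + r_j            # the same point in the reference frame
```

Estimating R_j and r_j for all cameras simultaneously with the interior orientation parameters is what makes the paper's approach a single-step calibration.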

Extraction of Different Types of Geometrical Features from Raw Sensor Data of Two-dimensional LRF (2차원 LRF의 Raw Sensor Data로부터 추출된 다른 타입의 기하학적 특징)

  • Yan, Rui-Jun;Wu, Jing;Yuan, Chao;Han, Chang-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.3
    • /
    • pp.265-275
    • /
    • 2015
  • This paper describes extraction methods for five different types of geometrical features (lines, arcs, corners, polynomial curves, and NURBS curves) from the raw data obtained by a two-dimensional laser range finder (LRF). Natural features, together with their covariance matrices, play a key role in feature-based simultaneous localization and mapping (SLAM), where they are used to represent the environment and correct the pose of the mobile robot. The covariance matrices of these geometrical features are derived in detail from the raw sensor data and the uncertainty of the LRF. Several comparisons are made and discussed to highlight the advantages and drawbacks of each type of geometrical feature. Finally, features extracted from raw sensor data obtained with an LRF in an indoor environment are used to validate the proposed extraction methods.
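The simplest of the five feature types, the line, can be extracted by a total least squares fit in Hessian normal form (angle alpha, distance r), with the smallest eigenvalue of the point covariance serving as a residual measure. This is a generic sketch on synthetic points, not the paper's full derivation of the feature covariance.

```python
import numpy as np

# Sketch of line-feature extraction from 2D LRF points: total least squares
# fit in Hessian normal form. The line normal is the eigenvector of the
# point covariance with the smallest eigenvalue.

def fit_line(points):
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)
    evals, evecs = np.linalg.eigh(cov)      # ascending eigenvalues
    normal = evecs[:, 0]                    # direction of least spread
    alpha = np.arctan2(normal[1], normal[0])
    r = float(mean @ normal)
    if r < 0:                               # keep the distance non-negative
        r, alpha = -r, alpha + np.pi
    return alpha, r, float(evals[0])        # angle, distance, residual variance

xs = np.linspace(0.0, 2.0, 50)
pts = np.column_stack([xs, np.full_like(xs, 1.0)])   # horizontal line y = 1
alpha, r, res = fit_line(pts)
```

For a feature-based SLAM front end, the residual variance (and, more rigorously, the propagated LRF range/bearing uncertainty) would populate the feature's covariance matrix.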

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.442-448
    • /
    • 2011
  • There are many research results on self-localization techniques for mobile robots. In this paper we present a self-localization technique based on features of the ceiling, viewed through a fisheye lens. Features obtained by SIFT (Scale Invariant Feature Transform) are matched between the previous image and the current image, and an optimal matching function is then derived. A fisheye lens naturally introduces distortion into its images, which must be corrected by a calibration algorithm. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we show its feasibility in these indoor settings.
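The distortion-correction step can be illustrated with the equidistant projection model, a common fisheye model in which the image radius grows linearly with the incidence angle (r = f·theta), whereas a pinhole camera would give r = f·tan(theta). The paper does not specify its lens model, so the model, focal length, and test point below are all assumptions.

```python
import math

# Hedged sketch of fisheye radial undistortion under the equidistant model
# r = f * theta. Mapping a fisheye radius back to the equivalent pinhole
# (perspective) radius stretches points away from the image center.

def undistort_radius(r_fisheye, f):
    theta = r_fisheye / f          # incidence angle implied by the fisheye radius
    return f * math.tan(theta)     # radius the pinhole model would produce

f = 300.0        # focal length in pixels (assumed)
r_in = 150.0     # distorted radius, i.e. theta = 0.5 rad
r_out = undistort_radius(r_in, f)
```

After such a correction, standard planar geometry (and SIFT matching between consecutive ceiling views) can be applied as if the images came from a perspective camera.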

Determination of 3D Object Coordinates from Overlapping Omni-directional Images Acquired by a Mobile Mapping System (모바일매핑시스템으로 취득한 중첩 전방위 영상으로부터 3차원 객체좌표의 결정)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.3
    • /
    • pp.305-315
    • /
    • 2010
  • This research aims to develop a method to determine the 3D coordinates of object points from overlapping omni-directional images acquired by a ground mobile mapping system, and to assess their accuracy. In the proposed method, we first define an individual coordinate system for each sensor and for the object space and determine the geometric relationships between these systems. Based on these systems and their relationships, we derive, for a point in an omni-directional image, a straight line of candidate corresponding object points, and we determine the 3D coordinates of the object point by intersecting the pair of straight lines derived from a pair of matched image points. For accuracy assessment and analysis, we compared the object coordinates determined by the proposed method with those measured by GPS and a total station. According to the experimental results, with an appropriate baseline length and suitable relative positions of cameras and objects, we can determine the relative coordinates of an object point with an accuracy of several centimeters. The accuracy of the absolute coordinates ranges from several centimeters to 1 m due to systematic errors. In the future, we plan to improve the absolute accuracy by determining the relationship between the camera and GPS/INS coordinate systems more precisely and by calibrating the omni-directional camera.
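The intersection step described above is ray triangulation: each matched image point defines a ray from its exposure position, and the object point is taken where the two rays meet (in practice, the midpoint of the shortest segment between them, since noisy rays are skew). The camera positions and target below are synthetic.

```python
import numpy as np

# Sketch of two-ray triangulation: find the midpoint of the shortest
# segment between two rays p_i + t_i * d_i. With noise-free, intersecting
# rays this recovers the object point exactly.

def triangulate(p1, d1, p2, d2):
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares solve for t1, t2 minimizing |p1 + t1*d1 - (p2 + t2*d2)|.
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    q1 = p1 + t[0] * d1
    q2 = p2 + t[1] * d2
    return (q1 + q2) / 2.0

p1 = np.array([0.0, 0.0, 0.0])      # first exposure position
p2 = np.array([2.0, 0.0, 0.0])      # second exposure (2 m baseline)
target = np.array([1.0, 5.0, 1.5])  # true object point (synthetic)
X = triangulate(p1, target - p1, p2, target - p2)
```

The abstract's observation about baseline length follows directly from this geometry: a short baseline makes the two rays nearly parallel, so small direction errors produce large depth errors.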

Text Area Detection of Road Sign Images based on IRBP Method (도로표지 영상에서 IRBP 기반의 문자 영역 추출)

  • Chong, Kyusoo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.13 no.6
    • /
    • pp.1-9
    • /
    • 2014
  • Recently, studies have been conducted on image collection and the automatic detection of attribute information using mobile mapping systems. Detecting road-sign attribute information is difficult because of the various sizes and placements of signs and interference from other facilities, such as trees. A text detection method that does not rely on a Korean character template is required to successfully detect the target text when texts of various sizes are present near it. To this end, the method of incremental right-to-left blob projection (IRBP) is suggested, and its potential and improvement are assessed. To evaluate the performance of the IRBP method, it was compared with an existing method that uses Korean templates on 60 road-sign images. The results verify that text detection can be improved with the IRBP method.
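A blob-projection scan of the kind named above can be sketched as follows: starting from the right edge of a binarized sign image, accumulate column-wise foreground counts and grow the text region while columns stay "inked", closing it after a gap. This is a loose illustration of the projection idea only; the paper's actual IRBP algorithm is not reproduced here, and all parameters are assumptions.

```python
import numpy as np

# Illustrative right-to-left column projection over a binary image:
# find the rightmost run of text-bearing columns, tolerating small gaps.

def rightmost_text_span(binary, min_count=1, max_gap=2):
    h, w = binary.shape
    counts = binary.sum(axis=0)          # blob projection: ink per column
    right, gap = None, 0
    for x in range(w - 1, -1, -1):       # incremental right-to-left scan
        if counts[x] >= min_count:
            if right is None:
                right = x                # right edge of the text region
            gap = 0
        elif right is not None:
            gap += 1
            if gap > max_gap:            # region ends after a large gap
                return x + gap, right
    return (0, right) if right is not None else None

img = np.zeros((10, 20), dtype=int)
img[2:8, 12:17] = 1                      # a text-like blob on the right side
span = rightmost_text_span(img)          # (left column, right column)
```

Scanning right to left matters for Korean road signs because the target text typically sits at the right end of the sign panel, so the scan reaches it before any interfering blobs.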

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is recognizing the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn from three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large number of points and drawing layers, increasing the cost and effort of HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting the semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed with various machine learning techniques, including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), using 11 variables covering geometry, color, intensity, and other road-design features. MMS point cloud data for a 130 m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed a high mean decrease in accuracy for the XY dist. variable and a high mean decrease in Gini for the Z dist. variable, both of which relate to road design; thus, variables related to road design contributed significantly to the segmentation of semantic information. These results demonstrate the applicability of machine-learning-based segmentation of MMS point cloud data and will help reduce the cost and effort associated with HD mapping.
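The per-point classification setup described above can be sketched with scikit-learn: per-point features (stand-ins for the geometry, color, intensity, and road-design variables such as Z dist.) fed to a random forest and scored with the average F1 used in the evaluation. The data below are synthetic, not MMS points, and the class structure is a toy assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# Toy version of the segmentation pipeline: features per point -> RF -> F1.
# Three synthetic classes stand in for road/curb/sidewalk; two features
# stand in for the 11 variables used in the study.

rng = np.random.default_rng(42)
n = 300
labels = rng.integers(0, 3, size=n)                  # synthetic class labels
z_dist = labels * 0.15 + rng.normal(0, 0.02, n)      # height-above-road stand-in
intensity = labels * 10 + rng.normal(0, 2.0, n)      # return-intensity stand-in
X = np.column_stack([z_dist, intensity])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[:200], labels[:200])                       # train split
pred = clf.predict(X[200:])                          # test split
f1 = f1_score(labels[200:], pred, average="macro")   # average F1, as in the study
```

The fitted model also exposes `clf.feature_importances_`, the impurity-based (Gini) importance underlying the study's observation that road-design variables dominate.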