• Title/Abstract/Keyword: 3D localization

Search results: 360

오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법 (Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay)

  • 권방현;손은호;김영철;정길도
    • 제어로봇시스템학회논문지, Vol. 12 No. 4, pp. 389-394, 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it may move in the wrong direction or be damaged by collisions with surrounding obstacles. There are numerous approaches to self-localization, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has focused on reducing the noise; however, accuracy remains limited because most of this work relies on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, while others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
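
The angular-separation step described in the abstract above can be illustrated with a short sketch. This is a minimal example under a pinhole-camera assumption; the pixel coordinates, focal length, and function names are hypothetical and not taken from the paper.

```python
import numpy as np

def bearing_ray(u, v, f, cx, cy):
    """Unit line-of-sight ray for pixel (u, v) in a pinhole camera
    with focal length f (pixels) and principal point (cx, cy)."""
    ray = np.array([u - cx, v - cy, f], dtype=float)
    return ray / np.linalg.norm(ray)

def angular_separation(p1, p2, f, cx=0.0, cy=0.0):
    """Angle (radians) between the lines of sight to two image points."""
    r1 = bearing_ray(*p1, f, cx, cy)
    r2 = bearing_ray(*p2, f, cx, cy)
    return np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0))

# Example: three detected landmark centers (pixels) and a focal length of 800 px
landmarks = [(120.0, 240.0), (320.0, 250.0), (540.0, 245.0)]
f = 800.0
a12 = angular_separation(landmarks[0], landmarks[1], f)
a23 = angular_separation(landmarks[1], landmarks[2], f)
print(np.degrees([a12, a23]))  # angular separations used for the landmark resection
```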

3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정 (Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image)

  • 정태기;송종화;임준혁;이병현;지규인
    • 제어로봇시스템학회논문지, Vol. 22 No. 12, pp. 1061-1067, 2016
  • The vehicle's position is essential for autonomous driving. In downtown areas, however, GPS position errors arise from multipath caused by tall buildings. In this paper, the GPS position error is corrected using a camera sensor and a highly accurate map built with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping and matched against a map that stores 3D-Lidar intensity (reflectivity). The performance of this method was compared with the traditional approach, which converts the map into a pinhole-camera image and then matches it against the input image. As a result, the longitudinal error declined by 49% and the computational complexity by 90%.
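
A minimal sketch of the inverse-perspective-mapping step, assuming a planar road and a homography estimated from four known ground points; the point coordinates, file names, and output size below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Four pixel positions of known points on the road plane (placeholder values)
src_px = np.float32([[420, 480], [860, 480], [1100, 700], [180, 700]])
# Their desired positions in the top-view (bird's-eye) image, in pixels
dst_px = np.float32([[200, 0], [400, 0], [400, 600], [200, 600]])

H = cv2.getPerspectiveTransform(src_px, dst_px)   # road-plane homography

frame = cv2.imread("camera_frame.png")            # forward-facing camera image
top_view = cv2.warpPerspective(frame, H, (600, 600))
# top_view can now be correlated against the lidar reflectivity map tile
cv2.imwrite("ipm_top_view.png", top_view)
```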

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • 한국정보통신학회:학술대회논문집, 한국정보통신학회 2021년도 추계학술대회, pp. 422-424, 2021
  • This paper presents an approach that fuses multiple RGB cameras, used for deep-learning-based visual object recognition with a convolutional neural network, with a 3D Light Detection and Ranging (LiDAR) sensor to observe the environment and register detections into a 3D world, estimating distance and position in the form of a point-cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional-neural-network algorithm chosen to mitigate this problem must also suit the capacity of the hardware. The localization of the classified detected objects is derived from the 3D point-cloud environment. The LiDAR point-cloud data are first parsed, and the algorithm used is based on 3D Euclidean clustering, which localizes the objects accurately. We evaluated the method on our own dataset, collected with a VLP-16 and multiple cameras, and the results demonstrate the effectiveness of the method and the multi-sensor fusion strategy.
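
A minimal sketch of Euclidean clustering on a LiDAR point cloud using a KD-tree radius search; the distance threshold, minimum cluster size, and the random placeholder cloud are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tol=0.5, min_size=10):
    """Group 3D points into clusters whose members are chained by neighbors within `tol` meters."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

# points: Nx3 array, e.g. from a VLP-16 scan with the ground plane already removed
points = np.random.rand(500, 3) * 3.0            # placeholder cloud for illustration
for i, c in enumerate(euclidean_clusters(points)):
    print(f"cluster {i}: {len(c)} points, centroid {points[c].mean(axis=0)}")
```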


VRML 영상오버레이기법을 이용한 로봇의 Self-Localization (VRML image overlay method for Robot's Self-Localization)

  • 손은호;권방현;김영철;정길도
    • 대한전기학회:학술대회논문집, 대한전기학회 2006년도 심포지엄 논문집 정보 및 제어부문, pp. 318-320, 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it may move in the wrong direction or be damaged by collisions with surrounding obstacles. There are numerous approaches to self-localization, using different sensing modalities (vision, laser range finders, ultrasonic sonar). Since sensor information is generally uncertain and noisy, much research has focused on reducing the noise; however, accuracy remains limited because most of this work relies on statistical approaches. The goal of our research is to measure the robot's location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, while others use specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight to the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
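
The final overlay step, blending the rendered VRML view with the live camera frame, can be sketched as a simple alpha blend; the file names and blend weights are placeholders, and the actual rendering of the VRML model is outside this snippet.

```python
import cv2

camera_frame = cv2.imread("camera_frame.png")     # live vision image
vrml_render = cv2.imread("vrml_render.png")       # VRML model rendered from the estimated pose

# Resize the rendered view to the camera resolution and alpha-blend the two
vrml_render = cv2.resize(vrml_render, (camera_frame.shape[1], camera_frame.shape[0]))
overlay = cv2.addWeighted(camera_frame, 0.6, vrml_render, 0.4, 0.0)
cv2.imwrite("overlay.png", overlay)               # 2D scene overlaid with the VRML scene
```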


Localization for Mobile Robot Using Vertical Lines

  • Kang, Chang-Hun;Ahn, Hyun-Sik
    • 제어로봇시스템학회:학술대회논문집, 제어로봇시스템학회 2003년도 ICCAS, pp. 793-797, 2003
  • In this paper, we present a self-localization method for mobile robots that uses the vertical line features of indoor environments. Given a 2D map containing feature points and color information, a mobile robot moves toward its destination and acquires images of surroundings containing vertical line edges with a single camera. From each image, vertical line edges are detected, and pattern vectors representing the averaged color values of the left and right regions of each line segment are computed. The pattern vectors are matched with the feature points of the map using the color information and the geometrical relationship of the points. From the perspective transformation of the corresponding points, nonlinear equations are derived, and localization is carried out by solving these equations with Newton's method. Experimental results show that the proposed method, using a single view, is simple and applicable to indoor environments.
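
The Newton-type solve for the robot pose can be sketched as follows. This is a simplified bearing-only formulation with simulated measurements; the landmark positions, ground-truth pose, and numerical Jacobian are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

# Known 2D map positions of three matched vertical lines (placeholder values)
landmarks = np.array([[2.0, 1.0], [4.0, 3.0], [1.0, 4.0]])

def predicted_bearings(pose):
    """Bearings to the landmarks as seen from pose (x, y, theta)."""
    x, y, th = pose
    return np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - th

true_pose = np.array([1.0, 0.5, 0.3])          # used only to simulate the measurements
measured = predicted_bearings(true_pose)

def residual(pose):
    d = predicted_bearings(pose) - measured
    return np.arctan2(np.sin(d), np.cos(d))    # wrap angle differences to [-pi, pi]

def newton_solve(pose, iters=30, eps=1e-6):
    for _ in range(iters):
        r = residual(pose)
        J = np.zeros((3, 3))                   # numerical Jacobian (3 equations, 3 unknowns)
        for j in range(3):
            dp = np.zeros(3)
            dp[j] = eps
            J[:, j] = (residual(pose + dp) - r) / eps
        delta = np.linalg.solve(J, -r)         # Newton update
        pose = pose + delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return pose

print(newton_solve(np.zeros(3)))               # recovers approximately true_pose
```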


Visual Positioning System based on Voxel Labeling using Object Simultaneous Localization And Mapping

  • Jung, Tae-Won;Kim, In-Seon;Jung, Kye-Dong
    • International Journal of Advanced Culture Technology, Vol. 9 No. 4, pp. 302-306, 2021
  • Indoor localization is one of the basic elements of location-based services such as indoor navigation, location-based precision marketing, spatial recognition for robotics, augmented reality, and mixed reality. We propose a voxel-labeling-based visual positioning system using object simultaneous localization and mapping (SLAM). Our method determines a location through single-image 3D cuboid object detection and object SLAM for indoor navigation, then builds an indoor map, addresses it with voxels, and matches it against a defined space. First, high-quality cuboid proposals are generated by sampling 2D bounding boxes and vanishing points for single-image object detection. Then, after jointly optimizing the poses of the cameras, objects, and points, the system operates as a Visual Positioning System (VPS) by matching against the pose information of the objects in the voxel database. Our method provides the user with the needed spatial information, with improved location accuracy and direction estimation.
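
The voxel-addressing and matching idea can be illustrated with a short sketch: a 3D position is quantized to an integer voxel index and looked up in a database of labeled object poses. The voxel size, origin, database contents, and object labels are hypothetical placeholders, not data from the paper.

```python
import numpy as np

VOXEL_SIZE = 0.5                      # meters per voxel (illustrative value)
MAP_ORIGIN = np.array([0.0, 0.0, 0.0])

def voxel_key(position):
    """Address a 3D position with an integer voxel index."""
    idx = np.floor((np.asarray(position) - MAP_ORIGIN) / VOXEL_SIZE).astype(int)
    return tuple(idx)

# Hypothetical voxel database: voxel index -> labeled object pose in the defined space
voxel_db = {
    voxel_key([3.2, 1.1, 0.8]): {"label": "chair", "pose": [3.2, 1.1, 0.8, 0.0]},
    voxel_key([5.7, 2.4, 0.9]): {"label": "desk",  "pose": [5.7, 2.4, 0.9, 1.57]},
}

# A cuboid landmark estimated by object SLAM is matched through its voxel address
estimated_center = [3.05, 1.25, 0.75]
hit = voxel_db.get(voxel_key(estimated_center))
print(hit)   # -> the stored 'chair' entry, anchoring the user's position in the defined space
```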

DEM과 산영상을 이용한 비전기반 카메라 위치인식 (Vision-based Camera Localization using DEM and Mountain Image)

  • 차정희
    • 한국컴퓨터정보학회논문지, Vol. 10 No. 6, pp. 177-186, 2005
  • This paper proposes a vision-based camera localization method that generates 3D information by mapping a DEM (Digital Elevation Model) to mountain images. In general, the image features used for recognition change with the camera view, so the amount of information grows. In this paper, geometric invariant features that do not depend on the camera view are extracted, accurate correspondences are computed using a proposed similarity evaluation function and a Graham search method, and the camera's extrinsic parameters are then calculated. A method for generating 3D information using graphics theory and visual cues is also proposed. The proposed approach consists of three stages: invariant point-feature extraction, 3D information generation, and extrinsic-parameter estimation. In the experiments, the superiority of the proposed method is demonstrated by comparing and analyzing it against existing methods.
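
Once 2D-3D correspondences between the mountain image and the DEM are available, the extrinsic parameters can be estimated as in the sketch below, which uses OpenCV's solvePnP as a stand-in for the paper's own computation; the DEM coordinates, simulated pose, and intrinsics are placeholders.

```python
import cv2
import numpy as np

# 3D points sampled from the DEM in a local frame (meters) - placeholder values
object_pts = np.array([[10.0, 20.0,  5.0], [40.0, 25.0,  8.0], [25.0, 60.0, 12.0],
                       [60.0, 55.0,  6.0], [35.0, 40.0, 15.0], [15.0, 50.0,  9.0]])

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])      # assumed pinhole intrinsics

# Simulate matched image points from a known pose (in practice they come from feature matching)
rvec_true = np.zeros(3)
tvec_true = np.array([0.0, 0.0, 80.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Recover the extrinsic parameters from the 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts.reshape(-1, 2), K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()    # camera location in DEM coordinates
print(ok, camera_position)                 # approximately (0, 0, -80)
```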


UUV의 수중 도킹을 위한 전자기파 신호 기반의 위치인식 센서 개발 (The Underwater UUV Docking with 3D RF Signal Attenuation based Localization)

  • 곽경민;박대길;정완균;김진현
    • 센서학회지, Vol. 26 No. 3, pp. 199-203, 2017
  • In this paper, we develop an underwater localization system for underwater robot docking based on an electromagnetic wave attenuation model. Electromagnetic waves are generally considered unusable in underwater environments. However, previous studies of their underwater attenuation characteristics concluded that the attenuation pattern is uniform, and an accurate attenuation model was proposed and verified in 3-dimensional space using an omnidirectional antenna. In this paper, a docking structure and a localization sensor system are developed for the widely used cone-type docking mechanism. First, we fabricated electromagnetic-wave range-sensor transmitter modules, and a mobile sensor node was mounted on an unmanned underwater vehicle (UUV). The mobile node measures the received signal strength (RSS) from four fixed nodes, and the obtained RSS data are converted into distances using the 3-dimensional EM-wave attenuation model. The relative localization between the docking area and the underwater robot is then obtained with an optimization algorithm. Finally, experimental results comparing the actual position of the mobile node with the position estimated through the model show the feasibility of the proposed localization system for docking guidance.
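
A minimal sketch of the RSS-to-distance conversion and the 3D position estimation, using a generic log-distance attenuation model and least-squares trilateration; the model constants, anchor positions, and RSS values are illustrative, not the calibrated values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic log-distance attenuation model: RSS(d) = P0 - 10*n*log10(d/d0)
P0, N_EXP, D0 = -40.0, 3.5, 1.0          # placeholder calibration constants

def rss_to_distance(rss):
    """Invert the attenuation model to get range (meters) from RSS (dBm)."""
    return D0 * 10.0 ** ((P0 - rss) / (10.0 * N_EXP))

# Fixed transmitter nodes around the docking cone (meters, placeholders)
anchors = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
rss_measured = np.array([-52.0, -55.0, -54.0, -57.0])
ranges = rss_to_distance(rss_measured)

def residuals(p):
    """Difference between anchor distances at position p and the RSS-derived ranges."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]))
print("estimated UUV position:", sol.x)
```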

모바일 로봇에서 RFID를 이용한 지도작성 알고리즘 개발 (Development of Map Building Algorithm for Mobile Robot by Using RFID)

  • 김시습;선정안;기창두
    • 한국생산제조학회지, Vol. 20 No. 2, pp. 133-138, 2011
  • RFID system can be used to improve object recognition, map building and localization for robot area. A novel method of indoor navigation system for a mobile robot is proposed using RFID technology. The mobile robot With a RFID reader and antenna is able to find what obstacles are located where in circumstance and can build the map similar to indoor circumstance by combining RFID information and distance data obtained from sensors. Using the map obtained, the mobile robot can avoid obstacles and finally reach the desired goal by $A^*$ algorithm. 3D map which has the advantage of robot navigation and manipulation is able to be built using z dimension of products. The proposed robot navigation system is proved to apply for SLAM and path planning in unknown circumstance through numerous experiments.