• Title/Summary/Keyword: Image-based localization


Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin; Seo, Hoseong; Kim, Pyojin; Lee, Chung-Keun
    • Journal of Advanced Navigation Technology / v.19 no.2 / pp.133-139 / 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides a mobile system to a desired pose; this input is computed from the feature difference between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy over existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location, which is essential for operating unmanned systems in unknown environments. The trend of visual navigation is assessed by examining international research related to these technologies.
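
The odometry branch described above chains frame-to-frame relative poses into a global trajectory; drift arises because each relative estimate is imperfect. A minimal sketch of that pose composition in 2-D (pure NumPy; the example motions are invented for illustration, not taken from the surveyed papers):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 transform for a 2-D pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def compose(relative_poses):
    """Chain frame-to-frame relative poses into a global trajectory."""
    T = np.eye(3)
    trajectory = [T[:2, 2].copy()]
    for rel in relative_poses:
        T = T @ rel            # accumulate the relative motion
        trajectory.append(T[:2, 2].copy())
    return np.array(trajectory)

# Four identical relative motions: forward 1 m, then turn 90 degrees left,
# which traces a unit square and returns to the origin.
rel = se2(1.0, 0.0, np.pi / 2)
traj = compose([rel] * 4)
print(np.round(traj, 6))
```

Any noise in the per-frame estimates accumulates through the matrix products, which is exactly the drift that loop closure in visual SLAM is designed to remove.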

Localization of a Mobile Robot Using the Information of a Moving Object (운동물체의 정보를 이용한 이동로봇의 자기 위치 추정)

  • Roh, Dong-Kyu; Kim, Il-Myung; Kim, Byung-Hwa; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.7 no.11 / pp.933-938 / 2001
  • In this paper, we describe a method for localizing a mobile robot using images of a moving object. The method combines the position observed from dead-reckoning sensors with the position estimated from images captured by a fixed camera. Using the a priori known path of the moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. Since the equations are based on the estimated position, measurement error is always present; the proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, with a Kalman filter scheme applied to the estimation. The effectiveness of the proposed method is demonstrated by simulation.
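
The fusion described in this abstract can be illustrated with a scalar Kalman filter that blends a drifting dead-reckoning input with noisy absolute fixes from a fixed camera. This is a one-dimensional sketch; the noise levels and motion model are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D robot: true position advances 0.1 m per step.
# Dead-reckoning drifts; the fixed camera gives noisy absolute fixes.
true_pos = np.cumsum(np.full(50, 0.1))
odom_step = 0.1 + rng.normal(0.0, 0.02, 50)      # noisy motion input
cam_meas = true_pos + rng.normal(0.0, 0.05, 50)  # noisy camera measurement

x, P = 0.0, 1.0          # state estimate and its variance
Q, R = 0.02**2, 0.05**2  # process and measurement noise variances
est = []
for u, z in zip(odom_step, cam_meas):
    # Predict with the dead-reckoning input.
    x += u
    P += Q
    # Correct with the camera-derived position.
    K = P / (P + R)
    x += K * (z - x)
    P *= (1.0 - K)
    est.append(x)

err = np.abs(np.array(est) - true_pos)
print(f"mean |error| = {err.mean():.3f} m")
```

The filtered estimate tracks the true position more tightly than either sensor alone, which is the point of fusing the two sources.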

Self-Positioning of a Mobile Robot using a Vision System and Image Overlay with VRML (비전 시스템을 이용한 이동로봇 Self-positioning과 VRML과의 영상오버레이)

  • Hyun, Kwon-Bang; To, Chong-Kil
    • Proceedings of the KIEE Conference / 2005.05a / pp.258-260 / 2005
  • We describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment and carries out self-positioning. Image processing and neural-network pattern matching are employed to recognize landmarks placed in the robot's working environment. The self-positioning with the vision system is based on a well-known localization algorithm. After self-positioning, the 2-D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning, shows the result of overlaying the 2-D and VRML scenes, and discusses the advantages expected from overlapping both scenes.

A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho; Zhang, Yuanliang; Kim, Young-Chul; Chong, Kil-To
    • Proceedings of the IEEK Conference / 2006.06a / pp.907-908 / 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural-network pattern matching, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlay of the 2-D and VRML scenes. The method successfully defines a robot's path.

Approaches for Automatic GCP Extraction and Localization in Airborne SAR Images and Some Test Results

  • Tsay, Jaan-Rong; Liu, Pang-Wei
    • Proceedings of the KSRS Conference / 2003.11a / pp.360-362 / 2003
  • This paper presents simple feature-based approaches for fully and/or semi-automatic extraction, selection, and localization (center determination) of ground control points (GCPs) for radargrammetry using airborne synthetic aperture radar (SAR) images. Test results using airborne NASA/JPL TOPSAR images of Taiwan verify that the registration accuracy is about 0.8~1.4 pixels. In about 30 minutes on a personal computer, 1500~3000 GCPs are extracted and their point centers determined in a SAR image of about 512 × 512 pixels.

Iris Localization using the Pupil Center Point based on Deep Learning in RGB Images (RGB 영상에서 딥러닝 기반 동공 중심점을 이용한 홍채 검출)

  • Lee, Tae-Gyun; Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.135-142 / 2020
  • In this paper, we describe an iris localization method for RGB images. Most iris localization methods have been developed for infrared images, so a method for RGB images is required for various applications. The proposed method consists of four stages: i) detection of candidate irises using the circular Hough transform (CHT) on the input image, ii) detection of the pupil center based on deep learning, iii) determination of the iris using the pupil center, and iv) correction of the iris region. Candidate irises are detected in descending order of the number of intersections of center-point candidates after generating the Hough space, and the iris among the candidates is determined from the detected pupil center. In addition, the error due to distortion of the iris shape is corrected by finding new boundary points based on the detected iris center. In experiments, the proposed method improves accuracy by about 27.4% compared to the CHT method.
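
The first stage above rests on circular Hough transform voting: every edge point votes for all circle centers consistent with a given radius, and the accumulator maximum picks the center. A minimal NumPy sketch with a synthetic boundary (single known radius, no deep-learning stage; the sizes are arbitrary):

```python
import numpy as np

def hough_circle_votes(edge_points, radius, shape):
    """Accumulate center votes: each edge point votes for every center
    lying at the given radius from it (the core of the circular Hough
    transform for a fixed radius)."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return acc

# Synthetic "iris boundary": points on a circle of radius 20 at (50, 60).
angles = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
edges = [(50 + 20 * np.sin(a), 60 + 20 * np.cos(a)) for a in angles]
acc = hough_circle_votes(edges, radius=20, shape=(100, 120))
center = np.unravel_index(acc.argmax(), acc.shape)
print(center)
```

In practice the radius is also unknown, so the accumulator gains a third dimension; the "number of intersections" ranking in the abstract corresponds to sorting accumulator peaks by vote count.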

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha; Nguyen, Thanh Binh; Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as convolutional neural networks (CNNs) perform well in many computer vision applications, including image classification and object detection. For deployment on embedded systems with limited processing power and memory, a deep learning network may need to be simplified; however, a simplified network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for scene images of the installation site, which makes automatic training necessary for commercialization. In this paper, as an intermediate step toward automatic training in fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more precisely (tightly) by applying a salient object detection technique. The improvement of the proposed method in accuracy and tightness is shown through several experiments.

Development of 3-D Radiosurgery Planning System Using IBM Personal Computer (IBM Personal Computer를 이용한 3차원적 뇌정위 방사선 수술계획 시스템의 개발)

  • Suh Tae-Suk; Suh Doug-Young; Park Charn Il; Ha Sung Whan; Kang Wee Saing; Park Sung Hun; Yoon Sei Chul
    • Radiation Oncology Journal / v.11 no.1 / pp.167-174 / 1993
  • Recently, stereotactic radiosurgery planning has come to require 3-D image and dose-distribution information. A project to develop LINAC-based stereotactic radiosurgery has been underway since April 1991. The purpose of this research is to develop a 3-D radiosurgery planning system on a personal computer, in two steps. The first step is a 3-D localization system that enters the patient's image information, coordinate transformation, target position and shape, and patient contour into the computer using CT images and a stereotactic frame. The second step is a 3-D dose-planning system that computes the dose distribution on the image plane, displays the isodose distribution and the patient image simultaneously on a high-resolution monitor, and provides a menu-driven planning interface. This prototype planning system was recently applied to several clinical cases. It proved fast, accurate, and efficient while handling various imaging modalities such as angiography, CT, and MRI, and it opens the way to a general 3-D planning system using beam's-eye view or CT simulation in radiation therapy.

Object-based Digital Watermarking Using Wavelet Property (웨이브릿 특성을 이용한 객체기반 디지털 워터마킹)

  • 김유신; 원치선; 이재진
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.7A / pp.1037-1043 / 2000
  • In this paper we present an object-based watermarking scheme, which cuts out several featured portions of an image and casts chaotic sequences as a watermark into each extracted object image in the discrete wavelet domain, with respect to models of the human visual system. In this way a watermark can be inserted into several objects of an image separately. Advantages of the proposed scheme include that it can selectively protect featured objects of an image as well as the entire image, and that it casts watermark sequences according to human visual masking, utilizing the time-frequency localization property of the wavelet transform.
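
The embedding idea can be sketched with a hand-rolled one-level Haar DWT: a pseudo-random ±1 sequence is added to a detail subband and later detected by correlation. The transform, strength factor, and detector below are illustrative assumptions, not the authors' exact scheme (which uses chaotic sequences and HVS masking models):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(1)
obj = rng.uniform(0.0, 255.0, (64, 64))    # stand-in for an extracted object region
ll, lh, hl, hh = haar2d(obj)

# Embed a pseudo-random +/-1 sequence in a mid-frequency subband,
# scaled by a strength factor (a crude stand-in for HVS masking).
alpha = 2.0
w = rng.choice([-1.0, 1.0], size=lh.shape)
lh_w = lh + alpha * w

# Correlation detector: high response only when the watermark is present.
score_marked = float(np.mean(lh_w * w))
score_clean = float(np.mean(lh * w))
print(score_marked, score_clean)
```

Because `w` is zero-mean and uncorrelated with the image, the marked score exceeds the clean score by almost exactly `alpha`, which is what makes the correlation test usable as a detector.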

Ultrasonic Source Localization and Visualization Technique for Fault Detection of a Power Distribution Equipment (배전설비 결함 검출을 위한 초음파 음원 위치추정 및 시각화 기법)

  • Park, Jin Ha; Jung, Ha Hyoung; Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems / v.21 no.4 / pp.315-320 / 2015
  • This paper describes the implementation of a localization and visualization scheme to find an ultrasonic source caused by defects in power distribution line equipment. To increase fault detection performance, a 2×4 sensor array is configured with MEMS ultrasonic sensors, and from the acquired sensor signals the azimuth and elevation angles of the ultrasonic source are estimated based on the delay-and-sum beamforming method. To visualize the estimated location, it is marked on a background image. Experimental results show the applicability of the presented technique.