• Title/Summary/Keyword: Image position

Integrated Position Estimation Using the Aerial Image Sequence (항공영상을 이용한 통합된 위치 추정)

  • Sim, Dong-Gyu;Park, Rae-Hong;Kim, Rin-Chul;Lee, Sang-Uk
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.12
    • /
    • pp.76-84
    • /
    • 1999
  • This paper presents an integrated method for aircraft position estimation using sequential aerial images. The proposed integrated system for position estimation is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of the aircraft by accumulating relative displacement estimates extracted from pairs of successive aerial images. Simple accumulation of these estimates, however, becomes less reliable as the aircraft continues navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms based on image matching and digital elevation model (DEM) matching are presented. In image matching, a robust oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching an algorithm using multiple image pairs is used. Computer simulation with four real aerial image sequences shows the effectiveness of the proposed integrated position estimation algorithm.
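
A minimal sketch of the accumulate-then-correct idea described above, assuming hypothetical inputs: per-frame relative displacement estimates and occasional absolute fixes (e.g. from image or DEM matching). This is not the paper's algorithm, only an illustration of why absolute correction bounds the drift of dead reckoning.

```python
import numpy as np

def integrate_position(rel_displacements, absolute_fixes, correction_period=50, alpha=0.8):
    """Accumulate per-frame relative displacements (dead reckoning) and
    periodically blend in an absolute position fix to bound drift.

    rel_displacements : (N, 2) array of frame-to-frame (dx, dy) estimates
    absolute_fixes    : dict {frame_index: (x, y)} of absolute estimates
                        (e.g. from image or DEM matching); illustrative only
    alpha             : weight given to the absolute fix when blending
    """
    pos = np.zeros(2)
    track = [pos.copy()]
    for k, d in enumerate(rel_displacements, start=1):
        pos = pos + d                                   # relative (dead-reckoned) update
        if k % correction_period == 0 and k in absolute_fixes:
            fix = np.asarray(absolute_fixes[k], dtype=float)
            pos = alpha * fix + (1 - alpha) * pos       # absolute correction
        track.append(pos.copy())
    return np.asarray(track)

# toy usage: noisy unit steps east, absolute fixes every 50 frames
rng = np.random.default_rng(0)
steps = np.column_stack([np.ones(200) + rng.normal(0, 0.05, 200),
                         rng.normal(0, 0.05, 200)])
fixes = {k: (float(k), 0.0) for k in range(50, 201, 50)}
print(integrate_position(steps, fixes)[-1])
```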


Image Path Searching using Auto and Cross Correlations

  • Kim, Young-Bin;Ryu, Kwang-Ryol
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.6
    • /
    • pp.747-752
    • /
    • 2011
  • This paper presents the detection of the overlapping area between successive frames for image stitching using an auto and cross correlation function (ACCF), and the composition of a single image with the stitching algorithm. ACCF applies autocorrelation to a featured area of the reference (previous) image to extract a filter mask, and cross-correlation to the comparing (current) image. The stitching position is detected where the correlation is highest, and the current image is shifted along the resulting moving vector so that the two images are aligned and stitched. The ACCF technique requires few computations and is simple because the filter mask is given by the featured block, and it can detect even small movements. Input images captured from a CMOS sensor are used to compare the performance of ACCF with window correlation. The results show that the stitched image has no seam or distortion at the joint, and that the detection performance for the moving vector is improved by 12% compared with the window correlation method.
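
The following is an illustrative sketch of correlation-based shift detection of the kind the abstract describes, assuming grayscale numpy arrays. The feature mask is chosen here by maximal variance rather than by the paper's autocorrelation criterion, and all function names are hypothetical.

```python
import numpy as np

def estimate_shift(prev, curr, block=32, search=16):
    """Pick a high-variance block in `prev` (a stand-in for the paper's
    autocorrelation-selected feature mask) and locate it in `curr` by
    normalized cross-correlation, returning the (dy, dx) moving vector."""
    h, w = prev.shape
    # choose the block with maximal variance as the feature/filter mask
    best_var, by, bx = -1.0, 0, 0
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            v = prev[y:y+block, x:x+block].var()
            if v > best_var:
                best_var, by, bx = v, y, x
    tmpl = prev[by:by+block, bx:bx+block].astype(float)
    tmpl -= tmpl.mean()
    # search a window around the same location in the current frame
    best_score, best = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            patch = curr[y:y+block, x:x+block].astype(float)
            patch -= patch.mean()
            denom = np.sqrt((tmpl**2).sum() * (patch**2).sum()) + 1e-9
            score = (tmpl * patch).sum() / denom
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best  # shift of the current image relative to the previous one
```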

Vision Based Estimation of 3-D Position of Target for Target Following Guidance/Control of UAV (무인 항공기의 목표물 추적을 위한 영상 기반 목표물 위치 추정)

  • Kim, Jong-Hun;Lee, Dae-Woo;Cho, Kyeum-Rae;Jo, Seon-Yeong;Kim, Jung-Ho;Han, Dong-In
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.12
    • /
    • pp.1205-1211
    • /
    • 2008
  • This paper describes methods for estimating the 3-D position of a target with respect to a reference frame from monocular images taken by an unmanned aerial vehicle (UAV). The 3-D position of the target is used as information for surveillance, recognition, and attack. In this paper, the 3-D position of the target is estimated in order to design a guidance and control law that can follow a target of interest to the user. To solve for the target's 3-D position, its position must first be measured in the image; a Kalman filter is used to track the target and output its image position. The target's 3-D position can then be estimated using the result of image tracking together with information about the UAV and the camera. Two algorithms are used for this estimation: one is derived arithmetically from the dynamics between the UAV, the camera, and the target; the other uses an LPV (Linear Parameter Varying) approach. Both methods have been run in simulation and are compared in this paper.
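
One common way to recover a ground target's 3-D position from a tracked pixel is to intersect the camera ray with a flat ground plane. The sketch below illustrates that geometry under assumed camera intrinsics and pose; it is not necessarily the paper's exact derivation, and the LPV formulation is not reproduced.

```python
import numpy as np

def target_ground_position(uav_pos, R_cam_to_ned, pixel, fx, fy, cx, cy, ground_z=0.0):
    """Back-project a tracked target pixel to a 3-D point on a flat ground
    plane (z = ground_z), given the camera pose. All parameter names are
    illustrative assumptions, not the paper's notation.

    uav_pos      : camera position in the reference (NED-like) frame, shape (3,)
    R_cam_to_ned : 3x3 rotation taking camera-frame vectors to the reference frame
    pixel        : (u, v) image coordinates of the tracked target centre
    """
    u, v = pixel
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # camera-frame ray
    ray_ned = R_cam_to_ned @ ray_cam                          # rotate into reference frame
    if abs(ray_ned[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    t = (ground_z - uav_pos[2]) / ray_ned[2]                  # scale to hit the plane
    return uav_pos + t * ray_ned

# example: camera looking straight down from 100 m altitude, target at image centre
R_down = np.eye(3)    # assumes the camera axis is already aligned with the down axis
print(target_ground_position(np.array([0.0, 0.0, -100.0]), R_down,
                             (320, 240), fx=800, fy=800, cx=320, cy=240))
```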

Camera Position Estimation in the Stomach Using Electroendoscopic Image Sequence (전자내시경 순차영상을 이용한 위에서의 카메라 위치 추정)

  • 이상경;민병구
    • Journal of Biomedical Engineering Research
    • /
    • v.12 no.1
    • /
    • pp.49-56
    • /
    • 1991
  • In this paper, a method for camera position estimation in the stomach using an electroendoscopic image sequence is proposed. In order to obtain proper image sequences, the stomach is divided into three sections. Camera position modeling for 3D information extraction is presented, and the image distortion due to the endoscopic lens is corrected. The feature points are represented with respect to the reference coordinate system with an error rate below 10 percent. A faster distortion correction algorithm is also proposed; it uses an error table, which is faster than the coordinate transform method using n-th order polynomials.
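
A small sketch of the error-table idea: precompute once where every corrected pixel should be sampled from, so that runtime correction is a pure table lookup instead of repeated polynomial evaluation. The simple radial model and all parameters below are assumptions, not the paper's calibration of the endoscope optics.

```python
import numpy as np

def build_undistortion_table(h, w, k1, cx, cy):
    """Precompute a per-pixel error (lookup) table mapping each output pixel
    to its source pixel under a simple radial distortion model. The polynomial
    is evaluated only once, when the table is built."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r2 = dx**2 + dy**2
    src_x = cx + dx * (1 + k1 * r2)          # illustrative radial polynomial
    src_y = cy + dy * (1 + k1 * r2)
    src_x = np.clip(np.rint(src_x), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    return src_y, src_x

def undistort(image, table):
    """Correct an image by table lookup only (no per-frame arithmetic)."""
    src_y, src_x = table
    return image[src_y, src_x]

# usage: build the table once, then apply it to every frame of the sequence
table = build_undistortion_table(480, 640, k1=-2e-7, cx=320, cy=240)
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
corrected = undistort(frame, table)
```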


Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of information and communication convergence engineering
    • /
    • v.3 no.2
    • /
    • pp.110-113
    • /
    • 2005
  • An enhancement of tracking capability for the center position of the eye and iris in a facial image is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, which has a specific brightness and shape, is located with the FRM method using five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. The experimental results show that the proposed methods enhance the center-position tracking capability compared with the pixel average coordinate value method.
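
Since the FRM and FPDP methods are not spelled out in the abstract, the sketch below only illustrates the generic binarize-then-locate-centre step with a hypothetical centroid computation; it is not the proposed method.

```python
import numpy as np

def iris_center(gray, threshold=60):
    """Generic stand-in for iris-centre tracking: binarize a (cropped) eye
    region and take the centroid of the dark pixels. Threshold and input
    are illustrative assumptions."""
    dark = gray < threshold                    # iris/pupil pixels are dark
    ys, xs = np.nonzero(dark)
    if len(xs) == 0:
        return None                            # no dark region found
    return float(xs.mean()), float(ys.mean())  # (cx, cy) in pixel coordinates
```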

Global Positioning of a Mobile Robot based on Color Omnidirectional Image Understanding (컬러 전방향 영상 이해에 기반한 이동 로봇의 위치 추정)

  • Kim, Tae-Gyun;Lee, Yeong-Jin;Jeong, Myeong-Jin
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.49 no.6
    • /
    • pp.307-315
    • /
    • 2000
  • For the autonomy of a mobile robot, it first needs to know its position and orientation. Various methods of estimating the position of a robot have been developed, but it is still difficult to localize the robot without any initial position or orientation. In this paper we present a method for building a colored map and for calculating the position and orientation of a robot using the angle data of an omnidirectional image. The walls of the map are rendered with the corresponding color images, and the color histograms of the images and the coordinates of feature points are stored in the map. The mobile robot then acquires a color omnidirectional image at an arbitrary position and orientation, segments it, and recognizes objects by multiple color indexing. Using the information of the recognized objects, the robot obtains enough feature points to localize itself.
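
A minimal sketch of color-histogram indexing of the kind the abstract relies on, assuming RGB numpy images and hypothetical map entries; histogram intersection is used here as the matching score, which may differ from the paper's exact indexing scheme.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse RGB histogram of an image region, normalized to sum to 1."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Histogram intersection score; 1.0 means identical color content."""
    return np.minimum(h1, h2).sum()

def index_segment(segment, map_histograms):
    """Match an omnidirectional-image segment to the stored map object whose
    color histogram it intersects best (illustrative stand-in for the paper's
    multiple color indexing)."""
    scores = {name: histogram_intersection(color_histogram(segment), h)
              for name, h in map_histograms.items()}
    return max(scores, key=scores.get)
```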


Position Estimation of Wheeled Mobile Robot in a Corridor Using Neural Network (신경망을 이용한 복도에서의 구륜이동로봇의 위치추정)

  • Choi, Kyung-Jin;Lee, Young-Hyun;Park, Chong-Kug
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.577-582
    • /
    • 2004
  • This paper describes a position estimation algorithm using a neural network for the navigation of a vision-based wheeled mobile robot (WMR) in a corridor, taking the ceiling lamps as landmarks. In images of the corridor, the line of lamps on the ceiling has a slope that depends on the lateral position of the WMR, and the vanishing point produced by the lamp line has a position that depends on the orientation of the WMR. The ceiling lamps have a limited size and appear as circle-like shapes in the image, so simple image processing algorithms are used to extract them from the corridor image. The lamp line and the position of the vanishing point are then defined and computed at known positions of the WMR in the corridor. To estimate the lateral position and orientation of the WMR from an image, the relationship between the position of the WMR and the features of the ceiling lamps has to be defined. A data set relating positions of the WMR to lamp features is constructed, and a neural network is composed and trained with this data set using the backpropagation algorithm (BPN). The trained network is then applied to the navigation of the WMR in a corridor.
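
A compact sketch of learning the feature-to-position mapping with backpropagation, assuming synthetic training data in place of the measured lamp-line slope and vanishing-point features; the network size, learning rate, and data below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data set: features are (lamp-line slope, vanishing-point x),
# targets are (lateral offset, heading); the linear relation used to generate
# them is only a placeholder for the geometry measured at known WMR positions
X = rng.uniform(-1, 1, (200, 2))
Y = X @ np.array([[0.8, 0.1], [0.2, 0.9]]) + 0.05 * rng.normal(size=(200, 2))

# one-hidden-layer MLP trained with plain backpropagation (BPN)
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                  # forward pass
    P = H @ W2 + b2
    err = P - Y                               # squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)            # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# estimate lateral position and orientation from new image features
features = np.array([[0.3, -0.2]])
print(np.tanh(features @ W1 + b1) @ W2 + b2)
```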

A Novel Measuring Method of In-plane Position of Contact-Free Planar Actuator Using Binary Grid Pattern Image (이진 격자 패턴 이미지를 이용한 비접촉식 평면 구동기의 면내 위치(x, y, $\theta$) 측정 방법)

  • 정광석;정광호;백윤수
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.7
    • /
    • pp.120-127
    • /
    • 2003
  • A novel three-degrees-of-freedom sensing method utilizing a binary grid pattern image and a vision camera is presented. The binary grid pattern image is designed from Pseudo-Random Binary Arrays and referenced to encode the in-plane position of the moving stage of the contact-free planar actuator. First, the yaw motion of the stage is detected using fast image processing, and then the other planar positions, x and y, are decoded from a sequence of images. This method can be applied to systems that need feedback of the in-plane position, with the advantages of good accuracy and resolution comparable with an encoder, a relatively compact structure, no friction, and low cost. In this paper, all the procedures of the above sensing mechanism are described in detail, including simulation and experimental results.
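
The decoding step rests on the property that every small window of a pseudo-random binary array is unique, so one camera view identifies the absolute (x, y) cell. The sketch below illustrates that lookup with a random array standing in for a true PRBA (window uniqueness is therefore not guaranteed here) and assumes the yaw motion has already been removed by image processing.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = rng.integers(0, 2, (32, 32))        # stand-in for a true PRBA pattern
K = 4                                      # window size observed by the camera

# index every K x K window; in a genuine pseudo-random binary array each
# window is unique by construction, which is what makes absolute decoding work
index = {}
for y in range(GRID.shape[0] - K + 1):
    for x in range(GRID.shape[1] - K + 1):
        key = GRID[y:y+K, x:x+K].tobytes()
        index.setdefault(key, (y, x))

def decode_window(window_bits):
    """Return the absolute (row, col) of the stage from one observed window
    (yaw is assumed to have been removed first)."""
    return index.get(window_bits.tobytes())

# usage: the camera observes the window at grid cell (10, 7)
print(decode_window(GRID[10:14, 7:11]))    # -> (10, 7) if that window is unique
```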

A landmark position estimation method using a color image for an indoor mobile robot (실내 주행 이동 로봇을 위한 컬러 이미지를 이용한 표식점 위치 측정 방법)

  • 유원필;정명진
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.2
    • /
    • pp.310-318
    • /
    • 1996
  • It is very important for a mobile robot to estimate its current position; with precise information about the current position, the mobile robot can perform path planning or environmental map building successfully. In this paper, a position estimation method using a single color image is presented. The mobile robot (K2A) takes an image of a corridor and searches for the door and the pillar, which are the given landmarks, using color information to distinguish them. In order to represent the presence of the landmarks, an Image Mode is defined, and the method adopts the Kullback information distance. If a landmark is detected, the mobile robot uses the color information to identify the vertical line of the landmark and its crossing point, and an experimental navigation is performed.
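
A rough sketch of a color-based landmark search, assuming an RGB image and a known landmark color; deriving a bearing from the vertical centre line is a simplification, and the Kullback-distance Image Mode test from the paper is not reproduced.

```python
import numpy as np

def landmark_bearing(rgb, target_color, tol=30, hfov_deg=60.0):
    """Detect a color landmark (e.g. a door or pillar of known color) and
    return the bearing of its vertical centre line relative to the camera
    axis. Color tolerance and field of view are illustrative assumptions."""
    diff = np.abs(rgb.astype(int) - np.asarray(target_color, dtype=int))
    mask = (diff < tol).all(axis=2)               # pixels close to the landmark color
    cols = np.nonzero(mask.any(axis=0))[0]
    if len(cols) == 0:
        return None                               # landmark not visible
    center_col = cols.mean()
    w = rgb.shape[1]
    # map the image column to a bearing angle across the horizontal field of view
    return (center_col - w / 2) / w * hfov_deg
```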


PANORAMIC IMAGE OF MANDIBULAR CONDYLE ACCORDING TO HEAD POSITION (두부 위치에 따른 하악 과두의 파노라마상)

  • Kim Jeong Hwa;Choi Soon Chul
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.20 no.2
    • /
    • pp.219-225
    • /
    • 1990
  • Panoramic radiography is convenient in the clinic and visualizes areas that other techniques do not show, but it is limited by image distortion, which results from the relationship of the ramus to the focal trough and from the direction of the central ray. This study, using 7 dry skulls, was performed to determine the effect of rotating the patient's head on reducing this distortion and to determine the magnification ratio of the mandibular condyle images in rotated head positions. The obtained results were as follows: 1. In general, the anterolateral portion of the mandibular condyle was best visualized in panoramic radiography. 2. There was no significant difference between the image readability of the anteromedial portion and that of the anterocentral portion of the mandibular condyle. 3. The anterolateral portion of the mandibular condyle was better visualized when the head was rotated by 20 degrees or by the horizontal condylar inclination than in the conventional position or with the head rotated by 10 degrees. 4. The magnification ratio of the anteroposterior diameter in the image of the mandibular condyle was smallest when the head was rotated by the horizontal inclination of the mandibular condyle and largest when rotated by 20 degrees.
