• Title/Summary/Keyword: Perspective Plane Image


Development of a Lane Sensing Algorithm Using Vision Sensors (비전 센서를 이용한 차선 감지 알고리듬 개발)

  • Park, Yong-Jun;Heo, Geon-Su
    • Transactions of the Korean Society of Mechanical Engineers A / v.26 no.8 / pp.1666-1671 / 2002
  • A lane sensing algorithm using vision sensors is developed based on lane geometry models. The parameters of the lane geometry models are estimated by a Kalman filter and used to reconstruct the lane geometry in the global coordinate system. The inverse perspective mapping from the image plane to the global coordinate system assumes the ground to be flat, but the roll and pitch motions of the vehicle are taken into account in the lane sensing. The proposed algorithm shows more robust lane sensing performance than conventional algorithms.
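
As a rough illustration of the inverse perspective mapping described above, the following sketch projects an image pixel onto a flat ground plane given the camera intrinsics, mounting height, and the vehicle's pitch and roll. All parameter names and frame conventions are assumptions for the example, not taken from the paper.

```python
import numpy as np

def rot_x(a):
    """Rotation about the world X (forward) axis, used here for roll."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation about the world Y (lateral) axis, used here for pitch."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def inverse_perspective_map(u, v, fx, fy, cx, cy, cam_height, pitch=0.0, roll=0.0):
    """Project pixel (u, v) onto the flat ground plane Z = 0.

    World frame: X forward, Y left, Z up, origin on the ground below the camera.
    Camera frame: x right, y down, z forward (pinhole model).
    Returns (X, Y) in metres, or None if the ray does not reach the ground.
    """
    # Viewing ray in the camera frame (normalized image coordinates).
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

    # Camera-to-world rotation: axis permutation, then vehicle pitch and roll.
    cam_to_world0 = np.array([[0.0, 0.0, 1.0],    # world X =  camera z
                              [-1.0, 0.0, 0.0],   # world Y = -camera x
                              [0.0, -1.0, 0.0]])  # world Z = -camera y
    ray_world = rot_x(roll) @ rot_y(pitch) @ cam_to_world0 @ ray_cam

    if ray_world[2] >= 0:                 # ray points at or above the horizon
        return None
    t = cam_height / -ray_world[2]        # drop from Z = cam_height to Z = 0
    return t * ray_world[0], t * ray_world[1]
```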

Fast landmark matching algorithm using moving guide-line image

  • Seo Seok-Bae;Kang Chi-Ho;Ahn Sang-Il;Choi Hae-Jin
    • Proceedings of the KSRS Conference / 2004.10a / pp.208-211 / 2004
  • Landmark matching is an important algorithm for the navigation of satellite images. This paper proposes a fast landmark matching algorithm using an MGLI (Moving Guide-Line Image). To find the matching point between a landmark chip and a part of the image, a correlation matrix is generally used, but computing the full-sized correlation matrix has the drawback of requiring a great deal of time. The MGLI contains thick guide lines for fast calculation of the correlation matrix; the width of these lines is determined by the satellite position changes and the navigation error range. For fast landmark matching, the MGLI provides a guide line for the landmark chip to be matched, so that the proposed method reduces the candidate area for the correlation matrix calculation. The paper shows how much time the proposed fast landmark matching algorithm saves compared to the general approach.
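
The guided-search idea can be illustrated with a small sketch: normalized cross-correlation is evaluated only at offsets that fall inside a guide-line mask, instead of over the full correlation matrix. The function names, the mask representation, and the NCC choice are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-sized arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_guide(image, chip, guide_mask):
    """Find the best match of `chip` in `image`, testing only offsets whose
    top-left corner lies inside `guide_mask` (a boolean image).

    `guide_mask` stands in for the moving guide-line image: a band whose
    width reflects the expected satellite position / navigation error.
    """
    h, w = chip.shape
    best_score, best_pos = -1.0, None
    ys, xs = np.nonzero(guide_mask[: image.shape[0] - h, : image.shape[1] - w])
    for y, x in zip(ys, xs):
        score = ncc(image[y:y + h, x:x + w], chip)
        if score > best_score:
            best_score, best_pos = score, (y, x)
    return best_pos, best_score
```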


Real-Time Lane Detection Based on Inverse Perspective Transform and Search Range Prediction (역원근 변환과 검색 영역 예측에 의한 실시간 차선 인식)

  • Kim, S.H.;Lee, D.H.;Lee, M.H.;Be, J.I.
    • Proceedings of the KIEE Conference / 2000.07d / pp.2843-2845 / 2000
  • Lane detection based on a road model or on features requires accurate acquisition of lane information from an image. It is inefficient to run a lane detection algorithm over the full range of an image when applied to a real road in real time because of the computation time. This paper defines two search ranges for detecting a lane on a road. The first is a search mode, which looks for the lane without any prior information about the road. The second is a recognition mode, which can reduce the size and shift the position of the search range by predicting the lane position from the information acquired in the previous frame. This allows the edge candidate points of a lane to be extracted accurately and efficiently without unnecessary searching. By removing the perspective effect of the edge candidate points with the inverse perspective transformation, the edge candidate information in the image coordinate system (ICS) is transformed into a plan-view image in the world coordinate system (WCS). A linear approximation filter is defined and used to remove false edge candidate points. The aim is to approximate the lane of an actual road more accurately by applying the least-mean-square method to the filtered edge information for curve fitting.
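
A minimal sketch of the two post-processing steps mentioned above, assuming a known image-to-road homography: the edge candidate points are mapped from the image coordinate system to the world coordinate system to remove the perspective effect, and a polynomial is fitted to them by least squares. The homography source and the polynomial degree are assumptions, not values from the paper.

```python
import numpy as np

def to_world(points_img, H):
    """Map Nx2 image-plane edge points onto the road plane with a 3x3
    homography H (image -> world), removing the perspective effect."""
    pts = np.asarray(points_img, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    w = (H @ pts_h.T).T
    return w[:, :2] / w[:, 2:3]

def fit_lane(points_world, degree=2):
    """Least-squares polynomial fit x = f(y) to the de-perspectived points;
    a plain polyfit stands in for the paper's least-mean-square curve fitting."""
    x, y = points_world[:, 0], points_world[:, 1]
    return np.polyfit(y, x, degree)
```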


Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering / v.9 no.4 / pp.397-405 / 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method and determined by the relation between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens must be compensated to obtain accurate texture coordinates. This distortion problem has been handled with iterative methods in which the camera calibration coefficients are first computed without considering the distortion and then modified accordingly. Such methods not only change the position of the camera perspective line in the image plane but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method treats the image distortion independently and fixes the values of the correction coefficients, with which the distortion coefficients can be computed from fewer control points. It is shown that the camera distortion is compensated with fewer control points than in previous methods and that the projective texture mapping produces a more realistic image.
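
As background for the distortion handling discussed above, the sketch below inverts a simple radial distortion model by fixed-point iteration while keeping the principal point fixed; the model, coefficient names, and iteration count are generic assumptions rather than the paper's specific correction scheme.

```python
def undistort_point(xd, yd, k1, k2, iterations=5):
    """Iteratively invert the radial distortion model
        x_d = x_u * (1 + k1*r^2 + k2*r^4)
    for a point given in normalized camera coordinates (principal point
    subtracted, focal length divided out). Coefficient names are generic.
    """
    xu, yu = xd, yd                        # initial guess: undistorted == distorted
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor  # refine the undistorted estimate
    return xu, yu
```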

Coupled Line Cameras as a New Geometric Tool for Quadrilateral Reconstruction (사각형 복원을 위한 새로운 기하학적 도구로서의 선분 카메라 쌍)

  • Lee, Joo-Haeng
    • Korean Journal of Computational Design and Engineering / v.20 no.4 / pp.357-366 / 2015
  • We review recent research results on coupled line cameras (CLC) as a new geometric tool to reconstruct a scene quadrilateral from image quadrilaterals. Coupled line cameras were first developed as a camera calibration tool based on geometric insight into the perspective projection of a scene rectangle onto an image plane. Since CLC comprehensively describes the relevant projective structure in a single image with a set of simple algebraic equations, it is also useful as a geometric reconstruction tool, which is an important topic in 3D computer vision. In this paper we first introduce the fundamentals of CLC with real examples. Then we cover related work on optimizing the initial solution, extending the method to general quadrilaterals, and applying it to cuboid reconstruction.
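
For comparison only, the following sketch performs a standard 4-point perspective rectification of an image quadrilateral with OpenCV; it illustrates the reconstruction problem CLC addresses but does not implement the coupled-line-camera equations themselves, and the target rectangle size is an arbitrary assumption.

```python
import numpy as np
import cv2

def rectify_quadrilateral(image, quad_img, width, height):
    """Warp an image quadrilateral onto a width x height rectangle.

    `quad_img` is a 4x2 array of corner pixels in a consistent order
    (e.g. top-left, top-right, bottom-right, bottom-left).
    """
    rect = np.float32([[0, 0], [width - 1, 0],
                       [width - 1, height - 1], [0, height - 1]])
    H = cv2.getPerspectiveTransform(np.float32(quad_img), rect)
    return cv2.warpPerspective(image, H, (width, height))
```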

Adaptive Spatial Coordinates Detection Scheme for Path-Planning of Autonomous Mobile Robot (자율 이동로봇의 경로추정을 위한 적응적 공간좌표 검출 기법)

  • Lee, Jung-Suk;Ko, Jung-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers P / v.55 no.2 / pp.103-109 / 2006
  • In this paper, a scheme for detecting spatial coordinates based on a stereo camera is proposed for the intelligent path planning of an autonomous mobile robot. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot can be controlled to track the moving target in real time. Moreover, using the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system and the perspective transformation between the 3-D scene and the image plane, depth information can be extracted. Finally, based on the analysis of these calculated coordinates, intelligent path planning and estimation for the mobile robot system are derived.
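
Two of the ingredients mentioned above can be sketched compactly: a YCbCr (YCrCb in OpenCV) skin-color centroid for the face region, and depth from disparity for a parallel stereo rig using Z = f·B/d. The color thresholds and interfaces are illustrative assumptions, not the paper's tuned values.

```python
import cv2

def skin_centroid(bgr_image):
    """Centroid of a YCrCb skin-colour mask (thresholds are illustrative)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                       # no skin-coloured pixels found
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by a parallel stereo rig: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```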

A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom;Kim Sang Hun;Kwon Sung Min
    • Journal of Biomedical Engineering Research / v.26 no.1 / pp.23-30 / 2005
  • 3D virtual endoscopy has been used as an alternative non-invasive procedure for the visualization of hollow organs. However, due to its computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. Then, in the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, by reducing the sub-sampling factor by half, we repeat ray casting for the newly added pixels and determine their pixel values and depth information. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until a full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy compared with the brute-force ray casting scheme. Using the proposed algorithm, interactive volume rendering becomes more realizable in a PC environment without any special hardware.
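
The coarse-to-fine idea can be outlined as follows: render on a sub-sampled pixel grid, halve the stride, and cast rays only for newly added pixels, seeding each new ray with depth already known from coarser neighbours. The `cast_ray` callback and the seeding rule are assumptions standing in for the paper's block-based ray caster.

```python
import numpy as np

def progressive_render(cast_ray, width, height, initial_stride=8):
    """Coarse-to-fine perspective ray casting skeleton.

    `cast_ray(x, y, t_start)` is assumed to return (color, depth) for the ray
    through pixel (x, y), starting its march at distance `t_start` along the ray.
    """
    color = np.zeros((height, width, 3), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)
    done = np.zeros((height, width), dtype=bool)

    stride = initial_stride
    while stride >= 1:
        for y in range(0, height, stride):
            for x in range(0, width, stride):
                if done[y, x]:
                    continue
                # Smallest finite depth among nearby, already-rendered pixels
                # lets the new ray skip empty space in front of that depth.
                block = depth[max(y - stride, 0):y + stride + 1,
                              max(x - stride, 0):x + stride + 1]
                finite = block[np.isfinite(block)]
                seed = float(finite.min()) if finite.size else 0.0
                color[y, x], depth[y, x] = cast_ray(x, y, seed)
                done[y, x] = True
        stride //= 2
    return color, depth
```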

3-D Pose Estimation of an Elliptic Object Using Two Coplanar Points (두 개의 공면점을 활용한 타원물체의 3차원 위치 및 자세 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.23-35 / 2012
  • This paper presents a 3-D pose (position and orientation) estimation method for an elliptic object in 3-D space. It is difficult to determine the 3-D pose parameters of an elliptic feature solely by interpreting its projection onto an image plane. As an alternative, we propose a two-point-based pose estimation algorithm to recover the 3-D information of an elliptic feature. The proposed algorithm uniquely determines a homogeneous transformation for a given correspondence set of an ellipse and two coplanar points defined on the model and image planes, respectively. For each plane, two triangular features are extracted from the ellipse and the two points based on polarity in the 2-D projection space. A planar homography is first estimated from the triangular feature correspondences and then decomposed into the 3-D pose parameters. The proposed method is evaluated through a series of experiments analyzing the 3-D pose estimation errors and the sensitivity with respect to point locations.
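
A generic version of the final step, estimating a planar homography from point correspondences and decomposing it into rotation, translation, and plane-normal candidates, can be sketched with OpenCV as below; the paper's polarity-based triangular features, which make the solution unique, are not reproduced here.

```python
import numpy as np
import cv2

def pose_from_planar_points(model_pts, image_pts, K):
    """Recover candidate 3-D poses of a planar feature from 2-D correspondences.

    `model_pts` and `image_pts` are Nx2 arrays of matching points on the model
    plane and in the image; `K` is the 3x3 camera matrix. A plain homography
    decomposition stands in for the paper's two-point construction.
    """
    H, _ = cv2.findHomography(np.float32(model_pts), np.float32(image_pts))
    n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four (R, t, n) candidates are returned; the physically valid one
    # must be selected with extra constraints (e.g. points in front of the camera).
    return list(zip(rotations, translations, normals))
```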

A Study on Lane Sensing System Using Stereo Vision Sensors (스테레오 비전센서를 이용한 차선감지 시스템 연구)

  • Huh, Kun-Soo;Park, Jae-Sik;Rhee, Kwang-Woon;Park, Jae-Hak
    • Transactions of the Korean Society of Mechanical Engineers A / v.28 no.3 / pp.230-237 / 2004
  • Lane sensing techniques based on vision sensors are regarded as promising because they require little infrastructure on the highway beyond clear lane markers. However, they require more intelligent processing algorithms in the vehicle to generate the previewed roadway from the vision images. In this paper, a lane sensing algorithm using vision sensors is developed to improve the sensing robustness. A parallel stereo camera is utilized to reconstruct the 3-dimensional road geometry. The lane geometry models are derived such that their parameters represent the road curvature, lateral offset and heading angle, respectively. The parameters of the lane geometry models are estimated by a Kalman filter and utilized to reconstruct the lane geometry in the global coordinate system. The inverse perspective mapping from the image plane to the global coordinate system considers the roll and pitch motions of the vehicle so that the mapping error is minimized during acceleration, braking or steering. The proposed sensing system has been built and implemented on a 1/10-scale model car.
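
A minimal sketch of the parameter estimation stage: a linear Kalman filter over the three lane-geometry parameters (curvature, lateral offset, heading angle), assuming the parameters are measured directly each frame. The state model and noise levels are placeholder assumptions, not the paper's filter design.

```python
import numpy as np

class LaneKalmanFilter:
    """Minimal linear Kalman filter over the lane-geometry parameters
    [curvature, lateral offset, heading angle] (noise values are placeholders)."""

    def __init__(self):
        self.x = np.zeros(3)              # state: [curvature, offset, heading]
        self.P = np.eye(3)                # state covariance
        self.F = np.eye(3)                # parameters assumed slowly varying
        self.Q = np.eye(3) * 1e-4         # process noise
        self.R = np.eye(3) * 1e-2         # measurement noise

    def step(self, z):
        """One predict/update cycle given a measurement z of the 3 parameters
        (e.g. a least-squares fit to the de-perspectived lane points)."""
        # Predict
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        # Update (measurement matrix H = I, i.e. parameters observed directly)
        S = P_pred + self.R
        K = P_pred @ np.linalg.inv(S)
        self.x = x_pred + K @ (np.asarray(z) - x_pred)
        self.P = (np.eye(3) - K) @ P_pred
        return self.x
```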

Autostereoscopic Multiview 3D Display System based on Volume Hologram (체적 홀로그램을 이용한 무안경 다안식 3D 디스플레이 시스템)

  • 이승현;이상훈
    • Journal of the Korea Computer Industry Society / v.2 no.12 / pp.1609-1616 / 2001
  • We present an autostereoscopic 3D display system using a volume hologram. In the proposed system, the interference patterns of angularly multiplexed plane reference beams and object beams are recorded into a volume hologram, which plays the role of guiding the object beams of the multi-view images into the desired perspective directions. For reconstruction, object beams containing the desired multi-view image information, which satisfy the Bragg matching condition, are illuminated onto the crystal in a time-division-multiplexed manner. Then multiple stereoscopic images are projected onto the display plane for autostereoscopic 3D viewing. This makes it possible to build a high-resolution multiview 3D display system that is independent of the viewpoint.
