• Title/Summary/Keyword: 3-D coordinates transformation

66 search results

Coordinate Calibration and Object Tracking of the ODVS (Moving-Object Coordinate Correction and Tracking in Omni-directional Images)

  • Park, Yong-Min;Nam, Hyun-Jung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.408-413
    • /
    • 2005
  • This paper presents a technique that extracts a moving object from omni-directional images and estimates the real-world coordinates of the object using a 3D parabolic coordinate transformation. For real-time processing, the moving object is extracted by the proposed hue-histogram matching algorithm. We demonstrate, with theoretical and experimental arguments, that the proposed technique extracts a moving object robustly under lighting changes and estimates good approximations of its real coordinates.

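The hue-histogram matching step in the abstract above is not specified in detail; the following is a minimal Python/NumPy sketch (bin count and function names are illustrative assumptions, not the paper's algorithm) of comparing normalized hue histograms, which tolerates lighting changes because hue is largely independent of intensity:

```python
import numpy as np

def hue_histogram(hues, bins=36):
    """Normalized histogram of hue values (0..360 degrees)."""
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 360.0))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of bin-wise minima of two normalized histograms."""
    return float(np.minimum(h1, h2).sum())

# Compare a candidate region's hue distribution against an object model
# (the hue values below are made-up sample data)
model = hue_histogram(np.array([10.0, 12.0, 11.0, 200.0]))
candidate = hue_histogram(np.array([11.0, 13.0, 10.0, 198.0]))
similarity = histogram_intersection(model, candidate)
```

Histogram intersection is one common choice here; correlation or Bhattacharyya distance would serve the same role.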

Geometry-based quality metric for multi-view autostereoscopic 3D display

  • Saveljev, Vladimir;Son, Jung-Young;Kwack, Kae-Dal
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2009.10a
    • /
    • pp.1014-1017
    • /
    • 2009
  • An analytical expression for the quality function, including its dependence on disparity, is derived. The problem is formulated in projective coordinates, for which the forward and backward transformation matrices are found. The formation of side observer regions is considered, and the probability of the pseudo-stereo effect is estimated. The test patterns are improved to provide higher measurement accuracy, which is confirmed experimentally.


Generalized Kinematic Analysis for the Motion of 3-D Linkages using Symbolic Equation (기호방정식을 이용한 3차원 연쇄기구 운동해석의 일반화)

  • Kim, Ho-Ryong
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.10 no.1
    • /
    • pp.102-109
    • /
    • 1986
  • Based on the Hartenberg-Denavit symbolic equation, one of the standard equations for the kinematic analysis of three-dimensional (3-D) linkages, a generalized kinematic motion equation is derived using Euler angles and coordinate transformations. The derived equation can be used for the motion analysis of any type of 3-D linkage as well as 2-D ones. To simulate the general motion of 3-D linkages on a digital computer, the generalized equation is converted to Newton-Raphson form, expressed in matrix notation, and implemented numerically. The feasibility of the derived equation is verified experimentally by comparing the computed results with measurements from experimental setups of three different, commonly employed 3-D linkages.
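The Hartenberg-Denavit formulation referred to above assigns one homogeneous transform per link and chains them along the linkage; a short sketch (Python/NumPy, not the paper's original program, with illustrative parameter values):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between adjacent links from classic
    Denavit-Hartenberg parameters: joint angle theta, link offset d,
    link length a, link twist alpha (angles in radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

# Chain the link transforms to locate the end of a 3-D linkage
# (the two parameter sets below are made-up example links)
links = [(np.pi / 2, 0.1, 0.3, 0.0), (0.0, 0.0, 0.2, np.pi / 2)]
T = np.eye(4)
for params in links:
    T = T @ dh_transform(*params)
end_position = T[:3, 3]
```

Solving the resulting symbolic equations for unknown joint variables, as in the paper, would then iterate this forward computation inside a Newton-Raphson loop.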

Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment (DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용)

  • Park, Soon-Yong;Choi, Sung-In;Jang, Jae-Seok;Jung, Soon-Ki;Kim, Jun;Chae, Jeong-Sook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.7
    • /
    • pp.700-710
    • /
    • 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the vehicle's location is a very important task for automatic navigation. Conventional positioning sensors may fail to work properly in some real situations due to internal and external interference. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random-sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pairwise registration technique between range images. To reduce the accumulated error of pairwise registration, we periodically refine the registration between the range images and the DSM. A virtual environment is established to perform several experiments with a virtual vehicle: range images are created from the DSM by modeling a real 3D sensor, and the vehicle moves along three different paths while acquiring range images. Experimental results show that the average registration error is under about 1.3 m.
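The pairwise-registration chaining with periodic DSM refinement described above can be sketched as composing 4x4 homogeneous transforms; the refinement step is only a placeholder here, since the paper's ICP/DSM details are not reproduced in the abstract:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical per-frame pairwise motions: identity rotation, 1 m forward each
steps = [make_transform(np.eye(3), np.array([1.0, 0.0, 0.0])) for _ in range(5)]

pose = np.eye(4)  # vehicle pose in the DSM reference coordinates
for i, T_pair in enumerate(steps, start=1):
    pose = pose @ T_pair  # chain the pairwise registration onto the global pose
    if i % 3 == 0:
        # Periodic refinement against the DSM would go here (e.g. an ICP
        # re-registration); represented as a no-op in this sketch.
        pass

vehicle_position = pose[:3, 3]
```

The point of the periodic DSM refinement is that pure pairwise chaining accumulates drift linearly with the number of frames, while re-anchoring to the DSM bounds it.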

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.3
    • /
    • pp.306-313
    • /
    • 2023
  • Realistic, graphics-based virtual reality content builds on 360-degree video, so viewport extraction, driven by the viewer's intent or by automatic recommendation, is essential. This paper designs a viewport extraction system based on multiple-object tracking in 360-degree video and proposes the parallel computing structure needed for extracting multiple viewports. The viewport extraction process is parallelized with pixel-wise threads, using a 3D spherical-surface coordinate transformation from ERP coordinates followed by a 2D coordinate transformation of the 3D spherical coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 simultaneous viewport extractions in aerial 360-degree video sequences, and achieved up to a 5240-fold speedup over CPU-based computation, whose time grows in proportion to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be accelerated by a further factor of 7.82. The proposed parallelized structure can be applied to simultaneous multi-access services for 360-degree video or virtual reality content, and to video summarization services for individual users.
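The two coordinate transformations named above (ERP pixel to 3D spherical surface, then sphere to viewport plane) can be sketched per pixel as follows; the axis conventions and pinhole parameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def erp_to_sphere(u, v, width, height):
    """Map an ERP pixel to a unit vector on the 3-D sphere.
    Longitude spans [-pi, pi) across the width, latitude [pi/2, -pi/2]
    down the height (a common equirectangular convention)."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def sphere_to_viewport(p, R, f, cx, cy):
    """Rotate a sphere point into the viewport camera frame (looking
    along +x) and apply a pinhole projection with focal length f and
    principal point (cx, cy); axis signs are a modeling choice."""
    q = R @ p
    if q[0] <= 0:          # point is behind the viewport
        return None
    return np.array([cx + f * q[1] / q[0], cy - f * q[2] / q[0]])

# The center of a 1920x960 ERP frame maps to the forward direction
p = erp_to_sphere(960, 480, 1920, 960)
uv = sphere_to_viewport(p, np.eye(3), 500.0, 320.0, 240.0)
```

In the paper's GPU design this per-pixel mapping is what each thread computes, which is why the work parallelizes so well across viewports.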

Accuracy Analysis of 3D Position of Close-range Photogrammetry Using Direct Linear Transformation and Self-calibration Bundle Adjustment with Additional Parameters (DLT와 부가변수에 의한 광속조정법을 활용한 근접사진측량의 3차원 위치정확도 분석)

  • Kim, Hyuk Gil;Hwang, Jin Sang;Yun, Hong Sic
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.2
    • /
    • pp.27-38
    • /
    • 2015
  • In this study, 3D position coordinates of targets were calculated using DLT and self-calibrating bundle adjustment with additional parameters in close-range photogrammetry, and the accuracy of the results was analyzed. For this purpose, camera calibration and orientation parameters were computed for each image by reference surveying with a total station over an experimental setup containing numerous targets. To analyze the accuracy, 3D position coordinates were calculated for an identical selection of targets and compared with the reference coordinates obtained from the total station. For the image-coordinate measurement in the stereo images, ellipse fitting was performed to locate the center point of each circular target, and the results were used as the image coordinates of the targets. The experiments show that position coordinates calculated by stereo-image photogrammetry deviate by less than 4 mm on average, within a maximum error of about 1 cm. These results suggest that stereo-image photogrammetry can be used in various close-range photogrammetry applications requiring high accuracy.
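The DLT step mentioned above solves for a 3x4 projection matrix linearly from 3D-2D correspondences; a self-contained sketch with a synthetic camera (the matrix and point values are made up for illustration, and real use needs at least 6 non-coplanar points):

```python
import numpy as np

def dlt_camera(points3d, points2d):
    """Estimate a 3x4 projection matrix by direct linear transformation.
    Each correspondence contributes two homogeneous linear equations;
    the solution is the right null-space vector of the stacked system."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3-D point with a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Recover a synthetic camera from exact correspondences
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
pts3d = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 2.0),
         (1.0, 1.0, 4.0), (2.0, 0.0, 2.0), (0.0, 2.0, 5.0),
         (2.0, 2.0, 3.0), (3.0, 1.0, 2.0)]
pts2d = [project(P_true, np.array(X)) for X in pts3d]
P_est = dlt_camera(pts3d, pts2d)
```

The self-calibrating bundle adjustment in the paper then refines such a linear solution together with additional lens-distortion parameters, which the plain DLT above does not model.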

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing techniques for calibrating the external transformation between the lidar and camera sensors have the disadvantage of requiring special calibration objects, or objects that are too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates from four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data are acquired from various angles, the image of the spherical object always remains circular. The proposed method yields a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
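The sphere-center computation from four range points described above reduces to a 3x3 linear system obtained by differencing the sphere equations |p - c|^2 = r^2; a sketch under the assumption that the four sampled points are non-coplanar (which RANSAC would enforce by rejecting degenerate samples):

```python
import numpy as np

def sphere_center(p1, p2, p3, p4):
    """Center of the sphere through four non-coplanar 3-D points.
    Subtracting |p1 - c|^2 = r^2 from the other three equations
    eliminates r^2 and leaves a linear system 2(pi - p1) . c = |pi|^2 - |p1|^2."""
    pts = np.array([p1, p2, p3, p4], dtype=float)
    A = 2.0 * (pts[1:] - pts[0])
    b = np.sum(pts[1:] ** 2, axis=1) - np.sum(pts[0] ** 2)
    return np.linalg.solve(A, b)

# Four made-up points of a sphere centered at (1, 2, 3) with radius 2
c = sphere_center([3, 2, 3], [1, 4, 3], [1, 2, 5], [-1, 2, 3])
radius = np.linalg.norm(np.array([3, 2, 3]) - c)
```

Inside RANSAC, this exact four-point solve scores each hypothesis, and the center with the most range-point inliers is kept.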

3D Coordinates Transformation in Orthogonal Stereo Vision (직교식 스테레오 비젼 시스템에서의 3차원 좌표 변환)

  • Yoon, Hee-Joo;Cha, Sun-Hee;Cha, Eui-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.855-858
    • /
    • 2005
  • This system simultaneously acquires independent images from an orthogonal stereo vision system in order to track the movement of fish in an aquarium, processes the acquired images to obtain coordinates, and generates 3D coordinates from them. The proposed method consists of three parts: simultaneous image acquisition from two cameras, processing of the acquired images with object-position detection, and 3D coordinate generation. Using a frame grabber, two images (a front view and a top view) are acquired at 8 frames per second; a moving object is extracted from the difference between each frame and a background image updated in real time, clustered by labeling, and the center coordinates of each cluster are detected. The detected coordinates from the two views are then corrected using line equations to produce the 3D coordinates of the moving object.

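The final 3D coordinate generation described above combines the two orthogonal views; a minimal sketch of the idea (averaging the shared axis is an illustrative choice here, since the paper performs the correction with line equations):

```python
import numpy as np

def fuse_orthogonal_views(front_xy, top_xy):
    """Combine 2-D object centers from a front camera, which observes
    (x, z), and a top camera, which observes (x, y), into one 3-D point.
    The x axis is seen by both views, so the two measurements are averaged."""
    fx, fz = front_xy
    tx, ty = top_xy
    return np.array([(fx + tx) / 2.0, ty, fz])

# Made-up cluster centers from the front and top views
p = fuse_orthogonal_views((100.0, 40.0), (102.0, 55.0))
```

This is what makes the orthogonal layout attractive: each camera directly measures two of the three coordinates, so no disparity computation is needed.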

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.347-352
    • /
    • 2012
  • In this paper, 2D spatial-map construction for worker identification and avoidance by an AGV is proposed, using a spatial-coordinate detection scheme based on a stereo camera. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, a depth map is computed from the disparity map obtained from the left and right images captured by the tracking-controlled stereo camera system, using the perspective transformation between the 3-D scene and the image plane. Experiments on AGV driving with 240 frames of stereo images show that the error between the calculated and measured values of the worker's width is very low, averaging 2.19% and 1.52%.
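The depth computation from the disparity map mentioned above follows the standard pinhole-stereo relation Z = fB/d; a minimal sketch with illustrative numbers (the paper's actual camera parameters are not given in the abstract):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d, with disparity d and focal
    length f in pixels and baseline B in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 64-pixel disparity with an 800-pixel focal length and a 12 cm baseline
z = depth_from_disparity(64.0, 800.0, 0.12)
```

Applying this per pixel to the disparity map yields the depth map from which the 2D spatial map of worker positions is built.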

TLS (Total Least-Squares) within Gauss-Helmert Model: 3D Planar Fitting and Helmert Transformation of Geodetic Reference Frames (가우스-헬머트 모델 전최소제곱: 평면방정식과 측지좌표계 변환)

  • Bae, Tae-Suk;Hong, Chang-Ki;Lim, Soo-Hyeon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.315-324
    • /
    • 2022
  • The conventional LESS (LEast-Squares Solution) is calculated under the assumption that there are no errors in the independent variables. However, the coordinates of a point, whether from traditional ground surveying (slant distances, horizontal and/or vertical angles) or from GNSS (Global Navigation Satellite System) positioning, cannot be determined independently, and their components are correlated with each other. Therefore, TLS (Total Least-Squares) adjustment should be applied to all applications involving coordinates. Many approaches have been suggested to solve this problem, yielding equivalent solutions apart from some restrictions. In this study, we calculated the normal vector of the 3D plane determined by the trace of the VLBI targets using TLS within the GHM (Gauss-Helmert Model). Another numerical test was conducted for the estimation of the Helmert transformation parameters. Since the errors in the horizontal components are very small compared to the radius of the circle, the final estimates are almost identical; however, the estimated variance components are significantly reduced and show different characteristics depending on the target location. The Helmert transformation parameters are estimated more precisely than in the conventional LESS case, and the residuals can be predicted in both reference frames with much smaller magnitude (in an absolute sense).
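The Helmert transformation whose parameters are estimated above is the standard 7-parameter similarity transformation between geodetic reference frames; a sketch of applying it with the common small-angle linearization (the rotation sign convention varies between frame definitions and is an assumption here):

```python
import numpy as np

def helmert_transform(points, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Apply a 7-parameter Helmert transformation: translation (tx, ty, tz)
    in meters, small rotations (rx, ry, rz) in radians, and scale in parts
    per million. points is an (n, 3) array of Cartesian coordinates."""
    mu = 1.0 + scale_ppm * 1e-6
    # Linearized (small-angle) rotation matrix
    R = np.array([
        [1.0,  rz, -ry],
        [-rz, 1.0,  rx],
        [ ry, -rx, 1.0],
    ])
    return mu * points @ R.T + np.array([tx, ty, tz])

# Made-up example: pure translation plus 1 ppm scale, no rotation
pts = np.array([[4000000.0, 3000000.0, 2000000.0]])
out = helmert_transform(pts, 0.1, 0.2, -0.3, 0.0, 0.0, 0.0, 1.0)
```

Estimating these seven parameters from coordinates that carry errors in both frames is precisely the errors-in-variables situation that motivates TLS within the Gauss-Helmert model rather than ordinary least squares.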