• Title/Summary/Keyword: object coordinates

Search results: 299

A Study on the Three Dimensional Coordinates Analysis by Direct Linear Transformation (직접선형변환을 이용한 3차원 좌표해석에 관한 연구)

  • 김감래;이호남
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.5 no.2
    • /
    • pp.47-55
    • /
    • 1987
  • In this paper, the direct linear transformation (DLT) is described, in which the interior and exterior orientation parameters are treated as unknowns for a non-iterative, direct space resection, and a computer program was developed to obtain object-space coordinates. Image coordinate measurements are conducted with an analogue stereo-plotter and a digitizer. To assess the suitability of the two image-coordinate measurement devices and of the DLT method, the standard errors of the object-space coordinates are compared with those of a semi-analytical method.
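
As a rough illustration of the DLT formulation described in this abstract (a sketch under assumptions, not the authors' program), the calibration and multi-image intersection steps reduce to two linear least-squares problems; the function names and the requirement of at least six control points per image are my own.

```python
import numpy as np

def dlt_calibrate(obj_pts, img_pts):
    """Solve the 11 DLT parameters from >= 6 control points by linear least squares."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z]); b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z]); b.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L                                      # L1..L11

def dlt_intersect(L_per_image, xy_per_image):
    """Recover object-space (X, Y, Z) of a point seen in two or more images."""
    A, b = [], []
    for L, (x, y) in zip(L_per_image, xy_per_image):
        A.append([L[0] - x * L[8], L[1] - x * L[9], L[2] - x * L[10]]); b.append(x - L[3])
        A.append([L[4] - y * L[8], L[5] - y * L[9], L[6] - y * L[10]]); b.append(y - L[7])
    XYZ, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return XYZ
```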

3D Coordinates Acquisition by using Multi-view X-ray Images (다시점 X선 영상을 이용한 3차원 좌표 획득)

  • Yi, Sooyeong;Rhi, Jaeyoung;Kim, Soonchul;Lee, Jeonggyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.10
    • /
    • pp.886-890
    • /
    • 2013
  • In this paper, a 3D coordinates acquisition method for a mechanical assembly is developed by using multi-view X-ray images. The multi-view X-ray images of an object are obtained by using a rotary table. From the rotation transformation, the 3D coordinates of corresponding edge points in the multi-view X-ray images can be obtained by triangulation. The edge-detection algorithm in this paper is based on the attenuation characteristic of the X-ray. The 3D coordinates of the object points are represented on a graphic display, which is used for the inspection of a mechanical assembly.
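
The triangulation step mentioned above can be sketched as the closest-approach point of two rays expressed in a common object frame; the source/detector geometry, the rotation axis, and all numbers below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def rotz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def triangulate_rays(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two 3-D rays p + t*d."""
    A = np.stack([d1, -d2], axis=1)                    # solve t1*d1 - t2*d2 = p2 - p1
    t = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))

# Hypothetical geometry: fixed source and detector, table rotated by th1 / th2.
# Rotating the table by theta is equivalent to rotating the rays by -theta in the
# object frame, so both rays are re-expressed there before triangulation.
src  = np.array([0.0, -1000.0, 0.0])                   # X-ray source position (mm), assumed
det1 = np.array([12.0, 400.0, 8.5])                    # edge point on detector, view 1
det2 = np.array([-3.0, 400.0, 8.5])                    # corresponding edge point, view 2
th1, th2 = 0.0, np.radians(30.0)

p1, d1 = rotz(-th1) @ src, rotz(-th1) @ (det1 - src)
p2, d2 = rotz(-th2) @ src, rotz(-th2) @ (det2 - src)
print(triangulate_rays(p1, d1, p2, d2))                # estimated 3-D edge coordinates
```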

A Coordination Control Methodology for Two Cooperating Arms Handling a Single Object (단일물체 조작을 위한 두 협조 로봇의 협조제어)

  • Yeo, Hee-Joo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.2
    • /
    • pp.190-196
    • /
    • 2000
  • A hybrid position/force control scheme to regulate the force and position of dual arms is proposed, in which the two arms are treated as one arm from a kinematic viewpoint. The force error, calculated from the readings of the two force/torque sensors attached to the end of each arm, is transferred to minimum-configuration-space coordinates and then distributed to the total system joint coordinates. The position adjustment in the total configuration coordinates is computed from the effective compliance matrix with respect to the total joint coordinates, which is obtained by coordinate transformation between the task coordinates and the total joint coordinates. The proposed scheme is applied to a sawing task. When the trajectory of the saw is planned to follow a line in a horizontal plane, two position parameters (i.e., two translational positions) are to be controlled, and a certain level of contact force has to be maintained along the vertical direction (i.e., the minus z-direction) so as not to lose contact with the object being sawn. We show experimentally that the velocity and force responses are satisfactory. The proposed hybrid control scheme can be applied to an arbitrary two-arm cooperating system regardless of the arms' kinematic structure and number of actuated joints.
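
A generic hybrid position/force law of the kind the abstract describes (position control in the horizontal plane, force control along -z for the sawing task) could be sketched as below; the selection matrix, gains, and the Jacobian-based mapping to joint coordinates are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

S  = np.diag([1.0, 1.0, 0.0])      # selection matrix: 1 = position-controlled, 0 = force-controlled
Kp = np.diag([80.0, 80.0, 0.0])    # position gains (horizontal plane)
Kf = np.diag([0.0, 0.0, 0.02])     # compliance-like gain mapping force error to displacement (z)

def task_space_adjustment(x_des, x_meas, f_des, f_meas):
    """Blend position error on selected axes with a force-error correction on the rest."""
    dx_pos   = S @ (Kp @ (x_des - x_meas))
    dx_force = (np.eye(3) - S) @ (Kf @ (f_des - f_meas))
    return dx_pos + dx_force

def joint_adjustment(J, dx):
    """Distribute the task-space adjustment to the (total) joint coordinates."""
    return np.linalg.pinv(J) @ dx   # J: task Jacobian of the combined two-arm system
```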

Behavior Pattern Analysis System based on Temporal Histogram of Moving Object Coordinates (이동 객체 좌표의 시간적 히스토그램 기반 행동패턴분석시스템)

  • Lee, Jae-kwang;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.571-575
    • /
    • 2015
  • This paper proposes a temporal-histogram-based behavior pattern analysis algorithm to analyze the movement features of moving objects in images input in real time. To track and analyze moving objects, background learning, which separates moving objects from the background, must first be performed. A moving object is extracted through background learning, identified by its center of gravity, and tracked by correlating its coordinates across frames. The start frame, end frame, coordinate information, and size information of each tracked object are stored and managed in a linked list. The temporal histogram defines a movement-feature pattern from the x and y coordinates along the time axis, and the coordinates of each object are compared to understand its movement features and behavior pattern. In the demonstration experiment, the temporal-histogram-based behavior pattern analysis system achieved a tracking rate above 95% while sustaining a high processing speed of 45~50 fps.
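
A rough sketch of the pipeline described above (background learning, center-of-gravity extraction, per-object coordinate storage) is given below; the OpenCV MOG2 subtractor, the file name, and the blob-size threshold are my substitutions for the paper's components.

```python
import cv2

cap = cv2.VideoCapture("input.avi")                       # hypothetical input video
bg  = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
samples = []                                              # (frame, cx, cy, area) per detected object

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                # foreground after background learning
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 100:                                # ignore small blobs
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # center of gravity
            samples.append((frame_idx, cx, cy, m["m00"]))
    frame_idx += 1
cap.release()
# The (frame, x, y) samples are the raw material for a temporal histogram: binning
# x and y over the time axis yields the movement-feature pattern compared per object.
```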

Stereo-Vision-Based Human-Computer Interaction with Tactile Stimulation

  • Yong, Ho-Joong;Back, Jong-Won;Jang, Tae-Jeong
    • ETRI Journal
    • /
    • v.29 no.3
    • /
    • pp.305-310
    • /
    • 2007
  • If a virtual object in a virtual environment represented by a stereo vision system could be touched by a user with some tactile feeling on his/her fingertip, the sense of reality would be heightened. To create a visual impression as if the user were directly pointing to a desired point on a virtual object with his/her own finger, we need to align virtual space coordinates and physical space coordinates. Also, if there is no tactile feeling when the user touches a virtual object, the virtual object would seem to be a ghost. Therefore, a haptic interface device is required to give some tactile sensation to the user. We have constructed such a human-computer interaction system in the form of a simple virtual reality game using a stereo vision system, a vibro-tactile device module, and two position/orientation sensors.
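
Aligning virtual-space and physical-space coordinates, as described above, amounts to estimating a rigid transform from corresponding points (for example, positions reported by the two position/orientation sensors at known virtual locations). A generic least-squares (Kabsch) sketch follows; it is my illustration, not the calibration actually used in the paper.

```python
import numpy as np

def rigid_transform(phys_pts, virt_pts):
    """Least-squares R, t with virt ~ R @ phys + t (Kabsch algorithm, >= 3 point pairs)."""
    P, V = np.asarray(phys_pts, float), np.asarray(virt_pts, float)
    cp, cv_ = P.mean(axis=0), V.mean(axis=0)
    H = (P - cp).T @ (V - cv_)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against a reflection
    R = Vt.T @ D @ U.T
    t = cv_ - R @ cp
    return R, t
```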

A Study on Three-Dimensional Computer Generated Holograms by 3-D Coordinates Transformation (3차원 좌표변환에 의한 입체 컴퓨터 형성 홀로그램에 관한 연구)

  • Ryu, Won-Hyeon;Jeong, Man-Ho
    • Korean Journal of Optics and Photonics
    • /
    • v.17 no.6
    • /
    • pp.525-531
    • /
    • 2006
  • A synthesized 3-D CGH of a general three-dimensional object is obtained by using a new 3-D coordinate transformation technique. A CCD camera is used to record several projected images of the 3-D object from different viewing angles. The recorded data are numerically calculated and processed to yield two-dimensional complex functions, which are then encoded for the final synthesized 3-D CGH.
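
As a loose illustration of the synthesis idea (rotate the 3-D object coordinates, accumulate the complex field on the hologram plane, then encode it), a toy point-cloud Fresnel-type computation is sketched below; the wavelength, sampling, sample points, and the simple interference-with-reference encoding are assumptions and do not reproduce the paper's encoding scheme.

```python
import numpy as np

wavelength = 633e-9                          # m (assumed)
k = 2.0 * np.pi / wavelength
pitch, N = 10e-6, 512                        # hologram pixel pitch and resolution (assumed)

def rot_y(theta):                            # 3-D coordinate transformation (rotation about y)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

points = np.array([[0.0, 0.0, 0.20], [1e-3, 0.5e-3, 0.21], [-1e-3, 0.0, 0.19]])  # toy object (m)
points = points @ rot_y(np.radians(15)).T    # synthesize the view from another angle

xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)
field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r          # spherical wave from each object point
reference = np.exp(1j * k * X * np.sin(np.radians(1.0)))
hologram = np.abs(field + reference) ** 2    # naive amplitude encoding of the 2-D complex function
```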

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving technology for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and camera sensors have the disadvantage that special calibration objects are required or the object size is too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates using four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data are acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method results in a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
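
The two geometric pieces the abstract mentions can be sketched directly: the exact sphere center through four range points (the hypothesis step inside RANSAC), and an extrinsic estimate from the 3-D centers and their detected 2-D image centers. The PnP call below is a generic substitute for the paper's own estimation and assumes the camera intrinsics K and dist are already known.

```python
import numpy as np
import cv2

def sphere_center_from_4_points(p):
    """Center of the sphere through four non-coplanar 3-D points (RANSAC hypothesis)."""
    p = np.asarray(p, float)
    A = 2.0 * (p[1:] - p[0])                               # 3 x 3 linear system
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)

def lidar_to_camera_extrinsics(lidar_centers, image_centers, K, dist):
    """R, t such that camera_point ~ R @ lidar_point + t, from >= 4 center pairs.
    Note: the detected circle center only approximates the projected sphere center."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(lidar_centers, np.float32),
                                  np.asarray(image_centers, np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```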

Coordinates Matching in the Image Detection System For the Road Traffic Data Analysis

  • Kim, Jinman;Kim, Hiesik
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.35.4-35
    • /
    • 2001
  • An image detection system for road-traffic data analysis is a real-time detection system that uses image processing techniques to obtain real-time traffic information for traffic control and analysis. One of the most important functions of this system is to match the coordinates of the real world with those of the image from the video camera, even when there is no way to know the exact position of the camera and its height above the object. If some points on the road are known in real-world coordinates, the real-world coordinates can be calculated from the image.
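
For a flat road, the coordinate matching described above is a planar homography: four or more road points known in both image and world coordinates determine the mapping from pixels to road-plane positions. The sample correspondences below are hypothetical.

```python
import numpy as np
import cv2

img_pts   = np.float32([[102, 540], [818, 536], [590, 230], [350, 232]])    # pixels
world_pts = np.float32([[0.0, 0.0], [7.0, 0.0], [7.0, 60.0], [0.0, 60.0]])  # metres on the road plane

H = cv2.getPerspectiveTransform(img_pts, world_pts)

def image_to_road(u, v):
    """Map an image pixel (u, v) to road-plane coordinates in metres."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(image_to_road(455, 300))   # e.g. real-world position of a detected vehicle
```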

Coordinate Calibration and Object Tracking of the ODVS (Omni-directional Image에서의 이동객체 좌표 보정 및 추적)

  • Park, Yong-Min;Nam, Hyun-Jung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.408-413
    • /
    • 2005
  • This paper presents a technique that extracts a moving object from omni-directional images and estimates the real-world coordinates of the moving object using a 3-D parabolic coordinate transformation. For real-time processing, the moving object is extracted by the proposed hue-histogram matching algorithm. We demonstrate, through theoretical and experimental arguments, that the proposed technique can extract a moving object robustly despite lighting changes and can estimate approximate real-world coordinates.
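
One way to picture the 3-D parabolic coordinate transformation is to back-project each image pixel through a parabolic-mirror model to a ray through the mirror focus and intersect that ray with the ground plane; the mirror parameter, image center, camera height, and the orthographic-camera assumption below are mine, not the paper's calibration.

```python
import numpy as np

h      = 40.0            # mirror radius at the focal plane, in pixels (assumed)
cx, cy = 320.0, 240.0    # image center (assumed)
cam_h  = 2.5             # height of the mirror focus above the ground, metres (assumed)

def pixel_to_ground(u, v):
    """Estimate ground-plane coordinates of the object imaged at pixel (u, v)."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    z = (r ** 2 - h ** 2) / (2.0 * h)     # parabolic mirror surface height at this radius
    ray = np.array([x, y, z])             # scene direction through the focus (mirror property)
    if ray[2] >= 0:                       # at or above the horizon: no ground intersection
        return None
    t = -cam_h / ray[2]                   # scale so the ray descends by the camera height
    return ray[0] * t, ray[1] * t         # ground coordinates in metres
```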

Study on the Error Compensation in Strain Measurement of Sheet Metal Forming (박판성형 변형률 측정 오차보정에 관한 연구)

  • 한병엽;차지혜;금영탁
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 2003.05a
    • /
    • pp.270-273
    • /
    • 2003
  • Strain measurement of panels in sheet metal forming is essential work that provides the experimental data needed for die design, process design, and product inspection. To measure the strain of complex geometries efficiently, a 3-dimensional automated strain measurement system is often used; it has high accuracy in theory but shows errors of about 3~5% in practice. The object of this study is to develop an error compensation technology that eliminates the strain errors arising when formed panels are measured with an automated strain measurement system. To achieve this, a position-error calibration method is suggested that corrects the coordinates of the grid nodes recognized by a camera using error functions. The position errors were found by calculating the difference between the real and measured positions of the cube nodes in terms of node coordinates, and the error calibration equations were derived by regressing the position errors. To show the validity of the suggested position-error calibration method, finite element analysis and the current calibration method were performed for the formed initial blank.
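
The calibration idea (regress the node-coordinate errors against the measured coordinates and subtract the fitted error from new measurements) can be sketched as an ordinary least-squares fit; the quadratic basis below is an assumption standing in for the paper's error functions.

```python
import numpy as np

def design(xy):
    """Quadratic polynomial basis in the measured node coordinates (assumed form)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_error_model(measured, true):
    """Least-squares coefficients of the position error (true - measured) in x and y."""
    measured = np.asarray(measured, float)
    A = design(measured)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(true, float) - measured, rcond=None)
    return coeffs                          # shape (6, 2): error functions for x and y

def correct(measured, coeffs):
    """Apply the calibrated error functions to new measured coordinates."""
    measured = np.asarray(measured, float)
    return measured + design(measured) @ coeffs
```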
