• Title/Summary/Keyword: object coordinate

An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS (단일 카메라와 GPS를 이용한 영상 내 객체 위치 좌표 추정 기법)

  • Seung, Teak-Young;Kwon, Gi-Chang;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.112-121
    • /
    • 2016
  • ADAS (Advanced Driver Assistance Systems) and street-furniture surveying vehicles such as MMS (Mobile Mapping System) require a method for estimating object locations in order to recognize the spatial information of objects in road images. Conventional methods, however, require an additional hardware module to gather the spatial information of an object and have high computational complexity. In this paper, a position estimation scheme for objects in road images, targeting the coordinates of road signs captured by a single camera, is proposed using the relationship between pixel size and real-world object size. In this scheme, the vehicle's GPS coordinate and heading are used to obtain the coordinate of a road sign in the image after estimating the equation relating the pixel size to the real size of the road sign. Experiments with a test video set confirmed that the proposed method maps the estimated object coordinates onto a commercial map with high accuracy. Therefore, the proposed method can be used for MMS in commercial applications.
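
The abstract does not give the estimation equation, but the stated relation between pixel size and real-world object size is consistent with a pinhole projection. A minimal sketch, assuming a pinhole model, a road sign of known physical height, and a flat-earth GPS offset along the vehicle heading (function and parameter names here are hypothetical, not from the paper):

```python
import math

def estimate_sign_coordinate(cam_lat, cam_lon, heading_deg,
                             focal_px, sign_height_m, sign_height_px):
    """Hypothetical sketch: estimate a road sign's GPS coordinate from a
    single image, assuming a pinhole model and a sign of known real size."""
    # Pinhole relation: distance = focal_length_px * real_size / pixel_size
    distance_m = focal_px * sign_height_m / sign_height_px

    # Offset the camera's GPS position along the vehicle heading.
    # Rough flat-earth approximation (adequate for tens of metres).
    d_lat = (distance_m * math.cos(math.radians(heading_deg))) / 111_320.0
    d_lon = (distance_m * math.sin(math.radians(heading_deg))) / \
            (111_320.0 * math.cos(math.radians(cam_lat)))
    return cam_lat + d_lat, cam_lon + d_lon

# Example: a 0.9 m sign imaged 90 px tall with an 800 px focal length
print(estimate_sign_coordinate(35.1, 129.0, 45.0, 800.0, 0.9, 90.0))
```

Under this assumption, a 0.9 m sign imaged at 90 px with an 800 px focal length lies 8 m ahead of the camera.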

An Efficient Rendering Method of Object Representation Based on Spherical Coordinate System (물체의 구 좌표계 표현을 이용한 효율적인 렌더링 방법)

  • Han, Eun-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.8 no.3
    • /
    • pp.69-76
    • /
    • 2008
  • This paper presents a novel rendering algorithm based on a spherical coordinate representation of the object. The vertices of the object are transformed into the spherical coordinate system, and additional maps are constructed: the centroid and index of each triangle, and a memory access table. While the OpenGL rendering pipeline touches every vertex of an object, the proposed method considers only the visible vertices by examining the visible triangles of the object. Simulation results demonstrate that the proposed method achieves efficient rendering performance.
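
The abstract does not spell out the conversion, but transforming vertices "into the spherical coordinate system" is conventionally the Cartesian-to-spherical mapping below; centring on the object centroid is an assumption, and the triangle/index maps and memory access table of the paper are not reproduced:

```python
import numpy as np

def to_spherical(vertices):
    """Convert an (N, 3) array of Cartesian vertices (x, y, z) into
    spherical coordinates (r, theta, phi) about the object's centroid."""
    centered = vertices - vertices.mean(axis=0)      # centre on the object
    x, y, z = centered[:, 0], centered[:, 1], centered[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                  # radius
    theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))  # polar angle
    phi = np.arctan2(y, x)                           # azimuth
    return np.stack([r, theta, phi], axis=1)

verts = np.random.rand(100, 3)
print(to_spherical(verts)[:3])
```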

Improved Rendering on Spherical Coordinate System using Convex Hull (컨벡스 헐을 이용한 개선된 구 좌표계 기반 렌더링 방법)

  • Kim, Nam-Jung;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.10 no.1
    • /
    • pp.157-165
    • /
    • 2010
  • This paper presents a novel real-time rendering algorithm based on a spherical coordinate representation of the object using its convex hull. While the OpenGL rendering pipeline touches every vertex of an object, the proposed method considers only the visible vertices by examining the visible triangles of the object. To determine the visible areas of the object in its spherical coordinate representation, the proposed method uses the 3D geometric relation between the six plane equations of the camera frustum and the bounding sphere of the object. In addition, the convex hull of the object and its maximum side factors are computed for hidden-surface removal. Simulation results show that the quality of the resulting image is almost the same as the original image while rendering performance is greatly improved.
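
A common way to test an object's bounding sphere against the six frustum plane equations, as the abstract describes, is the signed-distance check sketched below. The inward-facing, unit-normal plane convention is an assumption; the convex-hull and maximum-side-factor steps of the paper are not shown:

```python
import numpy as np

def sphere_in_frustum(center, radius, planes):
    """Return True if a bounding sphere is at least partially inside the
    camera frustum. `planes` is a (6, 4) array of plane equations
    (a, b, c, d) with unit normals pointing into the frustum."""
    for a, b, c, d in planes:
        signed_dist = a * center[0] + b * center[1] + c * center[2] + d
        if signed_dist < -radius:          # sphere entirely outside this plane
            return False
    return True

# Example: a unit-radius sphere at the origin against a box-shaped frustum
planes = np.array([
    [ 1, 0, 0, 5], [-1, 0, 0, 5],          # left / right
    [ 0, 1, 0, 5], [ 0,-1, 0, 5],          # bottom / top
    [ 0, 0, 1, 0], [ 0, 0,-1, 100],        # near / far
], dtype=float)
print(sphere_in_frustum(np.zeros(3), 1.0, planes))  # True
```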

A method for image processing by use of inertial data of camera

  • Kaba, K.;Kashiwagi, H.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1998.10a
    • /
    • pp.221-225
    • /
    • 1998
  • This paper presents a method for recognizing the image of a tracked object by processing the image from a camera whose attitude is controlled in inertial space with an inertial coordinate system. To recognize the object, a pseudo-random M-array is attached to it and observed by the camera, which is controlled on an inertial coordinate basis by an inertial stabilization unit. When the attitude of the camera changes, the observed image of the M-array is transformed by an affine transformation into the image in the inertial coordinate system. By taking the cross-correlation function between the affine-transformed image and the original image, the object can be recognized. As parameters of the camera attitude, the azimuth angle of the camera, detected by the gyroscope of an inertial sensor, and the elevation angle of the camera, calculated from the gravitational acceleration detected by a servo accelerometer, are used.
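
The paper's M-array pattern and affine attitude compensation are not detailed in the abstract; the sketch below only illustrates the final step, a brute-force normalized cross-correlation whose peak locates a reference pattern in an (already attitude-compensated) image:

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image` and return the (row, col) of the peak
    normalized cross-correlation. Brute-force sketch; the paper's M-array
    and affine attitude compensation are not reproduced here."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r+th, c:c+tw]
            w = w - w.mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

img = np.random.rand(32, 32)
tpl = img[10:18, 12:20].copy()              # embed a known patch
print(normalized_cross_correlation(img, tpl))  # expect ((10, 12), ~1.0)
```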

Coordinate Calibration and Object Tracking of the ODVS (Omni-directional Image에서의 이동객체 좌표 보정 및 추적)

  • Park, Yong-Min;Nam, Hyun-Jung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.408-413
    • /
    • 2005
  • This paper presents a technique that extracts a moving object from omni-directional images and estimates the real-world coordinates of the moving object using a 3D parabolic coordinate transformation. For real-time processing, the moving object is extracted by the proposed hue-histogram matching algorithm. We demonstrate, with theoretical and experimental arguments, that the proposed technique can extract a moving object robustly despite lighting changes and can estimate approximate values of its real coordinates.
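
The abstract gives no details of the hue-histogram matching algorithm, so the following is only a generic hue-histogram comparison by histogram intersection, with the bin count and hue range chosen arbitrarily:

```python
import numpy as np

def hue_histogram(hue_channel, bins=36):
    """Normalized hue histogram; hue values assumed in [0, 360)."""
    hist, _ = np.histogram(hue_channel, bins=bins, range=(0.0, 360.0))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical hue distributions."""
    return np.minimum(h1, h2).sum()

region_a = np.random.uniform(0, 360, size=(40, 40))           # candidate region
region_b = (region_a + np.random.normal(0, 5, size=(40, 40))) % 360  # perturbed copy
print(histogram_intersection(hue_histogram(region_a), hue_histogram(region_b)))
```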

Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.2379-2384
    • /
    • 2005
  • We propose a novel method for object recognition using a smart tag system together with a stereo vision system on a pan-tilt mechanism. We developed a smart tag containing an IRED device; the tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image is centered in each camera view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED into the robot coordinate system by using the pan and tilt angles. Then, to map the size and pose of the object into the robot coordinate system, a simple model-based vision algorithm is used. To increase the feasibility of tag-based object recognition, the approach is implemented with techniques that are as simple as possible.
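
Mapping the IRED position to the robot coordinate system from the pan and tilt angles amounts to converting a pointing direction (plus a range estimate, e.g. from the stereo pair) into a 3D point. A rough sketch, with an assumed axis convention (pan about z, tilt above the x-y plane) that may differ from the paper's mechanism:

```python
import numpy as np

def pan_tilt_to_point(pan_deg, tilt_deg, range_m):
    """Map a pan/tilt pointing direction plus an estimated range (e.g. from
    stereo disparity) to a 3D point in the pan-tilt unit's base frame.
    Convention assumed here: pan about the z-axis, tilt above the x-y plane."""
    pan = np.radians(pan_deg)
    tilt = np.radians(tilt_deg)
    direction = np.array([
        np.cos(tilt) * np.cos(pan),
        np.cos(tilt) * np.sin(pan),
        np.sin(tilt),
    ])
    return range_m * direction

# Tag seen 30 degrees to the left, 10 degrees up, 2.5 m away
print(pan_tilt_to_point(30.0, 10.0, 2.5))
```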

The Image Position Measurement for the Selected Object out of the Center using the 2 Points Polar Coordinate Transform (2 포인트 극좌표계 변환을 이용한 중심으로부터의 목표물 영상 위치 측정)

  • Seo, Choon Weon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.11
    • /
    • pp.147-155
    • /
    • 2015
  • For an image processing system to classify a selected object in a natural scene, rotation-, scale-, and translation-invariant features are necessary. Many studies have investigated how to obtain such information for object processing, and the log-polar transform, which yields features invariant to scale and rotation, is commonly used. In this paper, we propose a two-point polar coordinate transform method, combined with a centroid method, to measure the position of a selected object offset from the center of the input image. In the proposed system, the position results for objects are very good, and we obtained a similarity ratio of 99-104% for the object coordinate values.
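
The two-point polar coordinate transform itself is not specified in the abstract; the sketch below only shows the underlying idea of reading an object's centroid position as a radius and angle about the image centre, which is an assumption about, not a reproduction of, the paper's method:

```python
import numpy as np

def centroid_polar_position(mask):
    """Return (radius, angle_deg) of a binary object mask's centroid,
    measured from the image centre. This is a plain polar-coordinate
    reading of the centroid, not the paper's exact two-point transform."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # object centroid
    h, w = mask.shape
    dy, dx = cy - (h - 1) / 2.0, cx - (w - 1) / 2.0  # offset from image centre
    radius = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx))
    return radius, angle

mask = np.zeros((100, 100), dtype=bool)
mask[20:30, 60:70] = True                            # object above-right of centre
print(centroid_polar_position(mask))
```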

A Development of Object Shape Recognition Module using Laser Sensor (레이저 센서를 이용한 물체의 형상인식 모듈 개발)

  • Kwak, Sung-Hwan;Lee, Seung-Kyu;Lee, Seung-Jae;Oh, Kyu-Hyun;Kim, Young-Sik;Choi, Joong-Koung;Park, Mu-Hun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.11
    • /
    • pp.1923-1932
    • /
    • 2008
  • In this paper, we suggest a method that extracts the 3D location coordinates of objects, slats and coils, using a laser sensor. To extract the 3D location coordinate of an object, we first extract the edge of the object; second, we extract the z-axis angle of the laser sensor; third, we extract the 2D location coordinate of the object using the object edge and the z-axis angle of the laser sensor; fourth, we discriminate between a slat and a coil. The results of this study are expected to be of considerable help in developing automation systems for unmanned transportation equipment.
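
The abstract describes combining the object edge with the sensor's z-axis angle to obtain 2D coordinates; the generic polar-to-Cartesian conversion below is the usual way a laser range and scan angle map to plane coordinates (a sketch under that assumption, not the paper's exact procedure):

```python
import numpy as np

def scan_to_xy(ranges_m, angles_deg, sensor_xy=(0.0, 0.0)):
    """Convert laser range/angle pairs into 2D coordinates in the sensor's
    plane. Each hit at angle theta and range r maps to
    (x, y) = sensor + r * (cos theta, sin theta)."""
    angles = np.radians(np.asarray(angles_deg, dtype=float))
    r = np.asarray(ranges_m, dtype=float)
    x = sensor_xy[0] + r * np.cos(angles)
    y = sensor_xy[1] + r * np.sin(angles)
    return np.stack([x, y], axis=1)

# Three returns sweeping across an object edge
print(scan_to_xy([4.0, 4.1, 5.5], [-2.0, 0.0, 2.0]))
```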

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, the lack of robustness of task results, high system cost, and failure to gain the operator's trust. This paper proposes a scheme that could overcome the current limitations of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it is divided into two categories: efficient task sharing between the operator and the CCM (computer controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. The 3D coordinate information was obtained from camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving the camera a certain distance horizontally, normal to the focal axis, and acquiring images at the two locations. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of data points in the 2D image and 3D world space. The 3D world coordinate was then computed from the two sets of image pixel coordinates using the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated the object by touching its location in the captured image on the touch-pad screen; a local image processing area of a certain size was then specified around the touch, and image processing was performed within that local area to extract the desired features of the object. An MS Windows based interface software was developed using Visual C++ 6.0, with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
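
The paper calibrates a full transformation matrix from six known 2D-3D point pairs; that least-squares step is not reproduced here. The sketch below shows only the simpler core of the stereo step it enables: triangulating a 3D point from two images taken before and after a purely horizontal camera translation, assuming a known focal length and principal point:

```python
import numpy as np

def triangulate_parallel_stereo(xl, yl, xr, focal_px, baseline_m,
                                cx=320.0, cy=240.0):
    """Recover a 3D point from a pair of images taken before and after a
    purely horizontal camera translation of `baseline_m` (parallel stereo).
    (xl, yl) and (xr, yl) are the pixel coordinates of the same object
    point in the two images; `focal_px` is the focal length in pixels and
    (cx, cy) the principal point."""
    disparity = xl - xr                          # pixel shift between views
    Z = focal_px * baseline_m / disparity        # depth along the optical axis
    X = (xl - cx) * Z / focal_px
    Y = (yl - cy) * Z / focal_px
    return np.array([X, Y, Z])

# Object point seen at x=400 in the first image and x=360 after a 0.1 m move
print(triangulate_parallel_stereo(400.0, 250.0, 360.0, focal_px=800.0, baseline_m=0.1))
```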

Realization for Moving Object Tracking System in Two Dimensional Plane using Stereo Line CCD

  • Kim, Young-Bin;Ryu, Kwang-Ryol;Sun, Min-Gui;Sclabassi, Robert
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.157-160
    • /
    • 2008
  • A system for detecting and tracking a moving object in a two-dimensional plane using stereo line CCDs and a lighting source is presented in this paper. Instead of processing camera images directly, two line-CCD sensors and their input line images are used to measure two-dimensional distance by comparing the brightness on the line CCDs. The algorithms used are a moving-object tracking method and a coordinate-conversion method. To ensure effective detection of the moving path, a detection algorithm that evaluates the reliability of each measured distance was developed. In the realized system, moving-object recognition shows 5 mm resolution with a mean error of 1.89%, and the system can track an object's moving path every 100 ms.
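
How brightness is compared across the two line CCDs is not detailed in the abstract; the sketch below assumes each CCD simply reports the pixel index of its brightness peak, so the pair behaves like a one-dimensional stereo rig and the object's plane position follows by triangulation (all parameter names are hypothetical):

```python
import numpy as np

def locate_from_two_line_ccds(pixel_a, pixel_b, focal_px, ccd_center_px,
                              baseline_m):
    """Locate a bright point in the 2D plane from two parallel line CCDs
    separated horizontally by `baseline_m`. Each CCD reports the pixel
    index of its brightness peak; the pair acts like a 1D stereo rig."""
    # Bearing offsets (lateral shift per unit depth)
    ta = (pixel_a - ccd_center_px) / focal_px
    tb = (pixel_b - ccd_center_px) / focal_px
    depth = baseline_m / (ta - tb)               # distance from the CCD line
    lateral = ta * depth                         # offset along the baseline
    return lateral, depth

# Peak at pixel 300 on CCD A and pixel 260 on CCD B, sensors 0.2 m apart
print(locate_from_two_line_ccds(300.0, 260.0, focal_px=500.0,
                                ccd_center_px=256.0, baseline_m=0.2))
```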
