• Title/Summary/Keyword: object coordinates

A Study on Tracking a Moving Object using Photogrammetric Techniques - Focused on a Soccer Field Model - (사진측랑기법을 이용한 이동객체 추적에 관한 연구 - 축구장 모형을 중심으로 -)

  • Bae Sang-Keun;Kim Byung-Guk;Jung Jae-Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.2 / pp.217-226 / 2006
  • Object extraction and tracking are fundamental steps in digital image processing and computer vision, and many algorithms for them have been developed. In this research, a method is proposed for tracking a moving object using a pair of CCD cameras and calculating its coordinates. A 1/100-scale miniature of a soccer field was built to apply the developed algorithms. Candidate regions are first selected from the acquired images using the RGB values of the moving object (a soccer ball), and the object is then extracted from the candidates by its size (MBR size), which yields its image coordinates. The real-time position of the moving object is tracked within a boundary of expected motion centered on the object. Its 3D position is obtained by performing relative orientation, absolute orientation, and space intersection on the pair of CCD camera images.
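
The color-threshold and space-intersection steps described above can be illustrated with a short sketch. This is not the authors' implementation; the color range, the MBR area limits, and the calibrated 3x4 projection matrices P1 and P2 are assumed values.

```python
# Sketch only: detect the ball by color range and MBR size, then triangulate its 3D position.
import cv2
import numpy as np

def detect_ball(img_bgr, lower=(0, 0, 150), upper=(80, 80, 255), min_area=30, max_area=500):
    """Select candidates by color range (BGR order), keep the one whose bounding box (MBR) fits."""
    mask = cv2.inRange(img_bgr, np.array(lower, np.uint8), np.array(upper, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if min_area <= w * h <= max_area:
            return (x + w / 2.0, y + h / 2.0)   # image coordinates of the ball centre
    return None

def space_intersection(P1, P2, pt_left, pt_right):
    """Triangulate the 3D position from corresponding image coordinates in the two views."""
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(pt_left).reshape(2, 1),
                              np.float32(pt_right).reshape(2, 1))
    return (X[:3] / X[3]).ravel()               # homogeneous -> world coordinates
```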

Real-time moving object tracking and distance measurement system using stereo camera (스테레오 카메라를 이용한 이동객체의 실시간 추적과 거리 측정 시스템)

  • Lee, Dong-Seok;Lee, Dong-Wook;Kim, Su-Dong;Kim, Tae-June;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.14 no.3 / pp.366-377 / 2009
  • In this paper, we implement a real-time system that extracts 3-dimensional coordinates from the left and right images captured by a stereo camera and gives users a sense of reality through a virtual space driven by those 3-dimensional coordinates. In general, all pixels in the correspondence region are compared for disparity estimation; for real-time processing, however, the proposed algorithm uses only the central coordinates of the correspondence region. In the implemented system, 3D coordinates are obtained from the depth information derived from the estimated disparity, and the user's hand is set as the region of interest (ROI). Once the user's hand is detected as the ROI, the system keeps tracking the hand's movement and generates a virtual space controlled by the hand. Experimental results show that the implemented system estimates disparity in real time with a mean error of less than 0.68 cm within a distance range of 1.5 m, and achieves more than 90% accuracy in hand recognition.
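
The depth-from-disparity step can be sketched as follows; the block-matching parameters, focal length, and baseline below are assumed calibration values rather than those of the implemented system.

```python
# Sketch only: block-matching disparity and the pinhole relation Z = f * B / d,
# evaluated at the centre of the ROI as the paper does for real-time speed.
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.06   # stereo baseline in metres (assumed)

def depth_map(left_gray, right_gray):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed point -> pixels
    disp[disp <= 0] = np.nan                     # mark invalid matches
    return FOCAL_PX * BASELINE_M / disp          # metric depth per pixel

def roi_distance(depth, roi):
    x, y, w, h = roi
    return depth[y + h // 2, x + w // 2]         # use only the central coordinate of the ROI
```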

Optimization of Cable Stayed Bridges Considering Initial Cable Tension and Tower Coordinates (사장교의 초기인장력과 주탑좌표를 고려한 최적설계)

  • Kim, Kyung Seung;Kim, Moon Kyum;Hwang, Hak Joo
    • KSCE Journal of Civil and Environmental Engineering Research / v.8 no.2 / pp.205-213 / 1988
  • Optimizing a cable-stayed bridge is not a simple task, because the design variables include, in addition to the section properties, the number and arrangement of cables, the initial tension forces of the cables, and the type and height of the tower. This study deals with an optimization problem for cable-stayed bridges that considers initial cable forces, the section properties of the girder and the tower, and the coordinates of the tower. To avoid the difficulty of handling numerous mutually interacting variables, separate design spaces are adopted for the initial cable forces, the section properties, and the coordinates. The strain energy stored in the structure is used as the objective function in the design of the initial cable forces, while the weight of the structure is used in the design of the sections and coordinates. The upper and lower limits of the initial forces, allowable stresses including the effect of buckling, and the lower limit of the sectional area are considered as constraints. The proposed method is applied to a fan-type bridge and a harp-type bridge. Comparison of the results with previous results in the literature indicates that the proposed method yields rational design values. It is also shown that the coordinate optimization, which is usually omitted from the optimization process, results in additional saving of materials.
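
The alternating use of separate design spaces can be sketched as below. This is only an illustration of the scheme; strain_energy, weight, the bounds, and the constraint set stand in for a real structural analysis and are not taken from the paper.

```python
# Illustrative sketch of alternating optimization over three design spaces.
from scipy.optimize import minimize

def optimize_bridge(T0, A0, X0, model, n_cycles=5):
    """T: initial cable forces, A: section properties, X: tower coordinates.
    `model` is a hypothetical analysis object supplying objectives, bounds and constraints."""
    T, A, X = T0, A0, X0
    for _ in range(n_cycles):
        # 1) cable forces: minimize strain energy with sections and coordinates fixed
        T = minimize(lambda t: model.strain_energy(t, A, X), T, bounds=model.force_bounds).x
        # 2) sections: minimize structure weight under stress (incl. buckling) and area limits
        A = minimize(lambda a: model.weight(T, a, X), A,
                     bounds=model.area_bounds, constraints=model.stress_constraints(T, X)).x
        # 3) tower coordinates: minimize weight again with cable forces and sections fixed
        X = minimize(lambda x: model.weight(T, A, x), X).x
    return T, A, X
```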

Augmented Reality System Using Vanishing Lines (소실선을 이용한 증강현실 시스템)

  • Ban, Kyeong-Jin;Kim, Jong-Chan;Kim, Kyeong-Og;Kim, Eung-Kon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.676-678 / 2010
  • Conventional augmented reality has used data gloves or markers for smooth interaction between objects and the background, which causes inconvenience and lowers immersion. To build up immersion in augmented reality, additional input devices must be removed. This paper proposes a method that creates virtual space coordinates for interaction without wearing additional input devices, thereby improving immersion in augmented reality. The acquired image is projected onto a two-dimensional space and vanishing lines are extracted to calculate the virtual space coordinates. The sizes of the inserted objects are then varied according to the size of the virtual coordinate area based on the image projected onto the two-dimensional coordinates, which results in improved immersion. The method can also increase the efficiency of object creation by eliminating the need for a 3D modeler to create three-dimensional objects.
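
A rough sketch of the vanishing-line step is given below; it estimates a vanishing point by intersecting Hough lines, with edge and accumulator thresholds chosen arbitrarily rather than taken from the paper.

```python
# Sketch only: detect straight lines and intersect them to estimate a vanishing point.
import cv2
import numpy as np

def vanishing_point(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)   # (rho, theta) pairs
    if lines is None or len(lines) < 2:
        return None
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
            A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:       # skip near-parallel pairs
                continue
            pts.append(np.linalg.solve(A, np.array([r1, r2])))
    return np.median(np.array(pts), axis=0)        # robust estimate of the intersection point
```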

Estimation of two-dimensional position of soybean crop for developing weeding robot (제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출)

  • SooHyun Cho;ChungYeol Lee;HeeJong Jeong;SeungWoo Kang;DaeHyun Lee
    • Journal of Drive and Control / v.20 no.2 / pp.15-23 / 2023
  • In this study, the two-dimensional locations of crops for automatic weeding were detected using deep learning. To construct a dataset for soybean detection, an image-capturing system was developed using a mono camera and a single-board computer, and the system was mounted on a weeding robot to collect soybean images. The dataset was constructed by extracting RoIs (regions of interest) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that provides object localization using only image-level labels. The soybean area can be visualized via a CAM (class activation map), and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming the pixel coordinates to world coordinates. Estimates were evaluated against actual positions determined manually as pixel coordinates in the image; in world coordinates, the MSE was 6.6 (X-axis) and 5.1 (Y-axis) and the RMSE was 1.2 (X-axis) and 2.2 (Y-axis). From the results, we confirmed that the center position of the soybean area derived through deep learning is sufficient for use in automatic weeding systems.
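
The localization step, thresholding the CAM, clustering the activated pixels, and mapping the cluster centre to world coordinates, can be sketched as below. The homography H and the threshold are assumptions, not values from the paper.

```python
# Sketch only: CAM -> binary mask -> largest connected component -> pixel-to-world mapping.
import cv2
import numpy as np

def soybean_position(cam, H, thresh=0.5):
    """cam: 2D class activation map; H: 3x3 pixel-to-ground homography (assumed known)."""
    mask = (cam >= thresh * cam.max()).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                                # only background found
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])     # skip the background label 0
    cx, cy = centroids[largest]                              # centre in pixel coordinates
    world = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)
    return world.ravel()                                     # (X, Y) in world coordinates
```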

Multi-object detection using HOG and effective individual object tracking (HOG를 이용한 다중객체 검출과 효과적인 개별객체 추적)

  • Choi, Min;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.894-897 / 2012
  • We propose an effective method that uses the HOG (Histogram of Oriented Gradients) feature vector to track individual objects in an environment where multiple objects are moving. The proposed algorithm consists of pre-processing, object detection, and object tracking. We experimented with six videos containing various trajectories and movements. When occlusion between objects occurred, we identified individual objects using the centers and predicted coordinates of the moving objects. The algorithm achieves a tracking rate of 85.45% on the test videos. We expect the proposed system to be useful in security systems that require analysis of the position and motion patterns of objects.
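
A minimal sketch of the HOG detection stage is shown below, using OpenCV's default people detector as a stand-in; the paper's own detector, tracker, and occlusion handling are not reproduced.

```python
# Sketch only: HOG-based multi-object detection and the centre coordinates used for tracking.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_objects(frame):
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
    centers = [(x + w // 2, y + h // 2) for (x, y, w, h) in boxes]
    return boxes, centers
```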

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.643-651 / 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, but most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, combined with a hierarchical approach, for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. The geometric characteristics of various roof types are well described and will eventually serve as the basis for object modeling. The segmentation accuracy on the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries; the overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
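
The idea of extending a chain code to 3D can be illustrated with a toy sketch that quantizes the direction between consecutive points into one of 26 neighbouring directions; the paper's hierarchical scheme is more elaborate, and this is not its implementation.

```python
# Toy sketch: 3D chain code over an ordered point sequence (assumes no repeated points).
import numpy as np

def chain_code_3d(points):
    """points: (N, 3) array ordered along a boundary or profile; returns codes in 0..25."""
    dirs = np.sign(np.diff(points, axis=0)).astype(int)        # each component in {-1, 0, 1}
    codes = [(d[0] + 1) * 9 + (d[1] + 1) * 3 + (d[2] + 1) for d in dirs]
    return [c if c < 13 else c - 1 for c in codes]             # drop the unused (0,0,0) slot
```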

Implementation of Disparity Information-based 3D Object Tracking

  • Ko, Jung-Hwan;Jung, Yong-Woo;Kim, Eun-Soo
    • Journal of Information Display / v.6 no.4 / pp.16-25 / 2005
  • In this paper, a new 3D object tracking system using the disparity motion vector (DMV) is presented. In the proposed method, time-sequential disparity maps are extracted from the sequence of stereo input image pairs, and these disparity maps are used to sequentially estimate the DMV, defined as the disparity difference between two consecutive disparity maps. Similarly to motion vectors in conventional video signals, the DMV provides motion information about a moving target by showing a relatively large change in the disparity values in the target areas. Accordingly, the DMV helps detect the target area and its location coordinates. Based on these location data of the moving target, the pan/tilt embedded in the stereo camera system can be controlled to achieve real-time stereo tracking of the moving target. Experiments with 9 frames of stereo image pairs of 256x256 pixels show that the proposed DMV-based stereo object tracking system can track the moving target with a relatively low error ratio of about 3.05% on average.
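
The DMV idea, differencing two consecutive disparity maps and locating the region of large change, can be sketched as follows; the matcher parameters and the change threshold are assumptions.

```python
# Sketch only: disparity motion vector (DMV) from consecutive disparity maps.
import cv2
import numpy as np

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def disparity(left_gray, right_gray):
    return stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

def track_target(disp_prev, disp_curr, change_thresh=4.0):
    dmv = disp_curr - disp_prev                   # disparity change between consecutive frames
    ys, xs = np.nonzero(np.abs(dmv) > change_thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())     # target location in image coordinates
```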

Stereo Object Tracking System using Multiview Image Reconstruction Scheme (다시점 영상복원 기법을 이용한 스테레오 물체추적 시스템)

  • Ko, Jung-Hwan;Ohm, Woo-Young
    • 전자공학회논문지 IE / v.43 no.2 / pp.54-62 / 2006
  • In this paper, a new stereo object tracking system using the disparity motion vector is proposed. In the proposed method, the time-sequential disparity motion vector is estimated from the disparity vectors extracted from the sequence of stereo input image pairs, and these disparity motion vectors are then used to detect, in the input stereo image, the area where the target object is located and its location coordinates. Based on these location data of the target object, the pan/tilt embedded in the stereo camera system can be controlled, and as a result stereo tracking of the target object becomes possible. Experiments with 2 frames of stereo image pairs of 256×256 pixels show that the proposed stereo tracking system can adaptively track the target object with a low error ratio of about 3.05% on average between the detected and actual location coordinates of the target object.
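
The pan/tilt control step can be sketched as a simple proportional rule that converts the target's pixel offset from the image centre into angle commands; the field-of-view values and gain below are assumptions, and the 256×256 image size follows the experiments.

```python
# Sketch only: proportional pan/tilt command from the target's offset to the image centre.
H_FOV_DEG, V_FOV_DEG = 60.0, 45.0     # assumed camera field of view
IMG_W, IMG_H = 256, 256               # image size used in the experiments

def pan_tilt_command(target_xy, gain=0.8):
    x, y = target_xy
    pan = gain * (x - IMG_W / 2) / IMG_W * H_FOV_DEG
    tilt = -gain * (y - IMG_H / 2) / IMG_H * V_FOV_DEG   # image y grows downward
    return pan, tilt                                      # degrees to re-centre the target
```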

Contact Detection based on Relative Distance Prediction using Deep Learning-based Object Detection (딥러닝 기반의 객체 검출을 이용한 상대적 거리 예측 및 접촉 감지)

  • Hong, Seok-Mi;Sun, Kyunghee;Yoo, Hyun
    • Journal of Convergence for Information Technology / v.12 no.1 / pp.39-44 / 2022
  • The purpose of this study is to extract the type, location, and absolute size of objects in an image using a deep learning algorithm, predict the relative distance between the objects, and use this to detect contact between them. To analyze the size ratio of objects, YOLO, a CNN-based object detection algorithm, is used. Through the YOLO algorithm, the absolute size and position of each object are extracted in the form of coordinates. From the extracted result, the ratio between the size in the image and the actual size is computed using a pre-stored standard object-size list matched by object name, and the relative distance between the camera and each object in the image is predicted. Based on the predicted values, contact between objects is detected.
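
The size-ratio distance estimate and the contact test can be sketched with a pinhole model: distance = focal length (in pixels) x real width / pixel width. The focal length, the size-list entries, and the contact tolerance below are illustrative assumptions, not the paper's values.

```python
# Sketch only: relative distance from a standard object-size list, then a simple contact test.
FOCAL_PX = 1000.0                                   # assumed focal length in pixels
STANDARD_SIZE_M = {"person": 0.45, "car": 1.8}      # illustrative real widths in metres

def relative_distance(label, box):
    """box = (x, y, w, h) in pixels from the detector."""
    _, _, w, _ = box
    return FOCAL_PX * STANDARD_SIZE_M[label] / w

def in_contact(label1, box1, label2, box2, tol_m=0.3):
    """Flag contact when the predicted camera distances are nearly equal and the boxes overlap in x."""
    close_in_depth = abs(relative_distance(label1, box1) - relative_distance(label2, box2)) < tol_m
    x_overlap = box1[0] < box2[0] + box2[2] and box2[0] < box1[0] + box1[2]
    return close_in_depth and x_overlap
```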