• Title/Summary/Keyword: object coordinates


The ConvexHull using Outline Extraction Algorithm in Gray Scale Image (이진 영상에서 ConvexHull을 이용한 윤곽선 추출 알고리즘)

  • Cho, Young-bok;Kim, U-ju;Woo, Sung-hee
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.162-165
    • /
    • 2017
  • The proposed method extracts the region of interest from the input x-ray image and compares it with a reference image. X-ray images of the same object share the same shape, but the size, direction, and position of the object differ between acquisitions. We therefore measure the difference in gray-level intensity between images of the same object using a similarity measurement method. Distance measurement calculates the distance between two points given as (x, y, z) vector coordinates of the x-ray data. Experimental results show that the proposed method improves the accuracy of ROI extraction and that reference-image matching is faster than with the conventional method.
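
The point-to-point distance computation mentioned in this abstract is the ordinary Euclidean norm over (x, y, z) coordinates; a minimal sketch (the function name and sample values are illustrative, not from the paper):

```python
import math

def euclidean_distance(p1, p2):
    """Distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical feature points from x-ray data
d = euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 12.0))
```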


Building DSMs Generation Integrating Three Line Scanner (TLS) and LiDAR

  • Suh, Yong-Cheol;Nakagawa, Masafumi
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.3
    • /
    • pp.229-242
    • /
    • 2005
  • Photogrammetry is a current method of GIS data acquisition. However, substantial manpower and expenditure are required to produce detailed 3D spatial information, especially in urban areas where various buildings exist, and no photogrammetric system can completely automate the process of spatial information acquisition. On the other hand, LiDAR has high potential for automating 3D spatial data acquisition because it can directly measure the 3D coordinates of objects, but it is rather difficult to recognize objects with LiDAR data alone because of its currently low resolution. With this background, we believe that it is very advantageous to integrate LiDAR data and stereo CCD images for more efficient and automated acquisition of 3D spatial data with higher resolution. In this research, an automatic urban object recognition methodology was proposed by integrating ultra-high-resolution stereo images and LiDAR data. Moreover, a more reliable and detailed stereo matching method for CCD images was examined by using the LiDAR data as initial 3D data to determine the search range and to detect possible occlusions. Finally, DSMs in which urban features were identified at high resolution were generated with high-speed processing.

Algorithm of Level-3 Digital Model Generation for Cable-stayed Bridges and its Applications (Level-3 사장교 디지털 모델 생성을 위한 알고리즘 및 활용)

  • Roh, Gi-Tae;Dang, Ngoc Son;Shim, Chang-Su
    • Journal of KIBIM
    • /
    • v.9 no.4
    • /
    • pp.41-50
    • /
    • 2019
  • Digital models for a cable-stayed bridge are defined considering data-driven engineering from design to construction. Algorithms for digital object generation of each component of the cable-stayed bridge were developed. Using these algorithms, Level-3 BIM practices can be realized from the design stages. Based on previous practices, a digital object library can be accumulated. Basic digital models are modified by a designer according to the given design conditions. Once design models are planned, various applications such as quantity estimation, drawings, and mechanical properties are linked to the models. Federated bridge models are then delivered to the construction stages, where they can be efficiently revised according to situations that change during construction. In this paper, measured coordinates are imported into the model generation algorithms and revised models are obtained. Augmented reality devices and their applications are also proposed, and AR simulations on the construction site and in office conditions are tested. From this pilot test of digital models, it can be said that Level-3 BIM practices can be realized by using in-house modeling algorithms tailored to different purposes.

Development of the Accuracy Improvement Algorithm of Geopositioning of High Resolution Satellite Imagery based on RF Models (고해상도 위성영상의 RF모델 기반 지상위치의 정확도 개선 알고리즘 개발)

  • Lee, Jin-Duk;So, Jae-Kyeong
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.12 no.1
    • /
    • pp.106-118
    • /
    • 2009
  • Satellite imagery with a high resolution of about one meter is widely used in commercial and government applications ranging from earth observation and monitoring to national digital mapping. Because IKONOS Pro and Precision products are expensive, it is attractive to use the low-cost IKONOS Geo product with vendor-provided rational polynomial coefficients (RPCs) to produce highly accurate mapping products. The imaging geometry of IKONOS high-resolution imagery is described by RFs instead of rigorous sensor models. This paper presents four different polynomial models, namely the offset model, the scale and offset model, the Affine model, and the 2nd-order polynomial model, defined respectively in object space and image space, to improve the accuracy of the RF-derived ground coordinates. Algorithms both for deriving RF-based ground coordinates and for improving their accuracy with the four models are developed. The experiment also evaluates the effect of different cartographic parameters, such as the number, configuration, and accuracy of ground control points, on the accuracy of geopositioning. In an experimental application, the root mean square errors of three-dimensional ground coordinates first derived by the vendor-provided Rational Function models averaged 8.035 m in X, 10.020 m in Y, and 13.318 m in Z. After applying the polynomial correction algorithm, those errors decreased dramatically, to an average of 2.791 m in X, 2.520 m in Y, and 1.441 m in Z. That is, accuracy was greatly improved: by 65% in planimetry and 89% in the vertical direction.
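
Of the four correction models listed in this abstract, the Affine model in object space is the simplest to illustrate. The sketch below fits a 2D affine correction to hypothetical ground control points by linear least squares; all coordinate values and names are assumptions, not the authors' data:

```python
import numpy as np

# Hypothetical GCP data: RF-derived horizontal coordinates and the
# corresponding surveyed (true) coordinates, in metres. The offsets
# roughly mimic the ~8-10 m biases reported in the abstract.
rf_xy = np.array([[100.0, 200.0], [150.0, 260.0], [90.0, 310.0],
                  [170.0, 330.0], [120.0, 280.0]])
true_xy = rf_xy + np.array([8.0, -10.0]) + 0.01 * rf_xy[:, ::-1]

# Affine correction model: corrected = [X, Y, 1] @ C, fitted to the
# GCPs by linear least squares.
A = np.hstack([rf_xy, np.ones((len(rf_xy), 1))])
C, *_ = np.linalg.lstsq(A, true_xy, rcond=None)

def affine_correct(xy):
    """Apply the fitted affine correction to an RF-derived coordinate."""
    return np.append(np.asarray(xy, dtype=float), 1.0) @ C
```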


Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, lack of robustness of task results, high system cost, loss of operator confidence, and so on. This paper proposed a scheme that could overcome the current limitations in task capability of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it was classified into two categories: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a certain distance in the horizontal direction normal to the focal axis and acquiring images at the two locations. The transformation matrix for camera calibration was obtained via a least-squares approach using 6 specified known pairs of data points in the 2D image and 3D world space. A 3D world coordinate was obtained from the two sets of image pixel coordinates of both camera images with the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. Object indication was done by the operator's finger touch on the captured image using the touch-pad screen. A local image processing area of a certain size was specified after the touch was made, and image processing was performed on the specified local area to extract the desired features of the object. An MS Windows-based interface software was developed using Visual C++ 6.0. The software consists of four modules: a remote image acquisition module, a task command module, a local image processing module, and a 3D coordinate extraction module. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
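
The calibration and 3D-recovery steps this abstract describes (a transformation matrix from 6 known 2D-3D point pairs via least squares, then 3D coordinates from two calibrated images) correspond to the standard direct linear transformation and linear triangulation. The sketch below is a generic illustration of those techniques, not the authors' code:

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 world-image point
    pairs by direct linear transformation (homogeneous least squares)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.array(rows)
    # The singular vector of the smallest singular value solves A p = 0
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D world point from its pixel coordinates in two
    calibrated camera images (linear triangulation)."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```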


The Optimized Detection Range of RFID-based Positioning System using k-Nearest Neighbor Algorithm

  • Kim, Jung-Hwan;Heo, Joon;Han, Soo-Hee;Kim, Sang-Min
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2008.10a
    • /
    • pp.297-302
    • /
    • 2008
  • The positioning technology for a moving object is an important and essential component of ubiquitous computing environments and applications, for which Radio Frequency Identification (RFID) has been considered a core technology. An RFID-based positioning system calculates the position of a moving object with a k-nearest-neighbor (k-nn) algorithm using the detected k tags, which have known coordinates; k can be determined according to the detection range of the RFID system. In this paper, the RFID-based positioning system determines the position of the moving object not by using weight factors that depend on received signal strength, but by assuming that tags within the detection range always respond and carry the same weight, because the latter system is much more economical than the former. The tag geometries were chosen with large buildings such as office buildings, shopping malls, and warehouses in mind: a line in 1-dimensional space and a square grid in 2-dimensional space. In 1-dimensional space, the optimal detection range is determined to be 125% of the tag spacing distance through both an analytical and a numerical approach, where the analytical approach means a mathematical proof and the numerical approach means a simulation using MATLAB. Since the analytical approach is very difficult in 2-dimensional space, the optimal detection range there is determined numerically to be 134% of the tag spacing distance. This result can be used as a fundamental study for designing RFID-based positioning systems.
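
The equal-weight k-nn positioning rule described here reduces to averaging the known coordinates of all tags that respond within the detection range. A minimal sketch, with an assumed 1 m tag grid and the 134% detection range reported for 2-dimensional space:

```python
import math

def estimate_position(reader_xy, tags, detection_range):
    """Estimate a moving object's position as the unweighted mean of
    the known coordinates of all tags inside the detection range
    (k-nn with equal weights)."""
    detected = [t for t in tags
                if math.dist(reader_xy, t) <= detection_range]
    if not detected:
        return None
    n = len(detected)
    return (sum(x for x, _ in detected) / n,
            sum(y for _, y in detected) / n)

# Hypothetical 5x5 grid of tags spaced 1 m apart; detection range set
# to 134% of the tag spacing, the reported 2-D optimum.
tags = [(float(x), float(y)) for x in range(5) for y in range(5)]
pos = estimate_position((2.2, 2.1), tags, 1.34)
```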


The Optimized Detection Range of RFID-based Positioning System using k-Nearest Neighbor Algorithm

  • Kim, Jung-Hwan;Heo, Joon;Han, Soo-Hee;Kim, Sang-Min
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2008.10a
    • /
    • pp.270-271
    • /
    • 2008
  • The positioning technology for a moving object is an important and essential component of ubiquitous computing environments and applications, for which Radio Frequency Identification (RFID) has been considered a core technology. An RFID-based positioning system calculates the position of a moving object with a k-nearest-neighbor (k-nn) algorithm using the detected k tags, which have known coordinates; k can be determined according to the detection range of the RFID system. In this paper, the RFID-based positioning system determines the position of the moving object not by using weight factors that depend on received signal strength, but by assuming that tags within the detection range always respond and carry the same weight, because the latter system is much more economical than the former. The tag geometries were chosen with large buildings such as office buildings, shopping malls, and warehouses in mind: a line in 1-dimensional space, a square grid in 2-dimensional space, and a cubic grid in 3-dimensional space. In 1-dimensional space, the optimal detection range is determined to be 125% of the tag spacing distance through both an analytical and a numerical approach, where the analytical approach means a mathematical proof and the numerical approach means a simulation using MATLAB. Since the analytical approach is very difficult in 2- and 3-dimensional space, the optimal detection range is determined numerically to be 134% of the tag spacing distance in 2-dimensional space and 143% in 3-dimensional space. This result can be used as a fundamental study for designing RFID-based positioning systems.


Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.5
    • /
    • pp.11-20
    • /
    • 2019
  • With the development of deep learning technology, research is being actively carried out on user-friendly interfaces suitable for virtual reality or augmented reality applications. To support an interface that uses the user's hands, this paper proposes a deep-learning-based fingertip detection method that enables tracking of fingertip coordinates in order to select virtual objects or to write or draw in the air. The approximate fingertip region is first cropped from the input image using Grad-CAM, and a convolutional neural network with Atrous Convolution is then applied to the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms, since it requires no pre-processing to annotate objects. To verify the method, we implemented an air-writing application and showed that, with a recognition rate of 81% and a processing time of 76 ms, users were able to write smoothly in the air without delay, making it possible to use the application in real time.
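
The Atrous (dilated) Convolution used in this paper spaces kernel taps apart to enlarge the receptive field without adding parameters. A plain NumPy sketch of the operation itself (single channel, valid padding; a generic illustration, not the paper's network):

```python
import numpy as np

def atrous_conv2d(image, kernel, rate):
    """2-D atrous (dilated) cross-correlation: kernel taps are spaced
    `rate` pixels apart, so a k x k kernel covers an effective window
    of (k-1)*rate + 1 pixels. Valid padding, single channel."""
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective size
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks the dilated taps out of the window
            patch = image[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With `rate=1` this reduces to an ordinary valid cross-correlation, which is a quick way to sanity-check the implementation.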

Distorted perception of 3D slant caused by disjunctive-eye-movements (반향 눈 운동에 의한 3차원 경사의 왜곡된 지각)

  • 이형철;감기택;김은수;윤장한
    • Korean Journal of Cognitive Science
    • /
    • v.13 no.2
    • /
    • pp.37-45
    • /
    • 2002
  • Despite the dynamic retinal image changes caused by pursuit eye movements, we usually perceive the stable spatial properties of the environment quite successfully. Helmholtz and his followers have suggested that the visual system coordinates retinal and extraretinal eye position information to represent the spatial properties of the environment. However, there has been a significant amount of research showing that this kind of mechanism may not operate perfectly, and the pursuit eye movements employed in those studies were limited to conjugate eye movements. When an observer tracks an object moving away from the observer with his/her eyes, the two eyes rotate in opposite directions, and this kind of disjunctive eye movement may produce undesirable binocular disparities for objects in the background. The present study examined whether the visual system compensates for the undesirable binocular disparities caused by disjunctive eye movements using extraretinal eye position information. Although the target object was presented frontoparallel to the subjects, the subjects reported that the object was slanted toward (or away from) them, consistent with the undesirable binocular disparities produced by the disjunctive eye movements. These results imply that the visual system may not perfectly compensate for the undesirable binocular disparities with extraretinal eye position information.


Control of an Omni-directional Mobile Robot Based on Camera Image (카메라 영상기반 전방향 이동 로봇의 제어)

  • Kim, Bong Kyu;Ryoo, Jung Rae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.84-89
    • /
    • 2014
  • In this paper, an image-based visual servo control strategy for tracking a target object is applied to a camera-mounted omni-directional mobile robot. To obtain the target angular velocity of each wheel from the image coordinates of the target object, a mathematical image Jacobian matrix is generally built using a camera model and the mobile robot kinematics. Unlike the well-known mathematical image Jacobian, a simple rule-based control strategy is proposed that generates target angular velocities of the wheels in conjunction with the size of the target object captured in the camera image. The camera image is divided into several regions, and a pre-defined rule corresponding to the image region in which the target is located is applied to generate the target angular velocities of the wheels. The proposed algorithm is easy to implement in that no mathematical description of the image Jacobian is required and a small number of rules are sufficient for target tracking. Experimental results are presented together with a description of the overall experimental system.
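
The region-based rule strategy can be sketched as below: the image is split into left/center/right regions for steering, and the apparent target size drives forward/backward motion. Region boundaries, gains, and the two-wheel mixing are illustrative assumptions, not the authors' rules:

```python
def wheel_targets(cx, size, img_w=640, ref_size=100,
                  k_turn=0.5, k_fwd=0.3):
    """Rule-based target angular velocities from the target's image
    x-coordinate `cx` (pixels) and apparent size `size` (pixels)."""
    # Region rule: steer toward the region containing the target
    if cx < img_w / 3:            # left third of the image
        turn = -k_turn
    elif cx > 2 * img_w / 3:      # right third of the image
        turn = k_turn
    else:                         # center region: no turning
        turn = 0.0
    # Size rule: approach a small target, back away from a large one
    if size < 0.8 * ref_size:
        fwd = k_fwd
    elif size > 1.2 * ref_size:
        fwd = -k_fwd
    else:
        fwd = 0.0
    # Differential mixing (two-wheel simplification of the omni base)
    return fwd - turn, fwd + turn
```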