• Title/Summary/Keyword: 3D Hough transformation

8 search results

The Segmentation and the Extraction of Precise Plane Equation of Building Roof Plane using 3D Hough Transformation of LiDAR Data (LiDAR 데이터의 3D Hough 변환을 이용한 건물 지붕 평면의 세그멘테이션 및 정밀 평면방정식 추출)

  • Lee, Young-Jin;Oh, Jae-Hong;Shin, Sung-Woong;Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.26 no.5, pp.505-512, 2008
  • The 3D Hough transformation is one of the most powerful and popular algorithms for extracting plane parameters from LiDAR data. However, several problems arise when it is used to extract building roof planes. This paper explains these problems and presents a solution for extracting roof planes. The algorithm defines a peak plane, an exact plane, and a LESS plane for extracting accurate plane parameters from the accumulator of the 3D Hough transformation. The peak plane is the plane represented by the peak in the accumulator. The exact plane is the plane represented by the accumulator cell closest to the actual plane. The LESS plane is calculated from all LiDAR points on the exact plane by least-squares adjustment. Test results show that the proposed algorithm extracts building roof planes very accurately.
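As background for the accumulator procedure this abstract describes, here is a minimal NumPy sketch of 3D Hough plane voting followed by least-squares refinement. It is not the authors' implementation: the (theta, phi, rho) bin sizes, the inlier threshold, and the synthetic roof-like point cloud are illustrative assumptions.

```python
import numpy as np

def hough_plane_3d(points, n_theta=60, n_phi=30, rho_step=0.2):
    """Vote points into a (theta, phi, rho) accumulator.

    A candidate plane is parameterized by a unit normal
    n = (cos(theta)sin(phi), sin(theta)sin(phi), cos(phi))
    and its signed distance rho = n . p from the origin."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.cos(t) * np.sin(p),
                        np.sin(t) * np.sin(p),
                        np.cos(p)], axis=-1).reshape(-1, 3)

    rho = points @ normals.T                       # (N, n_theta * n_phi)
    rho_min = rho.min()
    n_rho = int(np.ceil((rho.max() - rho_min) / rho_step)) + 1
    rho_idx = ((rho - rho_min) / rho_step).astype(int)

    acc = np.zeros((normals.shape[0], n_rho), dtype=int)
    for j in range(normals.shape[0]):              # each point votes once per normal bin
        np.add.at(acc[j], rho_idx[:, j], 1)

    j, k = np.unravel_index(acc.argmax(), acc.shape)   # the "peak plane" cell
    return normals[j], rho_min + k * rho_step

def least_squares_plane(points):
    """Least-squares refinement over the inlier points: the best-fit normal is
    the smallest-singular-value direction of the centered point matrix."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    n = vt[-1]
    return n, float(n @ c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 10, (500, 2))
    z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 5 + rng.normal(0, 0.02, 500)
    pts = np.column_stack([xy, z])                 # noisy roof-like plane patch
    n0, d0 = hough_plane_3d(pts)                   # coarse peak plane
    inliers = pts[np.abs(pts @ n0 - d0) < 0.5]     # points near the peak plane
    n1, d1 = least_squares_plane(inliers)          # refined plane
    print("refined normal:", n1, "distance:", d1)
```

Here the peak accumulator cell plays the role of the peak plane, and the SVD fit over the points near it corresponds to the least-squares ("LESS") refinement step.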

Multiple Plane Area Detection Using Self Organizing Map (자기 조직화 지도를 이용한 다중 평면영역 검출)

  • Kim, Jeong-Hyun;Teng, Zhu;Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems, v.17 no.1, pp.22-30, 2011
  • Plane detection provides essential information for mission-critical robot tasks in 3D environments. A representative plane detection method is the Hough transformation, which is robust to noise and makes accurate plane detection possible, but it demands excessive memory and processing time. The iterative randomized Hough transformation has been proposed to overcome these shortcomings: instead of voting with all data, it votes only the single parameter value computed from a randomly selected subset of the data into the Hough parameter space, where accurate planes can then be detected by repeatedly locating the maximum. A common problem of these methods is that they still require too much computation and memory to resolve the distribution of multiple mixed planes in the parameter space. In this paper, we detect multiple planes using only data sampling with the Self Organizing Map method, without the conventional steps of transforming to the Hough parameter space, voting, and repetitive plane extraction. The reliability of plane detection is further improved through divided-area searching and planarity evaluation. Experiments under various conditions demonstrate that the proposed method is more accurate and faster than the conventional methods.
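The randomized voting idea summarized in this abstract (each randomly drawn minimal sample votes for a single parameter cell, instead of every point voting for a whole surface of cells) can be sketched as follows. The bin sizes and the synthetic scene are illustrative assumptions, and the authors' SOM-based detector is not reproduced here.

```python
import numpy as np

def plane_from_three_points(p0, p1, p2):
    """Plane (unit normal n, offset d with n . x = d) from a minimal 3-point sample."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None                                # degenerate (collinear) sample
    n = n / norm
    if n[2] < 0:                                   # fix the sign so (n, d) is unique
        n = -n
    return n, float(n @ p0)

def randomized_hough_planes(points, n_iter=2000, angle_step=np.radians(5), rho_step=0.1, seed=0):
    """Randomized Hough voting: every random 3-point sample casts exactly one
    vote for a quantized (theta, phi, rho) cell."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        sample = plane_from_three_points(*points[idx])
        if sample is None:
            continue
        n, d = sample
        theta = np.arctan2(n[1], n[0])             # azimuth of the normal
        phi = np.arccos(np.clip(n[2], -1.0, 1.0))  # polar angle of the normal
        cell = (round(theta / angle_step), round(phi / angle_step), round(d / rho_step))
        votes[cell] = votes.get(cell, 0) + 1
    best = max(votes, key=votes.get)
    return (best[0] * angle_step, best[1] * angle_step, best[2] * rho_step), votes[best]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xy = rng.uniform(-5, 5, (1000, 2))
    plane = np.column_stack([xy, 0.1 * xy[:, 0] + 0.05 * xy[:, 1] + 2.0])  # dominant plane
    clutter = rng.uniform(-5, 5, (300, 3))                                 # random outliers
    pts = np.vstack([plane, clutter])
    (theta, phi, rho), score = randomized_hough_planes(pts)
    print(f"dominant plane: theta={theta:.2f}, phi={phi:.2f}, rho={rho:.2f}, votes={score}")
```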

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference, 1995.10a, pp.1105-1109, 1995
  • A CCD camera integrated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. The adaptive Hough transformation was used to extract the laser stripe and obtain the welding-specific point. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust to noise such as spatter; for that reason, it was complemented with the adaptive Hough transformation to gain on-line processing capability for scanning the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, which reduces the image processing time. A fuzzy controller is adopted to control the camera angle.
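For reference, the basic Hough line voting that the adaptive variant builds on can be sketched as below; the accumulator resolution and the synthetic vertical laser stripe are illustrative choices, not the paper's setup.

```python
import numpy as np

def hough_lines(binary_img, n_theta=180, rho_res=1.0):
    """Standard Hough line voting on a binary image: each foreground pixel votes
    for every (rho, theta) line through it, with rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(binary_img)
    thetas = np.deg2rad(np.arange(n_theta))                 # 0..179 degrees
    diag = int(np.ceil(np.hypot(*binary_img.shape)))
    rho = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    rho_idx = np.round((rho + diag) / rho_res).astype(int)  # shift so indices start at 0

    acc = np.zeros((int(2 * diag / rho_res) + 1, n_theta), dtype=int)
    for j in range(n_theta):
        np.add.at(acc[:, j], rho_idx[:, j], 1)

    r, t = np.unravel_index(acc.argmax(), acc.shape)        # strongest line
    return r * rho_res - diag, thetas[t], acc[r, t]

if __name__ == "__main__":
    img = np.zeros((100, 100), dtype=np.uint8)
    img[20:90, 30] = 1                                      # synthetic vertical laser stripe at x = 30
    rho, theta, votes = hough_lines(img)
    print(f"rho={rho:.1f}, theta={np.degrees(theta):.1f} deg, votes={votes}")
```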


Label Restoration Using Biquadratic Transformation

  • Le, Huy Phat;Nguyen, Toan Dinh;Lee, Guee-Sang
    • International Journal of Contents, v.6 no.1, pp.6-11, 2010
  • Recently, there has been research on using portable digital cameras to recognize objects in natural scene images, including labels or marks on cylindrical surfaces. In many cases, the text or logo in a label can be distorted by a structural movement of the object on which the label resides. Since this distortion can degrade the performance of object recognition, the label should be rectified or restored from deformations. In this paper, a new method for label detection and restoration in digital images is presented. In the detection phase, the Hough transform is employed to detect the two vertical boundaries of the label, and a horizontal edge profile is analyzed to detect its upper and lower boundaries. The biquadratic transformation is then used to restore the rectangular shape of the label. The proposed algorithm performs restoration of 3D objects in a 2D space and requires neither auxiliary hardware such as a 3D camera to construct 3D models nor multiple cameras to capture objects from different views. Experimental results demonstrate the effectiveness of the proposed method.
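The biquadratic transformation itself is easy to sketch: each output coordinate is modeled with the nine basis terms {1, x, y, xy, x², y², x²y, xy², x²y²}, fitted by least squares to boundary correspondences and then used for inverse warping. The control points and images below are synthetic stand-ins, not the paper's detection output.

```python
import numpy as np

def biquadratic_basis(x, y):
    """The nine biquadratic terms {1, x, y, xy, x^2, y^2, x^2y, xy^2, x^2y^2}."""
    return np.stack([np.ones_like(x), x, y, x * y,
                     x ** 2, y ** 2, x ** 2 * y, x * y ** 2, x ** 2 * y ** 2], axis=-1)

def fit_biquadratic(src, dst):
    """Least-squares coefficients mapping src (x, y) points onto dst points.
    Needs at least nine correspondences, e.g. points sampled along the four
    detected label boundaries."""
    A = biquadratic_basis(src[:, 0], src[:, 1])             # (N, 9)
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def restore_label(image, cx, cy, out_w, out_h):
    """Inverse warp: for every pixel of the flat output label, look up the
    corresponding (curved) source location and sample it (nearest neighbour)."""
    u, v = np.meshgrid(np.arange(out_w, dtype=float), np.arange(out_h, dtype=float))
    A = biquadratic_basis(u.ravel(), v.ravel())
    sx = np.clip(np.round(A @ cx).astype(int), 0, image.shape[1] - 1)
    sy = np.clip(np.round(A @ cy).astype(int), 0, image.shape[0] - 1)
    return image[sy, sx].reshape(out_h, out_w)

if __name__ == "__main__":
    # Synthetic example: a flat 200x100 label appears in the photograph shifted
    # and with a mild sinusoidal bulge, as on a cylindrical surface.
    rng = np.random.default_rng(0)
    grid = np.stack(np.meshgrid(np.linspace(0, 199, 10), np.linspace(0, 99, 5)), -1).reshape(-1, 2)
    bulge = 20 * np.sin(np.pi * grid[:, 0] / 200)           # vertical displacement of the label
    dst = np.column_stack([grid[:, 0] + 30, grid[:, 1] + 40 + bulge])
    cx, cy = fit_biquadratic(grid, dst)                     # flat label coords -> image coords
    photo = rng.integers(0, 255, (250, 300), dtype=np.uint8)  # stand-in for the photograph
    flat = restore_label(photo, cx, cy, 200, 100)
    print(flat.shape)                                       # (100, 200)
```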

Volume measurement of limb edema using three dimensional registration method of depth images based on plane detection (깊이 영상의 평면 검출 기반 3차원 정합 기법을 이용한 상지 부종의 부피 측정 기술)

  • Lee, Wonhee;Kim, Kwang Gi;Chung, Seung Hyun
    • Journal of Korea Multimedia Society, v.17 no.7, pp.818-828, 2014
  • After the emergence of the Microsoft Kinect, interest in three-dimensional (3D) depth images increased significantly. Depth image data of an object can be converted to 3D coordinates by simple arithmetic and then reconstructed as a 3D model on a computer. However, because surface coordinates can be acquired only from the area facing the Kinect, a total solid with a closed surface cannot be reconstructed from a single sensor. In this paper, a 3D registration method for multiple Kinects is suggested, in which surface information from each Kinect is collected simultaneously and registered in real time to build a total 3D solid. To unify the relative coordinate systems used by the individual Kinects, a 3D perspective transform is adopted, and a 3D randomized Hough transform is used to detect the control points needed to generate the transformation matrices. Once the transformation matrices are generated, real-time 3D reconstruction of various objects is possible. To verify the usefulness of the suggested method, human arms were reconstructed in 3D and their volumes were measured using four Kinects. This volume measuring system was developed to monitor the level of lymphedema in patients after cancer treatment, and the measurement difference from medical CT was lower than 5%, the expected CT reconstruction error.
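The "simple arithmetic" depth-to-3D conversion mentioned in the abstract, plus a generic rigid alignment into a common frame, can be sketched as follows. The pinhole intrinsics and the pose are illustrative Kinect-like placeholders; the paper's own registration (a 3D perspective transform whose control points come from a 3D randomized Hough transform) is not reproduced here.

```python
import numpy as np

def depth_to_points(depth_mm, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """Convert a depth image (millimetres) to 3D camera coordinates with the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    The intrinsics are typical Kinect-like values, not calibrated ones."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0                # metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                               # drop invalid (zero-depth) pixels

def to_common_frame(points, R, t):
    """Bring one sensor's points into the shared coordinate system: p' = R p + t."""
    return points @ R.T + t

if __name__ == "__main__":
    depth = np.full((480, 640), 1500, dtype=np.uint16)      # fake flat wall 1.5 m away
    pts = depth_to_points(depth)
    R = np.eye(3)                                           # identity rotation for the demo
    t = np.array([0.5, 0.0, 0.0])                           # second sensor offset 0.5 m sideways
    print(pts.shape, to_common_frame(pts, R, t)[:1])
```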

Determination of a holdsite of a curved object using range data

  • Yang, Woo-Suk;Jang, Jong-Whan
    • Institute of Control, Robotics and Systems Conference Proceedings, 1992.10b, pp.399-404, 1992
  • Curved 3D objects represented by range data contain large amounts of information compared with planar objects, but do not have distinct features for matching to those of object models. This makes it difficult to represent and identify a general 3D curved object. This paper introduces a new approach to representing and finding a holdsite of general 3D curved objects using range data. We develop a three-dimensional generalized Hough transformation which can also be applied to general 3D curved object recognition and which reduces both computation time and storage requirements. Our approach makes use of the relative geometric differences between particular points on the object surface and some model points which are prespecified arbitrarily and task-dependently.


A method for building a 2D virtual environment for a remote-controlled mobile robot

  • Kim, Woo-Kyoung;Hyun, Woong-Keun;Park, Jea-Yong;Yoon, In-Mo;Jung, Y.K.
    • Institute of Control, Robotics and Systems Conference Proceedings, 2004.08a, pp.1430-1434, 2004
  • Recently, virtual reality technology has been applied in various fields of industry. In this paper, we develop basic components of a virtual robot control system interfaced with the real environment. For this, a real robot with a virtual interface module is developed, and a virtual robot resembling the real one is created by applying 3D graphic textures to it. To build an unknown environment to be linked with the virtual environment, we propose a Hough-transformation-based algorithm. The proposed system consists of a fuzzy-engine-based navigation module and a map-building module. Experiments using the developed robot illustrate the method.


Assembly performance evaluation method for prefabricated steel structures using deep learning and k-nearest neighbors

  • Hyuntae Bang;Byeongjun Yu;Haemin Jeon
    • Smart Structures and Systems, v.32 no.2, pp.111-121, 2023
  • This study proposes an automated assembly performance evaluation method for prefabricated steel structures (PSSs) using machine learning. Assembly component images were segmented using a modified version of the receptive field pyramid; by factorizing the channel modulation and receptive field exploration layers of the convolution pyramid, highly accurate segmentation results were obtained. After segmentation, the positions of the bolt holes were calculated using image processing techniques such as fuzzy-based edge detection, Hough line detection, and image perspective transformation. By calculating the distance ratios between bolt holes, the assembly performance of the PSS was estimated using the k-nearest neighbors (kNN) algorithm. The effectiveness of the proposed framework was validated using a 3D-printed PSS model and a field test. The results indicated that this approach could recognize assembly components with an intersection over union (IoU) of 95% and evaluate assembly performance with an error of less than 5%.
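The last stage of the described pipeline, judging assembly state from bolt-hole distance ratios with kNN, can be sketched as follows. The feature definition, the class labels, and the synthetic training data are illustrative assumptions rather than the paper's dataset, and the preceding segmentation, edge detection, Hough, and perspective steps are omitted.

```python
import numpy as np

def distance_ratio_features(hole_centers):
    """Ratios of consecutive bolt-hole spacings along one row of holes;
    ratios are scale-invariant, so the camera distance does not matter."""
    d = np.linalg.norm(np.diff(hole_centers, axis=0), axis=1)
    return d[1:] / d[:-1]

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbours majority vote in feature space."""
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training set: rows of 5 hole centres, labelled "ok" when the
    # spacings are nearly uniform and "misaligned" when they vary strongly.
    train_X, train_y = [], []
    for _ in range(50):
        xs = np.cumsum(np.full(5, 40.0) + rng.normal(0, 0.5, 5))
        train_X.append(distance_ratio_features(np.column_stack([xs, np.zeros(5)])))
        train_y.append("ok")
        xs = np.cumsum(np.full(5, 40.0) + rng.normal(0, 6.0, 5))
        train_X.append(distance_ratio_features(np.column_stack([xs, np.zeros(5)])))
        train_y.append("misaligned")
    train_X, train_y = np.array(train_X), np.array(train_y)

    test = np.column_stack([np.cumsum([40.1, 39.8, 40.2, 40.0, 39.9]), np.zeros(5)])
    print(knn_predict(train_X, train_y, distance_ratio_features(test)))   # likely "ok"
```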