• Title/Summary/Keyword: Hough transform

A Study on a Lossless Compression Scheme for Cloud Point Data of the Target Construction (목표 구조물에 대한 점군데이터의 무손실 압축 기법에 관한 연구)

  • Bang, Min-Suk; Yun, Kee-Bang; Kim, Ki-Doo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.5 / pp.33-41 / 2011
  • In this paper, we propose a lossless compression scheme for the point cloud data of a target construction that exploits redundancy and discards useless information in the point cloud. The Hough transform is used to find the horizontal angle between the construction and the terrestrial LIDAR, and this angle is used to rotate the point cloud so that it becomes parallel to the x-axis, which increases the redundancy along the y-axis and allows the data to be compressed further. In addition, two methods are applied to reduce the number of useless points: one decimates the point cloud, and the other extracts the range of y-coordinates of the target construction and keeps only the points that fall within that range. The experimental results show the performance of the proposed scheme. Because compression uses only the position information, without any additional information, the scheme can increase the processing speed of the compression algorithm.
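
As a rough illustration of the rotation step only, the following minimal sketch assumes the point cloud has already been projected onto the x-y plane; it rasterizes the points, finds the dominant line angle with OpenCV's standard Hough transform, and rotates the points so the facade becomes parallel to the x-axis. The image size, thresholds, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2

def align_points_to_x_axis(points_xy):
    """Rotate a 2-D point cloud so its dominant line becomes parallel to the x-axis."""
    # Rasterize the points into a binary occupancy image for the Hough transform.
    mins = points_xy.min(axis=0)
    span = np.ptp(points_xy, axis=0).max() + 1e-9
    img = np.zeros((512, 512), dtype=np.uint8)
    scaled = ((points_xy - mins) / span * 511).astype(int)
    img[scaled[:, 1], scaled[:, 0]] = 255

    # The strongest Hough line gives the facade direction.
    lines = cv2.HoughLines(img, rho=1, theta=np.pi / 180, threshold=100)
    if lines is None:
        return points_xy
    theta = lines[0][0][1]          # angle of the normal to the strongest line
    angle = theta - np.pi / 2       # angle of the line itself

    # Rotate all points by -angle so the facade aligns with the x-axis.
    c, s = np.cos(-angle), np.sin(-angle)
    rotation = np.array([[c, -s], [s, c]])
    return points_xy @ rotation.T
```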

Tillage boundary detection based on RGB imagery classification for an autonomous tractor

  • Kim, Gookhwan; Seo, Dasom; Kim, Kyoung-Chul; Hong, Youngki; Lee, Meonghun; Lee, Siyoung; Kim, Hyunjong; Ryu, Hee-Seok; Kim, Yong-Joo; Chung, Sun-Ok; Lee, Dae-Hyun
    • Korean Journal of Agricultural Science / v.47 no.2 / pp.205-217 / 2020
  • In this study, a deep learning-based tillage boundary detection method for autonomous tillage by a tractor was developed, consisting of image cropping, object classification, area segmentation, and boundary detection. Full HD (1920 × 1080) images were obtained using an RGB camera installed on the hood of a tractor and were cropped into 112 × 112 images to generate a dataset for training the classification model. The classification model was constructed based on convolutional neural networks, and the path boundary was detected using a probability map generated by integrating the softmax outputs. The results show that the F1-score of the classification was approximately 0.91, which is comparable to other deep learning-based classification tasks in the agricultural field. The path boundary was determined with edge detection and the Hough transform and was compared to the actual path boundary. The average lateral error was approximately 11.4 cm, and the average angle error was approximately 8.9°. The proposed technique can perform as well as other approaches, yet it needs only a small amount of memory to execute, unlike other deep learning-based approaches. An autonomous farm robot could therefore be developed easily with this technique using a simple hardware configuration.
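
A minimal sketch of the final boundary-extraction step, assuming a per-pixel probability map is already available; the 0.5 cut-off, Canny thresholds, and Hough parameters below are assumptions, not values from the paper.

```python
import numpy as np
import cv2

def detect_path_boundary(prob_map):
    """Fit one boundary line to a [0, 1] probability map using Canny + Hough."""
    # Binarize the probability map (0.5 is an assumed cut-off).
    mask = (prob_map > 0.5).astype(np.uint8) * 255

    # Edge detection on the binary mask, then a probabilistic Hough transform.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=100, maxLineGap=20)
    if lines is None:
        return None

    # Keep the longest segment as the tillage boundary.
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # boundary angle for the error check
    return (x1, y1, x2, y2), angle
```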

Drone-based Power-line Tracking System (드론 기반의 전력선 추적 제어 시스템)

  • Jeong, Jongmin; Kim, Jaeseung; Yoon, Tae Sung; Park, Jin Bae
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.6 / pp.773-781 / 2018
  • In recent years, studies on power-line inspection using an unmanned aerial vehicle (UAV) have been actively conducted. However, existing studies perform power-line inspection with a manually controlled UAV and have developed only power-line detection algorithms for aerial images. To overcome the limitations of existing research, we propose a drone-based power-line tracking system in this paper. The main contributions of this paper are to operate the developed system in a configured environment and to develop a real-time power-line detection algorithm. The developed system is composed of power-line detection and image-based tracking control. To detect a power-line in real time, a region of interest (ROI) image is extracted, and a clustering algorithm is used to discriminate the power-line from the background. Finally, the power-line is detected using the Hough transform, and its center position and tilt angle are estimated using a Kalman filter so that the drone can be controlled smoothly. For image-based tracking control, we design a position controller and an attitude controller, both based on the proportional-derivative (PD) control method; the interaction between the two controllers makes the drone track the power-line. Several experiments were carried out in environments similar to actual conditions, which demonstrates the superiority of the developed system.
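
The detect-then-smooth idea can be sketched as follows. Note that the paper's ROI choice and clustering step are replaced here by a fixed upper-half ROI and plain Canny edges, and all thresholds and noise covariances are assumptions.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter over [center_x, tilt_angle] and their rates.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_power_line(frame):
    """Detect the power-line in an ROI and return a smoothed (center_x, tilt_angle)."""
    h, w = frame.shape[:2]
    roi = frame[: h // 2, :]                       # assumed ROI: upper half of the frame
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=w // 4, maxLineGap=30)
    kf.predict()
    if lines is None:
        return None                                # keep the prediction when detection fails
    x1, y1, x2, y2 = lines[0][0]
    center_x = (x1 + x2) / 2.0
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    state = kf.correct(np.array([[center_x], [angle]], np.float32))
    return float(state[0, 0]), float(state[1, 0])
```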

Lane Detection based Open-Source Hardware according to Change Lane Conditions (오픈소스 하드웨어 기반 차선검출 기술에 대한 연구)

  • Kim, Jae Sang; Moon, Hae Min; Pan, Sung Bum
    • Smart Media Journal / v.6 no.3 / pp.15-20 / 2017
  • Recently, the automotive industry has been studying driver assistance systems that help drivers drive their cars easily by integrating them with IT technology. This study suggests a lane detection method that is robust to changes in road conditions and applicable to lane departure warning and autonomous driving. The proposed method detects candidate areas using a Gaussian filter, the Otsu threshold value, and edge detection, and then uses lane gradient and width information through the Hough transform to detect lanes. Using previously detected lane information, the method detects dashed lines as well as solid lines and calculates where the lanes will be located in the next frame to draw virtual lanes. The proposed algorithm was shown to detect lanes in both dashed- and solid-line situations and to achieve real-time processing when applied to the Raspberry Pi 2 open-source hardware.
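
A minimal sketch of the candidate-detection pipeline described above (Gaussian filter, Otsu threshold, edges, Hough transform, slope filtering); the kernel size, Hough parameters, and slope cut-off are illustrative assumptions.

```python
import numpy as np
import cv2

def detect_lanes(frame):
    """Gaussian blur -> Otsu threshold -> Canny edges -> Hough line segments."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Otsu picks the threshold automatically; its value also bounds the Canny range.
    otsu_thr, _binary = cv2.threshold(blurred, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(blurred, 0.5 * otsu_thr, otsu_thr)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    if lines is None:
        return []

    # Keep segments whose gradient is plausible for a lane (reject near-horizontal lines).
    lanes = []
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-9)
        if abs(slope) > 0.3:
            lanes.append((x1, y1, x2, y2))
    return lanes
```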

Information extraction of the moving objects based on edge detection and optical flow (Edge 검출과 Optical flow 기반 이동물체의 정보 추출)

  • Chang, Min-Hyuk; Park, Jong-An
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.8A / pp.822-828 / 2002
  • Optical flow estimation based on multi-constraint approaches is frequently used for the recognition of moving objects. However, its use has been limited because of the estimation time as well as error problems. This paper presents a new method for effectively extracting movement information using multi-constraint-based approaches with Sobel edge detection. The moving objects are extracted from the input image sequence using edge detection and segmentation: edge detection and the difference between two consecutive input images yield the moving objects, and a thresholding step removes objects detected due to noise. After thresholding the real moving objects, we apply the Combinatorial Hough Transform (CHT) and voting accumulation to find the optimal constraint lines for optical flow estimation. Restricting the CHT to the moving objects found in the two consecutive images by edge detection and segmentation greatly reduces its computation time. The voting-based CHT avoids the errors associated with least-squares methods, and the calculation of a large number of points along each constraint line is avoided by working in the transformed slope-intercept parameter domain. The simulation results show that the proposed method is very effective for extracting optical flow vectors and hence for recognizing moving objects in the images.
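
The voting idea can be illustrated with a simplified Hough-style accumulator over the brightness-constancy constraint I_x·u + I_y·v + I_t = 0. This is not the paper's Combinatorial Hough Transform, only a sketch of parameter-space voting over moving-object pixels; the velocity range and step size are assumptions.

```python
import numpy as np
import cv2

def hough_flow(prev_gray, curr_gray, mask, v_range=10, step=0.5):
    """Vote every brightness-constancy constraint line I_x*u + I_y*v + I_t = 0
    into a quantized (u, v) accumulator and return the peak as the dominant flow."""
    Ix = cv2.Sobel(prev_gray, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(prev_gray, cv2.CV_32F, 0, 1, ksize=3)
    It = curr_gray.astype(np.float32) - prev_gray.astype(np.float32)

    us = np.arange(-v_range, v_range + step, step)
    acc = np.zeros((len(us), len(us)), dtype=np.int32)

    ys, xs = np.nonzero(mask)                    # only pixels of the moving object vote
    for y, x in zip(ys, xs):
        ix, iy, it = Ix[y, x], Iy[y, x], It[y, x]
        if abs(iy) < 1e-3:
            continue
        # For each candidate u, the constraint line gives v = -(ix*u + it) / iy.
        v = -(ix * us + it) / iy
        j = np.round((v + v_range) / step).astype(int)
        ok = (j >= 0) & (j < len(us))
        acc[j[ok], np.arange(len(us))[ok]] += 1

    vi, ui = np.unravel_index(np.argmax(acc), acc.shape)
    return us[ui], us[vi]                        # (u, v) with the most votes
```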

Automatic Container Placard Recognition System (컨테이너 플래카드 자동 인식 시스템)

  • Heo, Gyeongyong; Lee, Imgeun
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.6 / pp.659-665 / 2019
  • Various placards are attached to the surface of a container depending on the risk of the loaded cargo, and containers with dangerous goods should be managed separately from ordinary containers. Therefore, as part of a port automation system, there is a demand for automatic recognition of placards. In this paper, we propose a system that automatically extracts the placard area based on the shape features of the placard and recognizes its contents. Various distortions can be caused by the surface curvature of the container; therefore, attention should be paid to the area extraction and recognition process. The proposed system automatically extracts the region of interest and recognizes the placard by exploiting the fact that the placard is diamond shaped and the class number is written just above the lower vertex. When applied to real images, the proposed system recognizes the placard without error, and the techniques used can be applied to various image analysis systems.
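
A hedged sketch of the shape-based extraction: quadrilateral contours stand in for the diamond check, and the crop just above the lower vertex is where the class number would be read. The contour-area and crop-size values are assumptions, and a full system would also verify the diamond orientation and aspect ratio.

```python
import numpy as np
import cv2

def find_placard_number_regions(image):
    """Locate diamond-like placards and return the crop just above each lower vertex."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    crops = []
    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 1000:
            continue                       # placards are quadrilaterals of non-trivial size
        pts = approx.reshape(-1, 2)
        lower = pts[pts[:, 1].argmax()]    # vertex with the largest y = lower vertex
        # The class number sits just above the lower vertex (crop size is an assumption).
        x, y = int(lower[0]), int(lower[1])
        crop = image[max(y - 60, 0):y, max(x - 40, 0):x + 40]
        if crop.size:
            crops.append(crop)
    return crops
```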

A Study on the Autonomous Driving Algorithm Using Bluetooth and Rasberry Pi (블루투스 무선통신과 라즈베리파이를 이용한 자율주행 알고리즘에 대한 연구)

  • Kim, Ye-Ji; Kim, Hyeon-Woong; Nam, Hye-Won; Lee, Nyeon-Yong; Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.689-698 / 2021
  • In this paper, lane recognition, steering control, and speed control algorithms were developed using Bluetooth wireless communication and image processing techniques. Instead of recognizing road traffic signals with image processing alone, a methodology was developed that recognizes the permissible road speed by receiving speed codes from electronic traffic signals over Bluetooth wireless communication. In addition, a PWM-based steering control algorithm that tracks the lanes using the Canny algorithm and the Hough transform was developed. A vehicle prototype and a driving test track were built to prove the accuracy of the developed algorithm, with a Raspberry Pi and an Arduino applied as the main control devices for steering control and speed control, respectively, and Python and OpenCV used as the implementation languages. The effectiveness of the proposed methodology was confirmed in lane tracking and driving control evaluation experiments using the vehicle prototype and the test track.
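
A minimal sketch of the two ingredients, assuming the Bluetooth link is exposed as a serial port (the /dev/rfcomm0 name, baud rate, duty-cycle values, and gain are assumptions): one helper reads a speed code, the other maps a Canny + Hough lane heading to a PWM duty cycle.

```python
import serial          # pyserial; Bluetooth speed codes arrive over an RFCOMM serial link
import cv2
import numpy as np

def read_speed_code(port="/dev/rfcomm0"):
    """Read one permissible-speed code sent by the electronic traffic signal."""
    with serial.Serial(port, 9600, timeout=1) as bt:
        line = bt.readline().decode(errors="ignore").strip()
    return int(line) if line.isdigit() else None

def steering_duty_cycle(frame, center_duty=7.5, gain=0.05):
    """Estimate a lane heading with Canny + Hough and map it to a servo PWM duty cycle."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=40, maxLineGap=80)
    if lines is None:
        return center_duty                         # no lane found: hold the wheel straight
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in lines[:, 0]]
    heading_error = float(np.mean(angles)) - 90.0  # 90 deg would be a perfectly vertical lane
    return center_duty + gain * heading_error      # proportional steering via PWM duty
```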

The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning (2개의 비전 센서 및 딥 러닝을 이용한 도로 속도 표지판 인식, 자동차 조향 및 속도제어 방법론)

  • Kim, In-Sung; Seo, Jin-Woo; Ha, Dae-Wan; Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.699-708 / 2021
  • In this paper, steering control and speed control algorithms for autonomous driving based on two vision sensors and road speed sign boards are presented. A speed control algorithm was developed that recognizes the speed sign in the image provided by vision sensor B using TensorFlow, a deep learning framework provided by Google, and then makes the car follow the recognized speed. At the same time, a steering angle control algorithm was developed that detects lanes by analyzing the road images transmitted from vision sensor A in real time, calculates the steering angle, and controls the front axle through PWM so that the vehicle tracks the lane. To verify the effectiveness of the proposed steering and speed control algorithms, a prototype car based on the Python language, a Raspberry Pi, and OpenCV was built. In addition, the accuracy was confirmed by verifying various scenarios related to steering and speed control on the produced test track.
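
A sketch of the sign-recognition half only, assuming a trained Keras classifier saved as speed_sign_classifier.h5 and the label list below (both hypothetical placeholders, not artifacts from the paper); the recognized limit would be handed to the speed controller, while the steering half follows the same Canny + Hough pattern sketched for the previous entry.

```python
import numpy as np
import cv2
import tensorflow as tf

# Hypothetical label set and model file; the paper's actual model is not public.
SPEED_LABELS = [30, 50, 60, 80]
model = tf.keras.models.load_model("speed_sign_classifier.h5")

def recognize_speed(sign_frame):
    """Classify the speed sign image from vision sensor B and return the speed limit."""
    img = cv2.resize(sign_frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis], verbose=0)[0]
    return SPEED_LABELS[int(np.argmax(probs))]

# Example: limit = recognize_speed(frame_from_sensor_b); the car's target speed is
# then clamped to this limit by the speed controller running on the Raspberry Pi.
```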

Resolution Estimation Technique in Gaze Tracking System for HCI (HCI를 위한 시선추적 시스템에서 분해능의 추정기법)

  • Kim, Ki-Bong; Choi, Hyun-Ho
    • Journal of Convergence for Information Technology / v.11 no.1 / pp.20-27 / 2021
  • Eye tracking is one of the NUI technologies; it finds out where the user is gazing. This technology allows users to input text or control a GUI, and the user's gaze can further be analyzed for application to commercial advertisements. In an eye tracking system, the allowable range varies depending on the quality of the image and the user's freedom of movement, so a method of estimating the accuracy of eye tracking in advance is needed. The accuracy of eye tracking is greatly affected by how the eye tracking algorithm is implemented, in addition to hardware variables. Accordingly, in this paper, we propose a method to estimate how many degrees the gaze changes when the pupil center moves by one pixel, by estimating the maximum possible movement distance of the pupil center in the image.
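
In its simplest form the estimate reduces to dividing the usable gaze range by the maximum pupil-center travel observed in the image; a minimal sketch with assumed example numbers (the 50-degree range and 120-pixel travel are placeholders, not measurements from the paper):

```python
def gaze_resolution_deg_per_px(gaze_range_deg, max_pupil_travel_px):
    """Degrees of gaze change per one-pixel shift of the pupil center."""
    return gaze_range_deg / max_pupil_travel_px

# Example with placeholder numbers: a 50-degree usable gaze range mapped onto a
# 120-pixel maximum pupil-center displacement gives roughly 0.42 degrees per pixel.
print(gaze_resolution_deg_per_px(50.0, 120.0))
```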

A Study on Ball Tracking Algorithm to Analyze Amateur Futsal Data (아마추어 풋살 데이터 분석을 위한 공 추적 알고리즘 연구)

  • Jung, Soogyung; Kwon, Hangil; Lee, Gilhyeong; Jung, Halim; Ko, Dongbeom; Jeon, Gwangil; Park, Jeongmin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.189-198 / 2021
  • This paper introduces a ball tracking system that uses image processing. The recent growth of the amateur futsal market has raised demand for analysis of amateur players' performance. Sports game analysis services that give feedback to athletes or teams and help them improve are provided in various ways across many sports. However, the cost and spatial constraints of such services make it difficult to provide analysis to amateur athletes. In this paper, we study and develop a ball tracking algorithm for analyzing futsal games based on the match filming service already provided in the amateur futsal field, which allows matches to be analyzed on top of existing services.
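
A minimal per-frame sketch using OpenCV's circle Hough transform as the ball detector (the radius bounds and accumulator parameters are assumptions; a production tracker would add colour cues and temporal filtering):

```python
import cv2

def track_ball(video_path):
    """Detect a candidate futsal ball per frame with HoughCircles and return its path."""
    cap = cv2.VideoCapture(video_path)
    path = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=120, param2=30, minRadius=3, maxRadius=30)
        if circles is not None:
            x, y, r = circles[0][0]        # strongest circle = assumed ball position
            path.append((float(x), float(y)))
        else:
            path.append(None)              # ball occluded or not detected in this frame
    cap.release()
    return path
```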