• Title/Summary/Keyword: 비전 센서


The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning (2개의 비전 센서 및 딥 러닝을 이용한 도로 속도 표지판 인식, 자동차 조향 및 속도제어 방법론)

  • Kim, In-Sung;Seo, Jin-Woo;Ha, Dae-Wan;Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.4 / pp.699-708 / 2021
  • In this paper, steering control and speed control algorithms are presented for autonomous driving based on two vision sensors and road speed sign boards. A speed control algorithm was developed that recognizes the speed sign in the images provided by vision sensor B using TensorFlow, the deep learning framework provided by Google, and then makes the car follow the recognized speed. At the same time, a steering angle control algorithm was developed that detects lanes by analyzing road images transmitted from vision sensor A in real time, calculates the steering angle, and controls the front axle through PWM so that the vehicle tracks the lane. To verify the effectiveness of the proposed steering and speed control algorithms, a car prototype was built based on the Python language, a Raspberry Pi, and OpenCV. In addition, accuracy was confirmed by testing various steering and speed control scenarios on a purpose-built test track.
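
The steering step described above can be sketched roughly as follows. All function names, the pure-pursuit-style geometry, and the servo duty-cycle constants are illustrative assumptions, not taken from the paper:

```python
import math

def steering_angle_deg(lane_center_x, frame_width, lookahead_px):
    """Steering angle (degrees) from the lateral offset between the lane
    center detected in the image and the frame center; a pure-pursuit-style
    simplification with a fixed look-ahead distance in pixels."""
    offset_px = lane_center_x - frame_width / 2.0
    return math.degrees(math.atan2(offset_px, lookahead_px))

def angle_to_pwm_duty(angle_deg, center_duty=7.5, gain=0.05, max_delta=2.5):
    """Map a steering angle to a servo PWM duty cycle in percent.
    7.5 % is a common neutral for 50 Hz hobby servos; the gain and the
    saturation limit are illustrative tuning constants."""
    delta = max(-max_delta, min(max_delta, gain * angle_deg))
    return center_duty + delta
```

A lane perfectly centered in a 640-pixel-wide frame yields a zero steering angle and the neutral duty cycle; offsets to the right steer right, saturated at ±2.5 % around neutral.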

A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.5 / pp.422-428 / 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost-effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In many studies of unmanned systems, distance is measured using laser range finders or stereo vision sensors. Although a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating distance with a stereo vision sensor is straightforward, but the sensor is large and heavy, which is unsuitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method to measure distance with the suggested system is explained, and map-building experiments are conducted with these distance measurements. The experimental results are compared to the actual data, and the reliability of the suggested system is verified.
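
The triangulation behind such a laser-pointer/monocular setup can be sketched as below, assuming the laser is mounted parallel to the optical axis at a known baseline; the function name and mounting geometry are illustrative assumptions, not the paper's design:

```python
import math

def distance_from_laser_dot(px_from_center, focal_px, baseline_m):
    """Triangulate range from the image position of the laser dot.
    With the laser parallel to the optical axis at baseline_m from it,
    the dot's bearing is atan(px_from_center / focal_px) and the range
    along the laser beam is baseline / tan(bearing)."""
    bearing = math.atan2(px_from_center, focal_px)
    return baseline_m / math.tan(bearing)
```

With a 500-pixel focal length and a 10 cm baseline, a dot 50 pixels from the image center corresponds to a 1 m range; as the target recedes, the dot moves toward the image center.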

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering / v.8 no.2 / pp.89-96 / 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and camera sensors have the disadvantage of requiring special calibration objects or objects that are too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates using four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data is acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method yields about a 2-pixel reprojection error, and its performance is analyzed by comparison with existing methods.
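
The sphere center from four range points admits a closed form: subtracting the sphere equations |p - c|² = r² pairwise eliminates r² and leaves a 3×3 linear system in the center c. A minimal sketch (names and the Cramer's-rule solver are our own, not the paper's code):

```python
def sphere_center(p1, p2, p3, p4):
    """Center of the sphere through four non-coplanar 3-D points.
    Subtracting |p_i - c|^2 = r^2 from |p1 - c|^2 = r^2 gives the
    linear system A c = b with rows 2*(p_i - p1), b_i = |p_i|^2 - |p1|^2."""
    def sq(p):
        return sum(x * x for x in p)
    A = [[2 * (p[k] - p1[k]) for k in range(3)] for p in (p2, p3, p4)]
    b = [sq(p) - sq(p1) for p in (p2, p3, p4)]

    def det(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det(A)
    center = []
    for k in range(3):  # Cramer's rule: replace column k with b
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        center.append(det(Ak) / d)
    return tuple(center)
```

Four points on the unit sphere centered at (1, 2, 3) recover that center exactly; in the paper's setting the four points would be RANSAC-selected samples from the lidar returns on the sphere.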

Estimation of Precise Relative Position using INS/Vision Sensor Integrated System (INS/비전 센서 통합 시스템을 이용한 정밀 상대 위치 추정)

  • Chun, Se-Bum;Won, Dae-Hee;Kang, Tae-Sam;Sung, Sang-Kyung;Lee, Eun-Sung;Cho, Jin-Soo;Lee, Young-Jae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.36 no.9 / pp.891-897 / 2008
  • GPS can provide precise relative navigation information, but it requires a reference station in close range and is affected by the satellite observation environment. In this paper, we propose an INS/vision sensor integrated system with a known landmark geometry, intended to overcome the problems of a GPS-only system. With the proposed method, relative navigation is available without a GPS reference station; the only requirement is a landmark image drawn on the ground. We conducted a simple simulation to check the performance of this method and confirmed that it improves the relative navigation information.

Map Building to Plan the Path for Biped Robot in Unknown Environments Using Vision and Ultrasonic Sensors (비전과 초음파 센서를 이용한 임의 환경에서 2족 로봇의 경로계획을 위한 맵 빌딩)

  • 차재환;김동일;기창두
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1475-1478 / 2004
  • This paper describes map building for path planning to avoid obstacles using a vision sensor and an ultrasonic sensor. We obtain 2-dimensional information from the processed images of a CCD sensor and 1-dimensional range information from the ultrasonic sensor. We propose a way to generate a map that combines these two kinds of information in the program. We also built a biped robot with 20 DOF equipped with these sensors and obtained good experimental results that prove the validity of the proposed method.

Autonomous Robot Kinematic Calibration using a Laser-Vision Sensor (레이저-비전 센서를 이용한 Autonomous Robot Kinematic Calibration)

  • Jeong, Jeong-Woo;Kang, Hee-Jun
    • Journal of the Korean Society for Precision Engineering / v.16 no.2 s.95 / pp.176-182 / 1999
  • This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a laser line projected on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace and let the sensor, mounted on the robot, measure the intersection point of the string and the projected laser line. The point data collected over varying robot configurations are constrained to lie on a single straight line, so the closed-loop calibration method can be applied. The resulting calibration method is simple, accurate, and suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to show its effectiveness.
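
The closed-loop constraint above, that all measured intersection points must lie on one straight line, can be turned into a scalar calibration residual: the RMS distance of the measured points to their best-fit 3-D line, which would vanish for perfectly calibrated kinematics. A hypothetical sketch of that residual (the direction is found by power iteration on the scatter matrix; this is our own illustration, not the paper's implementation):

```python
import math

def line_fit_rms(points):
    """RMS distance of 3-D points to their best-fit line through the
    centroid. The line direction is the dominant eigenvector of the
    3x3 scatter matrix, found here by simple power iteration."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]        # centroid
    q = [[p[k] - c[k] for k in range(3)] for p in points]        # centered
    S = [[sum(v[i] * v[j] for v in q) for j in range(3)] for i in range(3)]
    d = [1.0, 1.0, 1.0]
    for _ in range(100):                  # power iteration -> dominant axis
        d = [sum(S[i][j] * d[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [x / norm for x in d]
    err2 = 0.0
    for v in q:
        t = sum(v[k] * d[k] for k in range(3))   # projection onto the line
        err2 += sum((v[k] - t * d[k]) ** 2 for k in range(3))
    return math.sqrt(err2 / n)
```

Collinear measurements give a residual of (numerically) zero; kinematic parameter errors would bend the reconstructed point set away from a line and raise this residual, which the closed-loop calibration then minimizes.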

Modeling and Control of Welding Processes Using a Vision Sensor (비전센서를 이용한 용접시스템의 모델링 및 제어)

  • 엄기원;이세헌;김동철
    • Journal of Welding and Joining / v.14 no.4 / pp.7-15 / 1996
  • Automating and unmanning the arc welding process is important because it not only contributes to improving weld quality and productivity but also addresses the decline in skilled welders. To this end, the introduction of robots into arc welding has been increasing rapidly. However, the robots currently deployed in the field mainly perform their functions off-line, so they do not fully realize their potential for improving productivity and weld quality. To overcome this drawback, sensors must be introduced and the welding system must be configured as a feedback loop; seam tracking, weld pool geometry control, and arc length control are examples. This paper describes the weld pool geometry control system, which has not been treated in depth in Korea, and the vision sensor that is widely used to implement it.

CNN-based People Recognition for Vision Occupancy Sensors (비전 점유센서를 위한 합성곱 신경망 기반 사람 인식)

  • Lee, Seung Soo;Choi, Changyeol;Kim, Manbae
    • Journal of Broadcast Engineering / v.23 no.2 / pp.274-282 / 2018
  • Most occupancy sensors installed in buildings, households, and so forth are pyroelectric infrared (PIR) sensors. One disadvantage is that a PIR sensor cannot detect a stationary person, because it works by detecting variations in thermal radiation. To overcome this problem, the use of camera vision sensors has gained interest, where object tracking is used to detect stationary persons. However, object tracking has an inherent problem, tracking drift, so recognizing humans in static trackers is an important task. In this paper, we propose CNN-based human recognition to determine whether a static tracker contains a human. Experimental results validated that humans and non-humans are classified with an accuracy of about 88% and that the proposed method can be incorporated into practical vision occupancy sensors.

Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor (k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술)

  • Bang, Hyuntae;Yu, Byeongjun;Jeon, Haemin
    • Journal of the Computational Structural Engineering Institute of Korea / v.35 no.5 / pp.259-266 / 2022
  • In this study, we developed a deep learning and vision sensor-based assembly performance evaluation method for prefabricated steel structures. The assembly parts were segmented using a modified version of the receptive field block convolution module, inspired by the eccentric function of the human visual system. The quality of the assembly was evaluated by detecting the bolt holes in the segmented assembly part and calculating the bolt hole positions. To validate the performance of the evaluation, models of normal and defective assembly parts were produced using a 3D printer. The assembly part segmentation network was trained on 3D-model images captured with a vision sensor. The bolt hole positions in the segmented assembly image were calculated using image processing techniques, and the assembly performance evaluation using the k-nearest neighbor algorithm was verified. The experimental results show that the assembly parts were segmented with high precision and that the assembly performance, evaluated from the positions of the bolt holes in the detected assembly part, was classified with an error of less than 5%.
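
The final classification step, k-nearest neighbor over bolt-hole position features, can be sketched as below; the feature encoding (a flat vector of measured bolt-hole coordinates) and the labels are assumptions for illustration, not the paper's exact design:

```python
import math

def knn_classify(query, samples, k=3):
    """Plain k-nearest-neighbor majority vote. `samples` is a list of
    (feature_vector, label) pairs; here a feature vector would hold the
    measured bolt-hole coordinates of one assembly part. The query is
    assigned the majority label among its k closest samples."""
    nearest = sorted(samples, key=lambda s: math.dist(query, s[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

A measurement whose bolt-hole positions sit near the "normal" reference cluster is labeled normal; one displaced toward the defective cluster is labeled defective, matching the sub-5% classification error reported above only under the paper's own features and data.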

A Study on Sensor Calibration of a PSD-based Motion Capture System (PSD센서를 이용한 모션캡쳐 시스템의 센서보정에 관한 연구)

  • 최훈일;조용준;유영기
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.05a / pp.175-175 / 2004
  • Motion capture systems, which are used in various industries such as animation and computer games, currently rely on expensive high-speed cameras, making them difficult for general users to adopt. In this study, instead of expensive high-speed cameras, we configured an optical motion capture system using low-cost PSD sensors. In addition, to ensure the accuracy of the 3-D data acquired by the system, we applied the camera calibration algorithm commonly used for CCD cameras to the PSD motion capture system and proposed a method that achieves easy calibration with small error. (Abridged)
