• Title/Abstract/Keyword: Vision sensor

Search results: 821 items

A Study of Inspection of Weld Bead Defects using Laser Vision Sensor

  • 이정익;이세헌
    • Journal of Welding and Joining, Vol. 17, No. 2, pp.53-60, 1999
  • Conventionally, a CCD camera and a vision sensor using a projected pattern of light are used to inspect weld bead defects. With this method, however, considerable time is spent on image preprocessing, stripe extraction, thinning, and similar steps. In this study, a laser vision sensor using a scanning beam of light is used to shorten the image preprocessing time. Software is developed that decides in real time whether the weld bead shape is acceptable. The criteria are based on the classification of imperfections in metallic fusion welds (ISO 6520) and the limits for imperfections (ISO 5817).

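The decision logic described above checks measured bead features against ISO 5817-style limits. Below is a minimal sketch of such a pass/fail check; the feature set and the numeric limits are illustrative placeholders, not the paper's actual criteria or the ISO values.

```python
# Minimal sketch of a pass/fail check on weld bead profile features.
# The limit values below are placeholders, NOT the actual ISO 5817 limits.
from dataclasses import dataclass

@dataclass
class BeadFeatures:
    width_mm: float          # bead width measured from the laser profile
    reinforcement_mm: float  # excess weld metal height
    undercut_mm: float       # maximum undercut depth

# Hypothetical acceptance limits; real values come from the ISO 5817 quality levels.
LIMITS = {
    "width_min": 4.0,
    "width_max": 10.0,
    "reinforcement_max": 3.0,
    "undercut_max": 0.5,
}

def inspect_bead(f: BeadFeatures, limits=LIMITS) -> list[str]:
    """Return a list of defect labels; an empty list means the bead is acceptable."""
    defects = []
    if not (limits["width_min"] <= f.width_mm <= limits["width_max"]):
        defects.append("bead width out of range")
    if f.reinforcement_mm > limits["reinforcement_max"]:
        defects.append("excess weld metal")
    if f.undercut_mm > limits["undercut_max"]:
        defects.append("undercut")
    return defects

print(inspect_bead(BeadFeatures(width_mm=6.2, reinforcement_mm=3.4, undercut_mm=0.2)))
# -> ['excess weld metal']
```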

A Study of Sensor Fusion using Radar Sensor and Vision Sensor in Moving Object Detection

  • 김세진;변기훈;원인수;권장우
    • 한국ITS학회 논문지, Vol. 16, No. 2, pp.140-152, 2017
  • This paper presents a study of moving object detection based on multi-sensor fusion using a radar sensor and a vision sensor. When a radar sensor detects many objects while the sensor itself is moving, it can mistake surrounding objects such as buildings or roadside trees for vehicles. A vision sensor is inexpensive and the most widely used type, but it is vulnerable to external conditions such as lighting, vibration, weather, and illumination. To compensate for the weaknesses of each sensor, moving object detection through sensor fusion is proposed, and it shows a very high detection rate in the experimental environment. In fusing the sensors, the problems of coordinate unification and real-time data transmission were resolved, and the reliability of the raw data was improved by filtering each sensor's output. In particular, a Gaussian Mixture Model (GMM) was applied to the image data to overcome the shortcomings of the radar sensor.
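The abstract above applies a Gaussian Mixture Model to the image stream to compensate for the radar's weaknesses. A minimal sketch of GMM-based moving object detection with OpenCV's MOG2 background subtractor follows; the video path, thresholds, and minimum blob area are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of GMM-based moving object detection with OpenCV,
# assuming a hypothetical input video "traffic.mp4"; parameters are illustrative.
import cv2

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                   detectShadows=True)
cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_subtractor.apply(frame)                              # per-pixel GMM update
    fg_mask = cv2.medianBlur(fg_mask, 5)                              # suppress speckle noise
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moving = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 300]
    # 'moving' bounding boxes could then be associated with radar detections.

cap.release()
```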

Development of the Driving Path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion

  • 이동우;이경수;이재완
    • 자동차안전학회지, Vol. 3, No. 2, pp.28-33, 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. From collected data, the characteristics of the road curvature obtained from filtered yaw rate and of the road curvature reported by the vision sensor are analyzed. The two curvature estimates are fused into a single curvature using a weighting factor that reflects the characteristics of each curvature measurement. The proposed driving path estimation algorithm has been investigated via simulation using the vehicle dynamics package CarSim and Matlab/Simulink. The simulations show that the proposed algorithm improves the primary target detection rate.
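As a rough illustration of the curvature fusion described above, the sketch below blends a yaw-rate-based curvature (kappa = yaw rate / speed) with a vision-based curvature using a single weighting factor; the confidence-based weighting rule is an assumption, not the paper's tuned design.

```python
# Minimal sketch of fusing yaw-rate-based and vision-based road curvature
# with a weighting factor; the weighting rule is an illustrative assumption.

def yaw_rate_curvature(yaw_rate_radps: float, speed_mps: float) -> float:
    """Approximate road curvature kappa = yaw_rate / v (valid for v > 0)."""
    return yaw_rate_radps / max(speed_mps, 0.1)

def fuse_curvature(kappa_yaw: float, kappa_vision: float,
                   vision_confidence: float) -> float:
    """Blend the two curvature estimates.

    vision_confidence in [0, 1]: high when lane markings are tracked well,
    so the vision curvature dominates; otherwise fall back to the yaw rate.
    """
    w_vision = min(max(vision_confidence, 0.0), 1.0)
    return (1.0 - w_vision) * kappa_yaw + w_vision * kappa_vision

kappa = fuse_curvature(yaw_rate_curvature(0.05, 20.0), 0.0021, vision_confidence=0.8)
print(f"fused curvature: {kappa:.4f} 1/m")
```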

Map Building to Plan the Path for Biped Robot in Unknown Environments Using Vision and Ultrasonic Sensors

  • 차재환;김동일;기창두
    • 한국정밀공학회 2004년도 추계학술대회 논문집, pp.1475-1478, 2004
  • This paper describes map building for path planning to avoid obstacles using a vision sensor and an ultrasonic sensor. Two-dimensional information is obtained from the processed images of a CCD sensor, and one-dimensional range information from an ultrasonic sensor. We propose a way to generate a map that combines these two kinds of information. A biped robot with 20 DOF was built with these sensors, and good experimental results confirm the validity of the proposed method.

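A minimal sketch of the kind of map building described above: a 2-D occupancy grid that stores both an obstacle position estimated from the camera image and the range returned by the ultrasonic sensor. Grid size, resolution, and the update rule are illustrative assumptions, not the paper's map representation.

```python
# Minimal sketch of a 2-D occupancy grid combining a vision-based obstacle
# position (x, y) with an ultrasonic range reading along the robot's heading.
import math
import numpy as np

RES = 0.05                      # 5 cm per cell (assumed resolution)
grid = np.zeros((100, 100))     # 5 m x 5 m map, 0 = free, 1 = occupied

def to_cell(x_m: float, y_m: float):
    return int(y_m / RES), int(x_m / RES)

def mark_vision_obstacle(x_m: float, y_m: float):
    """Obstacle position estimated from the CCD image (2-D information)."""
    r, c = to_cell(x_m, y_m)
    if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]:
        grid[r, c] = 1.0

def mark_ultrasonic_hit(robot_x: float, robot_y: float,
                        heading_rad: float, range_m: float):
    """Range-only (1-D) reading: occupy the cell at the measured distance."""
    x = robot_x + range_m * math.cos(heading_rad)
    y = robot_y + range_m * math.sin(heading_rad)
    mark_vision_obstacle(x, y)

mark_vision_obstacle(1.2, 0.8)
mark_ultrasonic_hit(0.0, 0.0, math.radians(30), 1.5)
print(int(grid.sum()), "occupied cells")
```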

The Tip Position Measurement of a Flexible Robot Arm Using a Vision Sensor

  • 신효필;이종광;강이석
    • 제어로봇시스템학회논문지, Vol. 6, No. 8, pp.682-688, 2000
  • To improve the performance of a flexible robot arm, measuring the vibration displacement of the arm is essential. Many types of sensors have been used for this purpose: the most popular are strain gauges, which measure the deflection of the beam; photo sensors have also been used to detect beam displacement, and accelerometers are often used to measure beam vibration. With these sensors, however, the vibration displacement can only be obtained indirectly. In this article a vision sensor is used as a displacement sensor to measure the vibration displacement of a flexible robot arm directly. Several schemes are proposed to reduce the image processing time and increase its accuracy. The experimental results show that the vision sensor can be an alternative sensor for measuring the vibration displacement and has potential for on-line tip position control of flexible robot systems.

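One common way to obtain tip displacement from a camera, loosely in the spirit of the abstract above, is to track a bright marker on the tip and convert its image centroid to millimetres. The threshold and the pixel-to-millimetre scale in the sketch below are assumed values, not the paper's calibration or image-processing scheme.

```python
# Minimal sketch: measure the tip position of a flexible arm by tracking a
# bright marker in a grayscale frame; threshold and scale are assumptions.
import cv2
import numpy as np

MM_PER_PIXEL = 0.25  # hypothetical calibration factor

def tip_position_mm(gray_frame: np.ndarray):
    """Return the marker centroid in millimetres, or None if it is not found."""
    _, mask = cv2.threshold(gray_frame, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid in pixels
    return cx * MM_PER_PIXEL, cy * MM_PER_PIXEL

frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 100), 5, 255, -1)               # synthetic marker for the demo
print(tip_position_mm(frame))
```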

Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • 한국산업융합학회 논문집, Vol. 22, No. 4, pp.447-457, 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system includes a PC-based vision camera and a stripe-type laser diode, and a set of robust image processing algorithms is implemented. The laser-stripe sensor measures the profile of the welding object and extracts the seam line. Moreover, the working distance of the sensor can be changed, with the rest of the configuration adjusted accordingly. The robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system, and a simple and efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
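As a rough sketch of laser-stripe processing like that described above: take the brightest row in each image column as the surface profile, then flag the column that deviates most from the median profile as a seam candidate. Real systems use sub-pixel peak detection and more robust seam models; everything below is an illustrative assumption.

```python
# Minimal sketch of laser-stripe profile extraction and seam-candidate search.
import numpy as np

def stripe_profile(gray: np.ndarray) -> np.ndarray:
    """Per-column stripe row index (the measured surface profile)."""
    return gray.argmax(axis=0)

def seam_column(profile: np.ndarray) -> int:
    """Seam candidate = column where the profile deviates most from its median."""
    return int(np.argmax(np.abs(profile - np.median(profile))))

# Synthetic image: a horizontal stripe with a notch simulating the seam groove.
img = np.zeros((100, 200), dtype=np.uint8)
rows = np.full(200, 40)
rows[95:105] = 55                       # groove is 15 px deeper at columns 95-104
img[rows, np.arange(200)] = 255

prof = stripe_profile(img)
print("seam at column", seam_column(prof))   # -> a column inside the groove (95-104)
```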

Precise assembly task using sensor fusion technology

  • 이종길;이범희
    • 제어로봇시스템학회 1993년도 한국자동제어학술회의논문집(국내학술편), Seoul National University, Seoul, 20-22 Oct. 1993, pp.287-292
  • We use three sensors, a vision sensor, a proximity sensor, and a force/torque sensor, fused by fuzzy logic in a peg-in-hole task. The vision and proximity sensors are typically used for gross motion control; here their information is used to position the peg around the hole. The force/torque sensor is used for fine motion control, and its information is used to insert the peg into the hole precisely. Throughout the task, the information from all three sensors is fused by a fuzzy logic controller. Simulation results are presented for verification.

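A minimal sketch of fuzzy blending between gross motion (vision/proximity) and fine motion (force/torque) commands in a peg-in-hole setting follows. The membership function and the blending rule are illustrative assumptions, not the paper's fuzzy controller.

```python
# Minimal sketch of fuzzy blending of gross and fine motion commands,
# weighted by the distance between the peg and the hole (assumed membership).

def mu_far(distance_mm: float) -> float:
    """Membership of 'peg is far from the hole' (ramps 0 -> 1 between 1 and 10 mm)."""
    return min(max((distance_mm - 1.0) / 9.0, 0.0), 1.0)

def blended_command(distance_mm: float,
                    gross_cmd: float, fine_cmd: float) -> float:
    """Weight the two motion commands by the fuzzy memberships."""
    w_far = mu_far(distance_mm)      # vision/proximity-driven gross motion
    w_near = 1.0 - w_far             # force/torque-driven fine motion
    return w_far * gross_cmd + w_near * fine_cmd

print(blended_command(8.0, gross_cmd=5.0, fine_cmd=0.5))   # mostly gross motion
print(blended_command(1.5, gross_cmd=5.0, fine_cmd=0.5))   # mostly fine motion
```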

Radar and Vision Sensor Fusion for Primary Vehicle Detection

  • 양승한;송봉섭;엄재용
    • 제어로봇시스템학회논문지, Vol. 16, No. 7, pp.639-645, 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or moves alongside vehicles in the adjacent lane with similar velocity and range. To mitigate this degradation of radar performance, vehicle detection information from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data collected on a highway.
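A minimal sketch of the classification step described above: radar tracks are gated against vision detections, and the closest confirmed in-lane track is taken as the primary vehicle. The gate size, lane half-width, and data layout are assumptions; the paper additionally applies probabilistic association filters on top of such a step.

```python
# Minimal sketch of gating-based radar/vision association and primary vehicle
# selection; thresholds and the in-lane test are illustrative assumptions.
import math

RANGE_GATE_M = 3.0        # association gate (assumed)
LANE_HALF_WIDTH_M = 1.75  # crude in-lane test around the predicted path (assumed)

def associate(radar_tracks, vision_detections_xy):
    """Keep radar tracks (x: longitudinal, y: lateral) that have a vision
    detection inside the gate."""
    confirmed = []
    for t in radar_tracks:
        best_d = RANGE_GATE_M
        for vx, vy in vision_detections_xy:
            d = math.hypot(t["x"] - vx, t["y"] - vy)
            best_d = min(best_d, d)
        if best_d < RANGE_GATE_M:
            confirmed.append(t)
    return confirmed

def primary_vehicle(confirmed_tracks):
    """Primary vehicle = closest confirmed track within the ego lane."""
    in_lane = [t for t in confirmed_tracks if abs(t["y"]) < LANE_HALF_WIDTH_M]
    return min(in_lane, key=lambda t: t["x"], default=None)

radar = [{"x": 35.0, "y": 0.4}, {"x": 28.0, "y": 3.6}]
vision = [(34.2, 0.5), (27.5, 3.4)]
print(primary_vehicle(associate(radar, vision)))   # -> {'x': 35.0, 'y': 0.4}
```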