• Title/Summary/Keyword: visual sensing system


A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김재웅;김동호
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.643-646
    • /
    • 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we used inter-frame processing on the vision board, which removed the noise caused by spatters and fume in the image. Since inter-frame processing produced a clean image, simple techniques such as the first derivative and the central-difference method sufficed to extract the weld seam. We also applied a moving average to the successive weld-seam position data to reduce fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as fillet joints, V-grooves, and lap joints whose seams vary in both the planar and height directions.
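The abstract names two simple numerical steps: a central-difference derivative to locate the seam feature in each frame, and a moving average over the successive seam positions. A minimal sketch of those two steps, with a synthetic intensity profile standing in for the laser-stripe image (the function names and data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def seam_position(profile: np.ndarray) -> int:
    """Locate the seam feature in one intensity profile using a
    central-difference derivative (illustrative only)."""
    d = np.zeros_like(profile, dtype=float)
    d[1:-1] = (profile[2:] - profile[:-2]) / 2.0   # d[i] = (p[i+1] - p[i-1]) / 2
    # the seam is assumed to appear as the strongest intensity gradient
    return int(np.argmax(np.abs(d)))

def smooth_positions(positions, window=5):
    """Moving average over successive seam positions to reduce fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")

# usage with synthetic profiles standing in for frames from the vision board
rng = np.random.default_rng(0)
profiles = [np.r_[np.full(100, 10.0), np.full(100, 200.0)] + rng.normal(0, 5, 200)
            for _ in range(20)]
raw = [seam_position(p) for p in profiles]
print(smooth_positions(raw))
```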


Fire Detection Based on Image Learning by Collaborating CNN-SVM with Enhanced Recall

  • Yongtae Do
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.3
    • /
    • pp.119-124
    • /
    • 2024
  • Effective fire sensing is important to protect lives and property from disaster. In this paper, we present an intelligent visual sensing method for detecting fires based on machine learning techniques. The proposed method involves a two-step process. In the first step, fire and non-fire images are used to train a convolutional neural network (CNN); in the next step, feature vectors consisting of 256 values obtained from the CNN are used to train a support vector machine (SVM). Linear and nonlinear SVMs with different parameters are tested intensively. We found that the proposed hybrid method using an SVM with a linear kernel effectively increased the recall rate of fire image detection without compromising detection accuracy when an imbalanced dataset was used for learning. This is a major contribution of this study because recall is particularly important in sensing disaster situations such as fires. In our experiments, the proposed system exhibited an accuracy of 96.9% and a recall rate of 92.9% on test image data.
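A hedged sketch of the two-step CNN-then-SVM pipeline the abstract describes, assuming a small CNN whose penultimate layer is 256-dimensional and feeding those features to a linear-kernel SVM; the layer sizes, image size, and data below are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FireCNN(nn.Module):
    """Toy CNN whose penultimate layer yields a 256-dim feature vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 256), nn.ReLU(),   # 256-dim feature layer
        )
        self.classifier = nn.Linear(256, 2)          # used only while training the CNN

    def forward(self, x):
        return self.classifier(self.features(x))

# step 2: after the CNN is trained on fire / non-fire images, its 256-dim
# features train a linear-kernel SVM
model = FireCNN().eval()
images = torch.rand(8, 3, 64, 64)        # placeholder image batch
labels = [0, 1, 0, 1, 0, 1, 0, 1]        # placeholder fire / non-fire labels
with torch.no_grad():
    feats = model.features(images).numpy()
svm = SVC(kernel="linear").fit(feats, labels)
print(svm.predict(feats))
```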

Hand/Eye calibration of Robot arms with a 3D visual sensing system (3차원 시각 센서를 탑재한 로봇의 Hand/Eye 캘리브레이션)

  • 김민영;노영준;조형석;김재훈
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.76-76
    • /
    • 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations of the form A₁X = XB₁ and A₂X = XB₂. A closed-form solution to this system of equations is developed, and the constraints for the existence of a solution are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
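AX = XB is the standard eye-on-hand calibration problem. The paper develops its own closed-form solution; as an illustrative stand-in (not the authors' method), OpenCV's cv2.calibrateHandEye, which defaults to Tsai's solver, accepts the same inputs. The sketch below builds consistent synthetic poses from an assumed ground-truth X and recovers it, using at least two distinct motions (three poses), as the abstract requires:

```python
import cv2
import numpy as np

def T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    M = np.eye(4); M[:3, :3] = R; M[:3, 3] = t
    return M

rng = np.random.default_rng(1)

# Ground-truth X = camera pose in the gripper frame (what calibration recovers).
X = T(cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))[0], np.array([0.05, 0.0, 0.1]))
# Fixed calibration target pose in the robot base frame (assumed).
T_base_target = T(cv2.Rodrigues(np.array([0.0, 0.0, 0.5]))[0], np.array([1.0, 0.2, 0.3]))

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(3):                                   # three poses = two motions
    T_base_gripper = T(cv2.Rodrigues(rng.normal(0, 0.5, 3))[0], rng.normal(0, 0.3, 3))
    T_cam_target = np.linalg.inv(T_base_gripper @ X) @ T_base_target
    R_g2b.append(T_base_gripper[:3, :3]); t_g2b.append(T_base_gripper[:3, 3].reshape(3, 1))
    R_t2c.append(T_cam_target[:3, :3]);   t_t2c.append(T_cam_target[:3, 3].reshape(3, 1))

# Solve the AX = XB system (Tsai's method by default).
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print("estimated R:\n", np.round(R_est, 3))
print("true R:\n", np.round(X[:3, :3], 3), "\nestimated t:", t_est.ravel())
```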


DEVELOPMENT OF 3D GUIDANCE SYSTEM FOR CLIMBING

  • Park, Jeong-Ho;Cho, Seong-Ik
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.872-875
    • /
    • 2006
  • This paper introduces the results of developing a PDA-based 3D climbing navigation system. From a visual standpoint, this system improves on conventional systems, which were developed on a 2D basis. In addition, the proposed system was developed to be compatible with those systems. In this paper, we describe the system development from a functional viewpoint rather than giving a technical description.


Development of Intelligent Rain Sensing Algorithm for Vision-based Smart Wiper System (비전 기반 스마트 와이퍼 시스템을 위한 지능형 레인 센싱 알고리즘 개발)

  • Lee, Kyung-Chang;Kim, Man-Ho;Lee, Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.7
    • /
    • pp.649-657
    • /
    • 2004
  • A windshield wiper system plays a key part in assuring driver safety during rainfall. However, because the quantity of rain and snow varies irregularly with time and vehicle speed, in a traditional windshield wiper system the driver has to change the wiper speed and operation period from time to time to secure a sufficient field of view. Because manual operation of the wiper distracts the driver and leads to inattentive driving, it is a direct cause of traffic accidents. Therefore, this paper presents the basic architecture of a vision-based smart wiper system and a rain-sensing algorithm that automatically regulates the speed and interval of the wiper according to the quantity of rain or snow. This paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm in a simulator model. In particular, the vision sensor can measure a wider area than an optical rain sensor, and thus grasps the rainfall state more accurately when disturbances occur.
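A minimal sketch of a Mamdani-style fuzzy controller of the kind the abstract mentions, mapping a normalized rain-intensity estimate from the vision sensor to a wiper speed; the membership functions and rules are assumptions for illustration, not the paper's:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def wiper_speed(rain):
    """Map a rain-intensity estimate (0..1) to a wiper speed command (0..1)."""
    # input membership degrees
    light, medium, heavy = tri(rain, -0.01, 0.0, 0.4), tri(rain, 0.2, 0.5, 0.8), tri(rain, 0.6, 1.0, 1.01)
    # output fuzzy sets over the wiper-speed universe
    y = np.linspace(0.0, 1.0, 101)
    slow, normal, fast = tri(y, -0.01, 0.0, 0.5), tri(y, 0.25, 0.5, 0.75), tri(y, 0.5, 1.0, 1.01)
    # rules: light->slow, medium->normal, heavy->fast (min implication, max aggregation)
    agg = np.maximum.reduce([np.minimum(light, slow),
                             np.minimum(medium, normal),
                             np.minimum(heavy, fast)])
    return float((y * agg).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

for r in (0.1, 0.5, 0.9):
    print(f"rain {r} -> wiper speed {wiper_speed(r):.2f}")
```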

A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding

  • Kim, J.-W.;Chung, K.-C.
    • International Journal of Korean Welding Society
    • /
    • v.1 no.2
    • /
    • pp.23-29
    • /
    • 2001
  • In this study, a preview-sensing visual sensor system is constructed for weld seam tracking in GMA welding. The visual sensor system consists of a CCD camera, a diode laser with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatters and/or arc light. Among the image processing methods, the Hough transform is compared with the central-difference method in terms of its ability to extract accurate feature positions. The results show that the Hough transform extracts feature positions more accurately and can be applied to real-time weld seam tracking. Image processing based on the Hough transform is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined by intersecting the lines. Even when the image includes a spatter trace, it is possible to recognize the position of the weld joint. Weld seam tracking was precisely implemented using the Hough transform, and the weld seam can be tracked when the offset angle is within ±15°.
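A sketch of the line-extraction-and-intersection idea using OpenCV's standard Hough transform; the synthetic two-segment image below stands in for a laser stripe on a V-groove, and the thresholds are assumptions rather than the paper's settings:

```python
import cv2
import numpy as np

# Synthetic stand-in for the filtered laser-stripe image: two bright
# segments meeting at the weld joint.
img = np.zeros((240, 320), np.uint8)
cv2.line(img, (0, 60), (160, 120), 255, 2)     # stripe on the left plate
cv2.line(img, (160, 120), (319, 60), 255, 2)   # stripe on the right plate

# Standard Hough transform; each detected line is (rho, theta).
lines = cv2.HoughLines(img, 1, np.pi / 180, threshold=100)
rho1, th1 = lines[0][0]
# pick a second line whose orientation clearly differs from the first
rho2, th2 = next(l[0] for l in lines if abs(l[0][1] - th1) > 0.2)

# Intersecting x*cos(t) + y*sin(t) = rho for the two lines gives the
# weld-joint position (the feature point extracted from the stripe).
A = np.array([[np.cos(th1), np.sin(th1)], [np.cos(th2), np.sin(th2)]])
joint = np.linalg.solve(A, np.array([rho1, rho2]))
print("estimated weld joint (x, y):", joint.round(1))
```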


Landmark Detection Based on Sensor Fusion for Mobile Robot Navigation in a Varying Environment

  • Jin, Tae-Seok;Kim, Hyun-Sik;Kim, Jong-Wook
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.10 no.4
    • /
    • pp.281-286
    • /
    • 2010
  • We propose a space- and time-based sensor fusion method and a robust landmark detection algorithm based on sensor fusion for mobile robot navigation. To fully utilize the information from the sensors, this paper first proposes a new sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement. Exploration of an unknown environment is an important task for the new generation of mobile robots, which may navigate by means of a number of monitoring systems such as a sonar-sensing system or a visual-sensing system. The newly proposed STSF (Space and Time Sensor Fusion) scheme is applied to landmark recognition for mobile robot navigation in both structured and unstructured environments, and the experimental results demonstrate its landmark recognition performance.
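A minimal sketch of the time-dimension idea as read from the abstract (not the authors' STSF equations): landmark measurements taken in earlier robot frames are re-expressed in the current frame via the known relative motion and then averaged with the current measurement. Pose and measurement values are placeholders:

```python
import numpy as np

def se2(x, y, theta):
    """2-D homogeneous transform for a planar robot pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

def fuse_over_time(measurements, poses):
    """Re-express earlier landmark measurements in the current robot frame
    using the known poses, then average them with the current measurement."""
    T_now = poses[-1]
    pts = []
    for (px, py), T_k in zip(measurements, poses):
        p_world = T_k @ np.array([px, py, 1.0])      # sensor frame at time k -> world
        p_now = np.linalg.inv(T_now) @ p_world       # world -> current robot frame
        pts.append(p_now[:2])
    return np.mean(pts, axis=0)

# one landmark seen from three successive poses (poses from odometry)
poses = [se2(0.0, 0.0, 0.0), se2(0.5, 0.0, 0.1), se2(1.0, 0.1, 0.2)]
meas  = [(3.0, 1.0), (2.52, 0.95), (2.0, 0.88)]      # noisy landmark fixes in each frame
print("fused landmark in current frame:", fuse_over_time(meas, poses).round(2))
```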

Environment Modeling for Autonomous Welding Robots

  • Kim, Min-Y.;Cho, Hyung-Suk;Kim, Jae-Hoon
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.3 no.2
    • /
    • pp.124-132
    • /
    • 2001
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a welding robot that can navigate autonomously within the enclosure needs to be developed. To achieve the welding task, the robotic welding system needs a sensor system for recognizing the working environment and tracking the weld seam, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system is developed based on optical triangulation in order to provide the robot with a map of the work environment. At the same time, a strategy for environment recognition by the mobile welding robot is proposed in order to recognize the work environment efficiently. The design of the sensor system, the algorithm for sensing the structured environment, and the recognition strategy and tactics for sensing the work environment are described and discussed in detail.
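A short sketch of the optical-triangulation principle behind such a laser vision sensor: the 3-D point is where the camera ray through an image pixel meets the laser light plane, both expressed in the camera frame. The intrinsics and plane parameters below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the laser plane
    n . X = d (camera frame) to recover the 3-D point."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])   # viewing-ray direction
    t = plane_d / (plane_n @ ray)                                  # X = t * ray lies on the plane
    return t * ray

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])        # assumed intrinsics
plane_n = np.array([0.0, np.sin(np.radians(30)), np.cos(np.radians(30))])  # laser plane normal
plane_d = 0.5                                                      # plane offset in metres
print("3-D point:", triangulate((400, 260), K, plane_n, plane_d).round(3))
```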


Development of an Image Processing System for the Large Size High Resolution Satellite Images (대용량 고해상 위성영상처리 시스템 개발)

  • 김경옥;양영규;안충현
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.4
    • /
    • pp.376-391
    • /
    • 1998
  • Images from satellites will have 1 to 3 meter ground resolution and will be very useful for analyzing the current status of the earth's surface. An image processing system named GeoWatch, with more intelligent image processing algorithms, has been designed and implemented to support detailed analysis of the land surface using high-resolution satellite imagery. GeoWatch is a valuable tool for satellite image processing tasks such as digitizing, geometric correction using ground control points, interactive enhancement, various transforms, arithmetic operations, and calculating vegetation indices. It can be used to investigate change detection, land cover classification, capacity estimation of industrial complexes, urban information extraction, and so on, using more intelligent analysis methods with a variety of visual techniques. The strong points of this system are its flexible algorithm-save method for the efficient handling of large images (e.g., full scenes), automatic menu generation, and a powerful visual programming environment. Most existing image processing systems use general graphical user interfaces; in this paper we adopted a visual programming language for remotely sensed image processing because of its powerful programmability and ease of use. The system is an integrated raster/vector analysis system equipped with many useful functions such as vector overlay, flight simulation, 3D display, and object modeling. In addition to the modules for image and digital signal processing, the system provides other utilities such as a toolbox and an interactive image editor. This paper also presents several cases of image analysis with AI (artificial intelligence) techniques and the design concept of the visual programming environment.
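As a small concrete example of the vegetation-index calculation the abstract lists among GeoWatch's operations, here is a generic NDVI computation over near-infrared and red bands; the band values are placeholders, and this is not GeoWatch code:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# placeholder 3x3 near-infrared and red bands standing in for satellite data
nir = np.array([[0.6, 0.7, 0.5], [0.4, 0.8, 0.6], [0.3, 0.5, 0.7]])
red = np.array([[0.2, 0.1, 0.3], [0.3, 0.1, 0.2], [0.4, 0.2, 0.1]])
print(ndvi(nir, red).round(2))
```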

Effects of spatial resolution on digital image to detect pine trees damaged by pine wilt disease

  • Lee, Seung-Ho;Cho, Hyun-Kook
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.260-263
    • /
    • 2005
  • This study was carried out to investigate the effects of spatial resolution on digital imagery for detecting pine trees damaged by pine wilt disease. Color infrared images taken from the PKNU-3 multispectral airborne photographing system with a spatial resolution of 50 cm were used as the base data. Test images with spatial resolutions of 1 m, 2 m, and 4 m were then made from the base data to test the detection capability at each spatial resolution. The test was performed by visual interpretation in both mono and stereo modes and compared with field survey data. It can be concluded that a spatial resolution finer than 1 m, or a 1 m resolution with a stereo pair, is needed to detect pine trees damaged by pine wilt disease.
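The coarser test images are derived from the 50 cm base data; one simple way to produce such degraded images is block averaging, sketched below. This is a generic stand-in under that assumption, since the abstract does not state how the resampling was actually done:

```python
import numpy as np

def degrade(image, factor):
    """Degrade spatial resolution by block averaging, e.g. factor=2 turns a
    50 cm pixel image into an approximate 1 m pixel image."""
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# placeholder 8x8 "50 cm" band; factors 2, 4, 8 give 1 m, 2 m and 4 m pixels
band = np.arange(64, dtype=float).reshape(8, 8)
for f in (2, 4, 8):
    print(f"factor {f}: degraded shape {degrade(band, f).shape}")
```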
