• Title/Summary/Keyword: Vision Based Sensor


Trends of Sensor-based Intelligent Arc Welding Robot System (센서기반 지능형 아크 용접 로봇 시스템의 동향)

  • Joung, Ji Hoon; Shin, Hyeon-Ho; Song, Young Hoon; Kim, SooJong
    • Journal of Institute of Control, Robotics and Systems, v.20 no.10, pp.1051-1056, 2014
  • In this paper, we introduce intelligent robotic arc welding systems that exploit sensors such as an LVS (Laser Vision Sensor), a Hall effect sensor, and a voltmeter. The adoption of industrial robots has plateaued because of their inherent limitations, one of the most significant being that an industrial robot cannot perceive its environment. Recently, research on sensor-based environmental awareness for industrial robots has been actively pursued to overcome this limitation, as it can expand the range of applications and improve productivity. We classify sensor-based intelligent arc welding robot systems by goal and by sensing data. The goals can be categorized into detection of a welding start point, tracking of a welding line, and correction of torch deformation. The sensing data can be categorized into welding data (i.e., current, voltage, and short-circuit detection) and displacement data (i.e., distance and position). This paper covers not only an explanation of each category but also its advantages and limitations.
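
The goal/sensing-data taxonomy above lends itself to a small data model; a minimal Python sketch (all class and field names here are hypothetical, not from the paper):

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical encoding of the survey's taxonomy: systems are classified
# by goal and by the kind of sensing data they consume.
class Goal(Enum):
    START_POINT_DETECTION = auto()   # find the welding start point
    SEAM_TRACKING = auto()           # track the welding line
    TORCH_CORRECTION = auto()        # correct torch deformation

class SensingData(Enum):
    WELDING = auto()       # current, voltage, short-circuit detection
    DISPLACEMENT = auto()  # distance, position (e.g., from an LVS)

@dataclass
class WeldingRobotSystem:
    name: str
    goal: Goal
    sensing: SensingData

# Example: an LVS-based seam-tracking system.
system = WeldingRobotSystem("LVS seam tracker", Goal.SEAM_TRACKING,
                            SensingData.DISPLACEMENT)
print(system)
```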

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon; Lee, Donghwa; Myung, Hyun; Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems, v.19 no.8, pp.667-675, 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used in the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
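
The abstract does not give the weighted correlation coefficient in closed form; the following NumPy sketch shows one plausible weighted normalized correlation, where a per-pixel mask w emphasizes reliable template regions (the weighting scheme is an assumption, not the authors' formula):

```python
import numpy as np

def weighted_ncc(patch, template, w):
    """Weighted normalized correlation between an image patch and a template.

    w is a per-pixel weight mask (same shape as the template); the exact
    weighting used in the paper is not given here, so this form is only
    illustrative.
    """
    w = w / w.sum()
    mp = (w * patch).sum()            # weighted means
    mt = (w * template).sum()
    dp, dt = patch - mp, template - mt
    cov = (w * dp * dt).sum()         # weighted covariance
    var_p = (w * dp * dp).sum()
    var_t = (w * dt * dt).sum()
    return cov / np.sqrt(var_p * var_t + 1e-12)

def match(image, template, w):
    """Slide the template over the image; return the best-scoring corner."""
    H, W = image.shape
    h, wd = template.shape
    best, best_xy = -2.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - wd + 1):
            s = weighted_ncc(image[y:y+h, x:x+wd], template, w)
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy, best
```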

Rule-Based Filter on Misidentification of Vision Sensor for Robot Knowledge Instantiation (Vision Sensor를 사용하는 로봇지식 관리를 위한 Rule 기반의 인식 오류 검출 필터)

  • Lee, Dae-Sic; Lim, Gi-Hyun; Suh, Il-Hong
    • Proceedings of the KIEE Conference, 2008.10b, pp.349-350, 2008
  • An intelligent robot perceives its surroundings in order to model representable objects and spaces, and performs tasks by combining the actions it can execute. To this end, we use a robot knowledge framework that represents objects, spaces, situations, and actions with an ontology, and that provides various inference methods through Java-based rules for performing specific tasks. This knowledge framework guarantees that generated instances have class and property values consistent with the data and free of contradictions with other data. To use the framework effectively, the creation of complete ontology instances must be ensured. In real environments, however, when the robot recognizes objects through a vision sensor, recognition errors such as false positives and false negatives occur. To compensate for this, this paper proposes a rule-based recognition-error detection filter that considers the spatial and temporal relations between objects, together with the recognition rate and attributes of each object, so that instances can be managed stably even in the presence of object recognition errors.
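
A minimal sketch of a rule-based misidentification filter in the spirit of this abstract (the two rules and all thresholds are invented for illustration; the paper's Java-based rules are not reproduced here):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    position: tuple          # (x, y) in the map frame
    timestamp: float

def plausible(det, history, recognition_rate, max_speed=0.5):
    """Reject detections that violate simple spatial/temporal rules.

    Illustrative rules only:
      1. Temporal/spatial: an object cannot move faster than max_speed
         (m/s) between consecutive sightings.
      2. Confidence: detections below the object's known recognition
         rate are treated as likely false positives.
    """
    if det.confidence < recognition_rate.get(det.label, 0.5):
        return False                      # rule 2: weak detection
    prev = [h for h in history if h.label == det.label]
    if prev:
        last = prev[-1]
        dt = det.timestamp - last.timestamp
        if dt > 0:
            dx = det.position[0] - last.position[0]
            dy = det.position[1] - last.position[1]
            if (dx*dx + dy*dy) ** 0.5 / dt > max_speed:
                return False              # rule 1: implausible jump
    return True
```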


Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong; Shin, Ok-Shik; Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences, v.11 no.1, pp.31-40, 2010
  • For weapon cueing and Head-Mounted Displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve performance in vision processing, we separate structure estimation from motion estimation. The structure estimation tracks features that belong to the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using both synthetic and real data, and the results show that the sensor fusion is successful.
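
A minimal predict/update skeleton for inertial-plus-vision fusion of the kind described above, with inertial data driving the prediction and vision-derived position driving the correction (the state layout, models, and noise values are illustrative assumptions, not the paper's filter; orientation handling, which the paper's EKF includes, is omitted for brevity):

```python
import numpy as np

# Illustrative state: position (3) and velocity (3). Inertial data drives
# the prediction; vision measurements of position drive the update.
dt = 0.01
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # vision observes position
Q = 1e-4 * np.eye(6)                          # process noise (assumed)
R = 1e-2 * np.eye(3)                          # vision noise (assumed)

def predict(x, P, accel):
    x = F @ x
    x[3:] += dt * accel             # integrate inertial acceleration
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                   # innovation from the vision system
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x + K @ y, (np.eye(6) - K @ H) @ P
```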

Application of the Laser Vision Sensor for Corrugated Type Workpiece

  • Lee, Ji-Hyoung; Kim, Jae-Gwon; Kim, Jeom-Gu; Park, In-Wan; Kim, Hyung-Shik
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, 2004.08a, pp.499-503, 2004
  • This application-oriented paper describes an automated welding carriage system that welds thin corrugated workpieces with a welding seam tracking function. Hyundai Heavy Industries has developed an automatic welding carriage system that uses a pulsed plasma arc welding process for corrugated sheets. It achieves welding speeds more than two times faster than a traditional TIG-based welding system. The aims of this development are to increase productivity through automatic plasma welding carriage systems, to track the weld seam line automatically using a vision sensor, and to make welding more convenient for the operator. In this paper, a robust image processing algorithm and a distance-based tracking algorithm are introduced for corrugated workpiece welding. The automatic welding carriage is controlled by a programmable logic controller (PLC), and the automatic welding seam tracking system is controlled by an industrial personal computer (IPC) running an embedded OS. The system was tested on an actual workpiece to show the feasibility and performance of the proposed algorithm and to confirm the reliability of the developed controller.
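
The abstract does not detail the image processing or the distance-based tracking algorithm; the sketch below shows a generic laser-stripe profile extractor and a crude seam-offset rule, purely as an illustration of how such a tracker is typically structured:

```python
import numpy as np

def extract_profile(img):
    """Per-column laser stripe peak: row index of the brightest pixel.

    img is a grayscale frame from the laser vision sensor; the result is
    one range sample per column, i.e., a 2D cross-section of the workpiece.
    """
    return img.argmax(axis=0).astype(float)

def seam_offset(profile, center_col):
    """Distance (in pixels) from the carriage centerline to the seam.

    Illustrative rule only: take the deepest profile point as the seam;
    a real corrugated-sheet tracker would fit a shape model instead.
    """
    seam_col = int(profile.argmax())    # largest row index = deepest point
    return seam_col - center_col

# Each cycle, the offset would be fed to the carriage controller as a
# cross-seam correction.
```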


Edge-based Method for Human Detection in an Image (영상 내 사람의 검출을 위한 에지 기반 방법)

  • Do, Yongtae; Ban, Jonghee
    • Journal of Sensor Science and Technology, v.25 no.4, pp.285-290, 2016
  • Human sensing is an important but challenging technology. Unlike other methods for sensing humans, a vision sensor has many advantages, and there has been active research into automatic human detection in camera images. The combination of Histograms of Oriented Gradients (HOG) and a Support Vector Machine (SVM) is currently one of the most successful methods in vision-based human detection. However, extracting HOG features from an image is computationally intensive, so it is hard to employ the HOG method in real-time processing applications. This paper describes an efficient solution to this speed problem of the HOG method. Our method obtains edge information from an image and finds candidate regions where humans are likely to exist based on the distribution pattern of the detected edge points. HOG features are then extracted only from the candidate image regions. Since the complex HOG processing is done adaptively under the guidance of the simpler edge detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
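
A minimal OpenCV sketch of the edge-guided idea: a cheap edge-density test selects candidate windows, and the expensive HOG+SVM detector runs only there (the density rule and all thresholds are assumptions standing in for the paper's edge-distribution analysis):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(img, win=(128, 256), stride=64, min_edge_density=0.02):
    """Run HOG+SVM only where edges are dense enough to suggest a person."""
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)
    found = []
    h, w = edges.shape
    for y in range(0, h - win[1] + 1, stride):
        for x in range(0, w - win[0] + 1, stride):
            patch = edges[y:y+win[1], x:x+win[0]]
            # Cheap test first: skip windows with too few edge points.
            if patch.mean() / 255.0 < min_edge_density:
                continue
            roi = img[y:y+win[1], x:x+win[0]]
            rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
            found += [(x + rx, y + ry, rw, rh) for rx, ry, rw, rh in rects]
    return found
```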

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok; Lee, Min-Jung; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.13 no.5, pp.434-443, 2007
  • Presently, exploration of unknown environments is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using systems such as sonar sensing or visual sensing. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localizing a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. For the ultrasonic sensors, these features carry range information in the form of a circular arc, generally called an RCD (Region of Constant Depth). Localization is the continual provision of knowledge of position, deduced from an a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with results from sets of experiments using a mobile robot.
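
A minimal sketch of one EKF pose update from a single range feature, in the spirit of the RCD-based sonar measurements above (the measurement model and noise values are illustrative assumptions; the vision system would supply additional measurements in the same update form):

```python
import numpy as np

# State: robot pose (x, y, heading). A sonar RCD yields a range to a
# known feature at (mx, my).
R_range = np.array([[4e-4]])        # sonar range noise (assumed)

def ekf_range_update(x, P, r_meas, landmark):
    mx, my = landmark
    dx, dy = mx - x[0], my - x[1]
    r_pred = np.hypot(dx, dy)                          # predicted range
    H = np.array([[-dx / r_pred, -dy / r_pred, 0.0]])  # range Jacobian
    S = H @ P @ H.T + R_range
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + (K @ np.array([[r_meas - r_pred]])).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P
```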

Road marking classification method based on intensity of 2D Laser Scanner (신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법)

  • Park, Seong-Hyeon; Choi, Jeong-hee; Park, Yong-Wan
    • IEMEK Journal of Embedded Systems and Applications, v.11 no.5, pp.313-323, 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems for road marking detection using vision sensors and 3D laser scanners. However, a vision sensor has the weaknesses that detection is difficult under severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and that processing time is long because of the large amount of data produced by both vision sensors and 3D laser scanners. Accordingly, this paper proposes a road marking detection and classification method using a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance and intensity data acquired through the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with the 3D laser scanner-based method, demonstrating the feasibility of road marking type classification using a single 2D laser scanner.
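
A minimal sketch of the accumulate-then-threshold idea: successive 2D scans are projected into the world frame using the vehicle pose, and high-intensity returns are kept as marking candidates (the threshold and accumulation scheme are assumptions; the paper's classifier is not reproduced):

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Project successive 2D scans into the world frame.

    scans: list of (ranges, angles, intensities) arrays per sweep.
    poses: matching (x, y, heading) of the vehicle for each sweep.
    Returns stacked (x, y, intensity) points.
    """
    pts = []
    for (r, a, i), (px, py, th) in zip(scans, poses):
        xs = px + r * np.cos(a + th)
        ys = py + r * np.sin(a + th)
        pts.append(np.column_stack([xs, ys, i]))
    return np.vstack(pts)

def marking_points(points, intensity_thresh=0.6):
    """Road paint retro-reflects more strongly than asphalt, so keep only
    high-intensity returns (threshold is illustrative)."""
    return points[points[:, 2] > intensity_thresh]
```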

Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot (모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전)

  • Jang, Taeho; Kim, Youngshik; Kyoung, Minyoung; Yi, Hyunbean; Hwan, Yoondong
    • Transactions of the Korean Society of Mechanical Engineers A, v.40 no.8, pp.703-710, 2016
  • In robotics research, accurate estimation of the current robot position is important for achieving motion control. In this research, we focus on a sensor fusion method that provides improved position estimation for a wheeled mobile robot, considering two different sensor measurements. We fuse camera-based vision and encoder-based odometry data using Kalman filter techniques to improve the position estimation of the robot. An external camera-based vision system provides global position coordinates (x, y) for the mobile robot in an indoor environment, while internal encoder-based odometry provides the linear and angular velocities of the robot. We then use the position data estimated by the Kalman filter as input to the motion controller, which significantly improves its performance. Finally, we experimentally verify the performance of the proposed sensor-fused position estimation and motion controller on an actual mobile robot system. In our experiments, we also compare the Kalman filter-based sensor-fused estimation with two single-sensor estimations (vision-based and odometry-based).
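
A minimal sketch of the fusion loop described above, with odometry velocities driving the prediction and camera coordinates driving the correction (the state layout and noise values are illustrative assumptions, not the paper's filter):

```python
import numpy as np

# State: (x, y, heading). Odometry gives (v, w); the external camera
# gives global (x, y). Noise values below are assumed for illustration.
Q = np.diag([1e-4, 1e-4, 1e-4])
R = np.diag([1e-3, 1e-3])
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])   # camera observes (x, y)

def predict(x, P, v, w, dt):
    th = x[2]
    x = x + np.array([v*np.cos(th)*dt, v*np.sin(th)*dt, w*dt])
    F = np.array([[1, 0, -v*np.sin(th)*dt],
                  [0, 1,  v*np.cos(th)*dt],
                  [0, 0, 1]])               # Jacobian of the motion model
    return x, F @ P @ F.T + Q

def update(x, P, z_cam):
    y = z_cam - H @ x                       # camera-position innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```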

Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.; Dong, C.Z.; Liu, T.
    • Smart Structures and Systems, v.17 no.6, pp.935-956, 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many non-contact approaches enabled by different digital image processing algorithms have been developed to overcome the problems of conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three image processing algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are measured simultaneously by the vision-based system and by a magnetostrictive displacement sensor (MDS) during laboratory shaking-table tests of a three-story steel frame model. The comparative analysis indicates that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement with all three image processing algorithms. Field experiments are also carried out on an arch bridge, measuring displacement influence lines during loading tests to validate the effectiveness of the vision-based system.
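
A minimal OpenCV sketch in the spirit of the grayscale pattern matching (GPM) algorithm: lock a template on the target in the first frame, then report its pixel displacement frame by frame (the millimetre scale factor is a placeholder that would come from calibration):

```python
import cv2

def track_displacement(frames, roi, mm_per_px=1.0):
    """Track one vision point across frames by template matching.

    frames: list of grayscale (uint8) images; roi: (x, y, w, h) of the
    target in the first frame. mm_per_px must come from calibration.
    """
    x, y, w, h = roi
    template = frames[0][y:y+h, x:x+w]
    history = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(res)       # best-match corner
        history.append(((bx - x) * mm_per_px, (by - y) * mm_per_px))
    return history
```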