• Title/Summary/Keywords: Vision Based Sensor

Search results: 424 (processing time: 0.023 s)

센서기반 지능형 아크 용접 로봇 시스템의 동향 (Trends of Sensor-based Intelligent Arc Welding Robot System)

  • 정지훈;신현호;송영훈;김수종
    • 제어로봇시스템학회논문지 / Vol. 20, No. 10 / pp.1051-1056 / 2014
  • In this paper, we introduce an intelligent robotic arc welding system which exploits sensors such as an LVS (Laser Vision Sensor), a Hall effect sensor, and a voltmeter. The adoption of industrial robots has saturated because of their limitations, and one of the major limitations is that an industrial robot cannot recognize its environment. Lately, research on sensor-based environmental awareness for industrial robots has been performed actively to overcome this limitation, as it can expand the field of application and improve productivity. We classify sensor-based intelligent arc welding robot systems by goal and by sensing data. The goals can be categorized into detection of a welding start point, tracking of a welding line, and correction of a torch deformation. The sensing data can be categorized into welding data (i.e., current, voltage, and short-circuit detection) and displacement data (i.e., distance and position). This paper covers not only an explanation of each category but also its advantages and limitations.

구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식 (Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment)

  • 김동훈;이동화;명현;최현택
    • 제어로봇시스템학회논문지 / Vol. 19, No. 8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to be able to successfully perform autonomous navigation, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for performing short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique to be applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, in order to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
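The weighted correlation coefficient at the heart of the matching step can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' implementation; the template, patch, and weight values are invented:

```python
def weighted_ncc(patch, template, weights):
    """Weighted normalized cross-correlation between an image patch and a
    template (both flattened to 1-D lists). `weights` emphasizes pixels that
    are reliable and de-emphasizes noisy background pixels."""
    wsum = sum(weights)
    # Weighted means of the patch and the template.
    mp = sum(w * p for w, p in zip(weights, patch)) / wsum
    mt = sum(w * t for w, t in zip(weights, template)) / wsum
    # Weighted covariance and variances.
    cov = sum(w * (p - mp) * (t - mt) for w, p, t in zip(weights, patch, template))
    vp = sum(w * (p - mp) ** 2 for w, p in zip(weights, patch))
    vt = sum(w * (t - mt) ** 2 for w, t in zip(weights, template))
    if vp == 0 or vt == 0:
        return 0.0  # a flat region carries no pattern information
    return cov / (vp * vt) ** 0.5

template = [10, 20, 30, 20, 10]
patch    = [110, 120, 130, 120, 110]   # same shape, shifted brightness
weights  = [0.5, 1.0, 2.0, 1.0, 0.5]   # emphasize the center pixels
score = weighted_ncc(patch, template, weights)  # 1.0
```

Because the coefficient is normalized, a patch that differs from the template only by a brightness offset still scores 1.0, which is what makes correlation-based matching attractive under changing underwater lighting.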

Vision Sensor를 사용하는 로봇지식 관리를 위한 Rule 기반의 인식 오류 검출 필터 (Rule-Based Filter on Misidentification of Vision Sensor for Robot Knowledge Instantiation)

  • 이대식;임기현;서일홍
    • 대한전기학회:학술대회논문집 / 대한전기학회 2008년도 학술대회 논문집 정보 및 제어부문 / pp.349-350 / 2008
  • An intelligent robot perceives its surroundings in order to model representable objects and spaces, and performs missions by combining the actions it can execute. To this end, we employed a robot knowledge framework that represents objects, spaces, contexts, and actions with an ontology, and provides various inference methods through Java-based rules for carrying out specific missions. This robot knowledge framework guarantees that generated instances are consistent in their class and property values and do not contradict other data. To use the framework effectively, the creation of complete ontology instances must be ensured. In a real environment, however, when a robot recognizes objects through a vision sensor, recognition errors such as false positives and false negatives occur. To compensate for this, this paper proposes a rule-based recognition-error detection filter that considers the spatial and temporal relations between objects, together with each object's recognition rate and properties, so that instances can be managed stably even in the presence of object recognition errors.
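A recognition-error filter of the kind described can be sketched as follows. The confidence-threshold and temporal-consistency rules below are hypothetical stand-ins for the paper's Java-based spatial/temporal rules; the class names and thresholds are invented:

```python
from collections import defaultdict, deque

class RuleBasedFilter:
    """Sketch of a rule-based filter that suppresses vision-sensor
    misrecognitions before instantiating robot knowledge. Two rules:
    (1) a per-class confidence threshold, reflecting each object's known
        recognition rate, and
    (2) temporal consistency: an object must pass rule (1) in at least
        `min_hits` of the last `window` frames before it is trusted."""

    def __init__(self, class_thresholds, window=5, min_hits=3):
        self.class_thresholds = class_thresholds
        self.min_hits = min_hits
        self.history = defaultdict(lambda: deque(maxlen=window))

    def update(self, frame_detections):
        """frame_detections: dict of object_id -> (class_name, confidence).
        Returns the set of object ids considered stable this frame."""
        seen = set(frame_detections)
        for obj_id in set(self.history) | seen:
            det = frame_detections.get(obj_id)
            ok = (det is not None
                  and det[1] >= self.class_thresholds.get(det[0], 0.5))
            self.history[obj_id].append(1 if ok else 0)
        return {o for o, h in self.history.items() if sum(h) >= self.min_hits}
```

A spurious one-frame detection (a false positive) never accumulates enough hits to become a knowledge instance, while a briefly missed object (a false negative) survives on its recent history.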


Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / Vol. 11, No. 1 / pp.31-40 / 2010
  • For weapon cueing and Head-Mounted Display (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce the computation time and improve the performance of the vision processing, we separate structure estimation from motion estimation. The structure estimation tracks the features that are part of the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.
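The two phases of the motion-estimation filter can be illustrated with a minimal one-axis linear sketch. The paper's filter is a full 6-DOF EKF; this reduced version only shows the pattern in which inertial acceleration drives the prediction step and a vision-derived position fix drives the update step (state, noise values, and dimensions are illustrative):

```python
# State x = [position, velocity]; P is its 2x2 covariance.

def predict(x, P, a, dt, q):
    """Propagate the state with measured acceleration a over dt.
    F = [[1, dt], [0, 1]];  P <- F P F^T + Q  (Q = q * I here)."""
    p, v = x
    x = [p + v * dt + 0.5 * a * dt * dt, v + a * dt]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    return x, P

def update(x, P, z, r):
    """Correct with a vision position measurement z (H = [1, 0],
    measurement noise variance r)."""
    s = P[0][0] + r                      # innovation variance
    k0, k1 = P[0][0] / s, P[1][0] / s    # Kalman gain K = P H^T / s
    y = z - x[0]                         # innovation
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P
```

Running the two steps in alternation, with the high-rate inertial data between the slower vision fixes, is the standard fusion loop the abstract describes.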

Application of the Laser Vision Sensor for Corrugated Type Workpiece

  • Lee, Ji-Hyoung;Kim, Jae-Gwon;Kim, Jeom-Gu;Park, In-Wan;Kim, Hyung-Shik
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.499-503 / 2004
  • This application-oriented paper describes an automated welding carriage system to weld a thin corrugated workpiece with a welding seam tracking function. Hyundai Heavy Industries Corporation has developed an automatic welding carriage system, which utilizes a pulsed plasma arc welding process for corrugated sheets. It can achieve welding speeds more than two times faster than a traditional TIG-based welding system. The aim of this development is to increase productivity by using automatic plasma welding carriage systems, to track the weld seam line automatically using a vision sensor, and finally to make welding convenient for the operator. In this paper, a robust image processing algorithm and a distance-based tracking algorithm are introduced for corrugated workpiece welding. The automatic welding carriage system is controlled by a programmable logic controller (PLC), and the automatic welding seam tracking system is controlled by an industrial personal computer (IPC) equipped with an embedded OS. The system was tested on an actual workpiece to show the feasibility and performance of the proposed algorithm and to confirm the reliability of the developed controller.


영상 내 사람의 검출을 위한 에지 기반 방법 (Edge-based Method for Human Detection in an Image)

  • 도용태;반종희
    • 센서학회지 / Vol. 25, No. 4 / pp.285-290 / 2016
  • Human sensing is an important but challenging technology. Unlike other methods for sensing humans, a vision sensor has many advantages, and there has been active research in automatic human detection in camera images. The combination of Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is currently one of the most successful methods in vision-based human detection. However, extracting HOG features from an image is computationally intensive, and it is thus hard to employ the HOG method in real-time processing applications. This paper describes an efficient solution to this speed problem of the HOG method. Our method obtains edge information from an image and finds candidate regions where humans are likely to exist based on the distribution pattern of the detected edge points. The HOG features are then extracted only from the candidate image regions. Since the complex HOG processing is done adaptively under the guidance of the simpler edge detection step, human detection can be performed quickly. Experimental results show that the proposed method is effective on various images.
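The two-stage idea, a cheap edge pass that gates the expensive HOG pass, can be sketched as follows. The gradient threshold, window size, and edge-count rule are illustrative choices, not the paper's parameters, and the crude gradient test stands in for a real edge detector such as Sobel or Canny:

```python
def edge_map(img, thresh=40):
    """Crude gradient-magnitude edge detector on a 2-D grayscale list."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if abs(gx) + abs(gy) >= thresh:
                edges[y][x] = 1
    return edges

def candidate_windows(edges, win=(8, 4), stride=2, min_edges=10):
    """Slide a (height, width) window over the edge map and keep only the
    positions dense enough in edge points to plausibly contain a person.
    HOG+SVM would then run only on these candidates, not the whole image."""
    h, w = len(edges), len(edges[0])
    wh, ww = win
    out = []
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            count = sum(edges[y + dy][x + dx]
                        for dy in range(wh) for dx in range(ww))
            if count >= min_edges:
                out.append((x, y))
    return out
```

In a full pipeline, the HOG descriptor and SVM classifier would be evaluated only at the returned (x, y) positions, which is where the speed-up over dense scanning comes from.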

이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정 (Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment)

  • 진태석;이민중;이장명
    • 제어로봇시스템학회논문지 / Vol. 13, No. 5 / pp.434-443 / 2007
  • Presently, the exploration of an unknown environment is an important task for the new generation of mobile service robots, and mobile robots are navigated by a number of methods, using sensing systems such as sonar or vision. To fully utilize the strengths of both the sonar and visual sensing systems, this paper presents a technique for localization of a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The mobile robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders in terms of structural features. For ultrasonic sensors, these features carry range information in the form of circular arcs, generally called RCDs (Regions of Constant Depth). Localization is the continual provision of knowledge of position, deduced from the robot's a priori position estimate. The environment of the robot is modeled as a two-dimensional grid map. We define a vision-based environment recognition method and a physically based sonar sensor model, and employ an extended Kalman filter to estimate the position of the robot. The performance and simplicity of the approach are demonstrated with the results produced by sets of experiments using a mobile robot.
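The grid-map update from a single range reading can be sketched minimally. This ignores the RCD arc uncertainty and the EKF machinery and simply marks the cell implied by one range/bearing return from a known pose; the pose, bearing, and cell size are illustrative:

```python
import math

def update_grid(grid, pose, bearing, rng, cell=0.25):
    """Mark the grid cell hit by one sonar return as occupied.
    pose = (x, y, theta) of the robot in meters/radians; `bearing` is the
    sensor angle relative to the robot heading; `rng` is the measured range.
    Returns the (row, col) index of the updated cell."""
    x, y, th = pose
    # Project the return into world coordinates.
    gx = x + rng * math.cos(th + bearing)
    gy = y + rng * math.sin(th + bearing)
    i, j = int(gy / cell), int(gx / cell)
    if 0 <= i < len(grid) and 0 <= j < len(grid[0]):
        grid[i][j] = 1
    return i, j
```

A physically based sonar model would spread this update over the whole RCD arc and keep occupancy probabilities rather than binary marks; the sketch only shows the geometry of a single update.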

신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법 (Road marking classification method based on intensity of 2D Laser Scanner)

  • 박성현;최정희;박용완
    • 대한임베디드공학회논문지 / Vol. 11, No. 5 / pp.313-323 / 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems for road marking detection using vision sensors and 3D laser scanners. However, vision sensors have the weakness that detection is difficult under severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and processing time is long because of the large amount of data produced by both vision sensors and 3D laser scanners. Accordingly, this paper proposes a road marking detection and classification method using a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance data and intensity data acquired through the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with a 3D laser scanner-based method, demonstrating the feasibility of road marking classification using a single 2D laser scanner.
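The intensity cue can be illustrated with a toy per-scan classifier: road paint is retroreflective, so laser returns from markings are markedly brighter than returns from asphalt. The threshold and the segment-count rules below are invented for illustration and are far simpler than the paper's accumulation-based method:

```python
def extract_marking_segments(intensities, thresh=150):
    """Return (start, end) index pairs of runs in one scan line whose laser
    return intensity exceeds `thresh` -- candidate paint segments."""
    segs, start = [], None
    for i, v in enumerate(intensities):
        if v >= thresh and start is None:
            start = i                      # a bright run begins
        elif v < thresh and start is not None:
            segs.append((start, i - 1))    # the run ends
            start = None
    if start is not None:
        segs.append((start, len(intensities) - 1))
    return segs

def classify_scan(intensities, thresh=150):
    """Toy classification by the number of bright segments in one scan."""
    n = len(extract_marking_segments(intensities, thresh))
    if n == 0:
        return "asphalt"
    if n == 1:
        return "lane line"
    return "crosswalk"  # several parallel stripes cross one scan line
```

The paper's method additionally accumulates consecutive scans into a 2-D distance/intensity image before classifying, which makes the marking shape, not just its per-scan profile, available.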

모바일 로봇 자세 안정화를 위한 칼만 필터 기반 센서 퓨전 (Kalman Filter-based Sensor Fusion for Posture Stabilization of a Mobile Robot)

  • 장태호;김영식;경민영;이현빈;윤동환
    • 대한기계학회논문집A / Vol. 40, No. 8 / pp.703-710 / 2016
  • In robotics research, accurately estimating the actual position of a mobile robot is important for its motion control. To this end, this study improves the localization of a robot by fusing two different sensor measurements with a Kalman filter. The two measurements fused by the Kalman filter are the robot's global position coordinates (x, y) measured from camera images, and its linear and angular velocities measured by encoders attached to the robot's wheels. The robot position computed by the Kalman filter is then fed back for posture stabilization of the mobile robot, improving motion control performance. Finally, the proposed sensor-fusion localization technique and motion controller were applied to a real robot and verified experimentally. In addition, by comparing the case of using a single sensor as motion-control feedback with the case of using the Kalman filter-fused position, the performance improvement from the Kalman filter-based sensor fusion was confirmed.
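The fusion cycle described above can be sketched with a unicycle dead-reckoning model and a fixed-gain correction. This is a stand-in for the paper's full Kalman filter, which would compute the gain from the propagated covariances; the gain, velocities, and time step below are illustrative:

```python
import math

def predict_pose(pose, v, w, dt):
    """Dead-reckoning prediction from wheel-encoder linear velocity v and
    angular velocity w (unicycle model); pose = (x, y, theta)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def correct_pose(pose, cam_xy, gain=0.5):
    """Blend in the camera's global (x, y) fix with a fixed Kalman-style
    gain; heading is left to the encoder in this simplified sketch."""
    x, y, th = pose
    cx, cy = cam_xy
    return (x + gain * (cx - x), y + gain * (cy - y), th)
```

Even with a biased encoder, alternating predict and correct keeps the position error bounded near (1 - gain) times the per-step drift, which is the qualitative benefit the abstract reports over single-sensor feedback.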

Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems / Vol. 17, No. 6 / pp.935-956 / 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms have been developed to overcome the problems in conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three image processing algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are simultaneously measured by the vision-based system and a magnetostrictive displacement sensor (MDS) during laboratory shaking table tests of a three-story steel frame model. The comparative analysis results indicate that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement using the three different image processing algorithms. Field application experiments are also carried out on an arch bridge to measure displacement influence lines during loading tests, validating the effectiveness of the vision-based system.