• Title/Summary/Keyword: Vision sensor


Rule-Based Filter on Misidentification of Vision Sensor for Robot Knowledge Instantiation (Vision Sensor를 사용하는 로봇지식 관리를 위한 Rule 기반의 인식 오류 검출 필터)

  • Lee, Dae-Sic;Lim, Gi-Hyun;Suh, Il-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.349-350
    • /
    • 2008
  • An intelligent robot perceives its surroundings to model representable objects and spaces, and performs missions by combining these models with the actions it can execute. For this purpose, a robot knowledge framework was used that represents objects, spaces, contexts, and actions with an ontology and provides various inference methods through Java-based rules for specific missions. This framework guarantees that generated instances are consistent with the class and property values of the data and do not contradict other data. To use the framework effectively, the creation of complete ontology instances must be ensured. In real environments, however, when a robot recognizes objects through a vision sensor, recognition errors such as false positives and false negatives occur. To compensate for this, this paper proposes a rule-based recognition-error detection filter that enables stable instance management even under object recognition errors, by considering the spatial and temporal relations between objects together with the recognition rate and properties of each object.

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong;Ock, Seung-Ho;Kim, San;Kim, Dong-Hwan
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.2019-2023
    • /
    • 2005
  • In this paper, an autonomous omni-directional cleaning robot that recognizes obstacles and a battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information along with appropriate algorithms. Three omni-directional wheels let the robot move in any direction, enabling faster maneuvering than a simple tracked robot. The robot system transfers commands and image data through Bluetooth wireless modules so that it can be operated from a remote location. The robot vision, combined with the sensor data, allows the robot to behave autonomously. Autonomous battery-charger searching is implemented by map building, which overcomes the error due to wheel slip by combining camera and sensor information.

A Study on the Peg-in-hole of chamferless Parts using Force/Moment/Vision Sensor (힘/모멘트/비전센서를 사용한 챔퍼가 없는 부품의 삽입작업에 관한 연구)

  • Back, Seung-Hyop;Lim, Dong-Jin
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.119-122
    • /
    • 2001
  • This paper discusses the peg-in-hole task for chamferless parts using force/moment/vision sensors. The directional errors occurring during the task are categorized into two cases according to the degree of the initial error, and a separate mechanical analysis is carried out for each case. This paper proposes an algorithm that reduces the initial directional error using digital images acquired from a hand-eye vision sensor, and that continues the task even with a large directional error by adjusting the error through digital image processing. The effectiveness of the algorithm has been demonstrated through experiments using a 5-axis robot equipped with a developed controller, a force/moment sensor, and a color digital camera on its hand.

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.1
    • /
    • pp.31-40
    • /
    • 2010
  • For weapon cueing and head-mounted displays (HMDs), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce the computation time and improve the performance of the vision processing, the structure estimation and the motion estimation are separated. The structure estimation tracks features that are part of the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested with synthetic and real data, and the results show that the sensor fusion is successful.
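
The predict/update cycle underlying such a filter can be sketched in a few lines. This is a minimal one-dimensional constant-velocity Kalman step, not the authors' actual helmet-tracking filter; the state layout, noise matrices, and measurement model are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a Kalman filter with linear measurement."""
    # Predict state and covariance forward through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the vision measurement z
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.01
F = np.array([[1, dt], [0, 1]])        # 1-D position-velocity model
Q = 1e-4 * np.eye(2)                   # process noise (assumed)
H = np.array([[1.0, 0.0]])             # vision measures position only
R = np.array([[1e-2]])                 # measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([0.05]), F, Q, H, R)
```

In the nonlinear (EKF) case, `F` and `H` would be Jacobians of the motion and camera models evaluated at the current estimate.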

Position Control of an Object Using Vision Sensor (비전 센서를 이용한 물체의 위치 제어)

  • Ha, Eun-Hyeon;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.2
    • /
    • pp.49-56
    • /
    • 2011
  • In recent years, owing to the development of image processing technology, research on building control systems with vision sensors has been stimulated. However, the time delay must be considered, because it takes time to obtain the result of the image processing, and this can be an obstacle to real-time control. In this paper, the locations of two objects are recognized from a single image acquired by a camera using a pattern matching technique, and the result is fed back to a position control system. It is also shown that the time delay problem can be overcome with a PID controller. A number of experiments were done to show the validity of this study.
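
The control idea can be illustrated with a minimal discrete PID loop driven by a delayed measurement, standing in for the image-processing latency. The gains, plant model, and delay length below are illustrative assumptions, not values from the paper.

```python
from collections import deque

class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# First-order plant whose position is measured with a 3-sample delay,
# mimicking the latency of the vision feedback.
pid = PID(kp=1.2, ki=0.4, kd=0.05, dt=0.02)
target, pos = 1.0, 0.0
delayed = deque([0.0] * 3, maxlen=3)   # queue of delayed measurements

for _ in range(500):
    meas = delayed[0]                  # oldest (delayed) measurement
    u = pid.update(target - meas)
    pos += 0.02 * (u - pos)            # first-order plant response
    delayed.append(pos)

# After enough steps, the position settles near the target despite the delay.
```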

Inspection of Weld Bead using High Speed Laser Vision Sensor

  • Lee, H.;Ahn, S.;Sung, K.;Rhee, S.
    • International Journal of Korean Welding Society
    • /
    • v.3 no.2
    • /
    • pp.53-59
    • /
    • 2003
  • Visual inspection using a laser vision sensor was proposed for fast and economical inspection and was verified experimentally. Welding is one of the most important manufacturing processes for the automotive and electronics industries as well as heavy industry, and the weld zone influences the reliability of the products. Weld inspection tests are either destructive or non-destructive. Although a destructive test is much more reliable, the product is destroyed, so non-destructive tests such as ultrasonic or X-ray testing have been used to overcome this problem. However, these tests are not suited to real-time inspection.

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.68-73
    • /
    • 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems such as visual surveillance systems and intelligent transport systems (ITSs). In particular, distributed vision systems are required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor of which the system is composed can perform exact segmentation of a target using color and motion information, as well as real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent), and we solve the problem of matching target identities during handover with a protocol-based approach. For this we propose the identified contract net (ICN) protocol, which is independent of the number of vision agents and requires no calibration between them; it therefore improves the speed, scalability, and modularity of the system. We apply the ICN protocol in the constructed ubiquitous vision system, and several experiments show reliable results, with the ICN protocol operating successfully.
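
A contract-net style handover of this kind can be caricatured in a few lines: one agent announces a target that is leaving its view, the other agents bid with a match score, and the best bidder takes over tracking. The agent names, feature vectors, and similarity score below are illustrative assumptions, not the paper's actual protocol messages.

```python
def similarity(a, b):
    """Negative squared distance between feature tuples (higher is better)."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

class VisionAgent:
    def __init__(self, name, observed_features):
        self.name = name
        self.observed = observed_features   # targets currently in view

    def bid(self, target_features):
        """Bid with the best match score among currently seen targets."""
        if not self.observed:
            return None
        return max(similarity(f, target_features) for f in self.observed)

def handover(target_features, contractors):
    """Announce the leaving target; award it to the best-bidding agent."""
    bids = {a.name: a.bid(target_features) for a in contractors}
    bids = {n: b for n, b in bids.items() if b is not None}
    return max(bids, key=bids.get) if bids else None

# A target (e.g. a mean-RGB feature) leaving one agent's field of view:
target = (200, 30, 40)
agents = [VisionAgent("B", [(198, 35, 45), (90, 90, 90)]),
          VisionAgent("C", [(10, 200, 10)])]
winner = handover(target, agents)   # the agent seeing the closest match wins
```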

Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor (k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술)

  • Bang, Hyuntae;Yu, Byeongjun;Jeon, Haemin
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.35 no.5
    • /
    • pp.259-266
    • /
    • 2022
  • In this study, we developed a deep learning and vision sensor-based assembly performance evaluation method for prefabricated steel structures. The assembly parts were segmented using a modified version of the receptive field block convolution module inspired by the eccentric function of the human visual system. The quality of the assembly was evaluated by detecting the bolt holes in the segmented assembly part and calculating the bolt hole positions. To validate the performance of the evaluation, models of standard and defective assembly parts were produced using a 3D printer. The assembly part segmentation network was trained on 3D model images captured from a vision sensor. The bolt hole positions in the segmented assembly image were calculated using image processing techniques, and the assembly performance evaluation using the k-nearest neighbor algorithm was verified. The experimental results show that the assembly parts were segmented with high precision, and the assembly performance based on the positions of the bolt holes in the detected assembly part was evaluated with a classification error of less than 5%.
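
The final classification step can be sketched as a plain k-nearest-neighbor vote over bolt-hole position features. The feature vectors and labels below are made-up illustrative data, not measurements from the paper.

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

# Hypothetical bolt-hole offset features (dx, dy in mm) and labels:
# 0 = within tolerance, 1 = defective assembly.
train_x = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
                    [2.1, 1.9], [1.8, 2.2], [2.3, 2.0]])
train_y = np.array([0, 0, 0, 1, 1, 1])

label = knn_classify(train_x, train_y, np.array([0.15, 0.1]))
# A query near the small-offset cluster is voted "within tolerance".
```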

Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment (자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발)

  • Ju-Young Kim;Jae-Ryul Park
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.5
    • /
    • pp.320-327
    • /
    • 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied for mounting vehicle glass on a car body. The sealer is a liquid or paste-like material that provides adhesion, such as sealing and waterproofing, when vehicle parts are mounted and assembled on a car body. The system installed in the existing vehicle design parts line does not detect the sealer in the glass rotation section and takes a long time to process. To solve this problem, this study developed a line laser camera sensor and an FPGA vision signal processing module. The line laser camera sensor was developed so that the resolution and speed of the camera for data acquisition can be adjusted according to the irradiation angle of the laser, and its mountability in the overall system was considered to prevent interference with the sealer ejection machine. In addition, a vision signal processing module based on the Zynq-7020 FPGA chip was developed to improve the processing speed of the algorithm that converts the sealer shape image acquired from a 2D camera into a profile and calculates the width and height of the sealer from the converted profile. The performance of the developed sealer application inspection system was verified in an experimental environment identical to an actual automobile production line, and the experimental results confirmed that the sealer application inspection performs at a level that satisfies the requirements of automotive field standards.
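
The width/height computation on a laser-line profile is simple enough to sketch: threshold the height profile, take the lateral extent of the bead as its width and the peak as its height. The synthetic profile, sample pitch, and threshold below are illustrative assumptions, not the FPGA implementation.

```python
import numpy as np

def bead_width_height(profile, pitch_mm=0.1, threshold=0.25):
    """Return (width_mm, height_mm) of the bead rising above `threshold`."""
    above = np.where(profile > threshold)[0]
    if above.size == 0:
        return 0.0, 0.0
    width = (above[-1] - above[0] + 1) * pitch_mm   # lateral extent of bead
    height = float(profile.max())                   # peak bead height
    return width, height

# Synthetic triangular bead profile, 0.1 mm per sample, 1.5 mm peak.
x = np.linspace(-2, 2, 41)
profile = np.clip(1.5 - np.abs(x), 0.0, None)

w, h = bead_width_height(profile)
```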

The Influence of the Reflected Arc Light on Vision Sensors for Welding Process Automation (물체의 반사성질이 용접자동화용 시각센서의 아크노이즈에 미치는 영향에 관한 연구)

  • Lee, Cheol-Won;Na, Suck-Joo
    • Journal of Welding and Joining
    • /
    • v.13 no.1
    • /
    • pp.115-126
    • /
    • 1995
  • Vision sensors based on optical triangulation have been widely used in automatic welding systems in various ways, but their reliability is seriously affected by the presence of arc noise. In this work, the reliability of vision sensors was analyzed with respect to the variation of arc noise by considering the reflectance of the base metal. First, the reflection properties of the base metal were modeled using the bidirectional reflectance distribution function (BRDF), and the variation of the reflected arc intensity was then formulated for various configurations of the torch, base metal, and sensor. Experimental data on the gray level of the reflected arc light were obtained for two materials, mild steel and stainless steel. The results calculated from the proposed model were found to be in good agreement with the experimental data.
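
The qualitative behavior such a reflectance analysis captures can be sketched with a generic diffuse-plus-specular model. The Lambertian plus Phong-lobe form and all coefficients below are illustrative assumptions, not the BRDF model from the paper.

```python
import math

def reflected_intensity(incident, theta_i, theta_r, kd=0.3, ks=0.6, n=20):
    """Reflected intensity for incidence angle theta_i and viewing angle
    theta_r (radians, measured from the surface normal)."""
    diffuse = kd * math.cos(theta_i)
    # The specular lobe peaks when the viewing angle mirrors the incidence angle.
    specular = ks * max(0.0, math.cos(theta_r - theta_i)) ** n
    return incident * (diffuse + specular)

# Intensity seen near the mirror direction vs. well away from it.
near = reflected_intensity(1.0, math.radians(45), math.radians(45))
far = reflected_intensity(1.0, math.radians(45), math.radians(-30))
# near >> far: a camera placed near the specular direction of a shiny
# base metal picks up far more reflected arc light (arc noise).
```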
