Title/Summary/Keyword: Vision data

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong;Ock, Seung-Ho;Kim, San;Kim, Dong-Hwan
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.2019-2023 / 2005
  • In this paper, an autonomous omni-directional cleaning robot that recognizes obstacles and a battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information along with appropriate algorithms. Three omni-directional wheels allow the robot to move in any direction, enabling faster maneuvering than a simple track-type robot. The robot system transfers commands and image data through Bluetooth wireless modules so that it can be operated from a remote place. The robot vision, combined with the sensor data, enables the robot to behave autonomously. Autonomous battery-charger searching is implemented using map building, which overcomes the error caused by wheel slip, together with camera and sensor information.
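
The three-wheel omni-directional drive mentioned in the abstract follows a standard inverse-kinematics mapping from body velocity to wheel speeds. Below is a minimal Python sketch of that mapping; the wheel and base radii are illustrative assumptions, not values from the paper.

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_radius=0.03, base_radius=0.15):
    """Map a desired body velocity (vx, vy [m/s], omega [rad/s]) to the
    angular speeds of three omni wheels mounted 120 degrees apart."""
    wheel_angles = [math.radians(a) for a in (0.0, 120.0, 240.0)]
    speeds = []
    for theta in wheel_angles:
        # Linear speed at the wheel contact point along its rolling direction.
        v = -math.sin(theta) * vx + math.cos(theta) * vy + base_radius * omega
        speeds.append(v / wheel_radius)  # convert to wheel angular speed [rad/s]
    return speeds

# Example: translate diagonally while rotating slowly.
print(omni_wheel_speeds(vx=0.2, vy=0.1, omega=0.3))
```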

Learning Fuzzy Rules for Pattern Classification and High-Level Computer Vision

  • Rhee, Chung-Hoon
    • The Journal of the Acoustical Society of Korea / v.16 no.1E / pp.64-74 / 1997
  • In many decision-making systems, rule-based approaches are used to solve complex problems in the areas of pattern analysis and computer vision. In this paper, we present methods for generating fuzzy IF-THEN rules automatically from training data for pattern classification and high-level computer vision. The rules are generated by constructing minimal approximate fuzzy aggregation networks and then training the networks using gradient descent methods. The training data that represent features are treated as linguistic variables that appear in the antecedent clauses of the rules. Methods to generate the corresponding linguistic labels (values) and their membership functions are presented. In addition, an inference procedure is employed to deduce conclusions from information presented to our rule base. Two experimental results involving synthetic and real data are given.
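
As a rough illustration of how such rules fire, the sketch below encodes one fuzzy IF-THEN rule using triangular membership functions and a min operator standing in for an aggregation node; the linguistic labels and breakpoints are invented for the example, not taken from the paper.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fire_rule(feature1, feature2):
    """Evaluate one fuzzy IF-THEN rule:
       IF feature1 is LOW AND feature2 is HIGH THEN class = 1.
       Returns the rule's firing strength in [0, 1]."""
    low = triangular(feature1, -0.5, 0.0, 0.5)   # illustrative label "LOW"
    high = triangular(feature2, 0.5, 1.0, 1.5)   # illustrative label "HIGH"
    return min(low, high)                        # min as the aggregation node

print(fire_rule(0.2, 0.8))  # high firing strength (0.6)
print(fire_rule(0.6, 0.3))  # rule does not fire (0.0)
```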

Localization of Mobile Robot Using Active Omni-directional Ranging System (능동 전방향 거리 측정 시스템을 이용한 이동로봇의 위치 추정)

  • Ryu, Ji-Hyung;Kim, Jin-Won;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.5 / pp.483-488 / 2008
  • An active omni-directional ranging system combining omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, thanks to the laser structured light, and computational efficiency, since a single shot image contains 360° of environment information from the omni-directional vision. The omni-directional range data represent a local distance map at a certain position in the workspace. In this paper, we propose a matching algorithm that registers the local distance map against a given global map database, thereby localizing a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative position and orientation of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and the matching algorithm is verified through experiments.
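
A simple way to picture segment-based matching is to score each candidate robot pose by transforming the local segments into the global frame and comparing midpoints and orientations. The brute-force sketch below illustrates that idea; it is not the paper's actual algorithm, and the cost function is an assumption.

```python
import math

def seg_params(seg):
    """Return (midpoint, orientation) of a 2D segment ((x1,y1),(x2,y2))."""
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2, (y1 + y2) / 2), math.atan2(y2 - y1, x2 - x1)

def match_score(local_segs, global_segs, pose):
    """Score how well local (robot-frame) segments align with the global
    map under a candidate pose (x, y, heading). Lower is better."""
    x, y, th = pose
    cost = 0.0
    for seg in local_segs:
        # Transform the local segment into the global frame.
        pts = []
        for (lx, ly) in seg:
            gx = x + lx * math.cos(th) - ly * math.sin(th)
            gy = y + lx * math.sin(th) + ly * math.cos(th)
            pts.append((gx, gy))
        (mx, my), ang = seg_params(tuple(pts))
        # Distance to the closest global segment in (midpoint, orientation).
        cost += min(
            math.hypot(mx - gm[0], my - gm[1]) + abs(math.sin(ang - ga))
            for gm, ga in map(seg_params, global_segs)
        )
    return cost

local = [((0.0, 0.0), (1.0, 0.0))]
global_map = [((2.0, 1.0), (3.0, 1.0))]
print(match_score(local, global_map, pose=(2.0, 1.0, 0.0)))  # ~0: good fit
```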

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences / v.11 no.1 / pp.31-40 / 2010
  • For weapon cueing and head-mounted displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce the computation time and improve the performance of the vision processing, we separate structure estimation from motion estimation. The structure estimation tracks features that are part of the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.
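
The predict/update structure of such an EKF fits in a few lines. The sketch below fuses a motion-model prediction with a vision position measurement for a deliberately simplified 1-D position/velocity state (the paper's tracker estimates full helmet pose with the same structure); the noise levels are arbitrary.

```python
import numpy as np

def ekf_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict/update cycle fusing a motion-model prediction
    (inertial propagation in the real system) with a vision measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    H = np.array([[1.0, 0.0]])                 # vision measures position only
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)
    # Update with the vision measurement.
    S = H @ P @ H.T + r
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for k in range(5):
    x, P = ekf_step(x, P, z=np.array([0.1 * k]), dt=0.01)
print(x)  # estimated position and velocity
```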

Radar and Vision Sensor Fusion for Primary Vehicle Detection (레이더와 비전센서 융합을 통한 전방 차량 인식 알고리즘 개발)

  • Yang, Seung-Han;Song, Bong-Sob;Um, Jae-Young
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.639-645 / 2010
  • This paper presents a sensor fusion algorithm that recognizes the primary vehicle by fusing radar and monocular vision data. In general, most commercial radars may lose track of the primary vehicle, i.e., the closest preceding vehicle in the same lane, when it stops or travels alongside other preceding vehicles in an adjacent lane with similar velocity and range. To compensate for this performance degradation of the radar, vehicle detection information from the vision sensor and the path predicted from ego-vehicle sensors are combined for target classification. The target classification then works with probabilistic association filters to track the primary vehicle. Finally, the performance of the proposed sensor fusion algorithm is validated using field test data from a highway.
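
To make the classification step concrete, the sketch below confirms radar tracks with vision detections and keeps only targets near the predicted ego path before picking the closest one. It uses a simple distance gate in place of the paper's probabilistic association filter, and the gate and lane-width values are assumptions.

```python
def select_primary(radar_tracks, vision_laterals, lane_center_fn, gate=2.0):
    """Pick the primary (closest in-lane preceding) vehicle.

    radar_tracks:    list of (range_m, lateral_m) radar targets
    vision_laterals: lateral positions [m] of vision-detected vehicles
    lane_center_fn:  predicted lateral ego-path offset at a given range."""
    fused = []
    for rng, lat in radar_tracks:
        # Confirm the radar track with a nearby vision detection.
        confirmed = any(abs(lat - v) < gate for v in vision_laterals)
        # Keep only targets near the predicted ego path (same lane).
        in_lane = abs(lat - lane_center_fn(rng)) < 1.8  # ~half lane width
        if confirmed and in_lane:
            fused.append((rng, lat))
    return min(fused, default=None)  # closest confirmed in-lane target

primary = select_primary([(35.0, 0.2), (28.0, 3.5)], [0.3, 3.4],
                         lane_center_fn=lambda r: 0.0)
print(primary)  # (35.0, 0.2): the in-lane target, not the closer adjacent one
```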

k-path diffusion method for Multi-vision Display Technique among Smart Devices (k-path 확산 방법을 이용한 스마트 디바이스 간 멀티비전 디스플레이 기술)

  • Ren, Hao;Kim, Paul;Kim, Sangwook
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.1183-1186 / 2014
  • Our research differs from the traditional approach of grouping large LED screens together to form a multi-vision display. In this paper, we propose a k-path diffusion method to build connections between smart devices and find an optimal data transmission path. In the second half of this paper, we show through a practical application that this technique transmits data successfully and achieves a simple multi-vision effect. The technique exploits the features of smart devices and Wi-Fi P2P; these features improve the system's dynamic and decentralized processing ability and give the technique high scalability.
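
Finding a transmission path through a Wi-Fi P2P connectivity graph can be sketched with a plain breadth-first search, as below. This hop-optimal single-path version is only a stand-in for the paper's k-path diffusion (which is not specified here), and the device graph is invented.

```python
from collections import deque

def shortest_path(adjacency, src, dst):
    """Breadth-first search for a transmission path between two smart
    devices in a Wi-Fi P2P connectivity graph."""
    queue, parent = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:          # walk parents back to src
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adjacency.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                              # dst unreachable

# Four devices; A and D are not in direct Wi-Fi P2P range.
devices = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(shortest_path(devices, "A", "D"))  # ['A', 'B', 'D']
```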

Efficient Digitizing in Reverse Engineering By Sensor Fusion (역공학에서 센서융합에 의한 효율적인 데이터 획득)

  • Park, Young-Kun;Ko, Tae-Jo;Kim, Hrr-Sool
    • Journal of the Korean Society for Precision Engineering / v.18 no.9 / pp.61-70 / 2001
  • This paper introduces a new digitization method with sensor fusion for shape measurement in reverse engineering. Digitization can be classified into contact and non-contact types according to the measurement device; the key requirements are speed and accuracy, with the non-contact type excelling in speed and the contact type in accuracy. Sensor fusion in digitization incorporates the merits of both types so that the system can be automated. First, a non-contact vision system rapidly acquires coarse 3D point data. This step is needed to identify and localize the object, which sits at an unknown position on the table. Second, accurate 3D point data are automatically obtained with a scanning probe guided by the previously measured coarse 3D point data. In this research, a great number of equally spaced measuring points were commanded along the lines acquired by the vision system. Finally, the digitized 3D point data are approximated by a rational B-spline surface equation, and the free-form surface information can be transferred to a commercial CAD/CAM system via IGES translation in order to machine the modeled geometric shape.
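
The "equally spaced points along a vision-acquired line" step can be pictured as polyline resampling. The following sketch generates probe targets at a fixed arc-length step; the step size and contour are made up for the example.

```python
import math

def resample_polyline(points, step):
    """Generate equally spaced measuring points along a polyline acquired
    by the vision system, to be fed to the scanning probe."""
    out = [points[0]]
    carry = 0.0                               # arc length left over from the last segment
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = step - carry
        while d <= seg:
            t = d / seg                       # linear interpolation parameter
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += step
        carry = seg - (d - step)
    return out

coarse = [(0, 0), (10, 0), (10, 5)]           # coarse vision-measured contour
print(resample_polyline(coarse, step=2.5))    # probe targets every 2.5 mm
```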

A Study on the 3-dimensional feature measurement system for OMM using multiple-sensors (멀티센서 시스템을 이용한 3차원 형상의 기상측정에 관한 연구)

  • 권양훈;윤길상;조명우
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.10a / pp.158-163 / 2002
  • This paper presents a multiple-sensor system for rapid and high-precision coordinate data acquisition in the OMM (on-machine measurement) process. In this research, three sensors (a touch probe, a laser sensor, and a vision sensor) are integrated to obtain more accurate measuring results. The touch-type probe has high accuracy but is time-consuming. The vision sensor can rapidly acquire many point data over a spatial range, but its accuracy is lower than that of the other sensors, and it cannot acquire data for invisible areas. The laser sensor has intermediate accuracy and measuring speed and can acquire data for sharp or rounded edges and for features with very small holes and/or grooves; however, its system structure constrains its usable range. In this research, a new optimum sensor integration method for OMM is proposed that integrates the multiple sensors to accomplish more effective inspection planning. To verify the effectiveness of the proposed method, simulations and experiments are performed and the results are analyzed.
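
One way to think about such sensor integration is a rule that assigns each feature to the sensor whose trade-offs fit it. The sketch below encodes the accuracy/size heuristics from the abstract; the thresholds and feature descriptors are illustrative assumptions, not values from the paper.

```python
def assign_sensor(feature):
    """Choose a measurement sensor for an OMM feature: probe for tight
    tolerances, laser for small or sharp features, vision for fast
    coarse capture."""
    if feature.get("tolerance_mm", 1.0) <= 0.01:
        return "touch probe"        # highest accuracy, slowest
    if feature.get("min_size_mm", 100.0) < 1.0 or feature.get("sharp_edge"):
        return "laser"              # small holes/grooves, sharp edges
    return "vision"                 # rapid coarse acquisition

for f in [{"tolerance_mm": 0.005},
          {"min_size_mm": 0.5},
          {"tolerance_mm": 0.1, "min_size_mm": 20.0}]:
    print(assign_sensor(f))
```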

Real-time Reflection Light Detection Algorithm using Pixel Clustering Data (Pixel 군집화 Data를 이용한 실시간 반사광 검출 알고리즘)

  • Hwang, Dokyung;An, Jongwoo;Kang, Hosun;Lee, Jangmyung
    • The Journal of Korea Robotics Society / v.14 no.4 / pp.301-310 / 2019
  • A new algorithm is proposed to detect reflected light regions, which act as disturbances in a real-time vision system. There have been several attempts to detect reflected light regions. The conventional mathematical approach requires many complex processes, so it is not suitable for a real-time vision system; on the other hand, when a simple detection process is applied, the reflected light region cannot be detected accurately. Therefore, detecting reflected light regions in a real-time vision system requires an algorithm that is as simple and as accurate as possible. To extract the reflected light, the proposed algorithm adopts several filter equations and clustering processes in the HSI (hue, saturation, intensity) color space. The algorithm also uses pre-defined reflected light data generated through the clustering processes to keep itself simple. To demonstrate the effectiveness of the proposed algorithm, several images with reflected regions were used, and the reflected regions were detected successfully.
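
The HSI intuition is that specular reflections are very bright and nearly colorless. The sketch below thresholds intensity and saturation to flag candidate pixels; the thresholds are made up, and the paper's clustering refinement is omitted.

```python
import numpy as np

def reflection_mask(rgb, i_thresh=0.85, s_thresh=0.15):
    """Flag likely reflected-light pixels in an RGB image using the HSI
    color space: high intensity combined with low saturation."""
    rgb = rgb.astype(np.float64) / 255.0
    i = rgb.mean(axis=2)                               # intensity
    s = 1.0 - rgb.min(axis=2) / np.maximum(i, 1e-6)    # saturation
    return (i > i_thresh) & (s < s_thresh)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1] = (250, 250, 245)            # a bright, washed-out pixel
print(reflection_mask(img))            # True only at (1, 1)
```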

Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae;Fukuda, Yoshio;Shinozuka, Masanobu;Cho, Soojin;Yun, Chung-Bang
    • Smart Structures and Systems / v.3 no.3 / pp.373-384 / 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, it is not easy to measure the displacement of civil infrastructure, since conventional sensors need a reference point, and the reference point can be inaccessible due to geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load carrying capacities of a steel plate-girder bridge obtained from conventional sensors and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system is developed using a master/slave architecture with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test.
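
At its core, such a system tracks a target pattern across frames and converts its pixel motion to engineering units via a calibrated scale. The sketch below does this with a brute-force template search on synthetic frames; real systems use fast correlation and sub-pixel refinement, and the 0.5 mm/px scale is invented.

```python
import numpy as np

def track_target(frame, template, scale_mm_per_px):
    """Locate a marker template in a grayscale frame by exhaustive SSD
    search and convert its pixel position to displacement units."""
    th, tw = template.shape
    H, W = frame.shape
    best, best_xy = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            ssd = np.sum((frame[y:y+th, x:x+tw] - template) ** 2)
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy[0] * scale_mm_per_px, best_xy[1] * scale_mm_per_px

# A marker that moved 3 px to the right between frames -> 1.5 mm at 0.5 mm/px.
f0 = np.zeros((20, 20)); f0[8:12, 5:9] = 1.0
f1 = np.zeros((20, 20)); f1[8:12, 8:12] = 1.0
tmpl = f0[8:12, 5:9]
x0, _ = track_target(f0, tmpl, 0.5)
x1, _ = track_target(f1, tmpl, 0.5)
print(x1 - x0)  # 1.5 mm horizontal displacement
```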