• Title/Abstract/Keyword: Vision Systems

영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발 (Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition)

  • 송희준;이선구;강태구;김동원;서삼준;박귀태
    • The Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • KIEE 2006 Symposium Proceedings, Information and Control Division
    • /
    • pp.123-125
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but for use in real life. In this work, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for a human-robot interaction (HRI) system. To carry out given tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
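
The abstract's "enhanced template matching" is not specified here; as a baseline, plain normalized cross-correlation (NCC) template matching can be sketched as follows. All names and the synthetic image are illustrative, not the authors' code:

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col) of the
    best-matching window and its NCC score in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom == 0.0:
                continue  # flat patch (or flat template): NCC undefined
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Synthetic check: embed a 4x4 ramp pattern at (5, 7) and recover it.
img = np.zeros((20, 20))
img[5:9, 7:11] = np.arange(16, dtype=float).reshape(4, 4)
tmpl = img[5:9, 7:11].copy()
```

Because NCC subtracts the window mean and divides by the window norm, this baseline is invariant to affine brightness changes, which is one reason it is a common starting point for "enhanced" variants.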

레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구 (A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors)

  • 장성우;강연식
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol.22 No.8
    • /
    • pp.633-642
    • /
    • 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of the preceding vehicles. In particular, we performed a study on compensating for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models depending on a lateral-compensation mode and a radar-single-sensor mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently associates both the radar and vision measurements into single state estimates. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
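
A minimal sketch of the two-step correction idea: one Kalman filter state is corrected sequentially by a radar measurement (large lateral noise) and then a vision measurement (small lateral noise), so the fused estimate is pulled toward the more accurate sensor. The model, noise values, and measurements below are illustrative assumptions, not the paper's tuned filter, and the IMM/PDAF layers are omitted:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity lateral model
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                   # both sensors measure lateral position
R_radar = np.array([[1.0]])                  # radar: poor lateral accuracy (assumed)
R_vision = np.array([[0.05]])                # vision: good lateral accuracy (assumed)

x = np.array([0.0, 0.0])
P = np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.8]), H, R_radar)   # step 1: radar correction
x, P = kf_update(x, P, np.array([0.5]), H, R_vision)  # step 2: vision correction
```

Because measurement updates are order-independent for linear models, correcting with both sensors in sequence is equivalent to a joint update, which is what makes the two-step scheme efficient.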

광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템 (3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor)

  • 조영진;오현민;김민영
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol.22 No.8
    • /
    • pp.579-584
    • /
    • 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system, called an optical tracker, is generally used. However, this optical tracker has the disadvantage that a line-of-sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage of optical tracking systems, an internal vision sensor is attached to a surgical instrument in this paper. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line-of-sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments is carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of 1.32° (mean 0.35°) and the translational error by a maximum of 1.72 mm (mean 0.58 mm). It is confirmed that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.
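
Rotational errors like the 1.32° maximum above are conventionally computed as the geodesic angle between the estimated and reference rotations. A sketch of that metric (the function names and the example rotations are illustrative, not taken from the paper):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by `deg` degrees, as a 3x3 matrix."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotation_error_deg(R_est, R_true):
    """Angle of the relative rotation R_est^T R_true, i.e. the geodesic
    distance on SO(3), via trace(R) = 1 + 2*cos(theta)."""
    R_rel = R_est.T @ R_true
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Example: an estimate off by 1.32 degrees about z yields a 1.32 deg error.
err = rotation_error_deg(rot_z(10.0), rot_z(11.32))
```

The translational error is simply the Euclidean norm of the position difference, so only the rotational metric needs the trace formula.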

Vision 시스템을 이용한 위험운전 원인 분석 프로그램 개발에 관한 연구 (Development of a Cause Analysis Program to Risky Driving with Vision System)

  • 오주택;이상용
    • The Journal of the Korea Institute of Intelligent Transport Systems
    • /
    • Vol.8 No.6
    • /
    • pp.149-161
    • /
    • 2009
  • Electronic control systems for vehicles are developing rapidly in step with legal and social demands to secure driver safety, and with falling hardware prices and higher-performance sensors and processors, various driver assistance systems applying sensors such as radar, cameras, and lasers are being put to practical use. In this study, a program was developed that uses images acquired from a CCD camera to recognize the lane in which the test vehicle is traveling and the vehicles located nearby or approaching. Integrated with the risky-driving judgment algorithm developed in a previous study, this yields a vision-system-based risky-driving analysis program that can analyze the causes and results of risky driving. The program developed in this study is expected to analyze the causes and results of risky driving behavior effectively by fusing the vehicle behavior data, which serve as the judgment variables of the risky-driving judgment algorithm, with the information obtained from the lane and vehicle recognition program.

Tele-Operation of Dual Arm Robot Using 3-D vision

  • Shibagami, Genjirou;Itoh, Akihiko;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Institute of Control, Robotics and Systems 1998 Proceedings of the 13th Conference
    • /
    • pp.386-390
    • /
    • 1998
  • A master-slave system is proposed as a teaching device for a dual arm robot. The slave robots are remotely controlled by two delta-type master arms. To help the operator observe the target object from the desired position and direction, cameras are mounted on a specialized manipulator. The movements of the two slave arms are coordinated with those of the cameras. Owing to this coordinated movement, the operator need not be concerned with the geometric relation between the cameras and the slave robots.

Real-Time Facial Recognition Using the Geometric Informations

  • Lee, Seong-Cheol;Kang, E-Sok
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Institute of Control, Robotics and Systems 2001 ICCAS
    • /
    • pp.55.3-55
    • /
    • 2001
  • The implementation of human-like robots has advanced in various areas such as mechanical arms, legs, and applications of the five senses. Vision applications have been developed over several decades, and face recognition in particular has become a prominent issue. In addition, the development of computer systems makes it possible to process complex algorithms in real time. Most human recognition systems adopt identification methods using fingerprints, irises, etc. These methods restrict the motion of the person being identified. Recently, researchers of human recognition systems have become interested in facial recognition using machine vision. Thus, the object of this paper is the implementation of the real-time ...

영상기반 자동결함 검사시스템에서 재현성 향상을 위한 결함 모델링 및 측정 기법 (Robust Defect Size Measuring Method for an Automated Vision Inspection System)

  • 주영복;허경무
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol.19 No.11
    • /
    • pp.974-978
    • /
    • 2013
  • AVI (Automatic Vision Inspection) systems automatically detect defect features and measure their sizes via camera vision. AVI systems usually report different measurements for the same defect, with some variation in position or rotation, mainly because different images are provided. This is caused by possible variations in the image acquisition process, including optical factors, nonuniform illumination, random noise, and so on. For this reason, conventional area-based defect measuring methods have problems with robustness and consistency. In this paper, we propose a new defect size measuring method to overcome this problem, utilizing volume information that is completely ignored in area-based defect measurement. The results show that our proposed method dramatically improves the robustness and consistency of defect size measurement.
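
A toy illustration of the general idea, not the paper's exact method: when acquisition blur or a sub-pixel shift redistributes intensity across pixels, a pixel-count (area) measure changes, while a sum-of-intensity (volume) measure is far more stable. The arrays and the threshold below are made-up illustrative values:

```python
import numpy as np

def defect_area(gray, thresh):
    """Area-based size: count of pixels above a binary threshold."""
    return int((gray > thresh).sum())

def defect_volume(gray, thresh):
    """Volume-based size: total above-threshold intensity; intensity that is
    merely redistributed by blur or sub-pixel shifts is largely conserved."""
    mask = gray > thresh
    return float((gray * mask).sum())

# Same defect imaged twice: sharp, and with a half-pixel shift that smears
# the edges. Total intensity is identical; the binary footprint is not.
sharp   = np.array([[0.0, 0.0, 10.0, 10.0, 0.0, 0.0]])
shifted = np.array([[0.0, 0.0,  5.0, 10.0, 5.0, 0.0]])
```

Here the area measure jumps from 2 to 3 pixels between the two acquisitions, while the volume measure stays at 20.0 for both, which is the kind of consistency gain the abstract claims.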

RAVIP: Real-Time AI Vision Platform for Heterogeneous Multi-Channel Video Stream

  • Lee, Jeonghun;Hwang, Kwang-il
    • Journal of Information Processing Systems
    • /
    • Vol.17 No.2
    • /
    • pp.227-241
    • /
    • 2021
  • Object detection techniques based on deep learning, such as YOLO, achieve high detection performance and precision on a single-channel video stream. To extend to real-time multi-channel object detection, however, high-performance hardware is required. In this paper, we propose a novel back-end server framework, a real-time AI vision platform (RAVIP), which extends the object detection function from a single channel to simultaneous multi-channels and works well even on low-end server hardware. RAVIP assembles appropriate component modules from the RODEM (real-time object detection module) Base to create per-channel instances for each channel, enabling efficient parallelization of object detection instances on limited hardware resources through continuous monitoring of resource utilization. Practical experiments show that RAVIP can optimize CPU, GPU, and memory utilization while performing an object detection service in a multi-channel situation. In addition, RAVIP is proven to provide object detection services at 25 FPS for all 16 channels simultaneously.
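
The per-channel instances described above can be pictured as independent workers, each with its own frame queue. A minimal threading sketch under assumed names (the RODEM module assembly and the resource monitoring are omitted; the class, detector, and frame labels are all hypothetical):

```python
import queue
import threading

class DetectionInstance:
    """Per-channel worker: pulls frames from its own queue and runs a
    (stubbed) detector, mimicking one per-channel instance."""
    def __init__(self, channel_id, detector):
        self.channel_id = channel_id
        self.detector = detector
        self.frames = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while True:
            frame = self.frames.get()
            if frame is None:      # sentinel: shut this instance down
                break
            self.results.append((self.channel_id, self.detector(frame)))

def fake_detector(frame):
    return f"objects@{frame}"      # stand-in for a real YOLO-style detector

instances = [DetectionInstance(ch, fake_detector) for ch in range(4)]
for inst in instances:
    inst.thread.start()
for ch, inst in enumerate(instances):
    inst.frames.put(f"frame-{ch}")
    inst.frames.put(None)
for inst in instances:
    inst.thread.join()
```

Giving each channel its own queue keeps slow channels from blocking fast ones; a monitor could then add or pause instances based on CPU/GPU/memory utilization, as the abstract describes.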

A Knowledge-Based Machine Vision System for Automated Industrial Web Inspection

  • Cho, Tai-Hoon;Jung, Young-Kee;Cho, Hyun-Chan
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol.1 No.1
    • /
    • pp.13-23
    • /
    • 2001
  • Most current machine vision systems for industrial inspection were developed with one specific task in mind. Hence, these systems are inflexible in the sense that they cannot easily be adapted to other applications. In this paper, a general vision system framework has been developed that can be easily adapted to a variety of industrial web inspection problems. The objective of this system is to automatically locate and identify "defects" on the surface of the material being inspected. This framework is designed to be robust, flexible, and as computationally simple as possible. To assure robustness, the framework employs a combined strategy of top-down and bottom-up control, hierarchical defect models, and uncertain reasoning methods. To make the framework flexible, a modular blackboard framework is employed. To minimize computational complexity, the system incorporates a simple multi-thresholding segmentation scheme, a fuzzy logic focus-of-attention mechanism for scene analysis operations, and a partitioning of knowledge that allows concurrent parallel processing during recognition.
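
The abstract does not detail its multi-thresholding scheme; generically, multi-thresholding labels each pixel by the intensity band it falls into, which `np.digitize` does directly. The image and threshold values below are arbitrary illustrative choices:

```python
import numpy as np

def multi_threshold(gray, thresholds):
    """Label each pixel with the index of the intensity band it falls in:
    band 0 is below thresholds[0], band i is [thresholds[i-1], thresholds[i]),
    and the last band is everything at or above thresholds[-1]."""
    return np.digitize(gray, thresholds)

# Tiny example: two thresholds partition intensities into three bands,
# e.g. background / suspect region / strong defect.
img = np.array([[10,  80, 200],
                [40, 130, 255]])
labels = multi_threshold(img, [50, 150])
```

Using several thresholds instead of one binary cut keeps the segmentation cheap while still separating faint candidate regions from strong defects, consistent with the framework's goal of computational simplicity.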
