• Title/Summary/Keyword: Vision-Guided


Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.18 no.9 / pp.878-886 / 2012
  • This paper investigates the implementation and flight test of a realtime vision guided autopilot system based on a virtual instrumentation platform. A graphical design process via the virtual instrumentation platform is fully used for image processing, communication between systems, vehicle dynamics control, and vision-coupled guidance algorithms. A significant objective of the algorithm is to achieve an autopilot that is robust to the environment despite wind and irregular image acquisition conditions. For robust vision guided path tracking and hovering performance, the flight path guidance logic is combined on a multi-conditional basis with a position estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test equipped with the developed realtime vision guided autopilot system is performed using a rotary UAV system with full attitude control capability. The outdoor flight test demonstrated that the designed vision guided autopilot system succeeded in hovering the UAV over a ground target to within several meters under a generally windy environment.

A Guideline Tracing Technique Based on a Virtual Tracing Wheel for Effective Navigation of Vision-based AGVs (비전 기반 무인반송차의 효과적인 운행을 위한 가상추적륜 기반 유도선 추적 기법)

  • Kim, Minhwan;Byun, Sungmin
    • Journal of Korea Multimedia Society / v.19 no.3 / pp.539-547 / 2016
  • Automated guided vehicles (AGVs) are widely used in industry. Several types of vision-based AGVs have been studied in order to reduce the cost of building infrastructure into the workspace floor and to increase the flexibility of changing the navigation path layout. A practical vision-based guideline tracing method is proposed in this paper. A virtual tracing wheel is introduced and adopted in this method, which enables a vision-based AGV to trace a guideline in diverse ways. The method is also useful for preventing damage to the guideline, since it steers so that the AGV's real steering wheel does not move on the guideline itself. The usefulness of the virtual tracing wheel is analyzed through computer simulations. Several navigation tests with a commercial AGV were also performed on a typical guideline layout, and we confirmed that the virtual tracing wheel based tracing method works well in practice.
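The abstract does not specify how the virtual tracing wheel generates a steering command; one plausible reading is a look-ahead point standing in for the virtual wheel, with the steering command driving that point onto the guideline while the real wheel rolls beside it. A pure-pursuit style sketch under that assumption (function name, geometry, and sign conventions are ours, not the paper's):

```python
import math

def steer_for_virtual_wheel(lateral_err, heading_err, lookahead):
    """Steering sketch for a 'virtual tracing wheel' placed
    `lookahead` metres ahead of the real steering wheel.
    lateral_err: current lateral offset of the virtual wheel from
                 the guideline (m, positive = right of the line)
    heading_err: heading error w.r.t. the guideline (rad)
    Returns a steering angle (rad) that drives the virtual wheel
    back onto the line while the real wheel stays off it.
    """
    # Where the virtual wheel will sit relative to the line if we
    # do not steer: current offset plus drift from heading error.
    predicted_offset = lateral_err + lookahead * math.tan(heading_err)
    # Steer proportionally against the predicted offset.
    return -math.atan2(predicted_offset, lookahead)
```

With zero lateral and heading error the command is zero; a positive (rightward) offset produces a negative (leftward) steering angle.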

Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van;Eum, Hyuk-Min;Lee, Jeisung;Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.2 / pp.140-146 / 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates path tracking using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and to detect markers in front of the vehicle. The other camera is attached so that its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of a marker are also obtained using these two cameras. We propose five movement patterns for AGVs to guarantee smooth performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives greater flexibility to AGVs, including easy layout changes, autonomy, and economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
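The angle-and-distance computation from the downward-facing camera can be illustrated with simple ground-plane geometry. Everything below (function name, calibration parameters, sign conventions) is an assumption for illustration, not the paper's formulation:

```python
import math

def marker_error(marker_px, image_size, mm_per_px, cam_offset_mm):
    """Distance and heading angle from the wheel-axle center to a
    floor marker seen by a downward-facing camera.
    marker_px: (x, y) pixel position of the marker centroid
    image_size: (width, height) of the camera image
    mm_per_px: ground-plane scale of the camera
    cam_offset_mm: distance from the axle center to the camera's
                   optical center along the driving direction
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Offset of the marker from the image center, in millimetres.
    dx = (marker_px[0] - cx) * mm_per_px   # lateral (left/right)
    dy = (cy - marker_px[1]) * mm_per_px   # longitudinal (ahead of camera)
    # Measure the longitudinal distance from the axle center instead.
    longitudinal = dy + cam_offset_mm
    distance = math.hypot(dx, longitudinal)
    angle = math.degrees(math.atan2(dx, longitudinal))
    return distance, angle
```

A marker at the image center of a 640x480 frame, with the camera 200 mm ahead of the axle, yields a 200 mm distance and a zero heading angle; a marker right of center yields a positive angle.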

A real-time vision system for SMT automation

  • Hwang, Shin-Hwan;Kim, Dong-Sik;Yun, Il-Dong;Choi, Jin-Woo;Lee, Sang-Uk;Choi, Jong-Soo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1990.10b / pp.923-928 / 1990
  • This paper describes the design and implementation of a real-time, high-precision vision system and its application to SMT (surface mounting technology) automation. The vision system employs a 32-bit MC68030 as the main processor, and consists of an image acquisition unit, a DSP56001-based vision processor, and several algorithmically dedicated hardware modules. The image acquisition unit provides a 512*480*8-bit image for high-precision vision tasks. The DSP vision processor and hardware modules, such as the histogram extractor and feature extractor, are designed for real-time execution of vision algorithms. In particular, the multi-processing architecture based on DSP vision processors allows us to employ more sophisticated and flexible vision algorithms in real-time operation. The developed vision system is combined with an Adept robot system to form a complete SMD assembly system. The vision guided SMD assembly system has been found to provide satisfactory performance for SMD automation.


Research on Local and Global Infrared Image Pre-Processing Methods for Deep Learning Based Guided Weapon Target Detection

  • Jae-Yong Baek;Dae-Hyeon Park;Hyuk-Jin Shin;Yong-Sang Yoo;Deok-Woong Kim;Du-Hwan Hur;SeungHwan Bae;Jun-Ho Cheon;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.41-51 / 2024
  • In this paper, we explore enhancing target detection accuracy for guided weapons using deep learning object detection on infrared (IR) images. Because IR images are influenced by factors such as time and temperature, it is crucial to ensure a consistent representation of object features across various environments when training the model. A simple way to address this is to emphasize the features of target objects and reduce noise within the infrared images through appropriate pre-processing techniques. However, previous studies have not sufficiently discussed pre-processing methods for training deep learning models on infrared images. In this paper, we aim to investigate the impact of image pre-processing techniques on infrared image-based training for object detection. To achieve this, we analyze pre-processing results on infrared images that utilize global or local information from the video and the image. In addition, to confirm how images converted by each pre-processing technique affect detector training, we train the YOLOX target detector on images processed by the various pre-processing methods and analyze the results. In particular, experiments using CLAHE (Contrast Limited Adaptive Histogram Equalization) show the highest detection accuracy, with a mean average precision (mAP) of 81.9%.
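CLAHE, which the abstract reports as the best-performing pre-processing step, clips each histogram bin before equalizing so that contrast (and noise) amplification is bounded. A minimal pure-Python sketch of the clipping-and-equalization step, applied to a single global histogram rather than CLAHE's per-tile histograms with bilinear interpolation (function name and parameters are illustrative, not from the paper):

```python
def clipped_equalize(img, n_levels=256, clip_limit=40):
    """Simplified, global version of contrast-limited histogram
    equalization (CLAHE without the tiling/interpolation step).
    img: 2-D list of integer gray levels in [0, n_levels).
    clip_limit: maximum count allowed in any histogram bin; the
    clipped excess is redistributed uniformly, which limits how
    steeply equalization can amplify contrast.
    """
    # Build the histogram.
    hist = [0] * n_levels
    for row in img:
        for v in row:
            hist[v] += 1
    # Clip each bin and collect the excess.
    excess = 0
    for i in range(n_levels):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly (remainder ignored for simplicity).
    bonus = excess // n_levels
    for i in range(n_levels):
        hist[i] += bonus
    # Cumulative distribution -> intensity mapping.
    total = sum(hist)
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run / total)
    lut = [round(c * (n_levels - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

On a low-contrast image the mapping stretches the occupied gray levels toward the full range, exactly the effect that makes dim IR targets easier to separate from the background.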

The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for a Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.2 / pp.164-171 / 2010
  • We have studied port automation systems, motivated by the steep increase in the cost and complexity of freight handling. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera suffers from optical distortion and is sensitive to external light, weather, and shadow, but it is inexpensive and flexible for building a port automation system, so we apply a CCD camera to the AGV for detecting and tracking the lane. To obtain a stable and exact error signal, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented in software and are easy to use in various autonomous systems. Because the computational load is light, the AGV utilizes the maximal performance of the CCD camera and leaves the CPU free for multitasking. We tested the proposed algorithm on a mobile robot and confirmed stable and exact lane-tracking performance.
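The abstract does not detail how the VPSA produces its error signal; one plausible reading is that evenly spaced pixel positions in a binarized image row act like a hardware photo-sensor array, with the centroid of the "firing" sensors giving a signed lateral error. A minimal sketch under that assumption (names and conventions are ours, not the paper's):

```python
def vpsa_error(image_row, n_sensors):
    """Sample one image row with n_sensors evenly spaced 'virtual
    photo-sensors' and return a signed lateral error, much as a
    real photo-sensor array on an AGV would.
    image_row: list of 0/1 values (1 = lane pixel) from a
    binarized camera image.
    Returns the error in sensor units: negative when the lane is
    left of center, positive when right, None if no sensor fires.
    """
    width = len(image_row)
    center = (n_sensors - 1) / 2.0
    fired = []
    for i in range(n_sensors):
        # Pixel column that the i-th virtual sensor "looks at".
        col = round(i * (width - 1) / (n_sensors - 1))
        if image_row[col]:
            fired.append(i)
    if not fired:
        return None
    # Error = centroid of firing sensors relative to the array center.
    return sum(fired) / len(fired) - center
```

Because only a handful of pixels per row are sampled, the per-frame cost is tiny, which matches the abstract's point about leaving the CPU free for other tasks.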

A Clean Mobile Robot for 4th Generation LCD Cassette transfer (4세대 LCD Cassette 자동 반송 이동로봇)

  • 김진기;성학경;김성권
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.249-249 / 2000
  • This paper introduces a clean mobile robot for 4th-generation LCD cassettes, which is guided by optical sensors with position compensation using a vision module. The mobile robot for LCD cassette transfer is controlled by an AGV controller with powerful algorithms, which computes optimum routes to the robot's destination using a dynamic dispatch algorithm and MAP data. The clean mobile robot is equipped with a 4-axis fork-type manipulator providing a repeatability of ±0.05 mm.


Reflection Removal in Stereo Vision Under Night Illumination (야간 조명 아래 스테레오 비전의 반사 제거)

  • Naveed, Sairah;Lee, Sang-Woong
    • Proceedings of the Korea Multimedia Society Conference / 2012.05a / pp.26-27 / 2012
  • Reflection is considered view-disturbing noise in optical systems such as the stereo cameras of autonomous vehicles, especially at night. Reflection is caused by street lights or by rainwater under adverse weather conditions. A blurred image captured by the camera results in wrong guidance when the vehicle detects its track. A vehicle guidance approach based on stereo vision can be the same in daytime and at night; however, the same image analysis cannot be used due to the diverse illumination conditions. We develop a technique and show its efficacy with illustrations of reflection removal off the camera lens and of vehicle tracking control.


Vision-Based Robot Manipulator for Grasping Objects (물체 잡기를 위한 비전 기반의 로봇 매니퓰레이터)

  • Baek, Young-Min;Ahn, Ho-Seok;Choi, Jin-Young
    • Proceedings of the KIEE Conference / 2007.04a / pp.331-333 / 2007
  • The robot manipulator is one of the important components in the service robot area. Until now, there has been much research on robot manipulators that can imitate human functions by recognizing and grasping objects. In this paper, we present a robot arm based on an object recognition vision system. We implemented closed-loop control that uses feedback from visual information, and used a sonar sensor to improve accuracy. We placed a web camera on top of the hand to recognize objects. We also discuss some vision-based manipulation issues and our system's features.


A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference / 2002.07d / pp.2341-2343 / 2002
  • This paper describes vision-based line recognition and driving direction control for an AGV (autonomous guided vehicle). As the navigation guide, a black stripe attached to the corridor floor is used, and a binary image of the guide stripe captured by a CCD camera is processed. To detect the guideline quickly and exactly, we use a variable thresholding algorithm. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is performed by a controller driven by the guideline angle error. The method was tested on a typical AGV with a single camera in a laboratory environment.
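A variable (locally adaptive) threshold of the kind this abstract mentions can be sketched per scanline as a comparison against a moving local mean, which keeps the dark guide stripe detectable even when corridor lighting drifts across the image. The window size and offset below are illustrative choices, not the paper's values:

```python
def variable_threshold(row, window=5, offset=10):
    """Binarize one scanline with a moving-average ('variable')
    threshold: each pixel is compared against the mean of its
    local window minus an offset, so a dark guideline on a bright
    floor is detected despite uneven illumination.
    row: list of gray levels; returns a list of 0/1 (1 = guideline).
    """
    half = window // 2
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        local_mean = sum(row[lo:hi]) / (hi - lo)
        # Dark stripe pixels fall below the local threshold.
        out.append(1 if row[i] < local_mean - offset else 0)
    return out
```

On a scanline with a bright floor (gray level 200) and a dark stripe (gray level 50), only the stripe pixels are marked, without any hand-tuned global threshold.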
