• Title/Abstract/Keyword: Vision-based Control


Monocular Vision-Based Guidance and Control for a Formation Flight

  • Cheon, Bong-kyu;Kim, Jeong-ho;Min, Chan-oh;Han, Dong-in;Cho, Kyeum-rae;Lee, Dae-woo;Seong, Kie-jeong
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol. 16, No. 4
    • /
    • pp.581-589
    • /
    • 2015
  • This paper describes a monocular vision-based formation flight technology using two fixed-wing unmanned aerial vehicles. To measure the relative position and attitude of the leader aircraft, a monocular camera installed in the front of the follower aircraft captures an image of the leader, from which position and attitude are estimated using the KLT feature-point tracker and the POSIT algorithm. To verify the feasibility of this vision-processing algorithm, a field test was performed using two light sport aircraft; the experimental results show that the proposed monocular vision-based measurement algorithm is feasible. Performance of the proposed formation flight technology was then verified using the X-Plane flight simulator. The formation flight simulation system consists of two PCs playing the roles of leader and follower. When the leader flies under user command, the follower tracks it using the designed guidance and a PI control law, with all information about the leader measured by monocular vision. The simulation shows that guidance using relative attitude information tracks the leader better than guidance without it, with absolute average errors for the relative position of 2.88 m (X-axis), 2.09 m (Y-axis), and 0.44 m (Z-axis).
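As a rough illustration of how a monocular camera can recover relative position from a target of known size, the sketch below uses the plain pinhole model: depth follows from the leader's apparent wingspan in pixels, and lateral offset from the centroid's image offset. The focal length and wingspan values are assumptions for illustration only; the paper itself uses KLT feature tracking plus POSIT for the full relative pose.

```python
# Hedged sketch: monocular range/offset estimation via the pinhole model.
# Not the paper's POSIT pipeline, just the underlying geometric idea.

def relative_distance(focal_px: float, wingspan_m: float, span_px: float) -> float:
    """Depth along the optical axis from the apparent size of a known target."""
    return focal_px * wingspan_m / span_px

def lateral_offset(depth_m: float, focal_px: float, pixel_offset: float) -> float:
    """Lateral displacement from the image-plane offset of the target centroid."""
    return depth_m * pixel_offset / focal_px

# A leader with an assumed 10 m wingspan spanning 100 px, seen through an
# 800 px focal length, sits 80 m ahead.
print(relative_distance(800.0, 10.0, 100.0))  # 80.0
print(lateral_offset(80.0, 800.0, 20.0))      # 2.0
```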

Korean Wide Area Differential Global Positioning System Development Status and Preliminary Test Results

  • Yun, Ho;Kee, Chang-Don;Kim, Do-Yoon
    • International Journal of Aeronautical and Space Sciences
    • /
    • Vol. 12, No. 3
    • /
    • pp.274-282
    • /
    • 2011
  • This paper is focused on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are rotary-wing UAVs with multiple rotors. They can be utilized in military missions such as surveillance and reconnaissance, and for obtaining visual information from steep terrain or disaster sites. In this paper, a quad-rotor model is introduced along with its control system, which is designed around a proportional-integral-derivative controller and a vision-based collision avoidance control system. For a UAV to navigate safely through areas such as buildings and offices with many obstacles, a collision avoidance algorithm must run on the UAV's hardware, covering obstacle detection, avoidance maneuvering, and so on. The optical flow method, one of the vision-based collision avoidance techniques, is introduced, and multi-rotor collision avoidance simulations in various virtual environments demonstrate its avoidance performance.
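A common way to turn optical flow into an avoidance command is the balance strategy: if the average flow magnitude in the left half of the image exceeds that in the right half, obstacles are closer on the left, so the vehicle yaws right (and vice versa). The sketch below shows that idea with made-up flow magnitudes and an assumed gain; the paper's simulations are naturally more involved.

```python
# Toy balance-strategy sketch for optical-flow collision avoidance.
# Flow magnitudes and gain are illustrative assumptions.

def yaw_command(left_flow_mags, right_flow_mags, gain=0.5):
    """Positive command = yaw right, away from the side with stronger flow."""
    left = sum(left_flow_mags) / len(left_flow_mags)
    right = sum(right_flow_mags) / len(right_flow_mags)
    # Normalized left/right imbalance in [-1, 1], scaled by the gain.
    return gain * (left - right) / (left + right + 1e-9)

# Stronger flow on the left half -> positive (rightward) yaw command.
print(yaw_command([4.0, 6.0], [1.0, 1.0]) > 0)  # True
```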

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5, No. 1
    • /
    • pp.51-60
    • /
    • 2007
  • This paper describes a visual tracking control law for an Unmanned Aerial Vehicle (UAV) used in the monitoring of structures and maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial position to a final position along a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law that enables effective tracking and depth estimation, where the depth is the desired distance separating the camera from the target.
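For intuition on the homography-based formulation, the sketch below shows how a planar homography transfers pixels between the current and reference views using homogeneous coordinates; a tracking error can then be built from how far the estimated homography is from the identity. The matrix here is just the identity as an illustrative example, not one derived in the paper.

```python
# Minimal numpy sketch: transferring an image point through a homography.
import numpy as np

def transfer(H, p):
    """Map a pixel (u, v) through the 3x3 homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # de-homogenize

H = np.eye(3)                      # camera exactly at the reference pose
print(transfer(H, (120.0, 80.0)))  # point maps to itself -> zero tracking error
```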

A User Interface for Vision Sensor based Indirect Teaching of a Robotic Manipulator (시각 센서 기반의 다 관절 매니퓰레이터 간접교시를 위한 유저 인터페이스 설계)

  • Kim, Tae-Woo;Lee, Hoo-Man;Kim, Joong-Bae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 19, No. 10
    • /
    • pp.921-927
    • /
    • 2013
  • This paper presents a user interface for vision-based indirect teaching of a robotic manipulator with Kinect and IMU (Inertial Measurement Unit) sensors. The user interface system is designed to control the manipulator more easily in joint space, Cartesian space, and the tool frame. We use the user's skeleton data from the Kinect and wrist-mounted IMU sensors to calculate the user's joint angles and wrist movement for robot control. The interface system proposed in this paper allows the user to teach the manipulator without a pre-programming process, which shortens the robot teaching time and ultimately enables increased productivity. Simulation and experimental results are presented to verify the performance of the robot control and interface system.
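One step such an interface must perform is recovering a joint angle from three skeleton points, e.g. the elbow angle from the shoulder-to-elbow and elbow-to-wrist vectors. The sketch below shows that computation on made-up 3-D coordinates; the actual interface also fuses the wrist-mounted IMU for hand orientation.

```python
# Hedged sketch: joint angle from three Kinect skeleton points.
import math

def joint_angle(a, b, c):
    """Angle at point b (radians) formed by segments b->a and b->c."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))  # clamp for safety

# Shoulder, elbow, wrist forming a right angle:
print(math.degrees(joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0))))  # 90.0
```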

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 20, No. 11
    • /
    • pp.1098-1102
    • /
    • 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained with a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values against the actual values.
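The core of Gaussian-process map building is predicting a map value at an arbitrary query point from sparse observations. The compact sketch below does exact GP regression with an RBF kernel on a toy 1-D dataset; the kernel length scale, noise level, and data are illustrative assumptions, not the paper's settings.

```python
# Minimal Gaussian-process regression sketch (RBF kernel, exact inference).
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=2)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X, y, Xq, noise=1e-6):
    """Posterior mean at query points Xq given training data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

X = np.array([[0.0], [1.0], [2.0]])   # observed positions (toy data)
y = np.array([0.0, 1.0, 0.0])         # observed map values
print(float(gp_predict(X, y, np.array([[1.0]]))[0]))  # ~1.0 at a training point
```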

Intelligent Rain Sensing Algorithm for Vision-based Smart Wiper System (비전 기반 스마트 와이퍼 시스템을 위한 지능형 레인 감지 알고리즘 개발)

  • Lee, Kyung-Chang;Kim, Man-Ho;Im, Hong-Jun;Lee, Seok
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • Proceedings of the KSPE 2003 Spring Conference
    • /
    • pp.1727-1730
    • /
    • 2003
  • A windshield wiper system plays a key role in ensuring driver safety during rainfall. However, because the quantity of rain or snow varies irregularly with time and vehicle speed, in a traditional windshield wiper system the driver must repeatedly adjust the wiper's speed and operating period to maintain a sufficient field of view. Because manual operation of the wiper distracts the driver and causes inattentive driving, it is a direct cause of traffic accidents. Therefore, this paper presents the basic architecture of a vision-based smart windshield wiper system and a rain-sensing algorithm that automatically regulates the wiper's speed and operating period according to the quantity of rain or snow. This paper also introduces a fuzzy wiper control algorithm based on human expertise and evaluates the performance of the suggested algorithm on a simulator model. In particular, the vision sensor can observe a relatively wide area compared with an optical rain sensor, and can therefore assess the rainfall state more accurately when disturbances occur.
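A fuzzy wiper controller of the kind described above can be sketched with triangular membership functions over rain intensity, a small rule base mapping rain level to wiper speed, and centroid defuzzification. All membership breakpoints and speed levels below are illustrative assumptions, not the paper's values.

```python
# Toy fuzzy wiper-speed controller: membership, rules, defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wiper_speed(rain):
    """Map rain intensity in [0, 1] to wiper speed in [0, 1]."""
    light = tri(rain, -0.5, 0.0, 0.5)
    medium = tri(rain, 0.0, 0.5, 1.0)
    heavy = tri(rain, 0.5, 1.0, 1.5)
    # Rules: light -> slow (0.2), medium -> normal (0.5), heavy -> fast (0.9)
    num = light * 0.2 + medium * 0.5 + heavy * 0.9
    den = light + medium + heavy
    return num / den if den > 0 else 0.0

print(wiper_speed(0.0))  # 0.2 (light rain -> slow wiping)
print(wiper_speed(1.0))  # 0.9 (heavy rain -> fast wiping)
```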


Design and Fabrication of Multi-rotor system for Vision based Autonomous Landing (영상 기반 자동 착륙용 멀티로터 시스템 설계 및 개발)

  • Kim, Gyou-Beom;Song, Seung-Hwa;Yoon, Kwang-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 12, No. 6
    • /
    • pp.141-146
    • /
    • 2012
  • This paper introduces the development of a multi-rotor system and a vision-based autonomous landing system. The multi-rotor platform is modeled as a rigid body using the Newton-Euler formulation, then simulated and tuned with an LQR control algorithm. The vision-based autonomous landing system uses a single camera mounted on the multi-rotor. An augmented-reality algorithm is used for marker detection, and the autonomous landing code is tested with a GCS for precision landing.
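Once the landing marker is detected, a simple closed loop can convert the marker's pixel offset from the image center into horizontal velocity commands while the vehicle descends. The sketch below uses a proportional law with assumed gains, image size, and axis conventions; it is a sketch of the general approach, not the paper's controller.

```python
# Hedged sketch: proportional marker-centering during vision-based landing.

def landing_command(marker_px, image_size=(640, 480), kp=0.002, descend=0.3):
    """Return (vx, vy, vz) velocity commands from the marker pixel position."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = marker_px[0] - cx, marker_px[1] - cy   # pixel error from center
    # Assumed axis convention: image rows map to the forward body axis.
    return (-kp * ey, kp * ex, -descend)

# Marker exactly under the camera: no lateral correction, keep descending.
print(landing_command((320, 240)))  # (-0.0, 0.0, -0.3)
```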

Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi;Nguyen, Dang Khoi;Kang, Taesam;Min, Dugki;Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 21, No. 7
    • /
    • pp.627-633
    • /
    • 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas, and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution, but in dealing with gyro sensors the drift problem must be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error in straight-line movement, but it shows rather large errors in some moving environments, especially along curved paths. This paper presents a method called VDR (Vision-based Drift Reduction), which uses a low-cost vision sensor to compensate for HDR errors.
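For context on the baseline, HDR is commonly described as follows: when the corrected yaw rate is small, the vehicle is assumed to be moving straight, and a binary I-controller nudges the bias estimate so the rate drifts toward zero. The sketch below uses assumed threshold and step values; the paper's VDR replaces this straightness heuristic with a vision-sensor correction.

```python
# Simplified HDR (Heuristic Drift Reduction) sketch with assumed constants.

def hdr_step(yaw_rate, bias_est, threshold=0.02, i_step=0.001):
    """Return (corrected_rate, updated_bias) for one gyro sample."""
    corrected = yaw_rate - bias_est
    if abs(corrected) < threshold:          # likely straight-line motion
        bias_est += i_step if corrected > 0 else -i_step
    return corrected, bias_est

bias = 0.0
for _ in range(100):   # true rate 0, constant gyro drift of 0.01 rad/s
    rate, bias = hdr_step(0.01, bias)
print(round(bias, 3))  # the bias estimate converges toward the 0.01 drift
```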

Development of a Lateral Control System for Autonomous Vehicles Using Data Fusion of Vision and IMU Sensors with Field Tests (비전 및 IMU 센서의 정보융합을 이용한 자율주행 자동차의 횡방향 제어시스템 개발 및 실차 실험)

  • Park, Eun Seong;Yu, Chang Ho;Choi, Jae Weon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 21, No. 3
    • /
    • pp.179-186
    • /
    • 2015
  • In this paper, a novel lateral control system is proposed for the purpose of improving lane-keeping performance independent of GPS signals. Lane keeping is a key function for the realization of unmanned driving systems. To achieve this objective, a vision-sensor-based real-time lane detection scheme is developed. Furthermore, we employ data fusion with the real-time steering angle of the test vehicle to improve its lane-keeping performance; the fused direction data are obtained from an IMU sensor and a vision sensor. The performance of the proposed system was verified by computer simulations and by field tests using the MOHAVE, a commercial vehicle from Kia Motors of Korea.
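A standard way to fuse a fast-but-drifting IMU yaw rate with a slower, drift-free vision-based direction is a complementary filter: high-pass the integrated gyro path, low-pass the vision path. The blend factor, sample period, and data below are illustrative; the paper's fusion additionally uses the vehicle's steering angle.

```python
# Minimal complementary-filter sketch for vision/IMU heading fusion.

def fuse_heading(heading, gyro_rate, vision_heading, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (weight alpha) with the vision heading."""
    return alpha * (heading + gyro_rate * dt) + (1.0 - alpha) * vision_heading

h = 0.0
for _ in range(500):   # gyro reads a 0.05 rad/s bias; vision says heading = 0
    h = fuse_heading(h, 0.05, 0.0)
print(abs(h) < 0.05)   # True: the bias-driven drift stays bounded
```

Without the vision correction, the same 0.05 rad/s bias integrated over 5 s would drift the heading by 0.25 rad; the filter pins it near the vision estimate.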

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신경회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • Proceedings of the KSPE 2000 Autumn Conference
    • /
    • pp.687-690
    • /
    • 2000
  • Recently, many studies have aimed to protect human lives and property by curbing accidents caused by human carelessness or mistakes; one such effort is the development of autonomous vehicles. The general control method for a vision-based autonomous vehicle is to determine the navigation direction by analyzing lane images from a camera and to navigate using a suitable control algorithm. In this paper, characteristic points are extracted from lane images using a lane-recognition algorithm with the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relations of the camera: after transforming from the image coordinate frame to the vehicle coordinate frame, a steering angle is calculated using the Ackermann angle. The second uses a neural network algorithm: it does not require the camera's geometric relations, is easy to apply as a steering algorithm, and most closely matches the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which performed much better than other methods such as Conjugate Gradient or Gradient Descent.
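The first (geometric) steering method can be illustrated with a bicycle model: the steering angle that puts the vehicle on a circular arc through a look-ahead point offset laterally from the heading is delta = atan(2 L y / d^2), the familiar pure-pursuit form of the Ackermann angle. The wheelbase, look-ahead distance, and lateral offset below are assumed values for illustration only.

```python
# Hedged sketch: geometric (Ackermann/pure-pursuit) steering from lane offset.
import math

def ackermann_steer(wheelbase_m, lateral_m, lookahead_m):
    """Steering angle (rad) toward a point lateral_m off-axis at lookahead_m."""
    return math.atan(2.0 * wheelbase_m * lateral_m / lookahead_m**2)

# Assumed 2.7 m wheelbase, lane center 0.5 m off-axis at a 10 m look-ahead:
angle = ackermann_steer(2.7, 0.5, 10.0)
print(round(math.degrees(angle), 2))  # a small corrective steering angle
```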
