• Title/Summary/Keyword: Learning Control Algorithm

947 search results

Iterative Learning Control with Feedback Using Fourier Series with Application to Robot Trajectory Tracking (퓨리에 급수 근사를 이용한 궤환을 가진 반복 학습제어와 로보트 궤적 추종에의 응용)

  • Zeungnam Bien
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.4
    • /
    • pp.67-75
    • /
    • 1993
  • Fourier series are employed to approximate the input/output (I/O) characteristics of a dynamic system and, based on this approximation, a new learning control algorithm is proposed to iteratively find the control input for tracking a desired trajectory. The Fourier approximation of the I/O characteristics yields at least two useful consequences: the frequency characteristics of the system can be used in the controller design, and reconstruction of the system states is not required. The convergence condition of the proposed algorithm is provided, and the existence and uniqueness of the desired control input are discussed. The effectiveness of the proposed algorithm is illustrated by computer simulation of robot trajectory tracking. It is shown that, by adding a feedback term to the learning control algorithm, robustness and convergence speed can be improved.


A SPEED CONTROLLER FOR VEHICLES USING FUZZY CONTROL ALGORITHM WITH SELF-LEARNING (자기 학습 능력을 가진 퍼지 제어기를 이용한 차량의 속력 제어기 개발)

  • 정승현;김상우
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.880-883
    • /
    • 1996
  • This paper suggests a speed control algorithm for the ICC (Intelligent Cruise Controller) system. The speed controller is designed using a fuzzy controller, which performs well in nonlinear systems with complex mathematical models. The fuzzy controller is equipped with a real-time self-learning capability in order to maintain good speed-control performance in a time-varying environment. The self-learning properties and the performance of the fuzzy controller are shown via computer simulation. The suggested fuzzy controller will be applied to the PRV-III, our test vehicle.

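The abstract does not give the rule base, so the following is only a generic sketch of a fuzzy speed controller: three triangular membership functions over the speed error and singleton consequents defuzzified by a weighted average. The self-learning step in the paper would adapt such consequents online, which is not shown here; all membership parameters are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_throttle(speed_error):
    """Map speed error (km/h) to a throttle change via three simple rules."""
    # Antecedent memberships: negative, roughly zero, positive error
    mu = {
        "neg": tri(speed_error, -20, -10, 0),
        "zero": tri(speed_error, -10, 0, 10),
        "pos": tri(speed_error, 0, 10, 20),
    }
    # Consequent singletons (throttle change), defuzzified by weighted average
    out = {"neg": -0.5, "zero": 0.0, "pos": 0.5}
    w = sum(mu.values())
    return sum(mu[k] * out[k] for k in mu) / (w + 1e-12)

print(fuzzy_throttle(5.0))
```

A self-learning controller of this kind would typically nudge the consequent values in `out` after each control cycle based on the residual speed error.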

On Designing a Robot Manipulator Control System Using Multilayer Neural Network and Immune Algorithm (다층 신경망과 면역 알고리즘을 이용한 로봇 매니퓰레이터 제어 시스템 설계)

  • 서재용;김성현;전홍태
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1997.10a
    • /
    • pp.267-270
    • /
    • 1997
  • As an approach to developing a control system that remains robust under changing control environments, this paper proposes a robot manipulator control system using a multilayer neural network and an immune algorithm. The proposed immune algorithm, which has characteristics of the immune system such as distributed and anomaly detection, probabilistic detection, learning, and memory, consists of an innate immune algorithm and an adaptive immune algorithm. We demonstrate the effectiveness of the proposed control system with simulations of a 2-link robot manipulator.


Design of shift controller using learning algorithm in automatic transmission (학습 알고리듬을 이용한 자동변속기의 변속제어기 설계)

  • Jun, Yoon-Sik;Chang, Hyo-Whan
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.22 no.3
    • /
    • pp.663-670
    • /
    • 1998
  • Most feedback shift controllers developed in the past have fixed control parameters tuned by experts through trial and error, so they cannot deliver the best control performance under various driving conditions. To improve shift quality under various driving conditions, this study designs a new self-organizing controller (SOC) that achieves optimal control performance through self-learning of the driving conditions and the driver's pattern. The proposed SOC algorithm for the shift controller uses a simple descent method and requires less calculation time than complex fuzzy relations, thus making real-time control possible. The PCSV (Pressure Control Solenoid Valve) control current is used as the control input, and the turbine speed of the torque converter is used indirectly as a feedback signal to monitor the transient torque, which is more convenient and economical than measuring the torque directly with a torque sensor. Computer simulations show an apparent reduction of the shift-transient torque over successive runs, starting without initial fuzzy rules, as well as good control performance in the shift-transient torque.
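The "simple descent method" is not spelled out in the abstract; one plausible, purely hypothetical reading is a gradient-style step that nudges each fired fuzzy rule's consequent (here, a PCSV current correction) in proportion to its firing strength and the observed transient-torque error:

```python
def soc_update(consequents, memberships, error, eta=0.05):
    """One descent step: rules that fired more strongly get corrected more."""
    return [c + eta * error * mu for c, mu in zip(consequents, memberships)]

# Three rules, initially untuned, after one cycle with unit torque error
c = [0.0, 0.0, 0.0]
c = soc_update(c, [0.2, 0.7, 0.1], error=1.0)
print(c)
```

Repeating this step on every shift would build up a rule base from scratch, consistent with the abstract's claim that no initial fuzzy rules are required.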

A Study on the Fuzzy Learning Control for Force Control of Robot Manipulators (로봇 매니퓰레이터의 힘제어를 위한 퍼지 학습제어에 관한 연구)

  • 황용연
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.26 no.5
    • /
    • pp.581-588
    • /
    • 2002
  • A fuzzy learning control algorithm is proposed in this paper. In this method, two fuzzy controllers are used, one of the feedback type and one of the feedforward type. The fuzzy feedback controller can be designed using simple knowledge of the controlled system. The fuzzy feedforward controller, on the other hand, has a self-organizing mechanism and therefore needs no knowledge in advance. The effectiveness of the proposed algorithm is demonstrated by experiments on the position and force control of a parallelogram-type robot manipulator with two degrees of freedom. It is shown that rapid learning and robustness can be achieved by adopting the proposed method.

Traffic Control using Q-Learning Algorithm (Q 학습을 이용한 교통 제어 시스템)

  • Zheng, Zhang;Seung, Ji-Hoon;Kim, Tae-Yeong;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.11
    • /
    • pp.5135-5142
    • /
    • 2011
  • A flexible mechanism is proposed in this paper to improve the dynamic response performance of a traffic flow control system in an urban area. The roads, vehicles, and traffic control systems are all modeled as intelligent systems, with a wireless communication network as the medium of communication between the vehicles and the roads. The necessary sensor networks are installed in the roads and on the roadside, and reinforcement learning is adopted as the core algorithm of the mechanism. A traffic policy can be planned online according to the updated situation on the roads, based on all the information from the vehicles and the roads. This improves the flexibility of traffic flow and offers much more efficient use of the roads than a traditional traffic control system. The optimal intersection signals can be learned automatically online. An intersection control system is studied as an example of the mechanism using a Q-learning based algorithm, and simulation results show that the proposed mechanism can improve traffic efficiency and reduce the waiting time at the signal light by more than 30% under various conditions compared to the traditional signaling system.
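As a hedged illustration of Q-learning applied to signal control (the paper's road, vehicle, and network models are far richer), the sketch below learns a green-phase policy for a toy two-queue intersection; all dynamics and parameters are invented:

```python
import random
import numpy as np

random.seed(0)

# Toy intersection: state = (NS queue, EW queue) capped at N-1; action = green phase
N, ACTIONS = 6, 2
Q = np.zeros((N, N, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(q_ns, q_ew, action):
    """One signal cycle: the green direction discharges up to 3 cars, both queues gain arrivals."""
    if action == 0:
        q_ns = max(q_ns - 3, 0)
    else:
        q_ew = max(q_ew - 3, 0)
    q_ns = min(q_ns + random.randint(0, 2), N - 1)
    q_ew = min(q_ew + random.randint(0, 2), N - 1)
    return q_ns, q_ew, -(q_ns + q_ew)       # reward penalizes total queue length

q_ns = q_ew = 0
waits = []
for _ in range(5000):
    a = random.randrange(ACTIONS) if random.random() < eps else int(np.argmax(Q[q_ns, q_ew]))
    n_ns, n_ew, r = step(q_ns, q_ew, a)
    # Q-learning: move Q(s, a) toward reward plus discounted best next-state value
    Q[q_ns, q_ew, a] += alpha * (r + gamma * Q[n_ns, n_ew].max() - Q[q_ns, q_ew, a])
    q_ns, q_ew = n_ns, n_ew
    waits.append(q_ns + q_ew)

print(f"mean total queue over the last 1000 cycles: {np.mean(waits[-1000:]):.2f}")
```

The negative-queue-length reward is one common choice for minimizing waiting time; a deployed system would instead derive state and reward from the sensor-network measurements the abstract describes.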

STAR-24K: A Public Dataset for Space Common Target Detection

  • Zhang, Chaoyan;Guo, Baolong;Liao, Nannan;Zhong, Qiuyun;Liu, Hengyan;Li, Cheng;Gong, Jianglei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.2
    • /
    • pp.365-380
    • /
    • 2022
  • Target detection based on supervised learning is the current mainstream approach, and a high-quality dataset is a prerequisite for good detection performance. The larger and higher-quality the dataset, the stronger the generalization ability of the model; that is, the dataset determines the upper limit of what the model can learn. A convolutional neural network optimizes its parameters in a strongly supervised manner: the error is calculated by comparing the predicted box with the manually labeled ground-truth box and is then propagated through the network for continuous optimization. Strongly supervised learning relies mainly on a large number of images, so the number and quality of images directly affect the learning results. This paper proposes STAR-24K (a dataset for Space TArget Recognition with more than 24,000 images), a dataset for detecting common targets in space. Since no publicly available dataset for space target detection currently exists, we extracted pictures from channels such as the images and videos released on the official websites of NASA (National Aeronautics and Space Administration) and ESA (the European Space Agency) and expanded them to 24,451 pictures. We evaluate popular object detection algorithms to build a benchmark. Our STAR-24K dataset is publicly available at https://github.com/Zzz-zcy/STAR-24K.
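The abstract describes supervised detectors learning by comparing predicted boxes with labeled ground-truth boxes; the standard measure of that agreement, used both in detection losses and in benchmark evaluation, is intersection-over-union (IoU). A minimal implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # → 0.1429
```

Benchmarks like the one built on STAR-24K typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.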

Q-Learning based Collision Avoidance for 802.11 Stations with Maximum Requirements

  • Chang Kyu Lee;Dong Hyun Lee;Junseok Kim;Xiaoying Lei;Seung Hyong Rhee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.3
    • /
    • pp.1035-1048
    • /
    • 2023
  • The IEEE 802.11 WLAN adopts a random backoff algorithm as its collision avoidance mechanism, and it is well known that this contention-based algorithm may suffer performance degradation, especially in congested networks. In this paper, we design an efficient backoff algorithm that uses a reinforcement learning method to determine optimal backoff values. In our scheme, the mobile nodes share a common contention window (CW) and, using a Q-learning algorithm, avoid collisions by finding and implicitly reserving their optimal time slot(s). In addition, we introduce a Frame Size Control (FSC) algorithm to minimize the possible degradation of aggregate throughput when the number of nodes exceeds the CW size. Our simulations show that the proposed backoff algorithm with the FSC method outperforms the 802.11 protocol regardless of traffic conditions, and an analytical model proves that our mechanism has a unique operating point that is both fair and stable.
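A minimal, hypothetical sketch of stations learning non-colliding slots with Q-learning (the paper's actual scheme, CW sharing, and FSC algorithm are not reproduced): each node keeps a stateless Q-value per slot of the shared contention window and is rewarded when its chosen slot is uncontested:

```python
import random

random.seed(1)

N_NODES, CW = 4, 8           # stations contend over a shared window of CW slots
alpha, eps = 0.3, 0.2
Q = [[0.0] * CW for _ in range(N_NODES)]

for _ in range(3000):
    # Each node picks a backoff slot eps-greedily from its own Q-table
    picks = [random.randrange(CW) if random.random() < eps
             else max(range(CW), key=lambda s: Q[n][s])
             for n in range(N_NODES)]
    for n, s in enumerate(picks):
        reward = 1.0 if picks.count(s) == 1 else -1.0   # collision penalty
        Q[n][s] += alpha * (reward - Q[n][s])           # stateless Q update

final = [max(range(CW), key=lambda s: Q[n][s]) for n in range(N_NODES)]
print(final)
```

When learning succeeds, the greedy slots in `final` are all distinct, which corresponds to the implicit slot reservation the abstract describes.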

Position Control of Linear Synchronous Motor by Dual Learning (이중 학습에 의한 선형동기모터의 위치제어)

  • Park, Jung-Il;Suh, Sung-Ho;Ulugbek, Umirov
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.29 no.1
    • /
    • pp.79-86
    • /
    • 2012
  • This paper proposes PID- and RIC (Robust Internal-loop Compensator)-based motion controllers using a dual learning algorithm for position control of a linear synchronous motor. The controller gains are auto-tuned by two learning algorithms, reinforcement learning and a neural network: the feedback controller gains are tuned by reinforcement learning, and the feedforward controller gains are then tuned by the neural network. Experiments prove the validity of the dual learning algorithm. The RIC controller performs better than the PID-feedforward controller in reducing tracking error and in disturbance rejection. The neural network shows its ability to decrease tracking error and to reject disturbance in the stop range of the target position and home.

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.220-224
    • /
    • 2008
  • This paper presents a polygon-based Q-learning and parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects, and sent the robots into a hallway, where some obstacles were lying about, to search for a hidden object. In the experiments, we used four different control methods: a random search; a fusion model with distance-based action making (DBAM) and area-based action making (ABAM) to determine the robots' next actions; hexagon-based Q-learning; and dodecagon-based Q-learning with the parallel SVM algorithm, which enhances the DBAM/ABAM fusion model. The results show that the dodecagon-based Q-learning and parallel SVM algorithm tracks the object better than the other algorithms.