• Title/Summary/Keyword: Intelligent Learning System

Active Control of Sound in a Duct System by Back Propagation Algorithm (역전파 알고리즘에 의한 덕트내 소음의 능동제어)

  • Shin, Joon;Kim, Heung-Seob;Oh, Jae-Eung
    • Transactions of the Korean Society of Mechanical Engineers / v.18 no.9 / pp.2265-2271 / 1994
  • With the improvement of living standards, the demand for a comfortable and quiet environment has increased, and therefore many studies on active noise reduction have been carried out to overcome the limits of passive control methods. In this study, active noise control is performed in a duct system using an intelligent control technique that requires neither the coefficients of a high-order filter nor a mathematical model of the system. The back propagation algorithm is applied as the intelligent control technique, and the control system is organized to exclude the error microphone and the high-speed computation device that are indispensable for conventional active noise control techniques. Furthermore, learning is performed by incorporating an acoustic feedback model, and the effect of the proposed control technique is verified via computer simulation and an experiment on active noise control in a duct system.
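
The abstract describes replacing a conventional adaptive filter with a neural controller trained by back propagation. The sketch below is a minimal illustration of that idea, not the paper's implementation: a one-hidden-layer network driven by a tapped delay line of the reference signal outputs the anti-noise sample, and its weights are updated from the measured residual, with the duct's secondary acoustic path assumed to be unity.

```python
import numpy as np

class BPController:
    """Tapped-delay-line neural controller updated by back propagation.
    A minimal sketch, not the paper's implementation; the secondary path
    between loudspeaker and residual measurement is assumed to be unity."""
    def __init__(self, taps=16, hidden=8, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (hidden, taps))
        self.w2 = rng.normal(0.0, 0.1, hidden)
        self.lr = lr

    def step(self, x_buf, residual):
        """x_buf: latest `taps` reference samples; residual: measured duct noise."""
        h = np.tanh(self.W1 @ x_buf)          # hidden activations
        y = float(self.w2 @ h)                # anti-noise output sample
        # Gradients of 0.5*residual^2 w.r.t. the weights (unit secondary path assumed).
        g_w2 = residual * h
        g_W1 = np.outer(residual * self.w2 * (1.0 - h**2), x_buf)
        self.w2 -= self.lr * g_w2
        self.W1 -= self.lr * g_W1
        return y
```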

Additional Learning Framework for Multipurpose Image Recognition

  • Itani, Michiaki;Iyatomi, Hitoshi;Hagiwara, Masafumi
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.480-483 / 2003
  • We propose a new framework that aims at multi-purpose image recognition, a task that is difficult for conventional rule-based systems. The framework is formed based on the idea of a computer-based learning algorithm. In this research, we introduce new functions of additional learning and knowledge reconstruction on the Fuzzy Inference Neural Network (FINN) (1) to enable the system to accommodate new objects and enhance its accuracy as necessary. We examine the capability of the proposed framework using two examples. The first is the capital letter recognition task from the UCI machine learning repository, used to estimate the effectiveness of the framework itself. Even though the whole training data set was not given in advance, the proposed framework operated with only a small loss of accuracy thanks to the functions of additional learning and knowledge reconstruction. The other is scenery image recognition. We confirmed that the proposed framework could recognize images with high accuracy and accommodate new objects recursively.
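
The abstract does not give the structure of the FINN, so the following toy sketch only illustrates what "additional learning" means in this context: a class can be registered after deployment without retraining on previously learned data. The nearest-prototype classifier here is an assumption for illustration, not the authors' network.

```python
import numpy as np

class IncrementalPrototypeClassifier:
    """Toy illustration of additional learning: new object classes are added as
    prototypes without retraining on previously learned data. This stands in
    for the paper's FINN, whose structure is not reproduced here."""
    def __init__(self):
        self.prototypes = {}          # class label -> mean feature vector

    def learn_class(self, label, samples):
        """Additional learning: register (or refine) one class from its samples."""
        samples = np.asarray(samples, dtype=float)
        self.prototypes[label] = samples.mean(axis=0)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(x - self.prototypes[c]))
```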

Generating Cooperative Behavior by Multi-Agent Profit Sharing on the Soccer Game

  • Miyazaki, Kazuteru;Terada, Takashi;Kobayashi, Hiroaki
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.166-169 / 2003
  • Reinforcement learning is a kind of machine learning. It aims to adapt an agent to a given environment using rewards and penalties as clues. Q-learning [8], a representative reinforcement learning system, treats a reward and a penalty at the same time, which raises the problem of how to decide appropriate reward and penalty values. The Penalty Avoiding Rational Policy Making algorithm (PARP) [4] and the Penalty Avoiding Profit Sharing (PAPS) [2] are known as reinforcement learning systems that treat a reward and a penalty independently. Though PAPS is a descendant algorithm of PARP, both PARP and PAPS tend to learn a locally optimal policy. To overcome this, in this paper we propose the Multi Best method (MB), which is PAPS combined with the multi-start method [5]. MB selects the best policy among the several policies learned by PAPS agents. By applying PS, PAPS, and MB to a soccer game environment based on SoccerBots [9], we show that MB is the best solution for the soccer game environment.
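
The Multi Best idea itself is simple to state in code: run the learner from several random starts and keep the policy that evaluates best. The sketch below assumes a caller-supplied `train_policy` function standing in for one PAPS learning run and an `evaluate` function standing in for the SoccerBots score; neither PAPS nor the game environment is reproduced.

```python
import random

def multi_best(train_policy, evaluate, n_starts=10, seed=0):
    """Multi Best (MB) as described in the abstract: run the learner from several
    random starts and keep the policy that scores best on evaluation."""
    rng = random.Random(seed)
    best_policy, best_score = None, float("-inf")
    for _ in range(n_starts):
        policy = train_policy(rng.random())   # one PAPS run from a random start (placeholder)
        score = evaluate(policy)              # e.g. goals scored in the soccer environment
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```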

Object tracking algorithm of Swarm Robot System for using SVM and Polygon based Q-learning (SVM과 다각형 기반의 Q-learning 알고리즘을 이용한 군집로봇의 목표물 추적 알고리즘)

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.143-146 / 2008
  • In this paper, we propose a dodecagon-based Q-learning algorithm using SVM for object tracking in a swarm robot system. To demonstrate the effectiveness of the proposed algorithm, we set up several robots, obstacles, and a single object, and assume an experiment in which each robot searches for the hidden object. Experiments are carried out using a random search, a fusion model of DBAM and ABAM, and finally the SVM and dodecagon-based Q-learning algorithm proposed in this paper, and the effectiveness of this work is verified by comparing these three methods.
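
As a rough illustration of the dodecagon-based part of the method, the sketch below discretizes the space around a robot into 12 angular sectors and runs tabular Q-learning over sector-valued states and actions; the SVM component and the multi-robot coordination are not reproduced, and the state and reward definitions are assumptions.

```python
import math
import random
from collections import defaultdict

class DodecagonQLearner:
    """Sketch of polygon-based Q-learning: the surroundings of a robot are split
    into 12 sectors, the state encodes which sector holds the nearest detected
    object, and an action moves the robot toward one of the 12 sectors."""
    def __init__(self, n_sectors=12, alpha=0.1, gamma=0.9, eps=0.1):
        self.n, self.alpha, self.gamma, self.eps = n_sectors, alpha, gamma, eps
        self.Q = defaultdict(float)               # (state, action) -> value

    def sector(self, dx, dy):
        """Map a relative position to one of the 12 angular sectors."""
        angle = math.atan2(dy, dx) % (2 * math.pi)
        return int(angle / (2 * math.pi / self.n))

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n)
        return max(range(self.n), key=lambda a: self.Q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.Q[(next_state, a)] for a in range(self.n))
        td_target = reward + self.gamma * best_next
        self.Q[(state, action)] += self.alpha * (td_target - self.Q[(state, action)])
```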

Fuzzy Classification Rule Learning by Decision Tree Induction

  • Lee, Keon-Myung;Kim, Hak-Joon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.44-51 / 2003
  • Knowledge acquisition is a bottleneck in knowledge-based system implementation. Decision tree induction is a useful machine learning approach for extracting classification knowledge from a set of training examples. Much real-world data contains fuzziness due to observation error, uncertainty, subjective judgement, and so on. To cope with this problem of real-world data, there have been some works on fuzzy classification rule learning. This paper surveys the kinds of fuzzy classification rules. In addition, it presents a fuzzy classification rule learning method based on decision tree induction and shows some experimental results for the method.
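
A key ingredient of fuzzy decision tree induction is a split criterion in which each training example counts with its membership degree rather than as a crisp unit. The function below is a minimal sketch of such a fuzzy entropy measure; attribute selection, tree growing, and rule extraction as used in the paper are omitted.

```python
import math

def fuzzy_entropy(memberships, labels):
    """Entropy of a fuzzy example set: each example contributes with its
    membership degree instead of a crisp count. A minimal sketch of the kind
    of split criterion used in fuzzy decision tree induction."""
    totals = {}
    for mu, y in zip(memberships, labels):
        totals[y] = totals.get(y, 0.0) + mu
    total = sum(totals.values())
    if total == 0.0:
        return 0.0
    return -sum((t / total) * math.log2(t / total) for t in totals.values() if t > 0.0)

# Example: three examples in the node "temperature is high" with degrees 0.9, 0.4, 0.7
print(fuzzy_entropy([0.9, 0.4, 0.7], ["play", "stay", "play"]))
```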

A Study on the Properness Constraint on Iterative Learning Controllers (반복 학습 제어기의 properness 제한에 관한 연구)

  • Moon, Jung-Ho;Doh, Tae-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.5 / pp.393-396 / 2002
  • This note investigates the necessity of a properness constraint on iterative learning controllers from the viewpoint of the initial condition problem. It is shown that unless the iterative learning controller is proper, the learning control input may grow unboundedly and thus may not be feasible in practice, even though the convergence of the tracking error is theoretically guaranteed. In addition, this note analyzes the effects of initial condition misalignment in the iterative learning control system on the control input and the convergence property.
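
The note's argument can be summarized with a standard first-order iterative learning update (notation assumed here, not taken from the paper): if the learning filter is improper it acts as a differentiator, so an initial-condition mismatch is amplified at every iteration and the control input can grow without bound even while the tracking error converges in theory.

```latex
% First-order ILC update (assumed notation):
%   e_k(t): tracking error at iteration k, y_d: desired output, u_k: control input
\[
  u_{k+1}(t) = u_k(t) + L(q)\, e_k(t), \qquad e_k(t) = y_d(t) - y_k(t)
\]
% If L(q) is improper, it behaves like a differentiator, so a nonzero initial
% error e_k(0) is amplified from iteration to iteration and u_k(t) may diverge.
```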

Object tracking algorithm of Swarm Robot System for using Polygon based Q-learning and parallel SVM

  • Seo, Sang-Wook;Yang, Hyun-Chang;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.3 / pp.220-224 / 2008
  • This paper presents a polygon-based Q-learning and parallel SVM algorithm for object search with multiple robots. We organized an experimental environment with one hundred mobile robots, two hundred obstacles, and ten objects. Then we sent the robots to a hallway, where some obstacles were lying about, to search for a hidden object. In the experiment, we used four different control methods: a random search; a fusion model with Distance-based action making (DBAM) and Area-based action making (ABAM) processes to determine the next action of the robots; hexagon-based Q-learning; and dodecagon-based Q-learning with the parallel SVM algorithm to enhance the DBAM/ABAM fusion model. The results show that the dodecagon-based Q-learning and parallel SVM algorithm outperforms the other algorithms for object tracking.
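
As a rough sketch of the "parallel SVM" ingredient, the code below trains one SVM per feature subset concurrently and combines them by majority vote; how the paper couples this with dodecagon-based Q-learning and the DBAM/ABAM fusion model is not reproduced, and the use of scikit-learn here is an assumption for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from sklearn.svm import SVC

def train_parallel_svms(feature_sets, labels):
    """Train one SVC per feature subset concurrently (illustrative parallelism)."""
    def fit(X):
        return SVC(kernel="rbf").fit(X, labels)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fit, feature_sets))

def vote(svms, sample_features):
    """Majority vote of the per-subset classifiers for one sensed sample.
    `sample_features` holds one feature vector per trained SVM."""
    preds = [svm.predict(np.asarray(x).reshape(1, -1))[0]
             for svm, x in zip(svms, sample_features)]
    return max(set(preds), key=preds.count)
```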

Reinforcement Learning-Based Intelligent Decision-Making for Communication Parameters

  • Xie, Xia;Dou, Zheng;Zhang, Yabin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2942-2960 / 2022
  • The core of cognitive radio is the problem of intelligent decision-making for communication parameters, the objective of which is to find the most appropriate parameter configuration to optimize transmission performance. Current algorithms have the disadvantages of high dependence on prior knowledge, a large amount of computation, and high complexity. We propose a new decision-making model by making full use of the interactivity of reinforcement learning (RL) and applying the Q-learning algorithm. By simplifying the decision-making process, we avoid large-scale RL, reduce complexity, and improve timeliness. The proposed model is able to find the optimal waveform parameter configuration for the communication system in complex channels without prior knowledge. Moreover, this model is more flexible than previous decision-making models. The simulation results demonstrate the effectiveness of our model: it not only exhibits better decision-making performance in AWGN channels than the traditional method, but also makes reasonable decisions in fading channels.
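
Because the paper simplifies the decision process to avoid large-scale RL, the core loop reduces to Q-learning over a discrete set of waveform configurations with feedback from measured transmission performance. The sketch below reflects that single-step (bandit-like) reading of the abstract; the channel-state encoding and reward definition are assumptions, not the paper's formulation.

```python
import random
from collections import defaultdict

class ParameterDecisionAgent:
    """Sketch of Q-learning over discrete waveform configurations (e.g. modulation
    order, coding rate, power level). Single-step decisions, so no discounting."""
    def __init__(self, configs, alpha=0.2, eps=0.1):
        self.configs = configs                    # list of candidate parameter tuples
        self.alpha, self.eps = alpha, eps
        self.Q = defaultdict(float)               # (channel_state, config index) -> value

    def choose(self, channel_state):
        if random.random() < self.eps:
            return random.randrange(len(self.configs))
        return max(range(len(self.configs)), key=lambda i: self.Q[(channel_state, i)])

    def feedback(self, channel_state, idx, reward):
        """`reward` could be measured throughput minus an error-rate penalty."""
        old = self.Q[(channel_state, idx)]
        self.Q[(channel_state, idx)] += self.alpha * (reward - old)
```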

Adaptive Fuzzy Neural Control of Unknown Nonlinear Systems Based on Rapid Learning Algorithm

  • Kim, Hye-Ryeong;Kim, Jae-Hun;Kim, Euntai;Park, Mignon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09b / pp.95-98 / 2003
  • In this paper, an adaptive fuzzy neural control of unknown nonlinear systems based on a rapid learning algorithm is proposed for optimal parameterization. We combine the advantages of fuzzy control and neural network techniques to develop an adaptive fuzzy control system that updates the nonlinear parameters of the controller. The Fuzzy Neural Network (FNN), which is constructed as an equivalent four-layer connectionist network, is able to learn to control a process by updating the membership functions. The free parameters of the adaptive fuzzy neural controller are adjusted on-line according to the control law and adaptive law so that the plant tracks a given trajectory, and their initial values are obtained by off-line preprocessing. In order to improve the convergence of the learning process, we propose a rapid learning algorithm which combines the error back-propagation algorithm with Aitken's $\Delta^2$ algorithm. The heart of this approach is to reduce the computational burden during the FNN learning process and to improve convergence speed. The simulation results for a nonlinear plant demonstrate the control effectiveness of the proposed system for optimal parameterization.
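
Aitken's $\Delta^2$ acceleration extrapolates the limit of a slowly converging sequence from three successive values. The sketch below shows the extrapolation on a plain scalar sequence; how the paper interleaves it with error back-propagation updates is not reproduced here.

```python
def aitken_delta_squared(x0, x1, x2, eps=1e-12):
    """Aitken's Delta-squared extrapolation from three successive values of a
    converging sequence. In the paper this acceleration is combined with error
    back propagation; here it is shown on a plain scalar sequence."""
    denom = x2 - 2.0 * x1 + x0
    if abs(denom) < eps:               # sequence already (nearly) converged
        return x2
    return x0 - (x1 - x0) ** 2 / denom

# Example: geometric sequence x_n = 1 - 0.5**n converging to 1
seq = [1 - 0.5**n for n in range(3)]
print(aitken_delta_squared(*seq))      # extrapolates to the limit 1.0
```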

A Study on Position Control of 2-Mass Resonant System Using Iterative Learning Control (반복 학습 제어를 이용한 2관성 공진계의 위치 제어에 관한 연구)

  • Lee, Hak-Sung;Moon, Seung-Bin
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.6 / pp.693-698 / 2004
  • In this paper, an iterative learning control method is applied to suppress the vibration of a 2-mass system which has a flexible coupling between a load and a motor. More specifically, conditions for the load speed without vibration are derived based on the steady-state condition, and the desired motor position trajectory is synthesized based on the relation between the load and motor speeds. Finally, a PD-type iterative learning control law is applied for the desired motor position trajectory. Since the learning law applied for the desired trajectory guarantees perfect tracking performance, the resulting load speed shows no vibration even when there exist model uncertainties. A modification to the learning law is also presented to suppress the undesired effects of an initial position error. The simulation results show the effectiveness of the proposed learning method.
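
A PD-type iterative learning update of the kind the paper applies has a standard form (gains and notation assumed here, not taken from the paper):

```latex
% PD-type ILC update (assumed gains K_p, K_d; theta_d is the desired motor position):
\[
  u_{k+1}(t) = u_k(t) + K_p\, e_k(t) + K_d\, \dot{e}_k(t),
  \qquad e_k(t) = \theta_d(t) - \theta_k(t)
\]
% The update is applied iteration by iteration, so the tracking error of the
% motor position is driven toward zero across repeated runs of the same trajectory.
```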