• Title/Summary/Keyword: neural network learning


An Efficient and Accurate Artificial Neural Network through Induced Learning Retardation and Pruning Training Methods Sequence

  • Bandibas, Joel;Kohyama, Kazunori;Wakita, Koji
    • Proceedings of the KSRS Conference / 2003.11a / pp.429-431 / 2003
  • The induced learning retardation method temporarily inhibits the artificial neural network's most active units from participating in the error-reduction process during training. This stimulates the less active units to contribute significantly to reducing the network error. However, some less active units are insensitive to this stimulation, which makes them almost useless; the network can then be pruned by removing them, making it smaller and more efficient. This study focuses on making the network more efficient and accurate by developing a training method that sequences induced learning retardation and pruning. The developed procedure results in faster learning and a more accurate artificial neural network for satellite image classification. (A minimal sketch of the idea follows below.)

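The following is a minimal sketch of the idea, not the authors' implementation: during an assumed retardation window the most active hidden units are inhibited from the error-reduction step, and units whose activation barely varies afterwards are pruned. The toy XOR task, the window, and the thresholds are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task standing in for the satellite-image classifier: XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

n_hidden = 8
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(6000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    err = y - out
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Induced learning retardation: during a temporary window, the most
    # active hidden units are inhibited from taking part in error
    # reduction, forcing the less active units to contribute.
    if 2000 <= epoch < 4000:                      # assumed retardation window
        inhibited = h.mean(axis=0) > np.percentile(h.mean(axis=0), 75)
        d_h[:, inhibited] = 0.0

    W2 += lr * h.T @ d_out; b2 += lr * d_out.sum(axis=0)
    W1 += lr * X.T @ d_h;   b1 += lr * d_h.sum(axis=0)

# Pruning: units whose activation barely varies across inputs contribute
# little despite the stimulation and are removed, leaving a smaller network.
h = sigmoid(X @ W1 + b1)
keep = h.std(axis=0) > 0.05                       # assumed activity threshold
W1, b1, W2 = W1[:, keep], b1[keep], W2[keep, :]
print(f"kept {int(keep.sum())} of {n_hidden} hidden units")
```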

The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen Ming-Kuen
    • Proceedings of the Korean Society for Quality Management Conference / 1998.11a / pp.696-704 / 1998
  • Artificial neural networks (ANNs) have been successfully applied in various areas, but how to establish a network effectively remains a critical problem. This study examines that problem extensively. First, ANNs were constructed with four different learning algorithms: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria. (A sketch of such a comparison follows below.)

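A hedged sketch of this kind of comparison for two of the four paradigms (backpropagation and simulated annealing), recording training RMS and wall-clock time. The toy data, network size, and hyperparameters are assumptions; the genetic-algorithm and tabu-search trainers are omitted for brevity.

```python
import time
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (64, 2))
y = np.sin(3 * X[:, :1]) * np.cos(2 * X[:, 1:])   # toy target function

def forward(w, X):
    W1, b1, W2, b2 = w
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rms(w):
    return float(np.sqrt(np.mean((forward(w, X) - y) ** 2)))

def init():
    return [rng.normal(0, 0.5, (2, 8)), np.zeros(8),
            rng.normal(0, 0.5, (8, 1)), np.zeros(1)]

def backprop(epochs=3000, lr=0.1):
    W1, b1, W2, b2 = init()
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        d_out = (h @ W2 + b2 - y) / len(X)        # gradient of mean squared error
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return [W1, b1, W2, b2]

def annealing(steps=20000, T0=0.1):
    w = init()
    cur = rms(w)
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-6           # cooling schedule
        cand = [p + rng.normal(0, 0.05, p.shape) for p in w]
        e = rms(cand)
        # Metropolis rule: always take improvements, sometimes worse moves.
        if e < cur or rng.random() < np.exp((cur - e) / T):
            w, cur = cand, e
    return w

for name, trainer in [("backpropagation", backprop), ("simulated annealing", annealing)]:
    t0 = time.perf_counter()
    w = trainer()
    print(f"{name:20s} training RMS = {rms(w):.4f}, time = {time.perf_counter() - t0:.2f} s")
```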

Intelligent Agent System by Self Organizing Neural Network

  • Cho, Young-Im
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.1468-1473 / 2005
  • In this paper, I propose INTAS, an intelligent agent system based on Kohonen's self-organizing neural network. INTAS creates each user's profile from the available information and, based on it, automatically groups learners into suitable learning communities using an unsupervised learning algorithm. In INTAS, grouping and learning are performed automatically in real time by multiagents, regardless of the number of learners. A new framework is proposed for generating the multiagents, and a new negotiation mode between them allows the multiagents to execute efficiently. (The grouping step is sketched below.)

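A minimal sketch of the grouping step, assuming random profile vectors and a small 1-D Kohonen map; the profile features, map size, and schedules are illustrative, not the INTAS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
profiles = rng.random((30, 5))        # 30 users x 5 assumed profile features

# A 1-D Kohonen map: each node is a prototype "learning community" profile.
nodes = rng.random((4, 5))

for t in range(200):
    lr = 0.5 * (1 - t / 200)          # decaying learning rate
    for p in profiles:
        winner = int(np.argmin(np.linalg.norm(nodes - p, axis=1)))
        for j in range(len(nodes)):   # unsupervised neighbourhood update
            nodes[j] += lr * np.exp(-abs(j - winner)) * (p - nodes[j])

# Each user joins the community whose prototype best matches their profile.
groups = [int(np.argmin(np.linalg.norm(nodes - p, axis=1))) for p in profiles]
print("community assignment per user:", groups)
```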

Precision Position Control of a Piezoelectric Actuator Using Neural Network (신경 회로망을 이용한 압전구동기의 정밀위치제어)

  • Kim, Hae-Seok;Lee, Byung-Ryong;Park, Kyu-Youl
    • Journal of the Korean Society for Precision Engineering / v.16 no.11 / pp.9-15 / 1999
  • A piezoelectric actuator is widely used in precision positioning applications because of its excellent positioning resolution. However, it lacks repeatability because of the inherently strong hysteresis between voltage and displacement. In this paper, a controller is proposed to compensate for the hysteresis nonlinearity. The controller is composed of a PID part and a neural network part connected in parallel. The output of the PID controller is used to teach the neural network controller by an unsupervised learning method; the PID controller also stabilizes the piezoelectric actuator at the beginning of the learning process, before the neural network controller has learned. After the learning process, the piezoelectric actuator is controlled mainly by the neural network controller. The excellent tracking performance of the proposed controller was verified by experiments and compared with a classical PID controller. (A sketch of the parallel scheme follows below.)

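A hedged sketch of the parallel scheme under stated assumptions: a toy first-order plant stands in for the piezoelectric actuator, the PID and neural-network outputs are summed, and the PID output acts as the network's teaching signal so the network gradually takes over. The gains, learning rate, and input features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

Kp, Ki, Kd = 2.0, 0.2, 0.1                      # assumed PID gains
W = rng.normal(0, 0.1, (3, 8))                  # NN: 3 inputs, 8 hidden units
V = rng.normal(0, 0.1, (8, 1))
lr = 0.005

y_out, integ, prev_e = 0.0, 0.0, 0.0
for step in range(4000):
    ref = 1.0                                   # step position reference
    e = ref - y_out
    integ += e
    u_pid = Kp * e + Ki * integ + Kd * (e - prev_e)
    prev_e = e

    x = np.array([[ref, y_out, e]])             # assumed NN input features
    h = np.tanh(x @ W)
    u_nn = float(h @ V)

    # The PID output is the NN's teaching signal: the NN is nudged to absorb
    # u_pid into its own output, so the PID's share of the control shrinks.
    d = u_pid
    V += lr * d * h.T
    W += lr * x.T @ (d * V.T * (1 - h ** 2))

    # Toy first-order plant standing in for the piezoelectric actuator.
    y_out += 0.05 * (u_pid + u_nn - y_out)

print(f"final position = {y_out:.3f}, residual PID effort = {u_pid:.4f}")
```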

A Learning Algorithm for a Recurrent Neural Network Based on Dual Extended Kalman Filter (두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬)

  • Song, Myung-Geun;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference / 2004.11c / pp.349-351 / 2004
  • The classical dynamic backpropagation learning algorithm suffers from slow learning and from the difficulty of determining learning parameters. The Extended Kalman Filter (EKF) is an effective state estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a Dual Extended Kalman Filter (DEKF) for a Fully Recurrent Neural Network (FRNN). The DEKF learning algorithm gives the minimum-variance estimate of the weights and the hidden outputs. The proposed algorithm is applied to the identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm. (The weight-filter half of the idea is sketched below.)

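A minimal sketch of the weight-filter half of the DEKF idea: the network weights are treated as the state of a nonlinear system and updated with an extended Kalman filter. A single recurrent neuron stands in for the FRNN, the companion state filter is omitted (the measured output substitutes for the hidden state), and the noise covariances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed nonlinear SISO system to identify (matches the model structure).
u = rng.uniform(-1, 1, 400)
y = np.zeros(401)
for k in range(400):
    y[k + 1] = np.tanh(0.6 * y[k] + 0.4 * u[k])

w = np.zeros(2)               # weights: recurrent, input
P = np.eye(2) * 10.0          # weight-estimate covariance
Q = np.eye(2) * 1e-6          # assumed process noise
R = 1e-2                      # assumed measurement noise

for k in range(400):
    y_hat = np.tanh(w[0] * y[k] + w[1] * u[k])
    H = (1 - y_hat ** 2) * np.array([y[k], u[k]])   # output Jacobian w.r.t. w

    # EKF measurement update: minimum-variance correction of the weights.
    P = P + Q
    S = float(H @ P @ H) + R
    K = P @ H / S
    w = w + K * (y[k + 1] - y_hat)
    P = P - np.outer(K, H @ P)

print("estimated weights:", np.round(w, 3), "(true values 0.6 and 0.4)")
```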

Learning Control of Inverted Pendulum Using Neural Networks (신경회로망을 이용한 도립전자의 학습제어)

  • Lee, Jea-Kang;Kim, Il-Hwan
    • Journal of Industrial Technology / v.24 no.A / pp.99-107 / 2004
  • This paper considers reinforcement learning control with a self-organizing map. Reinforcement learning uses the observable states of the objective system, together with signals from the interaction between the system and its environment, as input data. For fast learning in neural network training, it is necessary to reduce the amount of learning data, so we use a self-organizing map to partition the observable states; partitioning the states reduces the number of samples used to train the neural networks. A neural dynamic programming design method is used for the controller. To evaluate the designed reinforcement learning controller, an inverted pendulum on a cart is simulated. The designed controller consists of a self-organizing map connected in series with two multi-layer feed-forward neural networks. (The partitioning step is sketched below.)

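A minimal sketch of the state-partitioning step: states observed from a crudely simulated cart-pole are compressed onto a small self-organizing map, so the controller networks would train on a handful of prototypes rather than every raw sample. The physics constants, rollout policy, and map size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Collect (angle, angular velocity) states from random-action rollouts of a
# crude cart-pole, standing in for the observable states of the real system.
states = []
for _ in range(50):
    th, om = rng.uniform(-0.2, 0.2), 0.0
    for _ in range(100):
        force = rng.choice([-10.0, 10.0])
        om += 0.02 * (9.8 * np.sin(th) + force * np.cos(th))
        th += 0.02 * om
        states.append((th, om))
states = np.array(states)

# Fit a small 1-D self-organizing map over the observed states.
proto = rng.normal(0.0, 0.1, (16, 2))
for t in range(3000):
    lr = 0.3 * (1 - t / 3000)                    # decaying learning rate
    s = states[rng.integers(len(states))]
    win = int(np.argmin(np.linalg.norm(proto - s, axis=1)))
    for j in range(len(proto)):                  # neighbourhood update
        proto[j] += lr * np.exp(-abs(j - win)) * (s - proto[j])

print(f"{len(states)} raw states partitioned into {len(proto)} SOM cells;")
print("the controller networks then train on cells instead of raw samples")
```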

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • Deep learning models are a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised models such as deep belief networks, because the supervised models have produced striking applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent: it calculates the gradient of an error function with respect to all the weights in the network, and the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images; these days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; the pooling layers simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic improvements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem (vanishing and exploding gradients). The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has proved possible to incorporate long short-term memory units (LSTMs) into RNNs; LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas. (A small numpy example of the three convolutional ideas follows below.)
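A minimal numpy sketch of the three convolutional ideas just described: a 3x3 local receptive field slides over the image, one shared kernel and bias detect the same feature at every position, and max-pooling then condenses the feature map. The image and kernel sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
image = rng.random((8, 8))
kernel = rng.normal(0, 1, (3, 3))   # shared weights
bias = 0.1                          # shared bias

# Convolution: each output neuron sees only a 3x3 local receptive field,
# and every position reuses the same kernel and bias.
feat = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        feat[i, j] = np.sum(image[i:i+3, j:j+3] * kernel) + bias
feat = np.maximum(feat, 0.0)        # nonlinearity (ReLU)

# Max-pooling: condense each 2x2 block to its strongest response,
# simplifying the information in the convolutional output.
pooled = feat.reshape(3, 2, 3, 2).max(axis=(1, 3))
print("feature map", feat.shape, "-> pooled", pooled.shape)
```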

Arabic Text Recognition with Harakat Using Deep Learning

  • Ashwag, Maghraby;Esraa, Samkari
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.41-46 / 2023
  • Because of the significant role that harakat (vowel diacritics) play in Arabic text, this paper uses deep learning to extract Arabic text together with its harakat from an image. Convolutional neural network and recurrent neural network algorithms were applied to a dataset of 110 images, each representing one word. The results showed the ability to extract some letters with their harakat. (A shape-level sketch of such a pipeline follows below.)
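A hedged, shape-level sketch of such a CNN-plus-RNN pipeline: convolution and pooling turn the word image into a sequence of column feature vectors, and a simple recurrent cell reads that sequence to emit per-column class predictions. The image size, network sizes, and 40-class inventory are assumptions; the network is untrained, so the output only illustrates the data flow.

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((32, 128))                      # one word image (assumed size)

# "CNN" stage: a single shared 3x3 kernel plus 2x2 pooling, for brevity.
k = rng.normal(0, 1, (3, 3))
feat = np.array([[max(np.sum(img[i:i+3, j:j+3] * k), 0.0)
                  for j in range(126)] for i in range(30)])
feat = feat.reshape(15, 2, 63, 2).max(axis=(1, 3))   # -> 15 x 63 feature map

# "RNN" stage: a simple recurrent cell scans the 63 columns left to right
# (for Arabic, right-to-left would just reverse the sequence).
Wx = rng.normal(0, 0.1, (15, 32))
Wh = rng.normal(0, 0.1, (32, 32))
Wo = rng.normal(0, 0.1, (32, 40))                # 40 output classes (assumed)
h = np.zeros(32)
outputs = []
for col in feat.T:                               # one column = one time step
    h = np.tanh(col @ Wx + h @ Wh)
    outputs.append(int(np.argmax(h @ Wo)))
print("per-column class predictions:", outputs[:10], "...")
```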

Blending Process Optimization Using Fuzzy Set Theory and Neural Networks (퍼지 및 신경망을 이용한 Blending Process의 최적화)

  • 황인창;김정남;주관정
    • Proceedings of the Korean Society of Precision Engineering Conference / 1993.10a / pp.488-492 / 1993
  • This paper proposes a new approach to optimizing a blending process with a neural network. The method is based on the error-backpropagation learning algorithm. Since a neural network can model an arbitrary nonlinear mapping, it is used as a system solver, and a fuzzy membership function is used in parallel with the network to minimize the difference between the measured value and the network's input value. As a result, the reliability and stability of the blending process can be guaranteed with the help of the neural network and the fuzzy membership function. (A sketch of the combination follows below.)

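A hedged sketch of the combination described above, under assumptions: a small backpropagation network models a toy blend-ratio-to-quality mapping as the "system solver", and a triangular fuzzy membership function weights the correction between a measured value and the network's input value.

```python
import numpy as np

rng = np.random.default_rng(8)

# Assumed toy process: two component ratios -> one quality measure.
ratios = rng.random((100, 2))
quality = np.tanh(ratios @ np.array([1.5, -0.8]))[:, None]

# Backpropagation network used as the "system solver".
W1 = rng.normal(0, 0.5, (2, 6)); W2 = rng.normal(0, 0.5, (6, 1))
for _ in range(3000):
    h = np.tanh(ratios @ W1)
    d_out = (h @ W2 - quality) / len(ratios)      # mean-squared-error gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * ratios.T @ d_h

# Fuzzy membership function used in parallel: the closer the measured value
# is to the network's input value, the more of the difference is applied.
def membership(diff, width=0.2):                  # assumed triangular shape
    return max(0.0, 1.0 - abs(diff) / width)

measured, nn_input = 0.42, 0.50                   # example values (assumed)
mu = membership(measured - nn_input)
corrected = nn_input + mu * (measured - nn_input)
pred = float(np.tanh(np.array([[corrected, 0.5]]) @ W1) @ W2)
print(f"membership = {mu:.2f}, corrected input = {corrected:.3f}, "
      f"predicted quality = {pred:.3f}")
```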

A Fuzzy Neural Network Using Fuzzy Learning Rules (퍼지 학습 규칙을 이용한 퍼지 신경회로망)

  • 김용수
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1997.11a / pp.180-184 / 1997
  • This paper presents a fuzzy neural network that utilizes a fuzzified Kohonen learning rule, which uses a fuzzy membership value, a function of the iteration count, and an intra-membership value instead of a fixed learning rate. The IRIS data set is used to test the fuzzy neural network. The test results show that the performance of the fuzzy neural network depends on k and the vigilance parameter T. (A minimal sketch of the update follows below.)

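A minimal sketch of a fuzzified Kohonen update of this kind: a fuzzy-c-means-style membership value, damped by the iteration count, replaces the fixed learning rate. The data, map size, and fuzziness exponent m are assumptions (the paper's k and vigilance parameter T are not modelled here).

```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.random((60, 2))
proto = rng.random((4, 2))
m = 2.0                                          # fuzziness exponent (assumed)

for t in range(1, 501):
    x = data[rng.integers(len(data))]
    d = np.linalg.norm(proto - x, axis=1) + 1e-9
    # Fuzzy membership of x in each prototype (as in fuzzy c-means).
    u = 1.0 / ((d[:, None] / d[None, :]) ** (2 / (m - 1))).sum(axis=1)
    # Membership combined with an iteration decay replaces the learning rate.
    for j in range(len(proto)):
        proto[j] += (u[j] ** m / t) * (x - proto[j])

print("final prototypes:\n", np.round(proto, 3))
```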