
Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks  

Sakai, Masao (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University)
Homma, Noriyasu (Dept. of Radiological Tech., College of Medical Science, Tohoku University)
Abe, Kenichi (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University)
Publication Information
Transactions on Control, Automation and Systems Engineering, vol. 4, no. 2, 2002, pp. 124-129
Abstract
This paper demonstrates that the largest Lyapunov exponent λ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent λ as a function of the network parameters, such as the connection weights and the thresholds of the neural activation functions. With a gradient method, a direct calculation to minimize the squared error (λ − λ^obj)^2, where λ^obj is the desired exponent value, requires a collection of gradients through time, obtained by a recursive calculation from past to present values. This collection is computationally expensive and, because of chaotic instability, makes control of the exponent unstable for networks with chaotic dynamics. The stochastic formulation derived in this paper approximates the collection of gradients without the recursive calculation. The approximation yields not only a faster calculation of the gradient but also stable control of chaotic dynamics. Owing to the non-recursive calculation, which is independent of the time evolution, the running time of the approximation grows only as about N^2, compared with N^5·T for the direct calculation method. Simulation studies also show that the approximation is robust with respect to the network size and that the proposed method can control chaotic dynamics in recurrent neural networks efficiently.
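The abstract does not reproduce the authors' stochastic formulation, but the control loop it describes can be illustrated. The Python sketch below is an assumption, not the paper's method: it estimates the largest Lyapunov exponent of a simple tanh recurrent map x_{t+1} = tanh(W x_t) by tangent-vector renormalization, then drives λ toward a desired value λ^obj by gradient descent on (λ − λ^obj)^2, using a finite-difference gradient with backtracking in place of the paper's stochastic gradient approximation.

```python
import numpy as np

def largest_lyapunov(W, T=400, burn=100, seed=0):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W x_t)
    by evolving a tangent vector and renormalizing it at every step."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = 0.1 * rng.standard_normal(n)        # network state
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)                  # unit tangent vector
    log_growth = 0.0
    for t in range(burn + T):
        x = np.tanh(W @ x)
        J = (1.0 - x ** 2)[:, None] * W     # Jacobian of the map at this step
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm
        if t >= burn:                       # discard the transient
            log_growth += np.log(norm)
    return log_growth / T

def control_exponent(W, lam_obj, steps=15, lr=0.5, eps=1e-4):
    """Gradient descent on (lambda - lam_obj)^2 with backtracking.
    A finite-difference gradient stands in here for the paper's
    stochastic approximation of d(lambda)/d(weights)."""
    W = W.copy()
    err = largest_lyapunov(W) - lam_obj
    for _ in range(steps):
        lam = err + lam_obj
        # numerical gradient of lambda with respect to each weight
        G = np.zeros_like(W)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                Wp = W.copy()
                Wp[i, j] += eps
                G[i, j] = (largest_lyapunov(Wp) - lam) / eps
        # backtracking keeps |err| non-increasing despite chaotic sensitivity
        step = lr
        while step > 1e-6:
            W_try = W - step * 2.0 * err * G
            err_try = largest_lyapunov(W_try) - lam_obj
            if abs(err_try) < abs(err):
                W, err = W_try, err_try
                break
            step *= 0.5
    return W, err + lam_obj
```

For example, starting from a random 4-neuron weight matrix, `control_exponent(W0, lam_obj=-0.2)` adjusts the weights until the estimated exponent approaches −0.2. The finite-difference gradient costs O(N^2) exponent evaluations per step, which is exactly the kind of expense the paper's non-recursive stochastic formulation is designed to avoid.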
Keywords
recurrent neural networks; chaos; Lyapunov exponent; stochastic analysis
References
1. Y. Nishikawa; S. Kitamura; K. Abe, Neural networks as applied to measurement and control (in Japanese).
2. N. Homma; K. Kitagawa; K. Abe, "Effect of complexity on learning ability of recurrent neural networks," Proc. of Artificial Life and Robotics.
3. K. S. Narendra; K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. Neural Networks.
4. J. C. Principe; J. Kuo, "Dynamic modeling chaotic time series with neural networks," in G. Tesauro et al. (eds.), Neural Information Processing Systems.
5. M. Jordan, "Generic constraints on underspecified target trajectories," Proc. International Joint Conference on Neural Networks (IJCNN-89).
6. N. Homma; M. Sakai; K. Abe; H. Takeda, "Complexity control method of dynamics in recurrent neural networks" (in Japanese), Trans. SICE.
7. J. C. Principe; A. Rathie; J. Kuo, "Prediction of chaotic time series with neural networks and the issue of dynamic modeling," International Journal of Bifurcation and Chaos.
8. B. Irie; S. Miyake, "Capabilities of three-layered perceptrons," Proc. of IEEE ICNN 88.
9. N. Homma; M. Sakai; K. Abe; H. Takeda, "Control method of the Lyapunov exponents for recurrent neural networks," Proc. of 14th IFAC World Congress.
10. K. Hirasawa; X. Wang; J. Murata; J. Hu; C. Jin, "Universal learning networks and its application to chaos control," Neural Networks.
11. A. Björck, "Solving linear least squares problems by Gram-Schmidt orthogonalization," BIT.
12. G. Deco; B. Schürmann, "Dynamic modeling chaotic time series," Computational Learning Theory and Neural Learning Systems, vol. 4 of Making Learning Systems Practical.
13. M. Ueda; Y. Okada; Y. Yoshitani, Probability and Statistics.
14. A. Wolf; J. B. Swift; H. L. Swinney; J. A. Vastano, "Determining Lyapunov exponents from a time series," Physica D.