• Title/Summary/Keyword: Calculus of variation


Comparison Analysis of Behavior between Differential Equation and Fractional Differential Equation in the Van der Pol Equation (Van der Pol 발진기에서의 미분방정식과 Fractional 미분방정식의 거동 비교 해석)

  • Bae, Young-Chul
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.1
    • /
    • pp.81-86
    • /
    • 2016
  • Three hundred years ago, the fractional differential equation, one of the concepts of fractional calculus, was introduced. Since then, many researchers have made continued efforts to apply it to control engineering, mathematics, and physics. In this paper, the dynamic equation represented by the Van der Pol oscillator is formulated with both an integer order and a fractional (real) order, as sketched below. The paper then compares the integer-order and real-order behaviors through time series and phase portraits as the parameter value of the real order is varied.
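A minimal sketch of the two model forms being compared, assuming the standard Van der Pol oscillator and a real-order derivative written with a generic operator D^q; the particular fractional operator (e.g. Caputo) and the term it replaces are assumptions for illustration, not taken from the paper:

```latex
% Integer-order Van der Pol equation with damping parameter \mu
\ddot{x}(t) - \mu \bigl(1 - x(t)^2\bigr)\, \dot{x}(t) + x(t) = 0

% Fractional-order variant: the damping term uses a derivative of real order q
% (the choice of operator and of the replaced term is an assumption for illustration)
\ddot{x}(t) - \mu \bigl(1 - x(t)^2\bigr)\, D^{q} x(t) + x(t) = 0, \qquad 0 < q \le 1
```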

An on-line learning algorithm for recurrent neural networks using variational method (변분법을 이용한 재귀신경망의 온라인 학습)

  • Oh, Won-Geun;Suh, Byung-Suhl
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.2 no.1
    • /
    • pp.21-25
    • /
    • 1996
  • In this paper we suggest a general-purpose RNN training algorithm derived from optimal control concepts and variational methods. First, learning is regarded as an optimal control problem; then, using variational methods, we obtain the optimal weights as the solution of a two-point boundary-value problem, as sketched below. Finally, a modified gradient descent algorithm is applied to the RNN for on-line training. This algorithm is intended for learning complex dynamic mappings between time-varying I/O data. It is useful for nonlinear control, identification, and signal processing applications of RNNs, because its storage requirement is low and on-line learning is possible. Simulation results for a nonlinear plant identification are illustrated.
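A minimal sketch of the optimal-control formulation the abstract refers to; the cost functional J, Hamiltonian H, and costate \lambda below are generic symbols introduced for illustration, not the paper's notation:

```latex
% Learning viewed as optimal control: minimize a cost functional over the weights w,
% subject to the network (state) dynamics \dot{x} = f(x, w, t)
J = \int_{t_0}^{t_f} L\bigl(x(t), w(t), t\bigr)\, dt

% Making the augmented functional stationary (calculus of variations) gives a
% two-point boundary-value problem in the state x and costate \lambda:
H = L + \lambda^{\top} f, \qquad
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\frac{\partial H}{\partial w} = 0
```

The modified gradient descent step mentioned in the abstract can then be read as updating w along -∂H/∂w on-line rather than solving the stationarity condition exactly.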


Catastrophe analysis of active-passive mechanisms for shallow tunnels with settlement

  • Yang, X.L.;Wang, H.Y.
    • Geomechanics and Engineering
    • /
    • v.15 no.1
    • /
    • pp.621-630
    • /
    • 2018
  • In this note a comprehensive passive-active mode for describing the limit failure of a circular shallow tunnel with settlement is put forward to predict catastrophic stability during geotechnical construction. Since the surrounding soil mass around the tunnel roof is not homogeneous, the calculus of variations is used together with virtual work formulae to obtain several curve functions describing the failure shapes in the different soil layers (a sketch of the formulation follows this entry). Referring to a simple-form power-law failure criterion based on numerous experiments, a numerical procedure combining the upper bound theorem with stochastic medium theory is applied to the optimal analysis of shallow-buried tunnel failure. With the help of functional catastrophe theory, this work presents a more accurate and optimal failure profile than previous work. Lastly, the note discusses the effects of the parameters of the new yield rule and of the soil mechanical coefficients on the failure mechanisms. The extent of the failure block becomes smaller as the parameter A increases, and the range of failure soil mass tends to decrease as the unit weight of the soil and the tunnel radius decrease, which is consistent with geomechanics and with practical engineering cases.
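A minimal sketch of the kind of upper-bound variational statement the abstract describes; the failure curve y(x) and the dissipation and work terms below are generic placeholders, and the power-law form shown is a commonly used one rather than the paper's exact criterion:

```latex
% A simple-form power-law (nonlinear) failure criterion, one common parameterization:
\tau = c_0 \left( 1 + \frac{\sigma_n}{\sigma_t} \right)^{1/m}

% Upper-bound (virtual work) balance over an assumed failure curve y(x): internal
% energy dissipation D equals the external work W of soil weight and surface loads.
% Extremizing the functional over admissible curves yields the Euler equation whose
% solutions are the layer-wise failure shapes mentioned in the abstract.
\delta \int_{x_1}^{x_2} \Bigl[ D\bigl(y, y'\bigr) - W\bigl(y\bigr) \Bigr] \, dx = 0
\quad\Longrightarrow\quad
\frac{\partial F}{\partial y} - \frac{d}{dx}\,\frac{\partial F}{\partial y'} = 0,
\qquad F = D - W
```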

Stress Analysis of Orthogonally Stiffened Rectangular Plates by the Laplace Transformation (직교보강재(直交補强材)가 붙은 구형평판(矩形平板)에 있어서의 응력해석(應力解析))

  • Yim, S.J.;Kim, J.D.
    • Bulletin of the Society of Naval Architects of Korea
    • /
    • v.13 no.3
    • /
    • pp.11-19
    • /
    • 1976
  • Grillages are abundant in ship structures and in many other types of structures such as bridges and building floors. Clarkson has shown that plated grillages can be satisfactorily analyzed as gridworks if an appropriate effective breadth is taken into account. It has also been pointed out previously, by Nielsen, that grillage calculations can be simplified by use of the Laplace transformation. In this paper, it is assumed that the torsional rigidity of the members and the axial load are negligible, and that the girders have identical scantlings and spacing, as do the stiffeners. Grillages composed of both-end-fixed girders and both-end-hinged stiffeners, subjected only to uniform normal loads, are then investigated. The calculus of variations is used to set up the differential equations, and the Laplace transformation is applied to solve them (a sketch of this step follows this entry). The program has been tested on the FACOM 28, and the results show good agreement with those of STRESS, which was developed at M.I.T. The amount of data input and the computing time are much less than those of STRESS. However, this program has so many restrictions that it is urgent to extend it to grillage problems with arbitrary loading and boundary conditions.
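A minimal sketch of the two steps named in the abstract (variational set-up, then Laplace transformation), written for a single uniform beam of flexural rigidity EI under a distributed load q(x); the actual grillage coupling terms and boundary conditions of the paper are not reproduced here:

```latex
% Calculus of variations: stationarity of the potential energy of one beam
% gives the familiar Euler equation (the beam equation).
\delta \int_0^{\ell} \Bigl[ \tfrac{1}{2}\, EI\, w''(x)^2 - q(x)\, w(x) \Bigr] dx = 0
\quad\Longrightarrow\quad
EI\, w''''(x) = q(x)

% Laplace transformation in x turns the differential equation into an algebraic one
% in W(s), with the unknown end values supplied by the boundary conditions:
EI \Bigl[ s^4 W(s) - s^3 w(0) - s^2 w'(0) - s\, w''(0) - w'''(0) \Bigr] = Q(s)
```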


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPU". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities; beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by the chain rule of calculus (a small illustration follows this entry). First of all, the convenience of coding is, from most to least convenient, CNTK, Tensorflow, and Theano. The criterion is based simply on the length of the code; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding such as in Theano we can implement and test any new deep learning models or new search methods that we can think of. Regarding execution speed, our assessment is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate each framework. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. For someone learning deep learning models, the availability of sufficient examples and references matters as well.
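The abstract's central point, that each framework represents a model as a computational graph and differentiates it with the chain rule, can be shown with a minimal self-contained sketch. The Node class below is a hypothetical illustration written for this listing, not the API of Theano, Tensorflow, or CNTK; in those frameworks the analogous operation is performed over their own symbolic graphs (e.g. theano.grad or tf.gradients).

```python
# Minimal reverse-mode automatic differentiation over a tiny computational graph.
# Each Node records its value and (parent, local_derivative) edges; backward()
# accumulates d(output)/d(node) by multiplying local derivatives along the edges
# (the chain rule). A real framework would traverse the graph in topological order.

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # sequence of (parent_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        # Chain rule: upstream gradient times the local derivative of each edge
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)


# Example graph: y = x * w + b, then backpropagate dy/dx, dy/dw, dy/db.
x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = x * w + b
y.backward()
print(y.value, x.grad, w.grad, b.grad)   # 7.0 3.0 2.0 1.0
```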