• Title/Summary/Keyword: gradient descent optimization


Selecting Fuzzy Rules for Pattern Classification Systems

  • Lee, Sang-Bum;Lee, Sung-joo;Lee, Mai-Rey
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.2 no.2
    • /
    • pp.159-165
    • /
    • 2002
  • This paper proposes a GA and gradient descent-based method for choosing an appropriate set of fuzzy rules for classification problems. The aim of the proposed method is to find a minimum set of fuzzy rules that can correctly classify all training patterns. The number of inference rules and the shapes of the membership functions in the antecedent parts of the fuzzy rules are determined by the genetic algorithm. The real numbers in the consequent parts of the fuzzy rules are obtained through the descent method. A fitness function is used to maximize the number of correctly classified patterns and to minimize the number of fuzzy rules. A solution obtained by the genetic algorithm is a set of fuzzy rules, and its fitness is determined by the two objectives, making this a combinatorial optimization problem. Computer simulation results demonstrate the effectiveness of the proposed method.
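As a minimal illustration of the two-objective fitness described in this abstract, the following sketch rewards correctly classified patterns and penalizes rule count; the weights `w_correct` and `w_rules` are hypothetical, not values from the paper.

```python
# Hypothetical two-objective fitness for a GA over fuzzy rule sets:
# w_correct and w_rules are illustrative weights, not from the paper.
def fitness(num_correct, num_rules, w_correct=10.0, w_rules=1.0):
    """Higher is better: reward correct classifications, penalize rules."""
    return w_correct * num_correct - w_rules * num_rules
```

Under these weights, a rule set classifying 95 of 100 patterns with 8 rules scores higher than one classifying 96 patterns with 30 rules, reflecting the trade-off between accuracy and compactness.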

Nonlinear Prediction of Time Series Using Multilayer Neural Networks of Hybrid Learning Algorithm (하이브리드 학습알고리즘의 다층신경망을 이용한 시급수의 비선형예측)

  • 조용현;김지영
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1281-1284
    • /
    • 1998
  • This paper proposes an efficient time-series prediction method for nonlinear dynamical discrete-time systems using multilayer neural networks with a hybrid learning algorithm. The proposed learning algorithm is a hybrid backpropagation algorithm based on steepest descent for high-speed optimization and dynamic tunneling for global optimization. The proposed algorithm was applied to the y00 samples of 700 sequences to predict the next 100 samples. The simulation results show that the proposed algorithm achieves better convergence and prediction performance than the backpropagation algorithm based on plain gradient descent for multilayer neural networks.
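The hybrid of steepest descent and dynamic tunneling summarized above can be sketched as follows. This is a toy version with an illustrative learning rate and jump scale, not the authors' exact tunneling operator: it descends along the gradient and makes a random jump whenever the gradient nearly vanishes.

```python
import numpy as np

def hybrid_descent(f, grad, x0, lr=0.1, steps=200, tol=1e-6, rng=None):
    # Toy sketch: steepest descent, with a random "tunneling" jump
    # whenever the gradient nearly vanishes (a stand-in for the paper's
    # dynamic tunneling; the actual method differs).
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:                      # stuck at a stationary point
            x = x + rng.normal(scale=1.0, size=x.shape)  # tunnel away
            continue
        x = x - lr * g                                   # steepest-descent step
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

Keeping the best iterate seen so far means a tunneling jump can only help, never discard progress.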


Design of Optimal Gains on Microprocessor-Based Voltage Source Inverter-Induction Motor System (마이크로프로세서에 의한 전압형 인버터-유도전동기 시스템의 최적이득 설계)

  • 박민호;전태원;민병훈
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.37 no.6
    • /
    • pp.368-375
    • /
    • 1988
  • This paper is concerned with the design of the optimal controller gains in the speed control system for an induction motor controlled by a microprocessor. The system is modelled with a discrete-time state equation, considering the time delay, to facilitate the optimization techniques. Using the conjugate gradient method as the optimization technique, the optimal gains, i.e. the gains which give the best transient characteristics, are derived. At the optimal gains obtained, the theoretical transient responses are verified against experimental ones on a 5HP induction motor drive system.
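For a quadratic objective, the conjugate gradient method mentioned above reduces to the standard linear CG iteration; a generic textbook sketch (not the authors' gain-design code) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=100):
    # Standard linear conjugate gradient for minimizing the quadratic
    # 0.5*x'Ax - b'x with A symmetric positive definite (equivalently,
    # solving Ax = b); a generic sketch, not the paper's code.
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, float)
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next direction, A-conjugate to p
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most as many iterations as the problem dimension, which is why it suits small gain-design problems.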


Regularized Optimization of Collaborative Filtering for Recommander System based on Big Data (빅데이터 기반 추천시스템을 위한 협업필터링의 최적화 규제)

  • Park, In-Kyu;Choi, Gyoo-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.1
    • /
    • pp.87-92
    • /
    • 2021
  • Bias, variance, error and learning are important factors for performance in modeling a big-data-based recommendation system. The recommendation model in this system must reduce complexity while maintaining explanatory power. In addition, the sparsity of the dataset and the prediction accuracy of the system tend to be inversely proportional to each other. Therefore, a product recommendation model is proposed that learns the similarity between products by factorizing the sparse dataset. In this paper, the generalization ability of the model is improved by applying max-norm regularization as an optimization method for the loss function of this model. The solution is a stochastic projected gradient descent method. Experiments confirmed that, as the data became sparser, the proposed regularization method was relatively more effective than the existing method.
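The max-norm-regularized projected SGD described above can be sketched as follows. This toy matrix factorization uses an illustrative rank, radius, and learning rate (not the paper's values): it takes stochastic gradient steps on observed entries, then projects each factor row back onto the max-norm ball.

```python
import numpy as np

def project_max_norm(M, radius):
    # Project each row of M onto the Euclidean ball of the given radius;
    # the max-norm constraint bounds every row's norm.
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return M * scale

def mf_projected_sgd(R, mask, k=2, radius=2.0, lr=0.05, epochs=500, seed=0):
    # Toy sketch (illustrative hyperparameters) of matrix factorization
    # trained by stochastic projected gradient descent under a max-norm
    # constraint: SGD step per observed entry, then projection of U and V.
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * err * V[j]
            V[j] += lr * err * U[i]
        U = project_max_norm(U, radius)
        V = project_max_norm(V, radius)
    return U, V
```

The projection keeps every row norm within the radius, which is what gives the model its bounded-complexity (regularized) behavior on sparse data.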

Stochastic Gradient Descent Optimization Model for Demand Response in a Connected Microgrid

  • Sivanantham, Geetha;Gopalakrishnan, Srivatsun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.1
    • /
    • pp.97-115
    • /
    • 2022
  • A smart power grid is a user-friendly system that transforms the traditional electric grid into one that operates in a cooperative and reliable manner. Demand Response (DR) is one of the important components of the smart grid. DR programs enable end-user participation: users can communicate with the electricity service provider, shape their daily energy consumption patterns, and reduce their consumption costs. The increasing demand for electricity owing to a growing population stresses the need for optimal usage of electricity and for cheap, renewable alternative sources. Solar and wind energy are currently the most promising alternative sources because of their renewable nature and low-cost implementation. The proposed work models a smart home with renewable energy units. The random nature of renewable sources like wind and solar brings uncertainty to the developed model, so a stochastic dual descent optimization method is used to optimize it. The proposed work is validated using simulation results. From the results it is concluded that the proposed work achieves a balanced usage of grid power and the renewable energy units. The work also optimizes the daily consumption pattern, thereby reducing the consumption cost for end users of electricity.

Application of Numerical Optimization Technique to the Design of Fans (송풍기 설계를 위한 수치최적설계기법의 응용)

  • Kim, K.Y.;Choi, J.H.;Kim, T.J.;Rew, H.S.
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.7 no.4
    • /
    • pp.566-576
    • /
    • 1995
  • A computational code has been developed to design axial fans by numerical optimization techniques incorporated with a flow analysis code solving the three-dimensional Navier-Stokes equations. The steepest descent method and the conjugate gradient method are used to find the search direction in the design space, and the golden section method is used for one-dimensional search. To solve the constrained optimization problem, the sequential unconstrained minimization technique (SUMT) is used with imposed quadratic extended interior penalty functions. In the optimization of the two-dimensional cascade design, the ratio of drag coefficient to lift coefficient is minimized over design variables such as maximum thickness, maximum ordinate of camber, and chordwise position of maximum ordinate. In the application of this numerical optimization technique to the design of an axial fan, the efficiency is maximized over design variables related to the sweep angle, distributed by a quadratic function along the hub to the tip of the fan.
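The golden section method used for the one-dimensional search above is a standard bracketing routine; a generic sketch (assuming a unimodal objective on [a, b], not the authors' code) is:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-8):
    # Standard golden-section search for a unimodal f on [a, b]:
    # shrink the bracket by the inverse golden ratio each iteration,
    # keeping the subinterval that must contain the minimum.
    inv_phi = (math.sqrt(5) - 1) / 2        # ~0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Because the bracket shrinks by a fixed ratio per step, the cost of each line search inside the outer descent loop is predictable, which matters when every function evaluation is a flow-analysis run.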


Adaptive stochastic gradient method under two mixing heterogenous models (두 이종 혼합 모형에서의 수정된 경사 하강법)

  • Moon, Sang Jun;Jeon, Jong-June
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1245-1255
    • /
    • 2017
  • Online learning is a process of obtaining the solution to a given objective function as data accumulate in real time or in batch units. The stochastic gradient descent method is one of the most widely used methods for online learning. This method is not only easy to implement, but also has good properties of the solution under the assumption that the generating model of the data is homogeneous. However, the stochastic gradient method can severely mislead online learning when homogeneity is actually violated. We assume that there are two heterogeneous generating models behind the observations, and propose a new stochastic gradient method that mitigates the problem of heterogeneous models. We introduce a robust mini-batch optimization method using statistical tests and investigate the convergence radius of the solution of the proposed method. Moreover, the theoretical results are confirmed by numerical simulations.
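The test-based robust mini-batch idea above might be sketched as follows. This stand-in uses a simple z-score screen on residuals (the paper's actual statistical test differs) to drop samples that appear to come from a second generating model before averaging the gradients.

```python
import numpy as np

def robust_sgd_step(theta, X, y, lr=0.1, z_thresh=2.5):
    # Toy stand-in for test-based mini-batch screening: for a linear
    # model with squared error, flag samples whose residuals are
    # statistical outliers (z-score test), then average gradients
    # over the remaining samples only.
    resid = X @ theta - y
    z = (resid - resid.mean()) / (resid.std() + 1e-12)
    keep = np.abs(z) < z_thresh
    grads = 2 * resid[keep, None] * X[keep]   # per-sample gradients
    return theta - lr * grads.mean(axis=0)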

Water Flowing and Shaking Optimization

  • Jung, Sung-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.2
    • /
    • pp.173-180
    • /
    • 2012
  • This paper proposes a novel optimization algorithm inspired by the flowing and shaking behaviors of water in a vessel. Water drops in our algorithm flow in the gradient descent direction and are occasionally shaken to escape local optimum areas when most water drops have fallen into them. These flowing and shaking operations allow our algorithm to quickly approach the global optimum without getting stuck in local optimum areas. We evaluated our algorithm on four function optimization problems and compared its results with those of particle swarm optimization. Experimental results showed that our algorithm is superior to the particle swarm optimization algorithm in terms of speed and success ratio in finding the global optimum.
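A toy rendering of the flowing and shaking operators described above could look like the following; the step sizes, shake scale, and the trigger rule are all illustrative, not the authors' exact operators.

```python
import numpy as np

def flow_and_shake(f, dim=2, n_drops=20, steps=300, lr=0.05, shake=1.5, seed=1):
    # Illustrative sketch: drops flow along the negative (numerical)
    # gradient; when most drops have nearly stopped moving, all drops
    # are "shaken" by random noise to escape local optimum areas.
    rng = np.random.default_rng(seed)
    drops = rng.uniform(-5, 5, size=(n_drops, dim))

    def num_grad(x, h=1e-5):
        # central-difference gradient estimate
        return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                         for e in np.eye(dim)])

    best = min(drops, key=f)
    for _ in range(steps):
        grads = np.array([num_grad(d) for d in drops])
        drops = drops - lr * grads                          # flowing
        cand = min(drops, key=f)
        if f(cand) < f(best):
            best = cand.copy()
        if np.mean(np.linalg.norm(grads, axis=1) < 1e-3) > 0.5:
            drops = drops + rng.normal(scale=shake, size=drops.shape)  # shaking
    return best, f(best)
```

Tracking the best drop seen so far means the shake can explore freely without losing the incumbent solution.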

A STOCHASTIC VARIANCE REDUCTION METHOD FOR PCA BY AN EXACT PENALTY APPROACH

  • Jung, Yoon Mo;Lee, Jae Hwa;Yun, Sangwoon
    • Bulletin of the Korean Mathematical Society
    • /
    • v.55 no.4
    • /
    • pp.1303-1315
    • /
    • 2018
  • For principal component analysis (PCA) to efficiently analyze large-scale matrices, it is crucial to find a few singular vectors at lower computational cost and under lower memory requirements. To compute them in a fast and robust way, we propose a new stochastic method. In particular, we adopt the stochastic variance reduced gradient (SVRG) method [11] to avoid the asymptotically slow convergence of stochastic gradient descent methods. For that purpose, we reformulate the PCA problem as an unconstrained optimization problem using a quadratic penalty. In general, increasing the penalty parameter to infinity is needed for the equivalence of the two problems; in this case, however, exact penalization is guaranteed by applying the analysis in [24]. We establish the convergence rate of the proposed method to a stationary point, and numerical experiments illustrate the validity and efficiency of the proposed method.
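The SVRG scheme adopted above follows the standard pattern of one full-gradient snapshot per epoch plus variance-reduced inner steps; a generic sketch (not the paper's PCA-specific penalized variant) is:

```python
import numpy as np

def svrg(grad_i, n, w0, lr=0.01, epochs=50, m=None, seed=0):
    # Generic SVRG sketch: each epoch computes the full gradient at a
    # snapshot point, then takes m stochastic steps whose noise is
    # reduced by the control variate grad_i(w_snap, i) - full_grad.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, float)
    m = n if m is None else m
    for _ in range(epochs):
        w_snap = w.copy()
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w = w - lr * g
    return w
```

The correction term makes the stochastic gradient's variance vanish as the iterates approach a stationary point, which is what removes the asymptotically slow convergence of plain SGD.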

UNDERSTANDING NON-NEGATIVE MATRIX FACTORIZATION IN THE FRAMEWORK OF BREGMAN DIVERGENCE

  • KIM, KYUNGSUP
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.3
    • /
    • pp.107-116
    • /
    • 2021
  • We introduce optimization algorithms using the Bregman divergence for solving non-negative matrix factorization (NMF) problems. The Bregman divergence is a generalization of several divergences, such as the Frobenius norm and the KL divergence. Some of the algorithms are applicable not only to NMF with the Frobenius norm but also to NMF with a more general Bregman divergence. Matrix factorization is a popular non-convex optimization problem, for which alternating minimization schemes are mostly used. We develop the Bregman proximal gradient method, applicable to NMF formulated in any Bregman divergence. In the derivation of the NMF algorithm for a Bregman divergence, we need to use majorization-minimization (MM) with a proper auxiliary function. We present algorithmic aspects of NMF for the Bregman divergence by using MM with an auxiliary function.
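The best-known special case of the MM/auxiliary-function derivation discussed above is the classic Lee-Seung multiplicative update for NMF under the Frobenius norm; a minimal sketch (generic, not the paper's Bregman-general algorithm) is:

```python
import numpy as np

def nmf_multiplicative(V, k, iters=500, seed=0, eps=1e-9):
    # Classic Lee-Seung multiplicative updates for NMF with the
    # Frobenius norm: each update is an MM step derived from a
    # quadratic auxiliary function, and keeps W, H non-negative
    # because every factor in the update is non-negative.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, k))
    H = rng.uniform(0.1, 1.0, (k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because each update minimizes a majorizing auxiliary function, the Frobenius reconstruction error is non-increasing at every iteration, which is the MM property the abstract refers to.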