• Title/Abstract/Keyword: gradient descent optimization

Search results: 82 (processing time: 0.022 s)

Selecting Fuzzy Rules for Pattern Classification Systems

  • Lee, Sang-Bum;Lee, Sung-joo;Lee, Mai-Rey
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 2, No. 2 / pp.159-165 / 2002
  • This paper proposes a method based on genetic algorithms (GA) and the gradient descent method for choosing an appropriate set of fuzzy rules for classification problems. The aim of the proposed method is to find a minimum set of fuzzy rules that can correctly classify all training patterns. The number of inference rules and the shapes of the membership functions in the antecedent parts of the fuzzy rules are determined by the genetic algorithm. The real numbers in the consequent parts of the fuzzy rules are obtained by the gradient descent method. A fitness function is used to maximize the number of correctly classified patterns and to minimize the number of fuzzy rules. A solution obtained by the genetic algorithm is a set of fuzzy rules, and its fitness is determined by these two objectives in a combinatorial optimization problem. Computer simulation results demonstrate the effectiveness of the proposed method.
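
As a rough illustration of the consequent-tuning step described above, the sketch below tunes the real-valued consequents of two fixed fuzzy rules by gradient descent on a toy 1-D two-class problem. The antecedent centers/widths, data, and learning rate are all hypothetical; in the paper those antecedents would come from the GA.

```python
import numpy as np

def gaussian(x, c, s):
    """Membership degree of x in a Gaussian fuzzy set with center c and width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

# Hypothetical 1-D training set: two classes encoded as targets -1 and +1.
X = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

# Antecedent parameters would come from the GA; here they are simply fixed.
centers, widths = np.array([0.2, 0.8]), np.array([0.15, 0.15])
b = np.zeros(2)  # real-valued rule consequents, tuned by gradient descent

lr = 0.5
for _ in range(200):
    mu = gaussian(X[:, None], centers, widths)   # rule firing strengths, shape (6, 2)
    w = mu / mu.sum(axis=1, keepdims=True)       # normalized firing strengths
    out = w @ b                                  # fuzzy inference output
    b -= lr * (w.T @ (out - y)) / len(X)         # gradient of mean squared error

pred = np.sign(gaussian(X[:, None], centers, widths) @ b)
```

After tuning, the two consequents settle near -1 and +1, so the inference output classifies every training pattern correctly.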

하이브리드 학습알고리즘의 다층신경망을 이용한 시급수의 비선형예측 (Nonlinear Prediction of Time Series Using Multilayer Neural Networks of Hybrid Learning Algorithm)

  • 조용현;김지영
    • Proceedings of the IEEK Conference / 1998 Fall Conference / pp.1281-1284 / 1998
  • This paper proposes an efficient time series prediction method for nonlinear dynamical discrete-time systems using multilayer neural networks with a hybrid learning algorithm. The proposed learning algorithm is a hybrid backpropagation algorithm based on steepest descent for high-speed optimization and dynamic tunneling for global optimization. The proposed algorithm was applied to 600 samples of 700-sample sequences to predict the next 100 samples. The simulation results show that the proposed algorithm achieves better convergence and prediction performance than the backpropagation algorithm based on gradient descent for multilayer neural networks.

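
The hybrid scheme above pairs steepest descent (fast local convergence) with dynamic tunneling (escaping local minima). A loose sketch on a toy 1-D function, with random jumps standing in for the paper's tunneling dynamics; the function, step sizes, and jump range are all assumptions:

```python
import numpy as np

def f(x):
    """Toy multimodal objective with several local minima."""
    return np.sin(3 * x) + 0.1 * x ** 2

def df(x):
    return 3 * np.cos(3 * x) + 0.2 * x

def descend(x, lr=0.01, steps=500):
    """Steepest-descent phase: follow the negative gradient to a local minimum."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

rng = np.random.default_rng(0)
best_x = descend(4.0)            # first descent lands in a local minimum
best_f = f(best_x)
for _ in range(20):              # "tunneling" phase: jump away and re-descend
    trial = descend(best_x + rng.uniform(-3, 3))
    if f(trial) < best_f:
        best_x, best_f = trial, f(trial)
```

The tunneling loop never accepts a worse point, so the final value is at least as good as plain steepest descent from the same start.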

마이크로프로세서에 의한 전압형 인버터-유도전동기 시스템의 최적이득 설계 (Design of Optimal Gains on Microprocessor-Based Voltage Source Inverter-Induction Motor System)

  • 박민호;전태원;민병훈
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 37, No. 6 / pp.368-375 / 1988
  • This paper is concerned with the design of optimal controller gains in a microprocessor-controlled speed control system for an induction motor. The system is modelled with a discrete-time state equation, considering the time delay, to facilitate the optimization techniques. Introducing the conjugate gradient descent method as the optimization technique, the optimal gains, i.e., the gains that give the best transient characteristics, are derived. At the obtained optimal gains, the theoretical transient responses are verified against experimental ones on a 5HP induction motor drive system.

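
The conjugate gradient method used above can be illustrated on a hypothetical quadratic cost surface over two controller gains (the matrices below are made up; they are not the motor model from the paper):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Linear conjugate gradient: minimizes 0.5*x'Ax - b'x for SPD A."""
    x = x0.copy()
    r = b - A @ x                        # residual = negative gradient
    p = r.copy()
    for _ in range(len(b)):
        alpha = (r @ r) / (p @ (A @ p))  # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p             # next A-conjugate search direction
        r = r_new
    return x

# Hypothetical quadratic cost over two gains; the minimizer solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
gains = conjugate_gradient(A, b, np.zeros(2))
```

For an n-dimensional quadratic, conjugate gradient reaches the exact minimizer in at most n iterations, which is why it suits low-dimensional gain searches.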

빅데이터 기반 추천시스템을 위한 협업필터링의 최적화 규제 (Regularized Optimization of Collaborative Filtering for Recommender System based on Big Data)

  • 박인규;최규석
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 21, No. 1 / pp.87-92 / 2021
  • In modeling big-data-based recommender systems, bias, variance, error, and learning are important factors for performance. In such systems, the recommendation model must reduce complexity while maintaining explanatory power. In addition, the sparsity of the data and the predictive power of the system are inherently inversely related. Accordingly, product recommendation models have been proposed that learn similarities between products from sparse data using matrix factorization. In this paper, max-norm regularization is applied as an optimization scheme for the loss function of this model to improve its generalization ability. The solution is to apply a stochastic projected gradient descent method that projects the gradient. Extensive experiments confirm that the proposed regularization method becomes relatively more effective than existing methods as the data become sparser.
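
A minimal sketch of max-norm constrained factorization via stochastic projected gradient descent, assuming a tiny hypothetical rating matrix, rank, radius, and learning rate (none of these come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0]])       # toy ratings, 0 = unobserved
mask = R > 0
k, c, lr = 2, 3.0, 0.05               # latent rank, max-norm radius, step size
P = rng.normal(scale=0.1, size=(3, k))
Q = rng.normal(scale=0.1, size=(3, k))
obs = [(i, j) for i in range(3) for j in range(3) if mask[i, j]]

for _ in range(30000):
    i, j = obs[rng.integers(len(obs))]           # sample one observed rating
    e = P[i] @ Q[j] - R[i, j]
    P[i], Q[j] = P[i] - lr * e * Q[j], Q[j] - lr * e * P[i]
    for M, row in ((P, i), (Q, j)):              # project rows back onto ||.|| <= c
        nrm = np.linalg.norm(M[row])
        if nrm > c:
            M[row] *= c / nrm

rmse = np.sqrt(np.mean([(P[i] @ Q[j] - R[i, j]) ** 2 for i, j in obs]))
```

Projecting each updated latent row back onto the radius-c ball enforces the max-norm constraint after every stochastic step, which is the "projected gradient" part of the method.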

Stochastic Gradient Descent Optimization Model for Demand Response in a Connected Microgrid

  • Sivanantham, Geetha;Gopalakrishnan, Srivatsun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 1 / pp.97-115 / 2022
  • The smart power grid is a user-friendly system that transforms the traditional electric grid into one that operates in a co-operative and reliable manner. Demand Response (DR) is one of the important components of the smart grid. DR programs enable end-user participation: users can communicate with the electricity service provider, shape their daily energy consumption patterns, and reduce their consumption costs. The increasing demand for electricity owing to a growing population stresses the need for optimal usage of electricity and for cheap, alternative renewable sources. Solar and wind energy are the most promising alternative sources at present because of their renewable nature and low implementation cost. The proposed work models a smart home with renewable energy units. The random nature of renewable sources like wind and solar brings uncertainty to the developed model. A stochastic dual descent optimization method is used to solve the model optimally. The proposed work is validated using simulation results. From the results it is concluded that the proposed work achieves balanced usage of grid power and the renewable energy units. The work also optimizes the daily consumption pattern, thereby reducing the consumption cost for end users of electricity.
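
As a hedged illustration of stochastic gradient optimization for demand response (a simplified newsvendor-style model of my own, not the paper's dual formulation): a household decides how much grid power g to buy when solar output is random, trading grid cost against a shortage penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
demand, p_grid, p_short = 10.0, 0.2, 1.0   # kWh demand, grid price, shortage penalty

def cost(g, solar):
    """Buy g kWh of grid power; any shortfall after solar is penalized."""
    return p_grid * g + p_short * max(demand - solar - g, 0.0)

g = 0.0
for t in range(1, 20001):
    solar = rng.uniform(0.0, 5.0)                        # random renewable supply
    subgrad = p_grid - p_short * (demand - solar - g > 0)
    g -= subgrad / np.sqrt(t)                            # diminishing step size

avg_cost = np.mean([cost(g, rng.uniform(0.0, 5.0)) for _ in range(2000)])
```

With these assumed numbers, the stochastic subgradient iterates settle near the cost-minimizing purchase, which for this model is the 80th percentile of net demand (about 9 kWh).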

송풍기 설계를 위한 수치최적설계기법의 응용 (Application of Numerical Optimization Technique to the Design of Fans)

  • 김광용;최재호;김태진;류호선
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / Vol. 7, No. 4 / pp.566-576 / 1995
  • A computational code has been developed to design axial fans by numerical optimization techniques incorporated with a flow analysis code solving the three-dimensional Navier-Stokes equations. The steepest descent method and the conjugate gradient method are used to find the search direction in the design space, and the golden section method is used for the one-dimensional search. To solve the constrained optimization problem, the sequential unconstrained minimization technique (SUMT) is used with imposed quadratic extended interior penalty functions. In the optimization of the two-dimensional cascade design, the ratio of drag coefficient to lift coefficient is minimized using design variables such as maximum thickness, maximum ordinate of camber, and chordwise position of maximum ordinate. In applying this numerical optimization technique to the design of an axial fan, the efficiency is maximized using design variables related to the sweep angle, which is distributed as a quadratic function from hub to tip.

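
The golden section one-dimensional search mentioned above can be sketched as follows; the objective here is a made-up unimodal function, not a cascade drag/lift model:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section interval reduction."""
    invphi = (math.sqrt(5) - 1) / 2            # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

x_min = golden_section(lambda x: (x - 1.3) ** 2 + 0.5, 0.0, 3.0)
```

Each pass shrinks the bracketing interval by the golden ratio, so the method needs no derivatives, which is why it is a common line search inside descent-direction optimizers.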

두 이종 혼합 모형에서의 수정된 경사 하강법 (Adaptive stochastic gradient method under two mixing heterogeneous models)

  • 문상준;전종준
    • Journal of the Korean Data and Information Science Society / Vol. 28, No. 6 / pp.1245-1255 / 2017
  • Online learning refers to methods for computing the solution of a given objective function when data accumulate in real time or in batch units. Among online learning algorithms, the mini-batch stochastic gradient descent method is one of the most widely used. It is not only easy to implement, but the properties of its solution are also well studied under the assumption that the data follow a homogeneous distribution. However, when the data contain outliers or a given batch is stochastically heterogeneous, the solution produced by stochastic gradient descent can be severely biased. This study proposes a modified gradient descent algorithm that performs online learning effectively on data containing such abnormal batches, and establishes the convergence of the solution computed by the algorithm. Furthermore, simple simulation studies demonstrate the theoretical properties of the proposed method.
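
The bias described above can be sketched with a toy location-estimation problem: plain batch SGD is dragged toward the abnormal batches, while a modified update (here simply gradient clipping, an illustrative stand-in for the paper's algorithm) stays near the true parameter. Mixture weights, clipping threshold, and step sizes are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def batch_grad(theta, batch):
    """Gradient of the squared loss for a location parameter."""
    return theta - batch.mean()

theta_sgd, theta_robust = 0.0, 0.0
clip = 2.0                                      # clipping threshold (assumed)
for t in range(1, 3001):
    if rng.uniform() < 0.1:                     # abnormal batch from a shifted source
        batch = rng.normal(50.0, 1.0, size=20)
    else:                                       # normal batch centered at 3
        batch = rng.normal(3.0, 1.0, size=20)
    lr = 1.0 / t
    theta_sgd -= lr * batch_grad(theta_sgd, batch)
    g = np.clip(batch_grad(theta_robust, batch), -clip, clip)
    theta_robust -= lr * g                      # modified (clipped) update
```

With a 1/t step size, plain SGD tracks the contaminated mixture mean (about 7.7 here), while the clipped update settles near the normal-batch center of 3.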

Water Flowing and Shaking Optimization

  • Jung, Sung-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 12, No. 2 / pp.173-180 / 2012
  • This paper proposes a novel optimization algorithm inspired by the flowing and shaking behaviors of water in a vessel. Water drops in our algorithm flow in the gradient descent direction and are occasionally shaken to escape local optimum areas when most drops have fallen into them. These flowing and shaking operations allow our algorithm to approach the global optimum quickly without getting stuck in local optimum areas. We tested our algorithm on four function optimization problems and compared its results with those of particle swarm optimization. Experimental results show that our algorithm is superior to the particle swarm optimization algorithm in terms of the speed and success ratio of finding the global optimum.
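
A loose reconstruction of the flowing-and-shaking idea from the abstract alone (not the authors' algorithm): a population of drops descends the gradient, and the whole population is shaken with random noise whenever it has pooled together in one basin:

```python
import numpy as np

def f(x):
    """Rastrigin-like multimodal objective; global minimum f(0) = 0."""
    return x ** 2 + 10.0 * (1.0 - np.cos(2 * np.pi * x))

def df(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

rng = np.random.default_rng(3)
drops = rng.uniform(-4.0, 8.0, size=30)   # water drops poured into the "vessel"
lr, best = 0.002, np.inf
for step in range(3000):
    drops = drops - lr * df(drops)        # flowing: move along the descent direction
    best = min(best, f(drops).min())
    if drops.std() < 0.5:                 # drops pooled together: shake the vessel
        drops = drops + rng.normal(0.0, 2.0, size=drops.size)
```

The shake size and pooling test are arbitrary choices here; the point is that descent alone would leave each drop trapped in its starting basin, while shaking lets the population keep exploring.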

A STOCHASTIC VARIANCE REDUCTION METHOD FOR PCA BY AN EXACT PENALTY APPROACH

  • Jung, Yoon Mo;Lee, Jae Hwa;Yun, Sangwoon
    • Bulletin of the Korean Mathematical Society / Vol. 55, No. 4 / pp.1303-1315 / 2018
  • For principal component analysis (PCA) to efficiently analyze large-scale matrices, it is crucial to find a few singular vectors at low computational cost and under low memory requirements. To compute them in a fast and robust way, we propose a new stochastic method. In particular, we adopt the stochastic variance reduced gradient (SVRG) method [11] to avoid the asymptotically slow convergence of stochastic gradient descent methods. For that purpose, we reformulate the PCA problem as an unconstrained optimization problem using a quadratic penalty. In general, increasing the penalty parameter to infinity is needed for the equivalence of the two problems; in this case, however, exact penalization is guaranteed by applying the analysis in [24]. We establish the convergence rate of the proposed method to a stationary point, and numerical experiments illustrate the validity and efficiency of the proposed method.
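
The SVRG scheme adopted above can be sketched on a small least-squares problem instead of the paper's penalized PCA objective (problem sizes, step size, and epoch length are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit rows: per-sample smoothness 1
x_true = rng.normal(size=d)
b = A @ x_true                                  # consistent least-squares system

x = np.zeros(d)
lr = 0.1
for epoch in range(30):
    x_snap = x.copy()
    mu = A.T @ (A @ x_snap - b) / n             # full gradient at the snapshot
    for _ in range(2 * n):
        i = rng.integers(n)
        g_i = A[i] * (A[i] @ x - b[i])          # stochastic gradient, current point
        g_snap = A[i] * (A[i] @ x_snap - b[i])  # same sample, snapshot point
        x -= lr * (g_i - g_snap + mu)           # variance-reduced step
```

Because the correction term g_snap cancels the stochastic noise as x approaches the snapshot, the variance of the update vanishes at the optimum, giving linear convergence with a constant step size, unlike plain SGD.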

UNDERSTANDING NON-NEGATIVE MATRIX FACTORIZATION IN THE FRAMEWORK OF BREGMAN DIVERGENCE

  • KIM, KYUNGSUP
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 25, No. 3 / pp.107-116 / 2021
  • We introduce optimization algorithms using the Bregman divergence for solving non-negative matrix factorization (NMF) problems. The Bregman divergence is known as a generalization of several divergences, such as the Frobenius norm and the KL divergence. Some algorithms are applicable not only to NMF with the Frobenius norm but also to NMF with a more general Bregman divergence. Matrix factorization is a popular non-convex optimization problem, for which alternating minimization schemes are mostly used. We develop a Bregman proximal gradient method applicable to NMF formulated with any Bregman divergence. In deriving an NMF algorithm for a Bregman divergence, majorization-minimization (MM) with a proper auxiliary function is needed. We present algorithmic aspects of NMF for the Bregman divergence by using MM with an auxiliary function.
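
For the Frobenius-norm special case, MM-derived updates reduce to the classical multiplicative rules, sketched below on random data (dimensions, rank, and iteration count are arbitrary; this is not the paper's general Bregman proximal gradient method):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((8, 6))                   # nonnegative data matrix
r = 3                                    # target factorization rank
W = rng.random((8, r)) + 0.1             # positive initialization avoids zero locking
H = rng.random((r, 6)) + 0.1
err0 = np.linalg.norm(V - W @ H)

for _ in range(500):
    # MM-derived multiplicative updates for the Frobenius-norm objective;
    # each update is non-increasing in the objective and preserves nonnegativity.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

err = np.linalg.norm(V - W @ H)
```

Because the updates multiply by nonnegative ratios, no projection step is needed: nonnegativity is maintained automatically while the reconstruction error decreases monotonically.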