• Title/Abstract/Keywords: Gradient Method


A BLOCKED VARIANT OF THE CONJUGATE GRADIENT METHOD

  • Yun, Jae Heon;Lee, Ji Young;Kim, Sang Wook
    • 충청수학회지
    • /
    • Vol. 11, No. 1
    • /
    • pp.129-142
    • /
    • 1998
  • In this paper, we propose a blocked variant of the Conjugate Gradient method that performs as well as the classical Conjugate Gradient method while offering coarser-grained parallelism.
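The abstract does not reproduce the blocked variant itself; as background, here is a minimal NumPy sketch of the classical Conjugate Gradient iteration that such variants build on (the names and the small test system are illustrative, not from the paper):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Classical CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs_old = r @ r
    for _ in range(len(b)):        # exact in at most n steps (exact arithmetic)
        Ap = A @ p
        alpha = rs_old / (p @ Ap)  # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # A-conjugate direction update
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD test system
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration is dominated by one matrix-vector product and a few inner products; blocked variants regroup this work into larger, coarser-grained units.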


Interior Point Methods for Network Problems (An Efficient Conjugate Gradient Method for Interior Point Methods)

  • 설동렬
    • 한국국방경영분석학회지
    • /
    • Vol. 24, No. 1
    • /
    • pp.146-156
    • /
    • 1998
  • Cholesky factorization is known to be inefficient for problems with dense columns and for network problems in interior point methods. We use the conjugate gradient method with preconditioners to improve its convergence rate. Several preconditioners were applied to LPABO 5.1 and the results were compared with those of CPLEX 3.0. The conjugate gradient method proves more efficient than Cholesky factorization for problems with dense columns and for network problems, and the incomplete Cholesky factorization preconditioner proves the most efficient of the preconditioners tested.
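The paper's preconditioners (e.g. incomplete Cholesky) are not reproduced in the abstract; the sketch below shows the general preconditioned CG iteration, with a simple diagonal (Jacobi) preconditioner standing in for illustration — an assumption of this sketch, not the authors' implementation:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Preconditioned CG; M_inv(r) applies an approximate inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x

A = np.array([[10.0, 1.0], [1.0, 5.0]])
b = np.array([11.0, 6.0])
jacobi = lambda r: r / np.diag(A)  # diagonal preconditioner (illustrative)
x = pcg(A, b, jacobi)
```

A better preconditioner (such as incomplete Cholesky) makes `M_inv` a closer approximation to the true inverse and reduces the iteration count.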


THE GRADIENT RECOVERY FOR FINITE VOLUME ELEMENT METHOD ON QUADRILATERAL MESHES

  • Song, Yingwei;Zhang, Tie
    • 대한수학회지
    • /
    • Vol. 53, No. 6
    • /
    • pp.1411-1429
    • /
    • 2016
  • We consider the finite volume element method for elliptic problems using isoparametric bilinear elements on quadrilateral meshes. A gradient recovery method is presented using the patch interpolation technique. Based on some superclose estimates, we prove that the recovered gradient $R(\nabla u_h)$ possesses the superconvergence property $\|\nabla u - R(\nabla u_h)\| = O(h^2)\|u\|_3$. Finally, some numerical examples are provided to illustrate our theoretical analysis.

Comparison of Gradient Descent for Deep Learning

  • 강민제
    • 한국산학기술학회논문지
    • /
    • Vol. 21, No. 2
    • /
    • pp.189-194
    • /
    • 2020
  • This paper analyzes gradient descent, the method most widely used for training neural networks. Training means updating the parameters so as to minimize the loss function, which quantifies the difference between actual and predicted values. Gradient descent updates the parameters using the gradient of the loss function so that the error is minimized, and it is used in the libraries that currently provide the best deep learning algorithms. Because these algorithms are provided as black boxes, however, it is not easy to grasp the strengths and weaknesses of the various gradient descent variants. We therefore analyze the characteristics of the representative variants in current use: the stochastic gradient descent (SGD) method, the momentum method, AdaGrad, and Adadelta. The experiments use the MNIST data set, which is widely used for validating neural networks. The network has two hidden layers, with 500 neurons in the first and 300 in the second. The activation function of the output layer is softmax, while the input and hidden layers use the ReLU function; the loss function is the cross-entropy error.
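The four update rules compared in the paper can be sketched as follows; the hyperparameter values and the toy one-dimensional objective are illustrative choices of this sketch, not those of the paper's MNIST experiments:

```python
import numpy as np

def sgd(w, g, state, lr=0.1):
    """Vanilla gradient descent step."""
    return w - lr * g, state

def momentum(w, g, state, lr=0.1, beta=0.9):
    """Velocity accumulates an exponentially weighted sum of past gradients."""
    v = beta * state.get("v", 0.0) + lr * g
    state["v"] = v
    return w - v, state

def adagrad(w, g, state, lr=0.1, eps=1e-8):
    """Per-parameter learning rate shrinks with the accumulated squared gradient."""
    h = state.get("h", 0.0) + g * g
    state["h"] = h
    return w - lr * g / (np.sqrt(h) + eps), state

def adadelta(w, g, state, rho=0.95, eps=1e-6):
    """Running RMS of past updates replaces the global learning rate."""
    Eg = rho * state.get("Eg", 0.0) + (1 - rho) * g * g
    dx = -np.sqrt(state.get("Edx", 0.0) + eps) / np.sqrt(Eg + eps) * g
    state["Edx"] = rho * state.get("Edx", 0.0) + (1 - rho) * dx * dx
    state["Eg"] = Eg
    return w + dx, state

# minimize the toy objective f(w) = w^2 (gradient 2w) with each rule
results = {}
for step_fn in (sgd, momentum, adagrad, adadelta):
    w, state = 5.0, {}
    for _ in range(200):
        w, state = step_fn(w, 2 * w, state)
    results[step_fn.__name__] = w
```

Even on this toy problem the qualitative differences show: SGD and momentum converge quickly, while AdaGrad's accumulated history and Adadelta's cautious start slow their early progress.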

Compression of Image Data Using Neural Networks based on Conjugate Gradient Algorithm and Dynamic Tunneling System

  • Cho, Yong-Hyun;Kim, Weon-Ook;Bang, Man-Sik;Kim, Young-il
    • 한국지능시스템학회: Conference Proceedings
    • /
    • 한국퍼지및지능시스템학회, The Third Asian Fuzzy Systems Symposium, 1998
    • /
    • pp.740-749
    • /
    • 1998
  • This paper proposes compression of image data using neural networks based on the conjugate gradient method and a dynamic tunneling system. The conjugate gradient method is applied for high-speed optimization, and the dynamic tunneling algorithm, a deterministic method exploiting a tunneling phenomenon, is applied for global optimization: whenever the conjugate gradient method converges to a local minimum, a new initial point for escaping it is estimated by the dynamic tunneling system. The proposed method has been applied to the compression of 12×12-pixel image data. The simulation results show that the proposed network has better learning performance than one trained with the conventional BP learning algorithm.

Magnetic Field Gradient Optimization for Electronic Anti-Fouling Effect in Heat Exchanger

  • Han, Yong;Wang, Shu-Tao
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 9, No. 6
    • /
    • pp.1921-1927
    • /
    • 2014
  • A new method for optimizing the magnetic field gradient in the exciting coil of an electronic anti-fouling (EAF) system is presented, based on changing the exciting coil size. Two optimization expressions are deduced from the Biot-Savart law. These expressions, which describe the distribution of the magnetic field gradient in the coil, are functions of the coil radius and coil length, and they yield an accurate coil size when the magnetic field gradient at a given point on the coil's axis of symmetry is to be maximized. Compared with experimental measurements and with Finite Element Method simulations of the magnetic field gradient on the coil's axis of symmetry, the results computed from the optimization expressions fit both very well. The new method can improve the EAF system's anti-fouling performance by improving the magnetic field gradient distribution in the exciting coil.
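The paper's optimization expressions for a finite coil are not given in the abstract; the sketch below illustrates only the underlying idea — maximizing the on-axis field gradient over coil geometry via the Biot-Savart law — for the much simpler case of a single circular loop, which is an assumption of this sketch, not the authors' coil model:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dBdz(R, z, I=1.0):
    """Axial derivative of the on-axis field of a circular current loop
    of radius R, from the Biot-Savart law:
    B(z) = mu0*I*R^2 / (2*(R^2 + z^2)^(3/2))."""
    return -3.0 * MU0 * I * R**2 * z / (2.0 * (R**2 + z**2) ** 2.5)

# maximize |dB/dz| at a fixed axial point z0 over the loop radius R
z0 = 0.1
radii = np.linspace(1e-3, 0.5, 20000)
R_best = radii[np.argmax(np.abs(dBdz(radii, z0)))]
# setting d|dB/dz|/dR = 0 analytically gives the optimum R = z0*sqrt(2/3)
```

The closed-form optimum follows from 2(R² + z²) − 5R² = 0; the paper's expressions play the analogous role for a coil of finite length.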

Finite Element Analysis for Micro-Forming Process Considering the Size Effect of Materials

  • 변상민;이영석
    • 소성∙가공
    • /
    • Vol. 15, No. 8
    • /
    • pp.544-549
    • /
    • 2006
  • In this work, we employ strain gradient plasticity theory to investigate the effect of material size on deformation behavior in metal forming processes. The flow stress is expressed in terms of the strain, the strain gradient (the spatial derivative of the strain), and an intrinsic material length. A least-squares method coupled with strain gradient plasticity is used to calculate the components of the strain gradient at each element of the material. To demonstrate the size effect, the proposed approach is applied to a plane compression process and a micro-rolling process. Results show that when the characteristic length of the material approaches the intrinsic material length, the effect of the strain gradient is noteworthy. For micro-compression, the additional work hardening in regions of higher strain gradient results in a uniform distribution of strain. In micro-rolling, the strain gradient is remarkable at the exit section, where the actual reduction finishes, and strong work hardening subsequently takes place there, considerably increasing the rolling force: the force computed with strain gradient plasticity is 20% higher than that obtained with conventional plasticity theory.

A CLASS OF NONMONOTONE SPECTRAL MEMORY GRADIENT METHOD

  • Yu, Zhensheng;Zang, Jinsong;Liu, Jingzhao
    • 대한수학회지
    • /
    • Vol. 47, No. 1
    • /
    • pp.63-70
    • /
    • 2010
  • In this paper, we develop a nonmonotone spectral memory gradient method for unconstrained optimization, in which the spectral stepsize and a class of memory gradient directions are combined efficiently. Global convergence is obtained by using a nonmonotone line search strategy, and numerical tests are given to show the efficiency of the proposed algorithm.
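The spectral memory gradient direction itself is not specified in the abstract; the sketch below illustrates only the nonmonotone line search ingredient (the classical rule of Grippo, Lampariello, and Lucidi), applied here to plain steepest descent for simplicity — the toy objective and all parameter values are assumptions of this sketch:

```python
import numpy as np

def nonmonotone_descent(f, grad, x0, M=5, gamma=1e-4, tol=1e-8, max_iter=500):
    """Steepest descent with the nonmonotone Armijo rule: a step is accepted
    if it improves on the maximum of the last M objective values, so the
    objective need not decrease at every single iteration."""
    x = np.asarray(x0, dtype=float)
    history = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                  # descent direction
        t = 1.0
        f_ref = max(history[-M:])               # nonmonotone reference value
        while f(x + t * d) > f_ref + gamma * t * (g @ d):
            t *= 0.5                            # backtracking
        x = x + t * d
        history.append(f(x))
    return x

f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2      # ill-conditioned toy quadratic
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
x_star = nonmonotone_descent(f, grad, [3.0, -2.0])
```

Allowing the objective to rise temporarily (relative to the last M values) is what lets spectral stepsizes be accepted without being truncated by a strict Armijo test.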

An Analysis of the Optimal Control of an Air-Conditioning System with Slab Thermal Storage by the Gradient Method Algorithm

  • 정재훈
    • 설비공학논문집
    • /
    • Vol. 20, No. 8
    • /
    • pp.534-540
    • /
    • 2008
  • In this paper, the optimal bang-bang control problem of an air-conditioning system with slab thermal storage is formulated by the gradient method, and the numeric solution obtained by the gradient method algorithm is compared with the analytic solution obtained from the maximum principle. In the analytic solution the control variable changes discontinuously at the start time of the thermal storage operation, whereas the numeric solution is continuous there; the numeric solution reproduces the analytic one when a strict convergence tolerance is applied. This suggests that the gradient method is effective for analyzing the optimal bang-bang control of large-scale systems such as an air-conditioning system with slab thermal storage.

STOCHASTIC GRADIENT METHODS FOR L2-WASSERSTEIN LEAST SQUARES PROBLEM OF GAUSSIAN MEASURES

  • YUN, SANGWOON;SUN, XIANG;CHOI, JUNG-IL
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 25, No. 4
    • /
    • pp.162-172
    • /
    • 2021
  • This paper proposes stochastic methods to find an approximate solution of the L2-Wasserstein least squares problem of Gaussian measures, in which the variable ranges over a set of positive definite matrices. The first proposed method is a classical stochastic gradient method combined with projection, and the second is a variance-reduced method with projection. Their global convergence is analyzed using the framework of proximal stochastic gradient methods. Convergence of the classical stochastic gradient method with projection is established under a diminishing learning rate rule, in which the learning rate decreases as the epoch increases, whereas that of the variance-reduced method with projection can be established with a constant learning rate. Numerical results show that, with a proper learning rate, the proposed algorithms outperform a gradient projection method.
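As an illustration of the first method's structure — a stochastic gradient step followed by projection, with a diminishing learning rate — here is a sketch on a simpler stand-in objective (a Frobenius least-squares problem over positive semidefinite matrices, not the L2-Wasserstein objective of the paper; all names and values are illustrative):

```python
import numpy as np

def project_psd(X):
    """Project a symmetric matrix onto the positive semidefinite cone
    by zeroing out its negative eigenvalues."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

def projected_sgd(grad_samples, X0, lr0=1.0):
    """Stochastic gradient step followed by projection, with a
    diminishing learning rate lr0 / (k + 1)."""
    X = X0.copy()
    for k, g in enumerate(grad_samples):
        X = project_psd(X - lr0 / (k + 1) * g(X))
    return X

# illustrative objective: (1/2N) * sum_i ||X - T_i||_F^2 over random
# symmetric targets T_i near diag(2, 3); one sampled term per step
rng = np.random.default_rng(0)
targets = [(T + T.T) / 2 for T in
           (np.diag([2.0, 3.0]) + 0.01 * rng.standard_normal((2, 2))
            for _ in range(50))]
samples = [targets[i] for i in rng.integers(0, len(targets), 300)]
grad_samples = [lambda X, T=T: X - T for T in samples]
X = projected_sgd(grad_samples, np.zeros((2, 2)))
```

The iterate stays in the feasible (PSD) set after every step, and the 1/(k+1) learning rate mirrors the diminishing-rate rule under which the paper establishes convergence.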