• Title/Summary/Keyword: gradient descent (경사하강법)


Gradient Descent Training Method for Optimizing Data Prediction Models (데이터 예측 모델 최적화를 위한 경사하강법 교육 방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / v.14 no.2 / pp.305-312 / 2022
  • In this paper, we focus on training students to create and optimize a basic data prediction model, and we propose a method for teaching gradient descent, the machine learning technique widely used to optimize such models. The method visually shows the entire operation of gradient descent as it optimizes the parameter values required by a data prediction model through differentiation, and teaches the effective use of mathematical differentiation in machine learning. To visualize the whole process, gradient descent is implemented in a spreadsheet. First, a two-variable gradient descent training method is presented, and the accuracy of the two-variable data prediction model is verified by comparison with the least squares solution. Second, a three-variable gradient descent training method is presented and the accuracy of the three-variable data prediction model is verified. Finally, a direction for gradient descent optimization practice is suggested, and the educational effect of the proposed method is analyzed through an education satisfaction survey of non-majors.
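By way of illustration, the following is a minimal NumPy sketch of the kind of two-parameter (slope and intercept) exercise the abstract describes, with the closed-form least squares solution used as the accuracy check; the data, learning rate, and iteration count are assumptions, not values from the paper.

```python
# A minimal sketch of two-parameter gradient descent for y ≈ w*x + b,
# checked against the least squares solution. All values are illustrative.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical training inputs
y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])   # hypothetical observations

w, b = 0.0, 0.0          # parameters to be optimized
lr, epochs = 0.01, 2000  # assumed learning rate and iteration count

for _ in range(epochs):
    pred = w * x + b
    err = pred - y
    # partial derivatives of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

# Closed-form least squares solution for comparison, as in the paper's check
A = np.vstack([x, np.ones_like(x)]).T
w_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"gradient descent: w={w:.4f}, b={b:.4f}")
print(f"least squares   : w={w_ls:.4f}, b={b_ls:.4f}")
```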

Comparison of Gradient Descent for Deep Learning (딥러닝을 위한 경사하강법 비교)

  • Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.2 / pp.189-194 / 2020
  • This paper analyzes the gradient descent method, the method most widely used for training neural networks. Training means updating the parameters so that the loss function, which quantifies the difference between actual and predicted values, reaches its minimum. The gradient descent method uses the slope of the loss function to update the parameters so as to minimize error, and it underlies the libraries that currently provide the best deep learning algorithms. However, these algorithms are provided as black boxes, making it difficult to identify the advantages and disadvantages of the various gradient descent variants. This paper analyzes the characteristics of the stochastic gradient descent, momentum, AdaGrad, and Adadelta methods, which are the gradient descent methods currently in use. The experiments used the Modified National Institute of Standards and Technology (MNIST) data set, which is widely used to verify neural networks. The network has two hidden layers, the first with 500 neurons and the second with 300. The output layer uses the softmax activation function, and the rectified linear unit is used for the remaining layers. Cross-entropy error is used as the loss function.
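For reference, the four update rules compared in the paper can be written compactly as follows; this is a hedged NumPy sketch with common default hyperparameters, not the paper's experimental configuration.

```python
# Sketch of the four parameter-update rules: SGD, momentum, AdaGrad, Adadelta.
import numpy as np

def sgd(w, grad, state, lr=0.01):
    return w - lr * grad, state

def momentum(w, grad, state, lr=0.01, beta=0.9):
    v = beta * state.get("v", 0.0) - lr * grad   # velocity term
    state["v"] = v
    return w + v, state

def adagrad(w, grad, state, lr=0.01, eps=1e-8):
    h = state.get("h", 0.0) + grad ** 2          # accumulated squared gradients
    state["h"] = h
    return w - lr * grad / (np.sqrt(h) + eps), state

def adadelta(w, grad, state, rho=0.95, eps=1e-6):
    Eg = rho * state.get("Eg", 0.0) + (1 - rho) * grad ** 2
    Edx = state.get("Edx", 0.0)
    dx = -np.sqrt(Edx + eps) / np.sqrt(Eg + eps) * grad
    state["Eg"] = Eg
    state["Edx"] = rho * Edx + (1 - rho) * dx ** 2
    return w + dx, state

# Tiny usage check: each rule driving w toward the minimum of f(w) = w**2
for rule in (sgd, momentum, adagrad, adadelta):
    w, state = 5.0, {}
    for _ in range(500):
        w, state = rule(w, 2 * w, state)   # gradient of w**2 is 2*w
    print(rule.__name__, round(float(w), 4))
```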

A Study on the Development of Teaching-Learning Materials for Gradient Descent Method in College AI Mathematics Classes (대학수학 경사하강법(gradient descent method) 교수·학습자료 개발)

  • Lee, Sang-Gu;Nam, Yun;Lee, Jae Hwa
    • Communications of Mathematical Education / v.37 no.3 / pp.467-482 / 2023
  • In this paper, we present new teaching and learning materials on the gradient descent method, which is widely used in artificial intelligence, for use in college mathematics. The materials explain the gradient descent method at the level of college calculus, and the accompanying SageMath code helps students solve minimization problems easily. We also show how to solve the least squares problem using gradient descent. This study can be helpful to instructors who teach college-level mathematics subjects such as calculus, engineering mathematics, numerical analysis, and applied mathematics.
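The paper's materials are written in SageMath; the following plain-Python sketch illustrates the same calculus-level idea of following the negative gradient of a smooth two-variable function. The function, starting point, and step size are illustrative assumptions.

```python
# Calculus-level gradient descent on an illustrative two-variable function.
import numpy as np

def f(p):
    x, y = p
    return (x - 1.0) ** 2 + 2.0 * (y + 2.0) ** 2

def grad_f(p):
    x, y = p
    return np.array([2.0 * (x - 1.0), 4.0 * (y + 2.0)])   # partial derivatives

p = np.array([5.0, 5.0])   # arbitrary starting point
lr = 0.1                   # assumed step size
for _ in range(200):
    p = p - lr * grad_f(p)  # move against the gradient

print("approximate minimizer:", p, "f(p) =", f(p))  # near (1, -2)
```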

Adaptive stochastic gradient method under two mixing heterogenous models (두 이종 혼합 모형에서의 수정된 경사 하강법)

  • Moon, Sang Jun;Jeon, Jong-June
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1245-1255 / 2017
  • Online learning is the process of obtaining the solution to a given objective function as data accumulate in real time or in batch units. Stochastic gradient descent is one of the methods most widely used for online learning. It is not only easy to implement but also has good solution properties under the assumption that the data-generating model is homogeneous. However, stochastic gradient descent can severely mislead online learning when this homogeneity is violated. We assume that the observations come from two heterogeneous generating models and propose a new stochastic gradient method that mitigates the resulting problem. We introduce a robust mini-batch optimization method based on statistical tests and investigate the convergence radius of the solution of the proposed method. The theoretical results are confirmed by numerical simulations.
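The paper's actual test statistic and convergence analysis are not reproduced here; the sketch below only illustrates the general idea of screening mini-batches with a simple statistical check before applying an SGD update, on hypothetical data mixing two generating models.

```python
# Hedged sketch: mini-batch SGD that skips batches whose loss is a statistical
# outlier (an assumed screen, not the paper's test), on mixed synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def make_batch(contaminated):
    # Most batches follow y = 2x; contaminated batches follow y = -5x.
    x = rng.normal(size=32)
    slope = -5.0 if contaminated else 2.0
    return x, slope * x + rng.normal(scale=0.1, size=32)

w, lr = 0.0, 0.05
loss_history = []

for step in range(300):
    x, y = make_batch(contaminated=(step % 20 == 0))
    err = w * x - y
    loss = np.mean(err ** 2)

    # Skip the update if the batch loss is far off-scale vs recent batches.
    if len(loss_history) >= 30:
        mu, sd = np.mean(loss_history), np.std(loss_history) + 1e-12
        if (loss - mu) / sd > 3.0:
            continue
    loss_history.append(loss)

    w -= lr * np.mean(2 * err * x)   # standard SGD step on the kept batch

print("estimated slope:", round(w, 3))  # close to 2 despite contamination
```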

Improvement of multi layer perceptron performance using combination of gradient descent and harmony search for prediction of groundwater level (지하수위 예측을 위한 경사하강법과 화음탐색법의 결합을 이용한 다층퍼셉트론 성능향상)

  • Lee, Won Jin;Lee, Eui Hoon
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.186-186 / 2022
  • Predicting fluctuations in groundwater level caused by rainfall and infiltration is essential for the use and management of groundwater resources. Accurate prediction is required because groundwater fluctuations directly affect not only groundwater management but also flooding and the stress state of the ground. In this study, the optimizer of a Multi Layer Perceptron (MLP), a type of artificial neural network, was improved to raise groundwater level prediction performance. An MLP learns by repeatedly applying an optimizer, which searches for the optimal relationship (weights and biases) between inputs and outputs, and an activation function, which determines the output values. In particular, the optimizer, which searches for the relationship that minimizes the error between the network outputs and the observations, directly affects the training and prediction performance of the MLP. Conventional optimizers are based on gradient descent (GD). However, because GD-based optimizers search for the relationship by differentiation, they mainly perform local search and, having no structure for storing previously generated solutions, may converge to a local optimum. To improve on these drawbacks, this study used a metaheuristic optimization algorithm that considers local and global search simultaneously and stores previous solutions. A combination (HS-GD) of Harmony Search (HS), a structurally simple metaheuristic, and GD was used as the MLP optimizer. To examine the performance of the MLP with HS-GD, groundwater levels in Icheon were predicted, and the results were compared with those of an MLP using a conventional optimizer and an MLP using HS alone.

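As a rough illustration of the HS-GD idea (not the paper's implementation), the sketch below couples a Harmony Search memory with gradient descent refinement on a toy objective standing in for the MLP training error; all parameter settings are assumptions.

```python
# Hedged HS-GD sketch: Harmony Search for global search over parameters,
# gradient descent for local refinement of each new harmony.
import numpy as np

rng = np.random.default_rng(1)

def loss(w):                       # stand-in for the MLP training error
    return np.sum((w - np.array([3.0, -1.0])) ** 2)

def grad(w):
    return 2.0 * (w - np.array([3.0, -1.0]))

HMS, HMCR, PAR, BW = 10, 0.9, 0.3, 0.1           # common HS settings (assumed)
memory = rng.uniform(-5, 5, size=(HMS, 2))       # harmony memory
scores = np.array([loss(w) for w in memory])

for _ in range(200):
    # 1) Harmony Search step: build a new candidate from memory or at random.
    new = np.empty(2)
    for j in range(2):
        if rng.random() < HMCR:
            new[j] = memory[rng.integers(HMS), j]
            if rng.random() < PAR:
                new[j] += BW * rng.uniform(-1, 1)   # pitch adjustment
        else:
            new[j] = rng.uniform(-5, 5)
    # 2) Gradient descent refinement of the new harmony (the "GD" part).
    for _ in range(5):
        new -= 0.05 * grad(new)
    # 3) Replace the worst harmony if the refined candidate is better.
    worst = np.argmax(scores)
    if loss(new) < scores[worst]:
        memory[worst], scores[worst] = new, loss(new)

best = memory[np.argmin(scores)]
print("best parameters:", np.round(best, 3))   # near (3, -1)
```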

Development of new artificial neural network optimizer to improve water quality index prediction performance (수질 지수 예측성능 향상을 위한 새로운 인공신경망 옵티마이저의 개발)

  • Ryu, Yong Min;Kim, Young Nam;Lee, Dae Won;Lee, Eui Hoon
    • Journal of Korea Water Resources Association / v.57 no.2 / pp.73-85 / 2024
  • Predicting the water quality of rivers and reservoirs is necessary for managing water resources. Artificial Neural Networks (ANNs) have been used in many studies to predict water quality with high accuracy. Previous studies have used Gradient Descent (GD)-based optimizers as the optimizer, the ANN operator that searches for parameters. However, GD-based optimizers have the disadvantages of possible convergence to a local optimum and the absence of a structure for storing and comparing solutions. This study developed improved optimizers to overcome these disadvantages. The proposed optimizers combine adaptive moments (Adam) and Nesterov-accelerated adaptive moments (Nadam), which have low learning errors among GD-based optimizers, with Harmony Search (HS) or Novel Self-adaptive Harmony Search (NSHS). To evaluate the performance of Long Short-Term Memory (LSTM) using the improved optimizers, water quality data from the Dasan water quality monitoring station were used for training and prediction. Comparing the learning results, the Mean Squared Error (MSE) of the LSTM using Nadam combined with NSHS (NadamNSHS) was the lowest at 0.002921. In addition, the prediction rankings according to MSE and R2 for the four water quality indices were compared for each optimizer, and in terms of average ranking the LSTM using NadamNSHS was confirmed to be the highest at 2.25.
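For context, the Nadam update rule that the proposed NadamNSHS optimizer builds on can be sketched as follows; the NSHS coupling and the LSTM water-quality model are not reproduced, and the hyperparameters are common defaults rather than the paper's settings.

```python
# Hedged NumPy sketch of the Nadam (Nesterov-accelerated Adam) update rule.
import numpy as np

def nadam_step(w, g, m, v, t, lr=0.002, b1=0.9, b2=0.999, eps=1e-8):
    """One Nadam update for parameters w given gradient g at step t."""
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    # Nesterov look-ahead on the first moment
    m_bar = b1 * m_hat + (1 - b1) * g / (1 - b1 ** t)
    w = w - lr * m_bar / (np.sqrt(v_hat) + eps)
    return w, m, v

# Tiny usage check on a quadratic surrogate of a training loss
w, m, v = np.array([4.0]), np.zeros(1), np.zeros(1)
for t in range(1, 3001):
    g = 2 * (w - 1.0)                    # gradient of (w - 1)**2
    w, m, v = nadam_step(w, g, m, v, t)
print("w ->", np.round(w, 4))            # approaches 1.0
```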

An Optimal Filter Design for System Identification with GA (GA를 이용한 시스템 동정용 필터계수 최적화)

  • Song, Young-Jun;Kong, Seong-Gon
    • Proceedings of the KIEE Conference / 1999.07g / pp.2833-2835 / 1999
  • This paper uses a hybrid adaptive algorithm that combines the conventional Least Mean Square (LMS) method, which is widely used to optimize the coefficients of adaptive filters for system identification, with the Genetic Algorithm (GA), which has recently been applied to a variety of optimization problems. In designing the IIR filter, the GA, which proceeds only with probabilistic operators and without deterministic rules such as differentiation, is used to compensate for the local convergence problem caused by the gradient descent concept. In turn, the heavy computation and slow convergence that result from the GA's probabilistic operations are compensated for by the gradient descent of LMS. By combining the strengths of the GA and the LMS algorithm so that each offsets the other's weaknesses, the performance of the algorithm is improved, and the improved algorithm is used to find the optimal filter coefficients. The performance of the adaptive filter is then verified and evaluated using the obtained coefficient values.

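The LMS (gradient descent) half of such a hybrid can be sketched as below for a simple FIR system-identification case; the GA half and the paper's actual filter structure are omitted, and the unknown system, step size, and filter length are illustrative assumptions.

```python
# Hedged sketch of LMS adaptation for identifying an unknown FIR system.
import numpy as np

rng = np.random.default_rng(2)

h_true = np.array([0.5, -0.3, 0.2])      # hypothetical unknown system
N = len(h_true)
w = np.zeros(N)                          # adaptive filter coefficients
mu = 0.05                                # LMS step size (assumed)

x = rng.normal(size=5000)                # excitation signal
d = np.convolve(x, h_true)[: len(x)]     # desired (system) output

for n in range(N, len(x)):
    x_vec = x[n - N + 1 : n + 1][::-1]   # most recent N input samples
    e = d[n] - w @ x_vec                 # output error
    w = w + mu * e * x_vec               # LMS coefficient update (gradient step)

print("identified coefficients:", np.round(w, 3))  # close to h_true
```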

Analysis of Changes in the Algal Ecosystem of Sihwa Lake and Design of Sihwa-Ecosystem-Index (SEI) Based on Gradient Descent (시화호 조류 생태계의 변화 분석 및 경사 하강법을 이용한 시화호 환경 지수 고안)

  • Kim, Dong-hun;Jang, Ha-gyung;Lee, Gwan-wu;Jung, Gyeong-rok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.143-145 / 2021
  • Sihwa Lake was originally planned as a freshwater lake, but the plan failed due to serious environmental pollution. Through the periods of destruction and regeneration, changes in the lake's ecosystem were clearly visible, especially in the algal ecosystem, in part because many migratory birds pass through the area. This paper analyzes the changes in the algal ecosystem of Sihwa Lake based on eight ecosystem indices. Moreover, using gradient descent, COD is expressed as a function of three ecosystem indices selected from these eight, which is newly defined as the Sihwa Ecosystem Index (SEI). In conclusion, the current state of the ecosystem can be observed more easily using only this index information, without the underlying measurement data.

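As an illustration of the fitting step described above, the sketch below regresses a synthetic COD series on three placeholder index columns by gradient descent; the paper's actual indices, functional form, and data are not reproduced, and a linear form is assumed only for illustration.

```python
# Hedged sketch: fit COD as a linear function of three placeholder indices.
import numpy as np

rng = np.random.default_rng(3)

X = rng.uniform(0, 1, size=(100, 3))             # three hypothetical indices
true_coef = np.array([2.0, -1.0, 0.5])
cod = X @ true_coef + 1.0 + rng.normal(scale=0.05, size=100)  # synthetic COD

coef, bias = np.zeros(3), 0.0
lr = 0.1
for _ in range(5000):
    pred = X @ coef + bias                       # SEI-style prediction of COD
    err = pred - cod
    coef -= lr * (2 / len(cod)) * (X.T @ err)    # gradient of mean squared error
    bias -= lr * 2 * np.mean(err)

print("fitted coefficients:", np.round(coef, 3), "bias:", round(bias, 3))
```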

Performance Comparison of the Optimizers in a Faster R-CNN Model for Object Detection of Metaphase Chromosomes (중기 염색체 객체 검출을 위한 Faster R-CNN 모델의 최적화기 성능 비교)

  • Jung, Wonseok;Lee, Byeong-Soo;Seo, Jeongwook
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.11 / pp.1357-1363 / 2019
  • In this paper, we compare the performance of gradient descent optimizers in a Faster Region-based Convolutional Neural Network (R-CNN) model for chromosome object detection in digital images of human metaphase chromosomes. In Faster R-CNN, the gradient descent optimizer is used to minimize the objective functions of the region proposal network (RPN) module and of the classification score and bounding box regression blocks. Through performance comparisons among four gradient descent optimizers in our experiments, we found that the Adamax optimizer achieves a mean average precision (mAP) of about 52% with Faster R-CNN on a VGG16 base network, while the Adadelta optimizer achieves an mAP of about 58% with Faster R-CNN on a ResNet50 base network.
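The kind of optimizer swap the paper evaluates can be sketched with torchvision's stock Faster R-CNN as a stand-in; the paper's VGG16/ResNet50 configurations, chromosome dataset, and training schedule are not reproduced, and the learning rates are illustrative defaults.

```python
# Hedged PyTorch sketch: build a Faster R-CNN model and prepare several
# gradient descent optimizers for comparison.
import torch
import torchvision

def build_model(num_classes=2):
    # One background class plus one assumed "chromosome" class.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, weights_backbone=None, num_classes=num_classes
    )

optimizers = {
    "SGD": lambda p: torch.optim.SGD(p, lr=0.005, momentum=0.9),
    "Adam": lambda p: torch.optim.Adam(p, lr=1e-4),
    "Adamax": lambda p: torch.optim.Adamax(p, lr=2e-3),
    "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
}

model = build_model()
params = [p for p in model.parameters() if p.requires_grad]
for name, make_opt in optimizers.items():
    opt = make_opt(params)
    # Training over the chromosome dataset and mAP evaluation would go here;
    # each optimizer minimizes the same RPN and detection-head losses.
    print(name, "optimizer ready:", type(opt).__name__)
```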

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics of the currently used SGD, momentum, AdaGrad, RMSProp, and Adam methods according to the number of neurons. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was used as the activation function, cross-entropy error (CEE) as the loss function, and MNIST as the experimental dataset. As a result, a neuron count of 100-300, the Adam algorithm, and 200 training iterations were found to be the most efficient for deep learning training. This study provides guidance on the choice of algorithm and a reference value for the number of neurons when new training data are given in the future.
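The search described above can be sketched as a small grid over hidden-layer width and optimizer, as below; random tensors stand in for MNIST, and the widths, learning rates, and batch size are illustrative (only the 200-iteration count echoes the paper's finding).

```python
# Hedged PyTorch sketch: vary optimizer and hidden-layer width of a
# 3-hidden-layer ReLU network trained with cross-entropy loss.
import torch
import torch.nn as nn

def make_mlp(width):
    return nn.Sequential(
        nn.Linear(784, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),          # CrossEntropyLoss applies softmax itself
    )

optimizers = {
    "SGD": lambda p: torch.optim.SGD(p, lr=0.1),
    "Momentum": lambda p: torch.optim.SGD(p, lr=0.1, momentum=0.9),
    "AdaGrad": lambda p: torch.optim.Adagrad(p, lr=0.01),
    "RMSProp": lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adam": lambda p: torch.optim.Adam(p, lr=0.001),
}

x = torch.randn(256, 784)              # placeholder for MNIST images
y = torch.randint(0, 10, (256,))       # placeholder labels
loss_fn = nn.CrossEntropyLoss()

for width in (100, 200, 300):
    for name, make_opt in optimizers.items():
        model = make_mlp(width)
        opt = make_opt(model.parameters())
        for _ in range(200):           # iteration count from the paper
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        print(f"width={width:3d} {name:8s} final loss={loss.item():.3f}")
```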