• Title/Summary/Keyword: Learning rate


Research on building AI learning data for rapid quality assessment of aggregates (골재의 신속한 품질평가를 위한 AI 학습용 데이터 구축에 관한 연구)

  • Min, Tae-Beom;Kim, In;Lee, Jae-Sam;Baek, Chul-Seoung
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.11a / pp.209-210 / 2023
  • In this study, the accuracy of predicting the assembly rate of fine aggregate and the cleavage rate of coarse aggregate was analyzed using the constructed learning data. As a result, the distribution of the assembly rate of fine aggregate could be predicted from a simple image of a collected sample, with an accuracy of 96%. The classification of coarse aggregates could be confirmed by analyzing the fracture shape of the gravel, with an accuracy of 97%.


A Study on Real-time Drilling Parameters Prediction Using Recurrent Neural Network (순환신경망을 이용한 실시간 시추매개변수 예측 연구)

  • Han, Dong-kwon;Seo, Hyeong-jun;Kim, Min-soo;Kwon, Sun-il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.204-206 / 2021
  • Predicting drilling parameters in real time is important for maximizing drilling efficiency. The most common way to maximize drilling is to improve the drilling speed, which is related to the rate of penetration, drillstring rotational speed, weight on bit, and drilling mud flow rate. This study proposes a method of predicting the drilling rate, one of the real-time drilling parameters, with a recurrent neural network-based deep learning model, and compares the existing physics-based drilling rate prediction model with the deep learning prediction model.

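A minimal illustrative sketch (not from the cited paper) of the idea in the abstract above: a recurrent network regressing the rate of penetration from a window of real-time drilling parameters. The feature set (rotational speed, weight on bit, mud flow rate), window length, and layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Assumed inputs: windows of drilling parameters
# (rotational speed, weight on bit, mud flow rate) -> rate of penetration (ROP).
class ROPPredictor(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])       # ROP prediction at the last time step

model = ROPPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch (shapes are illustrative only).
x = torch.randn(16, 50, 3)                 # 16 windows of 50 time steps
y = torch.randn(16, 1)                     # measured ROP for each window
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```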

Study on the Effective Compensation of Quantization Error for Machine Learning in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 효율적인 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk
    • Journal of Broadcast Engineering / v.25 no.2 / pp.157-165 / 2020
  • In this paper, we propose an effective scheme to compensate the quantization error arising from quantized learning in machine learning on an embedded system. In machine learning based on gradient descent or nonlinear signal processing, the quantization error causes early vanishing of the gradient and degrades learning performance. To compensate for this quantization error, we derive a compensation vector orthogonal to the maximum component of the gradient vector. Moreover, instead of a conventional constant learning rate, we propose an adaptive learning rate algorithm, based on a nonlinear optimization technique, that selects the step size without any inner loop. The simulation results show that an optimization solver based on the proposed quantized method attains sufficient learning performance.
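
The abstract above mentions two ideas: compensating the quantized gradient with a vector orthogonal to its dominant component, and choosing the step size adaptively without an inner loop. The rough sketch below is one plausible reading under loose assumptions; the quantization scheme, compensation rule, and Barzilai-Borwein-style step size are illustrative stand-ins, not the paper's derivation.

```python
import numpy as np

def quantize(g, step=0.05):
    # Uniform quantization of the gradient (assumed scheme).
    return np.round(g / step) * step

def quantized_update(w, grad, lr):
    q = quantize(grad)
    err = grad - q                          # quantization error
    k = np.argmax(np.abs(q))                # dominant gradient component
    e_k = np.zeros_like(q)
    e_k[k] = 1.0
    comp = err - (err @ e_k) * e_k          # error part orthogonal to the dominant axis
    return w - lr * (q + comp)              # compensated descent step

# Toy objective f(w) = 0.5 * ||w||^2, so the gradient is simply w.
w = np.array([1.0, -2.0, 0.5])
prev_w = prev_g = None
for _ in range(100):
    g = w.copy()
    if prev_g is None:
        lr = 0.1                            # initial step size
    else:
        # One-shot adaptive step size (Barzilai-Borwein style), no inner loop.
        s, y = w - prev_w, g - prev_g
        lr = float(abs(s @ y) / (y @ y + 1e-12))
    prev_w, prev_g = w.copy(), g.copy()
    w = quantized_update(w, g, lr)
```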

Function Approximation for accelerating learning speed in Reinforcement Learning (강화학습의 학습 가속을 위한 함수 근사 방법)

  • Lee, Young-Ah;Chung, Tae-Choong
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.6 / pp.635-642 / 2003
  • Reinforcement learning has produced successful results in many applications such as control and scheduling. Various function approximation methods have been studied to improve the learning speed and to overcome the storage limitations of Q-Learning, the standard reinforcement learning algorithm. Most function approximation methods give up some properties of reinforcement learning and require prior knowledge and preprocessing. Fuzzy Q-Learning needs preprocessing to define fuzzy variables, and Locally Weighted Regression uses training examples. In this paper, we propose a function approximation method, Fuzzy Q-Map, based on on-line fuzzy clustering. Fuzzy Q-Map classifies a query state and predicts a suitable action according to the membership degree. We applied Fuzzy Q-Map, CMAC, and LWR to the mountain car problem. Fuzzy Q-Map reached the optimal prediction rate faster than CMAC, but showed a lower prediction rate than LWR, which uses training examples.
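
A compact sketch of membership-weighted Q-value approximation over fuzzy cluster centers, in the spirit of the approach described above. The Gaussian membership function, fixed cluster centers, and update rule are my own simplifications, not the Fuzzy Q-Map algorithm itself.

```python
import numpy as np

class FuzzyQApprox:
    """Membership-weighted Q-value approximation over fuzzy cluster centers."""

    def __init__(self, centers, n_actions, sigma=0.1, lr=0.1):
        self.centers = np.asarray(centers, dtype=float)    # (n_clusters, state_dim)
        self.q = np.zeros((len(self.centers), n_actions))  # per-cluster Q-values
        self.sigma, self.lr = sigma, lr

    def membership(self, state):
        # Gaussian membership of the query state in each cluster (assumed form).
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        mu = np.exp(-d2 / (2 * self.sigma ** 2))
        return mu / (mu.sum() + 1e-12)

    def value(self, state):
        # Q(s, .) as a membership-weighted average of cluster Q-values.
        return self.membership(state) @ self.q

    def update(self, state, action, target):
        # Credit each cluster for the TD error in proportion to its membership.
        mu = self.membership(state)
        td = target - self.value(state)[action]
        self.q[:, action] += self.lr * mu * td

# Toy usage on a 2-D state space (e.g., mountain car position/velocity).
approx = FuzzyQApprox(centers=np.random.uniform(-1, 1, size=(25, 2)), n_actions=3)
s = np.array([0.2, -0.4])
a = int(np.argmax(approx.value(s)))
approx.update(s, a, target=1.0)
```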

On the configuration of learning parameter to enhance convergence speed of back propagation neural network (역전파 신경회로망의 수렴속도 개선을 위한 학습파라메타 설정에 관한 연구)

  • 홍봉화;이승주;조원경
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.11 / pp.159-166 / 1996
  • In this paper, a method for improving the convergence speed and learning rate of back-propagation algorithms is proposed, which updates the learning-rate parameter and momentum term of each weight according to the generated error: the output layer of the neural network produces a high value when the output is far from the desired value and a low value in the opposite case. This method decreases the number of iterations and enables effective learning. The effectiveness of the proposed method is verified through simulations of the XOR and 3-parity problems.

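A small sketch, not the paper's rule, of per-weight learning-rate adaptation in a gradient update with momentum: each weight's rate grows when successive gradients agree in sign and shrinks when they disagree, as one simple way to drive the rate from the generated error. The growth/decay factors and toy objective are assumptions.

```python
import numpy as np

def adaptive_update(w, grad, prev_grad, lr, velocity, momentum=0.9,
                    up=1.05, down=0.7):
    # Per-weight learning-rate adaptation: grow the rate where successive
    # gradients agree in sign, shrink it where they disagree.
    agree = grad * prev_grad > 0
    lr = np.where(agree, lr * up, lr * down)
    velocity = momentum * velocity - lr * grad
    return w + velocity, lr, velocity

# Toy run on f(w) = 0.5 * ||w||^2 (gradient = w).
w = np.array([2.0, -3.0])
lr = np.full_like(w, 0.05)
velocity = np.zeros_like(w)
prev_grad = np.zeros_like(w)
for _ in range(50):
    grad = w.copy()
    w, lr, velocity = adaptive_update(w, grad, prev_grad, lr, velocity)
    prev_grad = grad
```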

A STUDY ON THE SIMULATED ANNEALING OF SELF ORGANIZED MAP ALGORITHM FOR KOREAN PHONEME RECOGNITION

  • Kang, Myung-Kwang;Ann, Tae-Ock;Kim, Lee-Hyung;Kim, Soon-Hyob
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.407-410 / 1994
  • In this paper, we describe a new unsupervised learning algorithm, SASOM. It resolves a defect of the conventional SOM, namely that the state of the network cannot converge to the minimum point. The proposed algorithm uses an objective function that evaluates the state of the network during learning and adjusts the learning rate flexibly according to that evaluation. We implement simulated annealing on top of the conventional network using the objective function and the learning rate. As a result, the proposed algorithm can make the state of the network converge to the global minimum. Using two-dimensional input vectors with a uniform distribution, we graphically compared the ordering ability of SOM with that of SASOM. We also carried out recognition experiments with the new algorithm on all Korean phonemes and some continuous speech.

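A bare-bones sketch, under my own assumptions, of a one-dimensional SOM whose learning rate is adjusted from an objective function (mean quantization error) while a temperature schedule shrinks the neighborhood, loosely in the spirit of SASOM; the schedule and adjustment rule are illustrative only.

```python
import numpy as np

def train_som(data, n_units=16, epochs=30, lr=0.5, temp=1.0, cooling=0.9):
    w = np.random.rand(n_units, data.shape[1])              # codebook vectors
    prev_obj = np.inf
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.sum((w - x) ** 2, axis=1))    # best-matching unit
            # Neighborhood on a 1-D map, shrinking with the temperature.
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist ** 2 / (2 * max(temp, 1e-3) ** 2))
            w += lr * h[:, None] * (x - w)
        # Objective function: mean quantization error over the data set.
        obj = np.mean([np.min(np.sum((w - x) ** 2, axis=1)) for x in data])
        # Adjust the learning rate according to the objective (illustrative rule),
        # and cool the temperature as in simulated annealing.
        lr = lr * 0.8 if obj >= prev_obj else lr * 0.95
        prev_obj, temp = obj, temp * cooling
    return w

codebook = train_som(np.random.rand(200, 2))
```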

Improvement of the Convergence Rate of Deep Learning by Using Scaling Method

  • Ho, Jiacang;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.6 no.4 / pp.67-72 / 2017
  • Deep learning neural networks have become very popular because they can learn very complex datasets such as image datasets. Although a deep learning neural network can achieve high accuracy on image datasets, it needs a lot of time to reach the convergence stage. To address this issue, we propose a scaling method that lets the neural network reach the convergence stage in a shorter time than the original method. From the results, we observe that our algorithm performs better than previous work.

A Learning Algorithm for Optimal Fuzzy Control Rules (최적의 퍼지제어규칙을 얻기위한 퍼지학습법)

  • Chung, Byeong-Mook
    • Transactions of the Korean Society of Mechanical Engineers A / v.20 no.2 / pp.399-407 / 1996
  • A fuzzy learning algorithm for obtaining optimal fuzzy rules is presented in this paper. The algorithm introduces a reference model to generate a desired output and a performance index function instead of a performance index table. The performance index function is a cost function based on the error and error-rate between the reference and plant outputs. The cost function is minimized by a gradient method, and the control input is updated accordingly. In this case, the control rules that generate the desired response can be obtained by changing the weight of the error-rate in the cost function. For a SISO (Single-Input Single-Output) plant, the plant model can be expressed and the desired control rules obtained using only the learning delay. In the long run, this algorithm yields good control rules with a minimal amount of prior information about the environment.
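
An illustrative sketch, not the paper's formulation, of minimizing a cost built from the error and error-rate between a reference and the plant output by a gradient step on the control input. The first-order plant, constant reference, and weighting constant are placeholders.

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t] tracking a constant reference.
a, b, k, lr = 0.9, 0.2, 0.5, 0.5
y, y_ref, u = 0.0, 1.0, 0.0
prev_e = y_ref - y
for t in range(100):
    y = a * y + b * u
    e = y_ref - y                      # error
    e_rate = e - prev_e                # error-rate
    cost = e ** 2 + k * e_rate ** 2    # performance index function (assumed form)
    # Gradient of the cost w.r.t. the control input, using de/du ~ -b
    # as a one-step sensitivity approximation.
    grad = 2 * e * (-b) + 2 * k * e_rate * (-b)
    u -= lr * grad                     # gradient step on the control input
    prev_e = e
```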

Parameter Adaptation in Neural Network Using Fuzzy (퍼지를 이용한 신경망에서의 파라미터의 수정)

  • Lee, Kwong-Won;Ko, Joe-Ho;Bae, Young-Chul;Yim, Wha-Young
    • Proceedings of the KIEE Conference / 1997.07b / pp.383-385 / 1997
  • Back-propagation is one of the efficient algorithms used for nonlinear optimization and control. In spite of its structural simplicity and learning ability, its learning time is very long, and in bad cases it converges to a local minimum on complicated input patterns. To improve these issues, varying learning rates and momentum terms have been proposed. In this paper, fuzzy logic is used to adjust the parameters, learning rate and momentum, to improve performance. The parameters are adjusted adaptively according to the error and the change of error. To evaluate the proposed method, it is simulated on an inverted pendulum with MATLAB.

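A toy sketch, with made-up membership functions and a made-up rule table, of adjusting the learning rate and momentum from the error and change of error by fuzzy inference; the paper's actual rule base is not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function on [a, c] peaking at b.
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_adjust(error, d_error, lr, momentum):
    e, de = abs(error), abs(d_error)
    # Memberships for "small"/"large" error and change of error (assumed shapes).
    small_e, large_e = tri(e, -0.5, 0.0, 0.5), tri(e, 0.3, 1.0, 2.0)
    small_de, large_de = tri(de, -0.5, 0.0, 0.5), tri(de, 0.3, 1.0, 2.0)
    # Rule table: large error with small change -> speed up;
    # small error or large change -> slow down. Defuzzified by weighted average.
    weights = np.array([large_e * small_de, small_e, large_de])
    scales = np.array([1.2, 0.9, 0.8])
    factor = float(weights @ scales) / (weights.sum() + 1e-12)
    return lr * factor, min(momentum * factor, 0.99)

lr, mom = fuzzy_adjust(error=0.8, d_error=0.1, lr=0.1, momentum=0.9)
```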

Study on the Load Frequency Control of 2-Area Power System Using Neural Network Controller (신경회로망 제어기을 이용한 2지역 전력계통의 부하주파수제어에 관한 연구)

  • Chong, H.H.;Lee, J.T.;Kim, S.H.;Joo, S.M.
    • Proceedings of the KIEE Conference / 1996.07b / pp.768-770 / 1996
  • This paper proposes a neural network controller, one of the self-organizing techniques. The neural network controller takes the error and the change of error as input signals and produces the optimal output, and the system is trained with the error back-propagation learning algorithm, one of the error-minimizing learning methods. To achieve practical real-time control and reduce learning time, it is applied to the load-frequency control of a nonlinear power system using a momentum learning method. A case considering constraints on the rate of increase of generation is also described.
