• Title/Abstract/Keyword: Learning Rate

Search results: 2,191 (processing time: 0.03 s)

Enhanced RBF Network by Using Auto-Tuning Method of Learning Rate, Momentum and ART2

  • Kim, Kwang-baek; Moon, Jung-wook
    • 한국산학기술학회:학술대회논문집 / 한국산학기술학회 2003년도 Proceeding / pp.84-87 / 2003
  • This paper proposes an enhanced RBF network that dynamically adjusts the learning rate and momentum with a fuzzy system in order to adjust the connection weights between the middle layer and the output layer effectively. ART2 is applied as the learning structure between the input layer and the middle layer, and the proposed auto-tuning method for the learning rate is applied to adjust the connection weights between the middle layer and the output layer. To evaluate learning efficiency, the proposed method is compared with the conventional delta-bar-delta algorithm and with the RBF network based on ART2, and the comparison verifies that the proposed method improves learning speed and convergence. (A brief illustrative sketch follows this entry.)

  • PDF
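
The abstract above describes adjusting the learning rate and momentum on the fly from the observed error. A minimal sketch of that general idea follows, using a simple error-trend heuristic as a stand-in for the paper's fuzzy controller; the thresholds, scaling factors, and function names are illustrative assumptions, not the authors' rules.

```python
import numpy as np

def adjust_lr_momentum(lr, momentum, err_history,
                       lr_bounds=(1e-4, 1.0), mom_bounds=(0.0, 0.95)):
    """Heuristic stand-in for the paper's fuzzy controller (illustrative only):
    grow the learning rate and momentum while the error keeps falling,
    shrink them when the error rises."""
    if len(err_history) < 2:
        return lr, momentum
    if err_history[-1] < err_history[-2]:      # error decreased -> speed up
        lr, momentum = lr * 1.05, momentum + 0.01
    else:                                      # error increased -> slow down
        lr, momentum = lr * 0.7, momentum * 0.5
    lr = float(np.clip(lr, *lr_bounds))
    momentum = float(np.clip(momentum, *mom_bounds))
    return lr, momentum

# usage sketch: inside a training loop over the hidden-to-output weights W
# lr, mom, velocity = 0.1, 0.5, 0.0
# for each epoch:
#     grad = ...                       # gradient of the output-layer error
#     velocity = mom * velocity - lr * grad
#     W = W + velocity
#     lr, mom = adjust_lr_momentum(lr, mom, error_history)
```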

학습률 적용에 따른 흉부영상 폐렴 유무 분류 비교평가 (Comparative Evaluation of Chest Image Pneumonia based on Learning Rate Application)

  • 김지율; 예수영
    • 한국방사선학회논문지 / Vol. 16, No. 5 / pp.595-602 / 2022
  • This study aimed to identify the most effective learning rate for accurate and efficient automated diagnosis of pneumonia in chest X-ray images using deep learning. An Inception V3 deep learning model was trained three times at each of four learning rates: 0.1, 0.01, 0.001, and 0.0001. The average validation accuracy and loss and the test metrics, each averaged over the three runs, were used as performance indicators. Both the validation results and the test metrics showed that the model trained with a learning rate of 0.001 achieved the highest accuracy and the best overall performance. The paper therefore recommends a learning rate of 0.001 when classifying the presence of pneumonia in chest X-ray images with a deep learning model, and suggests that models trained with this learning rate could serve as an auxiliary aid for human readers. The results are expected to serve as reference data for future studies on deep-learning-based pneumonia classification and to help select an efficient learning rate for medical image classification with artificial intelligence.
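
A minimal sketch of this kind of learning-rate comparison is given below; the model constructor, data loaders, optimizer, and epoch counts are placeholders (assumptions), not the paper's exact Inception V3 configuration.

```python
import torch
import torch.nn as nn

def evaluate(model, loader, device="cpu"):
    """Return classification accuracy on a validation loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / max(total, 1)

def compare_learning_rates(make_model, train_loader, val_loader,
                           lrs=(0.1, 0.01, 0.001, 0.0001),
                           runs=3, epochs=5, device="cpu"):
    """Train `runs` models per candidate learning rate and average the accuracy."""
    loss_fn = nn.CrossEntropyLoss()
    results = {}
    for lr in lrs:
        accs = []
        for _ in range(runs):                       # three runs per learning rate
            model = make_model().to(device)
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for _ in range(epochs):
                model.train()
                for x, y in train_loader:
                    opt.zero_grad()
                    loss = loss_fn(model(x.to(device)), y.to(device))
                    loss.backward()
                    opt.step()
            accs.append(evaluate(model, val_loader, device))
        results[lr] = sum(accs) / len(accs)         # mean validation accuracy
    return results                                   # pick the lr with the best mean
```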

Evolutionary Learning-Rate Selection for BPNN with Window Control Scheme

  • Hoon, Jung-Sung
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 1997년도 추계학술대회 학술발표 논문집 / pp.301-308 / 1997
  • The learning speed of a neural network, the most important factor in applying it to real problems, depends greatly on the network's learning rate. Three approaches have been proposed to date: empirical, deterministic, and stochastic. We previously proposed a learning-rate selection algorithm using an evolutionary programming search scheme. Although it performed better than the other methods, the time spent selecting learning rates evolutionarily degraded its overall performance. This was caused by using static intervals (called static windows) to update the learning rates: with static windows, the algorithm updated learning rates that were already performing well, or failed to update learning rates that were performing badly. This paper introduces a window control scheme to avoid such problems. With this scheme, the algorithm tries to update the learning rates only when learning performance has been continuously bad during a specified interval; if the previously selected learning rates perform well, it does not update them. This greatly reduces the time spent updating learning rates. As a result, the algorithm with the window control scheme performs better than the one with static windows. This paper describes both the previous and the new algorithm and reports experimental results. (A brief illustrative sketch follows this entry.)

  • PDF
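
A minimal sketch of the window-control idea described above follows; the error measure, mutation scale, population size, and helper names are illustrative assumptions rather than the authors' exact evolutionary-programming settings.

```python
import random

def evolve_learning_rate(current_lr, train_eval, population=8, sigma=0.5):
    """One evolutionary-programming step: mutate the current learning rate and
    keep the candidate with the lowest training error (train_eval: lr -> error)."""
    candidates = [current_lr] + [
        max(1e-6, current_lr * (1.0 + random.gauss(0.0, sigma)))
        for _ in range(population)
    ]
    return min(candidates, key=train_eval)

def train_with_window_control(train_epoch, train_eval, epochs=100,
                              lr=0.1, window=5):
    """train_epoch(lr) runs one epoch and returns the epoch error.
    The learning rate is re-selected only after `window` consecutive bad epochs."""
    bad_streak, best_err = 0, float("inf")
    for _ in range(epochs):
        err = train_epoch(lr)
        if err < best_err:
            best_err, bad_streak = err, 0        # good performance: keep the lr
        else:
            bad_streak += 1
        if bad_streak >= window:                 # bad for a whole window: re-select
            lr = evolve_learning_rate(lr, train_eval)
            bad_streak = 0
    return lr
```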

은닉층 뉴우런 추가에 의한 역전파 학습 알고리즘 (A Modified Error Back Propagation Algorithm Adding Neurons to Hidden Layer)

  • 백준호; 김유신; 손경식
    • 전자공학회논문지B / Vol. 29B, No. 4 / pp.58-65 / 1992
  • In this paper, a new back-propagation algorithm that adds neurons to the hidden layer is proposed. The proposed algorithm is applied to the recognition of handwritten digits, combined with back propagation that omits redundant learning. Its learning speed and recognition rate are compared with those of the conventional back-propagation algorithm and of back propagation that omits redundant learning. The proposed algorithm learns 4 times as fast as conventional back propagation and 2 times as fast as back propagation that omits redundant learning. The recognition rate is 96.2% for conventional back propagation, 96.5% for back propagation that omits redundant learning, and 97.4% for the proposed algorithm. (A brief illustrative sketch follows this entry.)

  • PDF
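
A minimal sketch of the mechanism of adding a hidden neuron during back-propagation training is given below; the initialization scale and the plateau test are assumptions, not the paper's criterion.

```python
import numpy as np

def add_hidden_neuron(W_in, W_out, scale=0.01, rng=np.random.default_rng(0)):
    """W_in: (n_hidden, n_in) input-to-hidden weights,
    W_out: (n_out, n_hidden) hidden-to-output weights.
    Returns both matrices enlarged by one hidden neuron with small random weights."""
    new_row = rng.normal(0.0, scale, size=(1, W_in.shape[1]))
    new_col = rng.normal(0.0, scale, size=(W_out.shape[0], 1))
    return np.vstack([W_in, new_row]), np.hstack([W_out, new_col])

# usage sketch inside a back-propagation loop (illustrative):
# if error_plateaued(err_history):          # hypothetical stall test
#     W_in, W_out = add_hidden_neuron(W_in, W_out)
```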

시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성 (The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function)

  • 석진욱; 조성원
    • 제어로봇시스템학회논문지 / Vol. 2, No. 2 / pp.108-114 / 1996
  • We present proofs of the stability and convergence of a self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. One of the major problems of the SOFM concerns the learning rate, which plays the role of the Kalman filter gain in stochastic control: it is a monotonically decreasing function that converges to 0 in order to satisfy the minimum-variance property. In this paper, we show the stability and convergence of the SOFM with a time-invariant learning rate. The analysis of the proposed algorithm shows that exponential stability and weak convergence are guaranteed. (The update rule in question is sketched after this entry.)

  • PDF
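
As a sketch of the object under analysis, one plausible form of the SOFM weight update with a constant (time-invariant) learning rate and a binary reinforcement function is written out below; the precise reinforcement function used in the paper is an assumption here.

```latex
% Hedged sketch (assumption): a plausible form of the analyzed update, with
% constant learning rate \alpha and binary reinforcement r_i(k) \in \{0,1\}
% (1 inside the winner's neighborhood, 0 outside).
\[
  w_i(k+1) \;=\; w_i(k) \;+\; \alpha \, r_i(k)\,\bigl(x(k) - w_i(k)\bigr),
  \qquad \alpha \ \text{constant},\quad r_i(k) \in \{0,1\}.
\]
```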

STOCHASTIC GRADIENT METHODS FOR L2-WASSERSTEIN LEAST SQUARES PROBLEM OF GAUSSIAN MEASURES

  • YUN, SANGWOON; SUN, XIANG; CHOI, JUNG-IL
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 25, No. 4 / pp.162-172 / 2021
  • This paper proposes stochastic methods for finding an approximate solution to the L2-Wasserstein least squares problem of Gaussian measures. The variable of the problem lies in a set of positive definite matrices. The first proposed method is a classical stochastic gradient method combined with projection, and the second is a variance-reduced method with projection. Their global convergence is analyzed using the framework of proximal stochastic gradient methods. Convergence of the classical stochastic gradient method with projection is established under a diminishing learning-rate rule, in which the learning rate decreases as the epoch increases, whereas convergence of the variance-reduced method with projection can be established with a constant learning rate. Numerical results show that the proposed algorithms with a proper learning rate outperform a gradient projection method.
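
A minimal sketch of the first method's general pattern, projected stochastic gradient descent over positive semidefinite matrices with a diminishing learning rate, follows; the objective and its stochastic gradient are placeholders (assumptions), not the paper's Wasserstein least-squares formulation.

```python
import numpy as np

def project_psd(S, eps=1e-10):
    """Project a symmetric matrix onto the positive semidefinite cone
    by clipping negative eigenvalues."""
    S = 0.5 * (S + S.T)
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.maximum(vals, eps)) @ vecs.T

def projected_sgd(stochastic_grad, S0, epochs=100, lr0=0.1):
    """stochastic_grad(S, t) returns a noisy gradient estimate at S."""
    S = project_psd(S0)
    for t in range(1, epochs + 1):
        lr = lr0 / t                          # diminishing learning-rate rule
        S = project_psd(S - lr * stochastic_grad(S, t))
    return S
```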

Pipeline wall thinning rate prediction model based on machine learning

  • Moon, Seongin; Kim, Kyungmo; Lee, Gyeong-Geun; Yu, Yongkyun; Kim, Dong-Jin
    • Nuclear Engineering and Technology / Vol. 53, No. 12 / pp.4060-4066 / 2021
  • Flow-accelerated corrosion (FAC) of carbon steel piping is a significant problem in nuclear power plants. The basic process of FAC is currently understood relatively well; however, the accuracy of models that predict the wall-thinning rate under an FAC environment is not reliable. Herein, we propose a methodology to construct pipe wall-thinning rate prediction models using artificial neural networks and a convolutional neural network; the models are confined to straight pipes without geometric changes. Furthermore, a methodology to generate training data is proposed to efficiently train the neural network for the development of a machine-learning-based FAC prediction model. It is concluded that machine learning can be used to construct pipe wall-thinning rate prediction models and to optimize the number of training datasets. The proposed methodology can be applied to efficiently generate a large dataset from an FAC test and to develop a wall-thinning rate prediction model for a real situation.
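
A minimal sketch of a neural-network regression model for a wall-thinning rate is shown below; the feature names and synthetic data are illustrative assumptions, not the paper's FAC dataset or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))               # e.g. [flow velocity, temperature, pH] (assumed features)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2] + 0.01 * rng.normal(size=200)

# small multilayer perceptron mapping operating conditions to a thinning rate
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))                  # predicted thinning rates for three samples
```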

CMP 패드 컨디셔닝에서 딥러닝을 활용한 컨디셔너 스윙에 따른 패드 마모 프로파일에 관한 연구 (Study on the Pad Wear Profile Based on the Conditioner Swing Using Deep Learning for CMP Pad Conditioning)

  • 박병훈; 황해성; 이현섭
    • Tribology and Lubricants / Vol. 40, No. 2 / pp.67-70 / 2024
  • Chemical mechanical planarization (CMP) is an essential process for ensuring high integration when manufacturing semiconductor devices. As an ultraprecise process, CMP mainly uses polyurethane-based polishing pads to achieve mechanical material removal and the required chemical reactions. A diamond disk performs pad conditioning to remove processing residues on the pad surface and maintain sufficient surface roughness during CMP. However, the diamond grits attached to the disk cause uneven wear of the pad, leading to poor uniformity of material removal during CMP. This study investigates the pad wear rate profile according to the swing motion of the conditioner during swing-arm-type CMP conditioning using deep learning. During conditioning, the motion of the swing arm is independently controlled in eight zones of equal pad radius. The experiment includes six swing-motion conditions to obtain actual data on the pad wear rate profile, and a deep learning model is trained on the pad wear rate profiles obtained in the experiment. The absolute average error rate between the experimental values and the learning results is 0.01%, confirming that the experimental results are well represented by the learned model. Pad wear rate profile prediction using the learning results shows good agreement between the predicted and experimental values.
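
A minimal sketch of the error metric quoted above (an absolute average error rate between experimental and learned wear-rate profiles) follows; the exact definition used in the paper is an assumption.

```python
import numpy as np

def absolute_average_error_rate(experimental, predicted):
    """Mean absolute relative error between two profiles, as a percentage."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rel_err = np.abs(predicted - experimental) / np.abs(experimental)
    return 100.0 * rel_err.mean()

# usage sketch with radial wear-rate profiles (one value per pad-radius zone):
# print(absolute_average_error_rate(measured_profile, learned_profile))
```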

Q-learning 알고리즘이 성능 향상을 위한 CEE(CrossEntropyError)적용 (Applying CEE (CrossEntropyError) to improve performance of Q-Learning algorithm)

  • 강현구; 서동성; 이병석; 강민수
    • 한국인공지능학회지 / Vol. 5, No. 1 / pp.1-9 / 2017
  • Recently, the Q-learning algorithm, a kind of reinforcement learning, has mainly been used to implement artificial intelligence systems in combination with deep learning, and much research aims to improve its performance. The purpose of this study is accordingly to improve the performance of the Q-learning algorithm by applying the cross-entropy error (CEE) to its loss function. Because the mean squared error used in Q-learning makes it difficult to measure the error precisely, the cross-entropy error, known to be highly accurate, is applied to the loss function instead. Experimental results show a success rate of about 12% with the mean squared error used in existing reinforcement learning and about 36% with the cross-entropy error used in deep learning.
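
A minimal sketch of swapping the squared-error loss for a cross-entropy-style loss in a Q-value update is given below; how the paper forms the cross-entropy target is an assumption, and here the target distribution is simply taken as a softmax over the target Q-values.

```python
import torch
import torch.nn.functional as F

def q_loss(q_pred, q_target, kind="mse"):
    """q_pred, q_target: tensors of shape (batch, n_actions)."""
    if kind == "mse":                          # conventional Q-learning loss
        return F.mse_loss(q_pred, q_target)
    # cross-entropy between softmax distributions of target and predicted Q-values
    log_p = F.log_softmax(q_pred, dim=1)
    p_target = F.softmax(q_target.detach(), dim=1)
    return -(p_target * log_p).sum(dim=1).mean()

# usage sketch inside a Q-learning update step:
# loss = q_loss(q_network(states), targets, kind="cee")
# loss.backward(); optimizer.step()
```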

일정 학습계수와 이진 강화함수를 가진 자기 조직화 형상지도 신경회로망 (Self-Organizing Feature Map with Constant Learning Rate and Binary Reinforcement)

  • 조성원; 석진욱
    • 전자공학회논문지B / Vol. 32B, No. 1 / pp.180-188 / 1995
  • A modified Kohonen self-organizing feature map (SOFM) algorithm with a binary reinforcement function and a constant learning rate is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SOFM algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability caused by the constant learning rate. Since the proposed algorithm avoids complicated multiplications, its digital hardware implementation is much easier than that of the original SOFM. (A brief illustrative sketch follows this entry.)

  • PDF
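
A minimal sketch of the modified SOFM update described above, with a constant learning rate and a binary (0/1) reinforcement over the winner's neighborhood, follows; the one-dimensional map, neighborhood radius, and iteration scheme are assumptions.

```python
import numpy as np

def sofm_train(X, n_units=10, lr=0.1, radius=1, epochs=20,
               rng=np.random.default_rng(0)):
    """X: (n_samples, dim) inputs; returns an (n_units, dim) codebook for a 1-D map.
    The learning rate is constant, and the binary reinforcement is 1 only for
    units within `radius` of the winning unit."""
    W = X[rng.choice(len(X), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            reinforce = (np.abs(np.arange(n_units) - winner) <= radius)
            W += lr * reinforce[:, None] * (x - W)   # constant learning rate
    return W
```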