• Title/Summary/Keyword: Error function

Applying CEE (CrossEntropyError) to improve performance of Q-Learning algorithm (Q-learning 알고리즘이 성능 향상을 위한 CEE(CrossEntropyError)적용)

  • Kang, Hyun-Gu;Seo, Dong-Sung;Lee, Byeong-seok;Kang, Min-Soo
    • Korean Journal of Artificial Intelligence / v.5 no.1 / pp.1-9 / 2017
  • Recently, the Q-Learning algorithm, a type of reinforcement learning, has mainly been used in combination with deep learning to implement artificial intelligence systems, and much research aims to improve its performance. This study therefore seeks to improve the performance of the Q-Learning algorithm by applying the cross-entropy error (CEE) to its loss function. Because the mean squared error used in Q-Learning makes it difficult to measure the exact error rate, the cross-entropy error, known to be highly accurate, is applied to the loss function instead. Experimental results show a success rate of about 12% with the mean squared error used in existing reinforcement learning and about 36% with the cross-entropy error used in deep learning.
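
The abstract describes swapping the mean-squared-error loss for a cross-entropy loss in Q-Learning. A minimal sketch of that substitution follows; the linear Q-network, the sigmoid squashing of Q-values into (0, 1), and all names are illustrative assumptions, since cross-entropy requires probability-like quantities and the paper's own setup is not reproduced here.

```python
# Minimal sketch (not the paper's code): compare the squared-error loss on a
# TD target with a cross-entropy loss in a tiny Q-learning update. The linear
# Q-network and the sigmoid squashing of Q-values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))       # toy linear Q-network: state (4,) -> Q-values (2,)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def losses(state, action, reward, next_state, gamma=0.99):
    q = sigmoid(state @ W)                    # Q-values squashed into (0, 1)
    q_next = sigmoid(next_state @ W)
    target = np.clip(reward + gamma * q_next.max(), 1e-7, 1 - 1e-7)
    pred = np.clip(q[action], 1e-7, 1 - 1e-7)
    mse = 0.5 * (target - pred) ** 2          # loss used in standard Q-learning
    cee = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))  # cross-entropy variant
    return mse, cee

state, next_state = rng.random(4), rng.random(4)
print(losses(state, action=0, reward=0.0, next_state=next_state))
```

The cross-entropy term penalizes confident wrong predictions much more steeply than the quadratic term, which is the usual argument for faster convergence.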

Servo-Writing Method using Feedback Error Learning Neural Networks for HDD (피드백 오차 학습 신경회로망을 이용한 하드디스크 서보정보 기록 방식)

  • Kim, Su-Hwan;Chung, Chung-Choo;Shim, Jun-Seok
    • Proceedings of the KIEE Conference / 2004.11c / pp.699-701 / 2004
  • This paper proposes a servo-writing algorithm based on feedback error learning neural networks. The controller consists of a PID feedback controller and a feedforward controller using a Gaussian radial basis function network (RBFN). Because the RBFN is trained by an on-line rule, the controller has adaptation capability. The performance of the proposed controller is compared with that of a conventional PID controller, and the proposed algorithm shows better performance.
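
A minimal sketch of the feedback error learning scheme the abstract names is given below, assuming the standard arrangement in which the PID feedback command serves as the training signal for the Gaussian RBF feedforward network; the plant model, gains, and RBF centres are illustrative, not the authors'.

```python
# Minimal sketch (not the authors' implementation) of feedback error learning:
# a Gaussian RBF network supplies the feedforward command and is trained
# on-line using the PID feedback command as its error signal. The toy plant,
# gains, and RBF centres are illustrative assumptions.
import numpy as np

centres = np.linspace(-1.0, 1.0, 11)          # RBF centres over the reference range
width = 0.2
w = np.zeros_like(centres)                     # RBF output weights
kp, ki, kd, eta = 2.0, 0.5, 0.1, 0.05          # PID gains and learning rate
integ, prev_e, y = 0.0, 0.0, 0.0               # integrator, previous error, plant output
dt = 0.01

def rbf(x):
    return np.exp(-((x - centres) ** 2) / (2 * width ** 2))

for k in range(2000):
    r = np.sin(2 * np.pi * 0.5 * k * dt)       # reference trajectory
    e = r - y
    integ += e * dt
    u_fb = kp * e + ki * integ + kd * (e - prev_e) / dt   # feedback (PID) command
    phi = rbf(r)
    u_ff = w @ phi                             # feedforward (RBFN) command
    w += eta * u_fb * phi                      # feedback-error learning update
    y += dt * (-y + (u_fb + u_ff))             # toy first-order plant
    prev_e = e

print("residual feedback effort:", abs(u_fb))
```

As the RBF network learns the inverse dynamics along the reference, the feedback command it is trained on shrinks, which is the adaptation property the abstract refers to.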

A new learning algorithm for multilayer neural networks (새로운 다층 신경망 학습 알고리즘)

  • 고진욱;이철희
    • Proceedings of the IEEK Conference / 1998.10a / pp.1285-1288 / 1998
  • In this paper, we propose a new learning algorithm for multilayer neural networks. In the error backpropagation algorithm that is widely used for training multilayer neural networks, the weights are adjusted to reduce an error function given by the sum of squared errors over all neurons in the output layer of the network. In the proposed learning algorithm, we consider each output of the output layer as a function of the weights and adjust the weights directly so that the output neurons produce the desired outputs. Experiments show that the proposed algorithm outperforms the backpropagation learning algorithm.
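
For reference, the error function and update rule of the backpropagation baseline mentioned in the abstract are written out below; the notation is assumed here, and the paper's direct output-adjustment rule is not reproduced.

```latex
% Sum-of-squared-error objective over output neurons k and patterns p,
% and the gradient-descent weight update used by backpropagation:
E = \frac{1}{2} \sum_{p} \sum_{k} \left( d_{pk} - o_{pk} \right)^{2},
\qquad
\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}
```

Here $d_{pk}$ and $o_{pk}$ are the desired and actual outputs of output neuron $k$ for pattern $p$, and $\eta$ is the learning rate; the proposed algorithm instead treats each $o_{pk}$ directly as a function of the weights.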

Fuzzy control with auto-tuning scaling factor (스켈링 계수 자동조정을 통한 퍼지제어)

  • 정명환;정희태;전기준
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1992.10a / pp.123-128 / 1992
  • This paper presents an auto-tuning algorithm for the scaling factor of a fuzzy controller in order to improve system performance. The scaling factor is defined as a function of the error and the error change, and this function is tuned by the output of a performance-evaluation level that uses the overshoot and rise-time errors. Simulation results show that the proposed algorithm tunes well for a system with parameter changes.
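
A minimal sketch of the idea, under stated assumptions, is given below: the input scaling factor is recomputed from the error and error change and nudged by a performance score built from overshoot and rise time. The membership functions, rule base, and the particular scaling and scoring formulas are illustrative assumptions, not the paper's.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's rules):
# the fuzzy controller's input scaling factor is recomputed each step as a
# function of the error and the error change, weighted by a performance score
# built from overshoot and rise-time errors.
import numpy as np

def fuzzy_output(e_n, de_n):
    """Toy two-rule fuzzy controller on normalized error / error change."""
    pos = lambda x: np.clip(x, 0.0, 1.0)       # membership: positive
    neg = lambda x: np.clip(-x, 0.0, 1.0)      # membership: negative
    w1, w2 = pos(e_n), neg(e_n)                # rule strengths
    return (w1 * 1.0 + w2 * (-1.0)) / (w1 + w2 + 1e-9)

def scaling_factor(e, de, perf):
    """Scaling factor as a function of error, error change, and performance score."""
    return (1.0 + abs(e) + 0.5 * abs(de)) * perf

# one illustrative step: assumed overshoot 0.2 and rise-time error 0.05
e, de = 0.4, -0.1
perf = 1.0 / (1.0 + 0.2 + 0.05)
Ge = scaling_factor(e, de, perf)
u = fuzzy_output(np.clip(Ge * e, -1, 1), np.clip(Ge * de, -1, 1))
print(Ge, u)
```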

An analysis of errors in problem solving of the function unit in the first grade highschool (고등학교 1학년 함수단원 문제해결에서의 오류에 대한 분석)

  • Mun, Hye-Young;Kim, Yung-Hwan
    • Journal of the Korean School Mathematics Society / v.14 no.3 / pp.277-293 / 2011
  • The purpose of mathematics education is to develop the ability to transform various problems in general situations into mathematics problems and then solve them mathematically. Various teaching-learning methods for improving mathematical problem-solving ability can be tried. However, it is necessary to choose an appropriate teaching-learning method after figuring out students' level of understanding of the mathematics content and their problem-solving strategies. Error analysis helps mathematics learning by providing teachers with more efficient teaching strategies and by letting students know the cause of failure and find a correct way. The following research questions were set up and analyzed. First, an error classification pattern was established. Second, the errors in the solving process of function problems were analyzed according to that pattern. For this study, a survey was conducted with 90 first-grade students of ○○ high school in Chung-nam, who were asked to solve 8 problems on functions. The following error classification patterns were set up by referring to preceding studies on errors and the error patterns shown in the survey: (1) Misused Data, (2) Misinterpreted Language, (3) Logically Invalid Inference, (4) Distorted Theorem or Definition, (5) Unverified Solution, (6) Technical Errors, and (7) Discontinuance of the Solving Process. The results of the error analysis based on this classification are as follows. First, students do not completely understand the concept of a function, and even when they do, they lack the ability to apply it. Second, students make many mistakes when interpreting a mathematics problem into other representations such as equations, symbols, graphs, and figures. Third, students misuse or ignore the data given in the problem. Fourth, students often give up or never attempt the solving process. Further research on error analysis is needed because it provides useful information for the teaching-learning process.

Research into Head-body Thermal Bending for High-accuracy Thermal Error Compensation (고정도 열변위보정을 위한 주축대의 열적굽힘에 대한 연구)

  • Kim, Tae-Weon;Hah, Jae-Yong;Ko, Tae-Jo
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.56-64 / 2002
  • Machine tools are engineered to give high dimensional accuracy in machining operations. However, errors due to thermal effects degrade the dimensional accuracy of machine tools considerably, and many machine tools are therefore equipped with a thermal error compensation function. In general, thermal errors can be generated in the angular directions as well as the linear directions. Among them, thermal errors in the angular directions contribute a large error component in the presence of an offset distance, as in the case of the Abbe error. Because most thermal error compensation functions rely on a good correlation between temperature change and thermal deformation, angular thermal deformation is often the most difficult hurdle to enhancing compensation accuracy. In this regard, this paper investigates the contribution of thermal bending to the total thermal error and shows how to deal with thermally induced bending effects in thermal error compensation.
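
As a one-line illustration of why the angular terms matter (notation assumed, not taken from the paper), the Abbe-type error produced by an angular thermal deformation $\theta$ at an offset distance $L$ is:

```latex
\delta \approx L \tan\theta \approx L\,\theta \qquad (\theta \ \text{small})
```

so even a small bending angle is amplified in proportion to the offset distance, which is why angular deformation can dominate the directly measurable linear terms.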

A Novel Unambiguous Correlation Function for Composite Binary Offset Carrier Signal Tracking (합성 이진 옵셋 반송파 신호 추적을 위한 새로운 비모호 상관함수)

  • Lee, Youngseok;Yoon, Seokho
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.6 / pp.512-519 / 2013
  • In this paper, we propose a novel unambiguous correlation function for composite binary offset carrier (CBOC) signal tracking. First, we observe that the sub-carrier of the CBOC signal can be seen as a sum of four partial sub-carriers, and we generate the four partial correlations that compose the CBOC autocorrelation. Then, we obtain an unambiguous correlation function with a sharp main peak by recombining the partial correlations. Numerical results confirm that the proposed unambiguous correlation function offers better tracking performance than conventional correlation functions in terms of the tracking error standard deviation and the multipath error envelope.
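
The sketch below illustrates the ambiguity problem that motivates the paper, not the proposed correlator: the autocorrelation of a BOC(1,1)-type sub-carrier-modulated code has side peaks near ±0.5 chip that a tracking loop can lock onto. The code length, sampling rate, and the omission of the BOC(6,1) component of CBOC are simplifying assumptions.

```python
# Minimal sketch of the tracking ambiguity (not the proposed correlation
# function): a BOC(1,1)-modulated spreading code has autocorrelation side
# peaks near +/- 0.5 chip. Code, sampling, and lengths are illustrative.
import numpy as np

rng = np.random.default_rng(1)
chips = rng.choice([-1.0, 1.0], size=1023)           # toy spreading code
spc = 12                                              # samples per chip
t = np.arange(chips.size * spc) / spc                 # time in chips
code = np.repeat(chips, spc)
boc11 = np.sign(np.sin(2 * np.pi * (t + 0.5 / spc)))  # BOC(1,1) square sub-carrier
signal = code * boc11

lags = np.arange(-2 * spc, 2 * spc + 1)
corr = np.array([np.mean(signal * np.roll(signal, lag)) for lag in lags])
main = corr[lags == 0][0]
side = corr[np.abs(lags) == spc // 2]                 # side peaks near +/- 0.5 chip
print("main peak:", round(main, 3), "side peaks:", np.round(side, 3))
```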

Deriving a New Divergence Measure from Extended Cross-Entropy Error Function

  • Oh, Sang-Hoon;Wakuya, Hiroshi;Park, Sun-Gyu;Noh, Hwang-Woo;Yoo, Jae-Soo;Min, Byung-Won;Oh, Yong-Sun
    • International Journal of Contents / v.11 no.2 / pp.57-62 / 2015
  • Relative entropy is a divergence measure between two probability density functions of a random variable. When the random variable has only two symbols, the relative entropy reduces to the cross-entropy error function, which can accelerate the training convergence of multilayer perceptron neural networks. Moreover, the n-th order extension of the cross-entropy (nCE) error function exhibits improved performance in terms of learning convergence and generalization capability. In this paper, we derive a new divergence measure between two probability density functions from the nCE error function, and we compare the new divergence measure with the relative entropy through three-dimensional plots.
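
For reference, the relative entropy and its two-symbol special case mentioned in the abstract are written out below; the notation is assumed, and the nCE-derived divergence itself is not reproduced here.

```latex
% Relative entropy (Kullback-Leibler divergence) between densities p and q,
% and the two-symbol case with target t and network output y:
D(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)},
\qquad
D\big((t,\,1-t) \,\|\, (y,\,1-y)\big) = t \log \frac{t}{y} + (1-t) \log \frac{1-t}{1-y}
```

Dropping the terms that do not depend on $y$ leaves the familiar cross-entropy error $-t\log y - (1-t)\log(1-y)$ that is minimized during training.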