• Title/Summary/Keyword: Generalization Error

Time Series Prediction Using a Multi-layer Neural Network with Low Pass Filter Characteristics (저주파 필터 특성을 갖는 다층 구조 신경망을 이용한 시계열 데이터 예측)

  • Min-Ho Lee
    • Journal of Advanced Marine Engineering and Technology / v.21 no.1 / pp.66-70 / 1997
  • In this paper, a new learning algorithm for curvature smoothing and improved generalization in multi-layer neural networks is proposed. To enhance generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature smoothing characteristics to multi-layer neural networks. When the total cost, consisting of the output error and the hidden error, is minimized by gradient descent, the additional descent term yields not only Hebbian learning but also synaptic weight decay. The algorithm therefore combines error back-propagation, Hebbian learning, and weight decay, while its additional computational requirements over standard error back-propagation are negligible. Computer simulations of time series prediction on the Santa Fe competition data show that the proposed learning algorithm gives much better generalization performance.
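
As a rough illustration of the total-cost idea in this abstract, the sketch below adds a penalty on hidden activations to the usual squared output error in a small PyTorch MLP. The penalty form (mean squared activation) and its weight 0.1 are illustrative assumptions, not the paper's exact constraint term.

```python
import torch
import torch.nn as nn

# Minimal sketch: an MLP whose training cost is the output error plus a
# constraint term on hidden activations, as the abstract describes. The
# squared-activation penalty and the weight 0.1 are assumptions.
class SmoothedMLP(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.tanh(self.hidden(x))
        return self.out(h), h

model = SmoothedMLP(5, 16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 5), torch.randn(32, 1)

y_hat, h = model(x)
output_error = ((y_hat - y) ** 2).mean()
hidden_error = (h ** 2).mean()               # constraint on hidden activations
total_cost = output_error + 0.1 * hidden_error
opt.zero_grad()
total_cost.backward()                        # extra gradient term from the penalty
opt.step()
print(float(total_cost))
```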

Comparative Study of Map Generalization Algorithms with Different Tolerances (임계치 설정에 따른 지도 일반화 기법의 성능 비교 연구)

  • Lee, Jae-Eun;Park, Woo-Jin;Yu, Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.19-21 / 2010
  • In this study, we analyze how different tolerances influence the performance of linear map generalization operators. For the analysis, we apply generalization operators, specifically two simplification algorithms provided in commercial GIS software, to a 1:1000 digital topographic map and examine how the positional error changes with the tolerance. We then evaluate the changes in positional error with quantitative assessments. The results show that this analysis can serve as a criterion for determining the proper tolerance in linear generalization.
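
The abstract evaluates commercial GIS simplification at different tolerances; as an illustrative stand-in, the sketch below runs Douglas-Peucker simplification via shapely at several tolerances and reports the positional error of each result as the Hausdorff distance to the original line. The coordinates and tolerance values are made-up examples.

```python
from shapely.geometry import LineString

# Simplify a line at several tolerances and measure positional error as the
# Hausdorff distance between the original and the simplified geometry.
line = LineString([(0, 0), (1, 0.4), (2, -0.3), (3, 0.5), (4, 0.1), (5, 0)])

for tol in (0.1, 0.3, 0.5):
    simplified = line.simplify(tol, preserve_topology=True)
    error = line.hausdorff_distance(simplified)
    print(f"tolerance={tol}: {len(simplified.coords)} vertices, "
          f"positional error={error:.3f}")
```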

Improving the Generalization Error Bound using Total margin in Support Vector Machines (서포트 벡터 기계에서 TOTAL MARGIN을 이용한 일반화 오차 경계의 개선)

  • Yoon, Min
    • The Korean Journal of Applied Statistics / v.17 no.1 / pp.75-88 / 2004
  • The support vector machine (SVM) algorithm focuses on maximizing the shortest distance between the sample points and the discriminating hyperplane. This paper suggests a total margin algorithm that considers the distances between all data points and the separating hyperplane, extending the existing support vector machine algorithm. The newly proposed method also improves the generalization error bound. Numerical experiments show that the total margin algorithm performs well compared with previous methods.
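
To make the contrast concrete: a standard SVM maximizes the shortest point-to-hyperplane distance, while the total margin view accounts for the distances of all points. The sketch below fits an ordinary linear SVM with scikit-learn and reports both quantities; it illustrates the two definitions only and is not the paper's modified training algorithm.

```python
import numpy as np
from sklearn.svm import SVC

# Fit a plain linear SVM, then compute every point's distance to the
# separating hyperplane |w.x + b| / ||w||.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
distances = np.abs(X @ w + b) / np.linalg.norm(w)

print("shortest margin:", distances.min())  # what the standard SVM maximizes
print("total margin   :", distances.sum())  # what the total margin method considers
```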

A Comparison between Methods of Generalization according to the Types of Pattern of Mathematically Gifted Students and Non-gifted Students in Elementary School (초등수학영재와 일반학생의 패턴의 유형에 따른 일반화 방법 비교)

  • Yu, Mi Gyeong;Ryu, Sung Rim
    • School Mathematics / v.15 no.2 / pp.459-479 / 2013
  • The purpose of this study was to explore the methods of generalization and the error patterns produced by mathematically gifted students and non-gifted students in elementary school. Six problems corresponding to the x+a, ax, ax+c, $ax^2$, $ax^2+c$, and $a^x$ patterns were given to 156 students. The conclusions obtained through this study are as follows. First, both groups were best at symbolically generalizing the ax pattern, whereas the $a^x$ pattern was symbolically generalized by the fewest students. Second, mathematically gifted elementary students generalized algebraically in more than 79% of cases for the x+a, ax, ax+c, $ax^2$, $ax^2+c$, and $a^x$ patterns, while non-gifted students exceeded 79% only for the x+a and ax patterns. Third, when students in either group failed to find the commonality across the stepwise numbers, they solved the problems arithmetically by tracking how much the values increased, instead of reaching a generalized formula. Fourth, the most common error type was technical error in both groups, at 10.9% among mathematically gifted elementary students and 17.1% among non-gifted students. Fifth, the error frequency across all pattern types was 17.3% for mathematically gifted elementary students and 31.2% for non-gifted students, which indicates that a majority of gifted students can perform symbolic generalization to a certain degree, whereas many non-gifted students did not comprehend the pattern questions and failed at symbolic generalization.

3D Generalization and Logical Error Correction for Digital Map Update (수치지도 갱신을 위한 3차원 일반화와 논리적 오류수정)

  • Lee, Jin-Hyung;Lee, Dong-Cheon;Park, Ki-Suk;Park, Chung
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2009.04a / pp.29-34 / 2009
  • Map updating is required to provide up-to-date information. In the update process, the most adequate generalization should be applied to maps of all scales simultaneously. Most existing maps consist of 2D data represented in 2D space; however, next-generation maps are to be generated with 3D spatial information, including ortho-images and DEMs. Therefore, 3D generalization is necessary for updating 3D digital maps. This paper proposes methods for 3D generalization and for correcting the logical errors that can accompany generalization.

Reliability Computation of Neuro-Fuzzy Models : A Comparative Study (뉴로-퍼지 모델의 신뢰도 계산 : 비교 연구)

  • 심현정;박래정;왕보현
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.4 / pp.293-301 / 2001
  • This paper reviews three methods for computing a pointwise confidence interval for neuro-fuzzy models and compares their estimation performance through simulations. The computation methods under consideration include stacked generalization using cross-validation, predictive error bars in regression models, and a local reliability measure for networks employing a local representation scheme. These methods, implemented on neuro-fuzzy models, are applied to problems of simple function approximation and chaotic time series prediction. The reliability estimation results are compared both quantitatively and qualitatively.
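
As a minimal illustration of one of the reviewed ideas, the pointwise error bar, the sketch below derives a confidence band from the spread of bootstrap-trained models; simple polynomial fits stand in for neuro-fuzzy models, which is an assumption of this example.

```python
import numpy as np

# Pointwise ~95% confidence band from an ensemble of bootstrap fits.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

preds = []
for _ in range(30):                          # bootstrap resamples
    idx = rng.integers(0, x.size, x.size)
    coef = np.polyfit(x[idx], y[idx], 5)     # stand-in for a neuro-fuzzy model
    preds.append(np.polyval(coef, x))
preds = np.array(preds)

mean = preds.mean(axis=0)
band = 1.96 * preds.std(axis=0)              # pointwise error bar
print(mean[:3], band[:3])
```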

Modified Error Back Propagation Algorithm using the Approximating of the Hidden Nodes in Multi-Layer Perceptron (다층퍼셉트론의 은닉노드 근사화를 이용한 개선된 오류역전파 학습)

  • Kwak, Young-Tae;Lee, young-Gik;Kwon, Oh-Seok
    • Journal of KIISE: Software and Applications / v.28 no.9 / pp.603-611 / 2001
  • This paper proposes a novel, fast layer-by-layer algorithm with better generalization capability. In the proposed algorithm, the weights of the hidden layer are updated using a target vector for the hidden layer obtained by the least squares method, which alleviates the slow learning caused by the small magnitude of the gradient vector in the hidden layer. The algorithm was tested on a handwritten digit recognition problem. Its learning speed was faster than those of the error back-propagation algorithm and the modified error function algorithm, and similar to those of Ooyen's method and the layer-by-layer algorithm. Moreover, the simulation results showed that the proposed algorithm had the best generalization capability among them, regardless of the number of hidden nodes. The proposed algorithm thus combines the learning speed of the layer-by-layer algorithm with the generalization capability of the error back-propagation and modified error function algorithms.
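
The core tool in this abstract is a least-squares solve for a layer's weights given a target vector for its activations. The sketch below shows such a solve for a single tanh layer; the hidden targets here are synthetic, and the paper's actual derivation of the targets is not reproduced.

```python
import numpy as np

# Given inputs X and assumed hidden-layer targets H_target, solve for the
# weights W that best reproduce the targets' pre-activations in the
# least-squares sense (the layer-by-layer update tool, not the full method).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
H_target = np.tanh(X @ rng.normal(size=(8, 4)))   # synthetic targets

pre_act = np.arctanh(np.clip(H_target, -0.999, 0.999))
W, *_ = np.linalg.lstsq(X, pre_act, rcond=None)
print("residual:", np.linalg.norm(X @ W - pre_act))
```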

Line Segmentation Method using Expansible Moving Window for Cartographic Linear Features (확장형 이동창을 이용한 지도 선형 개체의 분할 기법 연구)

  • Park, Woo-Jin;Lee, Jae-Eun;Yu, Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.5-6 / 2010
  • In cartographic line generalization, there is a growing need for a method that segments linear features according to their shape characteristics. In this study, a line segmentation method using an expansible moving window is presented. The method analyzes how the generalization effect of line simplification algorithms depends on the shape characteristics of the linear feature and extracts the sections that show distinctly low positional error under a specific algorithm. Shape description measurements are calculated for these segments, and the target line data are segmented based on those measurements. To divide a linear feature into homogeneous sections, an expansible moving window is applied. This segmentation method is expected to be useful in cartographic map generalization that considers the shape characteristics of linear features.
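
As an illustration of splitting a line into homogeneous sections with a growing window, the sketch below expands a window while a simple shape measure stays stable and starts a new segment otherwise. Mean turning angle is an assumed stand-in for the paper's description measurements.

```python
import numpy as np

def turning_angles(pts):
    # Absolute change in heading at each interior vertex.
    v = np.diff(pts, axis=0)
    heading = np.arctan2(v[:, 1], v[:, 0])
    return np.abs(np.diff(heading))

def segment(pts, threshold=0.5, min_window=5):
    # Expand the window; cut a new segment when the next vertex's turning
    # angle deviates too far from the window's mean.
    ang = turning_angles(pts)
    breaks, start = [0], 0
    for i in range(min_window, len(ang)):
        if abs(ang[i] - ang[start:i].mean()) > threshold:
            breaks.append(i)
            start = i
    breaks.append(len(pts) - 1)
    return breaks

# Straight run followed by a wavy run: expect a break near the transition.
pts = np.column_stack([np.linspace(0, 10, 60),
                       np.r_[np.zeros(30), np.sin(np.linspace(0, 6, 30))]])
print(segment(pts))
```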

A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction (신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 -)

  • 이영찬;곽수환
    • Journal of Intelligence and Information Systems / v.5 no.1 / pp.95-101 / 1999
  • In this paper, a comparison is given between different methods of combining predictions from neural networks: bagging, bumping, and balancing. These methods are based on decomposing the ensemble generalization error into an ambiguity term and a term incorporating the generalization performance of the individual networks. Neural networks and other machine learning models are prone to overfitting. One strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process: the complete data set is split into a training set and a validation set, and training is stopped when the error on the validation set starts increasing. The stability of the resulting networks depends strongly on the division into training and validation sets, as well as on the random initial weights and the chosen minimization procedure. This makes early-stopped networks rather unstable: a small change in the data or different initial conditions can produce large changes in the prediction. It is therefore advisable to apply the same procedure several times, starting from different initial weights, a technique often referred to as training ensembles of neural networks. We present a comparison of three statistical methods for preventing overfitting of neural networks, in a case study of stock price prediction.
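
The two ingredients described above, early stopping on a validation split and retraining from different initial weights, can be sketched as follows; a one-layer linear model trained by gradient descent stands in for the stock-price networks, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(0, 0.5, 200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def train_once(seed, lr=0.01, patience=10):
    r = np.random.default_rng(seed)
    w = r.normal(size=4)                     # member-specific initial weights
    best_w, best_err, wait = w.copy(), np.inf, 0
    while wait < patience:                   # stop when validation error stalls
        w -= lr * 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        err = np.mean((X_va @ w - y_va) ** 2)
        if err < best_err:
            best_w, best_err, wait = w.copy(), err, 0
        else:
            wait += 1
    return best_w

ensemble = [train_once(s) for s in range(5)]  # ensemble of early-stopped models
pred = np.mean([X_va @ w for w in ensemble], axis=0)
print("ensemble validation MSE:", np.mean((pred - y_va) ** 2))
```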

Dropout Genetic Algorithm Analysis for Deep Learning Generalization Error Minimization

  • Park, Jae-Gyun;Choi, Eun-Soo;Kang, Min-Soo;Jung, Yong-Gyu
    • International Journal of Advanced Culture Technology / v.5 no.2 / pp.74-81 / 2017
  • Recently, many companies have adopted systems based on artificial intelligence. The accuracy of such systems depends on the amount of training data and on the choice of algorithm; however, it is not easy to obtain training data with a large number of entities, and small data sets suffer large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA (Dropout Genetic Algorithm), which can achieve relatively high accuracy even on small data sets by applying a genetic algorithm from machine learning to dropout from deep learning. The idea is to let the genetic algorithm determine the active state of the nodes, with a new fitness function defined using the gradient of the loss function. The proposed DGA compensates for the stochastic inconsistency of dropout, and it also addresses the difficulties a genetic algorithm faces with the complexity of the fitness function and the expressive range of the model. In experiments on the MNIST data, the proposed algorithm achieved 75.3% accuracy, whereas using dropout alone achieved 41.4%, showing that DGA is better than dropout alone.
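
A toy version of the idea: treat the on/off states of hidden nodes as a genome and search over masks with a genetic algorithm instead of sampling them at random as dropout does. The fitness below is simply the negative loss on fixed data; the paper's gradient-based fitness function is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(64, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
W1, W2 = rng.normal(size=(10, 16)), rng.normal(size=16)

def fitness(mask):
    h = np.maximum(X @ W1, 0) * mask         # mask switches hidden nodes on/off
    p = 1 / (1 + np.exp(-(h @ W2)))
    return -np.mean((p - y) ** 2)            # higher is better

pop = rng.integers(0, 2, size=(20, 16))      # population of binary node masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # selection
    cut = rng.integers(1, 15)
    children = np.concatenate([parents[:5, :cut],
                               parents[5:, cut:]], axis=1)  # one-point crossover
    children ^= (rng.random(children.shape) < 0.05)         # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("best mask:", best)
```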
  • Recently, there are many companies that use systems based on artificial intelligence. The accuracy of artificial intelligence depends on the amount of learning data and the appropriate algorithm. However, it is not easy to obtain learning data with a large number of entity. Less data set have large generalization errors due to overfitting. In order to minimize this generalization error, this study proposed DGA(Dropout Genetic Algorithm) which can expect relatively high accuracy even though data with a less data set is applied to machine learning based genetic algorithm to deep learning based dropout. The idea of this paper is to determine the active state of the nodes. Using Gradient about loss function, A new fitness function is defined. Proposed Algorithm DGA is supplementing stochastic inconsistency about Dropout. Also DGA solved problem by the complexity of the fitness function and expression range of the model about Genetic Algorithm As a result of experiments using MNIST data proposed algorithm accuracy is 75.3%. Using only Dropout algorithm accuracy is 41.4%. It is shown that DGA is better than using only dropout.