• Title/Summary/Keyword: Generalization Error

The Design of Optimal Fuzzy-Neural networks Structure by Means of GA and an Aggregate Weighted Performance Index (유전자 알고리즘과 합성 성능지수에 의한 최적 퍼지-뉴럴 네트워크 구조의 설계)

  • Oh, Sung-Kwun;Yoon, Ki-Chan;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.3
    • /
    • pp.273-283
    • /
    • 2000
  • In this paper we suggest an optimal design method for a Fuzzy-Neural Networks (FNN) model of complex and nonlinear systems. The FNNs use simplified inference as the fuzzy inference method and the error back-propagation algorithm as the learning rule, and an HCM (Hard C-Means) clustering algorithm is used to find the initial parameters of the membership functions. Tuning of parameters such as the membership-function parameters, learning rates, and momentum coefficients is proposed to achieve a sound balance between the approximation and generalization abilities of the model. By selecting and adjusting the weighting factor of an aggregate objective function, which depends on the number of data and the degree of nonlinearity (distribution of I/O data), we show that it is feasible and effective to design an optimal FNN model structure with a mutual balance and dependency between approximation and generalization abilities. This methodology sheds light on the role and impact of different parameters of the model on its performance (especially the mapping and predicting capabilities of the rule-based computing). To evaluate the performance of the proposed model we use time-series data from a gas furnace, data from a sewage treatment process, and a traffic route choice process. (See the sketch of the aggregate index below.)
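
The aggregate objective function mentioned above is not written out in the abstract; as a hedged sketch with assumed notation, a common form of such a weighted index trades a training (approximation) error against a testing (generalization) error through a single weighting factor:

```latex
% Sketch of an aggregate weighted performance index (notation assumed, not taken from the paper):
% E_tr = approximation error on training data, E_te = generalization error on testing data,
% theta in [0, 1] is the weighting factor that is selected and adjusted.
\[
  F(\theta) \;=\; \theta\, E_{\mathrm{tr}} \;+\; (1-\theta)\, E_{\mathrm{te}}, \qquad 0 \le \theta \le 1 .
\]
```

Values of theta near 1 favour fitting the training data, while smaller values penalize poor prediction on unseen data, which is the balance the abstract describes.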

A Model Stacking Algorithm for Indoor Positioning System using WiFi Fingerprinting

  • JinQuan Wang;YiJun Wang;GuangWen Liu;GuiFen Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.4
    • /
    • pp.1200-1215
    • /
    • 2023
  • With the development of the IoT and artificial intelligence, location-based services are receiving more and more attention. To address the problems of large indoor positioning error and poor generalization, this paper proposes a model stacking algorithm for an indoor positioning system using WiFi fingerprinting. First, we adopt a model stacking method based on Bayesian optimization to predict the location of indoor targets and improve indoor localization accuracy and model generalization. Second, taking the position predicted by model stacking as the observation value of a particle filter, collaborative particle-filter localization based on the model stacking algorithm is realized. The experimental results show that the algorithm keeps the position error within 2 m, which is superior to KNN, GBDT, XGBoost, LightGBM, and RF. The location accuracy of the fused particle-filter algorithm is improved by 31%, and the predicted trajectory is close to the real trajectory. The algorithm can also adapt to application scenarios with fewer wireless access points. (See the stacking sketch below.)
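
As an illustration only (not the authors' configuration, and omitting both the Bayesian hyper-parameter optimization and the particle-filter stage), a stacked position regressor built from a few of the base learners named in the abstract could be sketched as follows; the RSSI readings and coordinates are hypothetical:

```python
# Hedged sketch of WiFi-fingerprint position estimation by model stacking.
# Base learners (KNN, random forest, gradient boosting) feed a meta-learner; one stacked
# model is fitted per coordinate. The data below are synthetic stand-ins, not the paper's.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-90.0, -30.0, size=(500, 8))   # hypothetical RSSI from 8 access points (dBm)
y = rng.uniform(0.0, 20.0, size=(500, 2))      # hypothetical (x, y) positions in metres

stack = StackingRegressor(
    estimators=[
        ("knn", KNeighborsRegressor(n_neighbors=5)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gbdt", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Ridge(),
)
model = MultiOutputRegressor(stack)            # stack each output coordinate separately
model.fit(X[:400], y[:400])
errors = np.linalg.norm(model.predict(X[400:]) - y[400:], axis=1)
print(f"mean position error on held-out points: {errors.mean():.2f} m")
```

In the paper, the stacked prediction then serves as the observation of a particle filter; that fusion stage is what the abstract credits with the further accuracy improvement.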

A Pruning Algorithm of Neural Networks Using Impact Factors (임팩트 팩터를 이용한 신경 회로망의 연결 소거 알고리즘)

  • 이하준;정승범;박철훈
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.77-86
    • /
    • 2004
  • In general, small neural networks, even though they show good generalization performance, tend to fail to learn the training data within a given error bound, whereas large ones learn the training data easily but yield poor generalization. Therefore, a way of achieving good generalization is to find the smallest network that can learn the data, called the optimal-sized neural network. This paper proposes a new scheme for network pruning with an 'impact factor', defined as the product of the variance of a neuron's output and the square of its outgoing weight. Simulation results on function approximation problems show that the proposed method is effective for regression. (A minimal sketch of the impact-factor computation follows below.)
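
A minimal sketch of the impact-factor computation described above, assuming a single hidden layer and one output node (the layout, data, and pruning schedule are illustrative, not the paper's):

```python
# Impact factor of hidden unit j = Var(output of unit j) * (outgoing weight of unit j)^2.
# Units with the smallest impact factors are candidates for pruning.
import numpy as np

def impact_factors(hidden_outputs: np.ndarray, w_out: np.ndarray) -> np.ndarray:
    """hidden_outputs: (n_samples, n_hidden) activations; w_out: (n_hidden,) outgoing weights."""
    return hidden_outputs.var(axis=0) * w_out ** 2

def prune_smallest(w_out: np.ndarray, factors: np.ndarray, n_prune: int = 1):
    """Zero the outgoing weights of the n_prune units with the smallest impact factors."""
    idx = np.argsort(factors)[:n_prune]
    w_pruned = w_out.copy()
    w_pruned[idx] = 0.0
    return w_pruned, idx

# Hypothetical example: 6 hidden units, 100 samples.
rng = np.random.default_rng(0)
hidden = np.tanh(rng.standard_normal((100, 6)))
w = rng.standard_normal(6)
factors = impact_factors(hidden, w)
print(prune_smallest(w, factors, n_prune=2))
```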

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal
    • /
    • v.17 no.1
    • /
    • pp.11-22
    • /
    • 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning. It also suppresses the over-specialization to training patterns that occurs in an algorithm based on the cross-entropy cost function, which markedly reduces learning time. In a similar way to the cross-entropy function, the new function accelerates the learning speed of the EBP algorithm by letting an output node of the MLP generate a strong error signal when it is far from the desired value. Moreover, it prevents over-specialization to training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits in the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. Also, our method shows a mean-squared error of 0.627 on the test patterns, which is superior to the error of 0.667 of the cross-entropy method. These results demonstrate that the new method excels others in learning speed as well as in generalization. (For reference, the cross-entropy baseline is sketched below.)
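
The modified error function itself is not reproduced in the abstract. For context on the comparison, the cross-entropy baseline it is measured against, and the error signal that baseline produces at a sigmoidal output node (the term that stays strong when the output saturates far from its target), are the standard expressions below:

```latex
% Cross-entropy cost for sigmoidal output nodes y_k with targets t_k (the baseline from the abstract),
% and the resulting error signal at output node k; the paper's modified function is not shown here.
\[
  E \;=\; -\sum_{k}\bigl[\, t_k \ln y_k + (1 - t_k)\ln(1 - y_k) \,\bigr],
  \qquad
  \delta_k \;\equiv\; -\frac{\partial E}{\partial \mathrm{net}_k} \;=\; t_k - y_k .
\]
```

Unlike the squared-error signal, this one is not multiplied by y_k(1 - y_k), so learning is fast; the drawback the paper addresses is that, relative to the squared-error case, it stays comparatively strong even for outputs already close to their targets, which encourages over-specialization.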

A study on fatigue crack growth modelling by back propagation neural networks (역전파 신경회로망을 이용한 피로 균열성장 모델링에 관한 연구)

  • 주원식;조석수
    • Journal of Ocean Engineering and Technology
    • /
    • v.10 no.1
    • /
    • pp.65-74
    • /
    • 1996
  • Up to now, crack growth modelling has relied on mathematical approximation, and the assumed functional form has a great influence on this method. In particular, crack growth behavior with very strong nonlinearity requires a complicated function whose parameters are difficult to set. The main advantages of neural network modelling in engineering are simple calculations and the absence of an assumed function. In this paper, after discussing learning and generalization in neural networks, we perform crack growth modelling on the basis of the above learning algorithms. The J'-da/dt relation predicted by the neural network shows that test conditions with unlearned data are simulated well, within an estimated mean error of 5%. (A generic regression sketch follows below.)
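
Purely as an illustration of the kind of mapping such a model learns (the actual inputs, scaling, and network size are not given in the abstract; everything below is hypothetical), a small back-propagation network regressing a crack growth rate against loading features might be set up as follows:

```python
# Hypothetical back-propagation regression of a crack-growth-rate-like quantity against loading
# features; the feature choice and data are illustrative, not the paper's test conditions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform([0.1, 300.0], [2.0, 700.0], size=(200, 2))   # assumed J'-type parameter, temperature
y = 1e-6 * X[:, 0] ** 1.5 * np.exp(X[:, 1] / 400.0)          # synthetic da/dt-like response

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1))
model.fit(X[:160], np.log(y[:160]))                          # fit in log space for numerical stability

pred = np.exp(model.predict(X[160:]))
mean_rel_error = np.mean(np.abs(pred - y[160:]) / y[160:])
print(f"mean relative error on unlearned conditions: {mean_rel_error:.1%}")
```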

A Study on High Temperature Low Cycle Fatigue Crack Growth Modelling by Neural Networks (신경회로망을 이용한 고온 저사이클 피로균열성장 모델링에 관한 연구)

  • Ju, Won-Sik;Jo, Seok-Su
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.4
    • /
    • pp.2752-2759
    • /
    • 1996
  • This paper presents a crack growth analysis approach based on neural networks, a branch of cognitive science, for high-temperature low-cycle fatigue, which shows strong nonlinearity in material behavior. As the number of data patterns on crack growth increases, pattern classification works well, and a two-point representation scheme with the gradient of the crack growth curve simulates the crack growth rate better than a one-point representation scheme. An optimal number of learning data exists; an excessive number of learning data increases the estimated mean error and markedly lengthens learning time. The J-da/dt relation predicted by the neural network shows that test conditions with unlearned data are simulated well, within an estimated mean error of 5%.

Generalization of error decision rules in a grammar checker using Korean WordNet, KorLex (명사 어휘의미망을 활용한 문법 검사기의 문맥 오류 결정 규칙 일반화)

  • So, Gil-Ja;Lee, Seung-Hee;Kwon, Hyuk-Chul
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.405-414
    • /
    • 2011
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are manually formulated by a language expert. These rules are appended each time a new error pattern is detected. However, such grammar checkers are not consistent. In order to resolve this shortcoming, we propose a new method for generalizing the error decision rules that detect such errors. For this purpose, we use an existing thesaurus, KorLex, the Korean version of Princeton WordNet. KorLex has hierarchical word senses for nouns, but does not contain any information about the relationships between cases in a sentence. Through the Tree Cut Model and the MDL (minimum description length) principle based on information theory, we extract noun classes from KorLex and generalize error decision rules from these noun classes. To verify the accuracy of the new method, we extracted from a large corpus the nouns used as objects of four commonly confused predicates, and subsequently extracted noun classes from these nouns. We found that the number of error decision rules generalized from these noun classes decreased to about 64.8% of the original. In conclusion, the precision of our grammar checker exceeds that of conventional ones by 6.2%. (See the MDL sketch below.)
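
The abstract does not spell out the selection criterion; in the standard tree cut model of Li and Abe, which the description matches, a cut through the thesaurus is chosen to minimize a description length of roughly the following form (a sketch with assumed notation, not necessarily the paper's exact formulation):

```latex
% MDL criterion for choosing a tree cut \Gamma over a sample S of nouns (Li & Abe-style sketch):
% k = number of classes in the cut, |S| = sample size, \hat{P} = estimated class probabilities.
\[
  L(\Gamma, S) \;=\;
  \underbrace{\tfrac{k}{2}\,\log |S|}_{\text{parameter description length}}
  \;+\;
  \underbrace{-\sum_{n \in S} \log \hat{P}(n \mid \Gamma)}_{\text{data description length}},
\]
% and the cut minimizing L(\Gamma, S) gives the generalized noun classes used in the error decision rules.
```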

The Positional Accuracy Quality Assessment of Digital Map Generalization (수치지도 일반화 위치정확도 품질평가)

  • 박경식;임인섭;최석근
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.19 no.2
    • /
    • pp.173-181
    • /
    • 2001
  • It is very important to assess the spatial data quality of a digital map produced through digital map generalization. In this study, as an aspect of spatial data quality maintenance, we examined the tolerated range of the theoretically expected accuracy and established a quality assessment standard so that the transformed digital map data do not violate the digital map specifications or the accuracy requirements of the related scale. When a large-scale digital map is transformed to a small scale, reducing complexity through processes such as simplification, smoothing, and refinement always changes spatial positions. Because it is very difficult to analyse the spatial accuracy of the transformed positions directly, we used buffering as the assessment method for spatial accuracy in the digital map generalization procedure. Although the tolerated range of positional error for the 1/1,000 and 1/5,000 scales is determined by the related regulations, the algorithms applied to each processing element have different properties, so unless suitable parameters and tolerances are determined, the generalized result will not stay within the tolerated range of positional error. Tests of the parameters of each algorithm against the tolerated range showed that the parameter of the simplification algorithm and the resulting positional accuracy are 0.2617 m and 0.4617 m, respectively. (A buffer-check sketch follows below.)
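
As a rough illustration of the buffering check (the coordinates, tolerance, and simplification parameter below are hypothetical, not the study's 1/1,000 or 1/5,000 data), the positional accuracy of a simplified line can be assessed by measuring how much of it stays inside a buffer around the original:

```python
# Hedged sketch of a buffer-based positional-accuracy check for line simplification.
# Requires the shapely package; the geometry and tolerances are illustrative only.
from shapely.geometry import LineString

original = LineString([(0.0, 0.0), (1.0, 0.2), (2.0, -0.1), (3.0, 0.3), (4.0, 0.0)])
simplified = original.simplify(0.25, preserve_topology=True)   # Douglas-Peucker-style simplification

tolerance = 0.3                                  # allowed positional error in map units (assumed)
buffer_zone = original.buffer(tolerance)         # corridor around the original geometry

inside_ratio = simplified.intersection(buffer_zone).length / simplified.length
print(f"vertices after simplification: {len(simplified.coords)}")
print(f"share of the simplified line within the {tolerance}-unit buffer: {inside_ratio:.1%}")
```

A simplified line that falls entirely inside the buffer satisfies the chosen positional-error tolerance; the study's task is to pick the simplification parameter so that this holds at the tolerances prescribed by the relevant regulations.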

A Fuzzy-ARTMAP Equalizer for Compensating the Nonlinearity of Satellite Communication Channel

  • Lee, Jung-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.8B
    • /
    • pp.1078-1084
    • /
    • 2001
  • In this paper, a fuzzy-ARTMAP neural network is applied to compensate for the nonlinearity of a satellite communication channel. The fuzzy-ARTMAP combines fuzzy logic with the ART neural network. Through a match-tracking process with a vigilance parameter, the fuzzy-ARTMAP network realizes a minimax learning rule that minimizes predictive error and maximizes generalization. Thus, the system automatically learns a minimal number of recognition categories, or hidden units, to meet accuracy criteria. Simulation studies are performed over nonlinear satellite channels. The performance of the proposed fuzzy-ARTMAP equalizer is compared with MLP-based equalizers.

Deriving a New Divergence Measure from Extended Cross-Entropy Error Function

  • Oh, Sang-Hoon;Wakuya, Hiroshi;Park, Sun-Gyu;Noh, Hwang-Woo;Yoo, Jae-Soo;Min, Byung-Won;Oh, Yong-Sun
    • International Journal of Contents
    • /
    • v.11 no.2
    • /
    • pp.57-62
    • /
    • 2015
  • Relative entropy is a divergence measure between two probability density functions of a random variable. Assuming that the random variable takes only two symbols, the relative entropy reduces to a cross-entropy error function that can accelerate the training convergence of multi-layer perceptron neural networks. Also, the n-th order extension of the cross-entropy (nCE) error function exhibits improved performance in terms of learning convergence and generalization capability. In this paper, we derive a new divergence measure between two probability density functions from the nCE error function, and compare it with the relative entropy through three-dimensional plots. (The binary-case reduction is sketched below.)
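
For the two-symbol case mentioned above, the reduction of the relative entropy to the cross-entropy error (up to a term independent of the model output) is the standard identity below; the new nCE-derived divergence itself is not reproduced here:

```latex
% Relative entropy between a target distribution (t, 1-t) and a model output (y, 1-y):
\[
  D\bigl((t,\,1-t)\,\|\,(y,\,1-y)\bigr)
  \;=\; t\log\frac{t}{y} + (1-t)\log\frac{1-t}{1-y}
  \;=\; \underbrace{-\bigl[t\log y + (1-t)\log(1-y)\bigr]}_{\text{cross-entropy error}} \;-\; H(t),
\]
% where H(t) = -t log t - (1-t) log(1-t) does not depend on y, so minimizing the cross-entropy
% error is equivalent to minimizing the relative entropy.
```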