• Title/Summary/Keyword: generalization

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE: Software and Applications, v.30 no.3/4, pp.326-338, 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data contain noise and are limited in number, which creates a gap between the true probability distribution and the empirical one. This gap causes the learning parameters to overfit the training data and deviate from the true distribution, which is called the overfitting phenomenon. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data, and the phenomenon becomes more severe as the complexity of the network increases. In this paper, taking a statistical viewpoint, we propose an integrative process for neural network design and model selection in order to improve generalization performance. First, using natural gradient learning with adaptive regularization, we try to obtain optimal parameters that converge quickly without overfitting the training data. Next, by applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion; a minimal sketch of this selection step appears after this entry. Through computer simulations on benchmark problems, we confirm the generalization and structure-optimization performance of the proposed integrative process of learning and model selection.
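
To make the final selection step concrete, here is a minimal sketch of choosing among pruned candidate networks by the Bayesian Information Criterion, BIC = k·ln(n) − 2·ln(L̂). The `Candidate` record, the log-likelihood values, and the sample count are illustrative assumptions, not the paper's implementation; in the paper the candidates would come from natural pruning of a network trained by natural gradient learning.

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    """A pruned network candidate (hypothetical record, for illustration only)."""
    name: str
    n_params: int          # number of surviving weights after pruning
    log_likelihood: float  # log-likelihood of the training data under this model

def bic(model: Candidate, n_samples: int) -> float:
    """BIC = k * ln(n) - 2 * ln(L-hat); lower values are preferred."""
    return model.n_params * math.log(n_samples) - 2.0 * model.log_likelihood

def select_by_bic(candidates: list[Candidate], n_samples: int) -> Candidate:
    """Return the candidate with the smallest BIC (best fit/complexity trade-off)."""
    return min(candidates, key=lambda m: bic(m, n_samples))

# Illustrative numbers only: three pruned versions of one trained network.
candidates = [
    Candidate("full",   n_params=120, log_likelihood=-310.0),
    Candidate("pruned", n_params=60,  log_likelihood=-320.0),
    Candidate("tiny",   n_params=25,  log_likelihood=-450.0),
]
best = select_by_bic(candidates, n_samples=500)
print(f"selected: {best.name}, BIC = {bic(best, 500):.1f}")
```

With these made-up numbers the mid-sized model is selected: BIC penalizes the parameter count of the full network and the poor fit of the smallest one, which is exactly the fit/complexity trade-off the abstract describes.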

The effect of perceived within-category variability through its examples on category-based inductive generalization (범주예시에 의해 지각된 범주내 변산성이 범주기반 귀납적 일반화에 미치는 효과)

  • Lee, Guk-Hee;Kim, ShinWoo;Li, Hyung-Chul O.
    • Korean Journal of Cognitive Science, v.25 no.3, pp.233-257, 2014
  • Category-based induction is one of the major inferential reasoning methods used by humans. This research tested the effect of perceived within-category variability on inductive generalization. Experiment 1 manipulated variability by directly presenting category exemplars: depending on condition, participants viewed exemplars with low variability (low-variability condition) or high variability (high-variability condition) and then performed an inductive generalization task about the category in question. The results showed that participants had greater confidence in generalization when category variability was low than when it was high. In Experiment 2, rather than viewing category exemplars directly, participants formed an impression of category variability through a categorization task of identifying category exemplars before performing the induction task. Experiment 2 likewise found that participants tended to have greater inductive confidence when category variability was low. The variability effect discovered in this research is distinct from the diversity effect reported in previous research, and the category-based induction model proposed by Osherson et al. (1990) cannot fully account for it. The test of the variability effect in category-based induction is discussed in the general discussion section.

A GENERALIZATION OF GIESEKER’S LEMMA

  • Kim, Sung-Ock
    • Bulletin of the Korean Mathematical Society, v.37 no.4, pp.711-719, 2000
  • We generalize Gieseker's lemma and use it to compute the Picard number of a complete intersection surface.
