Abstract
The necessity and usefulness of higher-order neural networks have been well known since the early days of
neurocomputing. However, the explosive growth in the number of terms has hampered the design and training of such networks.
In this paper we present an evolutionary learning method for efficiently constructing problem-specific higher-order
neural models. The crux of the method is a neural tree representation employing both sigma and pi units, in combination
with an MDL-based fitness function for learning minimal models. We provide experimental
results on classification and prediction problems which demonstrate the effectiveness of the method.
I. Introduction

One of the most popular neural network models used for supervised learning applications has been the multilayer feedforward network. A commonly adopted topology employs one hidden layer with full connectivity between neighboring layers. This structure has been very successful for many applications. However, it has some weaknesses. For instance, the fully connected structure is not necessarily a good topology unless the task contains a good predictor for the full input space.
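The sigma and pi units mentioned in the abstract can be illustrated with a minimal sketch: a sigma unit computes a weighted sum of its inputs, while a pi unit multiplies (weighted) inputs together, which is what produces higher-order terms. The function names and the logistic activation below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sigmoid(net):
    """Logistic activation (an assumed choice for this sketch)."""
    return 1.0 / (1.0 + math.exp(-net))

def sigma_unit(weights, inputs):
    """Sigma unit: activation of the weighted sum of inputs."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(net)

def pi_unit(weights, inputs):
    """Pi unit: activation of the product of weighted inputs,
    yielding higher-order (multiplicative) terms."""
    net = 1.0
    for w, x in zip(weights, inputs):
        net *= w * x
    return sigmoid(net)
```

Composing such units in a tree, with pi units feeding sigma units (and vice versa), gives the kind of problem-specific higher-order model the evolutionary search is meant to discover.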