• Title/Summary/Keyword: optimal neuron number

Genetically Optimized Fuzzy Polynomial Neural Networks Model and Its Application to Software Process (진화론적 최적 퍼지다항식 신경회로망 모델 및 소프트웨어 공정으로의 응용)

  • Lee, In-Tae;Park, Ho-Sung;Oh, Sung-Kwun;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.337-339
    • /
    • 2004
  • In this paper, we discuss the optimal design of Fuzzy Polynomial Neural Networks (FPNNs) by means of Genetic Algorithms (GAs). As each layer is generated, the model creates an optimal network architecture by itself through the selection and elimination of nodes, which gives it considerable flexibility. We use triangular and Gaussian-like membership functions in the premise part of the rules and design the consequent structure with constant and regression polynomial (linear, quadratic, and modified quadratic) functions between the input and output variables. GAs are applied to improve performance by optimizing the input variables, the number of input variables, and the polynomial order. To evaluate the performance of the GA-based FPNNs, the models are tested on Medical Imaging System (MIS) data.

  • PDF
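
The GA-driven structure selection described in the abstract above can be sketched in miniature. Everything in this example is an illustrative assumption, not the paper's implementation: a chromosome encodes a subset of input variables plus a polynomial order, fitness is the least-squares fit error on made-up data, and the penalty weight is arbitrary.

```python
import random

random.seed(0)

# Toy data: y depends on x0 linearly and x1 quadratically; x2, x3 are irrelevant.
def make_data(n=80):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1) for _ in range(4)]
        y = 2.0 * x[0] + x[1] ** 2 + random.gauss(0, 0.05)
        data.append((x, y))
    return data

def features(x, mask, order):
    # Monomials of each selected input up to the chosen order, plus a bias term.
    feats = [1.0]
    for i, used in enumerate(mask):
        if used:
            for p in range(1, order + 1):
                feats.append(x[i] ** p)
    return feats

def lstsq(X, y):
    # Solve the normal equations X^T X w = X^T y by Gaussian elimination.
    m = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(m)] for i in range(m)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        if abs(A[col][col]) < 1e-12:
            continue
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = b[r] - sum(A[r][c] * w[c] for c in range(r + 1, m))
        w[r] = s / A[r][r] if abs(A[r][r]) > 1e-12 else 0.0
    return w

def fitness(chrom, data):
    # Chromosome = (input-selection bitmask, polynomial order).
    mask, order = chrom
    if not any(mask):
        return -1e9
    X = [features(x, mask, order) for x, _ in data]
    y = [t for _, t in data]
    w = lstsq(X, y)
    mse = sum((sum(wi * fi for wi, fi in zip(w, row)) - t) ** 2
              for row, t in zip(X, y)) / len(y)
    return -mse - 0.01 * sum(mask)   # penalize using extra inputs

def evolve(data, pop_size=20, gens=25):
    pop = [([random.randint(0, 1) for _ in range(4)], random.choice([1, 2, 3]))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, data), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) + len(parents) < pop_size:
            m1, o1 = random.choice(parents)
            m2, o2 = random.choice(parents)
            child_mask = [random.choice(pair) for pair in zip(m1, m2)]  # crossover
            child_order = random.choice([o1, o2])
            if random.random() < 0.2:           # mutation: flip one input bit
                child_mask[random.randrange(4)] ^= 1
            children.append((child_mask, child_order))
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, data))

best_mask, best_order = evolve(make_data())
print("selected inputs:", best_mask, "order:", best_order)
```

The same chromosome idea extends naturally to per-node selection in a layered network, which is what the paper's growth process does at each layer.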

Genetically Optimized Hybrid Fuzzy Set-based Polynomial Neural Networks with Polynomial and Fuzzy Polynomial Neurons

  • Oh Sung-Kwun;Roh Seok-Beom;Park Keon-Jun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.4
    • /
    • pp.327-332
    • /
    • 2005
  • We investigate a new class of fuzzy-neural networks, Hybrid Fuzzy Set-based Polynomial Neural Networks (HFSPNN). These networks consist of genetically optimized multiple layers with two kinds of heterogeneous neurons: fuzzy set-based polynomial neurons (FSPNs) and polynomial neurons (PNs). We have developed a comprehensive design methodology to determine the optimal structure of the networks dynamically. The augmented genetically optimized HFSPNN (gHFSPNN) yields a structurally optimized network and comes with a higher level of flexibility than the conventional HFPNN. The GA-based design procedure applied at each layer of the gHFSPNN leads to the selection of preferred nodes (FSPNs or PNs) available within the HFSPNN. In the sequel, the structural optimization is realized via GAs, whereas the ensuing detailed parametric optimization is carried out by standard least-squares learning. The performance of the gHFSPNN is quantified through experimentation on a number of modeling benchmarks, both synthetic and experimental, already studied in fuzzy or neurofuzzy modeling.

Advanced Self-Organizing Neural Networks Based on Competitive Fuzzy Polynomial Neurons (경쟁적 퍼지다항식 뉴런에 기초한 고급 자기구성 뉴럴네트워크)

  • 박호성;박건준;이동윤;오성권
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.3
    • /
    • pp.135-144
    • /
    • 2004
  • In this paper, we propose an advanced Self-Organizing Neural Network (SONN) architecture based on competitive fuzzy polynomial neurons for optimal model identification, and discuss a comprehensive design methodology supporting its development. The proposed SONN draws on the ideas of fuzzy rule-based computing and neural networks, and consists of layers of activation nodes based on fuzzy inference rules and regression polynomials. Each activation node is a Fuzzy Polynomial Neuron (FPN), which uses either simplified or regression-polynomial fuzzy inference rules. For the conclusion part of the rules, the regression polynomial uses several types of high-order polynomials: linear, quadratic, and modified quadratic. For the premise part, both triangular and Gaussian-like membership functions are studied, and the number of premise input variables used in the rules depends on the number of inputs of the corresponding node in each layer. We introduce two kinds of SONN architecture, basic and modified, each in a generic and an advanced type; the basic and modified architectures differ in the number of input variables and the order of the polynomial in each layer. The numbers of layers and of nodes per layer are not predetermined, unlike in the popular multi-layer perceptron, but are generated dynamically. The superiority and effectiveness of the proposed SONN architecture are demonstrated through two representative numerical examples.

A Design of Parallel Module Neural Network for Robot Manipulators having a fast Learning Speed (빠른 학습 속도를 갖는 로보트 매니퓰레이터의 병렬 모듈 신경제어기 설계)

  • 김정도;이택종
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.9
    • /
    • pp.1137-1153
    • /
    • 1995
  • There is as yet no general method for determining the optimal number of hidden-layer neurons in a neural network. However, it has been proposed, and confirmed by experiments, that there is a limit to how far the number of hidden neurons can usefully be increased, because too large an increase causes instability, local minima, and large errors. This paper proposes a modular neural controller with pattern recognition ability to resolve this trade-off and to obtain fast learning convergence. The proposed neural controller is composed of several modules, each a Multi-Layer Perceptron (MLP). Each module has fewer hidden neurons because it learns only input patterns with similar learning directions. Experiments with a six-joint robot manipulator have shown the effectiveness and feasibility of the proposed parallel modular neural controller with a pattern-recognition perceptron.

  • PDF
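
The modular idea above, many small networks, each responsible only for similar input patterns, can be illustrated with a minimal routing sketch. The prototypes, hidden sizes, and random weights below are hypothetical stand-ins; the paper's actual modules are trained controllers.

```python
import math
import random

random.seed(1)

# Hypothetical sketch: each module is a small MLP that would be trained only on
# patterns close to its prototype, so each hidden layer can stay small. Routing
# picks the module whose prototype is nearest to the input (Euclidean distance).

class Module:
    def __init__(self, prototype, n_in, n_hidden):
        self.prototype = prototype
        # Random weights stand in for a trained controller.
        self.w1 = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [random.gauss(0, 0.5) for _ in range(n_hidden)]

    def forward(self, x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sum(w * hi for w, hi in zip(self.w2, h))

def route(modules, x):
    # Select the module responsible for this input pattern.
    return min(modules, key=lambda m: sum((p - xi) ** 2 for p, xi in zip(m.prototype, x)))

# Three modules, each covering a region of a 2-D input space, each with only
# 4 hidden neurons instead of one large monolithic network.
modules = [Module(proto, n_in=2, n_hidden=4)
           for proto in ([0.0, 0.0], [1.0, 1.0], [-1.0, 1.0])]

x = [0.9, 1.1]
m = route(modules, x)
print("routed to prototype:", m.prototype, "output:", round(m.forward(x), 4))
```

Because each module sees only a restricted region of the input space, its few hidden neurons suffice, which is the source of the faster learning claimed in the abstract.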

Evolutionary Design Methodology of Fuzzy Set-based Polynomial Neural Networks with the Information Granule

  • Roh Seok-Beom;Ahn Tae-Chon;Oh Sung-Kwun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.04a
    • /
    • pp.301-304
    • /
    • 2005
  • In this paper, we propose a new fuzzy set-based polynomial neuron (FSPN) involving information granules, and new fuzzy-neural networks, Fuzzy Set-based Polynomial Neural Networks (FSPNN). We have developed a design methodology (genetic optimization using Genetic Algorithms) to find the optimal structure of fuzzy-neural networks extended from the Group Method of Data Handling (GMDH). The parameters of the FSPNN fixed with the aid of genetic optimization, which searches the solution space for the optimal solution, are the number of input variables, the order of the polynomial, the number of membership functions, and the specific subset of input variables. We are interested in architectures of fuzzy rules that mimic the real world, namely the sub-models (nodes) composing the fuzzy-neural networks. We adopt fuzzy set-based fuzzy rules as a substitute for fuzzy relation-based fuzzy rules and apply the concept of information granulation to the proposed fuzzy set-based rules.

  • PDF

Neo Fuzzy Set-based Polynomial Neural Networks involving Information Granules and Genetic Optimization

  • Roh, Seok-Beom;Oh, Sung-Kwun;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.3-5
    • /
    • 2005
  • In this paper, we introduce a new structure of fuzzy-neural networks, Fuzzy Set-based Polynomial Neural Networks (FSPNN). The two underlying design mechanisms of such networks are genetic optimization and information granulation. The resulting constructs are Fuzzy Polynomial Neural Networks (FPNN) with fuzzy set-based polynomial neurons (FSPNs) as their generic processing elements. First, we introduce a comprehensive design methodology (viz. genetic optimization using Genetic Algorithms) to determine the optimal structure of the FSPNNs. This methodology hinges on the extended Group Method of Data Handling (GMDH) and fuzzy set-based rules. It concerns FSPNN-related parameters such as the number of input variables, the order of the polynomial, the number of membership functions, and the specific subset of input variables, all realized through genetic optimization. Second, the fuzzy rules used in the networks exploit the notion of information granules defined over system variables and formed through the process of information granulation, realized with the aid of hard C-Means (HCM) clustering. The performance of the network is quantified through experimentation on a number of modeling benchmarks already studied in the realm of fuzzy or neurofuzzy modeling.

  • PDF

Experimental Study on the Design Parameter Effects on the Flow-rate and the Noise level in a Cross-flow Fan (실험에 의한 직교류홴의 유량 및 소음 분석)

  • Ahn, Cheol-O;Rew, Ho-Seon
    • The KSFM Journal of Fluid Machinery
    • /
    • v.1 no.1 s.1
    • /
    • pp.41-48
    • /
    • 1998
  • This study was carried out to investigate the effect of design parameters on the volume flow-rate and the noise level, and ultimately to find the optimal design variables. Eighteen cross-flow fans were designed by the method of orthogonal arrays, and their flow-rates and noise levels were measured. These data were analyzed with a neural network system. The effects of eight design variables (e.g., scroll exit angle and scroll arc length) on fan performance and noise level were evaluated and discussed. The experiment shows that the design solutions suggested by the neural network system can increase the volume flow-rate and reduce the noise simultaneously.

  • PDF

The Optimal Column Grouping Technique for the Compensation of Column Shortening (기둥축소량 보정을 위한 기둥의 최적그루핑기법)

  • Kim, Yeong-Min
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.24 no.2
    • /
    • pp.141-148
    • /
    • 2011
  • This study presents an optimal grouping technique that clusters columns with similar shortening trends in order to improve the efficiency of column-shortening compensation. Kohonen's self-organizing feature map, which can classify patterns of input data by itself through unsupervised learning, is used as the grouping algorithm. The Kohonen network applied here is composed of two input neurons and a variable number of output neurons, where the number of output neurons equals the number of column groups to be classified. The input neurons receive the normalized mean and standard deviation of the shortening of each column, and the output neurons present the classified column groups. The applicability of the proposed algorithm was evaluated by applying it to two buildings for which column-shortening analyses had already been performed. The algorithm was able to classify columns with similar shortening trends into one group, confirming its field applicability for the optimal grouping of column shortening.
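
The two-input Kohonen network described in the abstract above can be sketched compactly. The column data below are made up for illustration (normalized mean and standard deviation of shortening per column); the training schedule and group count are assumptions, not the paper's settings.

```python
import random

random.seed(2)

# 1-D Kohonen self-organizing map: two input neurons (normalized mean and standard
# deviation of each column's shortening) and one output neuron per column group.
columns = [(0.10, 0.05), (0.12, 0.06), (0.55, 0.30), (0.60, 0.28),
           (0.90, 0.70), (0.95, 0.72), (0.11, 0.04), (0.58, 0.31)]

def train_som(data, n_groups=3, epochs=200, lr0=0.5):
    weights = [[random.random(), random.random()] for _ in range(n_groups)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # decaying learning rate
        radius = max(1.0 * (1.0 - t / epochs), 0.01)   # decaying neighborhood radius
        for x in data:
            # Best-matching unit: the output neuron nearest to the input.
            bmu = min(range(n_groups),
                      key=lambda j: sum((w - xi) ** 2 for w, xi in zip(weights[j], x)))
            for j in range(n_groups):
                # Update the BMU and, early in training, its 1-D map neighbors.
                if abs(j - bmu) <= radius:
                    for k in range(2):
                        weights[j][k] += lr * (x[k] - weights[j][k])
    return weights

def assign(weights, x):
    return min(range(len(weights)),
               key=lambda j: sum((w - xi) ** 2 for w, xi in zip(weights[j], x)))

weights = train_som(columns)
groups = [assign(weights, c) for c in columns]
print("group of each column:", groups)
```

After training, columns with similar (mean, standard deviation) pairs share an output neuron, which is exactly the grouping the compensation scheme needs.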

Genetically Optimized Self-Organizing Polynomial Neural Networks (진화론적 최적 자기구성 다항식 뉴럴 네트워크)

  • 박호성;박병준;장성환;오성권
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.1
    • /
    • pp.40-49
    • /
    • 2004
  • In this paper, we propose a new architecture of Genetic Algorithm (GA)-based Self-Organizing Polynomial Neural Networks (SOPNN), discuss a comprehensive design methodology, and carry out a series of numerical experiments. The conventional SOPNN is based on the extended Group Method of Data Handling (GMDH) and uses a polynomial order (viz. linear, quadratic, or modified quadratic) and a number of node inputs fixed in advance by the designer at the Polynomial Neurons (nodes) of each layer during the growth of the network; moreover, it does not guarantee that the learned SOPNN has the optimal network architecture. The proposed GA-based SOPNN makes the architecture structurally more optimized, and far more flexible, than the conventional SOPNN. To generate the structurally optimized SOPNN, a GA-based design procedure at each stage (layer) leads to the selection of preferred nodes (PNs) with optimal parameters (the number of input variables, the input variables themselves, and the order of the polynomial) available within the SOPNN. An aggregate performance index with a weighting factor is proposed to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. A detailed design procedure is discussed. To evaluate the performance of the GA-based SOPNN, the model is tested on two time-series data sets (gas furnace data and NOx emission process data from a gas turbine power plant). A comparative analysis shows that the proposed GA-based SOPNN achieves higher accuracy and superior predictive capability compared with other intelligent models presented previously.

Speech Recognition and Its Learning by Neural Networks (신경회로망을 이용한 음성인식과 그 학습)

  • 이권현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.4
    • /
    • pp.350-357
    • /
    • 1991
  • A speech recognition system based on a neural network, intended for telephone number services, was tested. Because two different cardinal number systems, a native Korean one and a Sino-Korean one, are in use in Korea, the system must be able to recognize 22 discrete words. The networks tested had either two layers or three layers, the latter with one hidden layer of 11, 22, or 44 hidden units. During the learning phase the so-called BP (back-propagation) algorithm was applied. The learning process can be influenced by using different learning factors and also by the method of learning (for instance, random or cyclic presentation). The optimal rate of speaker-independent recognition using a two-layer neural network was 96%. A drop in recognition was observed with overtraining; this phenomenon appeared more clearly when a three-layer neural network was used. These phenomena are described in this paper in more detail, examining in particular the influence of the network structure and the several states during the learning phase.

  • PDF
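
The hidden-layer-size comparison in the abstract above can be reproduced in miniature. This is a minimal sketch, not the paper's system: the task here is toy XOR rather than 22-word speech recognition, the hidden sizes are smaller than the paper's 11/22/44, and the learning rate and epoch count are arbitrary.

```python
import math
import random

random.seed(3)

# Minimal one-hidden-layer MLP trained by back-propagation, with the hidden-layer
# size as the parameter under study.
def train_mlp(n_hidden, epochs=2000, lr=0.5):
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR
    w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]  # +bias
    w2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]                  # +bias

    def forward(x):
        xb = x + [1.0]
        h = [math.tanh(sum(w * v for w, v in zip(row, xb))) for row in w1]
        o = sum(w * v for w, v in zip(w2, h + [1.0]))
        return h, 1.0 / (1.0 + math.exp(-o))      # sigmoid output unit

    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, t in data:
            h, y = forward(x)
            total += (y - t) ** 2
            # Back-propagate the squared error through output and hidden layers.
            d_o = 2 * (y - t) * y * (1 - y)
            for j in range(n_hidden):
                d_h = d_o * w2[j] * (1 - h[j] ** 2)
                for k in range(3):
                    w1[j][k] -= lr * d_h * (x + [1.0])[k]
            for j in range(n_hidden):
                w2[j] -= lr * d_o * h[j]
            w2[n_hidden] -= lr * d_o
        losses.append(total)
    return losses

for n_hidden in (2, 4, 8):
    losses = train_mlp(n_hidden)
    print(f"hidden={n_hidden:2d}  first-epoch loss={losses[0]:.3f}  last={losses[-1]:.3f}")
```

Tracking the loss curve per hidden size is also how one would observe the overtraining drop the abstract reports: validation error rising while training error keeps falling.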