• Title/Summary/Keyword: optimal neuron number


Reproducibility of Statistical Motor Unit Number Estimate in Amyotrophic Lateral Sclerosis: Comparisons between Size- and Number-Weighted Modifications (근위축성 측삭 경화증에서의 Statistical Motor Unit Number Estimate 재연성: Size- and Number-Weighted Modifications간의 비교)

  • Kwon, Oh Yun;Lee, Kwang-Woo
    • Annals of Clinical Neurophysiology / v.5 no.1 / pp.27-33 / 2003
  • Background: Motor unit number estimation (MUNE) can directly assess motor neuron populations in muscle and quantify the degree of physiologic and/or pathologic motor neuron degeneration. A high degree of reproducibility and reliability is required of a good quantitative tool. MUNE is being applied clinically in various ways, and statistical MUNE has several advantages over alternative techniques. Nevertheless, the optimal method of applying statistical MUNE to improve reproducibility has not been established. Methods: We performed statistical MUNE by selecting the most compensated compound muscle action potential (CMAP) area as the test area and modified the results using the weighted mean surface-recorded motor unit potential (SMUP). Results: MUNE measures in amyotrophic lateral sclerosis (ALS) patients showed better reproducibility with the size-weighted modification. Conclusions: We suggest that size-weighted MUNE testing of "neurogenically compensated" CMAP areas is an optimal method for statistical MUNE in ALS patients.


Neural Network Active Control of Structures with Earthquake Excitation

  • Cho Hyun Cheol;Fadali M. Sami;Saiidi M. Saiid;Lee Kwon Soon
    • International Journal of Control, Automation, and Systems / v.3 no.2 / pp.202-210 / 2005
  • This paper presents a new neural network control for nonlinear bridge systems with earthquake excitation. We design multi-layer neural network controllers with a single hidden layer. The selection of an optimal number of neurons in the hidden layer is an important design step for control performance. To select an optimal number of hidden neurons, we progressively add one hidden neuron and observe the change in a performance measure given by the weighted sum of the system error and the control force. The number of hidden neurons which minimizes the performance measure is selected for implementation. A neural network was trained to mitigate vibrations of bridge systems caused by the El Centro earthquake. We applied the proposed control approach to a single-degree-of-freedom (SDOF) and a two-degree-of-freedom (TDOF) bridge system. We assessed the robustness of the control system using randomly generated earthquake excitations which were not used in training the neural network. Our results show that the neural network controller drastically mitigates the effect of the disturbance.
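The hidden-neuron selection step this abstract describes can be sketched in Python. Here `evaluate`, the weighting factors, and the toy stand-in are hypothetical placeholders for the actual controller training and bridge simulation, not the authors' code; only the argmin-over-sizes loop is what the sketch illustrates.

```python
def select_hidden_neurons(evaluate, max_neurons, w_error=1.0, w_force=0.1):
    """Progressively try larger hidden layers and keep the size that
    minimizes J = w_error * system_error + w_force * control_force."""
    best_n, best_j = None, float("inf")
    for n in range(1, max_neurons + 1):
        err, force = evaluate(n)            # placeholder: train a net with n hidden neurons
        j = w_error * err + w_force * force
        if j < best_j:
            best_n, best_j = n, j
    return best_n, best_j

# toy stand-in: system error falls with network size while control effort grows
toy = lambda n: (1.0 / n + 0.01 * n, 0.5 * n)
print(select_hidden_neurons(toy, max_neurons=10))
```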

Optimal Identification of Nonlinear Process Data Using GAs-based Fuzzy Polynomial Neural Networks (유전자 알고리즘 기반 퍼지 다항식 뉴럴네트워크를 이용한 비선형 공정데이터의 최적 동정)

  • Lee, In-Tae;Kim, Wan-Su;Kim, Hyun-Ki;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2005.05a / pp.6-8 / 2005
  • In this paper, we discuss model identification of nonlinear data using genetic-algorithm-based Fuzzy Polynomial Neural Networks (GAs-FPNN). The Fuzzy Polynomial Neural Network (FPNN) is a model based on the Group Method of Data Handling (GMDH) and neural networks (NNs), in which each node is expressed as a Fuzzy Polynomial Neuron (FPN). The network structure for the nonlinear data is created using genetic algorithms (GAs) as an optimal search method; accordingly, GAs-FPNN is more flexible than existing models in selecting its structure. Through the optimal search of the GAs, the proposed model selects and tunes the number of input variables, the particular input variables, and the structure of the consequent part. It is shown that nonlinear data model design using GAs-based FPNN is more useful and effective than existing models.


Optimized Neural Network Weights and Biases Using Particle Swarm Optimization Algorithm for Prediction Applications

  • Ahmadzadeh, Ezat;Lee, Jieun;Moon, Inkyu
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1406-1420 / 2017
  • Artificial neural networks (ANNs) play an important role in the fields of function approximation, prediction, and classification. ANN performance is critically dependent on the input parameters, including the number of neurons in each layer and the optimal values of the weights and biases assigned to each neuron. In this study, we apply the particle swarm optimization method, a popular optimization algorithm, to determine the optimal values of the weights and biases for every neuron in the different layers of the ANN. Several regression analyses, including general linear regression, Fourier regression, smoothing spline, and polynomial regression, are conducted to evaluate the proposed method's prediction power compared to multiple linear regression (MLR) methods. In addition, residual analysis is conducted to evaluate the accuracy of the optimized ANN for both training and test datasets. The experimental results demonstrate that the proposed method can effectively determine optimal values for neuron weights and biases, with high accuracy for prediction applications, and the designed model provides a reliable technique for optimization. The simulation results show that the optimized ANN exhibits superior performance to MLR for prediction purposes.
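A minimal sketch of the particle swarm step this abstract describes. In the paper's setting each particle position would be the flattened vector of all network weights and biases and the loss would be the network's prediction error; here a stand-in sphere function is minimized instead, and all parameter values (`w`, `c1`, `c2`, swarm size) are illustrative assumptions, not the authors'.

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle tracks its personal
    best; all particles are also attracted toward the global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# stand-in loss: sphere function, minimum 0 at the origin
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```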

The nonlinear function approximation based on the neural network application

  • Sugisaka, Masanori;Itou, Minoru
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2000.10a / pp.462-462 / 2000
  • In this paper, a genetic algorithm (GA) is the technique used to search for the optimal structure (i.e., the kind of neural network and the number of hidden neurons) of the neural networks used to approximate a given nonlinear function. We used a multi-layer feed-forward neural network, and the synapse weights of each neuron in each generation were determined by the back-propagation method. In this study, we simulated nonlinear function approximation in a temperature control system.

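The GA structure search this abstract describes can be sketched with a single integer gene (the hidden-neuron count); in the paper the genome also encodes the kind of network, and `fitness` stands in for training the candidate network by back-propagation and measuring its approximation error. All operators and rates below are illustrative assumptions.

```python
import random

def ga_search_structure(fitness, max_hidden=32, pop_size=12, gens=40, seed=1):
    """Toy GA over hidden-neuron counts: truncation selection keeps the best
    half, crossover averages two parents' counts, mutation nudges by one."""
    rng = random.Random(seed)
    pop = [rng.randint(1, max_hidden) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                   # lower fitness = better
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                # crossover: average counts
            if rng.random() < 0.3:              # mutation: nudge by +/-1
                child = min(max_hidden, max(1, child + rng.choice([-1, 1])))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# stand-in fitness with its best value at 6 hidden neurons
best = ga_search_structure(lambda n: (n - 6) ** 2)
```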

A Study on the Decision Feedback Equalizer using Neural Networks

  • Park, Sung-Hyun;Lee, Yeoung-Soo;Lee, Sang-Bae;Kim, Il;Tack, Han-Ho
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.10a / pp.474-478 / 1998
  • A new approach to the decision feedback equalizer (DFE) based on back-propagation neural networks is described. We propose a method for finding the optimal structure of the back-propagation neural network model. To construct the optimal structure, we first prescribe the bounds of the learning procedure, and then we employ a method of incrementing the number of hidden neurons by utilizing the derivative of the error with respect to the hidden neuron weights. The structure is applied to the problem of adaptive equalization in the presence of intersymbol interference (ISI) and additive white Gaussian noise. From the simulation results, it is observed that the proposed neural-network-based decision feedback equalizer outperforms the other two methods in terms of bit-error rate (BER) and attainable MSE level over a range of signal-to-noise ratios and channel nonlinearities.


Control of Nonlinear System by Multiplication and Combining Layer on Dynamic Neural Networks (동적 신경망의 층의 분열과 합성에 의한 비선형 시스템 제어)

  • Park, Seong-Wook;Lee, Jae-Kwan;Seo, Bo-Hyeok
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.4 / pp.419-427 / 1999
  • We propose an algorithm for obtaining the optimal number of hidden-unit nodes in dynamic neural networks. The dynamic neural network comprises dynamic neural units and a neural processor consisting of two dynamic neural units, one functioning as an excitatory neuron and the other as an inhibitory neuron. Starting out with a basic network structure for the control problem, we find the optimal neural structure by multiplying and combining dynamic neural units. Numerical examples are presented for nonlinear systems. These case studies show that the proposed method is useful in a practical sense.


A Component-wise Load Forecasting by Adaptable Artificial Neural Network (적응력을 갖는 신경회로망에 의한 성분별 부하 예측)

  • Lim, Jae-Yoon;Kim, Jin-Soo;Kim, Jung-Hoon
    • Proceedings of the KIEE Conference / 1994.11a / pp.21-23 / 1994
  • The forecast accuracy achieved with the BP algorithm largely depends upon the number of neurons in the hidden layer. To construct the optimal structure, we first prescribe the error bounds of the learning procedure, and then we provide a method of incrementing the number of hidden neurons by using the derivative of the errors with respect to the output neuron weights. As a case study, we apply the proposed method to forecast the component-wise residential load and compare the results with those of time-series forecasting.

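The growth procedure this abstract describes (prescribe an error bound, then add hidden neurons when training stalls) can be sketched as follows; `train_epoch`, `grad_norm`, `error`, and every threshold are hypothetical stand-ins for the actual BP training loop and its error-derivative criterion.

```python
def grow_hidden_layer(train_epoch, grad_norm, error, error_bound=0.05,
                      grad_tol=1e-4, max_neurons=30, epochs_per_stage=100):
    """Train at the current hidden-layer size; if the error is still above
    the prescribed bound but the gradient w.r.t. the output neuron weights
    has flattened (training stalled), add one hidden neuron and resume."""
    n = 1
    for _ in range(max_neurons):
        for _ in range(epochs_per_stage):
            train_epoch(n)            # placeholder BP training step
        if error(n) <= error_bound:   # prescribed error bound met: stop
            break
        if grad_norm(n) < grad_tol:   # stalled at this size: grow
            n += 1
    return n

# toy stand-ins: error shrinks with size, training is always "stalled"
result = grow_hidden_layer(lambda n: None, lambda n: 1e-6, lambda n: 0.5 / n)
```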

Optimal design of Self-Organizing Fuzzy Polynomial Neural Networks with evolutionarily optimized FPN (진화론적으로 최적화된 FPN에 의한 자기구성 퍼지 다항식 뉴럴 네트워크의 최적 설계)

  • Park, Ho-Sung;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2005.05a / pp.12-14 / 2005
  • In this paper, we propose a new architecture of Self-Organizing Fuzzy Polynomial Neural Networks (SOFPNN) by means of genetically optimized fuzzy polynomial neurons (FPNs) and discuss its comprehensive design methodology involving mechanisms of genetic optimization, especially genetic algorithms (GAs). Conventional SOFPNNs hinge on an extended Group Method of Data Handling (GMDH), exploit a fixed fuzzy inference type in each FPN of the SOFPNN, and consider a fixed number of input nodes located in each layer. The design procedure applied in the construction of each layer of a SOFPNN deals with its structural optimization, involving the selection of preferred nodes (or FPNs) with specific local characteristics (such as the number of input variables, the order of the polynomial of the consequent part of the fuzzy rules, the specific subset of input variables, and the number of membership functions), and addresses specific aspects of parametric optimization. Therefore, the proposed SOFPNN gives rise to a structurally optimized network and comes with a substantial level of flexibility in comparison to conventional SOFPNNs. To evaluate the performance of the genetically optimized SOFPNN, the model is experimented with using two time series data sets (gas furnace and chaotic time series).


A Study on the Hopfield Network for automatic weapon assignment (자동무장할당을 위한 홉필드망 설계연구)

  • Lee, Yang-Won;Kang, Min-Gu;Lee, Bong-Ki
    • Journal of the Korea Institute of Information and Communication Engineering / v.1 no.2 / pp.183-191 / 1997
  • A neural network-based algorithm for the static weapon-target assignment (WTA) problem is presented in this paper. An optimal WTA is one which allocates targets to weapon systems such that the total expected leakage value of the targets surviving the defense is minimized. The proposed algorithm is based on Hopfield and Tank's neural network model and uses K x M processing elements called binary neurons, where M is the number of weapon platforms and K is the number of targets. Software simulation results for example battle scenarios show that the proposed method converges faster than other methods when optimal initial values are used.

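The leakage objective this abstract describes can be illustrated with a tiny brute-force solver. This enumerates all assignments directly instead of running Hopfield dynamics over the K x M binary neurons, and the target values and kill probabilities below are made up for illustration.

```python
from itertools import product

def leakage(assign, value, pk):
    """Expected total leakage value. assign[m] is the target weapon m
    engages; value[k] is target k's value; pk[m][k] is the kill probability
    of weapon m against target k. Target k survives with probability
    prod(1 - pk[m][k]) over the weapons assigned to it."""
    total = 0.0
    for k in range(len(value)):
        surv = 1.0
        for m, t in enumerate(assign):
            if t == k:
                surv *= 1.0 - pk[m][k]
        total += value[k] * surv
    return total

def brute_force_wta(value, pk):
    """Exhaustive search over all K**M assignments; the paper instead lets
    the Hopfield network settle into a low-energy (low-leakage) assignment."""
    K, M = len(value), len(pk)
    best = min(product(range(K), repeat=M), key=lambda a: leakage(a, value, pk))
    return list(best), leakage(best, value, pk)

value = [10.0, 5.0]               # hypothetical target values
pk = [[0.8, 0.3], [0.6, 0.7]]     # hypothetical kill probabilities
print(brute_force_wta(value, pk))
```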