• Title/Abstract/Keyword: optimal neuron number

Search results: 24 items (processing time: 0.108 s)

근위축성 측삭 경화증에서의 Statistical Motor Unit Number Estimate 재연성: Size- and Number-Weighted Modifications간의 비교 (Reproducibility of Statistical Motor Unit Number Estimate in Amyotrophic Lateral Sclerosis: Comparisons between Size- and Number-Weighted Modifications)

  • 권오현;이광우
    • Annals of Clinical Neurophysiology / Vol. 5, No. 1 / pp. 27-33 / 2003
  • Background: Motor unit number estimation (MUNE) can directly assess motor neuron populations in muscle and quantify the degree of physiologic and/or pathologic motor neuron degeneration. A high degree of reproducibility and reliability is required from a good quantitative tool. MUNE, in various ways, is being increasingly applied clinically, and statistical MUNE has several advantages over alternative techniques. Nevertheless, the optimal method of applying statistical MUNE to improve reproducibility has not been established. Methods: We performed statistical MUNE by selecting the most compensated compound muscle action potential (CMAP) area as a test area and modified the results obtained by the weighted mean surface-recorded motor unit potential (SMUP). Results: MUNE measures in amyotrophic lateral sclerosis (ALS) patients showed better reproducibility with the size-weighted modification. Conclusions: We suggest that size-weighted MUNE testing of "neurogenically compensated" CMAP areas presents an optimal method for statistical MUNE in ALS patients.
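The contrast between number-weighted and size-weighted estimates can be illustrated with a toy calculation. The sketch below assumes the textbook relation MUNE = maximal CMAP area / mean SMUP area and an illustrative size-weighting scheme; it is not the statistical (Poisson-based) MUNE procedure used in the paper, and the function names and sample values are invented for illustration.

```python
def mean_smup(smup_areas, size_weighted=False):
    """Mean surface-recorded motor unit potential (SMUP) area.

    The size-weighted form weights each SMUP by its own area, so larger units
    contribute more; this particular weighting is an assumption for illustration."""
    if size_weighted:
        return sum(a * a for a in smup_areas) / sum(smup_areas)
    return sum(smup_areas) / len(smup_areas)

def mune(cmap_area, smup_areas, size_weighted=False):
    """MUNE = maximal CMAP area divided by the mean SMUP area."""
    return cmap_area / mean_smup(smup_areas, size_weighted)

smups = [0.8, 1.1, 0.9, 2.5, 1.4]             # sampled SMUP areas, arbitrary units
print(mune(42.0, smups))                       # number-weighted estimate
print(mune(42.0, smups, size_weighted=True))   # size-weighted estimate
```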


Neural Network Active Control of Structures with Earthquake Excitation

  • Cho Hyun Cheol;Fadali M. Sami;Saiidi M. Saiid;Lee Kwon Soon
    • International Journal of Control, Automation, and Systems / Vol. 3, No. 2 / pp. 202-210 / 2005
  • This paper presents a new neural network control for nonlinear bridge systems with earthquake excitation. We design multi-layer neural network controllers with a single hidden layer. The selection of an optimal number of neurons in the hidden layer is an important design step for control performance. To select an optimal number of hidden neurons, we progressively add one hidden neuron and observe the change in a performance measure given by the weighted sum of the system error and the control force. The number of hidden neurons that minimizes the performance measure is selected for implementation. A neural network was trained for mitigating vibrations of bridge systems caused by the El Centro earthquake. We applied the proposed control approach to a single-degree-of-freedom (SDOF) and a two-degree-of-freedom (TDOF) bridge system. We assessed the robustness of the control system using randomly generated earthquake excitations which were not used in training the neural network. Our results show that the neural network controller drastically mitigates the effect of the disturbance.
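The neuron-selection step described above (add one hidden neuron at a time and keep the count that minimizes a weighted sum of system error and control force) can be sketched roughly as follows. The toy data, training routine, and weighting constants alpha and beta are assumptions for illustration, not the bridge model or controller from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_and_evaluate(n_hidden, X, y, alpha=1.0, beta=0.01, epochs=500, lr=0.01):
    """Train a one-hidden-layer network and return an illustrative performance
    measure J = alpha * tracking error + beta * control effort."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                       # hidden activations
        u = H @ W2                                # network output (control force)
        e = u - y                                 # error vs. desired response
        W2 -= lr * H.T @ e / len(X)               # plain gradient (backprop) steps
        W1 -= lr * X.T @ ((e @ W2.T) * (1 - H ** 2)) / len(X)
    H = np.tanh(X @ W1)
    u = H @ W2
    return alpha * np.mean((u - y) ** 2) + beta * np.mean(u ** 2)

# Toy data standing in for the bridge-response training set.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, :1]) + 0.3 * X[:, 1:2] ** 2

best_n, best_J = None, np.inf
for n_hidden in range(1, 11):                     # progressively add one hidden neuron
    J = train_and_evaluate(n_hidden, X, y)
    if J < best_J:
        best_n, best_J = n_hidden, J
print(f"selected hidden neurons: {best_n} (J = {best_J:.4f})")
```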

유전자 알고리즘 기반 퍼지 다항식 뉴럴네트워크를 이용한 비선형 공정데이터의 최적 동정 (Optimal Identification of Nonlinear Process Data Using GAs-based Fuzzy Polynomial Neural Networks)

  • 이인태;김완수;김현기;오성권
    • The Korean Institute of Electrical Engineers (KIEE) Conference Proceedings / KIEE 2005 Symposium, Information and Control Section / pp. 6-8 / 2005
  • In this paper, we discuss the identification of nonlinear process data using genetic-algorithm-based Fuzzy Polynomial Neural Networks (GAs-FPNN). The Fuzzy Polynomial Neural Network (FPNN) is a model based on the Group Method of Data Handling (GMDH) and neural networks (NNs), in which each node is expressed as a Fuzzy Polynomial Neuron (FPN). The network structure for the nonlinear data is created using genetic algorithms (GAs) as the optimal search method; accordingly, the GAs-FPNN is more flexible than existing models in selecting its structure. Through the optimal search of the GAs, the proposed model selects and tunes the number of input variables, the particular input variables, and the structure of the consequent part. It is shown that nonlinear data model design using the GAs-based FPNN is more useful and effective than existing models.
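As a rough illustration of the kind of evolutionary structure search described, the sketch below evolves, for a single polynomial node, the subset of input variables and the polynomial order, with least-squares fit error as the fitness. It is a mutation-only toy GA on synthetic data, not the authors' GAs-FPNN; all names and settings are assumptions.

```python
import itertools
import random
import numpy as np

rng = np.random.default_rng(1)
random.seed(1)

# Toy nonlinear process data: y depends on x0 and x2 only.
X = rng.uniform(-1, 1, size=(300, 4))
y = 1.5 * X[:, 0] ** 2 - 0.8 * X[:, 0] * X[:, 2] + 0.2 * X[:, 2]

def poly_features(Xs, order):
    """All monomials of the selected inputs up to the given order (1 or 2)."""
    cols = [np.ones(len(Xs))]
    cols += [Xs[:, i] for i in range(Xs.shape[1])]
    if order >= 2:
        for i, j in itertools.combinations_with_replacement(range(Xs.shape[1]), 2):
            cols.append(Xs[:, i] * Xs[:, j])
    return np.column_stack(cols)

def fitness(chrom):
    """Chromosome = (selected input indices, polynomial order). Lower is better."""
    inputs, order = chrom
    P = poly_features(X[:, list(inputs)], order)
    coef, *_ = np.linalg.lstsq(P, y, rcond=None)   # least-squares consequent polynomial
    return np.mean((P @ coef - y) ** 2)

def random_chrom():
    k = random.randint(1, 3)                        # number of input variables
    return (tuple(sorted(random.sample(range(4), k))), random.choice([1, 2]))

def mutate(chrom):
    return random_chrom() if random.random() < 0.5 else chrom

# Very small GA: rank selection with mutation only (no crossover) for brevity.
pop = [random_chrom() for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(c) for c in pop[:10]]
best = min(pop, key=fitness)
print("selected inputs:", best[0], "polynomial order:", best[1], "MSE:", fitness(best))
```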


Optimized Neural Network Weights and Biases Using Particle Swarm Optimization Algorithm for Prediction Applications

  • Ahmadzadeh, Ezat;Lee, Jieun;Moon, Inkyu
    • Journal of Korea Multimedia Society / Vol. 20, No. 8 / pp. 1406-1420 / 2017
  • Artificial neural networks (ANNs) play an important role in the fields of function approximation, prediction, and classification. ANN performance is critically dependent on the input parameters, including the number of neurons in each layer and the optimal values of weights and biases assigned to each neuron. In this study, we apply the particle swarm optimization method, a popular optimization algorithm, to determine the optimal values of weights and biases for every neuron in the different layers of the ANN. Several regression models, including general linear regression, Fourier regression, smoothing spline, and polynomial regression, are applied to evaluate the proposed method's prediction power compared to multiple linear regression (MLR) methods. In addition, residual analysis is conducted to evaluate the optimized ANN accuracy for both training and test datasets. The experimental results demonstrate that the proposed method can effectively determine optimal values for neuron weights and biases, and high accuracy is obtained for prediction applications. Evaluations of the proposed method reveal that it can be used for prediction and estimation purposes with a high accuracy ratio, and the designed model provides a reliable technique for optimization. The simulation results show that the optimized ANN exhibits superior performance to MLR for prediction purposes.
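A minimal sketch of the idea of using particle swarm optimization to find neuron weights and biases is given below: the flattened parameter vector of a small one-hidden-layer network is treated as a particle position and the training MSE as the cost. The network size, swarm constants, and data set are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression data.
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

N_IN, N_HID = 2, 6
DIM = N_IN * N_HID + N_HID + N_HID + 1            # W1, b1, W2, b2 flattened

def unpack(theta):
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Standard global-best PSO over the flattened parameter vector.
n_particles, iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                          # inertia and acceleration constants
pos = rng.uniform(-1, 1, size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, DIM)), rng.random((n_particles, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE found by PSO:", pbest_val.min())
```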

The nonlinear function approximation based on the neural network application

  • Sugisaka, Masanori;Itou, Minoru
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / Proceedings of the 15th ICROS Conference, 2000 / pp. 462-462 / 2000
  • In this paper, a genetic algorithm (GA) is used to search for the optimal structure (i.e., the kind of neural network, the number of hidden neurons, …) of a neural network that approximates a given nonlinear function. We use a multilayer feed-forward neural network, and the synapse weights of each neuron in each generation are determined by the back-propagation method. In this study, we simulate nonlinear function approximation in a temperature control system.
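The division of labor described above, where the GA proposes candidate structures and back-propagation determines the synapse weights of each candidate, might look roughly like the sketch below; the chromosome here is just the hidden-neuron count, and the target function, GA operators, and training settings are illustrative assumptions.

```python
import random
import numpy as np

rng = np.random.default_rng(3)
random.seed(3)

# Target nonlinear function standing in for the temperature-control mapping.
X = rng.uniform(-1, 1, size=(150, 1))
y = np.sin(3 * X) * np.exp(-X ** 2)

def backprop_error(n_hidden, epochs=300, lr=0.05):
    """Train a 1-n_hidden-1 feed-forward net with plain backprop; return final MSE."""
    W1 = rng.normal(scale=0.5, size=(1, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        e = H @ W2 - y
        W2 -= lr * H.T @ e / len(X)
        grad_H = (e @ W2.T) * (1 - H ** 2)
        W1 -= lr * X.T @ grad_H / len(X)
        b1 -= lr * grad_H.mean(axis=0)
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 - y) ** 2))

def crossover(a, b):
    return (a + b) // 2

# GA over the hidden-neuron count; backprop decides the weights of each candidate.
pop = [random.randint(1, 20) for _ in range(8)]
for _ in range(10):
    scored = sorted(pop, key=backprop_error)
    parents = scored[:4]
    children = [max(1, crossover(random.choice(parents), random.choice(parents))
                    + random.choice([-1, 0, 1])) for _ in range(4)]
    pop = parents + children
print("GA-selected hidden neuron count:", min(pop, key=backprop_error))
```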


A Study on the Decision Feedback Equalizer using Neural Networks

  • Park, Sung-Hyun;Lee, Yeoung-Soo;Lee, Sang-Bae;Kim, Il;Tack, Han-Ho
    • Korean Institute of Intelligent Systems Conference Proceedings / Proceedings of the 1998 Fall Conference of the Korea Fuzzy Logic and Intelligent Systems Society / pp. 474-478 / 1998
  • A new approach for the decision feedback equalizer (DFE) based on back-propagation neural networks is described. We propose a method for finding the optimal structure of the back-propagation neural network model. In order to construct the optimal structure, we first prescribe the bounds of the learning procedure, and then we employ a method of incrementing the number of input neurons by utilizing the derivative of the error with respect to the hidden neuron weights. The structure is applied to the problem of adaptive equalization in the presence of intersymbol interference (ISI) and additive white Gaussian noise. From the simulation results, it is observed that the proposed neural-network-based decision feedback equalizer outperforms the other two in terms of bit-error rate (BER) and attainable MSE level over a range of signal-to-noise ratios and channel nonlinearities.
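To make the equalizer structure concrete, the sketch below shows the data flow of a neural decision feedback equalizer: the network input concatenates recent received samples (feed-forward taps) with previously decided symbols (feedback taps), and the network output is sliced to a hard symbol decision. The tap counts, weights, and BPSK signalling are assumptions for illustration; training and the input-neuron growth rule are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def neural_dfe_equalize(received, W1, b1, W2, b2, n_ff=4, n_fb=3):
    """Run a feed-forward net as a decision feedback equalizer.

    Input vector = n_ff received samples (feed-forward taps) concatenated with
    n_fb previously decided symbols (feedback taps); output is sign-detected."""
    decisions = []
    fb = np.zeros(n_fb)                            # past decisions, most recent first
    padded = np.concatenate([np.zeros(n_ff - 1), received])
    for k in range(len(received)):
        ff = padded[k:k + n_ff][::-1]              # current + past received samples
        x = np.concatenate([ff, fb])
        out = np.tanh(x @ W1 + b1) @ W2 + b2       # single hidden layer
        d = 1.0 if out >= 0 else -1.0              # hard decision (BPSK)
        decisions.append(d)
        fb = np.concatenate([[d], fb[:-1]])        # feed the decision back
    return np.array(decisions)

# Untrained random weights just to show the data flow; training (backprop on the
# decision error) is omitted here.
n_in, n_hidden = 4 + 3, 8
W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.3, size=n_hidden)
b2 = 0.0
rx = rng.choice([-1.0, 1.0], size=20) + 0.1 * rng.normal(size=20)  # noisy BPSK samples
print(neural_dfe_equalize(rx, W1, b1, W2, b2))
```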


동적 신경망의 층의 분열과 합성에 의한 비선형 시스템 제어 (Control of Nonlinear System by Multiplication and Combining Layer on Dynamic Neural Networks)

  • 박성욱;이재관;서보혁
    • The Transactions of the Korean Institute of Electrical Engineers A (Power Engineering) / Vol. 48, No. 4 / pp. 419-427 / 1999
  • We propose an algorithm for obtaining the optimal number of hidden units in dynamic neural networks. The dynamic neural networks comprise dynamic neural units and neural processors, each consisting of two dynamic neural units: one functioning as an excitatory neuron and the other as an inhibitory neuron. Starting from a basic network structure for the control problem, we find the optimal neural structure by multiplying and combining dynamic neural units. Numerical examples are presented for nonlinear systems; these case studies show that the proposed method is useful in a practical sense.
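A hedged sketch of what a "dynamic neural unit" and an excitatory/inhibitory processor might look like is given below: each unit is modeled as a first-order lag driven by its input, and a processor sums an excitatory and an inhibitory unit. The exact unit dynamics in the paper may differ; the time constants, weights, and pairing used here are assumptions.

```python
import numpy as np

def dynamic_neural_unit(u, tau=0.2, w=1.0, dt=0.01, sign=+1.0):
    """Simulate a first-order dynamic neural unit: tau * dx/dt = -x + sign * w * u(t).

    sign = +1 models an excitatory unit, -1 an inhibitory one (an assumption
    about the excitatory/inhibitory pairing described in the abstract)."""
    x = 0.0
    states = []
    for uk in u:
        x += dt / tau * (-x + sign * w * uk)       # Euler integration of the unit state
        states.append(np.tanh(x))                   # squashed unit output
    return np.array(states)

def neural_processor(u):
    """A processor = excitatory unit plus inhibitory unit, outputs summed."""
    return dynamic_neural_unit(u, sign=+1.0) + dynamic_neural_unit(u, w=0.5, sign=-1.0)

t = np.arange(0, 2, 0.01)
u = np.sin(2 * np.pi * t)                           # test input signal
y = neural_processor(u)
print(y[:5])
```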


적응력을 갖는 신경회로망에 의한 성분별 부하 예측 (A Component-wise Load Forecasting by Adaptable Artificial Neural Network)

  • 임재윤;김진수;김정훈
    • The Korean Institute of Electrical Engineers (KIEE) Conference Proceedings / KIEE 1994 Fall Conference Proceedings / pp. 21-23 / 1994
  • The forecasting accuracy obtained with the BP algorithm largely depends upon the number of neurons in the hidden layer. In order to construct the optimal structure, we first prescribe the error bounds of the learning procedure, and then we provide a method of incrementing the number of hidden neurons by using the derivative of the errors with respect to the output neuron weights. As a case study, we apply the proposed method to forecast the component-wise residential load and compare the results with those of time series forecasting.
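The growth procedure described above (prescribe an error bound, then increment the hidden-neuron count while monitoring the derivative of the error with respect to the output-neuron weights) can be sketched roughly as follows. The load data, training routine, and stopping bound are illustrative assumptions; the gradient magnitude is only reported here, not used exactly as in the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for a component-wise residential load series.
t = np.arange(0, 4, 0.02)
X = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t % 1.0])
y = (0.6 * np.sin(2 * np.pi * t) ** 2 + 0.3 * (t % 1.0)).reshape(-1, 1)

def train(n_hidden, epochs=800, lr=0.05):
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        e = H @ W2 - y
        grad_W2 = H.T @ e / len(X)                 # derivative of error w.r.t. output weights
        W2 -= lr * grad_W2
        W1 -= lr * X.T @ ((e @ W2.T) * (1 - H ** 2)) / len(X)
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)), float(np.abs(grad_W2).max())

# Prescribed error bound; keep adding hidden neurons while the bound is violated.
ERROR_BOUND = 1e-3
n_hidden = 1
while True:
    mse, grad_mag = train(n_hidden)
    print(f"hidden={n_hidden}  MSE={mse:.5f}  |dE/dW2|max={grad_mag:.5f}")
    if mse <= ERROR_BOUND or n_hidden >= 15:
        break
    n_hidden += 1                                   # increment the hidden-layer size
```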


진화론적으로 최적화된 FPN에 의한 자기구성 퍼지 다항식 뉴럴 네트워크의 최적 설계 (Optimal design of Self-Organizing Fuzzy Polynomial Neural Networks with evolutionarily optimized FPN)

  • 박호성;오성권
    • The Korean Institute of Electrical Engineers (KIEE) Conference Proceedings / KIEE 2005 Symposium, Information and Control Section / pp. 12-14 / 2005
  • In this paper, we propose a new architecture of Self-Organizing Fuzzy Polynomial Neural Networks (SOFPNN) by means of genetically optimized fuzzy polynomial neurons (FPNs) and discuss its comprehensive design methodology involving mechanisms of genetic optimization, especially genetic algorithms (GAs). Conventional SOFPNNs hinge on an extended Group Method of Data Handling (GMDH), exploit a fixed fuzzy inference type in each FPN of the SOFPNN, and consider a fixed number of input nodes in each layer. The design procedure applied in the construction of each layer of the SOFPNN deals with structural optimization, involving the selection of preferred nodes (or FPNs) with specific local characteristics (such as the number of input variables, the order of the polynomial in the consequent part of the fuzzy rules, the specific subset of input variables, and the number of membership functions), and addresses specific aspects of parametric optimization. Therefore, the proposed SOFPNN gives rise to a structurally optimized network and comes with a substantial level of flexibility in comparison to conventional SOFPNNs. To evaluate the performance of the genetically optimized SOFPNN, the model is tested on two time series data sets (gas furnace and chaotic time series).
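The layer-wise selection of preferred nodes that SOFPNN inherits from GMDH can be illustrated, stripped of the fuzzy-rule and genetic components, by the sketch below: each layer forms candidate two-input polynomial nodes, keeps the best few as inputs to the next layer, and growth stops when a new layer no longer improves the fit. The node polynomial, selection count, and data are assumptions for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

# Toy data standing in for the gas-furnace series.
X = rng.uniform(-1, 1, size=(400, 4))
y = np.sin(X[:, 0] * X[:, 1]) + 0.3 * X[:, 2] ** 2

def fit_node(a, b, target):
    """Quadratic two-input node: z = c0 + c1*a + c2*b + c3*a*b + c4*a^2 + c5*b^2."""
    P = np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])
    coef, *_ = np.linalg.lstsq(P, target, rcond=None)
    pred = P @ coef
    return pred, float(np.mean((pred - target) ** 2))

def build_layer(inputs, target, keep=4):
    """Evaluate all pairwise candidate nodes and keep the best 'keep' outputs."""
    candidates = []
    for i, j in itertools.combinations(range(inputs.shape[1]), 2):
        pred, mse = fit_node(inputs[:, i], inputs[:, j], target)
        candidates.append((mse, pred))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    return np.column_stack([p for _, p in best]), best[0][0]

layer_inputs, prev_best = X, np.inf
for depth in range(1, 4):                           # grow layers while the error improves
    layer_inputs, best_mse = build_layer(layer_inputs, y)
    print(f"layer {depth}: best node MSE = {best_mse:.5f}")
    if best_mse >= prev_best:                       # stop when a new layer stops helping
        break
    prev_best = best_mse
```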


자동무장할당을 위한 홉필드망 설계연구 (A Study on the Hopfield Network for automatic weapon assignment)

  • 이양원;강민구;이봉기
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 1, No. 2 / pp. 183-191 / 1997
  • It is very difficult to defend against threat targets that attack simultaneously; in particular, when the number of targets exceeds the number of defensive weapons, the assignment must be maintained so that the overall expected probability of destroying the targets is maximized. This paper proposes using a Hopfield neural network as the weapon assignment algorithm. In designing the automatic weapon assignment algorithm, the network was designed to reduce the number of neural network learning iterations required to generate the assignment variables, and computer simulation results confirm that its convergence is superior to that of Wacholder's method.
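A minimal sketch of a Hopfield-style energy-descent formulation for weapon assignment is given below: neurons v(i, j) indicate "weapon i engages target j", and the energy combines a one-target-per-weapon constraint penalty with a kill-probability payoff. The energy terms, constants, and continuous dynamics are assumptions for illustration and are much simpler than the formulation compared against Wacholder's method in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Kill-probability matrix: P[i, j] = probability weapon i destroys target j (toy values).
n_weapons, n_targets = 4, 6
P = rng.uniform(0.3, 0.9, size=(n_weapons, n_targets))

A, B = 5.0, 1.0          # constraint weight (one target per weapon) vs. payoff weight

def energy(V):
    """E = A * sum_i (row-sum constraint violation)^2 - B * sum of assigned payoff."""
    row_violation = (V.sum(axis=1) - 1.0) ** 2
    return A * row_violation.sum() - B * (V * P).sum()

# Hopfield-style gradient descent on the energy with sigmoid-bounded neurons.
U = rng.normal(scale=0.1, size=(n_weapons, n_targets))   # internal neuron states
tau, dt, gain = 1.0, 0.05, 5.0
for _ in range(2000):
    V = 1.0 / (1.0 + np.exp(-gain * U))                   # neuron outputs in (0, 1)
    dE_dV = 2 * A * (V.sum(axis=1, keepdims=True) - 1.0) - B * P
    U += dt * (-U / tau - dE_dV)                          # continuous Hopfield dynamics

V = 1.0 / (1.0 + np.exp(-gain * U))
assignment = V.argmax(axis=1)                             # decode: weapon i -> target
print("assignment (weapon -> target):", assignment)
print("final energy:", energy((V == V.max(axis=1, keepdims=True)).astype(float)))
```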
