• Title/Summary/Keyword: GA-Neural Network

Neural Network Structure and Parameter Optimization via Genetic Algorithms (유전알고리즘을 이용한 신경망 구조 및 파라미터 최적화)

  • 한승수
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.3 / pp.215-222 / 2001
  • Neural network-based models of semiconductor manufacturing processes have been shown to offer advantages in both accuracy and generalization over traditional methods. However, model development is often complicated by the fact that back-propagation neural networks contain several adjustable parameters whose optimal values are unknown during training. These include the learning rate, momentum, training tolerance, and the number of hidden-layer neurons. This paper presents an investigation of the use of genetic algorithms (GAs) to determine the optimal neural network parameters for the modeling of plasma-enhanced chemical vapor deposition (PECVD) of silicon dioxide films. To find an optimal parameter set for the neural network PECVD models, a performance index was defined and used in the GA objective function. This index was designed to account for network prediction error as well as training error, with a higher emphasis on reducing prediction error. The results of the genetic search were compared with the results of a similar search using the simplex algorithm.

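The performance index above is described only qualitatively in this listing. The sketch below shows one plausible reading, assuming the index is a weighted sum of training and prediction (test-set) RMSE with a heavier weight on prediction error, and using scikit-learn's MLPRegressor as a stand-in for the back-propagation PECVD model. The chromosome layout, weights, and parameter ranges are illustrative assumptions, not the paper's settings.

```python
# Sketch: GA fitness for back-propagation network hyperparameters
# (learning rate, momentum, training tolerance, hidden neurons).
# Assumed performance index = W_PRED * prediction RMSE + W_TRAIN * training RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

W_PRED, W_TRAIN = 0.7, 0.3          # heavier emphasis on prediction error (assumed split)

def decode(chromosome):
    """Map a 4-gene chromosome in [0, 1] to network parameters."""
    lr, mom, tol, hidden = chromosome
    return {
        "learning_rate_init": 10 ** (-4 + 3 * lr),      # 1e-4 .. 1e-1
        "momentum": 0.5 + 0.49 * mom,                   # 0.5 .. 0.99
        "tol": 10 ** (-6 + 3 * tol),                    # 1e-6 .. 1e-3
        "hidden_layer_sizes": (int(2 + 30 * hidden),),  # 2 .. 32 hidden neurons
    }

def fitness(chromosome, X_train, y_train, X_test, y_test):
    """Performance index to be minimized by the GA."""
    params = decode(chromosome)
    net = MLPRegressor(solver="sgd", max_iter=2000, random_state=0, **params)
    net.fit(X_train, y_train)
    train_rmse = mean_squared_error(y_train, net.predict(X_train)) ** 0.5
    pred_rmse = mean_squared_error(y_test, net.predict(X_test)) ** 0.5
    return W_PRED * pred_rmse + W_TRAIN * train_rmse
```

Any GA, or the simplex search the paper uses for comparison, could then minimize this index over the four-gene chromosome.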

A Study on the Development of a Java Package for Function Optimization based on Genetic Algorithms (유전 알고리즘 기반의 함수 최적화를 위한 자바 패키지 개발에 관한 연구)

  • 강환수;강환일;송영기
    • Proceedings of the IEEK Conference / 2000.06c / pp.27-30 / 2000
  • Many human inventions were inspired by nature. The artificial neural network is one example; genetic algorithms (GAs) are another. GAs search by simulating evolution, starting from an initial set of solutions or hypotheses and generating successive "generations" of solutions. This branch of AI was inspired by the way living things evolved into more successful organisms in nature. To apply a GA on a computer, many simulation runs must be performed while varying the major GA parameters. This paper describes the implementation of a Java package for efficient genetic-algorithm applications, called "JavaGA". JavaGA can be used as a stand-alone application or as an applet, and provides a graphical user interface for assigning the major GA parameters.

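The JavaGA package itself is written in Java and is not reproduced here. As a language-neutral illustration of the "major GA parameters" such a tool exposes (population size, number of generations, crossover and mutation rates), the following is a minimal generational GA for function minimization; all names and defaults are illustrative and are not the JavaGA API.

```python
# Minimal generational GA for real-valued function minimization.
# The keyword arguments mirror the "major GA parameters" a package like JavaGA
# lets the user set; the defaults below are illustrative only.
import random

def run_ga(objective, dim, pop_size=50, generations=100,
           crossover_rate=0.8, mutation_rate=0.05, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            if random.random() < crossover_rate:        # uniform crossover
                child = [random.choice(genes) for genes in zip(p1, p2)]
            else:
                child = p1[:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                    # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Example: minimize the sphere function f(x) = sum(x_i^2).
best = run_ga(lambda x: sum(g * g for g in x), dim=3)
```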

Neuro-Fuzzy Modeling of Complex Nonlinear System Using a mGA (mGA를 사용한 복잡한 비선형 시스템의 뉴로-퍼지 모델링)

  • Choi, Jong-Il;Lee, Yeun-Woo;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.2305-2307 / 2000
  • In this paper, we propose a neuro-fuzzy modeling method using a messy genetic algorithm (mGA) for complex nonlinear systems. The mGA has a more effective and adaptive structure than the simple GA (sGA) because it uses variable-length strings. This paper suggests a new coding method that uses the model's input-output data to determine the optimal number of fuzzy rules and to identify the structure and parameters of the membership functions simultaneously. The proposed method realizes an optimal fuzzy inference system using the learning ability of neural networks. To fine-tune the parameters identified by the mGA, the back-propagation algorithm is used to optimize the fuzzy set parameters. The proposed fuzzy modeling method is applied to a nonlinear system to prove the superiority of the proposed approach through comparison with ANFIS.

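The variable-length coding is not detailed in the abstract above. The sketch below shows one plausible encoding, assuming each gene carries the Gaussian membership centers and widths plus a constant consequent for one zero-order Takagi-Sugeno rule, so that the chromosome's length directly determines the number of rules. The encoding and inference scheme are illustrative assumptions, not the paper's mGA.

```python
# Sketch: variable-length chromosome -> zero-order Takagi-Sugeno fuzzy model.
# Each gene encodes one rule: Gaussian centers/widths per input plus a constant consequent.
import numpy as np

def make_rule(n_inputs, rng):
    return {
        "centers": rng.uniform(-1.0, 1.0, n_inputs),
        "widths": rng.uniform(0.2, 1.0, n_inputs),
        "consequent": rng.uniform(-1.0, 1.0),
    }

def random_chromosome(n_inputs, max_rules, rng):
    """Chromosome = list of rule genes; its length (number of rules) is itself evolved."""
    return [make_rule(n_inputs, rng) for _ in range(rng.integers(2, max_rules + 1))]

def infer(chromosome, x):
    """Weighted-average defuzzification over all rules in the chromosome."""
    x = np.asarray(x, dtype=float)
    firing = np.array([np.exp(-np.sum(((x - r["centers"]) / r["widths"]) ** 2))
                       for r in chromosome])
    consequents = np.array([r["consequent"] for r in chromosome])
    return float(np.dot(firing, consequents) / (firing.sum() + 1e-12))

rng = np.random.default_rng(0)
model = random_chromosome(n_inputs=2, max_rules=8, rng=rng)
y_hat = infer(model, [0.3, -0.7])
```

Cut-and-splice mGA operators would then add or remove rule genes, and back-propagation can fine-tune the surviving centers and widths, as the abstract describes.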

Neuro-Fuzzy Modeling for Nonlinear System Using VmGA (VmGA를 이용한 비선형 시스템의 뉴로-퍼지 모델링)

  • Choi, Jong-Il;Lee, Yeun-Woo;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2001.07d / pp.1952-1954 / 2001
  • In this paper, we propose a neuro-fuzzy modeling method using a VmGA (virus messy genetic algorithm) for complex nonlinear systems. The VmGA has a more effective and adaptive structure than the sGA. We suggest a new coding method that uses the model's input-output data to determine the optimal number of rules in the fuzzy model and to identify the structure and parameters of the membership functions simultaneously. The proposed method realizes an optimal fuzzy inference system using the learning ability of neural networks. To fine-tune the parameters identified by the VmGA, the back-propagation algorithm is used to optimize the fuzzy set parameters. The proposed fuzzy modeling method is applied to a nonlinear system to prove the superiority of the proposed approach through comparison with ANFIS.


GA-based Normalization Approach in Back-propagation Neural Network for Bankruptcy Prediction Modeling (유전자알고리즘을 기반으로 하는 정규화 기법에 관한 연구 : 역전파 알고리즘을 이용한 부도예측 모형을 중심으로)

  • Tai, Qiu-Yue;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.1-14 / 2010
  • The back-propagation neural network (BPN) has long been successfully applied to bankruptcy prediction problems. Despite its wide application, some major issues must be considered before its use, such as the network topology, learning parameters, and normalization methods for the input and output vectors. Previous studies on bankruptcy prediction with BPN have shown that many researchers are interested in how to optimize the network topology and learning parameters to improve prediction performance. In many cases, however, the benefits of data normalization are overlooked. In this study, a genetic algorithm (GA)-based normalization transform, defined as a linearly weighted combination of several different normalization transforms, is proposed. The GA is used to extract the optimal weights for generalization. In experiments, the proposed method was evaluated and compared with other methods to demonstrate its advantage.
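
The transform above is defined as a linearly weighted combination of several normalization transforms whose weights are searched by the GA. A minimal sketch of that combination follows; the particular base transforms (min-max, z-score, decimal scaling) and the constraint that the weights form a convex combination are assumptions for illustration.

```python
# Sketch: GA-weighted combination of normalization transforms for BPN inputs.
# Base transforms and the "weights sum to 1" constraint are illustrative assumptions;
# the GA supplies the weight vector.
import numpy as np

def min_max(x):  return (x - x.min(0)) / (x.max(0) - x.min(0) + 1e-12)
def z_score(x):  return (x - x.mean(0)) / (x.std(0) + 1e-12)
def decimal_scaling(x):
    digits = np.ceil(np.log10(np.abs(x).max(0) + 1e-12))
    return x / (10.0 ** digits)

TRANSFORMS = [min_max, z_score, decimal_scaling]

def combined_normalize(X, weights):
    """Linearly weighted combination of the base transforms (weights come from the GA)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # enforce convex combination (assumed)
    return sum(wi * t(X) for wi, t in zip(w, TRANSFORMS))

# A GA individual is then just a length-3 weight vector; its fitness is the
# bankruptcy-prediction accuracy of a BPN trained on combined_normalize(X, weights).
```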

Neural Network Weight Optimization using the GA (GA를 이용한 신경망의 가중치 최적화)

  • 문상우;공성곤
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.10a / pp.374-378 / 1998
  • Neural networks can be applied to a wide variety of real-world problems with complex nonlinearities, and they are robust because information is stored in a distributed manner across the weights. However, the back-propagation algorithm used to train feedforward multilayer networks tends to converge to local minima depending on the initial weights. In this paper, as one way to resolve this problem, a genetic algorithm is used to learn the weights of a feedforward multilayer neural network, and it is shown that training proceeds without falling into local minima.

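Since the abstract above (translated from Korean) trains the weights of a feedforward multilayer network with a GA instead of back-propagation, here is a minimal sketch: the weight matrices of a one-hidden-layer network are flattened into a chromosome and the fitness is the negative training MSE. The layer sizes and GA operators are illustrative assumptions.

```python
# Sketch: evolving the weights of a 1-hidden-layer feedforward net with a GA
# instead of back-propagation. Network size and GA operators are illustrative.
import numpy as np

N_IN, N_HID, N_OUT = 2, 6, 1
N_GENES = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT   # all weights and biases

def forward(chrom, X):
    """Unflatten the chromosome into weight matrices and run the network."""
    i = 0
    W1 = chrom[i:i + N_IN * N_HID].reshape(N_IN, N_HID);   i += N_IN * N_HID
    b1 = chrom[i:i + N_HID];                                i += N_HID
    W2 = chrom[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT);  i += N_HID * N_OUT
    b2 = chrom[i:i + N_OUT]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(chrom, X, y):
    return -np.mean((forward(chrom, X) - y) ** 2)            # maximize negative MSE

def evolve(X, y, pop_size=60, generations=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(0, 1, (pop_size, N_GENES))
    for _ in range(generations):
        scores = np.array([fitness(c, X, y) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
        children = parents + rng.normal(0, sigma, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(c, X, y) for c in pop])]
```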

Modeling of Thin Film Charge Density by Using Neural Network and Genetic Algorithm (유전자 알고리즘과 일반화된 회귀 신경망을 이용한 박막 전하밀도 예측모델)

  • Kwon, Sang-Hee;Kim, Byung-Whan
    • Proceedings of the KIEE Conference / 2007.07a / pp.1805-1806 / 2007
  • Silicon nitride (SiN) thin films were deposited by plasma-enhanced chemical vapor deposition (PECVD). The charge density of the SiN films was modeled using a generalized regression neural network (GRNN). The PECVD process was characterized using a Box-Wilson experimental design. The prediction performance of the GRNN model was optimized using a genetic algorithm (GA). Compared with the conventional GRNN model, the optimized GA-GRNN model showed an improvement of about 55% in prediction performance.

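The abstract above (translated from Korean) states that the GRNN model's prediction performance was optimized with a GA but not which quantity was searched; in a GRNN the usual free parameter is the kernel spread, so the sketch below assumes the GA tunes that spread against a validation set. The prediction formula follows the standard GRNN definition.

```python
# Sketch: generalized regression neural network (GRNN) prediction, with the kernel
# spread sigma as the quantity a GA would tune against a validation set.
# Assumption: the GA's search target is sigma (the abstract does not say).
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """y_hat(x) = sum_i y_i * exp(-d_i^2 / (2 sigma^2)) / sum_i exp(-d_i^2 / (2 sigma^2))"""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)   # squared distances
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    return (k @ y_train) / (k.sum(axis=1) + 1e-12)

def ga_fitness(sigma, X_train, y_train, X_val, y_val):
    """Validation RMSE that a GA would minimize over candidate sigma values."""
    y_hat = grnn_predict(X_train, y_train, X_val, sigma)
    return float(np.sqrt(np.mean((y_hat - y_val) ** 2)))
```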

A Study on Fuzzy Neural Network Modeling Using Genetic Algorithm (유전 알고리듬을 이용한 퍼지신경망 모델링에 관한 연구)

  • Kwon, Ok-Kook;Chang, Wook;Joo, Young-Hoon;Choi, Yoon-Ho;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 1997.07b / pp.390-393 / 1997
  • Fuzzy logic and neural networks are complementary technologies in the design of intelligent systems. A fuzzy neural network (FNN), used as an auto-tuning method, is known to be an excellent approach for adjusting fuzzy rules. However, it has a weak point: convergence to the optimum depends on the initial conditions. In this paper, we develop a coding format that represents an FNN model as a chromosome in a GA and present a systematic approach to identify the parameters and structure of the FNN. The proposed hybrid tuning method constructs a minimal and optimal structure of the fuzzy model simultaneously and automatically. This paper shows the effectiveness of the tuning system through simulations and comparison with conventional methods.


Leak flow prediction during loss of coolant accidents using deep fuzzy neural networks

  • Park, Ji Hun;An, Ye Ji;Yoo, Kwae Hwan;Na, Man Gyun
    • Nuclear Engineering and Technology / v.53 no.8 / pp.2547-2555 / 2021
  • The frequency of reactor coolant leakage is expected to increase over the lifetime of a nuclear power plant owing to degradation mechanisms, such as flow-accelerated corrosion and stress corrosion cracking. When loss of coolant accidents (LOCAs) occur, several parameters change rapidly depending on the size and location of the cracks. In this study, leak flow during LOCAs is predicted using a deep fuzzy neural network (DFNN) model. The DFNN model is based on fuzzy neural network (FNN) modules and has a structure in which the FNN modules are sequentially connected. Because the DFNN model is based on the FNN modules, the performance factors are the number of FNN modules and the parameters of each FNN module. These parameters are determined by a least-squares method combined with a genetic algorithm; the number of FNN modules is determined automatically by cross-checking a fitness function on the verification dataset to prevent overfitting. To acquire LOCA data, an optimized power reactor-1000 was simulated using a modular accident analysis program code. The predicted results of the DFNN model are found to be superior to those reported in previous works. The leak flow predictions obtained in this study will be useful for checking core integrity in nuclear power plants during LOCAs. This information is also expected to reduce the workload of the operators.
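
The abstract explains that the DFNN stacks FNN modules sequentially, fits each module with a least-squares method combined with a GA, and fixes the number of modules by cross-checking a fitness function on a verification set to avoid overfitting. The sketch below covers only that module-count selection loop; fit_fnn_module, the predict interface, and the wiring between modules are hypothetical placeholders, not the paper's implementation.

```python
# Sketch of the DFNN module-count selection: keep appending FNN modules while the
# verification-set error improves; stop and roll back once it degrades (overfitting guard).
# fit_fnn_module and the inter-module wiring are hypothetical placeholders; the paper
# fits each module with least squares combined with a GA, which is not reproduced here.
import numpy as np

def build_dfnn(X_train, y_train, X_val, y_val, fit_fnn_module, max_modules=20):
    modules, best_val = [], np.inf
    h_train, h_val = X_train, X_val                    # inputs seen by the next module
    for _ in range(max_modules):
        module = fit_fnn_module(h_train, y_train)      # least squares + GA inside (not shown)
        modules.append(module)
        p_train, p_val = module.predict(h_train), module.predict(h_val)
        val_rmse = float(np.sqrt(np.mean((p_val - y_val) ** 2)))
        if val_rmse >= best_val:                       # verification fitness stopped improving
            modules.pop()                              # drop the last module and stop
            break
        best_val = val_rmse
        # Assumed wiring: the next module sees the original inputs plus this module's output.
        h_train = np.column_stack([X_train, p_train])
        h_val = np.column_stack([X_val, p_val])
    return modules
```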

CNN-LSTM based Autonomous Driving Technology (CNN-LSTM 기반의 자율주행 기술)

  • Ga-Eun Park;Chi Un Hwang;Lim Se Ryung;Han Seung Jang
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1259-1268 / 2023
    • 2023
  • This study proposes a throttle and steering control technology using visual sensors based on deep learning's convolutional and recurrent neural networks. It collects camera image and control value data while driving a training track in clockwise and counterclockwise directions, and generates a model to predict throttle and steering through data sampling and preprocessing for efficient learning. Afterward, the model was validated on a test track in a different environment that was not used for training to find the optimal model and compare it with a CNN (Convolutional Neural Network). As a result, we found that the proposed deep learning model has excellent performance.