• Title/Summary/Keyword: Training algorithm

Slime mold and four other nature-inspired optimization algorithms in analyzing the concrete compressive strength

  • Yinghao Zhao;Hossein Moayedi;Loke Kok Foong;Quynh T. Thi
    • Smart Structures and Systems
    • /
    • v.33 no.1
    • /
    • pp.65-91
    • /
    • 2024
  • This work examines five optimization techniques for finding the best-fit predictive model of a strength-based concrete mixture: the Slime Mold Algorithm (SMA), Black Hole Algorithm (BHA), Multi-Verse Optimizer (MVO), Vortex Search (VS), and Whale Optimization Algorithm (WOA). In MATLAB, an artificial neural network is trained with a hybrid learning strategy that combines least-squares estimation with backpropagation. Of the 103 samples, 72 are used for training and 31 for testing. All data are analyzed with a multi-layer perceptron (MLP), and the results are verified by comparison. For the best-fit SMA-MLP, BHA-MLP, MVO-MLP, VS-MLP, and WOA-MLP models, the coefficient of determination (R2) is 0.9603, 0.9679, 0.9827, 0.9841, and 0.9770 in the training phase and 0.9567, 0.9552, 0.9594, 0.9888, and 0.9695 in the testing phase, respectively. In addition, the best-fit training structures for SMA, BHA, MVO, VS, and WOA (each combined with the MLP) are obtained with population sizes of 450, 500, 250, 150, and 500, respectively. Among the examined optimizers, VS offers the strongest prediction network for training the MLP.
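A minimal sketch of the general idea above: a population-based metaheuristic tuning the weights of a small one-hidden-layer MLP regressor on synthetic data. The data, network size, and the simple perturb-around-the-best update are stand-ins; the paper's SMA/BHA/MVO/VS/WOA update rules and its hybrid least-squares/backpropagation training in MATLAB are not reproduced here.

```python
# Illustrative only: a generic population-based search standing in for SMA/BHA/MVO/VS/WOA,
# fitting the weights of a small MLP to synthetic data (not the 103 concrete samples).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(103, 5))                    # hypothetical mixture features
y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=103)   # stand-in strength values
X_tr, y_tr, X_te, y_te = X[:72], y[:72], X[72:], y[72:]

n_in, n_hid = X.shape[1], 8
n_params = n_in * n_hid + n_hid + n_hid + 1              # W1, b1, w2, b2 as one flat vector

def mlp_predict(theta, X):
    """Forward pass of a one-hidden-layer MLP parameterised by the flat vector theta."""
    W1 = theta[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = theta[n_in * n_hid:n_in * n_hid + n_hid]
    w2, b2 = theta[n_in * n_hid + n_hid:-1], theta[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ w2 + b2

def rmse(theta, X, y):
    return np.sqrt(np.mean((mlp_predict(theta, X) - y) ** 2))

# Generic population-based search: perturb candidates around the current best weight vector
# (the real SMA/BHA/MVO/VS/WOA update rules differ; population size is arbitrary here).
pop_size, iters, sigma = 150, 300, 0.3
pop = rng.normal(scale=0.5, size=(pop_size, n_params))
for _ in range(iters):
    fitness = np.array([rmse(p, X_tr, y_tr) for p in pop])
    best = pop[fitness.argmin()]
    pop = best + sigma * rng.normal(size=(pop_size, n_params))
    pop[0] = best                                        # elitism: keep the best solution
    sigma *= 0.995                                       # shrink the search radius over time

print("train RMSE:", rmse(best, X_tr, y_tr), "test RMSE:", rmse(best, X_te, y_te))
```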

Noise Robust Speech Recognition Based on Noisy Speech Acoustic Model Adaptation (잡음음성 음향모델 적응에 기반한 잡음에 강인한 음성인식)

  • Chung, Yongjoo
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.29-34
    • /
    • 2014
  • In Vector Taylor Series (VTS)-based noisy speech recognition, the Hidden Markov Models (HMMs) are usually trained on clean speech, although better performance is expected when the HMMs are trained on noisy speech. In a previous study, we found that Minimum Mean Square Error (MMSE) estimation of the training noisy speech in the log-spectrum domain produces improved recognition results; however, because that algorithm operates in the log-spectrum domain, it could not be used for HMM adaptation. In this paper, we modify the previous algorithm to derive a novel mathematical relation between the test and training noisy speech in the cepstrum domain, and the means and covariances of the noisy-speech HMMs obtained by Multi-condition TRaining (MTR) are adapted accordingly. In noisy speech recognition experiments on the Aurora 2 database, the proposed method produced a 10.6% relative improvement in Word Error Rate (WER) over the MTR method, whereas the previous MMSE estimation of the training noisy speech produced a 4.3% relative improvement, which shows the superiority of the proposed method.
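For background, the zeroth-order VTS mismatch function commonly used to relate clean speech, additive noise, and noisy speech in the log-spectral and cepstral domains is reproduced below. This is the standard relation underlying VTS-style adaptation, not the paper's specific cepstrum-domain relation between test and training noisy speech.

```latex
% Standard zeroth-order VTS mismatch function (background only).
% x, n, y are clean-speech, noise, and noisy-speech features; C is the DCT matrix.
\[
  \mathbf{y}_{\mathrm{log}} = \mathbf{x}_{\mathrm{log}}
    + \log\!\bigl(\mathbf{1} + e^{\,\mathbf{n}_{\mathrm{log}} - \mathbf{x}_{\mathrm{log}}}\bigr),
  \qquad
  \mathbf{y}_{\mathrm{cep}} = \mathbf{x}_{\mathrm{cep}}
    + \mathbf{C}\,\log\!\bigl(\mathbf{1} + e^{\,\mathbf{C}^{-1}(\mathbf{n}_{\mathrm{cep}} - \mathbf{x}_{\mathrm{cep}})}\bigr).
\]
% A first-order Taylor expansion of this relation around the current estimates is what
% allows the HMM means and covariances to be adapted in VTS-style methods.
```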

Radial basis function network design for chaotic time series prediction (혼돈 시계열의 예측을 위한 Radial Basis 함수 회로망 설계)

  • 신창용;김택수;최윤호;박상희
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.4
    • /
    • pp.602-611
    • /
    • 1996
  • In this paper, radial basis function networks with two hidden layers, which employ the K-means clustering method and hierarchical training, are proposed for improving the short-term predictability of chaotic time series. Furthermore, a recursive training method for the radial basis function network using the recursive modified Gram-Schmidt algorithm is proposed for the same purpose. The radial basis function networks trained by the proposed methods are compared with the model of X. D. He and A. Lapedes and with a radial basis function network trained by a nonrecursive method. Through this comparison, an improved radial basis function network for predicting chaotic time series is presented.
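A minimal sketch of the basic RBF recipe, assuming a single hidden layer, K-means-selected centres, one shared Gaussian width, and least-squares output weights; the paper's two-hidden-layer architecture, hierarchical training, and recursive modified Gram-Schmidt procedure are not reproduced. The logistic map stands in for the chaotic series.

```python
# Single-hidden-layer RBF network for one-step-ahead prediction of a chaotic series.
import numpy as np

rng = np.random.default_rng(1)
x = np.empty(600)
x[0] = 0.3
for t in range(599):                       # logistic map, chaotic for r = 4
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, y = x[:-1, None], x[1:]                 # (input, next value) pairs
X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

def kmeans(X, k, iters=50):
    """Plain K-means: returns k cluster centres used as RBF centres."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

k = 15
centres = kmeans(X_tr, k)
width = centres.std() + 1e-6               # one shared Gaussian width (a simplification)

def design(X):
    d2 = ((X[:, None, :] - centres) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))  # Gaussian basis activations

w, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)   # linear output weights
pred = design(X_te) @ w
print("test RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
```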

The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen Ming-Kuen
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 1998.11a
    • /
    • pp.696-704
    • /
    • 1998
  • Artificial neural networks (ANNs) have been applied successfully in various areas, but how to train a network effectively remains one of the critical problems. This study focuses on that problem. First, ANNs were constructed with four different learning algorithms: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria.
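As an illustration of one of the four paradigms compared above, the sketch below uses simulated annealing as a derivative-free weight search for a tiny MLP and reports training RMS and training time, two of the paper's comparison criteria. The XOR task, network size, and cooling schedule are stand-ins rather than the paper's experimental setup.

```python
# Simulated annealing as a derivative-free weight search for a tiny MLP on XOR.
import time
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
n_params = 13                                       # 2x3 hidden weights + 3 + 3 + 1

def forward(theta, X):
    W1, b1 = theta[:6].reshape(2, 3), theta[6:9]
    w2, b2 = theta[9:12], theta[12]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))     # sigmoid output unit

def rms(theta):
    return np.sqrt(np.mean((forward(theta, X) - y) ** 2))

start = time.time()
theta = rng.normal(size=n_params)
energy, T = rms(theta), 1.0
for _ in range(20000):
    cand = theta + 0.2 * rng.normal(size=n_params)  # random neighbour in weight space
    e_cand = rms(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse moves.
    if e_cand < energy or rng.random() < np.exp((energy - e_cand) / T):
        theta, energy = cand, e_cand
    T = max(1e-3, T * 0.9995)                       # geometric cooling schedule
print(f"training RMS: {energy:.4f}, training time: {time.time() - start:.2f}s")
```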

Bead Visualization Using Spline Algorithm (스플라인 알고리즘을 이용한 비드 가시화)

  • Koo, Chang-Dae;Yang, Hyeong-Seok;Kim, Maeng-Nam
    • Journal of Welding and Joining
    • /
    • v.34 no.1
    • /
    • pp.54-58
    • /
    • 2016
  • In this paper, we suggest a method for generating, under virtual welding conditions, a bead identical to actual measurement data by exploiting morphology information of beads acquired through robot welding. Welding training carries many risk factors for beginners; by making welding training possible in virtual reality with the proposed bead visualization algorithm, the training risk and the consumption of welding material can be reduced, so educational, environmental, and economical benefits are expected. The proposed method acquires data by performing robot welding for each case, with the voltage, current, working angle, process angle, speed, and arc length set as welding condition values. Since the welding condition values are the most important factors in determining the bead shape, a baseline was selected for each item and bead data were then acquired while the other factors were varied. The welding types are FCAW, SMAW, and TIG. When a trainee performs the training, it is difficult to store all of the changing information, such as working angle, process angle, speed, and arc length, in a database, so conditions that are not stored in the database are handled by inferring the bead shape with a neural network algorithm. The bead is visualized with a spline algorithm: accurately representing the morphological information of the bead requires a large amount of data, which is problematic to store in a database, and the spline algorithm simplifies the data while still generating an accurate bead shape. Compared with beads generated by robot welding, the bead shapes generated in virtual reality using this morphological information showed improved accuracy. By expressing an accurate bead shape and thereby reducing the difference between actual and virtual welding training, it was confirmed that safe and highly effective virtual welding education can be delivered.
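A minimal sketch of the spline step described above: a handful of bead cross-section points (hypothetical values, not the paper's robot-welding measurements) are stored, and a cubic spline reconstructs a smooth bead profile from them, which is the data-reduction idea the visualization relies on.

```python
# Cubic-spline reconstruction of a bead profile from a few stored knot points.
import numpy as np
from scipy.interpolate import CubicSpline

# x: position across the bead (mm), z: measured bead height (mm) -- illustrative values only
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
z = np.array([0.0, 0.6, 1.4, 1.8, 1.5, 0.7, 0.0])

profile = CubicSpline(x, z)              # only the 7 knot points need to be stored
x_dense = np.linspace(0.0, 6.0, 200)     # dense sampling for visualization
z_dense = profile(x_dense)

print("reconstructed peak height:", round(float(z_dense.max()), 3), "mm")
```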

Efficient Incremental Learning using the Preordered Training Data (미리 순서가 매겨진 학습 데이타를 이용한 효과적인 증가학습)

  • Lee, Sun-Young;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.97-107
    • /
    • 2000
  • Incremental learning generally reduces training time and increases the generalization ability of a neural network by selecting training data incrementally during training. However, existing incremental learning methods repeatedly re-evaluate the importance of the training data every time additional data are selected. In this paper, an incremental learning algorithm for pattern classification problems is proposed that evaluates the importance of each piece of data only once, before training starts. The importance of a data point depends on how close it is to the decision boundary, and the paper presents an algorithm that orders the data by their distance to the decision boundary using clustering. Experimental results on two artificial and real-world classification problems show that the proposed incremental learning method significantly reduces the size of the training set without degrading generalization performance.
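A rough sketch of the preordering idea under simplifying assumptions: each training point is scored once by its distance to the nearest point of the opposite class (standing in for the paper's clustering-based estimate of distance to the decision boundary), the data are sorted by that score, and a simple classifier (logistic regression here, not the paper's neural network) is trained on growing prefixes of the ordering.

```python
# Score each point ONCE, order by closeness to the boundary, then train on growing prefixes.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# one-time importance score: distance to the nearest point of the opposite class
d_opp = np.array([np.min(np.linalg.norm(X[y != y[i]] - X[i], axis=1)) for i in range(len(X))])
order = np.argsort(d_opp)                  # boundary-near points come first

def train_logreg(X, y, epochs=200, lr=0.1):
    """Batch gradient descent for logistic regression (stand-in for the neural network)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# incremental schedule: train on growing prefixes of the preordered data
for frac in (0.1, 0.25, 0.5, 1.0):
    idx = order[: int(frac * len(X))]
    w, b = train_logreg(X[idx], y[idx])
    acc = np.mean(((X @ w + b) > 0) == (y == 1))
    print(f"{int(frac * 100):3d}% of ordered data -> accuracy {acc:.3f}")
```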

Autonomous-Driving Vehicle Learning Environments using Unity Real-time Engine and End-to-End CNN Approach (유니티 실시간 엔진과 End-to-End CNN 접근법을 이용한 자율주행차 학습환경)

  • Hossain, Sabir;Lee, Deok-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.122-130
    • /
    • 2019
  • Collecting rich but meaningful training data plays a key role in machine learning and deep learning research on self-driving vehicles. This paper gives a detailed overview of existing open-source simulators that can be used for training self-driving vehicles. After reviewing these simulators, we propose a new, effective approach to building a synthetic autonomous-vehicle simulation platform suitable for learning and training artificial intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions that allow the autonomous shuttle to learn more realistic situations and handle unexpected events. The virtual environment mimics the activity of a genuine shuttle vehicle in the physical world. Instead of conducting the entire training experiment in the real physical world, scenarios in 3D virtual worlds are created to estimate the parameters and train the model. From the simulator, the user can obtain data for various situations and use them for training. Flexible options are available for choosing sensors, monitoring the output, and implementing any autonomous driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm for training a self-driving shuttle.
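A hedged sketch of an end-to-end steering network in the spirit of NVIDIA's PilotNet, the usual reference point for this kind of end-to-end CNN; the architecture, input resolution, and the random stand-in batch below are assumptions, not the model or the Unity simulator data used in the paper.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """PilotNet-style end-to-end model: camera frame in, one steering value out."""
    def __init__(self, in_shape=(3, 66, 200)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_shape[0], 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():                       # infer the flattened feature size
            n_feat = self.features(torch.zeros(1, *in_shape)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_feat, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                       # steering command (regression)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One training step on random stand-in data; simulator frames and logged steering
# commands would replace these tensors in practice.
model = SteeringCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 66, 200)                  # batch of camera images
steer = torch.rand(8, 1) * 2 - 1                    # target steering in [-1, 1]
loss = nn.functional.mse_loss(model(frames), steer)
opt.zero_grad(); loss.backward(); opt.step()
print("one training step done, MSE:", float(loss))
```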

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul;Kim, Sang-Kwi;Yoon, Chung-Hwa
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.11-17
    • /
    • 2007
  • One of the popular methods used for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory and then assigns the class of the nearest training pattern, it cannot explain how the classification result was obtained. To overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) that extracts IF-THEN rules describing the regularities inherent in the training patterns. However, the rules generated by RPA eventually exhibit overfitting, because they depend too strongly on the details of the given training patterns; RPA also produces more rules than necessary due to over-partitioning of the pattern space. Consequently, we present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from the rules while reducing the number of rules at the same time. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
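A rough sketch of the rule-extraction idea: the pattern space is split recursively until each region is small or single-class, and every region becomes an IF-THEN rule over feature bounds. This simplification omits RPA's averaging, IREA's condition pruning, and the incremental updates; the data are synthetic.

```python
# Recursive partitioning of the pattern space into axis-aligned regions, each emitted as a rule.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1.0, 0.5, (60, 2)), rng.normal(1.0, 0.5, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
rules = []                                           # list of (lower bounds, upper bounds, class)

def partition(idx, lo, hi):
    labels = y[idx]
    if len(idx) <= 5 or len(set(labels)) == 1:       # region is small or pure: emit one rule
        rules.append((lo.copy(), hi.copy(), int(np.bincount(labels).argmax())))
        return
    dim = int(np.argmax(hi - lo))                    # split the widest dimension at its midpoint
    mid = (lo[dim] + hi[dim]) / 2.0
    left, right = idx[X[idx, dim] <= mid], idx[X[idx, dim] > mid]
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[dim], lo_right[dim] = mid, mid
    if len(left):
        partition(left, lo, hi_left)
    if len(right):
        partition(right, lo_right, hi)

partition(np.arange(len(X)), X.min(axis=0), X.max(axis=0))
for lo, hi, cls in rules[:5]:                        # show a few of the extracted rules
    conds = " AND ".join(f"{lo[d]:.2f} <= x{d} <= {hi[d]:.2f}" for d in range(X.shape[1]))
    print(f"IF {conds} THEN class {cls}")
print(len(rules), "rules extracted in total")
```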

Enhanced MCTS Algorithm for Generating AI Agents in General Video Games (일반적인 비디오 게임의 AI 에이전트 생성을 위한 개선된 MCTS 알고리즘)

  • Oh, Pyeong;Kim, Ji-Min;Kim, Sun-Jeong;Hong, Seokmin
    • The Journal of Information Systems
    • /
    • v.25 no.4
    • /
    • pp.23-36
    • /
    • 2016
  • Purpose: Recently, many researchers have paid considerable attention to the Artificial Intelligence fields of General Video Game Playing (GVGP) and Procedural Content Generation (PCG). This paper suggests that an improved MCTS algorithm applied to the framework can generate better AI agents. Design/methodology/approach: MCTS achieves strong performance without advance training and is therefore well suited to GVGP, which does not require prior knowledge. The improved and modified MCTS noticeably increases the agent's survival rate and makes the search more effective. The study was conducted with two different game sets. Findings: On a training set of 10 games given no prior knowledge, and on another set that served as a validation set, the proposed algorithm performed better than the existing MCTS algorithm. Based on these results, further study is suggested.
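A generic UCT-style MCTS sketch showing the four phases (selection with UCB1, expansion, random rollout, backpropagation) on a tiny Nim-like game; the GVGP framework and the paper's specific modifications are not reproduced here.

```python
# Generic MCTS/UCT on a Nim-like game: remove 1-3 stones, taking the last stone wins.
import math
import random

def moves(state):                        # state = number of stones left
    return [m for m in (1, 2, 3) if m <= state]

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.untried = {}, moves(state)
        self.visits, self.wins = 0, 0.0

def uct_select(node, c=1.4):             # UCB1 over the children of a fully expanded node
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state, to_move):             # random playout; returns the winning player (0 or 1)
    while state > 0:
        state -= random.choice(moves(state))
        to_move ^= 1
    return to_move ^ 1                   # the player who just took the last stone wins

def mcts(root_state, iters=3000):
    root = Node(root_state)
    for _ in range(iters):
        node, to_move = root, 0
        while not node.untried and node.children:             # 1. selection
            node = uct_select(node)
            to_move ^= 1
        if node.untried:                                      # 2. expansion
            m = node.untried.pop()
            child = Node(node.state - m, parent=node)
            node.children[m] = child
            node, to_move = child, to_move ^ 1
        winner = rollout(node.state, to_move)                 # 3. simulation
        while node is not None:                               # 4. backpropagation
            node.visits += 1
            if winner != to_move:        # credit the player who moved INTO this node
                node.wins += 1.0
            node, to_move = node.parent, to_move ^ 1
    return max(root.children, key=lambda m: root.children[m].visits)

print("suggested first move from a pile of 10:", mcts(10))
```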

Discriminative Training of Predictive Neural Network Models (예측신경회로망 모델의 변별력 있는 학습)

  • Na, Kyung-Min;Rheem, Jae-Yeol;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.1E
    • /
    • pp.64-70
    • /
    • 1994
  • Predictive neural network models are powerful speech recognition models based on nonlinear pattern prediction, but they suffer from poor discrimination between acoustically similar words. In this paper we propose a discriminative training algorithm for predictive neural network models. The algorithm is derived from the GPD (Generalized Probabilistic Descent) algorithm coupled with the MCEF (Minimum Classification Error Formulation), and it allows direct minimization of the recognition error rate. Evaluation of our training algorithm on ten Korean digits demonstrates its effectiveness, with a 30% reduction in recognition errors.
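For reference, the standard MCE/GPD formulation that the abstract refers to is summarized below; the exact discriminant used for predictive neural network models in the paper may differ.

```latex
% Standard MCE/GPD formulation (background; the paper's discriminant g_k may differ).
% g_k(x; Lambda) is the discriminant for class k (e.g., a negative prediction residual).
\[
  d_k(\mathbf{x}) = -g_k(\mathbf{x};\Lambda)
    + \log\Biggl[\frac{1}{K-1}\sum_{j \neq k} e^{\eta\, g_j(\mathbf{x};\Lambda)}\Biggr]^{1/\eta},
  \qquad
  \ell_k(\mathbf{x}) = \frac{1}{1 + e^{-\gamma\, d_k(\mathbf{x})}} .
\]
% GPD then minimizes this smoothed error count by stochastic gradient descent:
\[
  \Lambda_{t+1} = \Lambda_t - \epsilon_t \,\nabla_{\Lambda}\, \ell_k(\mathbf{x}_t;\Lambda_t).
\]
```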
