• Title/Summary/Keyword: neural network training

Forecasting Water Levels Of Bocheong River Using Neural Network Model

  • Kim, Ji-tae;Koh, Won-joon;Cho, Won-cheol
    • Water Engineering Research / v.1 no.2 / pp.129-136 / 2000
  • Predicting water levels is a difficult task because it involves many uncertainties. Therefore, a neural network, which is well suited to such problems, is introduced. One-day-ahead forecasting of the river stage in the Bocheong River is carried out using a neural network model. Historical water levels at the Snagye gauging point, located downstream on the Bocheong River, and the average rainfall of the Bocheong River basin are selected as training data sets. With these data sets, training is performed using the back-propagation algorithm. Water levels in 1997 and 1998 are then predicted with the trained network. To improve accuracy, a filtering method is introduced as the prediction scheme. It is shown that the predicted results are in good agreement with the observed water levels and that the filtering method can overcome the lack of training patterns.
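
As a concrete illustration of the training setup described above, the following is a minimal sketch of one-day-ahead stage forecasting with a small network trained by back-propagation. The stage and rainfall arrays are synthetic stand-ins (the paper's Bocheong River records are not reproduced), and the filtering step is omitted.

```python
# Minimal sketch: one-day-ahead river stage forecasting with a 2-8-1 MLP
# trained by back-propagation. All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
stage = np.cumsum(rng.normal(0, 0.1, 400)) + 5.0    # fake daily water levels (m)
rain = rng.exponential(2.0, 400)                    # fake basin-average rainfall (mm)

X = np.column_stack([stage[:-1], rain[:-1]])        # inputs: today's stage and rainfall
y = stage[1:]                                       # target: tomorrow's stage

Xn = (X - X.mean(0)) / X.std(0)                     # standardize inputs
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(3000):
    h = np.tanh(Xn @ W1 + b1)                       # hidden layer
    pred = (h @ W2 + b2).ravel()                    # linear output
    g_out = (pred - y)[:, None] / len(y)            # gradient of mean squared error
    g_h = (g_out @ W2.T) * (1 - h ** 2)             # back-propagate through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * Xn.T @ g_h;  b1 -= lr * g_h.sum(0)

print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f} m")
```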

Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho;Lee, Donghyun;Lim, Minkyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated from a large training corpus using Google's Word2Vec, so as to satisfy the distributional hypothesis. The 1-of-|V| coded discrete word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time was reduced from 30 days to 14 days for the Wall Street Journal training corpus (corpus length: 37M words).
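
A minimal sketch of the input-dimension reduction follows: each context word's 1-of-|V| one-hot code (|V| = 20,000) is replaced by a 200-dimensional continuous vector, so a three-word input shrinks to 600 dimensions, matching the figures in the abstract. The embedding matrix here is random; the paper obtains it from Word2Vec trained on a large corpus.

```python
# Minimal sketch: replace 1-of-|V| one-hot inputs with continuous vectors.
import numpy as np

V, dim = 20_000, 200
rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, dim))          # stand-in for Word2Vec word vectors

def reduce_input(word_ids):
    """Concatenate the continuous vectors of the context words."""
    return np.concatenate([E[w] for w in word_ids])

x = reduce_input([12, 4051, 19_998])      # hypothetical word indices
print(x.shape)                            # (600,) instead of 20,000-dim one-hot
```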

Speech and Noise Recognition System by Neural Network (신경회로망에 의한 음성 및 잡음 인식 시스템)

  • Choi, Jae-Sung
    • The Journal of the Korea institute of electronic communication sciences / v.5 no.4 / pp.357-362 / 2010
  • This paper proposes a speech and noise recognition system that uses a neural network to detect speech and noise sections in each frame. The proposed network is a layered neural network trained by the back-propagation algorithm. For each frame, the power spectrum obtained by the fast Fourier transform and the linear predictive coefficients are used as the input to the network, which is then trained on these features. The proposed network can therefore be trained using clean speech and noise. The performance of the proposed recognition system was evaluated by its recognition rate on various speech samples and on white, printer, road, and car noises. In this experiment, the recognition rates were 92% or higher for such speech and noise, even when the training and evaluation data were different.
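
A minimal sketch of the per-frame feature extraction the abstract describes, an FFT power spectrum concatenated with linear predictive coefficients (LPC), is shown below. The frame is synthetic, the LPC order and FFT size are assumptions, and the neural network classifier itself is omitted.

```python
# Minimal sketch: per-frame FFT power spectrum + LPC feature vector.
import numpy as np

def frame_features(frame, lpc_order=10, n_fft=256):
    # FFT power spectrum of the windowed frame
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    # LPC via the autocorrelation method: solve the normal equations
    r = np.correlate(frame, frame, "full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(lpc_order)]
                  for i in range(lpc_order)])
    a = np.linalg.solve(R, r[1:lpc_order + 1])
    return np.concatenate([power, a])     # joint input vector for the network

rng = np.random.default_rng(0)
frame = rng.normal(size=256) * np.hamming(256)   # fake windowed speech frame
print(frame_features(frame).shape)               # (129 spectrum bins + 10 LPC,)
```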

A Study on Unsupervised Learning Method of RAM-based Neural Net (RAM 기반 신경망의 비지도 학습에 관한 연구)

  • Park, Sang-Moo;Kim, Seong-Jin;Lee, Dong-Hyung;Lee, Soo-Dong;Ock, Cheol-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.1 / pp.31-38 / 2011
  • A RAM-based neural net is a weightless neural network built on a binary neural network. The 3-D neural network used in this paper is a binary neural network that stores multiple information bits and counts of training. The recognition method based on the MRD technique relies on supervised learning, so the network by itself cannot distinguish between categories; good separation of the categories in the training data can be achieved only through supervision. In this paper, an unsupervised learning algorithm is proposed that trains the existing 3-D neural network without labeled data, so that categories are distinguished depending only on the input training patterns. The training data for the proposed unsupervised learning are the NIST handwritten digits of MNIST, which consist of the patterns 0 to 9 presented in random order. Through experiments, the network determines the number of discriminators, each of which can be interpreted as representing one of the handwritten digits.
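
The following is a minimal sketch of a RAM-based (weightless) discriminator in the WiSARD style, which is one common reading of the architecture described: binary input bits are grouped into tuples that address small RAM nodes, and each addressed cell stores a training count. The paper's unsupervised category-splitting procedure is not reproduced; only the discriminator and its response score are shown.

```python
# Minimal sketch: a RAM-based (weightless) discriminator storing counts.
import numpy as np

class RAMDiscriminator:
    def __init__(self, n_bits, tuple_size, seed=0):
        rng = np.random.default_rng(seed)
        # random grouping of input bits into RAM-node address tuples
        self.mapping = rng.permutation(n_bits).reshape(-1, tuple_size)
        self.rams = [dict() for _ in self.mapping]   # address -> training count

    def _addresses(self, x):
        for m, bits in enumerate(self.mapping):
            yield m, tuple(x[bits])

    def train(self, x):                   # store counts of training patterns
        for m, addr in self._addresses(x):
            self.rams[m][addr] = self.rams[m].get(addr, 0) + 1

    def response(self, x):                # how many RAM nodes recognize x
        return sum(addr in self.rams[m] for m, addr in self._addresses(x))

d = RAMDiscriminator(n_bits=64, tuple_size=4)
rng = np.random.default_rng(1)
pattern = (rng.random(64) > 0.5).astype(int)   # fake binarized digit image
d.train(pattern)
print(d.response(pattern))                     # 16 of 16 nodes respond
```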

A Study on Input Pattern Generation of Neural-Networks for Character Recognition (문자인식 시스템을 위한 신경망 입력패턴 생성에 관한 연구)

  • Shin, Myong-Jun;Kim, Sung-Jong;Son, Young-Ik
    • Proceedings of the KIEE Conference / 2006.04a / pp.129-131 / 2006
  • The performance of a neural network system mainly depends on the kind and number of input patterns used for its training. Hence, the kind of input patterns, as well as their number, is very important for a character recognition system using a back-propagation network. The more input patterns are used, the better the system recognizes various characters. However, training is not always successful as the number of input patterns increases. Moreover, there is a limit to how many input patterns can be considered in a recognition system for cursive script characters. In this paper we present a new character recognition system using back-propagation neural networks. By using an additional neural network, an input pattern generation method is provided that increases the recognition ratio and yields successful training. We first introduce the structure of the proposed system. Then, the character recognition system is investigated through experiments.
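
The paper generates additional input patterns with a second neural network; that generator is not described in enough detail here to reproduce, so the sketch below uses simple random shifts as a stand-in for growing a character-image training set beyond the hand-collected patterns.

```python
# Minimal sketch: expand a character-image training set with generated
# patterns (random shifts stand in for the paper's generator network).
import numpy as np

def augment(images, copies=4, max_shift=2, seed=0):
    rng = np.random.default_rng(seed)
    out = list(images)
    for img in images:
        for _ in range(copies):
            dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
            out.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return np.stack(out)

chars = np.zeros((10, 16, 16)); chars[:, 4:12, 7:9] = 1.0   # fake glyph images
print(augment(chars).shape)     # (50, 16, 16): originals + generated patterns
```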

Modeling Differential Global Positioning System Pseudorange Correction

  • Mohasseb, M.;El-Rabbany, A.;El-Alim, O. Abd;Rashad, R.
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / v.1 / pp.21-26 / 2006
  • This paper focuses on modeling and predicting the differential GPS corrections transmitted by marine radio-beacon systems using artificial neural networks. Various neural network structures with various training algorithms were examined, including linear, radial basis, and feedforward networks. The Matlab Neural Network Toolbox was used for this purpose. The data sets used in building the model are the transmitted pseudorange corrections and the broadcast navigation message. Model design passed through several stages, namely data collection, preprocessing, model building, and finally model validation. It was found that a feedforward neural network with automated regularization is the most suitable for our data. In training the neural network, different approaches were used to take advantage of the pseudorange correction history while taking into account the required prediction time and storage limitations. Three data structures were considered in training the neural network, namely all-round, compound, and average. Of the various data structures examined, the average data structure was found to be the most suitable. It is shown that the developed model is capable of predicting the differential correction with an accuracy comparable to that of the beacon-transmitted real-time DGPS correction.
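
A minimal sketch of the modeling setup follows: a feedforward network with an explicit regularization term fitted to an averaged pseudorange-correction (PRC) history. The paper used Matlab's automated (Bayesian) regularization and real beacon data; here scikit-learn's L2 penalty (alpha) and synthetic data stand in, and the "average" data structure is imitated by averaging corrections over past days.

```python
# Minimal sketch: regularized feedforward net fitted to averaged PRC data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 500)                         # time of day (hours)
history = [np.sin(2 * np.pi * t / 24) * 3 + rng.normal(0, 0.3, t.size)
           for _ in range(5)]                       # fake PRC history, 5 days (m)
prc_avg = np.mean(history, axis=0)                  # the "average" data structure

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     alpha=1e-2, max_iter=5000, random_state=0)
model.fit((t / 24).reshape(-1, 1), prc_avg)         # scaled time -> averaged PRC
pred = model.predict((t / 24).reshape(-1, 1))
print(f"fit RMSE: {np.sqrt(np.mean((pred - prc_avg) ** 2)):.3f} m")
```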

Performance Comparison between Neural Network and Genetic Programming Using Gas Furnace Data

  • Bae, Hyeon;Jeon, Tae-Ryong;Kim, Sung-Shin
    • Journal of information and communication convergence engineering / v.6 no.4 / pp.448-453 / 2008
  • This study describes design and development techniques for process-modeling estimation models. A case study is undertaken to design a model using standard gas furnace data. Neural networks (NN) and genetic programming (GP) are each employed to model the crucial relationships between input factors and output responses. In the case study, two models were generated using 70% of the data for training and evaluated using the remaining 30% for testing. Model performance was compared using RMSE values calculated from the model outputs. The average RMSEs were 0.8925 (training) and 0.9951 (testing) for the NN model, and 0.707227 (training) and 0.673150 (testing) for the GP model. From these results, the NN model has a strong advantage in model training (using all the data for training), and the GP model appears to have an advantage in model testing (using separate data for training and testing). The performance reproducibility of the GP model is good, so this approach appears suitable for modeling physical fabrication processes.
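
The evaluation protocol is easy to make concrete: split the series 70/30, fit a model on the training part, and report RMSE on both parts, as was done for the NN and GP models. In the sketch below a linear fit stands in for both models and the data are synthetic.

```python
# Minimal sketch: 70/30 train/test split with RMSE comparison.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=300)                     # fake input factor
y = 0.8 * u + rng.normal(0, 0.2, 300)        # fake output response
split = int(0.7 * len(u))                    # 70% training / 30% testing
coef = np.polyfit(u[:split], y[:split], 1)   # stand-in for the NN/GP models

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("train RMSE:", rmse(np.polyval(coef, u[:split]), y[:split]))
print("test RMSE: ", rmse(np.polyval(coef, u[split:]), y[split:]))
```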

The Parallel ANN(Artificial Neural Network) Simulator using Mobile Agent (이동 에이전트를 이용한 병렬 인공신경망 시뮬레이터)

  • Cho, Yong-Man;Kang, Tae-Won
    • The KIPS Transactions:PartB / v.13B no.6 s.109 / pp.615-624 / 2006
  • The objective of this paper is to implement a parallel multi-layer ANN (artificial neural network) simulator based on a mobile agent system that executes in parallel in a virtual parallel distributed computing environment. The multi-layer neural network can be parallelized at the level of training session, training data, layer, node, and weight. In this study, we developed and evaluated a simulator that parallelizes the ANN at the training-session and training-data levels, because these levels incur relatively little network traffic. The results verify a parallel speed-up of about 3.3 times for training-session and training-data parallelism. The significance of this paper is that the performance of ANN execution on the virtual parallel computer is similar to that of ANN execution on an existing supercomputer. We therefore believe the virtual parallel computer can be considerably helpful in developing neural networks because it decreases the otherwise long training time.
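
A minimal sketch of training-session-level parallelism follows: independent training sessions (here, different random seeds) run concurrently, the level the paper found to incur little network traffic. Python worker processes stand in for the paper's mobile agents.

```python
# Minimal sketch: independent training sessions executed in parallel.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def train_session(seed):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 2))
    y = X @ np.array([1.0, -2.0])            # toy regression target
    w = rng.normal(size=2)
    for _ in range(200):                      # toy gradient-descent session
        w -= 0.01 * (2 / len(y)) * X.T @ (X @ w - y)
    return seed, float(np.mean((X @ w - y) ** 2))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:       # one process per training session
        for seed, mse in pool.map(train_session, range(4)):
            print(f"session {seed}: final MSE {mse:.6f}")
```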

Tuning Learning Rate in Neural Network Using Fuzzy Model (퍼지 모델을 이용한 신경망의 학습률 조정)

  • 라혁주;서재용;김성주;전홍태
    • Proceedings of the IEEK Conference / 2003.07d / pp.1239-1242 / 2003
  • Neural networks are a well-known model for learning nonlinear functions or nonlinear systems. The main point of a neural network is that the difference between the actual output and the desired output is used to update the weights. Usually, the gradient descent method is used for the learning process. During training, if the learning rate is too large, convergence of the network is hardly guaranteed; on the other hand, if the learning rate is too small, training takes much time. Therefore, one major problem in the use of neural networks is to decrease the learning time while guaranteeing convergence. In this paper, we apply a fuzzy logic model to the neural network to calibrate the learning rate. This method tunes the learning rate dynamically according to the error and demonstrates optimized training.
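
A minimal sketch of the idea follows: a tiny fuzzy rule base maps the current training error to a learning-rate multiplier (large error, larger step; small error, smaller step). The membership functions and rules are illustrative assumptions, not the paper's.

```python
# Minimal sketch: fuzzy-rule-based tuning of the learning rate.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_lr(error, base_lr=0.1):
    e = abs(error)
    # memberships of |error| in SMALL, MEDIUM, LARGE (illustrative shapes)
    w = np.array([tri(e, 0, 0, 0.5), tri(e, 0, 0.5, 1.0), tri(e, 0.5, 1.0, 1.0)])
    gains = np.array([0.5, 1.0, 2.0])   # rules: small->0.5x, medium->1x, large->2x
    return base_lr * float(w @ gains / (w.sum() + 1e-9))   # centroid defuzzification

for err in (0.05, 0.4, 0.9):
    print(f"error {err}: lr = {fuzzy_lr(err):.3f}")
```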

Study on Fast-Changing Mixed-Modulation Recognition Based on Neural Network Algorithms

  • Jing, Qingfeng;Wang, Huaxia;Yang, Liming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4664-4681 / 2020
  • Modulation recognition (MR) plays a key role in cognitive radar, cognitive radio, and other civilian and military fields. While existing methods can identify the signal modulation type by extracting signal characteristics, the quality of the feature extraction has a serious impact on the recognition results. In this paper, an end-to-end MR method based on long short-term memory (LSTM) and the gated recurrent unit (GRU) is put forward, which can directly predict the modulation type from a sampled signal. Additionally, the sliding-window method is applied to fast-changing mixed-modulation signals, for which the signal modulation type changes over time. The recognition accuracy on training datasets in different SNR ranges and the proportion of each modulation method among misclassified samples are analyzed, and it is found reasonable to select evenly distributed data covering the full SNR range as the training data. As the SNR improves, the recognition accuracy increases rapidly. As the length of the training dataset increases, the recognition performance of the neural network improves: the loss function of the neural network decreases with increasing training dataset length and then stabilizes. Moreover, when the fast-changing period is less than 20 ms, the error rate is as high as 50%; when the fast-changing period is increased to 30 ms, the error rates of the GRU and LSTM neural networks fall below 5%.
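
A minimal sketch of the sliding-window idea follows: a GRU classifies each fixed-length window of I/Q samples, so the predicted modulation type can change along a fast-changing mixed-modulation signal. The network size, window length, class count, and data are placeholders, not the paper's configuration.

```python
# Minimal sketch: GRU classifying sliding windows of an I/Q sample stream.
import torch
import torch.nn as nn

class WindowGRU(nn.Module):
    def __init__(self, n_classes=4, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, 2) I/Q samples
        _, h = self.gru(x)
        return self.head(h[-1])            # modulation-class logits per window

signal = torch.randn(1, 1024, 2)           # fake sampled I/Q signal
win, hop = 128, 64                         # sliding window over the signal
windows = signal.unfold(1, win, hop).permute(0, 1, 3, 2).reshape(-1, win, 2)
pred = WindowGRU()(windows).argmax(dim=1)  # one modulation label per window
print(pred)                                # labels may change along the signal
```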