• Title/Abstract/Keyword: neural network training

Search results: 1,742 items

Forecasting Water Levels Of Bocheong River Using Neural Network Model

  • Kim, Ji-tae;Koh, Won-joon;Cho, Won-cheol
    • Water Engineering Research
    • /
    • Vol. 1, No. 2
    • /
    • pp.129-136
    • /
    • 2000
  • Predicting water levels is a difficult task because many uncertainties are involved; therefore, a neural network, which is well suited to such a problem, is introduced. One-day-ahead forecasting of the river stage in the Bocheong River is carried out using a neural network model. Historical water levels at the Snagye gauging point, located downstream on the Bocheong River, and the average rainfall of the Bocheong River basin are selected as training data sets. With these data sets, training is performed using the back-propagation algorithm, and water levels in 1997 and 1998 are then predicted with the trained network. To improve accuracy, a filtering method is introduced as the prediction scheme. The predicted results are shown to be in good agreement with the observed water levels, and the filtering method can overcome the lack of training patterns.

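The one-day-ahead forecasting scheme described in the entry above (historical stage plus basin-average rainfall as inputs, trained by back-propagation) can be sketched roughly as follows; the lag length, network size, and synthetic placeholder series are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of one-day-ahead river-stage forecasting with a
# back-propagation MLP; lag length and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_dataset(stage, rainfall, lags=3):
    """Past `lags` days of stage and rainfall -> next-day stage."""
    X, y = [], []
    for t in range(lags, len(stage)):
        X.append(np.r_[stage[t - lags:t], rainfall[t - lags:t]])
        y.append(stage[t])
    return np.array(X), np.array(y)

# Synthetic series standing in for the observed stage and rainfall records
rng = np.random.default_rng(0)
stage = 5.0 + np.cumsum(rng.normal(0, 0.1, 1000))
rain = rng.gamma(1.0, 2.0, 1000)

X, y = make_dataset(stage, rain)
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="sgd", learning_rate_init=0.01, max_iter=2000)
model.fit(X[:800], y[:800])                      # back-propagation training
rmse = np.sqrt(np.mean((model.predict(X[800:]) - y[800:]) ** 2))
print("hold-out RMSE:", round(float(rmse), 3))
```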

Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소 (Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model)

  • 김광호;이동현;임민규;김지환
    • 말소리와 음성과학
    • /
    • Vol. 7, No. 4
    • /
    • pp.3-8
    • /
    • 2015
  • In this paper, we investigate an input dimension reduction method that uses continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors are generated with Google's Word2Vec from a large training corpus so as to satisfy the distributional hypothesis, and the 1-of-|V| coded discrete word vectors are replaced with their corresponding continuous word vectors. In our implementation, the input dimension was reduced from 20,000 to 600 when a trigram language model was used with a vocabulary of 20,000 words, and the total training time was reduced from 30 days to 14 days for the Wall Street Journal training corpus (corpus length: 37M words).
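
As a rough illustration of the reduction above, the sketch below trains Word2Vec on a toy corpus and contrasts a 1-of-|V| coded input with its continuous counterpart; the toy corpus, 200-dimensional vectors, and gensim usage are assumptions for illustration, not the authors' setup.

```python
# Sketch: replace a 1-of-|V| discrete word code with its continuous
# Word2Vec vector. The toy corpus and 200-dim vectors are assumptions.
import numpy as np
from gensim.models import Word2Vec

corpus = [["the", "stock", "market", "rose", "today"],
          ["the", "market", "fell", "sharply", "today"]] * 100
w2v = Word2Vec(sentences=corpus, vector_size=200, window=5, min_count=1)

vocab = list(w2v.wv.key_to_index)     # stands in for the 20k-word vocabulary

def one_hot(word):
    v = np.zeros(len(vocab))
    v[vocab.index(word)] = 1.0
    return v                          # |V|-dimensional discrete input

def continuous(word):
    return w2v.wv[word]               # 200-dimensional continuous input

print(one_hot("market").shape, continuous("market").shape)
```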

신경회로망에 의한 음성 및 잡음 인식 시스템 (Speech and Noise Recognition System by Neural Network)

  • 최재승
    • 한국전자통신학회논문지
    • /
    • Vol. 5, No. 4
    • /
    • pp.357-362
    • /
    • 2010
  • In this paper, a speech and noise recognition system based on a neural network is proposed for detecting speech and noise intervals. The proposed neural network is trained with the error back-propagation algorithm. First, the power spectrum obtained by the fast Fourier transform and the linear prediction coefficients of each frame are used as inputs to the network for training; the network is thus trained on clean speech (not corrupted by noise) and on noise. The performance of the proposed recognition system is evaluated in terms of recognition rate using various speech signals and white, printer, road, and car noise. In the experiments, recognition rates of 92% or higher were obtained for these speech and noise signals even when the training data and evaluation data of the network differed.
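
As a rough sketch of the pipeline described above, the snippet below computes an FFT power spectrum and LPC coefficients per frame and feeds them to a small back-propagation classifier; the frame length, LPC order, toy signals, and use of scikit-learn are illustrative assumptions.

```python
# Per-frame features (FFT power spectrum + LPC coefficients) for a
# speech/noise classifier. Frame size, LPC order, and the MLP are assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neural_network import MLPClassifier

def frame_features(frame, lpc_order=10):
    power = np.abs(np.fft.rfft(frame)) ** 2               # FFT power spectrum
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:lpc_order], r[1:lpc_order + 1])  # Yule-Walker LPC
    return np.concatenate([np.log1p(power), a])

# Toy frames standing in for clean speech and noise training material
rng = np.random.default_rng(0)
speech = [np.sin(0.1 * np.arange(256)) + 0.05 * rng.normal(size=256) for _ in range(50)]
noise = [rng.normal(size=256) for _ in range(50)]

X = np.array([frame_features(f) for f in speech + noise])
y = np.array([1] * 50 + [0] * 50)          # 1 = speech, 0 = noise

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```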

RAM 기반 신경망의 비지도 학습에 관한 연구 (A Study on Unsupervised Learning Method of RAM-based Neural Net)

  • 박상무;김성진;이동형;이수동;옥철영
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 16, No. 1
    • /
    • pp.31-38
    • /
    • 2011
  • The RAM-based 3-D neural network is a weightless neural network that extends the binary neural network (BNN) with multiple information-storage bits so that the number of training repetitions is accumulated; it is highly efficient in that learning is completed with a single presentation of the training data. The recognition method of the 3-D network based on the MRD (Maximum Response Detector) technique relies on supervised learning, so the network cannot distinguish categories by itself through learning and performs well only with training data whose categories are already well separated. This paper proposes an unsupervised learning algorithm in which the existing 3-D neural network forms categories by itself, learning from the input patterns without any labeling of the training data. With the proposed unsupervised learning algorithm, the network acquires a structure that can adjust the number of discriminators on its own, which guarantees flexible scalability of the network. Recognition experiments were carried out with randomly sampled offline handwritten digits (0 to 9) as training patterns; the experiments show that the network determines the number of discriminators by itself through unsupervised learning, which can be interpreted as the network forming a concept of each handwritten digit.
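
As a rough sketch of the growth rule described above, the snippet below builds weightless RAM-node discriminators that are created on the fly whenever no existing discriminator responds strongly enough to a binary input pattern; the tuple size, response threshold, and toy prototype data are assumptions, not the authors' exact design.

```python
import numpy as np

class RamDiscriminator:
    def __init__(self, n_bits, tuple_size=4):
        self.tuple_size = tuple_size
        self.rams = [dict() for _ in range(n_bits // tuple_size)]

    def _addresses(self, pattern):
        return [tuple(row) for row in pattern.reshape(-1, self.tuple_size)]

    def train(self, pattern):
        for ram, addr in zip(self.rams, self._addresses(pattern)):
            ram[addr] = ram.get(addr, 0) + 1      # accumulate training writes

    def response(self, pattern):
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(pattern)))

def unsupervised_learn(patterns, n_bits, threshold=0.7):
    discriminators = []
    for p in patterns:
        scores = [d.response(p) / len(d.rams) for d in discriminators]
        if not scores or max(scores) < threshold:
            d = RamDiscriminator(n_bits)          # grow a new category
            discriminators.append(d)
        else:
            d = discriminators[int(np.argmax(scores))]
        d.train(p)
    return discriminators

# Toy binary patterns drawn from three noisy prototypes (stand-ins for digits)
rng = np.random.default_rng(0)
prototypes = (rng.random((3, 64)) > 0.5).astype(int)
patterns = []
for _ in range(200):
    p = prototypes[rng.integers(3)].copy()
    p[rng.random(64) < 0.02] ^= 1                 # small bit-flip noise
    patterns.append(p)

print("discriminators created:", len(unsupervised_learn(patterns, 64)))
```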

문자인식 시스템을 위한 신경망 입력패턴 생성에 관한 연구 (A Study on Input Pattern Generation of Neural-Networks for Character Recognition)

  • 신명준;김성종;손영익
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2006 Symposium Proceedings, Information and Control Section
    • /
    • pp.129-131
    • /
    • 2006
  • The performance of a neural network system depends mainly on the kind and number of input patterns used for its training. Hence, both the kind and the number of input patterns are very important for a character recognition system that uses a back-propagation network. The more input patterns are used, the better the system recognizes various characters; however, training does not always succeed as the number of input patterns increases. Moreover, there is a limit to how many input patterns can be considered for the recognition of cursive script characters. In this paper we present a new character recognition system using back-propagation neural networks. By using an additional neural network, an input pattern generation method is provided for increasing the recognition ratio and achieving successful training. We first introduce the structure of the proposed system, and then the character recognition system is investigated through experiments.


Modeling Differential Global Positioning System Pseudorange Correction

  • Mohasseb, M.;El-Rabbany, A.;El-Alim, O. Abd;Rashad, R.
    • 한국항해항만학회:학술대회논문집
    • /
    • 한국항해항만학회 2006 International Symposium on GPS/GNSS, Vol. 1
    • /
    • pp.21-26
    • /
    • 2006
  • This paper focuses on modeling and predicting the differential GPS corrections transmitted by marine radio-beacon systems using artificial neural networks. Various neural network structures with various training algorithms were examined, including linear, radial basis, and feedforward networks; the Matlab Neural Network Toolbox was used for this purpose. The data sets used in building the model are the transmitted pseudorange corrections and the broadcast navigation message. Model design passes through several stages, namely data collection, preprocessing, model building, and finally model validation. It is found that a feedforward neural network with automated regularization is the most suitable for our data. In training the neural network, different approaches are used to take advantage of the pseudorange correction history while taking into account the required prediction time and storage limitations. Three data structures are considered in training the neural network, namely all-round, compound, and average; of these, the average data structure is found to be the most suitable. It is shown that the developed model is capable of predicting the differential correction with an accuracy level comparable to that of the beacon-transmitted real-time DGPS correction.


Performance Comparison between Neural Network and Genetic Programming Using Gas Furnace Data

  • Bae, Hyeon;Jeon, Tae-Ryong;Kim, Sung-Shin
    • Journal of information and communication convergence engineering
    • /
    • Vol. 6, No. 4
    • /
    • pp.448-453
    • /
    • 2008
  • This study describes design and development techniques for estimation models used in process modeling. A case study is undertaken to design a model using standard gas furnace data. Neural networks (NN) and genetic programming (GP) are each employed to model the crucial relationships between input factors and output responses. In the case study, the two models were generated using 70% of the data for training and evaluated using the remaining 30% for testing. Model performance was compared using RMSE values calculated from the model outputs. The average RMSE values were 0.8925 (training) and 0.9951 (testing) for the NN model, and 0.707227 (training) and 0.673150 (testing) for the GP model. Concerning these results, the NN model has a strong advantage in model training (when all data are used for training), while the GP model appears to have an advantage in model testing (when the data are separated into training and testing sets). The performance reproducibility of the GP model is good, so this approach appears suitable for modeling physical fabrication processes.

이동 에이전트를 이용한 병렬 인공신경망 시뮬레이터 (The Parallel ANN(Artificial Neural Network) Simulator using Mobile Agent)

  • 조용만;강태원
    • 정보처리학회논문지B
    • /
    • Vol. 13B, No. 6
    • /
    • pp.615-624
    • /
    • 2006
  • The aim of this paper is to implement a multilayer artificial neural network simulator that runs in parallel on a virtual parallel/distributed computing environment based on a mobile agent system. A multilayer neural network can be parallelized at the level of training sessions, training data, layers, nodes, or weights. In this paper, a neural network simulator that supports parallelization at the training-session and training-data levels, which require relatively little network communication, was developed and evaluated. In the evaluation, a speed-up of about 3.3 in training performance was obtained for both training-session parallelism and training-data parallelism. The significance of this work is that a neural network implemented in parallel on a virtual parallel computer achieves performance comparable to parallel neural network processing on a dedicated parallel computer. Therefore, by reducing the relatively time-consuming training phase, a virtual parallel computer can be of considerable help in developing neural networks.
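
As a rough sketch of training-data-level parallelism as described above, the snippet below splits the training set across worker processes, lets each compute a gradient on its shard, and updates shared weights with the averaged gradient; the simple one-layer model and use of multiprocessing stand in for the mobile-agent environment and are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def shard_gradient(args):
    """Gradient of mean squared error for a linear model on one data shard."""
    w, X, y = args
    err = X @ w - y
    return 2.0 * X.T @ err / len(y)

def parallel_train(X, y, n_workers=4, lr=0.1, epochs=50):
    w = np.zeros(X.shape[1])
    shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        for _ in range(epochs):
            grads = pool.map(shard_gradient, [(w, Xs, ys) for Xs, ys in shards])
            w -= lr * np.mean(grads, axis=0)     # average the shard gradients
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=2000)
    print("learned weights:", np.round(parallel_train(X, y), 2))
```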

퍼지 모델을 이용한 신경망의 학습률 조정 (Tuning Learning Rate in Neural Network Using Fuzzy Model)

  • 라혁주;서재용;김성주;전홍태
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2003 Summer Conference Proceedings III
    • /
    • pp.1239-1242
    • /
    • 2003
  • Neural networks are a well-known model for learning nonlinear functions or nonlinear systems. The key point of a neural network is that the difference between the actual output and the desired output is used to update the weights, usually with the gradient descent method. During training, if the learning rate is too large, convergence of the network can hardly be guaranteed; on the other hand, if the learning rate is too small, training takes a long time. Therefore, one major problem in the use of neural networks is to decrease the training time while still guaranteeing convergence. In this paper, we apply a fuzzy logic model to the neural network to calibrate the learning rate. This method tunes the learning rate dynamically according to the error and demonstrates optimized training.

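In the spirit of the entry above, the sketch below uses a tiny Sugeno-style rule base to scale the learning rate at each step of gradient descent, based on the current error and whether it got worse; the membership functions, rule consequents, and toy least-squares problem are assumptions, not the authors' fuzzy model.

```python
import numpy as np

def mf_small(e):          # membership of "error is small"
    return float(np.clip(1.0 - e, 0.0, 1.0))

def mf_large(e):          # membership of "error is large"
    return 1.0 - mf_small(e)

def fuzzy_lr_scale(err, prev_err):
    inc = 1.0 if err > prev_err else 0.0        # error got worse this step
    dec = 1.0 - inc
    # rule firing strengths and their (assumed) consequent scales
    rules = [(mf_large(err) * dec, 1.10),        # large error, improving -> speed up
             (mf_small(err) * dec, 0.90),        # small error, improving -> settle down
             (inc, 0.50)]                        # error increased -> back off sharply
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 1.0

# Toy least-squares problem trained by gradient descent with a tuned rate
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
w, lr, prev_err = np.zeros(3), 0.01, np.inf
for step in range(200):
    err = float(np.mean((A @ w - b) ** 2))
    lr *= fuzzy_lr_scale(err, prev_err)          # tune the learning rate each step
    w -= lr * 2 * A.T @ (A @ w - b) / len(b)
    prev_err = err
print("final error:", round(prev_err, 4), "final lr:", round(lr, 4))
```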

Study on Fast-Changing Mixed-Modulation Recognition Based on Neural Network Algorithms

  • Jing, Qingfeng;Wang, Huaxia;Yang, Liming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 12
    • /
    • pp.4664-4681
    • /
    • 2020
  • Modulation recognition (MR) plays a key role in cognitive radar, cognitive radio, and other civilian and military fields. While existing methods can identify the signal modulation type by extracting signal characteristics, the quality of the feature extraction has a serious impact on the recognition results. In this paper, an end-to-end MR method based on long short-term memory (LSTM) and the gated recurrent unit (GRU) is put forward, which can predict the modulation type directly from a sampled signal. Additionally, the sliding-window method is applied to fast-changing mixed-modulation signals, for which the signal modulation type changes over time. The recognition accuracy on training datasets in different SNR ranges and the proportion of each modulation method among misclassified samples are analyzed, and it is found reasonable to select training data that are evenly distributed over the full SNR range. As the SNR improves, the recognition accuracy increases rapidly, and as the training dataset grows, the recognition performance of the neural network improves; the loss function value decreases with the length of the training dataset and then levels off. Moreover, when the fast-changing period is less than 20 ms, the error rate is as high as 50%, whereas when the period is increased to 30 ms, the error rates of the GRU and LSTM neural networks drop below 5%.
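
As a rough sketch of the end-to-end recurrent approach above, the snippet below classifies windows of raw I/Q samples with a small GRU (an LSTM drops in the same way); the synthetic BPSK/QPSK data, 128-sample window length, and Keras layer sizes are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

def synth_psk(n_symbols, order, sps=8):
    """Toy PSK baseband signal: random symbols, rectangular pulses, noise."""
    phases = 2 * np.pi * np.random.randint(order, size=n_symbols) / order
    iq = np.repeat(np.exp(1j * phases), sps)
    iq += 0.1 * (np.random.randn(len(iq)) + 1j * np.random.randn(len(iq)))
    return np.stack([iq.real, iq.imag], axis=-1)          # shape (T, 2)

def sliding_windows(signal, win=128, hop=128):
    return np.stack([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])

# Small labelled set: class 0 = BPSK, class 1 = QPSK
X = np.concatenate([sliding_windows(synth_psk(400, 2)),
                    sliding_windows(synth_psk(400, 4))])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 2)),
    tf.keras.layers.GRU(32),                              # or tf.keras.layers.LSTM(32)
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("train accuracy:", model.evaluate(X, y, verbose=0)[1])
```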