• Title/Summary/Keyword: Recurrent Neural Network (순환인공신경망)


Study on Q-value prediction ahead of tunnel excavation face using recurrent neural network (순환인공신경망을 활용한 터널굴착면 전방 Q값 예측에 관한 연구)

  • Hong, Chang-Ho; Kim, Jin; Ryu, Hee-Hwan; Cho, Gye-Chun
    • Journal of Korean Tunnelling and Underground Space Association / v.22 no.3 / pp.239-248 / 2020
  • Accurate rock mass classification allows suitable support patterns to be installed. Face mapping is usually conducted to classify the rock mass using RMR (Rock Mass Rating) or Q values. There have been several attempts to predict the rock mass grade with deep learning, using mechanical data from jumbo drills or probe drills and photographs of excavation surfaces. However, these approaches took a long time or were limited in that the rock grade ahead of the tunnel face could not be determined. In this study, a method to predict the Q value ahead of the excavation face is developed using the recurrent neural network (RNN) technique, and the predictions are compared against the Q values from face mapping for verification. Among the Q values from over 4,600 tunnel faces, 70% of the data was used for training and the rest was used for verification. Training was repeated while varying the number of training iterations and the number of previous excavation faces used as input. The agreement between the predicted and actual Q values was evaluated with the root mean square error (RMSE); 600 training iterations with 2 prior excavation faces gave the lowest RMSE. Although the results can vary with the input data sets, they help in understanding how past ground conditions affect future ground conditions and in predicting the Q value ahead of the tunnel excavation face.
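The setup this abstract describes, feeding the Q values of a few prior excavation faces into a many-to-one RNN and scoring predictions with RMSE, can be sketched as follows. This is only an illustration, not the authors' implementation: the window of 2 prior faces, the 70/30 split, and the RMSE metric follow the abstract, while the `q_values` array, layer sizes, and training settings are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder Q-value log along the tunnel alignment (the paper used >4,600 faces).
q_values = np.random.lognormal(mean=1.0, sigma=0.8, size=4600).astype("float32")

# Build (input, target) pairs: Q values of 2 prior faces -> Q of the next face,
# matching the window size the abstract reports as giving the lowest RMSE.
window = 2
X = np.stack([q_values[i:i + window] for i in range(len(q_values) - window)])
y = q_values[window:]
X = X[..., None]  # shape (samples, timesteps, features=1)

# 70% for training, the rest for verification, as in the study.
split = int(0.7 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Minimal many-to-one RNN; the hidden size is an assumption.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=10, verbose=0)

# Compare predicted and actual Q values with RMSE, the paper's metric.
pred = model.predict(X_test, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
print(f"RMSE on held-out faces: {rmse:.3f}")
```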

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks a few years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, namely vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
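The three convolutional ideas named above (local receptive fields, shared weights, pooling) and the LSTM remedy for unstable RNN gradients correspond directly to standard layers in common frameworks. A minimal Keras sketch, with all layer sizes and input shapes chosen arbitrarily for illustration:

```python
import tensorflow as tf

# Convolutional network: each Conv2D filter is a local receptive field whose
# weights are shared across all spatial positions; pooling then condenses the
# convolutional output, as the abstract describes.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=5, activation="relu",
                           input_shape=(28, 28, 1)),  # local receptive fields, shared weights
    tf.keras.layers.MaxPooling2D(pool_size=2),        # pooling simplifies the feature map
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Recurrent network: replacing plain recurrent units with LSTM units is the
# standard fix for vanishing/exploding gradients when backpropagating through time.
rnn = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(100, 8)),   # 100 timesteps, 8 features (arbitrary)
    tf.keras.layers.Dense(1),
])

# Both are trained by backpropagation: the gradient of the error w.r.t. every
# weight is computed and handed to a gradient-descent-style optimizer.
cnn.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
rnn.compile(optimizer="sgd", loss="mse")
```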

Application of recurrent neural network for inflow prediction into multi-purpose dam basin (다목적댐 유입량 예측을 위한 Recurrent Neural Network 모형의 적용 및 평가)

  • Park, Myung Ky; Yoon, Yung Suk; Lee, Hyun Ho; Kim, Ju Hwan
    • Journal of Korea Water Resources Association / v.51 no.12 / pp.1217-1227 / 2018
  • This paper aims to evaluate the applicability of a dam inflow prediction model based on recurrent neural network theory. To achieve this goal, an Artificial Neural Network (ANN) model and an Elman Recurrent Neural Network (RNN) model were applied to hydro-meteorological data sets for the Soyanggang dam and Chungju dam basins during the dam operation period. For model training, inflow, rainfall, temperature, sunshine duration, and wind speed were used as input data, and the daily dam inflow for the following 10 days was used as output data. Verification was carried out with dam inflow predictions between July 2016 and June 2018. The results showed no significant difference in prediction performance between the ANN model and the Elman RNN model in the Soyanggang dam basin, but the predictions of the Elman RNN model were comparatively superior to those of the ANN model in the Chungju dam basin. Consequently, the prediction performance of the Elman RNN is expected to be similar to or better than that of the ANN model. The performance of the Elman RNN was notable during low dam inflow periods, and its multiple hidden layer structure appears more effective for prediction than a single hidden layer structure.
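Keras' `SimpleRNN` layer implements Elman-style recurrence (the hidden state of the previous step is fed back as context), so the input/output arrangement the abstract describes, five hydro-meteorological inputs mapped to the next 10 days of daily inflow, might be sketched as below. The lookback length, hidden sizes, and placeholder arrays are assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

n_samples, lookback, n_features = 2000, 30, 5  # inflow, rainfall, temperature, sunshine, wind speed
horizon = 10                                   # 10 days of daily dam inflow, as in the paper

# Placeholder data standing in for the Soyanggang/Chungju dam records.
X = np.random.rand(n_samples, lookback, n_features).astype("float32")
y = np.random.rand(n_samples, horizon).astype("float32")

# Stacking two recurrent layers mirrors the multiple-hidden-layer structure
# the abstract found more effective than a single hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, return_sequences=True,
                              input_shape=(lookback, n_features)),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(horizon),  # one output per forecast day
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
```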

Traffic Congestion Estimation by Adopting Recurrent Neural Network (순환인공신경망(RNN)을 이용한 대도시 도심부 교통혼잡 예측)

  • Jung, Hee jin; Yoon, Jin su; Bae, Sang hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.6 / pp.67-78 / 2017
  • Traffic congestion cost is increasing annually. In particular, congestion caused by CBD traffic accounts for more than half of the total congestion cost. Recent advances in big data and AI have paved the way to Industry 4.0, and these new technologies are creating tremendous changes in traffic information dissemination. Ultimately, accurate and timely traffic information will have a positive impact on decreasing traffic congestion cost. This study therefore focused on developing both recurrent and non-recurrent congestion prediction models for urban roads by adopting the Recurrent Neural Network (RNN), a branch of machine learning. A network with two hidden layers trained with the scaled conjugate gradient backpropagation algorithm was selected and tested. The analysis led the authors to 25 meaningful links, out of 33 total links, with acceptable mean square errors. The authors concluded that the RNN model is a feasible model for predicting congestion.
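A hedged sketch of the two-hidden-layer recurrent setup described above. The paper trains with scaled conjugate gradient backpropagation (available, for example, as MATLAB's `trainscg`); Keras ships no SCG optimizer, so Adam is substituted here purely for illustration, and all data shapes and layer sizes are placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder congestion series for the study's 33 urban links.
n_links, n_samples, lookback = 33, 1000, 12
X = np.random.rand(n_samples, lookback, n_links).astype("float32")
y = np.random.rand(n_samples, n_links).astype("float32")  # next-interval congestion per link

# Two hidden layers, as the paper selected; SCG training is replaced by Adam here.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, return_sequences=True,
                              input_shape=(lookback, n_links)),
    tf.keras.layers.SimpleRNN(64),
    tf.keras.layers.Dense(n_links),
])
model.compile(optimizer="adam", loss="mse")  # per-link mean squared error, the paper's criterion
model.fit(X, y, epochs=5, verbose=0)

# Links whose MSE falls below a chosen threshold would count as "meaningful,"
# analogous to the 25 of 33 links the authors retained.
mse_per_link = np.mean((model.predict(X, verbose=0) - y) ** 2, axis=0)
```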