• Title/Summary/Keyword: Dynamic Neurons

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have produced notable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well-adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What pooling layers do is simplify the information in the output from the convolutional layer (a minimal sketch of these three ideas follows this entry). Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
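
As a rough illustration of the three convolutional ideas named in the abstract above (local receptive fields, shared weights, pooling), here is a minimal NumPy sketch that slides one shared 3x3 filter over an image and then max-pools the result. The image size, filter values, and pooling size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel across the image: every output neuron
    looks at a small local receptive field and reuses the same weights."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Simplify the convolutional output by keeping only the maximum
    of each size x size block."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feature_map[i * size:(i + 1) * size,
                                    j * size:(j + 1) * size].max()
    return out

# Toy 8x8 "image" and a hypothetical vertical-edge filter (assumed values).
rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])
features = conv2d(image, kernel)  # 6x6 feature map
pooled = max_pool(features)       # 3x3 pooled summary
print(features.shape, pooled.shape)
```

The same kernel being reused at every position is the shared-weights idea; each output value depending only on a small patch is the local receptive field; and the 2x2 max reduction is the pooling step that condenses the feature map.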

Speaker-Adaptive Speech Synthesis based on Fuzzy Vector Quantizer Mapping and Neural Networks (퍼지 벡터 양자화기 사상화와 신경망에 의한 화자적응 음성합성)

  • Lee, Jin-Yi;Lee, Gwang-Hyeong
    • The Transactions of the Korea Information Processing Society / v.4 no.1 / pp.149-160 / 1997
  • This paper is concerned with the problem of speaker-adaptive speech synthesis using a mapped codebook designed by fuzzy mapping on FLVQ (Fuzzy Learning Vector Quantization). The FLVQ is used to design both the input and reference speakers' codebooks. This algorithm incorporates a fuzzy membership function into the LVQ (learning vector quantization) network. Unlike the LVQ algorithm, this algorithm minimizes the network output errors, which are the differences between target and actual class membership values, and as a result minimizes the distances between training patterns and competing neurons. Speaker adaptation in speech synthesis is performed as follows: the input speaker's codebook is mapped to a reference speaker's codebook using fuzzy concepts. The fuzzy VQ mapping replaces a codevector while preserving its fuzzy membership function. The codevector correspondence histogram is obtained by accumulating the vector correspondences along the DTW optimal path. We use the fuzzy VQ mapping to design a mapped codebook. The mapped codebook is defined as a linear combination of the reference speaker's vectors, using each fuzzy histogram as a weighting function with membership values (a minimal sketch of this construction follows this entry). In the adaptive speech synthesis stage, input speech is fuzzy vector-quantized by the mapped codebook, and then FCM arithmetic is used to synthesize speech adapted to the input speaker. The speaker adaptation experiments are carried out using the speech of males in their thirties as the input speaker's speech and that of a female in her twenties as the reference speaker's speech. The speeches used in the experiments are the sentences /anyoung hasim nika/ and /good morning/. As a result of the experiments, we obtained synthesized speech adapted to the input speaker.
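
The mapped-codebook construction described above can be sketched roughly as follows, assuming the fuzzy correspondence histogram has already been accumulated along the DTW optimal path. The array shapes, the row normalization, and all numeric values are illustrative assumptions, not the paper's algorithm in full.

```python
import numpy as np

def mapped_codebook(ref_codebook, fuzzy_histogram):
    """Define each mapped codevector as a linear combination of the
    reference speaker's codevectors, weighted by the (row-normalized)
    fuzzy correspondence histogram."""
    # fuzzy_histogram[i, j]: accumulated membership of input codevector i
    # against reference codevector j along the DTW optimal path.
    weights = fuzzy_histogram / fuzzy_histogram.sum(axis=1, keepdims=True)
    return weights @ ref_codebook  # shape: (n_input_codes, dim)

# Toy stand-ins: 4 input codevectors, 5 reference codevectors, dimension 3.
rng = np.random.default_rng(1)
ref = rng.random((5, 3))
hist = rng.random((4, 5))
print(mapped_codebook(ref, hist).shape)  # (4, 3)
```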

Dynamic Changes in the Bridging Collaterals of the Basal Ganglia Circuitry Control Stress-Related Behaviors in Mice

  • Lee, Young;Han, Na-Eun;Kim, Wonju;Kim, Jae Gon;Lee, In Bum;Choi, Su Jeong;Chun, Heejung;Seo, Misun;Lee, C. Justin;Koh, Hae-Young;Kim, Joung-Hun;Baik, Ja-Hyun;Bear, Mark F.;Choi, Se-Young;Yoon, Bong-June
    • Molecules and Cells / v.43 no.4 / pp.360-372 / 2020
  • The basal ganglia network has been implicated in the control of adaptive behavior, possibly by integrating motor learning and motivational processes. Both positive and negative reinforcement appear to shape our behavioral adaptation by modulating the function of the basal ganglia. Here, we examined a transgenic mouse line (G2CT) in which synaptic transmissions onto the medium spiny neurons (MSNs) of the basal ganglia are depressed. We found that the level of collaterals from direct pathway MSNs in the external segment of the globus pallidus (GPe) ('bridging collaterals') was decreased in these mice, and this was accompanied by behavioral inhibition under stress. Furthermore, additional manipulations that could further decrease or restore the level of the bridging collaterals resulted in an increase in behavioral inhibition or active behavior in the G2CT mice, respectively. Collectively, our data indicate that the striatum of the basal ganglia network integrates negative emotions and controls appropriate coping responses in which the bridging collateral connections in the GPe play a critical regulatory role.

A Correction of East Asian Summer Precipitation Simulated by PNU/CME CGCM Using Multiple Linear Regression (다중 선형 회귀를 이용한 PNU/CME CGCM의 동아시아 여름철 강수예측 보정 연구)

  • Hwang, Yoon-Jeong;Ahn, Joong-Bae
    • Journal of the Korean earth science society / v.28 no.2 / pp.214-226 / 2007
  • Because precipitation is influenced by various atmospheric variables, it is highly nonlinear. Although precipitation predicted by a dynamic model can be corrected by using a nonlinear artificial neural network, this approach has limitations such as the choice of initial weights, local minima, and the number of neurons. In the present paper, we correct simulated precipitation by using a multiple linear regression (MLR) method, which is simple and widely used. First, an ensemble hindcast is conducted by the PNU/CME Coupled General Circulation Model (CGCM) (Park and Ahn, 2004) for the period from April to August in 1979-2005. MLR is applied to precipitation simulated by the PNU/CME CGCM for the months of June (lead 2), July (lead 3), and August (lead 4) and the seasonal mean JJA (June to August) of the Northeast Asian region including the Korean Peninsula $(110^{\circ}{-}145^{\circ}E,\;25^{\circ}{-}55^{\circ}N)$. We build the MLR model using a linear relationship between observed precipitation and the hindcast results from the PNU/CME CGCM. The predictor variables selected from the CGCM are precipitation, 500 hPa vertical velocity, 200 hPa divergence, surface air temperature, and others. After performing a leave-one-out cross validation, the results are compared with those of the PNU/CME CGCM (a minimal sketch of this procedure follows this entry). The results, including Heidke skill scores, demonstrate that the MLR-corrected results provide better rainfall forecasts than the direct CGCM output.
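
A minimal sketch of the correction procedure described above: fit a multiple linear regression from CGCM predictors to observed precipitation and evaluate it with leave-one-out cross validation. The predictor set, array sizes, and synthetic data are placeholders standing in for the paper's hindcast and observations.

```python
import numpy as np

def loocv_mlr(X, y):
    """Leave-one-out cross validation for multiple linear regression:
    for each year, fit on all other years and predict the held-out one."""
    n = len(y)
    preds = np.empty(n)
    for k in range(n):
        mask = np.arange(n) != k
        # Prepend an intercept column and solve the least-squares problem.
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[k] = np.concatenate([[1.0], X[k]]) @ coef
    return preds

# Toy stand-ins: 27 "years" (1979-2005) by 4 assumed CGCM predictors,
# e.g. simulated precipitation, 500 hPa vertical velocity, 200 hPa
# divergence, and surface air temperature.
rng = np.random.default_rng(2)
X = rng.random((27, 4))
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.1 * rng.standard_normal(27)
corrected = loocv_mlr(X, y)
print("correlation with observations:", np.corrcoef(corrected, y)[0, 1])
```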

Development for Estimation Improvement Model of Wind Velocity using Deep Neural Network (심층신경망을 활용한 풍속 예측 개선 모델 개발)

  • Ku, SungKwan;Hong, SeokMin;Kim, Ki-Young;Kwon, Jaeil
    • Journal of Advanced Navigation Technology / v.23 no.6 / pp.597-604 / 2019
  • Artificial neural networks are algorithms that simulate the way neurons in the brain learn through interaction and experience, and they can produce accurate results through learning that reflects the characteristics of the data. In this study, a model using a deep neural network is presented to improve the wind speed values predicted by a meteorological dynamic model. The model recalibrates the predicted values of the meteorological dynamic model; verification and testing on separate data confirm that the accuracy of the predictions can be increased. To improve the prediction of wind speed, a deep neural network was built using the predicted values of general weather data such as time, temperature, air pressure, humidity, atmospheric conditions, and wind speed (a minimal sketch of this setup follows this entry). Part of the data was set aside for checking the adequacy of the model: rather than being used for model building and learning, its accuracy was checked separately to confirm the suitability of the methods presented in the study.
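
A minimal sketch of the recalibration setup described above, using a small scikit-learn multilayer perceptron as the deep neural network. The predictor list, network size, and synthetic data are assumptions for illustration; the abstract does not specify the framework or architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-ins for the predictor variables named in the abstract: hour,
# temperature, air pressure, humidity, and the raw forecast wind speed.
rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),             # hour of day
    15 + 10 * rng.standard_normal(n),   # temperature
    1013 + 5 * rng.standard_normal(n),  # air pressure
    rng.uniform(20, 100, n),            # humidity
    rng.uniform(0, 20, n),              # raw forecast wind speed
])
# Synthetic "observed" wind speed: the raw forecast plus a bias that the
# network should learn to remove.
y = 0.9 * X[:, 4] + 0.5 + 0.3 * rng.standard_normal(n)

# Hold out part of the data purely for checking model adequacy, mirroring
# the separate verification data described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```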