Memory and Learning Training


Pattern Selection Using the Bias and Variance of Ensemble (앙상블의 편기와 분산을 이용한 패턴 선택)

  • Shin, Hyunjung; Cho, Sungzoon
    • Journal of Korean Institute of Industrial Engineers, v.28 no.1, pp.112-127, 2002
  • A useful pattern is a pattern that contributes much to learning. For a classification problem, patterns near the class boundary surfaces carry more information to the classifier; for a regression problem, those near the estimated surface carry more information. In both cases, usefulness is defined only for patterns with zero or negligible error. Using only the useful patterns brings several benefits. First, the computational complexity of learning, in both memory and time, is decreased. Second, overfitting is avoided even when the learner is over-sized. Third, learning yields more stable learners. In this paper, we propose a pattern 'utility index' that measures the utility of an individual pattern. The utility index is based on the bias and variance of a pattern trained by a network ensemble. In classification, a pattern with a low bias and a high variance gets a high score; in regression, on the other hand, one with a low bias and a low variance gets a high score. Based on the distribution of the utility index, the original training set is divided into a high-score group and a low-score group, and only the high-score group is then used for training. The proposed method is tested on synthetic and real-world benchmark datasets and gives better or at least comparable performance.
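
A minimal sketch, in Python, of how a bias/variance utility index of this kind might be computed with a bagged network ensemble; the ensemble size, network shape, and scoring formula are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def utility_scores(X, y, n_members=10):
    """Score each training pattern by the bias and variance of an
    ensemble's predicted class probabilities. Assumes binary labels
    y in {0, 1}; the scoring rule is a rough reading of the paper's
    idea, not its exact formulation."""
    preds = []
    for seed in range(n_members):
        Xb, yb = resample(X, y, random_state=seed)        # bootstrap replicate
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                            random_state=seed).fit(Xb, yb)
        preds.append(net.predict_proba(X)[:, 1])          # P(class 1) per pattern
    preds = np.array(preds)                               # (members, patterns)
    mean_p = preds.mean(axis=0)
    bias = np.abs(mean_p - y)                             # deviation from the label
    variance = preds.var(axis=0)                          # disagreement across members
    # Classification: low bias and HIGH variance => near the boundary => useful.
    return (1.0 - bias) * variance

# Keep only the high-score group for final training:
# scores = utility_scores(X_train, y_train)
# keep = scores > np.median(scores)
# final_model.fit(X_train[keep], y_train[keep])
```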

Analysis of streamflow prediction performance by various deep learning schemes

  • Le, Xuan-Hien; Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, pp.131-131, 2021
  • Deep learning models, especially those based on long short-term memory (LSTM), have recently demonstrated their superiority in addressing time series problems. This study aims to comprehensively evaluate the performance of deep learning models belonging to the supervised learning category in streamflow prediction. Six deep learning models were considered: standard LSTM, standard gated recurrent unit (GRU), stacked LSTM, bidirectional LSTM (BiLSTM), feed-forward neural network (FFNN), and convolutional neural network (CNN). The Red River system, one of the largest river basins in Vietnam, was adopted as a case study. The models were designed to forecast flowrate one and two days ahead at Son Tay hydrological station on the Red River, using series of observed flowrate data at seven hydrological stations on the three major branches of the Red River system (the Thao, Da, and Lo Rivers) as input for training, validation, and testing. The comparison indicates that the four LSTM-based models exhibit significantly better performance and greater stability than the FFNN and CNN models. Moreover, LSTM-based models can produce impressive predictions even in the presence of upstream reservoirs and dams. For the stacked LSTM and BiLSTM models, the added complexity is not accompanied by performance improvement, since their performance is not higher than that of the two standard models (LSTM and GRU). We therefore conclude that, for hydrological forecasting problems, simple architectures such as LSTM and GRU (with one hidden layer) are sufficient to produce highly reliable forecasts while minimizing computation time, given the sequential nature of the data.
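
A minimal Keras sketch of the single-hidden-layer LSTM setup the authors recommend; the 30-day window, 64-unit layer, and training settings are illustrative assumptions, not values from the paper:

```python
import tensorflow as tf

# Shapes are illustrative: 7 upstream stations, a 30-day input window,
# and a single one-day-ahead flowrate target at Son Tay.
WINDOW, N_STATIONS = 30, 7

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_STATIONS)),
    tf.keras.layers.LSTM(64),            # one hidden recurrent layer
    tf.keras.layers.Dense(1),            # next-day flowrate
])
model.compile(optimizer="adam", loss="mse")

# X: (samples, WINDOW, N_STATIONS) sliding windows of observed flowrate;
# y: (samples,) flowrate one day later. Swapping LSTM for GRU is one line.
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```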

Performance Evaluation of Concrete Drying Shrinkage Prediction Using DNN and LSTM (DNN과 LSTM을 활용한 콘크리트의 건조수축량 예측성능 평가)

  • Han, Jun-Hui; Lim, Gun-Su; Lee, Hyeon-Jik; Park, Jae-Woong; Kim, Jong; Han, Min-Cheol
    • Proceedings of the Korean Institute of Building Construction Conference, 2023.05a, pp.179-180, 2023
  • In this study, the performance of prediction models built with DNN and LSTM learning schemes was compared and analyzed for predicting the drying shrinkage of concrete. The analysis showed that the DNN model had a high error rate of about 51%, indicating overfitting to the training data, whereas the LSTM model was considerably more accurate, with an error rate of 12%. Furthermore, the Pre_LSTM model, which preprocesses the data before LSTM training, achieved an error rate of 9% and a coefficient of determination of 0.887.
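
The abstract does not define its "error rate"; a minimal sketch of the two evaluation metrics, assuming the error rate is a mean absolute percentage error alongside the reported coefficient of determination:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, one plausible reading of the
    abstract's 'error rate' (the paper does not define it)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r2(y_true, y_pred):
    """Coefficient of determination, as reported for the Pre_LSTM model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```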

Quantitative EEG research by the brain activities on the various fields of the English education (영어학습 유형별 뇌기능 활성화에 대한 정량뇌파연구)

  • Kwon, Hyung-Kyu
    • Journal of the Korean Data and Information Science Society, v.20 no.3, pp.541-550, 2009
  • This research attempted to identify implications for strategies to design and develop connections between brain-function activity and the fields of English learning (dictation, word level, speaking, word memory, listening). In developing a brain-based learning model for English education, attempts need to be made to help learners engage the whole brain in learning. On this point, this study found significant results, via quantitative EEG analysis, for the specific brain locations and brainwaves associated with each English learning field. The results provide guidelines for the balanced development of the left and right brain by training the specific brain sites connected to the English learning fields. In addition, a whole-brain training model is developed from the quantitative EEG data rather than from theoretical learning methods focused on right-brain training.

An Enhancement of Learning Speed of the Error-Backpropagation Algorithm (오류 역전도 알고리즘의 학습속도 향상기법)

  • Shim, Bum-Sik; Jung, Eui-Yong; Yoon, Chung-Hwa; Kang, Kyung-Sik
    • The Transactions of the Korea Information Processing Society, v.4 no.7, pp.1759-1769, 1997
  • The Error Backpropagation (EBP) algorithm for multi-layered neural networks is widely used in areas such as associative memory, speech recognition, pattern recognition, and robotics. Nevertheless, researchers have continuously published improvements over the original EBP algorithm, chiefly because EBP is exceedingly slow when the number of neurons and the size of the training set are large. In this study, we developed new learning-speed acceleration methods using a variable learning rate, a variable momentum rate, and a variable slope for the sigmoid function. During learning, these parameters are adjusted continuously according to the total error of the network, and the methods are shown to significantly reduce learning time over the original EBP. To demonstrate their efficiency, we first used binary data produced by a random number generator and showed vast improvements in terms of epochs. We also applied the methods to the binary-valued Monk's data, 4-, 5-, 6-, and 7-bit parity checkers, and the real-valued Iris data, all famous benchmark training sets for machine learning.
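
The abstract states that the learning rate, momentum, and sigmoid slope are adjusted continuously from the network's total error, but does not give the update rules. A minimal sketch substituting a common "bold driver"-style heuristic of that era; the growth and shrink factors are assumptions:

```python
import numpy as np

def adapt_parameters(lr, momentum, slope, err, prev_err,
                     grow=1.05, shrink=0.7):
    """Adjust all three EBP parameters from the change in total error.
    The exact rules are not given in the abstract; this is a stand-in."""
    if err < prev_err:            # error fell: be more aggressive
        lr *= grow
        momentum = min(momentum * grow, 0.99)
        slope *= grow             # a steeper sigmoid sharpens gradients
    else:                         # error rose: back off
        lr *= shrink
        momentum *= shrink
        slope = max(slope * shrink, 0.1)
    return lr, momentum, slope

def sigmoid(x, slope=1.0):
    """Sigmoid with a variable slope parameter, as in the paper."""
    return 1.0 / (1.0 + np.exp(-slope * x))
```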

Mental Exercises for Cognitive Function: Clinical Evidence

  • Kawashima, Ryuta
    • Journal of Preventive Medicine and Public Health, v.46 no.sup1, pp.22-27, 2013
  • The purpose of this study was to examine the beneficial effects of a new cognitive intervention program designed for the care and prevention of dementia, namely Learning Therapy. The program used systematized basic problems in arithmetic and the Japanese language as training tasks. In study 1, 16 individuals in the experimental group and 16 in the control group were recruited from a nursing home; all individuals in both groups were clinically diagnosed with senile dementia of the Alzheimer type. In study 2, we performed a single-blind, randomized controlled trial of the cognitive intervention program with 124 community-dwelling seniors. In both studies, the daily training program of reading and arithmetic tasks was carried out in the intervention groups approximately 5 days a week, for 15 to 20 minutes a day. Neuropsychological measures were obtained in both groups prior to and after six months of the intervention. The results indicate that the cognitive intervention using reading and arithmetic problems produced a transfer effect, and they provide convincing evidence that cognitive training maintains and improves the cognitive functions of dementia patients and healthy seniors.

A New Incremental Instance-Based Learning Using Recursive Partitioning (재귀분할을 이용한 새로운 점진적 인스턴스 기반 학습기법)

  • Han, Jin-Chul; Kim, Sang-Kwi; Yoon, Chung-Hwa
    • The KIPS Transactions: Part B, v.13B no.2 s.105, pp.127-132, 2006
  • k-NN (k-Nearest Neighbors), a well-known instance-based learning algorithm, simply stores the entire set of training patterns in memory and uses a distance function to classify a test pattern. k-NN is proven to show satisfactory performance, but it is notorious for its memory usage and lengthy computation. Various studies in the literature seek to minimize memory usage and computation time, and NGE (Nested Generalized Exemplar) theory is one of them. In this paper, we propose RPA (Recursive Partition Averaging) and IRPA (Incremental RPA), an incremental version of RPA. RPA partitions the entire pattern space recursively and generates representatives from each partition. Because RPA is prone to producing an excessive number of partitions as the number of features per pattern increases, we also present IRPA, which reduces the number of representative patterns by processing the training set incrementally. The proposed methods are shown to exhibit performance comparable to k-NN with far fewer patterns, and better results than the EACH system, which implements the NGE theory.
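
A minimal sketch of the partition-and-average idea behind RPA; the median split, feature cycling, and depth cap are simplifications assumed for illustration, not the authors' exact procedure:

```python
import numpy as np

def rpa(X, y, depth=0, max_depth=12):
    """Recursively split the pattern space; when a partition is class-pure
    (or the depth cap is hit), replace its patterns with one averaged
    representative. Assumes y holds small non-negative integer labels."""
    if len(y) == 0:
        return []
    if len(set(y)) == 1 or depth == max_depth:      # pure (or tiny) partition
        return [(X.mean(axis=0), int(np.bincount(y).argmax()))]
    f = depth % X.shape[1]                          # cycle through features
    t = np.median(X[:, f])                          # split at the median
    left = X[:, f] <= t
    if left.all() or (~left).all():                 # degenerate split: stop
        return [(X.mean(axis=0), int(np.bincount(y).argmax()))]
    return (rpa(X[left], y[left], depth + 1, max_depth) +
            rpa(X[~left], y[~left], depth + 1, max_depth))

def classify(reps, x):
    """Nearest-representative rule, i.e. 1-NN over the reduced set."""
    return min(reps, key=lambda r: np.linalg.norm(r[0] - x))[1]
```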

The Relationship between Neurocognitive Functioning and Emotional Recognition in Chronic Schizophrenic Patients (만성 정신분열병 환자들의 인지 기능과 정서 인식 능력의 관련성)

  • Hwang, Hye-Li; Hwang, Tae-Yeon; Lee, Woo-Kyung; Han, Eun-Sun
    • Korean Journal of Biological Psychiatry, v.11 no.2, pp.155-164, 2004
  • Objective: The present study examined the association between basic neurocognitive functions and emotional recognition in chronic schizophrenia, and investigated which cognitive variables relate to emotion recognition in schizophrenia. Methods: Forty-eight patients from the Yongin Psychiatric Rehabilitation Center were evaluated for neurocognitive function and with the Emotional Recognition Test, which has four subscales: finding emotional cues, discriminating emotions, understanding emotional context, and emotional capacity. Measures of neurocognitive functioning were selected based on hypothesized relationships to the perception of emotion. These measures included: 1) the Letter Number Sequencing Test, a measure of working memory; 2) Word Fluency and Block Design, measures of executive function; 3) the Hopkins Verbal Learning Test-Korean version, a measure of verbal memory; 4) Digit Span, a measure of immediate memory; 5) the Span of Apprehension Task, a measure of early visual processing and visual scanning; and 6) the Continuous Performance Test, a measure of sustained attention. Correlation analyses between specific neurocognitive measures and the Emotional Recognition Test were performed. To examine the degree to which neurocognitive performance predicts emotional recognition, hierarchical regression analyses were also performed. Results: Working memory and verbal memory were closely related to emotional discrimination. Working memory, Span of Apprehension, and Digit Span were closely related to contextual recognition. Among the cognitive measures, Span of Apprehension, working memory, and Digit Span were the most important variables in predicting emotional capacity. Conclusion: These results are relevant considering that emotional information processing depends, in part, on the abilities to scan the context and to use immediate working memory. They indicate that a multifaceted cognitive training program augmented with an Emotional Recognition Task (Cognitive Behavioral Rehabilitation Therapy combined with an Emotional Management Program) is promising.

A New Memory-based Learning using Dynamic Partition Averaging (동적 분할 평균을 이용한 새로운 메모리 기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.4, pp.456-462, 2008
  • Classification, in which a new datum is assigned to one of a set of given classes, is one of the most widely used data mining techniques. Memory-Based Reasoning (MBR) is a reasoning method for classification problems. MBR simply keeps many patterns in memory, represented in their original feature-vector form and without reasoning rules, and uses a distance function to classify a test pattern. As the training patterns in MBR grow, both the memory required and the amount of computation for reasoning grow as well. NGE, FPA, and RPA are well-known MBR algorithms that are proven to show satisfactory performance, but they have serious problems with memory usage and lengthy computation. In this paper, we propose the DPA (Dynamic Partition Averaging) algorithm. It chooses partition points by calculating the GINI index over the entire pattern space and partitions the space dynamically. If the classes included in a partition are unique, it generates a representative pattern from that partition; otherwise, it partitions the relevant partitions repeatedly by the same method. The proposed method is shown to exhibit performance comparable to k-NN with far fewer patterns, and better results than FPA, RPA, and the EACH system, which implements the NGE theory.
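
A minimal sketch of the GINI-index split selection DPA uses to choose partition points; the single-feature scan is an illustrative simplification, and the dynamic recursion over impure partitions is omitted:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """x: one feature column; y: class labels. Scans candidate thresholds
    and returns (threshold, score) with the lowest weighted Gini impurity."""
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:                    # candidate thresholds
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best
```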

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems, v.22 no.2, pp.127-142, 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised models such as deep belief networks, because they have found successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, short for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in a hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; their job is to simplify the information in the output of the convolutional layer. Recent convolutional architectures have 10 to 20 hidden layers and billions of connections between units. Training such networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, including vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
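
A minimal Keras sketch of the three CNN ideas named above (local receptive fields, shared weights, and pooling); the layer sizes and input shape are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),          # e.g. grayscale digits
    tf.keras.layers.Conv2D(32, kernel_size=5,          # 5x5 local receptive field;
                           activation="relu"),         # each filter shares one weight set
    tf.keras.layers.MaxPooling2D(pool_size=2),         # pooling simplifies the output
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # class scores
])
model.compile(optimizer="sgd",                          # gradient descent driven by
              loss="sparse_categorical_crossentropy")   # backpropagated gradients
```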