• Title/Summary/Keyword: Memory and Learning Training

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul;Kim, Sang-Kwi;Yoon, Chung-Hwa
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.11-17
    • /
    • 2007
  • One of the popular methods used for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes distances between a test pattern and the training patterns or hyperplanes stored in memory and assigns the class of the nearest training pattern, it cannot explain how a classification result is obtained. To overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) that extracts IF-THEN rules describing regularities inherent in the training patterns. However, rules generated by RPA eventually exhibit overfitting, because they depend too strongly on the details of the given training patterns; RPA also produces more rules than necessary, due to over-partitioning of the pattern space. We therefore present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from rules while simultaneously reducing the number of rules. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
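The MBR baseline the abstract contrasts with can be sketched as a plain nearest-neighbor lookup over stored training patterns. The snippet below is an illustrative 1-NN sketch, not the authors' RPA/IREA implementation; all names and data are assumptions.

```python
import math

def mbr_classify(test_pattern, training_patterns):
    """Assign the class of the nearest stored training pattern (1-NN)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(training_patterns, key=lambda pair: dist(test_pattern, pair[0]))
    return label

# Tiny illustrative memory of (pattern, class) pairs.
train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(mbr_classify((0.2, 0.1), train))  # nearest pattern is (0.0, 0.0) -> A
```

The opacity the abstract points out is visible here: the classifier returns a label but no human-readable condition explaining it, which is exactly what IF-THEN rule extraction addresses.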

Time Series Crime Prediction Using a Federated Machine Learning Model

  • Salam, Mustafa Abdul;Taha, Sanaa;Ramadan, Mohamed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.119-130
    • /
    • 2022
  • Crime is a common social problem that affects the quality of life. As the number of crimes increases, it becomes necessary to build models that predict how many crimes may occur in a given period, identify the characteristics of a person who may commit a particular crime, and identify places where a particular crime may occur. Data privacy is the main challenge organizations face when building this type of predictive model. Federated learning (FL) is a promising approach that overcomes data security and privacy challenges, as it enables organizations to build a machine learning model on distributed datasets without sharing raw data or violating data privacy. In this paper, a federated long short-term memory (LSTM) model is proposed and compared with a traditional LSTM model. The proposed model is developed using TensorFlow Federated (TFF) and the Keras API to predict the number of crimes, and is applied to the Boston crime dataset. Its parameters are fine-tuned to obtain minimum loss and maximum accuracy. Compared with the traditional LSTM model, the federated LSTM model achieved lower loss and better accuracy, at the cost of a longer training time.
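The aggregation step at the heart of federated learning can be sketched without any framework. The following is a minimal, illustrative FedAvg-style weighted average; the paper itself uses TensorFlow Federated, and every name and number here is an assumption for illustration.

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one flat parameter list per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with 100 and 300 samples: the larger client dominates.
avg = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(avg)  # [2.5, 3.5]
```

Each client trains locally on its private data and only these parameter vectors travel to the server, which is how the raw crime records stay with their owning organization.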

The Effects of Metamemory Enhancing Program on Memory Performances in Elderly Women (메타기억 증진 프로그램이 여성노인의 기억수행에 미치는 효과)

  • Min, Hye-Sook
    • The Korean Journal of Rehabilitation Nursing
    • /
    • v.5 no.2
    • /
    • pp.205-216
    • /
    • 2002
  • This quasi-experimental study tested the effects of a metamemory enhancing program for elderly women. Data were collected from August 12 to 30, 2002, from 34 elderly women over 65 years of age living in Busan: 15 in the experimental group and 19 in the control group. The program consisted of five sessions of 1.5-2.0 hours each and was delivered to the experimental group over three weeks, twice per week. Performance on four memory tasks was measured using the Elderly Verbal Learning Test (Choi Kyung Mi, 1988) and the Face Recognition Instrument (Min Hye Sook, 1999), and metamemory was measured using the MIA questionnaire (Dixon et al., 1988). The results are as follows. 1. After the five-session memory training program, the experimental group showed a significant increase in metamemory compared with the control group (t=59.58, p<0.0001). In particular, the sub-concepts of strategy (t=20.44, p<0.0001), achievement (t=21.94, p<0.0001), and locus (t=59.58, p<0.0001) increased significantly. 2. After the program, immediate word recall (t=17.25, p<0.0001) and face recognition (t=16.69, p<0.0001), among the four memory tasks, increased significantly in the experimental group compared with the control group. These results indicate that the metamemory enhancing program is an effective nursing intervention for improving elderly women's memory.

Strain-dependent Differences of Locomotor Activity and Hippocampus-dependent Learning and Memory in Mice

  • Kim, Joong-Sun;Yang, Mi-Young;Son, Yeong-Hoon;Kim, Sung-Ho;Kim, Jong-Choon;Kim, Seung-Joon;Lee, Yong-Duk;Shin, Tae-Kyun;Moon, Chang-Jong
    • Toxicological Research
    • /
    • v.24 no.3
    • /
    • pp.183-188
    • /
    • 2008
  • The behavioral phenotypes of out-bred ICR mice were compared with those of in-bred C57BL/6 and BALB/c mice. In particular, this study examined locomotor activity and two forms of hippocampus-dependent learning paradigms: passive avoidance and object recognition memory. The basal open-field activity of the ICR strain was greater than that of the C57BL/6 and BALB/c strains. In the passive avoidance task, all mice showed a significant increase in cross-over latency when tested 24 hours after training. The strength of memory retention in the ICR mice was relatively weak but measurable, as indicated by a shorter cross-over latency than that of the C57BL/6 and BALB/c mice. In the object recognition memory test, all strains showed a significant preference for the novel object during testing. The preference index for the novel object was lower for the ICR and BALB/c mice; nevertheless, the variance and standard deviation in these strains were comparable. Overall, these results confirm strain differences in locomotor activity and hippocampus-dependent learning and memory in mice.

Self-Supervised Long-Short Term Memory Network for Solving Complex Job Shop Scheduling Problem

  • Shao, Xiaorui;Kim, Chang Soo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2993-3010
    • /
    • 2021
  • The job shop scheduling problem (JSSP) plays a critical role in smart manufacturing; an effective JSSP scheduler can save time and increase productivity. Conventional methods are very time-consuming and cannot deal with complicated JSSP instances, as they rely on a single optimization algorithm. This paper proposes an effective deep learning-based scheduler, named self-supervised long short-term memory (SS-LSTM), to handle complex JSSP accurately. First, an optimal method is used to generate sufficient training samples on small-scale JSSP instances. SS-LSTM is then applied to extract rich feature representations from the generated training samples and decide the next action. In the proposed SS-LSTM, two channels are employed to reflect the full production status. Specifically, the detailed-level channel records 18 detailed product features, while the system-level channel reflects the type of whole-system state identified by the k-means algorithm. Moreover, a self-supervised mechanism with an LSTM autoencoder is adopted to maintain high feature extraction capacity while ensuring reliable feature representation. The authors implemented, trained, and compared the proposed method with other leading learning-based methods on several complicated JSSP instances. The experimental results confirm the effectiveness and superiority of the proposed method for solving complex JSSP instances in terms of makespan.
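The system-level channel's clustering of whole-system states can be illustrated with plain k-means. This is a generic sketch of the algorithm the abstract names, not the authors' code; the toy "states" are assumptions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples: cluster states into k representative types."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            clusters[nearest].append(p)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Four toy "system states" forming two obvious groups.
states = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
print(sorted(kmeans(states, 2)))
```

In the paper's design, the cluster index of the current system state would serve as the system-level channel's input alongside the 18 detailed-level features.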

Performance of Exercise Posture Correction System Based on Deep Learning (딥러닝 기반 운동 자세 교정 시스템의 성능)

  • Hwang, Byungsun;Kim, Jeongho;Lee, Ye-Ram;Kyeong, Chanuk;Seon, Joonho;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.177-183
    • /
    • 2022
  • Recently, interest in home training has grown due to COVID-19. Accordingly, research on applying HAR (human activity recognition) technology to home training has been conducted. However, existing HAR studies have addressed static activities rather than dynamic ones. In this paper, a deep learning model is proposed that analyzes dynamic exercise postures and reports the accuracy of the user's posture. Fitness images from AI-Hub are analyzed with BlazePose. The experiment compares three types of deep learning models: RNN (recurrent neural network), LSTM (long short-term memory), and CNN (convolutional neural network). In the simulation results, the f1-scores of the RNN, LSTM, and CNN were 0.49, 0.87, and 0.98, respectively, confirming that the CNN is more suitable for human activity recognition than the other models. More exercise postures could be analyzed using a greater variety of learning data.
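The f1-score used to compare the three models is the harmonic mean of precision and recall. A minimal sketch, with illustrative counts rather than the paper's actual confusion matrices:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only (not taken from the paper's experiments).
print(round(f1_score(tp=98, fp=2, fn=2), 2))  # 0.98
```

Because F1 balances false positives against false negatives, it is a stricter comparison than raw accuracy when posture classes are imbalanced.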

Estrogen Replacement Effect of Korean Ginseng Saponin on Learning and Memory of Ovariectomized Mice

  • Jung, Jae-Won;Hyewhon Rhim;Bae, Eun-He;Lee, Bong-Hee;Park, Chan-Woong
    • Journal of Ginseng Research
    • /
    • v.24 no.1
    • /
    • pp.8-17
    • /
    • 2000
  • Estrogen can influence the expression of behaviors not directly associated with reproduction, including learning and memory. Recently, estrogen has received considerable attention for its effects on neuroprotection and on neural circuits in brain areas associated with cognition. Although estrogen replacement therapy may be helpful to postmenopausal women, it also produces a number of harmful side effects. Ginseng also has steroidal qualities and contains several ginsenoside components whose backbone structure is similar to that of estrogen. The objectives of this experiment were 1) to examine the effects of estrogen and 2) to investigate the effects of ginsenosides as an estrogenic agent on learning and memory, using the Morris water maze, a standard experimental task for spatial memory. Ovariectomized mice were implanted subcutaneously with Silastic capsules containing 17β-estradiol (100-250 µg/ml) or panaxadiol (PD) and panaxatriol (PT) saponins (15-100 µg/ml), diluted with sesame oil. In the first set of experiments, the effect of estradiol on learning and memory in the Morris water maze was examined: estradiol delivered via Silastic capsules after training improved spatial memory performance in ovariectomized female mice. In the second set of experiments, three different PD and PT saponin concentrations were delivered via Silastic implants to ovariectomized female mice, and their effects were compared with those of estrogen. The results of three separate experiments demonstrated that estradiol, PD, and PT administered via Silastic implants for two weeks prior to water maze training significantly improved spatial memory performance compared with ovariectomized (OVX) controls, as indicated by lower escape latency over trials. The positive effect of estradiol suggests that estrogen can affect learning and memory performance. In addition, the positive effects of the PD and PT saponins suggest that ginsenosides exert estrogen-like effects in mediating learning- and memory-related behavior.

Study of Fall Detection System of Long Short-term Memory Using Yolo-pose (Yolo-pose를 이용한 장단기 메모리의 낙상감지 시스템 연구)

  • Jeong, Seung Su;Kim, Nam Ho;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.123-125
    • /
    • 2022
  • In this paper, we introduce a fall detection system that applies long short-term memory (LSTM) to Yolo-pose output. Using Yolo-pose, pose data labeled as daily life or falls are extracted from image data and fed to an LSTM for learning. To prevent overfitting, the data are split 8:2 between training and validation, and results are reported as a confusion matrix. The proposed system recorded 100% for both sensitivity and specificity, confirming that daily life and falls were well distinguished.
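The sensitivity and specificity reported above come directly from a binary confusion matrix. A minimal sketch with illustrative counts, not the paper's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = recall of falls; specificity = recall of daily life."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative perfect confusion matrix: every clip classified correctly.
sens, spec = sensitivity_specificity(tp=50, fn=0, tn=50, fp=0)
print(sens, spec)  # 1.0 1.0
```

A 100%/100% result means zero missed falls (fn=0) and zero false alarms (fp=0) on the evaluated clips.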

Num Worker Tuner: An Automated Spawn Parameter Tuner for Multi-Processing DataLoaders

  • Synn, DoangJoo;Kim, JongKook
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.446-448
    • /
    • 2021
  • In training a deep learning model, it is crucial to tune various hyperparameters to gain speed and accuracy. While hyperparameters that mathematically induce convergence affect training speed, system parameters that affect host-to-device transfer are also crucial. Therefore, it is important to properly tune and select the parameters that influence the data loader, as a system parameter, for overall time acceleration. We propose an automated framework called Num Worker Tuner (NWT) to address this problem. This method finds the appropriate number of multi-processing subprocesses through a search space and accelerates learning through the number of subprocesses. Furthermore, it achieves memory efficiency and speed-up by tuning the system-dependent parameter, the number of multi-process spawns.
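The core idea, benchmarking candidate worker counts and keeping the fastest, can be sketched framework-free. The function and toy workload below are illustrative assumptions, not the NWT implementation or a real DataLoader.

```python
import time

def tune_spawn_count(load_batch, candidates, n_batches=25):
    """Benchmark each candidate worker count and return the fastest.

    `load_batch(n_workers)` stands in for loading one batch at a given
    worker count; both names are illustrative, not the NWT API.
    """
    timings = {}
    for n_workers in candidates:
        start = time.perf_counter()
        for _ in range(n_batches):
            load_batch(n_workers)
        timings[n_workers] = time.perf_counter() - start
    # The candidate with the fewest measured seconds wins.
    return min(timings, key=timings.get), timings

# Toy workload whose cost shrinks as the worker count grows.
best, timings = tune_spawn_count(lambda n: sum(range(200000 // n)), [1, 2, 4])
print(best)
```

In a real setting, `load_batch` would rebuild the loader with each candidate worker count, so the measurement captures the actual host-to-device pipeline rather than a synthetic cost.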

Water Level Forecasting based on Deep Learning: A Use Case of Trinity River-Texas-The United States (딥러닝 기반 침수 수위 예측: 미국 텍사스 트리니티강 사례연구)

  • Tran, Quang-Khai;Song, Sa-kwang
    • Journal of KIISE
    • /
    • v.44 no.6
    • /
    • pp.607-612
    • /
    • 2017
  • This paper presents an attempt to apply deep learning technology to the problem of forecasting floods in urban areas. We employ Recurrent Neural Networks (RNNs), which are suitable for analyzing time-series data, to learn observed river data and predict the water level. To test the model, we use water observation data from a station on the Trinity River, Texas, U.S., with data from 2013 to 2015 for training and data from 2016 for testing. The input to the neural networks is a 16-record sequence of 15-minute-interval time-series data, and the output is the predicted water level 30 and 60 minutes ahead. In the experiment, we compare three deep learning models: a standard RNN, an RNN trained with Back Propagation Through Time (RNN-BPTT), and Long Short-Term Memory (LSTM). The LSTM achieves a Nash-Sutcliffe efficiency exceeding 0.98, while the standard RNN and RNN-BPTT also provide very high accuracy.
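The windowing and evaluation described above can be sketched as follows: `make_windows` mirrors the 16-record input with a 30-minute-ahead target (two 15-minute steps), and `nash_sutcliffe` is the standard efficiency score named in the abstract. This is an illustrative sketch with toy data, not the authors' code.

```python
def make_windows(series, length=16, horizon=2):
    """Slice a series into (16-record input, future target) pairs.
    horizon=2 steps of 15 minutes gives a 30-minute-ahead target."""
    return [
        (series[i:i + length], series[i + length + horizon - 1])
        for i in range(len(series) - length - horizon + 1)
    ]

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the mean (1.0 = perfect)."""
    mean = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean) ** 2 for o in observed)
    return 1 - sse / sst

series = list(range(30))                  # toy stand-in for water-level records
windows = make_windows(series)
print(len(windows[0][0]), windows[0][1])  # 16 17
print(round(nash_sutcliffe([1, 2, 3, 4], [1.1, 2.0, 2.9, 4.0]), 3))  # 0.996
```

A score of 1.0 means the predictions match the observations exactly, while a score of 0 means the model is no better than predicting the mean, which is why values above 0.98 indicate very strong forecasts.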