• Title/Summary/Keyword: Effective e-Learning System

Rehmannia glutinosa Ameliorates Scopolamine-Induced Learning and Memory Impairment in Rats

  • Lee, Bom-Bi;Shim, In-Sop;Lee, Hye-Jung;Hahm, Dae-Hyun
    • Journal of Microbiology and Biotechnology
    • /
    • v.21 no.8
    • /
    • pp.874-883
    • /
    • 2011
  • Many studies have shown that the steamed root of Rehmannia glutinosa (SRG), which is widely used in the treatment of various neurodegenerative diseases in Korean traditional medicine, is effective for improving cognitive and memory impairments. The purpose of this study was to examine whether SRG extracts improved the memory defects caused by administering scopolamine (SCO) to rats. The effects of SRG on the acetylcholinergic system and proinflammatory cytokines in the hippocampus were also investigated. Male rats were administered daily doses of SRG (50, 100, and 200 mg/kg, i.p.) for 14 days, 1 h before scopolamine injection (2 mg/kg, i.p.). After inducing cognitive impairment via scopolamine administration, we conducted a passive avoidance test (PAT) and the Morris water maze (MWM) test as behavioral assessments. Changes in cholinergic system reactivity were also examined by measuring choline acetyltransferase (ChAT)-immunoreactive neurons and acetylcholinesterase (AChE) reactivity in the hippocampus. Daily administration of SRG improved memory impairment in the PAT and reduced the escape latency for finding the platform in the MWM. Consistent with this, SRG administration significantly alleviated memory-associated decreases in cholinergic immunoreactivity and decreased interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) mRNA expression in the hippocampus. The results demonstrated that SRG had a significant neuroprotective effect against the neuronal impairment and memory dysfunction caused by scopolamine in rats. These results suggest that SRG may be useful for improving cognitive function by stimulating cholinergic enzyme activities and alleviating inflammatory responses.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN's strength lies in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. Each graph is drawn as a 40 × 40 pixel image, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer. In the pooling layer, a 2×2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators widely used in previous studies, such as Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
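A minimal sketch, in Keras, of a network with the layer sizes quoted in this abstract: two 5×5 convolution layers with 6 and 9 filters, 2×2 max pooling, hidden layers of 900 and 32 units, and a 2-unit softmax output over 40×40 RGB graph images. The padding, optimizer, and loss are illustrative assumptions, not the authors' reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_fg(input_shape=(40, 40, 3)):
    """Rough reconstruction of the CNN-FG layer sizes quoted in the abstract."""
    model = models.Sequential([
        layers.Conv2D(6, (5, 5), activation="relu", padding="same",
                      input_shape=input_shape),        # 40x40 RGB fluctuation graph
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(9, (5, 5), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                              # 10*10*9 = 900 features with 'same' padding
        layers.Dense(900, activation="relu"),          # first hidden layer (per the abstract)
        layers.Dense(32, activation="relu"),           # second hidden layer
        layers.Dense(2, activation="softmax"),         # upward vs. downward class
    ])
    model.compile(optimizer="adam",                    # assumed; not stated in the abstract
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```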

Reinforcement of Long-term Care Service Specialization Need Analysis for Curriculum Development: Focused on Activity Theory (장기요양서비스 종사자 교육과정개발을 위한 요구분석 : 활동이론(Activity Theory)을 중심으로)

  • Suh, Yong-Wan;Choi, Dong-Yeon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.4
    • /
    • pp.428-436
    • /
    • 2020
  • The purpose of this study is to analyze the needs for developing a curriculum that strengthens the expertise and job competency of long-term care service workers. Specifically, the researchers analyzed previous studies on national long-term care services and national policy data, and conducted focus group interviews with 14 experts from related agencies. Activity theory was applied as the framework for analysis, and a questionnaire on the importance and difficulty of the proposed subjects was administered to 25 long-term care service employees to validate the results of the qualitative analysis. The upper subject-goal-tool part of the activity system was treated as the main area of action, while the lower rule-community-division of labor part was treated as the context for action, and the implications for the needs analysis and the future operation of the online curriculum are summarized accordingly. In total, six courses were identified as needing development. These courses could be delivered as learner-centered flipped learning for long-term care service workers, and various educational methods combining collective education and supplementary education were proposed. Based on the results, implications for the effective management of the courses in the educational field are suggested at the end of the study.

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 letters of the English alphabet, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometer data is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied so that our system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a Nintendo Wii remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is favorable given the larger number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone. The participating children exhibited improved concentration and more active reactions to the service with our gesture interface. To evaluate the effectiveness of the gesture interface, a test was taken by the children after they experienced an English teaching service; those who played with the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for enriching real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
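As a rough illustration of the reference-pattern optimization idea described above, the sketch below implements a 1-nearest-neighbor instance-based classifier whose stored patterns track positive and negative contribution counts and are pruned periodically. The distance metric, thresholds, and feature representation are placeholder assumptions, not the authors' actual design.

```python
import numpy as np

class PrunedIBLClassifier:
    """1-NN instance-based learner with periodic reference-pattern pruning (illustrative)."""

    def __init__(self, min_positive=1, max_negative=3):
        self.refs = []                    # each entry: [feature_vector, label, pos_count, neg_count]
        self.min_positive = min_positive  # assumed threshold for "very low positive contribution"
        self.max_negative = max_negative  # assumed threshold for "high negative contribution"

    def add_reference(self, features, label):
        self.refs.append([np.asarray(features, dtype=float), label, 0, 0])

    def classify(self, features, true_label=None):
        x = np.asarray(features, dtype=float)
        # nearest stored reference pattern by Euclidean distance (a placeholder metric)
        nearest = min(self.refs, key=lambda r: np.linalg.norm(r[0] - x))
        predicted = nearest[1]
        if true_label is not None:        # online feedback updates contribution counts
            if predicted == true_label:
                nearest[2] += 1           # positive contribution
            else:
                nearest[3] += 1           # negative contribution
        return predicted

    def prune(self):
        # run periodically: drop patterns with too little positive or too much negative contribution
        self.refs = [r for r in self.refs
                     if r[2] >= self.min_positive and r[3] < self.max_negative]
```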

Design Strategy for Improving the Effect of Educational Contents for Public Institutions (공공기관 교육용 콘텐츠의 학습효과 증진을 위한 디자인 전략)

  • Park, Sung-Euk
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.3
    • /
    • pp.444-453
    • /
    • 2010
  • Knowledge and information are now rapidly accumulated through networks, and through this, epochal changes and developments are occurring in diverse social, economic, and cultural domains. In particular, digital education, which has emerged together with changes in the educational environment, is establishing itself as an educational system that can replace traditional methods by overcoming their limitations of time and space. The role of GUI (Graphical User Interface) design, which enhances users' comprehension and convenience as a means of conveying vast amounts of information, is growing accordingly. Consequently, this research analyzes the educational content used by public institutions and investigates an educational content design for more effective learner education.

Prediction of ocean surface current: Research status, challenges, and opportunities. A review

  • Ittaka Aldini;Adhistya E. Permanasari;Risanuri Hidayat;Andri Ramdhan
    • Ocean Systems Engineering
    • /
    • v.14 no.1
    • /
    • pp.85-99
    • /
    • 2024
  • Ocean surface currents play an essential role in the Earth's climate system and significantly impact the marine ecosystem, weather patterns, and human activities. However, predicting ocean surface currents remains challenging due to the complexity and variability of the oceanic processes involved. This review article provides an overview of the current research status, challenges, and opportunities in the prediction of ocean surface currents. We discuss the various observational and modeling approaches used to study ocean surface currents, including satellite remote sensing, in situ measurements, and numerical models. We also highlight the major challenges facing the prediction of ocean surface currents, such as data assimilation, model-observation integration, and the representation of sub-grid scale processes. We suggest that future research should focus on developing advanced modeling techniques, such as machine learning, and on integrating multiple observational platforms to improve the accuracy and skill of ocean surface current predictions. We also emphasize the need to address the limitations of observing instruments, such as delays in receiving data, versioning errors, missing data, and undocumented data processing techniques; improving data availability and quality will be essential for enhancing the accuracy of predictions. Future research should also develop effective bias-correction methods and systematic data preprocessing procedures, and utilize combined models and explainable AI (xAI) models to incorporate data from various sources. Advancements in predicting ocean surface currents will benefit applications such as maritime operations, climate studies, and ecosystem management.

Standardization Strategy on 3D Animation Contents (3D 애니메이션 콘텐츠의 SCORM 기반 표준화 전략)

  • Jang, Jae-Kyung;Kim, Sun-Hye;Kim, Ho-Sung
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.218-222
    • /
    • 2006
  • In making 3D animation with digital technology, it is necessary to increase productivity and reusability by managing the production pipeline systematically through standardization of animation content. For this purpose, we develop an animation content management system that can manage all kinds of information on the production pipeline, based on the SCORM standard from e-learning, taking production, publication, and re-editing into account. A scene, as the unit of visual semantics, is standardized into an object that contains metadata on the place, cast, weather, season, time, and viewpoint of the scene. The metadata of the content also includes information on copyright, publication, description, etc., so that it plays an important role in management and publication. If an effective metadata management system such as an ontology is implemented, multimedia content can be searched powerfully, which in turn will promote the production and publication of UCC (user-created content). Using the metadata of content objects, users and producers can easily search for and reuse the content. They can then choose content objects according to their preferences and produce their own creative animations by reorganizing and packaging the selected objects.
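To make the scene-object idea above concrete, here is a minimal sketch of such a metadata record as a Python dataclass. The field names follow the attributes listed in the abstract (place, cast, weather, season, time, viewpoint, plus rights and description); the actual SCORM packaging format and controlled vocabularies used by the authors are not specified here, and the example values are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneObject:
    """Scene-level metadata record, roughly following the attributes named in the abstract."""
    scene_id: str
    place: str
    cast: List[str] = field(default_factory=list)
    weather: str = ""
    season: str = ""
    time: str = ""
    viewpoint: str = ""
    copyright: str = ""        # rights information governing reuse and publication
    description: str = ""      # free-text summary used for search and repackaging

# hypothetical example: a searchable scene record that could be reorganized into new content
scene = SceneObject(
    scene_id="S012",
    place="harbor",
    cast=["fox", "rabbit"],
    weather="rain",
    season="autumn",
    time="night",
    viewpoint="wide shot",
    copyright="CC BY-NC",
    description="The fox waits for the rabbit under a street lamp.",
)
```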

An Empirical Study on the Effect of CRM System on the Performance of Pharmaceutical Companies (고객관계관리 시스템의 수준이 BSC 관점에서의 기업성과에 미치는 영향 : 제약회사를 중심으로)

  • Kim, Hyun-Jung;Park, Jong-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.43-65
    • /
    • 2010
  • Facing an increasingly complex business environment over the past decade, many companies have been adopting new strategic frameworks such as Customer Relationship Management (CRM) systems to achieve sustainable profitability and to survive serious competition. In many business areas, CRM systems have advanced a great deal through continuous correction of defects and overall integration. However, pharmaceutical companies in Korea have been slow to adopt them, since they tend to hold fast to traditional sales and marketing practices based on the personal networks of sales representatives. Against this background, this article empirically examines the current status of CRM systems and their effects on the performance of pharmaceutical companies by applying the four perspectives of the BSC method: financial, customer, learning and growth, and internal process. For this purpose, a survey was conducted by e-mail and post among employers and employees working in pharmaceutical firms. Of the 140 responses collected, 113 were used for statistical analysis with the SPSS ver. 15 package; reliability analysis, factor analysis, and regression were performed. The study revealed that, as expected, the CRM system had a significant effect on improving both the financial and non-financial performance of pharmaceutical companies. The proposed regression model fits well, and among the components, the CRM marketing information system showed a substantial impact on company outcomes in terms of profitability, growth, and investment. Useful analytical information from the CRM marketing information system appears to enable pharmaceutical firms to set up effective marketing and sales strategies, which result in favorable financial performance by ultimately enhancing value for stakeholders, not to mention short-term profit and mid-term growth potential. The CRM system influenced not only the financial but also the non-financial performance of pharmaceutical companies; further analysis of each component showed that the CRM marketing information system had a statistically significant effect on this performance as well, consistent with the financial results. The CRM system is believed to provide companies with an efficient way of managing customers through valuable standardized business processes and prompt responses to specific customer needs. It consequently induces customer satisfaction and retention, improving performance over the long term; that is, a virtuous circle of value creation becomes the cornerstone of sustainable growth. However, the research failed to find evidence supporting the hypothesized favorable influence of the CRM sales representative assessment system and the CRM customer analysis system on management performance. This result is thought to reflect a gap between salespeople's and respondents' actual work duties and the far-sighted goals of the strategic analysis framework. Ordinary salespeople seem dedicated to short-term goals such as meeting sales targets and receiving incentive bonuses, and as such they tend to rely on personal networks and on sales and promotional expenses rather than on the CRM system. The findings propose a link between the CRM information system and performance, empirically indicating that pharmaceutical companies have been implementing CRM systems as an effective strategic business framework for more balanced achievement, grounded in an understanding of both the CRM system and integrated performance. The study provides initial empirical evidence of the positive impact of a supportive CRM system on firm performance, especially in the pharmaceutical industry. It also brings out unmet needs for more practical system design, improved employee awareness, and greater system utilization in the field. On the basis of the insights from this exploratory study, confirmatory research with more appropriate measurement tools and a larger sample size should follow.
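As a rough illustration of the kind of analysis reported in this abstract, the sketch below regresses one BSC performance measure on the levels of three CRM subsystems. The column names and data file are hypothetical placeholders; the study itself used its own survey instrument and SPSS ver. 15.

```python
import pandas as pd
import statsmodels.api as sm

# hypothetical survey data: one row per respondent, composite scores per construct
df = pd.read_csv("crm_survey.csv")

# CRM subsystem levels (hypothetical column names)
X = df[["marketing_info_system", "sales_assessment_system", "customer_analysis_system"]]
X = sm.add_constant(X)                 # add intercept term

# one BSC perspective at a time; repeat for customer, learning/growth, internal process
y = df["financial_performance"]

model = sm.OLS(y, X).fit()
print(model.summary())                 # coefficients and p-values per CRM component
```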

Construction of the Scientific Effective Learning System on Web-Based Instruction (웹을 이용한 효과적인 과학학습체제의 구축 및 태도 변화 -중학교 지구과학 내용을 중심으로 -)

  • Choi, Young-Hee;Yoon, Ill-Hee
    • Journal of the Korean earth science society
    • /
    • v.21 no.3
    • /
    • pp.208-218
    • /
    • 2000
  • The popularization and common use of the Internet are being realized these days thanks to the development of the Web. Using the Web as a teaching instrument is called Web-Based Instruction (WBI). WBI, along with constructivism, has a great influence on education today. However, many problems have been found in learners' use of the numerous WBI programs in cyberspace. Above all, the interaction between teachers and learners is insufficient, and searching for materials on the Web is not easy. It is very difficult for a teacher to design and construct a full WBI program for school teaching, but it is not so difficult to design and construct a WBI program based on interaction. Therefore, if a teacher who is familiar with the students makes a web page and provides a cyberspace, the students will interact with one another very easily within it. They can use e-mail and bulletin boards, or discuss and chat. This cyberspace will make up for the insufficient interaction in school life. However, because this approach requires all students to have adequate equipment, another kind of marginalized learner may emerge. This problem can be addressed systematically through government-backed initiatives to supply inexpensive PCs and other such projects.

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used for efficient multi-class computation to reduce computation time, but they may deteriorate classification performance. Third, a key difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class; such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Thus, boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor; in this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for the classifiers in each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
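As a small illustration of the geometric mean-based criterion this abstract refers to, the sketch below computes the geometric mean of per-class recalls, which drops to zero when any rating class is never predicted correctly, unlike arithmetic accuracy. This shows only the evaluation measure under that assumption; the MGM-Boost weight update itself is not reproduced here.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls (sensitive to neglected minority classes)."""
    cm = confusion_matrix(y_true, y_pred)
    recalls = np.diag(cm) / cm.sum(axis=1)      # recall for each class
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# example: a classifier that never predicts the minority class scores 0.0 here,
# even though its arithmetic accuracy is 6/7
y_true = [0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 0, 1, 1, 0]
print(geometric_mean_accuracy(y_true, y_pred))  # -> 0.0
```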