• Title/Summary/Keyword: rule learning


Practical Knowledge of Geography Teacher in Process of Performance Assessment (수행평가 과정을 통해서 본 지리교사의 실천적 지식)

  • Ma, Kyeng-Muk
    • Journal of the Korean Geographical Society / v.42 no.1 s.118 / pp.96-120 / 2007
  • The purpose of this study is to examine the practical knowledge of a geography teacher that guides the teacher's conduct in performance assessment situations. In the classroom, every activity of a teacher is a unique creation and an expression of the teacher's knowledge and competency as an expert. Practical knowledge can be seen as a system of understanding that guides the teacher's decisions, involving the construction of contents to teach, methods of instruction, resources to use, and so on. Therefore, to fully read a teacher's instruction, we have to understand the teacher's practical knowledge. As an ordinary activity of learning and teaching, performance assessment is conducted in active learning and teaching situations and is intended to advance learning. Thus, all evaluating behavior conducted by a teacher can be understood through the teacher's practical knowledge. For this purpose, a series of performance assessment scenes conducted by a teacher were selected and observed, and the imagery, principles, and rules of practical knowledge were captured through qualitative research methods. The results suggest that practical knowledge influences the whole process of the geography teacher's performance assessment activity.

Knowledge based Genetic Algorithm for the Prediction of Peptides binding to HLA alleles common in Koreans (지식기반 유전자알고리즘을 이용한 한국인 빈발 HLA 대립유전자에 대한 결합 펩타이드 예측)

  • Cho, Yeon-Jin;Oh, Heung-Bum;Kim, Hyeon-Cheol
    • Journal of Internet Computing and Services / v.13 no.4 / pp.45-52 / 2012
  • T cells induce immune responses and thereby eliminate infected micro-organisms when peptides from microbial proteins are bound to HLAs on host cell surfaces. It is known that the more stable the binding of a peptide to an HLA is, the stronger the T cell response becomes, removing the source of infection more effectively. Accordingly, if peptides (HLA binders) that bind stably to a certain HLA are found, those peptides can be used to develop peptide vaccines against infectious diseases or even cancer. However, HLA is highly polymorphic, so a large number of alleles occur with appreciable frequency even within a single population. It is therefore very inefficient to find peptides stably bound to a number of HLAs by testing random candidate peptides against all the alleles frequent in the population. To solve this problem, computational methods have recently been developed to predict peptides that bind stably to a certain HLA. These methods can markedly decrease the number of candidate peptides to be examined by biological experiments. Accordingly, this paper not only introduces a machine learning method to predict peptides binding to an HLA, but also suggests a new prediction model, a so-called 'knowledge-based genetic algorithm', that had not previously been tried for HLA-binding peptide prediction. Although based on a genetic algorithm (GA), it showed better performance than a plain GA by incorporating expert knowledge into the algorithm. Furthermore, it could extract rules predicting the binding peptides of the HLA alleles common in Koreans.
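
The abstract does not give implementation details, so the following is only a rough, hypothetical sketch of how expert knowledge might be injected into a GA for this task: an individual is a simple per-position preference rule for 9-mer peptides, and "expert knowledge" is modeled as preferred anchor residues used to seed the population. The anchor preferences, fitness function, and data are placeholders, not the authors' method.

```python
# Sketch of a knowledge-based genetic algorithm for HLA-binding peptide rules.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9
# Hypothetical expert knowledge: anchor positions 2 and 9 (0-indexed 1 and 8).
ANCHOR_PREFERENCES = {1: "LM", 8: "VLI"}

def random_individual():
    """An individual is a per-position preferred-residue string (a simple rule)."""
    rule = [random.choice(AMINO_ACIDS) for _ in range(PEPTIDE_LEN)]
    for pos, residues in ANCHOR_PREFERENCES.items():
        if random.random() < 0.8:          # knowledge-based seeding of anchors
            rule[pos] = random.choice(residues)
    return "".join(rule)

def fitness(rule, binders, non_binders):
    """Average match count on known binders minus on known non-binders."""
    score = lambda pep: sum(p == r for p, r in zip(pep, rule))
    return (sum(score(p) for p in binders) / len(binders)
            - sum(score(p) for p in non_binders) / len(non_binders))

def evolve(binders, non_binders, pop_size=50, generations=100):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: fitness(r, binders, non_binders), reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, PEPTIDE_LEN - 1)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            if random.random() < 0.1:                     # point mutation
                child[random.randrange(PEPTIDE_LEN)] = random.choice(AMINO_ACIDS)
            children.append("".join(child))
        population = parents + children
    return max(population, key=lambda r: fitness(r, binders, non_binders))
```

In this sketch the knowledge only biases initialization; the paper may also use it during selection or mutation.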

Prediction of Target Motion Using Neural Network for 4-dimensional Radiation Therapy (신경회로망을 이용한 4차원 방사선치료에서의 조사 표적 움직임 예측)

  • Lee, Sang-Kyung;Kim, Yong-Nam;Park, Kyung-Ran;Jeong, Kyeong-Keun;Lee, Chang-Geol;Lee, Ik-Jae;Seong, Jin-Sil;Choi, Won-Hoon;Chung, Yoon-Sun;Park, Sung-Ho
    • Progress in Medical Physics / v.20 no.3 / pp.132-138 / 2009
  • Studies on target motion in 4-dimensional radiotherapy are being conducted worldwide to enhance treatment outcomes and the protection of normal organs. Prediction of tumor motion can be especially useful, or even essential, for free-breathing delivery systems such as respiratory gating and tumor tracking systems. A neural network is powerful for expressing a nonlinear time series because its prediction is not governed by a fixed statistical formula but learns a rule from the data. This study intended to assess the applicability of a neural network method for predicting tumor motion in 4-dimensional radiotherapy. The Scaled Conjugate Gradient algorithm was employed as the learning algorithm. Using respiration data for 10 patients, prediction by the neural network algorithm was compared with measurement by the real-time position management (RPM) system. The results showed that the neural network algorithm achieves excellent accuracy, with a maximum absolute error smaller than 3 mm, except for cases in which the maximum amplitude of respiration exceeded the range of respiration used in training the network, which indicates insufficient learning for extrapolation. The problem could be solved by acquiring a full range of respiration before the learning procedure. Further work is planned to verify the feasibility of practical application in 4-dimensional treatment systems, including prediction performance under various system latencies and irregular respiration patterns.
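
A minimal sketch of the general idea, not the paper's implementation: predict a future respiration-driven target position from a sliding window of past samples. The paper trains with Scaled Conjugate Gradient; scikit-learn's MLPRegressor does not offer SCG, so the 'lbfgs' solver stands in here, and the trace below is synthetic rather than RPM data.

```python
# Sliding-window neural-network prediction of a respiration-like motion trace.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(signal, window=10, horizon=1):
    """Turn a 1-D motion trace into (past window -> future sample) pairs."""
    X, y = [], []
    for i in range(len(signal) - window - horizon + 1):
        X.append(signal[i:i + window])
        y.append(signal[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Synthetic stand-in for a respiration trace in mm (illustration only).
t = np.linspace(0, 60, 1500)
trace = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)

X, y = make_windows(trace, window=10, horizon=3)   # predict a few steps ahead
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print("max absolute error (mm):", np.max(np.abs(pred - y[split:])))
```

The horizon parameter would be chosen to match the system latency mentioned in the abstract.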


Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science / v.19 no.3 / pp.49-61 / 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking when processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including: (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning because learning requires action, so training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee for writing the briefing report include coverage of (1) the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The length maximum for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers. Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments: "Simplicity is a virtue, rather than a curse". Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor". Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics of actors. The DSA mapping approach here concerns the match between strategy and environment, that is, the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives. The three case studies in this article permit the learner to apply propositions of aspiration-level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable; an upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise in both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions of both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought suggest where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory on how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning the skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and the accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."
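
As a purely illustrative aid to the aspiration adaptation description above (not part of the article), the tiny sketch below walks an aspiration level across a grid one goal variable at a time, using downward adjustments only for simplicity. The goal names, step size, and feasibility check are hypothetical.

```python
# Aspiration adaptation as a sequence of single-variable grid steps.
def adapt_aspiration(aspiration, is_feasible, priority, step=1, max_steps=100):
    """aspiration: dict goal -> level; is_feasible: dict -> bool;
    priority: goals ordered from most to least retrenchable."""
    for _ in range(max_steps):
        if is_feasible(aspiration):
            return aspiration                       # current aspiration is attainable
        goal = priority[0]                          # adjust exactly one goal variable
        aspiration = {**aspiration, goal: aspiration[goal] - step}
        priority = priority[1:] + [priority[0]]     # rotate so goals take turns
    return aspiration

# Hypothetical pricing example: margin and volume goals under a toy constraint.
start = {"margin_pct": 40, "volume_index": 30}
feasible = lambda a: a["margin_pct"] + a["volume_index"] <= 60
print(adapt_aspiration(start, feasible, ["volume_index", "margin_pct"]))
```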


Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content in product text data, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only openly available and easy to collect, but they also affect a business. In marketing, real-world information from customers is gathered on websites, not through surveys. Whether the posts on a website are positive or negative is reflected in customer response and sales, so firms try to identify this information. However, many reviews on a website are of uneven quality and difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, accuracy remains limited because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as Naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider sequential attributes of the data. An RNN handles ordered data well because it takes the time information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how well, and how, these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, an LSTM has input, output, and forget gates that can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can address the long-term dependency problem. Furthermore, when the LSTM follows the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
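
A minimal sketch of a CNN-LSTM stack of the kind described above, assuming TensorFlow/Keras and its built-in IMDB review set. The layer sizes and training schedule are illustrative, not the paper's configuration, so the 90.33% figure should not be expected from this snippet.

```python
# Integrated CNN-LSTM sentiment classifier on the Keras IMDB data set.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len = 20_000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.preprocessing.sequence.pad_sequences(x_test, maxlen=max_len)

model = models.Sequential([
    layers.Embedding(vocab_size, 128),          # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),    # CNN extracts local n-gram features
    layers.MaxPooling1D(4),                     # pooled sequence feeds the LSTM
    layers.LSTM(64),                            # LSTM models the remaining order
    layers.Dense(1, activation="sigmoid"),      # positive vs. negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```

Placing the LSTM after the pooling layer gives the end-to-end spatial-then-temporal structure the abstract refers to.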

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public with respect to the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big data analyses. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models, including logistic regression, random forest, XGBoost, LightGBM, and a deep neural network (DNN). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The development of prediction for financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which is based on multiple discriminant analysis and is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) used artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as random forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model for the entire sample. Most predictive models in this paper show classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest accuracy of 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Examining the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, random forest, XGBoost, LightGBM, and DNN show more desirable results: they achieve higher accuracy for both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval. Regarding the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the random forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by random forest and LightGBM, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage in terms of various evaluation standards; for instance, the random forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models that contain multiple classification machine learning models and conduct majority voting could be constructed to maximize overall performance.
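
A minimal sketch, not the paper's code, of the decile analysis described above: compare classifiers not only on overall accuracy but on accuracy within ten equal intervals of predicted default probability. The data and features here are synthetic stand-ins for the KSURE data.

```python
# Per-decile classification accuracy for two example models.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"Logit": LogisticRegression(max_iter=1000),
          "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0)}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]            # predicted default probability
    pred = (prob >= 0.5).astype(int)
    bins = pd.cut(prob, bins=np.linspace(0, 1, 11), include_lowest=True)
    report = pd.DataFrame({"bin": bins, "correct": pred == y_te})
    summary = report.groupby("bin", observed=True)["correct"].agg(["mean", "size"])
    print(f"\n{name}: overall accuracy {np.mean(pred == y_te):.3f}")
    print(summary.rename(columns={"mean": "accuracy", "size": "n_samples"}))
```

The per-bin sample counts also show how many cases each model pushes into the two extreme intervals, the property the abstract highlights for LightGBM and XGBoost.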

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Journal of KIISE: Software and Applications / v.37 no.6 / pp.455-463 / 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries. As sentence boundary candidates, the proposed method therefore considers every ending eomi (Korean sentence-final ending) as well as punctuation marks. We optimize engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of articles and blog documents, and Set2, consisting of web community documents, and we use F-measure to compare results. Detecting only periods as sentence boundaries, our baseline engine scored 96.5% on Set1 and 56.7% on Set2. We improved the baseline engine by adapting the features and the boundary search algorithm. For the final evaluation, we compared the adapted engine with the baseline engine on Set2; the adapted engine improved on the baseline by 39.6%. These results demonstrate the effectiveness of the proposed method for sentence boundary detection.
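
The sketch below illustrates the general pipeline the abstract describes (statistical classification of boundary candidates plus rule-based correction), under stated assumptions: the features, the tiny ending-eomi list, and the hand-made labels are hypothetical, and the real system's feature set and training corpus are much richer.

```python
# Statistical sentence boundary detection with a rule-based correction step.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

ENDING_EOMIS = ("다", "요", "죠")            # illustrative sentence-final endings

def candidates(text):
    """Indices of punctuation marks and ending-eomi characters."""
    return [i for i, ch in enumerate(text) if ch in ".?!" or ch in ENDING_EOMIS]

def features(text, i):
    return {"char": text[i],
            "prev": text[i - 1] if i > 0 else "<s>",
            "next": text[i + 1] if i + 1 < len(text) else "</s>",
            "next_is_space": i + 1 < len(text) and text[i + 1].isspace()}

def rule_correct(text, i, is_boundary):
    """Rule-based correction, e.g. never split inside a decimal number."""
    if text[i] == "." and 0 < i < len(text) - 1 \
            and text[i - 1].isdigit() and text[i + 1].isdigit():
        return False
    return is_boundary

# Training would use human-labeled web documents; labels below are toy values
# keyed by character index in the toy training string.
train_text = "오늘 날씨가 좋다 내일은 3.5도쯤 될까요? 모르겠다."
labels = {8: 1, 15: 0, 23: 1, 29: 1}
idx = candidates(train_text)
vec = DictVectorizer()
X = vec.fit_transform(features(train_text, i) for i in idx)
clf = LogisticRegression().fit(X, [labels.get(i, 0) for i in idx])

test_text = "가격은 12.5달러다 배송은 빠르다"
for i in candidates(test_text):
    pred = clf.predict(vec.transform([features(test_text, i)]))[0]
    print(i, test_text[i], rule_correct(test_text, i, bool(pred)))
```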

Development of Intelligent Internet Shopping Mall Supporting Tool Based on Software Agents and Knowledge Discovery Technology (소프트웨어 에이전트 및 지식탐사기술 기반 지능형 인터넷 쇼핑몰 지원도구의 개발)

  • 김재경;김우주;조윤호;김제란
    • Journal of Intelligence and Information Systems / v.7 no.2 / pp.153-177 / 2001
  • Nowadays, product recommendation is one of the important issues for both CRM and Internet shopping malls. Generally, a recommendation system tracks the past actions of a group of users to make recommendations to individual members of the group. Computer-mediated marketing and commerce have grown rapidly, and automatic recommendation methodologies have therefore attracted great attention. However, existing research and commercial tools for product recommendation still have many aspects that merit further consideration. To address these aspects, we devise a recommendation methodology that achieves greater recommendation effectiveness when applied to an Internet shopping mall. The suggested methodology is based on web log information, product taxonomy, association rule mining, and decision tree learning. To implement it, we also design an intelligent Internet shopping mall support system based on agent technology and develop it as a prototype. We applied the methodology and the prototype system to a leading Korean Internet shopping mall and provide experimental results. Through the experiment, we found that the suggested methodology can perform recommendation tasks both effectively and efficiently for real-world problems. Its systematic validity issues are also discussed.
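
A minimal sketch of the association-rule part of such a methodology, not the paper's system: one-level rules are mined from session-by-category transactions (as might be derived from the web log and product taxonomy) and then used to recommend categories. The category names and thresholds are illustrative only, and the decision tree component is omitted.

```python
# One-level association-rule mining and rule-based recommendation.
from collections import Counter
from itertools import combinations

sessions = [{"laptop", "mouse", "keyboard"},
            {"laptop", "mouse"},
            {"phone", "case"},
            {"laptop", "keyboard"},
            {"phone", "case", "charger"}]

MIN_SUPPORT, MIN_CONFIDENCE = 0.3, 0.6

item_count = Counter(item for s in sessions for item in s)
pair_count = Counter(frozenset(p) for s in sessions for p in combinations(sorted(s), 2))
n = len(sessions)

rules = []                     # (antecedent, consequent, support, confidence)
for pair, cnt in pair_count.items():
    if cnt / n < MIN_SUPPORT:
        continue
    a, b = tuple(pair)
    for x, y in ((a, b), (b, a)):
        conf = cnt / item_count[x]
        if conf >= MIN_CONFIDENCE:
            rules.append((x, y, cnt / n, conf))

def recommend(session_items, rules, top_n=3):
    """Recommend categories implied by rules whose antecedent is in the session."""
    hits = sorted((r for r in rules if r[0] in session_items and r[1] not in session_items),
                  key=lambda r: r[3], reverse=True)
    return [r[1] for r in hits][:top_n]

print(recommend({"laptop"}, rules))   # e.g. ['mouse', 'keyboard']
```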

  • PDF

A Study on Programming Concepts of Programming Education Experts through Delphi and Conceptual Metaphor Analysis

  • Kim, Dong-Man;Lee, Tae-Wuk
    • Journal of the Korea Society of Computer and Information / v.25 no.11 / pp.277-286 / 2020
  • In this paper, we propose a new educational approach to help learners form concepts by identifying the properties of programming concepts, targeting a group of experts in programming education. We identified the typical properties that programming education experts associate with programming learning elements through conceptual metaphor analysis, a qualitative research method, and confirmed their validity through the Delphi method. As a result, we identified 17 typical properties of programming concepts that learners should form in programming education. The conclusion of this study is that educational content needs to be composed more specifically to support learners' conceptualization of programming, as follows: 1) the concept of a variable covers understanding how to store data, how to set a name, the fact that a variable has an address, how to change a value, the various types of variables, and the meaning of the size of a variable; 2) the concept of an operator covers understanding how to perform the four arithmetic operations, how to handle logical operations, how to combine operators according to precedence, the meaning of operation symbols, and how to compare values; 3) the concept of a control structure covers understanding how to control the execution flow, how to make a logical judgment, how to set an execution rule, the meaning of sequential execution, and how to execute repeatedly.
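
As a purely illustrative aid (not from the paper), the toy snippet below touches the listed concept properties: variables (storing data, naming, changing values, type, size), operators (arithmetic, comparison, logic, precedence), and control structures (sequence, selection, repetition).

```python
# Toy example illustrating variable, operator, and control-structure concepts.
scores = [85, 90, 78]          # variable: a name bound to stored data (a list of ints)
threshold = 80                 # variable: its value could be reassigned later

total = 0
for s in scores:               # control structure: repetition over the stored data
    if s >= threshold and s % 2 == 0:    # operators: comparison, logic, precedence
        total = total + s      # operator: arithmetic; the variable's value changes
print("sum of even scores over threshold:", total)
print("size of scores:", len(scores))    # size as a property of the variable
```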

PDA Personalized Agent System (PDA용 개인화 에이전트 시스템)

  • 표석진;박영택
    • Proceedings of the Korea Intelligent Information System Society Conference / 2002.11a / pp.345-352 / 2002
  • Because communication time and cost on the wireless Internet grow with the amount of information, users want a personalized agent that provides services according to their interests, delivers customized information, and predicts information in a knowledge-based way. This paper builds a PDA personalized agent system for such wireless Internet users. To build the system, a profile-based agent engine and a knowledge-based method using user profiles are employed. The system monitors the actions users perform on web pages to identify documents of interest, analyzes documents obtained through information retrieval, and serves them divided into each user's documents of interest. To analyze the monitored documents effectively, Cobweb, an unsupervised clustering machine learning method, is used; this unsupervised learning uses a conceptual approach to cluster the retrieved information by the user's areas of interest. The clustering results are then fed into further machine learning to generate a profile of the user's documents of interest. The resulting profile is converted into rules, on the basis of which services are provided to the user; because these rules are derived from monitoring the user, they are updated periodically. The proposed system targets news documents produced by Internet newspapers and webzines to deliver news to users; it performs retrieval using user information and delivers the classified results to users via PDA or mobile phone.
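
A minimal sketch under stated assumptions of the pipeline described above: the paper clusters monitored documents with Cobweb (conceptual clustering), which scikit-learn does not provide, so KMeans over TF-IDF vectors stands in for it here. Each cluster then becomes a simple keyword rule that decides which new documents to push to the user's device; the documents and rule threshold are toy data.

```python
# Cluster monitored documents into interest groups, then derive keyword rules.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

monitored_docs = [                    # documents the user showed interest in (toy data)
    "stock market rises on export growth",
    "stock prices rally after strong earnings",
    "baseball team wins championship game",
    "league announces new baseball season schedule",
]

vec = TfidfVectorizer()
X = vec.fit_transform(monitored_docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Turn each interest cluster into a rule: its highest-weighted terms.
terms = vec.get_feature_names_out()
rules = [set(terms[i] for i in center.argsort()[::-1][:3])
         for center in km.cluster_centers_]

def should_serve(document, rules):
    """Serve the document if it shares a term with any interest rule."""
    words = set(document.lower().split())
    return any(rule & words for rule in rules)

print(rules)
print(should_serve("stock market rally expected today", rules))
```

Periodically re-running the clustering on newly monitored documents corresponds to the rule-update step mentioned in the abstract.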
