• Title/Summary/Keyword: Decision-making Tree


A Study on the Influence Diagrams for the Application to Containment Performance Analysis (격납용기 성능해석을 위한 영향도에 관한 연구)

  • Park, Joon-Won;Jae, Moon-Sung;Chun, Moon-Hyun
    • Nuclear Engineering and Technology
    • /
    • v.28 no.2
    • /
    • pp.129-136
    • /
    • 1996
  • The influence diagram method is applied to the containment performance analysis of Young-Gwang (YGN) 3&4 in an effort to overcome some drawbacks of the current containment performance analysis method. Event tree methodology has been adopted as the containment performance analysis method; there are, however, some drawbacks to it. This study aims to overcome three major drawbacks of the current method: 1) an event tree cannot express dependency between events explicitly; 2) the Accident Progression Event Tree (APET) cannot represent the entire containment system; 3) it is difficult to consider the decision-making problem. To resolve these problems, influence diagrams are proposed. In the present work, the applicability of influence diagrams has been demonstrated for the YGN 3&4 containment performance analysis, and the accident management strategy assessments of this study are in good agreement with those of the YGN 3&4 IPE. Sensitivity analysis has been performed to identify the relatively important variables for early containment failure, late containment failure, and basemat melt-through. In addition, influence diagrams are used to assess two accident management strategies: 1) RCS depressurization and 2) cavity flooding. It is shown that influence diagrams can be applied to containment performance analysis.

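To illustrate the influence-diagram idea in the abstract above, the following is a minimal sketch of evaluating an accident-management decision by expected utility; the node names, probabilities, and utilities are hypothetical placeholders, not values from the YGN 3&4 analysis.

```python
# Minimal influence-diagram evaluation by expected utility (hypothetical numbers).
# Decision node: whether to depressurize the RCS.
# Chance node: early containment failure, whose probability depends on the decision.
# Value node: a utility that penalizes containment failure.

# P(early containment failure | decision) -- illustrative values only
p_failure = {"depressurize": 0.02, "do_nothing": 0.10}

# Utility of each outcome -- illustrative values only
utility = {"failure": -1000.0, "no_failure": 0.0}

def expected_utility(decision: str) -> float:
    """Sum utilities over the chance node, weighted by its conditional probability."""
    p = p_failure[decision]
    return p * utility["failure"] + (1.0 - p) * utility["no_failure"]

best = max(p_failure, key=expected_utility)
for d in p_failure:
    print(f"{d}: EU = {expected_utility(d):.1f}")
print("Preferred strategy:", best)
```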

Development of Thinning Effect Analysis Model (TEAM) Using Individual-Tree Distance-Independent Growth Model of Pinus koraiensis Stands (잣나무 임분의 개체목 거리독립생장모델을 이용한 간벌효과 분석모델 개발)

  • Kwon, Soonduk;Kim, Seonyoung;Chung, Joosang;Kim, Hyung-Ho
    • Journal of Korean Society of Forest Science
    • /
    • v.96 no.6
    • /
    • pp.742-749
    • /
    • 2007
  • The objective of this study was to develop a thinning effect analysis model (TEAM) using an individual-tree distance-independent growth model of Pinus koraiensis stands. TEAM was designed to analyze the thinning effects associated with such thinning prescriptions as the number, timing, intensity, and method of thinnings. To test the TEAM application, stand growth effects were compared across seven scenarios defined by thinning prescription plans. As a result, it was possible to estimate the number of trees, height, and volume by diameter (DBH) class of individual trees, as well as the average diameter growth, height growth, number of trees, and volume growth per ha of stands. According to the sensitivity analysis on one Pinus koraiensis stand, much greater volume at rotation age could not be assured by stand density control through thinning prescriptions. With thinning, the total yield volume was greater by $40{\sim}75m^3$ per ha, the average diameter growth by up to 5 cm, and the average height growth by up to 1 m compared with non-thinning as stand age increased. TEAM, as a decision-making support system, can be used for testing thinning prescriptions and selecting among thinning prescription plans in different site-specific stand environments.
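
As a loose illustration of a distance-independent individual-tree growth loop with an optional thinning, the sketch below uses a made-up growth increment function and parameters; it is not the fitted TEAM model.

```python
import random

def dbh_increment(dbh_cm: float, stand_density: int) -> float:
    """Illustrative annual DBH increment: slows with tree size and with crowding."""
    return max(0.05, 0.6 - 0.004 * dbh_cm - 0.0004 * stand_density)

def simulate(n_trees=1000, years=40, thin_age=None, thin_fraction=0.3, seed=42):
    random.seed(seed)
    trees = [random.uniform(8.0, 12.0) for _ in range(n_trees)]  # initial DBH (cm)
    for age in range(1, years + 1):
        if thin_age is not None and age == thin_age:
            # Thin from below: remove the smallest fraction of trees.
            trees.sort()
            trees = trees[int(len(trees) * thin_fraction):]
        trees = [d + dbh_increment(d, len(trees)) for d in trees]
    return sum(trees) / len(trees), len(trees)

mean_dbh_no_thin, n_no_thin = simulate()
mean_dbh_thin, n_thin = simulate(thin_age=20)
print(f"No thinning : {n_no_thin} trees, mean DBH {mean_dbh_no_thin:.1f} cm")
print(f"Thinning@20 : {n_thin} trees, mean DBH {mean_dbh_thin:.1f} cm")
```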

Forming Shop Analysis with Adaptive Systems Approach (적응시스템 접근법을 이용한 조선소 가공공장 분석)

  • Dong-Hun Shin;Jong-Hun Woo;Jang-Hyun Lee;Jong-Gye Shin
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.39 no.3
    • /
    • pp.75-80
    • /
    • 2002
  • In these days of severe competition for survival, the world has shifted a great deal toward a global, digitally oriented period. Enterprises try to introduce new management and production systems to adapt to such change. However, if only new technologies are applied to an enterprise without a definite analysis of its manufacturing, failure follows as a logical consequence. Hence, an enterprise must analyze its manufacturing system definitely and needs new methodologies to mitigate risk. This study suggests a new approach, a systems approach for process improvement, organized into systems analysis, systems diagnosis, and systems verification. Systems analysis analyzes manufacturing systems with the object-oriented methodology UML (Unified Modeling Language) from the product, process, and resource viewpoints. Systems diagnosis identifies the constraints to optimize the system through scientific management or TOC (Theory of Constraints). Systems verification shows the solution by applying virtual manufacturing techniques to the core problem that emerged from systems diagnosis. This research shows the artifacts that improve productivity when the above methodology is applied to a forming shop. UML provides a definite tool for analysis and re-usability that adapts easily to the environment. The logical tree of TOC provides a logical tool to optimize the forming shop. The discrete-event simulator QUEST provides a decision-making tool to verify the optimized forming shop.
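
As a rough sketch of the kind of discrete-event model that QUEST is used for in the abstract above, the following uses the SimPy library (a stand-in, not the study's tool, and assumed to be installed); station capacities and processing times are hypothetical.

```python
import random
import simpy  # third-party discrete-event simulation package, assumed installed

RANDOM_SEED = 1
N_PLATES = 50

def plate(env, name, bending_station, throughput_log):
    """A plate waits for the bending station, is processed, and leaves the shop."""
    with bending_station.request() as req:
        yield req
        yield env.timeout(random.uniform(20, 40))  # processing time in minutes
        throughput_log.append((name, env.now))

def source(env, bending_station, throughput_log):
    """Release plates into the shop at random intervals."""
    for i in range(N_PLATES):
        env.process(plate(env, f"plate-{i}", bending_station, throughput_log))
        yield env.timeout(random.expovariate(1 / 15.0))  # mean inter-arrival ~15 min

random.seed(RANDOM_SEED)
env = simpy.Environment()
bending_station = simpy.Resource(env, capacity=2)  # two benders: a candidate constraint
log = []
env.process(source(env, bending_station, log))
env.run()
print(f"Processed {len(log)} plates; last completion at {log[-1][1]:.0f} min")
```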

A Study on Classification of Crown Classes and Selection of Thinned Trees for Major Conifers Using Machine Learning Techniques (머신러닝 기법을 활용한 주요 침엽수종의 수관급 분류와 간벌목 선정 연구)

  • Lee, Yong-Kyu;Lee, Jung-Soo;Park, Jin-Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.2
    • /
    • pp.302-310
    • /
    • 2022
  • Here we aimed to classify the crown classes of the major coniferous tree species (Pinus densiflora, Pinus koraiensis, and Larix kaempferi) using tree measurement information and machine learning algorithms to establish an efficient forest management plan. We used national forest monitoring information amassed over nine years as the tree measurement information, and random forest (RF), XGBoost (XGB), and LightGBM (LGBM) as the machine learning algorithms. We compared and evaluated the algorithms using accuracy, precision, recall, and F1 score. The RF algorithm had the highest performance evaluation score for all tree species, with its highest scores for Pinus densiflora: an accuracy of about 65%, a precision of about 72%, a recall of about 60%, and an F1 score of about 66%. Among the crown classes, the classification accuracy for dominant trees was above about 80%, but that for co-dominant, intermediate, and overtopped trees was evaluated as low. We consider that the results of this study can be used as reference data for decision-making in the selection of thinned trees for forest management.
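
A hedged sketch of the comparison described above, scoring RF, XGBoost, and LightGBM classifiers with accuracy, precision, recall, and F1; synthetic data stands in for the national forest monitoring measurements, and the xgboost and lightgbm packages are assumed to be installed.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier    # assumes xgboost is installed
from lightgbm import LGBMClassifier  # assumes lightgbm is installed

# Four classes standing in for crown classes (dominant, co-dominant, intermediate, overtopped).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGB": XGBClassifier(n_estimators=300, eval_metric="mlogloss", random_state=0),
    "LGBM": LGBMClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred, average='macro'):.3f}",
          f"rec={recall_score(y_te, pred, average='macro'):.3f}",
          f"f1={f1_score(y_te, pred, average='macro'):.3f}")
```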

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search reviews from a search engine such as Google or social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make a trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely accepted for text mining of the Web. Sentiment analysis has been used to classify documents through machine learning techniques, such as the decision tree, neural networks, and support vector machines (SVMs), and is used to determine the attitude, position, and sensibility of people who write articles about various topics published on the Web. Regardless of the polarity of customer reviews, emotional reviews are very helpful materials for analyzing the opinions of customers. Sentiment analysis helps with understanding what customers really want, instantly, through automated text mining techniques. It applies text mining to text on the Web to extract subjective information for analysis and to determine the attitude or position of the person who wrote an article and presented an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts at China's online stock forum by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as logit, the decision tree, and SVM. We employed sentiment analysis to develop our model for the selection and detection of hot topics from China's online stock forum. The sentiment analysis calculates a sentiment value for a document by comparison and classification against a polarity sentiment dictionary (positive or negative). The online stock forum was an attractive site because of its information about stock investment. Users post numerous texts about stock movement, analyzing the market according to government policy announcements, market reports, reports from research institutes on the economy, and even rumors. We divided the online forum's topics into 21 categories to utilize sentiment analysis. One hundred forty-four topics were selected among the 21 categories of the online stock forum. The posts were crawled to build a positive and negative text database. We ultimately obtained 21,141 posts on 88 topics by preprocessing the text from March 2013 to February 2015. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed a decision tree model to detect hot topics with three algorithms: CHAID, CART, and C4.5. The results of CHAID were subpar compared to the others. We also employed SVM to detect the hot topics from negative data. The SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search, to detect the hot topics.
The detection of hot topics using sentiment analysis provides investors with the latest trends and hot topics in the stock forum so that they no longer need to search the vast amounts of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes towards government policy and firms' products and services.
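
The grid-searched RBF-kernel SVM step can be sketched as follows; the synthetic features stand in for the sentiment-derived interest features, and the parameter grid is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy binary "hot topic / not hot" data standing in for sentiment-derived features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C and gamma for an RBF-kernel SVM, scored by F1.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="f1")
search.fit(X_tr, y_tr)

print("Best parameters:", search.best_params_)
print("Held-out F1:", round(search.score(X_te, y_te), 3))
```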

A Development of Defect Prediction Model Using Machine Learning in Polyurethane Foaming Process for Automotive Seat (머신러닝을 활용한 자동차 시트용 폴리우레탄 발포공정의 불량 예측 모델 개발)

  • Choi, Nak-Hun;Oh, Jong-Seok;Ahn, Jong-Rok;Kim, Key-Sun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.6
    • /
    • pp.36-42
    • /
    • 2021
  • With recent developments in the Fourth Industrial Revolution, the manufacturing industry has changed rapidly. Through key aspects of the Fourth Industrial Revolution, super-connection and super-intelligence, machine learning can be used to make fault predictions during the foam-making process. Polyol and isocyanate are the components of polyurethane foam, and much research has shown that the specific mixture ratio and temperature can affect the characteristics of the product. Based on these characteristics, this study collects data on each factor during the foam-making process and applies them to machine learning in order to predict faults. The algorithms used are the decision tree, kNN, and an ensemble algorithm, and these algorithms learn from 5,147 cases. Based on 1,000 pieces of validation data, the learning results show up to 98.5% accuracy using the ensemble algorithm. Therefore, faults in currently produced parts can be identified by collecting real-time data on each factor during the foam-making process. Furthermore, control of each of the factors may reduce the fault rate.
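
A minimal sketch of the fault-prediction setup described above, with a decision tree, kNN, and a simple voting ensemble trained on simulated process data; the mixture-ratio and temperature values and the fault rule are invented for illustration, not the study's 5,147 cases.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 6000
X = np.column_stack([
    rng.normal(1.0, 0.05, n),   # polyol/isocyanate mixture ratio (illustrative)
    rng.normal(40.0, 2.0, n),   # mold temperature in deg C (illustrative)
])
# Illustrative rule: parts far from the nominal ratio or temperature tend to be faulty.
y = ((np.abs(X[:, 0] - 1.0) > 0.07) | (np.abs(X[:, 1] - 40.0) > 3.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1000, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
knn = KNeighborsClassifier(n_neighbors=7)
ensemble = VotingClassifier([("tree", tree), ("knn", knn)], voting="soft")

for name, model in [("Decision tree", tree), ("kNN", knn), ("Ensemble", ensemble)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```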

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has raised the interest of many researchers in managing this huge amount of information. It also requires professionals capable of classifying relevant information, and hence text classification is introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, different kinds of techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most attempts have been made by proposing a new algorithm or modifying an existing one, and such research can be said to have already reached certain limits for further improvement. In this study, rather than proposing a new algorithm or modifying an existing one, we focus on finding a way to modify the use of data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or in other words noisy data, which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, might have noise characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is trained on the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabularies included in the documents; if the viewpoints of the training data and the target data differ, the features may appear different between the two datasets. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various kinds of sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms because they are not developed to recognize different types of data representation at one time and to combine them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning in our study. However, unlabeled data might degrade the performance of the document classifier.
Therefore, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
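
RSESLA itself is not reproduced here; as a loose illustration of folding unlabeled documents into training, the sketch below uses scikit-learn's SelfTrainingClassifier with a TF-IDF text representation on a toy corpus.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

docs = [
    "stocks rallied after the earnings report",        # finance (label 0)
    "the central bank kept interest rates unchanged",  # finance (label 0)
    "the team won the championship game last night",   # sports  (label 1)
    "the striker scored twice in the second half",     # sports  (label 1)
    "bond yields fell as investors sought safety",     # unlabeled
    "fans celebrated the victory in the stadium",      # unlabeled
]
labels = np.array([0, 0, 1, 1, -1, -1])  # -1 marks unlabeled documents

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Self-training: confident predictions on unlabeled documents are pseudo-labeled
# and fed back into training the base classifier.
base = LogisticRegression(max_iter=1000)
model = SelfTrainingClassifier(base, threshold=0.6)
model.fit(X, labels)

print(model.predict(vec.transform(["investors watched interest rates closely"])))
```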

The effective management of length of stay for patients with acute myocardial infarction in the era of digital hospital (디지털 병원시대의 급성심근경색증 환자 재원일수의 효율적 관리 방안)

  • Choi, Hee-Sun;Lim, Ji-Hye;Kim, Won-Joong;Kang, Sung-Hong
    • Journal of Digital Convergence
    • /
    • v.10 no.1
    • /
    • pp.413-422
    • /
    • 2012
  • In this study, we developed a severity-adjusted length of stay (LOS) model for acute myocardial infarction (AMI) patients using data from the hospital discharge survey and proposed measures for medical quality management and policy development. The dataset comprised 2,309 records from the hospital discharge survey from 2004 to 2006. The severity-adjusted LOS model for AMI patients was developed by data mining analysis. From the decision tree model, the main factors affecting the LOS of AMI patients were CABG and comorbidity. The difference between the severity-adjusted LOS from the ensemble model and the real LOS was compared, and it was confirmed that insurance type and hospital location were statistically associated with LOS. In conclusion, hospitals should develop severity-adjusted LOS models for frequent diseases to manage LOS variation efficiently and apply them in their medical information systems.
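
The severity-adjustment idea can be sketched as follows: a tree model predicts the expected LOS from severity factors (simulated CABG and comorbidity flags here), and the residual against the real LOS is what remains to be explained by non-clinical factors such as insurance type or hospital location. All numbers below are simulated, not survey data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2309  # same order of magnitude as the discharge-survey records
cabg = rng.integers(0, 2, n)          # 1 if CABG performed (simulated)
comorbidity = rng.integers(0, 4, n)   # comorbidity count (simulated)
los = 5 + 6 * cabg + 2 * comorbidity + rng.normal(0, 2, n)  # simulated real LOS (days)

X = np.column_stack([cabg, comorbidity])
model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, los)

severity_adjusted_los = model.predict(X)   # expected LOS given severity
residual = los - severity_adjusted_los     # variation not explained by severity
print("Mean |residual| (days):", round(float(np.abs(residual).mean()), 2))
```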

A Case Study on Machine Learning Applications and Performance Improvement in Learning Algorithm (기계학습 응용 및 학습 알고리즘 성능 개선방안 사례연구)

  • Lee, Hohyun;Chung, Seung-Hyun;Choi, Eun-Jung
    • Journal of Digital Convergence
    • /
    • v.14 no.2
    • /
    • pp.245-258
    • /
    • 2016
  • This paper aims to present ways to achieve significant results through performance improvement of learning algorithms in research that applies machine learning. Research papers showing results from machine learning methods were collected as data for this case study. In addition, suitable machine learning methods for each field were selected and suggested in this paper. As a result, SVM for engineering, the decision tree algorithm for medical science, and SVM for other fields showed their efficiency in terms of frequent use cases and classification/prediction. By analyzing cases of machine learning application, a general characterization of application plans is drawn. Machine learning application has three steps: (1) data collection; (2) learning from the data through an algorithm; and (3) significance testing of the algorithm. Performance is improved at each step by combining algorithms. Ways of improving performance are classified as multiple machine learning structure modeling, $+{\alpha}$ machine learning structure modeling, and so forth.
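
A hedged sketch of the three-step pattern described above (collect data, learn with an algorithm, test significance), comparing a single SVM against a combined voting model with a paired t-test on cross-validation fold scores; the dataset is a stand-in.

```python
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)        # step 1: data collection (stand-in dataset)

svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
combined = VotingClassifier(
    [("svm", svm), ("tree", DecisionTreeClassifier(max_depth=5, random_state=0))],
    voting="soft",
)

scores_svm = cross_val_score(svm, X, y, cv=10)        # step 2: learning
scores_combined = cross_val_score(combined, X, y, cv=10)

t, p = ttest_rel(scores_combined, scores_svm)         # step 3: significance test
print(f"SVM mean acc      : {scores_svm.mean():.3f}")
print(f"Combined mean acc : {scores_combined.mean():.3f}")
print(f"Paired t-test p-value: {p:.3f}")
```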

A New Latent Class Model for Analysis of Purchasing and Browsing Histories on EC Sites

  • Goto, Masayuki;Mikawa, Kenta;Hirasawa, Shigeichi;Kobayashi, Manabu;Suko, Tota;Horii, Shunsuke
    • Industrial Engineering and Management Systems
    • /
    • v.14 no.4
    • /
    • pp.335-346
    • /
    • 2015
  • The electronic commerce site (EC site) has become an important marketing channel where consumers can purchase many kinds of products; their access logs, including purchase records and browsing histories, are saved in the EC sites' databases. These log data can be utilized for the purpose of web marketing. Customers who purchase many product items are good customers, whereas other customers, who do not purchase many items, cannot be regarded as good customers even if they browse many items. If the attributes of good customers and those of other customers are clarified, such information is valuable input for making a new marketing strategy. Regarding the product items, the characteristics of good items that are bought by many users are also valuable information. It is therefore necessary to construct a method to efficiently analyze such characteristics. This paper proposes a new latent class model that analyzes both purchasing and browsing histories to form latent item and user clusters. By applying the proposed model, an example of data analysis on an EC site is demonstrated. Through the clusters obtained by the proposed latent class model and the classification rules given by the decision tree model, new findings are extracted from the data of purchasing and browsing histories.
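
The paper's latent class model is not reproduced here; as a loose illustration of clustering users and items from a weighted purchase/browse matrix, the sketch below factorizes a small synthetic log with non-negative matrix factorization and reads off a dominant latent class per user and per item.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_users, n_items, n_latent = 20, 12, 3

# Synthetic log: purchases weighted more heavily than browses (illustrative weights).
browses = rng.poisson(1.0, size=(n_users, n_items))
purchases = rng.binomial(1, 0.15, size=(n_users, n_items))
V = browses + 3 * purchases

model = NMF(n_components=n_latent, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)   # user-by-latent-class weights
H = model.components_        # latent-class-by-item weights

user_cluster = W.argmax(axis=1)   # dominant latent class per user
item_cluster = H.argmax(axis=0)   # dominant latent class per item
print("User clusters:", user_cluster)
print("Item clusters:", item_cluster)
```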