• Title/Summary/Keyword: Decision-tree technique


Comparison of Hospital Standardized Mortality Ratio Using National Hospital Discharge Injury Data (퇴원손상심층조사 자료를 이용한 의료기관 중증도 보정 사망비 비교)

  • Park, Jong-Ho;Kim, Yoo-Mi;Kim, Sung-Soo;Kim, Won-Joong;Kang, Sung-Hong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.4
    • /
    • pp.1739-1750
    • /
    • 2012
  • This study developed an assessment of medical service outcomes using administrative data by comparing hospital standardized mortality ratios (HSMR) across hospitals. We analyzed 63,664 cases from the 2007 and 2008 Hospital Discharge Injury Data provided by the Korea Centers for Disease Control and Prevention. Using data mining techniques, we compared a decision tree and logistic regression for developing a risk-adjustment model of in-hospital mortality. Our analysis shows that gender, length of stay, Elixhauser comorbidity index, hospitalization path, and primary diagnosis are the main variables influencing the mortality ratio. Comparing HSMRs computed with these standardized variables, we found substantial differences (55.6-201.6) among hospitals, which indicates quality gaps in medical service between hospitals. These results should be utilized to improve the quality of medical service.
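The HSMR arithmetic this abstract relies on can be sketched directly: the ratio is observed deaths divided by the deaths expected under a risk-adjustment model, scaled by 100. A minimal sketch, assuming a logistic-style model has already produced per-patient death probabilities; the function name and figures are illustrative, not from the study:

```python
def hsmr(observed_deaths, predicted_probs):
    """Hospital standardized mortality ratio.

    predicted_probs: per-patient in-hospital death probabilities from a
    risk-adjustment model (e.g. logistic regression on gender, length of
    stay, Elixhauser index, hospitalization path, primary diagnosis, as
    in the study). Expected deaths = sum of the probabilities.
    """
    expected = sum(predicted_probs)
    return 100.0 * observed_deaths / expected

# Hypothetical hospital: 12 observed deaths vs. 8.1 expected -> HSMR ~ 148,
# i.e. markedly more deaths than the case mix predicts.
ratio = hsmr(12, [0.081] * 100)
```

A value near 100 means observed mortality matches the risk-adjusted expectation; the study's spread of 55.6-201.6 is exactly this statistic varying across hospitals.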

A Study on the Prediction Model of the Elderly Depression

  • SEO, Beom-Seok;SUH, Eung-Kyo;KIM, Tae-Hyeong
    • The Journal of Industrial Distribution & Business
    • /
    • v.11 no.7
    • /
    • pp.29-40
    • /
    • 2020
  • Purpose: Modern society faces many urban problems, such as aging, the hollowing-out of old city centers, and polarization within cities. In this study, we apply big data and machine learning methodologies to predict depressive symptoms in the elderly population early, thereby contributing to solving the problem of elderly depression. Research design, data and methodology: We used the random forest technique and analyzed the correlation between the widely used CES-D10 scale and other variables to estimate the important variables. Two dependent variables were set up, one distinguishing normal from depressed and one distinguishing moderate from severe depression, and a total of 106 independent variables were included, covering the objective characteristics of the elderly as well as surveys of subjective health, cognitive ability, daily life quality, employment, household background, income, consumption, assets, and subjective expectations. Results: The study shows that satisfaction with the residential area, quality-of-life scores, and cognitive-ability scores have important effects in classifying elderly depression, and that satisfaction with living quality and economic conditions, and the number of outpatient visits to clinics in the living area, were also important variables. In a random forest performance evaluation, the model classifying whether a person has elderly depression achieved an accuracy of 86.3%, a sensitivity of 79.5%, and a specificity of 93.3%; the model classifying the degree of elderly depression achieved an accuracy of 86.1%, a sensitivity of 93.9%, and a specificity of 74.7%. Conclusions: In this study, the important variables of the estimated predictive model were identified using the random forest technique, with a focus on the predictive performance itself. Although the research has limitations, such as the lack of clear criteria for classifying depression levels and the failure to reflect variables beyond the KLoSA data, we expect that if additional variables are secured and high-performance predictive models are estimated through various machine learning techniques, early detection of depression can improve the quality of life of senior citizens and thereby support public policy decisions.
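The reported accuracy, sensitivity, and specificity all follow from a binary confusion matrix. A minimal sketch with illustrative counts (not the study's actual data): sensitivity is recall on the depressed class, specificity is recall on the normal class.

```python
def confusion_metrics(tp, fn, tn, fp):
    """Metrics from a binary confusion matrix.

    tp/fn: depressed cases flagged / missed by the classifier
    tn/fp: normal cases cleared / wrongly flagged
    """
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # share of depressed cases caught
    specificity = tn / (tn + fp)   # share of normal cases cleared
    return accuracy, sensitivity, specificity

# Illustrative counts chosen only to show the arithmetic:
acc, sens, spec = confusion_metrics(tp=159, fn=41, tn=187, fp=13)
# sensitivity = 159/200 = 0.795, specificity = 187/200 = 0.935
```

The trade-off visible in the paper's two models (79.5%/93.3% vs. 93.9%/74.7%) is this pair of recalls shifting as the classification task changes.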

Taxonomy of Performance Shaping Factors for Human Error Analysis of Railway Accidents (철도사고의 인적오류 분석을 위한 수행도 영향인자 분류)

  • Baek, Dong-Hyun;Koo, Lock-Jo;Lee, Kyung-Sun;Kim, Dong-San;Shin, Min-Ju;Yoon, Wan-Chul;Jung, Myung-Chul
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.31 no.1
    • /
    • pp.41-48
    • /
    • 2008
  • Enhanced machine reliability has dramatically reduced the rate and number of railway accidents, but for further reduction human error, which accounts for about 20% of accidents, must also be considered. The objective of this study was therefore to suggest a new taxonomy of performance shaping factors (PSFs) that can be used to identify the causes of human errors associated with railway accidents. Four categories (human, task, environment, and organization factors), 14 sub-categories (physical state; psychological state; knowledge/experience/ability; information/communication; regulation/procedure; specific character of task; infrastructure; device/MMI; working environment; external environment; education; direction/management; system/atmosphere; and welfare/opportunity), and 131 specific factors were suggested by carefully reviewing 8 representative published taxonomies: Casualty Analysis Methodology for Maritime Operations (CASMET), Cognitive Reliability and Error Analysis Method (CREAM), Human Factors Analysis and Classification System (HFACS), Integrated Safety Investigation Methodology (ISIM), Korea-Human Performance Enhancement System (K-HPES), Rail Safety and Standards Board (RSSB), TapRoot®, and Technique for Retrospective and Predictive Analysis of Cognitive Errors (TRACEr). These were then applied to the railway accident that occurred between Komo and Kyungsan stations in 2003 for verification. Both a cause decision chart and a why-because tree were developed and modified to help the analyst find causal factors in the suggested taxonomy. The taxonomy was well suited: eight causes were found that explain the driver's error in the accident. The taxonomy of PSFs suggested in this study can cover everything from latent factors to direct causes of human errors related to railway accidents with systematic categorization.

A Study on Empirical Model for the Prevention and Protection of Technology Leakage through SME Profiling Analysis (중소기업 프로파일링 분석을 통한 기술유출 방지 및 보호 모형 연구)

  • Yoo, In-Jin;Park, Do-Hyung
    • The Journal of Information Systems
    • /
    • v.27 no.1
    • /
    • pp.171-191
    • /
    • 2018
  • Purpose Corporate technology leakage causes not only monetary loss but also damage to the corporate image, and it further undermines sustainable growth. In particular, since SMEs depend heavily on core technologies compared to large corporations, technology leakage threatens corporate survival. It is therefore important for SMEs to prevent and protect against technology leakage. With the recent development of data analysis technology and the opening of public data, it has become possible to discover and proactively detect companies with a high probability of technology leakage based on actual company data. In this study, we construct profiles of enterprises with and without technology leakage experience through profiling analysis using data mining techniques. Based on this, we propose a classification model that distinguishes companies likely to leak technology. Design/methodology/approach This study develops an empirical model for the prevention and protection of technology leakage through a profiling method that analyzes each SME individually. Based on previous research, we classified the characteristics of SMEs into six categories and identified the factors influencing technology leakage from the enterprise point of view. Specifically, we divided 29 SME characteristics into the following six categories: 'firm characteristics', 'organizational characteristics', 'technical characteristics', 'relational characteristics', 'financial characteristics', and 'enterprise core competencies'. Each characteristic was extracted from the questionnaire data of the 'Survey of Small and Medium Enterprises Technology' carried out annually by the Government of the Republic of Korea. Since the number of SMEs with technology leakage experience in the questionnaire data was significantly smaller than the number without, we made a 1:1 correspondence between the samples through mixed sampling. We profiled companies with and without technology leakage experience using the decision-tree technique and derived meaningful variables that distinguish the two. Empirical models for the prevention and protection of technology leakage were then developed through discriminant analysis and logistic regression analysis. Findings The profiling analysis shows that technology novelty, enterprise technology group, number of intellectual property registrations, product life cycle, technology development infrastructure level (absence of a dedicated organization), and the design and process-design core competencies help identify SME technology leakage. We developed two empirical models using discriminant analysis and logistic regression analysis, with hit ratios of 65% (discriminant analysis) and 67% (logistic regression analysis).
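The profiling-then-classification idea can be sketched as a hand-written decision rule plus a hit-ratio check. The rule below is hypothetical, loosely inspired by the variables the profiling analysis surfaced; it is not the study's fitted tree, and the attribute names are assumptions:

```python
def leakage_risk(firm):
    """Hypothetical decision rule flagging a firm as at risk of
    technology leakage. Attribute names and thresholds are
    illustrative, not the study's actual fitted model."""
    # Many IP registrations but no dedicated security organization
    if firm["ip_registrations"] > 5 and not firm["dedicated_security_org"]:
        return True
    # Highly novel technology in the growth phase of its life cycle
    if firm["technology_novelty"] == "high" and firm["product_life_cycle"] == "growth":
        return True
    return False

def hit_ratio(model, firms, labels):
    """Share of firms classified correctly -- the statistic the paper
    reports as 65% (discriminant analysis) and 67% (logistic regression)."""
    hits = sum(model(f) == y for f, y in zip(firms, labels))
    return hits / len(firms)
```

In the paper's setting the rule is learned by the decision tree from the 1:1-matched samples, and the hit ratio is evaluated for the discriminant and logistic models.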

Implementation of an Efficient Microbial Medical Image Retrieval System Applying Knowledge Databases (지식 데이타베이스를 적용한 효율적인 세균 의료영상 검색 시스템의 구현)

  • Shin Yong Won;Koo Bong Oh
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.1 s.33
    • /
    • pp.93-100
    • /
    • 2005
  • This study designs and implements an efficient microbial medical image retrieval system based on knowledge and the content of the images, which can support more accurate decisions on colonies as well as efficient education for new technicians. For this, we first address overall inference to set up a flexible search path using a rule base, reducing the time required for microbial identification by searching the fastest path through the identification phases based on heuristic knowledge. Next, we propose a color feature extraction method that extracts color feature vectors of visual content from an input microbial image, especially bacteria images, using the HSV color model. In addition, for better retrieval performance over large microbial databases, we present an integrated indexing technique that combines a B+-tree for indexing simple attributes, an inverted file structure for the medical keyword list, and a scan-based filtering method for high-dimensional color feature vectors. Finally, the implemented system shows that complex microbial images can be managed and retrieved effectively using knowledge and visual content itself. We expect the proposed system to rapidly decrease the learning time for novice technicians by well organizing the knowledge of clinical fields.
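The HSV color-feature step can be sketched in a few lines: convert each pixel from RGB to HSV and accumulate a normalized hue histogram as the feature vector. The 8-bin scheme below is an illustrative choice, not the system's actual binning:

```python
import colorsys

def hsv_feature(pixels, hue_bins=8):
    """Normalized hue histogram as a coarse color-feature vector.

    pixels: iterable of (r, g, b) tuples with components in 0..255.
    Hue captures the dominant colors of a stained bacteria image
    largely independently of brightness.
    """
    hist = [0] * hue_bins
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * hue_bins), hue_bins - 1)] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

# A pure-red image concentrates all mass in the first hue bin.
feature = hsv_feature([(255, 0, 0)] * 10)
```

Vectors like this are what the scan-based filtering step would compare, while the B+-tree and inverted file handle the simple attributes and keywords.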


A Study on the Classification of Unstructured Data through Morpheme Analysis

  • Kim, SungJin;Choi, NakJin;Lee, JunDong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.105-112
    • /
    • 2021
  • In the era of big data, interest in data is exploding. In particular, the development of the Internet and social media has led to the creation of new data, realizing the era of big data and artificial intelligence and opening a new chapter in convergence technology. There is also growing demand for analysis of data that could not be handled by past programs. In this paper, an analysis model for classifying unstructured data, which is often required in the era of big data, was designed and verified. We crawled thesis summaries, main keywords, and sub-keywords from DBpia, created a database using KoNLP's data dictionary, and tokenized words through morpheme analysis. In addition, nouns were extracted using KAIST's 9 part-of-speech classification system, TF-IDF values were generated, and an analysis dataset was created by combining the training data and Y values. Finally, the adequacy of classification was measured by applying three algorithms (random forest, SVM, and decision tree) to the generated dataset. The classification model technique proposed in this paper can be useful in various fields, such as civil complaint classification and other text-related analysis, in addition to thesis classification.
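The TF-IDF step described above can be sketched in plain Python. The paper tokenizes Korean text with KoNLP and KAIST's part-of-speech scheme; simple token lists stand in here, and the log-idf form used is one common convention:

```python
import math

def tf_idf(docs):
    """docs: list of token lists (one per document).
    Returns one {term: weight} dict per document, where
    weight = (term frequency in doc) * log(N / document frequency)."""
    n = len(docs)
    df = {}                       # in how many docs each term appears
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term])
            weights[term] = tf * idf
        weighted.append(weights)
    return weighted

docs = [["data", "mining"], ["data", "tree"]]
vecs = tf_idf(docs)
# "data" appears in every document, so its idf is log(2/2) = 0:
# ubiquitous terms carry no weight, distinctive terms dominate.
```

Vectors like these, combined with class labels (the Y values), form the analysis dataset fed to the random forest, SVM, and decision tree.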

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.29-45
    • /
    • 2015
  • Response modeling is a well-known research issue for those trying to achieve superior performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy incurs unnecessary cost. The big data environment has also accelerated the development of response models with data mining techniques such as case-based reasoning (CBR), neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, and it remains attractive for business data mining applications even though it has not shown high performance compared to other machine learning techniques. Many studies have therefore tried to improve CBR with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by a marketing department, and optimized the number k of nearest neighbors with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and support vector machines (SVM) predicted customers' responses to marketing promotion better than each individual model. This paper presents an approach to predicting customers' responses to marketing promotion with case-based reasoning, applying a different weight to each feature. We fitted a logit model on a database containing the promotion and purchasing data of bath soap, then used the coefficients as the feature weights of CBR. We empirically compared the proposed weighted CBR model to neural networks and a pure CBR model, and found that the weighted CBR model showed superior performance over the pure CBR model. Imbalanced data is a common problem when building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling: the number of instances in one class is remarkably small or large compared to the other classes. A classification model such as a response model then has trouble learning the pattern, because it tends to ignore the minority class while classifying the majority class correctly. Sampling, divided into under-sampling and over-sampling, is the most representative approach to this problem; CBR, however, is not sensitive to the data distribution because, unlike machine learning algorithms, it does not learn from the data. In this study, we investigated the robustness of the proposed model while changing the ratio of responding to non-responding customers, because in the real world the customers who respond to a promotion are always a small fraction of those who do not. We simulated the proposed model 100 times to validate its robustness under different ratios of responding to non-responding customers, and found that the proposed CBR-based model outperformed the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under the imbalanced data distributions of the real world.
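The core idea, logit coefficients reused as feature weights in the CBR distance, can be sketched as weighted nearest-neighbour retrieval with majority voting. Data, weights, and k below are illustrative, not the study's values:

```python
def weighted_distance(a, b, weights):
    """Euclidean distance with per-feature weights, e.g. the absolute
    coefficients of a fitted logit model, so that features the logit
    found predictive dominate case retrieval."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)) ** 0.5

def predict_response(query, cases, labels, weights, k=3):
    """Weighted CBR: majority vote over the k stored cases nearest
    to the query customer under the weighted metric."""
    ranked = sorted(range(len(cases)),
                    key=lambda i: weighted_distance(query, cases[i], weights))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)
```

Because prediction is retrieval rather than fitting, shifting the responder/non-responder ratio does not retrain anything, which is the robustness property the study exploits under imbalance.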

Development of 1ST-Model for 1 hour-heavy rain damage scale prediction based on AI models (1시간 호우피해 규모 예측을 위한 AI 기반의 1ST-모형 개발)

  • Lee, Joonhak;Lee, Haneul;Kang, Narae;Hwang, Seokhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.5
    • /
    • pp.311-323
    • /
    • 2023
  • To reduce disaster damage from localized heavy rain, floods, and urban inundation, it is important to know in advance whether a natural disaster will occur. Currently, heavy rain watches and heavy rain warnings are issued in Korea according to the criteria of the Korea Meteorological Administration. However, since a single criterion is applied to the whole country, heavy rain damage in a specific region cannot be clearly anticipated in advance. In this paper, we therefore tried to reset the current criteria for a special weather report to reflect regional characteristics, and to predict the damage caused by rainfall one hour ahead. Gyeonggi Province, which suffers heavy rain damage more frequently than other regions, was selected as the study area. The hazard-triggering rainfall (rainfall inducing disaster) was then set using hourly rainfall and heavy rain damage data, considering local characteristics. A heavy rain damage prediction model was developed from the hazard-triggering rainfall and rainfall data using a decision tree model and a random forest model, which are machine learning techniques. In addition, long short-term memory and deep neural network models were used to predict rainfall one hour ahead. The rainfall predicted by the developed model was fed into the trained classification model to predict whether rain damage would occur one hour later; we call this the 1ST-Model. The 1ST-Model can be used to prevent and prepare for heavy rain disasters and is judged to contribute greatly to reducing the damage caused by heavy rain.
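The two-stage pipeline can be sketched as: forecast next-hour rainfall, then ask a classifier whether the forecast exceeds the locally calibrated hazard-triggering rainfall. Everything below is a stand-in: a naive persistence forecast replaces the paper's LSTM/DNN, a fixed threshold replaces its decision-tree/random-forest classifier, and the threshold value is hypothetical.

```python
# Hypothetical regional threshold; the paper calibrates this per region
# from hourly rainfall and damage records.
HAZARD_TRIGGERING_RAINFALL_MM = 35.0

def forecast_rainfall(recent_hourly_mm):
    """Stand-in for the LSTM/DNN forecaster: naive persistence,
    i.e. assume the next hour repeats the last observed hour."""
    return recent_hourly_mm[-1]

def damage_expected(recent_hourly_mm, threshold=HAZARD_TRIGGERING_RAINFALL_MM):
    """Stage 2: classify whether the forecast rainfall implies damage.
    A fixed threshold stands in for the trained tree/forest classifier."""
    return forecast_rainfall(recent_hourly_mm) >= threshold

damage_expected([5.0, 12.0, 40.0])   # heavy recent rain -> damage warning
damage_expected([1.0, 2.0, 3.0])     # light rain -> no warning
```

The value of the 1ST-Model design is that both stages are regional: each area gets its own hazard-triggering rainfall instead of one nationwide warning criterion.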

Establishment of Safety Factors for Determining Use-by-Date for Foods (식품의 소비기한 참고치 설정을 위한 안전계수)

  • Byoung Hu Kim;Soo-Jin Jung;June Gu Kang;Yohan Yoon;Jae-Wook Shin;Cheol-Soo Lee;Sang-Do Ha
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.6
    • /
    • pp.528-536
    • /
    • 2023
  • In Korea, from January 2023, the Act on Labeling and Advertising of Food was revised to require the use-by-date rather than the sell-by-date. Hence, the purpose of this study was to establish a system for calculating the safety factor and determining the recommended use-by-date for each food type, thereby providing a scientific basis for recommended use-by-date labels. A safety factor calculation technique based on scientific principles was designed through literature review and simulation, and opinions were collected through surveys and discussions involving industry, academia, and other stakeholders. The main factors considered in this study were pH, water activity (Aw), sterilization, preservatives, packaging that improves storability, storage temperature, and other external factors. A safety factor of 0.97 was exceptionally applied for frozen products and 1.0 for sterilized products. In addition, a between-sample error value of 0.08 was applied to factors related to the product and experimental design. This study suggests that clearly providing a safe use-by-date will help reduce food waste and contribute to carbon neutrality.
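The safety-factor arithmetic can be sketched as follows, under the assumption (mine, not stated this precisely in the abstract) that the between-sample error of 0.08 is subtracted from a product-specific base factor before scaling the experimentally determined shelf life; the base factor of 0.95 is illustrative:

```python
def use_by_days(shelf_life_days, base_factor, between_sample_error=0.08):
    """Recommended use-by period = experimental shelf life scaled by a
    safety factor. How the 0.08 between-sample error combines with the
    base factor is an assumption of this sketch; the study applies
    0.97 for frozen and 1.0 for sterilized products as exceptions."""
    factor = base_factor - between_sample_error
    return shelf_life_days * factor

# e.g. a 100-day experimental shelf life with an illustrative
# base factor of 0.95 -> a use-by period shorter than 100 days.
days = use_by_days(100, 0.95)
```

The point of the factor is a margin for product-to-product and experiment-to-experiment variation, so the labeled date stays safely inside the measured shelf life.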