
Development and Validation of Classroom Problem Behavior Scale - Elementary School Version (CPBS-E) (초등학생 문제행동선별척도: 교사용(CPBS-E)의 개발과 타당화)

  • Song, Wonyoung;Chang, Eun Jin;Choi, Gayoung;Choi, Jae Gwang;Blair, Kwang-Sun Cho;Won, Sung-Doo;Han, Miryeung
    • Korean Journal of School Psychology / v.16 no.3 / pp.433-451 / 2019
  • This study aimed to develop and validate the Classroom Problem Behavior Scale - Elementary School Version (CPBS-E), a measure specific to classroom problem behaviors exhibited by Korean elementary school students. The focus was on developing a universal screening instrument designed to identify, and provide intervention to, students who are at risk for severe social-emotional and behavioral problems. Items were initially drawn from the literature, interviews with elementary school teachers, common office discipline referral measures used in U.S. elementary schools, penalty point systems used in Korean schools ('Green Mileage'), and the Inventory of Emotional and Behavioral Traits. The content validity of the initial items was assessed by six classroom and subject teachers, resulting in a preliminary scale of 63 items organized into two dimensions (Within Classroom Problem Behavior and Outside of Classroom Problem Behavior), each consisting of 3 to 4 factors. The Within Classroom Problem Behavior dimension consisted of 4 subscales (not being prepared for class, class disruption, aggression, and withdrawn) and the Outside of Classroom Problem Behavior dimension consisted of 3 subscales (rule-violation, aggression, and withdrawn). The CPBS-E was pilot tested on a sample of 154 elementary school students, which reduced the scale to 23 items. Following this revision, the CPBS-E was validated on a sample of 209 elementary school students. The results indicated that the two-dimensional CPBS-E was a reliable and valid measure of classroom problem behavior. Test-retest reliability was stable, above .80 for most subscales, and the measure demonstrated high internal consistency of .76-.94.
In examining criterion validity, the scale correlated highly with the Teacher Observation of Classroom Adaptation-Checklist (TOCA-C), and the aggression and withdrawn subscales of the CPBS-E correlated highly with the externalizing and internalizing scales, respectively, of the Child Behavior Checklist - Teacher Report Form (CBCL-TRF). In addition, the factor structure of the CPBS-E was examined using structural equation modeling and found to be acceptable. The results are discussed in relation to implications, contributions to the field, and limitations.
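Internal-consistency figures such as the .76-.94 range reported above are conventionally Cronbach's alpha. A minimal sketch of that computation, using invented ratings rather than actual CPBS-E data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(item_scores[0])                              # number of items
    item_vars = [variance(col) for col in zip(*item_scores)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings from 4 teachers on 3 items (not real CPBS-E data)
scores = [[1, 2, 1], [2, 3, 2], [3, 3, 3], [4, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```

Items that rise and fall together across respondents push alpha toward 1; unrelated items push it toward 0, which is why it serves as an internal-consistency index.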

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit guarantee accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models, comparing performance among predictive models including logistic regression, Random Forest, XGBoost, LightGBM, and a DNN (deep neural network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which is based on multiple discriminant analysis and is still widely used in both research and practice. It uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to address some limitations of earlier models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
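The Z-score mentioned above is a fixed weighted sum of five financial ratios. A sketch with Altman's (1968) published coefficients for public manufacturing firms; the sample ratios below are invented for illustration:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score. Inputs: working capital/TA, retained
    earnings/TA, EBIT/TA, market value of equity/total liabilities,
    sales/TA (TA = total assets)."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

# Hypothetical firm; Altman's cutoffs put Z > 2.99 in the "safe" zone
# and Z < 1.81 in the "distress" zone.
z = altman_z(0.2, 0.3, 0.1, 1.5, 2.0)
print(round(z, 2))  # 3.89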
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) applied multiple discriminant analysis and the logit model, and Kim and Kim (2001) used artificial neural networks for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy at 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing Type II error, which causes larger operating losses for the guarantee company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Examining accuracy per interval, the logit model has the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval.
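The interval comparison described above amounts to binning each case by its predicted default probability and scoring accuracy per bin. A minimal sketch, assuming a 0.5 decision threshold and invented probabilities rather than the paper's data:

```python
def accuracy_by_interval(probs, labels, n_bins=10, threshold=0.5):
    """Classification accuracy within equal-width probability intervals."""
    correct = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)   # p = 1.0 falls in the top bin
        pred = 1 if p >= threshold else 0
        correct[i].append(1 if pred == y else 0)
    # None marks an interval that received no samples
    return [sum(c) / len(c) if c else None for c in correct]

# Hypothetical predictions: extreme intervals tend to be easier to classify
acc = accuracy_by_interval([0.05, 0.08, 0.55, 0.95], [0, 1, 0, 1])
```

Inspecting `acc` interval by interval, rather than one overall accuracy, is what reveals the pattern the authors report: models can agree on aggregate accuracy yet behave very differently at the extremes.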
On the other hand, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: higher accuracy for both the 0-10% and 90-100% intervals of predicted default probability, but lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost may be more desirable models since they classify a large number of cases into the two extreme intervals, even allowing for their somewhat lower classification accuracy there. Considering the importance of Type II error and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model, however, has a comparative advantage under different evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, one could construct a more comprehensive ensemble containing multiple machine learning classifiers and use majority voting to maximize overall performance.
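The closing suggestion, combining the classifiers by majority vote, can be sketched as follows; the per-model label lists are invented stand-ins for the fitted models' outputs:

```python
def majority_vote(model_preds):
    """Combine per-model 0/1 label lists by simple majority (ties -> 0)."""
    return [1 if sum(votes) * 2 > len(votes) else 0
            for votes in zip(*model_preds)]

# Hypothetical labels from three models (e.g., RF, XGBoost, LightGBM)
ensemble = majority_vote([[1, 0, 0],
                          [1, 1, 0],
                          [0, 1, 0]])
print(ensemble)  # [1, 1, 0]
```

An odd number of voters avoids ties; with an even number, the tie-breaking rule (here, predicting 0) should be chosen with the Type II error concern above in mind.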