• Title/Summary/Keyword: Comprehensive Evaluation of Information Disclosure

Search Results: 4

A Study on Detailed Nondisclosure Criteria for the Administrative Departments (행정각부 비공개 대상정보 세부기준 개선방안 연구)

  • Youseung Kim
    • Journal of Korean Society of Archives and Records Management / v.23 no.3 / pp.115-136 / 2023
  • The purpose of this study is to identify problems and propose improvements based on a critical analysis of the detailed nondisclosure standards of the 19 administrative departments established under Article 26 of the Government Organization Act. To this end, the status of information disclosure-related regulations in the 19 administrative departments was analyzed, and 6,094 cases of nondisclosed information were investigated. In addition, the findings of this analysis were shared and reviewed in interviews with seven information disclosure experts, through which opinions were collected on the effectiveness and problems of the detailed nondisclosure standards and on areas for system improvement. In conclusion, three improvement measures are proposed: first, legislating the establishment of detailed nondisclosure standards; second, establishing a system for the regular substantive inspection of those standards; and third, improving the services through which they are provided.

Importance Analysis of ESG Management Diagnosis Items for Small and Medium-sized Logistics Companies (중소·중견 물류기업 ESG 경영 이행 진단항목 중요도 분석)

  • Wonbae Park;Maowei Chen;Jayeon Lee;Kyongjun Yun
    • Journal of Korea Port Economic Association / v.40 no.2 / pp.53-64 / 2024
  • ESG management has garnered significant recognition as a crucial concern across all global industries, and awareness of its importance is growing within the logistics sector. However, active engagement in ESG practices remains largely confined to large corporations, leaving small and medium-sized logistics companies lagging in their comprehension and implementation of ESG principles. Previous studies have consistently underscored the need for ESG management guidelines and have called for determining the relative weights of the ESG implementation evaluation criteria, taking into account the distinctive attributes of each category of logistics company. This study ascertains those weights for different types of logistics companies using the Analytic Hierarchy Process (AHP). The evaluation framework is based on the items outlined in prior research, particularly ESG management guidelines tailored for small and medium-sized logistics companies. The analysis reveals distinct prioritizations across sectors of the logistics industry. Maritime logistics companies rank the environment first, followed by society, information disclosure, and governance. Land transportation companies prioritize society, followed by governance, the environment, and information disclosure. In the warehousing sector, the environment takes precedence, followed by society, information disclosure, and governance, while comprehensive logistics firms prioritize the environment, followed by information disclosure, society, and governance. These findings are pertinent for regulatory bodies and industry stakeholders seeking to assess ESG practices within such enterprises. Moreover, this research adds to the body of knowledge available to domestic small and medium-sized logistics companies, aiding them in effectively implementing ESG management principles.
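The AHP weighting step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pairwise comparison matrix below is hypothetical, and the resulting pillar ordering merely mirrors the maritime-sector result reported above.

```python
import numpy as np

# Hypothetical 4x4 pairwise comparison matrix over the four pillars
# (values are illustrative, not taken from the paper's survey data).
criteria = ["environment", "society", "information disclosure", "governance"]
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Principal-eigenvector method: the AHP weight vector is the normalized
# eigenvector belonging to the largest eigenvalue of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1)
# and RI = 0.90, Saaty's random index for n = 4.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.90

for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {CR:.3f}")  # CR < 0.1 is conventionally acceptable
```

A full AHP study would aggregate such matrices over many expert respondents (e.g. by geometric mean) before extracting weights; the single-matrix version above only shows the core computation.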

Semantics-aware Obfuscation for Location Privacy

  • Damiani, Maria Luisa;Silvestri, Claudio;Bertino, Elisa
    • Journal of Computing Science and Engineering / v.2 no.2 / pp.137-160 / 2008
  • The increasing availability of personal location data, pushed by the widespread use of location-sensing technologies, raises concerns about the safeguarding of location privacy, and privacy-preserving techniques are being investigated to address them. An important area of application for such techniques is Location-Based Services (LBS). Many privacy-preserving techniques designed for LBS are based on the idea of forwarding obfuscated locations, namely position information at low spatial resolution, to the LBS provider in place of users' actual positions. Obfuscation techniques are generally based on geometric methods. In this paper, we argue that such methods can lead to the disclosure of sensitive location information and thus to privacy leaks. We therefore propose a novel method that takes into account the semantic context in which users are located. The original contribution of the paper is a comprehensive framework consisting of a semantics-aware obfuscation model, a novel algorithm for the generation of obfuscated spaces, for which we report results from an experimental evaluation, and a reference architecture.
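As background, a purely geometric obfuscation of the kind this paper critiques can be as simple as snapping coordinates to a coarse grid. The sketch below is an illustrative assumption, not the paper's semantics-aware method, which additionally reasons about the sensitivity of the places contained in each coarsened region.

```python
import math

def obfuscate(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a position to the centre of a coarse grid cell (roughly 1 km
    at cell_deg = 0.01), returning low-resolution position information
    in place of the exact coordinates."""
    row = math.floor(lat / cell_deg)
    col = math.floor(lon / cell_deg)
    return ((row + 0.5) * cell_deg, (col + 0.5) * cell_deg)

exact = (45.46420, 9.18854)        # an example point in Milan
coarse = obfuscate(*exact)
print(coarse)                      # approximately (45.465, 9.185)
```

The paper's argument is that such geometric coarsening alone can still leak information: if the coarse cell is dominated by a single sensitive place (e.g. a clinic), the low-resolution position reveals the semantics anyway.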

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of a credit-guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance across predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, researchers have sought better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it combines five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of the earlier models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans.
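For reference, the five-ratio Z-score model mentioned above is a simple linear score. The sketch below hard-codes Altman's published 1968 coefficients and zone boundaries; the input figures are illustrative and not taken from the paper.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_liabilities, sales, total_assets):
    """Altman (1968) Z-score from the five published financial ratios."""
    x1 = working_capital / total_assets      # liquidity
    x2 = retained_earnings / total_assets    # cumulative profitability
    x3 = ebit / total_assets                 # operating efficiency
    x4 = market_equity / total_liabilities   # leverage
    x5 = sales / total_assets                # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Illustrative figures (in millions), not taken from the paper.
z = altman_z(working_capital=25, retained_earnings=40, ebit=15,
             market_equity=120, total_liabilities=80, sales=150,
             total_assets=200)
zone = "safe" if z > 2.99 else ("distress" if z < 1.81 else "grey")
print(f"Z = {z:.2f} ({zone} zone)")  # Z = 2.33 (grey zone)
```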
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, while Kim and Kim (2001) applied artificial neural network techniques to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress more precisely with models such as Random Forest and SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case rather than only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy, at 71.1%, and the logit model the lowest, at 69%. However, these figures are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Within these intervals, the logit model has the highest accuracy, 100%, for predicted default probabilities of 0-10%, but a relatively low accuracy of 61.5% for predicted default probabilities of 90-100%.
In contrast, Random Forest, XGBoost, LightGBM, and the DNN show more desirable results: they achieve high accuracy in both the 0-10% and 90-100% intervals of the predicted probability of default, though lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, both LightGBM and XGBoost assign a relatively large number of samples to the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be more desirable models, since they classify many cases into the two extreme intervals, even allowing for their relatively lower accuracy there. Considering the importance of type 2 errors together with total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Nevertheless, each predictive model has a comparative advantage under different evaluation standards; for instance, Random Forest achieves almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple machine learning classifiers through majority voting could maximize overall performance.
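The interval-wise evaluation and the majority-voting ensemble described in this abstract can be sketched as follows. Because the KSURE data are internal, the sketch uses synthetic data, only two of the five model families, and a 0.5 cut-off; all of these are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (non-public) guarantee-accident data.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]   # predicted probability of default
pred = (proba >= 0.5).astype(int)

# Split the predicted default probability into ten equal intervals and
# report the classification accuracy within each interval.
bins = np.clip((proba * 10).astype(int), 0, 9)
for b in range(10):
    mask = bins == b
    if mask.any():
        acc = (pred[mask] == y_te[mask]).mean()
        print(f"{b*10:3d}-{(b+1)*10:3d}%: n={mask.sum():4d}, acc={acc:.3f}")

# A simple majority-voting ensemble over heterogeneous base learners.
ensemble = VotingClassifier(
    estimators=[("logit", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200,
                                              random_state=0))],
    voting="hard",
).fit(X_tr, y_tr)
print("ensemble accuracy:", (ensemble.predict(X_te) == y_te).mean())
```

The per-interval table is what makes the type 2 error argument concrete: cases in the extreme intervals can be auto-decided with confidence, while the middle intervals flag applications that warrant manual review.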