• Title/Summary/Keyword: default probability


Analysis of the maintenance margin level in the KOSPI200 futures market (KOSPI200 선물 유지증거금률에 대한 실증연구)

  • Kim, Joon; Kim, Young-Sik
    • Journal of the Korean Society of Industry Convergence, v.8 no.2, pp.85-95, 2005
  • The margin level in the futures market plays an important role in balancing the default probability against the investor's opportunity cost. In this paper, we investigate whether the movement of KOSPI200 futures daily prices can be modeled with extreme value theory. Based on this investigation, we examine the validity of the margin level set by extreme value theory. Moreover, we propose an expected-profit-maximization model for securities companies, in which extreme value theory is used for cost estimation and a regression analysis is used for revenue calculation. Computational results are presented to compare the extreme value distribution with the empirical distribution of margin violations in KOSPI200 and to examine the suitability of the expected-profit-maximization model.
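
For context, the peaks-over-threshold flavor of extreme value theory described above can be sketched as follows: fit a generalized Pareto distribution to the tail of daily futures losses and read the margin level off an extreme quantile. This is a minimal illustration with placeholder data, not the authors' exact model.

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical daily returns of KOSPI200 futures (placeholder data).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2500) * 0.01

# Peaks-over-threshold: fit a generalized Pareto distribution to losses
# exceeding a high threshold (here the 95th percentile of losses).
losses = -returns
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0.0)

# Margin level = loss quantile such that the probability of a one-day
# margin violation equals p (e.g., 0.1%).
p = 0.001
p_exceed_u = np.mean(losses > u)          # empirical tail probability
margin = u + genpareto.ppf(1 - p / p_exceed_u, xi, loc=0.0, scale=beta)
print(f"margin level covering all but {p:.1%} of days: {margin:.2%}")
```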

Generating and Validating Synthetic Training Data for Predicting Bankruptcy of Individual Businesses

  • Hong, Dong-Suk; Baik, Cheol
    • Journal of Information and Communication Convergence Engineering, v.19 no.4, pp.228-233, 2021
  • In this study, we analyze the credit information (loans, delinquency records, etc.) of individual business owners to generate voluminous training data for a bankruptcy prediction model through a partial synthetic training technique, and we evaluate the prediction performance of the generated data against the actual data. In the experimental results (a logistic regression task), training data generated with conditional tabular generative adversarial networks (CTGAN) improves recall by a factor of 1.75 compared to the actual data. The probability that the actual and generated data are sampled from an identical distribution is verified to be well above 80%. Providing artificial intelligence training data through data synthesis in the fields of credit rating and default risk prediction for individual businesses, where research has been relatively inactive, should promote further in-depth work on such methods.
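
As a rough sketch of the partial synthetic training idea, assuming the open-source `ctgan` package and hypothetical file and column names (`business_credit.csv`, `bankrupt`):

```python
import pandas as pd
from ctgan import CTGAN
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Hypothetical credit table; file and column names are illustrative only.
real = pd.read_csv("business_credit.csv")      # ..., target column "bankrupt"
train = real.sample(frac=0.7, random_state=42)
holdout = real.drop(train.index)

# Fit CTGAN on the real training records and draw a larger synthetic sample.
gan = CTGAN(epochs=300)
gan.fit(train, discrete_columns=["bankrupt"])
synthetic = gan.sample(len(train) * 5)

# Train a logistic regression on synthetic data, evaluate recall on real data.
clf = LogisticRegression(max_iter=1000)
clf.fit(synthetic.drop(columns="bankrupt"), synthetic["bankrupt"])
pred = clf.predict(holdout.drop(columns="bankrupt"))
print(f"recall on real holdout: {recall_score(holdout['bankrupt'], pred):.3f}")
```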

Credit Card Interest Rate with Imperfect Information (불완전 정보와 신용카드 이자율)

  • Song, Soo-Young
    • The Korean Journal of Financial Management, v.22 no.2, pp.213-226, 2005
  • Adverse selection is a heavily scrutinized subject in the financial intermediary industry, and a consensus has been reached regarding its effect on loan interest rates. Despite the similar features of the financial service offered by credit cards, controversy remains over how adverse selection arises as the credit card interest rate changes. This paper therefore explores how adverse selection, if any, takes place and affects the credit card interest rate. Information asymmetry regarding card users' types, represented by their default probabilities, is assumed. Users are assumed to be rational in that they minimize the per-unit-dollar expense of transaction and financing across the two typical payment methods, cash and credit card. Suppliers, i.e., credit card companies, maximize their profit and are better off the more pervasively credit cards are used over cash. We show that an increasing credit card interest rate is subject to adverse selection, sharing the same tenet as the bank loan interest rate analysis of Stiglitz and Weiss. Hence the theory predicts that the credit card market also suffers from adverse selection as the interest rate increases.
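
The Stiglitz-Weiss mechanism invoked here can be illustrated with a toy calculation (all numbers assumed): once the card rate exceeds the safe type's reservation rate, only risky borrowers remain in the pool, and the lender's expected return can fall even as the rate rises.

```python
# Two borrower types that differ in default probability; safe borrowers
# switch to cash once the card rate exceeds their reservation rate.
def expected_return(rate, reservation_safe=0.18):
    borrowers = []
    if rate <= reservation_safe:          # safe types stay in the pool
        borrowers.append(0.02)            # default probability 2%
    borrowers.append(0.15)                # risky types always borrow (15%)
    # Lender earns `rate` on repayment, loses the principal on default.
    return sum((1 - d) * rate - d for d in borrowers) / len(borrowers)

for r in (0.10, 0.15, 0.18, 0.20, 0.25):
    print(f"rate {r:.0%}: expected return {expected_return(r):+.4f}")
```

With these numbers the expected return peaks at 18% and drops when the rate is raised to 20%, which is the adverse-selection effect the abstract describes.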

Corrosion Assessment by Using Risk-Based Inspection Method for Petrochemical Plant - Practical Experience

  • Choi, Song-Chun; Song, Ki-Hun
    • Corrosion Science and Technology, v.8 no.3, pp.119-125, 2009
  • Corrosion assessment has a number of uses, but the use considered here is as a precursor to Risk-Based Inspection (RBI) planning. Systematic methods based on the technical modules of an RBI program were used to assess the effect of specific corrosion mechanisms on the probability of failure of equipment items in petrochemical plants. In the damage and corrosion assessment in particular, a screening step evaluated the combinations of process conditions and construction materials for each equipment item to determine which damage mechanisms are potentially active. For general internal corrosion, either API 510 or API 570 was applied as the damage rate in the calculation of the remaining life and inspection frequency. In some cases, a measured corrosion rate may not be available; the technical modules of the RBI program then employ default corrosion values, typically derived from published data or from experience with similar processes, until inspection results become available. This paper describes a case study of corrosion and damage assessment using the RBI methodology in a petrochemical plant. Specifically, it reports the methodology and the results of its application to petrochemical units using the KGS-RBI™ program, developed by the Korea Gas Safety Corporation to suit Korean conditions in conformity with the API 581 code.
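
The remaining-life and inspection-interval arithmetic alluded to above, with a default corrosion rate standing in for a missing measurement, might look like the following sketch (values assumed; API 510 caps the internal inspection interval at half the remaining life or 10 years, whichever is less).

```python
# Remaining-life arithmetic in the spirit of API 510 (assumed values;
# a real RBI study would take these from inspection records).
def remaining_life(t_actual_mm, t_required_mm, rate_mm_per_yr):
    return (t_actual_mm - t_required_mm) / rate_mm_per_yr

measured_rate = None                  # no measured corrosion rate available
DEFAULT_RATE = 0.125                  # mm/yr, placeholder default value
rate = measured_rate or DEFAULT_RATE

life = remaining_life(t_actual_mm=12.0, t_required_mm=9.5, rate_mm_per_yr=rate)
# Inspection interval: lesser of half the remaining life or 10 years.
interval = min(life / 2.0, 10.0)
print(f"remaining life {life:.1f} yr, next inspection within {interval:.1f} yr")
```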

A Novel Smart Contract based Optimized Cloud Selection Framework for Efficient Multi-Party Computation

  • Haotian Chen; Abir EL Azzaoui; Sekione Reward Jeremiah; Jong Hyuk Park
    • Journal of Information Processing Systems, v.19 no.2, pp.240-257, 2023
  • The industrial Internet of Things (IIoT) is characterized by intelligent connection, real-time data processing, collaborative monitoring, and automatic information processing. Heterogeneous IIoT devices require a high data rate, high reliability, high coverage, and low delay, posing a significant challenge to information security. High-performance edge and cloud servers are a good backup solution for IIoT devices with limited capabilities. However, privacy leakage and network attacks may occur in heterogeneous IIoT environments. Cloud-based multi-party computation is a reliable privacy-protecting technology that encourages multiple parties to participate in joint computation without privacy disclosure. However, the default cloud selection method does not meet heterogeneous IIoT requirements: a server can be dishonest, significantly increasing the probability of multi-party computation failure or inefficiency. This paper proposes a blockchain and smart-contract-based optimized cloud node selection framework in which different participants choose the server that best meets their performance demands, taking communication delay into account. Smart contracts provide a progressive request mechanism to increase participation. Simulation results show that our framework improves overall multi-party computation efficiency by up to 44.73%.
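
A loose, off-chain stand-in for such selection logic: each participant scores candidate servers on throughput, delay, and reputation and picks the best one within its delay budget. The weights and fields below are assumptions for illustration, not the paper's smart contract.

```python
from dataclasses import dataclass

@dataclass
class CloudNode:
    name: str
    throughput: float   # normalized compute capability
    delay_ms: float     # measured communication delay
    reputation: float   # 0..1, e.g. maintained on-chain

def select(nodes, w_perf=0.5, w_delay=0.3, w_rep=0.2, max_delay=100.0):
    # Filter out nodes that violate the participant's delay budget,
    # then rank the rest by a weighted score.
    eligible = [n for n in nodes if n.delay_ms <= max_delay]
    return max(eligible,
               key=lambda n: w_perf * n.throughput
                             - w_delay * n.delay_ms / max_delay
                             + w_rep * n.reputation)

nodes = [CloudNode("A", 0.9, 80.0, 0.7), CloudNode("B", 0.6, 20.0, 0.95)]
print(select(nodes).name)
```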

The effect of sensitive and non-sensitive parameters on DCGL in probability analysis for decommissioning of nuclear facilities

  • Hyung-Woo Seo; Hyein Kim
    • Nuclear Engineering and Technology, v.55 no.10, pp.3559-3570, 2023
  • In the decommissioning of nuclear facilities, a Derived Concentration Guideline Level (DCGL) must be derived for the release of the facility after site remediation, and this also needs to be done at the stage of establishing the decommissioning plan. To derive the DCGL, a dose assessment for the receptors exposed to residual radioactivity can be conducted with the RESRAD code. When performing sensitivity analysis on probabilistic parameters, a secondary evaluation is performed by assigning a single value to parameters classified as sensitive; however, several options arise in the handling of non-sensitive parameters. We therefore compared, for each scenario, the results of a first RESRAD execution applying probabilistic parameters with the results of a second execution applying a single value to the sensitive parameters, and we analyzed the effect of the setting options for non-sensitive parameters. The effect on the DCGL differed depending on the application scenario, the target radionuclides, and the input parameter selections. Over the overall evaluation period, the DCGL curve for the default option was generally the most conservative, except for some radionuclides; however, it should not necessarily be given priority, since site characteristics must also be reflected. The reason for selecting a probabilistic parameter is the availability of the parameter and the uncertainty of applying a single value; therefore, as an alternative, retaining the distribution for non-sensitive parameters after the sensitivity analysis can be applied consistently.
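
For orientation, the DCGL relation underlying such RESRAD runs is simple: divide the regulatory dose criterion by the peak dose-to-source ratio over the evaluation period. A sketch with made-up numbers:

```python
import numpy as np

# DCGL arithmetic as applied to RESRAD output (numbers are illustrative):
# DCGL = regulatory dose criterion / peak dose-to-source ratio (DSR)
# over the evaluation period, per radionuclide.
dose_limit = 0.1                               # mSv/yr, assumed criterion
years = np.array([0, 1, 5, 10, 50, 100])
dsr = np.array([2.1e-3, 2.0e-3, 1.8e-3, 1.5e-3, 6.0e-4, 2.0e-4])  # mSv/yr per Bq/g

peak = dsr.argmax()
dcgl = dose_limit / dsr[peak]
print(f"peak DSR at year {years[peak]} -> DCGL = {dcgl:.1f} Bq/g")
```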

Development Study of a Predictive Model for the Possibility of Collection Delinquent Health Insurance Contributions (체납된 건강보험료 징수 가능성 예측모형 개발 연구)

  • Young-Kyoon Na
    • Health Policy and Management, v.33 no.4, pp.450-456, 2023
  • Background: This study aims to develop a predictive model for the possibility of collecting delinquent health insurance contributions, to help the National Health Insurance Service enhance administrative efficiency in protecting livelihood-type defaulters and collecting contributions, and to establish customized collection management strategies based on individuals' ability to pay. Methods: First, the model was developed through (1) analysis of defaulter characteristics, (2) model estimation and performance evaluation, and (3) model derivation. Second, using the model's predictions, individuals were categorized into four types based on their payment ability and livelihood status, and collection strategies were provided for each type. Results: First, the prediction model is the logistic regression p̂ = exp(z) / [1 + exp(z)], where z = 0.4729 + 0.0392 × gender + 0.00894 × age + 0.000563 × total income - 0.2849 × low-income-type enrollee - 0.2271 × delinquency frequency + 0.9714 × delinquency action + 0.0851 × reduction. The prediction performance is an accuracy of 86.0%, a sensitivity of 87.0%, and a specificity of 84.8%. Second, individuals were categorized into four types based on livelihood status and payment ability. In particular, for the "support needed group" (low payment ability, low-income-type enrollees), enhancing contribution relief and support policies is suggested; for the "high-risk group" (no livelihood type, low payment ability), stricter default handling is suggested to improve collection rates. Conclusion: The regression equation shows that individuals with lower income levels and a history of past defaults have a lower probability of payment. This implies that defaults occur among those unable to bear the burden of health insurance contributions, leading to long-term default. Social insurance operates on the principles of mandatory participation and burden according to the ability to pay; it is therefore necessary to develop policies that consider individuals' ability to pay, such as transitioning livelihood-type defaulters to medical assistance or reducing their contribution burden.
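
The reported equation can be transcribed directly; the coding of the predictors (units of total income, the 0/1 indicators) is assumed here, since the abstract does not spell it out.

```python
import math

# Direct transcription of the abstract's logistic regression equation.
# Predictor coding is an assumption: gender/low_income_enrollee/
# delinquency_action/reduction as 0/1 indicators, age in years, etc.
def collection_probability(gender, age, total_income, low_income_enrollee,
                           delinquency_frequency, delinquency_action, reduction):
    z = (0.4729 + 0.0392 * gender + 0.00894 * age + 0.000563 * total_income
         - 0.2849 * low_income_enrollee - 0.2271 * delinquency_frequency
         + 0.9714 * delinquency_action + 0.0851 * reduction)
    return math.exp(z) / (1.0 + math.exp(z))   # logistic link

# Example call with placeholder inputs.
print(collection_probability(gender=1, age=45, total_income=200,
                             low_income_enrollee=1, delinquency_frequency=3,
                             delinquency_action=0, reduction=0))
```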

CCDP Evaluation of the Fire Area of NPPs Using the Fire Model CFAST (화재모델 CFAST를 이용한 원전 화재구역의 CCDP평가)

  • Lee Yoon-Hwan; Yang Joon-Eon; Kim Jong-Hoon; Noh Sam-Kyu
    • Fire Science and Engineering, v.18 no.4, pp.64-71, 2004
  • This paper describes the results of a pump-room fire analysis of a nuclear power plant using the CFAST fire modeling code developed by NIST. Sensitivity studies were performed over the CFAST input parameters: constrained versus unconstrained fire, the Lower Oxygen Limit (LOL), the Radiative Fraction (RF), and the opening ratio of the fire doors. According to the results, a pump-room fire is a ventilation-controlled fire, so the default LOL value of 10% is adequate. The analysis shows that the Radiative Fraction does not affect the temperature of the upper gas layer. The integrity of the cables located in the upper layer is maintained, except for the safety pump in the fire area, and the Conditional Core Damage Probability (CCDP) is 9.25E-07. The CCDP result appears more realistic and less uncertain than that of a Fire Hazard Analysis (FHA).
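
A sensitivity study of this kind boils down to enumerating a case matrix over the input parameters; a sketch with placeholder values (not the study's exact inputs):

```python
from itertools import product

# Enumerate CFAST sensitivity cases over the four input parameters
# named above; each tuple would become one CFAST input deck and run.
fire_type = ["constrained", "unconstrained"]
lol_pct = [0.0, 10.0, 15.0]          # lower oxygen limit; 10% is the default
radiative_fraction = [0.15, 0.30, 0.50]
door_opening = [0.0, 0.5, 1.0]       # fraction of fire-door area open

cases = list(product(fire_type, lol_pct, radiative_fraction, door_opening))
for i, (ft, lol, rf, door) in enumerate(cases, 1):
    print(f"case {i:02d}: {ft}, LOL={lol}%, RF={rf}, door={door}")
print(f"{len(cases)} sensitivity runs in total")
```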

Performance Modelling of Adaptive VANET with Enhanced Priority Scheme

  • Lim, Joanne Mun-Yee; Chang, YoongChoon; Alias, MohamadYusoff; Loo, Jonathan
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.4, pp.1337-1358, 2015
  • In this paper, we present an analytical and simulation study of the performance of adaptive vehicular ad hoc network (VANET) priorities based on the Transmission Distance Reliability Range (TDRR) and data type. VANET topology changes rapidly owing to the inherent high mobility of nodes and unpredictable environments, so nodes must adapt to the ever-changing environment and optimize parameters to enhance performance. However, the current VANET scheme lacks such adaptability: the existing IEEE 802.11p Enhanced Distributed Channel Access (EDCA) assigns priority solely based on data type. We propose a new priority scheme that uses a Markov model to perform TDRR prediction and assigns priorities accordingly, the Markov TDRR Prediction with Enhanced Priority VANET Scheme (MarPVS). We then model MarPVS performance analytically. In particular, considering the five priority levels defined in MarPVS, we derive the probability of successful transmission, the number of low-priority messages in the backoff process, and the number of concurrent low-priority transmissions, and use these results to derive the average transmission delay for the data types defined in MarPVS. Numerical results are provided along with simulation results, which confirm the accuracy of the proposed analysis. Simulation results demonstrate that MarPVS achieves lower transmission latency and a higher packet success rate than the default IEEE 802.11p scheme and a greedy scheduler scheme.
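
A minimal sketch of Markov-chain state prediction of the kind MarPVS uses for the TDRR, with illustrative states and transition probabilities rather than the paper's calibrated values:

```python
import numpy as np

# Link-reliability states and an assumed one-step transition matrix;
# row i holds P(next state | current state i).
states = ["reliable", "marginal", "unreliable"]
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

current = np.array([1.0, 0.0, 0.0])    # currently in the "reliable" state
steps_ahead = 3
predicted = current @ np.linalg.matrix_power(P, steps_ahead)

# A priority scheme could then weight the data-type priority by the
# predicted reliability of the link.
for s, p in zip(states, predicted):
    print(f"P({s} after {steps_ahead} steps) = {p:.3f}")
```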

A Study on the Optimal Loan Limit Management Using the Newsvendor Model (뉴스벤더 모델을 이용한 최적 대출금 한도 관리에 관한 연구)

  • Sin, Jeong-Hun; Hwang, Seung-June
    • Journal of Korean Society of Industrial and Systems Engineering, v.38 no.3, pp.39-48, 2015
  • In this study, we propose setting the optimal loan limit for SME (small and medium enterprise) loans of financial institutions using the traditional newsvendor model. This is the first domestic case study applying the newsvendor model, ordinarily used to calculate the optimal order quantity under uncertain demand, to the calculation of a loan limit (debt ceiling). The method makes it possible to calculate the loan limit that maximizes the revenue of a financial institution using probability functions, treating the order volume of merchandise goods in the newsvendor model as the loan product order volume of the institution. Through the analysis of empirical data, it determines the availability of additional loans to a borrower, reductions of the debt ceiling, and a management method for recovering loans from borrowers who cannot generate profit. The profit-based loan management model also contributed to predicting the bankruptcy of borrowing SMEs, in addition to calculating the profit-based loan limit: during the validation on empirical data, borrowers for which the model had generated a loan-recovery signal actually went bankrupt at later times. Accordingly, the method suggests a way to generate a loan-recovery signal and thereby reduce losses from bankruptcy.
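
The newsvendor solution referenced here is the critical-ratio quantile Q* = F⁻¹(c_u / (c_u + c_o)). Applied to a loan ceiling with assumed underage/overage costs and a normal distribution for the borrower's profitable-debt capacity (all figures illustrative, not the paper's calibration):

```python
from scipy.stats import norm

# c_u: unit profit forgone if the ceiling is set too low (underage);
# c_o: unit expected loss if it is set too high (overage, default risk).
c_u, c_o = 0.04, 0.10
critical_ratio = c_u / (c_u + c_o)

# Assume the borrower's profitable-debt capacity is normally distributed.
mu, sigma = 500.0, 120.0               # in millions of KRW, placeholder
optimal_limit = norm.ppf(critical_ratio, loc=mu, scale=sigma)
print(f"critical ratio {critical_ratio:.3f} -> loan limit {optimal_limit:.0f}")
```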