• Title/Summary/Keyword: Risk Rating


Development of a method to create a matrix of heavy rain damage rating standards using rainfall and heavy rain damage data (강우량 및 호우피해 자료를 이용한 호우피해 등급기준 Matrix작성 기법 개발)

  • Jeung, Se Jin;Yoo, Jae Eun;Hur, Dasom;Jung, Seung Kwon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.2
    • /
    • pp.115-124
    • /
    • 2023
  • As the frequency of extreme weather events increases, so does the scale of the damage they cause. Considerable time and resources have long been invested in rainfall prediction to provide forecast information, but this information is difficult for non-experts to understand and says nothing about how much damage an extreme event will cause. This study therefore presents a risk matrix based on heavy rain damage ratings, following the impact-forecasting standard first introduced with the risk matrix in the UK. First, the variables needed to build the risk matrix were selected through correlation analysis between rainfall and damage data, and two grading techniques suggested in previous studies were applied: PERCENTILE (25%, 75%, 90%, 95% cut points) and JNBC (Jenks Natural Breaks Classification). Rating standards for rainfall and for damage were then calculated and synthesized into a single standard. For the number of households affected, PERCENTILE showed a higher grade distribution than JNBC in the Yeongsan River and Seomjin River basins, where the most damage occurred, with similar results in the Chungcheong-do area. For rainfall grading, JNBC produced higher grades than PERCENTILE, especially in Jeolla-do and Chungcheong-do. Moreover, when compared with the record of heavy rain warnings in the affected areas, JNBC was the more consistent. In the risk matrix results, JNBC reproduced the damage pattern better than PERCENTILE in the Sejong, Daejeon, Chungnam, Chungbuk, Gwangju, Jeonnam, and Jeonbuk regions, which suffered the most damage.
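The percentile grading described in this abstract can be sketched as below. The rule for synthesizing the rainfall and damage grades into one matrix grade is an illustrative assumption (the rounded mean), since the abstract does not state the paper's synthesis rule, and the sample values are invented:

```python
import numpy as np

def percentile_grades(values, cuts=(25, 75, 90, 95)):
    """Grade each value 1-5 using the percentile cut points
    named in the abstract (25%, 75%, 90%, 95%)."""
    thresholds = np.percentile(values, cuts)
    return np.searchsorted(thresholds, values) + 1

def matrix_grade(rain_grade, damage_grade):
    """Synthesize the two grades into one matrix rating.
    The rounded mean is an illustrative choice, not the paper's rule."""
    return round((rain_grade + damage_grade) / 2)

rainfall = [10, 35, 60, 120, 250, 400]   # hypothetical event rainfall (mm)
damage = [0, 2, 5, 20, 80, 300]          # hypothetical damaged households
rg = percentile_grades(rainfall)
dg = percentile_grades(damage)
combined = [matrix_grade(r, d) for r, d in zip(rg, dg)]
print(combined)  # one 1-5 grade per event
```

A JNBC-based variant would replace the percentile thresholds with variance-minimizing class breaks; the comparison between the two threshold sets is exactly what the paper evaluates.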

The application of simplified risk assessment for tunnel (터널 리스크 평가 기법의 적용성에 대한 연구)

  • Kim, Sang-Hwan;Lee, Chung-Hwan
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.9 no.1
    • /
    • pp.63-74
    • /
    • 2007
  • Unexpected ground conditions have always been a major problem in tunnel construction, so it is necessary to evaluate risk before and/or during the construction of a new tunnel. This paper presents a simplified risk assessment system, the Underground Risk Index (URI) system, which uses a modified stability number (N) to evaluate the possibility of tunnel risk at the design stage. URI is a scoring system that rates each appraisal element to quantify risk possibility. The modified stability number (N) is one of the risk factors among the Interaction Matrix parameters, which include RQD, UCS, weathering, overburden, stability number, groundwater table, RMR, and permeability. In addition, a case study is performed to verify the applicability of the URI system in practice.

  • PDF
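The URI scoring idea, rating each appraisal element and combining the ratings into a single index, can be sketched as follows. The elements, ratings, and weights below are hypothetical, since the abstract does not give the actual scoring table:

```python
def underground_risk_index(ratings, weights):
    """URI as a weighted sum of rated appraisal elements (a sketch;
    the paper's actual elements and weights are not in the abstract)."""
    assert ratings.keys() == weights.keys()
    return sum(ratings[k] * weights[k] for k in ratings)

# Hypothetical 1-5 ratings for one design-stage tunnel section
ratings = {"RQD": 3, "UCS": 2, "weathering": 4, "groundwater": 3, "overburden": 2}
weights = {"RQD": 0.3, "UCS": 0.2, "weathering": 0.2, "groundwater": 0.2, "overburden": 0.1}
print(underground_risk_index(ratings, weights))
```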

Developing Fire-Danger Rating Model (산림화재예측(山林火災豫測) Model의 개발(開發)을 위(爲)한 연구(硏究))

  • Han, Sang Yeol;Choi, Kwan
    • Journal of Korean Society of Forest Science
    • /
    • v.80 no.3
    • /
    • pp.257-264
    • /
    • 1991
  • Korea accomplished the afforestation of its forest land in the early 1980s. To meet the increasing demand for forest products and forest recreation, a scientific forest management system is needed, and efficient forest-fire management is essential to it. The purpose of this study is therefore to develop a theoretical foundation for a forest-fire danger rating system. The study hypothesizes that the degree of forest-fire risk is affected by a Weather Factor and a Man-Caused Risk Factor. (1) For the Weather Factor, a statistical model was estimated with weather variables such as humidity, temperature, precipitation, wind velocity, and duration of sunshine as independent variables and the probability of forest-fire occurrence as the dependent variable. (2) To account for man-caused risk, historical forest-fire occurrence data were investigated. The contribution of human activity to risk was evaluated from three inputs. The first, the potential risk class, is a semipermanent number that ranks the man-caused fire potential of an individual protection unit relative to the other protection units. The second, the risk-source ratio, is the portion of the potential man-caused fire problem that can be charged to a specific cause. The third, the daily activity level, is the fire control officer's estimate of how active each of these sources is. For each risk source, its daily activity level is evaluated; the resulting number is the partial risk factor. The partial risk factors, one per source, are summed to give the unnormalized Man-Caused Risk, which is then combined with the unit's potential risk class to form the Man-Caused Risk Index. (3) Finally, the Weather Factor and the Man-Caused Risk Index were integrated to form the final Fire Occurrence Index.

  • PDF
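The man-caused risk computation described in step (2), one partial risk factor per source, summed and then scaled by the unit's potential risk class, can be sketched as follows; the input values are hypothetical:

```python
def man_caused_risk(sources, potential_risk_class):
    """Sum the partial risk factors (risk-source ratio times daily
    activity level), then scale by the unit's potential risk class."""
    partial = [ratio * activity for ratio, activity in sources]
    return potential_risk_class * sum(partial)

# Hypothetical (risk-source ratio, daily activity level) pairs,
# e.g. campers, smokers, debris burning
sources = [(0.5, 3), (0.3, 2), (0.2, 1)]
print(man_caused_risk(sources, potential_risk_class=4))
```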

Comparison of Rating Methods by Disaster Indicators (사회재난 지표별 등급화 기법 비교: 가축질병을 중심으로)

  • Lee, Hyo Jin;Yun, Hong Sic;Han, Hak
    • Journal of the Society of Disaster Information
    • /
    • v.17 no.2
    • /
    • pp.319-328
    • /
    • 2021
  • Purpose: Recent large-scale social disasters have created a need to diagnose social disaster safety, and the Ministry of Public Administration and Security calculates and publishes regional safety ratings, such as the regional safety index and the national safety diagnosis, every year. The existing safety diagnosis system grades risk maps uniformly, using equal intervals or the normal distribution. Method: The equal-interval technique can analyze risk ratings objectively, but it has limits in classifying risk when the distribution is skewed to one side, and the z-score technique loses credibility if the population does not follow a normal distribution. Because the distribution of statistical data varies from indicator to indicator, the most appropriate rating method should be applied to each data distribution. Result: This paper therefore analyzes the data of the disaster indicators and compares the traditional equal-interval technique with the natural breaks technique, so that grading can be optimized per indicator. Conclusion: As a result, three of the six new indicators were graded with techniques different from the conventional ones.
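The skew problem the Method section describes is easy to reproduce: with equal-interval grading, a distribution skewed to one side collapses into the extreme classes. A minimal sketch with invented indicator values (Jenks natural breaks itself requires a variance-minimizing optimization and is omitted here):

```python
from bisect import bisect_right

def equal_interval_grades(values, k=5):
    """Equal-interval grading: split [min, max] into k equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    edges = [lo + width * i for i in range(1, k)]
    return [bisect_right(edges, v) + 1 for v in values]

# A skewed indicator: most regions near zero, a few extreme values
skewed = [1, 2, 2, 3, 3, 4, 5, 90, 95, 100]
print(equal_interval_grades(skewed))  # everything lands in grade 1 or 5
```

Every middle grade is empty here, which is exactly the failure mode that motivates distribution-aware techniques such as natural breaks.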

Hazard Analysis and Risk Assessments for Industrial Processes Using FMEA and Bow-Tie Methodologies

  • Afefy, Islam H.
    • Industrial Engineering and Management Systems
    • /
    • v.14 no.4
    • /
    • pp.379-391
    • /
    • 2015
  • Several risk assessment techniques have been presented and investigated in previous research, focusing mainly on failure mode and effect analysis (FMEA). FMEA can be employed to determine where failures can occur within industrial systems and to assess the impact of such failures. This research proposes a novel methodology for hazard analysis and risk assessment that integrates FMEA with the bow-tie model. The proposed method was applied and evaluated in a real industrial process, illustrating its effectiveness. Specifically, a bow-tie diagram was built for the critical equipment of the plant adopted in the case study. Safety-critical barriers were identified, and each was assigned to an industrial process step with an individual responsible for it. Detection ratings for the failure modes and the resulting risk priority number (RPN) values were calculated; the analysis shows that the highest RPN values in this process are 500 and 490. Global corrective actions are suggested to improve the RPN measure, and further managerial insights are provided.
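The RPN figures the abstract reports follow the standard FMEA formula, the product of three 1-10 ratings. For instance, one rating combination that yields the reported top value of 500 (the specific ratings are illustrative, not the paper's):

```python
def rpn(severity, occurrence, detection):
    """FMEA Risk Priority Number: the product of the severity,
    occurrence, and detection ratings (each on a 1-10 scale)."""
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10, "FMEA ratings are 1-10"
    return severity * occurrence * detection

print(rpn(10, 10, 5))  # -> 500, one combination matching the reported maximum
```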

Diagnosis and Evaluation for the Early Detection of Delirium (섬망의 조기 발견을 위한 진단 및 평가 방법)

  • Chon, Young-Hoon;Lee, Sang-Yeol
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.19 no.1
    • /
    • pp.3-14
    • /
    • 2011
  • Delirium is a common psychiatric disorder that occurs in many hospitalized older patients and has serious consequences, including an increased mortality rate. Despite its importance, health care clinicians often fail to recognize delirium or misdiagnose it as another psychiatric illness. Awareness of the etiologies and risk factors of delirium should enable clinicians to focus on patients at risk and to recognize delirium symptoms early. To improve early recognition, emphasis should be given to terminology, psychopathology, and knowledge of clinical rating scales for delirium in specific medical and surgical settings. In this study, the authors introduce rating scales for delirium and knowledge of the clinical diagnostic process, to support appropriate assessment of delirium in the clinical situation.

  • PDF

Local Scalar Trust Metrics with a Fuzzy Adjustment Method

  • Seo, Yang-Jin;Han, Sang-Yong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.2
    • /
    • pp.138-153
    • /
    • 2010
  • Interactions between people who do not know each other have greatly increased with the ongoing growth of activity in cyberspace. In this situation there are potential risk factors, such as the possibility of fraud, so a method is needed to reduce or eliminate them. For this reason rating systems are widely used, and many trust metrics calculated from the ratings people give each other have been proposed to help users make decisions. However, these trust metrics lose accuracy because each person rates on a different scale and range. We propose a fuzzy adjustment method to solve this problem. Applying fuzzy sets makes it possible to capture the exact meaning of the trust value each person selects, which improves the accuracy of the trust metric calculated from those values. We applied our fuzzy adjustment method to TidalTrust, a representative algorithm for calculating a local scalar trust metric, and performed an experimental evaluation with four data sets and three evaluation methods.
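The scale problem the abstract targets (two users meaning the same thing with very different raw ratings) can be shown with a crude per-user rescaling. This min-max version is only a stand-in for the paper's fuzzy-set adjustment, and the two example users are invented:

```python
def adjust_ratings(user_ratings):
    """Rescale one user's ratings to [0, 1] so users with different
    personal rating ranges become comparable (a simplification of
    the paper's fuzzy adjustment)."""
    lo, hi = min(user_ratings), max(user_ratings)
    if hi == lo:
        return [0.5] * len(user_ratings)
    return [(r - lo) / (hi - lo) for r in user_ratings]

generous = [7, 8, 9, 10]  # a user who rates everything near the top
strict = [1, 2, 3, 4]     # a user who uses only the low end
# After adjustment both users express the same ordering and spacing
print(adjust_ratings(generous) == adjust_ratings(strict))  # -> True
```

The fuzzy version in the paper goes further by mapping each rating to membership degrees in fuzzy sets rather than a single rescaled number.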

Integration of Similarity Values Reflecting Rating Time for Collaborative Filtering

  • Lee, Soojung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.1
    • /
    • pp.83-89
    • /
    • 2022
  • Collaborative filtering, a representative technique of recommender systems, has been deployed successfully in many commercial and academic systems. It recommends items highly rated by similar neighbor users, where similarity is computed from the ratings two users have given to common items. Recent research on time-aware recommender systems attempts to improve performance by reflecting the time at which users rated items, but applying a uniform decay rate to past ratings risks lowering the system's rating prediction performance. This study proposes a rating time-aware similarity measure between users, a novel approach that differs from previous ones: it considers changes of the similarity value itself over time, not the item rating time. To evaluate the proposed method, experiments were conducted with various parameter values and types of time change functions, and the results show significantly improved prediction accuracy over existing traditional similarity measures.
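The idea of aging the similarity value itself, rather than decaying individual ratings, can be sketched as a recency-weighted integration of past similarity snapshots. The geometric weighting and the decay constant here are illustrative assumptions, not the paper's time change functions:

```python
def integrated_similarity(snapshots, decay=0.9):
    """Combine past user-user similarity snapshots (oldest -> newest),
    weighting recent snapshots more heavily."""
    n = len(snapshots)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * s for w, s in zip(weights, snapshots)) / sum(weights)

history = [0.2, 0.5, 0.9]  # similarity between two users, recomputed over time
print(integrated_similarity(history))          # pulled toward the recent 0.9
print(integrated_similarity(history, decay=1)) # decay=1 reduces to the plain mean
```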

A Study on the Determination of Indicators for the Risk Assessment of Ground Depression Using SAR Images (SAR 영상을 활용한 지반침하의 위험평가를 위한 지표결정에 대한 연구)

  • Lee, Hyojin;Yoon, Hongsic;Han, Hak
    • Journal of the Korean GEO-environmental Society
    • /
    • v.22 no.7
    • /
    • pp.13-20
    • /
    • 2021
  • The problem of roadbed subsidence near the Honam High Speed Railway, which opened in April 2015, continues to be raised, and the ground stability of the surrounding area may also be problematic. Selecting the risk indicators, and the factors that determine them, is very important in producing risk maps. Existing risk indicators are calculated as the final displacement volume based on the last observed date of the observation period, but time-series displacement must be identified to analyze the cause of subsidence and the behavior of the indicator. Furthermore, direct leveling measurements over a wide region are economically inefficient, so we observed surface displacement using SAR images. In this paper, time-series displacement was observed using the PS-InSAR technique, and risk was compared by rating each factor, using the final displacement, the cumulative displacement, and the minimum and maximum displacement as candidate factors for determining risk indicators. The results show that the risk rating based on the final displacement differs from the ratings based on the other factors, and we propose adding factors from different perspectives when determining risk indicators. This is expected to be an important step toward finding the cause of ground subsidence and its solutions.
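The point that the final displacement can hide the indicator's behavior is easy to illustrate by computing the four candidate factors over a displacement time series; the two example series are invented:

```python
def displacement_factors(series):
    """Candidate risk factors from a displacement time series (mm):
    final value, cumulative absolute movement, minimum and maximum."""
    steps = [b - a for a, b in zip(series, series[1:])]
    return {
        "final": series[-1],
        "cumulative": sum(abs(s) for s in steps),
        "min": min(series),
        "max": max(series),
    }

# Same final displacement, very different behavior over time
steady = [0, -2, -4, -6, -8]
oscillating = [0, -8, 0, -8, -8]
print(displacement_factors(steady))       # cumulative movement 8
print(displacement_factors(oscillating))  # cumulative movement 24
```

Rating only the final value gives both points the same grade; the cumulative factor separates them, which is the paper's argument for multi-factor indicators.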

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data totaled 10,545 rows and 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most previous studies, which used the default event itself as the learning target, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance caused by the scarcity of default events, a limitation of the existing methodology, and captures the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment to companies whose default risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although predicting corporate default risk with machine learning has been studied actively in recent years, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology, and strict calculation standards, are required, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and corporate information while keeping the short computation time that is an advantage of machine learning-based default risk prediction. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven folds, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and each model's predictive power was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of their forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based bankruptcy risk prediction, since traditional credit rating models can also be included as sub-models when calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
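The stacking flow the abstract describes, with out-of-fold sub-model forecasts feeding a combiner, can be sketched in miniature. The toy sub-models below classify on a single feature, and the combiner is a vote weighted by out-of-fold accuracy: a deliberately pared-down stand-in for the paper's seven-fold stacking with a trained meta-model, not the paper's actual method.

```python
def fit_threshold(X, y):
    """Toy sub-model: pick the feature threshold that best splits classes."""
    best_t, best_acc = X[0], -1.0
    for t in X:
        acc = sum((x >= t) == bool(label) for x, label in zip(X, y)) / len(X)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: 1 if x >= t else 0

def fit_majority(X, y):
    """Toy sub-model: always predict the majority class."""
    maj = 1 if 2 * sum(y) >= len(y) else 0
    return lambda x, maj=maj: maj

def stack_predict(sub_fits, X, y, x_new, k=3):
    """Weight each sub-model by its out-of-fold accuracy, then
    combine the full-data sub-model forecasts for a new point."""
    n = len(X)
    folds = [set(range(i, n, k)) for i in range(k)]
    weights = []
    for fit in sub_fits:
        correct = 0
        for fold in folds:
            train = [i for i in range(n) if i not in fold]
            model = fit([X[i] for i in train], [y[i] for i in train])
            correct += sum(model(X[i]) == y[i] for i in fold)
        weights.append(correct / n)
    models = [fit(X, y) for fit in sub_fits]
    score = sum(w * m(x_new) for w, m in zip(weights, models)) / sum(weights)
    return 1 if score >= 0.5 else 0

X = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]  # one feature, e.g. a leverage ratio
y = [0, 0, 0, 1, 1, 1]              # 1 = "high default risk" in this toy setup
print(stack_predict([fit_threshold, fit_majority], X, y, 0.8))
print(stack_predict([fit_threshold, fit_majority], X, y, 0.2))
```

In the paper's full version the combiner is itself a trained model and the sub-models include Random Forest, MLP, and CNN, but the out-of-fold structure is the same.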