• Title/Summary/Keyword: Ratio of random variables

Search Results: 78, Processing Time: 0.026 seconds

An empirical study on the influence of product portfolio and interest rate on the lapse rate in the life insurance industry (생명보험산업에서 상품 판매비중과 금리가 해약률에 미치는 영향에 관한 연구)

  • Jung, Se-Chang;Ouh, Seung-Cheol;Kang, Jung-Chul
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.1
    • /
    • pp.73-80
    • /
    • 2011
  • The purpose of this study is to analyse the influence of product portfolio and interest rate on the lapse ratio. This issue is particularly important given the recent introduction of IFRS and CFP. Fixed-effect and random-effect models are estimated using panel data, and the Hausman test is employed to select between them. The results of this study are summarized as follows. Firstly, the random-effect model is selected. According to the model, the lapse rate increases as the portfolio shares of savings, sickness, and death products increase and when the interest rate is high. Secondly, health insurance and variable insurance products show a negative relationship with the lapse rate.
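The model-selection step in this abstract, choosing between fixed and random effects with a Hausman test, can be sketched numerically. The statistic compares the two coefficient vectors against the difference of their covariances; the coefficients and covariances below are made up purely for illustration:

```python
import numpy as np
from scipy.stats import chi2

def hausman(b_fe, b_re, V_fe, V_re):
    """Hausman statistic H = d' (V_FE - V_RE)^{-1} d with d = b_FE - b_RE.
    Under H0 (random effects consistent and efficient), H ~ chi2(k)."""
    d = b_fe - b_re
    H = float(d @ np.linalg.inv(V_fe - V_re) @ d)
    return H, chi2.sf(H, df=len(d))

# Made-up estimates for two covariates (e.g. savings-plan share, interest rate)
b_fe = np.array([0.42, 0.15])
b_re = np.array([0.40, 0.14])
V_fe = np.diag([0.010, 0.004])
V_re = np.diag([0.008, 0.003])

H, p = hausman(b_fe, b_re, V_fe, V_re)
# A large p-value fails to reject H0, favouring the random-effects model,
# which is the outcome the study reports
```

With these illustrative numbers H = 0.3 and the p-value is far above 0.05, so the random-effects model would be retained.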

An Order Statistic-Based Spectrum Sensing Scheme for Cooperative Cognitive Radio Networks in Non-Gaussian Noise Environments (비정규 잡음 환경에서 협력 무선인지 네트워크를 위한 순서 기반 스펙트럼 센싱 기법)

  • Cho, Hyung-Weon;Lee, Youngpo;Yoon, Seokho;Bae, Suk-Neung;Lee, Kwang-Eog
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37A no.11
    • /
    • pp.943-951
    • /
    • 2012
  • In this paper, we propose a novel spectrum sensing scheme based on order statistics for cooperative cognitive radio networks in non-Gaussian noise environments. Specifically, we model the ambient noise as a bivariate isotropic symmetric ${\alpha}$-stable random variable and then propose a cooperative spectrum sensing scheme based on the order of observations and the generalized likelihood ratio test. Numerical results confirm that the proposed scheme offers a substantial performance improvement over the conventional scheme in non-Gaussian noise environments.
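The paper's exact GLRT is not reproduced in the abstract, but the core idea, using order statistics to blunt impulsive noise before forming a detection statistic, can be sketched. Cauchy noise stands in for the symmetric ${\alpha}$-stable model (the ${\alpha}=1$ case), and the tone signal and all constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_energy(x, trim=0.1):
    """Energy statistic over the smallest (1 - trim) fraction of |x|:
    a simple order-statistic guard against heavy-tailed noise spikes."""
    a = np.sort(np.abs(x))
    keep = a[: int(len(a) * (1 - trim))]   # drop the largest-magnitude samples
    return np.mean(keep ** 2)

n = 1000
noise = 0.5 * rng.standard_cauchy(n)                    # impulsive, alpha = 1 stable
signal = 2.0 * np.sin(2 * np.pi * 0.05 * np.arange(n))  # hypothetical primary-user tone

t0 = trimmed_energy(noise)           # H0: channel idle (noise only)
t1 = trimmed_energy(noise + signal)  # H1: primary user transmitting
# Thresholding the statistic between t0 and t1 separates the two hypotheses
```

A plain energy detector would be dominated by the occasional huge Cauchy samples; sorting and trimming first is what makes the statistic usable in heavy-tailed noise.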

Numerical Integration-based Performance Analysis of Cross-eye Jamming Algorithm through Amplitude Ratio Perturbation (진폭비 섭동에 의한 cross-eye 재밍에 대한 수치적분 기반 성능분석)

  • Kim, Je-An;Choi, Yoon-Ju;Lee, Joon-Ho
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.59-64
    • /
    • 2021
  • This paper deals with the performance analysis of the cross-eye jamming effect when the difference between the real and nominal amplitude ratios, caused by mechanical defects, is modeled as a random variable with a normal distribution. We propose a method for evaluating the mean square difference (MSD) using a numerical integration-based approach. The MSD obtained by the proposed method is closer to the non-approximated Monte-Carlo simulation-based MSD than the analytic MSDs calculated using first-order and second-order Taylor approximations. It is shown that, based on numerical integration, the effect of amplitude ratio perturbation on cross-eye jamming performance can be evaluated without resorting to the computationally intensive Monte-Carlo method.
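The paper's MSD integrand is not given in the abstract, so the sketch below uses a hypothetical nonlinear function of the amplitude ratio to illustrate the general approach: evaluating E[f(X)] for a Gaussian perturbation X by numerical integration (here Gauss–Hermite quadrature) instead of a first-order Taylor approximation or Monte-Carlo sampling:

```python
import numpy as np

def gauss_expectation(f, mu, sigma, n=40):
    """E[f(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature,
    using the substitution x = mu + sqrt(2)*sigma*t."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)

# Hypothetical nonlinear error measure of a perturbed amplitude ratio
f = lambda r: (r / (1.0 + r)) ** 2

mu, sigma = 0.9, 0.1          # nominal ratio and perturbation spread (illustrative)
quad = gauss_expectation(f, mu, sigma)   # numerical-integration estimate
taylor1 = f(mu)                          # first-order Taylor estimate (ignores variance)

rng = np.random.default_rng(1)
mc = np.mean(f(rng.normal(mu, sigma, 200_000)))   # Monte-Carlo reference
# quad tracks the Monte-Carlo reference more closely than the Taylor value,
# at a tiny fraction of the Monte-Carlo cost
```

Forty quadrature nodes replace 200,000 random draws, which mirrors the abstract's point about avoiding the computationally intensive Monte-Carlo method.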

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. 
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
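The random subspace KNN ensemble described above (without the genetic-algorithm search over k and the feature subsets) can be sketched in a few lines; the toy data, subspace size, and k below are illustrative stand-ins for the bankruptcy dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def knn_predict(Xtr, ytr, Xte, k):
    """Majority-vote k-nearest-neighbour prediction (Euclidean distance)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) >= 0.5).astype(int)

def subspace_knn_ensemble(Xtr, ytr, Xte, n_members=15, n_feats=8, k=5):
    """Each member is trained on a random feature subset (random subspace);
    member predictions are combined by majority vote."""
    votes = []
    for _ in range(n_members):
        idx = rng.choice(Xtr.shape[1], size=n_feats, replace=False)
        votes.append(knn_predict(Xtr[:, idx], ytr, Xte[:, idx], k))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)

# Toy stand-in for the bankruptcy data: 5 informative + 15 noise "ratios"
n, p = 300, 20
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

acc = np.mean(subspace_knn_ensemble(Xtr, ytr, Xte) == yte)
```

The study's contribution is to replace the uniform random choice of `idx` and the fixed `k` with values searched by a genetic algorithm, using held-out accuracy as the fitness function.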

Analysis of Traffic Crash Severity on Freeway Using Hierarchical Binomial Logistic Model (계층 이항 로지스틱모형에 의한 고속도로 교통사고 심각도 분석)

  • Mun, Sung-Ra;Lee, Young-Ihn
    • International Journal of Highway Engineering
    • /
    • v.13 no.4
    • /
    • pp.199-209
    • /
    • 2011
  • In traffic safety research, analysing the factors that affect crash severity and understanding their relationships is important for planning and executing improvements to the safety of roads and traffic facilities. The purpose of this study is to develop a hierarchical binomial logistic model to identify the significant factors affecting fatal injuries and vehicle damage in traffic crashes on freeways. Two models, one for deaths and one for total vehicle damage, are developed. The hierarchical structure of the response variable is composed of two levels: crash-occupant and crash-vehicle. As a result, this hierarchical structure yields a crash-level random effect in addition to the fixed effects of the covariates, expressed as odds ratios. Crashes on the main line and in in-out sections cause greater damage than those at other facilities. Injuries and vehicle damage are severe in cases of traffic violations, centerline invasion, and speeding. Collision crashes and fire occurrences also result in more severe damage than other crash types. Surface conditions determined by climate and visibility conditions by day and night are significant factors, whereas the geometric conditions of the road are not.
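The odds-ratio interpretation of logistic coefficients used in this abstract can be illustrated with a plain (non-hierarchical) binomial logistic fit by Newton–Raphson; the speeding covariate and its effect size below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in: fatal outcome (1) vs. a speeding indicator (invented effect size)
n = 2000
speeding = rng.integers(0, 2, n)
logit = -2.0 + 1.2 * speeding
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Plain binomial logistic fit by Newton-Raphson (IRLS)
X = np.column_stack([np.ones(n), speeding])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

odds_ratio = float(np.exp(beta[1]))
# The fitted odds ratio should land near exp(1.2) ~ 3.3: in this toy model,
# speeding multiplies the odds of a fatal outcome by roughly that factor
```

The hierarchical model in the paper adds a crash-level random intercept on top of this fixed-effect structure, but the exponentiated coefficients are read as odds ratios in the same way.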

Performance of cross-eye jamming due to amplitude mismatch: Comparison of performance analysis of angle tracking error (진폭비 불일치에 의한 cross-eye 재밍 성능: 각도 추적 오차 성능 분석 비교)

  • Kim, Je-An;Kim, Jin-Sung;Lee, Joon-Ho
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.11
    • /
    • pp.51-56
    • /
    • 2021
  • In this paper, performance degradation of cross-eye jamming due to amplitude mismatch between the two jamming antennas is considered. The amplitude-ratio mismatch, the difference between the actual and nominal amplitude ratios caused by mechanical defects, is modeled as a random variable with a normal distribution. In the proposed analytic performance analysis, first-order and second-order Taylor series expansions are adopted. The performance measure of cross-eye jamming is the mean square difference (MSD). The analytically derived MSD is validated by comparing it with the simulation-based MSDs obtained from the first-order and second-order Taylor series. It is shown that the analysis-based MSD is superior to the Monte-Carlo-based MSD, which incurs a high computational cost.

Design Sensitivity and Reliability Analysis of Plates (판구조물의 설계감도해석 및 신뢰성해석)

  • 김지호;양영순
    • Computational Structural Engineering
    • /
    • v.4 no.4
    • /
    • pp.125-133
    • /
    • 1991
  • To efficiently calculate design sensitivity and reliability for complicated structures whose structural responses or limit state functions are given in implicit form, the probabilistic finite element method is introduced to formulate a deterministic design sensitivity analysis method and is incorporated with second-moment reliability methods such as MVFOSM, AFOSM, and SORM. A probabilistic design sensitivity analysis method needed in reliability-based design is also proposed. As numerical examples, two thin plates are analyzed for the cases of plane stress and plate bending. Initial yielding is defined as the failure criterion, and the applied loads, yield stress, plate thickness, Young's modulus, and Poisson's ratio are treated as random variables. The response variances and failure probabilities calculated by the proposed PFEM-based reliability method show good agreement with those from Monte Carlo simulation. The probabilistic design sensitivity explicitly evaluates the contribution of each random variable to the probability of failure. Further, design changes can be evaluated without difficulty, and their effect on reliability can be estimated quickly and with high accuracy.
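Of the second-moment methods named in this abstract, MVFOSM is the simplest to sketch: linearize the limit state function at the mean point and convert the resulting reliability index to a failure probability. The plate limit state and all numbers below are illustrative, not taken from the paper:

```python
import numpy as np
from math import erf, sqrt

def mvfosm(g, mu, sigma, h=1e-6):
    """Mean-value first-order second-moment reliability:
    beta = g(mu) / sqrt(sum_i (dg/dx_i * sigma_i)^2),  Pf ~ Phi(-beta)."""
    mu = np.asarray(mu, float)
    g0 = g(mu)
    # Forward finite-difference gradient of g at the mean point
    grad = np.array([(g(mu + h * e) - g0) / h for e in np.eye(len(mu))])
    beta = g0 / np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))
    pf = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))   # Phi(-beta)
    return beta, pf

# Hypothetical initial-yield limit state for a plate: g = yield stress - stress,
# with the load q and thickness t as random variables (numbers illustrative only)
sy = 240.0                                    # deterministic yield stress
g = lambda x: sy - 0.75 * x[0] / x[1] ** 2    # x = (q, t)
mu = [1000.0, 2.0]                            # mean load, mean thickness
sigma = [100.0, 0.1]

beta, pf = mvfosm(g, mu, sigma)
# With these numbers beta is close to 2, i.e. a failure probability of a few percent
```

AFOSM and SORM refine this by linearizing (or adding curvature) at the most probable failure point instead of the mean, which is why MVFOSM is only a first sketch.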


Predicting the compressive strength of SCC containing nano silica using surrogate machine learning algorithms

  • Neeraj Kumar Shukla;Aman Garg;Javed Bhutto;Mona Aggarwal;Mohamed Abbas;Hany S. Hussein;Rajesh Verma;T.M. Yunus Khan
    • Computers and Concrete
    • /
    • v.32 no.4
    • /
    • pp.373-381
    • /
    • 2023
  • Fly ash, granulated blast-furnace slag, and marble waste powder are just some of the by-products of other sectors that the construction industry is looking to incorporate into the many types of concrete it produces. This research uses surrogate machine learning methods to forecast the compressive strength of self-compacting concrete (SCC). The surrogate models were developed using Gradient Boosting Machine (GBM), Support Vector Machine (SVM), Random Forest (RF), and Gaussian Process Regression (GPR) techniques. Compressive strength is the output variable, with nano-silica content, cement content, coarse aggregate content, fine aggregate content, superplasticizer, curing duration, and water-binder ratio as input variables. Of the four models, GBM had the highest accuracy in determining the compressive strength of SCC, while GPR predicted it worst. The compressive strength of SCC with nano silica is found to be most affected by curing time and least by fine aggregate.

Probabilistic Modeling of Photovoltaic Power Systems with Big Learning Data Sets (대용량 학습 데이터를 갖는 태양광 발전 시스템의 확률론적 모델링)

  • Cho, Hyun Cheol;Jung, Young Jin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.5
    • /
    • pp.412-417
    • /
    • 2013
  • Analytical modeling of photovoltaic power systems has received significant attention in recent years because it is easy to apply to prediction of system dynamics and to fault detection and diagnosis in advanced engineering technologies. This paper presents a novel probabilistic modeling approach for such power systems with big learning data sets. Firstly, we express the input/output function of photovoltaic power systems, in which solar irradiation and ambient temperature are the input variables and electric power is the output variable. Based on this functional relationship, the conditional probability of these three random variables (irradiation, temperature, and electric power) is mathematically defined, and it is estimated from the ratio of the number of samples matching all three variables to the number of samples matching the two input variables, which is efficient in particular for big data sequences from photovoltaic power systems. Lastly, we predict output values from the probabilistic model by using the expectation. Two case studies are carried out to test the reliability of the proposed modeling methodology.
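The count-ratio estimate of the conditional probability described above can be sketched with binned toy data; the power model, bin counts, and constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy PV data: power grows with irradiance and dips slightly with temperature
n = 50_000
irr = rng.uniform(0, 1000, n)                                # irradiance
temp = rng.uniform(0, 40, n)                                 # ambient temperature
power = 0.15 * irr * (1 - 0.004 * (temp - 25)) + rng.normal(0, 3, n)

# Discretise and count joint occurrences of (irradiance, temperature, power)
ib = np.digitize(irr, np.linspace(0, 1000, 11)) - 1          # 10 input bins
tb = np.digitize(temp, np.linspace(0, 40, 5)) - 1            # 4 input bins
edges = np.linspace(power.min(), power.max(), 21)            # 20 output bins
pb = np.clip(np.digitize(power, edges) - 1, 0, 19)

counts = np.zeros((10, 4, 20))
np.add.at(counts, (ib, tb, pb), 1)

# P(power bin | irradiance bin, temperature bin) = joint count / input count
cond = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1)

# Predict by the conditional expectation over power-bin centres
centres = 0.5 * (edges[:-1] + edges[1:])
pred = cond[7, 1] @ centres   # e.g. irradiance in [700, 800), temp in [10, 20)
```

Because the model reduces to counting, a single pass over the data suffices, which is what makes the approach attractive for big learning data sets.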

Convergence Study on Fabrication and Plasma Module Process Technology of ReRAM Device for Neuromorphic Based (뉴로모픽 기반의 저항 변화 메모리 소자 제작 및 플라즈마 모듈 적용 공정기술에 관한 융합 연구)

  • Kim, Geunho;Shin, Dongkyun;Lee, Dong-Ju;Kim, Eundo
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.10
    • /
    • pp.1-7
    • /
    • 2020
  • The manufacturing process for the resistance-change memory device, which is the basis of neuromorphic devices, maintained vacuum-process continuity and applied a plasma module suitable for producing ReRAM (resistive random access memory), together with process technology for neuromorphic computing that ensures high integration and high reliability. A ReRAM device with an oxide thin film was fabricated using the plasma module, and research to improve the device properties was conducted through various experiments with changes in materials and process methods. A ReRAM device based on a TiO2/TiOx oxide thin film deposited using the plasma module was completed. XRD measurements showed rutile crystallinity, the HRS:LRS current ratio was 2.99 × 10³ or higher, and the driving voltage, measured with a semiconductor parameter analyzer, was confirmed to allow operation at a low voltage of 0.3 V or less. A neuromorphic ReRAM device could be fabricated using oxygen gas in the previously developed plasma module, and TiOx thin films were deposited to confirm its performance.