• Title/Summary/Keyword: posterior probability


Using Estimated Probability from Support Vector Machines for Credit Rating in IT Industry

  • Hong, Tae-Ho;Shin, Taek-Soo
    • Proceedings of the Korea Intelligent Information System Society Conference / 2005.11a / pp.509-515 / 2005
  • Recently, support vector machines (SVMs) have been recognized as competitive tools compared with other data mining techniques for solving pattern recognition and classification problems. In particular, many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). Classification decisions, whether binary or multi-class, made by any classifier are cost-sensitive, so it is necessary to convert the classifier's output into well-calibrated posterior probabilities. SVMs, however, do not provide such probabilities by default, so a separate calibration method is required (Platt, 1999; Drish, 2001). This study applies a method for estimating probabilities from SVM outputs to bankruptcy prediction and then suggests credit scoring methods that use the estimated probabilities for banks' loan decision making.

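The calibration step described in the entry above can be sketched briefly. This is a minimal illustration in the spirit of Platt (1999), not the authors' implementation: it assumes scikit-learn is available, uses synthetic two-class data in place of bankruptcy data, and fits a logistic (sigmoid) mapping from SVM decision values to posterior probabilities.

```python
# Minimal sketch of Platt-style calibration: map SVM decision values to
# posterior probabilities with a logistic (sigmoid) fit. Synthetic data,
# not the authors' bankruptcy data; assumes scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

svm = LinearSVC(C=1.0).fit(X_tr, y_tr)                 # uncalibrated margin classifier
scores = svm.decision_function(X_cal).reshape(-1, 1)   # raw decision values

# Sigmoid calibration: P(y=1 | score) modeled with a one-feature logistic fit
platt = LogisticRegression().fit(scores, y_cal)
posteriors = platt.predict_proba(scores)[:, 1]
print(posteriors[:5])                                  # calibrated posterior probabilities
```

scikit-learn's `CalibratedClassifierCV(method='sigmoid')` packages the same idea with proper cross-validated calibration splits.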

A New Fast EM Algorithm (새로운 고속 EM 알고리즘)

  • 김성수;강지혜
    • Journal of KIISE: Computer Systems and Theory / v.31 no.10 / pp.575-587 / 2004
  • In this paper, a new Fast Expectation-Maximization (FEM) algorithm is proposed. First, the K-means algorithm is modified to reduce the number of iterations needed to find the initial values that start the EM process. Conventionally, the initial values in K-means clustering are chosen randomly, which sometimes forces the clustering process to converge to undesired center points. A uniform partitioning method is therefore added to conventional K-means to extract proper initial points for each cluster. Second, the effect of the posterior probability is emphasized so that applying Maximum Likelihood Posterior (MLP) estimation yields fast convergence. The proposed FEM strengthens conventional EM by improving its speed of convergence, and its superiority is demonstrated experimentally through improved EM results and accelerated convergence in the parameter estimation procedures.
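
The initialization idea above can be sketched as follows, assuming one-dimensional data and a two-component Gaussian mixture. The uniform-partition seeding and the EM loop are illustrative only; the paper's FEM additionally modifies K-means itself and uses its MLP step to speed convergence, which is not reproduced here.

```python
# Sketch: uniform-partition initialization followed by EM for a 1-D Gaussian
# mixture. Illustrative only, not the paper's FEM code.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 300), rng.normal(4, 1.5, 200)])
K = 2

# Uniform partitioning: split the data range into K equal intervals and use
# the midpoint of each interval as an initial center (no random seeding).
edges = np.linspace(data.min(), data.max(), K + 1)
means = (edges[:-1] + edges[1:]) / 2
stds = np.full(K, data.std())
weights = np.full(K, 1.0 / K)

for _ in range(50):                                   # EM iterations
    # E-step: posterior probability (responsibility) of each component
    resp = weights * norm.pdf(data[:, None], means, stds)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    Nk = resp.sum(axis=0)
    means = (resp * data[:, None]).sum(axis=0) / Nk
    stds = np.sqrt((resp * (data[:, None] - means) ** 2).sum(axis=0) / Nk)
    weights = Nk / len(data)

print(means, stds, weights)
```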

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.40-48 / 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input to the neural networks, and the output of each network is modeled as a posterior probability. Texture classification at each scale is performed using the posterior probabilities from the MLP networks and MAP (maximum a posteriori) classification. Then, to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and a priori contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed segmentation method outperforms texture segmentation using the HMT (hidden Markov tree) model and HMTseg.
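
A much-simplified sketch of per-scale MAP labelling with a smoothness prior is given below. The MLP posteriors are replaced by random placeholders, and iterated conditional modes (ICM) stands in for the paper's Gibbs sampler, purely to keep the example short.

```python
# Simplified sketch of MAP texture labeling with an MRF-style smoothness
# term. Posteriors are faked with random numbers; ICM replaces the paper's
# Gibbs sampler for brevity.
import numpy as np

rng = np.random.default_rng(1)
H, W, K = 32, 32, 3
posterior = rng.dirichlet(np.ones(K), size=(H, W))   # stand-in for MLP outputs
log_post = np.log(posterior)
labels = posterior.argmax(axis=2)                    # initial MAP labels
beta = 1.5                                           # smoothness weight

for _ in range(5):                                   # ICM sweeps
    for i in range(H):
        for j in range(W):
            # Count neighbor labels (4-connectivity) as a Potts-style prior
            neigh = []
            if i > 0: neigh.append(labels[i - 1, j])
            if i < H - 1: neigh.append(labels[i + 1, j])
            if j > 0: neigh.append(labels[i, j - 1])
            if j < W - 1: neigh.append(labels[i, j + 1])
            agree = np.array([np.sum(np.array(neigh) == k) for k in range(K)])
            labels[i, j] = np.argmax(log_post[i, j] + beta * agree)

print(np.bincount(labels.ravel(), minlength=K))      # label counts after smoothing
```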

Sinus mucosal healing pattern according to pterygomaxillary disjunction type after Le Fort I osteotomy

  • Jang, Tae-Seok;Lee, Seung-Woo;Lee, Baek-Soo;Shim, Gyujo;Seon, Suyun;Ohe, Joo-Young
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.48 no.5 / pp.292-296 / 2022
  • Objectives: During Le Fort I osteotomy, separation of the pterygomaxillary junction (PMJ) is a difficult procedure for most surgeons because the junction is not directly visible. In this process, damage may occur to the posterior structures constituting the sinus or adjacent to it, including the maxillary sinus posterior wall and the pterygoid plate. We investigated the effects of such damage on the interior of the maxillary sinus after surgery and whether complications occurred. Materials and Methods: Using cone-beam computed tomography images, 100 patients who underwent Le Fort I osteotomy from 2013 to 2020 were classified into two groups (clean-cut type and fractured type) according to the PMJ cutting pattern. In addition, maxillary sinus mucosal thickness was measured preoperatively and at three months and one year postoperatively, and its change over the course of surgery was evaluated retrospectively. Results: Of the 100 cases, 28 were the clean-cut type and 72 the fractured type. Among the fractured type, part of the sinus wall and the pterygoid plate were broken in 69 cases, and the maxillary sinus posterior wall was detached in three cases. There was no statistically significant difference in sinus mucosal thickening between the clean-cut and fractured types of the PMJ at three months or one year after surgery. However, at one year postoperatively there was a significant difference in sinus mucosal thickness in cases where partial detachment of the maxillary sinus posterior wall occurred compared with those where it did not. Conclusion: Even if there is some damage to the structures behind the PMJ, given that there was no significant difference inside the sinus and no increased probability of postoperative complications overall, it may not be reasonable to spend additional time on the PMJ separation process.

Agency Problems in Banks and the Efficiency of Restructuring Distressed Firms (은행의 대리문제와 부실기업에의 출자전환)

  • Lee, Sang-Woo;Park, Rae-Soo
    • The Korean Journal of Financial Management / v.24 no.2 / pp.113-145 / 2007
  • In this paper, we examine whether the poor performance of distressed firms in which banks take equity may be due to agency problems in banks. By adopting a debt-equity swap, a bank can effectively postpone the recognition of bad loans from the failure of the distressed firm. As a result, firms with more debt will be more likely to obtain a debt-equity swap regardless of their probability of revival, not because they are more profitable, but because their larger debt poses a greater risk to the bank. We examine these predictions empirically with data on 44 workout firms and find the following results. First, debt-equity swaps appear to be more likely when the distressed firms are large and when the BIS ratio of the related banks is low. Specifically, the conditional probability of 'large firms' given a debt-equity swap is 65.52%, and the conditional probability of 'bad banks' given a debt-equity swap is 75.86%. Also, as predicted, the performance of these debt-equity firms is poorer than that of non-debt-equity firms: the conditional probability of 'large firms' given posterior failure is 84.62%, and the conditional probability of 'bad banks' given posterior failure is 84.62%. This is consistent with our predictions and is also confirmed by logit regression analysis. Second, when the restructuring is led by 'good banks', the performance of equity-swap firms is superior to that of non-equity-swap firms, consistent with James (1995). Hence, we conclude that there may be agency problems in restructuring distressed firms, especially when the distressed firms are large and the banks are bad, and that these agency problems can reconcile the difference between James' results and those of Park, Lee, and Jang.

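As a small worked example of the conditional probabilities quoted above (with hypothetical counts, not the paper's 44-firm sample):

```python
# Tiny worked example of a conditional probability of the kind reported
# above, computed from hypothetical counts (NOT the paper's data).
swap_and_large = 12      # hypothetical firms that received a swap and are large
swap_total = 20          # hypothetical firms that received a debt-equity swap

p_large_given_swap = swap_and_large / swap_total
print(f"P(large firm | debt-equity swap) = {p_large_given_swap:.2%}")
```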

Bayesian Method for Modeling Male Breast Cancer Survival Data

  • Khan, Hafiz Mohammad Rafiqullah;Saxena, Anshul;Rana, Sagar;Ahmed, Nasar Uddin
    • Asian Pacific Journal of Cancer Prevention / v.15 no.2 / pp.663-669 / 2014
  • Background: With recent progress in health science administration, a huge amount of data has been collected from thousands of subjects. Statistical and computational techniques are necessary to understand such data and to draw valid scientific conclusions. The purpose of this paper was to develop a statistical probability model and to predict future survival times for male breast cancer patients diagnosed in the USA during 1973-2009. Materials and Methods: A random sample of 500 male patients was selected from the Surveillance Epidemiology and End Results (SEER) database, and their survival times were used to derive the statistical probability model. To assess goodness of fit, the model-selection criteria Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Deviance Information Criterion (DIC) were employed. A novel Bayesian method was used to derive the posterior density function for the parameters and the predictive inference for future survival times from the exponentiated Weibull model, assuming that the observed breast cancer survival data follow such a model. The Markov chain Monte Carlo method was used to carry out inference for the parameters. Results: Summary results for certain demographic and socio-economic variables are reported. It was found that the exponentiated Weibull model fits the male survival data. Statistical inferences for the posterior parameters are presented. Mean predictive survival times, 95% predictive intervals, predictive skewness, and kurtosis were obtained. Conclusions: The findings will hopefully be useful in treatment planning and healthcare resource allocation, and may motivate future research on breast cancer related survival issues.
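
The kind of posterior simulation described above can be sketched with a basic random-walk Metropolis sampler for the two shape parameters of an exponentiated Weibull model, here applied to synthetic survival times rather than the SEER data and with the scale parameter fixed for brevity.

```python
# Sketch: random-walk Metropolis for the shape parameters (a, c) of an
# exponentiated Weibull model on synthetic survival times. Not the paper's
# model or code; scale fixed at 1, vague log-normal priors on log(a), log(c).
import numpy as np
from scipy.stats import exponweib, norm

rng = np.random.default_rng(42)
times = exponweib.rvs(a=2.0, c=1.5, scale=1.0, size=200, random_state=rng)

def log_post(log_a, log_c):
    a, c = np.exp(log_a), np.exp(log_c)
    loglik = exponweib.logpdf(times, a=a, c=c, scale=1.0).sum()
    logprior = norm.logpdf(log_a, 0, 2) + norm.logpdf(log_c, 0, 2)
    return loglik + logprior

theta = np.array([0.0, 0.0])                 # start at a = c = 1
cur = log_post(*theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1, size=2)
    new = log_post(*prop)
    if np.log(rng.uniform()) < new - cur:    # Metropolis accept/reject
        theta, cur = prop, new
    samples.append(np.exp(theta))

samples = np.array(samples[1000:])           # drop burn-in
print("posterior means (a, c):", samples.mean(axis=0))
```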

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE: Computing Practices and Letters / v.11 no.3 / pp.284-291 / 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions by using PCA and an expression HMM (EHMM), a Hidden Markov Model (HMM) approach that represents the spatial information and temporal dynamics of time-varying visual expression patterns. Because low-level spatial feature extraction is fused with the temporal analysis, this unified spatio-temporal HMM approach is effective for common detection, tracking, and classification problems. Recognition is accomplished by applying the posterior probability relating current visual observations to previous visual evidence. Consequently, the framework shows accurate and robust recognition results on simple expressions as well as the six basic facial expression patterns. The method supports a set of important tasks such as facial-expression recognition, HCI, and key-frame extraction.
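
The posterior-probability decision step can be illustrated with a short sketch: per-expression HMM log-likelihoods (placeholders here, where the paper would use trained EHMMs) are combined with class priors via Bayes' rule, and the highest-posterior expression is selected.

```python
# Sketch of a posterior-probability decision over expression classes.
# The log-likelihoods are placeholders standing in for one trained EHMM
# per expression scored on the current observation sequence.
import numpy as np

classes = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
log_lik = np.array([-120.3, -118.9, -125.1, -110.4, -119.7, -115.2])   # log P(obs | class)
log_prior = np.log(np.full(len(classes), 1.0 / len(classes)))          # uniform prior

log_joint = log_lik + log_prior
posterior = np.exp(log_joint - log_joint.max())      # numerically stable normalization
posterior /= posterior.sum()

print(dict(zip(classes, posterior.round(3))))
print("recognized expression:", classes[int(posterior.argmax())])
```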

Sensitivity Assessment of Meteorological Drought Index using Bayesian Network (베이지안 네트워크를 이용한 기상학적 가뭄지수의 민감도 평가)

  • Yoo, Ji-Young;Kim, Jin-Young;Kwon, Hyun-Han;Kim, Tae-Woong
    • KSCE Journal of Civil and Environmental Engineering Research / v.34 no.6 / pp.1787-1796 / 2014
  • The main purpose of this study is to assess the sensitivity of meteorological drought indices from a probabilistic perspective using a Bayesian network model. In other words, this study analyzed the interrelationships between various drought indices and investigated the order in which drought events occur. A Bayesian network model was developed to evaluate meteorological drought characteristics by employing the percent of normal precipitation (PN) and the Standardized Precipitation Index (SPI) at various time scales such as 30, 60, and 90 days. A sensitivity analysis was also performed on the posterior probability of the drought indices at these time scales. As a result, this study identified interdependent relationships among the drought indices and proposed an effective way of applying the SPI to drought monitoring.
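
For reference, a bare-bones sketch of how an SPI series at a given time scale can be computed is shown below (synthetic precipitation, gamma fit, normal-quantile transform); the paper's Bayesian network and the usual zero-precipitation handling are not included.

```python
# Sketch of a Standardized Precipitation Index (SPI) computation: aggregate
# precipitation over a time scale, fit a gamma distribution, and map the
# fitted CDF to standard-normal quantiles. Synthetic data only.
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(7)
daily_precip = rng.gamma(shape=0.8, scale=5.0, size=3650)   # ~10 years of daily totals

def spi(precip, window):
    # Rolling sums over the chosen time scale (e.g., 30, 60, 90 days)
    agg = np.convolve(precip, np.ones(window), mode="valid")
    a, loc, scale = gamma.fit(agg, floc=0)                   # fit gamma to aggregated totals
    cdf = gamma.cdf(agg, a, loc=loc, scale=scale)
    return norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))            # standard-normal transform

spi30 = spi(daily_precip, 30)
print("SPI-30: mean %.2f, std %.2f" % (spi30.mean(), spi30.std()))
```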

A Novel Grasshopper Optimization-based Particle Swarm Algorithm for Effective Spectrum Sensing in Cognitive Radio Networks

  • Ashok, J;Sowmia, KR;Jayashree, K;Priya, Vijay
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.520-541 / 2023
  • In cognitive radio networks (CRNs), spectrum sensing (SS) is of utmost significance. Every CR user generates a sensing report during the training phase under various circumstances and, depending on a collective process, either communicates or remains silent. In the training stage, the fusion centre combines the local judgments made by CR users by majority vote and returns a final conclusion to every CR user. Sufficient data regarding the environment, including the activity of the primary user (PU) and every CR's response to that activity, is acquired, and sensing classes are created during the training stage. In the classification stage, every CR user compares its most recent sensing report to the previously learned sensing classes, and distance vectors are generated. The posterior probability of every sensing class is derived from this quantitative data, and the sensing report is then classified as signifying either the presence or the absence of the PU. The ISVM technique is used to compute the quantitative variables needed to evaluate the posterior probability. Here, the iterations of the SVM are tuned by the novel GO-PSA, which combines the grasshopper optimization algorithm (GOA) and particle swarm optimization (PSO). The novel GO-PSA is developed because it reduces computational complexity, returns minimal error, and saves time compared with various state-of-the-art algorithms. The dependability of every CR user is taken into consideration when the local decisions are integrated at the fusion centre using an innovative decision combination technique. Depending on the collective choice, the CR users then communicate or remain silent.
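
A minimal particle swarm optimization loop, standing in for the GO-PSA hybrid described above, is sketched below on a toy objective; in the paper the objective would be the ISVM's sensing error, and the grasshopper-optimization component is omitted.

```python
# Minimal PSO sketch (toy objective), standing in for the GO-PSA hybrid.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                        # toy stand-in for the sensing error
    return np.sum((x - 2.0) ** 2, axis=-1)

n_particles, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients

pos = rng.uniform(-10, 10, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best position:", gbest, "objective:", objective(gbest))
```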

LFMMI-based acoustic modeling by using external knowledge (External knowledge를 사용한 LFMMI 기반 음향 모델링)

  • Park, Hosung;Kang, Yoseb;Lim, Minkyu;Lee, Donghyun;Oh, Junseok;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea / v.38 no.5 / pp.607-613 / 2019
  • This paper proposes LF-MMI (Lattice-Free Maximum Mutual Information)-based acoustic modeling using external knowledge for speech recognition. Here, external knowledge refers to text data other than the training data used for the acoustic model. LF-MMI, an objective function for optimizing the training of a DNN (Deep Neural Network), achieves high performance in discriminative training. In LF-MMI, a phoneme probability is used as the prior probability when predicting the posterior probability of the DNN-based acoustic model. We propose using external knowledge to train this prior probability model in order to improve the DNN-based acoustic model. The proposed method achieves a relative improvement of 14 % compared with the conventional LF-MMI-based model.
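
The prior/posterior relation mentioned above follows the standard pattern in hybrid DNN-HMM decoding: the DNN posterior is divided by the state prior to obtain a scaled likelihood. A toy sketch with placeholder values:

```python
# Sketch of the prior/posterior relation in a hybrid DNN-HMM setup: the
# DNN's posterior p(state | frame) is divided by the state prior p(state)
# to obtain a scaled likelihood for decoding. Placeholder values below,
# not an LF-MMI model.
import numpy as np

posteriors = np.array([[0.70, 0.20, 0.10],       # p(state | frame), one row per frame
                       [0.25, 0.60, 0.15]])
priors = np.array([0.50, 0.30, 0.20])            # p(state), e.g. estimated from text data

log_scaled_lik = np.log(posteriors) - np.log(priors)   # log p(frame | state) + const
print(log_scaled_lik)
```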