• Title/Summary/Keyword: Prior Probability


Vertebral compression fractures after spine irradiation using conventional fractionation in patients with metastatic colorectal cancer

  • Rhee, Woo Joong;Kim, Kyung Hwan;Chang, Jee Suk;Kim, Hyun Ju;Choi, Seohee;Koom, Woong Sub
    • Radiation Oncology Journal / v.32 no.4 / pp.221-230 / 2014
  • Purpose: To evaluate the risk of vertebral compression fracture (VCF) after conventional radiotherapy (RT) for colorectal cancer (CRC) with spine metastasis and to identify risk factors for VCF in metastatic and non-metastatic irradiated spines. Materials and Methods: We retrospectively reviewed 68 spinal segments in 16 patients who received conventional RT between 2009 and 2012. Fracture was defined as a newly developed VCF or progression of an existing fracture. The target volume included all metastatic spinal segments and one additional non-metastatic vertebra adjacent to the tumor-involved spines. Results: The median follow-up was 7.8 months. Among all 68 spinal segments, there were six fracture events (8.8%) including three new VCFs and three fracture progressions. Observed VCF rates in vertebral segments with prior irradiation or pre-existing compression fracture were 30.0% and 75.0% respectively, compared with 5.2% and 4.7% for segments without prior irradiation or pre-existing compression fracture, respectively (both p < 0.05). The 1-year fracture-free probability was 87.8% (95% CI, 78.2-97.4). On multivariate analysis, prior irradiation (HR, 7.30; 95% CI, 1.31-40.86) and pre-existing compression fracture (HR, 18.45; 95% CI, 3.42-99.52) were independent risk factors for VCF. Conclusion: The incidence of VCF following conventional RT to the spine is not particularly high, regardless of metastatic tumor involvement. Spines that received irradiation and/or have pre-existing compression fracture before RT have an increased risk of VCF and require close observation.

Development of Stochastic Markov Process Model for Maintenance of Armor Units of Rubble-Mound Breakwaters (경사제 피복재의 유지관리를 위한 추계학적 Markov 확률모형의 개발)

  • Lee, Cheol-Eung
    • Journal of Korean Society of Coastal and Ocean Engineers / v.25 no.2 / pp.52-62 / 2013
  • A stochastic Markov process (MP) model has been developed for evaluating the probability of failure of the armor units of rubble-mound breakwaters as a function of time. The MP model was formulated by combining a counting/renewal process (CP/RP) for load occurrences with a damage process (DP) for cumulative damage events, and was applied to the armor units of rubble-mound breakwaters. Transition probabilities were estimated by Monte Carlo simulation (MCS) using a definition of the damage level of armor units, and they satisfy the constraints required from both probabilistic and physical viewpoints. The probabilities of failure over time were also calculated and compared for varying return periods and safety factors, the key variables in the design of the armor units of rubble-mound breakwaters. In particular, the effect of prior damage levels on subsequent probabilities of failure is quantified. Finally, two methods are proposed for directly evaluating the repair times that are indispensable to the maintenance of armor units, and several simulation results, including cost analyses, are presented.
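The abstract's central step, estimating transition probabilities between discrete cumulative-damage states by Monte Carlo simulation, can be sketched as follows. The damage-increment model (an exponential increment per load event) and every parameter value here are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def estimate_transition_matrix(n_states, n_sims, damage_step, rng):
    """Estimate a Markov transition matrix over discrete damage states by
    Monte Carlo simulation (hypothetical damage model for illustration)."""
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        for _ in range(n_sims):
            # hypothetical random damage increment caused by one load event
            increment = rng.exponential(damage_step)
            j = min(n_states - 1, i + int(increment))
            P[i, j] += 1
    return P / n_sims

rng = np.random.default_rng(0)
P = estimate_transition_matrix(n_states=5, n_sims=10_000, damage_step=0.8, rng=rng)
# rows are probability distributions, and damage never decreases,
# so the estimated matrix is upper triangular
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(np.tril(P, -1), 0.0)
```

The upper-triangular structure is exactly the "probabilistic and physical" constraint the abstract alludes to: cumulative damage can only stay the same or grow between load events.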

Class Separability according to the different Type of Satellite Images (위성영상 종류에 따른 분리도 특성)

  • Son, Kyeong-Sook;Choi, Hyun;Kim, Si-Nyun;Kang, In-Joon
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.04a / pp.245-250 / 2004
  • Classification of satellite images is a basic task in remote sensing, and the class separability features strongly affect the accuracy of the classified images. To improve classification accuracy, it is necessary not only to study classification methods but also to analyze the class separability features that determine classification probability. In this study, IKONOS, SPOT 5, and Landsat TM images were resampled to a 1 m grid, and the class separability of these images was calculated prior to pixel classification. The results were found to be a necessary step in building geographic information. This study helps improve classification accuracy by optimizing the preceding classification steps according to the separability characteristics of the classes.
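As a concrete example of the kind of separability measure the abstract refers to, the Jeffries-Matusita (JM) distance between two classes modeled as Gaussians is a standard remote-sensing choice. The abstract does not state which measure the authors used, so treat this purely as a generic illustration:

```python
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita separability between two Gaussian classes.
    Ranges from 0 (identical classes) to 2 (perfectly separable)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov = (np.asarray(cov1, float) + np.asarray(cov2, float)) / 2.0
    diff = mu1 - mu2
    # Bhattacharyya distance between the two Gaussians
    b = (diff @ np.linalg.solve(cov, diff)) / 8.0 \
        + 0.5 * np.log(np.linalg.det(cov)
                       / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-b))

# identical class statistics give 0; well-separated classes approach 2
same = jeffries_matusita([0, 0], np.eye(2), [0, 0], np.eye(2))
far = jeffries_matusita([0, 0], np.eye(2), [10, 10], np.eye(2))
assert abs(same) < 1e-9 and far > 1.99
```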


Interference Minimization Using Cognitive Spectrum Decision for LED-ID Network

  • Saha, Nirzhar;Le, Nam Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.38B no.2 / pp.115-121 / 2013
  • The LED-ID (Light Emitting Diode-Identification) network is envisioned as the next-generation indoor wireless communication medium, enabling high-speed data transmission, identification, and illumination simultaneously. Despite being extremely promising, it suffers from several impairments. Signals travelling along different propagation paths can experience delays and phase shifts that eventually result in interference, and the probability of interference increases further when communication links are established between a tag and several readers. It is therefore necessary to reduce interference in the LED-ID network to ensure quality of service. Interference can be avoided by obtaining information about the readers prior to assigning the available spectrum. In this paper, we propose dynamic spectrum decision using the cognitive radio concept. Simulation results show that the proposed scheme outperforms the conventional scheme.

Estimation of the exponentiated half-logistic distribution based on multiply Type-I hybrid censoring

  • Jeon, Young Eun;Kang, Suk-Bok
    • Communications for Statistical Applications and Methods / v.27 no.1 / pp.47-64 / 2020
  • In this paper, we derive estimators of the scale parameter of the exponentiated half-logistic distribution based on the multiply Type-I hybrid censoring scheme, assuming that the shape parameter λ is known. We obtain the maximum likelihood estimator of the scale parameter σ; since the likelihood equation cannot be solved explicitly, the scale parameter is estimated by approximating the likelihood function using two different Taylor series expansions. We also obtain Bayes estimators using a prior distribution, under the squared error loss function and the general entropy loss function (shape parameter q = -0.5, 1.0). In addition, we derive interval estimates: the asymptotic confidence interval, the credible interval, and the highest posterior density interval. Finally, we compare the proposed estimators in terms of mean squared error through Monte Carlo simulation, and report the average length of the 95% intervals and the corresponding coverage probability.
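The two Bayes point estimates mentioned in the abstract have standard closed forms given posterior draws: the posterior mean under squared error loss, and (E[σ^-q])^(-1/q) under the general entropy loss. A sketch, using synthetic lognormal draws as a stand-in for a real posterior sample (the actual posterior in the paper comes from the censored-data likelihood):

```python
import numpy as np

def bayes_estimates(posterior_draws, q):
    """Bayes point estimates of a scale parameter from posterior draws:
    the posterior mean (squared error loss) and the general-entropy-loss
    estimator (E[sigma^-q])^(-1/q). Requires q != 0."""
    draws = np.asarray(posterior_draws, float)
    sel = draws.mean()                          # squared error loss
    gel = np.mean(draws ** (-q)) ** (-1.0 / q)  # general entropy loss
    return sel, gel

# placeholder "posterior": lognormal draws, purely for illustration
rng = np.random.default_rng(1)
draws = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
sel, gel_neg = bayes_estimates(draws, q=-0.5)
_, gel_pos = bayes_estimates(draws, q=1.0)
# q = 1 gives the inverse posterior mean of 1/sigma, which by Jensen's
# inequality is never larger than the posterior mean
assert gel_pos < sel
```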

Optimal Multi-Model Ensemble Model Development Using Hierarchical Bayesian Model Based (Hierarchical Bayesian Model을 이용한 GCMs 의 최적 Multi-Model Ensemble 모형 구축)

  • Kwon, Hyun-Han;Min, Young-Mi;Hameed, Saji N.
    • Proceedings of the Korea Water Resources Association Conference / 2009.05a / pp.1147-1151 / 2009
  • In this study, we address the problem of producing probability forecasts of summer seasonal rainfall on the basis of hindcast experiments from an ensemble of GCMs (cwb, gcps, gdaps, metri, msc_gem, msc_gm2, msc_gm3, msc_sef and ncep). An advanced hierarchical Bayesian weighting scheme is developed and used to combine the seasonal hindcast ensembles of the nine GCMs over a 23-year hindcast period (1981 to 2003). The simplest approach to combining GCM forecasts is to weight each model equally, referred to as a pooled ensemble. This study proposes a more complex approach that weights the models spatially and seasonally based on past model performance for rainfall. The Bayesian approach to multi-model combination determines the relative weight of each GCM with climatology as the prior; the weights are chosen to maximize the likelihood score of the posterior probabilities. The individual GCM ensembles, simple poolings of three and six models, and the optimally combined multi-model ensemble are compared.
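The prior-to-posterior weighting idea in the abstract can be illustrated with a greatly simplified, non-hierarchical sketch: start from equal ("climatology-like") prior weights and update each model's weight by its hindcast log-likelihood. The log-likelihood values below are made up; the paper's actual scheme is hierarchical and varies spatially and seasonally:

```python
import numpy as np

def posterior_model_weights(log_likelihoods, prior=None):
    """Posterior weights for combining ensemble members: a prior over
    models updated by each model's hindcast log-likelihood (simplified
    Bayesian model-averaging sketch, not the paper's full scheme)."""
    ll = np.asarray(log_likelihoods, float)
    if prior is None:
        prior = np.full(ll.size, 1.0 / ll.size)  # equal prior weights
    log_post = np.log(prior) + ll
    log_post -= log_post.max()   # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

# three hypothetical models; the best-performing hindcast gets most weight
w = posterior_model_weights([-10.0, -12.0, -11.0])
assert np.isclose(w.sum(), 1.0) and w[0] > w[2] > w[1]
```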


FEASIBILITY MAPPING OF GROUND WATER YIELD CHARACTERISTICS USING WEIGHT OF EVIDENCE TECHNIQUE: A CASE STUDY

  • Heo, Seon-Hee;Lee, Ki-Won
    • Proceedings of the KSRS Conference / 2005.10a / pp.430-433 / 2005
  • In this study, the weight-of-evidence (WOE) technique, based on the Bayesian method, was applied to estimate groundwater yield characteristics in the Pocheon area of Kyungki-do. Groundwater preservation depends on many hydrogeologic factors, including hydrologic data, land-use data, topographic data, geological maps, and other natural as well as man-made features. All these data can be digitally collected and managed in a GIS database. In the WOE technique applied here, prior probabilities were estimated for the factors affecting yield: lineament, geology, drainage pattern (river system density), land use, and soil. We calculated the weights W+ and W- of each factor and estimated its contrast value. The groundwater yield characteristics were presented in the form of a posterior probability map, taking in-situ samples into consideration. We conclude that this technique is effective for feasibility mapping aimed at detecting groundwater-bearing zones and their spatial pattern.
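The weights W+ and W- and the contrast C mentioned in the abstract have standard definitions in the weight-of-evidence literature. A minimal sketch, where the four counts are hypothetical cell tallies from a GIS grid (B = evidence layer present, D = groundwater occurrence):

```python
import math

def weights_of_evidence(n_bd, n_b_nd, n_nb_d, n_nb_nd):
    """Weight-of-evidence weights for a binary evidence layer B and
    occurrences D: W+ = ln P(B|D)/P(B|~D), W- = ln P(~B|D)/P(~B|~D),
    and the contrast C = W+ - W-. Counts are (B,D), (B,~D), (~B,D), (~B,~D)."""
    p_b_d = n_bd / (n_bd + n_nb_d)        # P(B | D)
    p_b_nd = n_b_nd / (n_b_nd + n_nb_nd)  # P(B | ~D)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus

# hypothetical tallies: evidence present in 80 of 100 occurrence cells but
# only 200 of 1000 non-occurrence cells -> strongly positive contrast
wp, wm, c = weights_of_evidence(80, 200, 20, 800)
assert wp > 0 > wm and c > 0
```

A positive contrast indicates the evidence layer is positively associated with groundwater occurrence; summing the appropriate weights over all layers (under conditional independence) yields the posterior probability map the abstract describes.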


Relative Risk Aversion and Stochastic-Statistical Dominance (상대적(相對的) 위험(危險)과 추계적(推計的)-통계적(統計的) 우세법칙(優勢法則))

  • Lee, Dae-Joo
    • Journal of Korean Institute of Industrial Engineers / v.15 no.2 / pp.33-44 / 1989
  • This paper presents stochastic-statistical dominance rules that eliminate dominated alternatives, thereby reducing the set of satisficing alternatives to a manageable size so that the decision maker can choose the best alternative among them when neither the utility function nor the probability distribution of outcomes is exactly known. Specifically, it is assumed that only the characteristics of the utility function and the value function are known, and that the prior probabilities of the mutually exclusive states of nature are unknown but bounded in relative terms. First, the notion of relative risk aversion is used to describe the decision maker's attitude toward risk; it is defined with the acknowledgement that the decision maker's utility function is a composite of a cardinal value function and a utility function with respect to that value function. Then, stochastic-statistical dominance rules are developed to screen out dominated alternatives according to the decision maker's attitude toward risk, represented by the measure of relative risk aversion.
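The paper's rules generalize classical dominance screening to partially known utilities and priors. As a baseline, the textbook first-order stochastic dominance check on a common ordered outcome grid looks like this (this is the standard rule, not the paper's stochastic-statistical variant):

```python
import numpy as np

def first_order_dominates(probs_a, probs_b):
    """First-order stochastic dominance of A over B on a shared, ordered
    outcome grid: A dominates if its CDF never exceeds B's and lies
    strictly below it somewhere (illustrative dominance screen)."""
    cdf_a = np.cumsum(probs_a)
    cdf_b = np.cumsum(probs_b)
    return bool(np.all(cdf_a <= cdf_b + 1e-12)
                and np.any(cdf_a < cdf_b - 1e-12))

# outcomes 10 < 20 < 30: A shifts mass toward the high outcome, so A
# dominates B but not vice versa
assert first_order_dominates([0.1, 0.3, 0.6], [0.3, 0.3, 0.4])
assert not first_order_dominates([0.3, 0.3, 0.4], [0.1, 0.3, 0.6])
```

A screening procedure would apply such a pairwise test across all alternatives and discard any alternative dominated by another.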


Confidence Intervals for a Proportion in Finite Population Sampling

  • Lee, Seung-Chun
    • Communications for Statistical Applications and Methods / v.16 no.3 / pp.501-509 / 2009
  • The interval estimation of a binomial proportion has recently been revisited in the literature, mainly due to the erratic behavior of the coverage probability of the well-known Wald confidence interval. Various alternatives have been proposed; among them, the Agresti-Coull confidence interval, the Wilson confidence interval, and the Bayes confidence interval resulting from the noninformative Jeffreys prior were recommended by Brown et al. (2001). However, unlike the binomial case, little is known about the properties of these confidence intervals in finite population sampling. In this note, the properties of the confidence intervals are investigated in finite population sampling.
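The Wilson and Agresti-Coull intervals cited in the abstract have simple closed forms in the ordinary binomial setting (the finite-population adjustments studied in the note are not reproduced here):

```python
import math

def wilson_interval(x, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def agresti_coull_interval(x, n, z=1.96):
    """Agresti-Coull interval: add z^2/2 pseudo-successes and
    pseudo-failures, then apply the Wald formula."""
    n_tilde = n + z * z
    p_tilde = (x + z * z / 2) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

# at x = 0 the Wald interval collapses to zero width; these do not
lo_w, hi_w = wilson_interval(0, 20)
lo_ac, hi_ac = agresti_coull_interval(0, 20)
assert hi_w > 0 and hi_ac > 0
assert lo_ac <= lo_w and hi_ac >= hi_w  # AC always contains Wilson
```

In practice the Agresti-Coull endpoints are clipped to [0, 1], since the Wald-style half-width can push them outside the unit interval.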

Determination of Noise Threshold from Signal Histogram in the Wavelet Domain

  • Kim, Eunseo;Lee, Kamin;Yang, Sejung;Lee, Byung-Uk
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.156-160 / 2014
  • Thresholding in the frequency domain is a simple and effective noise-reduction technique, and the choice of threshold is critical to image quality. The optimal threshold minimizing the mean square error (MSE) is chosen adaptively in the wavelet domain; we utilize an expression for the MSE of the soft-thresholded signal together with the histograms of the wavelet coefficients of the original and noisy images. The histogram of the original signal is estimated by deconvolution, assuming that the probability density functions (pdfs) of the original signal and the noise are statistically independent. The proposed method is quite general in that it does not assume any prior for the source pdf.
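The soft-thresholding operator at the heart of the abstract is standard, and the MSE-optimal threshold can be found by brute force when a clean reference is available. The paper instead estimates the needed statistics from histograms, since the clean signal is unknown in practice; the signals below are made-up toy data:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold coefficients: shrink magnitudes by t and zero out
    anything whose magnitude is below t."""
    c = np.asarray(coeffs, float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def mse_optimal_threshold(noisy, clean, candidates):
    """Pick the candidate threshold whose soft-thresholded result is
    closest to the clean reference in MSE (oracle version; the paper
    estimates this criterion from coefficient histograms instead)."""
    clean = np.asarray(clean, float)
    errs = [np.mean((soft_threshold(noisy, t) - clean) ** 2)
            for t in candidates]
    return candidates[int(np.argmin(errs))]

y = soft_threshold([-3.0, -0.5, 0.0, 0.5, 3.0], t=1.0)
assert np.allclose(y, [-2.0, 0.0, 0.0, 0.0, 2.0])

# small noise on zeros, one large true coefficient: t = 1.0 kills the
# noise while barely biasing the large coefficient
clean = [0.0, 0.0, 0.0, 0.0, 10.0]
noisy = [0.8, -0.6, 0.7, -0.9, 10.5]
best = mse_optimal_threshold(noisy, clean, [0.0, 1.0, 2.0])
assert best == 1.0
```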