• Title/Summary/Keyword: Gibbs Algorithm


Non-destructive assessment of the three-point-bending strength of mortar beams using radial basis function neural networks

  • Alexandridis, Alex;Stavrakas, Ilias;Stergiopoulos, Charalampos;Hloupis, George;Ninos, Konstantinos;Triantis, Dimos
    • Computers and Concrete
    • /
    • Vol. 16, No. 6
    • /
    • pp.919-932
    • /
    • 2015
  • This paper presents a new method for assessing the three-point-bending (3PB) strength of mortar beams in a non-destructive manner, based on neural network (NN) models. The models are based on the radial basis function (RBF) architecture, and the fuzzy means algorithm is employed for training in order to boost prediction accuracy. Data for training the models were collected in a series of experiments, where cement mortar beams were subjected to various bending mechanical loads and the resulting pressure stimulated currents (PSCs) were recorded. The input variables to the NN models were then calculated by describing the PSC relaxation process through a generalization of Boltzmann-Gibbs statistical physics, known as non-extensive statistical physics (NESP). The NN predictions were evaluated using k-fold cross-validation and new data kept independent from training; the results show that the proposed method can successfully form the basis of a non-destructive tool for assessing bending strength. A comparison with a different NN architecture confirms the superiority of the proposed approach.
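The RBF architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the fuzzy means training algorithm is replaced by a fixed grid of centers, and the Gaussian width and toy sine-fitting data are assumptions chosen for clarity.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian radial basis activations for each sample/center pair."""
    # squared Euclidean distances between samples and centers
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Least-squares output weights for a fixed set of RBF centers."""
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

# toy 1-D example: fit y = sin(x) on [0, 2*pi]
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(0, 2 * np.pi, 10)[:, None]  # fixed grid, not fuzzy means
w = fit_rbf(X, y, centers, width=0.8)
err = np.abs(predict_rbf(X, centers, width=0.8, w=w) - y).max()
```

The key property exploited here is that, once the centers are fixed, the output weights are a linear least-squares problem; the fuzzy means algorithm the paper uses concerns how those centers are selected.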

Analysis on Topic Trends and Topic Modeling of KSHSM Journal Papers using Text Mining (텍스트마이닝을 활용한 보건의료산업학회지의 토픽 모델링 및 토픽트렌드 분석)

  • Cho, Kyoung-Won;Bae, Sung-Kwon;Woo, Young-Woon
    • The Korean Journal of Health Service Management
    • /
    • Vol. 11, No. 4
    • /
    • pp.213-224
    • /
    • 2017
  • Objectives : The purpose of this study was to analyze representative topics and topic trends of papers in the Korean Society of Health Service Management (KSHSM) Journal. Methods : We collected English abstracts and keywords of 516 papers in the KSHSM Journal from 2007 to 2017. We used Python web scraping programs to collect the papers from the Korea Citation Index web site, and RStudio software for topic analysis based on the latent Dirichlet allocation algorithm. Results : Nine topics were determined to be the optimal number by perplexity analysis, and the resulting nine topics for all the papers were extracted using the Gibbs sampling method. We refined the nine topics to five through careful consideration of the meaning of each topic and analysis of an intertopic distance map. In the topic trend analysis from 2007 to 2017, we verified that 'Health Management' and 'Hospital Service' were the two representative topics; 'Hospital Service' was the prevalent topic until 2011, but the ratios of the two topics became similar from 2012. Conclusions : We found that five was the best number of topics and that the topic trends reflected the main issues of the KSHSM Journal, such as the renaming of the society in 2012.
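The Gibbs sampling step used here to extract LDA topics can be sketched as a minimal collapsed Gibbs sampler in plain NumPy. This is an illustrative sketch, not the study's code: the hyperparameters alpha and beta, the iteration count, and the tiny two-theme corpus are all assumptions.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA: resample each token's topic
    from its full conditional given all other assignments."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), n_topics))  # doc-topic counts
    n_kw = np.zeros((n_topics, n_vocab))    # topic-word counts
    n_k = np.zeros(n_topics)                # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # initialize counts
        for i, w in enumerate(doc):
            n_dk[d, z[d][i]] += 1; n_kw[z[d][i], w] += 1; n_k[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # full conditional: p(k) ∝ (n_dk + a) * (n_kw + b) / (n_k + V*b)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    theta = (n_dk + alpha) / (n_dk.sum(axis=1, keepdims=True) + n_topics * alpha)
    phi = (n_kw + beta) / (n_kw.sum(axis=1, keepdims=True) + n_vocab * beta)
    return theta, phi  # doc-topic and topic-word distributions

# two clearly separated "themes" over a 4-word vocabulary
docs = [[0, 0, 1, 0, 1]] * 3 + [[2, 3, 2, 2, 3]] * 3
theta, phi = lda_gibbs(docs, n_topics=2, n_vocab=4)
```

In practice the number of topics is then chosen by rerunning the sampler for several candidate values and comparing perplexity (or coherence), as the study does.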

Topic Extraction and Classification Method Based on Comment Sets

  • Tan, Xiaodong
    • Journal of Information Processing Systems
    • /
    • Vol. 16, No. 2
    • /
    • pp.329-342
    • /
    • 2020
  • In recent years, emotional text classification has been one of the essential research topics in the field of natural language processing. It has been widely used in sentiment analysis of commentary corpora on commodities such as hotels. This paper proposes an improved W-LDA (weighted latent Dirichlet allocation) topic model to address the shortcomings of traditional LDA topic models. In the Gibbs sampling process of the W-LDA topic model, during the topic sampling of words and the calculation of word distribution expectations, an average weighted value is adopted to prevent topic-related words from being submerged by high-frequency words and to improve the distinctiveness of the topics. The model is further integrated with a support vector machine classifier based on the extracted high-quality document-topic distributions and topic-word vectors. Finally, an efficient integrated method is constructed for the analysis and extraction of emotional words, topic distribution calculation, and sentiment classification. Tests on real teaching evaluation data and a test set from a public comment corpus show that the proposed method has distinct advantages over two other typical algorithms in terms of topic differentiation, classification precision, and F1-measure.

Bayesian Variable Selection in Linear Regression Models with Inequality Constraints on the Coefficients (제한조건이 있는 선형회귀 모형에서의 베이지안 변수선택)

  • 오만숙
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 15, No. 1
    • /
    • pp.73-84
    • /
    • 2002
  • Linear regression models with inequality constraints on the coefficients are frequently used in economic models due to sign or order constraints on the coefficients. In this paper, we propose a Bayesian approach to selecting significant explanatory variables in linear regression models with inequality constraints on the coefficients. Bayesian variable selection requires computation of the posterior probability of each candidate model. We propose a method that computes all the necessary posterior model probabilities simultaneously. Specifically, we obtain posterior samples from the most general model via the Gibbs sampling algorithm (Gelfand and Smith, 1990) and compute the posterior probabilities using these samples. A real example is given to illustrate the method.
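The Gelfand-and-Smith-style Gibbs sampler used to draw posterior samples from the most general model can be illustrated for an unconstrained normal linear model. This is a sketch under assumed conjugate priors (flat on the coefficients, inverse-gamma(1, 1) on the error variance) and toy data; the paper's actual models additionally impose inequality constraints on the coefficients.

```python
import numpy as np

def gibbs_linreg(X, y, n_draws=2000, seed=0):
    """Gibbs sampler for y = X b + e, e ~ N(0, s2): alternate draws from
    the two full conditionals b | s2, y and s2 | b, y."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b_hat = XtX_inv @ X.T @ y          # OLS estimate
    s2 = 1.0
    draws_b, draws_s2 = [], []
    for _ in range(n_draws):
        # b | s2, y ~ N(b_hat, s2 * (X'X)^{-1})  (flat prior on b)
        b = rng.multivariate_normal(b_hat, s2 * XtX_inv)
        # s2 | b, y ~ Inv-Gamma(1 + n/2, 1 + RSS/2)
        rss = ((y - X @ b) ** 2).sum()
        s2 = 1.0 / rng.gamma(1.0 + n / 2.0, 1.0 / (1.0 + rss / 2.0))
        draws_b.append(b); draws_s2.append(s2)
    # discard the first half as burn-in
    return np.array(draws_b)[n_draws // 2:], np.array(draws_s2)[n_draws // 2:]

# toy data with known coefficients (2.0, -1.0) and noise sd 0.5
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = 2.0 - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=300)
post_b, post_s2 = gibbs_linreg(X, y)
```

Posterior model probabilities for variable selection are then computed from such samples of the most general model, rather than by running a separate sampler per candidate model.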

The Bayesian Approach of Software Optimal Release Time Based on Log Poisson Execution Time Model (포아송 실행시간 모형에 의존한 소프트웨어 최적방출시기에 대한 베이지안 접근 방법에 대한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 14, No. 7
    • /
    • pp.1-8
    • /
    • 2009
  • In this paper, we study the decision problem of determining optimal release policies after testing a software system in the development phase and transferring it to the user. Optimal software release policies that minimize the total average software cost of development and maintenance, under the constraint of satisfying a software reliability requirement, are generally accepted. Bayesian parametric inference for the model based on log Poisson execution time employs Markov chain tools (Gibbs sampling and the Metropolis algorithm). A numerical example using the T1 data set illustrates the estimation of the optimal software release time from both maximum likelihood estimation and Bayesian parametric estimation.
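Of the two Markov chain tools mentioned, the Metropolis algorithm can be sketched generically in a few lines. This is an illustrative sketch only: the standard-normal target density and step size below are assumptions for demonstration, not the paper's log Poisson execution time posterior.

```python
import numpy as np

def metropolis(log_post, x0, step=0.5, n_draws=5000, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) noise and
    accept with probability min(1, post(x') / post(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    draws = []
    for _ in range(n_draws):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        draws.append(x)
    return np.array(draws)

# illustrative target: standard normal log-density (up to a constant)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n_draws=20000)
draws = draws[5000:]  # discard burn-in
```

Because only the ratio of posterior densities is needed, the normalizing constant of the posterior never has to be computed, which is what makes Metropolis steps practical inside a Gibbs scheme for models like this one.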

Analysis of Research Trends in SIAM Journal on Applied Mathematics Using Topic Modeling (토픽모델링을 활용한 SIAM Journal on Applied Mathematics의 연구 동향 분석)

  • Kim, Sung-Yeun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • Vol. 21, No. 7
    • /
    • pp.607-615
    • /
    • 2020
  • The purpose of this study was to analyze the research status and trends related to industrial mathematics, based on text mining techniques, with a sample of 4,910 papers collected from the SIAM Journal on Applied Mathematics from 1970 to 2019. The R program was used to collect titles, abstracts, and keywords from the papers and to apply topic modeling techniques based on the LDA algorithm. Based on coherence scores for the collected papers, the optimal number of topics was determined to be 20, and the topics were extracted using the Gibbs sampling method. The main results were as follows. First, studies on industrial mathematics were conducted in a variety of mathematical fields, including computational mathematics, geometry, mathematical modeling, topology, discrete mathematics, and probability and statistics, with a focus on analysis and algebra. Second, 5 hot topics (mathematical biology, nonlinear partial differential equations, discrete mathematics, statistics, topology) and 1 cold topic (probability theory) were found based on time series regression analysis. Third, among the fields not reflected in the 2015 revised mathematics curriculum, numeral systems, matrices, vectors in space, and complex numbers were extracted as content to be covered in the high school mathematics curriculum. Finally, this study suggested strategies to activate industrial mathematics in Korea, described the study's limitations, and proposed directions for future research.

A Bayesian Method to Semiparametric Hierarchical Selection Models (준모수적 계층적 선택모형에 대한 베이지안 방법)

  • 정윤식;장정훈
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 14, No. 1
    • /
    • pp.161-175
    • /
    • 2001
  • Meta-analysis refers to quantitative methods for combining results from independent studies in order to draw overall conclusions. Hierarchical models, including selection models, are introduced and shown to be useful in such Bayesian meta-analysis. Semiparametric hierarchical models are proposed using the Dirichlet process prior. This rich class of models combines the information of independent studies, allowing investigation of variability both between and within studies, as well as the weight function. Here we investigate the sensitivity of results to unobserved studies by considering a hierarchical selection model that includes an unknown weight function, and we use Markov chain Monte Carlo methods to develop inference for the parameters of interest. This Bayesian model is applied to a meta-analysis of twelve studies comparing the effectiveness of two different types of fluoride in preventing cavities. A clinically informative prior is assumed. Summaries and plots of model parameters are analyzed to address questions of interest.


Bayesian Inference for Autoregressive Models with Skewed Exponential Power Errors (비대칭 지수멱 오차를 가지는 자기회귀모형에서의 베이지안 추론)

  • Ryu, Hyunnam;Kim, Dal Ho
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 27, No. 6
    • /
    • pp.1039-1047
    • /
    • 2014
  • An autoregressive model with normal errors is a natural model for fitting time series data. More flexible models that include the normal distribution as a special case are necessary because they can cover both normal and non-normal cases. The skewed exponential power distribution is a possible candidate for autoregressive model errors that may have tails lighter (platykurtic) or heavier (leptokurtic) than the normal, as well as skewness; in addition, the use of the skewed exponential power distribution can reduce the influence of outliers and consequently increase the robustness of the analysis. We use the SIR algorithm and a grid method for efficient Bayesian estimation.
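The SIR (sampling/importance resampling) algorithm mentioned above can be sketched generically: draw from a tractable proposal, weight each draw by the target-to-proposal density ratio, then resample with those weights. The normal proposal and shifted-normal target below are illustrative assumptions, not the paper's skewed exponential power posterior.

```python
import numpy as np

def sir(log_target, log_proposal, proposal_draws, n_resample, seed=0):
    """Sampling/importance resampling: reweight proposal draws by the
    target-to-proposal log-density ratio, then resample with replacement."""
    rng = np.random.default_rng(seed)
    log_w = log_target(proposal_draws) - log_proposal(proposal_draws)
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    w /= w.sum()                     # normalizing constants cancel here
    idx = rng.choice(len(proposal_draws), size=n_resample, p=w)
    return proposal_draws[idx]

# illustrative: approximate N(1, 0.5^2) using a wide N(0, 2^2) proposal
rng = np.random.default_rng(2)
prop = rng.normal(0.0, 2.0, size=100_000)
target = sir(lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2,   # target log-density
             lambda x: -0.5 * (x / 2.0) ** 2,           # proposal log-density
             prop, n_resample=10_000)
```

Because the weights are normalized before resampling, both log-densities can be specified only up to additive constants, which is what makes SIR convenient for posteriors known only up to a normalizing constant.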

A Comparison of Bayesian and Maximum Likelihood Estimations in a SUR Tobit Regression Model (SUR 토빗회귀모형에서 베이지안 추정과 최대가능도 추정의 비교)

  • Lee, Seung-Chun;Choi, Byongsu
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 27, No. 6
    • /
    • pp.991-1002
    • /
    • 2014
  • Both Bayesian and maximum likelihood methods are efficient for the estimation of regression coefficients of various Tobit regression models (see, e.g., Chib, 1992; Greene, 1990; Lee and Choi, 2013); however, some researchers have recognized that the maximum likelihood method tends to underestimate the disturbance variance, which has implications for the estimation of marginal effects and the asymptotic standard errors of estimates. The underestimation of the maximum likelihood estimate in a seemingly unrelated Tobit regression model is examined. A Bayesian method based on an objective noninformative prior is shown to provide proper estimates of the disturbance variance as well as the other regression parameters.

The Analysis of Changes in East Coast Tourism using Topic Modeling (토픽 모델링을 활용한 동해안 관광의 변화 분석)

  • Jeong, Eun-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • Vol. 13, No. 6
    • /
    • pp.489-495
    • /
    • 2020
  • The amount of data is increasing through various IT devices in a hyper-connected society where the fourth industrial revolution is progressing, and new value can be created by analyzing that data. This paper collected a total of 1,526 articles published from 2017 to 2019 in national dailies, economic newspapers, regional association outlets, and major broadcasters, using the keyword "(East Coast Tourism or East Coast Travel) and Gangwon-do" through Bigkinds. Topic modeling using the LDA algorithm implemented in the R language was performed to analyze the collected 1,526 articles. Keywords were extracted for each year from 2017 to 2019, and high-frequency keywords were classified and compared by year. The optimal number of topics was set to 8 using log likelihood and perplexity, and then 8 topics were inferred using the Gibbs sampling method. The inferred topics were Gangneung and the beach, Goseong and Mt. Geumgang, KTX and the Donghae-Bukbu line, weekend sea tours, Sokcho and the Unification Observatory, Yangyang and surfing, experience tours, and transportation network infrastructure. The change in articles on East Coast tourism was analyzed using the proportions of the eight inferred topics. As a result, the proportions of the Unification Observatory and Mt. Geumgang topics showed no significant change, the proportions of the KTX and experience tour topics increased, and the proportions of the other topics decreased in 2018 compared to 2017. In 2019, the proportions of the KTX and experience tour topics decreased, while the proportions of the other topics showed no significant change.