• Title/Summary/Keyword: Statistical Current Model


Macro-Model of Magnetic Tunnel Junction for STT-MRAM including Dynamic Behavior

  • Kim, Kyungmin; Yoo, Changsik
    • JSTS: Journal of Semiconductor Technology and Science / v.14 no.6 / pp.728-732 / 2014
  • A macro-model of the magnetic tunnel junction (MTJ) for spin transfer torque magnetic random access memory (STT-MRAM) has been developed. The macro-model describes dynamic behavior such as the state change of the MTJ as a function of the pulse width of the driving current and voltage. Statistical behavior has been included in the model to represent the variation of MTJ characteristics due to process variation. The macro-model has been implemented in Verilog-A.
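
The dynamic behavior the abstract describes, where the MTJ state flips only when the drive pulse is strong and long enough and devices spread due to process variation, can be caricatured in plain Python. The critical-charge switching rule and all parameter names below are hypothetical illustrations, not the paper's Verilog-A model:

```python
import random

class ToyMTJ:
    """Toy two-state MTJ: the state flips when the delivered charge
    |current| * pulse_width exceeds a critical charge, which is perturbed
    per device to mimic process variation. Illustrative only."""

    def __init__(self, critical_charge=1.0, sigma=0.0, seed=0):
        rng = random.Random(seed)
        # per-device statistical variation of the switching threshold
        self.qc = critical_charge * (1 + sigma * rng.gauss(0, 1))
        self.state = 0  # 0 = parallel (low R), 1 = anti-parallel (high R)

    def pulse(self, current, width):
        """Apply a write pulse; the current sign sets the direction."""
        if abs(current) * width >= self.qc:
            self.state = 1 if current > 0 else 0
        return self.state
```

A larger `sigma` widens the device-to-device spread of switching pulses, which is the kind of statistical behavior a Monte-Carlo circuit simulation would sample.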

Small Area Estimation Techniques Based on Logistic Model to Estimate Unemployment Rate

  • Kim, Young-Won; Choi, Hyung-a
    • Communications for Statistical Applications and Methods / v.11 no.3 / pp.583-595 / 2004
  • For the Korean Economically Active Population Survey (EAPS), we consider a composite estimator based on a logistic regression model to estimate the unemployment rate for small areas (Si/Gun). A small area estimation technique based on a hierarchical generalized linear model is also proposed to include a random effect that reflects the characteristics of the small areas. The proposed estimation techniques are applied to real domestic data from the Korean EAPS of Choongbuk. The MSEs of these estimators are estimated by the jackknife method, and the efficiencies of the small area estimators are evaluated by the RRMSE. As a result, the composite estimator based on the logistic model is much more efficient than the others, and it turns out that the composite estimator can produce reliable estimates under the current EAPS system.
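
A composite estimator of the kind the abstract refers to is, in its simplest form, a weighted blend of an area's direct survey estimate and a synthetic model-based estimate. A minimal Python sketch, assuming a single area-level covariate and hypothetical fitted logistic coefficients:

```python
import math

def logistic_synthetic(x, beta0, beta1):
    """Synthetic rate estimate from a (hypothetical) fitted logistic
    regression on an area-level covariate x."""
    return 1 / (1 + math.exp(-(beta0 + beta1 * x)))

def composite_estimate(direct, synthetic, w):
    """Composite small-area estimator: shrink the noisy direct survey
    estimate toward the model-based synthetic estimate; the weight w
    is typically chosen to minimize the estimated MSE."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must be in [0, 1]")
    return w * direct + (1 - w) * synthetic
```

Setting `w = 1` recovers the pure direct estimator; `w = 0` recovers the pure synthetic one, so the blend trades variance against model bias.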

Variational Bayesian inference for binary image restoration using Ising model

  • Jang, Moonsoo; Chung, Younshik
    • Communications for Statistical Applications and Methods / v.29 no.1 / pp.27-40 / 2022
  • In this paper, we focus on removing noise from binary images using a variational Bayesian method with the Ising model. The observation and the latent variable are the degraded image and the original image, respectively. The posterior distribution is built using a Markov random field and the Ising model, so estimating the posterior distribution amounts to reconstructing the degraded image. MCMC and variational Bayesian inference are two methods for estimating the posterior distribution; for the sake of computational efficiency, we adopt the variational technique. When the image is restored, an iterative method is used to solve the recursive problem. Since there are three model parameters in this paper, restoration is implemented using the VECM algorithm to find appropriate parameters at the current state. Finally, the restoration results shown are those with the maximum peak signal-to-noise ratio (PSNR) and evidence lower bound (ELBO).
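
A standard mean-field variational update for an Ising-prior denoiser illustrates the idea; this is a simplified stand-in for the paper's VECM procedure, and the coupling and data-term weights below are illustrative, not estimated:

```python
import math

def mean_field_denoise(y, beta=1.0, gamma=1.0, iters=20):
    """Mean-field variational denoising of a 2D binary image y with
    entries in {-1, +1} under an Ising prior. Each latent pixel's mean
    m[i][j] is updated from its neighbours (coupling beta) and the
    observed pixel (data weight gamma); the result is thresholded."""
    h, w = len(y), len(y[0])
    m = [[float(y[i][j]) for j in range(w)] for i in range(h)]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        nb += m[ni][nj]
                # variational mean: neighbour coupling plus data term
                m[i][j] = math.tanh(beta * nb + gamma * y[i][j])
    return [[1 if m[i][j] > 0 else -1 for j in range(w)] for i in range(h)]
```

With a strong enough coupling, an isolated flipped pixel is pulled back to agree with its neighbours, which is exactly the noise-removal behavior the posterior approximation encodes.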

Prediction of sharp change of particulate matter in Seoul via quantile mapping

  • Jeongeun Lee; Seoncheol Park
    • Communications for Statistical Applications and Methods / v.30 no.3 / pp.259-272 / 2023
  • In this paper, we suggest a new method for the prediction of sharp changes in particulate matter (PM10) using quantile mapping. To predict the current PM10 density in Seoul, we consider PM10 and precipitation at the Baengnyeong and Ganghwa monitoring stations observed a few hours earlier. For the PM10 distribution estimation, we use an extreme value mixture model, which combines a conventional probability distribution with the generalized Pareto distribution. Furthermore, we also consider a quantile generalized additive model (QGAM) to model the relationship between precipitation and PM10. To validate the proposed model, we conducted a simulation study and showed that the proposed method gives lower mean absolute differences. Real data analysis shows that the proposed method can give a more accurate prediction when there are sharp changes in PM10 in Seoul.
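
Empirical quantile mapping in its simplest form locates a value's quantile in one sample and reads off the same quantile in another. A minimal sketch, with no interpolation and no extreme-value tail model, unlike the paper's mixture approach:

```python
def quantile_map(x, source_sample, target_sample):
    """Map x through empirical quantiles: find x's mid-rank quantile
    in the source sample, then return the target sample's order
    statistic at that quantile. Minimal sketch."""
    src = sorted(source_sample)
    tgt = sorted(target_sample)
    rank = sum(1 for s in src if s <= x)   # empirical CDF count
    q = (rank - 0.5) / len(src)            # mid-rank quantile of x
    j = max(0, min(int(q * len(tgt)), len(tgt) - 1))
    return tgt[j]
```

In the paper's setting the source would be an upstream station's PM10 distribution and the target the Seoul distribution, with the generalized Pareto component refining the mapping in the extreme upper tail.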

The Development of Evaluation Criteria Model for Discriminating Specialized General Hospital (종합전문요양기관 인정기준 모형 개발)

  • Chun Ki Hong; Kang Hye-Young; Kang Dae Ryong; Nam Chung Mo; Lee Gye-Cheol
    • Health Policy and Management / v.15 no.4 / pp.46-64 / 2005
  • This study was conducted to verify the current criteria and classification system used to determine specialized general hospital status. We propose a new classification system that is simpler and more convenient than the current one. In the new classification system, the clinical procedure was chosen as the unit of analysis in order to reflect resource consumption and the complexity and degree of medical technology in determining specialized general hospitals. We developed a statistical model and applied it to 117 general hospitals that file their national insurance claims through electronic data interchange (EDI). Analysis based on 984 clinical procedures and medical facilities' characteristic variables discriminated the current specialized general hospitals without misclassification, which means that specialized general hospital status can be determined in a new way without the current complicated criteria. To cluster the same types of medical facilities using the 984 clinical procedures, we performed multidimensional scaling and divided the 117 hospitals into four groups along two axes: the variety of procedures and the proportion of high-technology procedures. One of the four groups was considered to be the specialized general hospitals. In the discriminant analysis, we extracted the proportions of 16 clinical procedures that are effective in discriminating specialized general hospitals and identified discriminant functions that include these variables. As a result, we identified two discriminant functions: one for the current discriminating system and the other for the new discriminating system of specialized general hospitals.
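
The discriminant-analysis step can be illustrated with a one-feature Fisher-style classifier; this is a toy stand-in for the paper's 16-variable discriminant functions, and the data in the usage below are hypothetical:

```python
def fisher_1d(group_a, group_b):
    """One-feature linear discriminant: classify a new value by which
    group mean it falls closer to, cutting at the midpoint of the two
    group means. A toy stand-in for a multi-variable discriminant
    function built from procedure proportions."""
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    cut = (mean_a + mean_b) / 2

    def classify(x):
        # x is on the A side iff it lies past the cut in A's direction
        near_a = (x - cut > 0) == (mean_a > mean_b)
        return "A" if near_a else "B"

    return classify
```

With, say, group A holding the high-technology procedure proportions of specialized general hospitals and group B those of the rest, a new hospital is classified by which side of the cutoff its proportion falls on.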

Statistical Analysis of Electrical Tree Inception Voltage, Breakdown Voltage and Tree Breakdown Time Data of Unsaturated Polyester Resin

  • Ahmad, Mohd Hafizi; Bashir, Nouruddeen; Ahmad, Hussein; Piah, Mohamed Afendi Mohamed; Abdul-Malek, Zulkurnain; Yusof, Fadhilah
    • Journal of Electrical Engineering and Technology / v.8 no.4 / pp.840-849 / 2013
  • This paper presents a statistical approach to analyzing the electrical tree inception voltage, electrical tree breakdown voltage, and tree breakdown time of unsaturated polyester resin subjected to AC voltage. The aim of this work was to show that the Weibull and lognormal distributions may not be the most suitable distributions for analyzing electrical treeing data. An investigation of the statistical distributions of electrical tree inception voltage, electrical tree breakdown voltage, and breakdown time data was performed on 108 leaf-like specimens. The test results revealed that the Johnson SB distribution is the best fit for the electrical tree inception voltage and tree breakdown time data, while the electrical tree breakdown voltage data are best fitted by the Wakeby distribution. The fitting was performed by means of the Anderson-Darling (AD) goodness-of-fit (GOF) test. Based on the fitting results, the Johnson SB and Wakeby distributions exhibit the lowest error values, respectively, compared to the Weibull and lognormal distributions.
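
The Anderson-Darling statistic used for the goodness-of-fit comparison can be computed directly from its order-statistic formula. A sketch for a fully specified CDF, using a standard normal purely for illustration (the paper fits Johnson SB, Wakeby, Weibull, and lognormal):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def anderson_darling(data, cdf):
    """Anderson-Darling A^2 statistic for a fully specified cdf:
    A^2 = -n - (1/n) * sum_i (2i-1) [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))].
    Smaller values indicate a better fit; critical values depend on
    the hypothesized family."""
    x = sorted(data)
    n = len(x)
    s = 0.0
    for i in range(1, n + 1):
        s += (2 * i - 1) * (math.log(cdf(x[i - 1]))
                            + math.log(1 - cdf(x[n - i])))
    return -n - s / n
```

Because A^2 weights the tails heavily, it is a natural choice for comparing candidate distributions on breakdown-type data, where tail behavior matters most.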

A Study on Bayesian p-values

  • Hwang, Hyungtae; Oh, Heejung
    • Communications for Statistical Applications and Methods / v.9 no.3 / pp.725-732 / 2002
  • P-values are often perceived as measurements of the degree of compatibility between the current data and the hypothesized model. In this paper, a new concept of Bayesian p-values is proposed and studied under non-informative prior distributions; these can be thought of as the Bayesian counterparts of classical p-values in the sense of using the concept of a significance level. The performance of the proposed Bayesian p-values is compared with that of the classical p-values through several examples.
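
A common Monte-Carlo route to a Bayesian p-value is the posterior predictive check: draw parameters from the posterior, simulate replicate datasets, and count how often the replicated test statistic is at least as extreme as the observed one. A generic sketch, where the normal model used in the test is a hypothetical example rather than the paper's setting:

```python
import random

def posterior_predictive_pvalue(data, draw_param, sample_data, stat,
                                n_rep=2000, seed=1):
    """Monte-Carlo posterior predictive p-value: the fraction of
    replicated datasets whose test statistic is at least as extreme
    as the observed one. draw_param(rng) samples a parameter from the
    posterior; sample_data(rng, theta, n) simulates a dataset of
    size n given that parameter."""
    rng = random.Random(seed)
    t_obs = stat(data)
    hits = 0
    for _ in range(n_rep):
        theta = draw_param(rng)
        replicate = sample_data(rng, theta, len(data))
        if stat(replicate) >= t_obs:
            hits += 1
    return hits / n_rep
```

A value near 0 or 1 signals incompatibility between the data and the hypothesized model; data generated from a well-fitting model yield values near 0.5.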

Development of Subsurface Spatial Information Model with Cluster Analysis and Ontology Model (온톨로지와 군집분석을 이용한 지하공간 정보모델 개발)

  • Lee, Sang-Hoon
    • Journal of the Korean Association of Geographic Information Studies / v.13 no.4 / pp.170-180 / 2010
  • With the development of the earth's subsurface space, the need for reliable subsurface spatial models such as cross-sections and boring logs is increasing. However, the ground mass is inherently uncertain: model generation suffers from the shortage of data and the absence of a geotechnical interpretation standard (non-statistical uncertainty) as well as from field environment variables (statistical uncertainty). Therefore, the interpretation of the data and the generation of the model are currently accomplished by highly trained experts. In this study, a geotechnical ontology model was developed using current expert experience and knowledge, and the information content was calculated in the ontology hierarchy. After the relative distance between information contents in the ontology model was combined with the distance between cluster centers, a cluster analysis that considered the geotechnical semantics was performed. In a comparative test of the proposed method, the k-means method, and expert interpretation, the proposed method was most similar to the expert interpretation and allows 3D-GIS visualization while easily handling massive data. We expect that the proposed method can generate a more reasonable subsurface spatial information model without the help of geotechnical experts.
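
The idea of combining a numeric distance with a semantic one can be sketched with a tiny k-medoids clustering, where the dissimilarity blends a numeric gap with an ontology-derived category distance. The categories, the semantic table, and the equal weighting below are all hypothetical:

```python
def combined_dist(a, b, sem, alpha):
    """alpha-weighted blend of numeric distance and an ontology-derived
    dissimilarity between categories. Items are (value, category) pairs."""
    return alpha * abs(a[0] - b[0]) + (1 - alpha) * sem[a[1]][b[1]]

def k_medoids(items, sem, k=2, alpha=0.5, iters=10):
    """Tiny k-medoids under the combined distance; returns a cluster
    label for each item. Sketch of semantics-aware clustering."""
    medoids = list(items[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for it in items:
            j = min(range(k),
                    key=lambda c: combined_dist(it, medoids[c], sem, alpha))
            clusters[j].append(it)
        # new medoid: the member minimizing total distance to its cluster
        new = []
        for c in range(k):
            group = clusters[c] or [medoids[c]]
            new.append(min(group, key=lambda m: sum(
                combined_dist(m, o, sem, alpha) for o in group)))
        if new == medoids:
            break
        medoids = new
    return [min(range(k), key=lambda c: combined_dist(it, medoids[c], sem, alpha))
            for it in items]
```

A medoid-based method is used here because a "mean" of ontology categories is not defined, whereas a representative member always is.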

Optimization of Robust Design Model using Data Mining (데이터 마이닝을 이용한 로버스트 설계 모형의 최적화)

  • Jung, Hey-Jin; Koo, Bon-Cheol
    • Journal of Korean Society of Industrial and Systems Engineering / v.30 no.2 / pp.99-105 / 2007
  • With the automated manufacturing processes that followed the development of computerized manufacturing technologies, products and quality characteristics produced in these processes are measured and recorded automatically. The large amount of data produced daily in the processes may not be efficiently analyzed by current statistical methodologies (i.e., statistical quality control and statistical process control methodologies) because of the dimensionality associated with many input and response variables. Although a number of statistical methods handle this situation, there is room for improvement. To overcome this limitation, we integrated data mining and the robust design approach in this research. We efficiently find the significant input variables connected with the response variables of interest by using data mining techniques, and we find the optimum operating condition of the process by using RSM and the robust design approach.
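
The screening step, keeping only inputs strongly associated with the response before the RSM/robust-design stage, can be sketched with simple correlation filtering. The threshold and data below are illustrative; the paper's actual mining technique may differ:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def screen_inputs(X, y, threshold=0.5):
    """Keep the input columns of X whose |correlation| with the
    response y exceeds the threshold: a stand-in for the data-mining
    screening step that reduces dimensionality before RSM."""
    return [j for j in range(len(X[0]))
            if abs(pearson([row[j] for row in X], y)) >= threshold]
```

The surviving columns would then feed a designed experiment and response surface fit, where the robust design criterion trades off the mean and variance of the response.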

Statistical Analysis of Bivariate Current Status Data with Informative Censoring Using Frailty Effects

  • Kim, Yang-Jin
    • The Korean Journal of Applied Statistics / v.25 no.1 / pp.115-123 / 2012
  • In animal tumorigenicity data, tumor onsets occur at several sites, and the onset times cannot be observed exactly. Instead, the existence of tumors is examined only at the death or sacrifice time of the animal. Such an incomplete data structure makes it difficult to investigate the effect of treatment on tumor onset times; in addition, the dependence should be accounted for when censoring due to death is related to tumor onset. A bivariate frailty effect is incorporated to model the bivariate tumor onsets and to connect death with tumor onset. For the inference of parameters, the EM algorithm is applied, and a real NTP (National Toxicology Program) dataset is analyzed as an illustrative example.
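
The EM idea for current status data can be illustrated in a toy univariate setting without frailty: for an exponential event time observed only as "event before/after inspection", the conditional expectations have closed forms. This is a textbook simplification, not the paper's bivariate frailty model:

```python
import math

def em_exponential_current_status(c, delta, iters=200, lam=1.0):
    """EM for an exponential event time T under current-status data:
    at inspection time c[i] we only observe delta[i] = 1 if the event
    has already occurred. E-step: impute E[T | observation]; M-step:
    lam = n / sum of imputed times (the complete-data MLE)."""
    n = len(c)
    for _ in range(iters):
        total = 0.0
        for ci, di in zip(c, delta):
            if di:
                # event before inspection: E[T | T <= c]
                total += 1 / lam - ci * math.exp(-lam * ci) / (1 - math.exp(-lam * ci))
            else:
                # event after inspection: memorylessness gives c + 1/lam
                total += ci + 1 / lam
        lam = n / total
    return lam
```

With a common inspection time c and an observed event fraction p, the limit matches the closed-form MLE lam = -ln(1 - p) / c, which makes the sketch easy to check.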