• Title/Summary/Keyword: variance errors.

237 search results

Allocation in Multi-way Stratification by Linear Programming

  • NamKung, Pyong;Choi, Jae-Hyuk
    • Communications for Statistical Applications and Methods, v.13 no.2, pp.327-341, 2006
  • Winkler (1990, 2001), Sitter and Skinner (1994), and Wilson and Sitter (2002) present a method that applies linear programming to designing surveys with multi-way stratification, primarily in situations where the desired sample size is less than, or only slightly larger than, the total number of stratification cells. A comparison is made with existing methods by illustrating the sampling schemes generated for specific examples, by evaluating the sample mean, variance estimates, and mean squared errors, and by simulating the sample mean for all methods. The required computations can, however, increase rapidly as the number of cells in the multi-way classification increases. In this article, their approach is applied to multi-way stratification using real data.
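
To make the idea concrete, here is a minimal sketch, not the authors' algorithm, of a cost-minimizing cell allocation for a two-way stratification solved with `scipy.optimize.linprog`. The 2×3 per-cell costs and the row/column sample-size targets are hypothetical, and the rounding/randomization step a real scheme would need is omitted.

```python
# A minimal sketch, not the authors' algorithm: cost-minimizing allocation of a
# sample across a 2x3 two-way stratification via linear programming. Per-cell
# costs and margin targets are hypothetical.
import numpy as np
from scipy.optimize import linprog

costs = np.array([[1.0, 1.2, 0.9],
                  [1.1, 0.8, 1.3]])   # hypothetical cost per sampled unit
row_targets = [30, 20]                # required sample size per row stratum
col_targets = [15, 20, 15]            # required sample size per column stratum

n_rows, n_cols = costs.shape
A_eq, b_eq = [], []
for i in range(n_rows):               # each row's cells must sum to its margin
    row = np.zeros(n_rows * n_cols)
    row[i * n_cols:(i + 1) * n_cols] = 1
    A_eq.append(row)
    b_eq.append(row_targets[i])
for j in range(n_cols):               # each column's cells must sum to its margin
    col = np.zeros(n_rows * n_cols)
    col[j::n_cols] = 1
    A_eq.append(col)
    b_eq.append(col_targets[j])

res = linprog(costs.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (n_rows * n_cols), method="highs")
print(res.x.reshape(n_rows, n_cols))  # cost-minimal cell allocation
```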

Bearing ultra-fine fault detection method and application (베어링 초 미세 결함 검출방법과 실제 적용)

  • Park, Choon-Su;Choi, Young-Chul;Kim, Yang-Hann;Ko, Eul-Seok
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference, 2004.11a, pp.1093-1096, 2004
  • Bearings are elementary machine components that carry loads and rotate. Excessive loads, among many other causes, can create and grow incipient faults in each component. Incipient faults can also arise from errors in the bearings' manufacturing or assembly processes. Detecting incipient faults as early as possible is essential for bearings in severe conditions, such as high speed or frequently varying loads. How early the faults can be detected depends on how well the detection algorithm extracts fault information from the measured signal. Fortunately, a bearing fault produces a periodic impulse train, which allows the faults to be found regardless of how much noise contaminates the signal. This paper presents the basic signal-processing idea and experimental results that demonstrate the effectiveness of the method.
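
The following is a minimal sketch, not the paper's exact method, of the standard envelope-spectrum way to expose a periodic impulse train buried in noise; the sampling rate, fault frequency, and resonance carrier are hypothetical.

```python
# A minimal sketch, not the paper's exact method: expose a periodic impulse
# train buried in noise via the envelope spectrum (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

fs, fault_freq = 10_000, 97.0                 # Hz; both hypothetical
t = np.arange(0, 1.0, 1 / fs)
impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
signal = 5 * impulses * np.sin(2 * np.pi * 3_000 * t)   # impacts ring a 3 kHz resonance
signal += 0.5 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(signal))            # demodulate the carrier
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = freqs < 150                            # look below 150 Hz for the fundamental
print(f"peak at {freqs[band][spectrum[band].argmax()]:.1f} Hz")  # ~ fault frequency
```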


Sensory Difference Testing: The Problem of Overdispersion and the Use of Beta Binomial Statistical Analysis

  • Lee, Hye-Seong;O'Mahony, Michael
    • Food Science and Biotechnology, v.15 no.4, pp.494-498, 2006
  • An increase in variance (overdispersion) can occur when a binomial statistical analysis is applied to sensory difference test data in which replicate sensory evaluations (tastings) and multiple evaluators (judges) are combined to increase the sample size. Such a practice can cause extensive Type I errors, leading to serious misinterpretations of the data, especially when a traditional simple binomial analysis is applied. Alternatively, beta-binomial analysis circumvents the problem of overdispersion. This brief review discusses the uses and computational methodology of beta-binomial analysis, along with evidence for the occurrence of overdispersion in practice.
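
A minimal sketch with hypothetical data: fitting a beta-binomial by maximum likelihood to replicated difference-test counts (correct responses out of `m` tastings per judge) and reading off the overdispersion parameter, using `scipy.stats.betabinom` for the likelihood.

```python
# A minimal sketch with hypothetical data: beta-binomial MLE for replicated
# difference-test counts; gamma near 0 means a simple binomial would suffice.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

m = 4                                          # tastings per judge
x = np.array([4, 4, 3, 1, 0, 4, 2, 0, 4, 1])   # hypothetical correct counts

def neg_loglik(params):
    a, b = np.exp(params)                      # log scale keeps a, b positive
    return -betabinom.logpmf(x, m, a, b).sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a, b = np.exp(fit.x)
mu, gamma = a / (a + b), 1 / (a + b + 1)       # mean proportion, overdispersion
print(f"mu = {mu:.3f}, gamma = {gamma:.3f}")
```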

The skew-t censored regression model: parameter estimation via an EM-type algorithm

  • Lachos, Victor H.;Bazan, Jorge L.;Castro, Luis M.;Park, Jiwon
    • Communications for Statistical Applications and Methods, v.29 no.3, pp.333-351, 2022
  • The skew-t distribution is an attractive family of asymmetrical heavy-tailed densities that includes the normal, skew-normal, and Student's-t distributions as special cases. In this work, we propose an EM-type algorithm for computing the maximum likelihood estimates for skew-t linear regression models with censored responses. In contrast with previous proposals, this algorithm uses analytical expressions at the E-step, as opposed to Monte Carlo simulations. These expressions rely on formulas for the mean and variance of a truncated skew-t distribution and can be computed using the R library MomTrunc. The standard errors, predictions of unobserved values of the response, and the log-likelihood function are obtained as by-products. The proposed methodology is illustrated through analyses of simulated data and a real data application on the Letter-Name Fluency test in Peruvian students.
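
Since MomTrunc is an R package and its formulas are analytical, the following is only a Monte Carlo cross-check in Python, not the paper's closed-form E-step: it estimates the mean and variance of a skew-t variable truncated to an interval, with hypothetical degrees of freedom, skewness, and truncation limits.

```python
# A minimal Monte Carlo cross-check, not the paper's closed-form moments:
# mean and variance of a truncated skew-t via the standard |Z0| construction.
import numpy as np

rng = np.random.default_rng(0)
nu, lam = 4.0, 2.0                     # hypothetical df and skewness
delta = lam / np.sqrt(1 + lam**2)

n = 1_000_000
z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
sn = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1   # skew-normal draws
x = sn / np.sqrt(rng.chisquare(nu, n) / nu)            # scale mixture -> skew-t

lo, hi = -1.0, 2.0                     # hypothetical truncation interval
kept = x[(x > lo) & (x < hi)]          # keep only draws inside the interval
print(kept.mean(), kept.var())         # truncated skew-t mean and variance
```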

STATISTICALLY PREPROCESSED DATA BASED PARAMETRIC COST MODEL FOR BUILDING PROJECTS

  • Sae-Hyun Ji;Moonseo Park;Hyun-Soo Lee
    • International Conference on Construction Engineering and Project Management, 2009.05a, pp.417-424, 2009
  • For a construction project to progress smoothly, effective cost estimation is vital, particularly in the conceptual and schematic design stages. In these early phases, even though initial estimates are highly sensitive to changes in project scope, owners require accurate forecasts that reflect the information they supply. Cost estimators therefore need effective estimation strategies. In practice, parametric cost estimation, which utilizes historical cost data, is the most commonly used method in these initial phases (Karshenas 1984, Kirkham 2007). Hence, compiling historical data on the parameters that govern cost variance is a prime requirement. However, data preprocessing to remove internal errors and abnormal values is needed before compilation. To address this issue, this research proposes a statistical methodology for data preprocessing and verifies that preprocessing has a positive impact on the accuracy and stability of estimates. Moreover, Statistically Preprocessed data Based Parametric (SPBP) cost models are developed from multiple regression equations, and their effectiveness is verified against conventional cost models.
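
A minimal sketch of the two-step idea, with hypothetical data and not the SPBP model itself: preprocess historical cost records with a simple 3-sigma rule, then fit a multiple regression cost model to the cleaned records.

```python
# A minimal sketch, not the SPBP model: 3-sigma preprocessing of hypothetical
# historical cost data, then a multiple regression cost model on what remains.
import numpy as np

rng = np.random.default_rng(1)
area = rng.uniform(1_000, 20_000, 200)           # gross floor area (m^2)
floors = rng.integers(2, 30, 200).astype(float)
cost = 120 * area + 5_000 * floors + rng.normal(0, 50_000, 200)
cost[:5] += 5_000_000                            # inject grossly abnormal records

z = np.abs((cost - cost.mean()) / cost.std())    # preprocessing: drop |z| > 3
keep = z < 3
X = np.column_stack([np.ones(keep.sum()), area[keep], floors[keep]])
beta, *_ = np.linalg.lstsq(X, cost[keep], rcond=None)
print(beta)                                      # intercept, cost per m^2, cost per floor
```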


CONTINUOUS DATA ASSIMILATION FOR THE THREE-DIMENSIONAL LERAY-α MODEL WITH STOCHASTICALLY NOISY DATA

  • Bui Kim, My;Tran Quoc, Tuan
    • Bulletin of the Korean Mathematical Society, v.60 no.1, pp.93-111, 2023
  • In this paper, we study a nudging continuous data assimilation algorithm for the three-dimensional Leray-α model, where measurement errors are represented by stochastic noise. First, we show that the stochastic data assimilation equations are well-posed. Then we provide explicit conditions on the observation density (resolution) and the relaxation (nudging) parameter that guarantee explicit asymptotic bounds, as time tends to infinity, on the error between the approximate solution and the actual solution corresponding to these measurements, in terms of the variance of the noise in the measurements.
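
A minimal sketch of the nudging mechanism on a toy system (Lorenz-63, not the 3D Leray-α model): the assimilating solution is relaxed toward noisy observations of one component with nudging parameter μ, and the error settles near the observation noise level. All parameter values are hypothetical.

```python
# A minimal sketch of nudging data assimilation on Lorenz-63 (a stand-in for
# the Leray-alpha model): relax the simulated state toward noisy observations.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8 / 3):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

rng = np.random.default_rng(2)
dt, steps, mu = 0.001, 50_000, 50.0
u = np.array([1.0, 1.0, 1.0])       # reference "truth" generating observations
v = np.array([8.0, -3.0, 20.0])     # assimilating solution, wrong initial data

for _ in range(steps):
    obs = u[0] + 0.05 * rng.standard_normal()    # noisy observation of x only
    u = u + dt * lorenz(u)                       # forward Euler step
    v = v + dt * (lorenz(v) - mu * np.array([v[0] - obs, 0.0, 0.0]))

print(np.linalg.norm(u - v))        # error settles near the observation noise level
```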

Using the Monte Carlo method to solve the half-space and slab albedo problems with Inönü and Anlı-Güngör strongly anisotropic scattering functions

  • Bahram R. Maleki
    • Nuclear Engineering and Technology, v.55 no.1, pp.324-329, 2023
  • Different types of deterministic solution methods have been used to solve the neutron transport equations corresponding to half-space and slab albedo problems. In such methods, the results contain truncation and discretization errors in addition to the error of the numerical solutions. In the present work, a non-analog Monte Carlo method is provided to simulate the half-space and slab albedo problems with Inönü and Anlı-Güngör strongly anisotropic scattering functions. For each scattering function, the method for sampling the direction of scattered neutrons is presented. The effects of beams with different angular dependencies and of different scattering parameters on the reflection probability are investigated using the developed Monte Carlo method. The validity of the method is also confirmed by comparison with published data.
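
A minimal sketch of a half-space albedo Monte Carlo with isotropic scattering only, and in plain analog form; the paper's contribution is the direction sampling for the Inönü and Anlı-Güngör anisotropic kernels and a non-analog scheme, neither of which is reproduced here. The scattering probability `c` and history count are hypothetical.

```python
# A minimal analog Monte Carlo sketch (isotropic scattering only): half-space
# albedo for a normally incident one-speed beam.
import numpy as np

rng = np.random.default_rng(3)
c, n_hist = 0.9, 200_000              # scattering probability, histories
reflected = 0

for _ in range(n_hist):
    x, mu = 0.0, 1.0                  # optical depth, direction cosine (inward +)
    while True:
        x += mu * -np.log(rng.random())   # exponential free flight
        if x < 0.0:
            reflected += 1                # escaped back through the surface
            break
        if rng.random() > c:              # absorbed at the collision site
            break
        mu = 2.0 * rng.random() - 1.0     # isotropic scattering: uniform cosine

print(reflected / n_hist)             # Monte Carlo albedo estimate
```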

Crafting an Automated Algorithm for Estimating the Quantity of Beam Rebar

  • Widjaja, Daniel Darma;Kim, Do-Yeong;Kim, Sunkuk
    • Journal of the Korea Institute of Building Construction, v.23 no.4, pp.485-496, 2023
  • Precise construction cost estimation is paramount to determining the total expense of a project before the construction phase begins. Nevertheless, manual quantification and cost estimation methods, which remain widely used, can produce imprecise estimates and subsequent financial loss. Given the fast-paced and efficiency-demanding nature of the construction industry, trustworthy quantity and cost estimation is essential. To mitigate these obstacles, this research develops an automated quantity estimation algorithm designed particularly for the main rebar of beams, which are known for their complicated reinforcement configurations. The exact quantity derived from the proposed algorithm differs from the manually approximated quantity by 10.27%, so significant errors and impending financial loss can be averted. The findings hold the potential to help construction firms quickly and accurately estimate rebar quantities while adhering strictly to applicable specifications and regulatory requirements.
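
A minimal sketch with hypothetical detailing rules, not the authors' algorithm: the main-rebar weight of one beam computed from its span, bar count, anchorage on both ends, and lap splices when bars exceed stock length. The anchorage and lap factors and the D25 unit weight are illustrative assumptions.

```python
# A minimal sketch, not the authors' algorithm: main-rebar weight for one beam
# under hypothetical anchorage/lap detailing rules.
D25_UNIT_WEIGHT = 3.98    # kg/m, KS nominal weight of a D25 bar

def beam_main_rebar_kg(span_m, n_bars, bar_dia_m=0.025,
                       anchorage_factor=40, lap_factor=50,
                       stock_length_m=12.0):
    anchorage = anchorage_factor * bar_dia_m        # development length per end
    length = span_m + 2 * anchorage
    n_splices = max(0, int(length // stock_length_m))
    length += n_splices * lap_factor * bar_dia_m    # add lap length per splice
    return n_bars * length * D25_UNIT_WEIGHT

print(f"{beam_main_rebar_kg(span_m=14.0, n_bars=6):.1f} kg")
```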

Comparative studies of different machine learning algorithms in predicting the compressive strength of geopolymer concrete

  • Sagar Paruthi;Ibadur Rahman;Asif Husain
    • Computers and Concrete, v.32 no.6, pp.607-613, 2023
  • The objective of this work is to predict the compressive strength of geopolymer concrete using four distinct machine learning approaches: gradient boosting machine (GBM), generalized linear model (GLM), extremely randomized trees (XRT), and deep learning (DL). Experiments were performed to collect the data then used to train the models. Compressive strength is the response variable, whereas curing days, curing temperature, silica fume, and nanosilica concentration are the input parameters. Several error and goodness-of-fit metrics, including root mean square error (RMSE), coefficient of correlation (CC), variance accounted for (VAF), the ratio of RMSE to the observations' standard deviation (RSR), and Nash-Sutcliffe efficiency (NSE), were computed to assess each algorithm. Among all the models investigated, GBM predicted the compressive strength of the geopolymer concrete with the highest precision.
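
A minimal sketch of the five metrics named above, computed for hypothetical observed versus predicted strengths; all formulas are the standard definitions.

```python
# A minimal sketch: RMSE, CC, VAF, RSR, and NSE for hypothetical
# observed vs. predicted compressive strengths (MPa).
import numpy as np

obs = np.array([32.1, 45.0, 38.2, 51.3, 41.7])
pred = np.array([30.8, 46.2, 37.5, 49.9, 43.0])

rmse = np.sqrt(np.mean((obs - pred) ** 2))                 # root mean square error
cc = np.corrcoef(obs, pred)[0, 1]                          # coefficient of correlation
vaf = (1 - np.var(obs - pred) / np.var(obs)) * 100         # variance accounted for, %
rsr = rmse / np.std(obs)                                   # RMSE / observations' std
nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
print(f"RMSE={rmse:.2f} CC={cc:.3f} VAF={vaf:.1f}% RSR={rsr:.3f} NSE={nse:.3f}")
```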

Quantitative assessment of Endorectal Ultrasonography by using GLCM Algorithm (GLCM알고리즘을 이용한 경직장 초음파 영상의 정량적 평가)

  • Nho, Da-Jung;Kang, Min-Ji;Kim, Yoo-Kyeong;Seo, Ah-Reum;Lee, In-Ho;Jeong, Hee-Seong;Jo, Jin-Yeong;Ko, Seong-Jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2015.05a, pp.383-387, 2015
  • Bowel and rectal diseases are on the rise owing to the irregular lifestyles and westernized eating habits of modern people; rectal cancer in particular accounts for 50% of all colon cancers. Early rectal cancer has no portion projecting from the surface, so unless the inside of the tissue is examined with ultrasound, it can be misdiagnosed as a rectal abscess. Even with ultrasonic diagnosis, a more accurate method is needed, because it is sometimes difficult to distinguish an abscess from rectal cancer depending on staging. Therefore, this study performed a quantitative analysis of rectal cancer and abscess images using a computer algorithm. For 20 cases each of normal, abscess, and cancer, an analysis region (50×50 pixels) was set and the GLCM algorithm was applied, comparing four single parameters in each image: autocorrelation, max probability, sum average, and sum variance. Consequently, three parameters (autocorrelation, max probability, and sum variance) detected lesions with 100% efficiency, while sum average achieved 95% for cancer and more than 90% for abscess. These parameters provide a useful criterion for distinguishing normal tissue, cancer, and abscess in the rectum, and are sufficiently applicable as a computer-aided diagnosis system for clinical use.
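
A minimal sketch, assuming an 8-bit grayscale ROI: the four GLCM parameters named above computed from a 50×50 analysis region with scikit-image. The sum-variance here centers on the sum average, one common variant of Haralick's definition; the random ROI is a stand-in for an actual ultrasound crop.

```python
# A minimal sketch: autocorrelation, max probability, sum average, and sum
# variance from a GLCM of a 50x50 8-bit region (stand-in random ROI).
import numpy as np
from skimage.feature import graycomatrix

roi = np.random.default_rng(4).integers(0, 256, (50, 50), dtype=np.uint8)
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, normed=True)
P = glcm[:, :, 0, 0]                    # normalized co-occurrence matrix

i, j = np.indices(P.shape)
autocorrelation = np.sum(i * j * P)
max_probability = P.max()
k = np.arange(2 * P.shape[0] - 1)       # possible values of i + j
p_sum = np.array([P[(i + j) == kk].sum() for kk in k])
sum_average = np.sum(k * p_sum)
sum_variance = np.sum((k - sum_average) ** 2 * p_sum)
print(autocorrelation, max_probability, sum_average, sum_variance)
```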
