• Title/Summary/Keyword: Equal Interval

Search Result 169, Processing Time 0.03 seconds

SEQUENTIAL CONFIDENCE INTERVALS WITH β-PROTECTION IN A NORMAL DISTRIBUTION HAVING EQUAL MEAN AND VARIANCE

  • Kim, Sung-Kyun;Kim, Sung-Lai;Lee, Young-Whan
    • Journal of applied mathematics &amp; informatics / v.23 no.1_2 / pp.479-488 / 2007
  • A sequential procedure is proposed to construct one-sided confidence intervals for a normal mean with guaranteed coverage probability and β-protection when the normal mean and variance are identical. First-order asymptotic properties of the sequential sample size are found. The derived results hold uniformly over the total parameter space or its subsets.

Discretization of Continuous-Valued Attributes for Classification Learning

  • Lee, Chang-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.6 / pp.1541-1549 / 1997
  • Many classification algorithms require that training examples contain only discrete values. To use these algorithms when some attributes have continuous numeric values, the numeric attributes must be converted into discrete ones. This paper describes a new way of discretizing numeric values using information theory. Our method is context-sensitive in the sense that it takes into account the value of the target attribute. The amount of information each interval gives to the target attribute is measured using Hellinger divergence, and the interval boundaries are chosen so that each interval contains as nearly equal an amount of information as possible. To compare our discretization method with current discretization methods, several popular classification data sets were selected for experiments. We use the backpropagation algorithm and ID3 as classification tools to compare the accuracy of our discretization method with that of other methods.
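The equal-information idea in this abstract can be sketched in a few lines. The snippet below is a simplified stand-in, not the paper's method: it weights each example by the self-information -log2 P(class) of its label and places cut points at weighted quantiles, whereas the paper scores intervals with Hellinger divergence. The function name and parameters are illustrative.

```python
import math
from collections import Counter

def equal_information_bins(values, labels, n_bins):
    """Place interval boundaries so each bin carries roughly the same
    total class information.  Each example is weighted by the
    self-information -log2 P(class) of its label and cuts fall at
    weighted quantiles; cuts are only placed between distinct values."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    prior = Counter(l for _, l in pairs)
    weight = {c: -math.log2(k / n) for c, k in prior.items()}
    total = sum(weight[l] for _, l in pairs)
    target = total / n_bins
    cuts, acc = [], 0.0
    for i, (v, l) in enumerate(pairs):
        acc += weight[l]
        if (len(cuts) < n_bins - 1 and acc >= target * (len(cuts) + 1)
                and i + 1 < n and pairs[i + 1][0] > v):
            # midpoint between the two distinct adjacent values
            cuts.append((v + pairs[i + 1][0]) / 2)
    return cuts
```

On two well-separated classes, the single cut lands between the clusters, which is also where an equal-frequency split would fall when the classes are balanced.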


FINITE SPEED OF PROPAGATION IN DEGENERATE EINSTEIN BROWNIAN MOTION MODEL

  • HEVAGE, ISANKA GARLI;IBRAGIMOV, AKIF
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.26 no.2 / pp.108-120 / 2022
  • We consider the qualitative behaviour of a generalization of Einstein's model of Brownian motion when the key parameter, the time interval of the free jump, degenerates. The fluid is characterised by the number of particles per unit volume (the fluid density) at the point of observation. The degeneration manifests in two scenarios: (a) flow of a highly dispersed fluid, like a non-dense gas, and (b) flow far away from the source, where the velocity of the flow is incomparably smaller than the gradient of the density. First, we show that both types of flow can be modeled using the Einstein paradigm. We then investigate the question: what features does the particle flow exhibit if the time interval of the free jump is inversely proportional to the density and its gradient? We show that in this scenario the flow exhibits a localization property: if at some moment t0 the gradient of the density, or the density itself, vanishes in a region, then for some T there is no flow in that region during the time interval [t0, t0 + T]. This directly links to Barenblatt's finite-speed-of-propagation property for the degenerate equation. The method of proof is very different from Barenblatt's and is based on the Ladyzhenskaya-De Giorgi iterative scheme and the Vespri-Tedeev technique. From the PDE point of view, it is assumed that a solution exists in an appropriate Sobolev-type space.

Comparison of the Power of Bootstrap Two-Sample Test and Wilcoxon Rank Sum Test for Positively Skewed Population

  • Heo, Sunyeong
    • Journal of Integrative Natural Science / v.15 no.1 / pp.9-18 / 2022
  • This research examines the power of the bootstrap two-sample test and compares it with the powers of the two-sample t-test and the Wilcoxon rank sum test through simulation. A positively skewed, heavy-tailed distribution was selected as the population distribution: the chi-square distribution with three degrees of freedom, χ²(3). For the two independent samples, the first sample was drawn from χ²(3). The second sample was drawn independently from the same χ²(3), and each sampled value x was transformed to d + ax. The shift d ranges from 0 to 5 in steps of 0.5, and the scale a from 1.0 to 1.5 in steps of 0.1. The powers of the three methods were evaluated for sample sizes 10, 20, 30, 40, and 50. The null hypothesis was equality of the two population medians for the bootstrap two-sample test and the Wilcoxon rank sum test, and equality of the two population means for the two-sample t-test. The powers were obtained using the R language: wilcox.test() in the base package for the Wilcoxon rank sum test, t.test() in the base package for the two-sample t-test, and boot.two.bca() in the wBoot package for the bootstrap two-sample test. The simulation results show that the power of the Wilcoxon rank sum test is the best for all 330 (n, a, d) combinations, the power of the two-sample t-test comes next, and the power of the bootstrap two-sample test comes last. These results suggest using classical inference methods, when widely accepted ones exist, in terms of time, cost, and sometimes power.
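As a rough illustration of the bootstrap two-sample test benchmarked above, here is a minimal sketch in Python rather than R. It uses the simple percentile bootstrap interval for the difference of medians, not the BCa interval that wBoot's boot.two.bca() computes; the function name and defaults are assumptions.

```python
import random
import statistics

def bootstrap_two_sample_test(x, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap test of H0: median(x) == median(y).
    Resample each sample with replacement, form the bootstrap
    distribution of the difference of medians, and reject H0 when the
    (1 - alpha) percentile interval excludes zero.  (boot.two.bca in
    R's wBoot uses the BCa interval instead of the percentile one.)"""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        bx = [rng.choice(x) for _ in x]   # resample x with replacement
        by = [rng.choice(y) for _ in y]   # resample y with replacement
        diffs.append(statistics.median(bx) - statistics.median(by))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return (lo, hi), not (lo <= 0.0 <= hi)
```

With two samples shifted far apart, the interval excludes zero and the test rejects; with identical samples it does not.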

Investigation of the Ethanol Fermentation Characteristics of K. fragilis by Semicontinuous Culture

  • 허병기;류장수;목영일
    • KSBB Journal / v.4 no.2 / pp.185-190 / 1989
  • Semicontinuous alcohol fermentation of Jerusalem artichoke by K. fragilis CBS 1555 was performed to investigate the effects of the effective dilution rate and the influent sugar concentration on the ethanol concentration and alcohol productivity at steady state. When the time interval for replacing fermentation broth with fresh influent was less than or equal to 1 hr, the effective dilution rate was found to be equal to the specific growth rate. Washout did not occur up to an effective dilution rate of 0.425 hr⁻¹, and the maximum alcohol productivity was around 5.5 g/l·hr. In this case, the effective dilution rate was 0.25 hr⁻¹ and the influent sugar concentration ranged from 85 g/l to 135 g/l.


Effective Sample Sizes for the Test of Mean Differences Based on Homogeneity Test

  • Heo, Sunyeong
    • Journal of Integrative Natural Science / v.12 no.3 / pp.91-99 / 2019
  • Many researchers in various fields use the two-sample t-test to confirm their treatment effects. The two-sample t-test is generally used for small samples and assumes that two independent random samples are selected from normal populations whose variances are unknown. Researchers often conduct the F-test for equality of variances before testing the treatment effects, and the test statistic or confidence interval for the two-sample t-test has two forms according to whether the variances are equal or not. Researchers using the two-sample t-test often want to know how large a sample they need to get reliable test results. This research gives some guidelines on sample size through simulation. The simulation was run for normal populations with different ratios of the two variances and different sample sizes (≤30). The results are as follows. First, if one has no idea whether the variances are equal but can assume the difference is moderate, it is safe to use a sample size of at least 20 in terms of the nominal level of significance. Second, the power of the F-test for equality of variances is very low when the sample sizes are small (<30), even when the ratio of the two variances is 2. Third, sample sizes of at least 10 are recommended for the two-sample t-test in terms of the nominal level of significance and the error limit.
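The kind of simulation described above can be sketched as follows: estimate the empirical type I error of the pooled two-sample t-test when the variance ratio is 2 but the means are equal. This is a minimal sketch under assumed settings (n = 20 per group, two-sided 5% critical value t ≈ 2.024 for df = 38 from the t-table), not the paper's simulation code.

```python
import random
import statistics

def t_test_size(n=20, var_ratio=2.0, n_sim=4000, t_crit=2.024, seed=7):
    """Estimate the empirical type I error rate of the pooled two-sample
    t-test when the two population means are equal but the variances
    differ by var_ratio.  t_crit is the two-sided 5% critical value for
    df = 2n - 2 (about 2.024 for df = 38)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(0.0, var_ratio ** 0.5) for _ in range(n)]
        # pooled variance and the classical equal-variance t statistic
        sp2 = ((n - 1) * statistics.variance(x) +
               (n - 1) * statistics.variance(y)) / (2 * n - 2)
        t = (statistics.fmean(x) - statistics.fmean(y)) / (sp2 * 2.0 / n) ** 0.5
        if abs(t) > t_crit:
            rejections += 1
    return rejections / n_sim
```

With equal group sizes the pooled t-test is fairly robust to a variance ratio of 2, so the estimated rate stays close to the nominal 5%.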

Research on a Modified Algorithm for Improving the Accuracy of a Random Forest Classifier that Automatically Identifies Arrhythmia

  • Lee, Hyun-Ju;Shin, Dong-Kyoo;Park, Hee-Won;Kim, Soo-Han;Shin, Dong-Il
    • The KIPS Transactions:PartB / v.18B no.6 / pp.341-348 / 2011
  • ECG (electrocardiogram) classification, a bio-signal field, is usually approached with classification algorithms such as SVM (Support Vector Machine) and MLP (Multilayer Perceptron). This study instead modified the Random Forest algorithm on the basis of the signal's characteristics and comparatively analyzed the accuracy of the modified algorithm against SVM and MLP to demonstrate its ability. The R-R interval extracted from the ECG is used in this study, and the results of previous studies that experimented on the same data are also comparatively analyzed. As a result, the modified RF classifier showed better accuracy than the SVM classifier, the MLP classifier, and the other studies' results. A band-pass filter is used to extract the R-R interval in the pre-processing stage; however, the wavelet transform, median filter, and finite-impulse-response filter are also often used in ECG experiments. After this study, selection of filters that efficiently remove baseline wander in the pre-processing stage and study of methods for correctly extracting the R-R interval are needed.
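The R-R interval extraction step mentioned above can be illustrated with a toy peak detector. This is a sketch under strong assumptions (a clean, already band-pass-filtered signal and a hand-chosen threshold), not the paper's pipeline; the function name and parameters are illustrative.

```python
def rr_intervals(signal, fs, threshold):
    """Toy R-peak detector: a sample is an R peak when it exceeds the
    threshold and is a local maximum.  Returns the R-R intervals in
    seconds, given the sampling rate fs in Hz."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] <= signal[i] > signal[i + 1]:
            peaks.append(i)
    # successive peak-to-peak distances, converted from samples to seconds
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
```

The resulting R-R interval sequence is the feature vector a classifier such as the modified Random Forest would consume.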

Development of Farm Management Diagnostic Checklist Reflecting Crop Characteristics

  • Choi, Don-Woo;Lim, Cheong-Ryong
    • Journal of Korean Society of Rural Planning / v.23 no.2 / pp.1-7 / 2017
  • The purpose of this study is to develop a farm management diagnostic checklist that can be applied to any crop. First, upper indexes and subordinate indexes were identified through a survey of experts, and weights for each subordinate index were calculated through Analytic Hierarchy Process (AHP) analysis. Second, the AHP analysis found marketing management (0.276) to be the most important of the upper indexes. Among the subordinate indexes, reflecting management evaluation (0.252) in management consciousness, quality enhancement efforts (0.332) in production management, locating new sales outlets (0.323) in marketing management, agricultural accounting (0.300) in finance management, and adjusting shipping dates (0.274) in risk management were found to be the highest. Third, interval division using the weight of farm receiving prices discriminated better than equal-interval division of the weights for each index. The newly developed checklist can be applied to any crop, as it uses indexes such as management consciousness, production management, marketing management, financial management, and risk management based on professional opinion. In addition, it allows an objective evaluation of the farm's management situation by utilizing the weight of farm receiving prices.
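The AHP weighting step can be illustrated with the geometric-mean approximation to the principal-eigenvector priorities. A minimal sketch; the paper's actual survey matrices and exact computation are not reproduced here.

```python
import math

def ahp_weights(matrix):
    """AHP priority weights from a (reciprocal) pairwise-comparison
    matrix via the geometric-mean method, a standard approximation to
    the principal-eigenvector priorities."""
    # geometric mean of each row, then normalize to sum to 1
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    s = sum(gm)
    return [g / s for g in gm]
```

For a 2x2 matrix saying index A is 3 times as important as index B, the method returns weights 0.75 and 0.25.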

Empirical Bayesian Prediction Analysis on Accelerated Lifetime Data

  • Cho, Geon-Ho
    • Journal of the Korean Data and Information Science Society / v.8 no.1 / pp.21-30 / 1997
  • In accelerated life tests, the failure time of an item is observed under a high stress level, and from these times the performance of items at the normal stress level is inferred. In this paper, for the exponential lifetime distribution with censored accelerated failure-time data, when the mean of the prior of the failure rate is known, we apply the empirical Bayesian method, using moment estimators to estimate the parameters of the prior distribution, and obtain the empirical Bayesian predictive density and predictive intervals for a future observation under the normal stress level.
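For the exponential model, the predictive interval has a closed form once a prior is fixed. The sketch below assumes a Gamma(a, b) prior on the failure rate, the usual conjugate choice, with a and b already available (the paper estimates them empirically by the method of moments); the function name is illustrative.

```python
def eb_predictive_interval(failure_times, a, b, level=0.95):
    """Predictive interval for a future exponential lifetime when the
    failure rate has a Gamma(a, b) prior.  With r observed failures and
    total time T, the posterior is Gamma(a + r, b + T) and the
    predictive distribution is Lomax: P(X > x) = (b'/(b' + x))**a',
    whose quantiles are available in closed form."""
    r, total = len(failure_times), sum(failure_times)
    a_post, b_post = a + r, b + total
    def quantile(p):
        # invert 1 - (b'/(b' + x))**a' = p for x
        return b_post * ((1.0 - p) ** (-1.0 / a_post) - 1.0)
    tail = (1.0 - level) / 2.0
    return quantile(tail), quantile(1.0 - tail)
```

With a = b = 1 and no data, the upper 97.5% predictive quantile is 1/0.025 - 1 = 39.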


Time Series Using Fuzzy Logic

  • Jung, Hye-Young;Choi, Seung-Hoe
    • Communications for Statistical Applications and Methods / v.15 no.4 / pp.517-530 / 2008
  • In this paper we introduce a time series model using triangular fuzzy numbers in order to construct a statistical relation for data that form a sequence of observations ordered in time. To estimate the proposed fuzzy model, we split a universal set that includes all observations into closed intervals, and determine the number and length of the intervals from the frequency of the observations that fall in each interval. We also forecast the data using the difference between observations when the fuzzified numbers are equal at successive times. To investigate the efficiency of the proposed model, we compare the ordinary and the fuzzy time series models using examples.
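A minimal sketch of the forecasting rule in this abstract: fuzzify observations into intervals and, when the last two observations share a fuzzy label, extrapolate their difference. For brevity the sketch uses equal-width intervals rather than the frequency-based partition the paper proposes; the names and the midpoint fallback rule are assumptions.

```python
def fuzzify(series, n_intervals):
    """Assign each observation to one of n_intervals equal-width closed
    intervals covering the data range.  (The paper sizes intervals by
    the frequency of observations; equal widths keep the sketch short.)"""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals
    return [min(int((v - lo) / width), n_intervals - 1) for v in series]

def forecast_next(series, n_intervals):
    """Forecast the next value: if the last two observations fall in the
    same interval, extrapolate their difference (the rule described in
    the abstract); otherwise fall back to the midpoint of the last
    observation's interval."""
    labels = fuzzify(series, n_intervals)
    if labels[-1] == labels[-2]:
        return series[-1] + (series[-1] - series[-2])
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals
    return lo + (labels[-1] + 0.5) * width
```

For the series [1, 2, 3, 4, 10, 11] with two intervals, the last two observations share a label, so the forecast extrapolates their difference to 12.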