• Title/Summary/Keyword: Minimum statistics


Blind channel equalization using fourth-order cumulants and a neural network

  • Han, Soo-whan
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.13-20
    • /
    • 2005
  • This paper addresses a new blind channel equalization method using fourth-order cumulants of channel inputs and a three-layer neural network equalizer. The proposed algorithm is robust to heavy Gaussian noise in the channel and does not require the channel to be minimum phase. The transmitted signals at the receiver are over-sampled to ensure that the channel is described by a full column-rank matrix; this converts a single-input/single-output (SISO) finite-impulse response (FIR) channel into a single-input/multi-output (SIMO) channel. Based on the properties of the fourth-order cumulants of the over-sampled channel inputs, an iterative algorithm is derived to estimate the deconvolution matrix that makes the overall transfer matrix transparent, i.e., reducible to the identity matrix by simple reordering and scaling. Using this estimated deconvolution matrix, which is the inverse of the over-sampled unknown channel, a three-layer neural network equalizer is implemented at the receiver. In simulation studies, the stochastic version of the proposed algorithm is tested with three-ray multi-path channels for on-line operation, and its performance is compared with a method based on conventional second-order statistics. Relatively good results, with fast convergence, are achieved even when the transmitted symbols are significantly corrupted by Gaussian noise.
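As a rough illustration of the fourth-order cumulant machinery mentioned in the abstract above, the following Python sketch over-samples a toy SISO FIR channel into a two-output SIMO model and computes a sample fourth-order cross-cumulant of the sub-channel outputs. The channel taps, BPSK input, oversampling factor, and noise level are all hypothetical, and the sketch does not implement the authors' iterative deconvolution or neural network equalizer; it only shows the quantity on which such algorithms are built (fourth-order cumulants are insensitive to additive Gaussian noise).

```python
import numpy as np

def fourth_order_cumulant(x, y, z, w):
    """Sample fourth-order cross-cumulant of zero-mean real sequences."""
    m4 = np.mean(x * y * z * w)
    return (m4
            - np.mean(x * y) * np.mean(z * w)
            - np.mean(x * z) * np.mean(y * w)
            - np.mean(x * w) * np.mean(y * z))

# Hypothetical three-ray FIR channel observed at twice the symbol rate
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)          # BPSK input (assumed)
h = np.array([1.0, 0.0, 0.5, 0.0, 0.2, 0.0])          # channel taps at 2x rate
received = np.convolve(np.repeat(symbols, 2), h, mode="same")
received += 0.1 * rng.standard_normal(received.size)  # Gaussian channel noise

# Polyphase split: one SISO channel at 2x rate -> 2-output SIMO model
sub = [received[i::2] - received[i::2].mean() for i in range(2)]
c4 = fourth_order_cumulant(sub[0], sub[0], sub[1], sub[1])
print("sample fourth-order cross-cumulant:", c4)
```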

The performance analysis of SA filters for images corrupted by biased noise (비대칭 노이즈 영상에서 SA 필터의 성능 분석)

  • Song, Jong-Kwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.2
    • /
    • pp.362-368
    • /
    • 2009
  • SA filters encompass a large class of filters based on order statistics as well as linear FIR filters. The class of SA filters is defined as multi-stage filters whose output is a linear combination of nonlinear (minimum, maximum, exclusive-OR) sub-filter outputs. According to the last-stage nonlinear sub-filter, SA filters are called SAMAX, SAMIN, and SAXOR filters. In this paper, optimal SAMAX and SAMED filters are designed for images corrupted by biased noise. The performance analysis of this experiment shows that SAMAX filters outperform SAMED filters for biased noise; in the case of unbiased noise, the SAMAX and SAMED filters give the same performance. This result leads to a new guideline for the application of SA filters.
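The following Python sketch illustrates the general SA-filter structure described above, namely an output formed as a linear combination of order-statistic (here, running-maximum) sub-filter outputs. The window size and the equal weights are placeholders; an actual optimal SAMAX filter would use weights trained for the noise model, which this sketch does not attempt.

```python
import numpy as np

def samax_filter(signal, window=5, weights=None):
    """Toy SAMAX-style filter: each output sample is a weighted sum of
    running-maximum sub-filter outputs over nested sub-windows.
    The weights would normally be optimized; here they are placeholders."""
    half = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    if weights is None:
        weights = np.ones(half + 1) / (half + 1)    # assumed equal weights
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        center = i + half
        # maxima over sub-windows of widths 1, 3, ..., window
        maxima = [padded[center - r:center + r + 1].max()
                  for r in range(half + 1)]
        out[i] = np.dot(weights, maxima)
    return out

noisy = [1.0, 1.2, 5.0, 1.1, 0.9, 1.0, 1.3, 4.8, 1.1]   # toy biased spikes
print(samax_filter(noisy))
```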

Selection of Signal-to-Noise Ratios through Simple Data Analysis (망목특성에서의 자료분석을 통한 SN비의 선택)

  • Lim, Yong Bin
    • Journal of Korean Society for Quality Management
    • /
    • v.22 no.4
    • /
    • pp.1-12
    • /
    • 1994
  • For quality improvement, Taguchi emphasizes the reduction of variation in the quality characteristic. Taguchi uses signal-to-noise (SN) ratios to achieve minimum dispersion of the quality characteristic while adjusting its location to a desired target value. At each setting of the design factors, the variance of the quality characteristic may be affected by the mean; in most cases, as the mean gets larger, the variance tends to increase. Taguchi's SN ratio corresponds to the case in which the variance is proportional to the square of the mean, but the variance can increase faster or slower than that. We propose to infer the relationship linking the variance to the mean through a simple data analysis technique and then to use a reasonable SN ratio.
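A simple way to carry out the kind of data analysis suggested above is to regress the log sample standard deviation on the log sample mean across design-factor settings: a slope near 1 indicates that the variance grows roughly with the square of the mean, the case for which Taguchi's nominal-the-best SN ratio is suited. The sketch below uses hypothetical replicated responses; it illustrates the idea, not the paper's exact procedure.

```python
import numpy as np

def check_sn_ratio_assumption(means, stds):
    """Regress log(s) on log(ybar) across design-factor settings.
    A slope near 1 means s is roughly proportional to the mean, i.e.
    variance proportional to mean squared, for which the nominal-the-best
    SN ratio 10*log10(ybar^2 / s^2) is appropriate."""
    x = np.log(np.asarray(means, dtype=float))
    y = np.log(np.asarray(stds, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Hypothetical replicated responses at four design-factor settings
runs = [np.array([10.2, 9.8, 10.5]),
        np.array([20.1, 21.3, 19.6]),
        np.array([39.5, 41.2, 40.3]),
        np.array([80.9, 78.4, 82.1])]
means = [r.mean() for r in runs]
stds = [r.std(ddof=1) for r in runs]
slope, _ = check_sn_ratio_assumption(means, stds)
sn_ratios = [10 * np.log10(m**2 / s**2) for m, s in zip(means, stds)]
print(f"estimated slope: {slope:.2f}", sn_ratios)
```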


Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.4
    • /
    • pp.325-337
    • /
    • 2017
  • In this paper, we propose a new long-term lifetime distribution with four parameters, set in a competing-risk scenario with decreasing, increasing, and unimodal hazard rate functions, namely the Weibull-Poisson long-term distribution. This distribution arises from a scenario of competing latent risks, in which the lifetime associated with each particular risk is not observable and only the minimum lifetime among all risks is observed, in a long-term context. However, it can also be used in any other situation as long as it fits the data well. The Weibull-Poisson long-term distribution contains the exponential-Poisson long-term distribution and the Weibull long-term distribution as particular cases. The properties of the proposed distribution are discussed, including its probability density, survival, and hazard functions and explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, simulation studies were performed to study the mean square error of the maximum likelihood estimates and to compare the performance of the proposed model with its particular cases. The Akaike information criterion, Bayesian information criterion, and likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases, showing its potential and competitiveness.
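To make the latent-risk construction concrete, the sketch below simulates lifetimes from a long-term model of the kind described above: a fraction of subjects never experience the event, and for the remainder the observed lifetime is the minimum of a zero-truncated Poisson number of Weibull latent-risk lifetimes. The parameter values, parameterization, and censoring scheme are hypothetical; this is a simulation illustration, not the paper's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def rwp_longterm(n, cure_prob, lam, shape, scale, censor_time):
    """Simulate long-term Weibull-Poisson-type lifetimes (sketch):
    with probability `cure_prob` a subject is a long-term survivor and is
    censored at `censor_time`; otherwise the observed lifetime is the
    minimum of M Weibull latent-risk lifetimes, M ~ zero-truncated Poisson."""
    times, events = np.empty(n), np.empty(n, dtype=int)
    for i in range(n):
        if rng.random() < cure_prob:                # long-term survivor
            times[i], events[i] = censor_time, 0
            continue
        m = 0
        while m == 0:                               # zero-truncated Poisson draw
            m = rng.poisson(lam)
        t = scale * rng.weibull(shape, size=m).min()  # minimum over latent risks
        times[i] = min(t, censor_time)
        events[i] = int(t <= censor_time)
    return times, events

times, events = rwp_longterm(500, cure_prob=0.3, lam=2.0,
                             shape=1.5, scale=10.0, censor_time=30.0)
print(times[:5], events.mean())
```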

A CONSTRUCTION OF TWO-WEIGHT CODES AND ITS APPLICATIONS

  • Cheon, Eun Ju;Kageyama, Yuuki;Kim, Seon Jeong;Lee, Namyong;Maruta, Tatsuya
    • Bulletin of the Korean Mathematical Society
    • /
    • v.54 no.3
    • /
    • pp.731-736
    • /
    • 2017
  • It is well known that there exists a constant-weight $[s\theta_{k-1},\,k,\,sq^{k-1}]_q$ code for any positive integer $s$, which is an $s$-fold simplex code, where $\theta_j=(q^{j+1}-1)/(q-1)$. This gives an upper bound $n_q(k,\,sq^{k-1}+d)\leq s\theta_{k-1}+n_q(k,d)$ for any positive integer $d$, where $n_q(k,d)$ is the minimum length $n$ for which an $[n,k,d]_q$ code exists. We construct a two-weight $[s\theta_{k-1}+1,\,k,\,sq^{k-1}]_q$ code for $1\leq s\leq k-3$, which gives a better upper bound $n_q(k,\,sq^{k-1}+d)\leq s\theta_{k-1}+1+n_q(k-1,d)$ for $1\leq d\leq q^s$. As another application, we prove that $n_q(5,d)=\sum_{i=0}^{4}\lceil d/q^i\rceil$ for $q^4+1\leq d\leq q^4+q$ for any prime power $q$.

The Determinants of Nursing Home Quality Indicators: A Multilevel Analysis (노인요양시설의 질 지표 결정요인에 관한 연구;다수준 분석)

  • Lee, Seung-Hee
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.12 no.3
    • /
    • pp.473-481
    • /
    • 2006
  • Purpose: The purpose of this study was to examine the factors affecting nursing home quality indicators. Methods: The subjects of this study were 377 residents living in nursing homes with more than 30 beds; each subject had lived in the facility for at least 3 months and was aged 65 or over. The data were analyzed using descriptive statistics, Pearson correlation, and multilevel analysis. Results: The main results of the study were as follows. First, the quality gap among nursing homes resulted from both institution-level and person-level factors. Second, the person-level factors affecting nursing home quality included ADL, whereas institution-level factors had no direct effect on the dependent variable. Third, an interaction effect between institution-level and person-level factors was found: ADL had less effect on quality in nursing homes that did more quality management than in those that did less. Fourth, the effect of ADL differed according to the level of care planning and satisfaction surveys. Conclusion: These results suggest that the determinants of nursing home quality indicators were ADL and quality management. This study will contribute to the application of nursing home quality indicators in Korea.
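A multilevel model with a cross-level interaction, of the general form used in the study above, could be sketched with statsmodels as below. The file, variable names, and data layout are hypothetical; the point is only to show a resident-level outcome regressed on person-level ADL, an institution-level quality-management score, and their interaction, with a random intercept per facility.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical resident-level data: `facility`, `adl`, `quality_mgmt`
# (institution-level score), and a `quality` indicator per resident.
df = pd.read_csv("nursing_home_residents.csv")   # assumed file layout

# Random intercept per facility, with a cross-level interaction between
# person-level ADL and institution-level quality management.
model = smf.mixedlm("quality ~ adl * quality_mgmt", data=df,
                    groups=df["facility"])
result = model.fit()
print(result.summary())
```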


The 17th Century Dry Period in the Time Series of the Monthly Rain and Snow Days of Seoul (서울의 강우와 강설 일수 자료에 나타난 17세기 말엽의 건조기)

  • Lim, Gyu-Ho;Choi, Eun-Ho;Koo, Kyosang;Won, Myoungsoo
    • Atmosphere
    • /
    • v.22 no.3
    • /
    • pp.381-386
    • /
    • 2012
  • The monthly numbers of days with rain or snow in Seoul extend back from the present to 1626. The rain and snow day counts are taken from historical records and combined with modern precipitation records from 1907 to the present. There are two distinct and abrupt changes in the time series, which allow us to divide the entire period into three sub-periods, CR-I, CR-II, and MR. For each sub-period, we calculated the basic statistics and the associated distributions. The analysis shows that Seoul, and possibly East Asia more broadly given the length of the dry spell, had a dry climate during the Maunder Minimum, when Europe experienced a cold climate. We also note relationships between the number of rain days and sunspot numbers in various frequency bands.
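Splitting the series at the two change points and computing basic statistics per sub-period, as described above, could look roughly like the pandas sketch below. The file name, column names, and break dates are placeholders, not the values estimated in the paper.

```python
import pandas as pd

# Hypothetical monthly series of rain-day counts; the break dates below
# are placeholders, not the change points identified in the paper.
s = pd.read_csv("seoul_rain_days.csv", parse_dates=["month"],
                index_col="month")["rain_days"]

periods = {"CR-I": s[:"1699-12"],
           "CR-II": s["1700-01":"1906-12"],
           "MR": s["1907-01":]}
for name, part in periods.items():
    print(name, part.mean(), part.std(), part.min(), part.max())
```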

Fuzzy Regression Model Using Trapezoidal Fuzzy Numbers for Re-auction Data

  • Kim, Il Kyu;Lee, Woo-Joo;Yoon, Jin Hee;Choi, Seung Hoe
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.72-80
    • /
    • 2016
  • A re-auction takes place when a bid winner defaults on the payment without a second-in-line purchase declaration being made, even after sales permission has been determined; it is a process of selling under the court's authority. The re-auction contract price of real estate is largely influenced by real estate market conditions, the value of the property, and the number of bidders. This paper aims to establish a statistical model for the number of bidders participating in apartment re-auctions. Diverse factors are taken into consideration, including the ratio of the minimum sales value at the original sale to that at re-auction, the number of bidders at the time of the original sale, the investment value of the real estate, and so forth. To handle ambiguous and vague factors, this paper represents the comparatively vague concepts of real estate value and number of bidders as trapezoidal fuzzy numbers. Two different methods based on least squares estimation are applied to the fuzzy regression model. The first method estimates the model by substitution after obtaining the estimators of the regression coefficients, and the other estimates it directly from the estimating procedure without substitution. These methods are applied to re-auction data, and an appropriate performance measure is provided to compare their accuracies.
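One simplified way to fit a regression with trapezoidal fuzzy responses is to represent each response by its four defining points and run a component-wise least squares fit, as in the sketch below. This only illustrates the data representation; it is not either of the two estimation methods proposed in the paper, and a component-wise fit can in principle violate the ordering of the four points.

```python
import numpy as np

def fit_trapezoidal_fuzzy_regression(X, Y):
    """Component-wise least squares for trapezoidal fuzzy responses.
    X: (n, p) crisp design matrix; Y: (n, 4) trapezoidal responses
    (left foot, left shoulder, right shoulder, right foot).
    Returns a (p+1, 4) coefficient matrix, one column per component."""
    n = X.shape[0]
    A = np.hstack([np.ones((n, 1)), X])              # add intercept
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coef

# Hypothetical data: predictor = ratio of minimum sales values,
# response = number of bidders as a trapezoidal fuzzy number.
X = np.array([[0.9], [0.8], [0.7], [0.65], [0.6]])
Y = np.array([[1, 2, 3, 4],
              [2, 3, 4, 5],
              [3, 4, 5, 6],
              [3, 5, 6, 8],
              [4, 6, 7, 9]], dtype=float)
print(fit_trapezoidal_fuzzy_regression(X, Y))
```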

GOODNESS-OF-FIT TEST USING LOCAL MAXIMUM LIKELIHOOD POLYNOMIAL ESTIMATOR FOR SPARSE MULTINOMIAL DATA

  • Baek, Jang-Sun
    • Journal of the Korean Statistical Society
    • /
    • v.33 no.3
    • /
    • pp.313-321
    • /
    • 2004
  • We consider the problem of testing cell probabilities in sparse multinomial data. Aerts et al. (2000) presented $T=\sum_{i=1}^{k}[p_i^{*}-E(p_i^{*})]^2$ as a test statistic, with the local least squares polynomial estimator $p_i^{*}$, and derived its asymptotic distribution. The local least squares estimator may produce negative estimates for cell probabilities. The local maximum likelihood polynomial estimator $\hat{p}_i$, however, guarantees positive estimates for cell probabilities and has the same asymptotic performance as the local least squares estimator (Baek and Park, 2003). When the cell probabilities differ considerably in size, giving the difference between the estimator and the hypothetical probability the same weight at every cell, as in their test statistic, is not a proper way to measure the total goodness of fit. We instead consider a Pearson-type goodness-of-fit test statistic, $T_1=\sum_{i=1}^{k}[p_i^{*}-E(p_i^{*})]^2/p_i$, and show that it follows an asymptotic normal distribution. We also investigate the asymptotic normality of $T_2=\sum_{i=1}^{k}[p_i^{*}-E(p_i^{*})]^2/p_i$ when the minimum expected cell frequency is very small.
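The Pearson-type statistic itself is straightforward to compute once smoothed cell-probability estimates are available, as the sketch below shows. The simple moving-average smoother stands in for the local polynomial estimator, and the expectation $E(p_i^{*})$ is approximated by the hypothesized probability under the null; both are simplifications, not the paper's construction.

```python
import numpy as np

def pearson_type_statistic(p_hat, p_hat_expected, p_hypothesized):
    """T1 = sum_i (p_hat_i - E[p_hat_i])^2 / p_i, where p_i are the
    hypothesized cell probabilities used as Pearson-type weights."""
    p_hat = np.asarray(p_hat, dtype=float)
    return np.sum((p_hat - np.asarray(p_hat_expected)) ** 2
                  / np.asarray(p_hypothesized))

# Toy sparse multinomial example: k = 20 cells, n = 40 observations,
# with a crude moving-average smoother standing in for the local
# polynomial estimator of the cell probabilities.
rng = np.random.default_rng(2)
k, n = 20, 40
p0 = np.full(k, 1.0 / k)                        # hypothesized uniform cells
counts = rng.multinomial(n, p0)
raw = counts / n
p_hat = np.convolve(raw, [0.25, 0.5, 0.25], mode="same")   # simple smoother
p_hat /= p_hat.sum()
print(pearson_type_statistic(p_hat, p0, p0))    # E[p_hat] approximated by p0
```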

A Statistical Approach to Examine the Impact of Various Meteorological Parameters on Pan Evaporation

  • Pandey, Swati;Kumar, Manoj;Chakraborty, Soubhik;Mahanti, N.C.
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.3
    • /
    • pp.515-530
    • /
    • 2009
  • Evaporation from surface water bodies is influenced by a number of meteorological parameters. The rate of evaporation is primarily controlled by incoming solar radiation, air and water temperature, wind speed, and relative humidity. In the present study, the influence of weekly meteorological variables such as air temperature, relative humidity, bright sunshine hours, wind speed, wind velocity, and rainfall on the rate of evaporation has been examined using 35 years (1971-2005) of meteorological data. Statistical analysis was carried out employing linear regression models. The developed regression models were tested for goodness of fit and multicollinearity, along with normality and constant-variance tests. These regression models were subsequently validated by comparing observed and predicted values using the meteorological data for the year 2005. The models were further checked with time-ordered residual plots to identify trends in the scatter, and new standardized regression models were then developed using standardized equations. The highest significant positive correlation was observed between pan evaporation and maximum air temperature. Mean air temperature and wind velocity have a highly significant influence on pan evaporation, whereas minimum air temperature, relative humidity, and wind direction have no such significant influence.
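The regression and standardization steps described above can be sketched with statsmodels as follows. The file name and column names are hypothetical, and the variable list is only indicative of the weekly predictors mentioned in the abstract.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly data with pan evaporation and meteorological predictors
df = pd.read_csv("weekly_weather.csv")
predictors = ["t_max", "t_mean", "t_min", "rel_humidity",
              "sunshine_hours", "wind_velocity", "rainfall"]

# Ordinary regression on the raw variables
X = sm.add_constant(df[predictors])
ols = sm.OLS(df["pan_evaporation"], X).fit()
print(ols.summary())

# Standardized regression: z-score all variables so the coefficients are
# directly comparable, mirroring the "standardized equations" step.
cols = predictors + ["pan_evaporation"]
z = (df[cols] - df[cols].mean()) / df[cols].std()
std_ols = sm.OLS(z["pan_evaporation"], z[predictors]).fit()
print(std_ols.params.sort_values(ascending=False))
```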