• Title/Summary/Keyword: Local sequence of parameters

SOME CHARACTERIZATIONS OF COHEN-MACAULAY MODULES IN DIMENSION > s

  • Dung, Nguyen Thi
    • Bulletin of the Korean Mathematical Society
    • /
    • v.51 no.2
    • /
    • pp.519-530
    • /
    • 2014
  • Let (R,m) be a Noetherian local ring and M a finitely generated R-module. For an integer s > -1, we say that M is Cohen-Macaulay in dimension > s if every system of parameters of M is an M-sequence in dimension > s, a notion introduced by Brodmann and Nhan [1]. In this paper, we give some characterizations of Cohen-Macaulay modules in dimension > s in terms of the Noetherian dimension of the local cohomology modules $H^i_m(M)$, the polynomial type of M introduced by Cuong [5], and the multiplicity e($\underline{x}$;M) of M with respect to a system of parameters $\underline{x}$.
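
For context, a short note recalling the classical characterizations of Cohen-Macaulayness, i.e., the case that the "dimension > s" notion generalizes; these are standard facts of local algebra, not results of the article itself.

```latex
% Classical characterizations of Cohen--Macaulayness for a finitely generated
% module M over a Noetherian local ring (R, m) with dim M = d (standard facts,
% recalled only as background for the "dimension > s" generalization):
\[
M \text{ is Cohen--Macaulay}
  \iff \operatorname{depth} M = \dim M
  \iff H^{i}_{\mathfrak m}(M) = 0 \ \text{for all } i < d
\]
\[
  \iff \text{every system of parameters of } M \text{ is an } M\text{-regular sequence}
  \iff \ell\bigl(M/\underline{x}M\bigr) = e(\underline{x};M)
       \ \text{for some system of parameters } \underline{x}.
\]
```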

Regularity of Maximum Likelihood Estimation for ARCH Regression Model with Lagged Dependent Variables

  • Hwang, Sun Y.
    • Journal of the Korean Statistical Society
    • /
    • v.29 no.1
    • /
    • pp.9-16
    • /
    • 2000
  • This article addresses the problem of maximum likelihood estimation in ARCH regression with lagged dependent variables. Some topics in the asymptotics of the model, such as the uniform expansion of the likelihood function and the construction of a class of MLEs, are discussed, and the regularity property of the MLE is obtained. The error process here is possibly non-Gaussian.
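
To make the objective concrete, here is a minimal sketch of the Gaussian (quasi-)log-likelihood for an ARCH(1) regression with one lagged dependent variable; the model form, parameter layout, and variance start-up are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def arch_reg_loglik(params, y, x):
    """Gaussian (quasi-)log-likelihood of an illustrative ARCH(1) regression
    with a lagged dependent variable (assumed form, not the paper's setup):
        y[t] = b0 + b1*x[t] + phi*y[t-1] + e[t],  e[t] = sqrt(h[t]) * z[t]
        h[t] = a0 + a1 * e[t-1]**2
    """
    b0, b1, phi, a0, a1 = params
    e = y[1:] - (b0 + b1 * x[1:] + phi * y[:-1])        # regression residuals
    if a0 <= 0 or a1 < 0:
        return -np.inf                                  # outside parameter space
    h = a0 + a1 * np.r_[np.var(e), e[:-1] ** 2]         # conditional variances
    return -0.5 * np.sum(np.log(2 * np.pi * h) + e ** 2 / h)

# MLE sketch: maximize by minimizing the negative log-likelihood, e.g.
#   from scipy.optimize import minimize
#   fit = minimize(lambda p: -arch_reg_loglik(p, y, x),
#                  x0=[0.0, 0.0, 0.1, 0.5, 0.1], method="Nelder-Mead")
```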

A Nonparametric Method for Nonlinear Regression Parameters

  • Kim, Hae-Kyung
    • Journal of the Korean Statistical Society
    • /
    • v.18 no.1
    • /
    • pp.46-61
    • /
    • 1989
  • This paper is concerned with the development of a nonparametric procedure for statistical inference about nonlinear regression parameters. A confidence region and a hypothesis testing procedure based on a class of signed linear rank statistics are proposed, and the asymptotic distributions of the test statistic both under the null hypothesis and under a sequence of local alternatives are investigated. Some desirable asymptotic properties, including the asymptotic relative efficiency, are discussed for various score functions.
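
To make the class of statistics concrete, here is a minimal sketch of a signed linear rank statistic computed from residuals under a hypothesized parameter value, using Wilcoxon scores; the score function and the way residuals are formed are illustrative assumptions, not the author's exact construction.

```python
import numpy as np

def signed_linear_rank_stat(resid, score=lambda u: u):
    """Signed linear rank statistic  S = sum_i sign(r_i) * a(R_i / (n + 1)),
    where R_i is the rank of |r_i| and a(.) is a score function
    (Wilcoxon scores a(u) = u by default).  Illustrative sketch only."""
    r = np.asarray(resid, dtype=float)
    n = len(r)
    ranks = np.argsort(np.argsort(np.abs(r))) + 1       # ranks of |r_i|, 1..n
    return np.sum(np.sign(r) * score(ranks / (n + 1.0)))

# Usage sketch: residuals under a hypothesized nonlinear model f(x, theta0);
# under H0 (and symmetric errors) the statistic is approximately centered at 0.
#   resid = y - f(x, theta0)
#   S = signed_linear_rank_stat(resid)
```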

A Reduced Complexity Post Filter to Simultaneously Reduce Blocking and Ringing Artifacts of Compressed Video Sequence (압축동영상의 블록화 및 링 현상 제거를 위한 저 계산량 Post필터)

  • Hong, Min-Cheol;Cha, Hyeong-Tae;Han, Heon-Su
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.6
    • /
    • pp.665-674
    • /
    • 2001
  • In this paper, a reduced-complexity filter that simultaneously suppresses the blocking and ringing artifacts of compressed video sequences is addressed. A new one-dimensional regularized function that incorporates smoothness with respect to neighboring pixels into the solution is defined, resulting in a very low-complexity filter. The proposed regularization function consists of two sub-functions that combine local data fidelity and local smoothing constraints. The regularization parameters that control the trade-off between local fidelity to the data and smoothness are determined from overhead information available at the decoder, such as the macro-block type and the quantization step size. In addition, the regularization parameters are designed to have a limited range and are stored in a look-up table, so the computational cost of determining the parameters can be reduced. The experimental results show the capability and efficiency of the proposed algorithm.
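
A minimal sketch of the kind of one-dimensional regularized smoothing the abstract describes, with the regularization weight looked up from the quantization step size; the weighting formula and the table values are assumptions for illustration, not the paper's actual filter.

```python
import numpy as np

# Hypothetical look-up table: regularization weight per quantization step size.
# The values are illustrative only; the paper derives its own limited-range table.
REG_LUT = {q: min(0.5, 0.02 * q) for q in range(1, 32)}

def post_filter_1d(row, q_step):
    """One-dimensional regularized smoothing of a row of decoded pixels.
    Each output pixel trades off fidelity to the decoded value against
    smoothness with respect to its two neighbors (illustrative sketch)."""
    lam = REG_LUT.get(q_step, 0.5)                  # weight from quantizer step
    x = np.asarray(row, dtype=float)
    left = np.roll(x, 1)
    left[0] = x[0]                                  # left neighbor (edge clamped)
    right = np.roll(x, -1)
    right[-1] = x[-1]                               # right neighbor (edge clamped)
    # Closed-form minimizer of  |y - x|^2 + lam * (|y - left|^2 + |y - right|^2)
    return (x + lam * (left + right)) / (1.0 + 2.0 * lam)

# Usage sketch: apply along the rows and then the columns of each decoded block.
```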

The Grammatical Structure of Protein Sequences

  • Bystroff, Chris
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.28-31
    • /
    • 2000
  • We describe a hidden Markov model, HMMSTR, for general protein sequences based on the I-sites library of sequence-structure motifs. Unlike the linear HMMs used to model individual protein families, HMMSTR has a highly branched topology and captures recurrent local features of protein sequences and structures that transcend protein family boundaries. The model extends the I-sites library by describing the adjacencies of different sequence-structure motifs as observed in the database, and achieves a great reduction in parameters by representing overlapping motifs in a much more compact form. The HMM attributes a considerably higher probability to coding sequence than does an equivalent dipeptide model, predicts secondary structure with an accuracy of 74.6% and backbone torsion angles better than any previously reported method, and predicts the structural context of beta strands and turns with an accuracy that should be useful for tertiary structure prediction. HMMSTR has been incorporated into a public, fully automated protein structure prediction server.
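
As a pointer to the machinery involved, here is a minimal sketch of scoring a sequence with a toy hidden Markov model via the forward algorithm; the tiny state space and the parameters are made up for illustration and are unrelated to HMMSTR's actual branched topology.

```python
import numpy as np

def forward_log_prob(obs, log_pi, log_A, log_B):
    """Log-probability of an observation sequence under an HMM
    (forward algorithm in log space).  Toy illustration, not HMMSTR itself.
    obs    : sequence of observation indices
    log_pi : (S,)   log initial-state probabilities
    log_A  : (S, S) log transition matrix
    log_B  : (S, V) log emission matrix"""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over previous states for each current state
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Usage sketch with a made-up 2-state, 3-symbol model:
#   log_pi = np.log([0.6, 0.4])
#   log_A  = np.log([[0.7, 0.3], [0.4, 0.6]])
#   log_B  = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
#   print(forward_log_prob([0, 2, 1], log_pi, log_A, log_B))
```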

Directional adjacency-score function for protein fold recognition

  • Heo, Mu-Young;Cheon, Moo-Kyung;Kim, Suhk-Mann;Chung, Kwang-Hoon;Chang, Ik-Soo
    • Interdisciplinary Bio Central
    • /
    • v.1 no.2
    • /
    • pp.8.1-8.6
    • /
    • 2009
  • Introduction: It is a challenge to design a protein score function that stabilizes the native structures of many proteins simultaneously. The coarse-grained description of proteins used to construct pairwise-contact score functions usually ignores the backbone directionality of protein structures. We propose a new two-body score function that stabilizes all native states of 1,006 proteins simultaneously. This two-body score function differs from the usual pairwise-contact functions in that it considers two adjacent amino acids at the two ends of each peptide bond, with the backbone directionality from the N-terminal to the C-terminal. The score is the corresponding propensity for a directional alignment of two adjacent amino acids with their local environments. Results and Discussion: We show that the construction of a directional adjacency-score function was achieved using 1,006 training proteins with sequence homology of less than 30%, which include representatives of all the different protein classes. After parameterizing the local environments of amino acids into 9 categories depending on three secondary structures and three kinds of hydrophobicity of amino acids, the 32,400 adjacency scores of amino acids could be determined by perceptron learning and protein threading. These could simultaneously stabilize all native folds of the 1,006 training proteins. When these parameters were tested on 382 new, distinct proteins with sequence homology of less than 90%, 371 (97.1%) proteins recognized their native folds. We also showed, using these parameters, that the retro sequences of the SH3 domain, the B domain of Staphylococcal protein A, and the B1 domain of Streptococcal protein G could not be stabilized to fold, which agrees with the experimental evidence.
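
A minimal sketch of how such a directional adjacency score could be accumulated over peptide bonds for a threaded sequence; the environment encoding and the score-table shape follow the abstract's counts (9 environments, 32,400 entries), but the data structures and indexing are assumptions.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
N_ENV = 9                     # 3 secondary structures x 3 hydrophobicity classes

def adjacency_score(seq, envs, table):
    """Directional adjacency score of a sequence threaded onto a structure.
    Sums, over each peptide bond i -> i+1 (N- to C-terminal), the score of the
    ordered pair (residue i in its environment, residue i+1 in its environment).
    `table` has shape (9, 20, 9, 20) = 32,400 entries, matching the abstract;
    the indexing scheme here is an illustrative assumption."""
    total = 0.0
    for i in range(len(seq) - 1):
        a, b = AA.index(seq[i]), AA.index(seq[i + 1])
        total += table[envs[i], a, envs[i + 1], b]
    return total

# Usage sketch with a random (untrained) score table:
#   table = np.random.randn(N_ENV, 20, N_ENV, 20)
#   print(adjacency_score("MKT", [0, 3, 3], table))
```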

Inversion of Geophysical Data Using Genetic Algorithms (유전적 기법에 의한 지구물리자료의 역산)

  • Kim, Hee Joon
    • Economic and Environmental Geology
    • /
    • v.28 no.4
    • /
    • pp.425-431
    • /
    • 1995
  • Genetic algorithms are so named because they are analogous to biological processes. The model parameters are coded in binary form. The algorithm then starts with a randomly chosen population of models called chromosomes. The second step is to evaluate the fitness values of these models, measured by the correlation between the data and the synthetics for a particular model. Then, the three genetic processes of selection, crossover, and mutation are performed upon the models in sequence. Genetic algorithms share the favorable characteristics of random Monte Carlo methods over local optimization methods in that they do not require linearizing assumptions or the calculation of partial derivatives, are independent of the misfit criterion, and avoid the numerical instabilities associated with matrix inversion. An additional advantage over conventional methods such as iterative least squares is that the sampling is global rather than local, thereby reducing the tendency to become entrapped in local minima and avoiding dependency on an assumed starting model.
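
A minimal sketch of the coding/selection/crossover/mutation loop the abstract outlines, on binary-coded model parameters; the fitness function, rates, and population size are placeholders, not the paper's geophysical inversion setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(fitness, n_bits=16, pop_size=40, n_gen=100,
                   p_cross=0.8, p_mut=0.01):
    """Basic binary-coded genetic algorithm: evaluate fitness, then apply
    selection, crossover, and mutation in sequence each generation.
    `fitness` maps a 0/1 chromosome to a number to be maximized (placeholder)."""
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gen):
        fit = np.array([fitness(c) for c in pop])
        # Selection: fitness-proportionate (roulette-wheel) sampling
        p = fit - fit.min() + 1e-12
        parents = pop[rng.choice(pop_size, size=pop_size, p=p / p.sum())]
        # Crossover: single cut point between consecutive pairs of parents
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                parents[i, cut:], parents[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # Mutation: flip each bit with a small probability
        flip = rng.random(parents.shape) < p_mut
        pop = np.where(flip, 1 - parents, parents)
    return pop[np.argmax([fitness(c) for c in pop])]

# Usage sketch: maximize the number of ones in the chromosome.
#   best = genetic_search(lambda c: c.sum())
```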

Signed Linear Rank Statistics for Autoregressive Processes

  • Kim, Hae-Kyung;Kim, Il-Kyu
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.2
    • /
    • pp.198-212
    • /
    • 1995
  • This study provides a nonparametric procedure for statistical inference about the parameters of stationary autoregressive processes. A confidence region and a hypothesis testing procedure based on a class of signed linear rank statistics are proposed, and the asymptotic distributions of the test statistic both under the null hypothesis and under a sequence of local alternatives are investigated. Some desirable asymptotic properties, including the asymptotic relative efficiency, are discussed for various score functions.
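
As a complement to the rank statistic sketched earlier, here is a minimal sketch of building a confidence region for an AR(1) coefficient by test inversion: keep the hypothesized values whose residuals yield a small standardized signed linear rank statistic. The standardization and critical value are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def ar1_confidence_region(x, phi_grid, crit=1.96, score=lambda u: u):
    """Illustrative confidence region for an AR(1) coefficient by inverting a
    signed linear rank test: for each phi0, form residuals x[t] - phi0*x[t-1],
    compute the standardized statistic, and keep phi0 if it is not rejected."""
    x = np.asarray(x, dtype=float)
    kept = []
    for phi0 in phi_grid:
        r = x[1:] - phi0 * x[:-1]                        # residuals under H0
        n = len(r)
        a = score((np.argsort(np.argsort(np.abs(r))) + 1) / (n + 1.0))
        s = np.sum(np.sign(r) * a)
        if abs(s) / np.sqrt(np.sum(a ** 2)) < crit:      # approx. N(0,1) under H0
            kept.append(phi0)
    return kept

# Usage sketch:
#   region = ar1_confidence_region(observed_series, np.linspace(-0.95, 0.95, 39))
```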

Test of Hypotheses based on LAD Estimators in Nonlinear Regression Models

  • Seung Hoe Choi
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.2
    • /
    • pp.288-295
    • /
    • 1995
  • In this paper, a hypothesis testing procedure based on the least absolute deviation estimators of the unknown parameters in nonlinear regression models is investigated. The asymptotic distribution of the proposed likelihood ratio test statistic is established both under the null hypothesis and under a sequence of local alternative hypotheses. The asymptotic relative efficiency of the proposed test relative to the classical test based on the least squares estimator is also discussed.
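
A minimal sketch of the least absolute deviation estimator that the test is built on, together with a likelihood-ratio-type comparison of restricted and unrestricted fits; the model function, optimizer, and restriction are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def lad_fit(f, x, y, theta0):
    """Least absolute deviation (LAD) estimator for a nonlinear regression
    y = f(x, theta) + error: minimize  sum_i |y_i - f(x_i, theta)|.
    Illustrative sketch; optimizer and starting value are assumptions."""
    objective = lambda th: np.sum(np.abs(y - f(x, th)))
    return minimize(objective, theta0, method="Nelder-Mead")

# Likelihood-ratio-type comparison (sketch): refit under the restriction of H0
# (here, the second parameter fixed at 0) and compare the minimized criteria.
#   full       = lad_fit(lambda x, th: th[0] * np.exp(th[1] * x), x, y, [1.0, 0.1])
#   restricted = lad_fit(lambda x, th: th[0] * np.ones_like(x),   x, y, [1.0])
#   stat_proxy = restricted.fun - full.fun   # larger values cast doubt on H0
```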

Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization (효과적인 2차 최적화 적용을 위한 Minibatch 단위 DNN 훈련 관점에서의 CNN 구현)

  • Song, Hwa Jeon;Jung, Ho Young;Park, Jeon Gue
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.23-30
    • /
    • 2016
  • This paper describes some implementation schemes for CNNs from the viewpoint of mini-batch DNN training for efficient second-order optimization. The parameters of a CNN are trained with the same procedure used to update the parameters of a DNN, by simply arranging an input image as a sequence of local patches, which is effectively equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied straightforwardly to train the parameters of a CNN. In both image recognition on the MNIST DB and syllable-level automatic speech recognition, the proposed CNN implementation scheme shows better performance than one based on a DNN.
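
A minimal sketch of the conversion the abstract describes: unrolling an input image into a mini-batch of local patches (im2col style) so that one convolutional layer becomes an ordinary fully connected, DNN-style matrix multiply over that mini-batch. The sizes and names are illustrative, not the paper's implementation.

```python
import numpy as np

def image_to_patches(img, k):
    """Arrange an image of shape (H, W) into a 'mini-batch' of k*k local
    patches, one row per output position, so that a k x k convolution becomes
    a plain matrix multiply (illustrative im2col-style conversion)."""
    H, W = img.shape
    rows = [img[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(rows)                         # shape: (num_positions, k*k)

# One convolutional layer as a DNN-style layer over the patch mini-batch:
#   img      = np.random.randn(28, 28)            # e.g. an MNIST-sized image
#   patches  = image_to_patches(img, k=5)         # (24*24, 25)
#   weights  = np.random.randn(25, 8)             # 8 filters of size 5x5
#   features = patches @ weights                  # (576, 8), the conv forward pass
```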