• Title/Summary/Keyword: f-invariant


The Null Distribution of the Likelihood Ratio Test for a Mixture of Two Gammas

  • Min, Dae-Hee
    • Journal of the Korean Data and Information Science Society
    • /
    • v.9 no.2
    • /
    • pp.289-298
    • /
    • 1998
  • We investigate the null distribution of the likelihood ratio test (LRT) of the null hypothesis that a sample comes from a single gamma distribution with unknown shape and scale, against the alternative that it comes from a mixture of two gammas, each with an unknown shape and an unknown (but equal) scale. To obtain stable maximum likelihood estimates (MLE) for the two-component gamma mixture, the EM (Dempster, Laird, and Rubin (1977)) and Modified Newton (Jensen and Johansen (1991)) algorithms were implemented. Based on EM, we derived a simple likelihood equation for each parameter and obtained stable solutions with the Modified Newton algorithm. A simulation study was conducted to investigate the distribution of the LRT for sample sizes n = 25, 50, 75, 100, 150, 200, 300, 400, 500 with 2500 replications. To model the small-sample null distribution of the LRT, we considered a gamma distribution with shape parameter equal to 1 + f(n) and scale parameter equal to 2. The simulation results indicate that the null distribution is essentially invariant to the value of the shape parameter. Modeling of the null distribution indicates that it is well approximated by a gamma distribution with shape parameter $0.927+1.18/\sqrt{n}$ and scale parameter 2.16.
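
As a quick illustration of the reported approximation (not part of the original paper), the sketch below computes approximate critical values of the LRT statistic from the fitted gamma null distribution with shape $0.927+1.18/\sqrt{n}$ and scale 2.16; the sample sizes and the 5% level are illustrative choices.

```python
# Sketch: approximate LRT critical values from the gamma approximation
# reported in the abstract (shape = 0.927 + 1.18/sqrt(n), scale = 2.16).
# The sample sizes and the 5% level below are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def lrt_critical_value(n, alpha=0.05):
    shape = 0.927 + 1.18 / np.sqrt(n)   # fitted shape parameter
    scale = 2.16                        # fitted scale parameter
    return gamma.ppf(1.0 - alpha, a=shape, scale=scale)

for n in (25, 100, 500):
    print(f"n = {n:4d}: approximate 5% critical value = {lrt_critical_value(n):.3f}")
```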


Double K-Means Clustering (이중 K-평균 군집화)

  • 허명회
    • The Korean Journal of Applied Statistics
    • /
    • v.13 no.2
    • /
    • pp.343-352
    • /
    • 2000
  • In this study, the author proposes a nonhierarchical clustering method, called "Double K-Means Clustering", which clusters multivariate observations with the following algorithm. Step I: Carry out ordinary K-means clustering and obtain k temporary clusters with sizes $n_1, \ldots, n_k$, centroids $c_1, \ldots, c_k$, and pooled covariance matrix S. Step II-1: Allocate the observation $x_i$ to the cluster $f$ if it satisfies ....., where N is the total number of observations, for $i = 1, \ldots, N$. Step II-2: Update the cluster sizes $n_1, \ldots, n_k$, centroids $c_1, \ldots, c_k$, and pooled covariance matrix S. Step II-3: Repeat Steps II-1 and II-2 until the change becomes negligible. Double K-means clustering is nearly optimal under a mixture of k multivariate normal distributions with a common covariance matrix. It is also nearly affine invariant, with the data-analytic implication that variable standardization is not strictly required. The method is numerically demonstrated on Fisher's iris data.
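
A minimal sketch of the two-stage idea is given below. The exact allocation rule of Step II-1 is elided in the abstract, so the code assumes a Mahalanobis-type criterion with the pooled covariance S and cluster proportions $n_j/N$, which is what near-optimality under a common-covariance normal mixture would suggest; it is not the paper's exact rule.

```python
# Sketch of double K-means: ordinary K-means (Step I) followed by iterative
# reallocation with a pooled covariance matrix (Steps II-1..II-3).
# The allocation criterion is an assumption (Mahalanobis distance penalized
# by log cluster proportion); empty-cluster handling is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def double_kmeans(X, k, n_iter=20, random_state=0):
    N, p = X.shape
    # Step I: ordinary K-means for temporary clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X).labels_
    for _ in range(n_iter):
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        sizes = np.array([(labels == j).sum() for j in range(k)])
        # pooled within-cluster covariance matrix S
        S = sum((X[labels == j] - centroids[j]).T @ (X[labels == j] - centroids[j])
                for j in range(k)) / (N - k)
        S_inv = np.linalg.inv(S)
        # Step II-1 (assumed rule): squared Mahalanobis distance to each centroid,
        # penalized by the log cluster proportion
        d = np.array([np.einsum('ij,jk,ik->i', X - c, S_inv, X - c) for c in centroids]).T
        new_labels = np.argmin(d - 2.0 * np.log(sizes / N), axis=1)
        if np.array_equal(new_labels, labels):   # Step II-3: stop when stable
            break
        labels = new_labels                      # Step II-2: update and repeat
    return labels

labels = double_kmeans(load_iris().data, k=3)    # demo on Fisher's iris data
print(np.bincount(labels))
```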


Microarray analysis of hypoxia-induced changes in gene expression in BV-2 microglial cells (BV-2 microglia 세포주에서 저산소증의 유전자 발현에 대한 마이크로어레이 분석)

  • Kim, Bum-Shik;Seo, Jung-chul
    • Journal of Acupuncture Research
    • /
    • v.20 no.4
    • /
    • pp.85-92
    • /
    • 2003
  • Objective: Hypoxia occurring during ischemia is known to induce cytotoxicity, but the precise mechanism has not yet been elucidated. This study examined the mechanism of cerebral-ischemia-induced cytotoxicity at the level of gene expression. Methods: Microarray analysis was performed to profile gene expression in the BV-2 microglial cell line after 12 hours of hypoxia. Results: Under hypoxia, expression of cathepsin F, growth factor independent 1, calcitonin/calcitonin-related poly, leucine-rich repeat LGI family membrane, doublecortin, cyclohydrolase 1, Ia-associated invariant chain, carbohydrate kinase-like, and erythrocyte protein band 4.1-like 3 increased more than threefold compared with normoxia. In contrast, expression of neuronal guanine nucleotide exchange factor, Bcl-2-related ovarian killer protein, chemokine (C-X-C motif) ligand 5, RNA binding motif protein 3, interleukin 2 receptor alpha chain, crystallin zeta, cytochrome P450 subfamily IV B, asparagine synthetase, and moesin decreased to 0.2-fold or less. Conclusion: These results may serve as basic data for identifying the genes involved in hypoxia and for elucidating the mechanisms of hypoxia-related disorders such as cerebral infarction.


Image Registration Based On Statistical Descriptors In Frequency Domain

  • Chang, Min-hyuk;Ahmad, Muhammad-Bilal;Lee, Cheul-hee;Chun, Jong-hoon;Park, Seung-jin;Park, Jong-an
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1531-1534
    • /
    • 2002
  • Shape description and its corresponding matching algorithm are among the main concerns in MPEG-7. In this paper, a new method is proposed for shape registration of 2D objects for MPEG-7. Shapes are recognized using the Hu statistical moments in the frequency domain. The Hu moments are moment-based descriptors of planar shapes, which are invariant under general translation, rotation, scaling, and reflection transformations. The image is transformed into the frequency domain using the Fourier transform. Annular and radial wedge distributions for the power spectra are extracted. Statistical features (Hu moments) are computed for the power spectrum of each selected transformed individual feature. The Euclidean distances of the extracted moment descriptors are computed with respect to the shapes in the database, and the minimum Euclidean distance gives the candidate for the matched shape. Simulation results are presented for the MPEG-7 test shapes.
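
A minimal sketch of the general pipeline the abstract describes, assuming OpenCV's Hu-moment implementation and a plain magnitude spectrum; the annular/radial wedge feature extraction of the paper is not reproduced, and the toy images are random placeholders.

```python
# Sketch: Hu-moment descriptor of an image's Fourier power spectrum and
# nearest-neighbour matching by Euclidean distance. Only the
# moment-descriptor + minimum-distance idea is shown here.
import cv2
import numpy as np

def spectrum_hu_descriptor(gray):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    spectrum = np.log1p(spectrum).astype(np.float32)   # compress dynamic range
    hu = cv2.HuMoments(cv2.moments(spectrum)).ravel()
    # log-scale the Hu moments so they are comparable in magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def match(query_gray, database):
    """database: dict name -> grayscale image; returns the best-matching name."""
    q = spectrum_hu_descriptor(query_gray)
    dists = {name: np.linalg.norm(q - spectrum_hu_descriptor(img))
             for name, img in database.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
db = {"shape_a": rng.random((64, 64)), "shape_b": rng.random((64, 64))}
print(match(db["shape_a"], db))   # toy check with random images
```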


Testing Gravity with Cosmic Shear Data from the Deep Lens Survey

  • Sabiu, Cristiano G.;Yoon, Mijin;Jee, Myungkook James
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.2
    • /
    • pp.40.4-41
    • /
    • 2018
  • The current 'standard model' of cosmology provides a minimal theoretical framework that can explain everything from the Gaussian, nearly scale-invariant density perturbations observed in the CMB to the late-time clustering of galaxies. However, accepting this framework requires that we include within our cosmic inventory a vacuum energy that is ~122 orders of magnitude lower than quantum-mechanical predictions, or alternatively a new scalar field (dark energy) with negative pressure. An alternative to adding extra components to the Universe is to modify the equations of gravity. Although GR is supported by many current observations, alternative models can still be considered. Recently there have been many works attempting to test for modified gravity using the large-scale clustering of galaxies, the ISW effect, cluster abundance, RSD, 21cm observations, and weak lensing. In this work, we compare various modified gravity models using cosmic shear data from the Deep Lens Survey as well as data from the CMB, SNe Ia, and BAO. We use the Bayesian evidence to quantify the comparison robustly, which naturally penalizes complex models with weak data support. In this talk we present our methodology and preliminary results showing that f(R) gravity is mildly disfavoured by the data.
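
For readers unfamiliar with evidence-based model comparison, the sketch below shows how a Bayes factor is formed from two log-evidence values and read off against a Jeffreys-style scale; the numbers are hypothetical placeholders, not the survey's results.

```python
# Sketch: comparing two models via the Bayes factor, K = Z1 / Z2.
# The log-evidence values below are made-up placeholders, not results
# from the Deep Lens Survey analysis.
import math

log_Z_gr = -245.3   # hypothetical log-evidence, GR + Lambda
log_Z_fR = -246.7   # hypothetical log-evidence, f(R) gravity

ln_K = log_Z_gr - log_Z_fR          # ln Bayes factor in favour of GR
print(f"ln K = {ln_K:.2f}, K = {math.exp(ln_K):.2f}")

# A common (Jeffreys-style) reading: |ln K| < 1 inconclusive,
# 1-2.5 weak/mild, 2.5-5 moderate, > 5 strong evidence.
```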


Characteristics of Measurement Errors due to Reflective Sheet Targets - Surveying for Sejong VLBI IVP Estimation (반사 타겟의 관측 오차 특성 분석 - 세종 VLBI IVP 결합 측량)

  • Hong, Chang-Ki;Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.4
    • /
    • pp.325-332
    • /
    • 2022
  • Determination of the VLBI IVP (Very Long Baseline Interferometry Invariant Point) position with high accuracy is required to compute local tie vectors between space geodetic techniques. In general, reflective targets are attached to the VLBI antenna, and slant distances, horizontal angles, and vertical angles are measured from the pillars. Adjustment computation is then performed using the mathematical model that connects the measurements and the unknown parameters, so the accuracy of the estimated solutions depends on the accuracy of the measurements. One of the issues in local tie surveying, however, is that the reflective targets are not in a favorable condition; that is, a reflective sheet target cannot be perfectly aligned perpendicular to the instrument. Deviation from the instrument's line of sight may cause different types of measurement errors, and this inherent limitation may lead to incorrect stochastic modeling of the measurements in the adjustment computation. In this study, error characteristics are analyzed by measurement type and by pillar. The studentized residuals obtained after the adjustment computation are analyzed: their normality is tested, and equal-variance tests between measurement types are then performed. The results show that the variances differ according to measurement type, and differences in variance between distance and angle measurements are observed when an F-test is performed on the measurements from each pillar. Therefore, more detailed stochastic modeling is required for optimal solutions, especially in local tie surveys.
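
A minimal sketch of the residual checks described above, using SciPy: a Shapiro-Wilk test for normality and a two-sample F-test for equal variances between distance and angle residual groups. The residual arrays are simulated placeholders, not the survey's data.

```python
# Sketch: normality check and equal-variance F-test on studentized
# residuals, grouped by measurement type. The residuals below are
# simulated placeholders, not the survey's actual residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
res_distance = rng.normal(0.0, 1.0, size=60)   # studentized distance residuals
res_angle    = rng.normal(0.0, 1.4, size=60)   # studentized angle residuals

# Normality of each residual group (Shapiro-Wilk)
for name, r in (("distance", res_distance), ("angle", res_angle)):
    W, p = stats.shapiro(r)
    print(f"{name}: Shapiro-Wilk W = {W:.3f}, p = {p:.3f}")

# Two-sample F-test for equality of variances
s1, s2 = res_distance.var(ddof=1), res_angle.var(ddof=1)
F = s1 / s2
df1, df2 = len(res_distance) - 1, len(res_angle) - 1
p_value = 2.0 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))  # two-sided
print(f"F = {F:.3f}, two-sided p = {p_value:.3f}")
```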

An Intensity Based Self-referencing Fiber Optic Sensor Using Tunable Fabry-Perot Filter and FBG (가변 페브리-페로 필터와 FBG를 이용한 광세기 기반 자기기준 광섬유 센서)

  • Choi, Sang-Jin;Pan, Jae-Kyung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.3
    • /
    • pp.146-152
    • /
    • 2013
  • In this paper, we propose and experimentally demonstrate an intensity-based self-referencing fiber optic sensor. The proposed sensor consists of a broadband light source (BLS), a fiber Bragg grating (FBG), a tunable Fabry-Perot (F-P) filter, and a LabVIEW program. We define a measurement parameter (X) and a calibration parameter (${\beta}$) to determine the transfer function (H) of the self-referencing fiber optic sensor, and the validity of the theoretical analysis is confirmed by experiment. The self-referencing characteristic of the proposed system is validated by showing that the measurement parameter (X) is invariant under BLS optical power attenuations of 0 dB, 3 dB, and 6 dB. The measured result is also independent of the particular FBGs used, even when their characteristics differ, which means that the proposed sensor offers flexibility in choosing the FBGs needed for implementation. Experimental results are in good agreement with the theoretical analysis for the BLS power attenuations and for three FBG pairs with different characteristics. The proposed fiber optic sensor therefore has several benefits, including the self-referencing characteristic and the flexibility to choose the FBGs.
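
The abstract does not give the definitions of X, ${\beta}$, and H, so the sketch below only illustrates the generic self-referencing idea it relies on: a ratio of two detected powers is unchanged when a common source attenuation is applied. The power values are arbitrary assumptions.

```python
# Sketch of the generic self-referencing principle (not the paper's exact
# definitions of X, beta, or H, which are not given in the abstract):
# a ratio of two detected channel powers is invariant under a common
# attenuation of the broadband source.
P_meas, P_ref = 42.0, 30.0             # arbitrary detected powers (uW)
for atten_dB in (0.0, 3.0, 6.0):       # BLS attenuations used in the paper
    g = 10.0 ** (-atten_dB / 10.0)     # common linear attenuation factor
    X = (g * P_meas) / (g * P_ref)     # ratio-type measurement parameter
    print(f"attenuation = {atten_dB:3.1f} dB -> X = {X:.4f}")
```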

Separation and Elution Behavior of Some Iron(Ⅲ)porphyrin Complexes by Reversed-Phase Liquid Chromatography (역상 액체 크로마토그래피에 의한 Iron(Ⅲ)porphyrin 착화합물들의 분리 및 용리거동에 관한 연구)

  • Chang Hee Kang;In Whan Kim;Won Lee
    • Journal of the Korean Chemical Society
    • /
    • v.37 no.12
    • /
    • pp.1035-1046
    • /
    • 1993
  • Several iron(III)porphyrin complexes were prepared and identified by spectroscopic methods, and their elution behavior was investigated by reversed-phase HPLC. The optimum conditions for separating the iron(III)porphyrin complexes were examined with respect to flow rate and mobile phase strength. The complexes were successfully separated on a NOVA-PAK $C_{18}$ column using methanol/water (95/5) for $[(T_pCF_3PP)Fe(R)]$ and methanol/water (98/2) for $[(P)Fe(C_6F_5)]$ as the mobile phase, and they eluted within an acceptable range of capacity factor values ($0 \leq \log k' \leq 1$). The capacity factor (k') depended linearly on the volume fraction of water in the binary mobile phase, and also on the liquid-liquid extraction distribution ratio ($D_c$) in the methanol-water / n-pentadecane extraction system, indicating that retention of the iron(III)porphyrin complexes on the NOVA-PAK $C_{18}$ column is largely due to the solvophobic effect. In addition, the capacity factor (k') showed a good linear dependence on column temperature, and the enthalpy was calculated from the van't Hoff plot. From these results, it was confirmed that the retention mechanism of the iron(III)porphyrin complexes in reversed-phase liquid chromatography was invariant over the temperature range studied and that the solvophobic binding process exhibited isoequilibrium behavior.
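
As an illustration of the van't Hoff analysis mentioned above (with hypothetical data, not the paper's measurements), the sketch below fits ln k' against 1/T and converts the slope to a retention enthalpy.

```python
# Sketch: van't Hoff analysis for HPLC retention, ln k' = -dH/(R*T) + const.
# The temperatures and capacity factors below are made-up illustrative
# values, not data from the paper.
import numpy as np
from scipy import stats

R = 8.314  # J mol^-1 K^-1
T = np.array([298.15, 303.15, 308.15, 313.15, 318.15])   # column temperatures (K)
k_prime = np.array([4.2, 3.6, 3.1, 2.7, 2.35])           # hypothetical capacity factors

fit = stats.linregress(1.0 / T, np.log(k_prime))
dH = -fit.slope * R    # retention enthalpy from the van't Hoff slope
print(f"slope = {fit.slope:.1f} K, r = {fit.rvalue:.4f}, dH = {dH/1000.0:.1f} kJ/mol")
```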


A Study on LRFD Reliability Based Design Criteria of RC Flexural Members (R.C. 휨부재(部材)의 L.R.F.D. 신뢰성(信賴性) 설계기준(設計基準)에 관한 연구(研究))

  • Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.1 no.1
    • /
    • pp.21-32
    • /
    • 1981
  • Recent trends in design standards development in some European countries and in the U.S.A. have encouraged the use of probabilistic limit state design concepts. Reliability-based design criteria such as LSD, LRFD, and PBLSD adopted in those countries have the potential to simplify the design process and to place it on a consistent reliability basis for various construction materials. Reliability-based design criteria for RC flexural members are proposed in this study. Lind-Hasofer's invariant second-moment reliability theory is used to derive an algorithmic reliability analysis method as well as an iterative determination of load and resistance factors. In addition, Cornell's mean first-order second-moment method is employed as a practical tool for approximate reliability analysis and for the derivation of design criteria. Uncertainty measures for flexural resistance and load effects are based on Ellingwood's approach for evaluating the uncertainties of loads and resistances. The relative safety levels implied by RC flexural members designed to the strength design provisions of the current standard code were evaluated using the second-moment reliability analysis method proposed in this study. Resistance and load factors corresponding to the target reliability index (${\beta}=4$), which is considered an appropriate level of reliability for current practice, were then calculated using the proposed methods and compared with those specified by the current ultimate strength design provisions. It was found that the reliability levels of flexural members designed by the current code are not appropriate, and that the code-specified resistance and load factors differ considerably from the reliability-based factors proposed in this study.
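
A minimal sketch of the mean first-order second-moment idea referenced above, for a simple linear limit state g = R − Q with assumed illustrative statistics; it is not the paper's calibration procedure.

```python
# Sketch: Cornell's mean first-order second-moment (MFOSM) reliability
# index for a linear limit state g = R - Q (resistance minus load effect).
# The means and coefficients of variation are illustrative assumptions,
# not the statistics used in the paper.
import math

mu_R, cov_R = 1.15, 0.12     # mean resistance (normalized) and its c.o.v.
mu_Q, cov_Q = 0.70, 0.20     # mean load effect (normalized) and its c.o.v.

sigma_R = mu_R * cov_R
sigma_Q = mu_Q * cov_Q

beta = (mu_R - mu_Q) / math.sqrt(sigma_R**2 + sigma_Q**2)
print(f"reliability index beta = {beta:.2f}")   # compare with a target, e.g. beta = 4
```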
