• Title/Summary/Keyword: maximum likelihood estimators

Search results: 314 (processing time: 0.022 seconds)

Sensor Location Estimation in a Landscape Plants Cultivating System (LPCS) Based on Wireless Sensor Networks with IoT

  • Kang, Tae-Sun;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.4
    • /
    • pp.226-231
    • /
    • 2020
  • To maximize the production of landscape plants under optimal conditions while coexisting with the environment, in the sense of precision agriculture, quick and accurate gathering of information on the internal environmental elements of the growing container is necessary. This depends on accurately positioning the numerous sensors connected to the landscape plants cultivating system (LPCS) in the containers. Thus, this paper presents a method for estimating the locations of the cultivation-environment sensors connected to the LPCS by measuring the received signal strength (RSS) or time of arrival (TOA) between a sensor and its adjacent sensors. A small number of the sensors connected to the LPCS of a container have known locations; the remaining locations must be estimated. To this end, Cramér-Rao bounds and maximum likelihood estimators are derived from Gaussian and lognormal models for the TOA and RSS measurements, respectively. The results suggest that both RSS and TOA range measurements can produce accurate location estimates for the cultivation-environment sensors within the wireless sensor network of the LPCS.
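The RSS-based ML localization idea can be sketched as follows. This is an illustrative toy example under the standard log-normal shadowing model; the anchor layout, channel parameters, and grid-search setup are all assumed here, not taken from the paper:

```python
import numpy as np

# Toy RSS localization sketch (assumed parameters, not the paper's setup):
# under log-normal shadowing, RSS(d) = P0 - 10*n_p*log10(d/d0) + noise,
# with noise ~ N(0, sigma^2) in dB.
rng = np.random.default_rng(0)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 6.0])
P0, n_p, d0, sigma = -40.0, 2.5, 1.0, 1.0   # assumed channel parameters

d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0 - 10.0 * n_p * np.log10(d_true / d0) + rng.normal(0.0, sigma, 4)

# With i.i.d. Gaussian noise in dB, the ML position estimate minimizes the
# squared residual between measured and predicted RSS; dense grid search here.
xs = np.linspace(0.0, 10.0, 201)
X, Y = np.meshgrid(xs, xs)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
d_grid = np.linalg.norm(grid[:, None, :] - anchors[None, :, :], axis=2)
pred = P0 - 10.0 * n_p * np.log10(np.maximum(d_grid, 1e-6) / d0)
nll = ((rss[None, :] - pred) ** 2).sum(axis=1)
est = grid[np.argmin(nll)]
print(est)
```

A TOA-based variant would replace the lognormal RSS residuals with Gaussian range residuals in the same grid search.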

Carrier phase recovery algorithm for LDPC coded system (LDPC 코드를 이용한 위상 동기 알고리즘)

  • Lee Juhyung;Kim Namshik;Park Hyuncheol;Kim Pansu;Oh Dukgil;Lee Hojin
    • Proceedings of the IEEK Conference
    • /
    • 2004.06a
    • /
    • pp.43-46
    • /
    • 2004
  • In this paper, we present a carrier phase estimation algorithm for LDPC coded systems. An LDPC coded system cannot achieve ideal performance if a phase offset is introduced by the channel, yet estimating the phase offset is hard because LDPC codes operate at very low SNR. To address this, an algorithm based on maximum likelihood (ML) that uses tentative soft-decision values was proposed in [2]. However, that algorithm works only under a constant phase offset; if the phase offset is time-varying, its performance degrades severely. To solve this problem, we propose two types of estimators: symbol-by-symbol estimators, namely the unidirectional estimator (UDE) and the bidirectional estimator (BDE), and a sub-block estimator (SBE).

  • PDF
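The decision-directed ML phase estimate underlying this line of work can be sketched with a toy BPSK/AWGN example (all parameters assumed; in the coded system the tentative decisions would be soft values fed back from the LDPC decoder, and a sub-block estimator applies the same estimate per block to track a time-varying offset):

```python
import numpy as np

# Toy decision-directed ML phase estimation sketch (not the paper's code):
# for BPSK in AWGN with an unknown constant phase offset theta, the ML
# estimate given symbol decisions s_hat is angle( sum_k r_k * s_hat_k ).
rng = np.random.default_rng(1)

n, theta, noise_std = 2000, 0.3, 0.3
s = 2.0 * rng.integers(0, 2, n) - 1.0       # BPSK symbols in {-1, +1}
noise = noise_std * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
r = s * np.exp(1j * theta) + noise

# Tentative hard decisions stand in for the decoder's soft decisions here.
s_hat = np.sign(r.real)
theta_hat = np.angle(np.sum(r * s_hat))
print(theta_hat)
```

Running the same estimator over short sub-blocks instead of the whole frame is the essence of tracking a slowly time-varying offset.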

Testing for $P(X_{1}\;<\;X_{2})$ in Bivariate Exponential Model with Censored Data (중단자료를 갖는 이변량 지수 모형에서 $P(X_{1}\;<\;X_{2})$에 대한 검정)

  • Park, Jin-Pyo;Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.8 no.2
    • /
    • pp.143-152
    • /
    • 1997
  • In this paper, we obtain maximum likelihood estimators for $P(X_{1}\;<\;X_{2})$ in Marshall and Olkin's bivariate exponential model with bivariate censored data. The asymptotic normality of the estimator is derived. We also propose an approximate test for $P(X_{1}\;<\;X_{2})$ based on the MLE, and compare the test powers under various conditions through Monte Carlo simulation.

  • PDF
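For intuition, the quantity being estimated has a closed form in the Marshall-Olkin model, which a short Monte Carlo check confirms (illustrative rates, and no censoring here, unlike the paper's setting):

```python
import numpy as np

# In the Marshall-Olkin bivariate exponential model, X1 = min(Z1, Z12) and
# X2 = min(Z2, Z12) with independent Z1~Exp(l1), Z2~Exp(l2), Z12~Exp(l12);
# then P(X1 < X2) = l1 / (l1 + l2 + l12), since X1 < X2 exactly when Z1 is
# the smallest of the three (ties X1 = X2 occur when Z12 is smallest).
rng = np.random.default_rng(2)
l1, l2, l12 = 1.0, 2.0, 0.5
n = 200_000

z1 = rng.exponential(1.0 / l1, n)
z2 = rng.exponential(1.0 / l2, n)
z12 = rng.exponential(1.0 / l12, n)
x1, x2 = np.minimum(z1, z12), np.minimum(z2, z12)

mc = np.mean(x1 < x2)
closed = l1 / (l1 + l2 + l12)
print(mc, closed)
```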

Joint Modeling of Death Times and Counts Considering a Marginal Frailty Model (공변량을 포함한 사망시간과 치료횟수의 모형화를 위한 주변환경효과모형의 적용)

  • Park, Hee-Chang;Park, Jin-Pyo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.9 no.2
    • /
    • pp.311-322
    • /
    • 1998
  • In this paper we consider the problem of modeling count data where the observation period is determined by the survival time of the individual under study. We assume a marginal frailty model for the counts, and that the death times follow a Weibull distribution with a rate depending on covariates. For the counts, given a frailty, a Poisson process is assumed with an intensity depending on time and the covariates; a gamma model is assumed for the frailty. Maximum likelihood estimators of the model parameters are obtained. The model is applied to a data set of patients with breast cancer who received a bone marrow transplant: a model for the time to death and the number of supportive transfusions a patient received is constructed, and its consequences are examined.

  • PDF
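The structure of such a joint model can be illustrated with a small simulation (all parameter values assumed; this is not the authors' data or code). The shared gamma frailty induces overdispersion in the counts, so their marginal variance exceeds their mean:

```python
import numpy as np

# Illustrative simulation of a joint frailty model (assumed parameters):
# a gamma frailty w links each subject's Weibull death time T and a
# Poisson count N of treatments accumulated over [0, T].
rng = np.random.default_rng(3)
n = 50_000
shape_w, rate_w = 2.0, 2.0         # gamma frailty, mean 1
weib_shape, weib_scale = 1.5, 3.0  # death-time distribution
base_rate = 0.8                    # treatment intensity per unit time

w = rng.gamma(shape_w, 1.0 / rate_w, n)
t = weib_scale * rng.weibull(weib_shape, n)
counts = rng.poisson(w * base_rate * t)

# Marginally over the frailty, counts given t are negative binomial, which
# shows up as overdispersion: Var(N) > E(N).
print(counts.mean(), counts.var())
```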

New Inference for a Multiclass Gaussian Process Classification Model using a Variational Bayesian EM Algorithm and Laplace Approximation

  • Cho, Wanhyun;Kim, Sangkyoon;Park, Soonyoung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.4
    • /
    • pp.202-208
    • /
    • 2015
  • In this study, we propose a new inference algorithm for a multiclass Gaussian process classification model using a variational EM framework and the Laplace approximation (LA) technique. The algorithm alternates two steps, expectation and maximization. In the expectation step (E-step), using Bayes' theorem and the LA technique, we derive the approximate posterior distribution of the latent function, which indicates the possibility that each observation belongs to a certain class in the Gaussian process classification model. In the maximization step (M-step), we compute the maximum likelihood estimators of the hyperparameters of the covariance matrix that defines the prior distribution of the latent function, using the posterior distribution derived in the E-step. These steps are repeated until a convergence condition is satisfied. We conducted experiments on synthetic data and the Iris data set to verify the performance of the proposed algorithm; the results show that it performs well on both.
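The E-step's Laplace approximation can be sketched in the simpler binary case (the paper treats the multiclass model; the data and kernel below are illustrative): Newton iterations locate the posterior mode of the latent function, around which the Gaussian approximation is taken.

```python
import numpy as np

# Binary-GP Laplace sketch (illustrative data and kernel, not the paper's
# multiclass algorithm): find the mode of p(f | y) for a logistic likelihood
# and a squared-exponential GP prior by Newton's method.
rng = np.random.default_rng(4)

x = np.linspace(-3, 3, 40)
y = (x > 0).astype(float) * 2 - 1              # labels in {-1, +1}

# Squared-exponential covariance; its hyperparameters are what an M-step
# would re-estimate from the approximate marginal likelihood.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-6 * np.eye(40)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

f = np.zeros(40)
for _ in range(20):
    pi = sigmoid(y * f)
    grad = y * (1 - pi)                        # d log p(y|f) / df
    W = np.diag(pi * (1 - pi))                 # negative Hessian of log-lik
    # Newton step: f_new = (K^-1 + W)^-1 (W f + grad) = K (I + W K)^-1 (...)
    f = K @ np.linalg.solve(np.eye(40) + W @ K, W @ f + grad)

acc = np.mean(np.sign(f) == y)
print(acc)
```

The Gaussian approximation then has mean `f` and covariance `(K^-1 + W)^-1`, which is what feeds the hyperparameter updates.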

On a bivariate step-stress life test (두 개의 부품으로 구성된 시스템의 단계적 충격생명검사에 관한 연구)

  • 이석훈;박래현;박희창
    • The Korean Journal of Applied Statistics
    • /
    • v.5 no.2
    • /
    • pp.193-209
    • /
    • 1992
  • We consider a step-stress life test devised for a two-component serial system with considerably long lifetimes. In the modelling stage we discuss the bivariate exponential distribution suggested by Block and Basu as the bivariate survival function for the two-component system, and develop the cumulative exposure model introduced by Nelson so that it can be used under this bivariate survival function. We consider inference on the component lifetimes when the components are at work in the system, combining information from the system life test with information from component tests carried out separately under a controlled environment. In the data analysis, maximum likelihood estimators are discussed, with initial values obtained by a weighted least squares method. Finally, we discuss the optimal time for changing the stress in the simple step-stress life test.

  • PDF
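For intuition about the cumulative exposure model, here is a sketch in the simpler univariate exponential case (the paper uses the Block-Basu bivariate model; the parameters below are assumed). Memorylessness makes both the simulation and the MLEs closed-form:

```python
import numpy as np

# Simple step-stress sketch (exponential lifetimes, assumed parameters):
# under Nelson's cumulative exposure model with one stress change at tau,
# the exponential case is memoryless, so units surviving to tau simply
# restart with the stress-2 mean lifetime.
rng = np.random.default_rng(8)
theta1, theta2, tau, n = 5.0, 1.5, 3.0, 20_000

t1 = rng.exponential(theta1, n)
t = np.where(t1 <= tau, t1, tau + rng.exponential(theta2, n))

# Closed-form MLEs: total time on test under each stress, divided by the
# number of failures observed under that stress.
n1 = np.sum(t <= tau)
theta1_hat = np.sum(np.minimum(t, tau)) / n1
theta2_hat = np.sum(np.maximum(t - tau, 0.0)) / (n - n1)
print(theta1_hat, theta2_hat)
```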

Use of Lévy distribution to analyze longitudinal data with asymmetric distribution and presence of left censored data

  • Achcar, Jorge A.;Coelho-Barros, Emilio A.;Cuevas, Jose Rafael Tovar;Mazucheli, Josmar
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.1
    • /
    • pp.43-60
    • /
    • 2018
  • This paper considers the use of classical and Bayesian inference methods to analyze data generated by variables whose natural behavior can be modeled using asymmetric distributions in the presence of left censoring. Our approach uses a Lévy distribution in the presence of left-censored data and covariates. This distribution can be a good alternative for modeling data with asymmetric behavior in many applications, such as lifetime data, especially in engineering and health research, when some observations are large in comparison to others and the standard distributions commonly used to model asymmetric data, such as the exponential, Weibull, or log-logistic, do not fit the data well. Under the classical approach, inferences for the parameters of the proposed model are obtained using maximum likelihood estimators (MLEs) and the usual asymptotic normality of MLEs based on the Fisher information measure. Under the Bayesian approach, the posterior summaries of interest are obtained using standard Markov chain Monte Carlo simulation methods and available software such as SAS. A numerical illustration is presented using data on thyroglobulin levels in a group of individuals with differentiated thyroid cancer.
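The left-censored likelihood described above can be sketched for the Lévy distribution with location zero (illustrative parameters and a simple grid search, not the paper's SAS-based analysis): censored observations contribute log F(L) rather than log f(x).

```python
import numpy as np
from math import erfc

# Left-censored Lévy MLE sketch (assumed parameters): the Lévy(0, c) density
# is f(x) = sqrt(c/(2*pi)) * exp(-c/(2x)) / x**1.5, with CDF
# F(x) = erfc(sqrt(c/(2x))); a left-censored observation at limit L
# contributes log F(L) to the log-likelihood instead of log f(x).
rng = np.random.default_rng(5)

c_true, n, L = 2.0, 5000, 0.5
x = c_true / rng.normal(0.0, 1.0, n) ** 2   # Lévy(0, c) equals c / Z^2, Z ~ N(0,1)
censored = x < L                            # values below L only known to be < L

def loglik(c):
    obs = x[~censored]
    ll = np.sum(0.5 * np.log(c / (2 * np.pi)) - c / (2 * obs) - 1.5 * np.log(obs))
    ll += censored.sum() * np.log(erfc(np.sqrt(c / (2 * L))))
    return ll

cs = np.linspace(0.5, 5.0, 901)
c_hat = cs[np.argmax([loglik(c) for c in cs])]
print(c_hat)
```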

Numerical Bayesian updating of prior distributions for concrete strength properties considering conformity control

  • Caspeele, Robby;Taerwe, Luc
    • Advances in concrete construction
    • /
    • v.1 no.1
    • /
    • pp.85-102
    • /
    • 2013
  • Prior concrete strength distributions can be updated using direct information from test results as well as indirect information from conformity control. Due to the filtering effect of conformity control, the distribution of the material property in the accepted inspected lots has a lower fraction defective than the distribution of the entire production (before or without inspection). A methodology is presented to quantify this influence in a Bayesian framework, based on prior knowledge of the hyperparameters of concrete strength distributions. An algorithm is presented to update prior distributions through numerical integration, taking into account the operating characteristic of the applied conformity criteria, calculated by Monte Carlo simulation. Different examples are given to derive suitable hyperparameters for incoming strength distributions of concrete offered for conformity assessment, using updated available prior information, maximum likelihood estimators, or a bootstrap procedure. Furthermore, the updating procedure based on both direct and indirect information obtained by conformity assessment is illustrated and used to quantify the filtering effect of a specific set of conformity criteria on concrete strength distributions.
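The direct-information part of such Bayesian updating can be sketched with a conjugate normal model for the lot mean strength (all numbers illustrative; the indirect, conformity-based updating described above additionally reweights the prior by the operating characteristic of the acceptance criteria):

```python
import numpy as np

# Conjugate normal updating sketch (illustrative numbers, not the paper's
# data): a normal prior on the lot mean strength is updated with observed
# strength test results, assuming a known within-lot scatter.
prior_mu, prior_sd = 38.0, 4.0         # MPa, prior on lot mean strength
test_sd = 3.0                          # assumed within-lot scatter (MPa)
tests = np.array([41.2, 39.8, 43.1, 40.5])

n = len(tests)
post_prec = 1.0 / prior_sd**2 + n / test_sd**2
post_mu = (prior_mu / prior_sd**2 + tests.sum() / test_sd**2) / post_prec
post_sd = post_prec ** -0.5
print(post_mu, post_sd)
```

The numerical-integration algorithm in the paper generalizes this to non-conjugate priors and to the truncation induced by accepted/rejected lots.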

Exploring modern machine learning methods to improve causal-effect estimation

  • Kim, Yeji;Choi, Taehwa;Choi, Sangbum
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.2
    • /
    • pp.177-191
    • /
    • 2022
  • This paper addresses the use of machine learning methods for causal estimation of treatment effects from observational data. Although randomized experimental trials are the gold standard for revealing potential causal relationships, observational studies are another rich source for investigating exposure effects, for example in research on the comparative effectiveness and safety of treatments, where the causal effect can be identified if the covariates contain all confounding variables. In this context, statistical regression models for the expected outcome and the probability of treatment are often imposed, and they can be combined in a clever way to yield more efficient and robust causal estimators. Recently, targeted maximum likelihood estimation and causal random forests have been proposed and extensively studied for the use of data-adaptive regression in estimating causal inference parameters. Machine learning methods are a natural choice in these settings to improve the quality of the final estimate of the treatment effect. We explore how the design and training of several machine learning algorithms can be adapted for causal inference, and study their finite-sample performance through simulation experiments under various scenarios. Application to percutaneous coronary intervention (PCI) data shows that these adaptations can improve simple linear-regression-based methods.
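The "clever combination" of outcome and propensity models mentioned above is, in its simplest form, the augmented inverse-probability-weighted (AIPW) estimator, which TMLE and causal forests refine. A minimal sketch on simulated data (true nuisance functions plugged in for brevity; a real analysis would cross-fit machine-learning estimates of them):

```python
import numpy as np

# AIPW sketch (simulated data, assumed model): combine an outcome model
# (mu1, mu0) and a propensity model (e) so the average treatment effect
# estimate stays consistent if either nuisance model is correct.
rng = np.random.default_rng(6)
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))            # true propensity score
a = rng.binomial(1, p)
y = 2.0 * a + x + rng.normal(size=n)    # true ATE = 2

# Nuisance estimates; the truth is plugged in here to keep the sketch short,
# where a data-adaptive (ML) method would be cross-fitted instead.
mu1, mu0, e = 2.0 + x, x, p

ate = np.mean(mu1 - mu0 + a * (y - mu1) / e - (1 - a) * (y - mu0) / (1 - e))
print(ate)
```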

Estimating the Economic Value of the Songjeong Beach Using a Count Data Model: Off-Season Value Estimation of the Beach (가산자료모형을 이용한 송정 해수욕장의 경제적 가치추정: - 비수기 해수욕장의 가치추정 -)

  • Heo, Yun-Jeong;Lee, Seung-Lae
    • The Journal of Fisheries Business Administration
    • /
    • v.38 no.2
    • /
    • pp.79-101
    • /
    • 2007
  • The purpose of this study is to estimate the off-season economic value of Songjeong Beach using an Individual Travel Cost Model (ITCM). Songjeong Beach is located in Busan but far from the city center. Recently, however, increased traffic inflow to the beach and the five-day working week appear in the trend analysis, and people's perception of the beach's value has changed; for these reasons, the number of off-season visitors is increasing. The ITCM is applied to estimate the value of non-market or environmental goods, like the Contingent Valuation Method and the Hedonic Price Model. The ITCM is derived from count data models (i.e., Poisson and negative binomial models), so this paper compares Poisson and negative binomial count data models for measuring tourism demand. The data were collected from visitors to Songjeong Beach over the period from November 1 through November 23, 2006; interviewers were instructed to interview only individuals, and the sample size was 113. A dependent variable that is defined on the non-negative integers and subject to sampling truncation is the result of a truncated count data process. This paper analyzes the effects of determinants on visitors' trip demand using a class of maximum-likelihood regression estimators for count data from truncated samples. The count data and truncated models are used primarily to capture the non-negative integer and truncation properties of tourist trips, as suggested by the economic valuation literature. The results suggest that the truncated negative binomial model corrects the overdispersion problem and is preferred over the other models in this study. This paper differs from previous studies in two respects: it estimates the value of the beach in the off-season, and it defines 'travel cost' to include not only monetary cost but also the opportunity cost of travel time. 
According to the truncated negative binomial model, the consumer surplus (CS) per trip is estimated at about 199,754 Korean won, and the total economic value at about 1,288,680 Korean won.

  • PDF
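The truncated-count logic behind such consumer surplus figures can be sketched as follows (simulated data and assumed parameters, not the survey data above; in this model family the consumer surplus per trip equals -1/b_tc, the negative reciprocal of the travel-cost coefficient):

```python
import numpy as np

# Zero-truncated travel cost sketch (simulated, assumed parameters): trips
# follow Poisson(lambda) with log(lambda) = b0 + b_tc * travel_cost, but
# on-site sampling only observes visitors with at least one trip.
rng = np.random.default_rng(7)
b0, b_tc = 1.2, -0.05
cost = rng.uniform(5, 40, 4000)
trips = rng.poisson(np.exp(b0 + b_tc * cost))
keep = trips >= 1                       # on-site sampling truncates zeros
y, c = trips[keep], cost[keep]

def nll(beta0, beta1):
    lam = np.exp(beta0 + beta1 * c)
    # zero-truncated Poisson log-likelihood, log P(Y=y | Y>=1),
    # dropping the log(y!) term, which is constant in the parameters
    return -np.sum(y * np.log(lam) - lam - np.log1p(-np.exp(-lam)))

grid0 = np.linspace(0.8, 1.6, 81)
grid1 = np.linspace(-0.09, -0.01, 81)
vals = [[nll(g0, g1) for g1 in grid1] for g0 in grid0]
i, j = np.unravel_index(np.argmin(vals), (81, 81))
b0_hat, b1_hat = grid0[i], grid1[j]
cs_per_trip = -1.0 / b1_hat
print(b1_hat, cs_per_trip)
```

A truncated negative binomial version adds a dispersion parameter to the same likelihood, which is what corrected the overdispersion in the study above.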