• Title/Summary/Keyword: 최대엔트로피 (maximum entropy)

Search Results: 167

Prior distributions using the entropy principles (엔트로피 이론을 이용한 사전 확률 분포함수의 추정)

  • Lee, Jung-Jin; Shin, Wan-Seon
    • The Korean Journal of Applied Statistics, v.3 no.2, pp.91-105, 1990
  • Several practical prior distributions are derived using the maximum entropy principle. In addition, an interactive method for estimating a prior distribution using the minimum cross-entropy principle is proposed for the case where multiple pieces of prior information are available. The consistency of the prior distributions obtained by the entropy principles is discussed.
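
A minimal numerical sketch of the maximum entropy principle behind this abstract (grid, mean value, and the competing density are illustrative choices, not from the paper): among all densities on [0, ∞) with a fixed mean, the exponential density attains the largest differential entropy.

```python
import math

def entropy(pdf, xs, dx):
    """Differential entropy -sum f(x) ln f(x) dx on a grid (illustrative helper)."""
    return -sum(pdf(x) * math.log(pdf(x)) * dx for x in xs if pdf(x) > 1e-300)

m = 2.0                                   # prescribed prior mean (assumed)
dx = 0.001
xs = [i * dx for i in range(1, 40000)]    # grid approximating [0, 40)

expo = lambda x: math.exp(-x / m) / m     # maximum entropy candidate
gamma2 = lambda x: x * math.exp(-x)       # Gamma(2, 1): same mean 2.0

h_expo = entropy(expo, xs, dx)
h_gamma = entropy(gamma2, xs, dx)
print(round(h_expo, 3))    # close to 1 + ln(2), the exponential's entropy
print(h_expo > h_gamma)    # the exponential attains the larger entropy
```

The same mean constraint is satisfied by both densities, so the comparison isolates the entropy criterion itself.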

Maximum Entropy Spectral Analysis for Nonstationary Random Response of Vehicle (최대 엔트로피 스펙트럼 방법을 이용한 차량의 과도 응답 특성 해석)

  • Zhang, Li Jun; Lee, Chang-Myung; Wang, Yan Song
    • Transactions of the Korean Society for Noise and Vibration Engineering, v.12 no.8, pp.589-597, 2002
  • In this paper, the nonstationary response of an accelerating vehicle is first obtained in the time domain using a nonstationary road-roughness model. The maximum entropy method is then used to process the vehicle's nonstationary response in the frequency domain, and the three-dimensional transient maximum entropy spectrum (MES) of the response is presented.
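
A hedged sketch of the maximum entropy spectral method named above (this is the generic Burg/AR formulation, not the paper's vehicle-specific implementation; the test signal is invented): an autoregressive model is fitted by Burg's recursion and the spectrum is read off the AR transfer function.

```python
import math

def burg(x, order):
    """Fit AR coefficients a[0..order] (a[0]=1) and residual power by Burg's method."""
    n = len(x)
    f = list(x)                            # forward prediction errors
    b = list(x)                            # backward prediction errors
    a = [1.0]
    e = sum(v * v for v in x) / n
    for k in range(1, order + 1):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(k, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(k, n))
        ref = num / den                    # reflection coefficient, |ref| <= 1
        prev = a + [0.0]
        a = [prev[i] + ref * prev[k - i] for i in range(k + 1)]
        f_new, b_new = f[:], b[:]
        for i in range(k, n):              # update errors using the old sequences
            f_new[i] = f[i] + ref * b[i - 1]
            b_new[i] = b[i - 1] + ref * f[i]
        f, b = f_new, b_new
        e *= (1.0 - ref * ref)
    return a, e

def mem_spectrum(a, e, freq):
    """Maximum entropy PSD at normalized frequency freq in [0, 0.5]."""
    re = sum(a[k] * math.cos(2 * math.pi * freq * k) for k in range(len(a)))
    im = -sum(a[k] * math.sin(2 * math.pi * freq * k) for k in range(len(a)))
    return e / (re * re + im * im)

# demo: a sinusoid at normalized frequency 0.1 yields a sharp MEM peak there
x = [math.sin(2 * math.pi * 0.1 * t) for t in range(200)]
a, e = burg(x, 2)
freqs = [i / 1000 for i in range(1, 500)]
peak = max(freqs, key=lambda fr: mem_spectrum(a, e, fr))
print(peak)    # spectral peak near the input frequency 0.1
```

The paper applies this idea with a sliding window to obtain a time-varying (transient) spectrum; the sketch shows only the stationary core.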

Generalized Maximum Entropy Estimator for the Linear Regression Model with a Spatial Autoregressive Disturbance (오차항이 SAR(1)을 따르는 공간선형회귀모형에서 일반화 최대엔트로피 추정량에 관한 연구)

  • Cheon, Soo-Young; Lim, Seong-Seop
    • Communications for Statistical Applications and Methods, v.16 no.2, pp.265-275, 2009
  • This paper considers a linear regression model with a spatial autoregressive disturbance and ill-posed data, and proposes the generalized maximum entropy (GME) estimator of the regression coefficients. The performance of this estimator is investigated via Monte Carlo experiments. The results show that the GME estimator provides efficient and robust estimates of the unknown parameters.

Probability Distribution of Nonlinear Random Wave Heights Using Maximum Entropy Method (최대 엔트로피 방법을 이용한 비선형 불규칙 파고의 확률분포함수)

  • 안경모
    • Journal of Korean Society of Coastal and Ocean Engineers, v.10 no.4, pp.204-210, 1998
  • This paper presents the development of a probability density function applicable to wave heights (peak-to-trough excursions) in finite water depth, including shallow water. The probability distribution applicable to the wave heights of a non-Gaussian random process is derived from the concept of the maximum entropy method. When wave heights are limited by the breaking wave height (or water depth) and only the first and second moments of wave heights are given, the probability density function developed has a closed form and is expressed in terms of wave parameters such as $H_m$ (mean wave height), $H_{rms}$ (root-mean-square wave height), and $H_b$ (breaking wave height). When moments of third order or higher are given, the system of nonlinear integral equations must be solved numerically by the Newton-Raphson method to obtain the parameters of the probability density function that maximizes the entropy function. The probability density function thus derived agrees very well with histograms of wave heights in finite water depth obtained during storms. The probability density function developed using the maximum entropy method therefore appears useful for estimating extreme values and statistical properties of wave heights in the design of coastal structures.
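
A one-parameter sketch of the Newton-Raphson step described above (the paper solves a larger nonlinear system; the breaking height and mean height below are invented values): with only the mean height fixed on [0, H_b], the maximum entropy density is a truncated exponential f(h) = C·exp(-λh), and Newton-Raphson finds the λ matching the mean constraint.

```python
import math

H_b = 10.0    # assumed breaking wave height
H_m = 3.0     # assumed mean wave height

def mean_of(lam):
    """Mean of the truncated exponential density C*exp(-lam*h) on [0, H_b]."""
    if abs(lam) < 1e-9:
        return H_b / 2.0                     # uniform limit as lam -> 0
    return 1.0 / lam - H_b / math.expm1(lam * H_b)

def solve_lambda(target, lam=0.1, tol=1e-10):
    """Newton-Raphson on the mean constraint, with a numeric derivative."""
    for _ in range(100):
        g = mean_of(lam) - target
        dg = (mean_of(lam + 1e-6) - mean_of(lam - 1e-6)) / 2e-6
        step = g / dg
        lam -= step
        if abs(step) < tol:
            break
    return lam

lam = solve_lambda(H_m)
print(round(mean_of(lam), 6))    # the mean constraint H_m is recovered
```

With third and higher moments, λ becomes a vector and the scalar update generalizes to a Jacobian solve, which is the system the paper treats.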

Comparison of Two Parametric Estimators for the Entropy of the Lognormal Distribution (로그정규분포의 엔트로피에 대한 두 모수적 추정량의 비교)

  • Choi, Byung-Jin
    • Communications for Statistical Applications and Methods, v.18 no.5, pp.625-636, 2011
  • This paper proposes two parametric entropy estimators for the lognormal distribution, the minimum variance unbiased estimator and the maximum likelihood estimator, and compares their properties. The variances of both estimators are derived, and the influence of the bias of the maximum likelihood estimator on estimation is revealed analytically. The distributions of the proposed estimators, obtained by the delta approximation method, are also presented, and performance comparisons are made between the two estimators. The MSE efficiency of the minimum variance unbiased estimator is consistently high and increases rapidly as the sample size and variance, n and ${\sigma}^2$, become simultaneously small. In conclusion, the minimum variance unbiased estimator outperforms the maximum likelihood estimator.

Magnetization and Magnetic Entropy Change in Superparamagnetic Co-Ferrite Nanoparticle (초상자성 코발트 페라이트 나노입자에 대한 자화 및 자기엔트로피 변화)

  • Ahn, Yang-Kyu; Choi, Eun-Jung
    • Journal of the Korean Magnetics Society, v.18 no.2, pp.63-66, 2008
  • To investigate the magnetization and magnetic entropy change of superparamagnetic ferrite nanoparticles, ultrafine cobalt ferrite particles were synthesized using a microemulsion method. The X-ray diffraction pattern corresponds to a cubic spinel structure with a lattice constant of 8.40 $\AA$. The average particle size, determined from X-ray diffraction line broadening using Scherrer's equation, is 7.9 nm. The maximum magnetizations measured at 5 and 300 K are 24.3 emu/g and 17.2 emu/g, respectively. The superparamagnetic behavior of the sample is confirmed by the coincidence of the M vs. H/T plots at various temperatures. According to thermodynamic theory, the magnetic entropy change decreases with increasing temperature.
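
A sketch of the Scherrer size estimate mentioned above (the wavelength, peak width, and angle are illustrative values, not taken from the paper): the crystallite size is D = Kλ/(β·cosθ), with β the peak width in radians.

```python
import math

def scherrer(wavelength_nm, beta_rad, theta_rad, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), in nm."""
    return k * wavelength_nm / (beta_rad * math.cos(theta_rad))

# Cu K-alpha wavelength and an illustrative diffraction-peak width/angle
d = scherrer(0.15406, beta_rad=math.radians(1.1), theta_rad=math.radians(17.8))
print(round(d, 1))    # size in nm for these assumed peak values
```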

A Study on the Entropy of Binary First Order Markov Information Source (이진 일차 Markov 정보원의 엔트로피에 관한 연구)

  • 송익호; 안수길
    • Journal of the Korean Institute of Telematics and Electronics, v.20 no.2, pp.16-22, 1983
  • In this paper, the PFME (probability for maximum entropy) and the entropy are obtained when a conditional probability is given for a binary first-order Markov information source. In addition, for a constant steady-state probability, the influence of a change in the conditional probability on the entropy is examined.
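
A sketch of the quantities involved (notation assumed, not taken from the paper): for a binary first-order Markov source with transition probabilities p = P(1|0) and q = P(0|1), the entropy rate is H = π₀h(p) + π₁h(q), where π is the stationary distribution and h the binary entropy; it is maximized at p = q = 1/2.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def markov_entropy_rate(p, q):
    """Entropy rate of a binary first-order Markov source, p=P(1|0), q=P(0|1)."""
    pi0 = q / (p + q)          # stationary probability of state 0
    pi1 = p / (p + q)
    return pi0 * h2(p) + pi1 * h2(q)

print(markov_entropy_rate(0.5, 0.5))          # 1.0 bit: the maximizing choice
print(round(markov_entropy_rate(0.2, 0.3), 4))  # 0.7857
```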

Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young; Kim, Dong-Keun; Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.7, pp.1557-1564, 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model using the individual average difference of emotional signals, because emotional signal patterns vary from individual to individual. To recognize a user's emotion accurately, the proposed model utilizes the differences between the average of the input emotional signals and the average of each emotional state's signals (such as positive and negative emotional signals), rather than only the given input signal. With the aim of easily constructing the emotion recognition model without professional knowledge of emotion recognition, it utilizes a maximum entropy model, one of the best-performing and most widely used machine learning techniques. Considering that it is difficult to obtain enough numerical emotional-signal training data for machine learning, the proposed model substitutes the two symbols + (positive number) and - (negative number) for each average difference value, and calculates the average of the emotional signals per second rather than over the total emotion response time (10 seconds).
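
A hypothetical sketch of the symbol substitution described above (function and variable names are invented): each per-second signal average is compared with the average of each emotional state's signals, and only the sign of each difference is kept as a symbolic feature for the maximum entropy model.

```python
def sign_features(signal_avgs, class_avgs):
    """Map per-second average differences to '+'/'-' symbols per emotional state."""
    feats = []
    for label, class_avg in class_avgs.items():
        for t, v in enumerate(signal_avgs):
            diff = v - class_avg[t]
            feats.append((label, t, '+' if diff >= 0 else '-'))
    return feats

# per-second averages of an input signal and of each emotional state (toy values)
signal = [0.8, 1.1, 0.9]
classes = {'positive': [0.7, 1.0, 1.2], 'negative': [0.9, 1.3, 0.8]}
print(sign_features(signal, classes))
```

Discretizing to signs is what lets the model train on few samples, since the feature space shrinks to two symbols per second and state.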

Intra-Sentence Segmentation using Maximum Entropy Model for Efficient Parsing of English Sentences (효율적인 영어 구문 분석을 위한 최대 엔트로피 모델에 의한 문장 분할)

  • Kim, Sung-Dong
    • Journal of KIISE: Software and Applications, v.32 no.5, pp.385-395, 2005
  • Long-sentence analysis has been a critical problem in machine translation because of its high complexity, and intra-sentence segmentation has been proposed to reduce parsing complexity. This paper presents an intra-sentence segmentation method based on a maximum entropy probability model that increases the coverage and accuracy of segmentation. We construct rules for choosing candidate segmentation positions by a learning method that uses the lexical context of words tagged as segmentation positions, and we generate a model that assigns a probability to each candidate segmentation position. The lexical contexts are extracted from a corpus tagged with segmentation positions and are incorporated into the probability model. We construct training data from Wall Street Journal sentences and evaluate intra-sentence segmentation on sentences from four different domains. The experiments show about $88\%$ segmentation accuracy and about $98\%$ coverage. The proposed method also improves parsing efficiency by a factor of 4.8 in speed and 3.6 in space.
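
A generic sketch of the maximum entropy probability model used here (the features and weights below are invented for illustration, not the paper's trained model): the probability that a position is a segmentation point is the exponential of the active feature weights, normalized over the two outcomes.

```python
import math

def maxent_prob(features, weights):
    """P(label | context) over the labels 'seg' and 'non-seg'."""
    scores = {}
    for label in ('seg', 'non-seg'):
        scores[label] = math.exp(sum(weights.get((label, f), 0.0) for f in features))
    z = sum(scores.values())               # normalizing constant
    return {label: s / z for label, s in scores.items()}

# toy weights: a preceding comma favors choosing the position as a segment point
weights = {('seg', 'prev=,'): 1.2, ('non-seg', 'pos=VB'): 0.5}
p = maxent_prob(['prev=,', 'pos=NN'], weights)
print(round(p['seg'], 3))    # ≈ 0.769 for these toy weights
```

Training would fit the weights so that expected feature counts match the tagged corpus; the sketch shows only the inference step.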