• Title/Abstract/Keywords: Sample entropy

71 search results (processing time: 0.027 s)

Kullback-Leibler Information-Based Tests of Fit for Inverse Gaussian Distribution

  • 최병진
    • Korean Journal of Applied Statistics
    • /
    • Vol. 24, No. 6
    • /
    • pp.1271-1284
    • /
    • 2011
  • This paper introduces Kullback-Leibler information-based goodness-of-fit tests for the inverse Gaussian distribution with unknown location and scale parameters, extending previously developed entropy-based tests. Four types of test statistics for testing simple or composite null hypotheses about the inverse Gaussian distribution are presented, and the window sizes and critical values for each sample size, needed to compute the statistics, are determined by simulation and provided in tables. In power simulations, the Kullback-Leibler information-based tests for the inverse Gaussian distribution with both location and scale parameters known show better power than the EDF tests for all alternative distributions and sample sizes. When only the location parameter or only the scale parameter is known, the power of the Kullback-Leibler information-based tests tends to increase with sample size for all alternatives. When both location and scale parameters are unknown, the Kullback-Leibler information-based tests generally show power comparable to the entropy-based tests, confirming that the two tests are equivalent.
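The entropy-based tests that the Kullback-Leibler statistics above extend are typically built on a spacing-based (Vasicek-type) entropy estimator with a window size m. A minimal sketch of that estimator (a generic illustration, not the authors' exact statistic):

```python
import math

def vasicek_entropy(sample, m):
    """Spacing-based (Vasicek) entropy estimate with window size m:
    H = (1/n) * sum_i log( n/(2m) * (x_(i+m) - x_(i-m)) ),
    with order statistics clamped at the sample boundaries."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n):
        lo = x[max(i - m, 0)]          # clamp lower order statistic
        hi = x[min(i + m, n - 1)]      # clamp upper order statistic
        total += math.log(n / (2.0 * m) * (hi - lo))
    return total / n
```

Doubling the spread of a sample raises this estimate by exactly log 2, mirroring the behavior of differential entropy under scaling.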

Large Magnetic Entropy Change in La0.55Ce0.2Ca0.25MnO3 Perovskite

  • Anwar, M.S.;Kumar, Shalendra;Ahmed, Faheem;Arshi, Nishat;Kim, G.W.;Lee, C.G.;Koo, Bon-Heun
    • Journal of Magnetics
    • /
    • Vol. 16, No. 4
    • /
    • pp.457-460
    • /
    • 2011
  • In this paper, the magnetic properties and magnetocaloric effect (MCE) of perovskite manganites of the type $La_{(0.75-X)}Ce_XCa_{0.25}MnO_3$ (x = 0.0, 0.2, 0.3 and 0.5), synthesized by the standard solid-state reaction method, are reported. From magnetic measurements as a function of temperature and applied magnetic field, we observed that the Curie temperature ($T_C$) of the prepared samples strongly depends on the Ce content and was found to be 255, 213 and 150 K for x = 0.0, 0.2 and 0.3, respectively. A large magnetocaloric effect was observed in the vicinity of $T_C$, with a maximum magnetic entropy change (${\mid}{\Delta}S_M{\mid}_{max}$) of 3.31 and 6.40 J/kgK at 1.5 and 4 T, respectively, for $La_{0.55}Ce_{0.2}Ca_{0.25}MnO_3$. In addition, the relative cooling power (RCP) of the sample under a magnetic field variation of 1.5 T reaches 59 J/kg. These results suggest that the $La_{0.55}Ce_{0.2}Ca_{0.25}MnO_3$ compound could be a suitable candidate as a working substance for magnetic refrigeration at 213 K.
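The magnetic entropy change reported above is conventionally extracted from isothermal magnetization curves via the Maxwell relation ${\Delta}S_M(T) = \int_0^H (\partial M/\partial T)\,dH$. A numerical sketch of that standard procedure with hypothetical M(T, H) data (not the authors' measurements):

```python
def entropy_change(T, H, M):
    """|dS_M| at midpoints of the temperature grid, from finite
    differences in T and trapezoidal integration over H.

    T: temperatures (K); H: applied fields (T);
    M: M[i][j] = magnetization at T[i], H[j]."""
    dS = []
    for i in range(len(T) - 1):
        dT = T[i + 1] - T[i]
        # dM/dT at each field value (finite difference)
        dMdT = [(M[i + 1][j] - M[i][j]) / dT for j in range(len(H))]
        # trapezoidal integral of dM/dT over H
        s = 0.0
        for j in range(len(H) - 1):
            s += 0.5 * (dMdT[j] + dMdT[j + 1]) * (H[j + 1] - H[j])
        dS.append(abs(s))
    return dS
```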

The Annealing Effect on Magnetocaloric Properties of Fe91-xYxZr9 Alloys

  • Kim, K.S.;Min, S.G.;Zidanic, J.;Yu, S.C.
    • Journal of Magnetics
    • /
    • Vol. 12, No. 4
    • /
    • pp.133-136
    • /
    • 2007
  • We have studied the magnetocaloric effect of as-quenched and annealed $Fe_{91-x}Y_xZr_9$ alloys. Samples were prepared by arc melting the high-purity elemental constituents under an argon atmosphere, followed by single-roller melt spinning. The alloys were annealed for one hour at 773 K in a vacuum chamber. The magnetization behavior of the samples was measured with a vibrating sample magnetometer. The Curie temperature increases with increasing Y concentration (x = 0 to 8). The entropy variation ${\Delta}S_M$ was found to peak in the vicinity of the Curie temperature. The results show that the annealed $Fe_{86}Y_5Zr_9$ and $Fe_{83}Y_8Zr_9$ alloys exhibit a larger magnetocaloric effect than the as-quenched alloys. The value is 1.23 J/kg K for the annealed $Fe_{86}Y_5Zr_9$ alloy versus 0.89 J/kg K for the as-quenched alloy. Likewise, ${\Delta}S_M$ for the $Fe_{83}Y_8Zr_9$ alloy is 0.72 J/kg K as-quenched and 1.09 J/kg K annealed.

Efficient Adaptive Algorithms Based on Zero-Error Probability Maximization

  • 김남용
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 39A, No. 5
    • /
    • pp.237-243
    • /
    • 2014
  • This paper proposes an efficient weight-update scheme for algorithms designed to maximize the zero-error probability (maximum zero-error probability, MZEP), in which the currently computed gradient is reused in the next gradient calculation, replacing the block-processing summation used in the conventional weight update. Experimental results show that the proposed scheme performs identically to the original MZEP while requiring no error buffer, which reduces system complexity and significantly cuts computation time. The proposed algorithm also has a better convergence speed than algorithms designed to minimize the error entropy.
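The efficiency gain described above comes from replacing a per-iteration block summation with a recursive update that reuses the previous result. The same idea in miniature, shown for a generic sliding-window sum rather than the MZEP gradient itself:

```python
def block_sums(x, w):
    """Naive block processing: recompute each length-w window sum
    from scratch (O(n*w), needs the whole buffer every step)."""
    return [sum(x[i:i + w]) for i in range(len(x) - w + 1)]

def recursive_sums(x, w):
    """Recursive update: correct the previous sum with one add and
    one subtract per step (O(n)), so no buffer re-summation occurs."""
    s = sum(x[:w])
    out = [s]
    for i in range(w, len(x)):
        s += x[i] - x[i - w]   # reuse the previous result
        out.append(s)
    return out
```

Both functions produce identical outputs; only the cost per step differs.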

A Hill-Sliding Strategy for Initialization of Gaussian Clusters in the Multidimensional Space

  • Park, J.Kyoungyoon;Chen, Yung-H.;Simons, Daryl-B.;Miller, Lee-D.
    • Korean Journal of Remote Sensing
    • /
    • Vol. 1, No. 1
    • /
    • pp.5-27
    • /
    • 1985
  • A hill-sliding technique was devised to extract Gaussian clusters from the multivariate probability density estimates of sample data as the first step of iterative unsupervised classification. The underlying assumption in this approach was that each cluster possesses a unimodal normal distribution. The key idea was that the proposed clustering function could distinguish elements of a cluster under formation from the rest of the feature space. Initial clusters were extracted one by one according to the hill-sliding tactics. A dimensionless cluster compactness parameter was proposed as a universal measure of cluster goodness and used satisfactorily in test runs with Landsat multispectral scanner (MSS) data. The normalized divergence, defined as the cluster divergence divided by the entropy of the entire sample data, was utilized as a general separability measure between clusters. An overall clustering objective function was set forth in terms of cluster covariance matrices, from which the cluster compactness measure could be deduced. Minimal improvement of the initial data partitioning was evaluated by this objective function in eliminating scattered sparse data points. The hill-sliding clustering technique developed herein is potentially applicable to decomposing any multivariate mixture distribution into a number of unimodal distributions when an appropriate distribution function for the data set is employed.

One-step deep learning-based method for pixel-level detection of fine cracks in steel girder images

  • Li, Zhihang;Huang, Mengqi;Ji, Pengxuan;Zhu, Huamei;Zhang, Qianbing
    • Smart Structures and Systems
    • /
    • Vol. 29, No. 1
    • /
    • pp.153-166
    • /
    • 2022
  • Identifying fine cracks in steel bridge facilities is a challenging task in structural health monitoring (SHM). This study proposed an end-to-end crack image segmentation framework based on a one-step Convolutional Neural Network (CNN) for pixel-level object recognition with high accuracy. To address the particular challenges arising from small-object detection against complex backgrounds, efforts were made in loss function selection to counter sample imbalance, and in module modification to improve generalization on complicated images. Specifically, loss functions were compared among alternatives including the Binary Cross Entropy (BCE), Focal, Tversky and Dice losses, with the last three specialized for biased sample distributions. Structural modifications with dilated convolution, Spatial Pyramid Pooling (SPP) and a Feature Pyramid Network (FPN) were also performed to form a new backbone termed CrackDet. Models with various loss functions and feature extraction modules were trained on crack images and tested on full-scale images collected on steel box girders. The CNN model incorporating the classic U-Net as its backbone with Dice loss as its loss function achieved the highest mean Intersection-over-Union (mIoU) of 0.7571 on full-scale pictures. In contrast, the best performance on cropped crack images, an mIoU of 0.7670, was achieved by integrating CrackDet with Dice loss.
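Of the compared loss functions, the Dice loss is the simplest to state: with Dice = 2|X∩Y| / (|X| + |Y|), the loss is 1 - Dice. A minimal plain-Python sketch for flat binary masks (an illustration of the standard formula, not the paper's implementation):

```python
def dice_loss(pred, target, eps=1e-6):
    """Dice loss for binary segmentation.

    pred: predicted foreground probabilities in [0, 1];
    target: ground-truth labels (0 or 1); both flat sequences.
    eps smooths the ratio when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))   # |X ∩ Y|
    denom = sum(pred) + sum(target)                    # |X| + |Y|
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

A perfect prediction gives a loss near 0; a fully wrong one gives a loss near 1, and the overlap term makes the loss insensitive to the background-to-foreground ratio, which is why it suits biased sample distributions.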

A Method to Determine the Final Importance of Customer Attributes Considering Statistical Significance

  • 김경미
    • Journal of Korean Society for Quality Management
    • /
    • Vol. 36, No. 3
    • /
    • pp.1-12
    • /
    • 2008
  • Obtaining the accurate final importance of each customer attribute (CA) is very important in the house of quality (HOQ), because it is deployed to the quality of the final product or service through quality function deployment (QFD). The final importance is often calculated as the product of the relative importance rate and the competitive priority rate. Traditionally, the sample mean is used to estimate the two rates, but their dispersion is ignored. This paper proposes a new approach that incorporates statistical significance to account for the dispersion of the rates in determining the final importance of each CA. The approach is illustrated with the design of a car door for both crisp and fuzzy numbers.
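The traditional calculation described above is a simple product of two sample-mean rates. A minimal sketch of that baseline with hypothetical ratings (the paper's contribution, the statistical-significance adjustment, is not reproduced here):

```python
def final_importance(rel_ratings, comp_ratings):
    """Traditional HOQ final importance: the product of the sample
    means of the relative-importance and competitive-priority
    ratings, ignoring their dispersion (the issue the paper raises)."""
    rel = sum(rel_ratings) / len(rel_ratings)
    comp = sum(comp_ratings) / len(comp_ratings)
    return rel * comp
```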

Time-Frequency Analysis of Electrohysterogram for Classification of Term and Preterm Birth

  • Ryu, Jiwoo;Park, Cheolsoo
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 4, No. 2
    • /
    • pp.103-109
    • /
    • 2015
  • In this paper, a novel method for the classification of term and preterm birth is proposed based on time-frequency analysis of the electrohysterogram (EHG) using multivariate empirical mode decomposition (MEMD). EHG is promising for preterm birth prediction because it is low-cost and accurate compared to other prediction methods, such as tocodynamometry (TOCO). Previous studies on preterm birth prediction applied prefiltering based on Fourier analysis of the EHG, followed by feature extraction and classification, even though Fourier analysis is suboptimal for biomedical signals such as EHG because of their nonlinearity and nonstationarity. Therefore, the proposed method applies prefiltering based on MEMD instead of Fourier-based prefilters before extracting the sample entropy feature and classifying the term and preterm birth groups. For the evaluation, the PhysioNet term-preterm EHG database was used, with the proposed method and the Fourier prefiltering-based method adopted for a comparative study. The results showed that the area under the receiver operating characteristic (ROC) curve (AUC) increased by 0.0351 when MEMD was used instead of the Fourier-based prefilter.
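The sample entropy feature used above has a compact standard definition: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m template vectors within tolerance r (Chebyshev distance, self-matches excluded) and A counts the same for length m + 1. A plain-Python sketch of that standard definition (not the paper's implementation):

```python
import math

def sample_entropy(x, m, r):
    """SampEn(m, r) = -ln(A / B) over a 1-D signal x."""
    n = len(x)

    def count_matches(length):
        # all overlapping templates of the given length
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):   # excludes self-matches
                # Chebyshev (max-coordinate) distance within tolerance r
                if max(abs(a - b) for a, b in
                       zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)       # length-m matches
    a = count_matches(m + 1)   # length-(m+1) matches
    return -math.log(a / b)
```

A perfectly periodic signal yields a value near 0 (high regularity), while irregular signals yield larger values; this regularity measure is what distinguishes term from preterm EHG recordings in the study.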

Development of a Computer Program for the Computation of Properties of 12 Refrigerants (R11, R12, R13, R14, R21, R22, R23, R113, R114, R500, R502, C318)

  • 이기방;정명균
    • The Magazine of the Society of Air-Conditioning and Refrigerating Engineers of Korea (SAREK)
    • /
    • Vol. 16, No. 5
    • /
    • pp.477-483
    • /
    • 1987
  • A FORTRAN code has been developed to calculate the thermodynamic properties of 12 kinds of refrigerants. Input variables are temperature and pressure, or temperature only, depending on the saturation state. The program outputs specific volume, saturation pressure, enthalpy, entropy, specific heats and speed of sound. Sample calculations show that the output properties are in very good agreement with thermodynamic tables and charts.


Sensitivity Approach of Sequential Sampling Using Adaptive Distance Criterion

  • 정재준;이태희
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • Vol. 29, No. 9
    • /
    • pp.1217-1224
    • /
    • 2005
  • To improve the accuracy of a metamodel, additional sample points can be selected using a specified criterion, an approach often called sequential sampling. The sequential sampling approach requires a small computational cost compared to one-stage optimal sampling. It is also capable of monitoring the metamodeling process by identifying a design region important for the approximation and further refining the fidelity in that region. However, existing criteria such as mean squared error, entropy and maximin distance essentially depend on the distance between previously selected sample points. Therefore, although sufficient sample points are selected, these sequential sampling strategies cannot guarantee the accuracy of the metamodel near optimum points, because the criteria of the existing approaches are inefficient at approximating extremum and inflection points of the original model. In this research, a new sequential sampling approach using the sensitivity of the metamodel is proposed to reflect the behavior of the response. Various functions that can represent a variety of features of engineering problems are used to validate the sensitivity approach. In addition to root mean squared error and maximum error, the error of the metamodel at optimum points is tested to assess the superiority of the proposed approach. That is, optimum solutions to the minimization of the metamodel obtained from the proposed approach are compared with those of the true functions. For comparison, both the mean squared error approach and the maximin distance approach are also examined.
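The maximin distance criterion listed among the existing criteria above selects, from a candidate set, the point whose minimum distance to the already-chosen samples is largest. A minimal sketch with hypothetical 1-D candidate points (the baseline criterion only, not the authors' sensitivity approach):

```python
def maximin_next(existing, candidates):
    """Return the candidate maximizing the minimum Euclidean distance
    to the existing sample points (maximin distance criterion).

    Points are tuples, so the same code works in any dimension."""
    def min_dist(c):
        return min(sum((a - b) ** 2 for a, b in zip(c, e)) ** 0.5
                   for e in existing)
    return max(candidates, key=min_dist)
```

Note how this depends only on the geometry of the selected points, never on the response values, which is exactly why such criteria can miss extremum and inflection regions of the original model.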