• Title/Summary/Keyword: Information entropy


Application of Discrimination Information (Cross Entropy) as Information-theoretic Measure to Safety Assessment in Manufacturing Processes

  • Choi, Gi-Heung;Ryu, Boo-Hyung
    • International Journal of Safety
    • /
    • v.4 no.2
    • /
    • pp.1-5
    • /
    • 2005
  • The design of a manufacturing process can, in general, create new processes that may harm workers. Designing safety-guaranteed manufacturing processes is therefore very important, since the design determines the ultimate outcomes of manufacturing activities involving worker safety. This study discusses the application of discrimination information (cross entropy) to the safety assessment of manufacturing processes. The idea is based on general design principles and their applications. An example of Cartesian robotic movement is given.
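Discrimination information between two discrete distributions can be sketched as below. This is the generic Kullback-Leibler computation, not the paper's specific safety-assessment formulation, and the distributions shown are hypothetical:

```python
import math

def discrimination_information(p, q):
    """Discrimination information (Kullback-Leibler divergence)
    D(p || q) in bits between two discrete distributions."""
    if not (math.isclose(sum(p), 1.0) and math.isclose(sum(q), 1.0)):
        raise ValueError("p and q must each sum to 1")
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical example: observed state distribution of a process
# versus a reference ("safe") distribution; a positive value
# measures the departure of the observed state from the reference.
reference = [0.7, 0.2, 0.1]
observed = [0.4, 0.4, 0.2]
print(discrimination_information(observed, reference))
```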

Information Theoretic Learning with Maximizing Tsallis Entropy

  • Aruga, Nobuhide;Tanaka, Masaru
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.810-813
    • /
    • 2002
  • We present information-theoretic learning based on the Tsallis entropy maximization principle for various values of q. The Tsallis entropy is one of the generalized entropies and is a canonical entropy in the sense of physics. Further, we consider the dependency of the learning on the parameter $\sigma$, the standard deviation of an assumed a priori distribution of the samples, such as a Parzen window.
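The Tsallis entropy of a discrete distribution can be sketched as below; this is a minimal illustration of the definition together with its q → 1 Shannon limit, not a reproduction of the learning procedure:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum p_i^q) / (q - 1);
    as q -> 1 it reduces to the Shannon entropy (in nats)."""
    if math.isclose(q, 1.0):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

# The entropy of a uniform distribution varies with q.
uniform = [0.25] * 4
for q in (0.5, 1.0, 2.0):
    print(q, tsallis_entropy(uniform, q))
```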


Entropy and information energy arithmetic operations for fuzzy numbers

  • Hong, Dug-Hun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.1-4
    • /
    • 2002
  • There have been several typical methods used to measure the fuzziness (entropy) of fuzzy sets. The work of Pedrycz is the original motivation of this paper. This paper studies the entropy variation of fuzzy numbers under arithmetic operations (addition, subtraction, multiplication) and the relationship between entropy and information energy. It is shown that, through the arithmetic operations, the entropy of the resultant fuzzy number has an arithmetic relation with the entropies of the original fuzzy numbers. Moreover, the information energy variation of fuzzy numbers is also discussed. The results generalize earlier results of Pedrycz [FSS 64 (1994) 21-30] and Wang and Chiu [FSS 103 (1999) 443-455].
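For a fuzzy set given by its membership grades, the classical De Luca-Termini entropy and an information-energy measure can be sketched as below. These are generic textbook definitions for illustration; the paper's results concern fuzzy numbers under arithmetic operations, which are not reproduced here:

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzziness entropy: zero for a crisp set,
    maximal when every membership grade equals 0.5."""
    h = 0.0
    for mu in memberships:
        for x in (mu, 1.0 - mu):
            if x > 0.0:
                h -= x * math.log(x)
    return h

def information_energy(memberships):
    """Information energy of the membership grades (sum of squares)."""
    return sum(mu * mu for mu in memberships)

print(fuzzy_entropy([0.0, 1.0, 1.0]))  # crisp set: entropy 0
print(fuzzy_entropy([0.5, 0.5, 0.5]))  # maximally fuzzy set
```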

Entropy Coders Based on Binary Forward Classification for Image Compression (영상 압축을 위한 이진 순방향 분류 기반 엔트로피 부호기)

  • Yoo, Hoon;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.4B
    • /
    • pp.755-762
    • /
    • 2000
  • Entropy coders, as a noiseless compression method, are widely used as the final stage of image compression, so there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). BFC requires classification overhead, but there is no change between the amount of input information and that of the classified output information, a property we prove in this paper. Using this property, we propose entropy coders that apply a Golomb-Rice coder after BFC (BFC+GR) and an arithmetic coder after BFC (BFC+A). The proposed entropy decoders incur no further complexity from BFC. Simulation results also show better performance than other entropy coders of similar complexity.
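The Golomb-Rice stage mentioned above can be sketched as below; this is the standard power-of-two Golomb code on its own, independent of the proposed BFC stage:

```python
def rice_encode(n, k):
    """Golomb-Rice code of a non-negative integer n with parameter k:
    the quotient n >> k in unary ('1' * q followed by a '0' terminator),
    then the remainder in exactly k binary digits."""
    q = n >> k
    r = n & ((1 << k) - 1)
    code = "1" * q + "0"
    if k > 0:
        code += format(r, "0{}b".format(k))
    return code

# quotient 9 >> 2 = 2, remainder 9 & 3 = 1 -> '110' + '01'
print(rice_encode(9, 2))  # '11001'
```

Small k suits sharply peaked symbol distributions; larger k trades a shorter unary part for a longer fixed remainder.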


Power Investigation of the Entropy-Based Test of Fit for Inverse Gaussian Distribution by the Information Discrimination Index

  • Choi, Byungjin
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.6
    • /
    • pp.837-847
    • /
    • 2012
  • Inverse Gaussian distribution is widely used in applications to analyze and model right-skewed data. To assess the appropriateness of the distribution prior to data analysis, Mudholkar and Tian (2002) proposed an entropy-based test of fit. The test is based on the entropy power fraction (EPF) index suggested by Gokhale (1983). The simulation results report that the power of the entropy-based test is superior to other goodness-of-fit tests; however, this observation is based on small-scale simulation results for the standard exponential, Weibull W(1, 2) and lognormal LN(0.5, 1) distributions. A large-scale simulation against various alternative distributions would be needed to evaluate the power of the entropy-based test; however, a theoretical method is more effective for investigating the power. In this paper, utilizing the information discrimination (ID) index defined by Ehsan et al. (1995) as a mathematical tool, we scrutinize the power of the entropy-based test. The selected alternative distributions are the gamma, Weibull and lognormal distributions, which are widely used in data analysis as alternatives to the inverse Gaussian distribution. The study results are provided and an illustrative example is analyzed.
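Entropy-based tests of fit rest on a sample entropy estimate; Vasicek's m-spacing estimator, commonly used in this family of tests, can be sketched as below. The specific test statistic of Mudholkar and Tian is not reproduced, and the sample is illustrative:

```python
import math

def vasicek_entropy(sample, m):
    """Vasicek's m-spacing estimator of differential entropy:
    the average of log(n * (X_(i+m) - X_(i-m)) / (2m)) over the
    order statistics, with indices clamped at the boundaries."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i in range(n):
        lo = x[max(i - m, 0)]
        hi = x[min(i + m, n - 1)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

# Evenly spread sample on [0, 1): true differential entropy is 0,
# so the estimate should be close to 0 (with some boundary bias).
sample = [i / 100 for i in range(100)]
print(vasicek_entropy(sample, 5))
```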

A Study on the Presentation of Idea in Information and Entropy Theory in Vegetation Data (식피 Data 에 대한 Information 과 Entropy 이론의 실용연구)

  • Park, Seung Tai
    • The Korean Journal of Ecology
    • /
    • v.10 no.2
    • /
    • pp.91-107
    • /
    • 1987
  • This study is concerned with some methods and applications used as a basis for information and entropy analysis of vegetation data. These methods are adopted for evaluating the effect of sampling intensity on information, which represents the departure of an observed variable from a standard component. Classes in the data matrix are calculated by using a marginal dispersion array for the rank and weighting information program. Finally, the information and entropy are computed by applying seven options. For the application to vegetation studies, two models for cluster analysis and analysis of concentration are explained in detail. Cluster analysis is based on the use of equivocation information and Rajski's metric. The analysis of concentration utilizes coherence coefficients as transformed values, adjusted from blocks and entropy values. The relationship between three vegetation clusters and four stands of the Naejangsan data is highly significant, accounting for 79% of total variance. Cluster A tends to prefer the north side, and cluster C the south side.


Identification of the associations between genes and quantitative traits using entropy-based kernel density estimation

  • Yee, Jaeyong;Park, Taesung;Park, Mira
    • Genomics & Informatics
    • /
    • v.20 no.2
    • /
    • pp.17.1-17.11
    • /
    • 2022
  • Genetic associations have been quantified using a number of statistical measures. Entropy-based mutual information may be one of the more direct ways of estimating the association, in the sense that it does not depend on the parametrization. For this purpose, both the entropy and conditional entropy of the phenotype distribution should be obtained. Quantitative traits, however, do not usually allow an exact evaluation of entropy. The estimation of entropy needs a probability density function, which can be approximated by kernel density estimation. We have investigated the proper sequence of procedures for combining the kernel density estimation and entropy estimation with a probability density function in order to calculate mutual information. Genotypes and their interactions were constructed to set the conditions for conditional entropy. Extensive simulation data created using three types of generating functions were analyzed using two different kernels as well as two types of multifactor dimensionality reduction and another probability density approximation method called m-spacing. The statistical power in terms of correct detection rates was compared. Using kernels was found to be most useful when the trait distributions were more complex than simple normal or gamma distributions. A full-scale genomic dataset was explored to identify associations using the 2-h oral glucose tolerance test results and γ-glutamyl transpeptidase levels as phenotypes. Clearly distinguishable single-nucleotide polymorphisms (SNPs) and interacting SNP pairs associated with these phenotypes were found and listed with empirical p-values.
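One plausible sketch of this pipeline is shown below: a Gaussian kernel density estimate, a plug-in entropy estimate, and mutual information computed as the marginal entropy minus the genotype-conditional entropy. The bandwidth and the toy data are assumptions, not the paper's settings:

```python
import math

def kde_pdf(sample, h):
    """Gaussian kernel density estimate with bandwidth h."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

def entropy_estimate(sample, h):
    """Plug-in (resubstitution) entropy estimate: -mean of log f_hat(x_i)."""
    pdf = kde_pdf(sample, h)
    return -sum(math.log(pdf(x)) for x in sample) / len(sample)

def mutual_information(trait_by_genotype, h):
    """I(trait; genotype) = H(trait) - sum_g p(g) * H(trait | g)."""
    pooled = [v for vals in trait_by_genotype.values() for v in vals]
    n = len(pooled)
    conditional = sum(len(vals) / n * entropy_estimate(vals, h)
                      for vals in trait_by_genotype.values())
    return entropy_estimate(pooled, h) - conditional

# Toy data: two genotype groups with well-separated trait values
# should yield clearly positive mutual information.
groups = {"AA": [0.0, 0.2, 0.4, 0.6], "Aa": [5.0, 5.2, 5.4, 5.6]}
print(mutual_information(groups, 0.5))
```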

Evaluation of Classification Algorithm Performance of Sentiment Analysis Using Entropy Score (엔트로피 점수를 이용한 감성분석 분류알고리즘의 수행도 평가)

  • Park, Man-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.9
    • /
    • pp.1153-1158
    • /
    • 2018
  • Among a variety of information sources, online customer evaluations and social media information are critical for businesses, as they influence customers' decision making. Surveys that ask customers to identify their varied needs and complaints are limited by time and money. Customer review data at online shopping malls provide ideal data sources for analyzing customer sentiment about products. In this study, we collected product review data on Samsung and Apple smartphones from Amazon. We applied five classification algorithms used as representative sentiment analysis techniques in previous studies: support vector machines, bagging, random forest, classification and regression trees, and maximum entropy. In this study, we propose an entropy score that can comprehensively evaluate the performance of a classification algorithm. When the five algorithms were evaluated with the entropy score, the SVM algorithm's score was ranked highest.
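The paper's entropy score is not fully specified in the abstract. As a generic illustration of entropy applied to classifier outputs, the Shannon entropy of a single (hypothetical) confusion-matrix row can be sketched as below; this is not the paper's metric:

```python
import math

def row_entropy(counts):
    """Shannon entropy (bits) of one confusion-matrix row:
    0 when all predictions for the class agree, and close to
    log2(number of labels) when predictions are near-random."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

print(row_entropy([98, 1, 1]))   # confident classifier row: low entropy
print(row_entropy([34, 33, 33])) # near-random row: close to log2(3)
```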

A Study on the Entropy Evaluation Method for Time-Dependent Noise Sources of Windows Operating System and Its Applications (윈도우 운영체제의 시간 종속 잡음원에 대한 엔트로피 평가 방법 연구)

  • Kim, Yewon;Yeom, Yongjin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.4
    • /
    • pp.809-826
    • /
    • 2018
  • The entropy evaluation of noise sources is one of the evaluation methods for random number generators, which are essential elements of modern cryptographic systems and cryptographic modules. The major existing entropy evaluation methods developed abroad are more suitable for hardware noise sources than for software noise sources, and the quantitative evaluation of the entropy of a software noise source is difficult. In this paper, we propose an entropy evaluation method suitable for software noise sources, considering their characteristics. We select time-dependent noise sources, which are software noise sources of the Windows OS, and perform heuristic and experimental analyses considering the characteristics of each time-dependent noise source. Based on these analyses, we propose an entropy harvesting method for the noise sources and a min-entropy estimation method as the entropy evaluation method for time-dependent noise sources. We also show how to use our entropy evaluation method in the Conditioning Component described in NIST SP 800-90B (USA).
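A min-entropy estimate in the spirit of the most-common-value estimator of NIST SP 800-90B can be sketched as below. This is simplified: the standard's full suite of estimators and its exact confidence-bound form are not reproduced:

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Most-common-value min-entropy estimate: upper-bound the
    probability of the most frequent sample with a 99% normal
    confidence bound, then take -log2 of that bound."""
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1.0 - p_hat) / (n - 1)))
    return -math.log2(p_upper)

# A constant source carries no entropy; a uniform 4-bit source
# should estimate a little under 4 bits per sample.
print(mcv_min_entropy([7] * 1000))
print(mcv_min_entropy(list(range(16)) * 64))
```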

A New Formulation of the Reconstruction Problem in Neutronics Nodal Methods Based on Maximum Entropy Principle (노달방법의 중성자속 분포 재생 문제에의 최대 엔트로피 원리에 의한 새로운 접근)

  • Na, Won-Joon;Cho, Nam-Zin
    • Nuclear Engineering and Technology
    • /
    • v.21 no.3
    • /
    • pp.193-204
    • /
    • 1989
  • This paper develops a new method for reconstructing the neutron flux distribution that is based on the maximum entropy principle in information theory. The probability distribution that maximizes the entropy provides the most unbiased, objective probability distribution consistent with the known partial information. The partial information consists of the assembly volume-averaged neutron flux, the surface-averaged neutron fluxes, and the surface-averaged neutron currents, which are the results of the nodal calculation. The flux distribution on the boundary of a fuel assembly, which is the boundary condition for the neutron diffusion equation, is transformed into a probability distribution in the entropy expression. The most objective boundary flux distribution is deduced from the results of the nodal calculation by the maximum entropy method. This boundary flux distribution is then used as the boundary condition in an imbedded heterogeneous assembly calculation to provide the detailed flux distribution. The results of the new method applied to several PWR benchmark problem assemblies show that the reconstruction errors are comparable with those of form-function methods in the inner region of the assembly, while they are relatively large near the assembly boundary. The incorporation of the surface-averaged neutron currents into the constraint information (not done in the present study) should provide better results.
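The core maximum entropy step, choosing the least-biased distribution consistent with partial information, can be sketched for a simple mean constraint as below. This is a generic illustration with hypothetical values; the paper's boundary-flux formulation is not reproduced:

```python
import math

def maxent_given_mean(values, target_mean, iters=200):
    """Maximum-entropy discrete distribution p_i proportional to
    exp(lam * x_i) whose mean matches target_mean, found by
    bisection on the Lagrange multiplier lam."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0  # mean_for is increasing in lam
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# With the mean fixed at the midpoint, the least-biased
# (maximum entropy) answer is the uniform distribution.
print(maxent_given_mean([0.0, 1.0, 2.0, 3.0], 1.5))
```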
