• Title/Summary/Keyword: entropy measure

Search Results: 203

RELATIVE SEQUENCE ENTROPY PAIRS FOR A MEASURE AND RELATIVE TOPOLOGICAL KRONECKER FACTOR

  • Ahn, Young-Ho;Lee, Jungseob;Park, Kyewon Koh
    • Journal of the Korean Mathematical Society
    • /
    • v.42 no.4
    • /
    • pp.857-869
    • /
    • 2005
  • Let $(X, \mathcal{B}, \mu, T)$ be a dynamical system and $(Y, \mathcal{A}, \nu, S)$ be a factor. We investigate the relative sequence entropy of a partition of X via the maximal compact extension of $(Y, \mathcal{A}, \nu, S)$. We define relative sequence entropy pairs and, using them, we find the relative topological $\mu$-Kronecker factor over $(Y, \nu)$, which is the maximal topological factor having relative discrete spectrum over $(Y, \nu)$. We also describe the topological Kronecker factor, which is the maximal factor having discrete spectrum for any invariant measure.
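For background, the absolute notion underlying this paper's relative version is Kushnirenko's sequence entropy of a measure-preserving transformation $T$ along an increasing integer sequence $S = \{s_1 < s_2 < \cdots\}$; the following is the standard definition from the literature, not a formula taken from this abstract:

```latex
h_\mu^S(T) \;=\; \sup_{P} \,\limsup_{k \to \infty} \frac{1}{k}\,
H_\mu\!\Big(\bigvee_{i=1}^{k} T^{-s_i} P\Big),
```

where the supremum is over finite measurable partitions $P$; the relative version studied here replaces $H_\mu$ with conditional entropy over the factor $(Y, \mathcal{A}, \nu, S)$.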

ENTROPY OF NONAUTONOMOUS DYNAMICAL SYSTEMS

  • Zhu, Yujun;Liu, Zhaofeng;Xu, Xueli;Zhang, Wenda
    • Journal of the Korean Mathematical Society
    • /
    • v.49 no.1
    • /
    • pp.165-185
    • /
    • 2012
  • In this paper, the topological entropy and measure-theoretic entropy of nonautonomous dynamical systems are studied. Some properties of these entropies are given, and the relation between them is discussed. Moreover, bounds on them are obtained for several particular nonautonomous systems, such as affine transformations on metrizable groups (especially on the torus) and smooth maps on Riemannian manifolds.

Information measures for generalized hesitant fuzzy information

  • Park, Jin Han;Kwark, Hee Eun;Kwun, Young Chel
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.1
    • /
    • pp.76-81
    • /
    • 2016
  • In this paper, we present entropy and similarity measures for generalized hesitant fuzzy information and discuss their desirable properties. Some measure formulas are developed, and the relationships among them are investigated. We show that the similarity measure and the entropy for generalized hesitant fuzzy information can be transformed into each other based on their axiomatic definitions. Furthermore, we investigate an approach to multiple attribute decision-making problems in which attribute weights are unknown and the evaluation values of attributes for each alternative are given in the form of GHFEs.

Entropy of image fuzzy number by extension principle

  • Hong, Dug-Hun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.5-8
    • /
    • 2002
  • In this paper, we introduce a simple new method for calculating the entropy of the image fuzzy set obtained by the extension principle, without calculating its membership function.

Selection of data set with fuzzy entropy function (퍼지 엔트로피 함수를 이용한 데이터추출)

  • Lee, Sang-Hyuk;Cheon, Seong-Pyo;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2004.04a
    • /
    • pp.349-352
    • /
    • 2004
  • In this paper, the selection of a data set from the universe set is carried out with a fuzzy entropy function. Based on the definition of fuzzy entropy, we propose a fuzzy entropy function and prove that it satisfies the definition. The proposed fuzzy entropy function calculates the certainty or uncertainty value of a data set, so we can choose a data set that satisfies a given bound or reference. Therefore, a reliable data set can be obtained with the proposed fuzzy entropy function. With a simple example, we verify that the proposed fuzzy entropy function selects a reliable data set.


Selection of data set with fuzzy entropy function

  • Lee, Sang-Hyuk;Cheon, Seong-Pyo;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.655-659
    • /
    • 2004
  • In this paper, the selection of a data set from the universe set is carried out with a fuzzy entropy function. Based on the definition of fuzzy entropy, a fuzzy entropy function is proposed and proved to satisfy the definition. The proposed fuzzy entropy function calculates the certainty or uncertainty value of a data set, so we can choose a data set that satisfies a given bound or reference. Therefore, a reliable data set can be obtained with the proposed fuzzy entropy function. With a simple example, we verify that the proposed fuzzy entropy function selects a reliable data set.

On entropy for intuitionistic fuzzy sets applying the Euclidean distance

  • Hong, Dug-Hun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.12 no.6
    • /
    • pp.583-588
    • /
    • 2002
  • Recently, Szmidt and Kacprzyk [Fuzzy Sets and Systems 118 (2001) 467-477] proposed a non-probabilistic-type entropy measure for intuitionistic fuzzy sets. It is a result of a geometric interpretation of intuitionistic fuzzy sets and uses a ratio of distances between them. They showed that, when the Hamming distance is applied, the proposed measure can be defined in terms of the ratio of the intuitionistic fuzzy cardinalities of $F \cap F^c$ and $F \cup F^c$. In this note, it is shown that, when the Euclidean distance is applied, the proposed measure can likewise be defined in terms of the ratio of some function of the intuitionistic fuzzy cardinalities of $F \cap F^c$ and $F \cup F^c$.
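The cardinality-ratio idea behind such measures can be illustrated in its simplest (ordinary, non-intuitionistic) fuzzy-set form, where the entropy is Kosko's ratio of the sigma-counts of $F \cap F^c$ and $F \cup F^c$. This is a minimal sketch of that classical ratio, not the intuitionistic measure from the paper; the function name is mine:

```python
def fuzzy_entropy(mu):
    """Kosko's entropy of a fuzzy set given its membership degrees mu:
    e(F) = card(F ∩ F^c) / card(F ∪ F^c), where card is the sigma-count
    (sum of membership degrees) and complement membership is 1 - mu."""
    inter = sum(min(m, 1.0 - m) for m in mu)   # F ∩ F^c: pointwise min(mu, 1-mu)
    union = sum(max(m, 1.0 - m) for m in mu)   # F ∪ F^c: pointwise max(mu, 1-mu)
    return inter / union if union else 0.0

# A crisp set has entropy 0; the maximally fuzzy set (all degrees 0.5) has entropy 1.
print(fuzzy_entropy([0.0, 1.0, 1.0]))  # 0.0
print(fuzzy_entropy([0.5, 0.5]))       # 1.0
```

The intuitionistic case adds a non-membership degree per element, but the same intersection/union cardinality ratio structure carries over.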

On entropy for intuitionistic fuzzy sets applying the Euclidean distance

  • Hong, Dug-Hun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.13-16
    • /
    • 2002
  • Recently, Szmidt and Kacprzyk [Fuzzy Sets and Systems 118 (2001) 467-477] proposed a non-probabilistic-type entropy measure for intuitionistic fuzzy sets. It is a result of a geometric interpretation of intuitionistic fuzzy sets and uses a ratio of distances between them. They showed that, when the Hamming distance is applied, the proposed measure can be defined in terms of the ratio of the intuitionistic fuzzy cardinalities of $F \cap F^c$ and $F \cup F^c$. In this note, it is shown that, when the Euclidean distance is applied, the proposed measure can likewise be defined in terms of the ratio of some function of the intuitionistic fuzzy cardinalities of $F \cap F^c$ and $F \cup F^c$.

Shannon's Information Theory and Document Indexing (Shannon의 정보이론과 문헌정보)

  • Chung Young Mee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.6
    • /
    • pp.87-103
    • /
    • 1979
  • Information storage and retrieval is part of the general communication process. In Shannon's information theory, the information contained in a message is a measure of uncertainty about the information source, and the amount of information is measured by entropy. Indexing is a process of reducing the entropy of the information source, since the document collection is divided into many smaller groups according to the subjects the documents deal with. Significant concepts contained in every document are mapped into the set of all sets of index terms; thus the index itself is formed by paired sets of index terms and documents. Without indexing, the entropy of a document collection consisting of N documents is $\log_2 N$, whereas the average entropy of the smaller groups $(W_1, W_2, \ldots, W_m)$ is as small as $(\sum_{i=1}^{m} H(W_i))/m$. Retrieval efficiency is a measure of an information system's performance, which is largely affected by the goodness of the index. If all and only the documents evaluated as relevant to a user's query can be retrieved, the information system is said to be 100% efficient. A document file W may be potentially classified into two sets of documents, relevant and non-relevant to a specific query. After retrieval, the document file W' is reclassified into four sets: relevant-retrieved, relevant-not retrieved, non-relevant-retrieved, and non-relevant-not retrieved. It is shown in the paper that the difference between the two entropies of document file W and document file W' is a proper measure of retrieval efficiency.
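The entropy-reduction effect of partitioning that this abstract describes can be checked numerically. This is a minimal sketch assuming each document is equally likely, so a group of size s has entropy $\log_2 s$; the function names are mine, not the paper's:

```python
import math

def collection_entropy(n_docs):
    """Entropy of an unindexed collection of n_docs equally likely documents."""
    return math.log2(n_docs)

def average_group_entropy(group_sizes):
    """Average entropy (sum_i H(W_i)) / m after indexing partitions the
    collection into m groups, H(W_i) = log2(|W_i|) under uniformity."""
    return sum(math.log2(s) for s in group_sizes) / len(group_sizes)

# 16 documents: log2(16) = 4 bits of uncertainty.
# Indexed into 4 groups of 4: average entropy drops to log2(4) = 2 bits.
print(collection_entropy(16))               # 4.0
print(average_group_entropy([4, 4, 4, 4]))  # 2.0
```

An uneven partition still reduces the average: for groups of sizes 2 and 8 the average entropy is (1 + 3)/2 = 2 bits, below the 4 bits of the unindexed collection.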


Graphical Study on the Entropy of Order Statistics

  • Park, Sang-Un
    • Communications for Statistical Applications and Methods
    • /
    • v.5 no.2
    • /
    • pp.307-313
    • /
    • 1998
  • The entropy measure is used to quantify the uncertainty of order statistics filters and to choose the length of consecutive order statistics filters. However, computing the entropy of all possible sets of consecutive order statistics requires many calculations, and those calculations return many numerical values. Thus we provide an efficient graphical presentation of these values, which makes it easy to understand the distribution of entropy among the order statistics.
