• Title/Summary/Keyword: bayesian learning


On-line Bayesian Learning based on Wireless Sensor Network (무선 센서 네트워크에 기반한 온라인 베이지안 학습)

  • Lee, Ho-Suk
    • Proceedings of the Korean Information Science Society Conference / 2007.06d / pp.105-108 / 2007
  • Bayesian learning networks are employed in diverse applications. This paper discusses a Bayesian learning network algorithm structure that can be applied in the wireless sensor network environment for various online applications. First, it reviews Bayesian parameter learning, Bayesian DAG structure learning, the characteristics of wireless sensor networks, and data gathering in wireless sensor networks. Second, it discusses the important considerations for online Bayesian learning networks and the conceptual structure of the learning algorithm.
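
A minimal sketch of online Bayesian parameter learning of the kind the abstract refers to, using a conjugate Beta-Bernoulli update over a stream of binary sensor readings. This is a generic illustration, not the paper's algorithm, and the variable names are assumptions.

```python
# Illustrative online Bayesian parameter learning (Beta-Bernoulli conjugate update).
# Generic sketch, not the algorithm proposed in the paper.

def online_beta_bernoulli(readings, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) posterior one binary sensor reading at a time."""
    for r in readings:                     # r is 1 (event observed) or 0 (not observed)
        alpha += r
        beta += 1 - r
        yield alpha / (alpha + beta)       # current posterior mean of the event probability

# Example: a short stream of readings gathered from a wireless sensor node
stream = [1, 0, 1, 1, 0, 1]
for mean in online_beta_bernoulli(stream):
    print(round(mean, 3))
```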


Bayesian Learning for Self Organizing Maps (자기조직화 지도를 위한 베이지안 학습)

  • 전성해; 전홍석; 황진수
    • The Korean Journal of Applied Statistics / v.15 no.2 / pp.251-267 / 2002
  • Kohonen's Self-Organizing Map (SOM) is a very fast neural-network algorithm, but it does not provide clear rules for interpreting its training results. In this paper, we introduce Bayesian Learning for Self-Organizing Maps (BLSOM), which combines the self-organizing map with Bayesian learning, adding explanatory power to the model and improving prediction. Unlike standard SOM, BLSOM reaches global optima regardless of initialization; this is demonstrated experimentally in the paper.
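
For context, a minimal sketch of a single standard Kohonen SOM update step, the base algorithm the abstract builds on. The Bayesian extension (BLSOM) is specific to the paper and is not reproduced here; the grid size, learning rate, and neighborhood width below are illustrative assumptions.

```python
import numpy as np

# Plain Kohonen SOM update for one input vector; the paper's Bayesian layer (BLSOM)
# sits on top of this kind of update and is not reproduced here.
def som_step(weights, x, lr=0.1, sigma=1.0):
    """weights: (rows, cols, dim) codebook; x: (dim,) input vector."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)             # distance to every unit
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    grid = np.indices((rows, cols)).transpose(1, 2, 0)
    grid_dist = np.linalg.norm(grid - np.array(bmu), axis=2)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))        # neighborhood function
    return weights + lr * h[..., None] * (x - weights)

weights = np.random.rand(5, 5, 3)
weights = som_step(weights, np.array([0.2, 0.7, 0.1]))
```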

Calculating the Importance of Attributes in Naive Bayesian Classification Learning (나이브 베이시안 분류학습에서 속성의 중요도 계산방법)

  • Lee, Chang-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.5 / pp.83-87 / 2011
  • Naive Bayesian learning has been widely used in machine learning. Traditional naive Bayesian learning, however, makes two assumptions: (1) the attributes are independent of one another, and (2) all attributes are equally important for learning. In reality, not all attributes are equally important. In this paper, we propose a new paradigm for calculating the importance of attributes in naive Bayesian learning. The performance of the proposed methods is compared with that of other methods, including SBC and standard naive Bayes, and the proposed method shows better performance in most cases.
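
A minimal sketch of how an attribute-importance weight can enter the naive Bayes decision rule: each attribute's log-likelihood is scaled by its weight. How the weights are computed is the paper's contribution and is not reproduced; the uniform weights and toy probabilities below are placeholders.

```python
import math

def weighted_nb_score(x, prior, likelihood, weights):
    """x: list of attribute values; likelihood[(i, value, c)] = P(value | class c)."""
    scores = {}
    for c, p_c in prior.items():
        s = math.log(p_c)
        for i, v in enumerate(x):
            s += weights[i] * math.log(likelihood[(i, v, c)])   # weighted log-likelihood
        scores[c] = s
    return max(scores, key=scores.get)

prior = {"yes": 0.6, "no": 0.4}
likelihood = {(0, "sunny", "yes"): 0.2, (0, "sunny", "no"): 0.5,
              (1, "high", "yes"): 0.3, (1, "high", "no"): 0.7}
print(weighted_nb_score(["sunny", "high"], prior, likelihood, weights=[1.0, 1.0]))
```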

An Information-theoretic Approach for Value-Based Weighting in Naive Bayesian Learning (나이브 베이시안 학습에서 정보이론 기반의 속성값 가중치 계산방법)

  • Lee, Chang-Hwan
    • Journal of KIISE:Databases / v.37 no.6 / pp.285-291 / 2010
  • In this paper, we propose a new weighting paradigm for naive Bayesian learning: a more fine-grained method called value weighting. While existing weighting methods assign a weight to each attribute, we assign a weight to each attribute value. Using the Kullback-Leibler divergence, we develop new methods for both value weighting and feature weighting in the naive Bayesian context. The performance of the proposed methods is compared with that of attribute weighting and standard naive Bayes, and the proposed method performs better in most cases.
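
A hedged sketch of one way a value weight could be derived from the Kullback-Leibler divergence between the class distribution given an attribute value and the class prior. This illustrates the general idea only; it is not the paper's exact formula.

```python
import math

def kl_value_weight(p_class_given_value, p_class):
    """KL divergence D( P(C | attribute value) || P(C) ), used as a value weight."""
    return sum(p * math.log(p / q)
               for p, q in zip(p_class_given_value, p_class) if p > 0)

# An attribute value that shifts the class distribution gets a larger weight.
p_class = [0.5, 0.5]
print(kl_value_weight([0.9, 0.1], p_class))   # informative value -> larger weight
print(kl_value_weight([0.5, 0.5], p_class))   # uninformative value -> weight 0.0
```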

A Matrix-Based Genetic Algorithm for Structure Learning of Bayesian Networks

  • Ko, Song; Kim, Dae-Won; Kang, Bo-Yeong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.3 / pp.135-142 / 2011
  • Unlike previous genetic algorithms for Bayesian network structure learning, which use a sequence-based chromosome representation, we propose a matrix-based genetic algorithm. Because a good chromosome representation makes it easier to design genetic operators that maintain a functional link between parents and their offspring, we represent a chromosome as a matrix, a general and intuitive data structure for a directed acyclic graph (DAG), i.e., a Bayesian network structure. This matrix-based representation allows more efficient genetic operators for structuring Bayesian networks: a probability matrix and a transpose-based mutation operator that preserve correct edge directions in the inherited structure and enhance the diversity of the offspring. To demonstrate the performance of the proposed method, we compare it against two well-known genetic algorithms using two Bayesian network scoring measures.
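
A small sketch of the representation idea: a DAG encoded as an adjacency matrix, with a transpose-style mutation that reverses one edge while keeping the graph acyclic. The operator details here are illustrative assumptions, not the paper's operators.

```python
import random
import numpy as np

def is_dag(adj):
    """Check acyclicity by repeatedly peeling off nodes with no incoming edges."""
    remaining = set(range(len(adj)))
    while remaining:
        sources = [i for i in remaining if not any(adj[j][i] for j in remaining)]
        if not sources:
            return False                    # a cycle remains
        for i in sources:
            remaining.discard(i)
    return True

def transpose_mutation(adj):
    """Reverse the direction of one randomly chosen edge; keep only acyclic results."""
    edges = list(zip(*np.nonzero(adj)))
    if not edges:
        return adj
    i, j = random.choice(edges)
    child = adj.copy()
    child[i, j], child[j, i] = 0, 1
    return child if is_dag(child) else adj

adj = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # chromosome: adjacency matrix of a DAG
print(transpose_mutation(adj))
```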

Gradient Descent Approach for Value-Based Weighting (점진적 하강 방법을 이용한 속성값 기반의 가중치 계산방법)

  • Lee, Chang-Hwan; Bae, Joo-Hyun
    • The KIPS Transactions:PartB / v.17B no.5 / pp.381-388 / 2010
  • Naive Bayesian learning has been widely used in data mining applications and performs surprisingly well on many of them. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While existing weighting methods assign a weight to each attribute, we assign a weight to each attribute value, and we investigate how value weighting affects the performance of naive Bayesian learning. Using gradient descent, we develop new methods for both value weighting and feature weighting in the naive Bayesian context. The performance of the proposed methods is compared with that of attribute weighting and standard naive Bayes, and the value weighting method performs better in most cases.
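
A hedged sketch of the general recipe, fitting one weight per attribute value by gradient descent on a log-loss. The loss, parameterization, and toy data here are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

# Fit a weight per (attribute, value) by gradient descent on the logistic log-loss.
# X is one-hot over attribute values, so each coefficient acts as a value weight.
def fit_value_weights(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted P(class = 1)
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the average log-loss
    return w

X = np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]], dtype=float)  # one-hot values
y = np.array([1, 0, 1], dtype=float)
print(fit_value_weights(X, y))
```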

Efficient Learning of Bayesian Networks using Entropy (효율적인 베이지안망 학습을 위한 엔트로피 적용)

  • Heo, Go-Eun; Jung, Yong-Gyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.3 / pp.31-36 / 2009
  • Bayesian networks are known as excellent tools for expressing and predicting domain knowledge in uncertain environments. However, Bayesian structure learning can be too difficult to search effectively and reliably. To avoid excessive runtime, the nodes should be arranged in an appropriate order so that effective structural learning becomes possible. This paper suggests a classification learning model that reduces errors under the independence assumption when many variables exist, increasing reliability by calculating the entropy of the probabilities under each condition. It also suggests efficient learning models that decide the node ordering for the K2 algorithm by computing the entropy of each node and choosing the ordering with the lowest entropy. Consequently, a well-fitted Bayesian network model can be constructed as quickly as possible.
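
A minimal sketch of the node-ordering idea: compute the entropy of each variable from data and order nodes from lowest to highest entropy before running K2. The K2 search itself is omitted, and the data layout and tie-breaking are assumptions.

```python
import math
from collections import Counter

def entropy(column):
    """Shannon entropy (bits) of one variable's observed values."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_ordering(data, names):
    """Order variables by ascending entropy, as a node ordering for the K2 algorithm."""
    cols = list(zip(*data))   # data: list of samples, one value per variable
    return sorted(names, key=lambda v: entropy(cols[names.index(v)]))

data = [("a", 0, "x"), ("a", 1, "y"), ("a", 1, "x"), ("b", 0, "y")]
print(entropy_ordering(data, ["V1", "V2", "V3"]))
```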


Learning Distribution Graphs Using a Neuro-Fuzzy Network for Naive Bayesian Classifier (퍼지신경망을 사용한 네이브 베이지안 분류기의 분산 그래프 학습)

  • Tian, Xue-Wei; Lim, Joon S.
    • Journal of Digital Convergence / v.11 no.11 / pp.409-414 / 2013
  • Naive Bayesian classifiers are a powerful and well-known type of classifiers that can be easily induced from a dataset of sample cases. However, the strong conditional independence assumptions can sometimes lead to weak classification performance. Normally, naive Bayesian classifiers use Gaussian distributions to handle continuous attributes and to represent the likelihood of the features conditioned on the classes. The probability density of attributes, however, is not always well fitted by a Gaussian distribution. Another eminent type of classifier is the neuro-fuzzy classifier, which can learn fuzzy rules and fuzzy sets using supervised learning. Since there are specific structural similarities between a neuro-fuzzy classifier and a naive Bayesian classifier, the purpose of this study is to apply learning distribution graphs constructed by a neuro-fuzzy network to naive Bayesian classifiers. We compare the Gaussian distribution graphs with the fuzzy distribution graphs for the naive Bayesian classifier. We applied these two types of distribution graphs to classify leukemia and colon DNA microarray data sets. The results demonstrate that a naive Bayesian classifier with fuzzy distribution graphs is more reliable than that with Gaussian distribution graphs.
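
For contrast, a tiny sketch of the two ways a continuous attribute's likelihood can be represented in a naive Bayes classifier: a Gaussian density versus a simple fuzzy membership function (triangular here). The actual distribution graphs learned by the paper's neuro-fuzzy network are not reproduced; the shapes and numbers below are illustrative.

```python
import math

def gaussian_likelihood(x, mean, std):
    """Gaussian density used by standard continuous naive Bayes."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def triangular_membership(x, left, peak, right):
    """A simple fuzzy membership function standing in for a learned distribution graph."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

print(gaussian_likelihood(1.2, mean=1.0, std=0.5))
print(triangular_membership(1.2, left=0.5, peak=1.0, right=2.0))
```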

Frequentist and Bayesian Learning Approaches to Artificial Intelligence

  • Jun, Sunghae
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.2 / pp.111-118 / 2016
  • Artificial intelligence (AI) makes computer systems intelligent enough to do the right thing. AI is used today in a variety of fields, such as journalism, medicine, and industry, as well as entertainment, and its impact grows day by day. In general, an AI system has to reach optimal decisions under uncertainty, but it is difficult for the system to derive the best conclusion, and it is also hard to represent the intelligent capacity of AI in numeric values. Statistics can quantify uncertainty through two approaches, frequentist and Bayesian. In this paper, we therefore propose a methodology that connects statistics and AI efficiently: we compute a fixed value for estimating a population parameter using frequentist learning, and we find a probability distribution for estimating the parameter of a conceptual population using Bayesian learning. To show how the proposed research could be applied in a practical domain, we collect patent big data related to Apple and make the AI system better understand Apple's technology.
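
A compact sketch of the two estimation styles contrasted above, for a single binomial proportion: frequentist learning returns a fixed value (the maximum-likelihood estimate), while Bayesian learning returns a whole posterior distribution. The counts are made up for illustration.

```python
# Frequentist vs Bayesian estimation of a binomial proportion (illustrative numbers).
successes, trials = 7, 10

# Frequentist learning: a single fixed value, the maximum-likelihood estimate.
mle = successes / trials

# Bayesian learning: a full posterior distribution, Beta(a, b) under a uniform prior.
a, b = 1 + successes, 1 + (trials - successes)
posterior_mean = a / (a + b)

print(f"frequentist point estimate: {mle:.3f}")
print(f"Bayesian posterior: Beta({a}, {b}), mean {posterior_mean:.3f}")
```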

Portfolio Management with the Business Cycle and Bayesian Learning (경기주기와 베이지안 학습(Bayesian learning) 기법을 고려한 개인의 자산관리 연구)

  • Park, Seyoung; Lee, Hyun-Tak; Rhee, Yuna; Jang, Bong-Gyu
    • Journal of the Korean Operations Research and Management Science Society / v.39 no.2 / pp.49-66 / 2014
  • This paper studies the optimal consumption and investment behavior of an individual when risky asset returns and her income are affected by the business cycle. The investor accounts for the incomplete-information risk of unobservable macroeconomic conditions and updates her beliefs about expected risky asset returns through Bayesian learning. We find that the optimal investment strategy, certainty equivalent wealth, and portfolio hedging demand depend significantly on the belief about the macroeconomic conditions.
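
A hedged sketch of the kind of belief update involved: a Bayesian filter over an unobservable two-state business cycle (expansion/recession), where each observed return updates the probability of being in expansion. The regime parameters and transition probability are illustrative assumptions, not the paper's calibration.

```python
import math

# Two hidden regimes with different return distributions (illustrative parameters).
REGIMES = {"expansion": (0.01, 0.02), "recession": (-0.01, 0.04)}   # (mean, std) of returns
STAY = 0.95   # probability the regime persists from one period to the next

def normal_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def update_belief(p_expansion, observed_return):
    """One step of Bayesian learning about the unobservable business cycle."""
    # Predict: the regime may switch before the next observation.
    prior = STAY * p_expansion + (1 - STAY) * (1 - p_expansion)
    # Correct: weight each regime by the likelihood of the observed return.
    like_exp = normal_pdf(observed_return, *REGIMES["expansion"])
    like_rec = normal_pdf(observed_return, *REGIMES["recession"])
    return prior * like_exp / (prior * like_exp + (1 - prior) * like_rec)

belief = 0.5
for r in [0.015, -0.03, 0.02]:        # a short sequence of observed risky asset returns
    belief = update_belief(belief, r)
    print(round(belief, 3))
```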