• Title/Summary/Keyword: bayesian neural networks


Texture segmentation using Neural Networks and multi-scale Bayesian image segmentation technique (신경회로망과 다중스케일 Bayesian 영상 분할 기법을 이용한 결 분할)

  • Kim Tae-Hyung;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.39-48 / 2005
  • This paper proposes a novel texture segmentation method using Bayesian estimation and neural networks. Multi-scale wavelet coefficients and the context information of neighboring wavelet coefficients are used as the input to the networks, and the network output is modeled as a posterior probability. The context information is obtained with a Hidden Markov Tree (HMT) model. The proposed segmentation method shows better performance than maximum-likelihood (ML) segmentation using the HMT model. When the segmentations produced by HMT and by the proposed method are post-processed with the multi-scale Bayesian image segmentation technique HMTseg, the results again show that the proposed method is superior to the HMT-based method.
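
As a rough illustration of the idea above, the sketch below (not the authors' code) builds a tiny multilayer perceptron in NumPy whose softmax output is read as a per-pixel posterior over texture classes; the input vector merely stands in for the multi-scale wavelet coefficients and HMT context features used in the paper, and all sizes are placeholders.

```python
# Minimal sketch: a small MLP whose softmax output is read as a per-pixel
# posterior P(texture class | wavelet features, context). The feature vector
# is a stand-in for the paper's wavelet coefficients plus HMT context.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mlp_posterior(x, W1, b1, W2, b2):
    """Forward pass; each output row sums to 1 and is treated as a posterior."""
    h = np.tanh(x @ W1 + b1)
    return softmax(h @ W2 + b2)

n_features, n_hidden, n_classes = 8, 16, 4    # e.g. 4 texture classes
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes)); b2 = np.zeros(n_classes)

# One "pixel": wavelet coefficients at several scales plus neighbouring context.
x = rng.normal(size=(1, n_features))
posterior = mlp_posterior(x, W1, b1, W2, b2)
label = posterior.argmax(axis=-1)             # MAP texture label for this pixel
print(posterior, label)
```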

Bayesian Analysis for Neural Network Models

  • Chung, Younshik;Jung, Jinhyouk;Kim, Chansoo
    • Communications for Statistical Applications and Methods / v.9 no.1 / pp.155-166 / 2002
  • Neural networks are a popular and very flexible tool for classification, with many applications in pattern classification and pattern recognition. This paper focuses on a Bayesian approach to feed-forward neural networks with a single hidden layer of units with logistic activation. We are interested in choosing the number of hidden nodes of a network with p input units, one hidden layer with m hidden nodes, and one output unit, in a Bayesian setup for fixed m. A latent variable is introduced into the prior of the regression coefficients, and we introduce a 'sequential step' based on the data augmentation idea of Tanner and Wong (1987). MCMC methods (the Gibbs sampler and the Metropolis algorithm) are used to overcome the complicated Bayesian computation. Finally, the proposed method is applied to simulated data.
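
A minimal sketch of the MCMC idea, assuming a one-hidden-layer network with logistic activations and a plain Gaussian prior; the paper's latent-variable prior, sequential step, and Gibbs updates are not reproduced, and only a random-walk Metropolis sampler over the weight vector is shown.

```python
# Hedged sketch: random-walk Metropolis over the weights of a one-hidden-layer
# logistic-activation network with a binary output and a Gaussian prior.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def unpack(theta, p, m):
    W1 = theta[:p * m].reshape(p, m)
    b1 = theta[p * m:p * m + m]
    w2 = theta[p * m + m:p * m + 2 * m]
    b2 = theta[-1]
    return W1, b1, w2, b2

def log_posterior(theta, X, y, p, m, prior_sd=5.0):
    W1, b1, w2, b2 = unpack(theta, p, m)
    prob = sigmoid(sigmoid(X @ W1 + b1) @ w2 + b2)
    loglik = np.sum(y * np.log(prob + 1e-12) + (1 - y) * np.log(1 - prob + 1e-12))
    logprior = -0.5 * np.sum(theta ** 2) / prior_sd ** 2   # proper Gaussian prior
    return loglik + logprior

p, m, n = 2, 3, 60                        # inputs, hidden nodes, observations
X = rng.normal(size=(n, p))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
theta = rng.normal(scale=0.1, size=p * m + 2 * m + 1)

samples, step = [], 0.05
for it in range(2000):
    prop = theta + rng.normal(scale=step, size=theta.size)
    if np.log(rng.uniform()) < log_posterior(prop, X, y, p, m) - log_posterior(theta, X, y, p, m):
        theta = prop
    samples.append(theta.copy())
print(np.mean(samples[-500:], axis=0))    # posterior mean over the last draws
```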

New Cellular Neural Networks Template for Image Halftoning based on Bayesian Rough Sets

  • Elsayed Radwan;Basem Y. Alkazemi;Ahmed I. Sharaf
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.85-94 / 2023
  • Image halftoning converts grayscale images into two-tone binary images. Unfortunately, a static representation of image halftoning, in which each pixel intensity is combined only with its local neighbors, loses subjective image quality, and existing noise causes instability. In this paper image halftoning is represented as a dynamical system so that a global representation can be captured, and noise is reduced with a probabilistic model. Since a halftone image can be viewed as a fully connected 2-D lattice, this structure is modeled by the dynamical system of Cellular Neural Networks (CNNs), which is defined by its template. Bayesian Rough Sets are used to find an ideal CNN construction that synthesizes its dynamics; they also improve the quality of the halftone image by removing noise and discovering the effective parameters in the CNN template. The novelty of the method lies in a probabilistic technique for discovering the terms of the CNN template and defining new learning rules for the CNN's internal operation. Numerical experiments are conducted on halftoning of images corrupted by Gaussian noise.
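
For orientation, the sketch below simulates standard Cellular Neural Network dynamics for halftoning with hand-picked feedback and control templates; the Bayesian-Rough-Set template discovery that is the paper's contribution is not implemented, so the template values are purely illustrative.

```python
# Illustrative sketch of Cellular Neural Network (CNN) dynamics for halftoning:
# dx/dt = -x + A*y + B*u + z with the piecewise-linear output y. The feedback
# template A, control template B and bias z below are placeholders, not the
# Bayesian-Rough-Set template learned in the paper.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(2)

def output(x):                                   # standard CNN output nonlinearity
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

u = rng.uniform(-1, 1, size=(64, 64))            # grayscale input scaled to [-1, 1]
A = np.array([[0.0, 0.1, 0.0],                   # placeholder feedback template
              [0.1, 2.0, 0.1],
              [0.0, 0.1, 0.0]])
B = np.full((3, 3), -0.1); B[1, 1] = 1.0         # placeholder control template
z = 0.0
x = u.copy()

dt = 0.05
Bu = convolve(u, B, mode="nearest") + z          # input term is constant in time
for _ in range(400):                             # Euler integration of the dynamics
    x += dt * (-x + convolve(output(x), A, mode="nearest") + Bu)

halftone = np.sign(output(x))                    # two-tone image in {-1, +1}
print(halftone.shape, np.unique(halftone))
```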

Efficient Markov Chain Monte Carlo for Bayesian Analysis of Neural Network Models

  • Paul E. Green;Changha Hwang;Lee, Sangbock
    • Journal of the Korean Statistical Society / v.31 no.1 / pp.63-75 / 2002
  • Most attempts at Bayesian analysis of neural networks involve hierarchical modeling. We believe that similar results can be obtained with simpler models that require less computational effort, as long as appropriate restrictions are placed on parameters in order to ensure propriety of posterior distributions. In particular, we adopt a model first introduced by Lee (1999) that utilizes an improper prior for all parameters. Straightforward Gibbs sampling is possible, with the exception of the bias parameters, which are embedded in nonlinear sigmoidal functions. In addition to the problems posed by nonlinearity, direct sampling from the posterior distributions of the bias parameters is further complicated by the duplication of hidden nodes, which is a source of multimodality. In this regard, we focus on sampling from the marginal posterior distribution of the bias parameters with Markov chain Monte Carlo methods that combine traditional Metropolis sampling with a slice sampler described by Neal (1997, 2001). The methods are illustrated with data examples that are largely confined to the analysis of nonparametric regression models.
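
A small sketch of the slice-sampling ingredient (stepping out followed by shrinkage, in the style of Neal), applied to a toy bimodal log-density that stands in for the multimodal marginal posterior of a bias parameter; it is not the authors' sampler and omits the Metropolis steps for the other parameters.

```python
# Hedged sketch of univariate slice sampling with "stepping out", applied to a
# toy bimodal target standing in for a bias parameter's marginal posterior.
import numpy as np

rng = np.random.default_rng(3)

def log_density(b):            # toy bimodal target (node duplication -> two modes)
    return np.logaddexp(-0.5 * (b - 2) ** 2, -0.5 * (b + 2) ** 2)

def slice_sample(b, logf, w=1.0, max_steps=50):
    logy = logf(b) + np.log(rng.uniform())          # auxiliary "height" variable
    left = b - w * rng.uniform(); right = left + w  # random initial interval
    for _ in range(max_steps):                      # step out until left is outside
        if logf(left) < logy: break
        left -= w
    for _ in range(max_steps):                      # step out until right is outside
        if logf(right) < logy: break
        right += w
    while True:                                     # shrink until a point is accepted
        prop = rng.uniform(left, right)
        if logf(prop) >= logy:
            return prop
        if prop < b: left = prop
        else: right = prop

b, draws = 0.0, []
for _ in range(5000):
    b = slice_sample(b, log_density)
    draws.append(b)
print(np.mean(draws), np.std(draws))                # the draws should cover both modes
```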

Comparison of Hyper-Parameter Optimization Methods for Deep Neural Networks

  • Kim, Ho-Chan;Kang, Min-Jae
    • Journal of IKEEE / v.24 no.4 / pp.969-974 / 2020
  • Research into hyperparameter optimization (HPO) has recently revived with interest in models that contain many hyperparameters, such as deep neural networks. In this paper we introduce the most widely used HPO methods, namely grid search, random search, and Bayesian optimization, and investigate their characteristics experimentally. The MNIST data set is used to compare the methods and find the one that reaches the highest accuracy in a relatively short simulation time. The learning rate and the weight decay are chosen as the hyperparameters to tune because they are the ones most commonly used in this kind of experiment.
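
To make the comparison concrete, here is a hedged sketch of grid search versus random search over the two hyperparameters tuned in the paper; `validation_error` is a hypothetical stand-in for training a network on MNIST and reporting its validation error, and Bayesian optimization (which would guide the proposals with a surrogate model) is not shown.

```python
# Minimal sketch of grid search vs. random search over learning rate and weight
# decay. `validation_error` is a hypothetical placeholder objective, not a real
# MNIST training run.
import numpy as np

rng = np.random.default_rng(4)

def validation_error(lr, wd):
    # Hypothetical smooth objective with an optimum near lr=1e-3, wd=1e-4.
    return (np.log10(lr) + 3) ** 2 + (np.log10(wd) + 4) ** 2 + rng.normal(scale=0.05)

# Grid search: exhaustive over a fixed log-spaced grid (25 trials).
lrs = np.logspace(-5, -1, 5)
wds = np.logspace(-6, -2, 5)
grid_best = min((validation_error(lr, wd), lr, wd) for lr in lrs for wd in wds)

# Random search: the same budget of 25 trials, sampled log-uniformly.
def sample_trial():
    lr = 10 ** rng.uniform(-5, -1)
    wd = 10 ** rng.uniform(-6, -2)
    return validation_error(lr, wd), lr, wd

random_best = min(sample_trial() for _ in range(25))

print("grid best  :", grid_best)     # (error, lr, wd)
print("random best:", random_best)
```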

Protein Secondary Structure Prediction using Multiple Neural Network Likelihood Models

  • Kim, Seong-Gon;Kim, Yong-Gi
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.4 / pp.314-318 / 2010
  • Predicting the alpha-helices, beta-sheets, and turns of a protein's secondary structure is a complex non-linear task that has been approached with techniques such as neural networks, genetic algorithms, decision trees, and other statistical or heuristic methods. This work introduces a new machine learning method that combines Bayesian inference with offline-trained Multilayer Perceptron (MLP) models serving as the likelihood for secondary structure prediction of proteins. With varying window sizes of neighboring amino acid information, information is extracted and passed back and forth between the neural network and the Bayesian inference process until the posterior probability of the secondary structure converges.
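
The Bayes-update loop described above might look roughly like the following sketch, where an offline-trained MLP is assumed to supply class-conditional likelihoods for one residue window and the posterior is fed back as the prior until it stops changing; the likelihood numbers are placeholders, not model output.

```python
# Hedged sketch of the Bayes-update step: assumed MLP likelihoods for one
# residue window are combined with a prior over {helix, sheet, turn} and the
# posterior is iterated back as the prior until it converges.
import numpy as np

classes = ["alpha-helix", "beta-sheet", "turn"]
likelihood = np.array([0.55, 0.30, 0.15])     # placeholder MLP likelihood P(window | class)
prior = np.ones(3) / 3                        # uninformative starting prior

for it in range(100):
    posterior = likelihood * prior
    posterior /= posterior.sum()              # Bayes' rule: normalise the product
    if np.max(np.abs(posterior - prior)) < 1e-6:
        break
    prior = posterior                         # feed the posterior back as the prior

print(dict(zip(classes, np.round(posterior, 3))), "after", it + 1, "iterations")
```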

Pattern Classification Using Hybrid Monte Carlo Neural Networks (변종 몬테 칼로 신경망을 이용한 패턴 분류)

  • Jeon, Seong-Hae;Choe, Seong-Yong;O, Im-Geol;Lee, Sang-Ho;Jeon, Hong-Seok
    • The KIPS Transactions:PartB / v.8B no.3 / pp.231-236 / 2001
  • In a typical multilayer neural network, the error back-propagation scheme used to update the weights fixes each update to a single value, so it cannot accommodate the many other possible updates. This problem is resolved by an update algorithm that represents all possibilities as a probability distribution. Bayesian Neural Network Models that use such an algorithm compute, for a given input, the output obtained by passing through each layer of the black-box-like network structure, and the prediction for that input is given by the mean of the posterior distribution. Because the posterior, determined by the given prior distribution and the likelihood function of the training data, has a very complicated form, the integral needed for this expectation is hard to evaluate, so instead of numerical integration one can use Monte Carlo simulation, an approximate method based on stochastic estimation. The Hybrid Monte Carlo algorithm is such a method and gives good results (Neal 1996). This paper shows that a neural network trained with the Hybrid Monte Carlo algorithm gives better results than existing classification algorithms such as CHAID, CART, and QUEST.
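
A compact sketch of one Hybrid (Hamiltonian) Monte Carlo transition in the spirit of Neal (1996): leapfrog integration followed by a Metropolis accept/reject. The quadratic log-posterior is a toy stand-in for a real Bayesian neural network posterior over its weights.

```python
# Hedged sketch of a single Hybrid (Hamiltonian) Monte Carlo transition.
import numpy as np

rng = np.random.default_rng(5)

def log_post(w):                 # toy log-posterior (standard normal over weights)
    return -0.5 * np.sum(w ** 2)

def grad_log_post(w):
    return -w

def hmc_step(w, eps=0.1, n_leapfrog=20):
    p = rng.normal(size=w.shape)                      # sample fictitious momentum
    w_new, p_new = w.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(w_new)         # half step for momentum
    for _ in range(n_leapfrog - 1):
        w_new += eps * p_new                          # full step for position
        p_new += eps * grad_log_post(w_new)           # full step for momentum
    w_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(w_new)         # final half step
    # Metropolis correction on the total energy H = -log_post + kinetic term
    current_h = -log_post(w) + 0.5 * np.sum(p ** 2)
    proposed_h = -log_post(w_new) + 0.5 * np.sum(p_new ** 2)
    return w_new if np.log(rng.uniform()) < current_h - proposed_h else w

w = rng.normal(size=10)                               # e.g. flattened network weights
samples = []
for _ in range(1000):
    w = hmc_step(w)
    samples.append(w.copy())
print(np.mean(samples, axis=0).round(2))              # should be close to zero
```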


Nonlinear Networked Control Systems with Random Nature using Neural Approach and Dynamic Bayesian Networks

  • Cho, Hyun-Cheol;Lee, Kwon-Soon
    • International Journal of Control, Automation, and Systems / v.6 no.3 / pp.444-452 / 2008
  • We propose an intelligent predictive control approach for a nonlinear networked control system (NCS) with time-varying delay and random observation. The control is given by the sum of a nominal control and a corrective control. The nominal control is determined analytically using a linearized system model with fixed time delay. The corrective control is generated online by a neural network optimizer. A Markov chain (MC) dynamic Bayesian network (DBN) predicts the dynamics of the stochastic system online to allow predictive control design. We apply our proposed method to a satellite attitude control system and evaluate its control performance through computer simulation.
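
The structure of the controller can be sketched as follows, with a simple Markov chain over discretized delay states predicting the next network delay and the control given by a nominal term plus a corrective term; the gains, transition matrix, and plant are placeholders rather than the paper's satellite attitude model, and the corrective function stands in for the neural-network optimizer.

```python
# Illustrative sketch (not the authors' design): control = nominal + corrective,
# with a Markov chain over discretised delay states used to predict the delay.
import numpy as np

rng = np.random.default_rng(6)

delay_states = np.array([0.01, 0.05, 0.10])          # seconds, discretised delays
P = np.array([[0.8, 0.15, 0.05],                     # placeholder transition matrix
              [0.2, 0.60, 0.20],
              [0.1, 0.30, 0.60]])

def predict_delay(current_state):
    """One-step prediction: expected delay given the current delay state."""
    return float(delay_states @ P[current_state])

def nominal_control(x_ref, x):
    return 1.5 * (x_ref - x)                         # fixed-gain linear term

def corrective_control(x_ref, x, predicted_delay):
    # Stand-in for the neural optimizer: damp the correction as delay grows.
    return 0.5 * (x_ref - x) / (1.0 + 10.0 * predicted_delay)

x, x_ref, state = 0.0, 1.0, 0
for k in range(20):
    tau = predict_delay(state)
    u = nominal_control(x_ref, x) + corrective_control(x_ref, x, tau)
    x += 0.1 * (-x + u)                              # toy first-order plant
    state = rng.choice(3, p=P[state])                # delay evolves as a Markov chain
print(round(x, 3))
```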

Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol;Kim, Jin-Hwa;Kim, Kyung-Min;Chang, Jeong-Ho;Eom, Jae-Hong;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in natural language processing has been studied for a long time. Continuing from our previous work, which classified large-scale text with Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experiments showed that the performance of the classification algorithms was, in increasing order, Multinomial Naïve Bayes classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The results can be interpreted as follows. First, CNN outperformed LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, GRU extracted features better than LSTM. Finally, the result that GRU outperformed CNN implies that text classification algorithms should consider both feature extraction and sequential information. We also present fine-tuning results for deep neural networks to provide some intuition about natural language processing to future researchers.
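
For reference, a minimal PyTorch sketch of a GRU text classifier of the kind that performed best in this comparison; the vocabulary size, sequence length, and class count are placeholders, and real use would require tokenized text and a training loop.

```python
# Minimal GRU text classifier sketch; all sizes are placeholders.
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        _, h = self.gru(emb)                        # h: (1, batch, hidden_dim)
        return self.fc(h.squeeze(0))                # class logits

model = GRUClassifier()
dummy_batch = torch.randint(1, 10000, (4, 50))      # 4 documents of 50 token ids
logits = model(dummy_batch)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
print(logits.shape, loss.item())
```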

Multi-Sensor Signal based Situation Recognition with Bayesian Networks

  • Kim, Jin-Pyung;Jang, Gyu-Jin;Jung, Jae-Young;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / v.9 no.3 / pp.1051-1059 / 2014
  • In this paper, we propose an intelligent situation recognition model that collects and analyzes multiple sensor signals. Multiple sensor signals are collected over a fixed time window. A training set of the collected sensor data for each situation is provided to the K2 learning algorithm to generate Bayesian networks representing the causal relationships between sensors for that situation. Statistical characteristics of the sensor values and topological characteristics of the generated graphs are learned for each situation. A neural network is designed to classify the current situation based on the features extracted from the collected multi-sensor values. The proposed method is implemented and tested with data from the UCI machine learning repository.
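
A hedged sketch of the feature pipeline described above: statistical features from a fixed window of multi-sensor signals are combined with simple topological features of a (here, assumed) Bayesian-network graph and passed to a small untrained classifier; the random adjacency matrix merely stands in for the output of the K2 structure-learning step.

```python
# Hedged sketch of the feature pipeline; the adjacency matrix is a placeholder
# for the Bayesian network learned by K2, and the classifier is untrained.
import numpy as np

rng = np.random.default_rng(7)

window = rng.normal(size=(100, 6))                       # 100 samples x 6 sensors
stat_features = np.concatenate([window.mean(axis=0), window.std(axis=0)])

adj = (rng.uniform(size=(6, 6)) > 0.7).astype(float)     # placeholder K2 output (edges)
np.fill_diagonal(adj, 0)
topo_features = np.array([adj.sum(),                     # number of edges
                          adj.sum(axis=0).max(),         # maximum in-degree
                          adj.sum(axis=1).max()])        # maximum out-degree

x = np.concatenate([stat_features, topo_features])       # combined feature vector

def classify(x, n_classes=4):
    """Untrained one-hidden-layer network as a stand-in for the trained classifier."""
    W1 = rng.normal(scale=0.1, size=(x.size, 32))
    W2 = rng.normal(scale=0.1, size=(32, n_classes))
    h = np.tanh(x @ W1)
    return int(np.argmax(h @ W2))

print("predicted situation:", classify(x))
```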