• Title/Abstract/Keyword: Bayesian Learning Algorithm

Search results: 97

The Pattern Recognition Methods for Emotion Recognition with Speech Signal

  • Park Chang-Hyun;Sim Kwee-Bo
    • Korean Institute of Intelligent Systems: Conference Proceedings / Proceedings of the 2006 Spring Conference of the Korea Fuzzy Logic and Intelligent Systems Society, Vol. 16 No. 1 / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is then presented in the experimental results section. Emotion recognition technology is not yet mature: the choice of emotion features and of a suitable classification method are still open questions, and we hope this paper serves as a reference for those discussions.

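As a rough illustration of the Bayesian-learning branch of the comparison above (not the authors' implementation), the sketch below fits a Gaussian naive Bayes classifier to synthetic feature vectors standing in for speech-derived emotion features; the feature dimensions, emotion labels, and data are hypothetical placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical stand-ins for speech-derived features (e.g. pitch, energy, rate
# statistics), drawn from one Gaussian per emotion class; a real system would
# extract these from an emotional speech database.
emotions = ["neutral", "happy", "angry", "sad"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 3)) for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)   # the Bayesian-learning classifier
print("held-out accuracy:", clf.score(X_test, y_test))
```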

Pattern Recognition Methods for Emotion Recognition with speech signal

  • Park Chang-Hyun;Sim Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6 No. 2 / pp.150-154 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, the recognition algorithms are applied to these speech features. The algorithms we try are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is then presented in the experimental results section.
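
For the LBG part of the same comparison, a minimal Linde-Buzo-Gray codebook-splitting routine on synthetic two-dimensional feature vectors is sketched below; it is a generic vector quantizer, not the authors' emotion-specific configuration, and the data are made up.

```python
import numpy as np

def lbg_codebook(X, n_codes=4, eps=0.01, n_iter=20):
    """Generic Linde-Buzo-Gray codebook training by centroid splitting."""
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < n_codes:
        # split every centroid into a +/- perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):  # Lloyd refinement of the enlarged codebook
            d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            for k in range(len(codebook)):
                if np.any(assign == k):
                    codebook[k] = X[assign == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
# toy feature vectors forming four loose clusters
X = rng.normal(size=(400, 2)) + rng.integers(0, 4, size=(400, 1)) * 3.0
print(lbg_codebook(X, n_codes=4))
```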

A review of tree-based Bayesian methods

  • Linero, Antonio R.
    • Communications for Statistical Applications and Methods / Vol. 24 No. 6 / pp.543-559 / 2017
  • Tree-based regression and classification ensembles form a standard part of the data-science toolkit. Many commonly used methods take an algorithmic view, proposing greedy methods for constructing decision trees; examples include the classification and regression trees algorithm, boosted decision trees, and random forests. Recent history has seen a surge of interest in Bayesian techniques for constructing decision tree ensembles, with these methods frequently outperforming their algorithmic counterparts. The goal of this article is to survey the landscape surrounding Bayesian decision tree methods, and to discuss recent modeling and computational developments. We provide connections between Bayesian tree-based methods and existing machine learning techniques, and outline several recent theoretical developments establishing frequentist consistency and rates of convergence for the posterior distribution. The methodology we present is applicable for a wide variety of statistical tasks including regression, classification, modeling of count data, and many others. We illustrate the methodology on both simulated and real datasets.
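
To convey, in a drastically simplified form, what it means to place a posterior over trees instead of growing one greedily, the toy sketch below runs a Metropolis sampler over a single-split "stump" with an unknown split point and leaf means. Real Bayesian CART/BART samplers use grow/prune/change moves over full tree ensembles and conjugate leaf priors; this is only an assumed minimal illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy one-dimensional data generated from a step function
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.6, 1.0, 3.0) + rng.normal(0, 0.3, 200)

def log_post(split, mu_left, mu_right, sigma=0.3):
    # Gaussian likelihood under the stump's piecewise-constant mean,
    # plus weak normal priors on the two leaf means.
    mu = np.where(x < split, mu_left, mu_right)
    log_lik = -0.5 * np.sum((y - mu) ** 2) / sigma ** 2
    log_prior = -0.5 * (mu_left ** 2 + mu_right ** 2) / 10.0
    return log_lik + log_prior

state = (0.5, 0.0, 0.0)
samples = []
for it in range(5000):                       # random-walk Metropolis
    prop = (np.clip(state[0] + rng.normal(0, 0.05), 0.01, 0.99),
            state[1] + rng.normal(0, 0.1),
            state[2] + rng.normal(0, 0.1))
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*state):
        state = prop
    if it > 1000:                            # discard burn-in
        samples.append(state)

post = np.array(samples)
print("posterior mean split point:", post[:, 0].mean())   # should sit near 0.6
print("posterior leaf means:", post[:, 1].mean(), post[:, 2].mean())
```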

Online Probability Density Estimation of Nonstationary Random Signal using Dynamic Bayesian Networks

  • Cho, Hyun-Cheol;Fadali, M. Sami;Lee, Kwon-Soon
    • International Journal of Control, Automation, and Systems / Vol. 6 No. 1 / pp.109-118 / 2008
  • We present two estimators for discrete non-Gaussian and nonstationary probability density estimation based on a dynamic Bayesian network (DBN). The first estimator is for off-line computation and consists of a DBN whose transition distribution is represented in terms of kernel functions. The estimator parameters are the weights and shifts of the kernel functions. The parameters are determined through a recursive learning algorithm using maximum likelihood (ML) estimation. The second estimator is a DBN whose parameters form the transition probabilities. We use an asymptotically convergent, recursive, on-line algorithm to update the parameters using observation data. The DBN calculates the state probabilities using the estimated parameters. We provide examples that demonstrate the usefulness and simplicity of the two proposed estimators.
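
The paper's estimators are built on a dynamic Bayesian network with kernel-based transition distributions; as a much simpler stand-in that conveys only the recursive-update idea, the sketch below maintains a kernel-mixture density that is blended with each new observation. It is a generic recursive kernel density estimator under assumed bandwidth and learning-rate values, not the authors' DBN/ML formulation.

```python
import numpy as np

class RecursiveKDE:
    """Exponentially weighted kernel density estimate over a fixed grid."""
    def __init__(self, grid, bandwidth=0.3, rate=0.05):
        self.grid = grid              # evaluation points
        self.h = bandwidth
        self.eta = rate               # learning rate of the recursive update
        self.density = np.full_like(grid, 1.0 / (grid[-1] - grid[0]))  # flat start

    def update(self, x):
        kernel = np.exp(-0.5 * ((self.grid - x) / self.h) ** 2) / (self.h * np.sqrt(2 * np.pi))
        # recursive convex update: the old estimate decays, the new kernel is blended in
        self.density = (1 - self.eta) * self.density + self.eta * kernel
        return self.density

grid = np.linspace(-5, 5, 200)
kde = RecursiveKDE(grid)
rng = np.random.default_rng(1)
# nonstationary stream: the mean of the signal drifts over time
for t in range(2000):
    kde.update(rng.normal(loc=-2 + 4 * t / 2000, scale=0.5))
print("density peak near:", grid[kde.density.argmax()])   # tracks the drifting mean
```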

An Automatic Document Classification with Bayesian Learning

  • 김진상;신양규
    • Journal of the Korean Data and Information Science Society / Vol. 11 No. 1 / pp.19-30 / 2000
  • The rapid advance of information and communication technology has led to an explosive increase in the volume of electronic documents created online. Automatic document classification techniques are therefore needed in place of conventional manual classification. In this paper, we study a method for classifying documents automatically using Bayesian learning and test it on documents from 20 Usenet newsgroups. The algorithm used is the naive Bayes classifier; in an automatic classification experiment on the Usenet documents with the implemented system, the classification accuracy was about 77%.

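A present-day reproduction of the same experiment, naive Bayes on the 20 Usenet newsgroups corpus, can be sketched with scikit-learn as below; the preprocessing and the resulting accuracy will of course differ from the original paper's roughly 77%.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# downloads the 20 Usenet newsgroups corpus on first use
train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

vectorizer = CountVectorizer(stop_words="english")    # bag-of-words features
X_train = vectorizer.fit_transform(train.data)
X_test = vectorizer.transform(test.data)

clf = MultinomialNB().fit(X_train, train.target)      # naive Bayes learning
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(test.target, pred))
```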

Construction of Robust Bayesian Network Ensemble using a Speciated Evolutionary Algorithm

  • 유지오;김경중;조성배
    • Journal of KIISE: Software and Applications / Vol. 31 No. 12 / pp.1569-1580 / 2004
  • A Bayesian network is a probability-based model for representing uncertain situations and has a solid mathematical foundation. There has been much research on automatically learning the structure of Bayesian networks, and recently many studies have used evolutionary algorithms. Most of them, however, use only the best individual of the last generation. Because it is difficult to express all of a system's requirements in a single fitness function, the best individual of the last generation can be biased or less adaptive to changing environments. In this paper, we propose a method that generates diverse Bayesian networks through fitness sharing and combines them by Bayes' rule to build an inference model that adapts to changing environments. For performance evaluation, structure-learning and inference experiments were carried out with data artificially generated from the ASIA and ALARM networks. Experiments with networks learned under various conditions showed that the proposed method produces more robust and adaptive models in changing environments.
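
The ingredient that keeps the evolved population diverse is fitness sharing: an individual's raw fitness is divided by a niche count that grows with the number of similar individuals. A minimal sketch of that computation is shown below, with a made-up structural distance matrix between candidate networks; combining the resulting networks by Bayes' rule is a separate step not shown here.

```python
import numpy as np

def shared_fitness(raw_fitness, distances, sigma_share=0.3):
    """Fitness sharing: divide raw fitness by a niche count.

    distances[i, j] is a (problem-specific) structural distance between
    candidate Bayesian networks i and j, assumed scaled to [0, 1].
    """
    # triangular sharing function: 1 - d/sigma for d < sigma, else 0
    sh = np.clip(1.0 - distances / sigma_share, 0.0, None)
    niche_count = sh.sum(axis=1)          # includes self (distance 0 -> sh = 1)
    return raw_fitness / niche_count

rng = np.random.default_rng(0)
pop = 6
raw = rng.uniform(0.5, 1.0, pop)          # hypothetical raw fitness scores
d = rng.uniform(0.0, 1.0, (pop, pop))
d = (d + d.T) / 2.0
np.fill_diagonal(d, 0.0)                  # symmetric toy distance matrix
print(shared_fitness(raw, d))
```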

Using Bayesian tree-based model integrated with genetic algorithm for streamflow forecasting in an urban basin

  • Nguyen, Duc Hai;Bae, Deg-Hyo
    • Korea Water Resources Association: Conference Proceedings / Proceedings of the 2021 Conference of the Korea Water Resources Association / pp.140-140 / 2021
  • Urban flood management is a crucial and challenging task, particularly in developed cities. Accurate prediction of urban flooding under heavy precipitation is therefore critically important. In recent years, machine learning techniques have received considerable attention for their strong learning ability and suitability for modeling complex and nonlinear hydrological processes. Moreover, a survey of the published literature finds that hybrid computational intelligence methods using nature-inspired algorithms have been increasingly employed to predict or simulate streamflow with high reliability. The present study proposes a novel approach, an ensemble-tree Bayesian Additive Regression Trees (BART) model incorporating a nature-inspired algorithm, to predict hourly multi-step-ahead streamflow. To this end, a hybrid intelligent model named GA-BART was developed by integrating the BART model with a genetic algorithm (GA). The Jungrang urban basin located in Seoul, South Korea, was selected as the case study. A database was established from 39 heavy rainfall events between 2003 and 2020 collected from the rain gauges and monitoring stations in the basin. Separate models are developed for 1-hour to 6-hour-ahead streamflow predictions. In addition, the hybrid BART model is compared with a baseline model such as support vector regression. It is expected that the hybrid BART model has robust performance and can be an optional choice for streamflow forecasting in urban basins.

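Since a ready-made BART implementation is not assumed here, the sketch below illustrates only the outer GA loop for hyperparameter tuning, using scikit-learn's gradient-boosted trees as a stand-in ensemble and synthetic data in place of the basin's rainfall-runoff records; the gene ranges and GA settings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic regression data standing in for rainfall/streamflow records; a real
# study would use the basin's observed rainfall and discharge series.
X = rng.normal(size=(500, 6))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.2, size=500)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(genes):
    """Validation MSE (lower is better) of the tree ensemble encoded by `genes`."""
    n_trees, depth, lr = genes
    model = GradientBoostingRegressor(n_estimators=int(n_trees), max_depth=int(depth),
                                      learning_rate=lr, random_state=0)
    model.fit(X_tr, y_tr)
    return np.mean((model.predict(X_va) - y_va) ** 2)

def random_genes():
    return np.array([rng.integers(50, 300), rng.integers(2, 6), rng.uniform(0.01, 0.3)])

pop = [random_genes() for _ in range(8)]
for gen in range(5):                                  # a few GA generations
    scores = [fitness(g) for g in pop]
    order = np.argsort(scores)                        # best (lowest MSE) first
    parents = [pop[i] for i in order[:4]]
    children = []
    for _ in range(4):                                # crossover + mutation
        a, b = rng.choice(4, 2, replace=False)
        child = np.where(rng.random(3) < 0.5, parents[a], parents[b])
        child = child + rng.normal(0, [10.0, 0.3, 0.01])
        child[0] = np.clip(child[0], 50, 300)
        child[1] = np.clip(child[1], 2, 6)
        child[2] = np.clip(child[2], 0.01, 0.3)
        children.append(child)
    pop = parents + children
print("best genes (n_trees, depth, learning_rate):", pop[0])
```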

What are the benefits and challenges of multi-purpose dam operation modeling via deep learning: A case study of Seomjin River

  • Eun Mi Lee;Jong Hun Kam
    • Korea Water Resources Association: Conference Proceedings / Proceedings of the 2023 Conference of the Korea Water Resources Association / pp.246-246 / 2023
  • Multi-purpose dams are operated accounting for both physical and socioeconomic factors. This study aims to evaluate the utility of a deep learning algorithm-based model for the operation of three multi-purpose dams (Seomjin River Dam, Juam Dam, and Juam Control Dam) on the Seomjin River. In this study, the Gated Recurrent Unit (GRU) algorithm is applied to predict the hourly water level of the dam reservoirs over 2002-2021. The hyperparameters are optimized by a Bayesian optimization algorithm to enhance the prediction skill of the GRU model. The GRU models are set up for the following cases: single dam input - single dam output (S-S), multi-dam input - single dam output (M-S), and multi-dam input - multi-dam output (M-M). Results show that the S-S cases using local dam information have the highest accuracy, with NSE above 0.8. Results from the M-S and M-M cases confirm that upstream dam information can carry important information for predicting downstream dam operation. The S-S models are then run with altered outflows (-40% to +40%) to generate simulated reservoir water levels as alternative dam operational scenarios. These alternative S-S simulations show physically inconsistent results, indicating that our deep learning algorithm-based model is not explainable for multi-purpose dam operation patterns. To better understand this limitation, we further analyze the relationship between the observed water level and outflow of each dam. Results show that the complexity of the outflow-water level relationship limits the predictability of the GRU algorithm-based model. This study highlights the importance of socioeconomic factors arising from hidden multi-purpose dam operation processes for not only physical process-based modeling but also artificial intelligence modeling.

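A minimal PyTorch sketch of the single-dam (S-S) setting is given below: a GRU maps a window of past inputs to the next-hour water level, and the Nash-Sutcliffe efficiency (NSE) is used as the skill score. The input features, window length, layer sizes, and random data are placeholders, and the Bayesian hyperparameter optimization is omitted.

```python
import torch
import torch.nn as nn

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - torch.sum((sim - obs) ** 2) / torch.sum((obs - obs.mean()) ** 2)

class DamGRU(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # next-hour water level

    def forward(self, x):                           # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :]).squeeze(-1)

# placeholder tensors standing in for hourly inflow/outflow windows and levels
x = torch.randn(64, 24, 2)
y = torch.randn(64)

model = DamGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()
    opt.step()
print("training NSE:", nse(model(x), y).item())
```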

Genetic Algorithm based hyperparameter tuned CNN for identifying IoT intrusions

  • Alexander. R;Pradeep Mohan Kumar. K
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 18 No. 3 / pp.755-778 / 2024
  • In recent years, the number of devices connected to the internet has grown enormously, as has intrusive behavior in the network. It is therefore important for intrusion detection systems to report all intrusive behavior. Using deep learning and machine learning algorithms, intrusion detection systems are able to perform well in identifying attacks. However, the concern with these deep learning algorithms is their inability to identify a suitable network based on traffic volume, which requires manual tuning of hyperparameters and consumes a lot of time and effort. To address this, this paper offers a solution using the extended compact genetic algorithm for automatic tuning of the hyperparameters. The novelty of this work lies in modeling the problem of identifying attacks as a multi-objective optimization problem and in using linkage learning to solve it. The solution is obtained using a feature-map-based Convolutional Neural Network that is encoded into genes, and using the extended compact genetic algorithm the model is optimized for detection accuracy and latency. The CIC-IDS-2017 and 2018 datasets are used to verify the hypothesis, and the most recent analysis yielded a substantial F1 score of 99.23%. Response time, CPU, and memory consumption evaluations are done to demonstrate the suitability of this model in a fog environment.
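
The core idea of encoding CNN hyperparameters as genes that a (compact) genetic algorithm can manipulate can be sketched as below; the gene layout, value ranges, and toy fitness function are illustrative assumptions rather than the paper's extended compact GA, linkage learning, or its CIC-IDS feature maps.

```python
import numpy as np

# illustrative gene layout: [n_filters_exp, kernel_size_idx, n_dense_exp, lr_exp]
KERNEL_SIZES = [3, 5, 7]

def decode(genes):
    """Map an integer gene vector to concrete CNN hyperparameters."""
    return {
        "n_filters": 2 ** (4 + genes[0]),          # 16, 32, 64, ...
        "kernel_size": KERNEL_SIZES[genes[1]],
        "n_dense": 2 ** (5 + genes[2]),            # 32, 64, 128, ...
        "learning_rate": 10.0 ** -(2 + genes[3]),  # 1e-2, 1e-3, ...
    }

def fitness(genes):
    # Placeholder: in the paper this would be the detection F1 / latency of a CNN
    # trained on CIC-IDS traffic; here it is a toy score over the decoded values.
    hp = decode(genes)
    return hp["n_filters"] / 64.0 - abs(np.log10(hp["learning_rate"]) + 3)

rng = np.random.default_rng(0)
pop = rng.integers(0, 3, size=(10, 4))             # 10 random chromosomes
best = max(pop, key=fitness)
print("best hyperparameters:", decode(best))
```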

Fuzzy Clustering Model using Principal Components Analysis and Naive Bayesian Classifier

  • 전성해
    • The KIPS Transactions: Part B / Vol. 11B No. 4 / pp.485-490 / 2004
  • In data representation, clustering groups the given data into several clusters of mutually similar objects. A wide variety of similarity measures have been used in previous studies to decide cluster membership. However, because it is difficult to set an objective criterion for measuring clustering performance, the interpretation of clustering results is often subjective and ambiguous. Fuzzy clustering offers an objective way of determining clusters for this subjective clustering problem: clustering is performed through a similarity matrix whose elements are the fuzzy membership values of each object for each cluster. In this paper, we propose a clustering model that combines principal component analysis, a dimensionality reduction technique, with Bayesian learning, a powerful statistical learning theory, to perform objective fuzzy clustering. To evaluate the performance of the proposed algorithm, experimental results on the Iris and Glass Identification data sets from the UCI Machine Learning Repository are presented.
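
As a loose analogue of combining PCA with probabilistic (Bayesian) learning for fuzzy clustering, not the paper's exact formulation, the sketch below reduces the Iris data with PCA and reads off soft cluster memberships, a fuzzy-membership-like matrix, from a Gaussian mixture fitted by EM.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X = load_iris().data

# dimensionality reduction step
X_reduced = PCA(n_components=2).fit_transform(X)

# probabilistic clustering; predict_proba plays the role of a fuzzy membership matrix
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_reduced)
memberships = gmm.predict_proba(X_reduced)          # shape (150, 3), rows sum to 1
print(memberships[:5].round(3))
```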