• Title/Summary/Keyword: Bayesian neural network

133 search results

An artificial intelligence-based design model for circular CFST stub columns under axial load

  • Ipek, Suleyman;Erdogan, Aysegul;Guneyisi, Esra Mete
    • Steel and Composite Structures
    • /
    • v.44 no.1
    • /
    • pp.119-139
    • /
    • 2022
  • This paper aims to use an artificial intelligence approach to develop a new model for predicting the ultimate axial strength of circular concrete-filled steel tubular (CFST) stub columns. For this purpose, the results of 314 experimentally tested circular CFST stub columns were employed in generating the design model. Since these experimental studies investigated the influence of the column diameter, steel tube thickness, concrete compressive strength, steel tube yield strength, and column length on the ultimate axial strength, these variables were taken as input parameters in developing the design model. The model was developed using the backpropagation algorithm known as Bayesian Regularization. The accuracy, reliability, and consistency of the developed model were evaluated statistically, and the design formulae given in the codes (EC4, ACI, AS, AIJ, and AISC) as well as empirical formulations proposed by other researchers were used for validation and comparison. Based on this evaluation, the developed model shows strong and reliable prediction performance, with a considerably high coefficient of determination (R-squared) of 0.9994 and a low average percent error of 4.61. The sensitivity of the model to the dimensional properties of the columns and the mechanical characteristics of the materials was also examined. In conclusion, a novel artificial intelligence-based design model with good and robust prediction performance is proposed herein for the design of the ultimate axial capacity of circular CFST stub columns.
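Bayesian Regularization backpropagation is a MATLAB-style training algorithm (trainbr) with no direct scikit-learn equivalent, so the sketch below stands in for it with an L2-penalized MLP on synthetic data shaped like the five inputs the abstract names. The value ranges, the toy target formula, and the alpha setting are illustrative assumptions, not the paper's data or method.

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the 314 experimental records: the five inputs
# named in the abstract (diameter D, tube thickness t, concrete strength
# fc, steel yield strength fy, column length L) with assumed ranges.
n = 314
X = np.column_stack([
    rng.uniform(60, 450, n),    # D (mm)
    rng.uniform(1, 12, n),      # t (mm)
    rng.uniform(20, 120, n),    # fc (MPa)
    rng.uniform(200, 600, n),   # fy (MPa)
    rng.uniform(150, 1500, n),  # L (mm)
])
# Toy target loosely shaped like an axial-capacity formula (kN), only to
# give the network something physically plausible to fit.
y = (0.8 * X[:, 2] * X[:, 0] ** 2 * 1e-3 + X[:, 3] * X[:, 1] * X[:, 0] * 1e-2) / 1e3

# The L2 penalty (alpha) plays the role of the regularization term that
# Bayesian Regularization tunes automatically during training.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2,
                 max_iter=5000, random_state=0),
)
model = TransformedTargetRegressor(regressor=net, transformer=StandardScaler())
model.fit(X, y)
r2 = model.score(X, y)
print(f"training R^2 = {r2:.3f}")
```

On real column data one would also hold out a test set and compare against the code formulae, as the paper does.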

Development of Medical Cost Prediction Model Based on the Machine Learning Algorithm (머신러닝 알고리즘 기반의 의료비 예측 모델 개발)

  • Han Bi KIM;Dong Hoon HAN
    • Journal of Korea Artificial Intelligence Association
    • /
    • v.1 no.1
    • /
    • pp.11-16
    • /
    • 2023
  • Accurate hospital case modeling and prediction are crucial for efficient healthcare. In this study, we demonstrate the implementation of regression analysis methods in machine learning systems utilizing mathematical statistics and machine learning techniques. The developed machine learning models include Bayesian linear, artificial neural network, decision tree, decision forest, and linear regression analysis models. Through the application of these algorithms, the corresponding regression models were constructed and analyzed. The results suggest the potential of leveraging machine learning systems for medical research. The experiment aimed to create an Azure Machine Learning Studio tool for the speedy evaluation of multiple regression models. The tool facilitates the comparison of the five types of regression models in a unified experiment and presents assessment results with performance metrics. Evaluation of the regression machine learning models highlighted the advantages of boosted decision tree regression and decision forest regression in hospital case prediction. These findings could lay the groundwork for the deliberate development of new directions in medical data processing and decision making. Furthermore, potential avenues for future research include exploring methods such as clustering, classification, and anomaly detection in healthcare systems.
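Azure Machine Learning Studio is a GUI tool, so the five-model comparison described above can only be approximated locally; the sketch below fits the same model families with scikit-learn on synthetic data. The data, hyperparameters, and metric choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the medical-cost data (real features would be
# age, diagnosis codes, length of stay, and so on).
X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "bayesian_linear": BayesianRidge(),
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "decision_forest": RandomForestRegressor(random_state=0),
    "neural_network": MLPRegressor(hidden_layer_sizes=(32,),
                                   max_iter=3000, random_state=0),
}
scores = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, m.predict(X_te))
    print(f"{name:16s} R^2 = {scores[name]:.3f}")
```

Running all five in one loop mirrors the unified-experiment comparison the tool provides.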

Analyzing Influence of Outlier Elimination on Accuracy of Software Effort Estimation (소프트웨어 공수 예측의 정확성에 대한 이상치 제거의 영향 분석)

  • Seo, Yeong-Seok;Yoon, Kyung-A;Bae, Doo-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.589-599
    • /
    • 2008
  • Accurate software effort estimation has always been a challenge for the industrial and academic software engineering communities. Many studies have focused on estimation methods to improve the accuracy of software effort estimation. Although data quality is one of the important factors for accurate effort estimation, most prior work has not considered it. In this paper, we investigate the influence of outlier elimination on the accuracy of software effort estimation through empirical studies that combine two outlier elimination methods (least trimmed squares regression and K-means clustering) with three effort estimation methods (least squares regression, neural network, and Bayesian network). The empirical studies are performed on two industry data sets (the ISBSG Release 9 data set and the Bank data set, which consists of project data collected from a bank in Korea), with and without outlier elimination.
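The K-means side of the outlier-elimination step can be sketched as: cluster the joined (feature, effort) points, drop the points farthest from their centroid, and refit the estimator. The synthetic data, the three-cluster choice, and the 90th-percentile cutoff below are all illustrative assumptions; the paper's least-trimmed-squares variant is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Synthetic effort data: a size-like feature with a linear effort trend,
# plus a few injected outliers standing in for noisy project records.
X = rng.uniform(1, 100, (200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 5, 200)
y[:10] += 300.0  # injected outliers

# K-means outlier step: cluster (X, y) jointly and drop the points
# farthest from their cluster centroid (cutoff is an assumed choice).
Z = np.column_stack([X[:, 0], y])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Z)
dist = np.linalg.norm(Z - km.cluster_centers_[km.labels_], axis=1)
keep = dist < np.percentile(dist, 90)

raw = LinearRegression().fit(X, y)
clean = LinearRegression().fit(X[keep], y[keep])

mask = np.ones(len(y), bool)
mask[:10] = False  # evaluate on the known-inlier points only
mae_raw = mean_absolute_error(y[mask], raw.predict(X[mask]))
mae_clean = mean_absolute_error(y[mask], clean.predict(X[mask]))
print(f"MAE without outlier removal: {mae_raw:.1f}")
print(f"MAE with K-means outlier removal: {mae_clean:.1f}")
```

The point of the experiment design, reflected here, is that the same estimator is trained twice, once on raw and once on cleaned data, so the accuracy difference isolates the effect of outlier elimination.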

Identification of major risk factors association with respiratory diseases by data mining (데이터마이닝 모형을 활용한 호흡기질환의 주요인 선별)

  • Lee, Jea-Young;Kim, Hyun-Ji
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.2
    • /
    • pp.373-384
    • /
    • 2014
  • Data mining clarifies patterns and correlations in large volumes of data with complicated structure and predicts diverse outcomes. The technique is used in fields such as finance, telecommunications, distribution, and medicine. In this paper, we selected risk factors of respiratory diseases in the field of medicine. The data, drawn from the Gyeongsangbuk-do database of the 2012 Community Health Survey, were divided into a respiratory disease group and a healthy group. In order to select major risk factors, we applied data mining techniques such as neural network, logistic regression, Bayesian network, C5.0, and CART. We divided the data into training and testing sets, and applied the model designed on the training data to the testing data. By comparison of prediction accuracy, CART was identified as the best model. Depression, smoking, and stress emerged as the major risk factors of respiratory disease.
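A minimal version of this train/test model comparison (three of the five learners mentioned, on synthetic stand-in data) might look like the sketch below; the tree's feature importances loosely play the role of risk-factor selection. The data and hyperparameters are assumptions, not the survey data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the survey data: binary outcome (respiratory
# disease vs. healthy) with features standing in for depression,
# smoking, stress, and other candidate risk factors.
X, y = make_classification(n_samples=600, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "cart": DecisionTreeClassifier(random_state=0),  # CART-style tree
    "neural_net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0),
}
acc = {n: m.fit(X_tr, y_tr).score(X_te, y_te) for n, m in models.items()}
best = max(acc, key=acc.get)
print(acc, "best:", best)

# Tree feature importances stand in for the risk-factor selection step.
importances = models["cart"].feature_importances_
print("top feature index:", int(np.argmax(importances)))
```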

AptaCDSS - A Cardiovascular Disease Level Prediction and Clinical Decision Support System using Aptamer Biochip (AptaCDSS - 압타머칩을 이용한 심혈관질환 질환단계 예측 및 진단의사결정지원시스템)

  • Eom, Jae-Hong;Kim, Byoung-Hee;Lee, Je-Keun;Heo, Min-Oh;Park, Young-Jin;Kim, Min-Hyeok;Kim, Sung-Chun;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10a
    • /
    • pp.28-32
    • /
    • 2006
  • Recent research results show that cardiovascular disease, including heart disease, is a leading cause of death from illness in the United States and worldwide, regardless of sex. This study deals with a clinical decision support system for more efficient diagnosis. The developed system analyzes patients' chip data, generated with an aptamer biochip that measures the relative amounts of specific proteins in serum, using four machine learning algorithms (Support Vector Machine, Neural Network, Decision Tree, and Bayesian Network) to predict the disease stage and to provide supporting information for diagnosis. The paper presents the disease-stage classification performance of the initial system measured on 3K aptamer-chip data consisting of 135 samples, and discusses the components required to build a more useful clinical decision support system.


Multi-focus Image Fusion Technique Based on Parzen-windows Estimates (Parzen 윈도우 추정에 기반한 다중 초점 이미지 융합 기법)

  • Atole, Ronnel R.;Park, Daechul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.4
    • /
    • pp.75-88
    • /
    • 2008
  • This paper presents a spatial-level nonparametric multi-focus image fusion technique based on kernel estimates of the class-conditional probability density functions underlying input image blocks. Image fusion is approached as a classification task whose posterior class probabilities, P(w_i | B_i^kl), are calculated with likelihood density functions estimated from the training patterns. For each of the C input images I_i, the proposed method defines classes w_i and forms the fused image Z(k,l) from a decision map represented by a set of P×Q blocks B_i^kl whose features maximize the discriminant function based on the Bayesian decision principle. Performance of the proposed technique is evaluated in terms of RMSE and mutual information (MI) as output quality measures. The width of the kernel functions, σ, was varied, and different kernels and block sizes were applied in the performance evaluation. The proposed scheme was tested with C=2 and C=3 input images and the results exhibited good performance.
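The core Parzen-window decision rule — estimate each class-conditional density p(x | w_i) with a kernel of width σ and pick the class with the larger likelihood — can be sketched with scikit-learn's KernelDensity on toy 2-D features. The data and bandwidth are assumptions; the paper applies the rule to features of P×Q image blocks rather than raw points.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Two synthetic "classes" of feature vectors (standing in for per-block
# focus features extracted from the C input images).
X0 = rng.normal(0.0, 1.0, (100, 2))
X1 = rng.normal(3.0, 1.0, (100, 2))

# Parzen-window (kernel) estimate of each class-conditional density;
# the bandwidth sigma is the same tuning knob varied in the paper.
sigma = 0.5
kde0 = KernelDensity(kernel="gaussian", bandwidth=sigma).fit(X0)
kde1 = KernelDensity(kernel="gaussian", bandwidth=sigma).fit(X1)

def classify(x):
    # Bayes decision with equal priors: pick the class whose estimated
    # log-likelihood log p(x | w_i) is larger.
    log_p = [kde0.score_samples(x[None, :])[0],
             kde1.score_samples(x[None, :])[0]]
    return int(np.argmax(log_p))

print(classify(np.array([0.1, -0.2])))  # → 0 (near class 0)
print(classify(np.array([2.9, 3.1])))   # → 1 (near class 1)
```

In the fusion setting, the winning class for each block position determines which input image contributes that block to Z(k,l).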


Nonstandard Machine Learning Algorithms for Microarray Data Mining

  • Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2001.10a
    • /
    • pp.165-196
    • /
    • 2001
  • A DNA chip, or microarray, is a technology that immobilizes a large number of genes or gene fragments (typically thousands to tens of thousands) on a chip and uses DNA hybridization reactions to analyze gene expression patterns. This high-throughput technology not only offers answers to molecular biology questions that could not be addressed before, but also has virtually unlimited applications, including molecular-level disease diagnosis, drug development, and solutions to environmental pollution problems. For practical application, the key lies not only in the hardware/wetware technology for fabricating DNA chips but also in the bioinformatics technology for extracting as much useful new knowledge from the data as possible. Data mining of gene expression patterns can be broadly divided into clustering, classification, and dependency analysis, and these techniques are grounded in statistics and machine learning in artificial intelligence. Commonly used methods include principal component analysis, hierarchical clustering, k-means, self-organizing maps, decision trees, multilayer perceptron neural networks, and association rules. Beyond these basic machine learning techniques, this seminar introduces probabilistic graphical models (PGMs), a newer line of research, and reviews their application to DNA chip data analysis. A PGM is a machine learning model that combines artificial neural networks, graph theory, and probability theory, inspired by the memory and learning mechanisms of the human brain. One of its major differences from other machine learning models is that it is a generative model: once the model is built, it can generate new data, so the model can be validated and new facts inferred from it, making it well suited to exploratory analysis for discovering new knowledge, as in biological data mining problems. Furthermore, unlike conventional neural network models, probabilistic graphical models perform probability-based soft inference rather than deterministic decision making, and the learned model lends itself to analyzing causal relationships or dependencies among related factors. As concrete examples of PGMs, the structures, learning, and inference algorithms of the Bayesian network, nonnegative matrix factorization (NMF), and generative topographic mapping (GTM) are introduced, and results of applying them to the cancer diagnosis and gene-drug dependency analysis problems from the CAMDA-2000 and CAMDA-2001 DNA chip data analysis competitions are reviewed.


A Study on Detection of Small Size Malicious Code using Data Mining Method (데이터 마이닝 기법을 이용한 소규모 악성코드 탐지에 관한 연구)

  • Lee, Taek-Hyun;Kook, Kwang-Ho
    • Convergence Security Journal
    • /
    • v.19 no.1
    • /
    • pp.11-17
    • /
    • 2019
  • Recently, the abuse of Internet technology has caused economic and mental harm to society as a whole. In particular, newly created or modified malicious code bypasses existing information protection systems and is used as a basic means of various application hacks and cyber security threats. However, research on the small executable files that account for a large portion of actual malicious code is rather limited. In this paper, we propose a model that analyzes the characteristics of known small executable files using data mining techniques and uses them to detect unknown malicious code. Data mining analysis was performed with various techniques such as naive Bayes, SVM, decision tree, random forest, and artificial neural network, and the accuracy was compared according to the detection level reported by VirusTotal. As a result, a classification accuracy of more than 80% was verified on 34,646 analyzed files.
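Two of the listed learners (naive Bayes and random forest) can be compared on a synthetic stand-in for static executable features as below; the feature semantics, class balance, and split are assumptions, not the paper's 34,646-file corpus or its VirusTotal-derived labels.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for static features of small executables
# (e.g., section sizes, entropy, imported-API counts) with an assumed
# 70/30 benign/malicious split.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

accs = {}
for name, clf in [("naive_bayes", GaussianNB()),
                  ("random_forest", RandomForestClassifier(random_state=0))]:
    accs[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {accs[name]:.3f}")
```

With imbalanced malware data, precision/recall per class would usually be reported alongside raw accuracy.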

Data-Driven Modeling of Freshwater Aquatic Systems: Status and Prospects (자료기반 물환경 모델의 현황 및 발전 방향)

  • Cha, YoonKyung;Shin, Jihoon;Kim, YoungWoo
    • Journal of Korean Society on Water Environment
    • /
    • v.36 no.6
    • /
    • pp.611-620
    • /
    • 2020
  • Although process-based models have been the preferred approach for modeling freshwater aquatic systems over extended time intervals, the increasing utility of data-driven models in a big data environment has made them increasingly popular in recent decades. In this study, international peer-reviewed journals in the relevant fields were searched in the Web of Science Core Collection, and an extensive literature review, which included a total of 2,984 articles published during the last two decades (2000-2020), was performed. The review indicated that the rate of increase in the number of published studies using data-driven models has exceeded that of process-based models since 2010. The increase in the use of data-driven models was partly attributable to the growing availability of data from new sources, e.g., remotely sensed hyperspectral or multispectral data. Consistently throughout the past two decades, South Korea has been one of the top ten countries publishing the greatest number of studies using data-driven models. The major data-driven approaches, i.e., artificial neural networks, decision trees, and Bayesian models, are illustrated with case studies. Based on the review, this study aims to summarize the current state of knowledge regarding biogeochemical water quality and ecological models using data-driven approaches, and to outline the remaining challenges and future prospects.

Real-time prediction on the slurry concentration of cutter suction dredgers using an ensemble learning algorithm

  • Han, Shuai;Li, Mingchao;Li, Heng;Tian, Huijing;Qin, Liang;Li, Jinfeng
    • International conference on construction engineering and project management
    • /
    • 2020.12a
    • /
    • pp.463-481
    • /
    • 2020
  • Cutter suction dredgers (CSDs) are widely used in various dredging constructions such as channel excavation, wharf construction, and reef construction. During CSD construction, the main operation is to control the swing speed of the cutter to keep the slurry concentration in a proper range. However, the slurry concentration cannot be monitored in real time, i.e., there is a "time-lag effect" in the log of slurry concentration, making it difficult for operators to make optimal control decisions. Concerning this issue, a solution scheme that uses real-time monitored indicators to predict the current slurry concentration is proposed in this research. The characteristics of the CSD monitoring data are first studied, and a set of preprocessing methods is presented. We then put forward the concept of an "index class" to select the important indices. Finally, an ensemble learning algorithm is set up to fit the relationship between the slurry concentration and the indices of the index classes. In the experiment, log data over seven days of a practical dredging construction was collected. For comparison, the Deep Neural Network (DNN), Long Short-Term Memory (LSTM), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Bayesian Ridge algorithms were tried. The results show that our method has the best performance, with an R2 of 0.886 and a mean square error (MSE) of 5.538. This research provides an effective way to predict the slurry concentration of CSDs in real time and can help to improve the stability and production efficiency of dredging construction.
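A reduced sketch of the final fitting step, using one member of the compared family (GBDT) on synthetic monitoring-style indices: the feature semantics, target formula, and resulting scores are assumptions, not the paper's seven-day log data or its specific ensemble.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the monitored indices (e.g., swing speed,
# cutter depth, flow rate) and the slurry-concentration target.
X = rng.normal(0, 1, (1000, 6))
y = (30 + 5 * X[:, 0] - 3 * X[:, 1]
     + 2 * X[:, 0] * X[:, 2] + rng.normal(0, 1, 1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbdt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = gbdt.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"MSE = {mean_squared_error(y_te, pred):.3f}")
```

In the deployed scheme, this regressor would be fed the real-time indices selected via the index classes to output the current concentration estimate.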
