• Title/Summary/Keyword: Bayesian Learning Algorithm


A novel radioactive particle tracking algorithm based on deep rectifier neural network

  • Dam, Roos Sophia de Freitas;dos Santos, Marcelo Carvalho;do Desterro, Filipe Santana Moreira;Salgado, William Luna;Schirru, Roberto;Salgado, Cesar Marques
    • Nuclear Engineering and Technology
    • /
    • v.53 no.7
    • /
    • pp.2334-2340
    • /
    • 2021
  • Radioactive particle tracking (RPT) is a minimally invasive nuclear technique that tracks a radioactive particle inside a volume of interest by means of a mathematical location algorithm. Over the past decades, many such algorithms have been developed, including ones based on artificial intelligence techniques. In this study, the RPT technique is applied to a simulated test section that employs a simplified mixer filled with concrete, six scintillator detectors, and a ¹³⁷Cs radioactive particle emitting 662 keV gamma rays. The test section was developed using the MCNPX code, a Monte Carlo simulation code, and 3516 different radioactive particle positions (x, y, z) were simulated. The novelty of this paper is the use of a location algorithm based on a deep learning model, specifically a six-layer deep rectifier neural network (DRNN), whose hyperparameters were defined using a Bayesian optimization method. The DRNN is a type of deep feedforward neural network that replaces the sigmoid-based activation functions traditionally used in vanilla multilayer perceptron networks with rectified activation functions. Results show the high accuracy of the DRNN in an RPT tracking system: the root mean squared errors for the x, y, and z coordinates of the radioactive particle are 0.03064, 0.02523, and 0.07653, respectively.
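
The abstract above describes a deep rectifier (ReLU) feedforward regressor whose hyperparameters are tuned by Bayesian optimization. The following is a minimal sketch of that idea, not the authors' code: it assumes scikit-learn and scikit-optimize are available and uses placeholder data in place of the MCNPX-simulated detector responses.

```python
# Sketch only: six ReLU hidden layers mapping six detector readings to (x, y, z),
# with Bayesian optimization of two hyperparameters via scikit-optimize.
import numpy as np
from sklearn.neural_network import MLPRegressor
from skopt import BayesSearchCV
from skopt.space import Real

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 6))     # placeholder detector count rates
y = rng.uniform(-1.0, 1.0, size=(500, 3))    # placeholder (x, y, z) positions

drnn = MLPRegressor(hidden_layer_sizes=(64,) * 6,   # six rectifier hidden layers
                    activation="relu", solver="adam",
                    max_iter=2000, random_state=0)

search = BayesSearchCV(
    drnn,
    {"alpha": Real(1e-6, 1e-2, prior="log-uniform"),
     "learning_rate_init": Real(1e-4, 1e-1, prior="log-uniform")},
    n_iter=20, cv=3, random_state=0)
search.fit(X, y)

rmse = np.sqrt(np.mean((search.predict(X) - y) ** 2, axis=0))
print("per-coordinate RMSE:", rmse)
```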

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When it is difficult to make a decision, we ask friends or people around us for advice, and when we buy products online, we read anonymous reviews before purchasing. With the advent of the data-driven era, the development of IT is producing vast amounts of data about individuals and objects. Companies and individuals have accumulated, processed, and analyzed so much data that decisions that once depended on experts can now be made, or executed directly, from data. Today the recommender system plays a vital role in identifying user preferences for purchasing goods and is used to induce clicks on web services such as Facebook, Amazon, Netflix, and YouTube. For example, YouTube's recommender system, used by one billion people worldwide every month, draws on the videos users have liked and watched. Recommender system research is therefore deeply linked to practical business, and many researchers are interested in building better solutions. Recommender systems generate recommendations from information obtained from their users, because building them requires information about the items a user is likely to prefer; through such systems we have come to trust patterns and rules derived from data rather than empirical intuition, and the growth of data has driven machine learning toward deep learning. However, recommender systems are not a complete solution: they require sufficient data without scarcity as well as detailed information about the individual, and they work correctly only when these conditions are met. When the interaction log is insufficient, recommendation becomes a difficult problem for both sellers, who need to make recommendations at a personal level, and consumers, who want recommendations based on reliable data. In this paper, to improve the accuracy of "appropriate recommendations" for consumers, we propose a recommender system combined with context-based deep learning. This research combines user-based data to create a hybrid recommender system; the hybrid approach developed is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set: consumers buy products in online shopping malls and then write product reviews, and ratings based on reviews from previous buyers give users confidence before purchasing. However, recommendation systems mainly use scores or ratings rather than reviews to suggest items purchased by many users, even though consumer reviews contain product opinions and user sentiment relevant to evaluation. By incorporating these aspects, this paper aims to improve the recommendation system. The proposed algorithm supports individuals who have difficulty selecting an item, uses consumer reviews and record patterns so that recommendations can be relied upon, and implements the recommendation through collaborative filtering. Predictive accuracy is measured by root mean squared error (RMSE) and mean absolute error (MAE). Netflix, for example, has strategically improved its recommendation programs through competitions that reduce RMSE every year, making practical use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches and deep learning for personalized recommendation has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but conventional machine learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT-based sentiment analysis to minimize these disadvantages. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (gated recurrent units). In the experiments, the BERT-based recommender system performed best.
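
As a rough illustration of combining BERT-based sentiment with a rating-based recommendation, the sketch below (not the paper's implementation) scores hypothetical review texts with a pretrained Hugging Face sentiment pipeline and blends the result with a simple item-mean rating; the review log, blending weight, and rescaling are illustrative assumptions.

```python
# Sketch only: inject review sentiment into a simple rating estimate.
import pandas as pd
from transformers import pipeline  # assumes Hugging Face transformers is installed

# Hypothetical review log: (user, item, rating, review text).
logs = pd.DataFrame({
    "user":   ["u1", "u1", "u2", "u3"],
    "item":   ["i1", "i2", "i1", "i2"],
    "rating": [5, 3, 4, 2],
    "review": ["Great quality, fast shipping", "Okay but overpriced",
               "Works as described", "Broke after a week"],
})

sentiment = pipeline("sentiment-analysis")        # defaults to a BERT-family classifier
scores = sentiment(list(logs["review"]))
# Map POSITIVE/NEGATIVE labels to a signed score in [-1, 1].
logs["sent"] = [s["score"] if s["label"] == "POSITIVE" else -s["score"] for s in scores]

def predict(user, item, w=0.5):
    """Blend the item's mean rating with its mean review sentiment (rescaled to 1-5)."""
    rows = logs[logs["item"] == item]
    base = rows["rating"].mean() if len(rows) else logs["rating"].mean()
    sent = rows["sent"].mean() if len(rows) else 0.0
    return (1 - w) * base + w * (3 + 2 * sent)    # sentiment mapped onto the 1-5 scale

print(predict("u3", "i1"))
```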

Synthesis of Machine Knowledge and Fuzzy Post-Adjustment to Design an Intelligent Stock Investment System

  • Lee, Kun-Chang;Kim, Won-Chul
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.17 no.2
    • /
    • pp.145-162
    • /
    • 1992
  • This paper proposes two design principles for expert systems to solve stock market timing (SMART) problems: machine knowledge and fuzzy post-adjustment. Machine knowledge is derived from past SMART instances by using an inductive learning algorithm. A knowledge-based solution, which can be regarded as a prior SMART strategy, is then obtained on the basis of the machine knowledge. Fuzzy post-adjustment (FPA) refers to a Bayesian-like reasoning that allows the prior SMART strategy to be revised by the fuzzy evaluation of environmental factors that might affect the SMART strategy. A prototype system, named K-SISS2 (Knowledge-based Stock Investment Support System 2), was implemented using the two design principles and tested on the SMART problem of choosing the best time to buy or sell stocks. The prototype system worked very well in an actual stock investment situation, illustrating the basic ideas and techniques underlying the suggested design principles.
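
The fuzzy post-adjustment idea, a Bayesian-like revision of a prior strategy by fuzzy evaluations of environmental factors, can be sketched as follows. This is an illustration only, not K-SISS2; the factors, membership degrees, and prior probabilities are hypothetical.

```python
# Sketch only: revise a prior market-timing strategy with fuzzy factor evaluations.
prior = {"buy": 0.5, "hold": 0.3, "sell": 0.2}   # from machine (inductive) knowledge

# Fuzzy evaluation of environmental factors: degree of support for each action.
fuzzy_support = {
    "interest_rate_falling": {"buy": 0.8, "hold": 0.5, "sell": 0.2},
    "political_uncertainty": {"buy": 0.3, "hold": 0.6, "sell": 0.9},
}

posterior = dict(prior)
for factor, support in fuzzy_support.items():
    for action in posterior:
        posterior[action] *= support[action]      # Bayes-like multiplicative update

total = sum(posterior.values())
posterior = {a: p / total for a, p in posterior.items()}  # renormalize
print(max(posterior, key=posterior.get), posterior)
```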


Machine Learning-based Data Analysis for Designing High-strength Nb-based Superalloys (고강도 Nb기 초내열 합금 설계를 위한 기계학습 기반 데이터 분석)

  • Eunho Ma;Suwon Park;Hyunjoo Choi;Byoungchul Hwang;Jongmin Byun
    • Journal of Powder Materials
    • /
    • v.30 no.3
    • /
    • pp.217-222
    • /
    • 2023
  • Machine learning-based data analysis approaches have been employed to overcome limitations in accurately analyzing data and to predict outcomes in the design of Nb-based superalloys. In this study, a database containing the compositions of the alloying elements and their room-temperature tensile strengths was prepared based on a previous study. After computing the correlation between the tensile strength at room temperature and the composition, a materials science analysis was conducted on the elements with high correlation coefficients. These alloying elements were found to have a significant effect on the variation in the tensile strength of Nb-based alloys at room temperature. Through this process, models to predict the properties were derived using four machine learning algorithms. The Bayesian ridge regression algorithm proved to be the optimal model when Y, Sc, W, Cr, Mo, Sn, and Ti were used as input features. This study demonstrates the successful application of machine learning techniques to effectively analyze data and predict outcomes, thereby providing valuable insights into the design of Nb-based superalloys.
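
A minimal sketch of the workflow described above, using scikit-learn's BayesianRidge on synthetic composition data (the element names are taken from the abstract; the data, correlation threshold, and extra filler column are assumptions):

```python
# Sketch only: correlate compositions with strength, keep correlated elements,
# and fit a Bayesian ridge regression model.
import numpy as np
import pandas as pd
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
elements = ["Y", "Sc", "W", "Cr", "Mo", "Sn", "Ti", "Al"]   # Al added as a filler column
comp = pd.DataFrame(rng.uniform(0, 10, size=(60, len(elements))), columns=elements)
strength = 900 + 15 * comp["W"] + 10 * comp["Mo"] - 5 * comp["Sn"] + rng.normal(0, 20, 60)

# Keep elements whose |Pearson correlation| with strength exceeds a threshold.
corr = comp.corrwith(strength).abs().sort_values(ascending=False)
features = corr[corr > 0.1].index.tolist()

model = BayesianRidge()
scores = cross_val_score(model, comp[features], strength, cv=5, scoring="r2")
print(features, scores.mean())
```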

Pattern Recognition using Robust Feedforward Neural Networks (로버스트 다층전방향 신경망을 이용한 패턴인식)

  • Hwang, Chang-Ha;Kim, Sang-Min
    • Journal of the Korean Data and Information Science Society
    • /
    • v.9 no.2
    • /
    • pp.345-355
    • /
    • 1998
  • The back-propagation (BP) algorithm allows multilayer feedforward neural networks to learn input-output mappings from training samples. It iteratively adjusts the network parameters (weights) to minimize the sum of squared approximation errors using a gradient descent technique. However, the mapping acquired through the BP algorithm may be corrupted when erroneous training data are employed. In this paper, two types of robust backpropagation algorithms are discussed, both from a theoretical point of view and in case studies of nonlinear regression function estimation and handwritten Korean character recognition. As future research, we suggest a Bayesian learning approach to neural networks and its comparison with the two robust backpropagation algorithms.
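
To illustrate the robust backpropagation idea discussed above, the sketch below trains a small feedforward network by gradient descent under a Huber-type loss, so gross outliers contribute clipped rather than squared gradients. It is not one of the paper's two algorithms; the data, network size, and loss threshold are arbitrary.

```python
# Sketch only: backpropagation with a robust Huber-type loss in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + rng.normal(0, 0.05, 200)
y[:10] += 5.0                                   # inject grossly erroneous samples

W1 = rng.normal(0, 0.5, (1, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)
lr, delta = 0.05, 1.0                           # learning rate, Huber threshold

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    pred = (h @ W2 + b2)[:, 0]
    r = pred - y
    # Huber gradient: quadratic inside |r| <= delta, clipped (linear) outside.
    dL = np.clip(r, -delta, delta) / len(y)
    dW2 = h.T @ dL[:, None]; db2 = dL.sum(keepdims=True)
    dh = dL[:, None] @ W2.T * (1 - h ** 2)      # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("median abs error:", np.median(np.abs(pred - y)))
```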


Hybrid GA-ANN and PSO-ANN methods for accurate prediction of uniaxial compression capacity of CFDST columns

  • Quang-Viet Vu;Sawekchai Tangaramvong;Thu Huynh Van;George Papazafeiropoulos
    • Steel and Composite Structures
    • /
    • v.47 no.6
    • /
    • pp.759-779
    • /
    • 2023
  • The paper proposes two hybrid metaheuristic optimization and artificial neural network (ANN) methods for the accurate prediction of the ultimate axial compressive capacity of concentrically loaded concrete-filled double-skin steel tube (CFDST) columns. Two metaheuristic optimization approaches, namely the genetic algorithm (GA) and particle swarm optimization (PSO), dynamically determine the training architecture underlying an ANN model by simultaneously optimizing the number and sizes of hidden layers as well as the weights and biases of the neurons. The former is termed GA-ANN and the latter PSO-ANN. These techniques combine gradient-based optimization with Bayesian regularization, which enhances the optimization process. The proposed GA-ANN and PSO-ANN methods construct the predictive ANNs from 125 available experimental datasets and show superior performance over standard ANNs. Both hybrid methods are encoded within a user-friendly graphical interface that can reliably map out the ultimate axial compressive capacity of CFDST columns with various geometric and material parameters.
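
The GA-ANN idea of searching the number and sizes of hidden layers can be sketched as below. This is an illustration, not the authors' tool: it uses scikit-learn's MLPRegressor, placeholder data in place of the 125 experimental records, and arbitrary GA settings.

```python
# Sketch only: a tiny genetic algorithm over MLP hidden-layer architectures,
# scored by cross-validated R^2.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(125, 6))                        # placeholder geometry/material inputs
y = X @ np.array([3.0, 2.0, 1.5, 1.0, 0.5, 0.2]) + rng.normal(0, 0.1, 125)

def fitness(layers):
    """Cross-validated R^2 of an MLP with the given hidden-layer sizes."""
    net = MLPRegressor(hidden_layer_sizes=tuple(layers), max_iter=3000, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="r2").mean()

def mutate(layers):
    """Perturb one layer size and occasionally add another hidden layer."""
    layers = list(layers)
    i = int(rng.integers(len(layers)))
    layers[i] = int(np.clip(layers[i] + rng.integers(-8, 9), 4, 64))
    if rng.random() < 0.3 and len(layers) < 4:
        layers.append(int(rng.integers(4, 65)))
    return layers

population = [[int(rng.integers(4, 65))] for _ in range(6)]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:3]                              # elitism: keep the fittest architectures
    children = [mutate(parents[int(rng.integers(3))]) for _ in range(3)]
    population = parents + children

best = max(population, key=fitness)
print("best hidden layers:", best, "cv R^2:", round(fitness(best), 3))
```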

Prediction of East Asian Brain Age using Machine Learning Algorithms Trained With Community-based Healthy Brain MRI

  • Chanda Simfukwe;Young Chul Youn
    • Dementia and Neurocognitive Disorders
    • /
    • v.21 no.4
    • /
    • pp.138-146
    • /
    • 2022
  • Background and Purpose: Magnetic resonance imaging (MRI) helps with brain development analysis and disease diagnosis. Brain volumes measured at different ages using MRI provide useful information in clinical evaluation and research. Therefore, we trained machine learning models that predict the brain age gap of healthy subjects in the East Asian population using T1 brain MRI volume images. Methods: In total, 154 T1-weighted MRIs of healthy subjects (55-83 years of age) were collected from an East Asian community. Age, gender, and education level were recorded for each participant. The MRIs were preprocessed using FreeSurfer (https://surfer.nmr.mgh.harvard.edu/) to extract the brain volume data. We trained the models using different supervised machine learning regression algorithms from the scikit-learn (https://scikit-learn.org/) library. Results: The trained models used 19 features reduced from 55 brain volume labels. The BayesianRidge (BR) algorithm achieved a mean absolute error (MAE) of 3 years and an R-squared (R²) of 0.3 in predicting the age of new subjects, outperforming the other regression methods. Feature importance analysis showed that the right pallidum, white matter hypointensities on T1-weighted MRI scans, and the left hippocampus are among the most important features in predicting brain age. Conclusions: The MAE and R² accuracies of the BR model in predicting the brain age gap in the East Asian population showed that the model can reduce the dimensionality of neuroimaging data to provide a meaningful biomarker of individual brain aging.
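
A minimal sketch of the reported pipeline, assuming scikit-learn and synthetic stand-ins for the FreeSurfer volume features (the 55-to-19 feature reduction here uses SelectKBest, which is an assumption, not necessarily the authors' method):

```python
# Sketch only: reduce volume features, fit BayesianRidge, report MAE and R^2.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
volumes = rng.normal(size=(154, 55))                     # 55 brain-volume labels (placeholder)
age = 55 + 28 * rng.random(154) + 2 * volumes[:, 0]      # synthetic ages roughly in 55-83

X_train, X_test, y_train, y_test = train_test_split(volumes, age, random_state=0)

model = make_pipeline(SelectKBest(f_regression, k=19),   # reduce 55 labels to 19 features
                      BayesianRidge())
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))
```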

Utilizing Visual Information for Non-contact Predicting Method of Friction Coefficient (마찰계수의 비접촉 추정을 위한 영상정보 활용방법)

  • Kim, Doo-Gyu;Kim, Ja-Young;Lee, Ji-Hong;Choi, Dong-Geol;Kweon, In-So
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.4
    • /
    • pp.28-34
    • /
    • 2010
  • In this paper, we propose an algorithm that utilizes visual information for the non-contact prediction of the friction coefficient. The coefficient of friction is very important when driving on roads and traversing obstacles. Our algorithm is based on terrain classification from visual images. The proposed non-contact approach has an advantage over methods that extract the material characteristics of the road with sensors contacting the road surface. The method consists of a learning stage (experiments and material grouping) and a friction-coefficient prediction stage (a Bayesian classification prediction function), both of which build on prior vision processing. Predicting the friction coefficient before entering a terrain is very useful for avoiding slippery areas. We experimentally measured the friction coefficients of several terrains and used the measured values as ground truth for the prediction method, and we report the error between the measured and predicted friction coefficients to evaluate the performance of the algorithm.
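
One simple way to realize the described two-stage scheme is sketched below: a Gaussian naive Bayes classifier (an assumption; the paper only specifies Bayesian classification) labels a terrain patch from image features, and the friction coefficient measured offline for that class is returned. Features, classes, and values are hypothetical.

```python
# Sketch only: terrain classification, then a lookup of the measured friction coefficient.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Placeholder image features per patch, e.g. (mean intensity, texture energy, hue).
asphalt = rng.normal([0.4, 0.8, 0.1], 0.05, size=(50, 3))
gravel  = rng.normal([0.6, 0.5, 0.2], 0.05, size=(50, 3))
ice     = rng.normal([0.9, 0.2, 0.5], 0.05, size=(50, 3))
X = np.vstack([asphalt, gravel, ice])
y = np.array(["asphalt"] * 50 + ["gravel"] * 50 + ["ice"] * 50)

friction = {"asphalt": 0.7, "gravel": 0.5, "ice": 0.1}   # measured offline (hypothetical)

clf = GaussianNB().fit(X, y)
patch = np.array([[0.88, 0.22, 0.48]])                   # features of an upcoming patch
terrain = clf.predict(patch)[0]
print(terrain, "predicted friction coefficient =", friction[terrain])
```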

MCMC Algorithm for Dirichlet Distribution over Gridded Simplex (그리드 단체 위의 디리슐레 분포에서 마르코프 연쇄 몬테 칼로 표집)

  • Sin, Bong-Kee
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.1
    • /
    • pp.94-99
    • /
    • 2015
  • With the recent machine learning paradigm of using nonparametric Bayesian statistics and statistical inference based on random sampling, the Dirichlet distribution finds many uses in a variety of graphical models. It is a multivariate generalization of the beta distribution and is defined on a continuous (K-1)-simplex. This paper presents a sampling method for a Dirichlet distribution for the problem of dividing an integer X into a sequence of K integers that sum to X. The target samples in our problem are vectors that become positive integer vectors when multiplied by a given X; they must be sampled from the correspondingly gridded simplex. We develop a Markov chain Monte Carlo (MCMC) proposal distribution over the neighborhood grid points on the simplex and then present the complete algorithm based on the Metropolis-Hastings algorithm. The proposed algorithm can be used for the Markov model, the HMM, and the semi-Markov model for accurate state-duration modeling. It can also be used for the Gamma-Dirichlet HMM to model the global-local duration distributions.
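
A minimal sketch of the sampling idea, assuming a simple symmetric neighborhood proposal that moves one unit of X between two coordinates (the paper's exact proposal may differ):

```python
# Sketch only: Metropolis-Hastings over positive integer vectors n with sum(n) == X,
# targeting a Dirichlet density evaluated at the grid point n / X.
import numpy as np

rng = np.random.default_rng(0)
X_total, alpha = 30, np.array([2.0, 5.0, 3.0])          # split X into K = 3 parts
K = len(alpha)

def log_target(n):
    return np.sum((alpha - 1) * np.log(n / X_total))    # unnormalized Dirichlet log-density

n = np.full(K, X_total // K); n[0] += X_total - n.sum() # valid starting point on the grid
samples = []
for _ in range(20000):
    i, j = rng.choice(K, size=2, replace=False)
    prop = n.copy(); prop[i] += 1; prop[j] -= 1          # move one unit from j to i
    if prop[j] >= 1:                                     # stay strictly inside the gridded simplex
        if np.log(rng.random()) < log_target(prop) - log_target(n):
            n = prop                                     # accept with Metropolis probability
    samples.append(n.copy())

print("empirical mean:", np.mean(samples, axis=0) / X_total)
print("Dirichlet mean:", alpha / alpha.sum())
```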

Transmission Delay Estimation-based Forwarding Strategy for Load Distribution in Software-Defined Network (SDN 환경에서 효율적 Flow 전송을 위한 전송 지연 평가 기반 부하 분산 기법 연구)

  • Kim, Do Hyeon;Hong, Choong Seon
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.5
    • /
    • pp.310-315
    • /
    • 2017
  • In a centralized control structure, the software-defined network controller manages all OpenFlow-enabled switches in the data plane and controls the communication between all hosts. In addition, the network manager can easily deploy network functions to the application layer through the software-defined network controller. For this reason, many methods for network management based on the software-defined networking concept have been proposed, with the main policies concerning traffic Quality of Service and resource management. In order to provide Quality of Service and load distribution for network users, we propose an efficient routing method using a naive Bayes algorithm and a transmission delay estimation module. In this method, the forwarding path is decided by the flow class and the estimated transmission delay in the software-defined network controller. With this method, the load on network nodes can be distributed to improve overall network performance, and network users also receive better dynamic Quality of Service.
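
A rough sketch of the forwarding decision, not the paper's controller code: a naive Bayes classifier (scikit-learn's GaussianNB here, as an assumption) labels a flow, and the candidate path with the lowest estimated transmission delay for that class is chosen. Features, classes, and delay values are hypothetical.

```python
# Sketch only: classify a flow, then pick the path with the lowest estimated delay.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Training flows: (packet size, inter-arrival time, duration) with a class label.
X = np.vstack([rng.normal([1400, 0.001, 30], [100, 0.0005, 5], (50, 3)),   # "elephant"
               rng.normal([200, 0.05, 1], [50, 0.01, 0.5], (50, 3))])      # "mice"
y = np.array(["elephant"] * 50 + ["mice"] * 50)
clf = GaussianNB().fit(X, y)

# Estimated per-path transmission delay (ms) by flow class, e.g. from link statistics.
est_delay = {"elephant": {"path_A": 12.0, "path_B": 7.5},
             "mice":     {"path_A": 2.0, "path_B": 3.5}}

new_flow = np.array([[1350, 0.0012, 28]])
flow_class = clf.predict(new_flow)[0]
best_path = min(est_delay[flow_class], key=est_delay[flow_class].get)
print(flow_class, "->", best_path)
```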