• Title/Summary/Keyword: (ML) Machine learning

Discriminative Training of Sequence Taggers via Local Feature Matching

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.3 / pp.209-215 / 2014
  • Sequence tagging is the task of predicting frame-wise labels for a given input sequence and has important applications in diverse domains. Conventional methods such as maximum likelihood (ML) learning match global features of the empirical and model distributions rather than local features, whose mismatch translates directly into frame-wise prediction errors. Recent probabilistic sequence models such as conditional random fields (CRFs) have achieved great success in a variety of situations. In this paper, we introduce a novel discriminative CRF learning algorithm that minimizes local feature mismatches. Unlike the overall data fitting that arises from global feature matching in ML learning, our approach reduces the total error over all frames in a sequence. We also provide an efficient gradient-based learning method via a gradient forward-backward recursion, which has the same computational complexity as ML learning. For several real-world sequence tagging problems, we empirically demonstrate that the proposed learning algorithm achieves significantly more accurate predictions than standard estimators.
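
The frame-wise marginals that drive both maximum likelihood learning and a per-frame error objective like the one described here come out of the standard forward-backward recursion. Below is a minimal NumPy sketch of that recursion for a linear-chain CRF; the log-potential shapes and variable names are generic assumptions for illustration, not the paper's own notation or code.

```python
import numpy as np
from scipy.special import logsumexp

def crf_marginals(unary, trans):
    """Frame-wise posteriors p(y_t = k | x) for a linear-chain CRF.

    unary: (T, K) log unary scores; trans: (K, K) log transition scores.
    Returns the (T, K) marginals and the log-partition function.
    """
    T, K = unary.shape
    alpha = np.zeros((T, K))   # forward log-messages
    beta = np.zeros((T, K))    # backward log-messages
    alpha[0] = unary[0]
    for t in range(1, T):
        alpha[t] = unary[t] + logsumexp(alpha[t - 1][:, None] + trans, axis=0)
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(trans + unary[t + 1] + beta[t + 1], axis=1)
    log_Z = logsumexp(alpha[-1])
    return np.exp(alpha + beta - log_Z), log_Z
```

Gradients for either objective are then assembled from these marginals (expected minus empirical feature counts), which is why the proposed method can keep the same asymptotic cost as ML learning.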

Toward a grey box approach for cardiovascular physiome

  • Hwang, Minki;Leem, Chae Hun;Shim, Eun Bo
    • The Korean Journal of Physiology and Pharmacology / v.23 no.5 / pp.305-310 / 2019
  • The physiomic approach is now widely used in the diagnosis of cardiovascular diseases. There are two possible approaches to the cardiovascular physiome: traditional mathematical models and machine learning (ML) algorithms. ML is used in almost every area of society for tasks formerly performed by humans, and ML techniques in cardiovascular medicine in particular are being developed and improved at unprecedented speed. The benefit of using ML is that the inner working mechanism of the system does not need to be known, which is convenient when determining the inner workings of the system is difficult. The computation speed is also often higher than that of traditional mathematical models. The limitation of ML is that it inherently yields an approximation, so special care must be taken when high accuracy is required. Traditional mathematical models, by contrast, are constructed from underlying laws that are either proven or assumed, and their results are accurate as long as the model itself is. Combining the advantages of mathematical models and ML would increase both the accuracy and the efficiency of simulation for many problems. In this review, examples of the cardiovascular physiome where mathematical modeling and ML can be combined are introduced.
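
To make the "grey box" idea concrete, here is a small illustrative sketch, not taken from the review: a mechanistic (white-box) model supplies a first-principles estimate, and an ML regressor learns only the residual between that estimate and the observed data. The toy physics, variable names, and data below are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=(500, 1))      # some physiological input (assumed)

def mechanistic_model(x):
    # Stand-in for a traditional mathematical model based on known laws.
    return 1.5 * np.sqrt(x[:, 0])

# Measurements contain structure the simple physics misses, plus noise.
y_obs = mechanistic_model(x) + 0.3 * np.sin(4 * x[:, 0]) + rng.normal(0, 0.02, 500)

# Grey box: fit ML only to the residual the white-box model cannot explain.
correction = GradientBoostingRegressor().fit(x, y_obs - mechanistic_model(x))

def grey_box_predict(x_new):
    return mechanistic_model(x_new) + correction.predict(x_new)
```

The white-box part keeps predictions anchored to physical laws, while the ML part recovers accuracy where the model is only approximate, which is the trade-off the review argues for.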

A Study on Comparison of Lung Cancer Prediction Using Ensemble Machine Learning

  • NAM, Yu-Jin;SHIN, Won-Ji
    • Korean Journal of Artificial Intelligence / v.7 no.2 / pp.19-24 / 2019
  • Lung cancer is a chronic disease that ranks fourth in cancer incidence, accounting for 11 percent of total cancer incidence in Korea. To address this, there is active research on the usefulness and utilization of clinical decision support systems (CDSS) that employ machine learning. This study therefore reviews existing work on artificial intelligence technology applicable to lung cancer determination and examines the applicability of machine learning to this task through comparison and analysis using Microsoft Azure ML. The results show different predictions yielded by three algorithms: Support Vector Machine (SVM), Two-Class Decision Jungle, and Multiclass Decision Jungle. This study is limited by the size of the data used for machine learning. Although the Kaggle dataset is the most suitable one available for this study, its small absolute size likely prevents the models from learning the data fully. Therefore, with institutional cooperation in subsequent research to compare and analyze algorithms beyond those used here, a more accurate lung cancer screening model could be created.
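
The original comparison was run in Azure ML Studio; as a rough, hedged sketch of the same workflow in code, the snippet below cross-validates an SVM against a random forest (standing in for the Decision Jungle modules, which have no direct scikit-learn equivalent) on a synthetic dataset rather than the Kaggle data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder data; the study used a Kaggle lung cancer dataset instead.
X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

for name, clf in [("SVM", SVC()), ("RandomForest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```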

Recent Research & Development Trends in Automated Machine Learning (자동 기계학습(AutoML) 기술 동향)

  • Moon, Y.H.;Shin, I.H.;Lee, Y.J.;Min, O.G.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.32-42 / 2019
  • The performance of machine learning algorithms depends significantly on how a configuration of hyperparameters is identified and how a neural network architecture is designed. However, this requires expert knowledge of the relevant task domains and prohibitive computation time. To optimize these two processes with minimal effort, many studies have investigated automated machine learning in recent years. This paper reviews the conventional random, grid, and Bayesian methods for hyperparameter optimization (HPO) and addresses recent approaches that speed up the identification of the best set of hyperparameters. We further investigate existing neural architecture search (NAS) techniques based on evolutionary algorithms, reinforcement learning, and gradient-based methods, and analyze their theoretical characteristics and performance results. Moreover, future research directions and challenges in HPO and NAS are described.
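
As a brief illustration of the HPO methods the survey reviews, the sketch below runs grid search and random search over the same hyperparameter space with scikit-learn; a Bayesian optimizer (e.g., Optuna or scikit-optimize) would replace the sampling strategy but not the overall loop. The model, search space, and dataset are arbitrary choices for demonstration.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Grid search: exhaustive over a fixed grid of hyperparameter values.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2]}, cv=3)
# Random search: a fixed budget of samples drawn from continuous distributions.
rand = RandomizedSearchCV(SVC(), {"C": loguniform(1e-2, 1e2),
                                  "gamma": loguniform(1e-4, 1e-1)},
                          n_iter=20, cv=3, random_state=0)

for search in (grid, rand):
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))
```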

A sensitivity analysis of machine learning models on fire-induced spalling of concrete: Revealing the impact of data manipulation on accuracy and explainability

  • Mohammad K. al-Bashiti;M.Z. Naser
    • Computers and Concrete / v.33 no.4 / pp.409-423 / 2024
  • Using an extensive database, a sensitivity analysis across fifteen machine learning (ML) classifiers was conducted to evaluate the impact of various data manipulation techniques, evaluation metrics, and explainability tools. The results of this sensitivity analysis reveal that the examined models can achieve an accuracy of 72-93% in predicting the fire-induced spalling of concrete and identify the light gradient boosting machine, extreme gradient boosting, and random forest algorithms as the best-performing models. According to these models, the six key factors influencing spalling were maximum exposure temperature, heating rate, compressive strength of concrete, moisture content, silica fume content, and the quantity of polypropylene fiber. Our analysis also documents some conflicting results observed with the deep learning model. As such, this study highlights the necessity of selecting suitable models and carefully evaluating the presence of possible outcome biases.
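
A hedged sketch of the sensitivity-analysis idea is shown below: the same classifier is scored under different data-manipulation pipelines and several metrics, so that shifts in the reported accuracy can be traced to preprocessing rather than to the model. The spalling database itself is not reproduced here, so a synthetic imbalanced dataset stands in.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=800, n_features=10, weights=[0.7], random_state=1)

pipelines = {
    "impute only": make_pipeline(SimpleImputer(strategy="mean"),
                                 RandomForestClassifier(random_state=0)),
    "impute + scale": make_pipeline(SimpleImputer(strategy="median"), StandardScaler(),
                                    RandomForestClassifier(random_state=0)),
}
for name, pipe in pipelines.items():
    scores = cross_validate(pipe, X, y, cv=5, scoring=["accuracy", "f1", "roc_auc"])
    print(name, {m: round(scores[f"test_{m}"].mean(), 3)
                 for m in ("accuracy", "f1", "roc_auc")})
```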

Numerical data-driven machine learning model to predict the strength reduction of fire damaged RC columns

  • HyunKyoung Kim;Hyo-Gyoung Kwak;Ju-Young Hwang
    • Computers and Concrete / v.32 no.6 / pp.625-637 / 2023
  • This paper introduces the application of ML approaches to determining the resisting capacity of fire-damaged RC columns on the basis of analysis-data-driven ML modeling. Considering the characteristics of the structural behavior of fire-damaged RC columns, five representative approaches are adopted and applied: kernel SVM, ANN, RF, XGB, and LGBM. Partial monotonic constraints are imposed in the modeling to ensure that the resisting capacity of an RC column decreases monotonically with fire exposure time, and additional measures are taken to mitigate the heterogeneous composition of the training data. Determining the resisting capacity of fire-damaged RC columns conventionally requires many complex solution procedures, from heat transfer analysis to rigorous nonlinear analyses repeated over time; because ML approaches significantly reduce this computation time, they can be used more effectively in large, complex structures with many RC members. Because very little experimental data is available, the training data are determined analytically from a heat transfer analysis and a subsequent nonlinear finite element (FE) analysis, whose accuracy was previously verified through a correlation study between numerical results and experimental data. The results show that the resisting capacity of fire-damaged RC columns can be predicted effectively by ML approaches.
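
The partial monotonic constraint mentioned above has a direct counterpart in gradient-boosting libraries. The sketch below, using scikit-learn's histogram gradient boosting as an assumed stand-in for the paper's models and fabricated data, forces the predicted resisting capacity to be non-increasing in fire exposure time while leaving other features unconstrained.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
fire_time = rng.uniform(0, 180, n)     # fire exposure time in minutes (assumed)
section = rng.uniform(300, 600, n)     # an illustrative geometric feature
capacity = 5000 - 15 * fire_time + 4 * section + rng.normal(0, 50, n)

X = np.column_stack([fire_time, section])
# monotonic_cst: -1 = monotonically decreasing, 0 = unconstrained, +1 = increasing
model = HistGradientBoostingRegressor(monotonic_cst=[-1, 0]).fit(X, capacity)
```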

Towards Effective Analysis and Tracking of Mozilla and Eclipse Defects using Machine Learning Models based on Bugs Data

  • Hassan, Zohaib;Iqbal, Naeem;Zaman, Abnash
    • Soft Computing and Machine Intelligence / v.1 no.1 / pp.1-10 / 2021
  • Analysis and tracking of bug reports is a challenging field in software repository mining. It is one of the fundamental ways to explore the large amounts of data acquired from defect tracking systems and to discover patterns and valuable knowledge about the process of bug triaging. Bug data is publicly accessible and available from systems such as Bugzilla and JIRA, and with robust machine learning (ML) techniques it is quite possible to process and analyze a massive amount of data to extract underlying patterns, knowledge, and insights. It is therefore an interesting area in which to propose innovative and robust solutions for analyzing and tracking bug reports originating from different open source projects, including Mozilla and Eclipse. This research study presents an ML-based classification model to analyze and track bug defects for enhancing software engineering management (SEM) processes. In this work, Artificial Neural Network (ANN) and Naive Bayes (NB) classifiers are implemented using open-source bug datasets from Mozilla and Eclipse. Different evaluation measures are employed to analyze and evaluate the experimental results, and a comparative analysis of ANN and NB is given. The experimental results indicate that the ANN achieved higher accuracy than the NB classifier. The proposed research study will enhance SEM processes and contribute to the body of knowledge in the data mining field.
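
A minimal sketch of the ANN-versus-Naive-Bayes comparison on bug-report text is given below, assuming TF-IDF features; the tiny corpus and severity labels are invented for illustration and are not the Mozilla or Eclipse data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

reports = ["crash on startup when profile is missing",
           "UI button misaligned in dark theme",
           "segfault when opening malformed attachment",
           "tooltip text overlaps toolbar icons"]
labels = ["critical", "minor", "critical", "minor"]

for name, clf in [("NB", MultinomialNB()), ("ANN", MLPClassifier(max_iter=500))]:
    model = make_pipeline(TfidfVectorizer(), clf)  # text -> TF-IDF -> classifier
    model.fit(reports, labels)
    print(name, model.predict(["crash when loading a corrupted attachment"]))
```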

A Systems Engineering Approach for Predicting NPP Response under Steam Generator Tube Rupture Conditions using Machine Learning

  • Tran Canh Hai, Nguyen;Aya, Diab
    • Journal of the Korean Society of Systems Engineering / v.18 no.2 / pp.94-107 / 2022
  • Accident prevention and mitigation is the highest priority of nuclear power plant (NPP) operation, particularly in the aftermath of the Fukushima Daiichi accident, which reignited public anxiety and skepticism about nuclear energy. To deal with accident scenarios more effectively, operators must have ample and precise information about key safety parameters as well as their future trajectories. This work investigates the potential of machine learning to forecast NPP response in real time, providing an additional validation method and helping reduce human error, especially in accident situations where operators are under great stress. First, a base-case SGTR simulation is carried out with the best-estimate code RELAP5/MOD3.4 to confirm the validity of the model against results reported in the APR1400 Design Control Document (DCD). Then, uncertainty quantification is performed by coupling RELAP5/MOD3.4 with the statistical tool DAKOTA to generate a dataset large enough for the construction and training of neural machine learning (ML) models, namely LSTM, GRU, and hybrid CNN-LSTM. Finally, the accuracy and reliability of these models in forecasting the system response are tested on fresh data. To facilitate and oversee the development of the ML models, a Systems Engineering (SE) methodology is used to ensure that the work remains consistent with the originating mission statement and that the findings obtained at each phase are valid.
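
As a rough sketch of the forecasting setup, assuming a Keras LSTM that maps a short window of past safety parameters to their next values, the snippet below uses placeholder shapes and random arrays in place of the RELAP5/DAKOTA-generated dataset.

```python
import numpy as np
import tensorflow as tf

window, n_params = 20, 4                        # assumed window length / parameter count
X = np.random.rand(256, window, n_params).astype("float32")
y = np.random.rand(256, n_params).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_params)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(n_params),            # next-step value of each parameter
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

Swapping the LSTM layer for a GRU, or prepending a Conv1D layer, gives the GRU and hybrid CNN-LSTM variants compared in the study.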

Using machine learning to forecast and assess the uncertainty in the response of a typical PWR undergoing a steam generator tube rupture accident

  • Tran Canh Hai Nguyen;Aya Diab
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3423-3440 / 2023
  • In this work, a multivariate time-series machine learning meta-model is developed to predict the transient response of a typical nuclear power plant (NPP) undergoing a steam generator tube rupture (SGTR). The model employs Recurrent Neural Networks (RNNs), including the Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU), and a hybrid CNN-LSTM model. To address the uncertainty inherent in such predictions, a Bayesian Neural Network (BNN) was implemented. The models were trained using a database generated by the Best Estimate Plus Uncertainty (BEPU) methodology, coupling the thermal-hydraulics code RELAP5/SCDAP/MOD3.4 to the statistical tool DAKOTA to predict the variation in system response under various operational and phenomenological uncertainties. The RNN models successfully capture the underlying characteristics of the data with reasonable accuracy, and the BNN-LSTM approach offers an additional layer of insight into the level of uncertainty associated with the predictions. The results demonstrate that LSTM outperforms GRU, while the hybrid CNN-LSTM model is computationally the most efficient. This study aims to provide a better understanding of the capabilities and limitations of machine learning models in the context of nuclear safety. By expanding the application of ML models to more severe accident scenarios, where operators are under extreme stress and prone to error, ML models can provide valuable support and act as expert systems that assist decision-making while minimizing the chance of human error.
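
The paper quantifies predictive uncertainty with a Bayesian Neural Network; as a lightweight stand-in for that idea, the sketch below uses Monte Carlo dropout, where dropout is kept active at prediction time and repeated stochastic passes yield a mean forecast and a spread. The architecture, shapes, and data are placeholders, and the substitution of MC dropout for a full BNN is an assumption for illustration.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 4)),       # placeholder window / parameter count
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4),
])
model.compile(optimizer="adam", loss="mse")
# ... fit on the BEPU-generated training set here ...

x_new = np.random.rand(1, 20, 4).astype("float32")
# training=True keeps dropout stochastic, so each forward pass is a different sample.
samples = np.stack([model(x_new, training=True).numpy() for _ in range(50)])
mean, spread = samples.mean(axis=0), samples.std(axis=0)
```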

Hybrid machine learning with moth-flame optimization methods for strength prediction of CFDST columns under compression

  • Quang-Viet Vu;Dai-Nhan Le;Thai-Hoan Pham;Wei Gao;Sawekchai Tangaramvong
    • Steel and Composite Structures / v.51 no.6 / pp.679-695 / 2024
  • This paper presents a novel technique that combines machine learning (ML) with moth-flame optimization (MFO) methods to predict the axial compressive strength (ACS) of concrete-filled double-skin steel tube (CFDST) columns. The proposed model is trained and tested with a dataset containing 125 tests of CFDST columns subjected to compressive loading. Five ML models are utilized in this work: extreme gradient boosting (XGBoost), gradient tree boosting (GBT), categorical gradient boosting (CAT), support vector machines (SVM), and decision tree (DT) algorithms. The MFO algorithm is applied to find the optimal hyperparameters of these ML models and to determine the most effective model for predicting the ACS of CFDST columns. Predictive results assessed with several performance metrics reveal that the MFO-CAT model provides superior accuracy compared to the other models considered. The accuracy of the MFO-CAT model is validated by comparing its predictions with existing design codes and formulae. Moreover, the significance and contribution of each feature in the dataset are examined with the SHapley Additive exPlanations (SHAP) method. A comprehensive uncertainty quantification of the probabilistic characteristics of the ACS of CFDST columns is conducted for the first time to examine the models' responses to variations of the input variables in stochastic environments. Finally, a web-based application is developed to predict the ACS of CFDST columns, enabling rapid practical use without requiring any programming or machine learning expertise.
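
As a hedged sketch of the explainability step, the snippet below applies SHAP's tree explainer to a scikit-learn gradient-boosting regressor standing in for the tuned MFO-CAT model, with synthetic features in place of the 125-test CFDST dataset.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))                       # placeholder column features
y = 300 * X[:, 0] + 120 * X[:, 1] + rng.normal(0, 5, 200)   # toy strength target

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # per-sample feature contributions
print(np.abs(shap_values).mean(axis=0))              # global importance ranking
```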