• Title/Summary/Keyword: Machine learning (ML)

Search results: 290

Prediction of ultimate shear strength and failure modes of R/C ledge beams using machine learning framework

  • Ahmed M. Yousef; Karim Abd El-Hady; Mohamed E. El-Madawy
    • Structural Monitoring and Maintenance / v.9 no.4 / pp.337-357 / 2022
  • The objective of this study is to present a data-driven machine learning (ML) framework for predicting the ultimate shear strength and failure modes of reinforced concrete ledge beams. Experimental tests on these beams with different loading, geometric, and material properties were collected. The database was analyzed using different ML algorithms, including decision trees, discriminant analysis, support vector machines, logistic regression, nearest neighbors, naïve Bayes, ensembles, and artificial neural networks, to identify the governing and critical parameters of reinforced concrete ledge beams. The results showed that the ML framework can effectively identify the failure mode of these beams, whether web shear failure, flexural failure, or ledge failure. The ML framework can also derive equations for predicting the ultimate shear strength for each failure mode. The ultimate shear strength at ledge failure was compared across the experimental results, the proposed equations, and the design equations of international codes. These comparisons indicated that the proposed ML equations predict the ultimate shear strength of reinforced concrete ledge beams better than the design equations of AASHTO LRFD-2020 or PCI-2020.
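As an illustrative sketch only (not the authors' code), the comparison of candidate classifiers for predicting the failure mode could be set up in scikit-learn as follows; the data file and column names are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' code): comparing candidate classifiers for
# predicting the failure mode of R/C ledge beams from tabular test data.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("ledge_beam_tests.csv")           # hypothetical database of beam tests
X = df.drop(columns=["failure_mode"])              # loading, geometric, material features
y = df["failure_mode"]                             # "web_shear" / "flexural" / "ledge"

models = {
    "decision_tree": DecisionTreeClassifier(),
    "discriminant_analysis": LinearDiscriminantAnalysis(),
    "svm": SVC(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "nearest_neighbors": KNeighborsClassifier(),
    "naive_bayes": GaussianNB(),
    "ensemble": RandomForestClassifier(),
    "neural_network": MLPClassifier(max_iter=2000),
}

# 5-fold cross-validated accuracy for each failure-mode classifier
for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```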

Form-finding of lifting self-forming GFRP elastic gridshells based on machine learning interpretability methods

  • Soheila Kookalani; Sandy Nyunn; Sheng Xiang
    • Structural Engineering and Mechanics / v.84 no.5 / pp.605-618 / 2022
  • Glass fiber reinforced polymer (GFRP) elastic gridshells consist of long continuous GFRP tubes that are shaped through elastic deformation. In this paper, a method for the form-finding of gridshell structures is presented based on interpretable machine learning (ML) approaches. A comparative study is conducted on several ML algorithms, including support vector regression (SVR), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), AdaBoost, XGBoost, category boosting (CatBoost), and light gradient boosting machine (LightGBM). A numerical example is presented using a standard double-hump gridshell, considering two characteristics of deformation as objective functions. The combination of the grid search approach and k-fold cross-validation (CV) is implemented for fine-tuning the parameters of the ML models. The results of the comparative study indicate that the LightGBM model provides the highest prediction accuracy. Finally, interpretable ML approaches, including Shapley additive explanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), are applied to explain the predictions of the ML model, since it is essential to understand the effect of various values of the input parameters on the objective functions. Based on these interpretability approaches, an optimum gridshell structure is obtained, and new opportunities are identified for the form-finding investigation of GFRP elastic gridshells during lifting construction.
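A minimal sketch of the tuning-and-interpretability workflow described above, assuming a LightGBM regressor tuned by grid search with k-fold CV and explained with SHAP; the data file and column names are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch, assuming a LightGBM regressor tuned by grid search with k-fold CV and
# explained with SHAP; the data file and column names are hypothetical placeholders,
# not the authors' code.
import pandas as pd
import shap
from lightgbm import LGBMRegressor
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("gridshell_samples.csv")          # sampled lifting configurations
X = df.drop(columns=["deformation"])               # gridshell input parameters
y = df["deformation"]                              # one deformation objective

param_grid = {
    "n_estimators": [200, 500],
    "learning_rate": [0.05, 0.1],
    "num_leaves": [15, 31],
}
search = GridSearchCV(LGBMRegressor(), param_grid, cv=5, scoring="r2")
search.fit(X, y)
best = search.best_estimator_

# SHAP quantifies each input parameter's contribution to the predicted deformation
explainer = shap.TreeExplainer(best)
shap.summary_plot(explainer.shap_values(X), X)
```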

Traffic Classification Using Machine Learning Algorithms in Practical Network Monitoring Environments

  • Jung, Kwang-Bon; Choi, Mi-Jung; Kim, Myung-Sup; Won, Young-J.; Hong, James W.
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.8B / pp.707-718 / 2008
  • The methodology for classifying traffic is changing from payload-based or port-based approaches to machine learning-based approaches in order to cope with the dynamically changing characteristics of applications. However, current work on traffic classification using machine learning (ML) algorithms is mostly carried out in offline environments. Specifically, most existing works report classification results obtained with cross validation as the test method, and they report results on a per-flow basis. Such results are of limited use in practical network traffic monitoring environments. This paper compares classification results obtained using cross validation with those obtained using split validation as the test method, and compares flow-based results with byte-based results. We classify network traffic using various feature sets and machine learning algorithms such as J48, REPTree, RBFNetwork, Multilayer Perceptron, BayesNet, and NaiveBayes. In this paper, we find the best feature sets and the best ML algorithm for classifying traffic using split validation.
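A minimal sketch of the contrast between cross validation and split validation; the paper uses Weka classifiers such as J48, so this scikit-learn analogue with hypothetical feature columns is purely illustrative.

```python
# Illustrative sketch: contrasting cross validation with split validation, where earlier
# flows train the model and later flows test it, as in online monitoring. The paper uses
# Weka classifiers (e.g., J48); this scikit-learn analogue and its columns are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier    # rough analogue of Weka's J48

df = pd.read_csv("flow_features.csv")              # flow records with an "app" label
X, y = df.drop(columns=["app"]), df["app"]

clf = DecisionTreeClassifier()
cv_acc = cross_val_score(clf, X, y, cv=10).mean()  # offline-style cross validation

split = int(len(df) * 0.5)                         # first half trains, second half tests
clf.fit(X.iloc[:split], y.iloc[:split])
split_acc = accuracy_score(y.iloc[split:], clf.predict(X.iloc[split:]))

print(f"cross-validation accuracy: {cv_acc:.3f}")
print(f"split-validation accuracy: {split_acc:.3f}")
```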

Enhancing prediction accuracy of concrete compressive strength using stacking ensemble machine learning

  • Yunpeng Zhao; Dimitrios Goulias; Setare Saremi
    • Computers and Concrete / v.32 no.3 / pp.233-246 / 2023
  • Accurate prediction of concrete compressive strength can minimize the need for extensive, time-consuming, and costly mixture optimization testing and analysis. This study attempts to enhance the prediction accuracy of compressive strength using stacking ensemble machine learning (ML) with feature engineering techniques. Seven alternative ML models of increasing complexity were implemented and compared, including linear regression, SVM, decision tree, multilayer perceptron, random forest, XGBoost, and AdaBoost. To further improve the prediction accuracy, an ML pipeline was proposed in which the feature engineering technique was implemented and a two-layer stacked model was developed. The k-fold cross-validation approach was employed to optimize model parameters and train the stacked model. The stacked model showed superior performance in predicting concrete compressive strength, with a coefficient of determination (R2) of 0.985. Feature (i.e., variable) importance was determined to demonstrate how useful the synthetic features are in prediction and to provide better interpretability of the data and the model. The methodology in this study promotes a more thorough assessment of alternative ML algorithms rather than focusing on any single ML model type for concrete compressive strength prediction.
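A minimal sketch of a two-layer stacked regressor of the kind described above, using scikit-learn's StackingRegressor; the base learners, hyper-parameters, and column names are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch of a two-layer stacked regressor for compressive strength, assuming
# scikit-learn's StackingRegressor with illustrative base learners; column names and
# hyper-parameters are hypothetical, not the authors' pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from xgboost import XGBRegressor

df = pd.read_csv("concrete_mixtures.csv")          # cement, water, aggregates, age, ...
X = df.drop(columns=["compressive_strength"])
y = df["compressive_strength"]

# Layer 1: diverse base learners; Layer 2: a simple meta-learner on their out-of-fold predictions
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=300)),
        ("xgb", XGBRegressor(n_estimators=300, learning_rate=0.1)),
        ("svr", SVR(C=10.0)),
    ],
    final_estimator=LinearRegression(),
    cv=5,                                          # k-fold CV builds the meta-features
)
print("R2 (5-fold CV):", cross_val_score(stack, X, y, cv=5, scoring="r2").mean())
```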

Machine learning-based prediction of wind forces on CAARC standard tall buildings

  • Yi Li; Jie-Ting Yin; Fu-Bin Chen; Qiu-Sheng Li
    • Wind and Structures / v.36 no.6 / pp.355-366 / 2023
  • Although machine learning (ML) techniques have been widely used in various fields of engineering practice, their applications in the field of wind engineering are still at an initial stage. In order to evaluate the feasibility of machine learning algorithms for the prediction of wind loads on high-rise buildings, this study took the exposure category type, wind direction, and the height of the local wind force as the input features and adopted four different machine learning algorithms, including k-nearest neighbor (KNN), support vector machine (SVM), gradient boosting regression tree (GBRT), and extreme gradient boosting (XGBoost), to predict the wind force coefficients of the CAARC standard tall building model. All the hyper-parameters of the four ML algorithms are optimized by the tree-structured Parzen estimator (TPE). The results show that the mean drag force coefficients and RMS lift force coefficients are well predicted by the GBRT model, while the RMS drag force coefficients are better predicted by the XGBoost model. The proposed machine learning-based approach for wind load prediction can be an alternative to traditional wind tunnel tests and computational fluid dynamics simulations.
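A minimal sketch of TPE-based hyper-parameter tuning for a gradient boosting regressor, shown here with Optuna (whose default sampler is TPE); the data file, feature names, and search ranges are hypothetical, not the authors' setup.

```python
# Minimal sketch of TPE hyper-parameter tuning for a gradient boosting regressor, here via
# Optuna (whose default sampler is TPE); the data file, numeric-encoded features, and search
# ranges are hypothetical placeholders, not the authors' setup.
import optuna
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("caarc_wind_data.csv")
X = df[["exposure_category", "wind_direction", "height"]]  # assumed numeric-encoded inputs
y = df["mean_drag_coefficient"]                            # one target force coefficient

def objective(trial):
    model = GradientBoostingRegressor(
        n_estimators=trial.suggest_int("n_estimators", 100, 1000),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 8),
    )
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print("best R2:", study.best_value, "best params:", study.best_params)
```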

DL-ML Fusion Hybrid Model for Malicious Web Site URL Detection Based on URL Lexical Features

  • Dae-yeob Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.881-891 / 2023
  • Recently, various studies on malicious URL detection using artificial intelligence have been conducted, and most of this research has shown strong detection performance. However, not only does classical machine learning require a feature analysis process, but the detection performance of a trained model also depends on the data analyst's skill. In this paper, we propose a DL-ML fusion hybrid model for malicious web site URL detection based on URL lexical features. The proposed model combines the automatic feature extraction layers of deep learning with classical machine learning to address the feature engineering issue. 60,000 malicious and normal URLs were collected for the experiment, and the results showed a maximum performance improvement of 23.98 percentage points. In addition, automating feature engineering made it possible to train a model efficiently.
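A minimal sketch of the DL-ML fusion idea, assuming a small character-level CNN as the automatic feature extractor and a random forest as the classical classifier; the toy URLs, layer sizes, and architecture are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of the DL-ML fusion idea, assuming a small character-level CNN as the
# automatic feature extractor and a random forest as the classical classifier; the toy URLs,
# layer sizes, and overall architecture are illustrative assumptions, not the paper's model.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

urls = ["http://example.com/login", "http://malicious.example/verify-account"]  # toy data
labels = np.array([0, 1])                          # 0 = benign, 1 = malicious

# Character-level integer encoding of URLs, padded to a fixed length
vocab = {c: i + 1 for i, c in enumerate(sorted(set("".join(urls))))}
max_len = 80
X = np.array([[vocab.get(c, 0) for c in u[:max_len]] + [0] * (max_len - len(u[:max_len]))
              for u in urls])

# Deep feature extraction layers learn lexical patterns from raw characters
inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
h = tf.keras.layers.Embedding(len(vocab) + 1, 16)(inputs)
h = tf.keras.layers.Conv1D(32, 3, activation="relu")(h)
features = tf.keras.layers.GlobalMaxPooling1D()(h)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(features)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, labels, epochs=3, verbose=0)

# Classical ML classifier trained on the CNN's learned feature vectors
extractor = tf.keras.Model(inputs, features)
rf = RandomForestClassifier().fit(extractor.predict(X), labels)
```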

Role of Machine Learning in Intrusion Detection System: A Systematic Review

  • Alhasani, Areej; Al omrani, Faten; Alzahrani, Taghreed; alFahhad, Rehab; Alotaibi, Mohamed
    • International Journal of Computer Science & Network Security / v.22 no.3 / pp.155-162 / 2022
  • Over the last 10 years, there has been rapid growth in the use of Machine Learning (ML) techniques to automate the process of intrusion threat detection at a scale never imagined before. This has prompted researchers, software engineers, and network specialists to rethink the applications of ML techniques, particularly in the area of cybersecurity. As a result, there exists extensive research documentation on the use of ML techniques to detect and block cyber-attacks. This article is a systematic review involving the identification of published scholarly articles found in the IEEE Xplore and Scopus databases, exclusively related to the use of machine learning in Intrusion Detection Systems (IDS). Methods, concepts, results, and conclusions found in the texts are analyzed. The process taken to identify the research articles is described, beginning with an introduction to the topic followed by a methodology section. A table lists the identified research articles by title, authors, methodology, and key findings.

Physical-Layer Technology Trend and Prospect for AI-based Mobile Communication

  • Chang, K.; Ko, Y.J.; Kim, I.G.
    • Electronics and Telecommunications Trends / v.35 no.5 / pp.14-29 / 2020
  • The 6G mobile communication system will become a backbone infrastructure around 2030 for the future digital world by providing distinctive services such as five-sense holograms, ultra-high reliability/low latency, ultra-high-precision positioning, ultra-massive connectivity, and gigabit-per-second data rates for aerial and maritime terminals. The recent remarkable advances in machine learning (ML) technology have demonstrated its efficiency in wireless networking fields such as resource management and cell-configuration optimization. Further innovation in ML is expected to play an important role in solving new problems arising from 6G network management and service delivery. In contrast, applying ML at the physical layer (PHY) tackles the basic problems in radio links, such as overcoming signal distortion and interference. This paper reviews the methodologies of ML-based PHY, relevant industrial trends, and candidate technologies, including future research directions and standardization impacts.

Research Status on Machine Learning for Self-Organizing Network-II

  • Kwon, D.S.; Na, J.H.
    • Electronics and Telecommunications Trends / v.35 no.4 / pp.115-134 / 2020
  • Several studies on machine learning (ML) based self-organizing networks (SONs) have been conducted, specifically for LTE, since efforts to apply ML to the optimization of mobile communication systems started with 2G. However, they are still in their infancy. Owing to the complicated KPIs and stringent user requirements of 5G, it is necessary to design the 5G SON engine with intelligence that enables users to achieve seamless and unrestricted connectivity regardless of the state of the mobile communication network. Therefore, in this study, we analyze and summarize the current state of machine learning studies applied to SONs as solutions to the complicated optimization problems caused by the unpredictable context of mobile communication scenarios.

The Investigation of Employing Supervised Machine Learning Models to Predict Type 2 Diabetes Among Adults

  • Alhmiedat, Tareq; Alotaibi, Mohammed
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.9 / pp.2904-2926 / 2022
  • Currently, diabetes is the most common chronic disease in the world, affecting 23.7% of the population in the Kingdom of Saudi Arabia. Diabetes may cause lower-limb amputations, kidney failure, and blindness among adults. Therefore, diagnosing the disease in its early stages is essential in order to save human lives. With the revolution in technology, Artificial Intelligence (AI) could play a central role in the early prediction of diabetes by employing Machine Learning (ML) technology. In this paper, we developed a diagnosis system using machine learning models for the detection of type 2 diabetes among adults, through the adoption of two different diabetes datasets: one for training and the other for testing, to analyze and enhance the prediction accuracy. This work achieves enhanced classification accuracy as a result of employing several pre-processing methods before applying the ML models. According to the obtained results, the implemented Random Forest (RF) classifier offers the best classification accuracy, with a classification score of 98.95%.
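A minimal sketch of training on one diabetes dataset and testing on another, with pre-processing ahead of the random forest classifier; the file names, label column, and pre-processing steps are hypothetical placeholders, not the authors' system.

```python
# Minimal sketch of training on one diabetes dataset and testing on a second one, with
# pre-processing before the random forest classifier; file names, the "diabetic" label
# column, and pre-processing steps are hypothetical placeholders, not the authors' system.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

train = pd.read_csv("diabetes_train.csv")          # first dataset: training
test = pd.read_csv("diabetes_test.csv")            # second dataset: testing
X_train, y_train = train.drop(columns=["diabetic"]), train["diabetic"]
X_test, y_test = test.drop(columns=["diabetic"]), test["diabetic"]

# Pre-processing (imputation, scaling) followed by the RF classifier
pipe = make_pipeline(SimpleImputer(strategy="median"),
                     StandardScaler(),
                     RandomForestClassifier(n_estimators=300))
pipe.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, pipe.predict(X_test)))
```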