• Title/Abstract/Keywords: Machine learning (ML)

300 search results, processing time 0.023 seconds

Coupling numerical modeling and machine-learning for back analysis of cantilever retaining wall failure

  • Amichai Mitelman;Gili Lifshitz Sherzer
    • Computers and Concrete
    • /
    • v.31 no.4
    • /
    • pp.307-314
    • /
    • 2023
  • In this paper we back-analyze a failure event of a 9 m high concrete cantilever wall subjected to earth loading. Granular soil was deposited into the space between the wall and a nearby rock slope. The wall segments were not designed to carry lateral earth loading and collapsed due to excessive bending. As many geotechnical programs rely on the Mohr-Coulomb (MC) criterion for elastoplastic analysis, it is useful to apply this failure criterion to the concrete material. Accordingly, the back-analysis aims to identify suitable MC parameters for the concrete. For this study, we propose a methodology for accelerating the back-analysis task by automating the numerical modeling procedure and applying a machine-learning (ML) analysis to the FE model results. Through this analysis it is found that the residual cohesion and friction angle have a highly significant impact on model results. Compared to traditional back-analysis studies, where good agreement between model and reality is deemed a success based on a limited number of models, the current ML analysis demonstrates that a range of possible parameter combinations can yield similar results. The proposed methodology can be adapted for similar calibration and back-analysis tasks.
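The non-uniqueness reported above (many parameter combinations reproducing the same observation) can be sketched with a toy Monte Carlo filter. Everything below is invented for illustration: `surrogate_displacement` is a hypothetical stand-in for the paper's automated FE models, and the parameter ranges and "observed" value are assumptions, not values from the study.

```python
import random

# Hypothetical stand-in for the FE model: maps residual cohesion (kPa) and
# residual friction angle (deg) to a wall displacement (mm). The real study
# runs automated numerical models; this toy function only illustrates the idea.
def surrogate_displacement(cohesion, friction_angle):
    return 2000.0 / (cohesion + 20.0 * friction_angle)

observed = 4.0    # assumed "measured" displacement at failure (mm)
tolerance = 0.2   # acceptable mismatch (mm)

random.seed(0)
matches = []
for _ in range(5000):
    c = random.uniform(50.0, 500.0)    # residual cohesion range (assumed)
    phi = random.uniform(20.0, 50.0)   # residual friction angle range (assumed)
    if abs(surrogate_displacement(c, phi) - observed) < tolerance:
        matches.append((c, phi))

# Many distinct (cohesion, friction angle) pairs reproduce the observation,
# mirroring the paper's finding that the back-analysis is non-unique.
print(len(matches))
```

Running the loop shows a whole band of acceptable parameter pairs rather than a single calibrated point, which is the motivation for replacing a handful of manual models with an ML analysis over many automated runs.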

Machine learning-based probabilistic predictions of shear resistance of welded studs in deck slab ribs transverse to beams

  • Vitaliy V. Degtyarev;Stephen J. Hicks
    • Steel and Composite Structures
    • /
    • v.49 no.1
    • /
    • pp.109-123
    • /
    • 2023
  • Headed studs welded to steel beams and embedded within the concrete of deck slabs are vital components of modern composite floor systems, where safety and economy depend on accurate predictions of the stud shear resistance. The multitude of existing deck profiles and the complex behavior of studs in deck slab ribs make developing accurate and reliable mechanical or empirical design models challenging. The paper addresses this issue by presenting a machine learning (ML) model, developed with the natural gradient boosting (NGBoost) algorithm, that is capable of producing probabilistic predictions, together with a database of 464 push-out tests, which is considerably larger than the databases used for developing existing design models. The proposed model outperforms models based on other ML algorithms and existing descriptive equations, including those in EC4 and AISC 360, while offering probabilistic predictions unavailable from other models and producing higher shear resistances for many cases. The present study also showed that the stud shear resistance is insensitive to the concrete elastic modulus, stud welding type, location of slab reinforcement, and other parameters considered important by existing models. The NGBoost model was interpreted by evaluating the feature importance and dependence determined with the SHapley Additive exPlanations (SHAP) method. The model was calibrated via reliability analyses in accordance with the Eurocodes to ensure that its predictions meet the required reliability level and to facilitate its use in design. An interactive open-source web application was created and deployed to the cloud to allow for convenient and rapid stud shear resistance predictions with the developed model.
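What a probabilistic prediction buys a designer can be sketched with the standard library alone. The numbers below are made-up placeholders, not outputs of the paper's NGBoost model; the sketch only shows why a predictive distribution (here assumed Normal) is more useful for design than a point estimate:

```python
from statistics import NormalDist

# An NGBoost-style model outputs a full predictive distribution per sample,
# e.g. Normal(mu, sigma). These values are hypothetical, not from the paper.
mu, sigma = 95.0, 8.0   # predicted stud shear resistance, kN (hypothetical)

pred = NormalDist(mu, sigma)

# A probabilistic prediction lets a designer read off a characteristic
# (5th-percentile) resistance directly, which a point prediction cannot give.
characteristic = pred.inv_cdf(0.05)

# It also answers questions like "what is the chance resistance < 80 kN?"
prob_below_80 = pred.cdf(80.0)

print(round(characteristic, 1), round(prob_below_80, 4))
```

Reliability calibration in the Eurocode sense then operates on such fractiles rather than on the mean prediction.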

Hyperparameter Tuning Based Machine Learning classifier for Breast Cancer Prediction

  • Md. Mijanur Rahman;Asikur Rahman Raju;Sumiea Akter Pinky;Swarnali Akter
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.196-202
    • /
    • 2024
  • Breast Cancer (BC) is currently the second most devastating form of cancer, particularly in women. In the healthcare industry, Machine Learning (ML) is commonly employed in fatal disease prediction. Because breast cancer has a favorable prognosis when detected at an early stage, a model was created using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The overarching aim of this model is to compare the effectiveness of five well-known ML classifiers, namely Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), against the conventional method. To improve on the conventional method, we applied hyperparameter tuning using grid search, which improved accuracy, precision, recall, and the F1 score. With hyperparameter tuning, the accuracy increased from 94.15% to 98.83%, whereas the accuracy of the conventional method increased from 93.56% to 97.08%. According to this investigation, KNN outperformed all other classifiers in terms of accuracy, achieving a score of 98.83%. In conclusion, our study shows that KNN works well with hyperparameter tuning. These analyses show that the proposed prediction approach is useful for prognosticating breast cancer in women, with viable performance and more accurate findings than the conventional approach.
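The grid-search idea can be sketched in a few lines. This is a minimal stand-in, not the paper's pipeline: the 1-D synthetic data and the single hyperparameter k replace the WDBC features and the five tuned classifiers.

```python
import random

# Toy data: two 1-D Gaussian classes (synthetic, not the WDBC dataset).
random.seed(1)
train = [(random.gauss(0, 1), 0) for _ in range(40)] + \
        [(random.gauss(3, 1), 1) for _ in range(40)]
valid = [(random.gauss(0, 1), 0) for _ in range(20)] + \
        [(random.gauss(3, 1), 1) for _ in range(20)]

def knn_predict(x, k):
    # Majority vote among the k nearest training points.
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > k else 0

def accuracy(k):
    return sum(knn_predict(x, k) == y for x, y in valid) / len(valid)

# Grid search: exhaustively score every candidate value of the
# hyperparameter k on held-out data and keep the best.
grid = [1, 3, 5, 7, 9]
best_k = max(grid, key=accuracy)
print(best_k, round(accuracy(best_k), 3))
```

The study's grid search is the same loop extended to several hyperparameters per classifier, with cross-validated scoring instead of a single validation split.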

Seismic vulnerability of reinforced concrete structures using machine learning

  • Ioannis Karampinis;Lazaros Iliadis
    • Earthquakes and Structures
    • /
    • v.27 no.2
    • /
    • pp.83-95
    • /
    • 2024
  • The prediction of the seismic behavior of the existing building stock is one of the most impactful and complex problems faced by countries with frequent and intense seismic activity. Human lives can be threatened or lost, economic life is disrupted, and large amounts of monetary reparations can potentially be required. However, authorities at a regional or national level have limited resources to allocate to preventative measures. It is therefore essential for them to be able to rank a given population of structures according to their expected degree of damage in an earthquake. In this paper, the authors present a ranking approach based on Machine Learning (ML) algorithms for pairwise comparisons, coupled with ad hoc ranking rules. The case study employed data from 404 reinforced concrete structures with various degrees of damage from the Athens 1999 earthquake. The two main components of our experiments pertain to the performance of the ML models and the success of the overall ranking process. The former was evaluated using the well-known metrics of Precision, Recall, F1-score, Accuracy, and Area Under Curve (AUC). The performance of the overall ranking was evaluated using Kendall's tau distance and by viewing the problem as a classification into bins. The obtained results were promising and were shown to outperform currently employed engineering practices. This demonstrated the capabilities and potential of these models in identifying the most vulnerable structures and, thus, mitigating the effects of earthquakes on society.
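Turning pairwise comparisons into a ranking, and scoring it with Kendall's tau distance, can be sketched as follows. The `more_damaged` function is a hypothetical stand-in for the trained pairwise ML model, and the buildings and damage scores are invented; only the ranking rule and the metric are what the abstract describes.

```python
from itertools import combinations

# Hidden "true" damage scores for four hypothetical buildings.
true_damage = {"A": 0.9, "B": 0.4, "C": 0.7, "D": 0.1}

def more_damaged(x, y):
    # Stand-in for the ML model: predicts which of two buildings is worse off.
    return x if true_damage[x] > true_damage[y] else y

# Copeland-style ranking rule: order buildings by number of pairwise "wins".
wins = {b: 0 for b in true_damage}
for x, y in combinations(true_damage, 2):
    wins[more_damaged(x, y)] += 1
ranking = sorted(wins, key=wins.get, reverse=True)

# Kendall tau distance: count pairs ordered differently from the truth.
true_rank = sorted(true_damage, key=true_damage.get, reverse=True)
pos = {b: i for i, b in enumerate(ranking)}
tpos = {b: i for i, b in enumerate(true_rank)}
tau_dist = sum((pos[x] - pos[y]) * (tpos[x] - tpos[y]) < 0
               for x, y in combinations(true_damage, 2))
print(ranking, tau_dist)
```

With a perfect comparator the tau distance is zero; a real model's pairwise errors raise it, which is exactly what the paper measures.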

A SE Approach for Machine Learning Prediction of the Response of an NPP Undergoing CEA Ejection Accident

  • Ditsietsi Malale;Aya Diab
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.19 no.2
    • /
    • pp.18-31
    • /
    • 2023
  • Exploring artificial intelligence and machine learning for nuclear safety has attracted increased interest in recent years. To contribute to this area of research, a machine learning model capable of accurately predicting nuclear power plant response with minimal computational cost is proposed. To develop a robust machine learning model, the Best Estimate Plus Uncertainty (BEPU) approach was used to generate a database to train three models and select the best of the three. The BEPU analysis was performed by coupling the Dakota platform with the best-estimate thermal hydraulics code RELAP/SCDAPSIM/MOD 3.4. The Code Scaling, Applicability and Uncertainty approach was adopted, along with Wilks' theorem, to obtain a statistically representative sample that satisfies the USNRC 95/95 rule with 95% probability and 95% confidence level. The generated database was used to train three models based on Recurrent Neural Networks; specifically, Long Short-Term Memory, Gated Recurrent Unit, and a hybrid model with Long Short-Term Memory coupled to a Convolutional Neural Network. In this paper, the Systems Engineering approach was utilized to identify the requirements, stakeholders, and functional and physical architecture of the project, and to support the verification and validation activities necessary for the efficient development of ML meta-models capable of predicting the nuclear power plant response.
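The sample size behind the 95/95 rule follows from Wilks' theorem; a minimal sketch of the first-order, one-sided case (coverage and confidence both 0.95, the interpretation assumed here) is:

```python
# First-order, one-sided Wilks formula: the smallest number of code runs n
# such that the maximum of n runs bounds the 95th percentile of the output
# with 95% confidence satisfies 1 - gamma**n >= beta.
gamma = 0.95   # required coverage (95th percentile)
beta = 0.95    # required confidence

n = 1
while 1 - gamma ** n < beta:
    n += 1

print(n)  # 59 code runs satisfy the 95/95 rule at first order
```

This is why BEPU studies of this kind commonly run on the order of 59 (or more, for higher order) code calculations to build the training database.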

An Artificial Intelligence Approach to Waterbody Detection of the Agricultural Reservoirs in South Korea Using Sentinel-1 SAR Images (Sentinel-1 SAR 영상과 AI 기법을 이용한 국내 중소규모 농업저수지의 수표면적 산출)

  • Choi, Soyeon;Youn, Youjeong;Kang, Jonggu;Park, Ganghyun;Kim, Geunah;Lee, Seulchan;Choi, Minha;Jeong, Hagyu;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.925-938
    • /
    • 2022
  • Agricultural reservoirs are an important water resource nationwide and are vulnerable to abnormal climate effects such as drought caused by climate change, so enhanced management is required for their appropriate operation. Although continuous monitoring of water levels is necessary, on-site measurement and observation are challenging due to practical problems. This study presents an objective comparison of multiple AI models for water-body extraction using radar images, which have the advantages of wide coverage and frequent revisit times. The proposed methods use Sentinel-1 Synthetic Aperture Radar (SAR) images; unlike common water-extraction methods based on optical images, they are less affected by weather conditions and are therefore suitable for long-term monitoring. We built four AI models, namely Support Vector Machine (SVM), Random Forest (RF), Artificial Neural Network (ANN), and Automated Machine Learning (AutoML), using drone images, Sentinel-1 SAR, and DSM data. A total of 22 reservoirs of less than 1 million tons were studied, including small and medium-sized reservoirs with an effective storage capacity of less than 300,000 tons. 45 images from the 22 reservoirs were used for model training and verification, and the results show that the AutoML model was 0.01 to 0.03 better in water Intersection over Union (IoU) than the other three models, with Accuracy = 0.92 and mIoU = 0.81 in testing. AutoML thus performed as well as the classical machine learning methods, and the AutoML-based water-body extraction technique is expected to be applicable to automatic reservoir monitoring.
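The IoU and mIoU metrics quoted above are simple to compute from binary masks. The 4x4 masks below are toy examples, not data from the study; the sketch only shows how the reported numbers are defined.

```python
# Toy predicted and ground-truth water masks (1 = water, 0 = background).
pred = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
truth = [[1, 1, 1, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]

def iou(a, b, cls):
    # Intersection over Union for one class label.
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            inter += (va == cls) and (vb == cls)
            union += (va == cls) or (vb == cls)
    return inter / union

water_iou = iou(pred, truth, 1)
# mIoU averages the class-wise IoUs (here: water and background).
miou = (iou(pred, truth, 1) + iou(pred, truth, 0)) / 2
print(round(water_iou, 3), round(miou, 3))
```

The study's "water IoU" is this metric restricted to the water class, while mIoU averages over both classes, which is why the two reported values differ.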

Machine Learning Perspective Gene Optimization for Efficient Induction Machine Design

  • Selvam, Ponmurugan Panneer;Narayanan, Rengarajan
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.3
    • /
    • pp.1202-1211
    • /
    • 2018
  • In this paper, a Machine Learning based Gene Optimization (ML-GO) technique is introduced to improve induction machine operating efficiency and torque. An Optimized Genetic Algorithm (OGA) is used to select the optimal induction machine data. In the OGA, selection, crossover, and mutation are carried out to find the optimal electrical machine data for induction machine design. Initially, a large number of induction machine data sets are given as input to the OGA. Then, a fitness value is calculated for each data set to determine whether the criterion is satisfied, using a fitness function (i.e., an objective function covering the starting-to-full-load torque ratio, rotor current, power factor, and maximum flux density of the stator and rotor teeth). When the criterion is not satisfied, an annealed selection approach in the OGA moves the selection criteria from exploration to exploitation to attain the optimal solution (i.e., efficient machine data). After the selection process, two-point crossover is carried out: two crossover points are selected within the chromosomes (i.e., design variables), and the segments of the two parent chromosomes are swapped to produce two new offspring. Finally, Adaptive Levy Mutation is used in the OGA to select a value at random and mutate it to obtain the optimal value. This process is iterated until the optimal values for the induction machine design are found. The ML-GO technique is experimentally evaluated against state-of-the-art works using performance metrics such as torque, rotor current, induction machine operating efficiency, and rotor power factor.
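The two operators described above can be sketched generically. The chromosomes, bounds, and the heavy-tailed step rule below are illustrative assumptions; in particular `levy_like_mutation` is a simplified stand-in for the paper's Adaptive Levy Mutation, not its actual formula.

```python
import random

random.seed(3)

def two_point_crossover(p1, p2):
    # Pick two cut points and swap the middle segment between the parents,
    # producing two offspring that together contain exactly the parents' genes.
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    child1 = p1[:i] + p2[i:j] + p1[j:]
    child2 = p2[:i] + p1[i:j] + p2[j:]
    return child1, child2

def levy_like_mutation(chrom, scale=0.1):
    # Perturb one randomly chosen gene with a heavy-tailed step
    # (a simplified stand-in for Adaptive Levy Mutation).
    g = random.randrange(len(chrom))
    step = scale / (random.random() ** 1.5)
    chrom = chrom[:]
    chrom[g] += random.choice([-1, 1]) * step
    return chrom

parent1 = [1.0, 2.0, 3.0, 4.0, 5.0]   # design variables (illustrative)
parent2 = [9.0, 8.0, 7.0, 6.0, 5.0]
child1, child2 = two_point_crossover(parent1, parent2)
mutant = levy_like_mutation(child1)
print(child1, child2)
```

In the OGA loop these operators would be applied each generation to the annealed-selection survivors until the fitness criterion is met.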

Using Machine Learning to Improve Evolutionary Multi-Objective Optimization

  • Alotaibi, Rakan
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.203-211
    • /
    • 2022
  • Multi-objective optimization problems (MOPs) arise in many real-world applications. MOPs involve two or more objectives to be optimized, and improving one objective may lead to the deterioration of another. The primary goal of most multi-objective evolutionary algorithms (MOEAs) is to generate a set of solutions approximating the whole or part of the Pareto optimal front, which can give decision makers good insight into the problem. Over the last decade or so, several remarkable multi-objective evolutionary algorithms have been developed with successful applications. However, MOEAs are still in their infancy. The objective of this research is to study how to use and apply machine learning (ML) to improve evolutionary multi-objective optimization (EMO). The EMO method considered is the multi-objective evolutionary algorithm based on decomposition (MOEA/D). MOEA/D has become one of the most widely used algorithmic frameworks in multi-objective evolutionary computation and has won an international algorithm contest.
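The decomposition at the heart of MOEA/D can be sketched with the weighted Tchebycheff aggregation, one common choice in that framework. The objective vectors, weight vectors, and ideal point below are made up for illustration.

```python
# MOEA/D decomposes a multi-objective problem into scalar subproblems,
# one per weight vector. A common scalarization is weighted Tchebycheff:
# g(f | w, z*) = max_i w_i * |f_i - z*_i|, minimized per subproblem.
def tchebycheff(f, weights, ideal):
    return max(w * abs(fi - zi) for fi, w, zi in zip(f, weights, ideal))

ideal = (0.0, 0.0)                                  # ideal point z* (assumed)
weights = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]      # one weight vector per subproblem
candidates = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]   # objective vectors (toy)

# Each subproblem picks the candidate minimizing its own scalarized cost,
# so different subproblems cover different parts of the Pareto front.
best = [min(candidates, key=lambda f: tchebycheff(f, w, ideal)) for w in weights]
print(best)
```

The three subproblems select three different candidates, which is how a single population of scalar subproblems spreads along the front.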

Automatic COVID-19 Prediction with Optimized Machine Learning Classifiers Using Clinical Inpatient Data

  • Abbas Jafar;Myungho Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.539-541
    • /
    • 2023
  • COVID-19 is a viral pandemic disease that has spread widely around the world. Identifying COVID-19 patients at an early stage is the only way to stop the spread of the virus. Different approaches are used for diagnosis, such as RT-PCR, chest X-rays, and CT images; however, these are time-consuming and require a specialized lab. There is therefore a need for a time-efficient diagnosis method to detect COVID-19 patients. The proposed machine learning (ML) approach predicts the presence of coronavirus based on clinical symptoms. The clinical dataset was collected from the Israeli Ministry of Health. We used different ML classifiers (i.e., XGB, DT, RF, and NB) to diagnose COVID-19, and then optimized the classifiers with the Bayesian hyperparameter optimization approach to improve their performance. The optimized RF outperformed the others, achieving an accuracy of 97.62% on the testing data, which supports the early diagnosis of COVID-19 patients.
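The optimize-then-pick-best loop can be sketched with a plain random search, which is a deliberately simplified stand-in for the Bayesian optimization the paper uses (Bayesian methods choose the next trial from a surrogate model rather than at random). The search space and the `validation_score` response surface below are hypothetical.

```python
import random

random.seed(7)

def validation_score(n_trees, max_depth):
    # Hypothetical validation-accuracy surface for an RF-like classifier,
    # peaking near (200 trees, depth 10). Not from the study's data.
    return 1.0 - ((n_trees - 200) / 400) ** 2 - ((max_depth - 10) / 20) ** 2

# Sample hyperparameter configurations, score each, keep the best.
trials = []
for _ in range(50):
    params = (random.randrange(10, 500), random.randrange(2, 30))
    trials.append((validation_score(*params), params))

best_score, best_params = max(trials)
print(best_params, round(best_score, 3))
```

A Bayesian optimizer replaces the uniform sampling with an acquisition function over a surrogate of `validation_score`, typically reaching a good configuration in far fewer trials.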

Performance Analysis of Machine Learning Algorithms for Application Traffic Classification (애플리케이션 트래픽 분류를 위한 머신러닝 알고리즘 성능 분석)

  • Kim, Sung-Yun;Kim, Myung-Sup
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2008.05a
    • /
    • pp.968-970
    • /
    • 2008
  • Payload analysis and well-known-port-based methods have traditionally been used for traffic classification. However, as the number of dynamically changing applications grows, these methods struggle to classify application traffic. As an alternative, application traffic classification using Machine Learning (ML) algorithms is being studied. Because previous work used data sets collected over a fixed time period, applications that occurred infrequently could be classified poorly while overall performance still appeared good. To solve this problem, this paper presents a method that collects the same number of data instances for each application and uses them to classify application traffic. The classification accuracy of the J48, REPTree, BayesNet, NaiveBayes, and Multilayer Perceptron ML algorithms is compared.
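The per-application balancing step can be sketched by downsampling every class to the size of the rarest one. The application labels and record counts below are synthetic placeholders, not the paper's traffic traces.

```python
import random
from collections import defaultdict

random.seed(5)

# Synthetic (application label, flow-record id) pairs with skewed counts.
records = [("http", i) for i in range(100)] + \
          [("ftp", i) for i in range(40)] + \
          [("p2p", i) for i in range(8)]

by_app = defaultdict(list)
for app, flow in records:
    by_app[app].append(flow)

# Downsample every application to the size of the rarest one, so that a
# classifier's overall accuracy cannot hide poor performance on rare apps.
n = min(len(flows) for flows in by_app.values())
balanced = [(app, flow)
            for app, flows in by_app.items()
            for flow in random.sample(flows, n)]

counts = {app: sum(a == app for a, _ in balanced) for app in by_app}
print(counts)
```

The balanced set is then what the J48, REPTree, BayesNet, NaiveBayes, and Multilayer Perceptron classifiers would be trained and compared on.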